The largest number representable in 64 bits


Please, no more comments to the effect of "I can define a much larger number in only 1 bit". What makes my blog post (hopefully) interesting is that I consider tiny programs for computing huge numbers in non-cheating languages that are not specifically equipped for doing so.

[delayed]

It all goes over my head, but what does the distribution of values look like? E.g. for unsigned integers it's completely flat; for floating point there are far too many zeros and most of the values cluster around 0. What do these systems end up looking like?
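
For the floating-point half of that question, a small Python sketch (assuming IEEE-754 doubles, as in CPython; the helper name is just for illustration) shows how consecutive 64-bit patterns bunch up near zero:

    import struct

    def bits_to_double(u: int) -> float:
        # Reinterpret a 64-bit unsigned pattern as an IEEE-754 double.
        return struct.unpack('<d', struct.pack('<Q', u))[0]

    # Consecutive bit patterns yield doubles whose spacing grows with magnitude;
    # roughly half of all positive finite doubles lie below 1.0.
    for u in [1, 2**52, 2**62, 2**63 - 2**52 - 1]:
        print(f"{u:20d} -> {bits_to_double(u)!r}")
    # 1                 -> 5e-324                   (smallest subnormal)
    # 2**52             -> 2.2250738585072014e-308  (smallest normal)
    # 2**62             -> 2.0
    # 2**63 - 2**52 - 1 -> 1.7976931348623157e+308  (largest finite)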

[deleted]

What's the biggest up-arrow notation number you can spell with 64 bits?

https://mathworld.wolfram.com/KnuthUp-ArrowNotation.html

`9↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑9` seems like a reasonable guess (barring encoding cheats/trickery like @masfuerte commented!)
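
Up-arrow notation is just iterated hyperoperation, so a minimal recursive sketch in Python (illustrative only; anything beyond tiny inputs is hopeless to evaluate) makes the definition concrete:

    def up_arrow(a: int, b: int, n: int) -> int:
        # Knuth's a ↑^n b, with n arrows between a and b.
        if n == 1:
            return a ** b            # a single arrow is plain exponentiation
        if b == 0:
            return 1                 # a ↑^n 0 = 1 by definition
        return up_arrow(a, up_arrow(a, b - 1, n), n - 1)

    print(up_arrow(2, 3, 2))         # 2↑↑3 = 2^(2^2) = 16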

I can do you one better and specify that the normal base-2 integer represented by the bits is the number of up-arrows. But as /u/tromp has already pointed out, that is not very interesting.

Whatever largest number you can express in your system, I can represent a larger one in only one bit, using the following specification.

0 = your largest number
1 = your largest number + 1

To be pedantic, that is an instance of the Berry paradox [1], and no, you cannot [2], as that would violate Gödel's incompleteness theorems.

edit: To clarify further, you could create a new formal language L+ that axiomatically defines 0 as "the largest number according to L", but that would no longer be L; it would be L+. For any given language with rules at this level of power, you cannot make that statement without creating a new language with even more powerful rules. That is, each specific set of rules is capped; you have to add more rules to raise the cap, and then you have a different language.

[1] https://en.wikipedia.org/wiki/Berry_paradox

[2] https://terrytao.wordpress.com/2010/11/02/the-no-self-defeat...

It's not a paradox, because there is nothing logically inconsistent in my definition, unlike the Berry paradox.

To be more pedantic, yes you can, but only with a meta-language.

[deleted]

[deleted]

bits == entropy.

Everything else is wordplay.

I can do you one better. I can represent the largest number with a single binary bit.

I can do it in half a bit

Slow down there, mr zip file

Can you give a formulation of the problem you are trying to answer?

To find the largest number that is computable by a program of at most 64 bits in a non-cheating language; i.e. one that's not geared toward producing large numbers.
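
To make "not geared toward producing large numbers" concrete, here's a hypothetical illustration in Python (not from the post): 64 bits of source in a general-purpose language already dwarfs 2^64 - 1.

    # "2**2**16" is 8 ASCII characters, i.e. exactly 64 bits of source text.
    # ** is right-associative, so this evaluates to 2^(2^16) = 2^65536.
    n = 2**2**16
    print(n.bit_length())   # 65537 bits, versus 64 for the naive encoding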

Do you have a mathematical formulation, or?

Ultimately you seem to pick a random definition of computing and size and then work with that?

[deleted]

I'm going to agree with the downvoted people and say that this sort of approach is largely meaningless if you allow arbitrary mappings. IMO the most reasonable mathematical formulation, given the structure of the integers (in the sense of, e.g., Peano), is that to truly represent an integer you must represent zero, and every other representable number must have a representable predecessor; i.e., to say you can represent 5, you need 0, 1, 2, 3, 4, and 5 all to be representable. By a straightforward counting argument, 2^64 - 1 is then the largest representable number; in other words, the obvious answer is right.

As I've replied several times before, we don't allow arbitrary mappings. We allow computable mappings, but consider only obviously non-cheating languages, like Turing machines, lambda calculus, Linux's bc, or any existing programming language; that is, ones not geared toward outputting insanely large numbers.

It's not "the largest representable number" because you're not representing numbers in any rigorous sense. If I give you 64 bits, you can't tell me what number those bits represent (first, because the rules of the game are ambiguous - what if I give you 8 bytes that are a valid program in two different languages; and second, because even if you made them precise, you don't know which bitstrings correspond to programs that halt). And if I give you a number, you can't tell me which 64 bits represent that number or even if the number is representable, and that's true even for small numbers and even if I give you unbounded time.

It seems far more natural to say that you're representing programs rather than numbers. And you're asking, what is the largest finite output you can get from a program in today's programming languages that is 8 bytes or less. Which is also fun and interesting!

I would say that all of those seem both arbitrary and geared toward outputting insanely large numbers (in the sense that the output of any Turing-complete language is). Now, if you can make these claims in a mathematically rigorous way (i.e. without relying on a particular mapping like Turing Machines / Lambda Calculus, and without silly "up to a constant factor" cheats), then that would be more interesting.

Turing Machines and Lambda Calculus can only output insanely large numbers by building those numbers from scratch using their Turing completeness. So while lambda calculus can output something exceeding Loader's Number, it needs well over a thousand bits to do so. What I mean by "geared toward outputting insanely large numbers" is something like: I define a language in which the 1-bit program "0" outputs Loader's Number. That is obviously cheating.

There is unfortunately no mathematically rigorous way to define what is cheating, so it seems unreasonable to ask me for that.

In the spirit of floating point, I'd say posits offer excellent insight into the trade-off between precision and dynamic range, while still being a meaningful number system rather than a set of arbitrary mappings.

Given time, this will output a bigger number, and it is only 48 bits:

   B0 39      mov al,'9'     ; load the character '9' into AL
   CD 29      int 29h        ; DOS fast console output: print AL
   EB FA      jmp short -6   ; -6 from the next instruction: back to the top

That is not a number, that is infinity.

The (implicit) rules of the game require the number to be finite. The reason is not that infinity isn't obviously "the largest", but that the game of "write infinity in the smallest amount of {resource}" is trivial and uninteresting. (At least for any even remotely sensible encoding scheme. Malbolge [1] experts may chime in on how easy it is to write infinity in that language.) So if you like, pretend we played that game already and have moved on to this one. "Write infinity" is at best a warmup for this game.

(I'm not going to put up another reply for this, but the several people posting "ah, I will cleverly just declare 'the biggest number someone else encodes + 1'" are just posting infinity too. The argument is somewhat longer, but not that difficult.)

[1]: https://esolangs.org/wiki/Malbolge

It isn’t actually infinite since it can only do a finite number of iterations per second (though it would be large!), and there are only a finite number of seconds in the universe (near as we can tell).

This game assumes the computations run to completion on systems that will never run out of resources. No one in this universe will ever compute Ackermann's Number, BB(6), or the final answer given in the post. Computations that never complete are infinite.
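
For a sense of scale, here's a textbook two-argument Ackermann function in Python (a sketch; not the specific "Ackermann's Number" meant above):

    def ack(m: int, n: int) -> int:
        # Ackermann's function grows faster than any primitive recursive function.
        if m == 0:
            return n + 1
        if n == 0:
            return ack(m - 1, 1)
        return ack(m - 1, ack(m, n - 1))

    print(ack(3, 3))   # 61, already via thousands of recursive calls
    # ack(4, 2) = 2**65536 - 3, a number with 19,729 digits; this naive
    # recursion would exhaust any real stack long before producing it.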

If you are playing this game and can't produce a number that doesn't fit in this universe, you are probably better off playing something else. That's just table stakes, if it even qualifies as that. "Inscribe a 9 on every subatomic particle in the universe at every Planck instant until the heat death of the universe" doesn't even get off the starting line in games like this.

Another general comment: it feels like a lot of people are really flailing around here and need to understand that this is a game. It has rules. If you change the rules, you are playing a different game. There is nothing wrong with playing a different game; it is just a different game.

The game is not immutably written into the structure of the universe, nor is it a mathematical truth; it is a human choice. And there isn't necessarily a "why" to the rules, any more than there's a "why" to how the bishop moves in chess. You can, in fact, change that rule. There are thousands of such variants. It's just that you're playing a different game than chess at that point.

If you don't want to play the author's game, that's fine, but it doesn't change the game itself. Proposing different solutions is equivalent to saying you can win a chess game by flipping over the board and yelling "I win". You can do that. Perhaps you've even won some game. But whatever game you just won, it isn't chess.

[deleted]

Once you allow any format the question is completely meaningless. You can just define 0 to mean any number you want.

The post addresses this very issue:

> Precisely because the Turing machine model is so ancient and fixed, whatever emergent behavior we find in the Busy Beaver game, there can be no suspicion that we “cheated” by changing the model until we got the results we wanted.

(Edit: oops, incorrect numbers)

Following BLC8's bytewise encoding convention of [1], w218's binary encoding 0100 0101 1010 1000 0110 0110 0000 0001 0101 1011 1011 0000 0011 1001 1101 0 gets padded with 3 arbitrary least significant bits, say 000, and becomes 45A8_6601_5BB0_39D0 in hexadecimal.

[1] https://www.ioccc.org/2012/tromp/
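
A quick sanity check of that padding arithmetic in plain Python (just bit-string manipulation; nothing BLC-specific assumed):

    # w218's 61-bit code from above, plus 3 zero bits of padding.
    bits = ("0100 0101 1010 1000 0110 0110 0000 0001 "
            "0101 1011 1011 0000 0011 1001 1101 0").replace(" ", "")
    padded = bits + "000"               # 61 + 3 = 64 bits
    print(f"{int(padded, 2):016X}")     # 45A866015BB039D0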

Wow, that entry to the International Obfuscated C Code Contest!

> The largest number (currently known to be) representable in 64 bits is w218

In my representation the bit pattern 00000000_00000000_00000000_00000000_00000000_00000000_00000000_00000001 stands for the number w218+1.

I win!

> Precisely because the Turing machine model is so ancient and fixed, whatever emergent behavior we find in the Busy Beaver game, there can be no suspicion that we “cheated” by changing the model until we got the results we wanted.

Sorry; no winning for cheaters :-(