Similarly, while the end result is quite elegant, the consequences of the choices made to get there aren't really explained. It takes quite a bit of extra effort to reason out what would have happened if things had been done differently.
Running the code from the linked notebook (https://github.com/AdamScherlis/notebooks-python/blob/main/m...), I can see that the 32-bit representation of the number 3 decodes to the following float: 2.999999983422908
(This is from running `decode(encode(3, 32))`)
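For scale, the round-trip error can be checked directly; the decoded value here is just the one quoted above (`encode`/`decode` themselves live in the linked notebook):

```python
# Value reported from decode(encode(3, 32)) in the linked notebook
decoded = 2.999999983422908

# Relative error of the 32-bit round trip
rel_err = abs(decoded - 3) / 3
print(rel_err)  # roughly 5.5e-9
```

So the 32-bit encoding lands within about 6 parts per billion of the true value.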
(log n) + 1
It still seems a fairly simple variation once you remove the arbitrary restriction. More to the point: I don't believe for a second that anyone familiar with his solution, asked to make it more bit-efficient, would not have come up with this. Nor do I believe they would call it anything other than a variation.
That doesn't make it any less cool, but I don't think it's amazingly novel.
> [1 1 1 1 1 1 1] 2.004e+19728
Does that mean that the 8-bit version has numbers larger than this? That doesn't seem very useful, since 10^100 is already infinity for all practical purposes.
Yes. Not just a bit larger, but an even more ridiculous leap. Notice that you're iterating exponents: that last one (7 bits) was 2^65536, so the next one (8 bits) will be 2^(2^65536).
Python refused to print 2^65536, complaining that the base-10 string representing the integer contains more than 4300 digits (CPython's default limit for int-to-str conversion), so it gave up.
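You can size these towers with logarithms instead of materializing the integers. A quick sanity check, which reproduces the 2.004e+19728 figure quoted above for the 7-bit value:

```python
import math

# log10(2**65536), computed without ever building the integer
exponent = 65536 * math.log10(2)  # about 19728.30
digits = math.floor(exponent) + 1
mantissa = 10 ** (exponent - math.floor(exponent))

print(f"2**65536 = {mantissa:.3f}e+{math.floor(exponent)}")  # 2.004e+19728
print(digits)  # 19729 digits, well past the default 4300-digit limit
```

(If you do want Python to print the full integer, `sys.set_int_max_str_digits` raises the conversion limit in CPython 3.11+. For 2^(2^65536) no such trick helps: even its *digit count* has about 19728 digits.)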
> Doesn't seem very useful
By that logic we should just use fixed point, because who needs to work with numbers as large as 2^1023 (the largest exponent in 64-bit IEEE 754)? These things aren't useful unless you're doing something that needs them, in which case they are. I could see the 5- or 6-bit variant of such an iterated scheme potentially being useful as an intermediate representation for certain machine learning applications.
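The 2^1023 ceiling is easy to see from Python, whose floats are IEEE 754 doubles:

```python
import sys

# Largest finite double: (2 - 2**-52) * 2**1023, about 1.8e308
print(sys.float_info.max)

# One more power of two overflows the 64-bit float range
try:
    2.0 ** 1024
except OverflowError:
    print("2.0 ** 1024 overflows")
```

Anyone doing combinatorics, statistical mechanics, or naive likelihood products runs into that 1.8e308 wall quickly, which is the whole argument for formats with absurd-looking ranges.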