I could be wrong, but based on the similarities to interval arithmetic everyone has already identified, I'm pretty skeptical. At best, this could be a patent on a more efficient way to build interval arithmetic into a CPU architecture rather than a completely new technique.
As my British friends would say though, I can't be arsed to actually read the patent.
For example, sqrt(2) + sqrt(8) would print 3 sqrt(2) rather than 4.242640687119286~
I wish I knew how it worked, it's probably something simple like suggested in another comment.
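For anyone who wants to play with the idea, here's a toy sketch of that kind of exact symbolic representation, restricted to rational multiples of sqrt(2). The class name and the (very narrow) scope are mine, not anything from the article:

```python
from fractions import Fraction
import math

class Sqrt2Multiple:
    """Exact representation of q * sqrt(2) for a rational q."""
    def __init__(self, coeff):
        self.coeff = Fraction(coeff)

    def __add__(self, other):
        # Like terms combine exactly: no rounding ever happens.
        return Sqrt2Multiple(self.coeff + other.coeff)

    def __float__(self):
        # Rounding is deferred until a decimal value is actually demanded.
        return float(self.coeff) * math.sqrt(2)

    def __repr__(self):
        return f"{self.coeff} sqrt(2)"

a = Sqrt2Multiple(1)   # sqrt(2)
b = Sqrt2Multiple(2)   # sqrt(8) == 2*sqrt(2)
print(a + b)           # 3 sqrt(2)
```

A full computer algebra system generalizes this to arbitrary symbolic expressions; e.g. `sympy.sqrt(2) + sympy.sqrt(8)` likewise prints `3*sqrt(2)`.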
It's patent law lingo. The patent covers both the idea (the "system") and subsequent implementations (the "apparatus") that are direct implementations of the idea.
You won't need many bits for the error, and operations can be made reasonably fast, since only the lowest few bits are affected.
Not directly related to the article, but many [1] irrational numbers (π, for example, or sqrt(2)) can be represented in a computer in their entirety, i.e. "accurate to the last digit." Not all digits are stored at once in RAM, of course, but you can obtain an arbitrary digit (given sufficient time). That's precisely how computable numbers are defined (first by Turing in his 1936 paper, On Computable Numbers, which also introduced the notion of computation; the "numbers" in the title are real numbers, including irrational ones).
[1]: Relative to the irrational numbers that "we know", not to all uncountable ones, of course.
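As a concrete illustration (my own sketch, not from the article): any digit of sqrt(2) is obtainable with nothing but integer arithmetic, e.g. via Python's integer square root:

```python
from math import isqrt

def sqrt2_digits(n):
    """First n decimal digits of sqrt(2) after the point, computed exactly.

    Uses floor(sqrt(2) * 10**n) == isqrt(2 * 10**(2*n)); only integer
    arithmetic is involved, so there is no floating point error at all.
    """
    return str(isqrt(2 * 10 ** (2 * n)))[1:]  # drop the leading "1"

print(sqrt2_digits(10))  # 4142135623
```

Given enough time and memory this produces any digit you like, which is exactly the sense in which a computable number is "accurate to the last digit."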
Moreover, these are clearly marked quotes from the press release and the patent. Maybe this technology doesn't merit an article. But if it does, quoting the inventor is exactly what one expects from coverage. Note that it does invite scepticism, starting with "claims" in the headline.
This gratuitous hatred of journalism is seriously getting out of hand.
Not only does this method not reduce floating point error, it reduces the precision that you have for any given number of bits.
Unfortunately I can't find any of the figures referenced in the patent to help me understand the novelty of this patent.
What about numbers like 0.3, whose binary expansion repeats forever? Wouldn't they always raise that signal?
unums: https://en.wikipedia.org/wiki/Unum_(number_format)
interval arithmetic: https://en.wikipedia.org/wiki/Interval_arithmetic
That does seem useful, but it's a bit akin to saying that you've solved the division-by-zero problem by inventing NaN. Suppose you're writing some critical piece of software and a floating point operation raises the "inaccurate" flag: how do you deal with that? Do you at least have access to the bounds computed by the hardware, so that you may decide to pick a more conservative value if that makes sense?
Besides, the link to the "1991 Patriot missile failure" kind of contradicts the claim that this would solve the issue, since Wikipedia says:
>However, the timestamps of the two radar pulses being compared were converted to floating point differently: one correctly, the other introducing an error proportionate to the operation time so far (100 hours) caused by the truncation in a 24-bit fixed-point register.
If the problem comes from truncation in a fixed-point register, I'm not sure how this invention would've helped.
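The Patriot arithmetic is easy to reproduce. A back-of-envelope sketch (assuming, per the commonly cited analysis, that the 24-bit register kept 23 fractional bits for the 0.1-second tick):

```python
from fractions import Fraction

exact = Fraction(1, 10)                          # the intended 0.1 s tick
truncated = Fraction(int(exact * 2**23), 2**23)  # chopped to 23 fractional bits
per_tick_error = float(exact - truncated)        # ~9.54e-8 s lost per tick

ticks = 100 * 3600 * 10                          # 100 hours of 0.1 s ticks
drift = per_tick_error * ticks
print(drift)                                     # ~0.34 s of clock drift
```

At roughly Mach 5 closing speed, ~0.34 s of drift is on the order of the half-kilometer range-gate error usually quoted. Note the error lives in the fixed-point conversion itself, not in any individual floating point operation, so a scheme that only flags rounding in FP operations wouldn't obviously have caught it.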
You can trap. ...but then again, existing arithmetic traps are not uniformly enabled by default.
EDIT: and for some other methods: https://en.wikipedia.org/wiki/Unum_%28number_format%29, particularly the latest one being the Posit method: http://superfri.org/superfri/article/download/137/232
EDIT2: of course other people can license it, but the other way to bring a new floating point to the scene would be through the same process that happened with IEEE 754. There are plenty of people who wouldn't touch anything patented at all, sometimes even with a patent clause.
I want this guy to be compensated, but I'd prefer this guy be compensated in a manner that doesn't prevent third parties from fixing their hardware. In general, I think bounties are a good solution to this. Failing that, there are plenty of trade groups and nonprofits and regulatory bodies that could be tasked (and funded) with acquiring and freely redistributing this class of innovation if we wanted to.
Apparently he did up until now, didn't he?
I agree he should get a percentage of the profits other companies make off of his invention, but it's not like he's entitled to any payments just because he liked to tinker around. Inventor's not a real profession.
Then that's their loss. This seems like the ideal scenario for patent protections: small inventor developing a genuinely novel and useful invention that big, rich companies would otherwise shamelessly copy.
https://ploum.net/working-with-patents/
> Note that by « valid », I mean that the Patent Office didn’t found a trivial prior art for this. It doesn’t mean that there is no prior art or that I’m the real inventor or that my invention works.
An example of an insane patent:
https://www.google.com/patents/US6960975
> Space vehicle propelled by the pressure of inflationary vacuum state US 6960975 B1
(via https://www.metabunk.org/do-patents-mean-the-invention-works... )
Apparently, there were design reasons why a different mathematical formulation was more efficient for electronic calculation. The competing manufacturers would discover this fact one by one, and Zuse was worried that someone might question his integrity, thinking he was the source of the leak. But no one did.
Michael Hanack (the materials chemist) used a different strategy: he would not patent anything, so his inventions could be used by any market participant, and he would consult for all of them.
On the other hand, everyone is looking forward to the day the aptamer patent runs out. Uptake is limited (and you'd think that CRISPR/CAS9 has the same problem) because of unreasonableness (in CRISPR's case, uncertainty) around licensing.
There's a reason Linux and GNU utilities are so massively widely used, and overall they've probably provided billions in economic value. They do that freely, for any human to use, and in fact that's part of their main value proposition. Both were born out of the legal nightmare that was UNIX at the time.
How should he get paid for this? In a very-ideal world, people and corporations that used his idea and had spare capital would voluntarily give him donations. In a better-than-this world, governments (or some other entity) would pay bounties to inventors out of a pool of tax money, based on both the perceived usefulness of the invention and how widespread its use came to be.
It depends on whether he would consider the filing's "invention" to be within a reasonable definition of what should be patentable.
If yes, then he's just playing his part in our society's overall machinations for technical progress, and there's nothing really blameworthy about the filing.
If no, then he's being deeply selfish: He's capitalizing on the government's unjustified encroachment on our individual liberties, via the patent system, for his own personal gain.
> When the calculated result is no longer sufficiently accurate the result is so marked, as are all further calculations made using that value.
Solving it would be a pretty big deal. This doesn't feel like it is, though I admit I haven't worked on a similar problem in a long time. Kinda feels like patent trolling as I imagine that lots of companies have put bounds on detecting floating point errors when they need it. There are certainly lots of papers on it: https://www.google.com/search?q=floating+point+error+bounds
The hard part has been left as an exercise for the examiner.
The whole technique smells a bit fishy to me, but it might be genuine. (In any case, the article reads more like marketing, since the technical merit is not immediately obvious, and the difference from existing techniques is not immediately clear.)
To me, it looks like a specific mechanism for encoding the bounds and scale of error into a floating point representation, along with a pipeline for processing operations on operands of this form (presumably efficiently). So to me it looks like a specific variant of IA.
It looks like the purpose is to be implemented as an alternative to conventional floating point libraries and CPU modules. E.g., Intel might license this and add a floating point module based on this + instructions to access it to a future CPU. (Well, even if it's great and all is as advertised, and proves to be generally useful, I'm not sure it would jump right into the CPU. It would probably have to grow more organically first, but that's another discussion.)
I mean, I have no idea whether this does all of what it says, or, if it does, whether it would prove generally useful enough to make it out of niche cases.
But it's interesting.
I'm not sure how much it increases computation time, but software for exactly this is freely available, see for instance Arb: https://github.com/fredrik-johansson/arb
For machine precision, I believe ordinary interval arithmetic is still the best way to go. Unfortunately, it not only uses twice as much space; the time overhead can be enormous on current processors due to switching rounding modes (there are proposed processor improvements that would alleviate this problem). However, the better interval libraries batch operations to minimize such overhead, and it's even possible to write kernel routines for things like matrix multiplication and FFT that run just as fast as the ordinary floating-point versions (if you sacrifice some tightness of the error bounds).
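For a feel of what plain interval arithmetic does, here's a minimal sketch (my own toy, not Arb's API) that fakes directed rounding by widening each endpoint with `math.nextafter` (Python 3.9+) instead of switching the hardware rounding mode:

```python
import math

class Interval:
    """Closed interval [lo, hi] guaranteed to contain the true result."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Nudge each endpoint outward by one ulp: cheaper than switching
        # rounding modes, at the cost of slightly looser bounds.
        return Interval(math.nextafter(self.lo + other.lo, -math.inf),
                        math.nextafter(self.hi + other.hi, math.inf))

    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(math.nextafter(min(p), -math.inf),
                        math.nextafter(max(p), math.inf))

    def __repr__(self):
        return f"[{self.lo!r}, {self.hi!r}]"

x = Interval(0.1, 0.1)   # the double nearest 0.1; already inexact
y = x + x + x
print(y)                 # bounds that provably enclose the exact sum
```

Every operation widens the box a little, which is exactly why long computations can end up with uselessly wide bounds unless the library works hard to keep them tight.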
Regarding the article, using a more compact encoding for intervals is a fairly old idea and I'm not really sure what is novel here.
Thanks for the numbers, how did you get that estimate? Did you consider SIMD?
It seems to be a system where the hardware design itself keeps track of the accuracy losses in floating point calculations, and provides them as part of the value itself.
The title is (predictably) exaggerated, but it's an interesting idea, and could potentially be a significant improvement in particular use cases.
Looking at the claims, it looks like he's patented an augmented floating point unit (hardware) that does bounded arithmetic. The #1 claim is "A processing device [with a] FPU [and a] bounded floating point unit (BFPU)." All the following claims are "The processing device as recited in claim 1" (e.g. CPU+FPU+BFPU) with subsequent changes.
This may seem odd, but it can be the difference between knowing and unknowing infringement. Knowing infringement results in triple damages.
IANAL — just repeating consistent advice I have received
Reading a patent is more likely to make you not infringe upon it than to make you knowingly infringe upon it.
Addition has a maximum accuracy of 1 LSB. Makes sense: the last bit could have been "rounded off" and 1.5+1.5 == 2 (but really 3 should have been returned).
Subtraction has unlimited error bounds (!!!). Well, I guess there are 53 bits in a double-precision significand, so subtraction can theoretically create 53 bits of error.
In practice, you need to keep track of the error bounds during the runtime of the program. It's not something that can be computed at compile time. After all, addition of a positive and a negative number IS subtraction. (So some subtractions are really additions, with an accuracy of 1 LSB, while some additions are really subtractions, with unlimited error bounds.)
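The asymmetry is easy to demonstrate (the values here are arbitrary):

```python
big, small = 1e16, 1.2345

# Addition: the result is off by at most 1 ulp, tiny relative to the sum.
s = big + small

# Subtracting the big part back out leaves only that rounding noise,
# which is now enormous relative to the answer we wanted.
diff = s - big
print(diff)   # 2.0, not 1.2345: almost every significant bit of
              # `small` was lost (1 ulp at 1e16 is 2.0)
```

The addition was "accurate" in the relative sense, yet the subsequent cancellation promoted its 1-ulp rounding into a 60% error, which is exactly why the bound can't be assigned per-operation at compile time.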
Is there something more novel to his approach?
Note that it’s a claim on the processing unit implementation (e.g. the FPU), not the method.
Nonetheless, I’d be very surprised if this survives scrutiny against interval arithmetic prior art.
So the inventor gets a patent number for his LinkedIn profile, and USPTO get their fee, and that's the end of it. A win-win for all involved.
Earlier HN discussion of the phenomenon: https://news.ycombinator.com/item?id=16015371
Patents like this have "threat value", which is often happily exploited by "IP monetization" companies, contingent law firms, etc.
This is the kind of stuff that turns into 100x $50k settlement demands.
What he is doing appears so be interval arithmetic: https://en.wikipedia.org/wiki/Interval_arithmetic
Because we don't have infinite computer memory or processing power, numbers have to be finite, so no one will ever "solve the floating point error problem." But being able to quantify the error is both extremely useful and extremely complex, because you have to try to determine how the error propagates through all of the operations applied over the original input values.
In science this is also done based on the precision of the raw data, roughly by selecting a sensible number of significant figures in the final calculation. In other words, they omit all of the digits they deem to be potentially outside of the precision provided by the raw data. E.g., your inputs are a: 123.456 and b: 789.012, but your result from some multistep calculation is 12.714625243422799; obviously the extra precision is artificial and should be reduced to something slightly less than the input precision (because it will have been rounded).
For floating point math this is about going a step further by calculating the propagation of error from the end of the maximum length significand provided by IEEE 754 (where anything longer causes rounding and thus error), and trying to quantify how that window opens wider and wider as those rounding errors propagate towards more significant digits as more operations are performed. With interval arithmetic this is done by keeping track of the upper and lower bounds of that window (the real number existing somewhere within that window).
This doesn't solve any of the many issues that floating point math has, but it allows whatever is consuming it to assign significance to the output of a calculation more precisely: i.e., so that you can say 1369.462628234m is actually 1.4e3m (implying ±100m), perhaps translating into the understanding that your trajectory calculation isn't actually as accurate as the output looks, and instead the target has a variance of up to 100x100 meters.
I expect the patent details a hardware implementation to make this practical at the instruction level rather than a likely very slow software implementation.
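To see how fast a sub-ulp rounding error can propagate, compare a double against exact rational arithmetic on a toy iteration (my own example; the map x -> 3x - 0.2 has 0.1 as a repelling fixed point, so it triples any error each step):

```python
from fractions import Fraction

f = 0.1                         # double: really 0.1000000000000000055511...
e = Fraction(1, 10)             # exact 1/10
for _ in range(40):
    f = f * 3.0 - 0.2           # each step amplifies accumulated error ~3x
    e = e * 3 - Fraction(1, 5)  # exact arithmetic stays at 1/10 forever

print(float(e), f)              # 0.1 versus a value nowhere near 0.1
```

This is contrived (an amplification factor of 3 per step), but the same mechanism, just slower, is what widens the significance window in long real-world computations.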
In the end, it seemed like any substantial computation ended up having extremely wide bounds, much wider than they deserved. Trying to invert a matrix often resulted in [-Inf .. +Inf] bounds.
Funny. There was a project at Sun Labs in the early 2000s that went a long way down this road. Without looking at its specifics, I am still surprised that the patent was accepted.