I mean... ignoring the bitwise arithmetic (which is only obvious to people used to doing binary operations), this is the kind of maths that an 11yo could do.
That said, the patented solution is a little more complex. But not by much.
Which makes me curious: what other patents have we violated in our day-to-day without even knowing it?
Patents are like the criminal code - always remember "Three Felonies a Day" [1]. The system is set up so that if you are one of the 99%, the 1% can come in and bust you at will if you become too much of an annoyance/threat. They will find something if they just keep digging deep enough (not to mention that they can have your entire company's activity combed through with a microscope if they find a sympathetic court), and blast you with enough charges and threaten sequential jail time so that you cannot reasonably do anything other than plead guilty and forfeit your right to a fair trial [2].
And for what it's worth, that "play by the rules as we want or we will destroy you" tactic can even hit multi-billion dollar companies like Epic Games. It's one thing if society decides to regulate business practices by the democratic process of lawmaking... but the fact that Apple can get away with banning perfectly legal activities such as adult content, vaping [3], or using a non-Apple payment processor from hundreds of millions of people is just insane, not to mention incredibly damaging to the concept of democracy.
[1]: https://kottke.org/13/06/you-commit-three-felonies-a-day
[2]: https://innocenceproject.org/guilty-pleas-on-the-rise-crimin...
[3]: https://www.macrumors.com/2020/06/01/pax-vape-management-web...
I looked at the title while still waking up.
At first I thought of the low + (high - low) / 2 method. I then figured maybe it was better to simply predivide both numbers before adding and just correct for the lowest bit (how was that ever patented?!).
However, I didn't like having to perform two divisions so I thought there was probably something clever one could do with bit operations to avoid it. But, still being tired, I decided I didn't want to actually spend time thinking on the problem and I'd already spent a minute on it.
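For what it's worth, the two approaches described above can be sketched in C (hypothetical function names; assumes unsigned operands, and the offset version assumes lo <= hi):

```c
#include <assert.h>

/* Midpoint via the offset trick: the difference hi - lo always fits
   in the type when lo <= hi, so nothing can overflow. */
unsigned midpoint_offset(unsigned lo, unsigned hi)
{
    return lo + (hi - lo) / 2;   /* assumes lo <= hi */
}

/* Midpoint via predividing both operands, then adding back the 1
   lost when both low bits were dropped. Works for any order. */
unsigned midpoint_prediv(unsigned a, unsigned b)
{
    return a / 2 + b / 2 + (a & b & 1);
}
```

The predivide version costs two divisions (or shifts) but needs no ordering assumption, which is exactly the trade-off discussed here.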
Emphasis on supposed.
The granted patents include: laser used to exercise cat, and mobile wood based dog game (log used to play fetch).
https://abovethelaw.com/2017/10/8-of-my-favorite-stupid-pate...
https://patents.google.com/patent/US5443036A/en
https://patents.google.com/patent/US6360693
Apple steals the cake though. By patenting a geometric shape.
the patented solution immediately came to mind
And it's from 1996.
The actual patent system failure here is that the patent is not useful -- it's not valuable. If you needed this solution, you could sit down and derive it in less than an hour. That's not because it's obvious, but because the scope is so small.
The only difference between this patent and say a media codec is how long it would take to reinvent it. It might take you 200 years to come up with something as good as h.265, but there's no magic to it. There's a problem, somebody came up with a solution, somebody else could do it again given enough time to work on it. This is true for everything that's ever been patented.
The point of patents is to compensate for value of the work needed to reinvent, and so the real problem here is that value is less than any sane minimum. The value is less than the patent examiner's time to evaluate it! But court rulings have said it doesn't matter how insignificant a patent is, as long as it does anything at all it's "useful", which leads to these kinds of worthless patents.
Any logic designer, who is not completely incompetent, when seeing the expression
(a / 2) + (b / 2) + (a & b & 1);
will notice that this is a 1-cycle operation, because it is just an ordinary single addition.
In hardware the divisions are made just by connecting the bits of the operands in the right places. Likewise, the "& 1" is done by connecting the LSB's of the operands to a single AND gate and the resulting bit is connected to the carry of the adder, so no extra hardware devices beyond a single adder are needed. This is really absolutely trivial for any logic designer.
The questions at any hiring interview, even for beginners, would be much more complex than how to implement this expression.
It is absolutely certain that such a patent should have never been granted, because both the formula and its implementation are obvious for any professional in the field.
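A C-level model of the circuit described above (hypothetical function name; the shifts stand in for the wiring and the AND of the two LSBs feeds the adder's carry-in):

```c
#include <assert.h>

/* Software model of the single-adder circuit: the "divisions" are
   just wiring (drop the LSBs), and one AND gate on the two LSBs
   produces the adder's carry-in. One add, no overflow possible. */
unsigned avg_one_adder(unsigned a, unsigned b)
{
    unsigned carry_in = (a & b) & 1u;  /* AND gate on the LSBs */
    return (a >> 1) + (b >> 1) + carry_in;
}
```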
Our experiences and training have changed dramatically over the past 26 years.
The patent issued in 1996 and hasn't been revisited since (because it was never asserted in litigation). The USPTO is a lot different now, a quarter-century later.
Please be more specific or link something that explains how they've improved.
Adding two bits produces a sum and a carry:
0 + 0 = 0, carry 0
1 + 0 = 1, carry 0
0 + 1 = 1, carry 0
1 + 1 = 0, carry 1
So the sum is XOR, and the carry is bitwise AND. We can rewrite x + y as (x ^ y) + (x & y)*2. Distribute the divide, and you get (x ^ y)/2 + (x & y), which is the mystery expression.
(Note this distribution is safe only because (x & y)*2 is even.)
a & b is the bits they have in common. If they both have a bit x, you should keep it, because the average of x and x is x.
a ^ b is the bits that only one of them have. You should halve these bits, because the average of x and 0 is x/2.
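Putting those two observations together gives the overflow-free form (a sketch with a hypothetical name, not the patented code):

```c
#include <assert.h>

/* Floor average via the carry/sum decomposition described above:
   a + b == (a ^ b) + 2*(a & b), so halving each part gives
   (a & b) + ((a ^ b) >> 1) with no intermediate overflow. */
unsigned avg_xor_and(unsigned a, unsigned b)
{
    return (a & b) + ((a ^ b) >> 1);
}
```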
What sets this post apart from the rest is that it actually provides useful information for everybody, after all those other posts of the type "but that's obvious, I looked at it for 1 minute, saw half the solution, was too lazy for the rest, and in hindsight it's all obvious. What a ridiculous patent system."
Not to say that the patent system is problematic...
How is the above SWAR? It looks like a regular set of instructions.
For example, when x and y are 32-bit integers holding 4 8-bit integers, you can do
z = ((x & 0x7f7f7f7f) + (y & 0x7f7f7f7f)) ^ ((x ^ y) & 0x80808080);
Now z holds four 8-bit integers which hold the sums (modulo 256) of the corresponding integers of x and y. The masks stop the carries from propagating across lane boundaries: the low 7 bits of each lane are added directly, and the top bit of each lane is then fixed up with the XOR.

https://en.wikipedia.org/wiki/SWAR
Maybe that's the way it's meant?
Compilers might be smart enough to pick up the idiom used in the example and compile them to something done in parallel?
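For reference, the averaging version of that SWAR idiom might look like this (a sketch; avg_bytes is a hypothetical name, and the mask clears the bits that would otherwise shift across lane boundaries):

```c
#include <assert.h>
#include <stdint.h>

/* SWAR sketch: floor-average four packed 8-bit lanes at once.
   (x & y) keeps the shared bits of each lane; (x ^ y) >> 1 halves
   the differing bits, and the 0x7f mask clears the bit that each
   lane's shift pulled in from its left-hand neighbor. */
uint32_t avg_bytes(uint32_t x, uint32_t y)
{
    return (x & y) + (((x ^ y) >> 1) & 0x7f7f7f7fu);
}
```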
(a >> 1) + (b >> 1) + ((a & 1) & (b & 1))

Software patents are absolutely disgusting.
To be precise, legally it is "not obvious back in 1996." There is a lot of stuff that is obvious today that wasn't 25 years ago. That said, this one in particular probably would have been invalidated as obvious if it had ever been litigated (it was not). Also, the USPTO has reined in software patents a lot in recent years (though there are always people advocating for more or less).
Given its simplicity this makes me wonder if a compiler has ever transformed legal original IP code into patented code.
https://ai.googleblog.com/2006/06/extra-extra-read-all-about...
https://news.ycombinator.com/item?id=3530104
https://news.ycombinator.com/item?id=1130463
https://news.ycombinator.com/item?id=14906429
https://news.ycombinator.com/item?id=6799336
https://news.ycombinator.com/item?id=9857392
https://news.ycombinator.com/item?id=12147703
https://news.ycombinator.com/item?id=621557
https://news.ycombinator.com/item?id=7594625
https://news.ycombinator.com/item?id=9113001
https://news.ycombinator.com/item?id=16890739
If doomscrolling all that isn't enough to make you fear for mankind's future, I'm pretty sure there's an Ulrich Drepper glibc bug report rejection (or several) related to this topic that you can google...
On topic: Raymond's post has some other great stuff. SWAR!
> I would argue that the bug is not in the algorithm -- the bug is in languages that don't detect integer overflow by default.
Concretely, this is true enough. But abstractly, not so much: the algorithm is actually "buggy" if you abstract the problem a little. Namely, finding a midpoint of two operands does not require that the operands be numbers, or even addable for that matter. The introduction of that requirement is therefore a bug (at least in my eyes). The easiest way to see this is to replace integers with pointers. Then adding two pointers isn't even a well-defined operation in the general case, let alone dividing them by two. Whereas subtracting them and moving half the distance is actually quite well-defined, and we can see it behaves better too.
I would probably go so far as to claim that this is not an isolated example of where thinking about problems more abstractly helps us come up with solutions that have non-obvious benefits.
Read https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63303 to see many problems around pointer differencing.
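A sketch of the pointer version (hypothetical helper names; mid_index exists only to make the example checkable):

```c
#include <assert.h>
#include <stddef.h>

/* Midpoint of two pointers into the same array: adding pointers is
   not a defined operation, but difference-then-offset is. */
static const int *mid_ptr(const int *lo, const int *hi)
{
    return lo + (hi - lo) / 2;   /* assumes lo <= hi, same array */
}

/* Tiny checkable wrapper (for the example only): the midpoint
   index of a hypothetical n-element array, n <= 100. */
static ptrdiff_t mid_index(ptrdiff_t n)
{
    static const int arr[100];
    return mid_ptr(arr, arr + n) - arr;
}
```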
I haven't seen the mentioned proof, but if said proof is not formal and mechanized and/or does not consider all possibilities, including overflow, then should we really consider it to be a proof of correctness? It might prove some desirable properties, but I don't think we should leave it at that. I certainly don't think we should claim that "It is not sufficient merely to prove a program correct; you have to test it too."
When it comes to software programs, I believe proofs can and should be exhaustive. That does not necessarily mean you need to exhaustively test every input, but it does mean you need to prove correctness for every input, including inputs that might result in overflow or undefined behavior. Otherwise, we should not consider it a proof of correctness.
The trust in the significance of a proof is limited by the trust in the statement to be proven. The statement to be proven was "If XYZ assumptions about the statements in the underlying language hold, then the result of that function is the result of a binary search". The proof is correct. Just that XYZ contained something incorrect.
There is no free lunch. Even if we prove all our software formally correct, we still need to "program" the specifications, with all that comes with it. They can contain bugs, need to be debugged, revisited, etc. The above is an excellent example of this.
After reading the article, learning that
unsigned average(unsigned a, unsigned b)
{
return (a / 2) + (b / 2) + (a & b & 1);
}
was once patented actually made me a bit sad for our entire system of patents.

Though actually it shouldn't be patented, because it's an obvious implementation of a math formula (and math is not patentable).
There’s another algorithm that doesn’t depend on knowing which value is larger, the U.S. patent for which expired in 2016:
unsigned average(unsigned a, unsigned b)
{
return (a / 2) + (b / 2) + (a & b & 1);
}
There's no way that should be patentable.

I have no idea why it was granted. I sent it to headquarters as a design pattern description.
That’s pretty much the definition of “prior art.”
I guess they were able to reshape it into a form that the patent office wouldn’t laugh out the door. I never really looked at it. I hate reading patents; even my own.
this would require a lawsuit to find out, but this wording in the description suggests the authors’ intent with the claims was to patent this method of computing an average on any processor:
> A general purpose computer or processor with suitable circuitry can execute the invention in a single instruction cycle (as is preferred) or multiple instruction cycles.
LZW compression[1] was patented, and Unisys actually had success extracting license fees with it[2]. In that sense, clearly an algorithm was "patented enough" that the patent was granted and used, even though the patent is literally just math. The MP3 patent is also just a mathematical algorithm which had even more legal success.
So, while technically algorithms are not patentable, in reality, the USPTO will grant patents for algorithms if you write enough legal gunk around them, and the difference doesn't really matter when a patent troll is sending threatening emails and your lawyers are demanding 20x the licensing fee to take the case.
[0]: https://en.wikipedia.org/wiki/Alice_Corp._v._CLS_Bank_Intern...
However, this is also why business method algorithms like the Amazon "One-Click" are not patentable in most countries. There is no trivial theoretical equivalence between one-click shopping and an electronic circuit.
In the US, unfortunately, they can be (although not all algorithms are patentable). For example, algorithms used in many media formats are patented.
I've never come across this problem before, I read the headline and that solution came into my head immediately before I'd even clicked. I don't think I'm clever, surely half of HN feels the same way.
Software patents are comical.
Wow! Just wow...
unsigned average(unsigned a, unsigned b)
{
return (a >> 1) + (b >> 1) + (a & b & 1);
}

That's completely retarded; it's literally the first solution I think of when I hear this problem.
In the case of the phone you already know the patented solution, which obviously makes it impossible for you to judge its obviousness. That presumably wasn't the case with GP and the presented problem.
I suspect the POWER team has a good sense of humor. There's also the EIEIO instruction https://www.ibm.com/docs/en/aix/7.2?topic=set-eieio-enforce-...
EDIT: it's included in the collection of methods in the article as expected.
unsigned midpoint(unsigned a, unsigned b) {
asm("add\t%1,%0\n\t"  /* a += b; the carry flag catches the 33rd bit */
"rcr\t%0"             /* rotate right through carry: halves the 33-bit sum */
: "+r"(a)
: "r"(b));
return a;
}
Although `(a & b) + (a ^ b) / 2` is probably the more conservative choice.

unsigned average(unsigned a, unsigned b)
{
return (a & b) + (a ^ b) / 2;
}
A quick sanity check of this:

23 & 21 = 21
23 ^ 21 = 2
21 + 2 / 2 = 22 (order of operations)
I wonder why this is there. It seems the best solution but no one else is mentioning it. It also has no context near it. Nor is it stated correctly. It's just there on its own.
They say this because they know the USPTO strategy is to just hand out patents after putting in some bare minimum effort to review, and postpone the real review process to the unlikely day that someone chooses to challenge it in court and can pay private firms to do their job for them.
The winners in this arrangement are the government, the big law firms, and the large corporations that can afford them.
* x86 SSE/AVX/AVX2 have (V)PAVGB and (V)PAVGW, for 8-bit and 16-bit unsigned integers. These are "rounding" instructions though: they add 1 to the sum before the shift.
* ARM "Neon" has signed and unsigned "Halving Addition". 8,16 or 32 bit integers. Rounding or truncating.
* RISC-V's new Vector Extension has instructions for both signed and unsigned "Averaging Addition". Rounding mode and integer size are modal.
* The on-the-way-out MIPS MSA set has instructions for signed, unsigned, rounded and truncated averages, at all integer widths.
Some ISAs also have "halving subtraction", but the purpose is not as obvious.
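Scalar sketches of the two semantics listed above (the real instructions operate on packed 8/16-bit lanes; the names are borrowed from ARM purely for illustration):

```c
#include <assert.h>

/* Truncating halving add, as in ARM's UHADD: floor((a + b) / 2),
   computed without intermediate overflow. */
unsigned uhadd(unsigned a, unsigned b)
{
    return (a & b) + ((a ^ b) >> 1);
}

/* Rounding halving add, as in x86's PAVG* or ARM's URHADD:
   floor((a + b + 1) / 2), again overflow-free, using the dual
   identity with OR instead of AND. */
unsigned urhadd(unsigned a, unsigned b)
{
    return (a | b) - ((a ^ b) >> 1);
}
```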
I was surprised that the article didn't mention the need for this in binary search, and the famous problems [1] that occurred due to naive attempts.
[1]: https://en.m.wikipedia.org/wiki/Binary_search_algorithm
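The famous bug in a minimal sketch (hypothetical function names; mid_buggy shows the broken form and must never be called with endpoints near INT_MAX):

```c
#include <assert.h>
#include <limits.h>

/* The classic binary-search midpoint bug: low + high can overflow
   a signed int, which is undefined behavior in C. */
int mid_buggy(int low, int high)
{
    return (low + high) / 2;   /* broken for large low + high */
}

/* The fix: every intermediate value stays in range. */
int mid_safe(int low, int high)
{
    return low + (high - low) / 2;   /* assumes low <= high */
}
```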
Gcc and Clang both recognize the pattern of shifts and OR that reproduce a rotation, and substitute the actual instruction, no intrinsic needed.
I bet MSVC does too.
It's 2022. stdint.h is old enough to drink, and is probably married with a kid on the way. Just include it already?
unsigned average(unsigned a, unsigned b)
{
return (a / 2) + (b / 2) + (a & b & 1);
}
That makes the patent system seem broken to me.

One of the reasons I love Python is that integers never overflow, so this becomes a trivial problem.
https://www.askpython.com/python/built-in-methods/python-rou...
"Also, if the number is of the form x.5, then, the values will be rounded up if the roundup value is an even number. Otherwise, it will be rounded down.
For example, 2.5 will be rounded to 2, since 2 is the nearest even number, and 3.5 will be rounded to 4."
Consider a term project of an undergraduate CS course, where the goal is spelled out, but the method is left for discovery.
Methods developed within any such project immediately invalidate patents. They're apparently obvious to folks learning to become "skilled in the art".
Yes, in practice, reaching a legal threshold would be hard (are you sure the students didn't read the patent or any description directly resulting from it?). But I'd definitely run a "patent invalidation course" - if I had confidence that the results would actually affect patents.
(a>>1) + (b>>1) + (a&b&1)
No division needed.

unsigned average(unsigned a, unsigned b) {
return (a + b) / 2;
}
At the end of the day it's all just text. There are plenty of steps before any of it does anything at all.

Really depends on where you're coming from. Anyone who has dipped their toes in embedded programming will immediately know they are equivalent, and many will correct /2 to a bitshift, because that's what you want to happen.
I get that bit twiddling is obscure outside of low level programming, but bit shifts really is kindergarten stuff in this domain.
Turning (a+b)/2 into a/2 + b/2 is basic obvious math.
If you do it and do any basic testing you will realize you are getting off-by-one errors; looking at them can then make it obvious when they appear and hence how to fix them.
Sure, a proof is more complex, but then you can just trivially test it for all smaller-bit numbers over all possible inputs, making proofs unnecessary (for those sizes).
This is a solution a not-yet-graduated bachelor student can find in less than a day.
Honestly, having granted a patent for this should lead to disciplinary measures against the person who granted it.
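Such an exhaustive check over a smaller width really is trivial; a sketch (hypothetical function name, checking all 65,536 pairs of 8-bit inputs):

```c
#include <assert.h>
#include <stdint.h>

/* Exhaustively compare the patented formula against the reference
   definition for every pair of 8-bit inputs. At 8 bits, a + b
   cannot overflow a uint32_t, so (a + b) / 2 is a valid oracle. */
int check_all_8bit(void)
{
    for (uint32_t a = 0; a < 256; a++)
        for (uint32_t b = 0; b < 256; b++) {
            uint32_t want = (a + b) / 2;
            uint32_t got  = a / 2 + b / 2 + (a & b & 1);
            if (got != want)
                return 0;   /* mismatch found */
        }
    return 1;   /* formula agrees on all inputs */
}
```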
Props for including the assembler breakdown for every major CPU architecture.
Divide both operands by 2 was my first idea before loading the page. (I like to try that sometimes before reading the articles.)
I didn't think about the carry bit, but it seems like that would be a logical solution after 5 minutes of extra thinking.
I'm not sure how that's patentable.
That's insane to me.
But maybe there is more to it.
I didn't read the patent itself.
The patent is for a circuit design to perform that algorithm in a single cycle. The algorithm was never patented, nor could it be.