It requires just two induction proofs:
- One, that for I=1, after N comparisons the largest number is at position 1 (proven with induction); this is the base case.
- The other, that for any I=n+1, if we assume the first n slots are ordered, we can treat position n+1 onward as a new array of length N-n and apply the base-case proof.
Talking about computer science and not doing the bare minimum of computer science is peak software engineering :)
By realizing the low-indexed portion is always sorted, you've already proved the algorithm yourself and the prover is just checking for bugs in your logic.
I'm not saying the proof isn't valuable, just that it's not magical and actually requires the user to understand the majority of the proof already.
The low-indexed portion is sorted, but isn't guaranteed to contain the lowest or the highest i elements of the list (except when i=1), and the list is ultimately sorted in increasing, not decreasing, order. The final placement doesn't occur until the last iteration of the outer loop, when the inequality is effectively reversed (the interesting variable, j, is on the right).
Because of that, the proof outline discussed here doesn't work.
Consider what happens if the unique smallest element starts at position n. It is placed at the start of the list (the correct final position) in the final iteration of the outer loop (i=n), and not before.
Proof (for simplicity the list is indexed 1 to n):
Let A[n] < A[k] for all k != n in the initial list.
Elements are only swapped when A[i] < A[j]. If i != n and j = n, then A[i] > A[n] = A[j], so A[n] is not swapped.
Then, when i = n and j = 1, A[i] = A[n] < A[1] = A[j], so A[1] and A[n] are swapped, placing the smallest element in position 1 at last.
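The scenario in that proof can also be checked mechanically. A small 0-indexed sketch (the function name `firstPassWithMinInFront` is mine, not from the comment): with the unique minimum starting at the last position, it only reaches the front during the final outer iteration.

```javascript
// Run the two-loop sort and report the (0-indexed) outer iteration
// after which the minimum first sits at the front of the array.
function firstPassWithMinInFront(arr) {
  const a = arr.slice(), n = a.length;
  const min = Math.min(...a);
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++)
      if (a[i] < a[j]) [a[i], a[j]] = [a[j], a[i]];
    if (a[0] === min) return i; // min reached the front in iteration i
  }
  return -1; // never happened (should not occur)
}
```

For [2,3,4,5,1] (unique minimum last), the minimum only arrives at the front in the last outer iteration, matching the proof above.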
Solution is even simpler:
Empty case: after 1 iteration (I=1), the largest number is at position 1.
Base case: after 2 iterations (I=2), the first 2 elements are ordered, and the largest number is at position 2.
Assume-N case: after N iterations, the first N numbers are ordered (within the sublist, not for the entire array) and the largest number is at position N.
N+1 case (I=N+1):
- For every J<N+1 with A[J] <= A[I], nothing happens.
- From the first J with J<N+1 and A[J] > A[I] (denote it x), each cell switches in turn, since A[J] < A[J+1].
- At J=N, the largest number moves from A[N] to A[I].
- For every J>N, nothing happens, as the largest number is now at A[I].
Not part of the proof, but to make it clear, we end up with (writing A[I] for the value inserted in this iteration):
- for J<x: A[J] < A[I]
- for J=x: A[J] = A[I]
- for J>x: A[J] >= A[I], and the list is ordered for the first J elements
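The induction invariant sketched above can be checked empirically. Here is a 0-indexed sketch (the name `checkInvariant` is mine): after outer iteration i, the prefix a[0..i] is sorted and a[i] holds the overall maximum.

```javascript
// Run the two-loop sort, verifying the invariant after each outer pass:
// the prefix a[0..i] is sorted and a[i] contains the maximum.
function checkInvariant(arr) {
  const a = arr.slice(), n = a.length;
  const max = Math.max(...a);
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++)
      if (a[i] < a[j]) [a[i], a[j]] = [a[j], a[i]];
    for (let k = 0; k + 1 <= i; k++)   // prefix a[0..i] is sorted
      if (a[k] > a[k + 1]) return false;
    if (a[i] !== max) return false;    // a[i] is the maximum
  }
  return true;
}
```

This is a spot check, not a proof, but it makes the claimed invariant easy to observe on concrete inputs.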
After j ranges from 1 to i, it will still be the case that the values from 1 to i are sorted. So the iterations with j >= i can be skipped without disturbing the induction condition.
The reason this is true is: if A[i] >= every element A[j] for j < i, then it is clearly true. Otherwise, there is some smallest j in 1 to i-1 inclusive such that A[i] < A[j]. Then A[i] will swap with that A[j]; then it will swap again with the next A[j+1] (because the original A[j] was < A[j+1], by the induction hypothesis), then with the next element... swapping with every element until j reaches i.
After this is done we have maintained the condition that A[1] .. A[i] are sorted after j ranges from 1 to i.
In fact if k of the first i elements are <= A[i+1], then the first k iterations of the inner loop will not swap, and the next i-k iterations will swap. The latter can be seen to maintain order.
The last n-i iterations of the inner loop (where j>=i) do not affect this reasoning and can be omitted, as can the first iteration of the outer loop.
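That pruning claim is easy to test side by side. In this sketch (function names are mine), `fullSort` is the algorithm under discussion, and `prunedSort` drops the inner iterations with j >= i as well as the entire first outer iteration, exactly as argued above:

```javascript
// The full n*n double loop from the article.
function fullSort(arr) {
  const a = arr.slice(), n = a.length;
  for (let i = 0; i < n; i++)
    for (let j = 0; j < n; j++)
      if (a[i] < a[j]) [a[i], a[j]] = [a[j], a[i]];
  return a;
}

// Same comparison/swap rule, with the redundant work removed.
function prunedSort(arr) {
  const a = arr.slice(), n = a.length;
  for (let i = 1; i < n; i++)      // first outer iteration omitted
    for (let j = 0; j < i; j++)    // inner loop stops before j = i
      if (a[i] < a[j]) [a[i], a[j]] = [a[j], a[i]];
  return a;
}
```

Both produce the same sorted output; the pruned version is the insertion-by-exchange form discussed later in the thread.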
Clojure code:
(defn swap [items i j]
  (assoc items
         i (items j)
         j (items i)))

(defn weirdsort [items]
  (let [ct (count items)
        indices (for [i (range ct)
                      j (range ct)]
                  [i j])]
    (reduce (fn [items [i j]]
              (let [ith (nth items i)
                    jth (nth items j)]
                (if (< ith jth)
                  (swap items i j)
                  items)))
            items indices)))
Then in the repl: my-namespace> (def items (shuffle (range 20)))
#'my-namespace/items
my-namespace> items
[19 0 13 16 14 18 15 7 10 17 9 11 6 1 3 4 12 2 5 8]
my-namespace> (weirdsort items)
[0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19]

If you compare it with both insertion sort and bubble sort, you can see it looks more like insertion sort than bubble sort.
I think it is very possible to write this algorithm by mistake in intro compsci classes when you try to code a bubble sort from memory. I would think TAs have seen many such instances in their students' homework.
This one is kind of special, though, since it's somehow more offensive to intuition than bubble sort itself.
To the frustration of the rest of the development team, who first called me an idiot (I was new) and then could not make quicksort run as fast on inputs that were capped at something like 500 items. Apparently, algorithmic complexity isn't everything.
> algorithmic complexity isn’t everything
Yeah, very true. Or at least hopefully everyone knows that complexity analysis only applies at large n, and small inputs can change everything. In console video games it was very common to avoid dynamic allocation and to use bubble sort on small arrays. It was also extremely common to avoid a sort completely and just do a linear search over the (small) array while querying; that can end up being much faster than sorting and binary searching, especially when the total number of queries, or the number of queries per frame, is also low.
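The linear-search-vs-binary-search trade can be sketched concretely (this code is mine, not the commenter's): on a tiny array, a plain linear scan over unsorted data returns the same answers as sort-then-binary-search, and which one wins at small n is exactly the point being made above; only measurement on the target hardware settles it.

```javascript
// Linear scan over an unsorted array: returns the index of x, or -1.
function linearFind(a, x) {
  for (let i = 0; i < a.length; i++)
    if (a[i] === x) return i;
  return -1;
}

// Classic binary search over a sorted array: returns the index of x, or -1.
function binaryFind(sorted, x) {
  let lo = 0, hi = sorted.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    if (sorted[mid] === x) return mid;
    if (sorted[mid] < x) lo = mid + 1;
    else hi = mid - 1;
  }
  return -1;
}
```

For a handful of elements queried a few times per frame, skipping the sort entirely and using `linearFind` is often the faster option despite the worse asymptotics.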
The issue was that the device was so low on memory (2MB of unified NV flash for code, files and operating memory) that there simply wasn't room for enough data to be a problem for the 20MHz ARM7 controller, as long as you did not do anything stupid. 600kB of it was used by OpenSSL and a further 300kB by the operating system anyway (I am aware of how funny it sounds that OpenSSL takes twice as much space as the OS).
It is not necessarily +10000, it can also be something like x5000.
Because CPUs really, really, really like working on short, simple, predictable loops that traverse data in a simple pattern, and they hate when that is interrupted by something like dereferencing a pointer or looking up a missing page.

So your super complex and super intelligent algorithm might actually be good only on paper, while doing more harm to your TLB, prefetcher, branch predictor, instruction pipeline, memory density, etc.
So there is this fun question:
"You are generating k random 64 bit integers. Each time you generate the integer, you have to insert it in a sorted collection. You implement two versions of the algorithm, one with a singly-linked list and one with a flat array. In both cases you are not allowed to cheat by using any external storage (indexes, caches, fingers, etc.)
The question: in your estimation, both algorithms being implemented optimally, what magnitude does k need to reach for the linked list to start being more efficient than the array list?"
The fun part of this question is that the answer is: NEVER.
EDIT
^W^W^W
Nope, I'm wrong. I did a worked example on paper. I think the devious thing here is the algorithm looks simple at a lazy glance.
To sort: 4-3-6-9-1; the pivot for the i loop in the next round is marked with [].
[4]-3-6-9-1
9-[3]-4-6-1
3-9-[4]-6-1
3-4-9-[6]-1
3-4-6-9-[1]
1-3-4-6-9 & sorted
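The rounds in that trace can be reproduced mechanically. A 0-indexed sketch (the name `traceRounds` is mine): each entry is the array's state after one full pass of the inner loop.

```javascript
// Run the two-loop sort and record the array after each outer pass.
function traceRounds(arr) {
  const a = arr.slice(), n = a.length, rounds = [];
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++)
      if (a[i] < a[j]) [a[i], a[j]] = [a[j], a[i]];
    rounds.push(a.slice());
  }
  return rounds;
}
```

Running it on [4,3,6,9,1] reproduces the rounds above, ending at [1,3,4,6,9].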
I can see that it sorts everything to the left of a pivot, and because it does that n times it arrives at a sorted list. A reasonable proof will be more complicated than I thought.

Does anyone here also use SPARK for this sort of thing? Are there other formal methods tools you'd use if you had to prove something like this?
Edit: Some things I noticed. The package gnat-12 does not have gnatprove. Ada mode for Emacs requires a compilation step that fails with GNAT Community Edition. With Alire there is no system GNAT, so it cannot compile it (quite possibly I'm missing something). In the end I gave up on using Emacs. GNAT Studio wouldn't run for me until I realized it needed ncurses. It also had some unhandled exceptions for me (raised PROGRAM_ERROR : adjust/finalize raised GNATCOLL.VFS.VFS_INVALID_FILE_ERROR: gnatcoll-vfs.adb:340), but in the end I managed to get it up and running.
Edit2: After playing around with it, I'm extremely impressed with what spark can do. I made a function to add 7 to numbers. Tried putting a post condition that the return value is bigger than input. "Nuh uh, it could overflow". Ok so I add a pre condition that numbers must be less than 100. "Uhm, buddy you are passing in 200 right there". This is useful stuff for real and easy to write too.
It also explains how to install GNATprove: ``alr with gnatprove``
Them: So what shows do you watch?
Me: ... It's complicated.
There are a lot of different sorting algorithms. Like, a lot, a lot.
As I watch them, I try to figure out what they were optimizing for. Some only scan in one direction. Some only use the swap operation. Some seem to do the minimum number of writes. Some are incremental performance improvements over others.
When I see an algorithm like this, I don't assume the person who wrote it was an idiot. I assume they were optimizing for something that's not obvious to me. Its only modifying operation is swap, so maybe that operation is faster than an arbitrary insert for whatever system or data structure they're using. There are no temporary variables besides loop counters, so maybe they're on a memory-constrained environment. There's barely any code here, so maybe this is for a microcontroller with precious little ROM. Or maybe they're applying this as a binary patch and they have a strict ceiling to the number of ops they can fit in.
Or maybe it's just the first sorting algorithm they could think of in an environment that doesn't ship with one and the performance is adequate for their needs. In that case, it's optimized for developer time and productivity. And honestly, it's a far more elegant algorithm than my "naive" version would be.
These are all valid reasons to use a "naive" sorting algorithm.
While it's been on my list for a while, I'm more curious to try out Ada now.
https://learn.adacore.com/ - Pretty much the best free source until you're actually ready to commit to the language. The "Introduction to Ada" course took me maybe a week of 1-2 hours a day reading and practicing to go through. There's also a SPARK course that takes a bit longer, but is also interactive.
The language reference manuals for 2012 and 202x (which should become Ada 2022):
1- contracts are written in the same language as the code to be checked/proved. This made it important to add expressive features to the language (for all / for some: quantifiers! expression functions, if- and case-expressions, and more recently the new delta-aggregate notation): these additions make the language far more expressive without too much loss of readability.
2- executable contracts: most contracts can be checked at runtime or proved. And the escape hatch (code only present for proof) is 'ghost' code which is also a nice addition.
Lots of little nifty additions to the language, coming from just 'thinking in contracts' or 'designing for provability'.
https://news.ycombinator.com/item?id=28758106 (318 comments)
(I'm really not trying to brag - I assume I must be missing something, that I'm naïve to think it obviously works, and it actually works for a different reason.)
In plain English - 'for every element, compare to every element, and swap position if necessary' - .. of course that works? It's as brute force as you can get?
if ((a[i] < a[j]) != (i < j)) swap(a, i, j) ;
Which obviously works.

The surprising algorithm sorts even though it swaps elements that are already ordered.
But the i-loop comes through 'afterwards', so when i=3 (value now 1) and j=1 (value 3), it sets them straight.
It still seems quite intuitive, but I think I cheated by skimming over it initially and thinking it was clear, whereas actually I've thought about it more now.
(Not to compare myself to anything like him, I'm no mathematician at all, but I'm reminded of an amusing Erdős anecdote in which he apparently paused mid-sentence in some lecture, left the room, and came back some time later continuing 'is clearly [...]' or similar!)
It is the opposite!
It works but not as you think it does.
This is true for the first iteration, since max(A) is eventually swapped to A[1]. It remains true in subsequent iterations: during the ith iteration, the next element, initially at A[i], is inserted into A[:i-1], shifting everything up via swaps with A[i], so that A[:i] is sorted with the max of A moved to A[i]. After that, no more swaps happen in the ith iteration, since A[i] contains the max of A.
but none of this makes a good read for people not already familiar with program proof tools.
I disagree on that. It is a good exercise to try to prove it by yourself and it was actually quite fun
The main mistakes the author makes, though:
- trying to prove it works without first understanding why/how it works. Always simulate test runs on paper / in your head first.
- trying to prove it on the machine first. Always make sure you can do the proof on paper first. Then, and only then, should you battle with your verifier/solver to get it to understand your proof.
After the first main loop, the first item will be the biggest item in the list.
The inner loop, as it always starts from 1 again, will gradually replace the bigger items with smaller ones, and so on.
> The inner loop, as it always starts from 1 again will gradually replace the bigger items with smaller items, and so on.
I think the "and so on" is the point. To put it more precisely, I think that anyone (with a suitable familiarity with the tools) can prove formally that "after the first main loop, the first item will be the biggest item in the list"; but I think that it might take a bit more work to prove the "and so on" in a way that satisfies a formal prover.
(As to the intuition—I can buy that the "and so on" part is reasonably intuitive, but only at a level of intuition that can also believe wrong things. For example, just always replacing bigger items with smaller items doesn't guarantee you'll get a sorted list!)
On paper: "it's just swaps."
Formally: "how do I even specify this?"
(For every value e of the element type, the original array and the final array have the same number of occurrences of e. Show that this relation is transitive, and show it's preserved by swap.)
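That multiset specification can be written out executably. In this sketch (the names `occurrenceCounts` and `sameMultiset` are mine), two arrays satisfy the spec iff every value occurs the same number of times in both, and a single swap preserves it:

```javascript
// Map each value to its number of occurrences.
function occurrenceCounts(a) {
  const m = new Map();
  for (const e of a) m.set(e, (m.get(e) || 0) + 1);
  return m;
}

// The permutation spec: same occurrence count for every value.
function sameMultiset(a, b) {
  const ca = occurrenceCounts(a), cb = occurrenceCounts(b);
  if (ca.size !== cb.size) return false;
  for (const [k, v] of ca)
    if (cb.get(k) !== v) return false;
  return true;
}
```

Since every step of the sort is a swap, and swaps preserve this relation, transitivity gives that the final array is a permutation of the input; that is the shape of the argument a prover wants.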
And thanks for writing this up, by the way. Having people less familiar with formal methods try and write up their experiences is, I think, much more effective at letting people in than having experts try and write tutorials.
const args = process.argv.slice(2).map(Number);
for (let i = 0; i < args.length; i++)
for (let j = 0; j < args.length; j++)
if (args[i] < args[j])
[args[i], args[j]] = [args[j], args[i]];
console.log(args.join(' '));
Didn't expect it to be actually that simple, but now that I wrote it, it makes complete sense.

To PROVE it is another thing. Good work.
for (i=0; i<n; ++i)
for (j=i+1; j<n; ++j)
if (a[j] < a[i])
swap(a[j], a[i]);
Here the inner loop basically finds the minimum of the remaining subset. This progressively fills the array in sorted order.

The number of comparisons is bounded by n^2/2, vs n^2 for TFA.
The one in TFA isn't, for one crucial reason: the comparison is the other way around... And yet, it sorts.
for (i=0; i<n; ++i)
for (j=0; j<(i ? i : n); ++j)
if (a[i] < a[j])
swap(a[i], a[j]);
Once the max has been swapped in, further inner iterations past i (the position of the max) are redundant.

However, we can go even further:
for (i=0; i<n; ++i)
for (j=0; j<i; ++j)
if (a[i] < a[j])
swap(a[i], a[j]);
This dispenses with the explicit max swap and simply does progressive insertion.

Curiously, in this transformed form the algorithm echoes the 'selection sort' from my previous post, and performs equivalently too (w.r.t. ncmpr, nswp).

Well, shooting for simplicity (as posited in Fung's arXiv paper) -- this one is on par with TFA and somewhat simpler in expression than the selection sort.
This is also direct, less 'magical' and indeed less wasteful than TFA. So for those roll-your-own-sort moments I'd rather remember this one instead.
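The comparison-count difference is easy to instrument (this sketch and its function names are mine): `tfaSort` is the full n*n double loop, `triangularSort` the j<i variant; both sort, but the latter does n(n-1)/2 comparisons instead of n^2.

```javascript
// Full double loop from TFA, counting comparisons: always n*n of them.
function tfaSort(arr) {
  const a = arr.slice(), n = a.length;
  let cmp = 0;
  for (let i = 0; i < n; i++)
    for (let j = 0; j < n; j++) {
      cmp++;
      if (a[i] < a[j]) [a[i], a[j]] = [a[j], a[i]];
    }
  return { sorted: a, cmp };
}

// The j < i variant: only n(n-1)/2 comparisons.
function triangularSort(arr) {
  const a = arr.slice(), n = a.length;
  let cmp = 0;
  for (let i = 0; i < n; i++)
    for (let j = 0; j < i; j++) {
      cmp++;
      if (a[i] < a[j]) [a[i], a[j]] = [a[j], a[i]];
    }
  return { sorted: a, cmp };
}
```

For n=5 that is 25 comparisons vs 10, consistent with the ncmpr remark above.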
/* Sort the array */
for (i = 0; i < array_size - 1; i++)
    for (j = i + 1; j < array_size; j++)
        if (array[i] > array[j])
        {
            int temp = array[i];
            array[i] = array[j];
            array[j] = temp;
        }
So that's from the late 80s.

1. j in the article runs from 0 to array_size-1 (if done in C like this; in the article it's a 1-based array, so 1 to array_size). This sort has j run from i+1 to array_size-1.
2. The swap condition is reversed. In the article's sort the swap happens when array[i] < array[j].
I'm so confused. That "new" algorithm is just BubbleSort?
This sort is like a worse-performing version of insertion sort. Insertion sort builds a sorted partition of the first i+1 items, with i increasing until the entire sequence is sorted. This one will scan the entire sequence every time in the inner loop instead of just a subset of it, and usually performs more swaps.
(Comb sort, like the mentioned algorithm, also consists of two loops and a swap.)
I was at the same time happily churning out assembly code, talking to the vga card, writing a dos extender, etc.
You can actually do quite a few things without formal education!
I was welllll over my head with complex stuff, and sorting didn't appear in the profiles, so... I guess you can call that profile-guided learning? I had the formal training, later on, and even then it didn't stick until I faced the actual complexity problem head-on.
I'll never forget that whole weekend with my AMD Athlon 900 at 100% CPU sorting a 200,000-word dictionary... It was still not finished on Monday. I implemented a (very primitive) insertion sort and it was done in less than 2 minutes...
That was my 10000-a-day day https://xkcd.com/1053