Am I the only one for whom this crucial explanation didn’t click? Admittedly, I might be stupid.
Wikipedia is a bit more understandable: "The Cracovian product of two matrices, say A and B, is defined by A ∧ B = (B^T)A."
Better, I think, would be to say "the result in column i and row j is the sum of products of elements in column i of the left Cracovian and column j of the right Cracovian".
And even by this definition the example given doesn't seem to track (and the inconsistency of sometimes writing "+" and sometimes not, and of having both "0" and "-0" in the example, is bananas!):
{ 3 2 } { 1 -4 } = { 5 -2 }
{ -1 0 } { -2 3 } = { 0 2 }
3 * 1 + -1 * -2 == 5 -- check
3 * -4 + -1 * 3 == -15 -- what?
2 * 1 + 0 * -2 == 2 (okay, but shouldn't this be in the lower left, column 1 dotted with column 2?)
2 * -4 + 0 * 3 == -8 (now I'm really missing something)

But in another link I found that it's column-by-column multiplication. So if A × B = C, then C[i][j] = sum(A[k][i] * B[k][j]). Unfortunately, the example doesn't match that definition either...
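For what it's worth, the candidate definition can be checked mechanically. A minimal NumPy sketch, assuming the Wikipedia definition A ∧ B = B^T A, applied to the quoted matrices:

```python
import numpy as np

def cracovian(A, B):
    # Wikipedia definition: A ∧ B = B^T A, i.e. the (i, j) entry is
    # column i of B dotted with column j of A.
    return B.T @ A

A = np.array([[3, 2], [-1, 0]])
B = np.array([[1, -4], [-2, 3]])

print(cracovian(A, B))
# → [[  5   2]
#    [-15  -8]]
```

Those are exactly the four dot products worked out above (5, 2, -15, -8), and no arrangement of them reproduces the quoted { 5 -2 } / { 0 2 }, which supports the complaint that the example doesn't track.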
https://en.wikipedia.org/wiki/Cracovian
The Cracovian product of two matrices, say A and B, is defined by
A ∧ B = B^T A,
where B^T and A are assumed compatible for the common (Cayley) type of matrix multiplication and B^T is the transpose of B.
Since (AB)^T = B^T A^T, the products (A ∧ B) ∧ C and A ∧ (B ∧ C) will generally be different; thus, Cracovian multiplication is non-associative.
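The non-associativity is easy to exhibit concretely. A small sketch under the definition just quoted (the specific matrices are arbitrary picks that happen to separate the two groupings):

```python
import numpy as np

def cracovian(A, B):
    return B.T @ A                      # A ∧ B = B^T A

A = np.eye(2, dtype=int)
B = np.array([[1, 1], [0, 1]])
C = np.array([[1, 0], [1, 1]])

left  = cracovian(cracovian(A, B), C)   # (A ∧ B) ∧ C = C^T B^T A
right = cracovian(A, cracovian(B, C))   # A ∧ (B ∧ C) = B^T C A
print(left)                             # [[2 1], [1 1]]
print(right)                            # [[1 0], [2 1]]
print(np.array_equal(left, right))      # False: non-associative
```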
A good reference on how to use them and why they are useful is here (PDF):
https://archive.computerhistory.org/resources/access/text/20...
That's very specific to Python. A few years ago we were multiplying a lot of matrices in Fortran, and we tried transposing one of the matrices before the multiplication. With -O0 it made a huge difference, because the calculation used contiguous numbers and was more cache friendly. Anyway, with -O3 the compiler did some trick that made the difference disappear, but I never tried to understand what the compiler was doing.
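The same locality effect can be sketched from Python, swapping the Fortran details for NumPy (the array size and slicing pattern here are just illustrative): with a C-ordered array, rows are contiguous in memory, so a row-wise pass usually beats a column-wise (strided) pass unless a lower layer optimizes it away.

```python
import time

import numpy as np

n = 4000
M = np.random.rand(n, n)   # C order: each row is contiguous in memory

t0 = time.perf_counter()
row_total = sum(M[i, :].sum() for i in range(n))   # contiguous slices
t1 = time.perf_counter()
col_total = sum(M[:, j].sum() for j in range(n))   # strided slices
t2 = time.perf_counter()

print(f"rows {t1 - t0:.3f}s  cols {t2 - t1:.3f}s")
# Transposing first (or np.asfortranarray) makes the column pass contiguous,
# which is essentially the trick the comment describes applying at -O0.
```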
Old texts got really worked up over whether a vector was a row or a column. The programming language APL resolved this quite nicely: a scalar has no dimensions, a vector has its length as its one dimension, and so on. Arbitrary-rank objects all played nicely with each other in this system.
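NumPy happens to mirror the APL scheme described here; a tiny sketch:

```python
import numpy as np

# Shape () for a scalar, shape (n,) for a vector, shape (m, n) for a matrix;
# broadcasting lets arbitrary ranks combine without a row/column distinction.
s = np.array(3.0)           # rank 0
v = np.array([1.0, 2.0])    # rank 1
m = np.ones((2, 2))         # rank 2

print(s.shape, v.shape, m.shape)   # () (2,) (2, 2)
print((s * v) + m)                 # ranks mix via broadcasting
```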
A Cracovian is a character or two's difference in APL code. There's a benign form of mental illness in learning anything, where one clutches onto something novel and obsesses over it, rather than asking "That was exciting! What novel idea will I learn in the next five minutes?" I have friends from my working-class high school who still say "ASSUME makes an ass of u and me" as if they'd just heard it for the first time, while the most successful mathematicians I know keep moving like sharks.
I wouldn't stall too long thinking about Cracovians, as amusing a skim as the post provided.
There was a claim near the top that some things are easier to compute when viewed as Cracovians, then some explanation, then suddenly it switches to NumPy and shows that the time is the same.
New title: "Cracovians are a Waste of (the Reader's) Time"?