BLAS is not only written in more efficient code, it also uses different algorithms altogether. BLAS can apply optimizations that bring the total FLOP count below what's usually considered required for matrix multiplication (2mn^2).
[1]: http://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprogram...
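For what it's worth, the canonical example of beating the classical FLOP count is Strassen's scheme, which multiplies 2x2 blocks with 7 multiplications instead of 8 (giving O(n^2.81) when applied recursively). A minimal Python sketch of the base case — whether a given BLAS actually uses this trick is implementation-dependent:

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 multiplications instead of 8
    (Strassen's scheme; applied recursively to blocks it gives O(n^2.81))."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    # Only 7 products above; the 18 additions are cheap by comparison.
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]
```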
Note that numpy may not use BLAS (this was done to avoid any hard dependency on 3rd-party libraries). I am not sure what you mean by putting the FLOP count below what's required: BLAS still needs O(N^3) operations for an NxN matrix multiplication, optimized or not. The biggest difference between libraries is usually clever data organization/passing to use the CPU cache as efficiently as possible (memory throughput is usually the bottleneck as long as your data fit in RAM). You can easily gain an order of magnitude using MKL compared to a naive implementation in C (which you should never, ever write yourself, BTW).
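The naive implementation warned against above is just the textbook triple loop — sketched here in Python, though the C version is structurally identical. Its inner loop strides down a column of B, exactly the cache-hostile access pattern that costs the order of magnitude relative to MKL:

```python
def naive_matmul(A, B):
    """Textbook O(N^3) matrix multiply.  The inner loop reads B[k][j] with
    k varying, i.e. it strides down a column of B -- a cache-hostile
    access pattern in row-major storage."""
    n, m, p = len(A), len(B), len(B[0])
    C = [[0.0] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            s = 0.0
            for k in range(m):
                s += A[i][k] * B[k][j]
            C[i][j] = s
    return C
```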
Why wouldn't they use an algorithm that is better than O(N^3)?
I rewrote my clojure code to use JBLAS (jblas.org) and found stunning improvements: https://gist.github.com/264a2756fc657140fdb8
I haven't looked at the actual code in a while, but from what I have seen they use the standard vanilla matrix multiplication algorithm, whose complexity is no less than O(N^3) (for square matrices).
They achieve the speed by exploiting the deep cache hierarchy of modern machines, essentially by blocking (and sometimes unrolling) loops and consuming data in a cache-aware fashion. Given the latency of accessing data directly from memory, getting the data from cache can easily speed things up by tens if not hundreds of times.
In other words, it's the same algorithm, just implemented more efficiently.
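To illustrate the blocking idea (in pure Python there is no actual speedup, since interpreter overhead dominates — the point is the access pattern): the three loops are tiled so that each small tile of A, B, and C is fully reused while it is still hot in cache. Same algorithm, same FLOP count:

```python
def blocked_matmul(A, B, bs=32):
    """Cache-blocked (tiled) O(N^3) multiply of square matrices: same
    algorithm and FLOP count as the naive version, but each bs x bs tile
    is reused while it is still in cache."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, bs):
        for kk in range(0, n, bs):
            for jj in range(0, n, bs):
                # multiply one pair of tiles, accumulating into the C tile
                for i in range(ii, min(ii + bs, n)):
                    for k in range(kk, min(kk + bs, n)):
                        aik = A[i][k]
                        for j in range(jj, min(jj + bs, n)):
                            C[i][j] += aik * B[k][j]
    return C
```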
On the other hand, there is the interesting question of whether linear algebra can be done fast in a Lisp without calling BLAS. I don't have the reference off the top of my head, but there were a few papers in the ACM where it was shown that Lisp can indeed rival Fortran speed in linear algebraic computation; I will try to dig it up later. In any case, I suspect that Stalin would do a decent job optimizing a matrix multiplication routine written in Lisp. I suspect that part of the slowdown in Clojure is also because it runs on the JVM, whose safety guarantees (bounds checking and no reordering of instructions) can come with a speed hit.
EDIT: here it is http://www.cs.berkeley.edu/~fateman/papers/lispfloat.ps
From the abstract:
Lisp, one of the oldest higher-level programming languages, has rarely been used for fast numerical (floating-point) computation. We explore the benefits of Common Lisp, an emerging new language standard with some excellent implementations, for numerical computation. We compare it to Fortran in terms of the speed of generated code, as well as the structure and convenience of the language. There are a surprising number of advantages to Lisp, especially in cases where a mixture of symbolic and numeric processing is needed.
And the conclusion:
In this article we have asserted, and shown through a number of examples, that numerical computing in Lisp need not be slow, and that many features of Common Lisp are useful in the context of numerical computing. We have demonstrated that the speed of compiled Common Lisp code, though today somewhat slower than that of the best compiled Fortran, could probably be as efficient, and in some ways superior. We have suggested ways in which the speed of this code might be further improved.
Should be read with http://dl.acm.org/citation.cfm?id=235815.235824 (this is probably pay-walled).
For the particular problem discussed in the original article, there are at least two ways the multiplication A'A could potentially be made faster:
1) The BLAS _GEMM matrix multiplication routine lets you specify whether its input matrix arguments should be treated as transposed. This gets rid of the explicit transposition, AND, in this problem, it lets you compute each element as the dot product of two unit-stride vectors instead of the dot product of a unit-stride vector with a non-unit-stride vector. For SSE, this makes a huge difference.
2) For the particular case A'A, there is the even more specialized _SYRK routine, which at the very least should be cache-friendlier than a naive _GEMM. (A _GEMM implementation could also figure out that it can fall back on _SYRK for this problem, and presumably some implementations do so.)
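A sketch of what point 2 buys, in plain Python for illustration (real code would call the _SYRK routine itself): because A'A is symmetric, only the upper triangle needs to be computed — roughly half the FLOPs of a general _GEMM — and the lower triangle is filled in by mirroring. Each entry is the dot product of two columns of A:

```python
def ata_syrk_style(A):
    """Compute C = A' A the way _SYRK does conceptually: fill only the
    upper triangle (about half the FLOPs of a general _GEMM), then mirror
    it.  C[i][j] is the dot product of columns i and j of A."""
    m, n = len(A), len(A[0])
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            s = 0.0
            for k in range(m):
                s += A[k][i] * A[k][j]
            C[i][j] = s
            C[j][i] = s  # symmetry: (A'A)' = A'A
    return C
```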