There are many languages, and language fads. Fortran was never sexy, never faddy.
My research codes from 30+ years ago still compile without issue to this day, and run, on my Linux laptop — even with the big-endian data files (gfortran has a nice switch for that, -fconvert=big-endian).
I don't really use it actively anymore, but I know many who do. And when I hear others say "X for scientific computing", I've got to chuckle a bit. C++ code I wrote 15 years ago won't compile today. Python ... the language changes within minor versions (I ran into this at work last week, with 3.6.8 being sufficiently different from 3.9.x that I had to rewrite a number of functions for 3.9.x).
I've not had to change my Fortran. Or my 25+ year old Perl. They just work. Which is something of a base requirement for scientific code. If you hand someone a code base, and N months/years later, it doesn't work ... that helps no one.
Do you recall specifically why?
Also, isn't C pretty 'time-proof'? The only time I've seen old C fail to compile was when dealing with a real relic of a codebase. If memory serves, it used the pre-standardisation (K&R) parameter-declaration syntax:
void my_func(x, y)
int x;
long y;
{
}

A while ago I compared the execution time for some simple matrix arithmetic in Rust, Fortran (2003, I think), and Python/NumPy. (As an aside: as far as science is concerned, Python without NumPy doesn't exist.) Execution times were fairly similar.
What I didn't mention was that the pretty-much-optimal Fortran and Python solutions each took maybe a minute to code.
The optimal Rust solution took over a day.
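For context, the kind of "simple matrix arithmetic" being compared might look like the following naive Rust sketch (names and sizes are mine, not from the actual benchmark; a tuned BLAS-backed version, like the Fortran and NumPy ones, would look quite different):

```rust
// Naive dense f64 matrix multiply, row-major n x n matrices.
// Purely illustrative; no blocking, no SIMD, no parallelism.
fn matmul(a: &[f64], b: &[f64], n: usize) -> Vec<f64> {
    let mut c = vec![0.0; n * n];
    for i in 0..n {
        for k in 0..n {
            let aik = a[i * n + k];
            for j in 0..n {
                c[i * n + j] += aik * b[k * n + j];
            }
        }
    }
    c
}

fn main() {
    // 2x2 check: [[1,2],[3,4]] * [[5,6],[7,8]] = [[19,22],[43,50]]
    let a = [1.0, 2.0, 3.0, 4.0];
    let b = [5.0, 6.0, 7.0, 8.0];
    let c = matmul(&a, &b, 2);
    println!("{:?}", c); // [19.0, 22.0, 43.0, 50.0]
}
```

Getting this much written is quick in any of the three languages; the day of effort goes into making the Rust version competitive with an optimized BLAS.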
I'll also note that intent(in) and intent(out) serve purposes similar to Rust's '&' and '&mut'.
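A minimal sketch of that correspondence (a toy routine of my own, not from any real code; I map intent(inout) onto '&mut', since '&mut' arguments arrive initialized, unlike intent(out) dummies):

```rust
// Rough correspondence with Fortran argument intents:
//   x:   &f64     ~  real, intent(in)    :: x    (read-only)
//   acc: &mut f64 ~  real, intent(inout) :: acc  (read-write)
fn accumulate(x: &f64, acc: &mut f64) {
    // Writing through `x` is a compile error here, much as assigning
    // to an intent(in) dummy is rejected by a Fortran compiler.
    *acc += *x;
}

fn main() {
    let x = 2.5;
    let mut total = 1.0;
    accumulate(&x, &mut total);
    println!("{total}"); // 3.5
}
```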
I remember the day the prof was late to class and came in looking much the worse for wear. The drive had experienced a head crash.
The best failure I know was due to beer fermenting in the warm space above the fancy disk of an old Sigma 3, with hilarious consequences. No opportunity to get the worse for wear from that batch, obviously.
https://fortran-lang.org/compilers/
There's the legacy Flang and a new Flang. I don't know exactly the plans for the legacy Flang, but I assume the idea is to eventually replace it with the new Flang. The new Flang and LFortran were started at about the same time, with somewhat different designs. Both are written in C++. LFortran was written from the ground up to be interactive (like Julia or Python), in addition to doing regular compilation to binaries. Some of the other goals of Flang and LFortran overlap.
I think it's good for Fortran to have at least two actively developed open source compilers.
> pi = 4.141592
Well, for the sake of correctness in the code example: that must be a typo for the intended
pi = 3.141592
Thanks for your efforts to help Fortran move forward.
[1]: https://fortran-lang.org/learn/quickstart/variables#declarin...
- (Comment) "array operations ... are slower than explicit loops". This may be related to the following:
- No mention of vectors.
- No mention of LINPACK, LAPACK, etc.
- No mention of GPUs.
Vectors would be arrays. Array operations will lose if the compiler doesn't do scalarization/deforestation properly (if deforestation is an appropriate word when dealing with arrays rather than lists/trees). Subroutine libraries and GPUs are irrelevant here.
In my experience GNU Fortran was always competitive with proprietary compilers at around the 20% level on average once the scheduling was sorted for a new architecture. That's from the times I could try SPARC SunOS and GNU/Linux, MIPS/Irix, Alpha/Tru64(?), RS6000/AIX, and x86 GNU/Linux. (I don't know about the times between those and Opteron.)
I don't have the numbers to hand, but it's at that level on the Polyhedron benchmarks relative to Ifort on SKX with roughly equivalent options. I think it was twice as fast in one case, and it basically only lost out badly where it unfortunately doesn't use vectorized maths routines, in a couple of cases unusually dominated by them, whereas libmvec would be used for C. GNU Fortran is also surprisingly more reliable than Ifort, which nevertheless has the bizarre mystique that had me ordered to use it against the advice of the code's maintainers, notwithstanding a previous comparison with Ifort, and even though the result crashed with the Ifort du jour (which wasn't an immediate clincher).
I don't remember the numbers relative to XLF on POWER, but they were respectable, and I don't have access to the proprietary Arm compiler.
Anyhow, typical HPC run times are relatively insensitive to code generation compared with MPI (especially collectives), and typically aren't even reproducible below the 10% level. [Broad picture, mileage varies, measure and understand, etc.]