We've taken the "arrogance" quote out of the HN title and replaced it with a neutral description of the letters—perhaps a bit too neutral, given how fabulous the post is. But HN readers can figure that out for themselves, especially once a post gets so high on the front page.
I was surprised to discover recently that prior to 1950 the word 'array' was used exclusively to describe the two-dimensional tables of numbers one might find in a matrix or determinant. But by the advent of FORTRAN I in 1957 and ALGOL 58, the bare word 'array' referred by default to a one-dimensional entity, with higher dimensionality marked explicitly as 'n-dimensional arrays'. I was interested in digging through John Backus's papers from this era to see if I could find any clues.
I was able to narrow the transition window to 1952-1954, since the FORTRAN preliminary report of 1954 uses the word 'array' casually in the modern one-dimensional sense, interchangeably with 'subscripted variables', the latter being the more common terminology at the time. By comparison, a virtually unknown paper by Rutishauser in 1952 describing the "for" loop did not use the word 'array' at all, only 'subscripted variables'. (Rutishauser was an accomplished mathematician and quite possibly the world's first computational scientist.) A paper by Laning and Zierler at MIT in 1954 describing a formula compiler also used only the term 'subscripted variables'.
Backus's papers also show that FORTRAN I was written specifically to take advantage of the IBM 704's machine capabilities. The 704 -- the first mass-produced computer with floating-point hardware -- improved on the preceding IBM 701 by providing index registers (three of them) and floating-point instructions that were fast for the era. Backus's papers describe how hardware support for indexing and floating point was revolutionary: up to that time, programs had to spell out all these operations in hand-written instructions (and for many programs that was pretty much all they did).
So it is clear to me now that the changeover in the implied dimensionality of the word 'array' must be related to how the array developed as a data structure abstracting away indexing operations. By the time IAL (pre-ALGOL) came on the scene in 1958, the idea of indexable homogeneous containers was already well established. But I still haven't found any smoking-gun evidence introducing the one-dimensional sense of the word. I suspect further digging into the description of the IBM 704 may be necessary. The 704 was not the first to provide index registers, but it may have been the first to call them by that name. (The Manchester Mark I computer of 1948 appears to be the first computer with an index register, but there it was called a B line. The [patent](https://www.google.com/patents/US3012724) claiming to cover index registers uses the term "control instruction" - no arrays mentioned - but it very cutely describes numbers as residing in known locations or "addresses" in quotes.)
I started programming in earnest and for money in 1961, mostly in machine code on a variety of machines, and I'm pretty sure that the notion of one-dimensional sequential locations of memory as structures (most often called arrays, but sometimes called lists) predated the idea of index registers. This is because I distinctly remember first being taught how to move through arrays sequentially just by direct address modification -- this was called indexing in the Air Force -- and only later being introduced to the index register or registers, depending on the machine.
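The two access styles described above can be sketched in modern terms (a toy illustration, not period machine code -- the memory layout and "instruction" fields here are invented for the example):

```python
# Summing an "array" stored at addresses 100..104, two ways.
memory = {100 + i: v for i, v in enumerate([3, 1, 4, 1, 5])}

# Style 1: direct address modification. The load instruction's own address
# field is rewritten each iteration, as self-modifying code did on machines
# without index registers.
load_addr = 100          # the operand address embedded in the "instruction"
total = 0
for _ in range(5):
    total += memory[load_addr]   # execute the load with the current field
    load_addr += 1               # modify the instruction's address field
assert total == 14

# Style 2: an index register. The instruction's address field stays fixed at
# the base; the hardware adds the register's value at execution time.
base = 100
index_register = 0
total = 0
for _ in range(5):
    total += memory[base + index_register]  # effective addr = base + index
    index_register += 1
assert total == 14
```

The observable behavior is identical; the difference is whether the stored instruction itself mutates or stays fixed.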
My strong guess (from above) is that the array as sequential memory accessed by address modification of some kind was a primary idea in the very first useful computers in the late 40s, and that this idea pre-dated index registers.
It would be well worth your time to look at the instruction set of EDSAC, and try writing some programs.
Yes, there are at least two separate stages of historical development here. The first is when people realized it was useful to repeat the same operation on different data in memory and viewed the collection of data as a variable in its own right. The earliest term I can find for this concept is "subscripted variable" (many examples prior to 1954, e.g. Rutishauser's 1952 "for" loop paper; Laning and Zierler, 1954), but the idea appears to go all the way back to Burks, Goldstine and von Neumann in 1946. Quoting p. 9, paras. 3.3-3.4:
"In transferring information from the arithmetic organ back into the memory there are two types we must distinguish: Transfers of numbers as such and transfers of numbers which are parts of orders. The first case is quite obvious and needs no further explication. The second case is more subtle and serves to illustrate the generality and simplicity of the system. Consider, by way of illustration, the problem of interpolation in the system. Let us suppose that we have formulated the necessary instructions for performing an interpolation of order n in a sequence of data. The exact location in the memory of the (n + 1) quantities that bracket the desired functional value is, of course, a function of the argument. This argument probably is found as the result of a computation in the machine. We thus need an order which can substitute a number into a given order -- in the case of interpolation the location of the argument or the group of arguments that is nearest in our table to the desired value. By means of such an order the results of a computation can be introduced into the instructions governing that or a different computation. This makes it possible for a sequence of instructions to be used with different sets of numbers located in different parts of the memory.
"To summarize, transfers into the memory will be of two sorts:
"Total substitutions, whereby the quantity previously stored is cleared out and replaced by a new number. Partial substitutions in which that part of an order containing a _memory location-number_ -- we assume the various positions in the memory are enumerated serially by memory location-numbers -- is replaced by a new _memory location-number_.
"3.4. It is clear that one must be able to get numbers from any part of the memory at any time. The treatment in the case of orders can, however, be more methodical since one can at least partially arrange the control instructions in a linear sequence. Consequently the control will be so constructed that it will normally proceed from place n in the memory to place (n + 1) for its next instruction."
https://library.ias.edu/files/Prelim_Disc_Logical_Design.pdf
The language is of course archaic, but the ideas described are clearly those of indexing in 3.3 and arrays in 3.4. They use the word "sequence", but arguably in its ordinary mathematical sense.
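The "partial substitution" mechanism of 3.3 can be sketched as a tiny simulation: a fixed two-order program walks through a table because one order rewrites the address field of the other. (Illustrative only -- the opcode names and encoding here are invented, not from the 1946 report.)

```python
memory = [0] * 16
memory[8:12] = [10, 20, 30, 40]          # the table of data

# Each order is (opcode, address). Order 0's address field gets rewritten.
program = [
    ("ADD", 8),       # 0: add memory[address] to the accumulator
    ("SUBST", 0),     # 1: partial substitution: bump order 0's address field
]

acc = 0
for _ in range(4):                        # four passes over the two orders
    for i, (op, addr) in enumerate(program):
        if op == "ADD":
            acc += memory[addr]
        elif op == "SUBST":
            opcode, a = program[addr]
            program[addr] = (opcode, a + 1)   # replace the location-number

print(acc)    # 10 + 20 + 30 + 40 = 100
```

The program text never contains a loop variable; the "index" lives inside the mutated instruction itself, which is exactly what made a fixed instruction sequence usable "with different sets of numbers located in different parts of the memory."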
The written historical evidence, at least, would confirm your strong guess that the idea of arrays itself is older than index registers. There's a missing etymological link though: when did a sequence of data stored consecutively in memory become associated with the word "array"? Still, the earliest written reference I can find for this second stage of historical development is the 1954 preliminary report on FORTRAN.
Maybe the word "array" is somehow derived from the advent of RAM, which even in its earliest form in Williams tubes had memory locations arranged physically in two dimensions. So right from the start we have two dimensions physically, but only one dimension logically, since the earliest computer instructions only dealt with (one-dimensional) offsets, if at all. Furthermore, popular science accounts of magnetic core memory describe them in terms of arrays. To give one example, the June 1955 issue of Scientific American (vol. 192, pp. 92-100) writes about "magnetic core arrays".
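The "two dimensions physically, one dimension logically" point amounts to an address decoder: the instruction set sees a linear address, and hardware splits it into a (row, column) position on the physical grid. A minimal sketch, with the grid width invented for illustration:

```python
COLS = 32                       # width of the physical storage grid (assumed)

def decode(address):
    """Map a 1-D logical address to a 2-D physical (row, column) location."""
    return divmod(address, COLS)

print(decode(0))     # (0, 0)
print(decode(37))    # (1, 5)  -- 37 = 1 * 32 + 5
```

The programmer's model stays one-dimensional regardless of how the bits are laid out on the tube face or core plane.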
http://www.nature.com/scientificamerican/journal/v192/n6/pdf...