This feels very retro. I used to work on this sort of thing, but that was back in the early 1980s. This reads like something from that era, when people were first figuring out how to think about concurrency.[1]
There's been progress in the last four decades. There are completely different approaches, such as symbolic execution. The Microsoft Static Driver Verifier is an example. There's also a theory of eventually-consistent systems now, with conflict-free replicated data types. It's also more common today to think about concurrency in terms of messages rather than variable access, which is easier to reason about. There's also more of a handle on the tooling problem. Not enough of a handle, though. Proof of correctness software still hasn't really gone mainstream. (In IC design, though...)
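To make the CRDT point concrete, here's a minimal sketch of a grow-only counter (G-Counter), about the simplest conflict-free replicated data type there is; the class and names are purely illustrative, not any particular library:

```python
# Minimal G-Counter CRDT sketch (illustrative, not production code).
# Each replica increments only its own slot; merge takes elementwise max.
# Because max is commutative, associative, and idempotent, replicas
# converge no matter how merges are ordered or repeated.

class GCounter:
    def __init__(self, replica_id, n_replicas):
        self.replica_id = replica_id
        self.counts = [0] * n_replicas

    def increment(self):
        self.counts[self.replica_id] += 1

    def value(self):
        return sum(self.counts)

    def merge(self, other):
        self.counts = [max(a, b) for a, b in zip(self.counts, other.counts)]

# Two replicas update concurrently, then exchange state.
a = GCounter(0, 2)
b = GCounter(1, 2)
a.increment(); a.increment()
b.increment()
a.merge(b)
b.merge(a)
assert a.value() == b.value() == 3
```

The whole trick is that merge is a join on a lattice, so "eventually consistent" stops being hand-waving and becomes a provable property.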
There's a tendency in this area to fall in love with the notation and formalism, rather than viewing it as a means for making better software. In practice, most of the things you need to prove in program proving are trivial, and have to be mechanized or you never get anything done. Once in a while, a theoretically difficult problem shows up. You need some way to address those abstractly and feed the results back into the semi-automated system.
The main concurrency problem addressed in this paper is the Paxos algorithm. That's similar to the earlier arbiter problem.[2] The arbiter problem is also about deciding who wins access to a shared resource in a parallel system. It lives down at the hardware level, where two processors talk to one memory, and has to be resolved in nanoseconds or less. There's no sound way to select a winner in a single bounded step. But, for each round, you can tell whether the circuit has settled and produced a winner, so you iterate until it settles. There's no upper bound on how many rounds that takes, but it's statistically unlikely to take very many. Arbiters were the first hardware devices to have that property, and it was very upsetting at the time. Multiprocessor computers were built before the arbiter problem was solved, and they did indeed have hardware race conditions on access to memory. I used to debug operating systems for those machines, and we did get crashes from that.
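The "iterate until it settles" property can be sketched in software as a randomized symmetry-breaker: each round either produces a winner or must be retried, with no deterministic bound on rounds but a geometrically shrinking chance of going long. (This is only an analogy; a real arbiter settles out of analog metastability, not coin flips.)

```python
import random

def arbitrate(rng):
    """Toy symmetry-breaker between two contenders, 0 and 1.

    Each round both contenders flip a coin. If the flips differ,
    the contender who flipped 1 wins; if they match, the round is
    inconclusive and we retry. P(settle per round) = 1/2, so the
    expected number of rounds is 2, but there is no upper bound.
    """
    rounds = 0
    while True:
        rounds += 1
        a, b = rng.randint(0, 1), rng.randint(0, 1)
        if a != b:
            return (0 if a else 1), rounds

rng = random.Random(42)
winner, rounds = arbitrate(rng)
assert winner in (0, 1) and rounds >= 1

# Over many trials the average round count hovers near 2.
avg = sum(arbitrate(rng)[1] for _ in range(10_000)) / 10_000
```

Same shape as Paxos in this one respect: any single round can fail to decide, but termination is probabilistically overwhelming.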
Good to see someone is still plugging away on the classics.
[1] https://archive.org/details/manualzilla-id-5928072/mode/2up
The computer science of concurrency began with Dijkstra, not Wirth. And it was Dijkstra who introduced the P and V semaphore operations [1].
[1] https://www.cs.utexas.edu/users/EWD/transcriptions/EWD01xx/E...
[0] https://www.cs.utexas.edu/users/EWD/transcriptions/EWD00xx/E...
> Three decades ago, Dijkstra (1960) proposed the standard method of dynamic memory allocation for recursive procedures in block structured, sequential languages, such as Algol 60 (Naur 1963), Pascal (Wirth 1971) and C (Kernighan 1978).
Sheeesh, Dijkstra, can you please not show up everywhere like you're the Euler of computer science for FIVE MINUTES?!
Seriously though, that looks like a very interesting collection, will have to search through that in more detail later. Also curious about SuperPascal[1]. The first release/latest stable release column on the wiki page is kinda funny[2].
[0] http://brinch-hansen.net/papers/1995c.pdf
"You probably belong to one of two classes of people who I will call scientists and engineers. Scientists are computer scientists who are interested in concurrent computing. If you are a scientist, you should be well-prepared to decide if this book interests you and to read it if it does.
"Engineers are people involved in building concurrent programs. If you are an engineer, you might have a job title such as programmer, software engineer, or hardware designer. I need to warn you that this book is about a science, not about its practical application. Practice is discussed only to explain the motivation for the science. If you are interested just in using the science, you should read about the language TLA+ and its tools, which are the practical embodiment of the science [27, 34]. But if you want to understand the underlying science, then this book may be for you."
I tell my students that you can understand, and even rediscover, the science part by applying mathematical logic to basic principles.
The engineering part, by contrast, you have to learn, because engineering decisions are full of conventions and constraints from the time of invention and the people who did the inventing.
But understanding the science and knowing roughly the constraints gets you very far. There are usually only a few superficialities left, like concrete syntax, that are basically impossible to "understand" and need to be "learned".
(Of course the exact syntax and semantics of let’s say semaphores in POSIX belong to a different category. But I’m not sure I’d want to call it an “invention”.)
Does anyone have a suggestion for a good book? At a bare minimum, it should discuss CAS and how it's implemented at the hardware level.
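Not a book recommendation, but for anyone else reading: the pattern in question is the CAS retry loop. Here's a toy Python model of compare-and-swap semantics; real CAS is a single atomic instruction (e.g. LOCK CMPXCHG on x86, or an LDXR/STXR pair on ARM), and the lock below just stands in for the hardware's atomicity guarantee.

```python
import threading

class Cell:
    """Toy model of one word of memory with compare-and-swap.

    In hardware, CAS compares and conditionally stores in one
    indivisible step; here a lock simulates that indivisibility.
    """
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def load(self):
        return self._value

    def compare_and_swap(self, expected, new):
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

def atomic_increment(cell):
    # The classic lock-free retry loop: read, compute, try to
    # publish, and retry if another thread got there first.
    while True:
        old = cell.load()
        if cell.compare_and_swap(old, old + 1):
            return

cell = Cell()
threads = [threading.Thread(target=lambda: [atomic_increment(cell) for _ in range(1000)])
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
assert cell.load() == 4000  # no increments lost despite contention
```

The hardware side (cache-line ownership, the MESI protocol, why CMPXCHG needs the LOCK prefix) is exactly the part most books skim, so it's a fair ask.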
"This is a preliminary version of a book. If you have any comments to make or questions to ask about it, please contact me by email. But when you do, include the version date. I expect there are many minor errors in this version. (I hope there are no major ones.) Anyone who is the first to report any error will be thanked in the book. If you find an error, please include in your email your name as you wish it to appear as well as the version date."
Those are not science.
Praxis's method was a practical application of these concepts in industry:
https://www.anthonyhall.org/Correctness_by_Construction.pdf
If the design is the hypothesis, would you count that as science?
https://www.cs.ox.ac.uk/people/bill.roscoe/publications/1.pd...