A common thing I ended up doing on some "small data" hack projects was extremely liberal use of SQLite: SELECT ... UNION ( SELECT ... ... GROUP BY ... ( UNION ... etc ) ) ... absolutely terrible SQL, but it got the job done, returning the 100 or so records I was interested in.
It'd be great if I could write some SQL, then pop it out into: fn_group001, fn_join(g1, g2, cols(c1, c2)), ...etc...
...and then have composable sub-components of what the janky SQL-COBOL syntax supports, but in a group().chain().join(...) style.
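A rough sketch of what that combinator style could look like, using Python's stdlib sqlite3. The combinators here (`select`, `group_by`, `union`) are hypothetical names invented for the example, not an existing library:

```python
import sqlite3

# Hypothetical combinators: each returns a SQL fragment as a string,
# so the pieces can be named, composed, and tested individually.
def select(cols, table):
    return f"SELECT {', '.join(cols)} FROM {table}"

def group_by(query, *cols):
    return f"{query} GROUP BY {', '.join(cols)}"

def union(*queries):
    # SQLite compound SELECTs take no parentheses around the operands.
    return " UNION ".join(queries)

# The nested UNION / GROUP BY mess, decomposed into named parts:
g1 = group_by(select(["region", "COUNT(*) AS n"], "orders"), "region")
g2 = group_by(select(["region", "COUNT(*) AS n"], "refunds"), "region")
q = union(g1, g2)

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders  (region TEXT);
    CREATE TABLE refunds (region TEXT);
    INSERT INTO orders  VALUES ('east'), ('east'), ('west');
    INSERT INTO refunds VALUES ('west');
""")
print(sorted(conn.execute(q).fetchall()))  # → [('east', 2), ('west', 1)]
```

Real libraries in this vein exist (SQLAlchemy Core, for instance), but the shape is the point: small named fragments instead of one giant statement.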
I think I keep running across Datalog as something that's recommended, and of course Prolog has some similarities.
Nothing has been compelling enough to warrant jumping off of SQL, but I really do agree with the grandparent comment: SQL (aka: COBOL) is pretty clunky and non-composable in a way that complicates what you'd think would be straightforward for interactive, non-programming usage.
SQL is powerful. It is also very old and has very large warts.
If persistence hooks were also baked in, then you'd have something a little bit like stored procedures in databases, but far more powerful and with a modern syntax. Couple this with a distributed database layer supporting either eventual consistency built on CRDTs or synchronization via Raft/Paxos, and you'd have an amazing application platform.
It's always seemed dumb to me that data, which is at the very center of everything we do, feels like a bolted-on second-class citizen from the perspective of pretty much all programming languages and runtime environments. "Oh, you want to work with your data? Well we didn't think about that..." Accessing the data requires weird incantations and hacks that feel like you're entering a 1970s time warp onto a PDP-11.
Instead the language and runtime environment should be built around the data. Put the data in the center like Copernicus did with the sun.
Why has nobody done this? Has anyone even tried?
Once your files start getting fancier and the data grows large, you need special ways to read them. A Postgres database ultimately lives as a collection of files on disk; it is the Postgres server that is required to access those files in the most efficient way, to store and randomly access enormous amounts of general data.
SQLite is interesting in that there is no server; it's just a special library that enables efficient random access to a single file, which can be thought of as a black box that only SQLite knows how to interpret.
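That single-file, no-server model is easy to see from Python's stdlib (the file path below is just for illustration):

```python
import os
import sqlite3
import tempfile

# SQLite: no server process. The whole database is one ordinary file
# that the library reads and writes directly.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

conn = sqlite3.connect(path)  # creates the file on first use
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])
conn.commit()
conn.close()

# It really is just a file on disk, opaque except to the library:
print(os.path.isfile(path))   # True
with open(path, "rb") as f:
    print(f.read(16))         # b'SQLite format 3\x00' -- the magic header
```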
Unless you mean making something like SQL built directly into the language as a first-class citizen. MUMPS did something like this: https://en.wikipedia.org/wiki/MUMPS
Working with 1000+ line SQL scripts written by other people is no fun. Why wouldn't you want to decompose that into legible, testable functions in an expressive language like Scala?
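The comment names Scala, but the payoff is language-agnostic: each extracted fragment becomes a function you can run against a tiny in-memory fixture. A sketch in Python (the table, column, and function names are invented for the example):

```python
import sqlite3

# One small, named piece of what might otherwise be buried
# in a 1000-line script:
def active_users_by_plan():
    return """
        SELECT plan, COUNT(*) AS n
        FROM users
        WHERE active = 1
        GROUP BY plan
    """

# ...which is now testable in isolation against an in-memory fixture:
def test_active_users_by_plan():
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE users (plan TEXT, active INTEGER);
        INSERT INTO users VALUES ('free', 1), ('free', 0), ('pro', 1);
    """)
    rows = sorted(conn.execute(active_users_by_plan()).fetchall())
    assert rows == [('free', 1), ('pro', 1)]

test_active_users_by_plan()
print("ok")
```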
I don't know about automatically, but definitely more likely.
oh right, new language. that’ll definitely fix it. :eyeroll: