I'm still hoping to find a decent "SQL flavor" interpreter and/or translator. That would be a killer feature, allowing me to define "procs" in vanilla ANSI SQL and translate them into whatever database's SQL format I need without worrying too much about syntax and optimization. Because, you know, it's 2016, and I still need to know that fetching N rows of results from a table has multiple syntax forms depending on your chosen database and version.
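To illustrate the syntax zoo, here's a hedged sketch of what such a translator would have to emit for the same "first N rows" request. The `render_top_n` helper and the dialect names are my own invention for illustration, not any real library's API:

```python
# Hypothetical sketch: one "fetch first N rows" request rendered into
# the dialect spellings the major databases actually use.

def render_top_n(table, n, dialect):
    """Return a SELECT that fetches the first n rows of `table`."""
    if dialect in ("postgresql", "mysql", "sqlite"):
        return f"SELECT * FROM {table} LIMIT {n}"
    if dialect == "mssql":
        return f"SELECT TOP {n} * FROM {table}"
    if dialect == "oracle":  # 12c+; older versions need a ROWNUM subquery
        return f"SELECT * FROM {table} FETCH FIRST {n} ROWS ONLY"
    raise ValueError(f"unknown dialect: {dialect}")

print(render_top_n("users", 10, "postgresql"))  # SELECT * FROM users LIMIT 10
print(render_top_n("users", 10, "mssql"))       # SELECT TOP 10 * FROM users
```

Same semantics, three spellings, and that's before you get into version differences within a single product.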
I wonder if some of the ORMs would get you a bit closer to a cross-platform proc. They'd need to know all of the syntax variances between the DBs, I guess, but I doubt you'd ever get to the stage of writing cross-db stored procs that compile to Transact-SQL etc., or whether the effort would be worth it.
Have you had to do that? i.e. write the same functionality in two different databases as a stored proc? I guess software vendors might face that. I'd probably push the logic back into the app code if possible, unless the amount/processing of data precludes it for whatever reason.
tup only rebuilds the parts that need to be rebuilt. The build files you produce describe a dependency tree, and tup can recognize when a node in the tree hasn't changed and won't trigger further actions. Consider: you modify the comments in your .c source file, correcting a typo from "alpabetical" to "alphabetical". There has been no code change. Make will, without additional configuration, build the .o file and then rebuild everything else down the line to the final lib or exe. tup will recognize, by default, that the .o file is unaltered and won't trigger any further action.
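A minimal sketch of the idea (not tup's actual implementation): fingerprint a build step's output by its content rather than its timestamp, and stop the cascade when the bytes are identical:

```python
import hashlib

def content_hash(data: bytes) -> str:
    """Fingerprint a build output by its bytes, not its timestamp."""
    return hashlib.sha256(data).hexdigest()

def downstream_needs_rebuild(previous_output: bytes, new_output: bytes) -> bool:
    """True only if the output actually changed. A comment-only edit that
    produces identical object code therefore stops the rebuild cascade."""
    return content_hash(previous_output) != content_hash(new_output)
```

Make, by contrast, compares timestamps, and a freshly written .o always looks newer than everything that depends on it.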
tup also has a monitoring system that I intended to port to OS X but got distracted from. It watches the filesystem for changes so tup doesn't need to walk the file tree to identify what changed when you issue the build command: it already knows exactly what has changed and is ready to issue the (near-)optimal instructions.
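A crude stand-in for what the monitor buys you (tup's real monitor uses OS-level file notifications on Linux rather than polling): the cost it avoids is re-walking and re-stat-ing the whole tree at build time, which you can picture as diffing two snapshots:

```python
import os

def snapshot(root):
    """Map each file's relative path to its mtime, walking the tree once."""
    result = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            result[os.path.relpath(path, root)] = os.stat(path).st_mtime
    return result

def changed_files(before, after):
    """Files that are new or have a different mtime since `before`."""
    return sorted(p for p in after if before.get(p) != after[p])
```

With a monitor, the "after" picture is maintained continuously, so the build command starts from an already-computed change list instead of paying for the walk.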
My .org files may change, but the .c (or whatever) files don't change as often, or they do but only in whitespace, so the .o files don't change, etc. Most of the changes are in the documentation, the layout, or the export (HTML/PDF) options. So the whole project doesn't get rebuilt (like it might with make) just because I added a paragraph describing an algorithm.
It's also great for source code dissection. Using the noweb facility I can break a file down from one initial source block into dozens, creating cross-references across files and documents very easily, and then export the result in a distributable format (I'm currently doing this for a new project at work). We've taken on a 100k SLOC project, and while we understand it at a high level, none of us has ever worked on the source code (we mostly work at the maintenance end, and this project has entered maintenance). Technically there's documentation on it, and Doxygen can generate nice reports. But that doesn't beat the act of tearing the code apart line by line for really learning and understanding it.
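For the curious, this is roughly what that noweb breakdown looks like in org-mode; the file name and chunk names here are made up for illustration:

```org
* Parsing
The parser's entry point, reassembled from named chunks:

#+begin_src c :noweb yes :tangle parser.c
<<includes>>
<<parse-entry>>
#+end_src

Each chunk gets its own prose next to it as I work out what it does:

#+begin_src c :noweb-ref includes
#include <stdio.h>
#+end_src

#+begin_src c :noweb-ref parse-entry
int parse(const char *input) {
    /* ... */
    return 0;
}
#+end_src
```

Tangling regenerates the original source file; exporting produces the annotated HTML/PDF walkthrough.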
This'll be the largest project I've attempted this on, so that's a new challenge; usually it's been sub-10k projects, or just small sections of larger ones. But I'll be one of three people on the developer side of this one (one above me, responsible for more than just code, and a junior dev just out of college). So either we fumble or one of us becomes an SME. The work is also shareable: I can generate a PDF or a set of HTML files from my efforts and distribute them, so what I learn can more easily be passed on to others.
- Eliot, a project of mine, is a logging system for Python that actually gives you a concept of causality: http://eliot.readthedocs.io/en/0.12.0/introduction.html