It's a rhetorical question. The answer is patently obvious at this point: Lisp is evil, and you'd damned well better write all your code in C++ and XML and JavaScript and PL*SQL and CSS and XSLT and regular expressions and all those other God-fearing red-blooded manly patriotic all-American languages from now on. No more of this crazy Lisp talk, ya hear?
Here is what a make file should look like:
1) vpath statements for various file extensions
2) Phony targets for logical build units, including "all"
3) Generic rules for mapping .abc to .xyz; these rules should have exactly one line of code which executes an external script/tool
4) A list of edges in the dependency graph
5) A few variables as necessary to eliminate redundant edges
If you put any logic in your Makefiles, you are doing it wrong.
If your builds are slow, add empty dummy files for larger culling by timestamps. If timestamps are insufficient, codify early out logic into tools.
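A minimal sketch of a Makefile in the spirit of the recipe above (the directory layout and the compile/link helper scripts are hypothetical, just to illustrate the shape):

```make
# Hypothetical layout: sources in src/, objects in obj/.
# 1) vpath statements for file extensions
vpath %.c src

# 2) Phony targets for logical build units
.PHONY: all
all: app

# 3) Generic rule mapping .c to .o -- exactly one line, delegating
#    all logic to an external tool
obj/%.o: %.c
	./tools/compile.sh $< $@

# 5) A variable to eliminate redundant edges
OBJS = obj/main.o obj/util.o

app: $(OBJS)
	./tools/link.sh $(OBJS) -o $@

# 4) Plain edges in the dependency graph -- no logic, just structure
obj/main.o obj/util.o: src/util.h
```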
Not having logic in my Makefiles enables parallel execution and strong support for incremental builds. If I were to use Lisp as a build system, I'd create a DSL with these same properties, forbidding arbitrary logic in my dependency graph. It's about finding the right balance: injecting expressiveness without losing the desirable properties of the DSL. This is why every developer needs to understand more about programming language design. Anytime you create any type of file format, you are doing language design; and anytime you parse any type of file format, you are reverse-engineering it. Understanding the intended separation of logic for Makefiles helps me write better Makefiles.
"Close to"? Not a formal proof of Turing-completeness (and may only work with GNU make, not sure), but...
[me@host: ~]% cat fibo.mk
dec = $(patsubst .%,%,$1)
not = $(if $1,,.)
lteq = $(if $1,$(if $(findstring $1,$2),.,),.)
gteq = $(if $2,$(if $(findstring $2,$1),.,),.)
eq = $(and $(call lteq,$1,$2),$(call gteq,$1,$2))
lt = $(and $(call lteq,$1,$2),$(call not,$(call gteq,$1,$2)))
add = $1$2
sub = $(if $(call not,$2),$1,$(call sub,$(call dec,$1),$(call dec,$2)))
fibo = $(if $(call lt,$1,..),$1,$(call add,$(call fibo,$(call dec,$1)),$(call fibo,$(call sub,$1,..))))
numeral = $(words $(subst .,. ,$1))
go = $(or $(info $(call numeral,$(call fibo,$1))),$(call go,.$1))
_ := $(call go,)
[me@host: ~]% make -f fibo.mk
0
1
1
2
3
5
8
13
21
34
55
89
144
233
377
610
987
1597
^C
The way it works is that there are no mandatory newline characters in JSON. Whitespace between lexical elements is ignored, and any embedded newlines in strings can be escaped (e.g. as \n). So a log format that a few people are using today looks like this:
{"kind": "foo", "id": 1, "msg": "hi"}
{"kind": "bar", "id": 2, "msg": "there"}
Each log message takes up a single line in the file. You can trivially deserialize any line to a real data structure in your language of choice. You can (mostly) grep the lines, and they're human readable. I do this at work, and frequently have scripts like this:
scribereader foo | grep 'some expression' | python -c 'code here'
In this case we're storing logs in the format described above (a single JSON message per line), and scribereader is something that groks how scribe stores log files and outputs to stdout. The grep expression doesn't really understand JSON, but it catches all of the lines that I actually want to examine, and the false positive rate is very low (<0.1% typically). The final part of the pipe is some more complex python expression that actually introspects the data it's getting to do more filtering. You can of course substitute ruby, perl, etc. in place of the python expression.
I feel like this is a pretty good compromise between greppability, human readability, and the ability to programmatically manipulate log data.
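The final stage of the pipe can be a few lines of Python. A minimal sketch, using the field names from the example log above (a real filter would do whatever introspection the task needs):

```python
import json

def grep_logs(lines, kind):
    """Deserialize one JSON object per line, yield messages of one kind."""
    for line in lines:
        record = json.loads(line)   # each line is a complete JSON document
        if record.get("kind") == kind:
            yield record["msg"]

log = ['{"kind": "foo", "id": 1, "msg": "hi"}',
       '{"kind": "bar", "id": 2, "msg": "there"}']
print(list(grep_logs(log, "foo")))  # -> ['hi']
```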
http://simonwillison.net/2008/Jun/15/steveys/
That's the problem with discussing old articles. Information gets updated.
Still, the real XML-killer for me is YAML. It's even more readable than JSON, and allows many documents in a single file. This makes it excellent for logs, or for any application where your files get big and you want to stream records off them without having to parse the whole file into memory at once. Sure, you can do this with XML and parser hooks, but it's so much more of a pain than just iterating over top-level YAML documents.
Another killer feature is that it's simple enough that I've been able to ask clients to provide me with information in YAML format just by giving them an example record to follow. They're non-technical, but they can read it as easily as me. That's a pretty big win.
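The multi-document streaming looks like this (field names are just illustrative, mirroring the JSON log example above); a reader such as PyYAML's yaml.safe_load_all can iterate over the documents lazily without loading the whole file:

```yaml
# One document per record, separated by "---"
---
kind: foo
id: 1
msg: hi
---
kind: bar
id: 2
msg: there
```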
(:kind foo :id 1 :msg "hi")
(:kind bar :id 2 :msg "there")

And now anyone who can modify your log can execute arbitrary code in the reader process…
[ http://en.wikipedia.org/wiki/Capability-based_security ]
(for a more recently active discussion, try http://en.wikipedia.org/wiki/Domain_Specific_Language)
There is a point to be made for the idea that you are solving the wrong problem with the log parsing. On the other hand, if you are trying to interface with other developers' popular engines, you may not have a choice.
The CL-PCRE and Java regular expression engines don't have those somewhat expensive checks, and so are much more likely to encounter catastrophic behavior.
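Python's re module is another backtracking engine without those checks, so it shows the same blow-up. A minimal sketch of a classically pathological pattern (the input size is kept small so the demo finishes quickly; the work grows roughly as 2**n):

```python
import re

# Nested quantifiers: once the trailing "b" guarantees failure, the
# engine still tries exponentially many ways to split the run of "a"s
# between the inner and outer "+" before giving up.
pattern = re.compile(r"(a+)+$")

subject = "a" * 18 + "b"
print(pattern.match(subject))   # -> None, but only after ~2**18 backtracks
```

Bumping the 18 up a few characters at a time makes the exponential cost obvious on any machine.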
It takes a lot of pain out of XML processing. You don't have to remember the specifics of XPath/XQuery, but you still have to deal with the pain of multiple-namespace resolution inherent in XML.
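For the flavor of that namespace pain, here is a minimal sketch using Python's standard-library ElementTree (the document and its namespace URIs are made up): every single query has to carry a prefix-to-URI map, even for a trivial lookup.

```python
import xml.etree.ElementTree as ET

# Hypothetical two-namespace document
doc = """<r xmlns:a="urn:a" xmlns:b="urn:b">
  <a:item>1</a:item>
  <b:item>2</b:item>
</r>"""

root = ET.fromstring(doc)
ns = {"a": "urn:a", "b": "urn:b"}   # must be threaded through every query
print(root.find("a:item", ns).text)  # -> 1
print(root.find("b:item", ns).text)  # -> 2
```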
(Say you want to prefix every sentence with the expression "And then he said, ". The perl way: replace every empty space after a period or the beginning of the string with "And then he said, ". Emacs way: While there are sentences, go to the beginning of the next sentence and insert "And then he said, ". Same result, different way of thinking.)
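The two models can be sketched side by side in a few lines of Python (the sentence text is made up, and ". " is used as a crude sentence boundary):

```python
import re

text = "It was late. Nobody spoke."
prefix = "And then he said, "

# Perl way: one substitution over the whole string -- match the start
# of the string or any ". " boundary and splice the prefix in after it.
perl_style = re.sub(r"(^|\. )", lambda m: m.group(0) + prefix, text)

# Emacs way: treat the string as a buffer and walk it sentence by
# sentence, inserting at point.
buf = text
i = 0
while i < len(buf):
    buf = buf[:i] + prefix + buf[i:]
    boundary = buf.find(". ", i)
    if boundary == -1:
        break
    i = boundary + 2            # point moves to start of next sentence

print(perl_style)
print(buf == perl_style)        # -> True: same result, different model
```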
Modify your way of thinking, and the Emacs model is wonderful. (Why do you think there are so many more Emacs extensions than Eclipse extensions, even though you can pretty much use any JVM language to customize Eclipse? It's because customizing Emacs is fast and easy.)
Emacs clones based on both Scheme and Common Lisp have been written, but they haven't taken off.
(How hard can it be to select blocks on double-click? Even Terminal.app does it!)