If you like writing dense, clever regexes (which I do) then you'll love k & q. The amount that you can get done with just a few characters is unparalleled.
Which leads to, IMHO, their main drawback: k/q (like clever regexes) are often write-only code. Picking up another's codebase, or even your own after some time has passed, can be very hard or impossible because of how mindbendingly dense with logic the code is. Even if they were the best choice for a given domain, I'd try to steer clear of using them for anything other than exploratory work that doesn't need to be maintained.
The biggest messes I have to clean up come less from "clever" code than they do from people who try to program in K as if it were some other language. For example, somebody fond of for loops might write the following to apply a function 'f' to pairings of values and their index in a list:
result:()
i:0
do[#v;result:result,,f[v[i];i];i:i+1]
Ugly, complicated, but "close at hand". There is of course a much nicer and more idiomatic way to do the same thing: result: f'[v;!#v]
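For readers less familiar with k: the each-both idiom above maps f over the values and their indices in lockstep. A rough Python analogue (f and v are placeholders, not anything from the thread):

```python
# Hypothetical f, for illustration only: pair each value with its index.
def f(x, i):
    return (x, i)

v = ["a", "b", "c"]

# Rough Python analogue of the k expression f'[v;!#v]:
# map f over v and over the indices 0..len(v)-1 in lockstep.
result = [f(x, i) for i, x in enumerate(v)]
print(result)
```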
Most of the time conciseness isn't the goal, but you get it as a side effect of writing good code and working "with the grain" of the language.

> their main drawback: k/q (like clever regexes) are often write-only code
This depends on the code reader's mentality. One line of k (or q or j or apl) does what 10 lines of a verbose language do. For verbose languages, your expectation is to spend 1 minute on 10 lines of code to fully understand them; for terse languages, you need to change your expectation to spending 1 minute on a single line. You are not going to understand anything if you still want to spend 6 seconds per line. On the other hand, proficiency is important. You read English articles at a slow pace in first grade; you are not going to be a speed reader without practice, even if English is the only language you speak. No one would expect you to speed-read Japanese right after you can compose simple Japanese sentences.
For me it is the opposite. I like not having to type any more than necessary. I like writing as little code as possible.
I also like being able to read through a program listing without having to wade through page upon page of long, camel case function names, and trying to follow five levels of indentation.
Not because I think that is the "wrong" approach for everyone but because I personally struggle with overcoming language and documentation verbosity in order to make any progress. It is the wrong approach for me. I wonder if perhaps this is how it feels for the typical programmer, in the OP's comment, who might struggle with a lack of verbosity.
Sometimes I streamedit (search and replace) blocks of code to remove indentation and shorten long function names to two letter labels, just so I can read without distraction.
What are the chances of finding someone who has the same preference for terseness and who is as skilled as Arthur Whitney?
q.k is about 240 lines in the version I am using. Most of the functions in .q, .Q and the other namespaces fit on a single line. This is a thing of beauty, IMO. For me, this makes studying the language manageable.
Someone in this thread posted a fizzbuzz solution last week that I thought illustrated how one can "work outward", incrementally adding primitives to the left or right to keep modifying output until the desired result is reached.
I use kdb+ daily but I'm a slow learner and still not very good with composing programs from scratch in k.
What little I have written was also done by "working outward", so the fizzbuzz example gave me some hope maybe I'm not too far off track. Thank you for that example. I hope we see some more.
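For anyone curious what "working outward" feels like outside of k, here is a rough Python sketch of fizzbuzz built in the same incremental style, layering word substitutions over the raw numbers (this is my own sketch, not the k solution referenced above):

```python
def fizzbuzz(n):
    # Start from the numbers, then layer substitutions on top,
    # the way one might grow a k expression outward.
    words = {3: "Fizz", 5: "Buzz"}
    out = []
    for i in range(1, n + 1):
        s = "".join(w for d, w in words.items() if i % d == 0)
        out.append(s or str(i))
    return out

print(fizzbuzz(15))
```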
I am thankful k exists, even if the interpreter is not open source and there's no BSD port.
As someone else commented somewhere, perhaps the most awkward aspect of k is getting it to interface with anything else. I think this is what keeps me from using it more frequently for more daily tasks.
But this may be more of a problem with everything else, and not necessarily with k.
I wish a more competent C programmer than I would write a lightweight replacement using linenoise or libedit to substitute for "rlwrap".
I will concede that there is a culture of trying to be a bit too clever on the k4 mailing list, but it's perfectly possible to write maintainable code in kdb+.
I'm guessing that the best way to address this issue is through liberal use of explanatory comments.
The "characters to clever/unmaintainable" ratios you achieve with k (which most kdb+ platforms end up reaching for at some point) are almost unparalleled. A famous example of how awesome/powerful/ridiculous k can be is the "4 lines of K" text editor: http://www.kparc.com/$/edit.k
I guess my point is that letting your guard down, for even a single line, can be orders of magnitude more trouble than it would be in most languages and for that reason I wouldn't base a new stack on it. Also, the developers for it are rare-ish and expensive + it has a serious learning curve for those unfamiliar with FP or Lisp.
Yes Q is dense but it can be annotated with comments.
I also have to maintain some java code written by this lot and it's nearly as unintelligible.
Reasons:
- Even in an OLAP database you end up with quite a few places that have very branchy code. Research on GPU-friendly algorithms for things like (complex) JOINs and GROUP BY is pretty new. Additionally, complex queries will use functions and operations that you might not have a good GPU implementation for (like regex matching)
- Compression. You can use input data compressed in any way that there is an x86_64 library for. So you can now use LZ4, ZHUFF, GZIP, XZ. You can have 70+ independent threads decompressing input data (it's OLAP so it's pre-partitioned anyway). (Technically branching, again)
- Indexing techniques that cannot be efficiently implemented on the GPU can be used again. (Again, branching)
- If you handle your own processing scheduling well, you will end up with near-optimal IO / memory patterns (make sure to schedule work on the core with local memory) and you are not bound by the PCIe speed of the GPU. With enough PCIe lanes and lots of SSD drives you process at near-memory speeds (esp. once we have XPoint memory)
So the bottom line is: if you can intelligently farm out work in correctly sized chunks (it's OLAP so it's probably partitioned anyway), then the Phi is a fantastic processor.
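The farm-out idea above can be sketched with a thread pool over pre-partitioned chunks. Everything here is a hypothetical stand-in (process_chunk would really be decompress + scan + partial aggregate on one partition):

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    # Hypothetical stand-in for decompress + scan + partial
    # aggregate over one pre-partitioned chunk of the data.
    return sum(chunk)

def run(partitions, workers=4):
    # One task per chunk; partial results are merged at the end.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(process_chunk, partitions))

print(run([[1, 2], [3, 4], [5, 6]]))
```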
I'm primarily talking about the bootable package with Omni-Path interconnect (for multiple).
Lots of people complain about the conciseness of the language and that it is "write-once" code. I tend to disagree. While it might take a while to understand code you didn't write (or even code you wrote a while ago), focusing on writing in q rather than the terser k can improve readability tremendously.
My only wish is that someone would write a free/open-source 64-bit interpreter for q - with similar performance and speed to the closed version. Kona (for k) gets close https://github.com/kevinlawler/kona
- https://github.com/johnearnest/ok
It seems more like a kdb+ competitor than open-source alternative, and isn't using q.
it's a really beautiful little language once you get into it :-)
First time I noticed (mention of) recap at http://tech.marksblogg.com/benchmarks.html
But seriously, what a wonderful world it would be if all papers were this well written.
Of course, not quite, and that's discounting the (small) sync overhead, but still, no need to shell out for 4 big servers, overpriced Phi chips and fancy wide-bus memory.
The hardware doesn't seem consistent across different benchmarks. He says it's fast for a "cpu system", but for practical purposes Phi competes more with GPGPUs.
Would this be just as fast with one redis system with 512GB ram? I don't know; too many apples-to-oranges comparisons here.
But what useful conclusions can be drawn from it?
Sort of meta, but Mark's job seems awesome. Gets all these toys and writes about configuring them. (The actual configuring is probably a pain but still)
% cat startmaster.q
k).Q.p:{$[~#.Q.D;.Q.p2[x;`:.]':y;(,/(,/.Q.p2[x]'/':)':(#.z.pd;0N)#.Q.P[i](;)'y)@<,/
Looks like line noise... :D

I see that as being something I'd very much like.
[0]: https://en.wikipedia.org/wiki/GDDR5_SDRAM#Commercial_impleme...
Unlike typical computer memory architectures, where the memory bus connects multiple chips or modules to one controller, GDDR doesn't do that; every slice of the memory controller only speaks with a single chip, strictly point-to-point. (Reducing bus load and layout issues and thus allowing higher clock rates).
That's why, with GPUs, it's usually sufficient to say how wide the bus is (often 64 - 128 - 256 - 384 - 512 bits) to get a rough idea of its performance, since memory clock frequencies occupy a rather narrow range. (However, narrow-bus, lower-end GPUs often don't use the same technology as higher-end GPUs, e.g. DDR3 instead of GDDR5)
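As a back-of-the-envelope check: peak bandwidth is roughly bus width in bytes times the effective transfer rate. The numbers below (256-bit bus, 7 GT/s effective) are illustrative, not taken from any particular card:

```python
def peak_bandwidth_gb_s(bus_bits, gigatransfers_per_s):
    # bytes moved per transfer * giga-transfers per second = GB/s
    return (bus_bits / 8) * gigatransfers_per_s

# e.g. a 256-bit bus at an effective 7 GT/s
print(peak_bandwidth_gb_s(256, 7.0))  # 224.0
```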
I love J compared with K, but that is because I found it first; the differences between J and K are minimal, but different enough to keep me using J.
https://www.walmart.com/ip/INTEL-SERVER-CPU-SC7120P-XEON-PHI...
http://tech.marksblogg.com/billion-nyc-taxi-rides-redshift.h...
Here's hoping some combo of Apache Arrow (also cache aware, much more language stack flexibility), Aerospike (lua built in), Impala, and others, can finally take on this overpriced product, which has had a lack of serious competitors for 20 years, owing to its (price inelastic) finance client base.
kdb+ is available for raspberry pi, is that cross platform enough?
https://kx.com/2016/06/08/kx-releases-raspberry-pi-build-wit...
edit:
$ file q/l64/q
q/l64/q: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.18,