It's also a column store, with compression. Runs super fast, I've used it in a couple of financial applications. Huge amounts of tick data, all coming down to your application nearly as fast as the hardware will allow.
Good support, the guys on Slack are responsive. No, I don't have shares in it, I just like it.
Regarding kdb, I've used it, but there are significant drawbacks. Costs a bunch of money, that's a big one. And the language... I mean it's nice to nerd out sometimes with a bit of code golf, but at some point you are going to snap out of it and decide that single characters are not as expressive as they seem.
If your thing is ad-hoc quant analysis, then maybe you like kdb. You can sit there and type little strings into the REPL all day in order to find money. But a lot of things are more like cron jobs, you know you need this particular query run on a schedule, so just turn it into something legible that the next guy will understand and maintain.
People could complain about the abysmal language design or debugging, but what I found most frustrating was the coding conventions they had (or hadn't), and I think the language and the community play a big role there. But the company culture matters too: I asked why the code was so poorly documented (no comments, single-letter parameters, arcane function names). "We understand it after some time, and this way other teams cannot use our ideas."
Overall, their whole stack was outdated, and of course they could not do very interesting things with a tool such as Q. For example, they plotted graphs by copying data from qStudio to Excel...
The only good thing was they did not buy the docker / k8s bs and were deploying directly on servers. It makes sense that quants should be able to fix things in production very quickly but I think it would also make sense for web app developers not to wait 10 minutes (and that's when you have good infra) to see a fix in production.
I have a theory on why quants actually like kdb: it's a good *weapon*. It serves some purpose but I would not call it a *tool* as building with it is tedious. People like that it just works out of the box. But although you can use a sword to drive nails, it is not its purpose.
Continuing on that theory, LISP (especially Racket) would be the best *tool* available: it is not the most powerful language out of the box, but it lets you build a lot of abstractions, with features for modifying the language itself. C++ and Python are just great programming languages, as you can build good software with them; Python is also a fairly good weapon.
Q might give the illusion of being the best language to explore quant data, but that's just because quants do not invest enough time into building good software and using good tools. When you actually master a Python IDE, you are definitely more productive than any Q programmer.
And don't get me started on performance (the link covers it anyway even though the prose is bad).
I remember being very impressed by kdb+ (I went to their meetups in Chicago). Large queries ran almost instantaneously. The APL-like syntax was like a magic incantation that only math types were privy to. The salesperson mentioned kdb+ was so optimized that it fit in the L1 cache of a processor of the day.
Fast forward 10 years. I’m doing the same thing today with Python and DuckDB and Jupyter on Parquet files. DuckDB not only parallelizes, it vectorizes. I’m not sure how it benchmarks against kdb+ but the responsiveness of DuckDB at least feels as fast as kdb+ on large datasets. (Though I’m sure kdb+ is vastly more optimized). The difference? DuckDB is free.
It’s worth remembering that all the modern progress builds on top of years of work by Wes McKinney & co (many, many contributors).
- Pandas
- Polars (polar bear)
- DuckDB
- Python
I find it much more likely that you couldn’t understand their code and quit out of frustration.
If you were a highly skilled quant dev and this was a good seat, quitting after two weeks would have been a disaster to manage the next transition given the terms these contracts always have.
- charting
- machine learning/statsmodels
- html processing/webscrapes
Because, for example, you can just open a Jupyter notebook and do:
import pykx as kx
import matplotlib.pyplot as plt
df = kx.q("select from foo where bar")
plt.plot(df["x"], df["y"])
It’s truly an incredibly seamless and powerful integration. You get the best of both worlds, and it may be the saving feature of the product in the next 10 years.
[1] RDB - RAM DB (recent in-memory data), IDB - Intraday DB (recent data which doesn't fit into RAM), HDB - Historical DB (usually partitioned by date or another time-based or integral column).
If your organization has already committed to serving some of these roles with other pieces of software, protocols, or formats, the benefits of vertical integration (both in development workflow and overall performance) are diminished. Since kdb+ itself is both proprietary and expensive, it is understandably difficult to justify a total commitment to it for new projects. It's a real shame, because the tech itself is a jewel.
I think Shakti could become a viable competitor to Kx if they included libraries that handle some common enterprise use cases, such as load balancing, user permissions and SSO. I have no doubt that an experienced K programmer could whip this up in a week or two, but in my experience a sufficiently large enterprise will specify that all these capabilities need to be implemented before they let the product in the door.
We've been playing around with the idea of building such an integration layer in SQL on top of open-source technologies like Kafka, Flink, Postgres, and Iceberg, with some syntactic sugar to make timeseries processing nicer in SQL: https://github.com/DataSQRL/sqrl/
The idea is to give you the power of kdb+ with open-source technologies and SQL in an integrated package by transpiling SQL, building the computational DAG, and then running a cost-based optimizer to "cut" the DAG to the underlying data technologies.
Get a free version out there that can be used for many things…
I think this has been the biggest impediment to kdb+ gaining recognition as a great technology/product and growing amongst the developer community. Having used kdb+ extensively in the finance world for years, I became a convert and a fan. There’s an elegance in its design and simplicity that seems very much rooted in the Unix philosophy. After I left finance, and no longer worked at a company that used kdb+, I often felt the urge to reach for kdb+ for little projects here and there. It was frustrating that I couldn’t use it anymore, or even just show colleagues this little-known niche tool and geek out a little on how simple and efficient it was for certain tasks/computations.
I had to write some C++ code in the past to send data into kdb and also a decoder for their wire protocol. For both I definitely had a kdb binary to test against.
I just needed to test against it. Maybe Kx gave us a development license or something, it was a good few years ago.
I built my own language for time-series analysis because of how much I hated q/kdb+, but Python has been the winner for a bunch of years now.
Agree with all the recommendations, except I think kx should open source the platform. This will attract the breed of developer that will want to contribute back to the ecosystem with improvements and tools.
Proprietary programming languages that are inconvenient for hobbyists to obtain (any more friction than cloning a git repo or installing via a package manager) have stunted open-source ecosystems, and in turn limited opportunities for grass-roots adoption.
1. ClickHouse is not a new technology — it has been open-source since 2016 and in development since 2009.
2. ClickHouse can do all three use cases: historical and real-time data, distributed and local processing (check clickhouse-local and chdb).
3. ClickHouse was the first SQL database with ASOF JOIN in the main product (in 2019) - after kdb+, which is not SQL.
Tellingly, nobody has pulled the trigger on a migration yet, as I think it’s a big call with all of the integrations that KDB sprouts, but it definitely feels like the spiritual successor.
Greenfield development though would use Python.
Here is a link on how you do queries: https://code.kx.com/q/basics/funsql/
TL;DR:
Define a table: q)t:([] c1:`a`b`a`c`a`b`c; c2:101+til 7; c3:1.11+til 7)
A qSQL select: q)select from t where c2>35, c1 in `b`c
The equivalent functional select: q)?[t; ((>;`c2;35);(in;`c1;enlist[`b`c])); 0b; ()]
Mind you, these are just the basic queries :)
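For contrast, here is my own rough pandas rendering of that filter (same toy table as the q snippet; this is a translation for illustration, not an official equivalent):

```python
import pandas as pd

# Same toy table as the q example:
# t:([] c1:`a`b`a`c`a`b`c; c2:101+til 7; c3:1.11+til 7)
t = pd.DataFrame({
    "c1": list("abacabc"),
    "c2": range(101, 108),
    "c3": [1.11 + i for i in range(7)],
})
# select from t where c2>35, c1 in `b`c
out = t[(t["c2"] > 35) & (t["c1"].isin(["b", "c"]))]
print(out)
```

Wordier, certainly, but most developers can read it cold, which is the crux of the whole thread.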
The future of kdb+ is in the toilet.
We’ve maintained a financial exchange w/ margining for 8 years with it, and I guarantee you that everyone was more than relieved - customers and employees alike, once we were able to lift and shift the whole thing to Java.
The readability and scalability are abysmal as soon as you move on from a quant-desk scenario (which, everyone agrees, it is more than amazing at; pandas and Dask frames all feel like kindergarten toys in comparison). The disaster recovery options are basically bound to having distributed storage, which is, by the way, "too slow" for any real KDB application, given the whole KDB concept marries storage and compute in a single thread. And use cases for historical data, such as those mentioned in the article, become awful very quickly: one kdb process handles one request at a time, so you end up having to deploy and maintain hundreds of RDBs keeping the last hour in memory, HDBs with the actual historical data, pauses for hourly write-downs of the data, mirroring trees replicating the data using IPC over TCP from the matching engine down to the RDBs/HDBs, and recon jobs to verify the data is consistent across all the hosts.
Not to mention that such a TCP-IPC distribution tree of single-threaded applications means that any single replica stuck down the line (e.g. a big query, or too slow to restart) will typically lead to a complete lockup, all the way up to the matching engine. So then you need to start writing logic for circuit breakers to trip both the distribution and the querying (nothing out of the box). And at some point you need to start implementing custom sharding mechanisms for both distribution and querying (nothing out of the box, once again!) across the hundreds of processes and dozens of servers (which has implications for the circuit breakers), because replicating the whole KDB dataset across dozens of servers (to scale the requests/sec you can actually serve in a reasonable timeframe) gets absolutely batshit crazy expensive.
And this is the architecture as designed and recommended by the KX consultants that you end up having to hire to “scale” to service nothing but a few billion dollars in daily leveraged trades.
Everything we have is now in Java: all financial/mathematical logic ported over 1:1 with no changes in data schema (neither in-house nor for customers). It uses disruptors and convenient Chronicle/Aeron queues that we can replay anytime (recovery, certifying, troubleshooting, rollback, benchmarks, etc.), plus infinitely scalable and sharded S3/Trino/ScyllaDB for historical data. Performance is orders of magnitude up (despite the thousands of hours micro-optimizing the KDB stack + the millions in KX consultants, and without any real Java optimizations), incidents became essentially non-existent overnight, and the payroll + infra bills also got divided by a very meaningful factor :]
1. it was fast by a huge margin for its time
2. the reason for its speed is the language behind it
3. it uses an esoteric language and still attains success
4. the core engine is implemented using surprisingly few lines of code
5. the core has been written and maintained by one person
All of these are things I've heard so I can't claim it's 100% true but I'm sure it's a combination of some of these.
I feel like APL and all its relatives had long ago gained legendary status. So the legend lives on - maybe longer than it should.
Don't get me wrong. It's still amazing!
> And this is the architecture as designed and recommended by the KX consultants that you end up having to hire to “scale”
I think this hits on one of the major shortcomings of how FD/Kx have managed the technology going back 15+ years, IMHO. Historically it’s the consultants that brought in a lot of income, with each one building ad-hoc solutions for their clients and solving much more complicated enterprise-scale integration and resilience challenges. FD/Kx failed to identify the massive opportunity here, which was to truly invest in R&D and develop a set of common IP, based on robust architectures, libraries and solutions around the core kdb+ product, that would have been vastly more valuable and appealing to more customers. This could have led to a path where open sourcing kdb+ made sense, if they had a suite of valuable, complementary functionality that they could sell. But instead, they parked their consultants for countless billable hours at their biggest paying customers’ sites and helped them build custom infra around kdb+, reinventing wheels over and over again.
They were in a unique position for decades, with a front row seat to the pain points and challenges of top financial institutions, and somehow never produced a product that came close to the value and utility of kdb+, even though clearly it was only ever going to be a part of a larger software solution.
In fairness they produced the delta suite, but its focus and feature set seemed to be constantly in flux and underwhelming, trying to bury and hide kdb+ behind frustratingly pointless UI layers. The more recent attempts with Kx.ai I’m less familiar with, but seem to be a desperate marketing attempt to latch onto the next tech wave.
They have had some very talented technical staff over the years, including many of their consultants. I just think that if the leadership had embraced the core technology and understood the opportunity to build a valuable ecosystem, with a goal towards FOSS, things could look very different. All hindsight of course :)
Maybe it’s not too late to try that…
> ... and without any Java optimizations really ...
Come on, be honest! All of the core tech needs to be implemented in highly optimized GC-free Java. And you need to hire senior Java consultants who are highly specialized and do that for 10+ years and they also cost millions. I happen to know that BitMEX (located in Asia) has such consultants working from the EU. So, it's that easy to hire them!
For benchmarks, I would check out STAC M3... kdb+ holds 17 world records there and that is something we’re proud of. The ClickBench benchmarks cited in the article, however, aren’t designed for time series databases, and kdb+ isn’t included (probably for that reason). I don’t think it’s relevant here. We also think that speed, and performance in general, is still important to our customers, as they continue to affirm.
As far as accessibility is concerned, I’d like to address it in multiple parts:
1) We are invested in creating cloud-native features that are more appealing for smaller firms
2) q is the best language out there (in our opinion), but we also offer a path for Python (including Polars) and SQL developers, which is essential to expanding the kdb+ userbase to the maximum extent. Our entire set of Fusion interfaces was built to enable more interoperability. We also don’t mandate language lock-in... there is nothing preventing other languages from being used with kdb+.
3) Pricing—this comes up a lot. We already offer a free edition of kdb+ for non-commercial use that is very popular. We recognize there’s more we can do in this area (an opinion expressed by KX leadership too) so new pricing models are actively being evaluated.
4) Our latest release of kdb+ 4.1 included a renewed focus on ease of installation and use, and a new documentation hub is being launched this year to further enhance the developer experience.
5) Our Community is growing rapidly – with now over 6000 members and 10 courses available in KX Academy. We have more and more developers networking to help others learn kdb+ every day with a month-over-month net new increase of members for the past 30 months. We’ve recently launched a Slack channel and developer advocacy program too.
There’s a lot of criticism about kdb+ (and KX) in this article, but a lot of the things devs love the most about kdb+ have been left out. This includes efficiency/compactness, expressiveness of q, vertical integration, and speedy development workflow. Sure, if you want to combine 3-5 tools to do what kdb+ does you can go that route, but we feel we offer a vastly superior experience with performance at scale. A quality that extends to ALL our products, including Delta & KDB.AI, since they are all built on kdb+.
Note: I reached out to the author to discuss, but he declined to talk to us. We posted a response on his blog too, but he never published the comment. It's been a pretty closed off situation for us, so leaving this here.
Alternatives (which are open source) to KDB+ are split into two categories:
New Database Technologies (tick data store & ASOF JOIN): ClickHouse & QuestDB
Local Quant Analysis: Python – with DuckDB & Polars
Some personal thoughts:
Q is very expressive, and impressive performance can be extracted from kdb+, but the drawbacks are proprietary formats, vendor lock-in, licensing cost, a proprietary language, and reliance on external consultants to make the system run adequately, all of which can increase operational costs.
I'm personally excited to see the open-source alternative stack emerging. Open Source time-series databases and tools like duckdb/polars for data science are a good combination. Storing everything in open formats like Parquet and leveraging high-performance frameworks like Arrow is probably where things are heading.
Seeing some disruption in this industry specifically is interesting; I think it will be beneficial, particularly for developers.
NB: disclosing that I'm from QuestDB, to put these thoughts in perspective.