I would encourage Redis users to use Envoy Proxy as the Redis cluster client (i.e., use a vanilla client and let Envoy handle the cluster). You get all the HA and usefulness of Redis Cluster with far less of the headache. I'd also strongly encourage people to check out ElastiCache, which is really good.
And to your point about client libraries: for any shared behavior that needs to be local to the app (e.g., session handling, permissions, etc.), it's more appropriate to spin up a sidecar you can talk to over sockets than to try to build client libraries in each and every language. Your client libraries will differ in behavior, and it is an incredible pain keeping them all up to date, patching every app, etc.
Envoy for S2S and traffic mesh + sidecars for shared behavior is better than building client smarts.
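To make the sidecar idea concrete, here's a minimal, hedged sketch (not from the comment above): the app sends requests over a local socket to a sidecar process that centralizes shared logic, instead of reimplementing that logic in a client library per language. The permission check and the JSON-lines framing are illustrative assumptions; a `socketpair` stands in for the Unix domain socket a real sidecar would listen on.

```python
import json
import socket
import threading

def sidecar(conn: socket.socket) -> None:
    """Toy sidecar: answers newline-delimited JSON requests, standing in
    for shared behavior (sessions, permissions, ...) that would otherwise
    be duplicated across per-language client libraries."""
    f = conn.makefile("rw")
    for line in f:
        req = json.loads(line)
        # Hypothetical permission rule, centralized in one place.
        resp = {"allowed": req.get("user") == "admin"}
        f.write(json.dumps(resp) + "\n")
        f.flush()

# socketpair simulates the local socket between app and sidecar.
app_side, sidecar_side = socket.socketpair()
threading.Thread(target=sidecar, args=(sidecar_side,), daemon=True).start()

f = app_side.makefile("rw")
f.write(json.dumps({"user": "admin", "action": "read"}) + "\n")
f.flush()
print(json.loads(f.readline()))  # {'allowed': True}
```

Updating the rule then means redeploying one sidecar image, not patching N client libraries in N languages.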
https://github.com/grisha/thredis/blob/master/README-THREDIS
Then I added SQLite to it, more details here:
This was all done mostly for fun, though I did use it in an actual project for a while.
- https://github.com/Tencent/Tendis
- https://github.com/Netflix/dynomite
On a separate note, was FLASH (as in "flash memory") supposed to be an acronym? I've never seen people treat it as an acronym before.
I recommend KeyDB in almost every case where Redis is used.
We have yet to evaluate Redis 6.
Even better, you might be able to avoid having a cluster at all. For many, that's the biggest win with KeyDB.
While I agree with your point, this concern is easily addressed with connection pools.
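For readers unfamiliar with the pattern: a connection pool creates a fixed set of connections up front and hands them out to callers, so many application threads don't translate into many live server connections. A minimal, generic sketch (a stand-in `object()` plays the role of a real TCP connection to Redis):

```python
import queue

class ConnectionPool:
    """Minimal illustration of a connection pool: a bounded queue of
    pre-created connections shared across threads."""
    def __init__(self, factory, size=4):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(factory())

    def acquire(self, timeout=1.0):
        # Blocks if all connections are checked out.
        return self._pool.get(timeout=timeout)

    def release(self, conn):
        self._pool.put(conn)

# Stand-in factory; a real pool would open sockets to the server.
pool = ConnectionPool(factory=lambda: object(), size=2)
c1 = pool.acquire()
c2 = pool.acquire()
pool.release(c1)
c3 = pool.acquire()   # reuses the connection c1 returned
print(c3 is c1)       # True
```

Client libraries such as redis-py ship a pool like this built in, which is why per-request connection overhead is rarely the deciding factor.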
It's basically the multi-threading vs. multiprocessing debate that also exists in Python. With multiprocessing you have more overhead, but the client itself is simpler.
It's kind of a weird in-between where you need more scalability than a single Redis instance, but not so much that you want to go to a cluster.
Here is an analysis of the last 30 days of activity in OpenSearch
https://public-001.gitsense.com/insights/github/repos?r=gith...
If you switch to the impact view, you can see it's pretty much one guy doing all the work right now. The impact view also shows 1 frequent, 1 occasional and 13 seldom contributors, so I'm guessing the number of people working on OpenSearch is quite small.
Note: it is also quite possible that a lot of the work is being done behind the scenes, so looking at the OpenSearch repo may not tell you the whole story. And if you search for OpenSearch on Amazon's job board, you'll find they are hiring, so I guess we'll have to wait.
Note: do not install GitSense, as the Docker image has an out-of-date license that I'll need to update when I get the time.
If it provides the same consistency, is the threading something like:
sock_read();
lock(datastructures);
set x=3;
unlock(datastructures);
sock_write();
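That pseudocode pattern (read a command, take a global lock around the shared data structures, apply the write, reply) can be sketched in Python as a hedged illustration; this is not KeyDB's actual code, just the locking discipline the pseudocode describes:

```python
import threading

data = {}                      # shared keyspace
data_lock = threading.Lock()   # one global lock, as in the pseudocode

def handle_command(key, value):
    # sock_read() would happen here in a real server
    with data_lock:            # lock(datastructures)
        data[key] = value      # set x=3
    # lock released on exiting the with-block, then sock_write()

threads = [threading.Thread(target=handle_command, args=(f"k{i}", i))
           for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(data))  # 8
```

The appeal of the design is that network I/O and protocol parsing run concurrently on many threads, while only the short critical section that mutates the keyspace is serialized.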
In the Enterprise codebase we can take snapshots, which lets us do reads without the lock, but it's a bit of work to enable per command, so it only applies to KEYS and SCAN at the moment.
I feel like this and the general tone of the article are needlessly antagonistic toward Redis. KeyDB is building their entire business off of it after all.
There may very well be a need for multi-threaded Redis, but Redis as it stands today is an amazing project and there's something to keeping it simple along the lines of the project philosophy.
It’s not my intention to be antagonistic. I’ve had a lot of projects over the years that went nowhere and a part of me is sad that the one with the most traction is a fork.
"If they won’t then we will." sounds harsh imho.
Or in short, where is KeyDB headed, longer term?
The long term goal of KeyDB is to let you balance your dataset across memory and disk in one database. In the future I think caches will just be a feature of a more full featured database and that's where we're heading with KeyDB.
Even persistence isn't really required. A pure analytics view of another database is still a database by itself, but it doesn't need to actually persist anything. It seems like querying is more important to the concept of a database rather than actual storage.
Some people use us for Active Replication or some of our other custom commands rather than just the multithreading.
But has anyone tried a clean-room implementation of Redis in Rust that speaks the same wire protocol? You would get zero-cost multi-threading, memory safety, etc., and it would be a drop-in replacement.
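"Speaks the same wire protocol" means implementing RESP, Redis's serialization protocol, where commands are arrays of bulk strings and replies are typed by their first byte. A minimal, hedged sketch of encoding a command and decoding a few simple reply types (enough to show the shape, far from a complete implementation):

```python
def encode_command(*args):
    """Encode a command as a RESP array of bulk strings, e.g. SET x 3."""
    out = [f"*{len(args)}\r\n".encode()]
    for a in args:
        b = a.encode() if isinstance(a, str) else a
        out.append(b"$%d\r\n%s\r\n" % (len(b), b))
    return b"".join(out)

def decode_reply(data):
    """Decode a few simple RESP reply types; a real client handles more."""
    head, _, rest = data.partition(b"\r\n")
    if head.startswith(b"+"):          # simple string, e.g. +OK
        return head[1:].decode()
    if head.startswith(b":"):          # integer
        return int(head[1:])
    if head.startswith(b"$"):          # bulk string ($-1 is a nil reply)
        n = int(head[1:])
        return None if n == -1 else rest[:n].decode()
    raise ValueError("unsupported reply type")

print(encode_command("SET", "x", "3"))
# b'*3\r\n$3\r\nSET\r\n$1\r\nx\r\n$1\r\n3\r\n'
print(decode_reply(b"+OK\r\n"))  # OK
```

The protocol itself is simple and well documented, which is exactly why drop-in reimplementations and proxies are feasible; the hard part of a clean-room Redis is the data structures and the semantics, not the framing.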
I think you mean zero-cost abstractions, which aren't usually zero cost, just zero additional cost over doing it yourself.
There's no such thing as zero cost multi threading. Just tradeoffs. Rust actually doesn't help with performance here (it gets in the way often) but it definitely does help with correctness - which is truly hard with multi threaded programs.
You kinda have to look at how things really work underneath before you can apply buzzwords to a database.
If you go to the landing page of the above and scroll down to the bottom, there is a TCP-bypass solution graphed, using Solarflare Open Onload, and it is capable of running several times as fast as the Linux kernel TCP stack. I didn't test Redis with Open Onload, but I'm pretty sure you'll get similar results, since TCP is a major performance bottleneck in Redis as well.
The next approach you could take is something like Glommio, with a thread-per-core design for Redis. I think that approach has a lot of potential, but the design becomes more complex (you now need something like distributed transactions for cross-core Redis transactions and multi-gets).
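To see why thread-per-core complicates multi-key commands, here's a hedged sketch (the hash function and shard count are illustrative assumptions): keys are hashed to per-core shards, so a single SET touches one shard, but an MGET can span several, and in a real thread-per-core server each cross-shard lookup becomes a message to another core that must be coordinated for atomicity.

```python
NUM_SHARDS = 4  # one shard per core in a thread-per-core design
shards = [dict() for _ in range(NUM_SHARDS)]

def shard_for(key: str) -> int:
    # Toy stable hash; real designs use e.g. CRC16 slots like Redis Cluster.
    return sum(key.encode()) % NUM_SHARDS

def set_key(key, value):
    # A single-key write touches exactly one shard: no coordination needed.
    shards[shard_for(key)][key] = value

def multi_get(keys):
    # A multi-get can span shards; in a thread-per-core server each of
    # these lookups would be a cross-core message, and making the whole
    # MGET appear atomic requires transaction-like coordination.
    return {k: shards[shard_for(k)].get(k) for k in keys}

set_key("alpha", 1)
set_key("beta", 2)
print(multi_get(["alpha", "beta", "gamma"]))
# {'alpha': 1, 'beta': 2, 'gamma': None}
```

Single-key operations scale nearly linearly with cores under this layout; it's the cross-shard operations (transactions, MGET, Lua scripts touching many keys) that bring the distributed-systems problems back onto one box.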