I consider it a licensing nightmare, as the standard Oracle license means you’ll need to license every single vCPU on the cluster.
The express edition used here has some relatively anaemic usage limits which you could brush up against very quickly:
“Oracle Database 18c Express Edition automatically constrains itself to the following resource restrictions: 2 CPU threads, 2 GB of RAM, and 12 GB of user data.”
Still, it’s cool, but I would caution anyone against introducing an Oracle dependency.
Things that Oracle does right: high availability, consistency of data (which is harder than you think, and the tools we tend to use are genuinely crap at this, especially insanely popular ones like MySQL and MongoDB), performance of OLTP workloads, and support*
Your mileage may vary on that last point.
The tl;dr: start with PostgreSQL and evaluate from there.
- No boolean type. Different folks will choose T/F chars, Y/N chars, or 0/1 integers.
- Strings are not nullable. An empty string and a null string are the same thing.
- DATE always includes a time component.
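These quirks tend to surface at the driver boundary, where every codebase ends up hand-rolling the same conversions. A minimal Python sketch of what those helpers look like (the names are mine, not from any real driver or ORM):

```python
from datetime import datetime, date

def bool_to_db(value: bool) -> str:
    """No BOOLEAN column type, so store the conventional Y/N CHAR(1)."""
    return "Y" if value else "N"

def bool_from_db(raw: str) -> bool:
    """Accept whichever convention this column's author happened to pick."""
    return raw in ("Y", "T", "1")

def str_from_db(raw):
    """Oracle stores '' as NULL, so the driver hands back None for empty strings."""
    return "" if raw is None else raw

def day_from_db(raw: datetime) -> date:
    """DATE always carries a time; truncate when you only meant the day
    (the SQL-side equivalent is TRUNC(date_col))."""
    return raw.date()
```

The catch is that every application touching the same schema has to agree on these conventions, which is exactly where the annoying bugs come from.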
By default, you end up with some very annoying bugs. Expect to actually change the way some of your columns work.
I'm just guessing, but a CHAR would be 4 bytes if it's Unicode, and an INTEGER too, right?
So you're telling me there's really no way to store just one or a few bits?
* Boolean: You don't really need it, just use 0/1 or Y/N (or J/N or O/N or whatever NLS equivalent of Y/N you want). You might have to implement a few different mappers for the same data type, so what?
* Strings: Semantically, a null string and an empty string are the same. You should be grateful the database works this way. All other DBs are wrong (you hear this a lot in Oracle land).
* Dates: Oh, just ignore the time if you don't want it. What do you mean, time zones? Add the correct offset in your software, you lazy XXX. BTW, we migrated to a server with another time zone, hope this doesn't impact you?
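The time-zone jab is real: DATE stores no zone, so the offset has to live in application code. A sketch of why that's fragile (the offsets here are illustrative):

```python
from datetime import datetime, timedelta, timezone

# A naive DATE value written by a server running at UTC+2:
naive = datetime(2019, 6, 1, 12, 0, 0)

# Client code has to bake in the server's offset to recover UTC...
server_offset = timezone(timedelta(hours=2))
as_utc = naive.replace(tzinfo=server_offset).astimezone(timezone.utc)
assert as_utc.hour == 10

# ...so when "we migrated to a server with another time zone",
# every client still holding the old constant is silently wrong.
```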
Now, there are things that Oracle is good at, mainly being trustworthy and scalable with your data, especially in clustered environments. But Postgres is slowly but surely eating Oracle's niche here, and MySQL has demonstrated that customers will tolerate anything for lower costs. So their main value proposition today is that they are Enterprise Class, and all the big boys run Oracle, so either join the club or get laughed at by the big boys.
I was horrified how bad it was compared to Sybase back then.
RAC requires shared block storage and an L2 private network. Cloud SDNs and storage require gross hacks with horrible performance consequences. Even “modern” virtualization is painful, but can work.
Oracle isn’t going to fix all the bare-metal spaghetti assumptions in their clustered DB stack, and has been pretty clear about that.
This could be useful for throwaway dev/test environments. Or maybe for non-critical, non-performance-intensive apps in “maintenance mode” that folks want to lift-and-shift. But they’ll probably spend more time on that than on fixing, replacing, or retiring the app. And all of these will require smaller data sets than what I see with big-company legacy systems.
And typically those systems are using Oracle to access data populated by another system, which makes me die a little inside.
Pretty clearly a play to get big companies into GCP contracts, more than anything real.
It’s to sell to execs who want to do the cool thing without paying to fix the old thing. ahem Thomas Kurian cough.
That said, it is entirely possible to run local storage for applications that provide their own replication. We run an LVM local-storage provisioner for our in-cluster Postgres (orchestrated by Patroni). This gives us the sort of snapshotting and resizing you would find in network storage, while keeping the performance of locally attached NVMe drives.