"The SQLite website (https://www.sqlite.org/) uses SQLite itself, of course,
and as of this writing (2015) it handles about 400K to 500K HTTP requests
per day, about 15-20% of which are dynamic pages touching the database.
Each dynamic page does roughly 200 SQL statements. [...]"
Wait... What? They're saying that their dynamic pages are executing 200 SQL statements per request?
Off-topic: I enjoyed reading this article, which was posted here a while ago: http://www.sqlite.org/testing.html
Medium complexity Drupal 6 site: 200-300 queries
Wordpress site with accretion of plugins over the years: 100+ queries
Drupal 7 site: 100+ queries
Those totals might be inflated due to having administrative menus active.
Many of these are key/value lookups which can be accelerated by serving from Memcached or Redis, although I imagine SQLite dishes them out pretty quickly as well.
I am curious to know how many people here solely use SQLite to power the back-end of their web application(s), especially when the page states, "SQLite does not compete with client/server databases."
(Or is the page referring to content-management-system-type websites?)
If, however, you cause writes when people open pages or view content (anything from hit counters to tracking behaviour), then SQLite suddenly starts to scale pretty badly.
The same goes for how your processing works. Multiple processes that may all cause writes at some point? That's bad. Threads inside a single process? That works.
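One common way to soften the write-contention problem described above is WAL mode plus a busy timeout, so readers don't block on the single writer and writers retry instead of failing immediately. A minimal sketch in Python (the database path and the hit-counter table are made up for illustration):

```python
import sqlite3, tempfile, os

# Hypothetical database file; each thread/process would open its own connection.
db_path = os.path.join(tempfile.mkdtemp(), "app.db")

conn = sqlite3.connect(db_path, timeout=5.0)   # wait up to 5s on a locked db
conn.execute("PRAGMA journal_mode=WAL")        # readers no longer block on the writer
conn.execute("PRAGMA busy_timeout=5000")       # retry instead of erroring on SQLITE_BUSY

# Illustrative hit-counter write, the kind of per-pageview write discussed above.
conn.execute("CREATE TABLE IF NOT EXISTS hits (page TEXT, n INTEGER)")
conn.execute("INSERT INTO hits (page, n) VALUES (?, 1)", ("/index",))
conn.commit()
print(conn.execute("SELECT page, n FROM hits").fetchall())  # [('/index', 1)]
```

WAL still allows only one writer at a time, so this mitigates rather than removes the multi-process write problem.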
However, if you're running your website on Apache, on a single webserver, then there's really only ONE client for your database, in which case SQLite works great, even if there's a heck of a lot of load.
SQLite is fundamentally a C library talking to a binary file format, so it's orders of magnitude faster than making a network connection to a server and then issuing SQL.
I've run medium-sized websites on an MVC framework talking to SQLite MANY times, and it works great.
Well, we've tried distributing applications that needed to share a DB backend. We tried an MS Access backend first, but it stopped working after about 5 people were using it. Then we migrated to SQLite; it handled up to about 50 people well, but then its over-restrictive locking became a problem. Luckily, by that time we had got hold of a Postgres server.
Low- to medium-traffic websites don't usually have much more than 10 working threads, so yes, I've used a loosely similar setup, and it worked. It'll depend on how much time is spent on DB access vs. local processing, and how fast the DB is accessed by those servers, so YMMV.
There's a LOT you can do with SQLite... not to mention that with simple read caching and reduced writes you can squeeze a lot of performance out of it. For highly interactive web applications, I wouldn't expect a single system backed by SQLite to handle more than 50 users, as you mention... with an SSD, you may get a few more.
Django makes this really easy (it's the default). It's a shame other projects aren't on flexible ORMs. I'd love to be able to deploy WordPress and Drupal sites without dicking around creating databases.
So far the main issue I've been having is that I have a lot of complex joins, and it doesn't seem to want to use appropriate indexes all the time. If I'm joining 2 tables on an indexed field, it's OK, but as soon as I add a 3rd or 4th table the performance falls apart. So my current workaround is to join 2 tables into a temporary table, add an index, then join that temp table to the 3rd one.
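The temp-table workaround described above can be sketched like this (the tables `a`, `b`, `c` and their columns are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (id INTEGER PRIMARY KEY, x TEXT);
    CREATE TABLE b (id INTEGER PRIMARY KEY, a_id INTEGER, y TEXT);
    CREATE TABLE c (id INTEGER PRIMARY KEY, b_id INTEGER, z TEXT);
    CREATE INDEX idx_b_a ON b(a_id);
    CREATE INDEX idx_c_b ON c(b_id);
    INSERT INTO a VALUES (1, 'x1');
    INSERT INTO b VALUES (1, 1, 'y1');
    INSERT INTO c VALUES (1, 1, 'z1');

    -- Step 1: materialise the two-table join into a temp table.
    CREATE TEMP TABLE ab AS
        SELECT a.id AS a_id, a.x, b.id AS b_id, b.y
        FROM a JOIN b ON b.a_id = a.id;
    -- Step 2: index it by hand (ab is temp, so the index lands in temp too).
    CREATE INDEX idx_ab ON ab(b_id);
""")
# Step 3: join the indexed temp table to the third table.
rows = conn.execute(
    "SELECT ab.x, ab.y, c.z FROM ab JOIN c ON c.b_id = ab.b_id"
).fetchall()
print(rows)  # [('x1', 'y1', 'z1')]
```

Running `EXPLAIN QUERY PLAN` on the original multi-table join is worth trying first; sometimes `ANALYZE` alone is enough to get the planner to pick the right indexes.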
Now I really hope that Spartan can implement WebSQL as well...
I really enjoyed the simplicity of it all.
Sounds unlikely given that the standard was superseded by IndexedDB.
http://www.html5rocks.com/en/tutorials/webdatabase/websql-in...
Most browsers just use SQLite to back IndexedDB, though.
https://www.mapbox.com/guides/an-open-platform/#storing-tile...
You can have SQLite databases that are gigabytes in size and perform great.
Am I wrong in thinking it's just plain old row oriented storage and not something more aggregate oriented?
It is super easy to access from Julia, R, Python, etc., so instead of importing a CSV and manipulating the data, I find it a lot easier to connect to the SQLite database and use SQL for a lot of the joining and manipulating.
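In Python this is just the stdlib `sqlite3` module; here's a sketch of doing the join and aggregation in SQL rather than in CSV-munging code (the `sensors`/`measurements` schema is invented for the example, and an in-memory database stands in for an existing `.db` file):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for sqlite3.connect("analysis.db")
conn.executescript("""
    CREATE TABLE sensors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE measurements (sensor_id INTEGER, value REAL);
    INSERT INTO sensors VALUES (1, 'north'), (2, 'south');
    INSERT INTO measurements VALUES (1, 2.0), (1, 4.0), (2, 10.0);
""")
# The join + aggregate happens inside SQLite, not in Python loops.
rows = conn.execute("""
    SELECT s.name, AVG(m.value)
    FROM measurements m JOIN sensors s ON s.id = m.sensor_id
    GROUP BY s.name ORDER BY s.name
""").fetchall()
print(rows)  # [('north', 3.0), ('south', 10.0)]
```

The same database file opens unchanged from R (`RSQLite`) or Julia (`SQLite.jl`), which is what makes it a handy interchange format.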
Also, the same list of uses can be applied to NeDB as a lightweight alternative to MongoDB.
I've used SQLite over NFS for replication. It worked fine.
Though again, if you want true streamed (i.e. per query) replication, features like hot standby, etc... that's not part of the model and you're better served by a server-model RDBMS.
https://www.sqlite.org/backup.html
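The online backup API linked above copies a live database without holding a lock for the whole duration; Python (3.7+) exposes it as `Connection.backup()`. A minimal sketch, using in-memory databases where a real setup would use files:

```python
import sqlite3

src = sqlite3.connect(":memory:")  # the live database
src.execute("CREATE TABLE t (v TEXT)")
src.execute("INSERT INTO t VALUES ('hello')")
src.commit()

dst = sqlite3.connect(":memory:")  # in practice, a file on the standby machine
src.backup(dst)                    # page-by-page online copy of the whole db

print(dst.execute("SELECT v FROM t").fetchall())  # [('hello',)]
```

This gives you periodic snapshots, not the per-query streaming replication discussed above; for that, a server-model RDBMS is still the better fit.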
The way I've used it (for a 300-400 MB db loaded in-memory in a 64-bit app) was to load the .db into memory, detect whether there were changes outside (via notification, or the .db file's timestamp being updated by a different app using the db), then load a second copy of the .db, compare, build diffs, and report them through an API of sorts.
It was used for live update of various level editing data for a game level editor. Possibly not the best fit (too much hierarchical data), but worked nonetheless.
One thing that I miss dearly from SQLite is some form of Postgres arrays, but I'm so glad that recursive WITH was added recently to SQLite, which allows hierarchical data (child/parent rows storing each other's ids) to be fetched without extra bookkeeping.