Asking because the answer changes the architecture significantly. If you're targeting live in-page data — extracting objects from the DOM as you browse, filtering them reactively — you may not need storage at all.
A Proxy-based observation layer gives you reactive queries without allocating anything new: the objects already exist in the tab's heap, you're just watching them mutate. No GC pressure, no persistence headaches, no query planner needed. That covers most of what you described: "items where price < 50 updates as you browse" is an event subscription with pattern matching, not a database problem.
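To make the Proxy idea concrete, here is a minimal sketch of that kind of observation layer. Everything here (`observe`, `Item`, the predicate shape) is illustrative naming, not an existing API: the existing object is wrapped in a Proxy whose `set` trap re-evaluates a predicate after each write and notifies a subscriber on a match.

```typescript
// Hypothetical sketch: wrap an existing in-page object in a Proxy so a
// predicate-based subscription fires whenever a mutation makes it match.
type Item = { name: string; price: number };

function observe<T extends object>(
  target: T,
  predicate: (t: T) => boolean,
  onMatch: (t: T) => void
): T {
  return new Proxy(target, {
    set(obj, prop, value) {
      (obj as any)[prop] = value;       // write through to the real object
      if (predicate(obj)) onMatch(obj); // re-evaluate after every mutation
      return true;
    },
  });
}

const matches: Item[] = [];
const item = observe<Item>(
  { name: "widget", price: 80 },
  (i) => i.price < 50,              // "items where price < 50"
  (i) => matches.push({ ...i })     // subscriber: record a snapshot
);

item.price = 40; // mutation crosses the threshold; subscriber fires
```

Note that nothing is copied or stored beyond what the subscriber itself chooses to keep; the Proxy is just a watch point over the existing object.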
The cases where you actually need storage — and therefore need to think about heap budgets, GC, serialization, query planning — are narrower:
* Cross-session persistence (you want the data after the tab closes)
* Cross-tab aggregation (comparing prices across multiple open tabs simultaneously)
* Queries over historical data (not just what's on screen now, but what you saw across 20 pages of browsing)
Those are real storage problems.
But they're also the cases where you're competing with IndexedDB, OPFS, and SQLite WASM — and "I hate SQL" stops being enough of a reason to rebuild from scratch. What's the actual workflow you're trying to support?
I have a strictly personal application at: https://github.com/prettydiff/aphorio
In that project I have various data artifacts stored in memory that I am constantly having to query in various ways:
* sockets per server
* servers and their sockets
* ports in use and by what
* docker containers and their active state
* various hardware and OS data lists
Currently all this data is just objects attached to one big object, all defined by TypeScript interfaces. Storing the information works fine, but getting the information I need for a particular task, in that task's format, requires a variety of different logic and object definitions in the form of internal services.
Instead it would be nice to have information stores that contain all I need the way a SQL database does with tables. Except I hate the SQL language, and it's not as fast as you would think. Last I saw, the fastest SQL-based database is SQLite, and it's really only about 3x faster than using primitive read/write streams to the file system. I can do much faster by not dicking around with language constructs.
My proposal is to store the database in something that vaguely resembles a database table but is really just JavaScript objects/arrays in memory as part of the application's current runtime, and that can return artifacts in either object or array format. Query statements would be expressed as JavaScript objects. I could have a table for server data, socket data, port data, and each record links back to records in other tables as necessary, kind of like SQL foreign keys. So in short: stores, functions to do all the work, and a storage format that can return either objects or arrays and take both objects and arrays as queries.
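A minimal sketch of what "query statements as JavaScript objects" could look like, under my own assumptions: each "table" is a Map keyed by primary key, and a query is a partial record whose listed fields must all match. The names (`ServerRecord`, `select`) are illustrative, not from the project.

```typescript
// Hypothetical in-memory "table": records in a Map keyed by a primary key,
// queried with a plain object instead of SQL.
interface ServerRecord {
  id: string;
  host: string;
  port: number;
}

const servers = new Map<string, ServerRecord>();
servers.set("a", { id: "a", host: "localhost", port: 8080 });
servers.set("b", { id: "b", host: "localhost", port: 9090 });

// A query is a Partial<T>: every listed field must match exactly.
function select<T>(table: Map<string, T>, query: Partial<T>): T[] {
  const out: T[] = [];
  for (const record of Array.from(table.values())) {
    const keys = Object.keys(query) as (keyof T)[];
    if (keys.every((k) => record[k] === query[k])) out.push(record);
  }
  return out;
}

const hits = select(servers, { port: 9090 }); // array result, no SQL string
```

The same `select` works over any record type, and the Map itself already provides the keyed (object-style) access path alongside the array-style results.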
The reason I want to store the data as both objects and arrays is a performance hack discovered by Paul Heckel in 1978. The stores would actually be a collection of objects against a unique primary key that can be referenced as though it were an array.
That said, you may not need to leave your stack at all. V8's native Map is already a key-value store — O(1) reads, minimal overhead, typed in TypeScript. Your "tables" are just Maps and cross-referencing is composite string keys: ``sockets.set(`${serverId}:${socketId}`, socketData)``. No library, no dependency, no SQL. This covers your use case as described.
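As a sketch of the composite-key approach (the record shape and helper names here are mine, not from the project): one Map per "table", cross-references encoded in `${serverId}:${socketId}` string keys, and a prefix scan to answer "all sockets for one server".

```typescript
// Hypothetical composite-key table: one Map, keys encode the relationship.
interface SocketData {
  socketId: string;
  serverId: string;
  port: number;
}

const sockets = new Map<string, SocketData>();

function addSocket(s: SocketData): void {
  sockets.set(`${s.serverId}:${s.socketId}`, s);
}

addSocket({ socketId: "s1", serverId: "web", port: 8080 });
addSocket({ socketId: "s2", serverId: "web", port: 8081 });
addSocket({ socketId: "s3", serverId: "db", port: 5432 });

// "Sockets per server" becomes a prefix scan over the composite keys.
function socketsFor(serverId: string): SocketData[] {
  const prefix = `${serverId}:`;
  return Array.from(sockets.entries())
    .filter(([key]) => key.startsWith(prefix))
    .map(([, value]) => value);
}

const webSockets = socketsFor("web");
```

Because Map preserves insertion order, `Array.from(sockets.values())` also gives the array-style view without maintaining a second copy of the data.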
If you want ACID transactions and persistence without SQL, look at lmdb-js — a Node.js binding for LMDB, one of the fastest embedded KV stores available, with zero-copy reads and over a decade of production use. Your tables become named databases, your records are typed values, your cross-references are composite keys. Same mental model you're building, with years of correctness guarantees underneath.
What's the actual reason for building from scratch rather than using native Map for the in-memory case?