I could store it in a table with key and value columns, but since that data is always used together as one unit, splitting it up wouldn't buy me anything; it would just mean more rows to access.
Maybe it's not a good design, but it works well for me and makes my life easier.
Hyperbole aside, the best option is often somewhere in between. I find that a relational database with real columns for primary keys, relations, and anything used in WHERE clauses on the normal application path, plus a JSON blob for everything else that's just attributes of the current row/object/document, makes for a very flexible and successful DB schema. You get all the strong consistency benefits of a traditional relational schema, plus the flexibility to add and modify features without touching the schema (as long as they don't require relation changes), and the ability to represent complex structures while still letting ad-hoc and admin-path queries peek into them where necessary.
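As a minimal sketch of that hybrid layout: real columns for whatever the application filters and joins on, one blob column for the rest. The table and attribute names here are made up for illustration, and SQLite stands in for PostgreSQL (where you'd declare the blob as `jsonb` instead of `TEXT`).

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE users (
        id    INTEGER PRIMARY KEY,
        email TEXT NOT NULL UNIQUE,   -- real column: used in WHERE / joins
        attrs TEXT NOT NULL           -- JSON blob: everything else
    )
""")
conn.execute(
    "INSERT INTO users (email, attrs) VALUES (?, ?)",
    ("alice@example.com", json.dumps({"ui_language": "de", "theme": "dark"})),
)

# The normal application path filters on the real column...
row = conn.execute(
    "SELECT attrs FROM users WHERE email = ?", ("alice@example.com",)
).fetchone()
# ...and the blob comes back as one unit, ready to deserialize.
attrs = json.loads(row[0])
print(attrs["ui_language"])  # → de
```

The point is that the relational engine only ever sees the columns it needs for consistency and lookup; the blob is opaque on the hot path and only gets opened up by the application (or by ad-hoc queries, as discussed below).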
In fact, most "fully normalized" databases end up reimplementing a key/value store or twelve in there anyway (e.g. config settings, user attributes, and the like). Might as well just use JSON at that point.
So I have to agree with GP in wondering why JSON is so important to people, and why it is even portrayed as a relief or saviour for developers. In my experience, JSON in a relational DB is always a sign of organizational failure: developers using the wrong tool for the job, or not knowing what they want or are doing.
Fully normalized databases are a nice academic idea, but the supposed benefits of going all the way don't materialize in the real world. That kind of approach is just old-school cargo cult - just like full NoSQL is new-school nonsense. Good developers know that the answer isn't to use whatever the fad was when they were in school, be it fully relational databases or NoSQL or anything else, but rather to look at the available technologies and take the most appropriate bits and pieces in order to make a project successful.
After all, if JSON were nonsense, why would a serious relational database like PostgreSQL be adding full support for it? They know it has good use cases.
I know it has good use cases, so I use it, along with proper primary and foreign keys and a properly relational base data model. Yes, all my primary objects are tables with proper relations and foreign keys (and a few constraints for critical parts; not for everything, because 100% database-side consistency is also an impossible pipe dream, as not every business rule can be encoded sanely in SQL). Just don't expect me to add a user_attribute table to build a poor man's KVS just because people in the 90s thought that was the way to go and databases didn't support anything better. I'll have an attributes JSONB column instead.
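To make the comparison concrete, here is a small sketch (attribute names invented for the example) of what the two approaches store. The entity-attribute-value style `user_attribute` table holds one (user_id, key, value) row per attribute, so reassembling a user means fetching and folding N rows; the JSONB-column approach keeps the same data as a single value on the user's own row.

```python
import json

# EAV-style user_attribute rows: (user_id, key, value), one per attribute.
eav_rows = [
    (1, "ui_language", "de"),
    (1, "theme", "dark"),
    (1, "timezone", "Europe/Berlin"),
]
# Reading the user's attributes means folding N rows back into one object.
attrs_from_eav = {k: v for (_uid, k, v) in eav_rows}

# The attributes-column approach stores the same thing as one value:
# one read, one write, and nesting comes for free.
attrs_column = json.dumps(
    {"ui_language": "de", "theme": "dark", "timezone": "Europe/Berlin"}
)
assert json.loads(attrs_column) == attrs_from_eav
```

Note also that the EAV table forces every value into one type (usually text), while the JSON document can hold numbers, booleans, arrays, and nested objects directly.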
And yes, JSON is just a trivial data serialization format that happens to have become the de facto standard. It has an intuitive model and the concept isn't new. It just happens to be what became popular, and there is no reason not to use it. Keep in mind that PostgreSQL internally stores it in a more compact binary form anyway, and if clients for some programming languages don't yet support skipping the ASCII representation step, that is an obvious feature that could be added later. At that point it ceases to be JSON and just becomes a generic data structure format following the JSON rules.
What's the point of putting, say, every single user management field into columns in a "users" table? The regular application is never going to have to relate users by their CSRF token, or their UI language, or any of the other dozens of incidental details associated with a user, some visible, some implementation details.
What matters are things like the username, email, name - things the app needs to actually run relational operations on.
If you look at any real application, pretty much everyone has given up on trying to keep everything relational. That would be a massive pain in the ass and you'd end up with hundreds of columns in your users table. You'll find some form of key/value store attached to the user instead. And if you're going to do that, you might as well use a json field.
And if you do use a JSON field with a database engine that supports it well, like PostgreSQL, you'll find that it can be indexed if you need that anyway, and that querying it is easier than joining a pile of tables implementing a KVS or two. Yes, I might want a report on users' UI languages some day, but that doesn't mean the language has to be a column when Postgres is perfectly happy peeking into jsonb. Nor do I need an index that just adds overhead to common update operations that don't use it, when I can run reports on a DB secondary and not care about performance.
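The kind of ad-hoc report described above might look like the sketch below. SQLite's `json_extract` stands in for the Postgres expression `attrs->>'ui_language'`; the `attrs` field name is again invented for the example.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, attrs TEXT)")
conn.executemany(
    "INSERT INTO users (attrs) VALUES (?)",
    [(json.dumps({"ui_language": lang}),) for lang in ["de", "en", "en", "fr"]],
)

# Ad-hoc report peeking into the blob -- no dedicated column needed.
report = conn.execute("""
    SELECT json_extract(attrs, '$.ui_language') AS lang, COUNT(*)
    FROM users
    GROUP BY lang
    ORDER BY lang
""").fetchall()
print(report)  # → [('de', 1), ('en', 2), ('fr', 1)]
```

If such a report ever became hot-path, Postgres would also let you build an index on that jsonb expression (or a GIN index over the whole column) without changing how the data is stored.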
I designed an application in this manner and it has turned out exceedingly well for me. We have only had about a dozen schema changes total across the lifetime of the app. One of them involved some JSON querying to refactor a field out of JSON and into its own table (because requirements changed) and that was no problem to run just like any other database update. If we need to move things to columns we will, but starting off with an educated guess of what will need to be a column and dumping everything else into JSON has undoubtedly saved us a lot of complexity and pain.
One of our tables is just a single primary key and a JSON blob. It stores event configuration. Why? Because it's always loaded in its entirety and there is never any reason to run relational ops on it. It's a huge JSON blob with lots of little details, only edited administratively, with nested fields and sub-fields (far more readable than columns, which are a flat namespace), including a sub-document that is actually a structure driving the generation of a pile of HTML forms. If I'd tried to normalize this, I would have ended up with a dozen tables, probably more complexity than the entire rest of the DB, and a huge pile of code to drive it all; every single application change that added an admin knob would have required a database schema change. All for what? Zero benefit.
I don't need JSON per se; I want to store data with a predefined, recursive format which most of the time will just be serialized/deserialized, but occasionally also queried without having any idea ahead of time what the query will be. For all I care, it could be Python pickle, PHP serialize, XML or whatever ungodly format you want, as long as the above stands. (XML actually works in MySQL thanks to ExtractValue and UpdateXML, but alas, I break out in rashes when I touch XML :P)
Let's say you want to display a page of text and images. (Doh.) But you want to give your authors great flexibility and yet nice styling so you give them components: a rolodex, a two-by-two grid, tab groups and so forth (we have 44 such components). This stuff is recursive, obviously. Are you going to normalize this? It's doable but a world of pain is an accurate description for the results. The data you get back from deserializing JSON should map pretty well to the templating engine.
Occasionally you want to query and update things: some analytics, some restructuring, and so on. If it were just a key-value store with a page id and serialized data, the only way to do maintenance would be to deserialize each page and manually dig in. Sure, it's doable, but having the database do it is simply convenient. That's the reason we use database engines, right? At the end of the day, you don't need SQL either; you could iterate a K-V store and manually filter in the application in whatever ways you want -- it's just more convenient to have the engine do it. Same here. The nicest thing is that if someone wants ongoing analytics, you actually can add an index on a particular piece of the blob and go to town.
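A sketch of both halves of that: reaching into a nested page document with a JSON path for one-off maintenance, and then promoting that path to an index when the query becomes ongoing. The page structure and names are invented; SQLite's `json_extract` and expression indexes stand in for Postgres's jsonb operators and expression/GIN indexes.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (id INTEGER PRIMARY KEY, body TEXT)")
page = {
    "title": "Welcome",
    "components": [
        {"type": "grid", "children": [{"type": "tab_group"}]},
    ],
}
conn.execute("INSERT INTO pages (body) VALUES (?)", (json.dumps(page),))

# One-off maintenance/analytics can reach into the nested blob by path...
title = conn.execute(
    "SELECT json_extract(body, '$.title') FROM pages"
).fetchone()[0]

# ...and if a query becomes ongoing, an expression index on that path
# turns it into a normal indexed lookup, no schema change to the data.
conn.execute(
    "CREATE INDEX pages_title ON pages (json_extract(body, '$.title'))"
)
print(title)  # → Welcome
```

Deserializing `body` also hands the templating engine the same recursive component tree the authors edited, which is the serialize/deserialize half of the argument above.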
I know that you, dear developer, would never produce inconsistent data. But lots of other developers do.
It is often the case that you will need to query that inconsistent json data, but either the pain is too great, or the value too low, to normalize that data. Thus, you dump it as is into a json field or database.