Sure, on a site that uses Elasticsearch for its logs I would have no idea where to look. I'd be more at ease with SQL, but first you need to locate the DB, figure out the schema, and get the SQL dialect right.
That said, I'd be far more at ease writing a SQL query to extract analytics from logs than cooking up some regexes and doing complex stuff with awk.
And I find the --since/--until parameters to journalctl far easier than matching dates by regex. Or even the --boot parameter to restrict logs to a specific boot, which would probably be doable with awk but definitely not as trivially.
I think that binary logs give you some compelling features, without taking away any: you can always just dump the logs on stdout and use grep as much as you want. :)
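For reference, the kind of invocations I mean look roughly like this (the dates and the nginx unit are just placeholder examples, not from any real setup):

```shell
# Filter by time range instead of regex-matching timestamps:
journalctl --since "2024-01-01 00:00" --until "2024-01-02 12:00"

# Logs from the current boot only:
journalctl -b
# ...or the boot before that:
journalctl -b -1

# And the escape hatch: dump to stdout and grep as usual.
journalctl --no-pager | grep -i "error"
```

So the familiar text-based workflow still works; the structured flags are just there when date math by regex gets painful.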