I’ve run quite a few clusters on such a configuration, or alternatively on 3 data/master-eligible nodes. It’s a safe configuration unless you manage to overload the elected master. If you’re fighting that issue, you’ll have to go beyond 4 nodes and run a triplet of dedicated master-eligible nodes plus whatever data nodes you need.
I pretty much specifically avoid 4-node clusters. You’d have to either designate 3 of the 4 nodes as master-eligible with a quorum of 2, or make all of them master-eligible with a quorum of 3. Both options tolerate the failure of exactly one node before the cluster becomes unavailable. Any other configuration would either fail immediately on a node loss (quorum of 4) or be unsafe (a quorum of 2 out of 4 master-eligible nodes allows split-brain, since two partitions of 2 nodes each can both elect a master).
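The rule behind all of these numbers is simply that the quorum must be a strict majority of the master-eligible nodes. A minimal sketch of the arithmetic (the function name is just illustrative):

```python
def quorum(master_eligible: int) -> int:
    """Minimum master nodes needed: a strict majority, floor(n/2) + 1."""
    return master_eligible // 2 + 1

# 3 master-eligible nodes -> quorum 2: one node may fail.
# 4 master-eligible nodes -> quorum 3: still only one node may fail,
#   so the fourth master-eligible node buys no extra resilience.
# 5 master-eligible nodes -> quorum 3: two nodes may fail.
for n in (3, 4, 5):
    print(n, quorum(n))
```

This is why even numbers of master-eligible nodes are a poor deal: going from 3 to 4 raises the quorum without raising the number of tolerable failures.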
I’d much rather opt for 4 data/master-eligible nodes plus a dedicated master-eligible node, with a quorum of 3.
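As a sketch, that 4+1 setup could look like the following, assuming a pre-7.x Elasticsearch where the quorum is set by hand via discovery.zen.minimum_master_nodes (7.x and later manage the voting configuration automatically, so the last setting disappears there):

```yaml
# elasticsearch.yml on the 4 data/master-eligible nodes:
node.master: true
node.data: true

# elasticsearch.yml on the dedicated master-eligible node
# (can be a small tiebreaker box, it holds no data):
node.master: true
node.data: false

# On all 5 nodes: quorum = majority of 5 master-eligible nodes
discovery.zen.minimum_master_nodes: 3
```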
You also need to pick the number of replicas suitably: each replica allows for the loss of one random(!) data node while retaining all your data. Note that if losses are not random and you want to safeguard against the loss of a whole rack or availability zone, there are configurations that distribute primaries and replicas suitably (“keep a full copy on either side”).
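The “full copy on either side” pattern is what Elasticsearch calls shard allocation awareness (with forced awareness to stop it from piling both copies into the surviving zone). A sketch, assuming two zones and the node.attr.* syntax of newer releases (the attribute name “zone” and the zone values are just examples):

```yaml
# elasticsearch.yml — tag each node with the zone/rack it lives in
# (zone-b on the nodes on the other side):
node.attr.zone: zone-a

# Allocation awareness: never place a primary and its replica
# in the same zone.
cluster.routing.allocation.awareness.attributes: zone

# Forced awareness: if zone-b is down entirely, do NOT reallocate
# its replicas into zone-a — wait for zone-b to come back instead.
cluster.routing.allocation.awareness.force.zone.values: zone-a,zone-b
```

With one replica per shard (index.number_of_replicas: 1), this gives you exactly one full copy of the data in each zone.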