You appear to suggest that programmers with “rudimentary knowledge of transactions” should prefer lower isolation levels which sacrifice correctness for performance. If anything is “grossly irresponsible” here, it’s that.
Such isolation levels are notoriously difficult to reason about—even for experienced practitioners—and their misuse can and does introduce persistent data anomalies that can be costly to remediate.
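To make the difficulty concrete: here is a minimal, self-contained simulation (not a real database) of "write skew", a classic anomaly that snapshot isolation permits and serializability forbids. The doctor-on-call scenario and all names are illustrative only.

```python
# Simulated write skew under snapshot isolation.
# Invariant the application intends to preserve: at least one
# of the two doctors must remain on call.

db = {"alice_on_call": True, "bob_on_call": True}

def txn_go_off_call(snapshot, me, other):
    # Each transaction checks the invariant against its own snapshot...
    if snapshot[other + "_on_call"]:
        # ...and concludes it is safe to go off call.
        return {me + "_on_call": False}
    return {}

# Both transactions run concurrently from the same snapshot. Because
# they write disjoint keys, a first-committer-wins check sees no
# conflict, and both commit.
snap = dict(db)
w1 = txn_go_off_call(snap, "alice", "bob")
w2 = txn_go_off_call(snap, "bob", "alice")
db.update(w1)
db.update(w2)

# The invariant is now silently violated: nobody is on call.
print(db)  # {'alice_on_call': False, 'bob_on_call': False}
```

Each transaction was individually correct against the state it observed; only the interleaving broke the invariant. That is exactly the kind of reasoning burden weak isolation shifts onto the application programmer.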
Generally speaking, performance issues are significantly easier to diagnose and resolve than data anomalies, and they can be addressed in a targeted fashion as the need arises.
There’s no substitute for thinking. But if I had to prescribe general advice, it’d be this:
(1) When given the choice, select a modern database system which supports scalable and efficient serializable and snapshot transaction isolation levels.
(2) Use serializable isolation, by default, for all transactions.
(3) In the event that your transactions are not sufficiently performant, stop and investigate. Profile the system to identify the bottleneck.
(4) If the bottleneck is contention due to the transaction isolation level, stop. Assess whether the contention is inherent or whether it is incidental to the implementation or data model.
(5a) If the contention is incidental, do not lower the isolation level. Instead, refactor to eliminate the contention point. Congratulations; you are now done.
(5b) Otherwise, lower the isolation level—only for one or more of the transaction(s) in question—by a single step. Carefully assess the anomalies you have now introduced and the ramifications on the system as a whole. Look for other transactions which could intersect concurrently in time and space. Implement compensatory controls as necessary to accommodate the new behavior.
(6) Repeat from step (3) only as necessary to achieve satisfaction.
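One practical note on step (2): serializable databases enforce correctness by aborting conflicting transactions, so "serializable by default" in practice means wrapping transaction bodies in a retry loop. A minimal sketch, with hypothetical names (`SerializationFailure` stands in for your database's serialization-conflict error, e.g. SQLSTATE 40001 in PostgreSQL):

```python
import random
import time

class SerializationFailure(Exception):
    """Stand-in for a database driver's serialization-conflict error."""

def run_serializable(txn_fn, max_attempts=5):
    # Re-run the transaction body on serialization conflicts, with
    # jittered exponential backoff. txn_fn must be idempotent-safe
    # to re-execute (no side effects outside the transaction).
    for attempt in range(max_attempts):
        try:
            return txn_fn()
        except SerializationFailure:
            if attempt == max_attempts - 1:
                raise
            time.sleep(random.uniform(0, 0.01 * 2 ** attempt))

# Usage: a transaction body that conflicts twice before succeeding.
attempts = {"n": 0}

def flaky_transfer():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise SerializationFailure()
    return "committed"

print(run_serializable(flaky_transfer))  # committed
```

The point of the wrapper is that conflicts become a retriable performance cost rather than a correctness hazard, which is precisely the trade the procedure above is designed to preserve for as long as possible.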