This article is talking about precision: the proportion of positive results that are true positives. It's okay for precision to be poor when the condition is that rare, but only if the result is communicated alongside a statement of what the precision actually is, which it seems these were not.
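To make the base-rate effect concrete, here's a quick sketch with hypothetical numbers (not from the article): even a test that's right 99% of the time has terrible precision when the condition is rare.

```python
# Hypothetical error rates and prevalence, chosen only for illustration.
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value (precision): P(condition | positive test)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# A test with 99% sensitivity and 99% specificity,
# for a condition affecting 1 in 5,000 pregnancies:
print(ppv(0.99, 0.99, 1 / 5000))  # ~0.02: roughly 98% of positives are false
```

So the test itself can be "accurate" in the everyday sense while almost every positive result is still wrong, which is exactly why the precision needs to be communicated with the result.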
> the tests are inaccurate, when in reality the tests are accurate
If the test makes someone terminate a pregnancy, or even just consider it, that's a lot of pain. So for that person the test is potentially failing its purpose, depending on how you weigh terminating a viable pregnancy against the severity of the condition if it comes to term.
For a human, accuracy as you defined it means little to nothing. Usefulness and helpfulness are far better metrics, and such a high false positive rate is clearly causing problems with respect to those, which is what the article is highlighting.
How exactly do you plan on codifying usefulness and helpfulness?
A high false positive rate is not necessarily a bad thing; it may instead be the catalyst for additional tests to confirm the first one. The test's sensitivity may actually be close to 100%, which is great because it avoids a fatal genetic disease going undetected. Would you prefer a high false negative rate that misses these diseases instead?
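The screen-then-confirm argument can be sketched with the same hypothetical numbers as above, assuming the confirmatory test has similar error rates and its errors are independent of the first test's (a big assumption in practice, since correlated failure modes are common):

```python
# Hypothetical numbers for illustration; assumes independent test errors.
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value: P(condition | positive test)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

first = ppv(0.99, 0.99, 1 / 5000)   # after the screen: only ~2% of positives are real
second = ppv(0.99, 0.99, first)     # the posterior becomes the new prior
print(first, second)                 # precision jumps to roughly two thirds
```

That's the standard rationale for a cheap, sensitive screen followed by a more specific confirmatory test, but it only works if patients are told a first positive is a prompt for follow-up, not a diagnosis.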
It’s clear the article is talking, in layman’s terms, about why sensitivity is important, and while it could use better writing, it’s a real problem in diagnostics. This is why you don’t ask men to take a pregnancy test to check for prostate cancer: it’s accurate, but not sensitive.