When I've worked for organizations without QA teams, I introduce the concept of "sniff tests". This is a short (typically 1 hour) test session where anybody in the company / department is encouraged to come and bash on the new feature. The feature is supposed to be complete, but it always turns out that the edge cases just don't work. I've been in test sessions where we have generated 100 bug tickets in an hour (many are duplicates). I like putting "" into every field and pressing submit. I like trying to use just the keyboard to navigate the UI. I run my system with larger fonts by default. I sometimes run my browser at 110% zoom. It used to be surprising how often these simple tests would lead to problems. I'm not surprised any more!
We call those bug-bashes where we work, and they're also typically very productive in terms of defects discovered!
It's especially useful since during development of small features, it's usually just us programmers testing stuff out, which may not actually reflect how the end users will use our software.
The problem with devs testing their own & other devs' code is that we test what we expect to work, in the way we expect the user to use it. This completely misses all sorts of implementation errors and edge cases.
Of course the dev tests the happy path they coded... that's what they thought users would do, and what they thought users wanted! That doesn't mean the devs were right, and frequently they are not.
Any time a "fix" is implemented, someone needs to be asking the right questions. Can this type of problem occur in other features / programs? What truly is the root cause, and how has that been addressed?
Is the secret that it only works if the entire company does it, like you suggest?
And yes, I completely realize that Scrum is terrible. I'm just trying to work within a system.
Not only do we uncover bugs, it’s a great way to get the whole company learning about the new things coming and for the product team to get unfiltered feedback.
Is that like… useful to anyone? Especially if they are duplicates. It feels to me that 10 different bugs is enough to demonstrate that the feature is really bad; after that you are just kinda bouncing the rubble?
That's very clever. Precise test case in QA plus vague description given to dev. Haven't seen it before, thank you for sharing that insight.
<https://en.wikipedia.org/wiki/German_tank_problem>
There are similar methods used in estimating wildlife populations, usually based on catch-release (with banding or tagging of birds or terrestrial wildlife) or repeat-observation (as with whales, whose fluke patterns are distinctive).
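The estimator behind the linked article is small enough to sketch. This assumes serial numbers are sampled uniformly without replacement (the classic minimum-variance unbiased estimator; the code and numbers below are illustrative, not from any comment here):

```python
# German tank problem: estimate the population maximum N from a sample
# of serial numbers drawn uniformly without replacement.
# Minimum-variance unbiased estimator: N_hat = m + m/k - 1,
# where m is the largest observed serial and k the sample size.

def estimate_population(serials):
    k = len(serials)
    m = max(serials)
    return m + m / k - 1

# Classic worked example: observing serials 19, 40, 42, 60
# estimates 74 total tanks.
print(estimate_population([19, 40, 42, 60]))  # 74.0
```

The intuition: the largest observed serial underestimates the true maximum, and the average gap between observed serials (`m/k - 1`) tells you roughly how far above it the true maximum sits.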
As for software engineers handling QA, I'm very much in favor of development teams doing as much as possible. I often see tests bolted on to the very end of projects, which isn't going to lead to good tests. I think that software engineers are missing good training on what to be suspicious of, and what best practices are. There are tons of books written on things like "how to write baby's first test", but honestly, as an industry, we're past that. We need resources on what you should look out for while reviewing designs, what you should look out for while reviewing code, what should trigger alarm bells in your head while you're writing code.
I'm always surprised how I'll write some code that's weird, say to myself "this is weird", and then immediately write a test to watch it change from failing to passing. Like times when you're iterating over something where normally the exit condition is "i < max", but this one time, it's different, it actually has to be "i <= max". I get paranoid and write a lot of tests to check my work. Building that paranoia is key.
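As a sketch of what I mean (names hypothetical), the inclusive-bound case is exactly the kind of thing worth pinning down with an explicit boundary test:

```python
def cumulative_total(values, upto):
    """Sum values[0..upto] INCLUSIVE -- note the <=, not the usual <."""
    total = 0
    i = 0
    while i <= upto:  # the one time the exit condition really is <=
        total += values[i]
        i += 1
    return total

# The paranoid tests: pin both boundaries down explicitly, so a future
# "fix" back to < fails loudly instead of silently dropping an element.
assert cumulative_total([1, 2, 3, 4], upto=2) == 6   # includes index 2
assert cumulative_total([1, 2, 3, 4], upto=3) == 10  # includes the last element
```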
> I like putting "" into every field and pressing submit.
Going deeper into the training aspect, something I find very useful are fuzz tests. I have written a bunch of them and they have always found a few easy-to-fix but very-annoying-to-users bugs. I would never make a policy like "every PR must include a fuzz test", but I think it would be valuable to tell new hires how to write them, and why they might help find bugs. No need to have a human come up with weird inputs when your idle CI supercomputer can do it every night! (Of course, building that infrastructure is a pain. I run them on my workstation when I remember and it interests me. Great system.)
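For anyone curious what a bare-bones fuzz loop looks like, here is a stdlib-only sketch against a made-up parser (all names hypothetical). Real setups would use a coverage-guided fuzzer or a property-testing library like Hypothesis, but the shape is the same:

```python
import random
import string

def parse_quantity(text):
    """Hypothetical function under test: parse strings like '12kg'."""
    number = text.rstrip(string.ascii_letters)
    unit = text[len(number):]
    return float(number), unit

def fuzz(iterations=1000, seed=0):
    """Throw random junk at the parser.

    Rejecting bad input with ValueError is fine; any other exception
    is an unhandled edge case, i.e. a bug.
    """
    rng = random.Random(seed)  # seeded, so failures are reproducible
    for _ in range(iterations):
        s = "".join(rng.choice(string.printable)
                    for _ in range(rng.randint(0, 12)))
        try:
            parse_quantity(s)
        except ValueError:
            pass  # expected rejection of malformed input
        except Exception as exc:
            raise AssertionError(f"fuzz failure on input {s!r}: {exc!r}")

fuzz()
```

This toy parser happens to survive the loop; the point is the pattern, which costs a few minutes to write and runs happily on idle CI hardware.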
At the end of the day, I'm somewhat disappointed in the standards that people set for software. To me, if I make something for you and it blows up in your hands... I feel really shitty. So I try to avoid that in the software world by trying to break things as I make them, and ensure that if you're going to spend time using something, you don't have a bad experience. I think it's rare, and it shouldn't be; it should be something the organization values from the top to the bottom. I suppose the market doesn't incentivize quality as much as it should, and as a result, organizations don't value it as much as they should. But wouldn't it be nice to be the one software company that just makes good stuff that always works and doesn't require you to have 2-week calls with the support team? I'd buy it. And I like making it. But I'm just a weirdo, I guess.
Could you share some details of fuzz tests that you've found useful? I tend to work with backend systems and am trying to figure out whether they will still be useful in addition to unit and integration tests.
Its demise was widely celebrated.
I’m sorry but this is just lol. Did the devs play back by creating bugs and seeing if your team could find them?
One key attribute to both companies is that it was dictated from on high that the QA team had final say whether the release went to production or not.
These days companies think having the developers write automated tests and spend an inordinate amount of time worrying over code coverage is better. I can't count how many products I've seen with 100% code coverage that objectively, quantifiably don't work.
I'm not saying automated testing is bad. I'm saying, just as the author does, that doing away with human QA testers is.
Not to mention, at one recent employer, the QE team wrote an enormous amount of code to perform their tests - it was more LOC than the modules being tested/certified.
... Then I left and my CIO let go the onshore QA team in favor of near term cost savings. Code quality went way down and within a year or two several apps needed to be entirely rewritten. Everything slowed down and people started pointing fingers, and before you knew it, it was time for "cloud native rearchitecting/reengineering" which required an SI to come in with "specialists".
So much code. I hope they had a QAQA team to test all that.
That’s because code coverage doesn’t find the bugs that result from code you didn’t write, but should have. Code coverage is but one measure, and to treat it as the measure is folly.
(But, yes, I have heard a test manager at a large software company we’ve all heard of declare that test team was done because 100% coverage.)
If you have something like this:
    if condition1:
        do_something1()
    if condition2:
        do_something2()
    if condition3:
        do_something3()
There are 8 possible paths for the code to follow here, but you can cover 100% of the lines of code in one test. If your code coverage measure tells you that testing the case where conditions 1, 2, and 3 are all true achieves 100% coverage, your measure is worthless. Covering every path through this code cannot be done in fewer than eight tests.

I've seen companies where that's true and it was still trash, because the QA were mostly low-paid contract workers who only did exactly what they were told and no more.
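Coming back to the line-vs-path coverage example a few comments up, a short sketch (hypothetical code) makes the gap concrete: one all-true test touches every line, while exercising the paths takes eight cases:

```python
from itertools import product

def run(condition1, condition2, condition3):
    """Same shape as the snippet above: three independent ifs."""
    steps = []
    if condition1:
        steps.append("something1")
    if condition2:
        steps.append("something2")
    if condition3:
        steps.append("something3")
    return steps

# One test with all conditions true executes every line of the function:
# a line-coverage tool reports 100% right here.
assert run(True, True, True) == ["something1", "something2", "something3"]

# But there are 2**3 = 8 distinct paths, each producing a different result,
# so exercising them all genuinely requires eight cases.
all_paths = {tuple(run(a, b, c)) for a, b, c in product([True, False], repeat=3)}
assert len(all_paths) == 8
```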
QA also REFUSED to let developers write automation tests, and REFUSED to let us run them ourselves.
What a nightmare.
YMMV, but just having the final say is not a silver bullet for sure.
have at your claims good sir.
The same people have been asking: why should I write tests if I can write new features?
And then one senior QA comes and destroys everything.
Once I found that if I pressed F5 50 times within a minute, the backend would run out of memory while spinning up requests to the database.
This can be OK if the code executing without throwing exceptions is itself testing something. If you have a lot of assertions written directly into the code, as pre- or post-conditions for instance. But I'm guessing that wasn't the case here.
Unfortunately, people in the industry who have actual power over budgets don't think so. The article is right. QA engineers are now viewed as janitors: no one respects them, and it's seen as better to outsource to a cheap location.
- Have QA run pessimal versions of real use cases. Trying to sell a word processor to lawyers? Format the entire US legal code and a bajillion contracts in it, then duplicate it 10x and start filing usability/performance bugs.
- Have the engineers test everything with randomly generated workloads before committing. Run those tests nightly, and fix all the crashes / failures.
- Have Product Management (remember them?) work with marketing and sales to figure out what absolutely has to ship, and when.
Make sure it only takes one of the above three groups to stop ship, and also to stop non-essential development tasks.
An uncountable number of products have died or devolved because "we don't have time to do it that way, put in the quick fix".
At the second company we adopted Agile, so we released every couple of weeks (I can't recall exactly). The first time I heard about CI/CD was at this company, from co-workers who came back from a conference all excited about it.
This is the bitter truth no one wants to acknowledge.
DBAs and Infra are in the same boat as QAs. The pendulum will swing back before too long, I hope.
All it takes is one person on my team to defect and support an untenable amount of tech debt, and everyone on my team has to pay for it.
Ultimately it’s up to the customers. Will they walk because of bugs and outages or stay because of shiny new features?
An influx of dumb money like that can shape and distort the market and overwhelm the feedback loops that would otherwise give consumers influence.
Perhaps now rates are up, the equilibrium changes, but I think it’s still easy to overestimate the number of “first movers” in an industry and the power of tacit or unconscious collusion.
I've worked with conscientious engineers. Sometimes they are right but their delivery mechanism is broken. Sometimes they are just in the wrong place. If we're building a POC SaaS product, it really doesn't need the quality of an avionics microcontroller. All these trade-offs come with risk and cost, and good engineers need to know the difference.
This often creates a situation where people need to "justify" their jobs. Usually this happens due to an over-reliance upon metrics (see Goodhart's Law) rather than understanding what the metrics are proxying and what the actual purpose of the job is. A bad QA team is one that is overly nitpicky, looking to ensure they have something to say. A good QA team simultaneously checks for quality and trains employees to produce higher quality.
I do feel like there is a lack of training going on in the workforce. We had the "95%-ile isn't that good"[0] post on the front page not long ago, and it literally says "It's easy to get to the top 5% of performers in any field because most performers don't actively train or have active feedback." It's like the difference between being on an amateur sports team vs a professional one. Shouldn't businesses be operating like the latter? Constantly training? It should make hiring be viewed differently too: "can we turn this person into a top performer" rather than "are they already one", because the latter isn't as meaningful as it appears when your environment is vastly different from the one where success was demonstrated.
There's a rampant cultural mind-virus that argues that the 95th percentile is somehow tons of work (rather than a lack of unforced mistakes), so everyone just writes it off. It's on full display at this very site. Just look at any post involving software quality, and read the comments suggesting widespread apathy from engineers.
Obviously every situation is different, but people seem to be pretty okay with relinquishing agency on these things and just going along with whatever local maxima their org operates in. It's not totally their fault, but they're not blameless either.
> Obviously every situation is different, but people seem to be pretty okay with relinquishing agency on these things and just going along with whatever local maxima their org operates in. It's not totally their fault, but they're not blameless either.
I agree here, to the letter. I don't blame low-level employees for maximizing their local optima. But there are two main areas (among many) that just baffle me. The first is when this is top down: when a CEO and board are hyper-focused on benchmarks rather than the evaluation, unable to distinguish the two (benchmarks and metrics are guides, not answers). The other is when highly trained and educated people actively ignore the situation, over-relying on metrics and refusing to acknowledge that metrics are proxies, ignoring the nuances that they were explicitly trained to look for and that meaningfully distinguish them from less experienced people. I've been trying to coin the term Goodhart's Hell to describe this more general phenomenon because I think it is a fairly apt and concise description. The phenomenon seems prolific, but I agree that the blame weighs more heavily on those issuing orders. Just as a soldier is not blameless for participating in a war crime, but the ones issuing the orders receive harsher critique due to the imbalance of power/knowledge.
Ironically, I think we need to embrace the chaos a bit more. But that means recognizing that ambiguity and uncertainty are inescapable, not abandoning any form of metric altogether. I think modern society has gotten so good at measuring that we often forget our tools are imprecise, whereas previously the imprecision was so apparent that it was difficult to ignore. One could call this laziness, but considering it's systematic, I'm not sure that's the right word.
On DevOps teams I see this constantly. Usually the best-compensated or most senior "Ops" guy (or whatever they're called at the company) spends a lot of his time extinguishing fires that were entirely of his own creation or incompetence, which makes it look like he's "doing something." Automate away the majority of the toil and this person doesn't have a job, yet this pattern is insanely common. There's little incentive to do it right when doing it right means management thinks you sit there all day and do nothing.
There are a lot of these types that gravitate to crisis management roles like DevOps.
Sounds like the right incentive structure. If you don't mind, how are you judged? Do you feel like the system you're in is creating the appropriate incentives and actually being effective? Certainly this example is but I'd like to know more details from an expert so I can update my understanding.
"But then they'll leave for somewhere else for more money." </sarcasm>
Literally every company I have worked for. Meaningful training was always an uphill battle.
However, good training also requires someone good and broad in your technical ladder. They may not be the most up-to-date, but they need to be able to sniff out bullshit and call it out.
FAANG is no exception, either.
QA is almost always seen as a 'cost center' by the business and upper management. I have a hypothesis that you never ought to work in a department that is seen as a 'cost center'. The bonuses, the recognition, and the respect always go to the money makers. The cost center is the first place to get more work with fewer hands, get blamed for failures, and ultimately get cut when the business needs to slim down. I think the same thing applies to IT.
This spiral is why QA will always be a harder career than taking similar skills and being a developer. It self-reinforces: the best people get fed up and switch out as soon as they can.
Accessibility, observability, good logging, testing infrastructure improvements, CI/CD tweaks, stability, better linting and analyzer issues are all important, but you will be rewarded if you ship features fast.
This year I spent too much time on the former because I felt like that's what the team and app needed, because nobody on the team prioritized these issues, and I'll be sweating at the end-of-year performance reviews.
Now knowing this, I understand why the others didn't want to work on these items, so next year, I'll be wiser, and I'll focus on shipping features that get me the most visibility.
Sorry for the bugs in the app, but I need a job to pay my mortgage.
1) Some organizations have come to really value what QA/QC brings to the table. From my experience, this seems to be more visible in manufacturing than software. I speculate this is because software is more abstract by its very nature and waste is harder to track.
2) The really good QAs are those who really believe in its mission, rather than those who are looking for the path of least resistance.
Both of those underscore that the value lies in organizations and individuals who really buy in to the QA ethos. There are lots of examples of both who are simply going through the motions.
I don’t care though. I enjoy making things better and more robust. It makes my soul feel better. I’ll leave fucking things up to the cynics.
I'm fine with bonuses, etc. going to other groups. I'm paid well as a tech worker, and many of those jobs would make me absolutely miserable -- assuming I turned out to actually be good at them.
That's why I don't work in an IT department of a traditional business.
Well everything involved in making a product is seen as a cost, that includes the entire development team - QA, Developers, Devops, PM ....
And it’s easy to see why.
Software Quality, Code Maintainability, Good Design. These things only matter if you are planning to work at that company for a long time. If you’re planning to stay a couple of years then hop to the next company, the most optimal path is to rise fast by doing high-visibility work, then use your meteoric rise as resume material to get a higher-paying job. Rinse and repeat. If that project is going to break or become unmaintainable in a couple of years, who cares? You’re not going to be there.
Recognize the pattern? Startups work the same. It’s the “growth mindset” imprinted everywhere. If this product becomes unmaintainable in 5 years, who cares? I will have exited and cashed in.
I don’t judge people who do that exactly because it’s the practice the companies themselves use. I don’t like it, I actually hate it, but I understand people are just playing by the rules.
The fun part is watching managers and executives complaining about employee turnover, lack of company engagement, quiet quitting, like this isn’t them tasting their own poison.
This is a reasonable stance for a startup to take. The majority of startups likely won't last five years as they tend to fail.
Being alive in five years with technical debt is a good problem for most startups to have, because that means they managed to make it five years.
There’s a lot to say about startup culture and the growth mindset, but I don’t consider it necessarily evil. It exists, lots of the products we use and love would be impossible to build without it. It can be extremely harmful, though. It burns out people, it leads to excessive risk taking, it favors aggressive, invasive marketing, it rewards reckless management - yet it works.
It isn’t good or evil, like most everything in the world. It’s just… there.
When I was a QA lead I often ran into software engineers that couldn’t be bothered to read a pipeline error message (and would complain daily in Slack) and when it came to optimizing the pipeline they would ignore base problems and pretend the issues that stemmed from the base problems were magical and not understood. Wasting days guessing at a solution.
The disrespect a QA engineer sees is not exaggerated in this article. Since most companies with QA orgs do not have a rigorous interviewing process like the Engineering orgs, the QA engineers are seen as lesser. The only SWEs I've met who have respect for them are the people who worked in QA themselves. The disrespect is so rampant that I myself have switched back to the Engineering org (I tried using seniority as a principal engineer and even shifted into management to make changes, but this failed because Engineering could not see past its own hubris and leadership would not help). My previous company, before I was laid off, hired a new CTO who claimed we could just automate away QA needs but had no examples of what she was talking about. This is the level of respect poured down from the top about building good software.
I use a lot of MSFT software and services in the "day job". I wish there was some kind of consequence to them for their declining quality.
This is exactly the point I was going to make. Their stock price is doing great! So obviously they've done the right thing for their position in the market: the market has rewarded them for not wasting money on QA and just letting users suffer with the bugs.
>I wish there was some kind of consequence to them for their declining quality.
If people keep insisting on throwing money at them no matter how bad their software is, then there's no reason for them to improve their quality.
I have only had QA teams that wrote "test plans" and executed them manually, and in rarer cases, via automated browser / device tests. I consider these types of tests to be valuable, but less so than "unit tests" or "integration tests".
With this model, I have found that the engineering team ends up being the QA team in practice, and then the actual QA team often only finds bugs that aren't really bugs, just creating noise and taking away more value than they provide.
I would love to learn about QA team models that work. Manual tests are great, but they only go so far in my experience.
I'm not trying to knock on QA folks, I'm just sharing my experience.
Where I've seen QA teams most effective is providing more function than "just" QA. I've seen them used for 2nd tier support. I've seen them used to support sales engineers. I've also seen QA teams that take their manual test plans and automate their execution (think Selenium or UiPath) and have seen those automations included in dev pipelines.
Finally, the QA team are the masters and caretakers of your test environment(s), all the different types of accounts you need for testing, they should have the knowledge of all the different browsers and OSes your customers are using, and so forth.
That's a lot for the dev team to take on.
Not disagreeing with this, but there's one thing they won't always be aware of. They won't always know what code a dev touched underneath the hood and what they might need to recheck (short of a full regression test every single time) to verify everything is still working.
I know that the component I adjusted for this feature might have also affected the component over in spots X, Y, and Z, because I looked at that code, and probably did a code search or a 'find references' check at some point to see where else it's getting called, and also I usually retest those other places as well (not every dev does, though. I've met some devs that think it's a waste of time and money for them to test anything and that's entirely QA's job).
A good QA person might also intuit other places that might be affected if it's a visible component that looks the same (but either I haven't worked with too many good QA people or that intuition is pretty rare, I'm guessing it's the latter because I believe I have worked with people who were good at QA). Because of that, I do my best to be proactive and go "oh by the way this code might have affected these other places, please include those in your tests".
These "Customer Support" reps, when functioning as QA, knew the product better than product or eng, exactly how you're describing. I did enjoy that model, but they also did not write tests for us. They primarily executed manual test plans, after deploys, in production. They did provide more value than creating noise, but the engineering team still was QA, at least from an automated test standpoint.
I might just be too old, but I remember when QA people didn't typically write tests, they manually tested your code and did all those weird things you were really hoping users wouldn't do. They found issues and bugs that would be hard to universally catch with tests.
Now we hoist QA on the user.
Working with younger devs I find that the very concept of QA is something that is increasingly foreign to them. It's astounding how often I've seen bugs get to prod and asked "how did it work when you played around with it locally?" only to get strange looks: it passed the type checker, why not ship it?
Programmer efficiency these days is measured in PRs/minute, so introducing bugs is not only not a problem, but great because it means you have another PR you can push in a few days once someone else notices it in prod! QA would have ruined this.
This drives me crazy. It's a cheap way of saying we're ok shipping crap. In the past, I've been part of some QA audits where the developers claimed their customer support log sufficed as their test plan. This wasn't safety-critical software, but it did involve what I would consider medium risk (e.g., regulatory compliance). The fact that they openly admit they are okay shipping bad products in that environment just doesn't make sense to me.
Every company is different on how they implement the QA function. Whether it be left to customer, developers, customer support, manual only QA, or SDET. It really comes down to how much leadership values quality or how leadership perceives QA.
If a company has a QA team, I think the most success comes when QA get involved early in the process. If it is a good QA team, they should be finding bugs before any code is written. The later they are involved, the later you find bugs (whether the bugs are just "noise" or not) and then the tighter they get squeezed between "code complete" and release. I think that the QA team should have automation skills so more time is spent on new test cases instead of re-executing manual test cases.
Anyways, from my vantage point, the article really hits hard. QA are sometimes treated as second class citizens and left out of many discussions that can give them the context to actually do their job well. And it gets worse as the good ones leave for development or product management. So the downward spiral is real.
but every once and a while you ran across a QA organization that actually had a deep understanding of the problem domain, and actually helped drive development. right there alongside dev the entire way. not only did they improve quality, but they actually saved everyone time.
It worked exceptionally well, in my opinion but the QA revolted; they didn't like it, said that it felt like we were trying to make them into software developers. And so it ended, and it was back to "throw it over the wall".
Sometimes that’ll get caught in code review, if your reviewer is thinking about the implementation.
I’ve worked in payroll and finance software. I don’t like it when users are the ones finding the bugs for us.
People wrote off QA completely unless it meant they didn't have to write tests, but it didn't track from my (rather naive) perspective, since tests are _always_ part of coding.
From that perspective, it seemed QA should A) manage go/nogo and manual testing of releases B) keep the CI green and tasks assigned for red (bonus points if they had capacity to try fixing red) C) longer term infra investments, ex. what can we do to migrate manual testing to integration testing, what can we do to make integration testing not-finicky in the age of mobile
I really enjoyed this article because it also indicates the slippery slide I saw there: we had a product that had a _60% success rate_ on setup. And the product was $200+ to buy. In retrospect, the TL was into status games, not technical stuff, and when I made several breakthroughs that allowed us to automate testing of setup, they pulled me aside to warn me that I should avoid getting into it because people don't care.
It didn't compute to me back then, because leadership _incessantly_ talked about this being a #1 or #2 problem in quarterly team meetings.
But they were right. All that happened was my TL got mad because I kept going with it, my skip manager smiled and got a bottle of wine to celebrate with, I got shuffled off to work with QA for next 18 months, and no one ever really mentioned it again.
QC are the processes that ensure a quality product: things like tests, monitoring, metrology, audit trails, etc. No one person or team is responsible for these, rather they are processes that exist throughout.
QA is a role that ensures these and other quality-related processes are in place and operating correctly. An independent, top level view if possible. They may do this through testing, record reviews, regular inspections and audits, document and procedure reviews, analyzing metrics.
Yes, they will probably test here and there to make sure everything is in order, but this should be higher level - testing against specifications, acceptability and regulatory, perhaps some exploratory testing, etc.
Critically they should not be the QC process itself: rather they should be making sure the QC process is doing its job. QA's value is not in catching that one rare bug (though they might), but in long term quality, stability, and consistency.
I did not work well with the mid-manager, who was both my new boss and the QA person's (not too relevant here). However, I do give him credit for the person he hired.
That QA person, a young Indian woman with some experience, was actually phenomenal at her job, catching many mistakes of ours both in the frontend and in the APIs.
She not only did a bunch of manual testing (and thus discovered many user-facing edge cases the devs missed), she wrote all the test cases (exhaustively documented them in Excel, etc. for the higher-ups), AND the unit tests in Jest, AND all the end-to-end tests with Playwright. It drastically improved our coverage and added way more polish to our frontend than we otherwise would've had.
Did she know everything? No, there was some stuff she wasn't yet familiar with (namely DOM/CSS selectors and XPath), and it took some back-and-forth to figure out a system of test IDs that worked well enough for everyone. She also wasn't super fluent with the many nuances of Javascript (but really, who is?). There was also a bit of a language barrier (not bad, but noticeable). Overall, though, I thought she was incredible at her job, very bright, and ridiculously hard-working. I would often stay a little late, but she would usually be there for hours after the end of the day. She had to juggle the technical dev/test tasks, the cultural barriers, and managing both up and across (as in producing useless test case reports in Excel for the higher-ups, even though she was also writing the actual tests in code), dealing with complex inter-team dynamics, etc.
I would work with her again any day, and if I were in management, I'd have promoted the heck out of her, trained her in whatever systems/languages she was interested in learning, or at least given her a raise if she wanted to stay in QA. To my knowledge the company didn't have a defined promotion system though, so for as long as I was there, she remained QA :( I think it was still better than the opportunities she would've had in India, but man, she deserved so much more... if she had the opportunities I did as an American man, she'd probably be a CTO by now.
the one exception to this was when i was qa (never again) and i made sure we only ever did automated tests. unfortunately management was nonexistent, devs made zero effort to work with us, and naturally we were soon replaced by a cheap offshore indian team who couldn't tell you the difference between a computer and a fridge anyway.
i think a lot of it just stems from companies not caring about qa, not knowing who to hire, and not knowing what they want the people they hire to achieve. "qa" is just like "agile", where nobody can be bothered to actually learn anything about it, so they make something up and then pat themselves on the back for having it.
That said, those test plans are gold. They form the definition of the product’s behavior better than any Google Doc, Integration Test, or rotating PM ever could.
To me, unit tests are great for ensuring the code doesn't have silly syntax errors and returns results as expected on the happy path. I would never consider that QA, no matter how much you randomize the unit test's input.
Humans pushing buttons, selecting items, hovering their mouse over an element, doing all sorts of things that have no real reason but are done anyway will almost always wreck your perfect little unit tests. Why do you think we have session playback now? Because no matter what a dev does to recreate an issue, it's never the exact same thing the user did. And there's always that one little "WTF, why does that matter?" type of thing the user did without even knowing they were doing anything.
A good QA team is worth its weight in $someHighValueMineral. I worked with one person who was just special in his ability to find bugs. He was savant-like. He could catch things that ultimately made me look better because the final released thing was rock solid. Even after other QA team members gave a thumbs up, he could still find something. There were days where I hated it, but it was always a better product because of his efforts.
You can extract a lot of business logic into those kinds of functions. There's a whole art in writing "unit testable code". Those unit tests have value.
What's left is the pile of code and scenarios that need to be tested in other ways. But part of the art is in shrinking down that pile as much as possible.
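The extraction the parent describes can be sketched in a few lines (a hedged illustration; the function names and the discount rule are invented for this example):

```python
# Hypothetical example: the business rule is pulled out into a pure
# function that can be unit tested without a web framework or database.
def apply_discount(subtotal, coupon):
    """Pure business logic: trivial to test exhaustively."""
    if coupon == "SAVE10":
        return round(subtotal * 0.90, 2)
    return subtotal

# The thin shell left behind is part of the "pile" that needs other
# kinds of testing (integration, e2e, manual QA).
def checkout_total(items, coupon=None):
    subtotal = sum(item["price"] for item in items)
    return apply_discount(subtotal, coupon)
```

The more logic that lives in functions like `apply_discount`, the smaller the pile left over for slower, flakier test layers.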
I would also second another comment that pointed out that good QA folks often know the real surface area of the product better than anyone. And good QA folks also need to be seen as good QA folks. If you have a corporate culture that treats QA folks like secondary or lesser engineers, that will quickly be a self-fulfilling prophecy. The good ones will leave all the ones who fit your stereotype behind by transitioning into dev roles or finding a new team.
In either case, the optimal operating model is that QA is embedded in your product team. They participate in writing tickets, in setting test criteria, and in understanding the value of the work being done. "Finding bugs" is a low value task that anyone can do. Checking for product correctness requires a lot more insight and intuition. Automated test writing can really go either direction, but typically I'd expect engineers to write unit tests and QA to write e2e tests, and only as much or as little as actually saves time and can satisfactorily indicate success or failure of a user journey.
I've seen this go poorly as QA was slowly eroded. It became easier and easier to justify shoddy testing practices. Low-probability events don't come around often by their very nature, and that can create complacency. I've seen some aerospace applications have close calls related to shortcomings in QA integration; in those cases, luck saved the day, not good development practices.
They weren't there for engineering, they were there for product quality. Their expertise was that they knew what the product was supposed to do and made sure it did it. Things like "unit tests" help development, but they don't make sure the product satisfies client requirements.
If engineering is really on top of it, they learn from QA and QA seems to have nothing to do. But don't let that situation fool you into thinking they are "just creating noise and taking away more value than they provide".
If your company hires the low end of that scale, any approach is going to have problems because your company has management problems. It’s very easy to take a lesson like “QA is an outdated concept” because that’s often easier than acknowledging the broken social system.
I worked at a place with a 10 year old legacy product and a 10,000 test case spreadsheet of each manual action a QA tester must perform on that product to greenlight any individual change. Obviously this led to huge wait times to get anything deployed. It was also pretty amusing to catch all the bugs in production that their exhaustive spreadsheet totally overlooked. Almost as though it did not have value in the first place.
I think that era is over after the great SDET layoffs of 2014/2015? Now I guess some SDE teams are tasked with this kind of dev work.
I have a few times. But the only common thing in the QA industry is that every company does it differently and thinks they're doing it the "normal way".
So you can make your QA teams create plenty of tests if you give them the right tools.
We got to a point where the default was that all of our 10,000's of test runs each night would flash green if no new code was introduced. Tests that flashed red were almost always due to recent code additions, and therefore easily identified and fixed. It let our team develop knowing that any bugs they introduced would be quickly caught, and this translated to being able to confidently take on crazy new projects - like re-writing our transaction processing system post-launch and getting a 10x speed increase out of it.
In the end our focus on quality led to velocity - they weren't mutually exclusive at all. We don't think this is an isolated phenomenon, which led us to our newest project - but that's a story for another time.
Long story short, it wasn't. It was like taking away a crutch. Of course we could have been more diligent about testing before having QA validate it, but it slowed development down so much trying to learn all the things we never thought to test that QA did automatically.
https://blog.southparkcommons.com/move-fast-or-die/
A bit recent to have affected Yahoo - but it sells a good story.
We would celebrate the first time someone broke something.
Let anyone touch any part of the codebase and get in there to fix a bug or build a feature. Yes, this can cause bugs. Yes, that is an acceptable tradeoff. We had a shared mythology about the times that the site went down and we all rallied together to quickly bring it back up.
Sounds like hell: running as close to the edge of the cliff as you can. Presumably totally ignoring thousands of papercuts of slightly broken functionality. Optimising to produce an infinite number of shallow bugs. I bet the people responsible for Facebook Ads Manager are a lot less enthusiastic about "move fast and break things", although I'd be interested to hear an opposing viewpoint from anyone here who's worked for that group.
Yes, part of the job was to write and run manual test suites, and to sign off as the DRI that a certain version had had all the automated and manual tests pass before release.
But their main value was in the completely vague mandate "get in there and try to break it." Having someone who knows the system and can really dig into the weak spots to find problems that devs will try to handwave away ("just one report of the problem? probably just some flaky browser extension") is so valuable.
In my current job, I have tried for 5+ years to get leadership to agree to a FT QA function. No dice. "Developers should test their own code." Yeah and humans should stop polluting the ocean and using fossil fuels, how's that going?
I worked at a company with a world-class QA team. They were amazing and I can't say enough nice things about them. They were comprehensive and professional and amazing human beings. They had great attention to detail and they catalogued a huge spreadsheet of manual things to test. Engineers loved working with them.
However -- the end result was that engineers got lazy. They were throwing code over to QA while barely testing it themselves. They were entirely reliant on manual QA, so every release bounced back and forth several times before shipping. Sometimes, we had feature branches being tested for months, creating HUGE merge conflicts.
Of course, management noticed this was inefficient, so they formed another team dedicated to automated QA. But their coverage was always tiny, and they didn't have resources to cover every release, so everyone wanted to continue using manual QA for CYA purposes.
When I started my own company, I hired some of my old engineering coworkers. I decided to not hire QA at all, which was controversial because we _loved_ our old QA team. However, the end result was that we were much faster.
1. It forced us to invest heavily in automation (parallelizing the bejesus out of everything, so it runs in <15min), making us much faster
2. Engineers had a _lot_ of motivation to test things well themselves because there was no CYA culture. They couldn't throw things over a wall and wash their hands of any accountability.
We also didn't lack end-to-end tests, as the author alludes to. Almost all of our tests were functional/integration tests that ran on top of a docker-compose setup that simulated production pretty well. After all, are unit tests where you mock every data source helpful at all? We invested a lot of time in making realistic fixtures.
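A minimal sketch of what "realistic fixtures" can mean in practice (the schema and values here are invented, and an in-memory SQLite database stands in for whatever the real data store would be):

```python
import sqlite3

def make_fixture_db():
    """Build a small but realistic dataset instead of mocking the data layer."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, plan TEXT)")
    # Deliberately include the awkward values that bite in production:
    # non-ASCII names, hyphens, an empty string.
    rows = [
        (1, "Ana", "free"),
        (2, "Søren", "pro"),
        (3, "María-José", "pro"),
        (4, "", "free"),
    ]
    conn.executemany("INSERT INTO users VALUES (?, ?, ?)", rows)
    return conn

def count_pro_users(conn):
    """Example query-under-test that runs against real SQL, not a mock."""
    return conn.execute("SELECT COUNT(*) FROM users WHERE plan = 'pro'").fetchone()[0]
```

Tests against a fixture like this exercise real query behavior and real data quirks, which is the point of not mocking every data source.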
Sure, we released some small bugs. But we never had huge, show stopping bugs because engineers acted as owners, carefully testing the worst-case scenarios themselves.
The only downside was that we were slower to catch subtle, not-caught-by-Sentry bugs, things like UX transition weirdness. But that was mostly okay.
Now, there is still a use case for manual QA -- it's a question of risk tolerance. However, most applications don't fit that category.
> This created a self-reinforcing spiral, in which anyone “good enough at coding” or fed up with being treated poorly would leave QA. Similarly, others would assume anyone in QA wasn’t “good enough” to exit the discipline. No one recommends the field to new grads. Eventually, the whole thing seemed like it wasn’t worth it any more. Our divorce with QA was a cold one — companies just said “we’re no longer going to have that function, figure it out.”
I've worked with a handful of talented career software QA people in the past. The sanity they can bring to a complex system is amazing, but it seems like a shrinking niche. The writing was on the wall for me that I needed and wanted to get into fulltime dev as QA got increasingly squeezed, mistreated, and misunderstood. At so many companies QA roles went into a death spiral and haven't recovered.
Now, as the author points out, a lot of younger engineers have never worked with a QA team in their career. Or maybe they have worked with a crappy QA team because it's been devalued so much. So many people have never seen good QA that no one advocates for it.
This is why. Engineers have some of the most inflated egos, and they set an extremely high bar for being “part of the club”. Sometimes that’s corporate policy (hire better than you) and sometimes it’s just toxicity (I am better than you), without realizing that the most valuable skills they could learn are soft skills. I’m open to finding anyone willing to code, whether it’s from QA, sales, Gerry’s nephew, a recent CS grad, a designer turned coder, or that business analyst who taught themselves Python/Pandas.
A good QA team is sorely missed. A bad QA team turns the whole notion of QA teams sour. Just the same for development teams :D
I think devs are first line of defense. Unit tests etc. QA is second line (should we release?), feature testing, regression, UX continuity, etc. There’s value in it if you can afford it.
Luckily, that's easy: the "fault model", all the ways things can break. That tends to be a lot more complex than the operating model, the domain model, or the business model.
Once all the potential issues and associated costs for all the fault models are enumerated, then QA can happily offer to any other organization the responsibility for each one, and see who steps up to take it on.
In many cases, it can be done more cheaply in design, engineering, or automation; it's usually easier to prevent a problem than capture, triage, debug, fix, and re-deploy.
Organizations commonly make the mistake of being oblivious to the fault models and failing to allocate responsibility. That's possible because most failures are rare, and the link from consequences back to cause is often unclear. The responsibility allocation devolves to blame, and blame to "who touched this last"? But catastrophic feedback is a terrible way to learn, and chronic irritants are among the best ways to lose customers and staff.
>it's usually easier to prevent a problem than capture, triage, debug, fix, and re-deploy.
It really depends on the risk of the fault. To a PM under schedule pressure, the higher risk may be to break schedule in order to redesign to mitigate the fault. As you said, many failures are low probability, so PMs are used to rolling the dice and getting away with it. Often they've moved on before those failures rear their ugly heads.
An organization really needs the processes that establish guardrails against these biases. Establishing requirements to use the tools to define the fault model can go a long way, although I've seen people get away with turning a blind eye to those requirements as well. You also need to mate it with strong accountability.
There are no "Software Quality Assurance" academic degrees, there's barely any research into testing methodologies, and there's barely any commercial engagement in the space aside from test run environments (aka selling shovels to gold diggers) and, let's face the truth, tooling. And everything but software QA is in an even worse state, with "training" usually consisting of a few weeks of "learning on the job".
Basic stuff like "how does one even write a test plan", "how does one keep track of bugs", "how to determine what needs to be tested in what way (unit, integration, e2e)" is at best cargo-culted in the organization, at worst everyone is left to reinvent the wheel themselves, and you end up with 13 different "testing" jobs in a manually clicked-together Jenkins server, for one project.
> Defect Investigation: Reproduction, or “repro”, is a critical part of managing bugs. In order to expedite fixes, somebody has to do the legwork to translate “I tried to buy a movie ticket and it didn’t work” into “character encoding issues broke the purchase flow for a customer with a non-English character in their name”.
And this would normally not be the job of a QA person, that's 1st level support's job, but outsourcing to Indian body shops or outright AI chatbots is cheaper than hiring competent support staff.
That also ties in to another aspect I found lacking in the article: users are expected to be your testers for free aka you sell bananaware. No matter if it's AAA games, computer OSes, phones, even cars...
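The character-encoding repro quoted from the article is easy to sketch (a purely illustrative example with invented function names; the bug pattern is a Latin-1 write paired with a UTF-8 read):

```python
def legacy_store_name(name):
    # Imagined legacy write path that assumes Latin-1 is good enough.
    return name.encode("latin-1")

def read_name(raw):
    # Newer read path that assumes UTF-8: fine for "John", broken for "Zoë".
    return raw.decode("utf-8")
```

`read_name(legacy_store_name("John"))` round-trips fine, while `read_name(legacy_store_name("Zoë"))` raises `UnicodeDecodeError`, which is exactly the kind of failure that only surfaces for "a customer with a non-English character in their name" and that somebody has to do the repro legwork to pin down.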
I can tell you, I definitely didn't get training for QA tasks, but here I am doing them anyways. It's just work that needs to be done.
Yeah, and that is my point. It would be way better for the entire field of QA if there were at least a commonly agreed base framework of concepts and ways to do, and especially to name, things, if only because the lack of standardization wrecks one's ability to even hire testers and makes onboarding them to your "in house standards" a very costly endeavour.
There's a lot of this actually. Entire communities of people working on software quality assurance. Practitioners in this space call their field "resilience engineering".
The field likes to talk a lot about system design. Especially in the intersection of humans and machines. Stuff like "How do you set up a system (the org) such that production bugs are less likely to make it all the way to users"
What training is in your opinion needed?
I get asked, every week, if they are contributing and what are they contributing.
It's exhausting, so I can't imagine what it feels like to actually BE in QA.
Every time I looked at one of those AI images my brain just kept seeing all the little weird parts that didn't make sense. Like a brain itch.
it's not one or the other. in my experience it's a decision of "no images" vs "ai images".
in this case, probably "no images" would've been better for the reading experience. but there was never any illustrator getting paid
They should know the product inside out; moreover, they know all the annoying bits that are unsexy and not actively developed.
yes, they find bugs and junk, but they know how your product should be used, and the quickest/easiest way to use it. Which are often two different paths.
Bring your QA in on the product cycle; ask them what stuff pisses them off the most.
They also should be the masters of writing clear and precise instructions, something devs and product owners could learn from.
Getting rid of everyone with testing expertise, and treating testing expertise as inherently less valuable than implementation expertise? Sure, you could convince me that was a bad idea.
If you want to call it “Testing and Exploration” you’d get no argument from me. (Though I do think you’ll find that team is hard to staff.)
> Focus: There is real value in having people at your company whose focus is on the quality of your end product. Quality might be “everybody’s job”…but it should also be “somebody’s job”.

Yes indeed: naturally, every person has just one focus, so having a dedicated person focus on QA is important.
Another practice, or buzzword (or used-to-be buzzword :) ), is Exploratory Testing, which can pretty much be conducted only by dedicated QA.
I tried to draw attention to the fact that at least some manual QA is needed, but even after an obvious failure (some people lost their jobs) managers are adamant. Automation, 'special bug-hunting projects', 'we should concentrate on code quality' lectures, all-hands testing - anything instead of the very obvious solution of getting the QA team back. Development time is up, regressions are frequent, communication became harder.
The only QA who still works in the company (now in a different role) became invaluable, because he is one of the very few people who deeply understands the product as a whole and knows all the services we work with.
I can't think of another example of such an obvious mistake, with such an obvious solution, being ignored so relentlessly.
For me it's a no-brainer: if I were CEO/CTO, until product-market fit is achieved and exponential growth is visible, I'd just outsource QA and that's that.
If your product is non-trivial in size or scope, ie it is not a cookie-cutter solution, then the testing of your product will also be non-trivial if you want it to work and have a good reputation (including during those all-important live demos, poc's, etc).
QA does not mean "click on things and go through the happy path and everything is fine" - not saying you are implying that, but gosh the amount of companies that think it's child's play is crazy.
Are there many products that reach such sizes without achieving Product Market Fit (PMF)? I feel like after this step is achieved, QA becomes pivotal and involves a great combination of manual and automated procedures. So I agree with you in this regard.
But going back to my initial assumption. I think starting a fresh company without PMF and spending a lot on QA until that is achieved, might not be the best approach.
I experienced both worlds: I worked in an organization where 4 QA engineers tested each release that was built by 6 software engineers. Now I'm in a situation where 0 QA engineers test the work of 8 software engineers. In the second case the software engineers actually do all the testing, but not that systematically because it's not their job to optimize the testing process.
Having someone with the capabilities of a software engineer whose daily work is uncovering defects and verifying functionality is important. Paying someone who owns the testing process is more than justified commercially. The problem is: you don't find those people, for various reasons. Therefore you are stuck with making the software engineers do the testing.
But there is hope. There is a new standard released for my industry that requires organizations to have a QA department that is independent of the software engineering department. If they don't have that, they are not allowed to roll out their software product as compliant with the standard. Maybe this will help to reintroduce the QA Engineer as an important and prestigious role.
There are two kinds of tests. Regression testing, which should be automated, written, and maintained by devs. New-feature or change testing, which should be done by those who defined the feature, namely Product people. In the best case it's an iterative and collaborative process, where things can be evaluated in dev/test environments, staging environments, or production for beta-flag-enabled test users.
SWEs must produce unit testing because they know the code best. Dumping this responsibility onto QA is slow, evil, and wrong, like doing quality control only instead of QA+QC.
QA teams must have the authority to ensure complicated code gets unit testing code coverage.
QA teams should provide partial and full-up integration tools and testing with the support of SWEs.
QA teams must have stop-the-assembly-line authority to ensure quality and testing requirements are met.
QA teams (or tools teams that support multiple QA teams) must make testing faster and more efficient.
There ought to be a suite of smoke tests such that untested code cannot be committed to the "production" branch, whatever that looks like, except in rare extreme emergencies.
All production-ready commits should be squashed, signed-off by another SWE, and have a test plan.
Test plans should be auto-generated, wherever possible.
Tests, combined with test infrastructure, should be able to auto-bisect/blame breakage.
Which tests must run and pass to accept a proposed production diff should be auto-minimized to those that touch the particular area(s) changed and their precise-as-possible-but-still-correct dependencies.
Other areas that must not be neglected and shoveled onto SWEs: product management, UAT, UX, and operations.
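The auto-bisect point above can lean on git's built-in support: `git bisect run` invokes any command whose exit code classifies each commit, with 0 meaning good, a nonzero code (other than 125, which means skip) meaning bad. A hedged sketch of such a check script (the actual smoke logic here is a placeholder):

```python
# Sketch of a script usable as: git bisect run python smoke.py
# Exit code 0 marks the commit good; nonzero marks it bad, so the
# bisect needs no human in the loop.
def smoke_check():
    """Return a process exit code: 0 = good, nonzero = bad."""
    # Stand-in for exercising one core code path of the product.
    total = sum(range(10))
    return 0 if total == 45 else 1

# A real script would end with:
#     import sys
#     sys.exit(smoke_check())
```

Keeping the check fast and deterministic is what makes the auto-bisect practical; a flaky smoke test will send the bisection to the wrong commit.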
Now, don't get me wrong, my teams always do unit and integration testing, along with automation of both. Devs are responsible for the quality of their work. But ultimately it is the product team, with input from their QA team, who decide if a new feature is ready for release as it is, or needs more polish.
QA is still extremely valuable in any software that has long deployment lead times. Mobile apps, On-Prem solutions, anything that cannot be deployed or rolled back within minutes can benefit from a dedicated QA team that can manage the risk appropriately.
> QA has always been about risk management.
100%.
QA should be related to identifying risk, likelihood of failure, impact of failure to user, client and company. The earlier this is done in the varying processes, the better. ("shift left" but I've seen a ton of differences with how people describe this, but generally QA should start getting involved in the "design phase")
Another example from my own first-hand experience:
A company I worked for made a product that plugged into machines that were manufacturing parts, and based on your parameters it would tell you whether or not the part was "good" or "bad".
When interviewing the leadership of the company, as well as the majority of the engineering group, "what is the biggest risk with this product" they all said "if the product locks up!". Upon further discussion, I pulled out a much larger, insidious risk; "what if our product tells the client that the part is 'good' when it is not?"
In this example, the part could be involved in a medical device that keeps someone alive.
You're not going to be able to roll that back.
The best way I've found to sell QA to management (especially sales/marketing/non-technical management), is to redefine them as marketing. QA output is as much about product and brand reputation management as finding bugs. IMO, nothing alienates customers faster than bugs, and bad experiences result in poor reputation. Marketing and sales people can usually assign value to passive marketing efforts, and recognise things that are damaging to retention and future sales.
We are currently in a “phase 2” test, of the project we’ve worked on, for the last year or so. It has shown us some issues (nothing major, though). Phase 1 testing showed us some nasties.
I had to force the team to do all this testing. They would have happily released before phase 1. I don’t think it would have ended well.
[0] https://littlegreenviper.com/miscellany/testing-harness-vs-u...
When I worked in the simulation space we used to get models sent in by customers where a convergence problem or crash would occur in 12 hours of running on a 128 core machine. Those were impossible as a developer to work with in debug mode which made the runtime even longer, so they needed someone to identify the cause of the problem and distill it down to a much smaller model where the bug could be replicated. The QA team in that were really application engineers and were subject matter experts, and they were absolutely invaluable.
Basically, it was a way to get a developer to do more testing during development before handing over a feature to QA. This sets up the imagined possibility of firing all QA staff and having developers write perfect code that never needs to be tested thoroughly. Looks great on paper...
At a previous company, they started firing all of the manual QA devs and replacing them with offshore QA people who could do automated testing development (Cypress tests). The only problem was that those fired QA team members had significant business domain knowledge that never was transferred or written down. The result was a dumpster fire every week, with the usual offshore nightmares and panicked responses.
Make no mistake about this, it's just a cost-cutting measure with an impact that is rarely felt immediately by management. I've worked with stellar manual QA people and our effort resulted in bulletproof software that handled edge cases, legacy customers, API versions, etc. Without these people, your software eventually becomes IT-IS-WHAT-IT-IS and your position is always threatened due to the lack of quality controls.
In my experience the slowest part has been marking a feature as done. I loved working at places with QA. I could assign tickets to QA once the PR was up.
Now I gotta build in that I’ll be bumping PRs for review for approximately 30-50% of the time I’m working on a feature.
At least with the comic style, you could plausibly say that that's canon to her character.
1. hire a contractor who just has no idea about anything. 2. hire someone and place them outside the engineering org (on the support team as a "support engineer" seems pretty popular) where they have little to no interaction with either engineering _or_ customers and expect them to work miracles.
1) someone who deeply understands how the product should work
2) someone who’s good at writing performant and maintainable tests
QA are usually the best people to know what's what when the rubber meets the road. They know where the bodies are buried. They often understand and have a pulse on customers and usage better than product managers. They know the ins-and-outs of how various features interact and how a product as a whole works better than silo'd developers.
Historically they were effectively the only "product owners" who could certify a release before it went out. They would coordinate with the right people to ensure all technical and non-technical deliverables and dependencies were met before releases. They were the best approximation of power users.
They often maintained test infrastructures on which deployments could be tested. In fact, they were the origins of automation or DevOps as a "thing" because they are the ones who saw all the friction points daily giving rise to CI/CD. Often times nobody listened because they were concerned with features.
QA has always been about investing resources within an organization to improve it. In effect, building things within it to improve it. Now that we have gotten the message, to some degree, of optimizing some manual pain points, we kick to the curb those who got us there without any regard for the value they provided, and instead decide to push to prod and test in production, pissing off customers even more.
If you ask yourself -- would you feel comfortable driving a car that was built and tested with automation alone? What about a space shuttle, an airplane, or medications? Or would you prefer that human test drivers or test pilots put those products through their paces before signing off on them? -- it might drive it home better.
But then again, the software industry has devolved since tech bro + VC culture pretty much ate it, chasing those sweet $$$.