Have you ever wondered why QA is a separate role from development? It’s not obvious why this should be the case. In fact, big tech companies like Meta, Microsoft, Amazon and Google barely employ any dedicated testers.

However, there are many psychological, ethical and jurisprudential considerations to take into account. Humans tend to fall prey to confirmation bias, and developers are particularly susceptible to it. In simple terms, if you expect something to work, you’re not open to the possibility of it not working. You’re much more likely to interpret every piece of evidence as proof of anything other than a bug in your own code; in essence, you’ll want to explain away any evidence presented to you.

Still, I think the jurisprudential aspects are more important and interesting. There’s a legal principle that states nemo iudex in causa sua, Latin for “no one is a judge in their own case”. It plainly means that no one can judge their own case fairly, but not only that: no one can judge a case in which they have an interest, even if they’re not a party to it. Judges must be neutral, impartial and fair. And in essence, QA engineers are judges.

Requirements are norms

For the sake of the argument, bear in mind that “quality assurance” and “software testing” are not entirely synonymous. Software testing is a big part of quality assurance, but not the whole of it. QA is not only about “verifying” that software works as expected, but also about refining what “expected behavior” means in any given context. The two main sides of that process are analyzing requirements and proposing enhancements whenever possible.

Now, when we read requirements (often in the form of user stories) we quickly realize they tell us what the system ought to do as opposed to what it is actually doing. As a matter of fact, requirements are norms. That implies that everything we know about deontic logic can be applied to requirement analysis. Even more than that: the whole body of legal reasoning can be adapted to this task.

In fact, I do it all the time. When I was working as a Google contractor, I did session-based testing for the Google Workspace Marketplace. Since we were testing third-party apps, there were no clearly defined criteria for testing each one individually, only a set of general guidelines established by Google. One day I faced a dilemma: I found a general requirement saying A and a specific requirement saying not A. What’s the solution?

This might leave some people scratching their heads, but not lawyers. For them the answer is very simple: lex specialis derogat legi generali. The law governing the specific subject matter overrides a law governing only general matters.
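
To make the principle concrete, here’s a minimal sketch of lex specialis as a conflict-resolution rule for requirements. The Rule class, the specificity scores and the example verdicts are hypothetical illustrations, not Google’s actual guidelines:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    scope: str        # e.g. "all Marketplace apps" vs. "Drive add-ons"
    specificity: int  # higher means more specific
    verdict: str      # what the rule prescribes

def applicable_verdict(conflicting: list[Rule]) -> str:
    """Lex specialis: among conflicting rules, the most specific one prevails."""
    return max(conflicting, key=lambda r: r.specificity).verdict

general = Rule(scope="all Marketplace apps", specificity=1,
               verdict="A: the app must do X")
specific = Rule(scope="Drive add-ons", specificity=2,
                verdict="not A: the app must not do X")

print(applicable_verdict([general, specific]))  # -> "not A: the app must not do X"
```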

Pretty much in the same fashion as real judges, QA engineers must not only contrast the expected behavior (the norm, the requirement) against the actual behavior, but also determine what the applicable norm even is. The process of determining what the law is entails facing ambiguity and vagueness in the text, but also legal gaps.

Legal gaps are essentially unregulated cases. In a software engineering context, they translate to potential or actual use cases that have no associated requirements. Here is where the “testing” aspect recedes and “quality assurance” steps in. It’s the QA engineer’s task to remove ambiguity and vagueness from the requirements, but also to proactively suggest improvements that didn’t occur to anyone before yet are perfectly compatible with, and complementary to, the explicit requirements. And let’s not forget that there are also implicit requirements: use cases that follow logically from the requirements but can’t be found anywhere in the documentation. In summary, there’s a creative aspect to QA engineering that goes beyond just “applying requirements” (see legal realism for a deep dive into how judges actually create law).

Allowed vs. Forbidden

It’s generally accepted that, for security purposes, blocklists are a bad practice, simply because you can’t think of every single security flaw that might arise in your system. Therefore, allowlists have become the industry standard: only specifically pre-approved connections are permitted.
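
Here’s a minimal sketch of the two mindsets (the host names are made up):

```python
BLOCKLIST = {"known-bad.example.net"}               # forbidden only if listed
ALLOWLIST = {"api.example.com", "cdn.example.com"}  # forbidden unless listed

def blocklist_permits(host: str) -> bool:
    # Everything not explicitly forbidden is allowed:
    # any threat you didn't think of slips through.
    return host not in BLOCKLIST

def allowlist_permits(host: str) -> bool:
    # Everything not explicitly allowed is forbidden:
    # unknown hosts are rejected by default.
    return host in ALLOWLIST

print(blocklist_permits("brand-new-threat.example.net"))  # True  (the flaw)
print(allowlist_permits("brand-new-threat.example.net"))  # False (safe default)
```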

As free citizens, we’re used to the general private-law principle that everything which is not forbidden is allowed. Meaning that, in principle, you’re permitted to do anything you want as long as there’s no law explicitly prohibiting that behavior. This closely parallels the blocklist mentality.

In security testing, though, the opposite principle should apply: everything which is not allowed is forbidden. This principle comes from English common law and applies only to public authorities. In public administration, actions are limited to the powers explicitly granted to the authorities by law. If there’s no law authorizing them to act, then the act is invalid. In other words, all actions by the administration must be “allowlisted” by previously sanctioned norms.

This has profound implications for requirements interpretation. Take the example of field-level security. Who has permission to edit this or that field? Should we assume that everyone is allowed unless they’re excluded for some specified reason, or that no one has permission unless they fall under an explicit requirement? The answer: the second option is always best. If everyone is allowed to change data in the database, that is very likely to cause problems; according to Murphy’s law, it’s bound to. It’s better to play it safe and define explicit rules governing user permissions before anybody can generate unexpected headaches.
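
A minimal sketch of that default-deny approach, with hypothetical roles and fields:

```python
# Explicit grants: each field maps to the roles allowed to edit it.
FIELD_EDITORS = {
    "salary": {"hr_admin"},
    "email":  {"hr_admin", "manager"},
}

def can_edit(role: str, field: str) -> bool:
    """Everything which is not allowed is forbidden: a role may edit a field
    only if an explicit rule grants it permission."""
    return role in FIELD_EDITORS.get(field, set())

assert can_edit("hr_admin", "salary")
assert not can_edit("manager", "salary")  # no explicit grant -> denied
assert not can_edit("manager", "bonus")   # unregulated field -> denied by default
```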

Ideally, allowlists should reflect the company’s bylaws, its own set of governing rules that establish rights, obligations and prohibitions for every employee. If these rules aren’t defined already, it’s highly recommended that you set aside some time to discuss permissions.

Burden of proof and beyond reasonable doubt

When it comes to finding bugs, it would be unreasonable to assume that the app has bugs unless proven otherwise, because it’s very hard to prove a negative (i.e. that there are no bugs at all), a fact reflected in the epistemological principle that absence of evidence is not evidence of absence. In other words, you can’t conclude that there are no bugs in the app just because you haven’t found any.

In reality, apps enjoy what I like to call a “presumption of innocence”: they’re “innocent” (bugless) until we can actually prove they’re “guilty” (buggy). It’s the QA engineer who bears the burden of proof, establishing beyond a reasonable doubt that there is indeed a bug. Simply put, the QA person must convince developers that no reasonable explanation can account for the evidence presented other than a defect in the code.
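
One practical way to carry that burden is a small, reproducible failing test: it states the norm (the expected behavior) and demonstrates the deviation in a way that’s hard to explain away. A minimal sketch, where apply_discount stands in for a hypothetical piece of production code under suspicion:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    # Stand-in for the code under suspicion.
    return price - price * percent  # bug: treats 10 as 1000%, not 10%

class DiscountEvidence(unittest.TestCase):
    def test_ten_percent_discount(self):
        # The norm (requirement): 10% off 100.0 is 90.0.
        self.assertEqual(apply_discount(100.0, 10), 90.0)

if __name__ == "__main__":
    unittest.main()  # the failing test is the evidence presented to the developers
```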

Conclusion

Quality assurance extends beyond traditional software testing, encompassing the analysis of requirements and the proactive enhancement of software behavior. QA engineers, akin to judges, play a crucial role in interpreting and refining requirements, addressing ambiguity and vagueness, and suggesting improvements. The identification of legal gaps in the form of unregulated cases mirrors the proactive nature of QA in filling in potential use cases not explicitly covered by requirements.

In essence, the multifaceted nature of quality assurance, encompassing legal, ethical, and psychological dimensions, reinforces its distinct and indispensable role in the software development process.