Is there an authoritative source on automated software testing?

svdsinner

Ars Legatus Legionis
15,096
Subscriptor
I have always been an independent thinker on automated software testing (unit tests, integration tests, UI tests). I am passionate about ensuring code is tested, but when something needs testing, I have never really cared whether ivory-tower theorists would call the resulting test an integration test or a unit test. I start from the question "How do I verify that this specific piece of code works as intended?" and write tests to accomplish that, without worrying about ivory-tower concepts. My primary goal is to test what needs to be tested without wasting time writing "Mary had a little lamb" tests that provide no value.
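To make "Mary had a little lamb" concrete, here's a minimal C# sketch of the kind of test I mean (the Customer class and test are hypothetical, purely for illustration). It exercises nothing but an auto-property, so it can essentially never catch a real bug:
Code:
using Xunit;

public class Customer
{
    public string Name { get; set; } = "";
}

public class CustomerTests
{
    // A "Mary had a little lamb" test: it only re-verifies that a C#
    // auto-property stores what you assign to it, which the compiler
    // already guarantees. It costs maintenance time and provides no value.
    [Fact]
    public void Name_Setter_Stores_Value()
    {
        var customer = new Customer { Name = "Mary" };
        Assert.Equal("Mary", customer.Name);
    }
}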

I'm currently in charge of a major new project and am making sure it gets proper automated tests. This is the biggest project I've been in charge of, and I'm trying to hold myself to high standards. I'll need to make good choices on testing standards that other developers will follow. I probably won't go fully ivory-tower, but I think I should work through ivory-tower principles to confirm that I'm following the best of them, and to have solid, project-specific reasoning behind any deviations.
Details, if you care:
The project is a business-critical Blazor app (interactive auto render mode, .NET 8, C#) being written by a team of 6 (2 Sr developers, a UX expert, a tester, a BA, and a PM). It has a SQL Server back-end. The tester and the UX expert will write some of the automated UI tests because they are interested in building those skills. We will need basic tests that run against every build, and some tests that run every few days at most (long-running tests and tests that incur per-usage costs from 3rd-party systems). We'll use xUnit for testing. The initial coding of the app will take about 18 months, and the back-end will be expected to be maintained for 15-20 years after release. At least 1 new client UI (a modernization in a few years, maybe a mobile app, etc.) will eventually be written against the back-end. We don't have any corporate edicts, and the testing policy for this app will not be mindlessly applied to other projects in the company.
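For the every-build vs. every-few-days split, one mechanism we're considering (a sketch, not a mandate) is xUnit's [Trait] attribute combined with dotnet test --filter. The "Category"/"Nightly" names below are just a convention, and LiveProviderTests is a hypothetical example class:
Code:
using Xunit;

public class LiveProviderTests
{
    // Untagged facts run in the fast, per-build suite:
    //   dotnet test --filter "Category!=Nightly"
    [Fact]
    public void Request_Builder_Produces_Valid_Payload()
    {
        // ... fast, dependency-free assertions ...
    }

    // Tagged so only the scheduled job runs it:
    //   dotnet test --filter "Category=Nightly"
    [Fact]
    [Trait("Category", "Nightly")]
    public void Live_ThirdParty_Call_Succeeds()
    {
        // ... slow test that incurs per-usage 3rd-party costs ...
    }
}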

The question then becomes: What is a modern authoritative source for what automated testing should be in the ivory tower? Is there a book or website that I could consider as a basis to compare my decisions to?

NOTE: I'm not asking if there is a PERFECT authoritative source, just a source that has a comprehensive philosophy worth comparing and contrasting my testing to.
 

Mark086

Ars Tribunus Angusticlavius
10,601
Why not ask ChatGPT for that?


Test coverage is a critical metric in software development that measures how much of the codebase is exercised by automated tests. Here are some key insights and best practices regarding test coverage and the philosophy of aiming for 100% coverage:

### Types of Test Coverage
1. Statement Coverage: Ensures each statement in the code is executed at least once.
2. Branch Coverage: Ensures each possible path (true/false) in control structures like if-else statements is executed.
3. Path Coverage: Covers all possible paths through the code.
4. Condition Coverage: Ensures each condition in decision points is tested for both true and false outcomes (sources: "A Detailed Guide on Test Coverage"; "Test Coverage Metrics: What is, Types & Examples", PractiTest; "Code Coverage Tutorial: A Comprehensive Guide With Examples And Best Practices").
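To see why statement coverage is the weakest of these, consider this minimal C# sketch (the SafeDivide method is hypothetical): a single test calling SafeDivide(10, 2) executes every statement, yet branch coverage correctly reports that the b == 0 path was never exercised.
Code:
public static class MathUtil
{
    // One call with b != 0 executes every statement below (100% statement
    // coverage), but the false branch of the "if" is never taken,
    // so branch coverage reports a gap.
    public static int SafeDivide(int a, int b)
    {
        int result = 0;
        if (b != 0)
            result = a / b;
        return result;
    }
}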

### Best Practices

### Limitations of 100% Coverage

### Real-World Implementation
In practice, aiming for 100% coverage can be impractical and may not add significant value compared to the effort required. Focus on writing meaningful tests that cover critical paths and functionalities, and use code coverage as one of many metrics to ensure software quality (sources: "Code Coverage Tutorial: A Comprehensive Guide With Examples And Best Practices"; Google Testing Blog, "Code Coverage Best Practices").

For more detailed insights and examples, you can refer to resources such as LambdaTest, Simform, and the Google Testing Blog which provide comprehensive guides on test coverage and best practices in software testing.
 

koala

Ars Tribunus Angusticlavius
7,598
Tests are a means, not an end. Find your end, and decide how tests help you with that.

There is no authoritative source. Heck, there are even the "London" and "Chicago" schools of testing, which propose entirely opposite approaches.

I also disagree strongly with the unit/functional/etc. classification (and, ugh, the test pyramid). Tests can be fast or they can be slow. Robust or brittle. Easy to maintain or hard to maintain. They can be accurate (when they break, they point exactly at the problem) or inaccurate. They can be helpful to your end, or not. They can be very expensive, or cheap.

And unfortunately, all of those tend to be tradeoffs, so you cannot have it all.
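To make one of those tradeoffs concrete in C#/xUnit terms (a hedged sketch; DiscountService, IPriceRepo, and FakeRepo are hypothetical): a test that pins down how a result is computed is accurate but brittle, while one that asserts only the outcome is robust but vaguer when it fails.
Code:
using Xunit;

public interface IPriceRepo { decimal GetPrice(string sku); }

public class DiscountService
{
    private readonly IPriceRepo _repo;
    public DiscountService(IPriceRepo repo) => _repo = repo;
    public decimal PriceWithDiscount(string sku) => _repo.GetPrice(sku) * 0.9m;
}

public class DiscountTests
{
    private class FakeRepo : IPriceRepo
    {
        public int Calls;
        public decimal GetPrice(string sku) { Calls++; return 100m; }
    }

    // Robust: asserts only the observable outcome. Survives refactoring,
    // but when it fails you must dig to find out why.
    [Fact]
    public void Robust_asserts_only_the_outcome()
    {
        var svc = new DiscountService(new FakeRepo());
        Assert.Equal(90m, svc.PriceWithDiscount("SKU1"));
    }

    // Accurate but brittle: also pins down *how* the outcome is reached.
    // Breaks if we ever add caching or batch lookups, even though behavior
    // stays correct; but when it breaks, it points at exactly what changed.
    [Fact]
    public void Brittle_pins_down_the_interaction()
    {
        var repo = new FakeRepo();
        var svc = new DiscountService(repo);
        svc.PriceWithDiscount("SKU1");
        Assert.Equal(1, repo.Calls);
    }
}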

There are many good articles (though mostly old) on the Google Testing Blog, but I keep coming back to this one:


...

Personally, I tend to think of tests as a tool to speed up my coding and help me improve design.

For example, when developing complex logic, I want to isolate that logic so it's easy to create tests for it that "prove" it does what I want. That isolation means I can invoke my complex logic from a test without having to start up my application and click through a few screens to exercise it. And it actually improves the software design (sometimes).

Being able to iterate on development by running a quick test instead of starting up my application makes me get faster to what I want: typically, reliable software that does what it's designed to do, and that later can easily be extended/maintained.
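In C# terms, that isolation usually just means pulling the logic out of the UI layer into a plain class that an xUnit test can call directly. A hedged sketch with a hypothetical ProrationCalculator; the point is that the tests run in milliseconds, with no app startup, screens, or database:
Code:
using System;
using Xunit;

// Complex logic extracted into a plain class with no UI or DB dependencies.
public static class ProrationCalculator
{
    public static decimal Prorate(decimal monthlyFee, int daysUsed, int daysInMonth)
    {
        if (daysInMonth <= 0) throw new ArgumentOutOfRangeException(nameof(daysInMonth));
        return Math.Round(monthlyFee * daysUsed / daysInMonth, 2);
    }
}

public class ProrationTests
{
    [Fact]
    public void Half_the_month_costs_half_the_fee()
        => Assert.Equal(15.00m, ProrationCalculator.Prorate(30.00m, 15, 30));

    [Fact]
    public void Zero_days_used_costs_nothing()
        => Assert.Equal(0.00m, ProrationCalculator.Prorate(30.00m, 0, 30));
}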

The only way to know if you are doing testing well is to assess if it's helping you go faster to your destination.

(And this is also hard; sometimes you must invest upfront and slow down to achieve long-term maintainability goals. But sometimes that upfront investment goes wrong: it slows you down at the beginning and never speeds you up long-term.)

However, I think I have seen helpful tests, and I believe testing ranks high among "things worth investing in" precisely because it's difficult and hard to learn. (I don't think I'm good at it myself.)
 

snotnose

Ars Tribunus Militum
2,765
Subscriptor
Something to keep in mind is long-term tests. Years ago I was responsible for testing a component of a cell phone system. I had to set up calls, verify the voice A/D and D/A path worked, then tear down calls. My normal test suite, run every time I wanted to publish a software update, was a few hundred short tests; 99% were completely automated and ran in a second or two.

I also developed tests that would run overnight and over weekends. Their flow was something like:
Code:
Initialize the random number generator with the current date, and record that date. 
while(1) {
    Get a random forward channel, reverse channel, DSP, and a handful of other parameters.
    Setup a call
    Wait a random time up to 60 seconds
    Tear down the call
    If something went wrong log everything
}
I ran 8 threads executing the above (each board had 8 DSPs); I only had the resources to test 1 board this way.
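The key trick in that flow is seeding the random number generator with a recorded value so an overnight failure can be replayed exactly. Here's a hedged C# sketch of the same pattern; SetupCall/TearDownCall and the channel/DSP ranges are hypothetical stand-ins for the real hardware API:
Code:
using System;
using System.Threading;
using System.Threading.Tasks;

public static class SoakTest
{
    public static async Task RunAsync(int threads, CancellationToken ct)
    {
        // Record the seed so a failing overnight run can be replayed exactly.
        int seed = (int)(DateTime.UtcNow.Ticks & 0x7FFFFFFF);
        Console.WriteLine($"Soak test seed: {seed}");

        var tasks = new Task[threads];
        for (int t = 0; t < threads; t++)
        {
            int threadSeed = seed + t; // distinct but reproducible per thread
            tasks[t] = Task.Run(() => Worker(threadSeed, ct), ct);
        }
        await Task.WhenAll(tasks);
    }

    private static void Worker(int seed, CancellationToken ct)
    {
        var rng = new Random(seed);
        while (!ct.IsCancellationRequested)
        {
            // Hypothetical stand-ins for the real channel/DSP parameters.
            int forwardChannel = rng.Next(0, 1024);
            int reverseChannel = rng.Next(0, 1024);
            int dsp = rng.Next(0, 8);
            try
            {
                SetupCall(forwardChannel, reverseChannel, dsp);
                Thread.Sleep(TimeSpan.FromSeconds(rng.Next(0, 61)));
                TearDownCall(forwardChannel, reverseChannel, dsp);
            }
            catch (Exception ex)
            {
                // Log everything needed to reproduce the failure.
                Console.WriteLine($"seed={seed} fwd={forwardChannel} rev={reverseChannel} dsp={dsp}: {ex}");
            }
        }
    }

    // Hypothetical hardware hooks.
    private static void SetupCall(int fwd, int rev, int dsp) { /* ... */ }
    private static void TearDownCall(int fwd, int rev, int dsp) { /* ... */ }
}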