WIP: NUnit on-ramp minibook #745

Open: wants to merge 22 commits into master
63 changes: 35 additions & 28 deletions cSpell.json
@@ -1,5 +1,5 @@
{
"version": "0.1",
"version": "0.2",
"language": "en",
"words": [
"buildable",
@@ -13,6 +13,9 @@
"Dogfood",
"DWORD",
"Enumerables",
"Excella",
"Fadi",
"Goodhart's",
"Guid",
"Guids",
"Hashtable",
@@ -139,54 +142,58 @@
"osokin",
"lahma",
"unsortable",
"Dalsb酶"
"Dalsb酶",
"solated",
"epeatable",
"imely"

],
"patterns": [
{
"name": "Markdown links",
"pattern": "\\((.*)\\)",
"description": ""
"name": "Markdown links",
"pattern": "\\((.*)\\)",
"description": ""
},
{
"name": "Markdown code blocks",
"pattern": "/^(\\s*`{3,}).*[\\s\\S]*?^\\1/gmx",
"description": "Taken from the cSpell example at https://cspell.org/configuration/patterns/#verbose-regular-expressions"
"name": "Markdown code blocks",
"pattern": "/^(\\s*`{3,}).*[\\s\\S]*?^\\1/gmx",
"description": "Taken from the cSpell example at https://cspell.org/configuration/patterns/#verbose-regular-expressions"
},
{
"name": "Inline code blocks",
"pattern": "\\`([^\\`\\r\\n]+?)\\`",
"description": "https://stackoverflow.com/questions/41274241/how-to-capture-inline-markdown-code-but-not-a-markdown-code-fence-with-regex"
"name": "Inline code blocks",
"pattern": "\\`([^\\`\\r\\n]+?)\\`",
"description": "https://stackoverflow.com/questions/41274241/how-to-capture-inline-markdown-code-but-not-a-markdown-code-fence-with-regex"
},
{
"name": "Link contents",
"pattern": "\\<a(.*)\\>",
"description": ""
"name": "Link contents",
"pattern": "\\<a(.*)\\>",
"description": ""
},
{
"name": "Snippet references",
"pattern": "-- snippet:(.*)",
"description": ""
"name": "Snippet references",
"pattern": "-- snippet:(.*)",
"description": ""
},
{
"name": "Snippet references 2",
"pattern": "\\<\\[sample:(.*)",
"description": "another kind of snippet reference"
"name": "Snippet references 2",
"pattern": "\\<\\[sample:(.*)",
"description": "another kind of snippet reference"
},
{
"name": "Multi-line code blocks",
"pattern": "/^\\s*```[\\s\\S]*?^\\s*```/gm"
"name": "Multi-line code blocks",
"pattern": "/^\\s*```[\\s\\S]*?^\\s*```/gm"
},
{
"name": "HTML Tags",
"pattern": "<[^>]*>",
"description": "Reference: https://stackoverflow.com/questions/11229831/regular-expression-to-remove-html-tags-from-a-string"
"name": "HTML Tags",
"pattern": "<[^>]*>",
"description": "Reference: https://stackoverflow.com/questions/11229831/regular-expression-to-remove-html-tags-from-a-string"
},
{
"name": "UID Lines",
"pattern": "uid: (.*)"
}
],
"ignoreRegExpList": [
],
"ignoreRegExpList": [
"Markdown links",
"Markdown code blocks",
"Inline code blocks",
@@ -196,5 +203,5 @@
"Multi-line code blocks",
"HTML Tags",
"UID Lines"
]
]
}
45 changes: 45 additions & 0 deletions docs/articles/book/Index.md
@@ -0,0 +1,45 @@
# Automated Testing & TDD with NUnit: An On-Ramp

## About This Series

### Who is it for?

This series aims to be for everyone -- from people who've never written a unit test to people who have used NUnit but would like to brush up on some of the theory and practices.

We'll try to split up the articles so that you can dive in and focus on the parts that you care about. And we'll try to use real-world examples along the way.

We'll also try to make it as succinct as possible, because we're not getting paid by the word -- or indeed, at all :smile: -- for this.

### Strong Opinions, Loosely Held

This guide is naturally going to reflect the opinions of the primary author. However, along the way, we'll try to point out where another school of thought might approach something differently.

One thing we do want to be clear about: there are several ways to do testing well, and there is no "one true way" to do it right, especially because the context and trade-offs of each project and team are unique. We'll do our best not to present opinion as fact, and we'll work toward including more adjacent insight as we build out the guide.

Similarly, we're not trying to "sell" you on TDD. We find value in it in many cases, so we'll talk about it. Likewise, NUnit is a great library for testing -- but it's by no means the only one, and alternatives like xUnit are quite popular (even with us!). To each their own; we hope that, if nothing else, some of the theory and practical tips here will be useful no matter which library you choose.

### What Tech Stack Are You Using?

We're writing this primarily from the perspective of .NET Core and onward, because with .NET 5 this is the path forward that the .NET team has chosen for the technology. With that said, we'll absolutely augment this guide with tips and explainers for those who are on the classic .NET Framework, and if any of what we say doesn't work for you, let us know!

### This Guide Isn't Intended as a Docs Replacement

This guide is going to delve into on-boarding, concepts, and thoughts on how to approach automated testing in general. But while we're going to give examples of syntax & features, we're not going to cover _every_ bit of syntax & features. If you'd like more on a certain topic, absolutely suggest it, but please try to make sure that it would bring something unique to the guide.

## This is a Living Thing. Have Feedback or Improvements?

No improvement to this will happen without you. If you have a question, chances are someone else will too -- please ask! If you have an improvement, we'd love to hear about it. [Create an issue in the docs repository](https://www.notion.so/seankilleen/TBD) to start a conversation.

## Possible Future Directions

It's possible this could expand to the point where it makes sense to stand it up on its own. If that happens, maybe it will move out of the NUnit docs and over to somewhere else.

## Credit Where It's Due

We've read a lot about testing over the years from a lot of places. Wherever we are aware (or are made aware) of credit being owed for a particular contribution, we'll be sure to cite it. Much of the knowledge here is considered general, mainstream knowledge in the industry. If you are reading this and think someone needs to be cited to receive credit for something, by all means -- let us know!

## About the Author

This series is originally by [Sean Killeen](https://SeanKilleen.com) ([Mastodon](https://mastodon.social/@sjkilleen), [GitHub](https://github.com/SeanKilleen)) with additional contributions from the NUnit team and our community.

Sean is a Principal Technical Fellow for Modern Software Delivery at [Excella](https://excella.com). He has taught courses in modern testing and test automation as part of a ScrumAlliance CSD-certified course and an ICAgile ICP-TST/ICP-ATA certified course. He is active in the .NET community including OSS contributions, and is a member of the NUnit Core Team.
55 changes: 55 additions & 0 deletions docs/articles/book/concepts/TestTradeoffs.md
@@ -0,0 +1,55 @@
# Test Trade-offs

Different types of tests have different trade-offs in their usage.

Typically, automated tests are thought of as a pyramid or a funnel.
> **Review comment (Member):** This might deserve a separate heading "The Test Triangle/Pyramid", and a figure would also be nice.
>
> **Reply (Member Author):** Definitely, that's in the plan 👍

* In a pyramid visualization, unit tests comprise the base of the pyramid (the largest part). On top of them are integration tests, then acceptance/functional tests, then UI tests.
* In a funnel visualization, the pyramid is inverted, and we think about unit tests as catching a majority of potential issues, followed by integration tests and acceptance/functional tests.

The thinking behind both of these visualizations is that you want most of the tests in your project to be unit tests, followed by integration tests and acceptance/functional tests because of the trade-offs we're about to get into.

While the approaches above offer a generalized model, the reasoning behind them is the important part to consider.
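
To make that concrete, here's a minimal sketch of the kind of fine-grained unit test that sits at the base of the pyramid (the `PriceCalculator` class is a hypothetical example, not part of NUnit):

```csharp
using NUnit.Framework;

// A hypothetical class under test -- small, deterministic, no external dependencies.
public class PriceCalculator
{
    public decimal ApplyDiscount(decimal price, decimal rate) => price * (1 - rate);
}

[TestFixture]
public class PriceCalculatorTests
{
    [Test]
    public void ApplyDiscount_TenPercent_ReducesPriceByTenPercent()
    {
        var calculator = new PriceCalculator(); // no database, no network -- just the unit

        var result = calculator.ApplyDiscount(100m, 0.10m);

        Assert.That(result, Is.EqualTo(90m));
    }
}
```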

## The Journey Away from Fine-Grained Tests

As we move further away from unit tests toward coarser-grained tests, a few things happen:

* **Tests require more setup**. The further away you go from unit tests, typically the more setup those tests require. Instantiating a new database for each test run is much more setup than using a fake dependency (see the sketch after this list).
* **Tests can fail for more than one reason**. When a unit test fails, you almost always can pinpoint exactly what's happening. But an integration test may comprise many units; when it fails, how do you know which unit is responsible for the failure, unless you also have a failing unit test?
* **Tests can fail for unrelated reasons**. For example, if you have a UI test that refers to a certain element, and the name or position of that element changes, your UI test may fail even though the UI is actually in perfectly working order.
* **Tests take (much) longer to run**. As a rule of thumb, I can run approximately 500-1,000 unit tests per second. However, if I have to instantiate a real database and make round trips of data to that database, a given test will take substantially longer. I once worked on a project where a few thousand unit tests took a few seconds to execute, but a few hundred integration tests took a few hours.
* **Tests take longer to fix**. Because of the longer execution time, multiple possible failure points, and potential for flakiness, troubleshooting these tests is often more difficult, which can lead to them being left flaky for a long time or (worse) ignored or deleted altogether.
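
As a hedged sketch of that setup gap -- all of the types here (`IOrderStore`, `FakeOrderStore`, `OrderService`, and the `TestDatabase` helper) are hypothetical stand-ins, not real APIs:

```csharp
using System.Collections.Generic;
using NUnit.Framework;

// Hypothetical production types, sketched only to make the contrast concrete.
public interface IOrderStore
{
    void Save(string item, int quantity);
}

public class FakeOrderStore : IOrderStore
{
    public List<(string Item, int Quantity)> SavedOrders { get; } = new();
    public void Save(string item, int quantity) => SavedOrders.Add((item, quantity));
}

public class OrderService
{
    private readonly IOrderStore _store;
    public OrderService(IOrderStore store) => _store = store;
    public void PlaceOrder(string item, int quantity) => _store.Save(item, quantity);
}

// Hypothetical helper; a real version would create schemas, seed data, etc.
public static class TestDatabase
{
    public static void CreateFresh(string connectionString) { /* ... */ }
}

[TestFixture]
public class OrderServiceUnitTests
{
    [Test]
    public void PlaceOrder_SavesTheOrder()
    {
        // Unit-level setup: one in-memory fake, created instantly.
        var store = new FakeOrderStore();
        var service = new OrderService(store);

        service.PlaceOrder("widget", quantity: 2);

        Assert.That(store.SavedOrders, Has.Count.EqualTo(1));
    }
}

[TestFixture]
[Category("Integration")]
public class OrderServiceIntegrationTests
{
    [SetUp]
    public void ProvisionDatabase()
    {
        // Coarser-grained setup: stand up a real database before each test.
        // Real suites often spend more code (and time) here than in the tests.
        TestDatabase.CreateFresh("Server=localhost;Database=orders_test;");
    }

    // ...tests that make real round trips to that database...
}
```

The unit test above runs in memory and fails for exactly one reason; everything the integration fixture adds (provisioning, connection strings, teardown) is also something that can break.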

With that said, a project that completely neglects a layer is likely to suffer as well:

* The team may make assumptions about how units of code work together, only to find that real components behave differently in practice.
* The team may miss important considerations of coarser-grained tests, such as the contracts for an API (see the sketch after this list).
* If there are no UI tests at all, it could be possible to deploy an application that passes and yet has a UI that is completely inoperable.
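
For instance, a coarser-grained contract check might look something like the sketch below. It assumes the `Microsoft.AspNetCore.Mvc.Testing` package, plus a hypothetical `Startup` class and `/api/orders` route from the application under test:

```csharp
using System.Net;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Testing;
using NUnit.Framework;

[TestFixture]
[Category("Integration")]
public class OrdersApiContractTests
{
    [Test]
    public async Task GetOrders_ReturnsOkWithJson()
    {
        // Boots the application in memory -- real routing, real serialization.
        using var factory = new WebApplicationFactory<Startup>(); // Startup is hypothetical
        using var client = factory.CreateClient();

        var response = await client.GetAsync("/api/orders");

        Assert.That(response.StatusCode, Is.EqualTo(HttpStatusCode.OK));
        Assert.That(response.Content.Headers.ContentType?.MediaType, Is.EqualTo("application/json"));
    }
}
```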

## Finding the Right Trade-offs for Your Team

Each codebase has a different context and set of trade-offs that might inform which test types to use. Examples:

* Teams with the ability to minimize the setup and execution time of their tests may benefit from more integration tests.
* Legacy projects with little test coverage often start with UI tests to establish a baseline of confidence, and then take some of those tests and "push them down" into several API tests or integration tests to alleviate some of the trade-offs.
* Teams with a high degree of non-coder collaboration may write a higher number of acceptance/functional tests because the language of the tests is closer to the language they use with their stakeholders.

Keep some of the below in mind and you may avoid some pitfalls:

* **Actively talk about and re-evaluate test types**. For example:
* If a number of UI tests have built up confidence and you've seen no failures, and those tests are appropriately covered by finer-grained tests, it may make sense to retire them.
* If you keep getting caught off-guard by integration issues, it may make sense to invest more time in integration or acceptance tests.
* If you've discovered a way to reduce the execution time and maintenance burden of a given layer of tests, it may make sense to invest more in that layer.
* **Remember: The goal is _confidence_**.
* If a test fails, it should be treated as an issue until it can be proven otherwise.
* Don't settle for flaky tests if you can at all avoid doing so.
* If a test no longer serves to improve confidence in the system (and doesn't meaningfully play into the living documentation of the system), consider removing it or pushing it into finer-grained tests.
* If the maintenance of a set of unit tests is costly and things are well-covered by integration tests that provide a high degree of confidence, perhaps some of those unit tests can be retired.
* **Keep execution times as fast as possible**. The goal is to run as many tests as possible as often as possible. If a set of tests takes 6 hours to run, how will you be able to get confidence in pushing a branch of code prior to merging it in? More often than not, those tests will be skipped. (One concrete lever is shown in the sketch after this list.)
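
One lever NUnit itself offers for execution time is parallel test execution. A minimal sketch -- the scope and worker count below are illustrative; tune them to your suite and hardware:

```csharp
using NUnit.Framework;

// Place in any one file in the test project (e.g. AssemblyInfo.cs):
// run test fixtures in parallel with each other...
[assembly: Parallelizable(ParallelScope.Fixtures)]

// ...using up to four worker threads.
[assembly: LevelOfParallelism(4)]
```

Test filtering helps too: running only the fast categories locally (for example, `dotnet test --filter TestCategory!=Integration`) while CI runs everything keeps the inner loop quick.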

## What About Commercial Testing Products?

Because this guide is intended for NUnit itself, we won't delve into that topic too much. However, these products tend to _lengthen_ the feedback loops around testing & results, when typically what we want is _as many tests as possible_ running _as often as possible_. Commercial tools tend to take what should be a continuous process/mindset and extract it into a separate role or separate team. We're pretty dedicated to the idea of agility these days and would prefer that testing happen alongside the work in close collaboration within cross-functional teams.

That's not to say we'd never recommend using a commercial testing product -- it certainly may be better than having no tests at all or an entirely manual process. But teams and organizations should be extremely careful of the lagging test feedback, high maintenance burden, and fragility of such endeavors. When in doubt, keep tests close to the work.