Different Types of Software Testing (Unit/Integration/E2E) and TDD/BDD Techniques
Software Testing helps ensure quality, prevent bugs, reduce development costs, and improve performance. Both inspection and testing are important methods to detect defects.
When I started my software engineering career, I kept hearing different terms for types of testing and was confused about what these tests really are. Therefore, I would like to gather some of the most commonly used terms and introduce the concept behind each test, which might help when you are unsure what a term means.
In general, developers write unit tests to verify that individual units of code (e.g., functions or methods) work correctly in isolation. QA engineers may write and execute different types of tests, including integration and system tests, but unit testing is primarily a developer’s responsibility. Many types of testing overlap, and it is always good to keep logs.
Unit Testing
- The lowest level of testing; it focuses on a single unit and is the base for confirming that an individual component is correctly coded and carries out its intended functionality
- It tests individual building blocks in isolation, exercising just a small chunk of code, like a method
- It often requires mocks: dependencies are mocked out and return dummy values
- It uses a driver to call the unit and stubs to stand in for its dependencies, isolating the unit from the rest of the system
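As a sketch of unit testing in isolation, the hypothetical `PriceService` below depends on an external rate provider; the dependency is mocked with `unittest.mock` so the unit is exercised with dummy values only:

```python
from unittest.mock import Mock

class PriceService:
    """Hypothetical unit under test: converts a USD price using a rate provider."""
    def __init__(self, rate_provider):
        self.rate_provider = rate_provider  # external dependency (e.g., an HTTP client)

    def price_in_eur(self, usd):
        rate = self.rate_provider.get_rate("USD", "EUR")
        return round(usd * rate, 2)

# Mock the dependency so the unit is tested in isolation with a dummy value.
mock_provider = Mock()
mock_provider.get_rate.return_value = 0.9

service = PriceService(mock_provider)
assert service.price_in_eur(10.0) == 9.0
mock_provider.get_rate.assert_called_once_with("USD", "EUR")
```

The unit test never touches a real rate service; any failure points directly at `price_in_eur` itself.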
Integration Testing
- It tests a workflow that involves multiple building blocks working together, and may query a real database
- Focuses on the communication and interaction interfaces between groups of subsystems, and eventually the entire system
- Unit testing is the base for integration testing; without it, you cannot tell where a failure originates
- Approaches: Big Bang, Bottom-up, Top-down, Modified Sandwich, and Continuous Integration
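A minimal integration-test sketch (the `SignupWorkflow` and `UserRepository` classes are hypothetical): the workflow and the repository are exercised together against a real, in-memory SQLite database rather than mocks:

```python
import sqlite3

class UserRepository:
    """Data-access component backed by a real database connection."""
    def __init__(self, conn):
        self.conn = conn
        self.conn.execute("CREATE TABLE IF NOT EXISTS users (name TEXT)")

    def add(self, name):
        self.conn.execute("INSERT INTO users (name) VALUES (?)", (name,))

    def count(self):
        return self.conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]

class SignupWorkflow:
    """Business-logic component that uses the repository."""
    def __init__(self, repo):
        self.repo = repo

    def signup(self, name):
        if not name:
            raise ValueError("name required")
        self.repo.add(name)

# Integration test: drive the whole workflow and let it hit the database,
# verifying that the building blocks cooperate correctly.
conn = sqlite3.connect(":memory:")
workflow = SignupWorkflow(UserRepository(conn))
workflow.signup("alice")
workflow.signup("bob")
assert workflow.repo.count() == 2
```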
System Testing
- Focuses on the entire system, usually at release time
- Determines whether the system meets both functional and non-functional requirements (a broader scope than E2E testing)
End-to-End (E2E) Testing
- Focuses on the flow of the system and how all components interact with each other
- Tests complete scenarios from start to finish, as the software will be used by actual users
Regression Testing
- Retests when changes are made; can be applied at different levels
- Refactoring should be followed by regression testing
- Checks whether changes introduce any new bugs
Test Case Components
A well-defined test case typically consists of the following components:
- Test Case ID
- Test Case Title
- Test Objective
- Preconditions
- Inputs
- Expected Results
- Test Steps
- Actual Results
- Pass/Fail Criteria
- Notes/Comments
- Test Environment/Setup
- Dependencies
- Test Data
- Test Execution Date
- Tested By
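If test cases are managed in code, these components can be captured in a simple record type; the sketch below (field names and sample values are illustrative) uses a Python dataclass:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TestCase:
    # Fields mirror the common test-case components listed above.
    test_case_id: str
    title: str
    objective: str
    preconditions: list = field(default_factory=list)
    inputs: dict = field(default_factory=dict)
    expected_results: str = ""
    test_steps: list = field(default_factory=list)
    actual_results: str = ""
    passed: Optional[bool] = None   # Pass/Fail criteria outcome; None = not run
    notes: str = ""
    environment: str = ""
    dependencies: list = field(default_factory=list)
    test_data: dict = field(default_factory=dict)
    execution_date: str = ""
    tested_by: str = ""

# Illustrative instance for a hypothetical login feature.
case = TestCase(
    test_case_id="TC-001",
    title="Login with valid credentials",
    objective="Verify a registered user can log in",
    preconditions=["User account exists"],
    inputs={"username": "alice", "password": "secret"},
    expected_results="User lands on the dashboard",
    test_steps=["Open login page", "Enter credentials", "Click Login"],
)
```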
Driver and Stub
driver -> UnitToTest -> stub. A test consists of Setup (Given), Input (When), Expected output (the oracle) (Then), and a log.
Driver: a component that calls the UnitToTest and controls the test cases.
Stub: a component the UnitToTest depends on; a partial implementation that returns fake values.
Use the driver and the stub to separate the UnitToTest from the rest of the system.
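A hand-rolled sketch of this setup, with a hypothetical `OrderChecker` as the UnitToTest:

```python
# Hypothetical UnitToTest that depends on an inventory lookup.
class OrderChecker:
    def __init__(self, inventory):
        self.inventory = inventory  # dependency, replaced by a stub in tests

    def can_fulfill(self, item, qty):
        return self.inventory.stock_level(item) >= qty

# Stub: partial implementation of the dependency that returns fake values.
class InventoryStub:
    def stock_level(self, item):
        return 5  # hard-coded fake stock level

# Driver: calls the UnitToTest and controls the test cases.
def driver():
    unit = OrderChecker(InventoryStub())          # Setup (Given)
    results = []
    for item, qty, expected in [("pen", 3, True), ("pen", 9, False)]:
        actual = unit.can_fulfill(item, qty)      # Input (When)
        results.append(actual == expected)        # Expected output / oracle (Then)
        print(f"can_fulfill({item!r}, {qty}) -> {actual}")  # log
    return results

assert all(driver())
```

The driver and stub together cut the unit free of the rest of the system: no real inventory service is involved.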
Acceptance Testing
- Verifies whether a system satisfies the business requirements
- More on the client side: it verifies the system deliverables, whether a user story has been correctly implemented, and the conditions of satisfaction
- Given … (setup/preconditions), When … (input/actions), Then … (expected outputs)
Functional Testing
- Focuses on whether the entire system meets the functional requirements
- Deploy your code in staging, spot-check, focus on business requirements, and expect to get specific values from the database
Non-functional Testing
- Mostly performed at the system level
- For example: usability testing, scalability testing, security testing, stress testing, timing testing, etc.
- Usually performed at the unit level; can also be performed at the system level
- Simulates user behavior and makes sure scenarios work from the point of view of an end user
- Usually performed at the system level
Manual Testing
- Executes the test cases without using any automated testing tools
White-box, Black-box, and Grey-box Testing
- White box: verifies that the internal functions of the software are correct and efficient
- Black box: considers only the external behavior of the system
- Grey box: the tester has partial knowledge of the internal workings of the software
Equivalence Partitioning and Boundary Value Analysis
- A functional testing technique that supplies a minimum number of inputs and evaluates the corresponding outputs; the idea is to select a small number of test cases using domain knowledge of the input partitions
- The boundary cases are commonly used as the best representative cases of each partition
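A sketch of equivalence partitioning with boundary values, using a hypothetical eligibility check for ages 18 through 65:

```python
def is_eligible(age):
    """Hypothetical function under test: eligible for ages 18..65 inclusive."""
    return 18 <= age <= 65

# Equivalence partitions: below range, in range, above range.
# Boundary values (17, 18, 65, 66) are the best representatives at each edge.
cases = [
    (17, False),  # just below the valid partition
    (18, True),   # lower boundary of the valid partition
    (40, True),   # representative value inside the partition
    (65, True),   # upper boundary of the valid partition
    (66, False),  # just above the valid partition
]
for age, expected in cases:
    assert is_eligible(age) == expected
```

Five inputs cover all three partitions and both boundaries, instead of testing every possible age.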
Contract Testing
- Tests the interactions between different microservices or software components based on the contracts between them
- A contract is like a constraint between two entities: both share the same understanding of the documents exchanged
- Pact is a code-first tool for testing HTTP and message integrations using contract tests
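Real contract tests would use Pact's own DSL; the toy sketch below only illustrates the idea of a consumer-driven contract that both sides must honor (all names and shapes are hypothetical):

```python
# A simplified, Pact-style consumer contract: the consumer records the
# request it will make and the response shape it expects back.
contract = {
    "request": {"method": "GET", "path": "/users/1"},
    "response": {"status": 200, "body": {"id": int, "name": str}},
}

# Hypothetical provider implementation to verify against the contract.
def provider_handle(method, path):
    if method == "GET" and path == "/users/1":
        return 200, {"id": 1, "name": "alice"}
    return 404, {}

def verify_contract(contract, handler):
    """Replay the contract's request and check the provider's response shape."""
    req, expected = contract["request"], contract["response"]
    status, body = handler(req["method"], req["path"])
    if status != expected["status"]:
        return False
    # Both entities must share the same understanding of the document.
    return all(isinstance(body.get(k), t) for k, t in expected["body"].items())

assert verify_contract(contract, provider_handle)
```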
API Testing
- Validates Application Programming Interfaces (APIs)
- Sends calls to the API, gets the output, and notes down the system's response
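A self-contained API-test sketch: it spins up a tiny local HTTP service (standing in for a real API), sends a call, and checks the response; the `/health` endpoint and its payload are hypothetical:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Minimal local API to test against (stands in for the real service).
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            payload = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(payload)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# API test: send the call, get the output, and record the response.
url = f"http://127.0.0.1:{server.server_port}/health"
with urlopen(url) as resp:
    status = resp.status
    body = json.loads(resp.read())
server.shutdown()

assert status == 200
assert body == {"status": "ok"}
```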
Installation Testing
- Checks whether the application is successfully installed and working as expected after installation
Performance Testing
- Evaluates how a system performs; measures reliability, speed, scalability, and responsiveness
Smoke Testing
- Performed after a new build to check the basic functionality of the system
- The result is usually used to decide whether a build is stable enough to proceed with further testing
Release Testing
- Verifies that a particular release meets the specified requirements and is ready for release to end users
Penetration Testing
- An authorized simulated attack performed on a computer system to evaluate its security
Alpha Testing
- Internal acceptance testing
- The initial examination phase, conducted in-house before the product is opened to a broader group of users in beta testing
Beta Testing
- External user acceptance testing performed by real users in a “real environment”
- Follows successful alpha testing and occurs just before the final product launch, when the product is almost ready for market release
Continuous Integration (CI)
Continuous integration requires the use of automated testing tools. With continuous integration, the system under test is always runnable.
- build from day one
- test from day one
- integration from day one
- the system is always runnable
Requires integrated tool support:
- continuous build server
- automated tests with high coverage
- tool supported refactoring
- software configuration management
- issue tracking
TDD — Test Driven Development
- Test Early, Test Often
- Test — Design — Test — Implementation — Test
- Red — Green — Refactor
A technique in which you describe the expected behavior of your code before you implement it: the development process starts with writing tests before any production code. TDD follows the Red-Green-Refactor cycle: first write failing tests (red), then write just enough production code to make the tests pass (green), and then refactor the code to improve its design and performance. TDD mainly focuses on the correctness of individual functions. After refactoring, always perform regression testing.
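The Red-Green-Refactor cycle can be sketched with a classic FizzBuzz exercise; the comments mark each phase:

```python
# RED: write the test first, for a function that does not exist yet.
# Running this test before the implementation below would fail.
def test_fizzbuzz():
    assert fizzbuzz(3) == "Fizz"
    assert fizzbuzz(5) == "Buzz"
    assert fizzbuzz(15) == "FizzBuzz"
    assert fizzbuzz(7) == "7"

# GREEN: write just enough production code to make the test pass.
def fizzbuzz(n):
    out = ""
    if n % 3 == 0:
        out += "Fizz"
    if n % 5 == 0:
        out += "Buzz"
    return out or str(n)

# REFACTOR: restructure for clarity or performance, then rerun the
# same test as a regression check.
test_fizzbuzz()
```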
BDD — Behavior Driven Development
A technique that emphasizes the behaviors/features driven by user needs: the examples start with how the application should behave from the standpoint of the user. The development process encourages collaboration among developers, testers, and business analysts. BDD uses the “Given-When-Then” form to describe scenarios: the context (Given), the action (When), and the expected outcome (Then). Compared to TDD, BDD focuses on whether the development meets the desired behaviors. We can use the Selenium-Cucumber Java framework to implement BDD.
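Without pulling in Selenium or Cucumber, the Given-When-Then structure can be sketched in plain Python (frameworks such as behave or Cucumber map Gherkin feature files to step functions in a similar way; the shopping-cart scenario here is hypothetical):

```python
# Scenario: a logged-in user adds an item to the cart.

class Cart:
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

def given_a_logged_in_user_with_an_empty_cart():
    return {"user": "alice", "cart": Cart()}      # Given: context/preconditions

def when_the_user_adds(context, item):
    context["cart"].add(item)                     # When: the action

def then_the_cart_contains(context, item):
    assert item in context["cart"].items          # Then: expected outcome

context = given_a_logged_in_user_with_an_empty_cart()
when_the_user_adds(context, "book")
then_the_cart_contains(context, "book")
```

Each step reads like the business-facing scenario text, which is what makes BDD scenarios useful for developers, testers, and analysts alike.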
User Story (Describe One Specific Thing) Template
We can use personas (end-user roles / imagined users) to describe scenarios, which can help develop user stories and understand the potential types of end users.
- Title (Feature name)
- As a [role], I can/want to [feature/functionality], so that [reason]
Acceptance Test (Confirmation) Template
Designed to verify whether a user story has been correctly implemented and meets the conditions of satisfaction according to the specified requirements and criteria.
- Given … (setup/preconditions)
- When … (input/actions)
- Then … (expected outputs)
Code coverage is one of the common metrics used to measure how much code is executed under testing. Based on the code structure, code coverage can be categorized into the following buckets:
- Statement coverage — Percentage of code lines executed by a set of tests
- Method coverage — Percentage of method calls executed by a set of tests
- Branch coverage — Percentage of branches executed by a set of tests
- Condition coverage — Percentage of conditions executed by a set of tests
- Path coverage — Percentage of paths executed by a set of tests
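The difference between statement and branch coverage can be seen on a tiny function: one test can execute every statement while still leaving a branch outcome untested (tools such as coverage.py report both metrics):

```python
def classify(x):
    result = "non-negative"
    if x < 0:                 # a branch: either taken or skipped
        result = "negative"
    return result

# This single test executes every statement in classify
# (100% statement coverage) ...
assert classify(-1) == "negative"

# ... but only the "taken" side of the branch; branch coverage stays at 50%
# until a second test also exercises the case where the if-body is skipped:
assert classify(3) == "non-negative"
```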
High-level Testing Process
- A developer writes code that goes through code review; once it passes, it moves to QA, where a tester tests the changes by deploying that branch on an EOD. If no issues are found, the branch is merged and prepared for release
EOD (Environment on Demand)
- EODs are used to check that the scenarios work as expected; they are built through Jenkins (or another CI/CD environment such as CloudBees)