Guideline for testing performance
1. Overview
Performance testing is a form of software testing that focuses on how a system performs under a particular workload. It is not about finding bugs or defects; instead, it measures the system against benchmarks and standards and gives developers the diagnostic information they need to eliminate bottlenecks.
2. Description
- Types

| Types | Description |
| --- | --- |
| Load testing | Measures system performance as the workload increases; the workload can be concurrent users or transactions (see the sketch after this table) |
| Stress testing (fatigue testing) | Measures performance outside the boundaries of normal working conditions, to find how much load the system can withstand before failing |
| Spike testing | A type of stress testing that evaluates performance when the workload is suddenly and substantially increased |
| Endurance testing (soak testing) | Evaluates how the system performs under a typical workload sustained over a long period, to uncover problems such as memory leaks |
| Scalability testing | Determines whether the system handles gradually increasing workloads effectively, by incrementally adding load or data volume |
| Volume testing (flood testing) | Determines how efficiently the system performs with a large, projected amount of data (flooding the database) |
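At its core, a load test is just many concurrent virtual users timing their requests. Below is a minimal sketch using only the Python standard library; `TARGET_URL`, `USERS`, and `REQUESTS_PER_USER` are illustrative assumptions, not values this guideline prescribes. Ramping `USERS` up gradually approximates a load test, while starting all users at once approximates a spike test.

```python
# Minimal closed-loop load test sketch (stdlib only).
# TARGET_URL, USERS, and REQUESTS_PER_USER are hypothetical values.
import threading
import time
import urllib.request

TARGET_URL = "http://localhost:8080/health"  # hypothetical endpoint
USERS = 20                                   # simulated concurrent users
REQUESTS_PER_USER = 50

response_times = []
errors = 0
lock = threading.Lock()

def virtual_user():
    """One simulated user issuing sequential timed requests."""
    global errors
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
                resp.read()
            elapsed = time.perf_counter() - start
            with lock:
                response_times.append(elapsed)
        except Exception:
            with lock:
                errors += 1

threads = [threading.Thread(target=virtual_user) for _ in range(USERS)]
start = time.perf_counter()
for t in threads:
    t.start()
for t in threads:
    t.join()
duration = time.perf_counter() - start

total = len(response_times) + errors
avg = sum(response_times) / max(len(response_times), 1)
print(f"requests: {total}, errors: {errors}, "
      f"duration: {duration:.1f}s, avg response: {avg:.3f}s")
```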
- Most common problems observed in performance testing

| Problems | Description |
| --- | --- |
| Bottlenecking | Occurs when data flow is interrupted or halted because there is not enough capacity to handle the workload (see the probe sketch after this table) |
| Poor scalability | If software cannot handle the desired number of concurrent tasks, results could be delayed, errors could increase, or other unexpected behavior could occur |
| Software configuration issues | Settings are often not configured at a level sufficient to handle the workload |
| Insufficient hardware resources | Performance testing may reveal physical memory constraints or low-performing CPUs |
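Bottlenecking and poor scalability usually show up as latency or error rates that climb disproportionately as load increases. The step-load probe below sketches one way to find that knee; the endpoint, step sizes, and degradation thresholds are all assumptions to adapt, not fixed recommendations.

```python
# Step-load probe sketch: double concurrency each round and flag the point
# where latency or error rate degrades sharply. TARGET_URL and the
# thresholds below are assumptions, not values from this guideline.
import concurrent.futures
import time
import urllib.request

TARGET_URL = "http://localhost:8080/health"  # hypothetical endpoint
REQUESTS_PER_STEP = 200

def timed_request(_):
    """Issue one request; return (elapsed_seconds, failed)."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
            resp.read()
        return time.perf_counter() - start, False
    except Exception:
        return time.perf_counter() - start, True

baseline_avg = None
for concurrency in (1, 2, 4, 8, 16, 32):
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(timed_request, range(REQUESTS_PER_STEP)))
    latencies = [t for t, failed in results if not failed]
    error_rate = sum(failed for _, failed in results) / len(results)
    avg = sum(latencies) / max(len(latencies), 1)
    if baseline_avg is None:
        baseline_avg = avg
    print(f"{concurrency:>3} users: avg {avg:.3f}s, errors {error_rate:.1%}")
    # Heuristic: a 5x jump in average latency or >5% errors suggests a
    # capacity limit (bottleneck) somewhere in the stack.
    if avg > 5 * baseline_avg or error_rate > 0.05:
        print(f"possible bottleneck at ~{concurrency} concurrent users")
        break
```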
- Test Flow

| Step | Description |
| --- | --- |
| 1. Identify the testing environment | Identifying the hardware, software, network configuration, and available tools lets the testing team design the tests and spot performance testing challenges early. Environment options include a subset of the production system with fewer, lower-specification servers; a subset with fewer servers of the same specification; a replica of the production system; or the production system itself |
| 2. Identify performance metrics | Decide which metrics to collect (such as response time, throughput, and resource utilization) and define acceptance criteria for each |
| 3. Plan and design performance tests | Identify test scenarios that account for user variability, test data, and target metrics, and design tests that simulate a variety of end users |
| 4. Configure the test environment | Prepare the elements of the test environment and the instruments needed to monitor resources |
| 5. Implement test design | Develop the tests |
| 6. Execute tests | Run the tests while monitoring and capturing the generated data |
| 7. Analyze, report, and retest | Consolidate and analyze the results, then fine-tune the system and retest to see whether performance improves or degrades |
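One way to keep steps 1-3 consistent across runs is to write the plan down as data, so the environment, scenario, and acceptance criteria travel with every result. A minimal sketch follows; all names and numbers in it are illustrative assumptions.

```python
# Sketch of a performance test plan captured as data, so each run records
# its environment, scenario, and acceptance criteria. All values are
# illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class PerformanceTestPlan:
    environment: str                 # step 1: where the test runs
    scenario: str                    # step 3: what the test simulates
    max_avg_response_time_s: float   # step 2: acceptance criterion
    max_error_rate: float            # step 2: acceptance criterion
    notes: list = field(default_factory=list)  # system/software changes per run

plan = PerformanceTestPlan(
    environment="staging replica, same specs as production",
    scenario="browse + checkout, ramp to 500 users over 10 minutes",
    max_avg_response_time_s=0.5,
    max_error_rate=0.01,
)
print(plan)
```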
- Performance Testing Metrics

Metrics are needed to understand the quality and effectiveness of performance testing.
- Measurements: the data being collected, such as the seconds it takes to respond to a request
- Metrics: a calculation that uses measurements to define the quality of results, such as average response time (total response time / number of requests)
There are many ways to measure speed, scalability, and stability, but each round of performance testing cannot be expected to use all of them (a worked sketch follows the table below).
| Metrics | Description |
| --- | --- |
| Response time | Total time between sending a request and getting a response |
| Wait time (average latency) | How long it takes to receive the first byte after a request is sent |
| Average load time | The average time it takes to deliver every request; a major indicator of quality from a user's perspective |
| Peak response time | The longest time it takes to fulfill a request; a significantly longer response time may indicate an anomaly that will create problems |
| Error rate | The percentage of requests resulting in errors compared to all requests; these errors usually occur when the load exceeds capacity |
| Concurrent users (load size) | The most common measure of load: how many users are active at any point |
| Requests per second | How many requests are handled each second |
| Transactions passed/failed | The total number of successful or unsuccessful requests |
| Throughput | Measured in kilobytes per second, throughput shows the amount of bandwidth used during the test |
| CPU utilization | How much time the CPU needs to process requests |
| Memory utilization | How much memory is needed to process the request |
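To make the measurement/metric distinction concrete, the sketch below derives several of the metrics above from raw `(response time, succeeded)` samples. The sample values, test duration, and byte count are placeholders for data a real harness would collect.

```python
# Deriving metrics from raw measurements. The sample data below is
# illustrative; in practice the (seconds, ok) pairs come from the harness.
samples = [  # (response_time_seconds, succeeded)
    (0.12, True), (0.15, True), (0.90, True), (0.11, False), (0.14, True),
]
test_duration_s = 2.0        # wall-clock length of the run (assumed)
bytes_transferred = 250_000  # total payload bytes observed (assumed)

times = [t for t, ok in samples if ok]
avg_response_time = sum(times) / len(times)   # total response time / requests
peak_response_time = max(times)               # longest single request
error_rate = sum(not ok for _, ok in samples) / len(samples)
requests_per_second = len(samples) / test_duration_s
throughput_kb_s = bytes_transferred / 1024 / test_duration_s

print(f"avg {avg_response_time:.3f}s, peak {peak_response_time:.3f}s, "
      f"errors {error_rate:.0%}, {requests_per_second:.1f} req/s, "
      f"throughput {throughput_kb_s:.1f} KB/s")
```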
- Performance Testing Best Practices
  - Test as early as possible in development: performance testing isn't just for completed projects; individual units or modules can be tested as well.
  - Conduct multiple performance tests to ensure consistent findings and determine average metrics.
  - Applications often involve multiple systems such as databases, servers, and services. Test the individual units separately as well as together.
  - Involve developers, IT, and testers in creating the performance testing environment.
  - Determine how the results will affect users, not just the test environment servers.
  - Develop a model by planning a test environment that takes into account as much user activity as possible.
  - Baseline measurements provide a starting point for determining success or failure (see the comparison sketch after this list).
  - Performance tests are best conducted in test environments that are as close to the production system as possible.
  - Isolate the performance test environment from the environment used for quality assurance testing.
  - No performance testing tool will do everything needed; research performance testing tools to find the right fit.
  - Keep the test environment as consistent as possible.
  - Calculating averages will deliver actionable metrics, while extreme measurements can reveal possible failures.
  - Include any system and software changes in reports.
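Baselines only help if each run is compared against them systematically. Below is a minimal comparison sketch, assuming an earlier accepted run saved its metrics to a `baseline.json` file and that a 10% regression tolerance is acceptable (both assumptions).

```python
# Baseline comparison sketch: flag metrics that regressed beyond a
# tolerance. The file name and threshold are assumptions.
import json

TOLERANCE = 0.10  # allow a 10% regression before flagging (assumed)

with open("baseline.json") as f:          # saved by a previous, accepted run
    baseline = json.load(f)               # e.g. {"avg_response_time": 0.20}

current = {"avg_response_time": 0.23}     # metrics from the current run

for metric, base_value in baseline.items():
    value = current[metric]
    regression = (value - base_value) / base_value
    status = "OK" if regression <= TOLERANCE else "REGRESSION"
    print(f"{metric}: baseline {base_value}, current {value} -> {status}")
```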
- Five common performance testing mistakes
  - Not allowing enough time for testing
  - Not involving developers
  - Not using a QA system similar to the production system
  - Not sufficiently tuning the software
  - Not having a troubleshooting plan
- Performance Testing Fallacies
  - Performance testing is the last step in development
  - More hardware can fix performance issues
  - The testing environment is close enough
  - What works now works across the board
  - One performance testing scenario is enough
  - Testing each part equals testing the whole system
  - What works for them works for us
  - Software developers are too experienced to need performance testing
  - A full load test tells everything
  - Test scripts are actual users
3. References
https://stackify.com/ultimate-guide-performance-testing-and-software-testing/
https://loadstorm.com/load-testing-metrics/
https://testguild.com/performance-testing-what-is-throughput/
https://www.addictivetips.com/net-admin/throughput/
https://stackify.com/fundamentals-web-application-performance-testing/
https://www.blazemeter.com/blog/open-source-load-testing-tools-which-one-should-you-use/