Guideline for Testing Performance
1. Overview
Performance testing is a form of software testing that focuses on how a system performs under a particular workload. It is not about finding bugs or defects; instead, performance is measured against benchmarks and standards. Performance testing should give developers the diagnostic information they need to eliminate bottlenecks.
2. Description
- Types
Load testing
- Measuring system performance, response time, and staying power as the workload increases (a minimal load-test sketch follows this list)
- The workload could mean concurrent users or transactions
- The workload falls within the parameters of normal working conditions
Stress testing (fatigue testing)
- Measuring system performance outside the parameters of normal working conditions
- The system is given more users or transactions than it can handle
- The goal of stress testing is to measure software stability: at what point does the software fail, and how does it recover from failure?
Spike testing
- A type of stress testing that evaluates software performance when the workload is substantially increased quickly and repeatedly
- The workload is beyond normal expectations for short amounts of time
Endurance testing (soak testing)
- An evaluation of how the software performs with a normal workload over an extended amount of time
- The goal of endurance testing is to check for system problems such as memory leaks
Scalability testing
- Determining whether the software handles increasing workloads effectively
- Gradually adding to the user load or data volume while monitoring system performance
- Alternatively, the workload may stay at the same level while resources such as CPUs and memory are changed
Volume testing (flood testing)
- Determining how efficiently the software performs with a large, projected amount of data
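As a rough illustration of how load and stress testing differ in practice, the sketch below ramps up the number of concurrent users step by step while recording response times and errors; keeping the user counts within normal working conditions makes it a load test, while pushing them past the breaking point turns it into a stress test. This is only a minimal sketch: the target URL, user counts, and request counts are placeholder assumptions, and real projects normally use a dedicated load-testing tool (JMeter, Gatling, Locust, etc.).

```python
# Minimal load-test sketch: ramp up concurrent "users" and record response times.
# TARGET_URL and the user/request counts are placeholder assumptions.
import time
import urllib.error
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://localhost:8080/health"   # hypothetical endpoint

def one_request():
    """Send a single request and return (elapsed_seconds, succeeded)."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(TARGET_URL, timeout=5) as resp:
            resp.read()
        return time.perf_counter() - start, True
    except (urllib.error.URLError, OSError):
        return time.perf_counter() - start, False

def run_step(concurrent_users, requests_per_user):
    """Simulate one load step with a fixed number of concurrent users."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        results = list(pool.map(lambda _: one_request(),
                                range(concurrent_users * requests_per_user)))
    times = [t for t, _ok in results]
    errors = sum(1 for _, ok in results if not ok)
    print(f"{concurrent_users:4d} users | avg {sum(times)/len(times):.3f}s "
          f"| max {max(times):.3f}s | errors {errors}/{len(results)}")

if __name__ == "__main__":
    # Load test: keep the user counts within normal working conditions.
    # Stress test: keep raising the user count until the error rate climbs.
    for users in (1, 5, 10, 25, 50):
        run_step(users, requests_per_user=4)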
- Most common problems observed in performance testing
Bottlenecking
- Occurs when data flow is interrupted or halted because there is not enough capacity to handle the workload
Poor scalability
- If software cannot handle the desired number of concurrent tasks, results could be delayed, errors could increase, or other unexpected behavior could occur. Areas commonly affected include:
  - Disk usage
  - CPU usage
  - Memory leaks
  - OS limitations
  - Poor network configuration
Software configuration issues
- Often settings are not set at a level sufficient to handle the workload
Insufficient hardware resources
- Performance testing may reveal physical memory constraints or low-performing CPUs (a minimal resource-monitoring sketch follows this list)
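Most of the problems above show up first as abnormal resource usage, so it helps to watch CPU, memory, and disk while a test runs. The sketch below is a minimal example of that kind of monitoring; it assumes the third-party psutil package is installed and is not a substitute for proper monitoring or APM tooling.

```python
# Minimal resource-monitoring sketch for spotting bottlenecks and memory leaks.
# Assumes the third-party psutil package is installed (pip install psutil).
import time
import psutil

def sample_resources(duration_s=60, interval_s=5):
    """Print CPU, memory, and disk usage at a fixed interval during a test run."""
    samples = []
    end = time.time() + duration_s
    while time.time() < end:
        cpu = psutil.cpu_percent(interval=interval_s)   # % CPU over the interval
        mem = psutil.virtual_memory().percent           # % physical memory in use
        disk = psutil.disk_usage("/").percent           # % disk space used
        samples.append((time.time(), cpu, mem, disk))
        print(f"cpu {cpu:5.1f}% | mem {mem:5.1f}% | disk {disk:5.1f}%")
    return samples

if __name__ == "__main__":
    # A steadily climbing memory percentage over a long soak test is a typical
    # sign of a memory leak; a CPU pinned near 100% points to a CPU bottleneck.
    sample_resources(duration_s=30, interval_s=5)
```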
- Test Flow
1. Identify the testing environment
- Identifying the hardware, software, network configuration, and tools available allows the testing team to design the tests and spot performance testing challenges early. Performance testing environment options include:
- A subset of the production system with fewer servers of lower specification
- A subset of the production system with fewer servers of the same specification
- A replica of the production system
- The actual production system
2. Identify performance metrics
- Identifying metrics such as response time, throughput, and constraints
- Identifying the success criteria for performance testing (a sketch of threshold-style success criteria follows this flow)
3. Plan and design performance tests
- Identifying performance test scenarios that take into account user variability, test data, and target metrics
- Creating one or two models
4. Configure the test environment
- Preparing the elements of the test environment and the instruments needed to monitor resources
5. Implement the test design
- Developing the tests
6. Execute tests
- Running, monitoring, and capturing the generated data
7. Analyze, report, and retest
- Analyzing the data and sharing the findings
- Running the performance tests again with the same parameters and with different parameters
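Steps 2 and 7 become easier to act on when the success criteria are written down as explicit thresholds that each test run is checked against. The sketch below shows one possible shape for that; the metric names and threshold values are illustrative assumptions, not recommendations.

```python
# Success criteria expressed as explicit, checkable thresholds.
# The threshold values below are illustrative assumptions only.
SUCCESS_CRITERIA = {
    "avg_response_time_s": 0.5,   # average response time must stay below 500 ms
    "peak_response_time_s": 2.0,  # no single request slower than 2 s
    "error_rate": 0.01,           # at most 1% of requests may fail
}

def evaluate(results, criteria=SUCCESS_CRITERIA):
    """results: measured metrics keyed the same way as the criteria."""
    failures = [name for name, limit in criteria.items()
                if results.get(name, float("inf")) > limit]
    return not failures, failures

# Example: a test run that violates the error-rate criterion.
ok, failed = evaluate({"avg_response_time_s": 0.31,
                       "peak_response_time_s": 1.4,
                       "error_rate": 0.03})
print("PASS" if ok else f"FAIL: {failed}")
```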
- Performance Testing Metrics
Metrics are needed to understand the quality and effectiveness of performance testing.
- Measurements: The data being collected, such as the number of seconds it takes to respond to a request
- Metrics: A calculation that uses measurements to define the quality of results, such as average response time (total response time / number of requests)
There are many ways to measure speed, scalability, and stability, but each round of performance testing cannot be expected to use all of them.
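To make the measurement/metric distinction concrete, the sketch below turns a handful of raw per-request measurements into several of the metrics listed in the table that follows. The sample data and test duration are made up for illustration.

```python
# Turning raw measurements into metrics (made-up sample data for illustration).
# Each tuple is (response_time_seconds, succeeded, bytes_transferred).
measurements = [
    (0.21, True, 5_120), (0.34, True, 4_880), (1.90, True, 5_001),
    (0.27, False, 0),    (0.31, True, 5_270), (0.25, True, 4_990),
]
test_duration_s = 10.0  # wall-clock length of the test run (assumed)

times = [t for t, ok, b in measurements]
errors = sum(1 for _, ok, _ in measurements if not ok)
total_bytes = sum(b for _, _, b in measurements)

avg_response_time = sum(times) / len(times)            # total response time / requests
peak_response_time = max(times)                        # longest single request
error_rate = errors / len(measurements)                # failed requests / all requests
requests_per_second = len(measurements) / test_duration_s
throughput_kb_s = total_bytes / 1024 / test_duration_s  # kilobytes per second

print(f"avg {avg_response_time:.3f}s | peak {peak_response_time:.3f}s | "
      f"errors {error_rate:.1%} | {requests_per_second:.1f} req/s | "
      f"{throughput_kb_s:.1f} KB/s")
```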
Response time
- The total time to send a request and get a response
Wait time (average latency)
- How long it takes to receive the first byte after a request is sent
Average load time
- The average amount of time it takes to deliver every request; a major indicator of quality from a user's perspective
Peak response time
- The longest amount of time it takes to fulfill a request; a significantly longer response time may indicate an anomaly that will create problems
Error rate
- The percentage of requests resulting in errors compared to all requests; these errors usually occur when the load exceeds capacity
Concurrent users (load size)
- The most common measure of load: how many active users there are at any point
Requests per second
- How many requests are handled per second
Transactions passed/failed
- The total number of successful or unsuccessful requests
Throughput
- Measured in kilobytes per second, throughput shows the amount of bandwidth used during the test
CPU utilization
- How much time the CPU needs to process requests
Memory utilization
- How much memory is needed to process the requests
- Performance Testing Best Practices
- Test as early as possible in development; performance testing isn't just for the completed project, but also for units or modules
- Conduct multiple performance tests to ensure consistent findings and to determine metric averages
- Applications often involve multiple systems such as databases, servers, and services; test the individual units separately as well as together
- Involve developers, IT, and testers in creating the performance testing environment
- Determine how the results will affect users, not just test environment servers
- Develop a model by planning a test environment that takes into account as much user activity as possible
- Baseline measurements provide a starting point for determining success or failure
- Performance tests are best conducted in test environments that are as close to the production systems as possible
- Isolate the performance test environment from the environment used for quality assurance testing
- No performance testing tool will do everything needed; research performance testing tools for the right fit
- Keep the test environment as consistent as possible
- Calculating averages delivers actionable metrics, while extreme measurements can reveal possible failures
- Include any system and software changes in reports
- Five Common Performance Testing Mistakes
- Not allowing enough time for testing
- Not involving developers
- Not using a QA system similar to the production system
- Not sufficiently tuning the software
- Not having a troubleshooting plan
- Performance Testing Fallacies
- Performance testing is the last step in development
- More hardware can fix performance issues
- The testing environment is close enough
- What works now works across the board
- One performance testing scenario is enough
- Testing each part equals testing the whole system
- What works for them works for us
- Software developers are too experienced to need performance testing
- A full load test tells everything
- Test scripts are actual users
3. References
https://stackify.com/ultimate-guide-performance-testing-and-software-testing/
https://loadstorm.com/load-testing-metrics/
https://testguild.com/performance-testing-what-is-throughput/
https://www.addictivetips.com/net-admin/throughput/
https://stackify.com/fundamentals-web-application-performance-testing/
https://www.blazemeter.com/blog/open-source-load-testing-tools-which-one-should-you-use/