Performance testing helps us determine the responsiveness and stability of an application under a particular workload. It’s a way for companies to know whether the product meets their performance criteria, and it also serves to compare the performance of different products. The benchmarks you get from performance testing highlight bottlenecks and modules that put strain on the infrastructure.
Usually, performance testing starts by first defining performance goals. This way we have some criteria to test against and we can assign a pass/fail grade at the end of the test. But we’ll skip that for now, since the aim here is to learn the actual testing process.
In this post, we will conduct a performance testing experiment. It all starts by selecting a suitable tool and setting it up.
Writing our first performance test
JMeter: our performance testing hero
There are lots of open-source tools that can be used for performance testing, and I think JMeter is among the best of the bunch. It has some remarkable key features:
- A user-friendly GUI
- Test results that have visual aids and are easy to understand
- Great community for support
- Multi protocol support (HTTP, SMTP, POP3, LDAP, JDBC, FTP, JMS, SOAP, TCP)
- Runs on any platform that supports Java
- Integration with other tools for a CI process
- A distributed mode that allows a master machine to control other remote machines
- A recording feature that makes it easier to write test cases
How to set up JMeter
This really doesn’t need its own subsection, here you go:
1 - Install JMeter with Homebrew
brew install jmeter
2 - Start JMeter
3 - Profit!
Performance testing on a web page
Let’s prepare our basic web page test. Once JMeter is installed and started, we’re greeted by its main UI.
I’ll detail each action we need to take in a bullet point, and explain the how/what/why in sub-bullets. Here we go:
- First, we add a thread group. This essentially is the instance that holds all our configuration details for the test.
- Simply right-click on Test Plan > Add > Threads (Users) > Thread Group and voila!
- There are some settings in the thread group you’ll want to get acquainted with:
- ‘Number of Threads (users)’ sets the number of simulated users that will run in the test. Set that to 50 for now.
- ‘Ramp-up Period’ is the time JMeter takes to start all of those threads. Let’s set this to 5.
- These settings will simulate 50 users ramping up over 5 seconds.
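Under the hood, these GUI settings are simply stored in the test plan’s .jmx file. Here’s a rough sketch of what the saved thread group looks like (element and property names follow JMeter’s test plan XML; exact attributes vary by version):

```xml
<!-- Sketch of a saved Thread Group in a .jmx test plan -->
<ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="Thread Group">
  <!-- 50 simulated users -->
  <stringProp name="ThreadGroup.num_threads">50</stringProp>
  <!-- started over a 5-second ramp-up period -->
  <stringProp name="ThreadGroup.ramp_time">5</stringProp>
  <elementProp name="ThreadGroup.main_controller" elementType="LoopController">
    <stringProp name="LoopController.loops">1</stringProp>
  </elementProp>
</ThreadGroup>
```

Editing these values in the file has the same effect as changing them in the GUI, which comes in handy once tests run unattended.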
Then, we add HTTP Request Defaults to our Thread Group.
- Right-click on Thread Group > Add > Config Element > HTTP Request Defaults
Next, we fill in the Server Name or IP field with the name or IP address of the web server we want to test.
- Type www.google.com here (leave off the http:// part — the protocol belongs in the separate Protocol field). Let’s see how the giant search engine fares against our tests.
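This configuration also ends up in the saved .jmx file. Roughly, it looks like the sketch below (property names are from JMeter’s test plan XML and may vary by version):

```xml
<!-- Sketch of saved HTTP Request Defaults -->
<ConfigTestElement guiclass="HttpDefaultsGui" testclass="ConfigTestElement"
                   testname="HTTP Request Defaults">
  <stringProp name="HTTPSampler.domain">www.google.com</stringProp>
  <stringProp name="HTTPSampler.protocol">http</stringProp>
</ConfigTestElement>
```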
Time to add a Recording Controller. Recorded HTTP Request samplers will be created under the Recording Controller.
- Right-click on Thread Group > Add > Logic Controller > Recording Controller
Finally, we add some listeners to see the results of our tests. We can add ‘Graph Results’ for a graphical summary or ‘View Results in Table’ to see each sample in a table. Heck, let’s add both:
- Right-click on Thread Group > Add > Listener > Graph Results
- Right-click on Thread Group > Add > Listener > View Results in Table
Our test plan is ready for action! Now, we only need a Test Script Recorder to capture the test scenario that our simulated users will replay. We create this recorder in our WorkBench. Careful though, WorkBench configurations are not saved with our test plan; you should save them separately.
- Let’s configure our test script recorder
- Right-click on WorkBench > Add > Non-Test Elements > HTTP(S) Test Script Recorder
- The default port setting is 8080. You can change the port configuration as you wish later. For now, let’s use the defaults.
Done! Our setup is now ready for a web page test.
- Click on the Start button in the HTTP(S) Test Script Recorder.
Now’s a good time to configure our browser’s proxy settings. I am using Safari for my test, so I’ll guide you on that.
- Open Preferences > Advanced > Proxy Settings.
- Select Web Proxy (HTTP) and Secure Web Proxy (HTTPS), then enter 127.0.0.1 as the IP address and 8080 as the port.
Let’s do this! Open Safari and go to www.google.com
- In the search bar, type “performance testing” and click on the first result. Stop recording (click on the Stop button of the Test Script Recorder). You can easily see that the Recording Controller saved your actions.
- You’ll see some unnecessary recordings that you can get rid of by selecting “Add suggested Excludes” on the Test Script Recorder.
Now that we have our recorded actions, we can start performance testing.
- Open the Graph Results to see the performance results live and then click on the Start button.
The results show us that the response time of google.com is 160ms on average.
The conclusions you draw from this obviously depend on the performance goals of the product you’re testing. However, in general, you want a small average response time, as well as a small deviation of the response time.
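To make ‘average’ and ‘deviation’ concrete: the Average JMeter reports is the mean response time over all samples, and the Deviation is the standard deviation. A minimal sketch, using made-up sample times purely for illustration:

```python
# Compute the two summary statistics JMeter's listeners report:
# "Average" (mean response time) and "Deviation" (standard deviation).
import statistics

# Hypothetical response times in milliseconds, one per recorded sample
response_times_ms = [120, 150, 155, 160, 170, 180, 185, 200]

average = statistics.mean(response_times_ms)
deviation = statistics.pstdev(response_times_ms)  # population standard deviation

print(f"average={average:.0f}ms deviation={deviation:.1f}ms")
# → average=165ms deviation=23.0ms
```

A small deviation means response times cluster tightly around the average; a large one means some users are getting a much worse experience than the average alone suggests.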
We’ve come this far. We might as well also try a stress test on a mobile app, right?
Performance testing on a mobile app
I chose a random application on my phone to run my performance test, so I had no idea about the limits of the app. At this point, we should talk about the difference between load testing and stress testing a bit.
In essence, load testing helps us understand the behaviour of a product under an expected number of concurrent users. Stress testing, on the other hand, helps us understand the upper limits of the product. So, I’ll run two tests for this app: one with 50 users and one with 300 users.
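The difference is easy to see in code. Below is a toy load generator — a sketch only, nowhere near what JMeter does; the throwaway local server, user count, and request count are all invented for illustration. Replaying the same scenario with a higher `users` value is what turns a load test into a stress test:

```python
# Toy load generator: replay the same scenario with N concurrent "users"
# against a throwaway local HTTP server, and summarise the response times.
import http.server
import statistics
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

class QuietHandler(http.server.SimpleHTTPRequestHandler):
    def log_message(self, *args):
        pass  # silence per-request logging

# Bypass any system proxy so requests really hit our local server
opener = urllib.request.build_opener(urllib.request.ProxyHandler({}))

def run_load_test(url, users, requests_per_user=5):
    """Fire `users` concurrent workers at `url`; return (mean, deviation) in seconds."""
    def one_user(_):
        times = []
        for _ in range(requests_per_user):
            start = time.perf_counter()
            opener.open(url).read()
            times.append(time.perf_counter() - start)
        return times

    with ThreadPoolExecutor(max_workers=users) as pool:
        all_times = [t for ts in pool.map(one_user, range(users)) for t in ts]
    return statistics.mean(all_times), statistics.pstdev(all_times)

# Throwaway local server so the sketch is self-contained
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), QuietHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

avg, dev = run_load_test(url, users=10)  # load test: expected concurrency
print(f"avg={avg * 1000:.2f}ms dev={dev * 1000:.2f}ms")
server.shutdown()
```

Bump `users` up (say, towards 300) and watch the average and deviation climb — that’s the stress test.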
Let’s get started. You know the drill:
- We open JMeter and create the same test plan we did for the web test.
- Then we add the proxy settings on our phone.
- For iOS, open Wi-Fi Settings > Info > HTTP Proxy > Manual, then add your computer’s IP address and the default port 8080.
- We start recording by clicking on the Start button of the HTTP(S) Test Script Recorder.
- We then open an app (any app that talks to a server) and act like a regular user to get some server responses. Then stop the HTTP(S) Test Script Recorder and check that every action is saved under the Recording Controller.
Since we don’t really know the limits of the app, it’s better to first run a test with 50 users to see how it behaves with an expected number of users at a time. Here’s what I got.
For 50 users, our app’s response time is 544ms and the deviation is 384ms. This is not a great result at all. I wonder what happens when we push the app with 300 simultaneous users. Let’s see the new results:
With 300 users, the average response time climbs above 4 seconds and the deviation exceeds 7 seconds. This app needs to be optimised for a good user experience; otherwise it’s doomed, since no one waits 4 seconds for anything anymore.
I hope this post helps you understand the basics of performance testing for both web and mobile apps. Even at this basic level, performance tests give us some useful data that can help developers and solution engineers create better-performing products by quantifying the product’s performance level and putting a set goal in front of them when optimization is needed.
Performance testing should ideally be in the continuous integration process of any good product’s development cycle. Performance test strategy and analysis of reports are also important parts of this process, but they will be the topic of another post.