HP LoadRunner supports the implementation, execution, reporting and evaluation of your performance tests. Before you begin scripting with LoadRunner, you need a clear understanding of what to test. In this post I will describe the major points of planning and executing a proper performance test (see earlier posts for details), highlighting the stages where HP LoadRunner plays a role.
1. Test planning
Spend time learning the objectives of the project and create plans, including answers to the following questions:
– What are the reasons for performance testing the system? What are the anticipated risks performance testing will address?
– Who are the key players in the project? How will they contribute to the requirements?
– What is the structure of the system you are about to test?
– What are the acceptance/success criteria for the performance measurements?
Do not hesitate to ask questions. Getting the right answers can be complicated and time-consuming; however, it will make your performance testing more successful.
2. Test design
The key user scenarios then need to be examined and prioritized. These are commonly the most frequently used scenarios and those with the highest impact. Different stakeholders will view different scenarios as more important than others, so communication, risk assessment and prioritization with the participation of both business and technical parties is essential.
At the end of this stage you should have enough data to model:
– The user’s behavior according to the priority of scenarios, e.g. how the user navigates through the web pages
– The variability of user behavior, e.g. different navigation paths, volumes of necessary test data
– Delay patterns, e.g. how long the user is idle during interaction with the system
– Metrics that describe the system’s state e.g. response times experienced by the user, hardware utilization, data volumes, queue lengths, etc.
This information aggregates into a workload model, which is the basis for test realization.
You can now determine what types of performance tests are required. You will typically require benchmark and soak tests; other types may be needed depending on the requirements of the specific project. Determining the required user load is vital when defining your tests.
3. Test implementation (scripting)
This is the tool-specific step, where LoadRunner’s Virtual User Generator (VuGen) is used to record the user’s behavior, with any additional scripting based on the workload model.
At the completion of this stage you will have a set of LoadRunner scripts that are used to simulate the planned workload models.
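For illustration, a recorded VuGen action is plain C built from LoadRunner API calls such as lr_start_transaction, web_url and lr_think_time. The transaction name, URL and think time below are hypothetical, and the script only runs inside the LoadRunner runtime:

```c
// Hypothetical VuGen action: one business step wrapped in a transaction.
Action()
{
    lr_start_transaction("Open_Home_Page");  // measure this step

    web_url("Home",                          // replay the recorded request
        "URL=http://example.test/",          // hypothetical URL
        "Resource=0",
        "RecContentType=text/html",
        "Mode=HTML",
        LAST);

    lr_end_transaction("Open_Home_Page", LR_AUTO);

    lr_think_time(8);                        // modelled user delay in seconds
    return 0;
}
```

The transaction markers are what make the response times visible later in the Analysis charts, and the think time carries the delay pattern from the workload model into the script.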
4. Test execution
Provided you have a realistic test system, execution of the user scenario scripts will commence. You will need to configure your scenarios using Performance Center to match the performance test types and goals.
You will need to specify the following:
– Concurrency of user scenarios: for benchmarking you typically execute only one flow of functional business logic; for soak testing you will typically execute multiple scenario types in parallel
– Number of concurrent Virtual Users
– Characteristics of the load
– The observed resource monitors
– Length of the test
Performance Center will now execute the test, monitoring and providing the raw data for analysis.
5. Test analysis
The HP LoadRunner Analysis tool is now used to extract metric data from the raw result files collated by the Performance Center Controller. The Analysis component evaluates and presents this data through its Windows client GUI.
You are now able to:
– Assess whether the metrics are in line with expectations, e.g. average response times and hardware utilization do not exceed the defined thresholds
– Observe response times, transactions per second and resource chart data for oddities; filter and aggregate data for better presentation
– Compare the results with previous test runs (e.g. for benchmarking)
– Extract metrics data / charts for test reports
You should now be able to determine whether the performance is acceptable or requires further investigation, e.g. tuning or fixing of the system, or test script revision.
Using the data extracted from the Analysis tool, the performance reports are created and presented to the appropriate audiences, i.e. at both business and technical levels.