
Performance - the (in)famous buzzword

Focusing only on design, implementation, and zero-functional-defect delivery is a thing of the past. As technology and IT teams mature, the 'non-functional' aspects of a system are fast becoming focus areas.

Non-functional requirements (NFRs) tell the IT team about the kinds of usage and load the application will be subjected to, and the response time expected of it. NFRs define the Service Level Agreements (SLAs) for the system and hence the overall performance of the enterprise application. Managing and ensuring the NFRs (SLAs) for an enterprise application is called Performance Engineering. Performance is ultimately about risk management: you need to decide just how important performance is to the success of your project. The more important you consider it to be, the greater the need to reduce the risk of failure, and the more time you should spend addressing it. The approach to performance should always be proactive rather than reactive.
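NFRs are easiest to reason about when written down as machine-checkable thresholds. Here is a minimal Python sketch; the 2-second response limit, 100 TPS target, and 80% CPU ceiling are purely illustrative figures, not drawn from any standard:

```python
# Hypothetical SLA thresholds for an enterprise application (illustrative values).
SLA = {
    "max_response_time_s": 2.0,   # a business transaction must complete within 2 s
    "min_throughput_tps": 100,    # at least 100 business transactions per second
    "max_cpu_utilization": 0.80,  # CPU should stay below 80% under peak load
}

def meets_sla(response_time_s: float, throughput_tps: float, cpu_utilization: float) -> bool:
    """Return True if a measured sample satisfies every SLA threshold."""
    return (
        response_time_s <= SLA["max_response_time_s"]
        and throughput_tps >= SLA["min_throughput_tps"]
        and cpu_utilization <= SLA["max_cpu_utilization"]
    )
```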

Engineering for performance is broken down into the following actionable categories and areas of responsibility:

  • Performance objectives enable you to know when your application meets your performance goals.
  • Performance modelling provides a structured and repeatable approach to meeting your performance objectives.
  • Architecture and design guidelines enable you to engineer for performance from an early stage.
  • A performance and scalability frame enables you to organize and prioritize performance issues.
  • Measuring lets you see whether your application is trending toward or away from the performance objectives.
  • Providing clear role segmentation helps architects, developers, testers, and administrators understand their responsibilities within the application life cycle. Different parts of the guidance referenced below map to the various stages of the product development life cycle and to the various roles.

To define the performance of any system (software, hardware, or abstract), the following technical parameters should always be used in conjunction:

  • Response Time: Response time is the time taken by the system to respond with the expected output. For an enterprise application, response time is defined as the time taken to complete a single business transaction. It is usually expressed in seconds (a measurement sketch follows this list).
  • Throughput: Throughput is the rate at which the system produces the expected outputs when the designated input is fed into it. For an enterprise application, throughput can be defined as the total number of business transactions completed by the application in unit time (per second or per hour).
  • Resource Utilization: For any system to consume an input and produce the designated output, certain resources are required. The share of available resources the system consumes while processing a request defines its resource utilization; it can be measured for different resources, such as processor, disk (I/O controller), and memory. Utilization of 80% is generally considered the acceptable limit, and sustained utilization above 70% normally warrants ordering additional hardware.
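All three parameters can be captured even with a very small harness. Below is a minimal Python sketch, assuming a hypothetical transaction() stub standing in for one business transaction; CPU sampling relies on the third-party psutil package, and a real test would use a dedicated load tool:

```python
import time
from concurrent.futures import ThreadPoolExecutor

import psutil  # third-party package: pip install psutil


def transaction() -> None:
    """Hypothetical business transaction; replace with a real call."""
    time.sleep(0.05)  # simulate 50 ms of work


def run_load(users: int, transactions_per_user: int) -> dict:
    """Drive the transaction concurrently and report the three core metrics."""
    durations = []

    def worker():
        for _ in range(transactions_per_user):
            start = time.perf_counter()
            transaction()
            durations.append(time.perf_counter() - start)

    psutil.cpu_percent(interval=None)  # prime the CPU counter
    wall_start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        for _ in range(users):
            pool.submit(worker)
    wall_elapsed = time.perf_counter() - wall_start
    cpu = psutil.cpu_percent(interval=None)  # average CPU % since priming

    return {
        "avg_response_time_s": sum(durations) / len(durations),  # response time
        "throughput_tps": len(durations) / wall_elapsed,         # throughput
        "cpu_utilization_pct": cpu,                              # resource utilization
    }


if __name__ == "__main__":
    print(run_load(users=10, transactions_per_user=20))
```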

The next steps would be: identifying the transactions to be tested, setting up the environment, identifying the test data, designing scenarios, and validating the output.
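The scenario-design step in particular benefits from being written down as data. A minimal sketch, where every transaction name and load figure is an illustrative assumption:

```python
from dataclasses import dataclass


@dataclass
class Scenario:
    """A hypothetical load-test scenario definition (all values illustrative)."""
    name: str
    transactions: list      # business transactions under test
    virtual_users: int      # concurrent users to simulate
    ramp_up_s: int          # seconds over which users are started
    duration_s: int         # steady-state test duration
    test_data_file: str     # source of parameterized test data


peak_hour = Scenario(
    name="peak-hour-order-entry",
    transactions=["login", "search_product", "place_order", "logout"],
    virtual_users=500,
    ramp_up_s=300,
    duration_s=3600,
    test_data_file="users.csv",
)
```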

(Read more about it at: https://msdn.microsoft.com/en-us/library/ff647781.aspx)

Here are some key graphs that should be created during performance testing and presented in the performance test report.

  • CPU Utilization vs. No. of Users

This graph shows whether CPU utilization increases as users are added. It also helps locate bottlenecks and indicates the maximum number of users the system can support.

  • Throughput vs. No. of Users

This graph shows whether throughput increases with the number of users. It should have a similar shape to the CPU utilization graph.

  • Response Time vs. Throughput

Response time should stay fairly constant as throughput increases; if it does not, there is most likely a bottleneck.
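As an illustration of how such a report might be assembled, here is a minimal matplotlib sketch; all the data points below are placeholders, not real measurements:

```python
import matplotlib.pyplot as plt

# Illustrative placeholder data, one sample per simulated user level.
users      = [50, 100, 200, 400, 800]
cpu_pct    = [12, 22, 45, 78, 95]       # CPU utilization (%)
throughput = [40, 80, 155, 290, 300]    # business transactions per second
resp_time  = [0.4, 0.4, 0.5, 0.9, 2.8]  # seconds

fig, axes = plt.subplots(1, 3, figsize=(15, 4))

axes[0].plot(users, cpu_pct, marker="o")
axes[0].set(title="CPU Utilization vs. No. of Users",
            xlabel="Users", ylabel="CPU (%)")

axes[1].plot(users, throughput, marker="o")
axes[1].set(title="Throughput vs. No. of Users",
            xlabel="Users", ylabel="Transactions/s")

axes[2].plot(throughput, resp_time, marker="o")
axes[2].set(title="Response Time vs. Throughput",
            xlabel="Transactions/s", ylabel="Response time (s)")

fig.tight_layout()
fig.savefig("performance_report.png")
```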

The conventional approach to performance is to ignore it until deployment time. However, many, if not most, performance problems are introduced by specific architecture, design, and technology choices made very early in the development cycle. Once those choices are made and the application is built, such problems are very difficult and expensive to fix. We need to promote a holistic, life-cycle-based approach in which you engineer for performance from the early stages of design through development, testing, and deployment.

