Friday, October 30, 2015

Software testing

What the SDLC consists of:


Analysis: requirement gathering
Design: designing the software
Coding: implementation of the software
Testing: finding defects
Release: shipping the product
Maintenance: supporting and fixing the product after release

Software Testing


What is testing? What is the goal of testing?


Testing is done to improve the quality of the software before its release.
Finding and verifying errors in the software is testing.
Checking whether the product meets its specifications and requirements is testing.
Testing includes documenting inputs, expected results, and test conditions.
Purpose of testing: software testing is the process used to help identify the correctness, completeness, security, and quality of the developed computer software.
Software testing is the process of executing a program or system with the intent of finding errors. It is the process of checking software to verify that it satisfies its requirements and to detect errors.

ROLES OF A TEST ENGINEER:


========================
Involved in writing the TEST PLAN
TEST DESIGN: generating TEST CASES
TEST EXECUTION / TEST DEPLOYMENT
Identifying BUGS, BUG TRACKING, and closing resolved bugs

A GOOD TESTER should have the qualities below:


=========
Conceptual:
ANALYTICAL
CREATIVE
PROBLEM SOLVING
BREAK-IT MENTALITY
MULTI-DIMENSIONAL THINKING

Practical:
Review functional specs and design specs
Develop the test plan
Develop test automation
Develop test cases
Find bugs early
Considerations:
WHAT AM I GOING TO TEST?
HOW AM I GOING TO TEST IT?
Testing exercises the behaviour of the product under different test conditions.

GOOD TEST CASE



COVERS ALL AREAS OF TESTING
INDEPENDENCE from other test cases
LOCALISATION ENABLED
PRECONDITIONS
TITLE
PURPOSE
Should DETECT BUGS

Automation vs manual

Manual testing:
Testing with manual intervention.

Automation testing:
Testing without manual intervention, using automation tools or code.

Advantages and disadvantages of manual testing/automation:
Cost of manual resources
A larger number of human resources is required
Results may contain errors in manual testing
Testing takes more time
No error handling in manual testing
Regression testing is not easy
Testing on a large number of platforms is not easily possible
More risk in manual testing
No scalability
Not reliable
===============================================
Advantages and disadvantages of automation testing:
Whether to automate depends on the testing requirements. Not all test cases need to be tested using automation tools; some test cases have to be tested manually. Automation testing is used when we have to perform functionality testing with multiple data sets or when we have to perform regression testing. The start and end of automation testing depend on factors such as the length, duration, and cost of the project, the risks of the project, etc.

Testing life cycle



Test plan:
Steps in the test plan:
Test design
Test case development
Test execution
Identifying bugs
Bug tracking
Validation

Requirements analysis: Testing should begin in the requirements phase of the software development life cycle. During the design phase, testers work with developers to determine what aspects of a design are testable and with what parameters those tests work.
Test planning: Test strategy, test plan, and testbed creation. Many activities will be carried out during testing, so a plan is needed.
Test development: Test procedures, test scenarios, test cases, and test scripts to use in testing the software.
Test execution: Testers execute the software based on the plans and tests and report any errors found to the development team.
Test reporting: Once testing is completed, testers generate metrics and make final reports on their test effort and on whether or not the software tested is ready for release.

How do you decide when you have 'tested enough'? Exit criteria?


Common factors in deciding when to stop are:
Deadlines (release deadlines, testing deadlines, etc.)
Test cases completed with a certain percentage passed
Test budget depleted
Coverage of code/functionality/requirements reaches a specified point
Bug rate falls below a certain level
Beta or alpha testing period ends

Code coverage:
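Code coverage measures how much of the code is exercised by the tests (statement coverage, branch coverage, etc.) and is one common exit criterion. A minimal sketch, assuming a trivial hypothetical function; a tool such as coverage.py can report the percentages when the tests are run.

```python
# Hypothetical function under test.
def classify(age):
    if age < 0:
        raise ValueError("age cannot be negative")
    if age < 18:
        return "minor"
    return "adult"

def test_adult_only():
    # Covers only the final return: some statements and branches stay untested.
    assert classify(30) == "adult"

def test_minor_and_invalid():
    # Together with test_adult_only, every statement and branch is exercised.
    assert classify(5) == "minor"
    try:
        classify(-1)
    except ValueError:
        pass
```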

validation and verification ? (not that imp)


Verification is done by frequent evaluation and meetings to appraise the documents, policy, code, requirements, and specifications. This is done with checklists, walkthroughs, and inspection meetings.
Validation is done during actual testing and takes place after all the verifications are done.
Software verification and validation: software testing is used in association with verification and validation:[5]
Verification: Have we built the software right (i.e., does it match the specification)?
Validation: Have we built the right software (i.e., is this what the customer wants)?

What is Traceability Matrix ? (not that imp)
A Traceability Matrix is a document used for tracking requirements, test cases, and defects. This document is prepared to satisfy the client that the coverage is complete end to end. It contains the Requirement/Baseline doc reference number, Test case/Condition, and Defect/Bug ID. Using this document a person can track a requirement based on the defect ID.
============================================================

What is AUT ?(not that imp)

AUT is nothing but "Application Under Test". After the design and coding phases in the software development life cycle, the application comes in for testing; at that point it is referred to as the Application Under Test.
What is Defect Leakage?
Defect leakage occurs at the customer or end-user side after the application is delivered. If, after the release of the application to the client, the end user finds any defects while using the application, it is called defect leakage. Defect leakage is also called a bug leak.

TESTING CAN BE DONE IN THE FOLLOWING stages:



Specification and requirement planning stage
Design stage
Coding stage
Release stage
Environments: development environment, integration environment, staging (before shipping) environment, and production testing

Testing can be done on the following levels: (very IMp)


Levels of testing

Unit testing

 tests the minimal software component, or module. Each unit (basic component) of the software is tested to verify that the detailed design for the unit has been correctly implemented. In an object-oriented environment, this is usually at the class level, and the minimal unit tests include the constructors and destructors.[19]
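A minimal unit test sketch using Python's unittest module; the Stack class is a hypothetical unit standing in for a real module or class.

```python
import unittest

class Stack:
    """Hypothetical minimal unit under test."""
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

class StackTest(unittest.TestCase):
    def test_push_then_pop_returns_last_item(self):
        stack = Stack()
        stack.push(1)
        stack.push(2)
        self.assertEqual(stack.pop(), 2)

    def test_pop_on_empty_stack_raises(self):
        with self.assertRaises(IndexError):
            Stack().pop()

if __name__ == "__main__":
    unittest.main()
```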


Integration testing 

exposes defects in the interfaces and interaction between integrated components (modules). Progressively larger groups of tested software components corresponding to elements of the architectural design are integrated and tested until the software works as a system. [20]

System testing

 tests a completely integrated system to verify that it meets its requirements.[21]
System integration testing verifies that a system is integrated to any external or third party systems defined in the system requirements.[citation needed]

Component testing


=====================================

Other types of testing:

alpha and beta testing


Before shipping the final version of software, alpha and beta testing are often done additionally in a lab environment
Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing, before the software goes to beta testing.

Beta testing comes after alpha testing. Versions of the software, known as beta versions, are released to a limited audience outside of the programming team. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are made available to the open public to increase the feedback field to a maximal number of future users.

Testing done in a real-time environment:

Acceptance testing 

can be conducted by the end-user, customer, or client to validate whether or not to accept the product. Acceptance testing may be performed as part of the hand-off process between any two phases of development.


Regression testing


After modifying software, either for a change in functionality or to fix defects, a regression test re-runs previously passing tests on the modified software to ensure that the modifications haven't unintentionally caused a regression of previous functionality.
Regression testing can be performed at any or all of the above test levels. These regression tests are often automated.

Sanity and smoke testing


More specific forms of regression testing are known as sanity testing, when quickly checking for bizarre behaviour, and smoke testing when testing for basic functionality.

Risk based testing 

is basically a testing done for the project based on risks. Risk based testing uses risk to prioritize and emphasize the appropriate tests during test execution. In simple terms – Risk is the probability of occurrence of an undesirable outcome. This outcome is also associated with an impact.

Fuzz testing:
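Fuzz testing feeds random or malformed inputs to the software and watches for crashes or unexpected errors. A minimal sketch, assuming a hypothetical parse_amount function; real fuzzers are far more sophisticated.

```python
import random
import string

def parse_amount(text):
    """Hypothetical function under test: parses a currency amount."""
    return round(float(text.strip().lstrip("$")), 2)

def random_input(max_len=12):
    alphabet = string.printable
    return "".join(random.choice(alphabet) for _ in range(random.randint(0, max_len)))

def fuzz(iterations=1000):
    for _ in range(iterations):
        data = random_input()
        try:
            parse_amount(data)
        except ValueError:
            pass  # rejecting bad input is acceptable behaviour
        except Exception as exc:  # anything else is a potential bug
            print(f"Unexpected {type(exc).__name__} for input {data!r}")

if __name__ == "__main__":
    fuzz()
```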


Ad hoc testing:

Random, informal testing performed without predefined test cases.

Types of test case techniques:


1. Deriving test cases directly from a requirement specification or black box test design technique. The Techniques include:
·        Boundary Value Analysis (BVA)
·        Equivalence Partitioning (EP)
·        Decision Table Testing
·        State Transition Diagrams
·        Use Case Testing
2. Deriving test cases directly from the structure of a component or system:
·        Statement Coverage
·        Branch Coverage
·        Path Coverage
·        LCSAJ Testing
3. Deriving test cases based on the tester's experience with similar systems or the tester's intuition:
·        Error Guessing
·        Exploratory Testing
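As an illustration of the black-box techniques listed above, here is a minimal decision-table test sketch for a hypothetical discount rule; the rule, values, and function names are assumptions for illustration, not from the original text.

```python
# Decision-table testing for a hypothetical discount rule:
#   member?  coupon?  -> discount
#   yes      yes         20%
#   yes      no          10%
#   no       yes          5%
#   no       no           0%
def discount(is_member, has_coupon):
    if is_member and has_coupon:
        return 0.20
    if is_member:
        return 0.10
    if has_coupon:
        return 0.05
    return 0.0

# One test case per column (rule) of the decision table.
DECISION_TABLE = [
    (True,  True,  0.20),
    (True,  False, 0.10),
    (False, True,  0.05),
    (False, False, 0.0),
]

def test_discount_decision_table():
    for is_member, has_coupon, expected in DECISION_TABLE:
        assert discount(is_member, has_coupon) == expected
```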
test scenarios
=========
A scenario is like a story.
Example ATM testing scenario:
Insert the card into the ATM machine, sign in to the account with credentials, select the withdrawal action, select the account to withdraw from, and select the amount to withdraw (constraints apply: the user can only withdraw between the minimum and maximum amounts in a day). Confirm the withdrawal amount, select the option for a receipt, withdraw the cash, and sign out.

Test methodologies:

Black box testing:

Testing without knowing the developer's code is black box testing. Giving input and checking the output is black box testing.

white box testing :

Testing the code itself is white box testing.

UI testing:

User interface
Checking the functionality of: check boxes, list boxes, text boxes, labels, buttons, hyperlinks, text, pictures, navigation, menus, drop-down menus, tabs, and scrolling.

Challenges in UI testing/automation: pop-ups, timeouts, mouse hover, and disappearing UIs are difficult to automate.

Api testing :

Testing an interface's methods is API testing.
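A minimal API test sketch using only the Python standard library; the endpoint and response fields are hypothetical assumptions, not from the original text.

```python
import json
import urllib.request

BASE_URL = "https://api.example.com"   # hypothetical service under test

def test_get_user_returns_expected_fields():
    # Call the interface method (endpoint) and verify the contract,
    # not the implementation behind it.
    with urllib.request.urlopen(f"{BASE_URL}/users/42") as response:
        assert response.status == 200
        body = json.loads(response.read())
    assert body["id"] == 42
    assert "email" in body
```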
web site testing 

SQl dB Testing

Testing types:

·         Functional testing
·         Non-functional:
1.       Performance testing: load, stress, scalability, reliability
2.       Security
3.       Compatibility
4.       Usability testing
5.       Accessibility testing
6.       Configuration testing
7.       Logo, privacy policy, and EULA testing
8.       Media testing (DVDs which need to be shipped)
9.       Localization testing
10.   Internationalization


Functional:

Valid/invalid matrix.

Boundaries:


Testing 0, null, and empty values.
Max, max+1, max-1 cases.
Min, min+1, min-1 cases.
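A sketch of the boundary cases above as a parametrized test, assuming pytest is available and a hypothetical name field limited to 1 to 100 characters; the limits and validator are assumptions for illustration.

```python
import pytest   # assumes pytest is available

MIN_LEN, MAX_LEN = 1, 100   # hypothetical limits for a name field

def is_valid_name(value):
    """Hypothetical validator under test."""
    return isinstance(value, str) and MIN_LEN <= len(value) <= MAX_LEN

@pytest.mark.parametrize("value, expected", [
    (None, False),                 # null
    ("", False),                   # empty / min - 1
    ("a", True),                   # min
    ("ab", True),                  # min + 1
    ("x" * (MAX_LEN - 1), True),   # max - 1
    ("x" * MAX_LEN, True),         # max
    ("x" * (MAX_LEN + 1), False),  # max + 1
])
def test_name_length_boundaries(value, expected):
    assert is_valid_name(value) == expected
```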

Performance testing:

Measuring performance is performance testing: load testing, stress testing.

Load Testing

This is the simplest form of performance testing. A load test is usually conducted to understand the behavior of the application under a specific expected load. This load can be the expected number of concurrent users on the application performing a specific number of transactions within the set duration. This test will give the response times of all the important business-critical transactions. If the database, application server, etc. are also monitored, then this simple test can itself point towards bottlenecks in the application.
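A minimal load-test sketch that fires a fixed number of concurrent requests at a hypothetical URL and reports response times; dedicated tools such as JMeter or LoadRunner do far more (ramp-up, think time, detailed reporting), so this is only an illustration of the idea.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://www.example.com/"   # hypothetical endpoint under load
CONCURRENT_USERS = 20
REQUESTS_PER_USER = 5

def one_request(_):
    """Issue one request and return its duration in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

def run_load_test():
    durations = []
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        for d in pool.map(one_request, range(CONCURRENT_USERS * REQUESTS_PER_USER)):
            durations.append(d)
    durations.sort()
    print(f"requests: {len(durations)}")
    print(f"avg: {sum(durations) / len(durations):.3f}s")
    print(f"p95: {durations[int(len(durations) * 0.95)]:.3f}s")

if __name__ == "__main__":
    run_load_test()
```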

 Stress Testing

This testing is normally used to break the application. Double the number of users is added to the application and the test is run again until the application breaks down. This kind of test is done to determine the application's robustness in times of extreme load and helps application administrators determine whether the application will perform sufficiently if the current load goes well above the expected load.

Endurance Testing (Soak Testing)

This test is usually done to determine if the application can sustain the continuous expected load. Generally this test is done to determine if there are any memory leaks in the application.

Spike Testing

Spike testing, as the name suggests, is done by spiking the number of users and observing whether the application will go down or will be able to handle dramatic changes in load.

Prerequisites for Performance Testing

A stable build of the application which resembles the production environment as closely as possible. The performance testing environment should not be clubbed with the UAT or development environment. If UAT, integration, or other testing is going on in the same environment, the results obtained from the performance testing may not be reliable. As a best practice it is always advisable to have a separate performance testing environment resembling the production environment as much as possible.

Conclusion

Performance testing is evolving as a separate science, with a number of performance testing tools such as HP's LoadRunner, JMeter, OpenSTA, WebLoad, and SilkPerformer. Each of the tests is done catering to the specific requirements of the application.

Myths of Performance Testing

Some very common myths are given below.
1. Performance testing is done to break the system. Stress testing is done to understand the break point of the system; normal load testing is generally done to understand the behavior of the application under the expected user load. Depending on other requirements, such as an expectation of spike load or continued load for an extended period of time, spike, endurance (soak), or stress testing would be needed.
2. Performance testing should only be done after system integration testing. Although this is mostly the norm in the industry, performance testing can also be done while the initial development of the application is taking place. This approach is known as Early Performance Testing. It ensures a holistic development of the application keeping the performance parameters in mind, so the cost of finding and fixing a performance bug just before the release is greatly reduced.
3. Performance testing only involves creation of scripts, and any application change would cause only simple refactoring of the scripts. Performance testing is itself an evolving science in the software industry. Scripting, although important, is only one of the components of performance testing. The major challenge for any performance tester is to determine the types of tests that need to be executed and to analyze the various performance counters to determine the performance bottleneck. The other part of the myth, that a change in the application would result only in a little refactoring of the scripts, is also untrue: any change to the UI, especially in the Web protocol, can entail complete redevelopment of the scripts from scratch. This problem becomes bigger if the protocols involved include Web Services, Siebel, Web Click and Script, Citrix, or SAP.
==========================================================

Compatibility testing

Your customer base uses a wide variety of OSs, browsers, databases, servers, clients, and hardware. Different versions, configurations, display resolutions, and Internet connection speeds can all impact the behavior of your product and introduce costly and embarrassing bugs, so compatibility is best tested using real test environments (not just virtual systems).
Compatibility testing, part of software non-functional testing, is testing conducted on the application to evaluate the application's compatibility with the computing environment. The computing environment may contain some or all of the elements below:
Computing capacity of the hardware platform (IBM 360, HP 9000, etc.)
Bandwidth handling capacity of networking hardware
Compatibility of peripherals (printer, DVD drive, etc.)
Operating systems (MVS, UNIX, Windows, etc.)
Database (Oracle, Sybase, DB2, etc.)
Other system software (web server, networking/messaging tools, etc.)
Browser compatibility (Firefox, Netscape, Internet Explorer, Safari, etc.)
Carrier compatibility (Verizon, Sprint, Orange, O2, AirTel, etc.)
Backwards compatibility
Hardware (different phones)
Different compilers (compile the code correctly)
Runs on multiple hosts/guests and emulators, with no conversions required and agreeable behaviour

Security Testing:


Authorization
Authentication
Buffer overflow

Usability:


Users are able to use the software.

Accesibilty:


Able to access the software using the keyboard and mouse.

Configuration:

Set up/Installation

Localization:


Is the software supporting different languages (e.g., 18 languages)?

Internationalization:


Date formats 

==============================================================

What is BUG?


When the actual result is not the same as the expected result, that is a defect/bug.


What is Bug Life Cycle? 


The Bug Life Cycle is nothing but the various phases a bug undergoes after it is raised or reported:
New or Opened
Assigned
Fixed
Tested
Closed


Bug Severity:

How much impact the bug has on the customer.
Severity indicates how bad the bug is and the degree of impact when the user encounters it.
1) System crash, data loss, data corruption, security breach
2) Operational error, wrong result, loss of functionality
3) Minor problem, misspelling, UI layout issue, rare occurrence
4) Suggestion

Bug Priority:

How soon the bug has to be fixed.
Priority indicates how much emphasis should be placed on fixing the bug and the urgency of making the fix.
1) Immediate fix, blocks further testing, very visible
2) Must fix before the product release
3) Fix when time permits
4) Would like to fix, but the product can be released as is

 Bug report

Title: Open network properties
Path: team path file://ghhg/
Status: active; sub-status: active
Assigned to: rema
Issue type: button action disabled
Build: 000056 (Vista); branch: X
Source: test case
Processor: Intel; platform: Vista
Description: bug found while opening the window
Test case:
1. Right click My Network Places > Select Properties.
Expected result:
Verify: the Network Connections folder should be able to be opened and closed with no problem.
Tested on: all OS builds
Bug identified on: Windows Vista
Repro: test cases and steps
Files: attach files and screenshots

What are the contents in an effective Bug report?

Project, Subject, Description, Summary, Detected By (name of the tester), Assigned To (name of the developer who is supposed to fix the bug), Test Lead (name), Detected in Version, Closed in Version, Date Detected, Expected Date of Closure, Actual Date of Closure, Priority (Medium, Low, High, Urgent), Severity (ranges from 1 to 5), Status, Bug ID, Attachment, Test Case Failed (the test case that failed for the bug).

TiP: Testing In Production


A little while ago I wrote about how QA fits into the DevOps culture. The basic idea of that post was that QA professionals’ jobs are changing, especially when it comes to cloud or web-based apps. Instead of finding bugs in a particular release of software, the job of a tester is to be the guardian and steward of the entire development process, ensuring that defects are identified and removed before they get to the production environment.
That’s why TiP is so important. It’s not that it takes the place of traditional testing, but rather it enhances it with a set of test procedures that just make sense to do in the production environment. It can be very difficult to create and maintain a test environment that’s truly an exact clone of production – so much so that there is a class of tests that simply don’t make sense to execute in any environment other than production.
TiP provides a structured way of conducting tests using the live site and real users – because for those tests, that is the only way of getting meaningful results.
There are a number of different types of TiP that any software tester should know about. Here’s a summary of some of the most important ones.

Canary Testing

Back in the days before PETA, coal miners would bring a caged canary into the mines with them. If there was a sudden expulsion of poisonous gas like methane, the fragile canary would succumb before the humans, providing an early warning system for the miners. Put simply, Dead Bird = Danger.
In TiP, Canary Testing refers to the process of deploying new code to a small subset of your production machines before releasing it widely. It’s kind of like a smoke test for SaaS. If those machines continue to operate as expected against live traffic, it gives you confidence that there is no poisonous gas lurking, and you can greenlight a full deployment.

Controlled Test Flight

In a Canary Test you are testing hardware, but in a Controlled Test Flight you are testing users. In this kind of TiP, you expose a select group of real users to software changes to see if they behave as expected. For example, let’s say your release involves a change to your app’s navigation structure. You’ve gone through your usability tests, but want to do a little better than that before everyone sees the change.
That’s where a Controlled Test Flight comes in. You make the change but only expose it to a specific slice of your users. See how they behave. If things go as expected, you can roll the change out to the wider audience.

A/B Split Testing

Sometimes you aren’t exactly sure what users will prefer, and the only way to know is to observe their behavior. A/B Split Tests are very common in web-based apps because it’s a great way to use behavioral data to make decisions. In this case, you are developing two (or more) experiences – the “A” experience and the “B” experience – and exposing an equivalent set of users to each experience. Then you measure the results.
A/B Testing is an incredibly powerful tool when used properly, because it truly allows a development organization to follow its users. It does involve more work and coordination, but the benefits can be substantial when done properly.
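As an illustration of how an A/B split might work in practice, here is a sketch of deterministic bucket assignment: hashing the user id keeps each user in the same experience across visits. The function and experiment name are hypothetical assumptions, not from the original text.

```python
import hashlib

def assign_variant(user_id, experiment="new_checkout", split=0.5):
    """Deterministically place a user into the 'A' or 'B' experience.

    Hashing user_id + experiment name keeps the assignment stable across
    visits and independent between experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF   # map the hash to the range [0, 1]
    return "A" if bucket < split else "B"

# Example: the same user always lands in the same bucket.
assert assign_variant("user-123") == assign_variant("user-123")
```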

Synthetic User Testing

Synthetic user testing involves the creation and monitoring of fake users which interact with the real site. These users operate against predefined scripts to execute various functions and transactions within the web app. For example, they could visit the site, navigate to an e-commerce store, add some items to their cart, and check out. As this script executes, you keep track of relevant performance metrics for the synthetic user so you know what kind of end-user experience your real users are having.
Synthetic monitoring, using a product like NeoSense, is a key component of any website’s application performance monitoring strategy.

Fault Injection

Here’s an interesting, and perhaps unsettling, idea: create a problem in your production environment, just to see how gracefully it’s handled. That’s the idea behind fault injection. You have built all this infrastructure to make sure that you are protected from specific errors. You should actually test those processes.
Netflix is famous among testing circles for its Chaos Monkey routine. This is a service that will randomly shut down a virtual machine or terminate a process. It creates errors that the service is supposed to be able to handle, and in the process has drastically improved the reliability of the application. Plus, it keeps the operational staff on its toes.
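Chaos Monkey operates at the infrastructure level; as a miniature illustration of the same idea only, the sketch below randomly fails a fraction of calls to a hypothetical downstream function so the caller's error handling can be exercised. All names here are assumptions for illustration.

```python
import random
from functools import wraps

def inject_faults(failure_rate=0.05, exc=ConnectionError):
    """Randomly raise `exc` for a fraction of calls to the wrapped function."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            if random.random() < failure_rate:
                raise exc("injected fault")
            return func(*args, **kwargs)
        return wrapper
    return decorator

@inject_faults(failure_rate=0.2)
def fetch_recommendations(user_id):
    # Hypothetical downstream call; the caller must tolerate failures.
    return ["item-1", "item-2"]

def get_recommendations_safely(user_id):
    try:
        return fetch_recommendations(user_id)
    except ConnectionError:
        return []   # graceful degradation: fall back to an empty list

# A few calls succeed, a few return the fallback, and nothing crashes.
print([len(get_recommendations_safely("u1")) for _ in range(10)])
```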

Recovery Testing

Similarly to fault injection, you want to know that your app and organization can recover from a bad problem when it’s called for. There are procedures that are rarely tested in production environments, like failing over to a secondary site or recovering from a previous backup. Recovery testing exercises these processes.
Run fire drills for your app. Select a time when usage is low and put your environment through the paces that it is supposedly designed to handle. Make sure that your technology and your people are able to handle real problems in a controlled way, so you are confident they will be handled properly when it’s truly a surprise.

Data Driven Quality

Finally – and this may go without saying – put in place systems that will help your QA team receive and review operational data to measure quality. Make sure that testers have access to logs, performance metrics, alerts, and other information from the production environment, so they can be proactive in identifying and fixing problems.

Conclusion

Testing in Production can be an extremely valuable tool in your QA arsenal, when used properly. Sure, there are always risks of testing with live users, but let’s face it – there are risks to NOT testing with live users as well. However, if you build the right procedures, TiP can result in a huge boost to your app’s overall quality.

One of the most useful resources for a test engineer dealing with web services is the production environment. This is the live environment that exposes the product to end users. Some of the challenges that the production environment provides us are the following:

How do we know that software that works in a developer box or on a test lab will work in a production environment?
What information can we gather from production that will help us release a higher quality product?
How do we detect and react to issues found after a software upgrade?
In this blog post, we will look at some of the strategies that can be used to improve quality by incorporating the production environment and production data into our testing.

 Smoke Testing in production


Some bugs appear more readily in production due to discrepancies between the test and live environments.  For example, the network configuration in a test environment might be slightly different from the live site, causing calls between datacenters to fail unexpectedly.  One possible way to identify issues like this would be to perform a full test pass on the production environment for every change that we want to make.  However, we don't want to require running a full suite of tests before every upgrade in the live environments since this would be prohibitively time-consuming.  Smoke tests are a good compromise, as they give us confidence that the core features of the product are working without incurring too high of a cost.

A smoke test is a type of test that performs a broad and shallow validation of the product. The term comes from the electronics field, where after plugging in a new board, if smoke comes out, we cannot really do any more testing. During daily testing, we can use smoke tests as a first validation that the product is functional and ready for further testing.  Smoke tests also provide a quick way to determine if the site is working properly after deploying an update.  When we release an update to our production environment we generally perform the following steps to validate that everything went as planned:

Prior to updating the site, we run some tests against the current version.  Our goal is to make sure that the system is healthy and our tests are valid before starting the upgrade.
We then update a subset of the production site. Preferably, this portion will not be available to end users until we complete the smoke test.
Next, we run the tests against the updated portion of the site. It is important to have clarity on which version the tests are running against. We should have a clean pass of the smoke tests before we proceed.  If we encounter problems, we can compare the results pre- and post- upgrade to help focus the troubleshooting investigation.
Continue the rollout to the rest of the production environment.
Finally, run the tests again to validate the entire site is working as expected.

The tests used for smoke testing should have the following qualities:

Smoke tests need to be very reliable, as a false positive may cause either unnecessary false alarms or a loss of trust in the smoke test suite.
Smoke tests need to be very fast.  The main point of smoke testing is to quickly identify problems, and long-running tests can either delay updates to the site or potentially allow users to access buggy code before the tests catch it.
Smoke tests need to be good at cleaning up after running. We need to avoid having test data mixed in with real customer data since tests can potentially create data that isn’t intended to be processed in production.

Windows Live uses an automated smoke test tool which is able to validate the service within a few minutes. The same utility is used in developer boxes, test environments, and production, and is consistently updated as new features are added to the system.
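A minimal sketch of what an automated production smoke check might look like, assuming hypothetical health endpoints; the real tooling described above is far more complete.

```python
import sys
import urllib.request

# Hypothetical endpoints that must respond for the site to be considered healthy.
SMOKE_CHECKS = [
    "https://service.example.com/health",
    "https://service.example.com/api/version",
    "https://service.example.com/login",
]

def run_smoke_tests():
    """Return a list of failure descriptions; an empty list means all checks passed."""
    failures = []
    for url in SMOKE_CHECKS:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status != 200:
                    failures.append(f"{url}: HTTP {resp.status}")
        except Exception as exc:
            failures.append(f"{url}: {exc}")
    return failures

if __name__ == "__main__":
    problems = run_smoke_tests()
    for p in problems:
        print("SMOKE FAILURE:", p)
    sys.exit(1 if problems else 0)   # a non-zero exit code blocks the rollout
```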

Reacting to issues through data collection and monitoring

Even though we may have done thorough functional validation, shipping a new feature to production always implies a risk that things may not work as intended.  Logging and real-time monitoring are tools that help us in this front.  Before shipping a new feature to production, try to answer the following questions. This will give you a sense of readiness for handling issues:

How will you know that users are having issues with the feature?  Will you mostly rely on user feedback, or will you be able to detect and measure failures yourself? Will the people running the site be able to tell something is wrong?
If a user raises an issue, what are the resources that you will have available to investigate? Will you require the user to collect detailed logs?
In the event of an issue, are your test libraries prepared for quickly building up a test scenario based on a user's feedback?  The ability to craft tests based on logs and the user’s repro steps generally indicates how long it will take for someone to reproduce the issue and validate a fix, which has a direct impact on the time to resolution.
Some of the strategies that Windows Live uses to allow quicker reaction to issues are the following:
We allow user-initiated log collection, both on the client and on the server side. Taking the product group out of the critical path when collecting data saves the team a significant amount of time and effort.
We support using the user logs to craft tests for reproducing an issue. Our tools take logs, remove any actual reference to the user's data contents, and replay the traffic.

Using production data as input for tests

The involvement of Test in production need not be limited to releasing a new feature or investigating an issue. Production contains a wealth of data that helps us better define what to test. The higher-priority tests are those that map to the core customer calling patterns, and for existing scenarios, production data is the best source. Some of the interesting questions that production data analysis is able to answer are the following:
What are the different kinds of users in the environment? What are the characteristics that identify them?
What are the most common calling patterns? Which ones most frequently cause errors?
Do the site’s traffic patterns indicate changes in user behavior?
Gathering and analyzing data to answer the above and other questions is often non-trivial, but the resulting data is invaluable, particularly when deciding which areas should have a bigger focus when testing.
Within Windows Live, we have used this approach to understand both user scenarios and calling patterns. We measure some of the characteristics of the data (like how many folders a SkyDrive has, or how many comments photos typically have) to identify both common scenarios and outliers. This data lets us focus efforts like performance testing and stress testing on the most common scenarios, while ensuring that we have coverage on the edge cases.
When using production data in testing, the approach to privacy is extremely important and needs to be figured out before starting the work. Our tools only interact with abstractions of user data, with all actual user content and identity removed. We care about what the data looks like, not specifically what the data is.
In conclusion, the effectiveness of a test engineer can be enhanced by using production as a source of information. It may be by making sure that all the core scenarios work as expected through smoke testing, by creating a quick mechanism for reacting to issues, or by harvesting data to feed into test tools and plans.

What Does Test-Driven Development Mean for Performance Testers?

“Begin with the end in mind.”
You must have heard that phrase, right? It’s a common one that’s led many people to great success, not just in agile, but all throughout history. In fact, it’s habit #2 in Stephen Covey’s best-selling book The 7 Habits of Highly Effective People.
Starting with the end in mind is what Olympic athletes do when they visualize their gold medals. Musicians do it when they envision their perfect performance before stepping on stage. Architects have a full picture of the completed skyscraper in their head before ground is broken. If you know what you are aiming for, in detail, you’ll find it much easier to achieve.
That’s exactly the philosophy behind Test-Driven Development, or TDD. Before you start coding business logic, you write a test. It’s almost like a detailed specification for the module you are creating, except it’s produced as a set of functions and gates that the module will have to pass through to confirm it is working as expected. When you initially run the test, it’ll naturally fail because your code doesn’t yet do anything. However, once the test passes, you know you’ve built what you needed to build.
TDD is there to make sure you don’t overbuild. You control costs, you increase efficiency, and you build quality into the product from the beginning. You only create what’s needed to pass the test – no more, no less. You eliminate all that wasteful junk that ends up never getting used (or at least a lot of it). For most people, TDD is a great way to ensure that your app delivers on the functionality it needs to.

Can TDD Apply To Performance?

Test-Driven Development is a great tool for functional testing, but can you apply the same technique to performance testing?
Why not?
The purpose of TDD is to build out small unit tests, or scenarios, under which you control your initial coding. Your tests will fail the first time you run them because you haven’t actually developed any code. But once you do start coding, you’ll end up with just enough code to pass the test.
There’s no reason the same philosophy can’t be applied to performance testing. You can develop performance tests that stress algorithms and exercise code at the unit-level, just like functional tests do. Unit performance tests give you a baseline confidence in your core algorithms. They force developers to think about how their code behaves under stress at the time that code is being written.
Just like with functional testing, TDD helps to eliminate big, systemic problems that may appear later on. The process just requires the forethought involved in careful scenario planning.

Performance TDD Is a Good Start, But It’s Not Everything

Let’s face it: even if your app passed all its TDD-based unit tests for functional testing, you still wouldn’t feel confident that it was flawless. No matter what you were doing for TDD, you’d still create larger functional tests, integration tests, end-user tests, and a whole host of other tests. You’d have a suite of different functional test methods you’d bring to bear to make sure your app was ready for users to attack it.
The same thing is true for load & performance testing. Just because you incorporate Performance Test-Driven Development, doesn’t mean all your performance issues are solved. You’ll still need to coordinate your large, integrated load tests to push your algorithms to their breaking points.
But with Performance TDD, you’ll have much more confidence in your product’s ability to pass those tests at the get-go.

Some Tricks for Performance TDD

If you want to incorporate Performance TDD into your development process, here are a few tips that may come in handy:
1.       Create small-batch load tests that can stress small components. As you start planning your module, think about how that module would be stressed. What algorithms are most likely to limit scalability? Is there an opportunity for resource contention? Do you have queries that could impact performance if they get too large or complicated? Then create your test scenarios specifically to stress the component in these ways. As you and your team work through more and more of the product, you’ll end up building an amazing library of load test scenarios that you can leverage in lots of interesting ways moving forward.
2.       Don’t apply TDD to optimizations, instead just use it for base-level performance. The job of a performance engineer is often focused on optimizing code that’s already been written by someone. TDD isn’t really going to be much help here. Remember, TDD is best leveraged at the beginning of the code-writing process. So that’s where you should start. As your product matures, it’s completely appropriate to make your load tests incrementally more demanding (a certain function must return in 2 seconds instead of 4 seconds), but that may not always be the ideal place to focus because scaling problems are often driven by more complex interactions that are best suited for different kinds of test methodologies.
3.       Automate. TDD and automation go hand-in-hand, and you should approach it that way from the beginning. Think about how to automate your TDD from the moment you start doing it, like any modern development method. By doing this, you’ll be able to easily plug your performance testing into other automated processes like continuous integration, but you’ll also end up with a number of micro-load test scenarios that can be strung together into a larger automated test scenario. This gives you a ton of leverage.
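As a concrete illustration of the practice described above, here is a sketch of a unit-level performance test written alongside the code it exercises: a hypothetical de-duplication function must stay under an assumed time budget for a representative input size. The function, input size, and budget are all assumptions for illustration.

```python
import time

def dedupe_preserving_order(items):
    """Hypothetical unit under test: remove duplicates, keep the first occurrence."""
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result

def test_dedupe_meets_time_budget():
    data = list(range(50_000)) * 4          # representative "stress" input
    start = time.perf_counter()
    result = dedupe_preserving_order(data)
    elapsed = time.perf_counter() - start
    assert result == list(range(50_000))
    # The budget is an assumption chosen for illustration; tighten it as the product matures.
    assert elapsed < 0.5, f"dedupe took {elapsed:.3f}s, budget is 0.5s"
```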

Conclusion

As we’ve said before, the more you can plug performance testing into your standard development process, the more effective you’ll be as a team, and the more successful your application will become. If your team operates using Functional TDD, you’ll definitely find value enhancing that practice with Performance TDD.

Pair programming 

(sometimes referred to as peer programming) is an agile software development technique in which two programmers work together as a pair on one workstation. One, the driver, writes code while the other, the observer, pointer, or navigator,[1] reviews each line of code as it is typed in. The two programmers switch roles frequently.
While reviewing, the observer also considers the "strategic" direction of the work, coming up with ideas for improvements and likely future problems to address. This frees the driver to focus all of his or her attention on the "tactical" aspects of completing the current task, using the observer as a safety net and guide.

Data driven testing:

Data-driven testing (DDT) is a term used in the testing of computer software to describe testing done using a table of conditions directly as test inputs and verifiable outputs, as well as the process where test environment settings and control are not hard-coded.
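A minimal data-driven test sketch: the data lives in a table (inline here, but it could equally be an external CSV, Excel sheet, or database) and a single test routine is driven by it. The conversion function and data values are hypothetical.

```python
import csv
import io

def convert_temperature(value, unit):
    """Hypothetical function under test: Celsius <-> Fahrenheit."""
    if unit == "C_to_F":
        return value * 9 / 5 + 32
    if unit == "F_to_C":
        return (value - 32) * 5 / 9
    raise ValueError(f"unknown conversion {unit!r}")

# The data table would normally live in an external source,
# so new cases can be added without touching the test code.
TEST_DATA = """input,unit,expected
0,C_to_F,32
100,C_to_F,212
32,F_to_C,0
212,F_to_C,100
"""

def test_conversions_from_data_table():
    for row in csv.DictReader(io.StringIO(TEST_DATA)):
        actual = convert_temperature(float(row["input"]), row["unit"])
        assert abs(actual - float(row["expected"])) < 1e-6, row
```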

BVT


Build verification tests
Build and deployment

Continuous Integration:


Continuous Integration (CI) is a development practice that requires developers to integrate code into a shared repository several times a day. Each check-in is then verified by an automated build, allowing teams to detect problems early.

Testing challenges:


UI manual testing challenges:
Continuous UI changes
Pop-ups and UAC in Windows
Prompting windows
UI automation specific:
Automation IDs
Thread waits and timeouts
Unfamiliarity with tools and tool version changes
Third-party dependencies
Tool compatibility with software versions
Code maintenance

How to overcome UI challenges:


UI automation isn’t an easy task, as any UI feature test owner who is responsible for automating their areas can tell you. It is quite challenging in terms of getting more test coverage and higher test reliability from automation. Let's discuss these challenges in more detail.
From the test coverage perspective:
Each automated test consists of two parts: the code to drive the product and the code to validate the test result. Many product teams have a coverage goal of 70% or higher. Without help from developers' unit tests, it is actually quite difficult to achieve this goal by UI automation tests alone. Here are 3 main reasons: test hooks, result validation, and cost/complexity.
1) Some UI controls can't be seen (or found) by the UI automation framework being used. This is either due to some technical limitations of the UI automation framework or the accessibility support for the UI controls not implemented properly in the product code. Either way, this often prevents common tests from being automated until the problems are resolved.
2) Test validation often is not easy, especially for UI tests or UI changes that require manual verification from testers. Many teams are trying to solve this problem by developing tools or using techniques like screenshot comparison, video comparison, etc. But these tools can't replace human testers very well because they usually cause higher rates of false positive or false negative results in automation tests. As a result, many UI tests like these won't be automated.
3) Complex UI test scenarios have more steps involved, which potentially adds more cost to the automation effort and may introduce more instability to the tests. From an efficiency perspective, sometimes it is much cheaper to run the tests manually rather than spending time automating them, especially when there is a short product cycle. Also, it is difficult to manually trigger most error messages and difficult to automate them. For this reason, we are not able to achieve the test coverage we want from our automated tests.
From the pass rate (reliability) perspective:
Ideally, tests fail when the product breaks. But this is not always the case for complex projects. Sometimes no matter how hard they try, test owners still need to spend time fixing their tests as part of their daily routine. This is, of course, a headache that they try to avoid. Unfortunately, there isn't a quick or easy solution to remedy this. Here is why:
1) Timing issues are one of the most common and annoying problems that we encounter in UI automation. The product code and the test code usually run in different processes, which sometimes results in their getting out of sync. It isn't easy to handle or debug a timing issue. Some testers run into situations where their test continues to fail although it has been fixed over and over again. Every time there is a UI state change in a product, it creates a potential timing issue for UI automation. To handle a timing issue properly in test code, waiters (UI event listeners), ETW events, polling, or even "sleeps" are often used. Since timing issues can be difficult to reproduce, it might take a few tries before the root cause is determined and the proper fix is applied.
2) Each UI automation test is designed and written based on the expected product behaviors, UI tree structure and control properties. So if a developer changes a product behavior or UI layout, or modifies a control property, that could easily break automated tests. When that happens, testers will need to spend time to debug and fix their test failures. If there is an intermittent issue in the product, debugging the test failure to figure out the root cause is even tougher. In this case, test logs, product logs, screenshots and videos for the failing test state are needed for debugging. Sometimes, additional test logging is required to help debug the problem. In the early product development stage where the product and test automation are not yet stabilized, the test code is always assumed "guilty" by default until proven "innocent" when there is a test failure.
3) We all know that there is no bug-free software out there. So it shouldn't be a surprise that issues from the operating system, network, backend services, lab settings, external tools, or other layers account for many automation test failures. Since those are external issues, testers have to implement less than ideal workarounds to fix their automation failures, which is very unfortunate. Please keep in mind that workarounds for external issues don't always work well because they often band aid problems only.
UI automation remains one of the biggest challenges and a significant cost for test teams in product groups in the upcoming years. Below are some proposed ideas on how to mitigate the UI automation challenges discussed above:
1) Choose a UI automation framework that fits your automation requirements. Four major criteria can be used when testers write prototype tests during the framework evaluation.
- Performance: how fast your tests can be run
- API Supports: how easily you can build your own OMs and tests and how many major test scenarios can be automated with the framework
- Reliability: how stable your tests can be (are your tests flaky because of the framework?)
- Maintainability: how easily you can update your existing test OMs when there is a UI/behavior change in the product
2) Work closely with your developers to add testability (such as accessibility supports and automation IDs for UI controls) needed by your automation. Searching for UI controls by their automation IDs is usually the preferred way because that can reduce some localization support for non-English builds.
3) Make sure to use a waiter or other waiting logic every time there is a UI state change or a timing delay in the UI after it is triggered by a test step (a minimal polling sketch appears after this list).
4) Understand your product behaviors and pay attention to detail. Again, UI automation tests are written based on expected product behaviors. In other words, any unexpected product behavior could potentially break your tests. Your tests will become more reliable if you can properly handle product behaviors in various situations.
5) The cost for UI automation is actually a long term investment and isn't cheap. So do not try to automate every test. We can't afford to do that anyway since we always have other tasks to do. Before you decide to automate a test, you should ask yourself a couple of questions first: is it worth the effort to automate this test? What is the gain vs. the cost? Basically, almost everything can be automated, the difference being the cost. Here are 3 areas that we can start first: a) BVTs (Build Verification Tests) and performance tests; b) common P1 test cases; c) test cases that provide high coverage with reasonable cost (under 2 days). After that, then we can continue to automate more test cases if time allows.
6) Find and leverage (maybe develop) more tools to help validate results for UI tests.
7) Identify the root cause of a test failure and fix it completely in the right place. Do not just band aid the problem. For example, try to remove or not use SLEEPs!
8) If a test failure is caused by an unexpected product behavior, push the development team to fix the issue. Do not try to work around the issue in test code as that will hide a product bug.
9) Write fewer automated tests and focus on achieving a higher test pass rate in the early stages of product development. That will help reduce the cost of test case maintenance. Once the product is stabilized, then write more test automation and focus on the code coverage goal.
10) Abstract test cases from UI changes to reduce the test maintenance cost. Please see the article on Object Model Design.
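As referenced in point 3 above, here is a minimal polling waiter sketch that can replace fixed sleeps; the UI framework calls in the usage comment are hypothetical and only illustrate how such a helper might be used.

```python
import time

def wait_until(condition, timeout=10.0, interval=0.25):
    """Poll `condition` until it returns a truthy value or the timeout expires.

    Using an explicit waiter instead of a fixed sleep keeps tests fast when
    the UI is quick and resilient when it is slow.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError(f"condition {condition.__name__} not met within {timeout}s")

# Hypothetical usage with a UI automation framework:
#   dialog = wait_until(lambda: app.find_control(automation_id="SaveDialog"))
#   dialog.click_button("OK")
```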