Tuesday, November 1, 2011

Testing Notes

1) Goal of testing?
Testing is done to improve the quality of the software before its release: verifying errors in the product, verifying that the product meets its specifications and requirements, and documenting inputs, expected results, and test conditions.
Purpose of testing: software testing is the process used to help identify the correctness, completeness, security, and quality of the developed computer software.
Software testing is the process of executing a program or system with the intent of finding errors. It is the process of checking software to verify that it satisfies its requirements and to detect errors.
==========================================================
2) Manual testing: testing with manual intervention.
3) Automation testing: testing without manual intervention, using automation tools.

==============================================
Advantages and disadvantages of manual vs. automation testing
Disadvantages of manual testing:
- cost of manual resources
- a larger number of human resources is required
- results may contain errors
- it takes more time
- no error handling
- regression testing is not easy
- testing on a larger number of platforms is not easily possible
- more risk
- no scalability
- not reliable
===============================================
Advantages and disadvantages of automation testing
Whether to automate depends on the requirements of the testing. Not all test cases need to be tested using automation tools; some test cases have to be tested manually. Automation testing is used when we have to perform functionality testing with multiple data sets, or when we have to perform regression testing. The start and end of automation testing depend on factors such as the length, duration, and cost of the project, the risks of the project, etc.
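The multiple-data case mentioned above is where automation pays off most: the same check is run over a table of inputs. Here is a minimal data-driven sketch in Python; the `add` function and its data table are made-up examples, not part of any real product:

```python
# A minimal data-driven (functionality) test: one check is run against
# many inputs, which is the case automation tools handle well.

def add(a, b):
    """Hypothetical function under test."""
    return a + b

# Each tuple is (input_a, input_b, expected_result).
test_data = [
    (1, 2, 3),
    (0, 0, 0),
    (-5, 5, 0),
    (100, 250, 350),
]

def run_tests():
    """Return a list of failing cases; empty means everything passed."""
    failures = []
    for a, b, expected in test_data:
        actual = add(a, b)
        if actual != expected:
            failures.append((a, b, expected, actual))
    return failures

if __name__ == "__main__":
    print("failures:", run_tests())  # an empty list means all cases pass
```

Adding a new case is one line in the table, which is exactly what makes re-running the suite for regression cheap.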
========================================================
4) Model-based testing: actions. Non-model-based testing: scenarios.
5) API testing: testing code applications.
6) UI (non-API) testing: testing a dialogue or window.
7) SQL database testing: backend testing.
8) Web testing: testing web applications.
Web services testing: testing services, e.g. WCF.
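API testing calls the code under test directly rather than driving a window. A small sketch, assuming a hypothetical `substring` API (the function and its error behaviour are illustrative, not from any real library):

```python
# API (non-UI) testing exercises a code-level function directly.

def substring(src, start, length):
    """Hypothetical API under test: return `length` characters of `src`
    beginning at `start`. Raises on invalid arguments."""
    if src is None or start < 0 or length < 0 or start >= len(src):
        raise ValueError("invalid arguments")
    return src[start:start + length]

# A functional case, a boundary case, and an invalid case.
assert substring("welcome to Hello world", 0, 3) == "wel"
assert substring("welcome", 0, 0) == ""       # zero length: empty result

try:
    substring("welcome", -1, 3)               # below the lower boundary
    raised = False
except ValueError:
    raised = True
assert raised

print("API tests pass")
```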

==========================================================
ROLES OF A TEST ENGINEER:
========================
- involved in writing the TEST PLAN
- TEST DESIGN: generating test cases
- TEST EXECUTION / TEST DEPLOYMENT
- identifying bugs, BUG TRACKING, and closing resolved bugs
==========================================================
GOOD TESTER:
=========
Conceptual:
- analytical
- creative
- problem solving
- "break it" mentality
- multi-dimensional thinking

Practical:
- review specs and design specs
- develop the test plan
- develop test automation
- develop test cases
- find bugs early.
========================================================
Considerations:
WHAT AM I GOING TO TEST?
HOW AM I GOING TO TEST IT?
Testing is to test the behaviour of the product under different test conditions.
=========================================================
GOOD TEST CASE

COVERS ALL AREAS OF TESTING
INDEPENDENCE
LOCALISATION ENABLED
PRECONDITIONS
TITLE
PURPOSE
DETECTS BUGS
=========================================================
SDLC:

analysis
design
Coding
Testing
Release
Maintenance
=============================================================
Testing cycle

Steps in the test plan:
- test design
- test case development
- test execution
- identifying bugs
- bug tracking
- validation
What is Bug Life Cycle?
Bug Life Cycle is nothing but the various phases a Bug undergoes after it is raised or reported.
New or Opened
Assigned
Fixed
Tested
Closed
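The phases above can be sketched as a small state machine. The transition map below is an assumption for illustration (real bug trackers add more states and transitions):

```python
# The bug life cycle as a set of allowed state transitions.
# State names follow the list above; the transition map is illustrative.

TRANSITIONS = {
    "New":      ["Assigned"],
    "Assigned": ["Fixed"],
    "Fixed":    ["Tested"],
    "Tested":   ["Closed", "Assigned"],  # reopen if the fix fails retest
    "Closed":   [],
}

def advance(current, target):
    """Move a bug to `target` only if the transition is allowed."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current} -> {target}")
    return target

# Walk one bug through the happy path.
state = "New"
for nxt in ["Assigned", "Fixed", "Tested", "Closed"]:
    state = advance(state, nxt)
print(state)  # Closed
```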
===========================================================
==========================================================
What is the difference between Bug, Error and Defect? (not that important)
Error: a deviation of the actual from the expected value.
Bug: found in the development environment before the product is shipped to the customer.
Defect: found in the product itself after it is shipped to the customer.
Why does software have bugs?
- Miscommunication or no communication about the details of what an application should or shouldn't do.
- Programming errors: in some cases the programmers make mistakes.
- Changing requirements: the end-user may not understand the effects of changes, or may understand and request them anyway; redesign, rescheduling of engineers, effects on other projects, and work already completed may have to be redone or thrown out.
- Time pressure: scheduling software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crunch comes, mistakes will be made.
Bugs are caused by errors introduced in the following stages.
==========
TESTING CAN BE DONE IN THE FOLLOWING STAGES:

- specification and requirements planning
- design stage: design bugs and gaps
- coding stage: coding bugs
- release stage: testing in the release environment

====================================================
How do you decide when you have "tested enough"?
Common factors in deciding when to stop are:
Deadlines (release deadlines, testing deadlines, etc.)
Test cases completed with a certain percentage passed
Test budget depleted
Coverage of code/functionality/requirements reaches a specified point
Bug rate falls below a certain level
Beta or alpha testing period ends
==================================================================
Describe the difference between validation and verification.
Verification is done by frequent evaluation and meetings to appraise the documents, policy, code, requirements, and specifications. This is done with checklists, walkthroughs, and inspection meetings.
Validation is done during actual testing, and it takes place after all the verifications are done.
Software testing is used in association with verification and validation:[5]
Verification: Have we built the software right (i.e., does it match the specification)?
Validation: Have we built the right software (i.e., is this what the customer wants)?
================================================================================================================================
What is a Traceability Matrix? (not that important)
A Traceability Matrix is a document used for tracking requirements, test cases, and defects. It is prepared to satisfy the client that coverage is complete end to end. The document consists of the Requirement/Baseline doc reference number, the Test case/Condition, and the Defect/Bug ID. Using this document, a person can track a requirement based on a defect ID.
============================================================
What is AUT? (not that important)
AUT is nothing but "Application Under Test". After the design and coding phases in the software development life cycle, the application comes in for testing; at that point it is referred to as the Application Under Test.
What is Defect Leakage?
Defect leakage occurs at the customer or end-user side after the application is delivered. If, after release to the client, the end user finds any kind of defect while using the application, it is called defect leakage. Defect leakage is also called bug leak.
What are the contents of an effective bug report?
Project, Subject, Description, Summary, Detected By (name of the tester), Assigned To (name of the developer who is supposed to fix the bug), Test Lead (name), Detected in Version, Closed in Version, Date Detected, Expected Date of Closure, Actual Date of Closure, Priority (Medium, Low, High, Urgent), Severity (ranges from 1 to 5), Status, Bug ID, Attachment, Test Case Failed (the test case that failed for the bug).
===========================================================================
What are error guessing and error seeding? (not that important)
Error Guessing is a test case design technique where the tester has to guess what faults might occur and to design the tests to represent them.
Error Seeding is the process of adding known faults intentionally in a program for the reason of monitoring the rate of detection & removal and also to estimate the number of faults remaining in the program.
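The seeding estimate works like this: if testing finds the same fraction of seeded faults as of real faults, then total real faults ≈ real faults found × seeded / seeded found. A small worked sketch (all numbers invented for illustration):

```python
# Estimating remaining faults from error seeding. The assumption is that
# the detection rate for seeded faults applies to real faults too.

def estimate_total_faults(seeded, seeded_found, real_found):
    """total real faults ~= real_found * seeded / seeded_found"""
    return real_found * seeded / seeded_found

seeded = 20        # faults injected on purpose
seeded_found = 15  # of those, how many testing detected (a 75% rate)
real_found = 30    # genuine faults detected by the same testing

total = estimate_total_faults(seeded, seeded_found, real_found)
remaining = total - real_found
print(total, remaining)  # 40.0 10.0
```

So if testing caught 75% of the planted faults, it probably caught about 75% of the real ones, leaving roughly 10 undetected.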
What are a test bed and test data?
A test bed is an execution environment configured for software testing. It consists of specific hardware, network topology, operating system, configuration of the product under test, system software, and other applications. The test plan for a project should be developed from the test beds to be used.
Test data is data run through a computer program to test the software. Test data can be used to test compliance with effective controls in the software.
============================================================
What is the difference between QA and testing? (not that imp)
Testing involves operation of a system or application under controlled conditions and evaluating the results. It is oriented to 'detection'.
Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'.
=========================================================
Software quality
Quality is not an absolute; it is value to some person. With that in mind, testing can never completely establish the correctness of arbitrary computer software; testing furnishes a criticism or comparison of the state and behaviour of the product against a specification. An important point is that software testing should be distinguished from the separate discipline of Software Quality Assurance (SQA), which encompasses all business process areas, not just testing.
============================================================

Testing can be done at the following levels: (very important)
Unit testing tests the minimal software component, or module. Each unit (basic component) of the software is tested to verify that the detailed design for the unit has been correctly implemented. In an object-oriented environment, this is usually at the class level, and the minimal unit tests include the constructors and destructors.[19]
Integration testing exposes defects in the interfaces and interaction between integrated components (modules). Progressively larger groups of tested software components corresponding to elements of the architectural design are integrated and tested until the software works as a system. [20]
System testing tests a completely integrated system to verify that it meets its requirements.[21]
System integration testing verifies that a system is integrated to any external or third party systems defined in the system requirements.[citation needed]
=====================================
Before shipping the final version of software, alpha and beta testing are often done additionally:
Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing, before the software goes to beta testing.[citation needed]
Beta testing comes after alpha testing. Versions of the software, known as beta versions, are released to a limited audience outside of the programming team. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are made available to the open public to increase the feedback field to a maximal number of future users.[citation needed]

Acceptance testing can be conducted by the end-user, customer, or client to validate whether or not to accept the product. Acceptance testing may be performed as part of the hand-off process between any two phases of development.[citation needed]
Regression testing
Main article: Regression testing
After modifying software, either for a change in functionality or to fix defects, a regression test re-runs previously passing tests on the modified software to ensure that the modifications haven't unintentionally caused a regression of previous functionality.
Regression testing can be performed at any or all of the above test levels. These regression tests are often automated.
More specific forms of regression testing are known as sanity testing, when quickly checking for bizarre behaviour, and smoke testing when testing for basic functionality.
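A smoke test, as described above, is a quick pass over the most basic functionality before any deeper testing. A minimal sketch; the two check functions are hypothetical stand-ins for real checks (launching the app, loading the login page):

```python
# A smoke suite: a short list of basic checks, run before deeper passes.

def app_starts():
    return True  # stand-in: launch the app, confirm the main window opens

def login_page_loads():
    return True  # stand-in: request the login page, check the status code

SMOKE_CHECKS = [app_starts, login_page_loads]

def run_smoke():
    """Run every basic check; the build is rejected if any fails."""
    results = {check.__name__: check() for check in SMOKE_CHECKS}
    ok = all(results.values())
    return ok, results

ok, results = run_smoke()
print("smoke passed" if ok else f"smoke FAILED: {results}")
```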
Finding faults early
It is commonly believed that the earlier a defect is found, the cheaper it is to fix.[22] The following table shows the cost of fixing a defect depending on the stage at which it was found.[23] For example, if a problem in the requirements is found only post-release, it costs 10-100 times more to fix than if it had been found by the requirements review.

Time introduced \ detected:  Requirements  Architecture  Construction  System Test  Post-Release
Requirements                 1             3             5-10          10           10-100
Architecture                 -             1             10            15           25-100
Construction                 -             -             1             10           10-25
Measuring software testing
Usually, quality is constrained to such topics as correctness, completeness, and security,[citation needed] but can also include more technical requirements as described under the ISO standard ISO 9126, such as capability, reliability, efficiency, portability, maintainability, compatibility, and usability.
There are a number of common software measures, often called "metrics", which are used to measure the state of the software or the adequacy of the testing.
=======================================================================

Test harness
The software, tools, samples of data input and output, and configurations are all referred to collectively as a test harness.

A sample testing cycle
Although variations exist between organizations, there is a typical cycle for testing[25]:
Requirements analysis: testing should begin in the requirements phase of the software development life cycle. During the design phase, testers work with developers to determine what aspects of a design are testable and with what parameters those tests will work.
Test planning: test strategy, test plan, test bed creation. Many activities will be carried out during testing, so a plan is needed.
Test development: test procedures, test scenarios, test cases, and test scripts to use in testing the software.
Test execution: testers execute the software based on the plans and tests and report any errors found to the development team.
Test reporting: once testing is completed, testers generate metrics and make final reports on their test effort and whether or not the software tested is ready for release.

Exploratory vs. scripted[30]: should tests be designed at the same time as they are executed, or should they be designed beforehand?
Manual vs. automated: some writers believe that test automation is so expensive relative to its value that it should be used sparingly.[31] Others, such as advocates of agile development, recommend automating 100% of all tests.
Software design vs. software implementation[32]: should testing be carried out only at the end or throughout the whole process?
Who watches the watchmen? The idea is that any form of observation is also an interaction: the act of testing can also affect that which is being tested.[33]
========================================================
===============================================================
Generally, test engineers use four types of test design techniques:
1. Boundary value analysis
2. Equivalence partitioning
3.
4.
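Boundary value analysis can be shown concretely. Assuming a hypothetical input field that accepts integers from 1 to 100, the technique picks values on, just inside, and just outside each boundary:

```python
# Boundary value analysis for a made-up field accepting integers 1..100.

LOW, HIGH = 1, 100

def is_valid(n):
    """Hypothetical validator under test."""
    return LOW <= n <= HIGH

# Each entry maps a boundary-derived value to the expected verdict.
boundary_cases = {
    LOW - 1:  False,  # 0   just below the lower boundary
    LOW:      True,   # 1   on the lower boundary
    LOW + 1:  True,   # 2   just above the lower boundary
    HIGH - 1: True,   # 99  just below the upper boundary
    HIGH:     True,   # 100 on the upper boundary
    HIGH + 1: False,  # 101 just above the upper boundary
}

for value, expected in boundary_cases.items():
    assert is_valid(value) == expected, value
print("all boundary cases pass")
```

Six values exercise both boundaries from both sides, which is where off-by-one defects cluster.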
==================================================
Test scenarios
=========
A scenario is like a story.
Example: ATM testing scenario:
Insert the card into the ATM machine; sign in to the account with credentials; select the withdrawal action; select the account to withdraw from; select the amount to withdraw (constraints apply: the user can only withdraw between the minimum and maximum amount in a day); confirm the withdrawal amount; select the option for a receipt; withdraw the cash and sign out.
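The scenario above can be scripted end to end. Everything here is an assumption for illustration: the `Atm` class, the PIN, the balance, and the daily limit are invented, standing in for a real system under test:

```python
# The ATM withdrawal scenario as a scripted sequence of steps.

class Atm:
    """Hypothetical system under test."""

    def __init__(self, balance=500, daily_limit=300):
        self.balance = balance
        self.daily_limit = daily_limit
        self.signed_in = False

    def sign_in(self, pin):
        self.signed_in = (pin == "1234")  # invented valid PIN
        return self.signed_in

    def withdraw(self, amount):
        if not self.signed_in:
            raise RuntimeError("not signed in")
        if amount > self.daily_limit or amount > self.balance:
            return False                  # constraint: daily maximum
        self.balance -= amount
        return True

# Script the story: sign in, withdraw within limits, hit a constraint.
atm = Atm()
assert atm.sign_in("1234")
assert atm.withdraw(100)       # within the daily limit: allowed
assert not atm.withdraw(400)   # over the daily limit: refused
assert atm.balance == 400
print("scenario passed")
```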
=======================================================
a test harness or automated test framework
===========
is a collection of software and test data configured to test a program unit by running it under varying conditions and monitoring its behavior and outputs. It has two main parts: the test execution engine and the test script repository.
Test harnesses allow for the automation of tests. They can call functions with supplied parameters, print out the results, and compare them to the desired values. The test harness is a hook into the developed code, which can be tested using an automation framework.
A test harness should allow specific tests to be run (this helps in optimising), orchestrate a runtime environment, and provide a capability to analyse results.
The typical objectives of a test harness are to:
- automate the testing process
- execute test suites of test cases
- generate associated test reports
A test harness typically provides the following benefits:
- increased productivity due to automation of the testing process
- increased probability that regression testing will occur
- increased quality of software components and applications
==
  1. Test data
  2. Common methods for accessing the application, login, etc.
  3. Core libraries and utilities
  4. Base test case methods, including initialize, common actions, and cleanup
  5. Logging and screenshots
  6. Report generation
  7. Configuration/settings
4 forms of Logging
Message
Warning
Exception
Error
3 levels of Logging
Diagnostic
Normal
High

======================================================================
Types of documents for QA, QC, and testing:
Business Requirement Specification, Design Document, Functional Specifications, SRS (Software Requirement Specification) document, use cases, test plan, test cases, etc. are mainly used.
1. FRS
2. SRS
3. Test Plan
4. Test Strategy Plan
5. Test Approach Plan
6. Test Cases
7. Risk Analysis Report
8. CR Reports
9. User Manuals
10. Technical Documents
11. Sign-Off Documents
12. Test Case Execution Reports
13. Bug Report
14. Bug Summary Report
15. DD Report
16. Estimation Documents
17. Design Documents
=========================================
====================================
Example equivalence classes:

Strings:
- empty string
- string consisting solely of white space
- string with leading or trailing white space
- syntactically legal: short and long values
- syntactically legal: semantically legal and illegal values
- syntactically illegal: illegal characters or combinations
- make sure to test special characters such as #, ", ', &, and <
- make sure to test "foreign" characters typed on international keyboards

Numbers:
- empty string, if possible
- 0
- in range: positive, small and large
- in range: negative, small and large
- out of range positive
- out of range negative
- with leading zeros
- syntactically invalid (e.g., includes letters)

Identifiers:
- empty string
- syntactically legal value
- syntactically legal: reference to an existing ID; invalid reference
- syntactically illegal value

Radio buttons:
- one item checked
- nothing checked, if possible

Checkboxes:
- checked
- unchecked

Drop-down menus:
- select each item in turn

Scrolling lists:
- select no item, if possible
- select each item in turn
- select combinations of items, if possible
- select all items, if possible

File upload:
- blank
- 0-byte file
- long file
- short file name; long file name
- syntactically illegal file name, if possible (e.g., "File With Spaces.tar.gz")
=================================================
A bug you logged is fixed and assigned back to you — what do you do to make sure the fix is good? Retest it, and close or reassign it based on the result.
=================================
Priority: how soon the bug has to be fixed.
Severity: how much impact the bug has on the customer.
====================
How do you perform regression testing?
Regression testing is carried out both manually and with automation. Automation tools are mainly used for regression testing because it focuses on repeatedly testing the same application for the changes it has gone through: new functionality, fixes for previous bugs, any changes in the design, etc. Regression testing involves executing the test cases we ran earlier when finding the defects.
Whenever any change takes place in the application, we should make sure the previous functionality is still available without any break. For this reason, regression testing is done on the application by re-running the previously written test cases.
=====================================================================
Good code has no bugs.
==================
A good test case detects bugs. Contents of a test case:
- test case title
- preconditions: set up the environment, etc.
- test case execution steps
- validation / expected results to be verified
=========================
SDLC is a process improvement cycle: A, D, I, M (Analysis, Design, Implementation, Maintenance).
==============
Purpose of software testing: to improve quality, to detect bugs, and to ensure that the product meets its requirements.
===============================================
=========================
IP and MAC addresses identify a computer or device on a TCP/IP network. Networks using the TCP/IP protocol route messages based on the IP address of the destination. An IP address is a 32-bit numeric address written as four numbers separated by periods; each number can be 0 to 255. For example, 1.160.10.240 could be an IP address.
Within an isolated network, you can assign IP addresses at random as long as each one is unique. However, connecting a private network to the Internet requires using registered IP addresses (called Internet addresses) to avoid duplicates. The four numbers in an IP address are used in different ways to identify a particular network and a host on that network. Four regional Internet registries -- ARIN, RIPE NCC, LACNIC and APNIC -- assign Internet addresses from the following three classes.
Class A supports 16 million hosts on each of 126 networks. Class B supports 65,000 hosts on each of 16,000 networks. Class C supports 254 hosts on each of 2 million networks.
The number of unassigned Internet addresses is running out, so a new classless scheme called CIDR is gradually replacing the system based on classes A, B, and C, and is tied to the adoption of IPv6. The point is to identify a PC with a unique address.
======================
DNS: the Domain Name System is a database system that translates a computer's fully qualified domain name into an IP address. Networked computers use IP addresses to locate and connect to each other, but IP addresses can be difficult for people to remember. For example, on the web it is much easier to remember the domain name http://www.amazon.com/ than its corresponding IP address (207.171.166.48). DNS allows you to connect to another networked computer or remote service by using its user-friendly domain name rather than its numerical IP address. Conversely, Reverse DNS (rDNS) translates an IP address into a domain name. Use "ping hostname" to find an IP address.
==================================
Network layers
==============================
Protocol: in computing, a protocol is a convention or standard that controls or enables the connection, communication, and data transfer between two computing endpoints. In its simplest form, a protocol can be defined as the rules governing the syntax, semantics, and synchronization of communication. Protocols may be implemented by hardware, software, or a combination of the two. At the lowest level, a protocol defines the behavior of a hardware connection.
===================
TCP/IP network: TCP/IP is the Internet communication protocol. A communication protocol is a description of the rules computers must follow to communicate with each other. The Internet communication protocol defines the rules for computer communication over the Internet.
Your browser and your server use TCP/IP: Internet browsers and Internet servers use TCP/IP to connect to the Internet. Your browser uses TCP/IP to access Internet servers, and servers use TCP/IP to send HTML back to your browser. Your e-mail program uses TCP/IP to connect to the Internet for sending and receiving e-mails. Your Internet address "24.16.98.135" is part of the standard TCP/IP protocol (and so is your domain name "http://www.someonesplace.com/").
===================================
Test case examples
===============
1) water bottle 2) marker 3) boat 4) text box 5) radio button 6) weather window 7) login window 8) file-shared window 9) vending machine 10) calculator 11) lamp
===================
1) Test cases for a water bottle (test case / expected result):
Functionality: open the bottle -- the cap opens; fill with water -- holds the specified amount of water without leakage; close the cap -- closes tightly without leaking water.
Boundaries: a small amount of water.
Load: with more water than the limit, the bottle should not break.
Stress: press the bottle forcefully with the hands -- not easily broken.
Documentation: volume, height, width, and other specifications.
Other: easy to hold in the hands; the shape of the mouth makes it easy to pour water; does not melt with hot water.
================================================
2) Test cases for a marker:
a) Functionality: 1) should mark on the board; 2) can be rubbed off with a duster; 3) marks on other objects such as paper and wood.
b) Performance: how long it keeps working; the nib should be narrow, so letters come out narrow.
c) Documentation: easy to open and close the cap; the nib should not go inside; no leakage of ink; colors of markers; nib easy to hold in the fingers.
======================
Test cases for a pen:
1. Height and width of the pen.
2. The maximum number of characters the pen can write.
3. The behaviour of the pen in extreme conditions such as heat, cold, and rain.
4. The correct angle of the pen with the paper for writing; comfort level while holding the pen.
1. Successfully check the ink in the pen. 2. Successfully check whether the cap is comfortable or not. 3. Successfully check the pin point (ball point). 4. Successfully check the colour of the pen.
Test cases for a ball pen: 1) check integration between the pen nib point and ink flow; 2) check integration between the cap and pen body; 3) check ink quality; 4) check refill quality and its integration with the pen.
===================================
3) Test cases for a boat (test case / expected result):
Functionality: drop it on water -- it floats; push it, or use a key, switch, or remote -- it moves.
Documentation: shape, size, length, width, volume.
Boundaries: weighing it with a small amount of water; a large amount of water.
Performance: how long it floats continuously; how it floats under stress (pushed forcibly); how it floats under more than the load limit; not easily breakable; behaviour in rain.
Other: roll it on the floor; on ice; in other liquids.
==============================
4) Test cases for a text box (test case / expected result):
Functionality: input text (alphabetic); input numerics; input special characters; null string; empty string.
Boundaries: minimum length of text (0 characters); maximum length (max+1).
Performance: run maximum-length strings 1000 times.
Localization: localized strings; passwords (ASCII codes); tabbing; font; color; spacing; shortcut keys; copy-enabled text.
================================
5) radio box
===========================
6) Test cases for a calculator:
1) Test all the basic functionality: +, -, *, /. Results should be as expected.
2) Check other complex functionality, such as sqrt for both positive and negative numbers.
3) Divide by zero -- the most popular test case.
4) Expressions like 2+-3, -3+-3, and -3-3 must be tested.
5) Perform an operation (+, *, etc.) on a number that fills up the screen.
6) The = button must be tested.
7) The screen should clear on pressing AC.
8) Multiply by zero.
9) Multiplication of two negative numbers must be positive
(-3 * -3 = 9).
=============================================================
Test cases for a login window:
1. Test without entering any username and password.
2. Test with only a username.
3. Test with only a password.
4. Username with a wrong password.
5. Password with a wrong username.
6. Correct username and correct password.
7. Cancel after entering the username and password.
8. Enter a long username and password that exceed the set character limit.
9. Try copy/paste in the password text box.
10. After a successful sign-out, try the "Back" option in your browser and check whether it returns you to the signed-in page.
=========================================================================================
1) Test cases for the UI of an office communicator (test case / expected result):
Functionality: when we move the cursor over a menu item it gets highlighted; clicking a menu list displays its menu items; pressing Enter on the Send button sends the message and it appears in the chat; the close button closes the UI; the number of menu lists should match the specification; navigation; tabbing.
Documentation: UI name; number of participants; send-message text box; font; smileys; close button; maximise; minimise; three compartments in the UI; highlighted participant names showing who is available; fonts.
====================================================
2) Test cases for bool URI(object IE, string uri) { } (test case / expected result):
1) Functionality -- valid: IE with http:/msn.com -- UI (MSN home page); IE with msn.com -- UI, home page; Mozilla Firefox -- UI, home page. Invalid: IE with http:/msnn.com -- no home page; IE with msn@@@.com -- may or may not load, depending on the specification.
2) Boundaries: IE with " " -- default home page.
3) Performance: msn.com in IE -- response time = x sec. Stress test: IE with msn.com 1000 times -- 1000x sec CPU time. Load test: 1000 users -- response time = x sec.
4) Other: 1) timeout problems; 2) connectivity problems.
============================================
3) Test cases for int countoccuranceofsubstr(string src, string sub) (test case / expected result):
===========================================================
API testing: reverse string program (test case / expected result):
1) Functionality, valid string: String = "welcome to Hello world"; Start position = 0; Length = 3 -- "wel". String = " welcome to Hello world"; Start position = 0; Length = 4 -- " wel".
2) Special characters in the string: String = "@#come to Hello world"; Start position = 0; Length = 2 -- "@#".
3) Boundaries in start position and length: String = "welcome to Hello world"; Start position = 0; Length = 0 -- "".
4) Less than the boundaries: String = "welcome to Hello world"; Start position = -1; Length = -1 -- error message.
5) Maximum start position: String = "welcome to Hello world"; Start position = s2.length-1; Length = s2.length+1 -- error message.
6) Empty string: String = " "; Start position = 0; Length = 3 -- " ".
7) Null string: String = null; Start position = -1; Length = -1 -- null.
8) Stress, repeating iterations: String = "welcome to Hello world"; Start position = 0; Length = 3 -- "wel".
9) Performance: String = "aaaa...aaaawelcome to Hello world"; Start position = s2.length-1; Length = 1 -- "d".
10) Localization: String = "japanese str"; Start position = 0; Length = 3 -- "---".
11) Single word: String = "welcome"; Start position = 0; Length = 3 -- "wel".
12) Repeated words in the string: String = "aaaaaaaaaaaaaaa"; Start position = 1; Length = 2 -- "aa".
13) Total string as a substring: String = "aaaaaaaaaaaaaaa"; Start position = 0; Length = s2.length -- "aaaaaaaaaaaaaaa".
=====================================================================================
Test plan document
===========
TABLE OF CONTENTS
1.0 INTRODUCTION
2.0 OBJECTIVES AND TASKS
2.1 Objectives
2.2 Tasks 
3.0 SCOPE 
4.0 Testing Strategy
4.1 Alpha Testing (Unit Testing)
4.2 System and Integration Testing
4.3 Performance and Stress Testing
4.4 User Acceptance Testing
4.5 Batch Testing
4.6 Automated Regression Testing
4.7 Beta Testing 
5.0 Hardware Requirements
 6.0 Environment Requirements
7.0 Test Schedule 
8.0 Control Procedures 
9.0 Features to Be Tested 
10.0 Features Not to Be Tested 
11.0 Resources/Roles & Responsibilities 
12.0 Schedules 
13.0 Significantly Impacted Departments (SIDs) 
14.0 Dependencies 
15.0 Risks/Assumptions 
16.0 Tools 
17.0 Approvals 

1.0 INTRODUCTION A brief summary of the product being tested. Outline all the functions at a high level. 

2.0 OBJECTIVES AND TASKS 

2.1 Objectives
Describe the objectives supported by the Master Test Plan, e.g., defining tasks and responsibilities, a vehicle for communication, a document to be used as a service level agreement, etc.

2.2 Tasks
List all tasks identified by this Test Plan, i.e., testing, post-testing, problem reporting, etc.

3.0 SCOPE
General
This section describes what is being tested, such as all the functions of a specific product, its existing interfaces, and the integration of all functions.
Tactics
List here how you will accomplish the items that you have listed in the "Scope" section. For example, if you have mentioned that you will be testing the existing interfaces, what procedures would you follow to notify the key people to represent their respective areas, and to allot time in their schedules for assisting you in accomplishing your activity?

4.0 TESTING STRATEGY Describe the overall approach to testing. For each major group of features or feature combinations, specify the approach which will ensure that these feature groups are adequately tested. Specify the major activities, techniques, and tools which are used to test the designated groups of features. The approach should be described in sufficient detail to permit identification of the major testing tasks and estimation of the time required to do each one. 

4.1 Unit Testing
Definition: Specify the minimum degree of comprehensiveness desired. Identify the techniques which will be used to judge the comprehensiveness of the testing effort (for example, determining which statements have been executed at least once). Specify any additional completion criteria (for example, error frequency). The techniques to be used to trace requirements should be specified.
Participants: List the names of individuals/departments who will be responsible for Unit Testing.
Methodology: Describe how unit testing will be conducted. Who will write the test scripts for the unit testing, what will be the sequence of events, and how will the testing activity take place?

4.2 System and Integration Testing
Definition: Describe your understanding of System and Integration Testing for your project.
Participants: Who will be conducting System and Integration Testing on your project? List the individuals who will be responsible for this activity.
Methodology: Describe how System & Integration testing will be conducted. Who will write the test scripts, what will be the sequence of events of System & Integration Testing, and how will the testing activity take place?

4.3 Performance and Stress Testing
Definition: Describe your understanding of Stress Testing for your project.
Participants: Who will be conducting Stress Testing on your project? List the individuals who will be responsible for this activity.
Methodology: Describe how Performance & Stress testing will be conducted. Who will write the test scripts, what will be the sequence of events of Performance & Stress Testing, and how will the testing activity take place?

4.4 User Acceptance Testing
Definition: The purpose of the acceptance test is to confirm that the system is ready for operational use. During the acceptance test, end-users (customers) of the system compare the system to its initial requirements.
Participants: Who will be responsible for User Acceptance Testing? List the individuals' names and responsibilities.
Methodology: Describe how User Acceptance testing will be conducted. Who will write the test scripts, what will be the sequence of events of User Acceptance Testing, and how will the testing activity take place?

4.5 Batch Testing 

4.6 Automated Regression Testing
Definition: Regression testing is the selective retesting of a system or component to verify that modifications have not caused unintended effects and that the system or component still works as specified in the requirements.
Participants:
Methodology:

4.7 Beta Testing
Participants:
Methodology:

5.0 HARDWARE REQUIREMENTS
Computers
Modems

6.0 ENVIRONMENT REQUIREMENTS 

6.1 Main Frame
Specify both the necessary and desired properties of the test environment. The specification should contain the physical characteristics of the facilities, including the hardware, the communications and system software, the mode of usage (for example, stand-alone), and any other software or supplies needed to support the test. Also specify the level of security which must be provided for the test facility, system software, and proprietary components such as software, data, and hardware. Identify special test tools needed. Identify any other testing needs (for example, publications or office space). Identify the source of all needs which are not currently available to your group.

6.2 Workstation

7.0 TEST SCHEDULE Include test milestones identified in the Software Project Schedule as well as all item transmittal events. Define any additional test milestones needed. Estimate the time required to do each testing task. Specify the schedule for each testing task and test milestone. For each testing resource (that is, facilities, tools, and staff), specify its periods of use. 

8.0 CONTROL PROCEDURES
Problem Reporting
Document the procedures to follow when an incident is encountered during the testing process. If a standard form is going to be used, attach a blank copy as an "Appendix" to the Test Plan. In the event you are using an automated incident logging system, write those procedures in this section.
Change Requests
Document the process of modifications to the software. Identify who will sign off on the changes and what the criteria will be for including the changes in the current product. If the changes will affect existing programs, these modules need to be identified.

9.0 FEATURES TO BE TESTED
Identify all software features and combinations of software features that will be tested.

10.0 FEATURES NOT TO BE TESTED
Identify all features and significant combinations of features which will not be tested, and the reasons.

11.0 RESOURCES/ROLES & RESPONSIBILITIES
Specify the staff members who are involved in the test project and what their roles are going to be (for example, Mary Brown (User) will compile Test Cases for Acceptance Testing). Identify groups responsible for managing, designing, preparing, executing, and resolving the test activities as well as related issues. Also identify groups responsible for providing the test environment. These groups may include developers, testers, operations staff, testing services, etc.

12.0 SCHEDULES
Major Deliverables
Identify the deliverable documents. You can list the following documents:
- Test Plan
- Test Cases
- Test Incident Reports
- Test Summary Reports

13.0 SIGNIFICANTLY IMPACTED DEPARTMENTS (SIDs)
Department/Business Area | Bus. Manager | Tester(s)

14.0 DEPENDENCIES Identify significant constraints on testing, such as test-item availability, testing-resource availability, and deadlines. 

15.0 RISKS/ASSUMPTIONS Identify the high-risk assumptions of the test plan. Specify contingency plans for each (for example, delay in delivery of test items might require increased night shift scheduling to meet the delivery date).

16.0 TOOLS
List the automation tools you are going to use. Also list the bug tracking tool here.


17.0 APPROVALS
Specify the names and titles of all persons who must approve this plan. Provide space for the signatures and dates.
Name (In Capital Letters) | Signature | Date
=========================================================================
Compatibility

Web applications need to be compatible with multiple browsers, and products need to install under a variety of platforms. These aspects are tested through browser compatibility tests and platform compatibility tests.
====================================================================================
Configuration testing

Definition: Configuration testing is the system testing of different variations of an integrated, black-box application against its configurability requirements; that is, the process of testing a system with each of the configurations of software and hardware that are supported.

Comment: Typically the cost of this type of testing is small (since regression tests are re-run within the appropriate environment), but the benefit is very large (since small changes often have dramatic and unexpected impacts, especially on web-based systems). For this reason, this type of testing is both cost-effective and provides additional confidence to both developers and users.

Goals: The typical goal of configuration testing is to cause the application to fail to meet its configurability requirements so that the underlying defects can be identified, analyzed, fixed, and prevented in the future.

Objectives: The typical objectives of configuration testing are to:
- Partially validate the application (i.e., determine whether it fulfills its configurability requirements).
- Cause failures concerning the configurability requirements that help identify defects which are not efficiently found during unit and integration testing: functional variants; internationalization (e.g., multiple languages, currencies, taxes and tariffs, time zones, etc.); personalization.
- Report these failures to the development teams so that the associated defects can be fixed.
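Configuration and compatibility coverage of this kind is often organized as a matrix over the supported environment dimensions. A minimal sketch, where the browser and platform names are illustrative examples rather than a required support list:

```python
from itertools import product

# Illustrative environment dimensions for a browser/platform compatibility run
browsers = ["Firefox", "Internet Explorer", "Safari"]
platforms = ["Windows", "UNIX", "Mac OS"]

def build_matrix(browsers, platforms):
    """Return every browser/platform pairing the test suite must cover."""
    return [(b, p) for b, p in product(browsers, platforms)]

matrix = build_matrix(browsers, platforms)
print(len(matrix))  # 3 browsers x 3 platforms = 9 configurations
```

The same regression suite is then re-run once per row of the matrix, which is why the cost of configuration testing stays small relative to its benefit.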
Determine the effect of adding or modifying hardware resources such as:
- Memory
- Disk and tape resources
- Processors
- Load balancers
Determine an optimal system configuration.

Examples: Typical examples include configuration testing of an application that must have multiple functional variants, support internationalization, or support personalization.

Preconditions: Configuration testing can typically begin when the following preconditions hold:
- The configurability requirements to be tested have been specified.
- Multiple variants of the application exist.
- The relevant software components have passed unit testing.
- Software integration testing has started. (However, configuration testing can begin prior to the distribution of the software components onto the hardware components.)
- The relevant system components have passed system integration testing.
- The independent test team is adequately staffed and trained in configuration testing.
- The test environment is ready.

Completion criteria: Configuration testing is typically complete when the following postconditions hold:
- At least one configuration test suite exists for each configurability requirement.
- The test suites for every scheduled configurability requirement execute successfully on the appropriate configuration.
Tasks: Configurability testing typically involves the independent test team performing the following testing tasks: test planning, test reuse, test design, test implementation, test execution, and test reporting.

Environments: Configuration testing is performed on the test environment using a test harness.

Work products: Configuration testing typically results in the production of all or part of the following work products from the test work product set:
- Documents: Project Test Plan, Master Test List, Test Procedures, Test Report, Test Summary Report
- Software and data: Test Harness, Test Scripts, Test Suites, Test Cases, Test Data

Phases: Configuration testing typically consists of the following tasks being performed during the following phases (Business Strategy (*), Business Optimization, Initiation, Construction, Delivery, Usage, Retirement):
- Test Planning: Initiation: Completed; Construction: Optional Regression; all other phases: Not Applicable
- Test Reuse: Initiation: Optionally Started (**); Construction: Completed; all other phases: Not Applicable
- Test Design: Initiation: Optionally Started (**); Construction: Completed; Usage: Optional Regression; all other phases: Not Applicable
- Test Implementation: Initiation: Optionally Started (**); Construction: Completed; Usage: Optional Regression; all other phases: Not Applicable
- Test Execution: Initiation: Optionally Started (**); Construction: Completed; Usage: Optional Regression; all other phases: Not Applicable
- Test Reporting: Construction: Completed; Usage: Optional Regression; all other phases: Not Applicable

(*) Optional configuration testing of COTS software components during the technology analysis and technology vendor selection tasks.
(**) Optional configuration testing of the executable architecture as well as the COTS components during the vendor and tool evaluation and vendor and tool selection tasks.
Guidelines:
- The iterative and incremental development cycle implies that configuration testing is regularly performed in an iterative and incremental manner.
- Configuration testing must be automated if adequate regression testing is to occur.
- To the extent practical, reuse functional test cases as configuration test cases.
====================================================
Load Testing
This is the simplest form of performance testing.

A load test is usually conducted to understand the behavior of the application under a specific expected load. This load can be the expected concurrent number of users on the application performing a specific number of transactions within the set duration. This test will give the response times of all the important business-critical transactions. If the database, application server, etc., are also monitored, then this simple test can itself point towards bottlenecks in the application.

Stress Testing
This testing is normally used to break the application. The number of users is doubled and the test is run again, repeating until the application breaks down. This kind of test is done to determine the application's robustness under extreme load, and it helps application administrators determine whether the application will perform sufficiently if the current load goes well above the expected load.
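A load-test loop of the kind described above can be sketched as follows. This is a simplified illustration, not a real tool: `transaction` is a stand-in for one business-critical transaction, and the user counts and percentile choice are illustrative.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def transaction():
    """Stand-in for one business-critical transaction under test."""
    time.sleep(0.01)  # simulate a unit of server work

def load_test(concurrent_users=10, transactions_per_user=5):
    """Drive the expected concurrent load and collect per-transaction response times."""
    times = []

    def user_session():
        for _ in range(transactions_per_user):
            start = time.perf_counter()
            transaction()
            times.append(time.perf_counter() - start)

    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        for _ in range(concurrent_users):
            pool.submit(user_session)

    times.sort()
    return {
        "transactions": len(times),
        "avg": sum(times) / len(times),
        "p95": times[int(0.95 * (len(times) - 1))],  # 95th-percentile response time
    }

stats = load_test()
print(stats["transactions"])  # 10 users x 5 transactions = 50
```

Doubling `concurrent_users` on each run until response times degrade or errors appear turns the same harness into the stress test described above.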

Endurance Testing (Soak Testing)
This test is usually done to determine whether the application can sustain the continuous expected load. Generally this test is done to determine whether there are any memory leaks in the application.

Spike Testing
Spike testing, as the name suggests, is done by suddenly spiking the number of users and observing the behavior of the application: will it go down, or will it be able to handle dramatic changes in load?

Prerequisites for Performance Testing
A stable build of the application, on an environment that resembles the production environment as closely as possible. The performance testing environment should not be shared with the UAT or development environment: if UAT, integration testing, or other testing is going on in the same environment, the results obtained from the performance testing may not be reliable. As a best practice, it is always advisable to have a separate performance testing environment resembling the production environment as much as possible.

Conclusion
Performance testing is evolving as a separate science, with a number of performance testing tools such as HP's LoadRunner, JMeter, OpenSTA, WebLoad, and SilkPerformer. Each of the tests is done catering to the specific requirements of the application.

Myths of Performance Testing
Some of the very common myths are given below.
1. Performance testing is done to break the system. Stress testing is done to understand the break point of the system; otherwise, normal load testing is generally done to understand the behavior of the application under the expected user load. Depending on other requirements, such as expectation of a spike load or continued load for an extended period of time, spike, endurance (soak), or stress testing would be demanded.
2. Performance testing should only be done after system integration testing. Although this is mostly the norm in the industry, performance testing can also be done while the initial development of the application is taking place. This approach is known as early performance testing. It ensures a holistic development of the application keeping the performance parameters in mind; thus the cost of finding a performance bug just before the release of the application and rectifying it is reduced to a great extent.
3. Performance testing only involves creation of scripts, and any application changes would cause only a simple refactoring of the scripts. Performance testing is itself an evolving science in the software industry. Scripting, although important, is only one of the components of performance testing. The major challenge for any performance tester is to determine the types of tests that need to be executed and to analyze the various performance counters to determine the performance bottleneck.
The other part of this myth, that a change in the application would result in only a little refactoring of the scripts, is also untrue: any change to the UI, especially in the Web protocol, would entail complete re-development of the scripts from scratch. This problem becomes bigger if the protocols involved include Web Services, Siebel, Web Click and Script, Citrix, or SAP.
==========================================================

Compatibility
==========
Software Compatibility Testing
Your customer base uses a wide variety of OSs, browsers, databases, servers, clients, and hardware. Different versions, configurations, display resolutions, and Internet connection speeds can all impact the behavior of your product and introduce costly and embarrassing bugs. We test for compatibility using real test environments (not just virtual systems). Why outsource your compatibility testing to ApTest? ApTest is expert at testing products for compatibility with hardware and software environments. We can compatibility-test your WWW site, CD, or application quickly and inexpensively. ApTest operates testing labs offering all the hardware and software needed for such testing.

Compatibility testing, part of software non-functional testing, is testing conducted on the application to evaluate its compatibility with the computing environment. The computing environment may contain some or all of the elements below:
- Computing capacity of the hardware platform (IBM 360, HP 9000, etc.)
- Bandwidth handling capacity of networking hardware
- Compatibility of peripherals (printer, DVD drive, etc.)
- Operating systems (MVS, UNIX, Windows, etc.)
- Database (Oracle, Sybase, DB2, etc.)
- Other system software (web server, networking/messaging tools, etc.)
- Browser compatibility (Firefox, Netscape, Internet Explorer, Safari, etc.)
- Carrier compatibility (Verizon, Sprint, Orange, O2, AirTel, etc.)
- Backwards compatibility
- Hardware (different phones)
- Different compilers (compile the code correctly)
- Runs on multiple hosts/guests
- Emulators: no conversions required and behaviour is agreeable
==============================================================
bug report
------------
Title: open network properties
Path: team path file://ghhg/
Status: active
Sub-status: active
Assigned to: rema
Issue type: button action disabled
Build: 000056 (Vista); Branch: X
Source: test case
Processor: Intel
Platform: Vista
Description: bug found while opening the window
Test case:
----------
1. Right click My Network Places > Select Properties.
Expected result:
------------------
Verify: the Network Connections folder can be opened and closed with no problem.
Tested on all OS builds.
Bug identified on:
-------------------
Windows Vista
Repro: test cases, steps
Files:
------
Attach files and screenshots.
==============================================================================

======================================================================================================================
Severity
How bad the bug is and the degree of impact when the user encounters it:
1) System crash, data loss, data corruption, security breach
2) Operational error, wrong result, loss of functionality
3) Minor problem, misspelling, UI layout issue, rare occurrence
4) Suggestion
Priority
=========
Indicates how much emphasis should be placed on fixing the bug and the urgency of making the fix:
1) Immediate fix; blocks further testing; very visible
2) Must fix before the product release
3) Fix when time permits
4) Would like to fix, but the product can be released as is
A Bug's Life Cycle:
==============
Bug Found <==1==> Open <==2==> Resolved <==3==> Closed
1) Tester finds the bug and logs a bug report; the bug report is assigned to a programmer.
2) Programmer fixes the bug; the bug report is assigned back to the tester.
3) Tester confirms the bug is fixed and closes the bug report.
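The life cycle above is a small state machine; a minimal sketch of it, where the class and method names are illustrative:

```python
class BugReport:
    """Minimal sketch of the Open -> Resolved -> Closed life cycle described above."""
    TRANSITIONS = {"Open": "Resolved", "Resolved": "Closed"}

    def __init__(self, title):
        self.title = title
        self.status = "Open"      # 1) tester finds the bug and logs the report

    def resolve(self):            # 2) programmer fixes the bug, reassigns to tester
        self._advance("Resolved")

    def close(self):              # 3) tester confirms the fix and closes the report
        self._advance("Closed")

    def _advance(self, new_status):
        # Reject out-of-order moves, e.g. closing a bug no one has resolved
        if self.TRANSITIONS.get(self.status) != new_status:
            raise ValueError(f"illegal transition {self.status} -> {new_status}")
        self.status = new_status

bug = BugReport("even numbers don't add properly")
bug.resolve()
bug.close()
print(bug.status)  # Closed
```

Encoding the allowed transitions in one table means a reopened-bug state, if the team needs one, is a one-line change.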
Manual bug reporting and tracking:
==============================
Using an Excel sheet.
Contents in the report: Bug ID, Software, Release, Version, Tester, Date, Assigned To, Severity, Priority, Reproducible, Title, Description, Resolved By, Status, Sub-status
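One row of such an Excel-style bug log can be modeled as a record type. The field names below are inferred from the column list above, and the sample values are illustrative:

```python
from dataclasses import dataclass, asdict

@dataclass
class BugRecord:
    """One row of the Excel-style bug log, mirroring the columns listed above."""
    bug_id: str
    software: str
    release: str
    version: str
    tester: str
    date: str
    assigned_to: str
    severity: int        # 1 (crash/data loss) .. 4 (suggestion)
    priority: int        # 1 (immediate fix) .. 4 (release as is)
    reproducible: bool
    title: str
    description: str
    resolved_by: str = ""
    status: str = "Active"
    sub_status: str = "Active"

row = BugRecord("BUG-001", "Calc", "R1", "1.0", "rema", "2011-11-01",
                "dev1", 2, 1, True, "divide by 0 causes crash",
                "application crashes when dividing by zero")
print(asdict(row)["severity"])  # 2
```

`asdict` gives a plain dictionary per row, which exports cleanly to CSV for the spreadsheet.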
================================================
AUTOMATED bug reporting and tracking:
New bug fields: Title, Severity, Priority, Product, Assigned To, Version, Area, Build No., Reproduction Steps, Expected Results, Actual Results, Environment
Example bug titles (each logged with severity, priority, and assignee):
- Even numbers don't add properly
- Divide by 0 causes crash
- Dead link in help file calc.help
- Dead link in help file wcalc.help
- Colors are wrong in 256-color mode
================================================
Software development company maturity stages (CMM): Initial, Repeatable, Defined, Managed, Optimizing
===================================
Software test technician
Software test engineer
Software development engineer in test
Software test lead
Software test manager
==========================================
Test scenarios
1) Add function
Test scenarios: 1) add 1+2; 2) add 11.2+11.3; 3) add (0+256); 4) add ((-1)+(-2))
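The four add-function scenarios above translate directly into checks. `add` below is a stand-in for the calculator function under test:

```python
def add(a, b):
    """Stand-in for the calculator's add function under test."""
    return a + b

# The four scenarios listed above, as (a, b, expected) checks
scenarios = [
    (1, 2, 3),           # simple integers
    (11.2, 11.3, 22.5),  # decimal values
    (0, 256, 256),       # zero boundary
    (-1, -2, -3),        # negative numbers
]
for a, b, expected in scenarios:
    # Tolerance comparison because 11.2 and 11.3 are not exact binary floats
    assert abs(add(a, b) - expected) < 1e-9
print("all add scenarios passed")
```

The decimal case is the one most worth keeping: it catches implementations that truncate to integers.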
2) Edit function
Test scenarios: add 1+2; add 11.2+11.3
3) Delete function
Test scenarios: add 1+2; add 11.2+11.3
==========================================

While filing bugs, make sure to include:
Exact repro steps
Logs
Screenshots
Environment
Repro count
Repro time
Consistent repro or not
Error info
Clear cache and try again

Also confirm that:
The bug exists and is not a duplicate
It is not by design
It is not external