'''Software testing''' is the process used to measure the quality of developed computer software. Usually, quality is constrained to such topics as correctness, completeness, and security, but it can also include more technical requirements as described in the ISO 9126 standard, such as capability, reliability, efficiency, portability, maintainability, compatibility, and usability.
Testing is a process of technical investigation, performed on behalf of stakeholders, that is intended to reveal quality-related information about the product with respect to the context in which it is intended to operate.
1.1 Verification and validation
Software testing is used in association with verification and validation (V&V). Verification is the checking or testing of items, including software, for conformance and consistency with an associated specification. Software testing is just one kind of verification, which also uses techniques such as reviews, inspections, and walkthroughs. Validation is the process of checking that what has been specified is what the user actually wanted.
* Verification: Are we doing the job right?
* Validation: Have we done the right job?
1.2 Testing Life Cycle
This life cycle is used for standard applications that are built from various custom technologies and follow the normal or standard testing approach. The testing life cycle for such custom-built applications and its phases are described below:
1.2.1 Test Requirements
* Requirement Specification
* Functional Specification
* Design Specification
* Use case
* Test Traceability Matrix for identifying Test Coverage
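The traceability matrix above can be kept very simply. As a hedged sketch in Python (the requirement and test case IDs are invented for illustration), a mapping from requirement IDs to the test cases that cover them makes coverage gaps easy to spot:
<pre>
# Hypothetical sketch of a requirement-to-test-case traceability matrix.
# Requirement and test case IDs are invented for illustration.
traceability = {
    "REQ-001": ["TC-001", "TC-002"],  # e.g. login requirement covered by two cases
    "REQ-002": ["TC-003"],            # e.g. password reset
    "REQ-003": [],                    # not yet covered by any test case
}

# Requirements with no mapped test cases indicate a coverage gap.
uncovered = [req for req, cases in traceability.items() if not cases]
print("Requirements lacking test coverage:", uncovered)
</pre>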
1.2.2 Test Planning
* Test Scope, Test Environment
* Different test phases and test methodologies
* Manual and Automation Testing
* Defect management, configuration management, risk management, etc.
* Evaluation and identification of test and defect tracking tools
1.2.3 Test Environment Setup
* Test Bed installation and configuration
* Network connectivity
* Installation and configuration of all software and tools
* Coordination with Vendors and others
1.2.4 Test Design
* Test Traceability Matrix and Test coverage
* Test Scenarios Identification & Test Case preparation
* Test data and Test scripts preparation
* Test case reviews and Approval
* Baselining under Configuration Management
1.2.5 Test Automation
* Automation requirement identification
* Tool Evaluation and Identification.
* Designing or identifying Framework and scripting
* Script Integration, Review and Approval
* Baselining under Configuration Management
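One common approach to the framework and scripting activities above is data-driven scripting, where a generic script reads inputs and expected outputs from a separate data file, so adding coverage means adding data rather than code. A minimal Python sketch, with the function under test and the test data invented purely for illustration:
<pre>
import csv
import io

def add(a, b):
    """Stand-in for the function or transaction under test (invented for illustration)."""
    return a + b

# In practice the test data would be a baselined file under configuration
# management; an in-memory CSV keeps this sketch self-contained.
TEST_DATA = """a,b,expected
2,3,5
-1,1,0
0,0,0
"""

def run_data_driven_tests():
    failures = []
    for row in csv.DictReader(io.StringIO(TEST_DATA)):
        actual = add(int(row["a"]), int(row["b"]))
        if actual != int(row["expected"]):
            failures.append((dict(row), actual))
    return failures

if __name__ == "__main__":
    print("Failures:", run_data_driven_tests())
</pre>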
1.2.6 Test Execution and Defect Tracking
* Executing Test cases
* Executing test scripts
* Capture, review and analyze Test Results
* Raising defects and tracking them to closure
1.2.7 Test Reports and Acceptance
* Test summary reports
* Test Metrics and process Improvements made
* Build release
* Receiving acceptance
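The exact test metrics reported vary by organization. As an illustrative sketch only (all figures are invented), two commonly used numbers are the test-case pass rate and the defect density:
<pre>
# Illustrative test summary metrics; all figures are invented examples.
executed = 180          # test cases executed
passed = 171            # test cases that passed
defects_found = 27      # defects raised during the test phase
size_kloc = 12.5        # assumed product size in thousands of lines of code

pass_rate = 100.0 * passed / executed            # percentage of cases passing
defect_density = defects_found / size_kloc       # defects per KLOC

print(f"Pass rate: {pass_rate:.1f}%")                          # -> 95.0%
print(f"Defect density: {defect_density:.2f} defects/KLOC")    # -> 2.16
</pre>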
1.3 What kinds of testing should be considered?
* Black box testing - not based on any knowledge of internal design or code. Tests are based on requirements and functionality.
* White box testing - based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, conditions.
* unit testing - the most 'micro' scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses (a minimal code sketch appears after this list).
* incremental integration testing - continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.
* integration testing - testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
* functional testing - black-box type testing geared to functional requirements of an application; this type of testing should be done by testers. This doesn't mean that the programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing.)
* system testing - black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.
* end-to-end testing - similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
* sanity testing or smoke testing - typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or corrupting databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.
* regression testing - re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.
* acceptance testing - final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.
* load testing - testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.
* stress testing - term often used interchangeably with 'load' and 'performance' testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.
* performance testing - term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans.
* usability testing - testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.
* install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes.
* recovery testing - testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
* failover testing - typically used interchangeably with 'recovery testing'.
* security testing - testing how well the system protects against unauthorized internal or external access, willful damage, etc; may require sophisticated testing techniques.
* compatibility testing - testing how well software performs in a particular hardware/software/operating system/network/etc. environment.
* exploratory testing - often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.
* ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.
* context-driven testing - testing driven by an understanding of the environment, culture, and intended use of software. For example, the testing approach for life-critical medical equipment software would be completely different than that for a low-cost computer game.
* user acceptance testing - determining if software is satisfactory to an end-user or customer.
* comparison testing - comparing software weaknesses and strengths to competing products.
* alpha testing - testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.
* beta testing - testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.
* mutation testing - a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources.
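To make the 'unit testing' entry above concrete, here is a minimal sketch using Python's standard unittest module; the function under test, word_count, is invented purely for illustration:
<pre>
import unittest

def word_count(text):
    """Function under test (invented for illustration): counts whitespace-separated words."""
    return len(text.split())

class WordCountTest(unittest.TestCase):
    def test_simple_sentence(self):
        self.assertEqual(word_count("software testing measures quality"), 4)

    def test_empty_string(self):
        # Boundary case: an empty string contains no words.
        self.assertEqual(word_count(""), 0)

    def test_extra_whitespace(self):
        # split() with no argument collapses runs of whitespace.
        self.assertEqual(word_count("  two   words  "), 2)

if __name__ == "__main__":
    unittest.main()
</pre>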
1.4 What steps are needed to run software tests?
* Determine input equivalence classes, boundary value analysis, and error classes (a worked example follows this list)
* Prepare test plan document and have needed reviews/approvals
* Write test cases
* Have needed reviews/inspections/approvals of test cases
* Prepare test environment and testware, obtain needed user manuals/reference documents/configuration guides/installation guides, set up test tracking processes, set up logging and archiving processes, set up or obtain test input data
* Obtain and install software releases
* Perform tests
* Evaluate and report results
* Track problems/bugs and fixes
* Retest as needed
* Maintain and update test plans, test cases, test environment, and testware through the life cycle
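As a worked example of the first step above, suppose a hypothetical input field accepts an integer age from 18 to 65 inclusive. The equivalence classes (below range, in range, above range) and the boundary values can be exercised as follows; the validator is invented for the sketch:
<pre>
def is_valid_age(age):
    """Hypothetical rule under test: accept integer ages from 18 to 65 inclusive."""
    return isinstance(age, int) and 18 <= age <= 65

# Equivalence classes: below range (invalid), in range (valid), above range (invalid).
# Boundary value analysis picks values at and adjacent to each boundary.
test_values = {
    17: False,   # just below lower boundary  (invalid class)
    18: True,    # lower boundary             (valid class)
    19: True,    # just above lower boundary
    40: True,    # representative mid-range value
    64: True,    # just below upper boundary
    65: True,    # upper boundary
    66: False,   # just above upper boundary  (invalid class)
}

for value, expected in test_values.items():
    result = is_valid_age(value)
    status = "PASS" if result == expected else "FAIL"
    print(f"{status}: is_valid_age({value}) -> {result}, expected {expected}")
</pre>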
1.5 What's a 'test plan'?
A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to be useful but not so thorough that no one outside the test group will read it. The following are some of the items that might be included in a test plan, depending on the particular project:
* Title
* Identification of software including version/release numbers
* Revision history of document including authors, dates, approvals
* Table of Contents
* Purpose of document, intended audience
* Objective of testing effort
* Software product overview
* Relevant related document list, such as requirements, design documents, other test plans, etc.
* Relevant standards or legal requirements
* Traceability requirements
* Relevant naming conventions and identifier conventions
* Overall software project organization and personnel/contact-info/responsibilities
* Test organization and personnel/contact-info/responsibilities
* Assumptions and dependencies
* Project risk analysis
* Testing priorities and focus
* Scope and limitations of testing
* Test outline - a decomposition of the test approach by test type, feature, functionality, process, system, module, etc. as applicable
* Outline of data input equivalence classes, boundary value analysis, error classes
* Test environment - hardware, operating systems, other required software, data configurations, interfaces to other systems
* Test environment validity analysis - differences between the test and production systems and their impact on test validity.
* Test environment setup and configuration issues
* Software migration processes
* Software CM processes
* Test data setup requirements
* Database setup requirements
* Outline of system-logging/error-logging/other capabilities, and tools such as screen capture software, that will be used to help describe and report bugs
* Discussion of any specialized software or hardware tools that will be used by testers to help track the cause or source of bugs
* Test automation - justification and overview
* Test tools to be used, including versions, patches, etc.
* Test script/test code maintenance processes and version control
* Problem tracking and resolution - tools and processes
* Project test metrics to be used
* Reporting requirements and testing deliverables
* Software entrance and exit criteria
* Initial sanity testing period and criteria
* Test suspension and restart criteria
* Personnel allocation
* Personnel pre-training needs
* Test site/location
* Outside test organizations to be utilized and their purpose, responsibilities, deliverables, contact persons, and coordination issues
* Relevant proprietary, classified, security, and licensing issues.
* Open issues
* Appendix - glossary, acronyms, etc.
1.6 What's a 'test case'?
* A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results.
* Note that the process of developing test cases can help find problems in the requirements or design of an application, since it requires completely thinking through the operation of the application. For this reason, it's useful to prepare test cases early in the development cycle if possible.
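Those particulars can be captured in a simple structured record. The sketch below mirrors the list above; all example values are invented:
<pre>
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestCase:
    """Minimal test case record; field names follow the particulars listed above."""
    identifier: str
    name: str
    objective: str
    conditions_setup: str
    input_data: str
    steps: List[str] = field(default_factory=list)
    expected_results: str = ""

# Example values are invented for illustration.
tc = TestCase(
    identifier="TC-042",
    name="Login with valid credentials",
    objective="Verify that a registered user can log in",
    conditions_setup="User account 'demo' exists and is active",
    input_data="username=demo, password=<valid password>",
    steps=["Open the login screen", "Enter credentials", "Click OK"],
    expected_results="Main window opens and shows the user's name",
)
print(tc.identifier, "-", tc.name)
</pre>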
1.7 What should be done after a bug is found?
The bug needs to be communicated and assigned to developers that can fix it. After the problem is resolved, fixes should be re-tested, and determinations made regarding requirements for regression testing to check that fixes didn't create problems elsewhere. If a problem-tracking system is in place, it should encapsulate these processes. A variety of commercial problem-tracking/management software tools are available (see the 'Tools' section for web resources with listings of such tools). The following are items to consider in the tracking process:
* Complete information such that developers can understand the bug, get an idea of its severity, and reproduce it if necessary.
* Bug identifier (number, ID, etc.)
* Current bug status (e.g., 'Released for Retest', 'New', etc.)
* The application name or identifier and version
* The function, module, feature, object, screen, etc. where the bug occurred
* Environment specifics, system, platform, relevant hardware specifics
* Test case name/number/identifier
* One-line bug description
* Full bug description
* Description of steps needed to reproduce the bug if not covered by a test case or if the developer doesn't have easy access to the test case/test script/test tool
* Names and/or descriptions of file/data/messages/etc. used in test
* File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be helpful in finding the cause of the problem
* Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)
* Was the bug reproducible?
* Tester name
* Test date
* Bug reporting date
* Name of developer/group/organization the problem is assigned to
* Description of problem cause
* Description of fix
* Code section/file/module/class/method that was fixed
* Date of fix
* Application version that contains the fix
* Tester responsible for retest
* Retest date
* Retest results
* Regression testing requirements
* Tester responsible for regression tests
* Regression testing results
A reporting or tracking process should enable notification of appropriate personnel at various stages. For instance, testers need to know when retesting is needed, developers need to know when bugs are found and how to get the needed information, and reporting/summary capabilities are needed for managers.
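As a hedged sketch of how status tracking and notification might fit together, a bug record can carry a status whose allowed transitions determine who is notified. The status names and transitions below are examples only, not a prescribed workflow:
<pre>
# Illustrative defect statuses and allowed transitions; real tools and
# workflows vary, so treat these names as examples only.
ALLOWED_TRANSITIONS = {
    "New":                 ["Assigned"],
    "Assigned":            ["Fixed"],
    "Fixed":               ["Released for Retest"],
    "Released for Retest": ["Closed", "Reopened"],
    "Reopened":            ["Assigned"],
    "Closed":              [],
}

def change_status(bug, new_status):
    """Move a bug to a new status if the transition is allowed, and say who to notify."""
    if new_status not in ALLOWED_TRANSITIONS[bug["status"]]:
        raise ValueError(f"Cannot move from {bug['status']} to {new_status}")
    bug["status"] = new_status
    # Notify the right people at each stage (see the paragraph above).
    notify = "tester" if new_status == "Released for Retest" else "developer/manager"
    print(f"Bug {bug['id']} -> {new_status}; notify {notify}")

bug = {"id": "BUG-101", "status": "New"}   # invented example record
change_status(bug, "Assigned")
change_status(bug, "Fixed")
change_status(bug, "Released for Retest")
change_status(bug, "Closed")
</pre>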
1.8 How does a client/server environment affect testing?
Client/server applications can be quite complex due to the multiple dependencies among clients, data communications, hardware, and servers, especially in multi-tier systems. Thus testing requirements can be extensive. When time is limited (as it usually is) the focus should be on integration and system testing. Additionally, load/stress/performance testing may be useful in determining client/server application limitations and capabilities. There are commercial tools to assist with such testing.
1.9 How can World Wide Web sites be tested?
Web sites are essentially client/server applications - with web servers and 'browser' clients. Consideration should be given to the interactions between html pages, TCP/IP communications, Internet connections, firewalls, applications that run in web pages (such as applets, javascript, plug-in applications), and applications that run on the server side (such as cgi scripts, database interfaces, logging applications, dynamic page generators, asp, etc.). Additionally, there are a wide variety of servers and browsers, various versions of each, small but sometimes significant differences between them, variations in connection speeds, rapidly changing technologies, and multiple standards and protocols. The end result is that testing for web sites can become a major ongoing effort. Other considerations might include:
* What are the expected loads on the server (e.g., number of hits per unit time), and what kind of performance is required under such loads (such as web server response time, database query response times)? What kinds of tools will be needed for performance testing (such as web load testing tools, other tools already in house that can be adapted, web robot downloading tools, etc.)? A small response-time measurement sketch follows this list.
* Who is the target audience? What kind of browsers will they be using? What kind of connection speeds will they be using? Are they intra-organization (thus with likely high connection speeds and similar browsers) or Internet-wide (thus with a wide variety of connection speeds and browser types)?
* What kind of performance is expected on the client side (e.g., how fast should pages appear, how fast should animations, applets, etc. load and run)?
* Will down time for server and content maintenance/upgrades be allowed? how much?
* What kinds of security (firewalls, encryptions, passwords, etc.) will be required and what is it expected to do? How can it be tested?
* How reliable are the site's Internet connections required to be? And how does that affect backup system or redundant connection requirements and testing?
* What processes will be required to manage updates to the web site's content, and what are the requirements for maintaining, tracking, and controlling page content, graphics, links, etc.?
* Which HTML specification will be adhered to? How strictly? What variations will be allowed for targeted browsers?
* Will there be any standards or requirements for page appearance and/or graphics throughout a site or parts of a site?
* How will internal and external links be validated and updated? how often?
* Can testing be done on the production system, or will a separate test system be required? How are browser caching, variations in browser option settings, dial-up connection variabilities, and real-world internet 'traffic congestion' problems to be accounted for in testing?
* How extensive or customized are the server logging and reporting requirements; are they considered an integral part of the system and do they require testing?
* How are cgi programs, applets, javascripts, ActiveX components, etc. to be maintained, tracked, controlled, and tested?
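To illustrate the load and response-time question flagged above, response times under concurrent requests can be sampled with nothing more than the Python standard library. The URL and load figures below are placeholders, and a purpose-built load testing tool would normally be used instead:
<pre>
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://www.example.com/"   # placeholder; substitute the site under test
CONCURRENT_USERS = 10             # illustrative load level only
REQUESTS_PER_USER = 5

def timed_fetch(url):
    """Fetch a URL once and return the elapsed time in seconds."""
    start = time.perf_counter()
    with urlopen(url, timeout=30) as response:
        response.read()
    return time.perf_counter() - start

def user_session(url):
    return [timed_fetch(url) for _ in range(REQUESTS_PER_USER)]

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        results = pool.map(user_session, [URL] * CONCURRENT_USERS)
    timings = [t for session in results for t in session]
    print(f"{len(timings)} requests, "
          f"avg {sum(timings)/len(timings):.3f}s, max {max(timings):.3f}s")
</pre>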
1.10 GUI Test Checklist
1.10.1 Windows Compliance Standards
* 1.1. Application
* 1.2. For Each Window in the Application
* 1.3. Text Boxes
* 1.4. Option (Radio Buttons)
* 1.5. Check Boxes
* 1.6. Command Buttons
* 1.7. Drop Down List Boxes
* 1.8. Combo Boxes
* 1.9. List Boxes
1.10.2 Tester's Screen Validation Checklist
* 2.1. Aesthetic Conditions
* 2.2. Validation Conditions
* 2.3. Navigation Conditions
* 2.4. Usability Conditions
* 2.5. Data Integrity Conditions
* 2.6. Modes (Editable Read-only) Conditions
* 2.7. General Conditions
* 2.8. Specific Field Tests
* 2.8.1. Date Field Checks
* 2.8.2. Numeric Fields
* 2.8.3. Alpha Field Checks
1.10.3 Validation Testing - Standard Actions
* 3.1. On every Screen
* 3.2. Shortcut keys / Hot Keys
* 3.3. Control Shortcut Keys
1.10.4 Windows Compliance
'''Windows Compliance Testing'''
* For each application, start the application by double-clicking its icon. The loading message should show the application name, version number, and a larger pictorial representation of the icon.
* No Login is necessary
* The main window of the application should have the same caption as the caption of the icon in Program Manager.
* Closing the application should result in an "Are you Sure" message box
* Attempt to start application Twice
* This should not be allowed - you should be returned to main Window
* Try to start the application twice as it is loading.
* On each window, if the application is busy, then the hourglass should be displayed. If there is no hourglass (e.g. alpha access enquiries) then some 'enquiry in progress' message should be displayed.
* All screens should have a Help button; pressing F1 should do the same.
1.10.5 For Each Window in the Application
* If Window has a Minimize Button, click it.
* Window Should return to an icon on the bottom of the screen
* This icon should correspond to the Original Icon under Program Manager.
* Double Click the Icon to return the Window to its original size.
* The window caption for every application should have the name of the application and the window name - especially the error messages.
* These should be checked for spelling, English, and clarity, especially at the top of the screen. Check that the title of the window makes sense.
1.10.6 Check all text on window for Spelling/Tense and Grammar
* Use TAB to move focus around the Window. Use SHIFT+TAB to move focus backwards.
* Tab order should be left to right, and Up to Down within a group box on the screen.
* All controls should get focus - indicated by dotted box, or cursor. Tabbing to an entry field with text in it should highlight the entire text in the field.
* The text in the Micro Help line should change - Check for spelling, clarity and non-updateable etc.
* If a field is disabled (greyed) then it should not get focus. It should not be possible to select it with either the mouse or by using TAB. Try this for every greyed control.
* Never updateable fields should be displayed with black text on a grey background with a black label.
* All text should be left-justified, followed by a colon tight to it.
* In a field that may or may not be updateable, the label text and contents changes from black to grey depending on the current status.
* List boxes are always white background with black text whether they are disabled or not. All others are grey.
* In general, do not use goto screens, use gosub, i.e. if a button causes another screen to be displayed, the screen should not hide the first screen, with the exception of tab in 2.0
* When returning, return to the first screen cleanly, i.e. no other screens/applications should appear.
* In general, double-clicking is not essential. In general, everything can be done using both the mouse and the keyboard.
* All tab buttons should have a distinct letter.
1.10.7 Text Boxes
* Move the Mouse Cursor over all Enterable Text Boxes. Cursor should change from arrow to Insert Bar. If it doesn't then the text in the box should be grey or non-updateable. Refer to previous page.
1.10.7.1 Enter text into Box
* Try to overflow the text by typing too many characters - this should be stopped. Check the field width with capital Ws.
* Enter invalid characters - Letters in amount fields, try strange characters like + , - * etc. in All fields.
* SHIFT and Arrow should Select Characters. Selection should also be possible with mouse. Double Click should select all text in box.
1.10.8 Option (Radio Buttons)
* Left and Right arrows should move the 'ON' selection; so should Up and Down. Select with the mouse by clicking.
1.10.9 Check Boxes
* Clicking with the mouse on the box, or on the text should SET/UNSET the box. SPACE should do the same
1.10.10 Command Buttons
* If Command Button leads to another Screen, and if the user can enter or change details on the other screen then the Text on the button should be followed by three dots.
* All Buttons except for OK and Cancel should have a letter Access to them. This is indicated by a letter underlined in the button text. The button should be activated by pressing ALT+Letter. Make sure there is no duplication.
* Click each button once with the mouse - This should activate
* Tab to each button - Press SPACE - This should activate
* Tab to each button - Press RETURN - This should activate
* If there is a Cancel button on the screen, then pressing ESC should activate it.
* If pressing the Command button results in uncorrectable data e.g. closing an action step, there should be a message phrased positively with Yes/No answers where Yes results in the completion of the action.
1.10.11 Drop Down List Boxes
* Pressing the Arrow should give list of options. This List may be scrollable. You should not be able to type text in the box.
* Pressing a letter should bring you to the first item in the list that starts with that letter. Pressing 'Ctrl - F4' should open/drop down the list box.
* Spacing should be compatible with the existing windows spacing (word etc.). Items should be in alphabetical order with the exception of blank/none which is at the top or the bottom of the list box.
* Dropping down the list with an item selected should display the list with the selected item at the top.
* Make sure only one space appears; there shouldn't be a blank line at the bottom.
1.10.12 Combo Boxes
* Should allow text to be entered. Clicking Arrow should allow user to choose from list
1.10.13 List Boxes
* Should allow a single selection to be chosen, by clicking with the mouse, or using the Up and Down Arrow keys.
* Pressing a letter should take you to the first item in the list starting with that letter.
* If there is a 'View' or 'Open' button beside the list box then double clicking on a line in the List Box should act in the same way as selecting an item in the list box, then clicking the command button.
* Force the scroll bar to appear, make sure all the data can be seen in the box.
1.10.14 Screen Validation Checklist
Aesthetic Conditions
* Is the general screen background the correct color?
* Are the field prompts the correct color?
* Are the field backgrounds the correct color?
* In read-only mode, are the field prompts the correct color?
* In read-only mode, are the field backgrounds the correct color?
* Are all the screen prompts specified in the correct screen font?
* Is the text in all fields specified in the correct screen font?
* Are all the field prompts aligned perfectly on the screen?
* Are all the field edit boxes aligned perfectly on the screen?
* Are all group boxes aligned correctly on the screen?
* Should the screen be resizable?
* Should the screen be minimisable?
* Are all the field prompts spelled correctly?
* Are all character or alpha-numeric fields left justified? This is the default unless otherwise specified.
* Are all numeric fields right justified? This is the default unless otherwise specified.
* Is all the microhelp text spelled correctly on this screen?
* Is all the error message text spelled correctly on this screen?
* Is all user input captured in UPPER case or lower case consistently?
* Where the database requires a value (other than null) then this should be defaulted into fields. The user must either enter an alternative valid value or leave the default value intact.
* Assure that all windows have a consistent look and feel.
* Assure that all dialog boxes have a consistent look and feel.
1.10.15 Validation Conditions
* Does a failure of validation on every field cause a sensible user error message?
* Is the user required to fix entries which have failed validation tests?
* Have any fields got multiple validation rules and if so are all rules being applied?
* If the user enters an invalid value and clicks on the OK button (i.e. does not TAB off the field), is the invalid entry identified and highlighted correctly with an error message?
* Is validation consistently applied at screen level unless specifically required at field level?
* For all numeric fields check whether negative numbers can and should be able to be entered.
* For all numeric fields check the minimum and maximum values and also some mid-range allowable values.
* For all character/alphanumeric fields check that a character limit is specified and that this limit is exactly correct for the specified database size.
* Do all mandatory fields require user input?
* If any of the database columns don’t allow null values then the corresponding screen fields must be mandatory. (If any field which initially was mandatory has become optional then check whether null values are allowed in this field.)
1.10.16 Navigation Conditions
* Can the screen be accessed correctly from the menu?
* Can the screen be accessed correctly from the toolbar?
* Can the screen be accessed correctly by double clicking on a list control on the previous screen?
* Can all screens accessible via buttons on this screen be accessed correctly?
* Can all screens accessible by double clicking on a list control be accessed correctly?
* Is the screen modal, i.e. is the user prevented from accessing other functions when this screen is active, and is this correct?
* Can a number of instances of this screen be opened at the same time and is this correct?
1.10.17 Usability Conditions
* Are all the drop downs on this screen sorted correctly? Alphabetic sorting is the default unless otherwise specified.
* Is all date entry required in the correct format?
* Have all pushbuttons on the screen been given appropriate Shortcut keys?
* Do the Shortcut keys work correctly?
* Have the menu options which apply to your screen got fast keys associated and should they have?
* Does the Tab Order specified on the screen go in sequence from Top Left to bottom right? This is the default unless otherwise specified.
* Are all read-only fields avoided in the TAB sequence?
* Are all disabled fields avoided in the TAB sequence?
* Can the cursor be placed in the microhelp text box by clicking on the text box with the mouse?
* Can the cursor be placed in read-only fields by clicking in the field with the mouse?
* Is the cursor positioned in the first input field or control when the screen is opened?
* Is there a default button specified on the screen?
* Does the default button work correctly?
* When an error message occurs does the focus return to the field in error when the user cancels it?
* When the user Alt+Tab's to another application, does this have any impact on the screen upon return to the application?
* Do all the field edit boxes indicate the number of characters they will hold by their length? E.g. a 30-character field should be visibly longer than shorter fields.