
Testing Material


Definitions of Testing:

“Testing is the process of executing a program with the intent of finding errors”

Or

“Testing is the process of evaluating a system by manual or automatic means and verifies that it satisfies specified requirements”

Or

“… the process of exercising or evaluating a system or system component by manual or automated means to verify that it satisfies specified requirements or to identify differences between expected and actual results …”

Why Software Testing?

Software testing helps to deliver quality software products that satisfy users’ requirements, needs and expectations. If it is done poorly:

  • Defects are found during operation
  • Maintenance costs are high and users are dissatisfied
  • It may cause mission failure
  • Operational performance and reliability are impacted

Eight Basic Principles of Testing

  • Define the expected output or result.
  • Don’t test your own programs.
  • Inspect the results of each test completely.
  • Include test cases for invalid or unexpected conditions.
  • Test the program to see if it does what it is not supposed to do as well as what it is supposed to do.
  • Avoid disposable test cases unless the program itself is disposable.
  • Do not plan tests assuming that no errors will be found.
  • The probability of locating more errors in any one module is directly proportional to the number of errors already found in that module.

Best Testing Practices to be followed during testing

  • Testing and evaluation responsibility is given to every member of the team, so that quality becomes a shared responsibility.
  • Develop a Master Test Plan so that resources and responsibilities are understood and assigned as early in the project as possible.
  • Systematic evaluation and preliminary test design are established as a part of all system engineering and specification work.
  • Testing is used to verify that all project deliverables and components are complete, and to demonstrate and track true project progress.
  • A risk-prioritized list of test requirements and objectives (such as requirements-based, design-based, etc.) is developed and maintained.
  • Conduct Reviews as early and as often as possible to provide developer feedback and get problems found and fixed as they occur.

Requirements and Analysis Specification

The main objective of requirement analysis is to prepare a document that includes all the client requirements; the Software Requirement Specification (SRS) document is the primary output of this phase. Proper requirements and specifications are critical for a successful project: errors removed at this phase cost far less to fix than the same errors found in the Design phase or later. You should also verify the following activities:

  • Determine Verification Approach.
  • Determine Adequacy of Requirements.
  • Generate functional test data.
  • Determine consistency of design with requirements.

Design phase

In this phase the design of the project is divided into two levels:

High-Level Design (HLD) or System Design.

Low-Level Design (LLD) or Detailed Design.

High-Level Design or System Design (HLD)

High-level design gives the overall system design in terms of functional architecture and database design. It is very useful for developers to understand the flow of the system. In this phase the design team, the review team (testers) and the customers play a major role. The entry criterion for this phase is the requirement document, the SRS. The exit criteria are the HLD, project standards, the functional design documents, and the database design document.

Low-Level Design (LLD)

During the detailed design phase, the view of the application developed during high-level design is broken down into modules and programs. Logic design is done for every program and then documented as program specifications. For every program, a unit test plan is created.

The entry criterion for this phase is the HLD document, and the exit criteria are the program specifications and unit test plans (the LLD).

Development Phase

This is the phase where coding actually starts. After the preparation of the HLD and LLD, the developers know their roles and develop the project according to the specifications.

This stage produces the source code, executables, and the database. The output of this phase is the subject of subsequent testing and validation.

The following activities should also be verified:

  • Determine adequacy of implementation.
  • Generate structural and functional test data for programs.

The inputs for this phase are the physical database design document, project standards, program specifications, unit test plans, program skeletons, and utility tools. The outputs are test data, source code, executables, and code reviews.

Testing phase

This phase is intended to find defects that can be exposed only by testing the entire system; this can be done through static testing or dynamic testing. Static testing means testing the product without executing it, by examining it and conducting reviews. Dynamic testing is what you would normally think of as testing: we test the executing parts of the project. A series of different tests is performed to verify that all system elements have been properly integrated and that the system performs all its functions.

Note that system test planning can occur before coding is completed; indeed, it is often done in parallel with coding. The input for this phase is the requirements specification document, and the outputs are the system test plan and the test results.

Implementation phase or the Acceptance phase

This phase includes two basic tasks:

  • Getting the software accepted
  • Installing the software at the customer site.

Acceptance consists of formal testing conducted by the customer according to the acceptance test plan prepared earlier, and analysis of the test results to determine whether the system satisfies its acceptance criteria. When the result of the analysis satisfies the acceptance criteria, the user accepts the software.

Maintenance phase

This phase covers all modifications needed where the system does not meet the customer’s requirements, as well as anything to be appended to the present system. All types of corrections for the project or product take place in this phase. The cost of risk is very high in this phase. This is the last phase of the software development life cycle. The input is the project to be corrected, and the output is the modified version of the project.

Quality has two working definitions:

  • Producer’s Viewpoint – The quality of the product meets the requirements.
  • Customer’s Viewpoint – The quality of the product is “fit for use” or meets the Customer’s needs.

Meeting the client’s requirements the first time and every time is what is considered quality.

Quality Assurance

Quality assurance is a planned and systematic set of activities necessary to provide adequate confidence that products and services will conform to specified requirements and meet user needs.

Quality assurance is a staff function, responsible for implementing the quality policy defined through the development and continuous improvement of software development processes.

Quality assurance is an activity that establishes and evaluates the processes that produce products.

If there is no need for a process, there is no role for quality assurance. For example, quality assurance activities in an IT environment would determine the need for, acquire, or help install:

  • System development methodologies
  • Estimation processes
  • System maintenance processes
  • Requirements definition processes
  • Testing processes and standards

Quality Control

Quality control is the process by which product quality is compared with applicable standards, and the action taken when nonconformance is detected.

 Quality control is a line function, and the work is done within a process to ensure that the work product conforms to standards and requirements.

Quality control activities focus on identifying defects in the actual products produced. These activities begin at the start of the software development process with reviews of requirements, and continue until all application testing is complete.

It is possible to have quality control without quality assurance. 

There are five perspectives of quality – each of which must be considered as important to the customer:

1. Transcendent – I know it when I see it

2. Product-Based – Possesses desired features

3. User-Based – Fitness for use

4. Development- and Manufacturing-Based – Conforms to requirements

5. Value-Based – At an acceptable cost

The Domino Effect

One problem condition, such as a wrong price or a program defect, can cause hundreds or even thousands of similar errors within a few minutes.

Defects typically found in software systems are the result of these circumstances:

  • IT improperly interprets requirements: IT staff misinterpret what the user wants, but correctly implement what the IT people believe is wanted.

  • Users specify the wrong requirements: the specifications given to IT are erroneous.

  • Requirements are incorrectly recorded: IT fails to record the specifications properly.

  • Design specifications are incorrect: the application system design does not achieve the system requirements, but the design as specified is implemented correctly.

  • Program specifications are incorrect: the design specifications are incorrectly interpreted, making the program specifications inaccurate; however, it is possible to properly code the program to achieve the specifications.

  • Errors in program coding: the program is not coded according to the program specifications.

  • Data entry errors: data entry staff incorrectly enter information into the computers.

  • Testing errors: tests either falsely detect an error or fail to detect one.

  • Mistakes in error correction: the implementation team makes errors in implementing solutions.

  • The corrected condition causes another defect: in the process of correcting a defect, the correction itself introduces additional defects into the application system.

Verification versus Validation

Verification answers the question, “Did we build the system right?”, while validation addresses, “Did we build the right system?”

Verification is the process of confirming that software meets its specifications.

Validation is the process of confirming that it meets the user’s requirements.

Verification can be conducted through reviews: inspections, walkthroughs, meetings and peer reviews. Quality reviews provide visibility into the development process throughout the software development life cycle, and help teams determine whether to continue development activity at various checkpoints or milestones in the process. They are conducted to identify defects in a product early in the life cycle.

Types of Reviews:

In-process Reviews:

They look at the product during a specific period of the life cycle, such as during the design activity. They are usually limited to a segment of a project, with the goal of identifying defects as work progresses, rather than at the close of a phase or even later, when they are more costly to correct.

Decision-point or Phase-end Reviews:

This type of review is helpful in determining whether to continue with the planned activities or not. They are held at the end of each phase.

Post-implementation Reviews:

These reviews are held after implementation is complete, to audit the process based on actual results. Post-implementation reviews are also known as “postmortems”, and are held to assess the success of the overall process after release and to identify any opportunities for process improvement.

Reviews vary from very informal to very formal (i.e. well structured and regulated). The formality of a review process is related to factors such as the maturity of the development process, any legal or regulatory requirements or the need for an audit trail.

The way a review is carried out depends on the agreed objective of the review (e.g. find defects, gain understanding, or discussion and decision by consensus).

Phases of a formal review 

A typical formal review has the following main phases:

Planning: selecting the personnel, allocating roles; defining the entry and exit criteria for more formal review types (e.g. inspection); and selecting which parts of documents to look at.

Kick-off: distributing documents; explaining the objectives, process and documents to the participants; and checking entry criteria (for more formal review types).

Individual preparation: work done by each of the participants on their own before the review meeting, noting potential defects, questions and comments.

Review meeting: discussion or logging, with documented results or minutes (for more formal review types). The meeting participants may simply note defects, make recommendations for handling the defects, or make decisions about the defects.

Rework: fixing defects found, typically done by the author.

Follow-up: checking that defects have been addressed, gathering metrics and checking on exit criteria (for more formal review types).

Roles and responsibilities 

A typical formal review will include the roles below:

Manager: decides on the execution of reviews, allocates time in project schedules and determines if the review objectives have been met.

 Moderator: the person who leads the review of the document or set of documents, including planning the review, running the meeting, and follow-up after the meeting. If necessary, the moderator may mediate between the various points of view and is often the person upon whom the success of the review rests.

 Author: the writer or person with chief responsibility for the document(s) to be reviewed.

Reviewers: individuals with a specific technical or business background (also called checkers or inspectors) who, after the necessary preparation, identify and describe findings (e.g. defects) in the product under review. Reviewers should be chosen to represent different perspectives and roles in the review process, and they take part in any review meetings.

Scribe (or recorder): documents all the issues, problems and open points that were identified during the meeting.

Looking at documents from different perspectives, and using checklists, can make reviews more effective and efficient; for example, a checklist based on perspectives such as user, maintainer, tester or operations, or a checklist of typical requirements problems.

Types of review 

A single document may be the subject of more than one review. If more than one type of review is used, the order may vary. For example, an informal review may be carried out before a technical review, or an inspection may be carried out on a requirement specification before a walkthrough with customers. The main characteristics, options and purposes of common review types are:

Informal review

Key characteristics:

  • no formal process;
  • there may be pair programming, or a technical lead reviewing designs and code;
  • optionally may be documented;
  • may vary in usefulness depending on the reviewer;
  • main purpose: an inexpensive way to get some benefit.

Walkthrough

Key characteristics:

  • meeting led by the author;
  • scenarios, dry runs, peer group;
  • open-ended sessions;
  • optionally a pre-meeting preparation by reviewers, a review report, a list of findings and a scribe (who is not the author);
  • may vary in practice from quite informal to very formal;
  • main purposes: learning, gaining understanding, defect finding.

Technical review

Key characteristics:

  • documented, defined defect-detection process that includes peers and technical experts;
  • may be performed as a peer review without management participation;
  • ideally led by a trained moderator (not the author);
  • pre-meeting preparation;
  • optionally the use of checklists, a review report, a list of findings and management participation;
  • may vary in practice from quite informal to very formal;
  • main purposes: discuss, make decisions, evaluate alternatives, find defects, solve technical problems and check conformance to specifications and standards.

Inspection

Key characteristics:

  • led by a trained moderator (not the author);
  • usually peer examination;
  • defined roles;
  • includes metrics;
  • formal process based on rules and checklists with entry and exit criteria;
  • pre-meeting preparation;
  • inspection report, list of findings;
  • formal follow-up process;
  • optionally, process improvement and a reader;
  • main purpose: find defects.

Success factors for reviews include:

  • Each review has a clear predefined objective.
  • The right people for the review objectives are involved.
  • Defects found are welcomed, and expressed objectively.
  • People issues and psychological aspects are dealt with (e.g. making it a positive experience for the author).
  • Review techniques are applied that are suitable to the type and level of software work products and reviewers.
  • Checklists or roles are used if appropriate to increase effectiveness of defect identification.
  • Training is given in review techniques, especially the more formal techniques, such as inspection.
  • Management supports a good review process (e.g. by incorporating adequate time for review activities in project schedules).
  • There is an emphasis on learning and process improvement.

Test manager roles:

The test manager ensures that testing is performed, that it is documented, and that testing techniques are established and developed. Test managers are responsible for ensuring that tests are designed and executed in a timely and productive manner, and for:

  • Test planning and estimation
  • Designing the test strategy
  • Reviewing analysis and design artifacts
  • Chairing the Test Readiness Review
  • Managing the test effort
  • Overseeing acceptance tests

Testers are usually responsible for:

  • Developing test cases and procedures
  • Test data planning, capture, and conditioning
  • Reviewing analysis and design artifacts
  • Test execution
  • Utilizing automated test tools for regression testing
  • Preparing test documentation
  • Defect tracking and reporting

Other testers joining the team will primarily focus on test execution, defect reporting, and regression testing. These testers may be junior members of the test team, users, marketing or product representatives, and so on.

The test team should be represented in all key requirements and design meetings, including JAD or requirements-definition sessions, risk analysis sessions, and prototype review sessions. They should also participate in all inspections or walkthroughs for requirements and design artifacts.

What’s a ‘test plan’?


A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. 

The completed document will help people outside the test group understand the ‘why’ and ‘how’ of product validation. It should be thorough enough to be useful but not so thorough that no one outside the test group will read it. The following are some of the items that might be included in a test plan, depending on the particular project:
• Title
• Identification of software including version/release numbers
• Revision history of document including authors, dates, approvals
• Table of Contents
• Purpose of document, intended audience
• Objective of testing effort
• Software product overview
• Relevant related document list, such as requirements, design documents, other test plans, etc.
• Relevant standards or legal requirements
• Traceability requirements
• Relevant naming conventions and identifier conventions
• Overall software project organization and personnel/contact-info/responsibilities
• Test organization and personnel/contact-info/responsibilities
• Assumptions and dependencies
• Project risk analysis
• Testing priorities and focus
• Scope and limitations of testing
• Test outline – a decomposition of the test approach by test type, feature, functionality, process, system, module, etc. as applicable
• Outline of data input equivalence classes, boundary value analysis, error classes
• Test environment – hardware, operating systems, other required software, data configurations, interfaces to other systems
• Test environment validity analysis – differences between the test and production systems and their impact on test validity.
• Test environment setup and configuration issues
• Software migration processes
• Software CM processes
• Test data setup requirements
• Database setup requirements
• Outline of system-logging/error-logging/other capabilities, and tools such as screen capture software, that will be used to help describe and report bugs
• Discussion of any specialized software or hardware tools that will be used by testers to help track the cause or source of bugs
• Test automation – justification and overview
• Test tools to be used, including versions, patches, etc.
• Test script/test code maintenance processes and version control
• Problem tracking and resolution – tools and processes
• Project test metrics to be used
• Reporting requirements and testing deliverables
• Software entrance and exit criteria
• Initial sanity testing period and criteria
• Test suspension and restart criteria
• Personnel allocation
• Personnel pre-training needs
• Test site/location
• Outside test organizations to be utilized and their purpose, responsibilities, deliverables, contact persons, and coordination issues
• Relevant proprietary, classified, security, and licensing issues.
• Open issues
• Appendix – glossary, acronyms, etc.

What’s a ‘test case’? 

• A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results. 


• Note that the process of developing test cases can help find problems in the requirements or design of an application, since it requires completely thinking through the operation of the application. For this reason, it’s useful to prepare test cases early in the development cycle if possible.

What is system testing?
System testing is testing carried out on an integrated system to verify that the system meets its specified requirements. It is concerned with the behavior of the whole system, according to the scope defined. More often than not, system testing is the final test carried out by the development team, in order to verify that the system developed meets the specifications, and to identify any defects that may be present.

What is the difference between retesting and regression testing?
Retesting, also known as confirmation testing, reruns the test cases that failed the last time they were run, in order to verify the success of the corrective actions taken on the defect found. Regression testing, on the other hand, is the testing of a previously tested program after modification, to make sure that no new defects have been introduced; in other words, it helps to uncover defects in the unchanged areas of the software. A concrete mapping of both activities onto an automated workflow is sketched below.
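To make the distinction concrete, here is a minimal sketch of how the two activities can map onto a pytest workflow; the function under test and its rules are invented for illustration:

```python
import pytest

def transfer(balance, amount):
    # Toy function under test; the rules here are illustrative.
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

def test_transfer_reduces_balance():      # part of the regression suite
    assert transfer(100, 40) == 60

def test_transfer_rejects_overdraft():    # suppose this failed last run
    with pytest.raises(ValueError):
        transfer(100, 200)

# Retesting (confirmation): rerun only the previously failing tests
#     pytest --last-failed
# Regression testing: rerun the entire suite after the fix
#     pytest
```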

What is a test suite?
A test suite is a set of several test cases designed for a component or system under test, where the postcondition of one test case is normally used as the precondition for the next one.


What is a Test Case?

A test case is a set of conditions or variables and inputs that are developed to achieve a particular goal or objective on a certain application, in order to judge its capabilities or features.
It might take more than one test case to determine the true functionality of the application being tested. Every requirement or objective to be achieved needs at least one test case. Some software development methodologies, such as the Rational Unified Process (RUP), recommend creating at least two test cases for each requirement or objective: one for performing testing from the positive perspective and the other from the negative perspective, as in the sketch below.
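As a minimal illustration of the “one positive, one negative” guideline, assume a hypothetical requirement that a username must be 3 to 20 characters of lowercase letters, digits or underscores:

```python
import re

USERNAME_PATTERN = re.compile(r"^[a-z0-9_]{3,20}$")  # hypothetical requirement

def is_valid_username(name: str) -> bool:
    return bool(USERNAME_PATTERN.match(name))

def test_username_positive():
    # positive perspective: a value the requirement says must be accepted
    assert is_valid_username("alice_01")

def test_username_negative():
    # negative perspective: a value the requirement says must be rejected
    assert not is_valid_username("ab")  # too short
```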

Test Case Structure

A formal written test case comprises three parts:

Information:
Information consists of general information about the test case: identifier, test case creator, test case version, name of the test case, purpose or brief description, and test case dependencies.

Activity:
Activity consists of the actual test case activities: information about the test case environment, activities to be done at test case initialization, activities to be done after the test case is performed, step-by-step actions to be taken while testing, and the input data to be supplied for testing.

Results:
Results are the outcomes of a performed test case. Results data consist of information about the expected results and the actual results.
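One way to make this three-part structure tangible is to model it as a small record type; this is a sketch, and every field name here is chosen for illustration rather than taken from any standard:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    # Information
    identifier: str
    name: str
    purpose: str
    author: str = "unknown"
    version: str = "1.0"
    dependencies: list[str] = field(default_factory=list)
    # Activity
    setup: list[str] = field(default_factory=list)     # initialization steps
    steps: list[str] = field(default_factory=list)     # step-by-step actions
    input_data: dict = field(default_factory=dict)
    teardown: list[str] = field(default_factory=list)  # post-test activities
    # Results
    expected_result: str = ""
    actual_result: str = ""

tc = TestCase(
    identifier="TC-LOGIN-001",
    name="Valid login",
    purpose="Verify that a registered user can log in",
    steps=["Open the login page", "Enter valid credentials", "Click Login"],
    input_data={"user": "alice", "password": "secret"},
    expected_result="User lands on the dashboard",
)
```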

Designing Test Cases

Test cases should be designed and written by someone who understands the function or technology being tested. A test case should include the following information – 

  • Purpose of the test 
  • Software requirements and Hardware requirements (if any) 
  • Specific setup or configuration requirements 
  • Description on how to perform the test(s) 
  • Expected results or success criteria for the test 

Designing test cases can consume a large share of the testing schedule, but the time is well spent because good test cases avoid, or at least reduce, unnecessary retesting and debugging. Organizations can adapt the test case approach to their own context and perspectives: some follow a general step-by-step approach, while others opt for a more detailed and complex one. It is important to decide between the two extremes and judge what would work best for you. Designing proper test cases is vital to your software testing plans, because many bugs, ambiguities, inconsistencies and slip-ups can be caught in time, and it saves the time otherwise spent on continuous debugging and retesting.

Tips to design test data before executing your test cases

What do I mean by test data?

If you are writing test cases, then you need input data for any kind of test. The tester may provide this input data at the time of executing the test cases, or the application may pick the required input data from predefined data locations. The test data may be any kind of input to the application: any kind of file that is loaded by the application, or entries read from database tables. It may be in any format, such as XML test data, system test data, SQL test data or stress test data.

Preparing proper test data is part of the test setup. Testers generally call this testbed preparation. In the testbed, all software and hardware requirements are set up using the predefined data values.

If you don’t have a systematic approach for building test data while writing and executing test cases, there is a chance of missing some important test cases. The tester can’t justify a missed bug by saying that test data was not available or was incomplete. It is every tester’s responsibility to create his or her own test data according to the testing needs. Don’t rely on test data created by another tester, or on standard production test data that might not have been updated for months; always create a fresh set of your own test data according to your test needs.

Sometimes it’s not possible to create a completely new set of test data for each and every build. In such cases you can use standard production data, but remember to add or insert your own data sets into this available database. One good way to design test data is to use the existing sample test data or testbed and append your new test case data each time you get the same module for testing. This way you can build a comprehensive data set over time.

How to keep your data intact for any test environment?

Many times more than one tester is responsible for testing a build. In this case more than one tester will have access to the common test data, and each tester will try to manipulate that common data according to his or her own needs. The best way to keep your valuable input data collection intact is to keep personal copies of the same data. The data may be of any format: inputs to be provided to the application, or input files such as Word files, Excel files or photo files.

Check that your data is not corrupted:
Filing a bug without proper troubleshooting is a bad practice. Before executing any test case on existing data, make sure that the data is not corrupted and that the application can read the data source; one way to do this is sketched below.
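A simple precaution is to record a checksum of each test data file when the testbed is known to be good and verify it before a test run. A minimal sketch using Python’s standard hashlib; the file name and the recorded digest are illustrative:

```python
import hashlib

def sha256_of(path: str) -> str:
    # Return the SHA-256 hex digest of a file, read in chunks.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Digest recorded when the testbed was last known to be good (illustrative).
KNOWN_GOOD = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

if sha256_of("test_data/customers.xml") != KNOWN_GOOD:
    raise RuntimeError("Test data is corrupted or modified; rebuild the testbed first.")
```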

How to prepare data considering performance test cases?

Performance tests require very large data sets. Particularly if the application fetches or updates data from DB tables, large data volumes play an important role when testing such an application for performance. Sometimes creating data manually will not detect the subtle bugs that may only be caught with actual data created by the application under test. If you want real-time data, which is impossible to create manually, ask your manager to make it available from the live environment.

I generally ask my manager whether he can make live environment data available for testing. This data is useful for ensuring the smooth functioning of the application for all valid inputs.

Take the example of the ‘statistics testing’ in my search engine project. To check the history of user searches and clicks on advertiser campaigns, data spanning several years had to be processed, and it was practically impossible to create such data manually for dates spread over many years. So there was no option other than using a backup of live server data for testing. (But first make sure your client allows you to use this data.)

What is the ideal test data?

Test data can be said to be ideal if, with the minimum size of data set, all the application errors get identified. Try to prepare test data that exercises all application functionality without exceeding the cost and time constraints for preparing the data and running the tests.

How to prepare test data that will ensure complete test coverage?

Design your test data considering the following categories of test data sets:
1) No data: Run your test cases on blank or default data, and check that proper error messages are generated.

2) Valid data set: Create it to check that the application functions as per requirements and that valid input data is properly saved in the database or files.

3) Invalid data set: Prepare an invalid data set to check application behavior for negative values and alphanumeric string inputs.

4) Illegal data format: Make one data set in an illegal data format. The system should not accept data in an invalid or illegal format; also check that proper error messages are generated.

5) Boundary condition data set: A data set containing out-of-range data. Identify application boundary cases and prepare data sets that cover the lower as well as the upper boundary conditions.

6) Data set for performance, load and stress testing: This data set should be large in volume.

This way creating separate data sets for each test condition will ensure complete test coverage.
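As a minimal sketch of how these categories can drive a single field’s tests, assume a hypothetical ‘age’ input that must be an integer from 18 to 99; the parametrized cases below map onto categories 1) to 5):

```python
import pytest

def parse_age(raw: str) -> int:
    # Hypothetical validator: age must be an integer between 18 and 99.
    if raw is None or raw.strip() == "":
        raise ValueError("age is required")        # 1) no data
    if not raw.strip().lstrip("-").isdigit():
        raise ValueError("age must be a number")   # 4) illegal data format
    age = int(raw)
    if not 18 <= age <= 99:
        raise ValueError("age out of range")       # 3) invalid / 5) boundary
    return age

@pytest.mark.parametrize("raw", ["18", "35", "99"])              # 2) valid set
def test_valid_ages(raw):
    assert parse_age(raw) == int(raw)

@pytest.mark.parametrize("raw", ["", "abc", "-5", "17", "100"])  # 1, 3, 4, 5
def test_rejected_ages(raw):
    with pytest.raises(ValueError):
        parse_age(raw)
```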

Conclusion:

Preparing proper test data is a core part of “project test environment setup”. The tester cannot pass on responsibility for a missed bug by saying that complete data was not available for testing. Testers should create their own test data, in addition to the existing standard production data. Your test data set should be ideal in terms of cost and time. Use the tips provided in this article to categorize test data and ensure complete coverage of the functional test cases.

Be creative: use your own skill and judgment to create different data sets, instead of relying on standard production data while testing.

What should be done after a bug is found?
The bug needs to be communicated and assigned to developers who can fix it. After the problem is resolved, fixes should be retested, and determinations made regarding requirements for regression testing, to check that the fixes didn’t create problems elsewhere. If a problem-tracking system is in place, it should encapsulate these processes. A variety of commercial problem-tracking/management software tools are available (see the ‘Tools’ section for web resources with listings of such tools). The following are items to consider in the tracking process:
• Complete information such that developers can understand the bug, get an idea of its severity, and reproduce it if necessary.
• Bug identifier (number, ID, etc.)
• Current bug status (e.g., ‘Released for Retest’, ‘New’, etc.)
• The application name or identifier and version
• The function, module, feature, object, screen, etc. where the bug occurred
• Environment specifics, system, platform, relevant hardware specifics
• Test case name/number/identifier
• One-line bug description
• Full bug description
• Description of steps needed to reproduce the bug if not covered by a test case or if the developer doesn’t have easy access to the test case/test script/test tool
• Names and/or descriptions of file/data/messages/etc. used in test
• File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be helpful in finding the cause of the problem
• Severity estimate (a 5-level range such as 1-5 or ‘critical’-to-‘low’ is common)
• Was the bug reproducible?
• Tester name
• Test date
• Bug reporting date
• Name of developer/group/organization the problem is assigned to
• Description of problem cause
• Description of fix
• Code section/file/module/class/method that was fixed
• Date of fix
• Application version that contains the fix
• Tester responsible for retest
• Retest date
• Retest results
• Regression testing requirements
• Tester responsible for regression tests
• Regression testing results
A reporting or tracking process should enable notification of appropriate personnel at various stages. For instance, testers need to know when retesting is needed, developers need to know when bugs are found and how to get the needed information, and reporting/summary capabilities are needed for managers.
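A minimal sketch of how the core of such a bug record could be modeled as a data structure; the field names and statuses here are illustrative, not those of any particular tracking tool:

```python
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    NEW = "New"
    RELEASED_FOR_RETEST = "Released for Retest"
    CLOSED = "Closed"

@dataclass
class BugReport:
    bug_id: str
    status: Status
    application: str
    version: str
    summary: str                     # one-line description
    description: str = ""
    steps_to_reproduce: list[str] = field(default_factory=list)
    severity: int = 3                # e.g. a 1 (critical) to 5 (low) scale
    reproducible: bool = True
    reported_by: str = ""
    assigned_to: str = ""

bug = BugReport(
    bug_id="BUG-1042",
    status=Status.NEW,
    application="OrderEntry",
    version="2.3.1",
    summary="Saving an order with an empty cart crashes the client",
    steps_to_reproduce=["Open Order Entry", "Click Save with no items"],
    severity=1,
)
```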

5.4. Life Cycle of a Bug

The life cycle of a bug, also known as the workflow, is customizable to match the needs of your organization. Figure 5-1 contains a graphical representation of the default workflow using the default bug statuses.

http://www.bugzilla.org/docs/3.4/en/images/bzLifecycle.png

Figure 5-1. Lifecycle of a Bugzilla Bug

5.6.1. Reporting a New Bug

The procedure for filing a bug is as follows:

Click the “New” link available in the footer of pages, or the “Enter a new bug report” link displayed on the home page of the Bugzilla installation. You first have to select the product in which you found a bug. 

You now see a form where you can specify the component (part of the product which is affected by the bug you discovered; if you have no idea, just select “General” if such a component exists), the version of the program you were using, the Operating System and platform your program is running on and the severity of the bug (if the bug you found crashes the program, it’s probably a major or a critical bug; if it’s a typo somewhere, that’s something pretty minor; if it’s something you would like to see implemented, then that’s an enhancement). 

You now have to give a short but descriptive summary of the bug you found. “My program is crashing all the time” is a very poor summary and doesn’t help developers at all. Try something more meaningful or your bug will probably be ignored due to a lack of precision. The next step is to give a very detailed list of steps to reproduce the problem you encountered. Try to limit these steps to a minimum set required to reproduce the problem. This will make the life of developers easier, and the probability that they consider your bug in a reasonable timeframe will be much higher. As you file the bug, you can also attach a document (testcase, patch, or screenshot of the problem). 

Depending on the Bugzilla installation you are using and the product in which you are filing the bug, you can also request developers to consider your bug in different ways (such as requesting review for the patch you just attached, requesting your bug to block the next release of the product, and many other product specific requests). 

Now is a good time to read your bug report again. Remove all misspellings, otherwise your bug may not be found by developers running queries for some specific words, and so your bug would not get any attention. Also make sure you didn’t forget any important information developers should know in order to reproduce the problem, and make sure your description of the problem is explicit and clear enough. When you think your bug report is ready to go, the last step is to click the “Commit” button to add your report into the database. 

You do not need to put “any” or similar strings in the URL field. If there is no specific URL associated with the bug, leave this field blank. 

If you feel a bug you filed was incorrectly marked as a DUPLICATE of another, please question it in your bug, not the bug it was duped to. Feel free to CC the person who duped it if they are not already CCed. 

5.6.2. Clone an Existing Bug

Starting with version 2.20, Bugzilla has a feature that allows you to clone an existing bug. The newly created bug will inherit most settings from the old bug. This allows you to track more easily similar concerns in a new bug. To use this, go to the bug that you want to clone, then click the “Clone This Bug” link on the bug page. This will take you to the “Enter Bug” page that is filled with the values that the old bug has. You can change those values and/or texts if needed. 

The defect will have one of the following statuses:

  • New
  • Open
  • Assigned
  • Fixed
  • Reopen
  • Closed
  • Rejected

The defect can be given one of the following resolutions:

  • Duplicate
  • Deferred
  • Works for me
  • Postpone
  • Need more info
  • User error
  • By design
  • Configuration
  • Not Reproducible

Severity of the issue:

Severity: the degree of impact that a defect has on the functionality of the application or the operation of a component or system. Testers usually determine the severity of a defect based on the impact of the issue on functionality.

Priority: the order of precedence or importance assigned to an issue. Priority is usually decided by the development team based on the impact of the issue, development flexibility, client demand, revenue impact, and the areas affected.

Defect priority represents the development team’s priority in regard to addressing the defect. It is a risk-management decision based on technical and business considerations related to addressing the defect. To make the term less abstract, I usually propose it be called ‘Development priority’ or something similar.

Priority can be determined only after technical and business considerations related to fixing the defect are identified; therefore the best time to assess priority is after a short examination of the defect, typically during a ‘bug scrub’ attended by both the product owner and technical representatives.

Severity Levels can be defined as follow:

S1 – Urgent/Showstopper: a system crash, or an error message forcing the window to close. The tester’s ability to operate the system is either totally (system down) or almost totally affected.

S2 – High: a major area of the user’s system is affected by the incident, and it is significant to business processes.

S3 – Medium/Workaround: a problem exists (for example, something required by the specs is not working), but the tester can go on with testing. The incident affects an area of functionality, but there is a workaround which negates the impact on the business process. This is a problem that:

  • affects a more isolated piece of functionality;
  • occurs only at certain boundary conditions;
  • has a workaround (where “don’t do that” might be an acceptable answer to the user);
  • occurs only at one or two customers, or is intermittent.

S4 – Low: minor problems, such as failures at extreme boundary conditions that are unlikely to occur in normal use, or minor errors in layout/formatting. These problems do not impact use of the product in any substantive way; they are cosmetic in nature and have no or very low impact on business processes.

S5 – Cosmetic.

  • Blocker: This bug prevents developers from testing or developing the software.
  • Critical: The software crashes, hangs, or causes you to lose data.
  • Major: A major feature is broken.
  • Normal: It’s a bug that should be fixed.
  • Minor: Minor loss of function, and there’s an easy work around.
  • Trivial: A cosmetic problem, such as a misspelled word or misaligned text.
  • Enhancement: Request for new feature or enhancement.

Low Severity and High Priority defects

Suppose a banking application has an ATM Facility module. Whenever money is deposited or withdrawn, no confirmation message is shown, even though at the back end the transaction completes properly without any mistake; only the message is missing. In this case nothing is functionally wrong with the application, but because the end user gets no confirmation message, he or she will be confused.

So we can consider this issue a HIGH priority but LOW severity defect.

Minimal user impact: a typo. Factors influencing priority: (1) the typo appears prominently on our login screen; it’s not a terribly big deal for existing customers, but it’s the first thing our sales engineers demo to prospective customers, and (2) the effort to fix the typo is minimal.

Most UI-related issues that are highly visible to the client or the sales team are given high priority even when they do not affect functionality much, because goodwill and the brand name may be affected. Developers who need to fix all the issues of one module may also pick up such issues in that module, even when they are not very severe, because they are easy to fix.

Decision: fix it for the next release and release it as an unofficial hot fix for our field personnel.

Another example: the home page of a web application has the client’s name misspelled, and this is found at the time of delivering the product.

High priority and low severity: if the logo of a site such as Yahoo is spelled “Yho”, the priority is high but the severity is low. The misspelling affects the name of the site, so it is important to fix it quickly (high priority), but it is not going to crash anything (low severity).

When the application has a trivial problem (one with little impact) that has to be solved within a day, we can call it low severity and high priority.

High severity, low priority –

Critical impact on the user, but the user is not going to use the affected feature within the next three months: developers will first fix all the issues in features the user will use frequently in the first three months, and fix this issue later.

Critical impact on the user, but a code freeze has been done and it is not safe to touch those areas, as the impact of changes there would be high; hence the fix is deferred to a lower priority.

Critical impact on the user: nuclear missiles are launched by accident. Factor influencing priority: analysis reveals that this defect can only be encountered on the second Tuesday of the first month of the twentieth year of each millennium, and only then if it’s raining and five other fail-safes have failed.

Business decision: the likelihood of the user encountering this defect is so low that we don’t feel it’s necessary to fix it. We can mitigate the situation directly with the user.

Critical impact on the user: when this error is encountered, the application must be killed and restarted, which can take the application off-line for several minutes. Factors influencing priority: (1) analysis reveals that it would take our dev team six months of full-time refactoring work to fix this defect; we’d have to put all other work on hold for that time. (2) Since this is a mission-critical enterprise application, we tell customers to deploy it in a redundant environment that can handle a server going down, planned or unplanned.

Business decision: it’s a better business investment to make customers aware of the issue, how often they’re likely to encounter it, and how to work through an incidence of it, than to devote the development effort required to fix it.

If an application crashes only after repeated use of some functionality (for example, after using the Save button 200 times), the severity is high because the application crashed, but the priority is low because there is no need to debug it right now; it can be debugged after some days.

When the application has a critical problem that only has to be solved after a month, we can call it high severity and low priority.

Regression Testing Technique

One of the problems that has plagued information technology professionals for years is the snowballing or cascading effect of making changes to an application system. One segment of the system is developed and thoroughly tested; then a change is made to another part of the system, which has a disastrous effect on the thoroughly tested portion. Either the incorrectly implemented change causes a problem, or the change introduces new data or parameters that cause problems in a previously tested segment. Regression testing retests previously tested segments to ensure that they still function properly after a change has been made to another part of the application.

Objectives

Regression testing provides assurance that all aspects of an application system remain functional after changes are introduced, since the introduction of change is the cause of problems in previously tested segments.

The objectives of regression testing include:

  • Determine whether systems documentation remains current.
  • Determine that system test data and test conditions remain current.
  • Determine that previously tested system functions perform properly after changes are introduced into the application system.

How to Use Regression Testing

Regression testing is retesting unchanged segments of the application system. It normally involves rerunning tests that have been previously executed to ensure that the same results can be achieved currently as were achieved when the segment was last tested. While the process is simple in that the test transactions have been prepared and the results known, unless the process is automated it can be a very time-consuming and tedious operation. It is also one in which the cost/benefit needs to be carefully evaluated or large amounts of effort can be expended with minimal payback.
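Because rerunning prepared tests by hand is tedious, regression suites are usually automated so that the known expected results can be checked mechanically. A minimal sketch of the idea; the function under test and its recorded “golden” results are invented for illustration:

```python
from pytest import approx

# "Golden" results recorded when this segment was last tested and known good.
GOLDEN = {
    (100, 0.0): 100.0,
    (100, 0.1): 90.0,
    (250, 0.2): 200.0,
}

def apply_discount(price: float, rate: float) -> float:
    # Function under test (illustrative).
    return price * (1 - rate)

def test_regression_against_golden_results():
    # Rerun the previously executed inputs and require the recorded results.
    for (price, rate), expected in GOLDEN.items():
        assert apply_discount(price, rate) == approx(expected)
```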

Regression Test Example

Examples of regression testing include:

  • Rerunning previously conducted tests to ensure that the unchanged system segments function properly.

  • Reviewing previously prepared manual procedures to ensure that they remain correct after changes have been made to the application system.

  • Obtaining a printout from the data dictionary to ensure that the documentation for data elements that have been changed is correct.

When to Use Regression Testing

Regression testing should be used when there is a high risk that new changes may affect unchanged areas of the application system. In the development process, regression testing should occur after a predetermined number of changes are incorporated into the application system. In maintenance, regression testing should be conducted if the potential loss that could occur due to affecting an unchanged portion is very high. The determination of whether to conduct regression testing should be based upon the significance of the loss that could occur due to improperly tested applications.

All Activities in the V-model:

1. Identify all the requirements from the BRS, user guides and use cases.
2. Decompose the requirements based on components, and assign requirement IDs to the requirements.
3. Identify the high-level requirements and name the HLRs.
4. Group the low-level requirements into high-level requirements.
5. Review the requirements.
6. Prioritize the requirements.
7. Perform impact analysis on the requirements.
8. Prepare an understanding document from the requirements and try to identify requirement gaps, if there are any.
9. Identify some rough assumptions and rough estimations in order to prepare the test plan.
10. Write the initial test plan draft.
11. Review the test plan draft.
12. Perform the changes in the test plan as suggested by the review.
13. Send the test plan for standard approval.
14. Identify the test scenarios to cover the high-level requirements from the testing perspective, with the help of a test matrix.
15. Decompose the test scenarios into test conditions and sub-conditions.
16. Assign the identified test scenarios to the test engineers to write the test cases.
17. Review all the available documents, such as:
      a. SRS(Software Requirement specifications)
      b. User Guides
      c. HLD Documents (High level design)
      d. LLD Documents (Low level design)
      e. Data base Schema
      f. Prototypes
      g. Use Cases
18. Identify the risk involved in the system
19. Start writing the test cases
20. Review the test cases
21. Formal review of the test cases
22. Change the test cases as per the review comments
23. Identify the test data
24. Send the test cases for client approval.
25. Prepare the traceability matrix.
26. If the traceability matrix shows requirements with missing test cases, write those missed test cases.
27. Identify the test cycles based on the build releases and the features coming in each build.
28. Identify the test sets in the cycles based on the configurations and different environments.
29. Pull the test cases into test sets based on the orthogonal array diagram.
30. Review the test sets and the test cases in the test sets.
31. Download or check out the build.
32. Perform the checksum and virus scan.
33. Perform the sanity testing or BVT (Build Verification Testing).
34. Prepare the test environment or testbed based on the test sets assigned to the test engineers.
35. Start the execution of the test cases on a priority basis.
36. Verify the expected behaviour of each test case by performing the actions in its description and update the results: if the actual and expected results are the same, pass the test case; otherwise fail it.

Note: Test case status are:
         a. Pass
         b. Fail
         c. Blocked
         d. Not completed
         e. No Run
         f. NA (Not Applicable)

37. If any mismatch or non-conformance is identified between the expected behaviour and the actual behaviour, it needs to be reported in the defect tracking tool. Provide all necessary information with the defect.
38. Fail or block the test cases based on the impact of the issue. Continue with the assignments and try to finish all assignments within the time. Complete all the test sets in the test cycles.
39. Once that cycle has been completed, we may get another build with some new features and fixes for the bugs raised in the previous cycle.
40. Start the test cycle with the identification of the test sets which need to be executed in this cycle, and start the verification or retesting of the issues which have been fixed in this build.
41. If there are any missed, failed or blocked scenarios from the previous cycle, finish those test cases.
42. Verify the new features which came in this build.
43. Identify regression scenarios from the previous cycle based on impact analysis, and execute them to confirm that the features tested in the previous cycle have not been disturbed in the present cycle.
44. Complete all the planned test cycles in the above process.
45. Identify the minimal acceptance cycle based on the acceptance criteria.
46. Finish the execution of minimal acceptance cycle.
47. Start the evaluation of the exit criteria.
48. Collect the measures and prepare the test metrics.
49. Prepare the test reports and graphs as per the client’s requirements.
50. Prepare the project test summary reports.
51. Formally sign off the project from QA.
52. Start the user acceptance testing.
53. Once the project has been signed off, perform the postmortem for the completed project.

Software Testing Types:

A software test type is a group of test activities aimed at testing a component or system with a focus on a specific test objective, for example a non-functional requirement such as usability, testability or reliability. The various types of software testing are used with the common objective of finding defects in the particular component under test.

Software testing is classified according to two basic types of software testing: Manual Scripted Testing and Automated Testing.

Manual Scripted Testing

The levels of the software testing life cycle include:

  • Unit Testing 
  • Integration Testing 
  • System Testing 
  • Acceptance Testing 
    1. Alpha Testing 
    2. Beta Testing 

Other types of software testing are: 

White-Box Testing

White-box testing assumes that the path of logic in a unit or program is known. White-box testing consists of testing paths, branch by branch, to produce predictable results. The following are white-box testing techniques:

  • Statement Coverage: execute all statements at least once.

  • Decision Coverage: execute each decision direction at least once.

  • Condition Coverage: execute each condition in a decision with all possible outcomes at least once.

  • Decision/Condition Coverage: execute each decision direction and each condition outcome at least once.

  • Multiple Condition Coverage: execute all possible combinations of condition outcomes in each decision; treat loop iterations as two-way conditions, exercising each loop zero times and one time.

Choose the combination of techniques appropriate for the application. If you combine too many of these techniques, you can end up with an unmanageable number of test cases.
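To make the coverage ideas concrete, here is a minimal sketch: one decision containing two conditions, with a pair of tests that achieve decision coverage (both branch directions) but not multiple condition coverage, which would need all four true/false combinations. The function is invented for illustration:

```python
def can_rent_car(age: int, has_license: bool) -> bool:
    # One decision made up of two conditions.
    if age >= 21 and has_license:
        return True
    return False

def test_decision_true_branch():
    assert can_rent_car(30, True) is True    # decision outcome: true

def test_decision_false_branch():
    assert can_rent_car(18, True) is False   # decision outcome: false

# Multiple condition coverage would additionally require (30, False) and
# (18, False), covering all combinations of the two condition outcomes.
```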

What is White Box Testing?
White box testing (WBT) is also called Structural or Glass box testing.

White box testing involves looking at the structure of the code. When you know the internal structure of a product, tests can be conducted to ensure that the internal operations are performed according to the specification and that all internal components have been adequately exercised.

White Box Testing is coverage of the specification in the code.

Code coverage: 

Segment coverage:
Ensure that each code statement is executed at least once.

Branch Coverage or Node Testing:
Coverage of each code branch, from all possible directions.

Compound Condition Coverage:
For multiple conditions, test each condition with multiple paths and combinations of the different paths that reach that condition.

Basis Path Testing:
Each independent path in the code is taken for testing.

Data Flow Testing (DFT):
In this approach you track specific variables through each possible calculation, thus defining the set of intermediate paths through the code. DFT tends to reflect dependencies, mainly through sequences of data manipulation. In short, each data variable is tracked and its use is verified.
This approach tends to uncover bugs such as variables used but not initialized, or declared but not used, and so on.

Path Testing:
Path testing is where all possible paths through the code are defined and covered. It is a time-consuming task.

Loop Testing:
These strategies relate to testing single loops, concatenated loops, and nested loops. Independent and dependent code loops and values are tested by this approach.
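As a minimal sketch of loop testing for a single simple loop, exercising it zero times, once, and many times; the summing function is invented for illustration:

```python
def total(values: list) -> float:
    result = 0.0
    for v in values:      # the loop under test
        result += v
    return result

def test_loop_zero_iterations():
    assert total([]) == 0.0               # loop body never runs

def test_loop_one_iteration():
    assert total([5.0]) == 5.0            # loop body runs exactly once

def test_loop_many_iterations():
    assert total([1.0, 2.0, 3.0]) == 6.0  # several iterations
```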

Why do we do White Box Testing?
To ensure:

  • That all independent paths within a module have been exercised at least once.
  • That all logical decisions have been verified for their true and false values.
  • That all loops have been executed at their boundaries and within their operational bounds, and that internal data structures are valid.

Need for White Box Testing
To discover the following types of bugs:

  • Logical errors, which tend to creep into our work when we design and implement functions, conditions or controls that are outside the mainstream of the program
  • Design errors due to differences between the logical flow of the program and the actual implementation
  • Typographical errors and syntax errors

Skills Required:
We need to write test cases that ensure complete coverage of the program logic.
For this we need to know the program well, i.e. we should know the specification and the code to be tested, and have knowledge of programming languages and logic.

Limitations of WBT:
It is not possible to test each and every path of the loops in a program, which means exhaustive testing is impossible for large systems.
This does not mean that WBT is ineffective: selecting important logical paths and data structures for testing is both practical and effective.

Black-Box Testing

Black-box testing focuses on testing the function of the program or application against its specification. Specifically, this technique determines whether combinations of inputs and operations produce expected results.

When creating black-box test cases, the input data used is critical. Three successful techniques for managing the amount of input data required are:

  • Equivalence Partitioning
An equivalence class is a subset of data that is representative of a larger class. Equivalence partitioning is a technique for testing equivalence classes rather than undertaking exhaustive testing of each value of the larger class. For example, a program which edits credit limits within a given range ($10,000 – $15,000) would have three equivalence classes:
    • < $10,000 (invalid)
    • Between $10,000 and $15,000 (valid)
    • > $15,000 (invalid)

  • Boundary Analysis
A technique that consists of developing test cases and data that focus on the input and output boundaries of a given function. In the same credit limit example, boundary analysis would test:
    • Low boundary plus or minus one ($9,999 and $10,001)
    • On the boundary ($10,000 and $15,000)
    • Upper boundary plus or minus one ($14,999 and $15,001)

  • Error Guessing
Test cases can be developed based upon the intuition and experience of the tester. For example, where one of the inputs is a date, a tester may try February 29.
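To make these three techniques concrete, here is a minimal sketch for the credit-limit example above, assuming a hypothetical validate_credit_limit() function (the function and its behavior for bad input are assumptions, not from the original text):

```python
# Hypothetical validator for the $10,000 - $15,000 credit-limit range.
def validate_credit_limit(amount):
    return 10_000 <= amount <= 15_000

# Equivalence partitioning: one representative per class.
assert validate_credit_limit(12_000)        # valid class
assert not validate_credit_limit(9_000)     # class below $10,000
assert not validate_credit_limit(16_000)    # class above $15,000

# Boundary analysis: on each boundary and one either side of it.
for amount, expected in [(9_999, False), (10_000, True), (10_001, True),
                         (14_999, True), (15_000, True), (15_001, False)]:
    assert validate_credit_limit(amount) is expected

# Error guessing: inputs experience says are risky.
for risky in (0, -1, None):
    try:
        assert not validate_credit_limit(risky)
    except TypeError:
        pass  # rejecting None with an error is also acceptable here
```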

Black box testing treats the system as a “black box”, so it doesn’t explicitly use knowledge of the internal structure or code. In other words, the test engineer need not know the internal workings of the “black box” or application.

The main focus in black box testing is on the functionality of the system as a whole. The term ‘behavioral testing’ is also used for black box testing, and white box testing is also sometimes called ‘structural testing’. Behavioral test design is slightly different from black-box test design because the use of internal knowledge isn’t strictly forbidden, but it’s still discouraged.

Each testing method has its own advantages and disadvantages. There are some bugs that cannot be found using only black box or only white box testing. The majority of applications are tested by the black box method. We need to cover the majority of test cases so that most of the bugs are discovered by black box testing.

Black box testing occurs throughout the software development and testing life cycle, i.e., in the unit, integration, system, acceptance, and regression testing stages.

Tools used for Black Box testing:
Black box testing tools are mainly record-and-playback tools. These tools are used in regression testing to check whether a new build has introduced any bugs into previously working application functionality. Record-and-playback tools record test cases in the form of scripts such as TSL, VBScript, JavaScript, or Perl.

Advantages of Black Box Testing
– The tester can be non-technical.
– Used to verify contradictions between the actual system and the specifications.
– Test cases can be designed as soon as the functional specifications are complete.

Disadvantages of Black Box Testing
– The test inputs need to be drawn from a large sample space.
– It is difficult to identify all possible inputs in limited testing time, so writing test cases is slow and difficult.
– There is a chance of leaving some paths unidentified during this testing.

Methods of Black box Testing:

Graph Based Testing Methods:
Every application is built up from objects. All such objects are identified and a graph is prepared. From this object graph, each object relationship is identified and test cases are written accordingly to discover errors.

Error Guessing:
This is purely based on the previous experience and judgment of the tester. Error guessing is the art of guessing where errors may be hidden. There are no specific tools for this technique; it relies on writing test cases that cover the paths where errors are most likely to hide.

Boundary Value Analysis:
Many systems have a tendency to fail at boundaries, so testing the boundary values of an application is important. Boundary Value Analysis (BVA) is a functional testing technique in which extreme boundary values are chosen. Boundary values include maximum, minimum, just inside/outside boundaries, typical values, and error values. BVA:

  • Extends equivalence partitioning
  • Tests both sides of each boundary
  • Looks at output boundaries for test cases too
  • Tests min, min - 1, max, max + 1, and typical values

BVA techniques:
1. Number of variables: for n variables, BVA yields 4n + 1 test cases (see the sketch below).
2. Kinds of ranges: generalizing ranges depends on the nature or type of the variables.

Advantages of Boundary Value Analysis
1. Robustness testing – boundary value analysis plus values that go beyond the limits: min - 1, min, min + 1, nominal, max - 1, max, max + 1.
2. It forces attention to exception handling.

Limitations of Boundary Value Analysis
Boundary value testing is efficient only for variables with fixed, well-defined boundaries.
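To make the 4n + 1 rule concrete, here is a minimal sketch (the helper and the example ranges are assumptions for illustration): each variable in turn takes min, min + 1, max - 1, and max while the others stay at their nominal values, plus one all-nominal case.

```python
# Generate the 4n + 1 boundary-value cases for n ranged variables.
def bva_cases(ranges):
    """ranges: list of (min, max) pairs, one per input variable."""
    nominal = [(lo + hi) // 2 for lo, hi in ranges]
    cases = [tuple(nominal)]                  # the single all-nominal case
    for i, (lo, hi) in enumerate(ranges):
        for value in (lo, lo + 1, hi - 1, hi):
            case = list(nominal)
            case[i] = value                   # vary one variable at a time
            cases.append(tuple(case))
    return cases

print(len(bva_cases([(1, 1000), (18, 25)])))  # 4 * 2 + 1 = 9 test cases
```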

Equivalence Partitioning:
Equivalence partitioning is a black box testing method that divides the input domain of a program into classes of data from which test cases can be derived.

How is this partitioning performed while testing:
1. If an input condition specifies a range, one valid and two invalid classes are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence class is defined.
4. If an input condition is Boolean, one valid and one invalid class is defined.

Comparison Testing:
In this method, different independent versions of the same software are compared to each other during testing.


Testing Checklist:
1 Create System and Acceptance Tests [ ]
2 Start Acceptance test Creation [ ]
3 Identify test team [ ]
4 Create Workplan [ ]
5 Create test Approach [ ]
6 Link Acceptance Criteria and Requirements to form the basis of
acceptance test [ ]
7 Use subset of system test cases to form requirements portion of
acceptance test [ ]
8 Create scripts for use by the customer to demonstrate that the system meets
requirements [ ]
9 Create test schedule. Include people and all other resources. [ ]
10 Conduct Acceptance Test [ ]
11 Start System Test Creation [ ]
12 Identify test team members [ ]
13 Create Workplan [ ]
14 Determine resource requirements [ ]
15 Identify productivity tools for testing [ ]
16 Determine data requirements [ ]
17 Reach agreement with data center [ ]
18 Create test Approach [ ]
19 Identify any facilities that are needed [ ]
20 Obtain and review existing test material [ ]
21 Create inventory of test items [ ]
22 Identify Design states, conditions, processes, and procedures [ ]
23 Determine the need for Code based (white box) testing. Identify conditions. [ ]
24 Identify all functional requirements [ ]
25 End inventory creation [ ]
26 Start test case creation [ ]
27 Create test cases based on inventory of test items [ ]
28 Identify logical groups of business function for new system [ ]
29 Divide test cases into functional groups traced to test item inventory [ ]
30 Design data sets to correspond to test cases [ ]
31 End test case creation [ ]
32 Review business functions, test cases, and data sets with users [ ]
33 Get signoff on test design from Project leader and QA [ ]
34 End Test Design [ ]
35 Begin test Preparation [ ]
36 Obtain test support resources [ ]
37 Outline expected results for each test case [ ]
38 Obtain test data. Validate and trace to test cases [ ]
39 Prepare detailed test scripts for each test case [ ]
40 Prepare & document environmental set up procedures. Include back up and
recovery plans [ ]
41 End Test Preparation phase [ ]
42 Conduct System Test [ ]
43 Execute test scripts [ ]
44 Compare actual result to expected [ ]
45 Document discrepancies and create problem report [ ]
46 Prepare maintenance phase input [ ]
47 Re-execute test group after problem repairs [ ]
48 Create final test report, include known bugs list [ ]
49 Obtain formal signoff [ ]

Boundary value analysis and Equivalence partitioning, explained with simple example:

Boundary value analysis and equivalence partitioning both are test case design strategies in black box testing.

Equivalence Partitioning:

In this method the input domain data is divided into different equivalence data classes. This method is typically used to reduce the total number of test cases to a finite set of testable test cases, still covering maximum requirements.

In short it is the process of taking all possible test cases and placing them into classes. One test value is picked from each class while testing.

E.g., if you are testing an input box accepting numbers from 1 to 1000, there is no use in writing a thousand test cases for all 1000 valid input numbers, plus other test cases for invalid data.

Using the equivalence partitioning method, the above test cases can be divided into three sets of input data called classes. Each test case is a representative of its class.

So in above example we can divide our test cases into three equivalence classes of some valid and invalid inputs.

Test cases for input box accepting numbers between 1 and 1000 using Equivalence Partitioning:
1) One input data class with all valid inputs. Pick a single value from the range 1 to 1000 as a valid test case. If you select other values between 1 and 1000, the result is going to be the same, so one test case for valid input data is sufficient.

2) An input data class with all values below the lower limit, i.e., any value below 1, as an invalid input test case.

3) Input data with any value greater than 1000 to represent third invalid input class.

So using equivalence partitioning you have categorized all possible test cases into three classes. Test cases with other values from any class should give you the same result.

We have selected one representative from every input class to design our test cases. Test case values are selected in such a way that largest number of attributes of equivalence class can be exercised.

Equivalence partitioning uses the fewest test cases to cover the maximum requirements.
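A minimal pytest sketch of these three classes, assuming a hypothetical accepts() validator for the 1 to 1000 input box:

```python
import pytest

def accepts(value):               # hypothetical system under test
    return 1 <= value <= 1000

# One representative per equivalence class is enough.
@pytest.mark.parametrize("value, expected", [
    (500, True),     # class 1: valid values from 1 to 1000
    (0, False),      # class 2: values below the lower limit
    (1500, False),   # class 3: values above the upper limit
])
def test_equivalence_classes(value, expected):
    assert accepts(value) is expected
```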

Boundary value analysis:

It is widely recognized that input values at the extreme ends of the input domain cause more errors in a system; more application errors occur at the boundaries of the input domain. The ‘boundary value analysis’ testing technique is used to identify errors at the boundaries rather than those that exist in the center of the input domain.

Boundary value analysis is the next step after equivalence partitioning in designing test cases: test cases are selected at the edges of the equivalence classes.

Test cases for input box accepting numbers between 1 and 1000 using Boundary value analysis:
1) Test cases with test data exactly as the input boundaries of input domain i.e. values 1 and 1000 in our case.

2) Test data with values just below the extreme edges of input domains i.e. values 0 and 999.

3) Test data with values just above the extreme edges of input domain i.e. values 2 and 1001.
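The corresponding boundary-value cases from the three points above, written as plain assertions (same hypothetical accepts() validator as in the earlier sketch):

```python
def accepts(value):               # hypothetical system under test
    return 1 <= value <= 1000

# On the boundaries (1, 1000), just below (0, 999), just above (2, 1001).
for value, expected in [(1, True), (1000, True),
                        (0, False), (999, True),
                        (2, True), (1001, False)]:
    assert accepts(value) is expected
```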

Boundary value analysis is often considered part of stress and negative testing.

Note: There is no hard-and-fast rule to test only one value from each equivalence class you created for input domains. You can select multiple valid and invalid values from each equivalence class according to your needs and previous judgments.

E.g., if you divided the 1 to 1000 input values into a valid-data equivalence class, then you can select test case values like 1, 11, 100, 950, etc. The same applies to the test cases for the invalid data classes.

This should be a very basic and simple example to understand the Boundary value analysis and Equivalence partitioning concept.

What is Equivalence partitioning?
Equivalence partitioning is a method for deriving test cases. In this method, equivalence classes (for input values) are identified such that each member of the class causes the same kind of processing and output to occur. The values at the extremes (start/end values or lower/upper end values) of such class are known as Boundary values. Analyzing the behavior of a system using such values is called Boundary value analysis (BVA).

Here are a few sample questions for practice from ISTQB exam papers on Equivalence partitioning and BVA (ordered from simple to slightly complex).

Question 1
One of the fields on a form contains a text box which accepts numeric values in the range of 18 to 25. Identify the invalid Equivalence class.

a)    17
b)    19
c)    24
d)    21

Solution
The text box accepts numeric values in the range 18 to 25 (18 and 25 are also part of the class). So this class becomes our valid class. But the question is to identify invalid equivalence class. The classes will be as follows:
Class I: values < 18   => invalid class
Class II: 18 to 25       => valid class
Class III: values > 25 => invalid class

17 falls under an invalid class; 19, 24, and 21 fall under the valid class. So the answer is ‘A’.

Question 2
In an Examination a candidate has to score minimum of 24 marks in order to clear the exam. The maximum that he can score is 40 marks.  Identify the Valid Equivalence values if the student clears the exam.

a)    22,23,26
b)    21,39,40
c)    29,30,31
d)    0,15,22

Solution
The classes will be as follows:
Class I: values < 24   => invalid class
Class II: 24 to 40       => valid class
Class III: values > 40 => invalid class

We have to identify valid equivalence values. Valid equivalence values will be in the valid equivalence class, so all the values should be in Class II. The answer is ‘C’.

Question 3
One of the fields on a form contains a text box which accepts alpha numeric values. Identify the Valid Equivalence class
a)    BOOK
b)    Book
c)    Boo01k
d)    Book

Solution
Alphanumeric is a combination of alphabets and numbers, hence we have to choose an option which has both. A valid equivalence class will consist of both alphabets and numbers. Option ‘c’ contains both, so the answer is ‘C’.

Question 4
The switch is turned off once the temperature falls below 18, and it is turned on when the temperature is more than 21. Identify the equivalence values which belong to the same class.

a)    12,16,22
b)    24,27,17
c)    22,23,24
d)    14,15,19

Solution
We have to choose values from same class (it can be valid or invalid class). The classes will be as follows:

Class I: less than 18 (switch turned off)
Class II: 18 to 21
Class III: above 21 (switch turned on)

Only in Option ‘c’ all values are from one class. Hence the answer is ‘C’. (Please note that the question does not talk about valid or invalid classes. It is only about values in same class)

Question 5
A program validates a numeric field as follows: values less than 10 are rejected, values between 10 and 21 are accepted, values greater than or equal to 22 are rejected. Which of the following input values cover all of the equivalence partitions?

a. 10,11,21
b. 3,20,21
c. 3,10,22
d. 10,21,22

Solution
We have to select values which fall in all the equivalence class (valid and invalid both). The classes will be as follows:

Class I: values <= 9   => invalid class
Class II: 10 to 21       => valid class
Class III: values >= 22 => invalid class

The values in option ‘c’ each fall in a different equivalence class, so together they cover all of the partitions. The answer is ‘C’.

Question 6
A program validates a numeric field as follows: values less than 10 are rejected, values between 10 and 21 are accepted, values greater than or equal to 22 are rejected. Which of the following covers the MOST boundary values?

a. 9,10,11,22
b. 9,10,21,22
c. 10,11,21,22
d. 10,11,20,21

Solution
We have already come up with the classes as shown in question 5. The boundaries can be identified as 9, 10, 21, and 22. These four values are in option ‘b’. So answer is ‘B’

Question 7
In a system designed to work out the tax to be paid:
An employee has £4000 of salary tax free.
The next £1500 is taxed at 10%.
The next £28000 after that is taxed at 22%.
Any further amount is taxed at 40%.

To the nearest whole pound, which of these groups of numbers fall into three DIFFERENT equivalence classes?
a)    £4000; £5000; £5500
b)    £32001; £34000; £36500
c)    £28000; £28001; £32001
d)    £4000; £4200; £5600

Solution
The classes will be as follows:
Class I   : 0 to £4000          => no tax
Class II  : £4001 to £5500   => 10 % tax
Class III : £5501 to £33500 => 22 % tax
Class IV : £33501 and above => 40 % tax

Select the values which fall in three different equivalence classes. Option ‘d’ has values from three different equivalence classes. So answer is ‘D’.

Question 8
In a system designed to work out the tax to be paid:
An employee has £4000 of salary tax free.
The next £1500 is taxed at 10%.
The next £28000 after that is taxed at 22%.
Any further amount is taxed at 40%.

To the nearest whole pound, which of these is a valid Boundary Value Analysis test case?
a)    £28000
b)    £33501
c)    £32001
d)    £1500

Solution
The classes were already identified in question 7. We have to select a value which is a boundary value (a start/end value of a class). £33501 is the start value of Class IV, so it is a boundary value, and it appears in option ‘b’. The answer is ‘B’.
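The tax computation itself is not given in the question; here is a minimal sketch of the bracket arithmetic it implies, with assertions at the band boundaries (the function is an assumption for illustration):

```python
# Bands implied by the question: £4000 free, next £1500 at 10%,
# next £28000 at 22%, anything further at 40%.
def tax(salary):
    owed, remaining = 0.0, salary
    for width, rate in [(4000, 0.0), (1500, 0.10), (28000, 0.22)]:
        taxed = min(remaining, width)
        owed += taxed * rate
        remaining -= taxed
    return owed + remaining * 0.40            # beyond £33,500

# Boundary values sit at the band edges: 4000/4001, 5500/5501, 33500/33501.
assert tax(4000) == 0.0
assert tax(5500) == 150.0                     # 1500 * 10%
assert round(tax(33500), 2) == 6310.00        # 150 + 28000 * 22%
assert round(tax(33501), 2) == 6310.40        # first pound taxed at 40%
```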

Question 9
Given the following specification, which of the following values for age are in the SAME equivalence partition?

If you are less than 18, you are too young to be insured.
Between 18 and 30 inclusive, you will receive a 20% discount.
Anyone over 30 is not eligible for a discount.
a)    17, 18, 19
b)    29, 30, 31
c)    18, 29, 30
d)    17, 29, 31

Solution
The classes will be as follows:
Class I: age < 18       => not insured
Class II: age 18 to 30 => 20 % discount
Class III: age > 30     => no discount

Here we cannot determine whether the above classes are valid or invalid, as nothing is mentioned in the question. (We might guess that I and II are valid and III is invalid, but that is not required here.) We have to select values which are in the SAME equivalence partition. The values in option ‘c’ all fall in the same partition, so the answer is ‘C’.

Incremental or Integration Testing

Incremental testing is a disciplined method of testing the interfaces between unit-tested programs as well as between system components. It involves adding unit-tested programs to a given module or component one by one, and testing each resultant combination. There are two types of incremental testing:

Top-down
Begin testing from the top of the module hierarchy and work down to the bottom, using interim stubs to simulate lower interfacing modules or programs. Modules are added in descending hierarchical order.

Bottom-up
Begin testing from the bottom of the hierarchy and work up to the top. Modules are added in ascending hierarchical order. Bottom-up testing requires the development of driver modules, which provide the test input, call the module or program being tested, and display the test output.

There are pros and cons associated with each of these methods, although bottom-up testing is often thought to be easier to use. Drivers are often easier to create than stubs, and can serve multiple purposes. Output is also often easier to examine in bottom-up testing, as the output always comes from the module directly above the module under test.
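A minimal sketch of the stub and driver ideas with hypothetical modules (none of these functions come from the original text):

```python
# Top-down: the high-level module is tested first; a stub stands in
# for the lower-level module it calls.
def fetch_rate_stub(currency):
    return 1.0                            # canned answer, no real lookup

def convert(amount, currency, fetch_rate=fetch_rate_stub):
    return amount * fetch_rate(currency)  # module under test

assert convert(100, "EUR") == 100.0       # tested against the stub

# Bottom-up: the low-level module is real; a driver feeds it inputs
# and checks its outputs.
def fetch_rate(currency):                 # real lower-level module
    return {"EUR": 1.1, "USD": 1.0}[currency]

def driver():                             # test driver
    for currency, expected in [("EUR", 1.1), ("USD", 1.0)]:
        assert fetch_rate(currency) == expected

driver()
```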

Boundary Value Analysis

The problem of deriving test data sets is to partition the program domain in some meaningful way so that input data sets, which span the partition, can be determined. There is no direct, easily stated procedure for forming this partition. It depends on the requirements, the program domain, and the creativity and problem understanding of the programmer. This partitioning, however, should be performed throughout the development life cycle.

At the requirements stage a coarse partitioning is obtained according to the overall functional requirements. At the design stage, additional functions are introduced which define the separate modules, allowing for a refinement of the partition. Finally, at the coding stage, submodules implementing the design modules introduce further refinements. The use of a top-down testing methodology allows each of these refinements to be used to construct functional test cases at the appropriate level.

Once the program domain is partitioned into input classes, functional analysis can be used to derive test data sets. Test data should be chosen which lie both inside each input class and at the boundary of each class. Output classes should also be covered by input that causes output at each class boundary and within each class. These data are the extremal and non-extremal test sets. Determination of these test sets is often called boundary value analysis or stress testing.

Retesting and Regression Testing:

When a defect is detected and fixed then the software should be retested to confirm that the original defect has been successfully removed. This is called retesting/confirmation testing. Debugging (defect fixing) is a development activity, not a testing activity.

Regression testing is the repeated testing of an already tested program, after modification, to discover any defects introduced or uncovered as a result of the change(s). These defects may be either in the software being tested, or in another related or unrelated software component. It is performed when the software, or its environment, is changed. The extent of regression testing is based on the risk of not finding defects in software that was working previously.

Tests should be repeatable if they are to be used for confirmation testing and to assist regression testing.

Regression testing may be performed at all test levels, and applies to functional, non-functional and structural testing. Regression test suites are run many times and generally evolve slowly, so regression testing is a strong candidate for automation.
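As an illustration of why regression suites automate well, here is a minimal pytest sketch in which previously passing results are kept as data and re-run after every change (the discount() function and its expected values are assumptions for illustration):

```python
import pytest

def discount(order_total):                # hypothetical module under test
    return 0.10 if order_total >= 100 else 0.0

# Expected results captured when each case last passed; re-run on
# every build to catch regressions introduced by new changes.
REGRESSION_CASES = [
    (50, 0.0),
    (100, 0.10),
    (250, 0.10),
]

@pytest.mark.parametrize("total, expected", REGRESSION_CASES)
def test_discount_regression(total, expected):
    assert discount(total) == expected
```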

Factors that affect regression testing include: impact analysis, sanity verifications, high-priority features, sensitive areas, and minimal acceptance scenarios.

One of the attributes that has plagued information technology professionals for years is the snowballing or cascading effect of making changes to an application system. One segment of the system is developed and thoroughly tested. Then a change is made to another part of the system, which has a disastrous effect on the thoroughly tested portion. Either the incorrectly implemented change causes a problem, or the change introduces new data or parameters that cause problems in a previously tested segment. Regression testing retests previously tested segments to ensure that they still function properly after a change has been made to another part of the application.

Objectives

Regression testing involves assurance that all aspects of an application system remain functional after testing. The introduction of change is the cause of problems in previously tested segments. The objectives of regression testing include:

  • Determine whether systems documentation remains current.
  • Determine that system test data and test conditions remain current.
  • Determine that previously tested system functions perform properly after changes are introduced into the application system.

How to Use Regression Testing

Regression testing is retesting unchanged segments of the application system. It normally involves rerunning tests that have been previously executed to ensure that the same results can be achieved currently as were achieved when the segment was last tested. While the process is simple in that the test transactions have been prepared and the results are known, unless the process is automated it can be a very time-consuming and tedious operation. It is also one in which the cost/benefit needs to be carefully evaluated, or large amounts of effort can be expended with minimal payback.

Regression Test Example

Examples of regression testing include:

  • Rerunning previously conducted tests to ensure that the unchanged system segments function properly
  • Reviewing previously prepared manual procedures to ensure that they remain correct after changes have been made to the application system
  • Obtaining a printout from the data dictionary to ensure that the documentation for data elements that have been changed is correct

When to Use Regression Testing

Regression testing should be used when there is a high risk that new changes may affect unchanged areas of the application system. In the developmental process, regression testing should occur after a predetermined number of changes are incorporated into the application system. In maintenance, regression testing should be conducted if the potential loss that could occur due to affecting an unchanged portion is very high. The determination as to whether to conduct regression testing should be based upon the significance of the loss that could occur due to improperly tested applications.

What is Regression Software Testing?
Regression means retesting the unchanged parts of the application. Test cases are re-executed in order to check whether previous functionality of application is working fine and new changes have not introduced any new bugs.

This is a method of verification: verifying that the bugs are fixed and that the newly added features have not created problems in the previously working version of the software.

Why regression Testing?
Regression testing is initiated when a programmer fixes a bug or adds new code for new functionality to the system. It is a quality measure to check that the new code complies with the old code and that unmodified code is not affected.
Most of the time the testing team is asked to check last-minute changes to the system. In such situations, testing only the affected application area is necessary to complete the testing process in time while still covering all major system aspects.

How much regression testing?
This depends on the scope of the newly added feature. If the scope of the fix or feature is large, then the affected application area is quite large and testing should be thorough, including all the application test cases. This can be decided effectively when the tester gets input from the developer about the scope, nature, and amount of change.

What we do in regression testing?

  • Rerunning the previously conducted tests
  • Comparing current results with previously executed test results.

Regression Testing Tools:
Automated regression testing is the testing area where we can automate most of the testing effort. We re-run all previously executed test cases, which means we have a test case set available, and running these test cases manually is time consuming. We know the expected results, so automating these test cases is a time-saving and efficient regression testing method. The extent of automation depends on the number of test cases that remain applicable over time. If test cases vary from time to time as the application scope keeps increasing, then automating the regression procedure will be a waste of time.

Most regression testing tools are of the record-and-playback type, meaning you record test cases by navigating through the AUT (application under test) and verify whether the expected results appear. Most of these tools serve as both functional and regression testing tools.

Regression Testing Of GUI application:
It is difficult to perform GUI (Graphical User Interface) regression testing when the GUI structure is modified. Test cases written against the old GUI either become obsolete or need to be reused. Reusing regression test cases means modifying the GUI test cases according to the new GUI, but this task becomes cumbersome if you have a large set of GUI test cases.

Functional Test

Functional test can be defined as testing two or more modules together with the intent of finding defects, demonstrating that defects are not present, verifying that the module performs its intended functions as stated in the specification and establishing confidence that a program does what it is supposed to do.

End-to-end testing – Similar to system testing, involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

System testing is probably the most important phase of the complete testing cycle. This phase starts after the completion of earlier phases such as unit, component, and integration testing. During the System Testing phase, non-functional testing also comes into the picture: performance, load, stress, and scalability testing are all performed in this phase.

By definition, System Testing is conducted on the complete, integrated system in a replicated production environment. System Testing also evaluates the system’s compliance with both functional and non-functional requirements.

It is important to understand that not many new test cases are written for system testing. Test cases for system testing are derived from the architecture and design of the system, from input from end users, and from user stories. It does not make sense to perform exhaustive testing in the System Testing phase, as most functional defects should have been caught and corrected during earlier testing phases.

Utmost care is exercised for the defects uncovered during the System Testing phase, and proper impact analysis should be done before fixing a defect. Sometimes, if the business permits, defects are just documented and listed as known limitations instead of being fixed.

Progress in System Testing also instills and builds confidence in the product teams, as this is the first phase in which the product is tested in a production-like environment.

The System Testing phase also prepares the team for more user-centric testing, i.e., User Acceptance Testing.


Entry Criteria 

  • Unit, component, and integration tests are complete
  • Defects identified during these test phases are resolved and closed by the QE team
  • Teams have sufficient tools and resources to mimic the production environment

Exit Criteria 

  • Test case execution reports show that functional and non-functional requirements are met.
  • Defects found during the System Testing are either fixed after doing thorough impact analysis or are documented as known limitations.

User Acceptance Testing:

Acceptance Testing is the formal testing conducted to determine whether a software system satisfies its acceptance criteria and to enable the buyer to determine whether to accept the system or not. Acceptance testing is designed to determine whether the software is fit for use. Apart from the functionality of the application, other factors related to the business environment also play an important role.

User acceptance testing is different from System Testing. System testing is invariably performed by the development team which includes developer and tester. User acceptance testing on the other hand should be carried out by the end user. This could be in the form of 

  • Alpha Testing – Tests are conducted at the development site by end users. The environment can be controlled to some extent in this case.
  • Beta Testing – Tests are conducted at the customer site, and the development team does not have any control over the test environment.

It is very important to define acceptance criteria with the buyer during the various phases of the SDLC. A well-defined acceptance plan will help the development/QE teams by identifying the users’ needs during software development. The acceptance test plan must be created or reviewed by the customer. The development team and customer should work together and make sure that they:

  • Identify interim and final products for acceptance, acceptance criteria and schedule.
  • Plan how and by whom each acceptance activity will be performed.
  • Schedule adequate time for buyer’s staff to examine and review the product
  • Prepare the acceptance plan.
  • Perform formal acceptance testing at delivery
  • Make a decision based on the results of acceptance testing.

Entry Criteria 

  • System Testing is completed and defects identified are either fixed or documented.
  • Acceptance plan is prepared and resources have been identified.
  • Test environment for the acceptance testing is available

Exit Criteria 

  • Acceptance decision is made for the software
  • In case of any caveats, development team is notified

Installation Testing:

Installation testing is one of the most important parts of testing activity. Installation is the user’s first interaction with our product, and it is very important to make sure that the user does not have any trouble installing the software.

It becomes even more critical now, as there are different means of distributing software. Instead of the traditional method of distributing software on physical CDs, software can be installed from the internet or from a network location, or it can even be pushed to the end user’s machine.

The type of installation testing you do will be affected by many factors, such as:

  • Which platforms and operating systems do you support?
  • How will you distribute the software?

Usability Testing:

Software usability testing is an example of non-functional testing. Usability testing evaluates how easy a system is to learn and use. Usability testing has enormous benefits, but there is still not much awareness of the subject.

Benefits of Usability testing can be summarized as 

  • It is easier for the sales team to sell a highly usable product.
  • Usable products are easy to learn and use.
  • Support cost is less for Usable products.

Smoke testing and sanity testing – Quick and simple differences:

Here are the differences you can see:

SMOKE TESTING:

  • Smoke testing originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch fire and smoke. In the software industry, smoke testing is a shallow and wide approach whereby all areas of the application are tested, without going too deep into any one of them.
  • A smoke test is scripted, using either a written set of tests or an automated test.
  • A smoke test is designed to touch every part of the application in a cursory way. It is shallow and wide.
  • Smoke testing is conducted to ensure that the most crucial functions of a program are working, without bothering with finer details (for example, build verification).
  • Smoke testing is a normal health check-up of a build of an application before taking it into in-depth testing.

SANITY TESTING:

  • A sanity test is a narrow regression test that focuses on one or a few areas of functionality. Sanity testing is usually narrow and deep.
  • A sanity test is usually unscripted.
  • A sanity test is used to determine that a small section of the application is still working after a minor change.
  • Sanity testing is cursory testing; it is performed whenever cursory testing is sufficient to prove that the application is functioning according to specifications. This level of testing is a subset of regression testing.
  • Sanity testing verifies whether requirements are met or not, checking the selected features depth-first.

Hope these points will help you clearly understand smoke and sanity tests and will remove any confusion.
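A minimal sketch of one common way to keep the two suites separate, using pytest markers (the marker names and helper functions are assumptions for illustration; the markers would be registered in pytest.ini):

```python
import pytest

# Hypothetical helpers standing in for the application under test.
def app_starts():
    return True

def page_loads(path):
    return path in {"/", "/login", "/orders"}

def price_after_discount(price, rate):
    return round(price * (1 - rate), 2)

# Smoke: shallow and wide -- touch every major area briefly.
@pytest.mark.smoke
def test_application_starts():
    assert app_starts()

@pytest.mark.smoke
def test_main_pages_load():
    assert all(page_loads(p) for p in ("/", "/login", "/orders"))

# Sanity: narrow and deep -- recheck just the area the latest change touched.
@pytest.mark.sanity
def test_discount_rounding_after_fix():
    assert price_after_discount(99.99, 0.10) == 89.99
```

Running `pytest -m smoke` then gives the shallow-and-wide pass over a build, while `pytest -m sanity` gives the narrow-and-deep recheck after a fix.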

Integration Testing:

The objective of Integration testing is to make sure that the interaction of two or more components produces results that satisfy the functional requirements. In integration testing, test cases are developed with the express purpose of exercising the interfaces between the components. Integration testing can also be treated as testing the assumptions of fellow programmers: during the coding phase, lots of assumptions are made about how you will receive data from different components and how you have to pass data to different components.

During unit testing, these assumptions are not tested; a key purpose of integration testing is to make sure that these assumptions are valid. There could be many reasons for integration to go wrong; it could be because of:

  • Interface Misuse – A calling component calls another component and makes an error in its use of the interface, probably by calling/passing parameters in the wrong sequence.
  • Interface Misunderstanding – A calling component makes assumptions about another component's behavior that are incorrect.



Integration testing can be performed in different ways, based on where you start testing and in which direction you progress:

  • Big Bang Integration Testing
  • Top Down Integration Testing
  • Bottom Up Integration Testing
  • Hybrid Integration testing

Top down testing can proceed in a depth-first or a breadth-first manner. For depth-first integration each module is tested in increasing detail, replacing more and more levels of detail with actual code rather than stubs. Alternatively breadth-first would proceed by refining all the modules at the same level of control throughout the application. In practice a combination of the two techniques would be used.

Entry Criteria

The main entry criterion for integration testing is the completion of unit testing. If individual units are not tested properly for their functionality, then integration testing should not be started.

Exit Criteria

Integration testing is complete when you make sure that all the interfaces where components interact with each other are covered. It is important to cover negative cases as well, because components might make incorrect assumptions about the data.

Bottom Up Integration Testing:

In bottom-up integration testing, modules at the lowest level are developed first, and other modules that lead toward the ‘main’ program are integrated and tested one at a time. Bottom-up integration uses test drivers to drive and pass appropriate data to the lower-level modules. As and when the code for other modules gets ready, these drivers are replaced with the actual modules. In this approach, lower-level modules are tested extensively, thus making sure that the most heavily used modules are tested properly.

Advantages 

  • The behavior of the interaction points is crystal clear, as components are added in a controlled manner and tested repetitively.
  • Appropriate for applications where a bottom-up design methodology is used.

Disadvantages 

  • Writing and maintaining test drivers or harnesses is more difficult than writing stubs.
  • This approach is not suitable for software developed using a top-down approach.

Top Down Integration Testing:

Top-down integration testing is an incremental integration testing technique which begins by testing the top-level module and progressively adds lower-level modules one by one. Lower-level modules are normally simulated by stubs which mimic their functionality. As you add lower-level code, you replace the stubs with the actual components. Top-down integration can be performed and tested in a breadth-first or depth-first manner.

Advantages 

  • Drivers do not have to be written when top-down testing is used.
  • It provides early working module of the program and so design defects can be found and corrected early.

Disadvantages 

  • Stubs have to be written with utmost care, as they simulate the setting of output parameters.
  • It is difficult to have other people or third parties perform this testing; mostly, developers will have to spend time on it.

Big Bang Integration Testing

In big bang integration testing, the individual modules of the program are not integrated until everything is ready. This approach is seen mostly in inexperienced programmers who rely on a ‘run it and see’ approach. The program is integrated without any formal integration testing and then run to ensure that all the components work properly.

Unfortunately, whilst it may be possible to get away with this in some simple sequential programs, particularly if sensible design methods and good function and module tests have been applied, the use of such an approach on large commercial applications is likely to be much less successful.

The application of this method often simply leads the programmer to have to re-separate parts of the program to find the cause of the errors, thereby effectively performing a full integration test although in a manner which lacks the controlled approach of the other methods.

Disadvantages

There are many disadvantages of this big bang approach 

  • Defects present at the interfaces of components are identified at a very late stage.
  • It is very difficult to isolate the defects found, as it is hard to tell whether a defect is in a component or in an interface.
  • There is a high probability of missing critical defects, which might then surface in production.
  • It is very difficult to make sure that all the cases for integration testing are covered.

Ad Hoc Testing

Testing without a formal test plan or outside of a test plan. With some projects this type of testing is carried out as an adjunct to formal testing. If carried out by a skilled tester, it can often find problems that are not caught in regular testing. Sometimes, if testing occurs very late in the development cycle, this will be the only kind of testing that can be performed. Sometimes ad hoc testing is referred to as exploratory testing.

Exploratory Testing:

Exploratory testing is defined as simultaneous learning, test design, and execution by one of its most prominent proponents, James Bach. Before 1990, and even now in some parts of the industry, exploratory testing was known as ad-hoc testing.

The term was coined in the early 1990s by the context-driven testing school community. Dr. Cem Kaner emphasized the thought process involved in unscripted testing in “Testing Computer Software”, one of the best books on software testing.

According to many experts in the software testing field, exploratory testing is much more powerful than traditional scripted testing in terms of finding important defects. Compared to traditional manual scripted testing, it involves less overhead. With the continuously increasing adoption of agile methodologies, the adoption of exploratory testing will grow.

If we look at the definition closely, it says simultaneous learning, test design, and execution. This means that testers are not equipped with test scripts; they work on the product with a specific goal. While working on the product, they reveal information and learn new things, which may change their test strategy and course of action. Compared to manual scripted testing, this practice is much more engaging and requires the continuous attention and focus of the tester. It also fosters a culture of learning, since the tester needs to continuously learn new things about the product, technologies, users, and so on.

One of the main problems with implementing exploratory testing practices is measurement. In many situations, management is interested in checking progress and needs some evidence of coverage and execution, for various reasons. To address this need, session-based test management can be used, which makes it easier to audit and measure exploratory testing efforts.

Compatibility testing, part of software non-functional tests, is testing conducted on the application to evaluate the application’s compatibility with the computing environment. Computing environment may contain some or all of the below mentioned elements:

  • Computing capacity of the hardware platform (IBM 360, HP 9000, etc.)
  • Bandwidth handling capacity of networking hardware
  • Compatibility of peripherals (Printer, DVD drive, etc.)
  • Operating systems (MVS, UNIX, Windows, etc.)
  • Database (Oracle, Sybase, DB2, etc.)
  • Other System Software (Web server, networking/ messaging tool, etc.)
  • Browser compatibility (Firefox, Netscape, Internet Explorer, Safari, etc.)

Browser compatibility testing can be more appropriately referred to as user experience testing. This requires that web applications are tested on different web browsers, to ensure the following:

  • Users have the same visual experience irrespective of the browsers through which they view the web application.
  • In terms of functionality, the application must behave and respond the same way across different browsers.


  • Carrier compatibility (Verizon, Sprint, Orange, O2, AirTel, etc.)
  • Backwards compatibility.
  • Hardware (different phones)
  • Different Compilers (compile the code correctly)
  • Runs on multiple host/guest Emulators

Certification testing falls within the scope of compatibility testing. Product vendors run the complete suite of tests on the newer computing environment to get their application certified for a specific operating system or database.

Upgrade & Backward Compatibility Testing:

In any software program, new releases and new versions are inevitable. Organizations spend a lot of money and resources to improve existing software. Continuous improvement is necessary for any software product to remain competitive in the market. On average, software is upgraded at least once a year.

This gives rise to the need to test a different aspect of the software, known as backward compatibility and upgrade testing. Considerable effort is spent making sure that the software can be upgraded without affecting users in any adverse way. With every new version of the product, one of the main criteria should be that whatever effort users have invested in the older version is not wasted.

Though backward compatibility testing and upgrade testing are different, they are closely related, as the following sections explain.

Backward Testing

Testing that ensures that new version of the product continues to work with the assets created from older product is known as Backward compatibility testing. For example, consider a simple case of Excel worksheet. Suppose you have created a very complex excel sheet to track your projects schedule, resources, expenses, future plans etc. Now if you upgrade from Excel 2000 to Excel 2003 and some of the functions stop working, you will not be delighted with this.

So the crux of backward compatibility testing is to make sure that assets created using the older version continue to work.

Where it is not possible to use assets created by older versions for any reason, a proper migration path should be given to users so that they can migrate smoothly from the old version to the new one.

Upgrade Testing

The scope of upgrade testing is a bit broader than that of backward compatibility testing. In upgrade testing, apart from making sure that assets created with older versions can be used properly, we also make sure that the user's learning is not challenged, and that the upgrade process is simple and does not require users to invest a lot of time and resources. The following items can be included in upgrade testing, though it is not limited to them:

The upgraded product should continue to work with the same versions of old components. For example, an upgrade of your product should not force users to upgrade their database as well.

As far as possible, the look and feel of the product should be changed incrementally, so that users feel comfortable with the upgraded product as well.

Same terminology should be used wherever possible.

Old functionality should remain intact; it should not be dropped unless you have business reasons to drop it.

Interoperability Testing

Interoperability testing has become a requirement for companies that deploy multi-vendor networks. To satisfy this requirement, network and storage providers and managers have three options.

  1. Set up an interoperability lab, an expensive and time-consuming project.
  2. Use a third-party interoperability lab, such as ISOCORE or the University of New Hampshire.
  3. Create a proof-of-concept lab, such as the labs at Cisco or Spirent Communications.

Localization (L10N) Testing

Localization (L10N) is the process of customizing a software application that was originally designed for a domestic market so that it can be released in foreign markets. This process involves translating all native-language strings to the target language and customizing the GUI so that it is appropriate for the target market. Depending on the size and complexity of the software, localization can range from a simple process involving a small team of translators, linguists, desktop publishers, and engineers to a complex process requiring a Localization Project Manager directing a team of a hundred specialists. Localization is usually done using some combination of in-house resources, independent contractors, and the full-scope services of a localization company.

What do we need to consider in Localization Testing?

  • Things that are often altered during localization, such as the user interface and content files.
  • Operating System
  • Keyboards
  • Text Filters
  • Hot keys
  • Spelling Rules
  • Sorting Rules
  • Upper and Lower case conversions
  • Printers
  • Size of Papers
  • Mouse
  • Date formats (see the sketch after this list)
  • Rulers and Measurements
  • Memory Availability
  • Voice User Interface language/accent
  • Video Content
  • In particular, localization testing of the UI and linguistics should cover items such as: 
  • Validation of all application resources.
  • Verification of linguistic accuracy and resource attributes.
  • Checking for typographical errors.
  • Checking that printed documentation, online Help, messages, interface resources, and command-key sequences are consistent with each other. If you have shipped localized versions of your product before, make sure that the translation is consistent with the earlier released versions.
  • Confirmation of adherence to system, input, and display environment standards.
  • Checking usability of the UI.
  • Assessment of cultural appropriateness.
  • Checking for politically sensitive content.
  • Making sure the market-specific information about your company, such as contact information or local product-support phone numbers, is updated.
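As one concrete example from the list above, here is a minimal sketch of a date-format check (the locale identifiers and expected patterns are assumptions for illustration):

```python
from datetime import date

# Expected rendering pattern per target locale (assumed for illustration).
EXPECTED_FORMATS = {
    "en_US": "%m/%d/%Y",   # 03/04/2024
    "en_GB": "%d/%m/%Y",   # 04/03/2024
    "de_DE": "%d.%m.%Y",   # 04.03.2024
}

def render_date(d, locale_id):
    return d.strftime(EXPECTED_FORMATS[locale_id])

d = date(2024, 3, 4)
assert render_date(d, "en_US") == "03/04/2024"
assert render_date(d, "en_GB") == "04/03/2024"
assert render_date(d, "de_DE") == "04.03.2024"
```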

Globalization Testing

The goal of globalization testing is to detect potential problems in application design that could inhibit globalization. It makes sure that the code can handle all international support without breaking functionality that would cause either data loss or display problems. Globalization testing checks proper functionality of the product with any of the culture/locale settings using every type of international input possible.

Proper functionality of the product assumes both a stable component that works according to design specification, regardless of international environment settings or cultures/locales, and the correct representation of data.

Internationalization Testing

The world is flat. If you are reading this page, chances are that you are experiencing this as well. It is very difficult to survive in the current world if you are selling your product in only one country or geographical region. Even if you sell all over the world, if your product is not available in the regional languages, you might not be in a comfortable situation.

Products developed in one location are used all over the world with different languages and regional standards. This gives rise to the need to test the product in different languages and with different regional standards. Multilingual and localization testing can increase your product's usability and acceptability worldwide.

Internationalization is the process of designing and coding a product so it can perform properly when it is modified for use in different languages and locales. 

Localization (also known as L10N) refers to the process, on a properly internationalized base product, of translating messages and documentation as well as modifying other locale specific files. 

Assuming that there is not a separate base product for the locale, the localized files are installed at their proper location in the base product. This product is then released as a localized version of the product. 

Localizing a properly internationalized product in most cases should require no changes to the source code. 

Internationalization testing is the process which ensures that the product's functionality is not broken and all the messages are properly externalized when it is used in different languages and locales. Internationalization testing is also called I18N testing, because there are 18 characters between the I and the N in Internationalization.

Load Testing

It is well known that poor performance on the Internet leads to dissatisfied users, and dissatisfied users may leave a web site and never return to it again. However, predicting how a web site will respond to a specific load is a real problem. Since web sites are complex systems with hardware, software, networking components, and widely different performance profiles – it is almost impossible to predict how a given system will behave under load. The only solution here is to perform a load test where test volume and characteristics of expected traffic are simulated as realistically as possible. 

The goal of load testing is to simulate how people will use your web application in the real world. This can be done with the help of virtual users. Thus, load testing stresses the real application with a simulated load provided by virtual users. A minimal sketch of this idea follows the list of test types below.

  • Performance Test – It tests the application against a requested number of users. The goal of this test is to understand how the application will react to different user load, and determine whether it can handle the necessary load with acceptable response times or not. 
  • Load Test – It tests the application with the realistic peak loads provided by the customer, to verify that the application does not fail under the expected peak load.
  • Stress Test – This test verifies the acceptability of application performance under abnormal or extreme conditions such as diminished resources or extremely high number of users. The goal of stress test is to find the breaking point and bottlenecks of the tested system using extreme loads. It is done to evaluate the consequences of huge load, for example, after an advertisement campaign. 
  • Capacity Test – This test is used to determine the maximum number of concurrent users that an application can manage. It is done to evaluate applications used by many people at the same time (applications with high concurrent user rate). Capacity test is the benchmark to say what maximum number of concurrent users the application can handle. 
  • Endurance Test – This test generates an average load for a long period of time to validate application stability and reliability, and find out problems that arise during a long period of user load. 
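
To make the virtual-user idea concrete, here is a minimal load-test sketch in Python. It is only an illustration under stated assumptions (the target URL and the user and request counts are hypothetical, and the third-party requests library is assumed to be installed), not a substitute for a dedicated load-testing tool.

# Minimal load-test sketch: each thread acts as one virtual user.
import threading
import time
import requests

URL = "http://example.com/"      # hypothetical application under test
VIRTUAL_USERS = 10               # concurrent simulated users
REQUESTS_PER_USER = 5

response_times = []
lock = threading.Lock()

def virtual_user():
    for _ in range(REQUESTS_PER_USER):
        start = time.time()
        requests.get(URL, timeout=30)
        elapsed = time.time() - start
        with lock:                        # protect the shared list
            response_times.append(elapsed)

threads = [threading.Thread(target=virtual_user) for _ in range(VIRTUAL_USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("requests sent:", len(response_times))
print("average response time: %.3f s" % (sum(response_times) / len(response_times)))
print("maximum response time: %.3f s" % max(response_times))

Ramping VIRTUAL_USERS up step by step while watching the response times is the essence of the performance, load and capacity tests described above.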

The Difference between Performance and Stress Tests 

A performance test simulates the real-life usage of a web application. It is performed to assess how the application will work under actual conditions of usage.

A stress test is designed to determine what abnormal level of load the application can handle before it fails. During the test, a huge load is generated as quickly as possible to stress the application to its limit. 

Benchmark Test

Benchmark tests are usually conducted to compare performance across different software releases. Tests are run under similar conditions in order to compare results for different software versions, to verify that new features and functions do not increase response times, and to find system bottlenecks. To perform a benchmark test, you should define the activities of each virtual user and measure response times for a single virtual user performing a certain transaction. Once you have data for a single user, you can ramp up the number of users linearly to “n” users and measure response times to look for deviations.

Capacity Test

The goal of a capacity test is to determine the maximum number of concurrent users that an application can manage. If your application is used by many people at the same time, you should perform a capacity test before you deploy the application. Capacity tests can prevent many problems; however, such tests are time-consuming and require much expertise to be executed successfully.

Endurance Test

An endurance test is designed to run a system at average user load for a prolonged period of time. It is done to identify performance problems that appear after a long period of time, and to validate application stability and reliability. Some defects or bottlenecks of applications don't reveal themselves in a short period of testing, but they may accumulate over a long period of time and then result in a system failure. Also, due to memory leaks and other defects, a system may stop working after a certain number of transactions. Endurance tests provide an opportunity to identify such defects, while performance tests and stress tests cannot find these problems due to their relatively short duration.

Below are some typical problems identified by endurance tests: 

  • Serious memory leaks that would eventually result in a memory crisis. 
  • Failure to close connections between tiers of a multi-tiered system which could stop some or all modules of the system. 
  • Failure to close database cursors which would eventually result in the entire system stopping. 
  • Gradual degradation of response times since internal data structures may become less efficient during a long test. 

Apart from monitoring response time, it is also important to measure CPU usage and available memory. It is often effective to record memory usage at the start and end of an endurance test.
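
As a sketch of the "record memory at the start and end" advice, the snippet below samples the resident memory of the server process before and after the run. The PID, duration and threshold are hypothetical, and the third-party psutil library is an assumption.

# Record memory usage at the start and end of an endurance run.
import time
import psutil

PID = 1234                    # hypothetical PID of the server under test
DURATION_SECONDS = 8 * 3600   # e.g. an overnight endurance run

proc = psutil.Process(PID)
start_rss = proc.memory_info().rss       # resident memory at the start

time.sleep(DURATION_SECONDS)             # the endurance load runs meanwhile

end_rss = proc.memory_info().rss         # resident memory at the end
growth_mb = (end_rss - start_rss) / (1024 * 1024)
print("resident memory growth over the run: %.1f MB" % growth_mb)
if growth_mb > 100:                      # arbitrary threshold for illustration
    print("possible memory leak - investigate before a longer run")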

Test Duration

In an ideal case, an endurance test should continue as long as possible, depending on the testing needs. However, the duration of most endurance tests is determined by the available time in the test lab. In such situations, weekends and nights are convenient times for these tests.

However, there are many applications that require extremely long endurance tests. Any application that must run uninterrupted for extended periods of time may need an endurance test that covers all application activities. A classic example of a system that requires extensive endurance testing is an air traffic control system. An endurance test for such a system may have a multi-week or even multi-month duration.

Goals-based Testing

Goals-based testing is an approach where critical performance questions are asked before a test begins. This approach enables setting thresholds for parameters (for example, response times) so that tests can be stopped when these thresholds are exceeded. Such an approach accelerates the testing process greatly. Besides, testing is not very effective if it doesn't accurately reproduce the user activity, so the goals-based approach implies that realistic user scenarios are created before the test begins. With this approach, the test will answer the business question of "how many users" much more directly, take less time, and require less analysis after the test run.

How Does Goals-based Testing Work?

Generally, the goal of load testing is to determine how many concurrent users your system can handle without performance degradation or a significant increase in response times. To meet this goal, you should answer some important questions before the test:

  1. What are acceptable user performance levels? 
  2. How many concurrent users are required? 
  3. How fast should the site respond? 
  4. What are acceptable levels of system utilization? (CPU, memory, network, disk etc.) 

The main goal of load testing – to answer the question “how many users can my web server handle?” – is too vague to answer accurately. As an alternative, you can design your tests around these key questions: 

  1. Where is the system bottleneck, and how many synchronized concurrent requests can it handle? 
  2. How many nonsynchronized super users can one machine handle before response time becomes unacceptable? (Super users are virtual users whose think time is set to zero.) 
  3. Do the results scale linearly as you add additional hardware? 
  4. Are there any stability problems that will cause a failure? 

From here you can derive more specific questions such as “how many simultaneous requests can the submit process handle?” This is the fastest and cheapest way to get information about your application to make necessary improvements.

Volume Testing:

The purpose of Volume Testing is to find weaknesses in the system with respect to its handling of large amounts of data during short time periods. For example, this kind of testing ensures that the system will process data across physical and logical boundaries such as across servers and across disk partitions on one server.

It belongs to the group of non-functional tests, which are often misunderstood and/or used interchangeably. Volume testing refers to testing a software application with a certain volume of data. This volume can, in generic terms, be the database size, or it could be the size of an interface file that is the subject of volume testing. For example, if you want to volume-test your application with a specific database size, you will explode your database to that size and then test the application's performance on it.

Another example could be when there is a requirement for your application to interact with an interface file (it could be any file type, such as .dat or .xml); this interaction could be reading from and/or writing to the file. You will create a sample file of the size you want and then test the application's functionality with that file to check performance.
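
A minimal sketch of that interface-file example follows: generate a sample file of the required size, then time the operation under test. The file name, target size and record layout are hypothetical stand-ins.

# Volume-test preparation: build a file of the target size, then time a read.
import os
import time

SAMPLE_FILE = "volume_sample.dat"    # hypothetical interface file
TARGET_SIZE = 500 * 1024 * 1024      # 500 MB of test volume

record = b"0001|SAMPLE CUSTOMER|2024-01-01|100.00\n"
with open(SAMPLE_FILE, "wb") as f:
    while f.tell() < TARGET_SIZE:    # append records until the size is reached
        f.write(record)

print("sample file size:", os.path.getsize(SAMPLE_FILE), "bytes")

start = time.time()
with open(SAMPLE_FILE, "rb") as f:   # stand-in for the application's read path
    records = sum(1 for _ in f)
print("read %d records in %.1f s" % (records, time.time() - start))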

Gray Box Testing:

Gray box testing is a combination of black box and white box testing. The intention of this testing is to find defects related to bad design or bad implementation of the system.

In gray box testing, the test engineer is equipped with knowledge of the system and designs test cases or test data based on that knowledge.

For example, consider a hypothetical case wherein you have to test a web application. The functionality of this web application is very simple: you just need to enter your personal details, like email and field of interest, on the web form and submit the form. The server gets these details and, based on the field of interest, picks some articles and mails them to the given email. Email validation happens at the client side using JavaScript.

In this case, in the absence of implementation details, you might test the web form with valid/invalid mail IDs and different fields of interest to make sure that the functionality is intact.

But if you know the implementation details, you know that the system is making the following assumptions:

  • Server will never get invalid mail ID
  • Server will never send mail to invalid ID
  • Server will never receive failure notification for this mail.

So as part of gray box testing, in the above example you will have a test case on clients where JavaScript is disabled. This could happen for any reason, and if it happens, validation cannot take place at the client side. In this case, the assumptions made by the system are violated, and:

  • Server will get invalid mail ID
  • Server will send mail to invalid mail ID
  • Server will receive failure notification

Hope you understood the concept of gray box testing and how it can be used to create different test cases or data points based on the implementation details of the system.
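
A minimal sketch of that gray-box test case: bypass the client-side JavaScript entirely and POST an invalid email straight to the server, exactly as a browser with scripts disabled would. The URL and field names are hypothetical, and the third-party requests library is assumed.

# Gray-box check: does the server validate the email on its own?
import requests

FORM_URL = "http://example.com/subscribe"   # hypothetical form handler

payload = {
    "email": "not-an-email-address",        # invalid on purpose
    "interest": "testing",
}
response = requests.post(FORM_URL, data=payload, timeout=30)

# A server that relies only on client-side validation may accept this;
# a robust server should reject it with a client-error status.
print("status:", response.status_code)
assert response.status_code in (400, 422), \
    "server accepted an invalid email - client-side-only validation?"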

Accessibility Testing:

Accessibility testing is the technique of making sure that your product is accessibility compliant. There can be many reasons why your product needs to be accessibility compliant.

Typical accessibility problems can be classified into following four groups, each of them with different access difficulties and issues: 

  • Visual impairments
    Such as blindness, low or restricted vision, or color blindness. Users with visual impairments use assistive technology software that reads content aloud. Users with weak vision can also make text larger with browser settings or the magnifier setting of the operating system.
  • Motor skills
    Such as the inability to use a keyboard or mouse, or to make fine movements.
  • Hearing impairments
    Such as reduced or total loss of hearing 
  • Cognitive abilities
    Such as reading difficulties, dyslexia or memory loss. 

The development team can make sure that the product is partially accessibility compliant through code inspection and unit testing. The test team needs to certify that the product is accessibility compliant during the functional testing phase. In most cases, an accessibility checklist is used to certify accessibility compliance. This checklist can contain information on what should be tested, how it should be tested, and the status of the product for different access-related problems.

For accessibility testing to succeed, the test team should plan a separate cycle for accessibility testing. Management should make sure that the test team has information on what to test and that all the tools needed to test accessibility are available.

Typical test cases for accessibility might look similar to the following examples; a sketch of one automatable check follows the list.

  • Make sure that all functions are available via keyboard only (do not use mouse)
  • Make sure that information is visible when display setting is changed to High Contrast modes.
  • Make sure that screen reading tools can read all the text available and every picture/Image have corresponding alternate text associated with it.
  • Make sure that product defined keyboard actions do not affect accessibility keyboard shortcuts.
  • And many more.
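
The alternate-text case above is one of the few that can be automated cheaply. Here is a minimal sketch; the page URL is hypothetical, and the third-party requests and beautifulsoup4 libraries are assumptions.

# Automated accessibility check: every <img> should carry alternate text.
import requests
from bs4 import BeautifulSoup

PAGE_URL = "http://example.com/"     # hypothetical page under test

html = requests.get(PAGE_URL, timeout=30).text
soup = BeautifulSoup(html, "html.parser")

missing = [img.get("src") for img in soup.find_all("img")
           if not img.get("alt")]    # empty or absent alt attribute

if missing:
    print("images missing alternate text:")
    for src in missing:
        print("  ", src)
else:
    print("all images have alternate text")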


There are many tools on the market to assist you in your accessibility testing. No single tool can certify that your product is accessibility compliant; you will always need more than one tool to check the accessibility compliance of your product. Broadly, tools related to accessibility can be divided into two categories.

  • Inspectors or web checkers

    This category of tool allows a developer or tester to know exactly what information is being provided to an assistive technology. For example, a tool like Inspect Objects can be used to see exactly what information is exposed to the assistive technology.
  • Assistive Technologies (AT)

    This category of tools is what a person with a disability will use. To make sure that the product is accessibility compliant, tools like screen readers, screen magnifiers, etc. are used. Testing with an assistive technology has to be performed manually to understand how the AT will interact with the product and documentation.

More information on these tools is present in the tools section of this website for you to explore.

Some tips that can be used for accessibility testing:

When using a screen reader, be sure to include tests for everything the user would be doing, such as installing and uninstalling the product.

If a function cannot be performed using an Assistive Technology, then it may be considered accessible if it has a command line interface to perform that function.

Most of the time on the Windows platform, accessibility is built into your product using Microsoft Active Accessibility (MSAA). More information about MSAA follows below.

What is MSAA?
MSAA is the abbreviation of Microsoft Active Accessibility. MSAA is a set of dynamic link libraries (DLLs) that provide a COM interface and APIs. It is incorporated into the Microsoft Windows operating system and provides methods for exposing information about UI elements.
MSAA is used by assistive technologies like screen readers. These tools get information from MSAA and present it to the user. MSAA gives information in the form of objects. Every UI element is treated as a UI object, and information like Name, Value, Role, State, Keyboard Shortcut, etc. is given to the assistive technology tools. MSAA also supports events to capture state changes in the UI objects, for example when object focus changes.

Today most screen readers expect that applications have implemented MSAA. Implementing MSAA is probably one of the easiest ways to ensure that assistive technologies will work with your product.

The MSAA Software Development Kit contains the following tools:

  • Inspect Objects
    Displays object information for user interface objects 
  • Event Watcher
    Displays events fired by application when navigating the user interface 
  • Accessible Explorer
    Displays object properties and relationship hierarchy


The development team can use these tools to find accessibility-related defects during the development phase. The test team can validate that the product is accessibility compliant using these tools and some others.

Security Testing:

Security testing is very important in today's world, because of the way computers and the Internet have affected individuals and organizations. Today, it is very difficult to imagine the world without the Internet and the latest communication systems. All these communication systems increase the efficiency of individuals and organizations many times over.

Since everyone, from individuals to organizations, uses the Internet or communication systems to pass information, do business and transfer money, it becomes very critical for the service provider to make sure that information and networks are secured from intruders.

The primary purpose of security testing is to identify vulnerabilities and subsequently repair them. Typically, security testing is conducted after the system has been developed, installed and is operational. Unlike other types of testing, network security testing is performed on the system on a periodic basis to make sure that all the vulnerabilities of the system are identified.

Network security testing can be further classified into the following types.

Network Scanning 

Network scanning involves using a port scanner to identify all hosts potentially connected to an organization’s network, the network services operating on those hosts, such as the file transfer protocol (FTP) and hypertext transfer protocol (HTTP), and the specific application running the identified service, such as Internet Information Server (IIS) and Apache for the HTTP service. The result of the scan is a comprehensive list of all active hosts and services, printers, switches, and routers operating in the address space scanned by the port-scanning tool, i.e., any device that has a network address or is accessible to any other device. 

Port scanners, such as nmap, first identify active hosts in the address range specified by the user using Transport Control Protocol/Internet Protocol (TCP/IP) Internet Control Message Protocol (ICMP) ECHO and ICMP ECHO_REPLY packets. Once active hosts have been identified, they are scanned for open TCP and User Datagram Protocol (UDP) ports that will then identify the network services operating on that host.
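
A toy version of that second step can be written with nothing but the standard library, as sketched below. The host address and port range are hypothetical; a real assessment would use a dedicated scanner such as nmap, and you must have permission to scan the target.

# Minimal TCP port-scan sketch (TCP connect scan, no ICMP discovery).
import socket

HOST = "192.0.2.10"        # hypothetical host (TEST-NET example address)
PORTS = range(20, 1025)    # the well-known port range

open_ports = []
for port in PORTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 when the TCP handshake succeeds
        if s.connect_ex((HOST, port)) == 0:
            open_ports.append(port)

print("open TCP ports:", open_ports)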

All basic scanners will identify active hosts and open ports, but some scanners provide additional information on the scanned hosts. The information gathered during this open port scan will often identify the target operating system. This process is called operating system fingerprinting. For example, if a host has TCP port 135 and 139 open, it is most likely a Windows NT or 2000 host.

While port scanners identify active hosts, services, applications and operating systems, they do NOT identify vulnerabilities (beyond some common Trojan ports). Vulnerabilities can only be identified by a human who interprets the mapping and scanning results. From these results, a qualified individual can ascertain what services are vulnerable and the presence of Trojans. Although the scanning process itself is highly automated, the interpretation of scanned data is not.

The purpose of network port scanning is to:

  • Check for unauthorized hosts connected to the organization’s network
  • Identify vulnerable services
  • Identify deviations from the allowed services defined in the organization’s security policy
  • Prepare for penetration testing
  • Assist in the configuration of the intrusion detection system (IDS)
  • Collect forensics evidence

Password Cracking

Password cracking programs can be used to identify weak passwords. Password cracking verifies that users are employing sufficiently strong passwords. Passwords are generally stored and transmitted in an encrypted form called a hash. When a user logs on to a computer/system and enters a password, a hash is generated and compared to a stored hash. If the entered and the stored hashes match, the user is authenticated. An automated password cracker rapidly generates hashes until a match is found. The fastest method for generating hashes is a dictionary attack that uses all words in a dictionary or text file. Another method of cracking is called a hybrid attack, which builds on the dictionary method by adding numeric and symbolic characters to dictionary words. Depending on the password cracker being used, this type of attack will try a number of variations: it tries common substitutions of characters and numbers for letters.

The most powerful password-cracking method is called the brute force method. Brute force randomly generates passwords and their associated hashes. However, since there are so many possibilities, it can take months to crack a password. Theoretically, all passwords are “crackable” by a brute force attack, given enough time and processing power.
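
The dictionary and hybrid methods are easy to sketch with the standard hashlib module. The stored hash and word list below are hypothetical, and plain unsalted SHA-256 is used purely for illustration; real systems should use salted, deliberately slow hashes.

# Sketch of dictionary and hybrid password-cracking attacks.
import hashlib

def sha256_hex(word):
    return hashlib.sha256(word.encode()).hexdigest()

stored_hash = sha256_hex("password1")    # pretend this came from the system
dictionary = ["secret", "letmein", "password", "qwerty"]

def crack(stored):
    # Dictionary attack: try each word as-is.
    for word in dictionary:
        if sha256_hex(word) == stored:
            return word
    # Hybrid attack: append numeric suffixes to dictionary words.
    for word in dictionary:
        for n in range(100):
            candidate = word + str(n)
            if sha256_hex(candidate) == stored:
                return candidate
    return None

print("cracked password:", crack(stored_hash))   # -> password1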

Log Review 

Various system logs can be used to identify deviations from the organization’s security policy, including firewall logs, IDS logs, server logs, and any other logs that are collecting audit data on systems and networks. While not traditionally considered a testing activity, log review and analysis can provide a dynamic picture of ongoing system activities that can be compared with the intent and content of the security policy.

Snort is a free IDS sensor with ample support. It is a network intrusion detection system, capable of performing real-time traffic analysis and packet logging on IP networks. Snort can perform protocol analysis, content searching/matching and can be used to detect a variety of attacks and probes, such as buffer overflows, stealth port scans, CGI (Common Gateway Interface) attacks, SMB (System Message Block) probes, and OS fingerprinting attempts.

 File Integrity Checker 

A file integrity checker computes and stores a checksum for every guarded file and establishes a database of file checksums. It provides a tool for the system administrator to recognize changes to files, particularly unauthorized changes. Stored checksums should be recomputed regularly to test the current value against the stored value to identify any file modifications. A file integrity checker capability is usually included with any commercial host-based intrusion detection system.
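
A minimal sketch of that idea, assuming hypothetical guarded paths: compute a checksum database once, then recompute later and report modified files.

# File integrity checker sketch: baseline checksums, then verify.
import hashlib
import json
import os

GUARDED = ["/etc/passwd", "/etc/hosts"]   # hypothetical guarded files
DB_PATH = "checksums.json"

def checksum(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def build_database():
    # Run once to establish the baseline.
    db = {p: checksum(p) for p in GUARDED if os.path.exists(p)}
    with open(DB_PATH, "w") as f:
        json.dump(db, f)

def verify():
    # Run periodically to detect unauthorized changes.
    with open(DB_PATH) as f:
        db = json.load(f)
    for path, stored in db.items():
        if checksum(path) != stored:
            print("MODIFIED:", path)

build_database()
verify()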

WAR Dialing 

In a well-configured network, unauthorized modems are often an overlooked vulnerability. These unauthorized modems provide a means to bypass most or all of the security measures in place. There are several software packages available that allow attackers and network administrators to dial large blocks of phone numbers in search of available modems. This process is called war dialing. A computer with four modems can dial 10,000 numbers in a matter of days. Certain war dialers will even attempt some limited automatic hacking when a modem is discovered.

Error handling Testing

Error handling refers to the anticipation, detection, and resolution of programming, application, and communications errors. Specialized programs, called error handlers, are available for some applications. The best programs of this type forestall errors if possible, recover from them when they occur without terminating the application, or (if all else fails) gracefully terminate an affected application and save the error information to a log file.

In programming, a development error is one that can be prevented. Such an error can occur in syntax or logic. Syntax errors, which are typographical mistakes or improper use of special characters, are handled by rigorous proofreading. Logic errors, also called bugs, occur when executed code does not produce the expected or desired result. Logic errors are best handled by meticulous program debugging. This can be an ongoing process that involves, in addition to the traditional debugging routine, beta testing prior to official release and customer feedback after official release. 

A run-time error takes place during the execution of a program, and usually happens because of adverse system parameters or invalid input data. An example is the lack of sufficient memory to run an application or a memory conflict with another program. On the Internet, run-time errors can result from electrical noise, various forms of malware or an exceptionally heavy demand on a server. Run-time errors can be resolved, or their impact minimized, by the use of error handler programs, by vigilance on the part of network and server administrators, and by reasonable security countermeasures on the part of Internet users. 

Exploratory Testing
This is one of the software testing techniques with a hands-on approach. There is minimum planning and maximum test execution carried out in exploratory testing. The tester actively controls the design of the tests while those tests are performed, and the information gained while testing is used to design new and better tests.

Usability Testing
Usability testing involves tests which are carried out to determine the extent to which the software product is understood, easy to learn and operate, and attractive to the users under specific conditions. The user-friendliness of the software is under check in this type of testing, and the application flow is checked to see how easily users can move through the software.

Reliability Testing
Reliability testing checks the ability of the software to perform its required functions under stated conditions for a specific period of time and/or for a specific number of operations or transactions.

Ad-Hoc Testing
It is the least formal method implemented for testing software. It helps in deciding the scope and duration of the various tests which need to be carried out on the application. It also helps the tester gain a better understanding of the software.

Static Testing
Static testing is a form of software testing where the software isn't actually used. This is in contrast to dynamic testing. It is generally not detailed testing, but checks mainly for the sanity of the code, algorithm, or document. It is primarily syntax checking of the code, or manual reading of the code or document to find errors. This type of testing can be used by the developer who wrote the code, in isolation. Code reviews, inspections and walkthroughs are also used.

From the black box testing point of view, static testing involves review of requirements or specifications. This is done with an eye toward completeness or appropriateness for the task at hand. This is the verification portion of Verification and Validation.

Bugs discovered at this stage of development are less expensive to fix than later in the development cycle.

Dynamic Testing

Dynamic testing (or dynamic analysis) is a term used in software engineering to describe the testing of the dynamic behavior of code. That is, dynamic analysis refers to the examination of the physical response of the system to variables that are not constant and change with time.

In dynamic testing the software must actually be compiled and run; this is in contrast to static testing. Dynamic testing is the validation portion of Verification and Validation.

Code Coverage
It is an analysis method implemented to determine which parts of the software have been covered by the test suite and which parts have not been executed. There are different coverage methods used for this purpose: statement coverage, decision coverage and condition coverage. Statement coverage gives the percentage of executable statements that have been exercised by a test suite. Decision coverage, on the other hand, is the percentage of decision outcomes that have been exercised by a test suite. 100% decision coverage implies 100% statement coverage, but not vice versa. A small illustration follows.
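
In the hypothetical example below, the first test executes every statement of apply_discount(), yet it exercises only the True outcome of the decision, so statement coverage is 100% while decision coverage is only 50%.

# Statement coverage versus decision coverage, illustrated.
def apply_discount(total):
    discount = 0
    if total > 100:        # decision: both True and False outcomes must be
        discount = 10      # exercised for 100% decision coverage
    return total - discount

def test_statement_coverage_only():
    assert apply_discount(150) == 140   # executes every statement

def test_decision_coverage():
    assert apply_discount(150) == 140   # decision outcome: True
    assert apply_discount(50) == 50     # decision outcome: False

test_statement_coverage_only()
test_decision_coverage()
print("all coverage examples pass")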

Error Guessing
A test design technique where an experienced tester is used to anticipate the defects that might be present in the software, or in a component of the software under test, as a result of errors made. The tests are designed to specifically expose such defects.

Compatibility Testing:

Testing how well software performs in a particular hardware/software/operating system/network environment.

Recovery Testing / Failover Testing: Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

User Acceptance Testing

User Acceptance Testing (UAT) is performed by users, or on behalf of the users, to ensure that the software functions in accordance with the Business Requirement Document. UAT focuses on the following aspects:

•     All functional requirements are satisfied

•     All performance requirements are achieved

•     Other requirements like transportability, compatibility, error recovery etc. are satisfied

•     Acceptance criteria specified by the user are met.

Difference between IST and UAT

Particulars          IST                         UAT
Baseline Document    Functional Specification    Business Requirement
Data                 Simulated                   Live Data
Environment          Controlled                  Simulated Live
Perspective          Functionality               User Style
Location             Off Site                    On Site
Tester Composition   Tester Company              Test Company & Real Users
Purpose              Validation & Verification   User Needs

Smoke Testing: When a new build is received, testing the build to ensure that the basic functionality is working fine, and that the build is stable and can be considered for further testing, is known as “Smoke Testing”.

Sanity Testing: When a major bug is filed in a build, developers will fix that particular bug alone and release a new build. Testing the new build to check that the particular issue is fixed, and that the fix has not caused any impact on other functionality, is known as “Sanity Testing”.

Web Testing Checklist:

  • General 
  • Code 
  • Appearance 
  • Localisation and Language 
  • Navigation 
  • Entering a Page 
  • Page Functionality 
    • Submitting Information – Forms 
    • Error Prevention and Correction 
    • Client Environment 

What’s a ‘test case’? 

• A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results.
• Note that the process of developing test cases can help find problems in the requirements or design of an application, since it requires completely thinking through the operation of the application. For this reason, it’s useful to prepare test cases early in the development cycle if possible.

• What is system testing?
System testing is testing carried out on an integrated system to verify that the system meets the specified requirements. It is concerned with the behavior of the whole system, according to the scope defined. More often than not, system testing is the final test carried out by the development team, in order to verify that the system developed meets the specifications, and to identify defects which may be present.

• What is the difference between retest and regression testing?
Retesting, also known as confirmation testing, is testing which re-runs the test cases that failed the last time they were run, in order to verify the success of corrective actions taken on the defect found. On the other hand, regression testing is the testing of a previously tested program after modifications, to make sure that no new defects have been introduced. In other words, it helps to uncover defects in the unchanged areas of the software.

• What is a test suite?
A test suite is a set of several test cases designed for a component of a software or system under test, where the post-condition of one test case is normally used as the precondition for the next one.

These are some of the software testing interview questions and answers for freshers and the experienced. This is not an exhaustive list, but I have tried to include as many software testing interview questions and answers as I could in this article. I hope the article proves to be of help when you are preparing for an interview. Here's wishing you luck with your interviews, and I hope you crack them as well.

What is a Test Case?

A test case is a set of conditions, variables and inputs that are developed for a particular goal or objective to be achieved on a certain application, to judge its capabilities or features.
It might take more than one test case to determine the true functionality of the application being tested. Every requirement or objective to be achieved needs at least one test case. Some software development methodologies, like the Rational Unified Process (RUP), recommend creating at least two test cases for each requirement or objective: one for performing testing from the positive perspective, and the other from the negative perspective.

Test Case Structure

A formal written test case comprises three parts:

Information:
Information consists of general information about the test case: the identifier, test case creator, test case version, name of the test case, purpose or brief description, and test case dependencies.

Activity:
Activity consists of the actual test case activities. Activity contains information about the test case environment, activities to be done at test case initialization, activities to be done after test case is performed, step by step actions to be done while testing and the input data that is to be supplied for testing.

Results:
Results are outcomes of a performed test case. Results data consist of information about expected results and the actual results. 

Designing Test Cases

Test cases should be designed and written by someone who understands the function or technology being tested. A test case should include the following information – 

Purpose of the test 

Software requirements and Hardware requirements (if any) 

Specific setup or configuration requirements 

Description on how to perform the test(s) 

Expected results or success criteria for the test 

Designing test cases can be time-consuming in a testing schedule, but they are worth the time because they can really avoid unnecessary retesting or debugging, or at least lower it. Organizations can take the test case approach in their own context and according to their own perspectives. Some follow a general step-wise approach, while others may opt for a more detailed and complex approach. It is very important for you to decide between the two extremes and judge what would work best for you. Designing proper test cases is vital for your software testing plans, as a lot of bugs, ambiguities, inconsistencies and slip-ups can be recovered in time, and it also helps in saving time on continuous debugging and re-testing.

Web Testing Terminology and Example Test Cases

While testing a web application you need to consider the following cases:

• Functionality Testing
• Performance Testing
• Usability Testing
• Server Side Interface
• Client Side Compatibility
• Security

Functionality:
In testing the functionality of the web sites the following should be tested:
• Links
i. Internal Links
ii. External Links
iii. Mail Links
iv. Broken Links

• Forms
i. Field validation
ii. Error message for wrong input
iii. Optional and Mandatory fields

• Database
* Testing will be done on the database integrity.

• Cookies
* Testing will be done on the client system side, on the temporary Internet files.

Performance:
Performance testing can be applied to understand the web site’s scalability, or to benchmark the performance in the environment of third party products such as servers and middleware for potential purchase.

• Connection Speed:
Tested over various networks like Dial-Up, ISDN, etc.
• Load:
i. What is the number of users per unit time?
ii. Check for peak loads and how the system behaves.
iii. Large amounts of data accessed by the user.
• Stress:
i. Continuous load
ii. Performance of memory, CPU, file handling, etc.

Usability:
Usability testing is the process by which the human-computer interaction characteristics of a system are measured, and weaknesses are identified for correction.
• Ease of learning
• Navigation
• Subjective user satisfaction
• General appearance

Server Side Interface:
In web testing the server side interface should be tested. This is done by verifying that communication happens properly. Compatibility of the server with software, hardware, network and database should be tested.

Client Side Compatibility:
The client side compatibility is also tested on various platforms, using various browsers, etc.

Security:
The primary reason for testing the security of a web site is to identify potential vulnerabilities and subsequently repair them.
• Network Scanning
• Vulnerability Scanning
• Password Cracking
• Log Review
• Integrity Checkers
• Virus Detection

Let’s first have a look at the web testing checklist.
1) Functionality Testing
2) Usability testing
3) Interface testing
4) Compatibility testing
5) Performance testing
6) Security testing

1) Functionality Testing:

Test for: all the links in web pages, database connections, forms used in the web pages for submitting or getting information from the user, and cookie testing.

Check all the links:

  • Test the outgoing links from all the pages from specific domain under test.
  • Test all internal links.
  • Test links jumping on the same pages.
  • Test links used to send the email to admin or other users from web pages.
  • Test to check if there are any orphan pages.
  • Lastly in link checking, check for broken links in all the above-mentioned links. (A minimal link-checker sketch follows this list.)
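
Here is that sketch: collect every anchor on a page and flag links that return an HTTP error. The start URL is hypothetical, and the third-party requests and beautifulsoup4 libraries are assumptions.

# Broken-link checker sketch for a single page.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

START_URL = "http://example.com/"      # hypothetical page under test

html = requests.get(START_URL, timeout=30).text
soup = BeautifulSoup(html, "html.parser")

for a in soup.find_all("a", href=True):
    link = urljoin(START_URL, a["href"])
    if not link.startswith("http"):    # skip mailto:, javascript:, anchors
        continue
    try:
        status = requests.head(link, allow_redirects=True, timeout=10).status_code
    except requests.RequestException:
        status = "unreachable"
    if status == "unreachable" or status >= 400:
        print("broken link:", link, "->", status)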

Test forms in all pages:
Forms are an integral part of any web site. Forms are used to get information from users and to keep interacting with them. So what should be checked on these forms?

  • First check all the validations on each field.
  • Check for the default values of fields.
  • Wrong inputs to the fields in the forms.
  • Options to create forms if any, form delete, view or modify the forms.

Let’s take the example of the search engine project I am currently working on. In this project we have advertiser and affiliate signup steps. Each signup step is different but dependent on the other steps, so the signup flow should get executed correctly. There are different field validations, like email IDs and user financial info validations. All these validations should get checked in manual or automated web testing.

Cookies testing:
Cookies are small files stored on the user's machine. They are basically used to maintain sessions, mainly login sessions. Test the application by enabling or disabling the cookies in your browser options. Test if the cookies are encrypted before being written to the user's machine. If you are testing session cookies (i.e. cookies that expire after the session ends), check for login sessions and user stats after the session ends. Check the effect on application security by deleting the cookies. (Cookie testing is covered in more detail later in this material.)
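
As a quick sketch of one such cookie test: log in, confirm a session cookie was set, then repeat the request with no cookies and expect the protected page to refuse access. The URLs and field names are hypothetical, and the third-party requests library is assumed.

# Cookie test sketch: session cookie present versus absent.
import requests

BASE = "http://example.com"      # hypothetical site under test

# With cookies: the Session object stores whatever Set-Cookie returns.
session = requests.Session()
session.post(BASE + "/login", data={"user": "tester", "password": "secret"})
print("cookies received:", session.cookies.get_dict())
assert session.get(BASE + "/account").status_code == 200

# Without cookies: a bare request carries no session, so access should fail.
response = requests.get(BASE + "/account")
assert response.status_code in (302, 401, 403), \
    "protected page served without a session cookie"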

Validate your HTML/CSS:
If you are optimizing your site for search engines then HTML/CSS validation is very important. Mainly, validate the site for HTML syntax errors, and check if the site is crawlable by different search engines.

Database testing:
Data consistency is very important in a web application. Check for data integrity and errors while you edit, delete or modify the forms, or do any DB-related functionality.
Check if all the database queries are executing correctly and data is retrieved and updated correctly. Database testing can also cover load on the DB; we will address this under web load and performance testing below.

2) Usability Testing:

Test for navigation:
Navigation means how the user surfs the web pages, using different controls like buttons and boxes, or how the user uses the links on the pages to surf different pages.
Usability testing includes:
The web site should be easy to use. Instructions should be provided clearly. Check if the provided instructions are correct, that is, whether they satisfy their purpose.
The main menu should be provided on each page, and it should be consistent.

Content checking:
Content should be logical and easy to understand. Check for spelling errors. Dark color schemes annoy users and should not be used in the site theme. You can follow standards that are commonly accepted for web page and content building, like the ones mentioned above about annoying colors, fonts, frames, etc.
Content should be meaningful. All the anchor text links should be working properly. Images should be placed properly, with proper sizes.
These are some basic standards that should be followed in web development. Your task is to validate all of this in UI testing.

Other user information for user help:
Like search option, sitemap, help files etc. Sitemap should be present with all the links in web sites with proper tree view of navigation. Check for all links on the sitemap.
“Search in the site” option will help users to find content pages they are looking for easily and quickly. These are all optional items and if present should be validated.

3) Interface Testing:
The main interfaces are:
Web server and application server interface
Application server and Database server interface.

Check if all the interactions between these servers are executed properly and errors are handled properly. If the database or web server returns an error message for a query from the application server, then the application server should catch it and display the error message appropriately to users. Check what happens if the user interrupts a transaction midway, and what happens if the connection to the web server is reset in between.

4) Compatibility Testing:
Compatibility of your web site is a very important testing aspect. The compatibility tests to be executed are:

  • Browser compatibility
  • Operating system compatibility
  • Mobile browsing
  • Printing options

Browser compatibility:
In my web-testing career I have experienced this as the most influential part of web site testing.
Some applications are very dependent on browsers. Different browsers have different configurations and settings that your web page should be compatible with. Your web site coding should be cross-browser compatible. If you are using JavaScript or AJAX calls for UI functionality, performing security checks or validations, then put more stress on the browser compatibility testing of your web application.
Test the web application on different browsers like Internet Explorer, Firefox, Netscape Navigator, AOL, Safari and Opera, with different versions.

OS compatibility:
Some functionality in your web application may not be compatible with all operating systems. All new technologies used in web development, like graphics designs and interface calls such as different APIs, may not be available in all operating systems.
Test your web application on different operating systems like Windows, Unix, Mac OS, Linux and Solaris, with different OS flavors.

Mobile browsing:
This is a new technology age, so in the future mobile browsing will only grow. Test your web pages on mobile browsers; compatibility issues may exist on mobile.

Printing options:
If you are providing page-printing options then make sure fonts, page alignment and page graphics get printed properly. Pages should fit the paper size, or the size mentioned in the printing option.

5) Performance testing:
A web application should sustain heavy load. Web performance testing should include:
Web Load Testing
Web Stress Testing

Test application performance on different Internet connection speeds.
In web load testing, test whether many users are accessing or requesting the same page. Can the system sustain peak load times? The site should handle many simultaneous user requests, large input data from users, simultaneous connections to the DB, heavy load on specific pages, etc.

Stress testing: Generally, stress means stretching the system beyond its specified limits. Web stress testing is performed to break the site by applying stress, checking how the system reacts to the stress and how it recovers from crashes.
Stress is generally given on input fields, login and sign up areas.

In web performance testing, web site functionality on different operating systems and different hardware platforms is checked for software and hardware memory leakage errors.

6) Security Testing:

Following are some test cases for web security testing:

  • Test by pasting an internal URL directly into the browser address bar without logging in. Internal pages should not open.
  • If you are logged in using a username and password and browsing internal pages, then try changing URL options directly. For example, if you are checking some publisher site statistics with publisher site ID=123, try directly changing the URL site ID parameter to a different site ID which is not related to the logged-in user. Access should be denied for this user to view others' stats.
  • Try some invalid inputs in input fields like login username, password and input text boxes, and check the system's reaction to all invalid inputs.
  • Web directories and files should not be accessible directly unless a download option is given.
  • Test the CAPTCHA against automated script logins.
  • Test if SSL is used for security measures. If it is used, a proper message should get displayed when the user switches from non-secure http:// pages to secure https:// pages and vice versa.
  • All transactions, error messages and security breach attempts should get logged in log files somewhere on the web server. (A sketch of the URL-tampering check appears after this list.)
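
Here is that sketch: log in as one user, then request another user's resource by editing the ID in the URL, expecting the server to deny it. The URLs, credentials and IDs are hypothetical, and the third-party requests library is assumed.

# URL-tampering check: can one user view another user's statistics?
import requests

BASE = "http://example.com"            # hypothetical application

session = requests.Session()
session.post(BASE + "/login", data={"user": "publisher123", "password": "secret"})

own = session.get(BASE + "/stats?site_id=123")      # this user's own data
other = session.get(BASE + "/stats?site_id=456")    # someone else's data

assert own.status_code == 200
assert other.status_code in (401, 403), \
    "user can view statistics for a site that is not theirs"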

Web sites are essentially client/server applications, with web servers and ‘browser’ clients.

Consideration should be given to the interactions between html pages, TCP/IP communications, Internet connections, firewalls, applications that run in web pages (such as applets, javascript, plug-in applications), and applications that run on the server side (such as cgi scripts, database interfaces, logging applications, dynamic page generators, asp, etc.).

Additionally, there are a wide variety of servers and browsers, various versions of each, small but sometimes significant differences between them, variations in connection speeds, rapidly changing technologies, and multiple standards and protocols. The end result is that testing for web sites can become a major ongoing effort.

Other considerations might include:

What are the expected loads on the server (e.g., number of hits per unit time?), and what kind of performance is required under such loads (such as web server response time, database query response times). What kinds of tools will be needed for performance testing (such as web load testing tools, other tools already in house that can be adapted, web robot downloading tools, etc.)?

Who is the target audience? What kind of browsers will they be using? What kind of connection speeds will they be using? Are they intra-organization (thus with likely high connection speeds and similar browsers) or Internet-wide (thus with a wide variety of connection speeds and browser types)?

What kind of performance is expected on the client side (e.g., how fast should pages appear, how fast should animations, applets, etc. load and run)?

Will down time for server and content maintenance/upgrades be allowed? How much?

What kinds of security (firewalls, encryptions, passwords, etc.) will be required and what is it expected to do? How can it be tested?

How reliable are the site’s Internet connections required to be? And how does that affect backup system or redundant connection requirements and testing?

What processes will be required to manage updates to the web site’s content, and what are the requirements for maintaining, tracking, and controlling page content, graphics, links, etc.?

Which HTML specification will be adhered to? How strictly? What variations will be allowed for targeted browsers?

Will there be any standards or requirements for page appearance and/or graphics throughout a site or parts of a site?

How will internal and external links be validated and updated? How often?
Can testing be done on the production system, or will a separate test system be required? How are browser caching, variations in browser option settings, dial-up connection variabilities, and real-world Internet ‘traffic congestion’ problems to be accounted for in testing?

How extensive or customized are the server logging and reporting requirements; are they considered an integral part of the system and do they require testing?

How are cgi programs, applets, javascripts, ActiveX components, etc. to be maintained, tracked, controlled, and tested?
Pages should be 3-5 screens maximum, unless content is tightly focused on a single topic. If larger, provide internal links within the page.

The page layouts and design elements should be consistent throughout a site, so that it’s clear to the user that they’re still within a site.

Pages should be as browser-independent as possible, or pages should be provided or generated based on the browser-type.

All pages should have links external to the page; there should be no dead-end pages.
The page owner, revision date, and a link to a contact person or organization should be included on each page.

Website Cookie Testing: Test cases for testing web application cookies

We will first focus on what exactly cookies are and how they work. It will be easier for you to understand the test cases for testing cookies when you have a clear understanding of how cookies work, how cookies are stored on the hard drive, and how cookie settings can be edited.

What is a Cookie?
A cookie is a small piece of information stored in a text file on the user's hard drive by the web server. This information is later used by the web browser to retrieve information from that machine. Generally a cookie contains personalized user data or information that is used to communicate between different web pages.

Why are Cookies used?
Cookies are nothing but the user's identity, and they are used to track where the user navigated throughout the web site's pages. The communication between the web browser and the web server is stateless.

For example, if you are accessing the domain http://www.example.com/1.html, then the web browser will simply query the example.com web server for the page 1.html. The next time you request the page http://www.example.com/2.html, a new request is sent to the example.com web server for the 2.html page, and the web server doesn't know anything about whom the previous page 1.html was served to.

What if you want the previous history of this user's communication with the web server? You need to maintain the user state and the interaction between web browser and web server somewhere. This is where the cookie comes into the picture. Cookies serve the purpose of maintaining the user's interactions with the web server.

How do cookies work?
The HTTP protocol used to exchange information files on the web is used to maintain cookies. There are two variations of HTTP: stateless HTTP and stateful HTTP. Stateless HTTP does not keep any record of previously accessed web page history, while stateful HTTP does keep some history of the previous interactions between web browser and web server, and this mechanism is used by cookies to maintain the user interactions.

Whenever a user visits a site or page that is using a cookie, a small piece of code inside that HTML page (generally a call to some script that writes the cookie, in JavaScript, PHP or Perl) writes a text file, called a cookie, on the user's machine.
Here is one example of the Set-Cookie header that a server sends to write a cookie:

Set-Cookie: NAME=VALUE; expires=DATE; path=PATH; domain=DOMAIN_NAME;

When the user visits the same page or domain at a later time, this cookie is read from disk and used to identify the second visit of the same user to that domain. The expiration time is set while writing the cookie; this time is decided by the application that is going to use the cookie.

Generally two types of cookies are written on the user's machine.

1) Session cookies: This cookie is active as long as the browser that invoked the cookie is open. When we close the browser, this session cookie gets deleted. Sometimes a session of, say, 20 minutes can be set to expire the cookie.
2) Persistent cookies: Cookies that are written permanently on the user's machine and last for months or years.

Where are cookies stored?
When any web page application writes a cookie, it gets saved in a text file on the user's hard disk drive. The path where the cookies get stored depends on the browser; different browsers store cookies in different paths. E.g. Internet Explorer stores cookies on the path “C:\Documents and Settings\Default User\Cookies”.
Here “Default User” can be replaced by the current user you logged in as, like “Administrator”, or a user name like “Vijay”, etc.
The cookie path can easily be found by navigating through the browser options. In the Mozilla Firefox browser you can even see the cookies in the browser options itself: open the Mozilla browser, click on Tools->Options->Privacy and then the “Show cookies” button.

How are cookies stored?
Let's take the example of a cookie written by rediff.com in the Mozilla Firefox browser:
In the Mozilla Firefox browser, when you open the page rediff.com or log in to your rediffmail account, a cookie will get written to your hard disk. To view this cookie, simply click on the “Show cookies” button mentioned above. Click on the Rediff.com site in the cookie list. You can see the different cookies written by the rediff domain, with different names.

Site: Rediff.com Cookie name: RMID
Name: RMID (Name of the cookie)
Content: 1d11c8ec44bf49e0… (Encrypted content)
Domain: .rediff.com
Path: / (Any path after the domain name)
Send For: Any type of connection
Expires: Thursday, December 31, 2020 11:59:59 PM
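
The same attributes can be read programmatically. Below is a small sketch using Python's standard http.cookies module to parse a Set-Cookie header like the one above; the values are the hypothetical ones just listed.

# Parse a Set-Cookie header and inspect its attributes.
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie.load('RMID=1d11c8ec44bf49e0; expires=Thu, 31 Dec 2020 23:59:59 GMT; '
            'path=/; domain=.rediff.com')

for name, morsel in cookie.items():
    print("name   :", name)
    print("value  :", morsel.value)
    print("domain :", morsel["domain"])
    print("path   :", morsel["path"])
    print("expires:", morsel["expires"])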

Applications where cookies can be used:

1) To implement shopping cart:
Cookies are used to maintain online ordering systems. Cookies remember what the user wants to buy. What if the user adds some products to their shopping cart, and then for some reason doesn't want to buy those products this time and closes the browser window? The next time the same user visits the purchase page, he can see all the products he added to the shopping cart on his last visit.

2) Personalized sites:
When a user visits certain pages, they are asked which pages they don't want to visit or display. The user's options get stored in a cookie, and as long as the user is online, those pages are not shown to him.

3) User tracking:
To track the number of unique visitors online at a particular time.

4) Marketing:
Some companies use cookies to display advertisements on user machines. Cookies control these advertisements: when and which advertisement should be shown, what the interests of the user are, and which keywords he searches for on the site. All these things can be maintained using cookies.

5) User sessions:
Cookies can track user sessions to a particular domain using a user ID and password.

Drawbacks of cookies:

1) Even though writing cookies is a great way to maintain user interaction, if the user has set browser options to warn before writing any cookie, or has disabled cookies completely, then a site relying on cookies will be completely disabled and cannot perform any operation, resulting in loss of site traffic.

2) Too many cookies:
If you are writing too many cookies on every page navigation, and the user has turned on the option to warn before writing a cookie, this could turn the user away from your site.

3) Security issues:
Sometimes a user's personal information is stored in cookies, and if someone hacks the cookie, the user's personal information is exposed.

