Sunday, November 20, 2011

Equivalence Partitioning

Most software engineers would agree that software testing is as important as software development itself. Every developer knows the frustration of receiving bug reports from users even after spending hundreds of hours writing thousands of lines of code. An increasing number of software companies are realizing the importance of software testing and are paying more attention to it. Equivalence partitioning is an important and interesting software testing technique. Let's discuss it in more depth.

What is Equivalence Partitioning?

In equivalence partitioning, the tester divides the possible inputs into classes, known as equivalence classes (or partitions). Each input in a class is expected to cause the same processing and produce the same kind of output; a class is simply a set of inputs that the software is likely to handle in the same way. Equivalence partitioning can therefore be seen as a technique for minimizing the number of permutations and combinations of input data that must be tested. Since the behavior of the program should be the same for any value from a given class, it is enough to choose one test case from each partition to inspect that behavior. Even if you test every value in a partition, a new fault will hardly ever be revealed, so the values in one partition can safely be treated as equivalent. This reduces the tester's effort by minimizing the number of test cases to run, and applying the technique also helps in finding the "dirty" test cases.

Black Box Vs White Box

Black box testing is a way of testing a software program at its outer interface, without considering its internal architecture. Equivalence partitioning is often compared with black box testing; however, it has similarities with white box testing too. Some software may give different results for different ranges of input values, which will not be noticed by black box testing, as it deals only with the outer interface. In white box testing, all the possible processes are examined. To capture this, equivalence partitioning considers additional partitions that plain black box testing does not.

Equivalence Partitioning Example

Consider the following example of a simple software program for a student grading system.

  • Percentage 0 to 39 - Output: Grade F
  • Percentage 40 to 59 - Output: Grade C
  • Percentage 60 to 70 - Output: Grade B
  • Percentage 71 to 100 - Output: Grade A
As per the equivalence partitioning testing technique, partitions for this program could be as follows.
  • Percentage between 0 and 39 - Valid Input
  • Percentage between 40 and 59 - Valid Input
  • Percentage between 60 and 70 - Valid Input
  • Percentage between 71 and 100 - Valid Input
  • Percentage less than 0 - Invalid Input
  • Percentage more than 100 - Invalid Input
  • Non-numeric input - Invalid Input
It is clear from the above example that the innumerable possible test cases (any value from 0 to 100, values greater than 100 or less than 0, and non-numeric values) can be divided into 7 distinct classes. Taking any one value from each of these divisions is then acceptable for testing.
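The partitions above can be sketched in code. The following is a minimal, hypothetical implementation of the grading rules, with one representative test value chosen from each of the 7 classes; the function name and the error handling are illustrative assumptions, not part of the original program.

```python
def grade(percentage):
    """Return the grade for a percentage; raise ValueError for invalid input."""
    if not isinstance(percentage, (int, float)):
        raise ValueError("non-numeric input")
    if percentage < 0 or percentage > 100:
        raise ValueError("percentage out of range")
    if percentage <= 39:
        return "F"
    if percentage <= 59:
        return "C"
    if percentage <= 70:
        return "B"
    return "A"

# One test case per valid partition is enough under equivalence partitioning.
assert grade(20) == "F"   # partition 0-39
assert grade(50) == "C"   # partition 40-59
assert grade(65) == "B"   # partition 60-70
assert grade(85) == "A"   # partition 71-100

# Invalid partitions: one representative from each should be rejected.
for bad in (-5, 150, "abc"):
    try:
        grade(bad)
        raise AssertionError("expected rejection of %r" % (bad,))
    except ValueError:
        pass
```

Picking 20 rather than, say, 5 or 39 makes no difference under this technique: any member of a partition stands in for the whole class.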

The main task in equivalence partitioning is to identify the equivalence classes, and doing that requires proper examination of all the possible inputs. The prime advantage of applying equivalence partitioning in software testing is that the tester's effort is greatly reduced without compromising quality; unnecessary test cases are eliminated because all the cases that give the same outcome are bunched together.

Manual Testing Interview Questions

Manual testing is one of the oldest and most effective ways of carrying out software testing. Whenever new software is built, it needs to be tested for its effectiveness, and it is for this purpose that manual testing is used. Manual testing is a type of software testing that is an important component of the IT job sector. It does not use any automation, and is therefore tedious and laborious.

Manual testing requires, most importantly, a 'tester', who needs certain qualities because the job demands them: he or she needs to be observant, creative, innovative, speculative, open-minded, resourceful, patient, and skillful. In the following article, we shall not concentrate on what a tester is like, but on what some of the manual testing interview questions are. So if you have a doubt in this regard, read on to see what these questions entail.

Manual Testing Interview Questions for Freshers

The following are some of the interview questions for manual testing. These will give you a fair idea of what such questions are like.
  • What is accessibility testing?
  • What is Ad Hoc Testing?
  • What is Alpha Testing?
  • What is Beta Testing?
  • What is Component Testing?
  • What is Compatibility Testing?
  • What is Data Driven Testing?
  • What is Concurrency Testing?
  • What is Conformance Testing?
  • What is Context Driven Testing?
  • What is Conversion Testing?
  • What is Depth Testing?
  • What is Dynamic Testing?
  • What is End-to-End testing?
  • What is Endurance Testing?
  • What is Installation Testing?
  • What is Gorilla Testing?
  • What is Exhaustive Testing?
  • What is Localization Testing?
  • What is Loop Testing?
  • What is Mutation Testing?
  • What is Positive Testing?
  • What is Monkey Testing?
  • What is Negative Testing?
  • What is Path Testing?
  • What is Ramp Testing?
  • What is Performance Testing?
  • What is Recovery Testing?
  • What is Regression testing?
  • What is Re-testing?
  • What is Stress Testing?
  • What is Sanity Testing?
  • What is Smoke Testing?
  • What is Volume Testing?
  • What is Usability testing?
  • What is Scalability Testing?
  • What is Soak Testing?
  • What is User Acceptance testing?
These were some of the manual testing interview questions for freshers; let us now move on to other forms of manual testing questions.

Software Testing Interview Questions for Freshers

Here are some software testing interview questions that will help you get into the more intricate and complex formats of this form of manual testing.
  • Can you explain the V model in manual testing?
  • What is the waterfall model in manual testing?
  • What is the structure of a bug life cycle?
  • What is the difference between bug, error and defect?
  • How does one add objects into the Object Repository?
  • What are the different modes of recording?
  • What does 'testing' mean?
  • What is the purpose of carrying out manual testing for a background process that does not have a user interface and how do you go about it?
  • Explain with an example what test case and bug report are.
  • How does one go about reviewing a test case and what are the types that are available?
  • What is AUT?
  • What is compatibility testing?
  • What is alpha testing and beta testing?
  • What is the V model?
  • What is debugging?
  • What is the difference between debugging and testing? Explain in detail.
  • What is the fish model?
  • What is port testing?
  • Explain in detail the difference between smoke and sanity testing.
  • What is the difference between usability testing and GUI?
  • Why does one require object spy in QTP?
  • What is the test case life cycle?
  • Why does one save .vbs scripts as library files in QTP/WinRunner?
  • When do we use the update mode in QTP?
  • What is virtual memory?
  • What is visual source safe?
  • What is the difference between test scenarios and test strategy?
  • What is the difference between properties and methods in QTP?
Why do these manual testing interview questions help? They help you prepare for what lies ahead. The career opportunities that an IT job provides are greater than what many other fields provide, and if you're from this field then you'll know what I'm talking about, right?

Software Testing - Brief Introduction To Exploratory Testing

What is Exploratory Testing?
Bach's Definition: 'Any testing to the extent that the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests.'

Put simply: a type of testing where we explore the software, writing and executing the test scripts simultaneously.

Exploratory testing is a type of testing in which the tester does not have specifically planned test cases; instead, he/she tests with a view to exploring the software's features and tries to break it in order to find unknown bugs.

A tester who does exploratory testing does it with the aim of understanding the software better and better and appreciating its features. During this process, he/she also tries to think of all possible scenarios where the software may fail and a bug can be revealed.

Why do we need exploratory testing?
  • At times, exploratory testing helps in revealing many unknown and undetected bugs, which are very hard to find through normal testing.
  • As exploratory testing covers almost all the normal types of testing, it helps improve our productivity in terms of covering both the scenarios in scripted testing and those which are not scripted.
  • Exploratory testing is a learn-and-work type of testing activity, where a tester can at least learn more about and understand the software even if he/she was not able to reveal any potential bug.
  • Exploratory testing, even though disliked by many, helps testers learn new methods and test strategies, think out of the box, and attain more and more creativity.
Who Does Exploratory Testing?
Any software tester knowingly or unknowingly does it!

While testing, if a tester comes across a bug, as a general practice the tester registers that bug with the programmer. Along with registering the bug, the tester also tries to make sure that he/she has understood the scenario and functionality properly and can reproduce the bug condition. Once the programmer fixes the bug, the tester runs a test case replicating the same scenario in which the bug had occurred previously. If the tester finds that the bug is fixed, he/she then tries to find out whether the fix can handle any similar scenario with different inputs.

For example, let's consider that a tester finds a bug related to an input text field on a form, where the field is supposed to reject any value from 1 to 100, but it fails to do so and accepts the number 100. The tester logs this bug with the programmer and waits for the fix. Once the programmer fixes the bug, he/she sends it across to the tester to get it tested. The tester will now try to test the bug with the same input value (100, as he/she had found that this condition causes the application to fail) in the field. If the application rejects the number (100) entered by the tester, he/she can safely close the defect.

Now, along with the above test input value, which had revealed the bug, the tester tries to check whether there is any other value from this set (1 to 100) which can cause the application to fail. He/she may try various values from 1 to 100, or perhaps some characters, or a combination of characters and numbers in any order. All these test cases are thought up by the tester as variations of the value he/she had entered previously, and they represent only one test scenario. This is called exploratory testing, as the tester tries to explore and find the possibility of revealing a bug by any possible means.
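The scenario above can be sketched in code. The accepts function below is a hypothetical stand-in for the form field's validation (reject values from 1 to 100, accept other numeric input), used only to show how the exploratory variations all probe one scenario.

```python
def accepts(value):
    """Return True if the (hypothetical) field accepts the input."""
    try:
        number = float(value)
    except (TypeError, ValueError):
        return False              # non-numeric input is rejected outright
    return not (1 <= number <= 100)

# The original failing value, retested after the fix...
assert not accepts(100)
# ...plus exploratory variations on the same scenario:
assert not accepts(1)             # other values in the forbidden range
assert not accepts(50)
assert accepts(0)                 # values just outside the range
assert accepts(101)
assert not accepts("abc")         # characters instead of numbers
assert not accepts("12a")         # a mix of characters and numbers
```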

What qualities do I need to possess to be able to perform Exploratory Testing?
As mentioned above, any software tester can perform exploratory testing.
The only limit to the extent to which you can perform exploratory testing is your imagination and creativity: the more ways you can think of to explore and understand the software, the more test cases you will be able to write and execute simultaneously.

Advantages of Exploratory Testing
  • Exploratory testing can uncover bugs which are normally ignored (or hard to find) by other testing strategies.
  • It helps testers learn new strategies and expand the horizons of their imagination, which helps them understand and execute more and more test cases and finally improves their productivity.
  • Exploratory testing helps the tester confirm that he/she understands the application and its functionality properly and has no confusion about the working of even its smallest part, hence covering the most important part of requirement understanding.
  • Since in exploratory testing we write and execute the test cases simultaneously, it helps in collecting result-oriented test scripts and shedding the load of unnecessary test cases which do not yield any result.
  • Exploratory testing covers almost all types of testing, hence the tester can be sure of covering various scenarios once exploratory testing is performed at the highest level (i.e. if the exploratory testing performed can ensure that all the possible scenarios and test cases are covered).

Software Testing - Contents of a Bug

When a tester finds a defect, he/she needs to report a bug and fill in certain fields which help in uniquely identifying the bug reported. The contents of a bug are as given below:

Project: Name of the project under which the testing is being carried out.

Subject: A short description of the bug which helps in identifying it. This generally starts with the project identifier number/string, and should be clear enough to help the reader anticipate the problem/defect for which the bug has been reported.

Description: Detailed description of the bug. This generally includes the steps that are involved in the test case and the actual results. At the end of the summary, the step at which the test case fails is described along with the actual result obtained and expected result.

Summary: This field contains some keyword information about the bug, which can help in minimizing the number of records to be searched.

Detected By: Name of the tester who detected/reported the bug.

Assigned To: Name of the developer who is supposed to fix the bug. Generally this field contains the name of the developer group leader, who then delegates the task to a member of his team and changes the name accordingly.

Test Lead: Name of the leader of the testing team, under whom the tester reports the bug.

Detected in Version: This field contains the version information of the software application in which the bug was detected.

Closed in Version: This field contains the version information of the software application in which the bug was fixed.

Date Detected: Date at which the bug was detected and reported.

Expected Date of Closure: Date at which the bug is expected to be closed. This depends on the severity of the bug.

Actual Date of Closure: As the name suggests, actual date of closure of the bug i.e. date at which the bug was fixed and retested successfully.

Priority: Priority of the bug fix. This depends specifically upon the functionality that the bug is hindering. Generally Low, Medium, High, and Urgent are the priority levels used.

Severity: This is typically a numerical field which displays the severity of the bug. It can range from 1 to 5, where 1 is the highest severity and 5 is the lowest.

Status: This field displays the current status of the bug. A status of 'New' is automatically assigned to a bug when it is first reported by the tester; the status then changes to Assigned, Open, Retest, Pending Retest, Pending Reject, Rejected, Closed, Postponed, Deferred, etc. as the bug-fixing process progresses.

Bug ID: This is a unique ID (i.e. a number) created for the bug at the time of reporting, which identifies the bug uniquely.

Attachment: Sometimes it is necessary to attach screenshots of the tested functionality; these help the tester explain the testing he had done and also help the developers re-create a similar testing condition.

Test Case Failed: This field contains the test case that failed for the bug.

Any of the above fields can be made mandatory, in which case the tester has to enter valid data at the time of reporting a bug. Making a field mandatory or optional depends on the company's requirements and can change at any point in a software testing project.

(Please note: All the fields listed above are generally present for any bug reported in a bug-reporting tool. In some cases (for customized bug-reporting tools) the number of fields and their meanings may change as per the company's requirements.)
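As a rough sketch, the fields above could be modeled as a simple record like the following. The field names, types, and defaults are illustrative assumptions, not tied to any real bug-reporting tool.

```python
# Minimal, hypothetical model of a bug record with the fields described above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class BugReport:
    bug_id: int                     # unique ID created at reporting time
    project: str
    subject: str                    # short description, starts with project id
    description: str                # detailed steps, actual vs. expected result
    summary: str                    # keywords to narrow down searches
    detected_by: str
    assigned_to: str                # usually the developer group leader at first
    detected_in_version: str
    date_detected: str
    priority: str = "Medium"        # Low / Medium / High / Urgent
    severity: int = 3               # 1 (highest) to 5 (lowest)
    status: str = "New"             # 'New' on first report
    closed_in_version: Optional[str] = None
    actual_date_of_closure: Optional[str] = None

bug = BugReport(
    bug_id=101, project="Billing", subject="BILL-101: invoice total rounds wrong",
    description="Step 3 fails: expected 10.50, actual 10.49",
    summary="rounding, invoice total", detected_by="A. Tester",
    assigned_to="Dev Lead", detected_in_version="1.2",
    date_detected="2011-11-20",
)
assert bug.status == "New" and bug.closed_in_version is None
```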

Software Testing Techniques

Software testing is a process consisting of all life cycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects. ~ Foundations of Software Testing by Dorothy Graham, Erik van Veenendaal, Isabel Evans, Rex Black.

Software Testing - An Introduction!
From the definition of software testing, it is clear that software testing forms a fundamental component of software quality assurance. While software is being tested, it is taken apart to find any errors in it. When testing, the test engineer aims to find errors in the system, including errors not yet discovered, and different software testing techniques are used to uncover these new errors. In this article, we will see the different types of software testing techniques used in the software testing process.

Software Testing Techniques and Strategies

Normally software testing is carried out in all stages of the software development life cycle. The advantage of testing at all stages is that it helps to find different defects in different stages of software development. This helps to minimize the cost of the software, as it is easier to log and fix defects in the early stages of development. When the entire product is ready, the cost increases, as there are a number of other components which depend on the component that has defects in it. Software testing methodologies are broadly divided into two categories, namely static techniques and dynamic techniques.

Static Software Testing Techniques
In this type of technique, a component is tested without executing the software; a static analysis of the code is carried out instead. There are several different static techniques of software testing.

Review
Review is a powerful static technique, carried out in the early stages of the software testing life cycle. Reviews can be either formal or informal. Inspection is the most documented and formal review technique; however, in practice the informal review is perhaps the most commonly used.

In the initial stages of development, the number of people attending the reviews, whether formal or informal, is small, but it increases in the later stages of software development. A peer review is a review of a software product undertaken by the peers and colleagues of the author of the software component, to identify defects in the component and also to recommend any improvements to the system if required. The types of reviews are:
  • Walkthrough: The author of the document to be reviewed guides the participants through the document, along with his/her thought process to come to a common understanding as well as to gather feedback on the component document under review.
  • Technical Review: It is a peer group discussion where the focus is on achieving consensus on the technical approach taken while developing the system.
  • Inspection: This is also a type of peer review, where the focus is on the visual examination of various documents to detect any defects in the system. This type of review is always based on a documented procedure.
Static Analysis by Tools
Static analysis tools focus on the software code. These tools are used by software developers before, and sometimes during, component and integration testing. The checks they support include:
  • Coding Standards: Here, there is a check conducted to verify adherence to coding standards.
  • Code Metrics: Code metrics help to measure structural attributes of the code. When the system becomes increasingly complex, they help in deciding between design alternatives, especially while redesigning portions of the code.
  • Code Structure: Three main aspects of the code structure are control flow structure, data flow structure and data structure.
Dynamic Software Testing Techniques
In dynamic software testing techniques, the code is actually executed and tested for defects. This category is further divided into three sub-categories, namely specification-based techniques, structure-based techniques and experience-based techniques. We will now see each of them.

Specification Based Testing Techniques
Specification-based techniques derive and/or select test cases based on an analysis of either the functional or non-functional specifications of a component or system, without any reference to its internal structure. This is also known as 'black box testing' or 'input/output driven testing', so-called because the tester has no knowledge of how the system is structured inside. The tester concentrates on what the software does and is not bothered about how it does it. Functional testing concentrates on what the system does, along with its features or functions; non-functional testing, on the other hand, concentrates on how well the system does something. There are five main specification-based testing techniques:
  1. Equivalence Partitioning: Test cases are designed to execute representative inputs from equivalence partitions (equivalence classes), such that every partition is covered at least once. The idea is to divide the set of test conditions into groups which can be considered the same: if any value from a group is used in the system, the result should be the same. This helps to reduce the number of test cases executed, as only one condition from each partition need be tested. Example: if 1 to 100 are the valid values, the valid range might be split into the partitions 1 to 50 and 51 to 100; then 1, 50 and 100 are example values for which the system can be checked. But it does not end there: the system must also be checked for invalid partitions, so values like -10, 125, etc. represent the invalid partitions. While choosing values for invalid partitions, the values should be away from the valid boundaries.
  2. Boundary Value Analysis: A boundary value is an input or output value that is on the edge of an equivalence partition, or at the smallest incremental distance on either side of an edge. This technique tests the boundaries between partitions, covering both valid and invalid boundaries. Example: if 1 to 99 are the valid inputs, then 0 and 100 are invalid values, and the test cases should be designed to include the values 0, 1, 99 and 100 to check the working of the system.
  3. Decision Table: This technique focuses on business logic or business rules. A decision table is also known as a cause-effect table. The table holds combinations of inputs with their associated outputs, which are used to design test cases; the technique works well in conjunction with equivalence partitioning. The first task is to identify a suitable function whose behavior reacts according to a combination of inputs. If there are a large number of conditions, dividing them into subsets helps to arrive at accurate results. If there are two conditions, there are 4 combinations of inputs; likewise, for 3 conditions there are 8 combinations, for 4 conditions 16 combinations, and so on.
  4. State Transition Testing: This technique is used where any aspect of the component or system can be described as a 'finite state machine'. The test cases are designed to execute valid and invalid state transitions. In any given state, one event can give rise to only one action, but the same event from another state may cause a different action and a different end state.
  5. Use Case Testing: This helps to identify test cases which exercise the whole system on a transaction-by-transaction basis from beginning to end. The test cases are designed to execute real-life scenarios, and they help to reveal integration defects.
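The boundary value analysis example above (valid inputs 1 to 99) can be sketched as follows. The validator is a hypothetical stand-in, used only to show which values the technique selects.

```python
# Hypothetical validator for the boundary value analysis example:
# inputs from 1 to 99 are valid.
def is_valid(value):
    return 1 <= value <= 99

# Boundary value analysis tests each valid boundary (1, 99) and its
# nearest invalid neighbor (0, 100).
boundary_cases = {0: False, 1: True, 99: True, 100: False}
for value, expected in boundary_cases.items():
    assert is_valid(value) == expected
```

Note how this complements equivalence partitioning: partitioning picks one value from inside each class, while boundary value analysis deliberately probes the edges, where off-by-one defects tend to hide.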
Structure Based Testing Techniques
Structure-based testing techniques serve two purposes: test coverage measurement and structural test case design. They are a good way to generate additional test cases that differ from the existing ones derived from specification-based techniques. This approach is also known as white box testing.
  • Test Coverage: The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite. The basic coverage measure is:

    Coverage = (Number of coverage items exercised / Total number of coverage items) * 100%

    There is a danger in using the coverage measure: contrary to popular belief, 100% coverage does not mean that the code is 100% tested.
  • Statement Coverage and Statement Testing: This is the percentage of executable statements which have been exercised by a particular test suite. Note that a statement may be on a single line or spread over several lines; at the same time, one line may contain more than one statement or only part of a statement, and statements may contain other statements inside them. The formula for statement coverage is:

    Statement Coverage = (Number of statements exercised / Total number of statements) * 100%
  • Decision Coverage and Decision Testing: Decision statements are statements like 'if' statements, loop statements, case statements, etc., where there are two or more possible outcomes from the same statement. The formula for decision coverage is:

    Decision Coverage = (Number of decision outcomes exercised / Total number of decision outcomes) * 100%

    Decision coverage is stronger than statement coverage, as 100% decision coverage always guarantees 100% statement coverage, but not the other way round. To achieve full decision coverage, each decision needs to have exercised both a true and a false outcome.
  • Other Structure-based Testing Techniques: Apart from the techniques mentioned above, there are others as well, including linear code sequence and jump (LCSAJ) coverage, modified condition/decision coverage (MC/DC), path testing, etc.
Experience Based Testing Techniques
Although testing needs to be rigorous, systematic and thorough, there are some non-systematic techniques that are based on a person's knowledge, experience, imagination and intuition; an experienced bug hunter is often able to locate an elusive defect in the system. Two techniques fall under this category:

Error Guessing
This is a test design technique in which the tester's experience is put to work hunting for elusive bugs that may be part of the component or system due to errors made. It is often used after the formal techniques have been applied, and has proved to be very useful. A structured approach to error guessing is to list the possible defects which could be part of the system, and then write test cases in an attempt to reproduce them.

Exploratory Testing Techniques
Exploratory testing is sometimes informally likened to 'monkey testing'. It is a hands-on approach in which there is minimal planning but maximal testing: test design and test execution happen simultaneously, without formally documenting the test conditions, test cases or test scripts. This approach is useful when the project specifications are poor or when the time at hand is extremely limited.

There are also different software testing estimation techniques; one of them involves consulting the people who will perform the testing activities and the people who have expertise in the tasks to be done. Which software testing techniques are used on a project depends on a number of factors, mainly the urgency of the project, the severity of the project, the resources at hand, etc. At the same time, not all techniques will be utilized in all projects; the techniques chosen depend on organizational policies.

Software Testing - Brief Introduction To Security Testing

Security testing of any developed system (or a system under development) is all about finding the potential loopholes and weaknesses of the system which might result in loss or theft of highly sensitive information, or destruction of the system by an intruder/outsider. Security testing helps in finding all the possible vulnerabilities of the system and helps developers fix those problems.
Need for Security Testing
  • Security testing helps in finding loopholes that can cause loss of important information and allow an intruder to enter the system.
  • Security testing helps in improving the current system and also helps in ensuring that the system will work for a long time (or that it will work without hassles for the estimated time).
  • Security testing doesn't only cover the resistance of the systems your organization uses; it also ensures that people in your organization understand and obey security policies, hence adding to organization-wide security.
  • If involved right from the first phase of the system development life cycle, security testing can help in eliminating flaws in the design and implementation of the system, and in turn help the organization block potential security loopholes at an early stage. This is beneficial to the organization in almost all aspects (financial, security, and even effort).
Who Needs Security Testing?
Nowadays, almost all organizations across the world are equipped with hundreds of computers, connected to each other through intranets and various types of LANs inside the organization itself, and through the Internet with the outer world, and are also equipped with data storage and handling devices. The information stored on these devices and the applications that run on the computers are highly important to the organization from the business, security and survival points of view.

Any organization, small or big, needs to secure the information it possesses and the applications it uses, in order to keep its customers' information safe and prevent any possible loss of business.

Security testing ensures that the systems and applications used by the organizations are secure and not vulnerable to any type of attack.

What are the different types of Security Testing?
Following are the main types of security testing:
  • Security Auditing: Security auditing includes direct inspection of the developed application and of the operating system and any system on which it is being developed. It also involves code walk-throughs.
  • Security Scanning: It is all about scanning and verification of the system and applications. During security scanning, auditors inspect and try to find out the weaknesses in the OS, applications and network(s).
  • Vulnerability Scanning: Vulnerability scanning involves scanning of the application for all known vulnerabilities. This scanning is generally done through various vulnerability scanning software.
  • Risk Assessment: Risk assessment is a method of analyzing and deciding the risk that depends upon the type of loss and the possibility/probability of loss occurrence. Risk assessment is carried out in the form of various interviews, discussions and analysis of the same. It helps in finding out and preparing possible backup-plan for any type of potential risk, hence contributing towards the security conformance.
  • Posture Assessment & Security Testing: This is a combination of Security Scanning, Risk Assessment and Ethical Hacking in order to reach a conclusive point and help your organization know its stand in context with Security.
  • Penetration Testing: In this type of testing, a tester tries to forcibly access and enter the application under test. In the penetration testing, a tester may try to enter into the application/system with the help of some other application or with the help of combinations of loopholes that the application has kept open unknowingly. Penetration test is highly important as it is the most effective way to practically find out potential loopholes in the application.
  • Ethical Hacking: It’s a forced intrusion of an external element into the system & applications that are under Security Testing. Ethical hacking involves number of penetration tests over the wide network on the system under test.

(Please note: the above list of security testing types is based on the Open Source Security Testing Methodology Manual by Pete Herzog and the Institute for Security and Open Methodologies - ISECOM)
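Of the types above, vulnerability scanning is the easiest to sketch: at its core, a scanner compares the components in use against a list of versions with published vulnerabilities. A minimal illustration in Python; the component names and advisory data below are entirely invented:

```python
# Toy vulnerability scan: flag components whose version appears in a
# (hypothetical) list of versions with published vulnerabilities.
KNOWN_VULNERABLE = {
    "webserver": {"1.0", "1.1"},   # invented advisory data
    "ssl-lib": {"0.9.7"},
}

def scan(inventory):
    """Return the (component, version) pairs flagged as vulnerable."""
    return [(component, version)
            for component, version in inventory.items()
            if version in KNOWN_VULNERABLE.get(component, set())]

inventory = {"webserver": "1.1", "ssl-lib": "1.0.2", "database": "5.7"}
print(scan(inventory))  # [('webserver', '1.1')]
```

Real scanners work from continuously updated vulnerability databases; the principle of matching what is deployed against what is known to be broken is the same.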

The best way to ensure security is to involve security-related assessments, audits and testing right from the first phase of system development. The level and form of the processes used in security testing vary depending upon the phase, condition and type of the system under test.

Software Testing - Bug and Statuses Used During A Bug Life Cycle

The main purpose behind any software development process is to provide the client (Final/End User of the software product) with a complete solution (software product), which will help him in managing his business/work in a cost-effective and efficient way. A software product developed is considered successful if it satisfies all the requirements stated by the end user.

Any software development process is incomplete if the most important phase of 'testing' the developed product is excluded. Software testing is a process carried out in order to find and fix previously undetected bugs/errors in the software product. It helps in improving the quality of the software product and making it secure for the client to use.

What is a bug/error?
A bug or an error in the software product is any exception that can hinder the functionality of either the whole software or a part of it.

How do I find out a Bug/Error?
Basically, test cases/scripts are run in order to find any unexpected behavior of the software product under test. Any such unexpected behavior or exception is called a bug.

What is a Test Case?
A test case is a documented set of steps/activities that are carried out or executed on the software in order to verify its functionality/behavior for a certain set of inputs.
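A documented test case translates directly into code: prepare the inputs, execute the steps, and confirm the behavior against the expected output. A minimal Python sketch; the `add` function is an invented stand-in for the real software under test:

```python
def add(a, b):
    """Stand-in for the software under test (illustrative only)."""
    return a + b

def test_add_two_positive_numbers():
    # Step 1: prepare the input data
    a, b = 2, 3
    # Step 2: execute the activity under test
    result = add(a, b)
    # Step 3: confirm the behavior against the expected output
    assert result == 5, f"expected 5, got {result}"

test_add_two_positive_numbers()
print("test passed")
```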

What do I do if I find a bug/error?
In normal terms, if a bug or error is detected in a system, it needs to be communicated to the developer in order to get it fixed.

Right from the first time any bug is detected until the point when it is fixed and closed, it is assigned various statuses: New, Assigned, Open, Fixed, Pending Retest, Retest, Closed, Reopen, Pending Reject, Rejected, Postponed and Deferred.

Please note that there are various ways to communicate the bug to the developer and track the bug status.

Statuses associated with a bug

New
When a bug is found/revealed for the first time, the software tester communicates it to his/her team leader (Test Lead) in order to confirm whether it is a valid bug. After getting confirmation from the Test Lead, the tester logs the bug, and the status 'New' is assigned to it.

Assigned
After the bug is reported as 'New', it goes to the development team, which verifies whether the bug is valid. If it is, the development leader assigns it to a developer to fix, and the status 'Assigned' is given to it.

Open
Once the developer starts working on the bug, he/she changes the status of the bug to 'Open' to indicate that he/she is working on it to find a solution.

Fixed
Once the developer makes necessary changes in the code and verifies the code, he/she marks the bug as 'Fixed' and passes it over to the Development Lead in order to pass it to the Testing team.

Pending Retest
After the bug is fixed, it is passed back to the testing team to get retested and the status of 'Pending Retest' is assigned to it.

Retest
The testing team leader changes the status of the bug, which is previously marked with 'Pending Retest' to 'Retest' and assigns it to a tester for retesting.

Closed
After the bug is assigned a status of 'Retest', it is again tested. If the problem is solved, the tester closes it and marks it with 'Closed' status.

Reopen
If, after retesting, the system behaves in the same way and the same bug arises once again, the tester reopens the bug and sends it back to the developer, marking its status as 'Reopen'.

Pending Reject
If the developers consider that a particular behavior of the system, which the tester has reported as a bug, is in fact intended and the bug is invalid, the bug is rejected and marked as 'Pending Reject'.

Rejected
If the Testing Leader finds that the system is working according to the specifications, or that the bug is invalid as per the explanation from the development team, he/she rejects the bug and marks its status as 'Rejected'.

Postponed
Sometimes, testing of a particular bug has to be postponed for an indefinite period. This situation may occur for many reasons, such as unavailability of test data or of a particular functionality. At such times, the bug is marked with 'Postponed' status.

Deferred
In some cases, a particular bug is of little importance and its fix can be put off; in that case, it is marked with a 'Deferred' status.
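The life cycle above can be modelled as a small state machine, which is essentially what bug-tracking tools enforce. A Python sketch; the transition table is a simplification of the flow just described, not a standard:

```python
from enum import Enum

class Status(Enum):
    NEW = "New"
    ASSIGNED = "Assigned"
    OPEN = "Open"
    FIXED = "Fixed"
    PENDING_RETEST = "Pending Retest"
    RETEST = "Retest"
    CLOSED = "Closed"
    REOPEN = "Reopen"
    PENDING_REJECT = "Pending Reject"
    REJECTED = "Rejected"
    POSTPONED = "Postponed"
    DEFERRED = "Deferred"

# Allowed transitions, simplified from the flow described above.
TRANSITIONS = {
    Status.NEW: {Status.ASSIGNED, Status.PENDING_REJECT,
                 Status.POSTPONED, Status.DEFERRED},
    Status.ASSIGNED: {Status.OPEN},
    Status.OPEN: {Status.FIXED},
    Status.FIXED: {Status.PENDING_RETEST},
    Status.PENDING_RETEST: {Status.RETEST},
    Status.RETEST: {Status.CLOSED, Status.REOPEN},
    Status.REOPEN: {Status.ASSIGNED},
    Status.PENDING_REJECT: {Status.REJECTED},
}

class Bug:
    def __init__(self, title):
        self.title = title
        self.status = Status.NEW        # every bug starts as 'New'

    def move_to(self, new_status):
        if new_status not in TRANSITIONS.get(self.status, set()):
            raise ValueError(f"illegal transition {self.status} -> {new_status}")
        self.status = new_status

# A bug travelling the happy path from New to Closed.
bug = Bug("Crash on empty input")
for s in (Status.ASSIGNED, Status.OPEN, Status.FIXED,
          Status.PENDING_RETEST, Status.RETEST, Status.CLOSED):
    bug.move_to(s)
print(bug.status.value)  # Closed
```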

Software Testing Interview Questions and Answers

After the boom in software development, it is the software testing industry which has presented the job market with a plethora of career opportunities in software testing. There are various reasons why someone would want to take up a job in the software testing industry: some like software jobs but are not really keen on software development, while others simply find software testing jobs more attractive than software development jobs. Once you have decided to opt for software testing jobs, you will have to arm yourself with the basic software testing interview questions and answers.

Manual Software Testing Interview Questions and Answers

As a software tester, a person should have certain imperative qualities: he or she should be observant, creative, innovative, speculative, patient, etc. It is important to note that when you opt for manual testing, it is an accepted fact that the job is going to be tedious and laborious. Whether you are a fresher or experienced, there are certain questions to which you should know the answers.

What is a test case?
Find the answer to this question in the article titled test cases.

Explain the bug life cycle in detail.
This is one of the most commonly asked interview questions; hence it is always a part of software testing interview questions and answers for the experienced as well as freshers. The bug life cycle comprises the stages a bug or defect goes through before it is fixed, deferred or rejected. Read in detail on the bug life cycle.

What are the phases of STLC?
Like there are different phases of the software development life cycle, there are different phases of software testing life cycle as well. Read through software testing life cycle for more explanation.

What is regression testing?
Regression testing is the testing of a particular component of the software, or of the entire software, after modifications have been made to it. The aim of regression testing is to ensure that new defects have not been introduced, especially in the areas where no changes have been made. In short, regression testing ensures that nothing has changed that should not have changed as a result of the modifications.

Explain stress testing.
Find the answer to this question in this article on stress testing.

What is a Review?
A review is an evaluation of a said product or project status to ascertain any discrepancies from the actual planned results and to recommend improvements to the said product. The common examples of reviews are informal review or peer review, technical review, inspection, walkthrough, management review. This is one of the manual testing interview questions.

What are the different types of software testing?
There are a number of types of software testing, which you can learn about in the linked article.

Explain in short, sanity testing, adhoc testing and smoke testing.
Sanity testing is a basic test, conducted to check whether all the components of the software compile with each other without any problem. It makes sure that there are no conflicting or duplicate functions or global variable definitions made by different developers. It can also be carried out by the developers themselves.

Smoke testing, on the other hand, is a testing methodology used to cover all the major functionality of the application without getting into its finer nuances. It is said to be the main functionality-oriented test.

Ad hoc testing is different from smoke and sanity testing. This term is used for software testing performed without any sort of planning and/or documentation. These tests are intended to be run only once; however, if a defect is found, they can be carried out again. Ad hoc testing is also said to be a part of exploratory testing.

What are stubs and drivers in manual testing?
Both stubs and drivers are a part of incremental testing, in which two approaches are used: bottom-up and top-down. Drivers are used in bottom-up testing. They are modules that invoke and exercise the components to be tested, and they mimic the future real modules.

A stub, on the other hand, is a skeletal or special-purpose implementation of a component, used to develop or test a component that calls or otherwise depends on it. It is a replacement for the called component.
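A short Python sketch of both ideas; the pricing function and tax service are invented for illustration:

```python
def price_with_tax(amount, tax_service):
    """Unit under test: depends on a tax-service component."""
    return amount + tax_service.tax_for(amount)

class TaxServiceStub:
    """Stub: skeletal replacement for a called component not built yet."""
    def tax_for(self, amount):
        return amount * 0.10   # canned, predictable answer

def driver():
    """Driver: invokes the unit under test from above and checks the result."""
    result = price_with_tax(100.0, TaxServiceStub())
    assert result == 110.0, result
    print("driver: price_with_tax OK")

driver()
```

The stub stands in for the component below the unit under test (top-down), while the driver stands in for the caller above it (bottom-up).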

Explain priority, severity in software testing.
Priority is the level of business importance assigned to a defect found. On the other hand, severity is the degree of impact that the defect can have on the development or operation of the component or the system.

Explain the waterfall model in testing.
The waterfall model is a software development life cycle model that also covers testing. It is one of the first models to be used for software testing.

Tell me about V model in manual testing.
The V model is a framework which describes the software development life cycle activities, right from requirements specification up to the software maintenance phase. Testing is integrated into each phase of the model. The phases start with user requirements, followed by system requirements, global design, detailed design and implementation, and end with system testing of the entire system. Each phase has its respective testing activity integrated into it, carried out in parallel with the development activities. The four test levels used by this model are component testing, integration testing, system testing and acceptance testing.

Difference between bug, error and defect.
Bug and defect essentially mean the same thing: a flaw in a component or system which can cause the component or system to fail to perform its required function. If a bug or defect is encountered during execution, it can cause the component or the system to fail. An error, on the other hand, is a human mistake which gives rise to an incorrect result. You may want to read about how to log a bug (defect), the contents of a bug, the bug life cycle, and the statuses used during a bug life cycle, which will help you understand the terms bug and defect better.

What is compatibility testing?
Compatibility testing is a part of non-functional tests carried out on the software component or the entire software to evaluate the compatibility of the application with the computing environment. It can be with the servers, other software, computer operating system, different web browsers or the hardware as well.

What is integration testing?
One of the software testing types, in which tests are conducted on the interfaces between components, on the interactions of the different parts of the system with the operating system, file system and hardware, and on the interfaces between different software systems. It may be carried out by the integrator of the system, but should ideally be carried out by a specific integration tester or test team.

Which are the different methodologies used in software testing?
Refer to software testing methodologies for detailed information on the different methodologies used in software testing.

Explain performance testing.
It is one of the non-functional types of software testing. The performance of a software system is the degree to which the system, or a component of it, accomplishes its designated functions within given constraints on processing time and throughput rate. Performance testing, therefore, is the process of testing to determine the performance of a software product.

Explain the testcase life cycle.
On average, a test case goes through the following phases. The first phase of the test case life cycle is identifying the test scenarios, either from the specifications or from the use cases designed to develop the system. Once the scenarios have been identified, test cases apt for them have to be developed. The test cases are then reviewed, and approval for them has to be obtained from the concerned authority. After the test cases have been approved, they are executed. When execution starts, the results of the tests have to be recorded. Test cases which pass are marked accordingly; if a test case fails, a defect has to be raised. When the defect is fixed, the failed test case has to be executed again.

Explain equivalence class partition.
It is a specification-based, or black box, technique. Gather more information from the article on equivalence partitioning.
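To make the idea concrete, here is a small Python sketch. The age field and its valid range of 18 to 60 are invented requirements; the point is that one representative value per partition is enough:

```python
def is_valid_age(age):
    """System under test (illustrative): accepts ages 18-60 inclusive."""
    return 18 <= age <= 60

# Three equivalence classes, one representative test value for each.
partitions = {
    "below range (invalid)": 10,   # stands for every age in 0..17
    "in range (valid)": 35,        # stands for every age in 18..60
    "above range (invalid)": 75,   # stands for every age above 60
}

for name, representative in partitions.items():
    print(f"{name}: age={representative} -> accepted={is_valid_age(representative)}")
```

Testing 10, 35 and 75 gives the same information as testing every age from 0 to 120, because all values in a partition are processed the same way.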

Explain statement coverage.
It is a structure-based, or white box, technique. Test coverage measures, in a specific way, the amount of testing performed by a set of tests. One type of test coverage is statement coverage: the percentage of executable statements which have been exercised by a particular test suite. The formula used for statement coverage is:

Statement Coverage = (Number of statements exercised / Total number of statements) * 100%
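As arithmetic, with invented numbers: a suite that exercises 45 of a program's 60 executable statements achieves 75% statement coverage.

```python
def statement_coverage(exercised, total):
    """Statement coverage, as a percentage, per the formula above."""
    return exercised / total * 100

print(f"{statement_coverage(45, 60):.0f}%")  # 75%
```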

What is acceptance testing.
Refer to the article on acceptance testing for the answer.

Explain compatibility testing.
The answer to this question is in the article on compatibility testing.

What is meant by functional defects and usability defects in general? Give appropriate example.
We will take the example of a login window to understand functional and usability defects. A functional defect: the user gives a valid user name but an invalid password and clicks the login button; the application accepts the credentials and displays the main window, where an error should have been displayed instead. A usability defect: the user gives a valid user name but an invalid password and clicks the login button; the application throws up an error message saying "Please enter valid user name" when the message should have been "Please enter valid Password."
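The login example can be captured in two checks. A Python sketch; the user store and the validation function are invented for illustration:

```python
USERS = {"alice": "s3cret"}   # invented user store

def login(username, password):
    """Stand-in login routine returning (success, message)."""
    if username not in USERS:
        return (False, "Please enter valid user name")
    if USERS[username] != password:
        return (False, "Please enter valid Password")
    return (True, "Welcome")

ok, message = login("alice", "wrong-password")
assert ok is False                               # functional check: must not log in
assert message == "Please enter valid Password"  # usability check: right message
print("functional and usability checks passed")
```

The first assertion would catch the functional defect (logging in on a bad password); the second would catch the usability defect (blaming the wrong field).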

What are the check lists, which a software tester should follow?
Read the link on check lists for software tester to find the answer to the question.

What is usability testing?
Refer to the article titled usability testing for an answer to this question.

What is exploratory testing?
Read the page on exploratory testing to find the answer.

What is security testing?
Read on security testing for an appropriate answer.

Explain white box testing.
One of the testing types used in software testing is white box testing. Read in detail on white box testing.

What is the difference between volume testing and load testing?
Volume testing checks whether the system can cope with a large amount of data, for example a large number of fields in a particular record or numerous records in a file. Load testing, on the other hand, measures the behavior of a component or a system under increased load. The increase in load can be in terms of the number of parallel users and/or parallel transactions. This helps to determine the amount of load which can be handled by the component or the software system.

What is pilot testing?
It is a test of a component of a software system, or of the entire system, under real operating conditions. The real environment helps to find defects in the system and prevents costly bugs from being detected later on. Normally, a group of users use the system before its complete deployment and give their feedback about it.

What is exact difference between debugging & testing?
When a test is run and a defect has been identified, it is the duty of the developer to first locate the defect in the code and then fix it. This process is known as debugging. In other words, debugging is the process of finding, analyzing and removing the causes of failures in the software. Testing, on the other hand, consists of both static and dynamic life cycle activities. It helps to determine that the software satisfies the specified requirements and is fit for purpose.

Explain black box testing.
Find the answer to the question in the article on black box testing.

What is verification and validation?
Read on the two techniques used in software testing namely verification and validation in the article on verification and validation.

Explain validation testing.
For an answer about validation testing, click on the article titled validation testing.

What is waterfall model in testing?
Refer to the article on waterfall model in testing for the answer.

Explain beta testing.
For answer to this question, refer to the article on beta testing.

What is boundary value analysis?
A boundary value is an input or an output value which resides on the edge of an equivalence partition. It can also be the value at the smallest incremental distance on either side of an edge, such as the minimum or maximum value of a range. Boundary value analysis is a black box testing technique in which the tests are based on boundary values.

What is system testing?
System testing is testing carried out on an integrated system to verify that the system meets the specified requirements. It is concerned with the behavior of the whole system, according to the scope defined. More often than not, system testing is the final test carried out by the development team, in order to verify that the system meets the specifications and to identify any defects which may be present.

What is the difference between retest and regression testing?
Retesting, also known as confirmation testing, reruns the test cases that failed the last time they were run, in order to verify the success of the corrective actions taken on the defects found. Regression testing, on the other hand, is the testing of a previously tested program after modifications, to make sure that no new defects have been introduced. In other words, it helps to uncover defects in the unchanged areas of the software.

What is a test suite?
A test suite is a set of several test cases designed for a component of a software or system under test, where the post condition of one test case is normally used as the precondition for the next test.

These are some of the software testing interview questions and answers for freshers and the experienced. This is not an exhaustive list, but I have tried to include as many software testing interview questions and answers, as I could in this article. I hope the article proves to be of help, when you are preparing for an interview. Here's wishing you luck with the interviews and I hope you crack the interview as well.

Software Testing - Acceptance Testing

Acceptance testing (also known as user acceptance testing) is a type of testing carried out in order to verify that the product has been developed as per the standards and specified criteria and meets all the requirements specified by the customer. This type of testing is generally carried out by a user/customer when the product has been developed externally by another party.

Acceptance testing falls under the black box testing methodology, in which the user is not much interested in the internal working/coding of the system, but evaluates the overall functioning of the system and compares it with the specified requirements. User acceptance testing is considered one of the most important tests performed by the user before the system is finally delivered or handed over to the end user.

Acceptance testing is also known as validation testing, final testing, QA testing, factory acceptance testing, application testing etc. In software engineering, acceptance testing may be carried out at two different levels: at the system provider level and at the end user level (hence the names user acceptance testing, field acceptance testing or end-user testing).

Acceptance testing in software engineering generally involves the execution of a number of test cases which together constitute a particular functionality, based on the requirements specified by the user. During acceptance testing, the system has to operate in a computing environment that imitates the actual operating environment of the user. The user may choose to perform the testing in an iterative manner or with a set of varying parameters (for example, missile guidance software can be tested under varying payloads, different weather conditions etc.).

The outcome of acceptance testing can be termed a success or a failure based on the critical operating conditions the system passes through successfully or unsuccessfully, and on the user's final evaluation of the system.

The test cases and test criteria in acceptance testing are generally created by the end user and cannot be produced without business scenario input from the user. This type of testing and test case creation involves the most experienced people from both sides (developers and users), such as business analysts, specialized testers, developers and end users.

Process involved in Acceptance Testing
  1. Test cases are created with the help of business analysts, business customers (end users), developers, test specialists etc.
  2. Test case suites are run against the input data provided by the user, for the number of iterations that the customer sets as the minimum required test runs.
  3. The outputs of the test cases run are evaluated against the criterion/requirements specified by user.
  4. Depending upon whether the outcome is as desired by the user, consistent over the number of test suites run, or inconclusive, the user may call it successful/unsuccessful or suggest some more test case runs.
  5. Based on the outcome of the test runs, the system may get rejected or accepted by the user with or without any specific condition.
Acceptance testing is done in order to demonstrate the ability of the system/product to perform as per the expectations of the user and to build confidence in the newly developed system/product. A sign-off on the contract stating that the system is satisfactory is possible only after successful acceptance testing.
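The five-step process above can be sketched as a loop in Python; the test cases, input data and acceptance threshold are all invented for illustration:

```python
def run_acceptance(test_cases, user_inputs, iterations, required_pass_rate=1.0):
    """Run the suite against user-supplied data (step 2), evaluate the
    outcomes (step 3), and decide acceptance or rejection (steps 4-5)."""
    passed = total = 0
    for _ in range(iterations):          # the customer's minimum required runs
        for case in test_cases:
            for data in user_inputs:     # input data provided by the user
                total += 1
                if case(data):
                    passed += 1
    return passed / total >= required_pass_rate

# One invented acceptance criterion: an order total must never be negative.
def total_is_non_negative(order):
    return sum(order) >= 0

accepted = run_acceptance([total_is_non_negative],
                          user_inputs=[[10, 5], [3], [0]],
                          iterations=3)
print("accepted" if accepted else "rejected")  # accepted
```

In practice the criteria come from the business scenarios supplied by the user, and a failed run leads to rework or further test-case runs rather than a simple boolean.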

Types of Acceptance Testing

User Acceptance Testing: User acceptance testing in software engineering is considered to be an essential step before the system is finally accepted by the end user. In general terms, user acceptance testing is a process of testing the system before it is finally accepted by user.

Alpha Testing & Beta Testing: Alpha testing is a type of acceptance testing carried out at the developer's site by users (internal staff). In this type of testing, the users test the system while the developer simultaneously observes and notes the outcome.

Beta testing is a type of testing done at user’s site. The users provide their feedback to the developer for the outcome of testing. This type of testing is also known as field testing. Feedback from users is used to improve the system/product before it is released to other users/customers.

Operational Acceptance Testing: This type of testing is also known as operational readiness/preparedness testing. It is a process of ensuring all the required components (processes and procedures) of the system are in place in order to allow user/tester to use it.

Contract and Regulation Acceptance Testing: In contract and regulation acceptance testing, the system is tested against the criteria specified in the contract document, and is also checked to ensure that it meets all government and local authority regulations and laws, as well as all basic standards.

Software Testing - An Introduction!

1. What is testing?

Software testing can be defined as an activity that helps in finding bugs/defects/errors in a software system under development, in order to provide a bug-free and reliable system/solution to the customer.

In other words, consider this example: suppose you are a good cook and are expecting some guests for dinner. You start making dinner; you prepare a few very delicious dishes (of course, ones you already know how to make). And finally, when you are about to finish, you ask someone (or check yourself) whether everything is fine and there is no extra salt/chili/anything which, if not in balance, could ruin your evening. This is what is called 'TESTING'.

You follow this procedure to make sure that you do not serve your guests something that is not tasty! Otherwise you will lose face and regret your failure!

2. Why do we go for testing?

Well, while making food, it's OK if something is a bit off; people might understand, eat the things you made and may well appreciate your work. But this isn't the case with software project development. If you fail to deliver a reliable, good and problem-free software solution, you fail in your project and may well lose your client. It can get even worse!

So, in order to make sure that you provide your client a proper software solution, you go for TESTING. You check whether there is any problem or error in the system which could make the software unusable for the client. You have software testers test the system and help find the bugs so they can be fixed on time. You find the problems, fix them, and then again try to find any remaining potential problems.

3. Why is there a need for testing?

OR

Why is there a need for 'independent/separate' testing?

This is the right question to ask because, before the concept of testing software as a separate 'testing project' existed, the testing process was still carried out, but by the developer(s) themselves at development time.

But you must know that if you make something, you hardly ever feel that there can be something wrong with it. It's a common trait of human nature: we feel there is no problem in a system we have developed ourselves, that it is perfectly functional and fully working. So the hidden bugs, errors and problems of the system remain hidden, and raise their heads when the system goes into production.

On the other hand, it's a fact that when one person starts checking something made by another person, there is a 99% chance that the checker/observer will find some problem with the system (even if the problem is just a word that has been spelled wrongly by mistake). Really weird, isn't it? But it's the truth!

Even though it seems odd in terms of human behavior, this trait has been put to use for the benefit of software projects (or, you may say, any type of project). When you develop something, you give it to be checked (TESTED) in order to find any problem that never came up during the development of the system. After all, if you can minimize the problems with the system you developed, it's beneficial for you. Your client will be happy if your system works without any problem, and it will generate more revenue for you.

BINGO, it's really great, isn't it? That's why we need testing!

4. What is the role of "a tester"?

A tester is a person who tries to find all possible errors/bugs in the system with the help of various inputs. A tester plays an important part in finding problems with the system and helps in improving its quality.

If you can find all the bugs and fix them, your system becomes more and more reliable.

A tester has to understand the limits which can make the system break and behave erratically. The more VALID BUGS a tester finds, the better a tester he/she is!

Software Testing - Compatibility Testing

Compatibility testing is one of the several types of software testing performed on a system that is built to certain criteria and has to perform specific functions in an already existing setup/environment. The compatibility of the system/application being developed with, for example, other systems/applications, the OS or the network decides many things, such as whether the system/application can be used in that environment at all and the demand for it. Many a time, users prefer not to opt for an application/system simply because it is not compatible with some other system/application, network, hardware or OS they are already using. This leads to a situation where the development efforts prove to be in vain.

What is Compatibility Testing

Compatibility testing is a type of testing used to ensure the compatibility of the system/application/website with various other objects, such as web browsers, hardware platforms, users (in case of a very specific requirement, such as a user who speaks and reads only a particular language), operating systems etc. This type of testing helps find out how well a system performs in a particular environment, including its hardware, network, operating system and other software.

Compatibility testing can be automated using automation tools or can be performed manually and is a part of non-functional software testing.

Developers generally look to evaluate the following elements in a computing environment (an environment which has a configuration similar to the actual environment in which the newly developed system/application is supposed to fit and start working):

Hardware: Evaluation of the performance of the system/application/website on a certain hardware platform. For example, if an all-platform compatible game is being tested for hardware compatibility, the developer may choose to test it on various combinations of chipsets (such as Intel, Macintosh, GeForce), motherboards etc.

Browser: Evaluation of the performance of the system/website/application on a certain type of browser. For example, a website is tested for compatibility with browsers like Internet Explorer, Firefox, etc. (Browser compatibility testing is usually also looked at as user experience testing, since it concerns the user's experience of the application/website on different browsers.)

Network: Evaluation of the performance of the system/application/website on a network with varying parameters, such as bandwidth and the capacity and operating speed of the underlying hardware, set up to replicate the actual operating environment.

Peripherals: Evaluation of the performance of the system/application in connection with various peripheral devices connected directly or via the network: printers, fax machines, telephone lines, etc.

Compatibility between versions: Evaluation of the performance of the system/application in connection with its own predecessor/successor versions (backward and forward compatibility). For example, Windows 98 was developed with backward compatibility with Windows 95.
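To make the backward-compatibility idea concrete, here is a minimal sketch of a loader that still accepts records saved by a hypothetical earlier version of an application (in this invented layout, version 1 files stored only a name, while version 2 added a score field):

```python
# Sketch: backward compatibility with a hypothetical v1 record layout.
# v1 files stored only a name; v2 added a "score" field.
def load_record(data: dict) -> dict:
    # Defaulting the missing field keeps old v1 files readable by v2 code.
    return {"name": data["name"], "score": data.get("score", 0)}

old_style = load_record({"name": "alice"})            # a v1 record
new_style = load_record({"name": "bob", "score": 7})  # a v2 record
print(old_style, new_style)
```

Compatibility tests for versions would feed the new code real artifacts produced by the old version (and vice versa for forward compatibility) and assert that they are still handled correctly.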

Software: Evaluation of the performance of system/application in connection with other software. For example: Software compatibility with operating tools for network, web servers, messaging tools etc.

Operating System: Evaluation of the performance of system/application in connection with the underlying operating system on which it will be used.

Databases: Many applications/systems operate on databases. Database compatibility testing is used to evaluate an application/system's performance in connection to the database it will interact with.

How helpful is it?

Compatibility testing can help developers understand the criteria that their system/application needs to fulfill in order to be accepted by intended users who are already using a particular OS, network, software, and hardware. It also helps users find out which system will fit better into the setup they are already using.

The most important use of compatibility testing is, as mentioned above, to ensure the system's performance in the computing environment in which it is supposed to operate. This helps in figuring out the changes/modifications/additions required to make the system/application compatible with that environment.

Software Testing - An Introduction To Usability Testing

Usability Testing:
As the term suggests, usability means how well something can be used for the purpose it was created for. Usability testing is a way to measure how easy, moderate, or hard the intended/end users find it to interact with and use the system, keeping its purpose in mind. As the standard statement goes, "usability testing measures the usability of the system."

Why Do We Need Usability Testing?
Usability testing is carried out to find out whether any change needs to be made to the developed system (be it a design change or a specific procedural or programmatic change) in order to make it more user-friendly, so that the intended/end user who ultimately buys and uses it receives a system he can understand and use with the utmost ease.

Changes suggested by the tester during usability testing are the most crucial points that can change the standing of the system in the intended/end user's view. The developer/designer of the system needs to incorporate the feedback from usability testing (which can be a very simple change in look and feel or a complex change in the logic and functionality of the system) into the design and code, in order to make the system more presentable to the intended/end user. (The word "system" here may mean a single object or an entire package consisting of more than one object.)

Developers often try to make the system as good-looking as possible while also fitting in the required functionality; in this endeavor, they may overlook error-prone conditions that are uncovered only when the end user is using the system in real time.
Usability testing helps developers study the practical situations in which the system will be used. The developer also gets to know the areas that are error-prone and the areas of improvement.

In simple words, usability testing is an in-house dummy release of the system before the actual release to the end users, where the developer can find and fix all possible loopholes.

How Is a Usability Test Carried Out?
A usability test, as mentioned above, is an in-house dummy release before the actual release of the system to the intended/end user. Hence, a setup is required in which developers and testers replicate situations as realistic as possible to project the real-time usage of the system. The testers try to use the system in exactly the same manner that any end user can or will. Note that in this type of testing, too, all the standard instructions of testing are followed, to make sure that testing is done in all directions: functional testing, system integration testing, unit testing, etc.

The outcome/feedback is noted down based on observations of how the user is using the system and of all the possible ways of use that may come into the picture, as well as on the behavior of the system and how easy or hard it is for the user to operate it. The user is also asked for feedback on what he/she thinks should be changed to improve the interaction between the system and the end user.

Usability testing measures various aspects, such as:
How much time do the tester/user and the system take to complete a basic flow?
How much time do people take to understand the system (per object), and how many mistakes do they make while performing any process or flow of operation?
How quickly does the user become familiar with the system, and how quickly can he/she recall the system's functions?
And the most important: how do people feel when they are using the system?
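The first three of these aspects lend themselves to simple summary statistics. The session data below is invented for illustration; in a real test, the numbers would come from observed sessions:

```python
# Sketch: summarizing hypothetical usability-session measurements.
# task_time is seconds to complete a basic flow; errors are mistakes made.
from statistics import mean

sessions = [
    {"user": "A", "task_time": 42.0, "errors": 1, "completed": True},
    {"user": "B", "task_time": 65.5, "errors": 3, "completed": True},
    {"user": "C", "task_time": 80.0, "errors": 5, "completed": False},
]

# Fraction of sessions where the user finished the flow unaided.
completion_rate = sum(s["completed"] for s in sessions) / len(sessions)
avg_time = mean(s["task_time"] for s in sessions)
avg_errors = mean(s["errors"] for s in sessions)

print(f"completion rate: {completion_rate:.0%}")
print(f"average time on task: {avg_time:.1f}s")
print(f"average errors per session: {avg_errors:.1f}")
```

The fourth aspect, how people feel, cannot be reduced to a number this way; it is usually gathered through questionnaires and interviews after the session.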

Over time, many people have formulated various measures and models for performing usability testing. Any of these models can be used to perform the test.

Advantages of Usability Testing
  • A usability test can be modified to cover many other types of testing, such as functional testing, system integration testing, unit testing, smoke testing, etc. (while keeping the main objective of usability testing in mind), in order to make sure that testing is done in all possible directions.
  • Usability testing can be very economical if planned properly, yet highly effective and beneficial.
  • If proper resources (experienced and creative testers) are used, a usability test can help in fixing the problems users may face even before the system is finally released. This can result in better performance and a more standard system.
  • Usability testing can help in uncovering potential bugs and potholes in the system which are generally not visible to developers and which even escape the other types of testing.
Usability testing is a very wide area of testing, and it needs a fairly high level of understanding of the field along with a creative mind. People involved in usability testing need skills like patience, the ability to listen to suggestions, and openness to any idea; most important of all, they should have good observation skills to spot and fix problems on the fly.