Blog

Better Read, Think Better!

Optimism, Pessimism or Qualityism?

October 1, 2018

Software development, in most circumstances, starts with Optimism and ends with Pessimism. Any project is initiated with the expectation of delivering considerable value to the end customer. However, when it is finally delivered, the user's expectations may remain unfulfilled and pessimism sets in. Will the delivered project meet expectations? Will it work as envisaged? A feature may be very important, yet no one is sure whether it has been tested thoroughly and for all possible scenarios.

                         
                         “To an optimist, the glass is half full.To a pessimist, 
                         the glass is half empty. To a good tester, 
                         the glass is twice as big as it needs to be.”— Anonymous
                       

As the quote rightly suggests, the role of a good tester is to identify the right size. Testing more in the name of better quality assurance is a pessimistic view and a waste of time and resources. On the other hand, too little testing is driven by optimism but may compromise quality. A tester needs to strike the right balance between optimism and pessimism to identify the right quality and number of Test Scenarios and Test Cases.

As defined by Peter Thiel in his book “Zero to One: Notes on Startups, or How to Build the Future”, a definite optimist is hopeful of a bright future and has plans for it. At the other extreme, an indefinite pessimist is sure of a bleak future but has no plans to counter it. Striking the balance between Definite Optimism and Definite Pessimism leads the way to “Qualityism”.

How do we achieve Qualityism in testing? Or is it just utopian? Qualityism is strikingly similar to zero-bug software: there is no definite point at which we can claim to have achieved it. Having said that, by moving from pessimism towards optimism, a QA team can come close to Qualityism by combining and prioritizing the most needed features and the most important parameters within those features. A good combination of these, along with scenario planning, will ensure that the most likely real-life user interactions are covered. Thorough test execution on this basis will give end users a good amount of confidence. With the limited time and resources available for testing, it sounds easier said than done, doesn't it?

@TEST, our Test Management Suite, is enriched with features that help in the journey towards Qualityism. Its feature-rich and robust Domain Vault holds the functionalities and attributes applicable to your product portfolio, specific to your domain. @TEST is sturdy, yet flexible enough to allow Quality Analysts to add or remove those attributes and combinations for any product, or even at a project or sprint level.

Another important feature of @TEST is its ability to generate Test Scenarios and Test Cases automatically. The QA team does not have to spend hours churning out combinations to create Scenarios and Cases that ensure robust test coverage. These are generated automatically by a combination of algorithms encompassing test design techniques such as Boundary Value Analysis, Equivalence Partitioning, Decision Tables and Combinatorial (pairwise) testing.
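@TEST's internal algorithms are not described here, but the test design techniques named above are standard and easy to illustrate. The following is a minimal sketch, not @TEST's implementation: `boundary_values` applies classic Boundary Value Analysis to a numeric range, and `pairwise_cases` uses a simple greedy all-pairs strategy so that every pair of parameter values appears in at least one case. The parameter names and values are hypothetical.

```python
from itertools import combinations, product

def boundary_values(lo, hi):
    # Classic boundary value analysis: each edge of the range,
    # its nearest neighbour inside, and one value just outside.
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def pairwise_cases(parameters):
    """Greedy all-pairs generation: at each step, pick the candidate
    case that covers the most not-yet-covered value pairs."""
    names = list(parameters)
    # Every pair of values across every pair of parameters.
    uncovered = {
        ((a, va), (b, vb))
        for a, b in combinations(names, 2)
        for va in parameters[a]
        for vb in parameters[b]
    }
    cases = []
    while uncovered:
        best, best_gain = None, -1
        for combo in product(*(parameters[n] for n in names)):
            case = dict(zip(names, combo))
            gain = sum(
                1 for a, b in combinations(names, 2)
                if ((a, case[a]), (b, case[b])) in uncovered
            )
            if gain > best_gain:
                best, best_gain = case, gain
        cases.append(best)
        for a, b in combinations(names, 2):
            uncovered.discard(((a, best[a]), (b, best[b])))
    return cases

# Hypothetical parameters for a payment screen.
params = {
    "browser": ["Chrome", "Firefox"],
    "role": ["admin", "guest"],
    "amount": boundary_values(1, 100),  # 0, 1, 2, 99, 100, 101
}
cases = pairwise_cases(params)
print(len(cases))  # far fewer cases than the 2 * 2 * 6 = 24 exhaustive combinations
```

The brute-force scan over the full product keeps the sketch short; production tools use cleverer candidate construction, but the coverage guarantee is the same.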

Armed with the above Test Cases, the QA process moves from the definite pessimism of under-covered scenarios to the definite optimism of well-planned, well-groomed test coverage. @TEST also provides the flexibility to add or remove the generated Scenarios and Cases, giving the QA team complete control of the QA process on the way to Qualityism.

How much is Too much?

October 15, 2018

More may not always be merrier, certainly not in the case of Quality Assurance. In their zeal to ensure that all test scenarios are covered, QA teams tend to create a large number of test cases. Ensuring adequate test coverage is not an easy task: it takes considerable time to create test cases and data that include all the required combinations of parameters and cover all the required scenarios. It may only burden scarce resources further and lengthen the testing cycle. The challenge is in finding the right mix of combinations and the optimum number of test cases that will provide adequate coverage.

The question is, “What is adequate coverage?” The answer is subjective and leans heavily on the experience and judgement of the QA team. Normally, Quality Assurance teams decide the quantum of test cases based on their previous testing experience, business knowledge and confidence level.

While it is not possible to entirely remove the subjectivity of the QA team, it is possible to reduce its burden. Our flagship solution @TEST can help teams decide the optimum amount of test coverage without compromising the quality of testing. Its robust algorithms generate Test Cases and Test Data automatically for both positive and negative Test Scenarios. @TEST can build robust combinations of data values for all types of data, whether a numeric range or a list of values picked from a table or entered on an ad hoc basis. @TEST ensures adequate coverage across a numeric range by picking data randomly from predefined segments, in multiples of a step as required. Backed by the strength of @TEST, the QA team can rest assured about parameter coverage and combinations.
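The segmented sampling idea mentioned above is straightforward to sketch. The function below is an illustration of the technique, not @TEST's actual algorithm: it splits a numeric range into equal segments, draws one random value from each, and snaps each draw to a multiple of the step, so the generated data spreads across the whole range instead of clustering at one end. All names and parameters are hypothetical.

```python
import random

def sample_range(lo, hi, step, segments=4, seed=None):
    """Split [lo, hi] into equal segments and pick one random value
    from each, snapped to a multiple of `step` relative to `lo`."""
    rng = random.Random(seed)  # seed makes the sample reproducible
    width = (hi - lo) / segments
    values = []
    for i in range(segments):
        seg_lo = lo + i * width
        seg_hi = lo + (i + 1) * width
        raw = rng.uniform(seg_lo, seg_hi)
        # Snap to the nearest multiple of `step`, then clamp into range.
        snapped = lo + round((raw - lo) / step) * step
        values.append(min(max(snapped, lo), hi))
    return values

# One test value from each quarter of [0, 1000], in multiples of 25.
print(sample_range(0, 1000, step=25, segments=4, seed=7))
```

One value per segment keeps the case count low while still exercising low, middle and high parts of the range; raising `segments` trades cycle time for finer coverage.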

With the load of ensuring the adequacy of the combinations taken over by @TEST, the QA team can concentrate on planning their test cycle and on other priorities where their judgement is necessary. There are many methods to decide the optimum number of cases for test coverage. The decision depends on factors such as the impact on the end customer, the priority of the feature or scenario, and the available time and resources. @TEST can be helpful in these areas too.

By deciding on the number of test cases that provides optimum test coverage, the QA team can maximise its delivery capability.

Risk Based Testing or is it “Much ado about nothing”?

October 23, 2018

Risk Based Testing has acquired a lot of significance in recent times, as a significant number of organizations have started adopting it in their testing process. Is it worth the hype, or is it “Much Ado About Nothing”?

The Pareto principle, or the 80/20 rule as it is commonly known, can be applied to software testing from several perspectives. Risk assessment combined with the Pareto principle is a very useful tool for arriving at the optimum number of test cases for maximum coverage.

Theoretically, there is an infinite number of possible combinations of data to be tested for each Test Scenario. Testing each and every combination is both impossible and unnecessary. In a real-world scenario, a few combinations generally appear repeatedly; these are the most common and frequent possibilities. Applying the 80/20 rule here, 80% of real-world scenarios arise from 20% of the combinations. Those 20% of combinations become the first-priority items for testing.
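If usage data is available, for example from production logs, the 80/20 selection above can be made concrete. The sketch below (with entirely made-up data) counts how often each combination occurs and keeps the smallest, most-frequent-first set that accounts for 80% of the observed traffic.

```python
from collections import Counter

def pareto_priority(observed, coverage=0.80):
    """Return the smallest most-frequent-first set of combinations
    accounting for `coverage` of all observed occurrences."""
    counts = Counter(observed)
    total = sum(counts.values())
    selected, covered = [], 0
    for combo, n in counts.most_common():
        if covered / total >= coverage:
            break
        selected.append(combo)
        covered += n
    return selected

# Hypothetical production log: (payment method, currency) per transaction.
log = (
    [("card", "USD")] * 55 + [("card", "EUR")] * 25 +
    [("wallet", "USD")] * 12 + [("bank", "USD")] * 5 +
    [("bank", "EUR")] * 2 + [("wallet", "EUR")] * 1
)
priority = pareto_priority(log)
print(priority)  # 2 of the 6 combinations cover 80% of transactions
```

The remaining long-tail combinations are not discarded, only de-prioritized; they can be picked up in later cycles as time permits.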

Similarly, any product or feature offers numerous functionalities and transactions to the customer. The Pareto principle applies here too: 80% of all transactions executed in the real world come from 20% of the total transactions available. So those 20% of transactions are to be taken up on priority for testing.

Looking from the development perspective, 80% of defects are generally caused by 20% of the code. This needs to be dealt with slightly differently from the others: it is not the results we need to focus on, but the reasons for the defects. Analyzing defects and user feedback, and discussing with the development team to identify the common code, will help pinpoint the areas where the software can go wrong. This helps identify the modules involved and allows thorough testing of the transactions that use them.
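The defect analysis described above can start as something as simple as the sketch below: tally fixed defects by the module they touched (the module names and counts here are invented for illustration) and walk down the tally until the running total reaches 80%, revealing the hotspot modules that deserve the deepest testing.

```python
from collections import Counter

# Hypothetical defect-tracker export: the module each fixed bug touched.
defects = (
    ["payments"] * 45 + ["checkout"] * 38 + ["search"] * 8 +
    ["profile"] * 5 + ["reports"] * 3 + ["settings"] * 1
)

counts = Counter(defects)
total = sum(counts.values())
hotspots, covered = [], 0
for module, n in counts.most_common():
    hotspots.append(module)
    covered += n
    if covered / total >= 0.80:  # stop once 80% of defects are explained
        break

print(hotspots)  # the modules to target first for thorough testing
```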

Risk Based Testing allows the delivery team to take a proactive stance, addressing the high-probability, high-impact areas on priority and thereby minimizing negative outcomes in real-life scenarios. It lets QA teams execute the test cases with maximum impact and high probability while de-prioritizing those with lesser impact and lower likelihood of occurrence. This helps teams save precious time and resources without compromising quality or speed to market. Risk Based Testing can also be used to measure test quality, that is, how well the QA team is able to identify the critical defects. Risk Based Testing is worth the hype, particularly where time and resources are limited.

@TEST supports Risk Based Testing. Based on the requirements, the QA team can decide and assign risk factors to each of the elements mentioned above. The Test Case and Data generation algorithm in @TEST calculates the combined risk factor from all the elements and their respective risk factors. It also provides the flexibility to increase or decrease the combined factor based on user experience, if necessary. The combined risk factor can be benchmarked in @TEST at the project level, so that test cases and data are generated only above the benchmark.
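@TEST's combined-risk formula is not spelled out here, so the sketch below uses one common formulation only as an illustration: each element of a case gets a probability and impact score (1 to 5), per-element risk is probability times impact, the combined factor is their average, and only cases at or above the project benchmark survive. The case names, scores and benchmark are all hypothetical.

```python
def combined_risk(factors):
    """Average of per-element risk, each computed as probability x impact."""
    risks = [p * i for p, i in factors]
    return sum(risks) / len(risks)

def above_benchmark(cases, benchmark):
    """Keep only the cases whose combined risk meets the benchmark."""
    return [c for c in cases if combined_risk(c["factors"]) >= benchmark]

# Hypothetical cases: each element scored (probability, impact), 1-5 scale.
cases = [
    {"name": "login with expired password", "factors": [(5, 4), (4, 5)]},
    {"name": "export report to PDF",        "factors": [(2, 2), (1, 3)]},
    {"name": "transfer above daily limit",  "factors": [(4, 5), (5, 5)]},
]
selected = above_benchmark(cases, benchmark=15)
print([c["name"] for c in selected])
```

Raising or lowering the benchmark plays the same role as the user-experience adjustment mentioned above: it widens or narrows the set of generated cases without re-scoring every element.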

Bug free software, reality or a myth?

October 23, 2018

Quality Assurance, or Quality Control, has assumed an uncompromising role in the modern-day world. With the internet revolution bringing the necessary impetus, enlightened customers are demanding accuracy at ever greater speed. Whether in e-commerce, financial services or any other business, precision and alacrity, or the lack of them, will determine whether the customer remains loyal to the organization.

However, Quality Assurance, or Testing, is resource intensive in terms of both time and money. It requires quality resources to ensure that testing is thorough and complete. To ensure that the product is BUG FREE at the point of delivery, the quality of testing cannot be compromised.

Having said that, can we make any software BUG FREE? No way. To ensure that software is bug free, we would need to check each and every combination the developed solution is expected to handle. In a real-world scenario, these combinations run into billions, and it is next to impossible to test every one of them and confirm that it works. Many of those scenarios will never even be encountered during the solution's lifetime in production.

Agreed that BUG FREE software is a myth; how, then, can we ensure our customer gets an error-free experience every time they engage?

The trick is in understanding customer behavior and actions. The Pareto principle may work in most situations, but not for Quality Assurance, as we cannot afford to lose 20% of customers to the competition. An approach similar to Six Sigma needs to be adopted to keep errors to a minimum.

The answer lies in a risk-based approach. The Quality Assurance strategy and the volume of testing should be based on consumer behavior, actions, loss tolerance and loss minimization. Without compromising speed, quality can be assured to a great extent by identifying and focusing on these areas.

True, BUG FREE software is a myth, but one can minimize its impact by concentrating on loss minimization and risk tolerance.

@TEST supports this risk-based approach. Based on the requirements, the QA team can assign risk factors to each of the areas mentioned above, and @TEST's Test Case and Data generation algorithm calculates the combined risk factor from those elements. The combined factor can be adjusted based on user experience, and benchmarked at the project level so that test cases and data are generated only above the benchmark.