Thursday, February 18, 2010

Multivariate testing

Multivariate testing is a process by which more than one component of a website may be tested in a live environment. It can be thought of in simple terms as numerous A/B tests performed on one page at the same time. Whereas an A/B test is usually performed to determine the better of two content variations, multivariate testing can in theory test the effectiveness of limitless combinations: testing three headlines, two images, and two button colours, for example, yields 3 × 2 × 2 = 12 page variations. The only practical limits on the number of variables and combinations in a multivariate test are computational power and the time it takes to gather a statistically valid sample of visitors for each combination.

Multivariate testing is usually employed to ascertain which content or creative variation produces the greatest improvement in the defined goals of a website, whether that is user registrations or successful completion of a checkout process (that is, the conversion rate). Dramatic increases can be seen by testing different copy text, form layouts, and even landing-page images and background colours. However, not all elements produce the same lift in conversions, and by comparing the results of different tests it is possible to identify the elements that consistently tend to produce the greatest increase in conversions.

Testing can be carried out on a dynamically generated website by setting up the server to display the different variations of content in equal proportions to incoming visitors. Statistics on how each visitor behaved after seeing the content under test must then be gathered and reported. Outsourced services can also provide multivariate testing on websites with only minor changes to page coding; these services insert their content into predefined areas of a site and monitor user behaviour.
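As a minimal sketch of that server-side approach (the element names and variants here are hypothetical, and real tools differ), each incoming visitor can be hashed to one combination, which both spreads traffic roughly evenly across the variations and guarantees a returning visitor always sees the same content:

    import hashlib
    from itertools import product

    # Hypothetical elements under test; each combination is one page variation.
    ELEMENTS = {
        "headline": ["Save time", "Save money", "Try it free"],
        "image":    ["hero_a.jpg", "hero_b.jpg"],
        "button":   ["Buy now", "Get started"],
    }

    # All combinations: 3 x 2 x 2 = 12 page variations.
    COMBINATIONS = list(product(*ELEMENTS.values()))

    def assign_variation(visitor_id: str) -> dict:
        """Deterministically map a visitor to one combination, so the same
        visitor sees the same variation on every visit."""
        digest = hashlib.sha256(visitor_id.encode()).hexdigest()
        index = int(digest, 16) % len(COMBINATIONS)
        return dict(zip(ELEMENTS, COMBINATIONS[index]))

    print(assign_variation("visitor-42"))
    # e.g. {'headline': 'Save money', 'image': 'hero_a.jpg', 'button': 'Buy now'}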

In a nutshell, multivariate testing can be seen as letting website visitors vote with their clicks for the content they prefer, and for the content most likely to lead them on to a defined goal. The testing is transparent to the visitor, and all commercial solutions can ensure that each visitor is shown the same content on every visit.

Sunday, February 7, 2010

System Test - Capacity Testing

Scheduling Capacity Testing
Performance testing and stress/load testing occur during system test, or once enough of the application has been delivered. The earlier capacity testing can be applied, the earlier defects can be detected and remediated. It is critical to detect capacity-related defects early because these defects often require architectural changes to the system. On the other hand, because these tests usually depend on functional interfaces, it may be wise to delay them until function testing has demonstrated some predefined level of reliability. A balance must be struck between testing early and testing when appropriate.

Designing Capacity Tests
A lack of capacity testing requirements and goals is one of the most common errors made during any capacity testing exercise. Test organizations often go through the process of capacity testing only to discover that the results do not give the project any useful information. The solution is to treat performance and stress/load testing the same as any other testing effort. The test organization needs to perform: Test Planning, Partitioning / Functional Decomposition, Requirements Definition / Verification, Test Case Design, Traceability (Traceability Matrix), Test Case Execution, Defect Management, and Coverage Analysis (for more on this, see "Requirements based Function Test").

Implementing Capacity Tests
Both performance and stress/load testing require a toolset that can put the system under a known load and measure the application's performance while under that load. Several shops have developed their own in-house solutions for capacity testing, and there are several freeware, shareware, and commercial capacity testing products available to meet this need. It is easy to fall into the "over-engineering" trap when implementing a capacity testing toolset; to avoid this trap, ensure the solution meets the test organization's goals and that the toolset does not become a "goal" in itself.
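To make the toolset requirement concrete, here is a deliberately small sketch in Python of a load driver (the URL and load figures are placeholders, not a recommendation of any particular product): it applies a known concurrent load and reports response-time statistics.

    import statistics
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://localhost:8080/health"  # placeholder endpoint under test
    CONCURRENT_USERS = 20                 # the known load to apply
    REQUESTS_PER_USER = 50

    def timed_request(_):
        """Issue one request and return its latency in seconds."""
        start = time.perf_counter()
        with urllib.request.urlopen(URL, timeout=10) as resp:
            resp.read()
        return time.perf_counter() - start

    def run_load_test():
        with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
            latencies = sorted(pool.map(
                timed_request, range(CONCURRENT_USERS * REQUESTS_PER_USER)))
        print(f"requests:  {len(latencies)}")
        print(f"median:    {statistics.median(latencies) * 1000:.1f} ms")
        print(f"95th pct:  {latencies[int(len(latencies) * 0.95)] * 1000:.1f} ms")

    if __name__ == "__main__":
        run_load_test()

A few dozen lines like this are often enough to answer the first capacity questions; anything more elaborate should be justified by the test organization's stated goals, per the over-engineering caveat above.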

Sunday, January 31, 2010

Software QA and Testing Less-Frequently-Asked-Questions

  • Why is it often hard for organizations to get serious about quality assurance?
  • Who is responsible for risk management?
  • Who should decide when software is ready to be released?
  • What can be done if requirements are changing continuously?
  • What if the application has functionality that wasn't in the requirements?
  • How can QA processes be implemented without reducing productivity?
  • What if an organization is growing so fast that fixed QA processes are impossible?
  • Will automated testing tools make testing easier?
  • What's the best way to choose a test automation tool?
  • How can it be determined if a test environment is appropriate?
  • What's the best approach to software test estimation?

Sunday, January 24, 2010

What is the defect life cycle?


The defect life cycle is:
  • New: when a defect is first found, it is recorded.
  • Open: the defect is sent to the developer.
  • Fixed: the developer fixes the defect and sends it back to the tester.
  • Closed: the tester checks whether the bug has been fixed; if it has, the defect is closed.
  • Reopen: if the bug has not been fixed, it is returned to the developer and the cycle repeats.
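These statuses and transitions can be modelled as a small state machine. A minimal sketch using the status names above (real defect trackers add further states such as Deferred or Rejected, which this post does not cover):

    # Allowed transitions between defect statuses, per the cycle above.
    TRANSITIONS = {
        "New":    {"Open"},              # recorded defect is sent to the developer
        "Open":   {"Fixed"},             # developer fixes it and sends it back
        "Fixed":  {"Closed", "Reopen"},  # tester verifies, or returns it
        "Reopen": {"Fixed"},             # developer tries again
        "Closed": set(),                 # terminal status
    }

    class Defect:
        def __init__(self, summary: str):
            self.summary = summary
            self.status = "New"

        def move_to(self, new_status: str) -> None:
            if new_status not in TRANSITIONS[self.status]:
                raise ValueError(f"illegal transition: {self.status} -> {new_status}")
            self.status = new_status

    bug = Defect("Login button unresponsive")  # hypothetical defect
    for status in ("Open", "Fixed", "Reopen", "Fixed", "Closed"):
        bug.move_to(status)
    print(bug.status)  # Closed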

Saturday, January 16, 2010

Risk-based Testing: Risk analysis techniques

Testing is the means used in software development to reduce the risks associated with a system. By testing, we hope to identify many of the problems before they reach the customer, thereby reducing the system’s risk. Unfortunately, testing alone can’t find all of the bugs, and with the rapid pace of application development in today’s world, testing has become a challenging proposition and often just doesn’t get done.

Trying to meet ever tighter deadlines while still delivering products that meet customer requirements is the greatest challenge testers face today. Formulating answers to age-old questions like “What should we test?” and “How long do we test?” requires different strategies in fast-paced environments. Risk analysis helps answer questions such as:

  • Does the product meet our quality expectations?
  • Is the application ready for users?
  • What can we expect when 2,000 people hit the site?
  • What are we risking if we release now?
Outcomes:
  • Risks and risk-reduction techniques relevant to software testing
  • Risk analysis techniques designed to identify software-testing-related risks (one such technique is sketched below)
  • Test design strategy based upon risk analysis
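One widely used risk analysis technique, which this post does not spell out and is offered here purely as an illustration, scores each candidate test area as likelihood × impact and tests in descending order of exposure. A minimal sketch with made-up areas and scores:

    # Each candidate test area is scored 1-5 for likelihood of failure
    # and 1-5 for business impact if it fails; exposure = likelihood * impact.
    areas = [
        ("checkout payment flow", 4, 5),   # hypothetical scores
        ("user registration",     3, 4),
        ("help page rendering",   2, 1),
        ("search under load",     4, 3),
    ]

    by_exposure = sorted(areas, key=lambda a: a[1] * a[2], reverse=True)

    print("Test order by risk exposure:")
    for name, likelihood, impact in by_exposure:
        print(f"  {likelihood * impact:>2}  {name}")

Ordering the test effort this way gives a defensible answer to “What should we test?” when there is not enough time to test everything.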

Sunday, January 10, 2010

Testing Cycle: A sample

Although variations exist between organizations, there is a typical cycle for testing. The sample below is common among organizations employing the Waterfall development model.

  • Requirements analysis: Testing should begin in the requirements phase of the software development life cycle. During the design phase, testers work with developers in determining what aspects of a design are testable and with what parameters those tests work.
  • Test planning: Test strategy, test plan, testbed creation. Since many activities will be carried out during testing, a plan is needed.
  • Test development: Test procedures, test scenarios, test cases, test datasets, test scripts to use in testing software.
  • Test execution: Testers execute the software based on the plans and tests and report any errors found to the development team.
  • Test reporting: Once testing is completed, testers generate metrics and make final reports on their test effort and whether or not the software tested is ready for release.
  • Test result analysis: Also called defect analysis, this is done by the development team, usually along with the client, to decide which defects should be fixed, which rejected (i.e. the software is found to be working properly), and which deferred to be dealt with later.
  • Defect retesting: Once a defect has been dealt with by the development team, it is retested by the testing team. Also known as resolution testing.
  • Regression testing: It is common to run a small test program built from a subset of tests for each integration of new, modified, or fixed software, to ensure that the latest delivery has not broken anything and that the software product as a whole is still working correctly (see the sketch after this list).
  • Test closure: Once the testing meets the exit criteria, key outputs such as lessons learned, results, logs, and documents related to the project are archived and used as a reference for future projects.
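As a minimal illustration of such a regression subset (the test names are hypothetical), pytest's markers let a team tag the quick, high-value checks and run only those on every delivery with "pytest -m regression":

    # test_checkout.py -- hypothetical tests; run the subset with: pytest -m regression
    import pytest

    @pytest.mark.regression
    def test_checkout_total_includes_tax():
        assert round(100.00 * 1.08, 2) == 108.00

    @pytest.mark.regression
    def test_empty_cart_total_is_zero():
        assert sum([]) == 0

    def test_rare_discount_combination():
        # Runs in the full suite only; too slow or obscure for the regression subset.
        assert max(0.0, 100.0 - 120.0) == 0.0

Registering the marker in pytest.ini keeps pytest from warning that "regression" is unknown.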

Monday, December 14, 2009

Testing Scope

  • Whether you need ad hoc testing, automated testing, or just application profiling, our engagement implementation roadmap can address every need in your testing life cycle.
  • Test Planning: Test strategy, test objectives & approach.
  • Test Analysis & Design: Analyze functional requirements, make automation decisions, design test cases, design test environments.
  • Test Environment: Install hardware, software, and test tools, and perform a smoke test (a minimal example follows this list).
  • Test Implementation: Develop test scripts and create test data.
  • Test Execution & Reporting: Execute test cases and test scripts, test report & metrics, and defect management.
  • Test Completion: Project acceptance, delivery of testware, postmortem.
  • Project Management: Tracking and control of testing processes, overall managed testing initiatives.
  • Configuration & Change Management: Version control, source control, change management, configuration items.
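The smoke test mentioned under Test Environment can be as simple as proving that the freshly installed environment answers on its key URLs before any deeper testing begins. A minimal sketch (the endpoints are placeholders):

    import sys
    import urllib.request

    # Hypothetical endpoints; a smoke test only proves the environment is usable.
    CHECKS = [
        "http://test-env.example.com/login",
        "http://test-env.example.com/api/health",
    ]

    failures = 0
    for url in CHECKS:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                ok = resp.getcode() == 200
        except OSError:
            ok = False
        print(f"{'PASS' if ok else 'FAIL'}  {url}")
        failures += 0 if ok else 1

    sys.exit(1 if failures else 0)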