ATTA Exam Information and Guideline
Advanced Level Technical Test Analyst
Below is a complete topic breakdown based on the latest syllabus and course outline. It will give you a solid understanding of the exam objectives and the topics you need to prepare. These topics are covered in the exam's question and answer pool.
Exam ID : ATTA
Exam Title : Advanced Technical Test Analyst (ASTQB)
Number of Questions : 45
Passing Score : 65%
Exam Type : Multiple Choice Questions
- Summarize the generic risk factors that the Technical Test Analyst typically needs to consider.
- Summarize the activities of the Technical Test Analyst within a risk-based approach for testing activities.
- Write test cases from a given specification item by applying the Statement test technique to achieve a defined level of coverage.
- Write test cases from a given specification item by applying the Decision test technique to achieve a defined level of coverage.
- Write test cases from a given specification item by applying the Modified Condition/Decision Coverage (MC/DC) test technique to achieve a defined level of coverage.
- Write test cases from a given specification item by applying the Multiple Condition test technique to achieve a defined level of coverage.
- Write test cases from a given specification item by applying McCabe's Simplified Baseline Method.
- Understand the applicability of API testing and the kinds of defects it finds.
- Select an appropriate white-box test technique according to a given project situation.
- Use control flow analysis to detect if code has any control flow anomalies.
- Explain how data flow analysis is used to detect if code has any data flow anomalies.
- Propose ways to improve the maintainability of code by applying static analysis.
- Explain the use of call graphs for establishing integration testing strategies.
- Apply dynamic analysis to achieve a specified goal.
- For a particular project and system under test, analyze the non-functional requirements and write the respective sections of the test plan.
- Given a particular product risk, define the particular non-functional test type(s) which are most appropriate.
- Understand and explain the stages in an application's lifecycle where non-functional tests should be applied.
- For a given scenario, define the types of defects you would expect to find by using non-functional testing types.
- Explain the reasons for including security testing in a test strategy and/or test approach.
- Explain the principal aspects to be considered in planning and specifying security tests.
- Explain the reasons for including reliability testing in a test strategy and/or test approach.
- Explain the principal aspects to be considered in planning and specifying reliability tests.
- Explain the reasons for including performance testing in a test strategy and/or test approach.
- Explain the principal aspects to be considered in planning and specifying performance efficiency tests.
- Explain the reasons for including maintainability testing in a test strategy and/or test approach.
- Explain the reasons for including portability testing in a test strategy and/or test approach.
- Explain the reasons for including compatibility testing in a test strategy and/or test approach.
- Explain why review preparation is important for the Technical Test Analyst.
- Analyze an architectural design and identify problems according to a checklist provided in the syllabus.
- Analyze a section of code or pseudo-code and identify problems according to a checklist provided in the syllabus.
- Summarize the activities that the Technical Test Analyst performs when setting up a test automation project.
- Summarize the differences between data-driven and keyword-driven automation.
- Summarize common technical issues that cause automation projects to fail to achieve the planned return on investment.
- Construct keywords based on a given business process.
- Summarize the purpose of tools for fault seeding and fault injection.
- Summarize the main characteristics and implementation issues for performance testing tools.
- Explain the general purpose of tools used for web-based testing.
- Explain how tools support the practice of model-based testing.
- Outline the purpose of tools used to support component testing and the build process.
- Outline the purpose of tools used to support mobile application testing.
1. The Technical Test Analyst's Tasks in Risk-Based Testing
Keywords
product risk, risk assessment, risk identification, risk mitigation, risk-based testing
Learning Objectives for The Technical Test Analyst's Tasks in Risk-Based Testing
Risk-based Testing Tasks
- Summarize the generic risk factors that the Technical Test Analyst typically needs to consider
- Summarize the activities of the Technical Test Analyst within a risk-based approach for testing activities
1.1 Introduction
The Test Manager has overall responsibility for establishing and managing a risk-based testing strategy. The Test Manager usually will request the involvement of the Technical Test Analyst to ensure the risk-based approach is implemented correctly.
Technical Test Analysts work within the risk-based testing framework established by the Test Manager for the project. They contribute their knowledge of the technical product risks that are inherent in the project, such as risks related to security, system reliability and performance.
1.2 Risk-based Testing Tasks
Because of their particular technical expertise, Technical Test Analysts are actively involved in the following risk-based testing tasks:
• Risk identification
• Risk assessment
• Risk mitigation
These tasks are performed iteratively throughout the project to deal with emerging product risks and changing priorities, and to regularly evaluate and communicate risk status.
1.2.1 Risk Identification
By calling on the broadest possible sample of stakeholders, the risk identification process is most likely to detect the largest possible number of significant risks. Because Technical Test Analysts possess unique technical skills, they are particularly well-suited for conducting expert interviews, brainstorming with co-workers, and analyzing current and past experiences to determine where the likely areas of product risk lie. In particular, Technical Test Analysts work closely with other stakeholders, such as developers, architects, operations engineers, product owners, local support offices, and service desk technicians, to determine areas of technical risk impacting the product and project. Involving other stakeholders ensures that all views are considered; this is typically facilitated by Test Managers.
Risks that might be identified by the Technical Test Analyst are typically based on the [ISO25010] quality characteristics listed in Chapter 4, and include, for example:
• Performance efficiency (e.g., inability to achieve required response times under high load conditions)
• Security (e.g., disclosure of sensitive data through security attacks)
• Reliability (e.g., application unable to meet availability specified in the Service Level Agreement)
1.2.2 Risk Assessment
While risk identification is about identifying as many pertinent risks as possible, risk assessment is the study of those identified risks in order to categorize each risk and determine the likelihood and impact associated with it. The likelihood of occurrence is usually interpreted as the probability that the potential problem could exist in the system under test.
The Technical Test Analyst contributes to finding and understanding the potential technical product risk for each risk item whereas the Test Analyst contributes to understanding the potential business impact of the problem should it occur.
Project risks can impact the overall success of the project. Typically, the following generic project risks need to be considered:
• Conflict between stakeholders regarding technical requirements
• Communication problems resulting from the geographical distribution of the development organization
• Tools and technology (including relevant skills)
• Time, resource and management pressure
• Lack of earlier quality assurance
• High change rates of technical requirements
Product risk factors may result in higher numbers of defects. Typically, the following generic product risks need to be considered:
• Complexity of technology
• Complexity of code structure
• Amount of re-use compared to new code
• Large number of defects found relating to technical quality characteristics (defect history)
• Technical interface and integration issues
Given the available risk information, the Technical Test Analyst proposes an initial risk level according to the guidelines established by the Test Manager. For example, the Test Manager may determine that risks should be categorized with a value from 1 to 10, with 1 being the highest risk. The initial value may be modified by the Test Manager when all stakeholder views have been considered.
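By way of illustration only, the sketch below shows one way likelihood and impact scores might be combined into such a 1-to-10 risk level. The 1-to-5 input scales and the mapping are invented for this example; in practice the guidelines come from the Test Manager.

```python
# Hypothetical sketch: deriving an initial risk level from likelihood and
# impact scores. The 1-5 input scales and the mapping to a 1-10 level are
# invented for illustration; they are not prescribed by the syllabus.

def initial_risk_level(likelihood: int, impact: int) -> int:
    """Map likelihood and impact (1 = low .. 5 = high) to a risk level
    from 1 to 10, where 1 is the highest risk (as in the example above)."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be between 1 and 5")
    score = likelihood * impact          # 1 (lowest risk) .. 25 (highest risk)
    # Invert and scale so that the highest score maps to level 1.
    return min(10, max(1, round(11 - (score / 25) * 10)))

print(initial_risk_level(5, 5))  # -> 1  (very likely, severe impact)
print(initial_risk_level(1, 1))  # -> 10 (unlikely, negligible impact)
```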
1.2.3 Risk Mitigation
During the project, Technical Test Analysts influence how testing responds to the identified risks. This generally involves the following:
• Reducing risk by executing the most important tests (those addressing high risk areas) and by putting into action appropriate mitigation and contingency measures as stated in the test plan
• Evaluating risks based on additional information gathered as the project unfolds, and using that information to implement mitigation measures aimed at decreasing the likelihood or avoiding the impact of those risks
The Technical Test Analyst will often cooperate with specialists in areas such as security and performance to define risk mitigation measures and elements of the organizational test strategy. Additional information can be obtained from ISTQB® Specialist syllabi, such as the Advanced Level Security Testing syllabus [ISTQB_ALSEC_SYL] and the Foundation Level Performance Testing syllabus [ISTQB_FLPT_SYL].
2. White-box Test Techniques
Keywords
API testing, atomic condition, control flow testing, cyclomatic complexity, decision testing, modified condition/decision testing, multiple condition testing, path testing, short-circuiting, statement testing, white-box test technique
Learning Objectives for White-box Test Techniques
2.2 Statement Testing
TTA-2.2.1 (K3) Write test cases for a given specification item by applying the Statement test technique to achieve a defined level of coverage
2.3 Decision Testing
TTA-2.3.1 (K3) Write test cases for a given specification item by applying the Decision test technique to achieve a defined level of coverage
2.4 Modified Condition/Decision Coverage (MC/DC) Testing
TTA-2.4.1 (K3) Write test cases by applying the Modified Condition/Decision Coverage (MC/DC) test design technique to achieve a defined level of coverage
2.5 Multiple Condition Testing
TTA-2.5.1 (K3) Write test cases for a given specification item by applying the Multiple Condition test technique to achieve a defined level of coverage
2.6 Basis Path Testing
TTA-2.6.1 (K3) Write test cases for a given specification item by applying McCabe's Simplified Baseline Method
2.7 API Testing
TTA-2.7.1 (K2) Understand the applicability of API testing and the kinds of defects it finds
2.8 Selecting a White-box Test Technique
TTA-2.8.1 (K4) Select an appropriate white-box test technique according to a given project situation
2.1 Introduction
This chapter principally describes white-box test techniques. These techniques apply to code and other structures, such as business process flow charts.
Each specific technique enables test cases to be derived systematically and focuses on a particular aspect of the structure to be considered. The techniques provide coverage criteria which have to be measured and associated with an objective defined by each project or organization. Achieving full coverage does not mean that the entire set of tests is complete, but rather that the technique being used no longer suggests any useful tests for the structure under consideration.
The following techniques are considered in this syllabus:
• Statement testing
• Decision testing
• Modified Condition/Decision Coverage (MC/DC) testing
• Multiple Condition testing
• Basis Path testing
• API testing
The Foundation Syllabus [ISTQB_FL_SYL] introduces Statement testing and Decision testing. Statement testing exercises the executable statements in the code, whereas Decision testing exercises the decisions in the code and tests the code that is executed based on the decision outcomes.
The MC/DC and Multiple Condition techniques listed above are based on decision predicates and broadly find the same types of defects. No matter how complex a decision predicate may be, it will evaluate to either TRUE or FALSE, which will determine the path taken through the code. A defect is detected when the intended path is not taken because a decision predicate does not evaluate as expected.
The first four techniques are successively more thorough (and Basis Path testing is more thorough than Statement and Decision testing); more thorough techniques generally require more tests to be defined in order to achieve their intended coverage and find more subtle defects.
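To make the difference between these techniques concrete, the following sketch uses a hypothetical decision predicate with three atomic conditions. It shows one possible MC/DC test set (n + 1 = 4 tests for n = 3 atomic conditions), contrasted with the 2^3 = 8 combinations that Multiple Condition testing would require. The predicate and values are invented for illustration.

```python
from itertools import product

# Hypothetical decision predicate with three atomic conditions a, b, c.
# (Note: Python's 'and'/'or' short-circuit, which can affect how many
# combinations are actually evaluatable in real code.)
def decision(a: bool, b: bool, c: bool) -> bool:
    return (a and b) or c

# One possible MC/DC test set: n + 1 = 4 tests for 3 atomic conditions.
# Each condition is paired with a test that differs only in that condition
# and flips the outcome, showing its independent effect on the decision.
mcdc_tests = [
    (True,  True,  False),   # -> True
    (False, True,  False),   # -> False; differs from row 1 only in 'a'
    (True,  False, False),   # -> False; differs from row 1 only in 'b'
    (False, True,  True),    # -> True;  differs from row 2 only in 'c'
]
for a, b, c in mcdc_tests:
    print(a, b, c, "->", decision(a, b, c))

# Multiple Condition testing requires all 2**3 = 8 combinations instead:
assert len(list(product([False, True], repeat=3))) == 8
```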
2.2 Statement Testing
Statement testing exercises the executable statements in the code. Coverage is measured as the number of statements executed by the tests divided by the total number of executable statements in the test object, normally expressed as a percentage.
Applicability
This level of coverage should be considered as a minimum for all code being tested.
Limitations/Difficulties
Decisions are not considered. Even high percentages of statement coverage may not detect certain defects in the code's logic.
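The hypothetical sketch below illustrates this limitation: a single test executes every statement (100% statement coverage), yet never exercises the false outcome of the decision, where a defect hides.

```python
# Hypothetical example: 100% statement coverage can miss a decision defect.
def safe_divide(a: float, b: float) -> float:
    if b != 0:
        result = a / b
    return result  # defect: 'result' is never assigned when b == 0

# This single test executes every statement and passes:
assert safe_divide(10, 2) == 5.0

# Statement coverage never forces the false outcome of 'b != 0', so the
# failure below (an UnboundLocalError) remains undetected:
# safe_divide(10, 0)
```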
2.3 Decision Testing
Decision testing exercises the decisions in the code and tests the code that is executed based on the decision outcomes. To do this, the test cases follow the control flows that occur from a decision point (e.g., for an IF statement, one for the true outcome and one for the false outcome; for a CASE statement, test cases would be required for all the possible outcomes, including the default outcome).
Coverage is measured as the number of decision outcomes executed by the tests divided by the total number of decision outcomes in the test object, normally expressed as a percentage.
Compared to the MC/DC and Multiple Condition techniques described below, decision testing considers the entire decision as a whole and evaluates the TRUE and FALSE outcomes in separate test cases.
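Continuing the hypothetical safe_divide example from Section 2.2, decision coverage requires separate test cases for the TRUE and FALSE outcomes, and the FALSE outcome exposes the defect that statement testing missed.

```python
# Same hypothetical function as in Section 2.2:
def safe_divide(a: float, b: float) -> float:
    if b != 0:
        result = a / b
    return result  # 'result' is never assigned when b == 0

# Decision coverage: one test case per decision outcome.
assert safe_divide(10, 2) == 5.0          # TRUE outcome of 'b != 0'

try:
    safe_divide(10, 0)                    # FALSE outcome of 'b != 0'
except UnboundLocalError:
    print("defect exposed by the FALSE decision outcome")
```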
5.2 Using Checklists in Reviews
The most useful checklists are those gradually developed by an individual organization, because they reflect:
• The nature of the product
• The local development environment
o Staff
o Tools
o Priorities
• History of previous successes and defects
• Particular issues (e.g., performance efficiency, security)
Checklists should be customized for the organization and perhaps for the particular project. The checklists provided in this chapter are meant only to serve as examples.
Some organizations extend the usual notion of a software checklist to include “anti-patterns” that refer to common errors, poor techniques, and other ineffective practices. The term derives from the popular concept of “design patterns” which are reusable solutions to common problems that have been shown to be effective in practical situations [Gamma94]. An anti-pattern, then, is a commonly made error, often implemented as an expedient short-cut.
It is important to remember that if a requirement is not testable, meaning that it is not defined in such a way that the Technical Test Analyst can determine how to test it, then it is a defect. For example, a requirement that states “The software should be fast” cannot be tested. How can the Technical Test Analyst determine if the software is fast? If, instead, the requirement said “The software must provide a maximum response time of three seconds under specific load conditions”, then the testability of this requirement is substantially better assuming the “specific load conditions” (e.g., number of concurrent users, activities performed by the users) are defined. It is also an overarching requirement because this one requirement could easily spawn many individual test cases in a non-trivial application. Traceability from this requirement to the test cases is also critical because if the requirement should change, all the test cases will need to be reviewed and updated as needed.
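As a hypothetical illustration of how such a requirement might translate into a single automated check, the sketch below measures response times under a simulated load of 50 concurrent users. The operation, user count, and threshold are invented for this example; real performance testing would use a dedicated tool (see Chapter 6).

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for the operation under test.
def place_order() -> None:
    time.sleep(0.05)  # simulate work; a real test would call the system

def slowest_response(operation, concurrent_users: int) -> float:
    """Run 'operation' once per simulated user in parallel and return the
    slowest observed response time in seconds."""
    def timed_call(_) -> float:
        start = time.perf_counter()
        operation()
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        return max(pool.map(timed_call, range(concurrent_users)))

# "A maximum response time of three seconds under specific load conditions",
# with the load condition defined here as 50 concurrent users:
assert slowest_response(place_order, concurrent_users=50) <= 3.0
```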
5.2.1 Architectural Reviews
Software architecture consists of the fundamental organization of a system, embodied in its components, their relationships to each other and the environment, and the principles governing its design and evolution [ISO42010], [Bass03].
Checklists used for architecture reviews could, for example, include verification of the proper implementation of the following items, which are quoted from [Web-2] (the first item, connection pooling, is illustrated in a sketch after this list):
• “Connection pooling - reducing the execution time overhead associated with establishing database connections by establishing a shared pool of connections
• Load balancing – spreading the load evenly between a set of resources
• Distributed processing
• Caching – using a local copy of data to reduce access time
• Lazy instantiation
• Transaction concurrency
• Process isolation between Online Transactional Processing (OLTP) and Online Analytical Processing (OLAP)
• Replication of data”
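As an illustration of the first quoted item, a minimal connection pool might look like the sketch below. It is simplified for illustration (using Python's standard library and an in-memory SQLite database); a production system would rely on an established pooling library and handle timeouts, errors, and connection health.

```python
import queue
import sqlite3

# Minimal connection-pool sketch: a fixed set of connections is created once
# and shared, avoiding the per-request overhead of establishing connections.
class ConnectionPool:
    def __init__(self, size: int, database: str):
        self._pool: "queue.Queue[sqlite3.Connection]" = queue.Queue(maxsize=size)
        for _ in range(size):
            # check_same_thread=False allows connections to be shared
            # across the threads that acquire them from the pool.
            self._pool.put(sqlite3.connect(database, check_same_thread=False))

    def acquire(self) -> sqlite3.Connection:
        return self._pool.get()       # blocks until a connection is free

    def release(self, conn: sqlite3.Connection) -> None:
        self._pool.put(conn)

pool = ConnectionPool(size=5, database=":memory:")
conn = pool.acquire()
conn.execute("SELECT 1")
pool.release(conn)
```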
5.2.2 Code Reviews
Checklists for code reviews are necessarily very detailed, and, as with checklists for architecture reviews, are most useful when they are language, project and company-specific. The inclusion of code-level anti-patterns is helpful, particularly for less experienced software developers.
Checklists used for code reviews could include the following items:
1. Structure
• Does the code completely and correctly implement the design?
• Does the code conform to any pertinent coding standards?
• Is the code well-structured, consistent in style, and consistently formatted?
• Are there any uncalled or unneeded procedures or any unreachable code?
• Are there any leftover stubs or test routines in the code?
• Can any code be replaced by calls to external reusable components or library functions?
• Are there any blocks of repeated code that could be condensed into a single procedure?
• Is storage use efficient?
• Are symbolics used rather than “magic number” constants or string constants?
• Are any modules excessively complex, such that they should be restructured or split into multiple modules?
2. Documentation
• Is the code clearly and adequately documented with an easy-to-maintain commenting style?
• Are all comments consistent with the code?
• Does the documentation conform to applicable standards?
3. Variables
• Are all variables properly defined with meaningful, consistent, and clear names?
• Are there any redundant or unused variables?
4. Arithmetic Operations
• Does the code avoid comparing floating-point numbers for equality? (See the sketch following this checklist.)
• Does the code systematically prevent rounding errors?
• Does the code avoid additions and subtractions on numbers with greatly different magnitudes?
• Are divisors tested for zero or noise?
5. Loops and Branches
• Are all loops, branches, and logic constructs complete, correct, and properly nested?
• Are the most common cases tested first in IF-ELSEIF chains?
• Are all cases covered in an IF-ELSEIF or CASE block, including ELSE or DEFAULT clauses?
• Does every case statement have a default?
• Are loop termination conditions obvious and invariably achievable?
• Are indices or subscripts properly initialized, just prior to the loop?
• Can any statements that are enclosed within loops be placed outside the loops?
• Does the code in the loop avoid manipulating the index variable or using it upon exit from the loop?
6. Defensive Programming
• Are indices, pointers, and subscripts tested against array, record, or file bounds?
• Are imported data and input arguments tested for validity and completeness?
• Are all output variables assigned?
• Is the correct data element operated on in each statement?
• Is every memory allocation released?
• Are timeouts or error traps used for external device access?
• Are files checked for existence before attempting to access them?
• Are all files and devices left in the correct state upon program termination?
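The sketch below illustrates two of the arithmetic items from the checklist above: avoiding floating-point equality comparison and testing divisors before use. The values and the average_rate function are invented for this example.

```python
import math

# Avoid comparing floating-point numbers for equality (checklist item 4):
a = 0.1 + 0.2
print(a == 0.3)                            # False, due to rounding error
print(math.isclose(a, 0.3, rel_tol=1e-9))  # True: tolerance-based comparison

# Test divisors for zero (or values indistinguishable from zero) before use:
def average_rate(total: float, count: float) -> float:
    if math.isclose(count, 0.0, abs_tol=1e-12):
        raise ValueError("count is zero or too close to zero to divide by")
    return total / count

print(average_rate(100.0, 4.0))  # -> 25.0
```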
6. Test Tools and Automation
Keywords
capture/playback, data-driven testing, debugging, emulator, fault seeding, hyperlink, keyword-driven testing, performance efficiency, simulator, test execution, test management
Learning Objectives for Test Tools and Automation
6.1 Defining the Test Automation Project
TTA-6.1.1 (K2) Summarize the activities that the Technical Test Analyst performs when setting up a test automation project
TTA-6.1.2 (K2) Summarize the differences between data-driven and keyword-driven automation
TTA-6.1.3 (K2) Summarize common technical issues that cause automation projects to fail to achieve the planned return on investment
TTA-6.1.4 (K3) Construct keywords based on a given business process
6.2 Specific Test Tools
TTA-6.2.1 (K2) Summarize the purpose of tools for fault seeding and fault injection
TTA-6.2.2 (K2) Summarize the main characteristics and implementation issues for performance testing tools
TTA-6.2.3 (K2) Explain the general purpose of tools used for web-based testing
TTA-6.2.4 (K2) Explain how tools support the practice of model-based testing
TTA-6.2.5 (K2) Outline the purpose of tools used to support component testing and the build process
TTA-6.2.6 (K2) Outline the purpose of tools used to support mobile application testing
6.1 Defining the Test Automation Project
In order to be cost-effective, test tools, particularly those which support test execution, must be carefully architected and designed. Implementing a test execution automation strategy without a solid architecture usually results in a tool set that is costly to maintain, insufficient for the purpose and unable to achieve the target return on investment.
A test automation project should be considered a software development project. This includes the need for architecture documentation, detailed design documentation, design and code reviews, component and component integration testing, as well as final system testing. Testing can be needlessly delayed or complicated when unstable or inaccurate test automation code is used.
There are multiple tasks that the Technical Test Analyst can perform regarding test execution automation. These include:
• Determining who will be responsible for the test execution (possibly in coordination with a Test Manager)
• Selecting the appropriate tool for the organization, timeline, skills of the team, and maintenance requirements (note this could mean deciding to create a tool to use rather than acquiring one)
• Defining the interface requirements between the automation tool and other tools, such as test management tools, defect management tools and tools used for continuous integration
• Developing any adapters which may be required to create an interface between the test execution tool and the software under test
• Selecting the automation approach, i.e., keyword-driven or data-driven (see Section 6.1.1 below)
• Working with the Test Manager to estimate the cost of the implementation, including training. In Agile projects this aspect would typically be discussed and agreed in project/sprint planning meetings with the whole team.
• Scheduling the automation project and allocating the time for maintenance
• Training the Test Analysts and Business Analysts to use and supply data for the automation
• Determining how and when the automated tests will be executed
• Determining how the automated test results will be combined with the manual test results
In projects with a strong emphasis on test automation, a Test Automation Engineer may be tasked with many of these activities (see the Advanced Level Test Automation Engineer syllabus [ISTQB_ALTAE_SYL] for details). Certain organizational tasks may be taken on by a Test Manager according to project needs and preferences. In Agile projects the assignment of these tasks to roles is typically more flexible and less formal.
These activities and the resulting decisions will influence the scalability and maintainability of the automation solution. Sufficient time must be spent researching the options, investigating available tools and technologies and understanding the future plans for the organization.
6.1.1 Selecting the Automation Approach
This section considers the following factors which impact the test automation approach:
• Automating through the GUI
• Applying a data-driven approach
• Applying a keyword-driven approach (both approaches are illustrated in the sketch at the end of this section)
• Handling software failures
• Considering system state
The Advanced Level Test Automation Engineer syllabus [ISTQB_ALTAE_SYL] includes further details on selecting an automation approach.
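The hypothetical sketch below contrasts the two approaches. The login business process, the keyword names, and the stand-in application call are all invented for illustration; a real project would drive an actual system under test through a test execution tool.

```python
# Stand-in for the real system under test (invented for illustration).
def login(user: str, password: str) -> bool:
    return password == "correct-password"

# --- Data-driven: one scripted test procedure, many data rows ---------------
login_data = [
    ("alice", "correct-password", True),
    ("alice", "wrong-password",   False),
]
for user, password, expected in login_data:
    assert login(user, password) == expected

# --- Keyword-driven: tests composed from business-level keywords ------------
def kw_login(user: str, password: str) -> None:
    assert login(user, password)

def kw_logout() -> None:
    pass  # stand-in for the real logout action

KEYWORDS = {"Login": kw_login, "Logout": kw_logout}

# Non-programmers can compose test cases as keyword/argument sequences:
test_case = [
    ("Login", ["alice", "correct-password"]),
    ("Logout", []),
]
for keyword, args in test_case:
    KEYWORDS[keyword](*args)   # the framework dispatches each keyword
```

In the data-driven approach the test logic stays in the script and only the data varies; in the keyword-driven approach the business-level vocabulary itself becomes the test specification.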