AI Testing

According to the ISTQB® syllabus

Do you enjoy exploring new test approaches, crafting innovative methods, training models, or writing scripts to automate tests? Do you have a keen understanding of data quality? If you are eager to explore AI-based systems and test them successfully, this training is ideal for you. You will gain an overview of the trends and challenges AI testers face, and you will acquire hands-on testing experience by designing your own test cases.

Objectives

  • Understand and apply AI for software testing
  • Design and apply test cases for AI-based systems
  • Test ML models using specific testing methods
  • Gain testing experience with AI and identify the determinants of successful AI testing
  • Prepare intensively for the CT-AI exam

Target groups:

Business Analyst, Requirements Engineer, Usability Expert, Scrum Master, AI Expert, Project Manager, Project Director, Demand Manager, Portfolio Manager, IT Project Director, Test Manager, Tester, Test Automation Specialist, Test Engineer, Enterprise Architect, System Architect, Software Architect, Software Designer, Software Developer and Product Owner

Certificate - ISTQB® CT-AI

The course is based on the current ISTQB® syllabus and prepares you for the certification exam, which you can take online.

Note:

There is an additional fee for the certification exam.

Syllabus

1. Introduction to artificial intelligence (AI)

  • Definition of AI and AI effect
  • Narrow, General, and Super AI
  • AI-based and conventional systems
  • AI technologies
  • AI development frameworks
  • Hardware for AI-based systems
  • AI as a Service (AIaaS)
  • Pre-trained models
  • Standards, regulations and AI

2. Quality characteristics for AI-based systems

  • Flexibility and adaptability
  • Autonomy
  • Evolution
  • Bias
  • Ethics
  • Side effects and reward hacking
  • Transparency, interpretability, and explainability
  • Safety and AI

3. Machine Learning (ML) – Overview

  • Forms of ML
  • ML workflow
  • Selecting a form of ML
  • Factors involved in ML algorithm selection
  • Overfitting and underfitting

4. ML – Data

  • Data preparation as part of the ML workflow
  • Training, validation and test datasets in the ML workflow
  • Dataset quality issues
  • Data quality and its effect on the ML model
  • Data labelling for supervised learning
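To give a small taste of the data topics above, here is a minimal sketch (our own illustration, not course material) of splitting a labelled dataset into the training, validation, and test sets the ML workflow relies on; the split ratios and helper name are assumptions for the example:

```python
import random

def split_dataset(samples, train=0.7, val=0.15, seed=42):
    """Shuffle and split labelled samples into train/validation/test sets."""
    data = list(samples)
    random.Random(seed).shuffle(data)          # reproducible shuffle
    n_train = int(len(data) * train)
    n_val = int(len(data) * val)
    return (data[:n_train],                    # training set: fit the model
            data[n_train:n_train + n_val],     # validation set: tune it
            data[n_train + n_val:])            # test set: final evaluation

train_set, val_set, test_set = split_dataset(range(100))
print(len(train_set), len(val_set), len(test_set))  # 70 15 15
```

Keeping the test set untouched until the final evaluation is what makes its results a credible estimate of model quality, a point the data chapter of the course explores in depth.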

5. ML functional performance metrics

  • Confusion matrix
  • ML functional performance metrics for classification, regression, and clustering
  • Limitations of ML functional performance metrics
  • Selecting ML functional performance metrics
  • Benchmark suites for ML performance
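To illustrate the metric topics listed above, a short sketch (our own example, not from the syllabus) derives accuracy, precision, recall, and F1 score from a binary confusion matrix; the sample labels are invented:

```python
def confusion_matrix(actual, predicted, positive=1):
    """Count true/false positives and negatives for a binary classifier."""
    tp = sum(a == positive and p == positive for a, p in zip(actual, predicted))
    fp = sum(a != positive and p == positive for a, p in zip(actual, predicted))
    fn = sum(a == positive and p != positive for a, p in zip(actual, predicted))
    tn = sum(a != positive and p != positive for a, p in zip(actual, predicted))
    return tp, fp, fn, tn

actual    = [1, 1, 1, 0, 0, 0, 1, 0]   # ground-truth labels
predicted = [1, 0, 1, 0, 1, 0, 1, 0]   # model output
tp, fp, fn, tn = confusion_matrix(actual, predicted)

accuracy  = (tp + tn) / (tp + fp + fn + tn)
precision = tp / (tp + fp)             # how many flagged positives were real
recall    = tp / (tp + fn)             # how many real positives were found
f1        = 2 * precision * recall / (precision + recall)
print(accuracy, precision, recall, f1)  # 0.75 0.75 0.75 0.75
```

The limitations covered in the course become visible even here: a single metric such as accuracy can look acceptable while hiding an unbalanced trade-off between precision and recall.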

6. Testing AI-based Systems – Overview

  • Specification of AI-based systems
  • Test levels for AI-based systems
  • Test data for testing AI-based systems
  • Testing for automation bias in AI-based systems
  • Documenting an AI component
  • Testing for concept drift
  • Selecting a test approach for an ML system

7. Testing AI-specific quality characteristics

  • Challenges testing self-learning systems
  • Testing autonomous self-learning systems
  • Testing for algorithmic, sample, and inappropriate bias
  • Challenges testing probabilistic and non-deterministic AI-based systems
  • Challenges testing complex AI-based systems
  • Testing transparency, interpretability and explainability of AI-based systems
  • Test oracles for AI-based systems
  • Test objectives and acceptance criteria

8. Methods and techniques for the testing of AI-based systems

  • Adversarial attacks and data poisoning
  • Pairwise testing
  • A/B testing
  • Back-to-back testing
  • Metamorphic testing (MT)
  • Experience-based testing of AI-based systems
  • Selecting test techniques for AI-based systems
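As a flavour of one of the techniques above, here is a minimal metamorphic-testing sketch; the toy sentiment model and the relation are our own illustration, not course material. When no exact test oracle exists, a metamorphic relation such as "shuffling word order must not change this model's score" lets us generate follow-up test cases automatically:

```python
import random

def sentiment_score(text):
    """Toy stand-in for an ML model: net count of positive vs negative words."""
    positive = {"good", "great", "excellent"}
    negative = {"bad", "poor", "awful"}
    words = text.lower().split()
    return sum(w in positive for w in words) - sum(w in negative for w in words)

def check_metamorphic_relation(text):
    """MR: this word-count model must give shuffled input the same score."""
    words = text.split()
    random.shuffle(words)
    follow_up = " ".join(words)
    return sentiment_score(text) == sentiment_score(follow_up)

print(check_metamorphic_relation("the service was good but the food was awful"))  # True
```

A real model that violates such a relation has revealed a defect without the tester ever knowing the "correct" score, which is exactly why metamorphic testing suits probabilistic AI-based systems.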

9. Test environments for AI-based systems

  • Test environments for AI-based systems
  • Virtual test environments for testing AI-based systems

10. Using AI for testing

  • AI technologies for testing
  • Using AI to analyze defect reports
  • Using AI for test case generation
  • Using AI for the optimization of regression test suites
  • Using AI for defect prediction
  • Using AI for testing user interfaces

Advising after course completion

Spirit in Projects