Creating and optimizing LLMs

At the end of the course, you will earn a Certificate of Completion from Spirit in Projects.

Are you eager to understand how Large Language Models (LLMs) function and how to optimize them for your specific needs? This training is designed to provide you with the foundational knowledge of LLM architecture and optimization, combined with practical, hands-on experience. Through a balanced blend of theory and interactive exercises, you’ll explore the technology behind LLMs, with a focus on their effective application. Our hands-on sessions will give you the opportunity to apply your learning immediately, ensuring you gain the practical skills to efficiently harness these models in real-world scenarios.

Objectives

  • Develop a fundamental understanding of Large Language Models (LLMs)
  • Gain knowledge of different LLM architectures
  • Acquire practical experience in fine-tuning and optimizing LLMs
  • Understand methods for evaluating and ensuring the quality of LLMs
  • Engage in exercises for the implementation and optimization of LLMs

Target groups

Business Analysts, Requirements Engineers, AI Experts, Project Managers, Project Directors, Software Architects, Software Designers, Software Developers, and anyone eager to acquire knowledge in this field.

Syllabus

1. An introduction to Large Language Models

  • Definition and key characteristics of LLMs
  • Various LLM-based architectures
  • Transformer architectures
  • Differences between types of LLMs (BERT, GPT, T5)
  • In groups: Compare different LLM architectures
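
To give a feel for this group exercise, here is a minimal sketch of how such a comparison could start, assuming the Hugging Face transformers library and the publicly available bert-base-uncased, gpt2 and t5-small checkpoints as illustrative stand-ins for the three model families:

```python
# Compare one encoder-only, one decoder-only and one encoder-decoder model
# by loading each checkpoint and counting its parameters.
from transformers import AutoModel

checkpoints = {
    "bert-base-uncased": "encoder-only",
    "gpt2": "decoder-only",
    "t5-small": "encoder-decoder",
}

for name, kind in checkpoints.items():
    model = AutoModel.from_pretrained(name)
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{name:20s} {kind:16s} {type(model).__name__:10s} {n_params / 1e6:6.0f}M parameters")
```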

2. Training and fine-tuning of LLMs

  • Overview
  • Data preparation
  • Training of language models
  • Objectives of pretraining
  • Implementing training loops
  • In groups: Perform a fine-tuning process on a pre-trained model
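
As a minimal sketch of what such a training loop looks like, assuming PyTorch, the Hugging Face transformers library and the publicly available distilgpt2 checkpoint; the two-sentence corpus and the hyperparameters are placeholders, not course material:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
tokenizer.pad_token = tokenizer.eos_token           # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

# Placeholder corpus; in real fine-tuning this would be a prepared dataset,
# and padded label positions would normally be masked with -100.
texts = ["A toy training sentence.", "Another toy training sentence."]
batch = tokenizer(texts, return_tensors="pt", padding=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for step in range(10):                              # a few steps on the toy batch
    outputs = model(**batch, labels=batch["input_ids"])
    loss = outputs.loss                             # causal language-modelling loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    print(f"step {step}: loss = {loss.item():.4f}")
```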

3. Techniques for optimization

  • Prompt Engineering
  • Knowledge distillation
  • Quantization and model compression
  • Hyperparameter tuning
  • Performance monitoring and improvement
  • In groups: Optimize a model using various techniques
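
One of the compression techniques above, post-training dynamic quantization, can be sketched in a few lines, assuming PyTorch's dynamic quantization API and the publicly available distilbert-base-uncased checkpoint; the on-disk size comparison is purely illustrative:

```python
import os
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("distilbert-base-uncased")

# Replace the weights of all nn.Linear layers with 8-bit integers;
# activations are quantized on the fly at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

def size_on_disk_mb(module: torch.nn.Module, path: str) -> float:
    """Serialize the state dict and report its file size in megabytes."""
    torch.save(module.state_dict(), path)
    return os.path.getsize(path) / 1e6

print(f"fp32 model: {size_on_disk_mb(model, 'fp32.pt'):.1f} MB")
print(f"int8 model: {size_on_disk_mb(quantized, 'int8.pt'):.1f} MB")
```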

4. Implementation

  • Architecture of LLM-based applications
  • Integrating LLMs in existing systems
  • Working with various frameworks
  • In groups: Develop your own LLM-based application
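
As a minimal sketch of integrating a model behind an application-facing function, assuming the Hugging Face transformers pipeline API and the publicly available distilgpt2 checkpoint; the wrapper name and prompt are illustrative:

```python
from transformers import pipeline

# Load the model once at application start-up and reuse it across calls.
_generator = pipeline("text-generation", model="distilgpt2")

def complete(prompt: str, max_new_tokens: int = 40) -> str:
    """Return the model's continuation of the given prompt."""
    result = _generator(prompt, max_new_tokens=max_new_tokens, do_sample=False)
    return result[0]["generated_text"]

if __name__ == "__main__":
    print(complete("LLM-based applications typically consist of"))
```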

5. Deployment

  • Best practices
  • Scaling and resource management
  • Cost optimization
  • Monitoring and maintenance
  • Security aspects and ethical considerations
  • In groups: Deploy and monitor an optimized model
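
A minimal sketch of serving such a model over HTTP, assuming FastAPI, uvicorn and the publicly available distilgpt2 checkpoint; the endpoint path and payload shape are illustrative, not a prescribed deployment architecture:

```python
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
generator = pipeline("text-generation", model="distilgpt2")  # loaded once at start-up

class GenerateRequest(BaseModel):
    prompt: str
    max_new_tokens: int = 40

@app.post("/generate")
def generate(request: GenerateRequest) -> dict:
    result = generator(request.prompt, max_new_tokens=request.max_new_tokens)
    return {"completion": result[0]["generated_text"]}

# Run locally (assuming this file is saved as serve.py) with:
#   uvicorn serve:app --host 0.0.0.0 --port 8000
```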

6. Evaluation and quality assurance

  • Metrics for evaluating LLMs
  • Test strategies and validation techniques
  • Error analysis and optimization
  • Continuous improvement
  • In groups: Perform a holistic evaluation of the model
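
A minimal sketch of one common metric, perplexity on held-out text, assuming PyTorch, the Hugging Face transformers library and the publicly available distilgpt2 checkpoint; the evaluation sentence is a placeholder:

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2")
model.eval()

text = "Language models are evaluated on text they have not seen during training."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # The model shifts the labels internally, so the returned loss is the
    # mean cross-entropy per predicted token.
    loss = model(**inputs, labels=inputs["input_ids"]).loss

print(f"cross-entropy: {loss.item():.3f}   perplexity: {math.exp(loss.item()):.1f}")
```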