Integrating AI into Applications

At the end of the course, you will earn a Certificate of Completion from Spirit in Projects.

Do you want to fully exploit the potential of Large Language Models (LLMs) in practice and integrate them into your existing systems? If you would like to explore the various application scenarios and develop practical solutions collaboratively, this training is for you. From text classification to system integration, you will learn how to effectively embed LLMs in your infrastructure, build modern RAG architectures, use Function Calling, and develop your own solutions for real-world problems.

Objectives

  • Gain practical experience with various LLM application scenarios
  • Implement state-of-the-art NLP solutions with current models
  • Integrate LLMs into existing system landscapes
  • Develop an understanding of the strengths and limitations of different model types
  • Build and optimize RAG (Retrieval-Augmented Generation) architectures
  • Use Function Calling and Tool Use for extended functionality
  • Conceive and implement your own LLM-based solutions

Target groups

AI Experts, Software Developers, System Architects, Software Architects, and anyone who wants to engage with artificial intelligence.

Syllabus

1. Fundamentals of LLM Integration

  • Architecture patterns for LLM-based systems
  • REST APIs and microservices with LLMs
  • Scalable infrastructures for LLM applications
  • Various deployment options (Cloud, On-Premise, Hybrid)
  • Multi-model strategies (GPT, Claude, Gemini, Open Source models)
  • Security and governance

2. Current LLM Models and APIs (2026)

  • OpenAI GPT: Multimodal capabilities and API integration
  • Anthropic Claude: Large context windows, Tool Use
  • Google Gemini: Multi-million-token context, Hybrid Reasoning
  • Secure use of Open Source models: Llama, Mistral, DeepSeek and more
  • Model selection and performance-to-cost ratio
  • Practical exercise: Comparison of different LLM APIs for use cases
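An API comparison of the kind practiced in this module can be approached with a small provider-agnostic adapter. The following sketch is illustrative, not course material: the provider names, stub functions, and per-token prices are invented placeholders standing in for real SDK calls and real pricing.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Provider:
    name: str
    complete: Callable[[str], str]   # stand-in for a real SDK call
    cost_per_1k_tokens: float        # illustrative rate, not a real list price

def compare(prompt: str, providers: list[Provider]) -> list[dict]:
    """Run the same prompt against every provider and collect answers and costs."""
    rows = []
    for p in providers:
        answer = p.complete(prompt)
        # Rough token estimate by whitespace split; real APIs report exact usage.
        est_tokens = len(prompt.split()) + len(answer.split())
        rows.append({"provider": p.name, "answer": answer,
                     "est_cost": est_tokens / 1000 * p.cost_per_1k_tokens})
    return rows

# Hypothetical providers backed by stub functions:
providers = [
    Provider("model_a", lambda p: "short answer", 0.01),
    Provider("model_b", lambda p: "a somewhat longer answer", 0.002),
]
report = compare("Summarize our refund policy.", providers)
```

Swapping the lambdas for real client calls turns this into a repeatable benchmark harness for the performance-to-cost comparison discussed above.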

3. RAG (Retrieval-Augmented Generation) Architectures

  • RAG fundamentals: Improving factual accuracy
  • Vector databases: Pinecone, Weaviate, Chroma, Qdrant
  • Embedding models and vector search
  • Chunking strategies and metadata management
  • RAG pipeline optimization
  • Hybrid approaches: RAG + fine-tuning
  • Practical exercise: Building a RAG system with vector database
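The retrieval step at the heart of a RAG pipeline can be sketched in a few lines. This is a deliberately minimal illustration: the bag-of-words "embedding" and the sample documents are toy stand-ins for the embedding models and vector databases covered in this module.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words vector; a real system uses a dedicated embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks most similar to the query."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, chunks: list[str]) -> str:
    """Ground the model by prepending the retrieved context to the question."""
    context = "\n".join(retrieve(query, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Invoices are archived for ten years.",
    "Support tickets are answered within one business day.",
    "The cafeteria opens at 8 a.m.",
]
prompt = build_prompt("How long are invoices archived?", docs)
```

The same retrieve-then-prompt shape carries over unchanged when the list is replaced by a vector database such as those named above; only `embed` and `retrieve` change.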

4. Function Calling and Tool Use

  • Concept of Function Calling
  • Integration of external tools and APIs
  • Multi-tool orchestration
  • Error handling and fallback strategies
  • Security aspects of Tool Use
  • Practical exercise: Extending LLM with external functions
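The dispatch loop behind Function Calling can be illustrated with a small registry. The tool names and functions below are invented examples; the JSON shape mirrors the common `{"name": ..., "arguments": {...}}` pattern but is not tied to any specific vendor API.

```python
import json

# Registry of callable tools the model may request (illustrative examples).
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
    "add": lambda a, b: a + b,
}

def dispatch(tool_call_json: str) -> str:
    """Execute a model-proposed tool call and return the result as text."""
    try:
        call = json.loads(tool_call_json)
        fn = TOOLS[call["name"]]
        return str(fn(**call["arguments"]))
    except (KeyError, TypeError, json.JSONDecodeError) as exc:
        # Fallback strategy: return a structured error the LLM can recover from
        # instead of crashing the conversation.
        return f"TOOL_ERROR: {exc}"

# Simulated model output requesting a tool call:
result = dispatch('{"name": "add", "arguments": {"a": 2, "b": 3}}')
```

Validating arguments before execution and never exposing arbitrary code paths in `TOOLS` are the security aspects this module examines in more depth.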

5. Text Classification and Sentiment Analysis

  • Building classification pipelines
  • Feature extraction with Transformer models
  • Integration into content management systems
  • Batch and real-time processing
  • Practical exercise: Integration of a sentiment analyzer into an existing web application
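A zero-shot classification pipeline of the kind built in this module can be sketched as a prompt template plus a pluggable model function. The `fake_llm` stub below is a keyword heuristic standing in for a real API call; it exists only so the sketch runs on its own.

```python
from typing import Callable

LABELS = ("positive", "negative", "neutral")

def classify(text: str, llm: Callable[[str], str]) -> str:
    """Prompt-based zero-shot classification; `llm` is any completion function."""
    prompt = (
        f"Classify the sentiment of the following review as one of "
        f"{', '.join(LABELS)}.\nReview: {text}\nSentiment:"
    )
    answer = llm(prompt).strip().lower()
    return answer if answer in LABELS else "neutral"  # fallback on odd output

def classify_batch(texts: list[str], llm: Callable[[str], str]) -> list[str]:
    # Batch mode: in production this is where rate limiting and retries live.
    return [classify(t, llm) for t in texts]

# Stub standing in for a real LLM call (toy keyword heuristic for the demo):
def fake_llm(prompt: str) -> str:
    return "positive" if "great" in prompt.lower() else "negative"

labels = classify_batch(["Great product!", "Terrible support."], fake_llm)
```

Because `llm` is just a function, the same pipeline serves both batch jobs and real-time endpoints, which is the integration pattern the web-application exercise builds on.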

6. Information Extraction and Document Processing

  • Multilingual NER systems
  • Integration with document processing pipelines
  • Connection to document management systems
  • Workflow integration
  • Practical exercise: Development of a document processing pipeline with LLM integration
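A document processing pipeline like the one developed here typically chains extraction and routing steps. In this illustrative sketch the regex extractor is a toy stand-in for the NER model or LLM call a real pipeline would make, and the queue names are invented.

```python
import re

def extract_entities(text: str) -> dict:
    """Toy extractor: dates and email addresses via regex. A real pipeline
    would call an NER model or an LLM at this step."""
    return {
        "dates": re.findall(r"\b\d{4}-\d{2}-\d{2}\b", text),
        "emails": re.findall(r"\b[\w.]+@[\w.]+\.\w+\b", text),
    }

def process_document(text: str) -> dict:
    entities = extract_entities(text)
    # Routing step: forward to the appropriate downstream system
    # (hypothetical queue names for the sketch).
    target = "invoice_queue" if "invoice" in text.lower() else "general_queue"
    return {"entities": entities, "route": target}

result = process_document("Invoice dated 2026-01-15, contact billing@example.com")
```

Keeping extraction and routing as separate functions is what makes the pipeline easy to connect to a document management system later: each stage can be swapped independently.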

7. Chatbots and Dialogue Systems

  • Integration with messaging platforms
  • Connection to CRM systems
  • Context management and session handling
  • Memory strategies for longer conversations
  • Monitoring and logging
  • Practical exercise: Integration of an LLM-based chatbot into an enterprise platform
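One memory strategy from this module, the sliding window, can be shown in a few lines: keep only the most recent exchanges so the prompt stays within the context limit. The class name and sample dialogue are illustrative, not part of any framework.

```python
from collections import deque

class ConversationMemory:
    """Sliding-window memory: keep only the most recent exchanges."""
    def __init__(self, max_turns: int = 3):
        self.turns = deque(maxlen=max_turns)  # old turns fall off automatically

    def add(self, user: str, assistant: str) -> None:
        self.turns.append((user, assistant))

    def as_prompt(self, new_message: str) -> str:
        history = "\n".join(f"User: {u}\nAssistant: {a}" for u, a in self.turns)
        return f"{history}\nUser: {new_message}\nAssistant:"

mem = ConversationMemory(max_turns=2)
mem.add("Hi", "Hello! How can I help?")
mem.add("What are your hours?", "We are open 9-17.")
mem.add("And on weekends?", "Closed on weekends.")   # pushes out the first turn
prompt = mem.as_prompt("Thanks!")
```

Alternatives covered in the module, such as summarizing older turns instead of dropping them, slot into the same `as_prompt` seam.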

8. Development Frameworks and Tools

  • LangChain: Modular workflows and memory handling
  • LlamaIndex: Specialization in RAG applications
  • Semantic Kernel (Microsoft): Enterprise integration
  • Hugging Face: Open-source models and Transformers
  • Practical exercise: Application development with LangChain
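The core idea these frameworks share, composing a prompt template, a model call, and an output parser into one invocable pipeline, can be shown without any framework at all. This sketch illustrates the concept only; it is not LangChain's actual API, and the stub model is a placeholder.

```python
from typing import Callable

class Chain:
    """Minimal pipeline: each step's output feeds the next step's input."""
    def __init__(self, *steps: Callable):
        self.steps = steps

    def invoke(self, value):
        for step in self.steps:
            value = step(value)
        return value

# Three composable stages (all illustrative):
template = lambda topic: f"Explain {topic} in one sentence."
fake_model = lambda prompt: f"MODEL_ANSWER({prompt})"   # stand-in for an LLM call
parser = lambda text: text.removeprefix("MODEL_ANSWER(").removesuffix(")")

chain = Chain(template, fake_model, parser)
answer = chain.invoke("RAG")
```

What LangChain, LlamaIndex, and Semantic Kernel add on top of this shape is memory handling, retries, tracing, and ready-made integrations, which is why the practical exercise uses a framework rather than hand-rolled composition.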

9. Observability and Monitoring

  • LangSmith for LLM debugging and tracing
  • Weights & Biases for experiment tracking
  • Logging of prompts and responses
  • Performance metrics and latency monitoring
  • Cost control and budgeting
  • Practical exercise: Monitoring setup for LLM applications
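The logging and cost-control ideas above can be combined in a thin wrapper around any model call. The cost rate and token estimate below are illustrative assumptions; real APIs report exact token usage, and tools like LangSmith add tracing on top of this pattern.

```python
import time

class MonitoredLLM:
    """Wrapper that records prompt, response, latency, and running cost."""
    def __init__(self, llm, cost_per_1k_tokens: float = 0.01):  # illustrative rate
        self.llm = llm
        self.cost_per_1k = cost_per_1k_tokens
        self.log: list[dict] = []
        self.total_cost = 0.0

    def complete(self, prompt: str) -> str:
        start = time.perf_counter()
        response = self.llm(prompt)
        latency = time.perf_counter() - start
        # Rough whitespace token estimate; real APIs return exact counts.
        tokens = len(prompt.split()) + len(response.split())
        cost = tokens / 1000 * self.cost_per_1k
        self.total_cost += cost
        self.log.append({"prompt": prompt, "response": response,
                         "latency_s": latency, "tokens": tokens, "cost": cost})
        return response

monitored = MonitoredLLM(lambda p: "Hello there!")  # stub model for the demo
reply = monitored.complete("Say hello")
```

Because the wrapper exposes the same `complete` interface as the underlying model, monitoring can be added to an existing application without touching call sites, and `total_cost` gives a hook for the budgeting alarms this module discusses.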

10. System Integration and Production Operation

  • API design and management
  • A/B testing and gradual rollout
  • Error handling and fallback strategies
  • Scaling and load balancing
  • Fine-tuning and adaptation methods
  • Practical exercise: Implementation of a complete LLM service with monitoring and failover
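The failover behavior built in the final exercise follows a pattern worth sketching: retry transient failures, then fall through to the next provider. The provider names and stub functions are hypothetical; a production version would also distinguish retryable from fatal errors and add backoff.

```python
def call_with_fallback(prompt: str, providers, retries: int = 2):
    """Try each (name, fn) provider in order; retry before failing over."""
    errors = []
    for name, fn in providers:
        for attempt in range(retries):
            try:
                return name, fn(prompt)
            except Exception as exc:
                errors.append((name, attempt, str(exc)))
    raise RuntimeError(f"All providers failed: {errors}")

def flaky(prompt):    # stand-in for an unreliable primary model
    raise TimeoutError("upstream timeout")

def stable(prompt):   # stand-in for a fallback model
    return f"fallback answer to: {prompt}"

provider, answer = call_with_fallback(
    "ping", [("primary", flaky), ("backup", stable)]
)
```

Collecting the error list rather than swallowing failures keeps the fallback observable, which ties this module back to the monitoring setup from the previous one.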

Advising after course completion
