AI as part of the software development process: Faster implementation – with reliably high code quality

Artificial intelligence is currently transforming the entire IT industry – and with it, the fundamental way in which software is developed. Terms such as agentic programming and prompt-driven development are appearing more and more frequently in developer communities and represent a new approach: code is no longer written exclusively line by line, but is created in interaction with large language models (LLMs) – faster, more iterative and more strongly controlled by requirements. This is relevant for companies for three main reasons: shorter time-to-market, higher productivity and a stronger focus on technical requirements.

We used this approach in one of our internal projects at Spirit in Projects – and gained two key insights into the use of AI in development projects: AI can significantly accelerate the development of applications, but requires additional effort in terms of precise control and consistent quality assurance through reviews and tests.

After Stefan Hiermann compared Power Apps and conventional development with AI support using the same project in another blog post (👉 click here for part 1), in this post we show how AI integration was implemented in our software development, what experiences we gained in the process – and why AI-supported programming is more than just a short-term trend for us.

Project context

The aim of this internal project was to develop a dashboard for employees and management as a central portal for:

  • Time recording
  • Resource management
  • Holiday management

Technically, the solution is based on Django (Python) and HTMX. We designed the software and data architecture (including structure, roles, rights, data model) ourselves in order to create a robust and long-term maintainable foundation.
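To give an idea of what such a roles-and-rights design looks like in practice, here is a heavily simplified sketch in plain Python. The role names, permissions and classes are invented for illustration and are not our actual data model, which lives in Django models.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical roles and their permissions - illustrative only.
PERMISSIONS = {
    "employee":   {"record_time", "request_holiday"},
    "management": {"record_time", "request_holiday",
                   "approve_holiday", "view_reports"},
}

@dataclass
class User:
    name: str
    role: str

    def may(self, action: str) -> bool:
        """Check whether this user's role grants the given permission."""
        return action in PERMISSIONS.get(self.role, set())

@dataclass
class HolidayRequest:
    requester: User
    start: date
    end: date
    approved: bool = False

def approve(request: HolidayRequest, approver: User) -> None:
    """Only users whose role carries 'approve_holiday' may approve."""
    if not approver.may("approve_holiday"):
        raise PermissionError(f"{approver.name} may not approve holiday requests")
    request.approved = True
```

In a Django application, checks like `may()` would typically be enforced at the view layer, so that the same rights model applies to every entry point.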

Our approach: AI-supported development – without compromising on quality

The entire development process was based on clearly defined requirements. That is why we carried out a requirements engineering phase before implementation: Together with our stakeholders, we refined goals, roles, rights and processes, described user stories including use cases, and derived prioritised requirements with acceptance criteria – which served as a ‘single source of truth’. Learn more in our IREB/CPRE training courses.

Building on this, the code was generated on demand using an AI-native plugin (Kilo Code) directly in our development environment (Visual Studio Code). We used different models (including Gemini 3 Flash and Claude Sonnet 4.5). We have already described a comparison of these models in a separate article: 👉 Click here for the article

The AI-generated code was then regularly reviewed, adapted to our standards and manually supplemented as necessary, especially in cases of more complex logic or specific bugs. Through consistent testing, we were able to identify and fix errors early on. Combined with our technical expertise, this ensured that quality, maintainability and stability were guaranteed at all times.

The key challenges (and what we learned from them)

The use of AI in software development brings with it not only great opportunities but also legitimate challenges. Three points were particularly relevant for us – and are also the most important lessons we learned:

1) Formulating precisely what we really want

AI is particularly powerful when tasks are described in detail. In practice, however, it was sometimes surprisingly challenging to communicate the desired behaviour to the LLMs with sufficient precision to ensure that the right solution was actually produced.

Consequence: We did less ‘simple direct implementation’ and invested much more in refining requirements, concrete examples, and use and edge cases.

2) Paying more attention to side effects

A second, very practical lesson: when we changed something at point A in the code, something unexpected could break at point B. This is a well-known issue in software development, but it becomes even more relevant with AI-generated code and faster iterations.

Consequence: We invested noticeably more time in code reviews and tests to reliably ensure stability and maintainability.
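To make this concrete: the kind of test that catches such side effects early is small and cheap. The sketch below is illustrative rather than project code – a hypothetical overlap check for time entries, with assertions that pin down the edge cases so a later change cannot silently alter the behaviour.

```python
from datetime import datetime

def overlaps(start_a, end_a, start_b, end_b) -> bool:
    """True if two half-open time intervals [start, end) overlap.
    Entries that merely touch (end_a == start_b) do not overlap."""
    return start_a < end_b and start_b < end_a

# Edge-case tests that document the intended behaviour:
t = lambda h, m=0: datetime(2025, 1, 1, h, m)

assert overlaps(t(9), t(12), t(11), t(13))        # partial overlap
assert overlaps(t(9), t(17), t(10), t(11))        # containment
assert not overlaps(t(9), t(10), t(10), t(11))    # touching intervals
assert not overlaps(t(9), t(10), t(14), t(15))    # disjoint
```

Because such tests run in milliseconds, they can be executed after every AI-generated change, which is exactly where they pay off.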

3) Architecture remains the responsibility of the team

AI can be very helpful in implementation. However, we deliberately took responsibility for the architecture and data model ourselves and used AI primarily where it reliably speeds things up: in implementation, refactoring and detailed work.

Conclusion

Within a few weeks, we were able to launch a modern and clear dashboard that efficiently maps our processes and is tailored precisely to our requirements. What would probably have taken months using a traditional approach was achieved in a significantly shorter time.

For us, AI-supported software development is not a substitute for experience, but rather a tool that reinforces the experience we have already gained. We see the greatest benefit when AI is not ‘simply used’ but interacts with clear requirements, consistent quality assurance and architectural responsibility within the team. This significantly reduces development time while ensuring high code quality, stable, maintainable results and, most importantly, satisfied users.

Time tracking that eats up time: our Power Apps experiment

As a small team of three developers, we developed an internal time tracking app. The goal was to create a stable internal tool that would integrate seamlessly into our system landscape and remain maintainable in the long term.

Since our company is heavily integrated with Microsoft 365, we opted to build our time tracking solution in Power Apps. In retrospect, the decision seemed obvious – but it was not an efficient one.

Why Power Apps made sense for us initially

On paper, Power Apps offers many advantages for organisations that use Microsoft 365. Azure Active Directory, Outlook, Teams and SharePoint are directly connected. Authentication, user management and role models are already in place and do not need to be redesigned.

These are precisely the points that convinced us. The platform promised a quick start, low infrastructure costs and a low barrier to entry – especially for internal applications.

In practice, however, things turned out differently.

The reality: a year of familiarisation with a low-code world

The development of time tracking in Power Apps was not a quick start.
On the contrary, it took us almost a year to get the application working properly on the platform.

The biggest time factor was not the technical complexity, but familiarising ourselves with:

  • the mindset of Power Apps,
  • the limitations of formulas and components,
  • the peculiarities of Power Automate,
  • and the interaction of the various Microsoft tools.

Since you don’t work directly in the code, but rather with the tools, patterns, and abstractions provided by Microsoft, you are severely limited. Many things that are trivial in classic code can only be implemented indirectly or not at all.

A large part of the development time was spent not on technical logic, but on understanding and working around the platform mechanisms.

When low-code becomes a structural problem

The performance issues with Power Apps were not a gradual effect, but were present from the outset. Even when opening the application, it took a noticeably long time for all the necessary operations, dependencies and initialisations to load. This behaviour was independent of data volume or usage and clearly demonstrated the framework’s lack of efficiency.

Optimisation was hardly possible, as essential processes were beyond our control. In addition, systemic peculiarities such as the automatic deactivation of inactive Power Automate flows exacerbated the situation. In order to keep productive processes stable, technical workarounds had to be implemented without any added value.

At this point, it became clear that our development work was no longer based on technical requirements, but on platform limitations.

The comparison: three months of Django instead of a year of Power Apps

The switch was not an experiment, but a conscious decision.
We rebuilt the time tracking system – this time with Django in Python.

The actual development of the new platform took around three months.

Despite being a completely new development, we were significantly faster than with Power Apps. The reason was simple: we were able to work directly in the code again. Architecture, data models, business logic and performance were entirely in our hands.

We rely on prompt-driven development to boost productivity. AI-supported prompts assist us with standard logic, testing, refactoring and modelling. However, technical responsibility remains clearly with the development team.

The existing Power App was not abruptly shut down. The relevant data was decoupled from the Power Apps world via data flows and transferred to a separate database. This allowed us to migrate step by step without jeopardising ongoing operations.

This approach enabled us to build the new platform in parallel and gradually take it over.

Microsoft integration without low code

The change meant that Power Apps’ implicit Microsoft integration was no longer available. Authentication, user synchronisation and permissions had to be implemented explicitly – for example, via Azure AD, OAuth and Microsoft Graph.

In practice, this effort proved to be manageable and easily controllable. Instead of implicit platform logic, there are now explicit interfaces, clear configurations and traceable behaviour. Integration is no more difficult – it is more transparent and easier to test.
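As an illustration of how explicit this integration becomes, the following sketch shows the app-only OAuth client-credentials flow against Azure AD using the MSAL library. Tenant, client ID and secret are placeholders, and the helper names are ours, not taken from our production code.

```python
GRAPH_SCOPE = "https://graph.microsoft.com/.default"

def authority_url(tenant_id: str) -> str:
    """Build the Azure AD (Entra ID) authority URL for a given tenant."""
    return f"https://login.microsoftonline.com/{tenant_id}"

def fetch_graph_token(tenant_id: str, client_id: str, client_secret: str) -> str:
    """Acquire an app-only access token via the OAuth client-credentials flow.
    Requires the `msal` package and a registered Azure AD application."""
    import msal  # imported lazily so the sketch stays importable without msal
    app = msal.ConfidentialClientApplication(
        client_id,
        authority=authority_url(tenant_id),
        client_credential=client_secret,
    )
    result = app.acquire_token_for_client(scopes=[GRAPH_SCOPE])
    if "access_token" not in result:
        raise RuntimeError(result.get("error_description", "token request failed"))
    return result["access_token"]

# User synchronisation then becomes an explicit, testable call, e.g.
# GET https://graph.microsoft.com/v1.0/users with the token in the
# Authorization header.
```

Every step here is visible and configurable – which is precisely the transparency gained compared with the implicit platform logic of Power Apps.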

Conclusion: Low-code costs time – classic code saves it

Power Apps did not enable us to get started quickly. The platform required a long training period and forced us to think within the limits of its tools.

The Django-based redevelopment, on the other hand, was significantly faster, even though it was completely reimplemented. Direct code, clear architecture and full control over the system proved to be more efficient than any low-code abstraction.

Today, our development is more targeted, stable and sustainable.
Our time tracking system is back to doing what it’s supposed to do:
tracking time – not consuming it.

AI Governance Training for E-Control

It’s like with any new technology: there is a lot of uncertainty about how to use it, the scope of its possibilities is unclear, and often only a few people know the answers – which means they are quickly overwhelmed with questions. You can try to resist and avoid the technology (‘It worked great before!’), but how far that attitude will take you, and how successful your business will be with it, I’ll leave for you to answer.

The challenge

E-Control has recognised that the use of artificial intelligence (AI), in the sense of language models, has enormous potential and that the existing internal guidelines for the use of AI need to be adapted. In addition, the EU AI Regulation (EU AI Act) is gradually coming into force. The EU AI Act is the world’s first comprehensive law regulating AI and sets binding rules for safety, transparency and the protection of fundamental rights. The regulation places specific requirements on companies and organisations, including the obligation to provide all employees with appropriate training and further education in the use of AI. In addition to face-to-face participation, the AI training was also offered online and recorded so that everyone could benefit from the content regardless of their availability.

Our approach

Our aim with this training course was to give all participants a holistic view of the subject. The topic of AI is not something that can be covered in a single training course; it is constantly evolving. That is precisely why we wanted to convey our own teaching approaches to the participants, so that they would be able to identify the most relevant topics for themselves afterwards.

For E-Control itself, we focused on regulatory topics and prepared use cases for the training. This allowed us to directly discuss and debate the basics and understanding of the results of a language model. Anyone who has had a lot of experience with AI knows how varied the results can be. With special techniques or guidelines for prompt engineering, the results can be improved with a very high degree of probability.

In addition to the positive potential of using AI, the technology also harbours risks and challenges, which we clearly highlighted so that the participants were aware of them. At E-Control itself, the EU AI Act was an important pillar of our presentation, which is why more time was devoted to this topic.

The benefits

Spirit in Projects has years of experience in developing a wide range of training courses. This enabled us to quickly and professionally develop a concept together with the responsible parties at E-Control, and all of E-Control’s requirements were fully taken into account. The content presented was very well received by the participants and rated as helpful. The aim was not to discuss the topic once and then tick it off the list, but to create a foundation on which E-Control can build sustainably. Often it is not the individual department that matters, because the scope of application for AI is broader than one might think. The focus is always on the people who use the technology and are responsible for it.

Would you also like to introduce AI into your company in a clearly structured, legally compliant manner and without reservations? We would be happy to support you in exploiting its full potential.

Contact us now and arrange a consultation appointment!

IREB CERTIFICATION

In our previous blog post, you already read that we have been IREB/CPRE Platinum certified since January 2026. Today, we want to delve deeper into the topic and report on the origins of the IREB (International Requirements Engineering Board).

The IREB was born out of the vision of our founder Karl Schott to define requirements engineering (RE) as an independent discipline that is more uniform, comparable, and professional worldwide. His goal was to avoid misunderstandings, change requests, delays, and unplanned additional costs in projects.

Starting point: Why was the IREB needed?

In the early 2000s, requirements engineering was recognized as a success factor in projects, but:

  • there was no internationally standardized curriculum for it
  • training was highly heterogeneous (depending on the company, university, trainer, or method)
  • many projects suffered from unclear, contradictory, or poorly coordinated requirements
  • and there was no comparable proof of competence for RE expertise

In practice, this led to a need for standardization, as identified by our CEO Karl Schott. He defined the minimum requirements for “good” requirements engineering, regardless of whether you work in an agile, classic, or hybrid environment. Karl Schott founded the D-A-CH Board in 2003 with the aim of introducing a uniform standard in German-speaking countries.

Foundation: How was the IREB established?

Based on the D-A-CH Board, which played a leading role, the IREB was founded several years later as an internationally oriented, independent organization:

  • a neutral body that coordinates content,
  • one that defines publicly accessible basic knowledge (the syllabus)
  • and awards certifications based on it

Since day one, Spirit in Projects has been closely associated with what for many organizations is still the gateway to professional requirements engineering: IREB® certification.

What happened next?

As the IREB proved so popular, it was further internationalized and disseminated. An internationally standardized syllabus and exam questions were developed and rolled out in international training and examination networks. Since then, the IREB has been regarded by many companies as the qualification standard for business analysts, requirements engineers, product owners, and others.

As Austria’s leading IREB partner, we are very proud to offer RE courses. You can find our current courses at: https://spiritinprojects.com/requirements-engineer/

IREB / CPRE Platinum Certification


We have been IREB/CPRE Platinum certified since the beginning of the year.

This means that we not only lay the foundation for IREB/CPRE certification for our participants with our training courses, but have also had our in-house expertise officially certified. It was important to us to send a clear signal of our expertise to the outside world.

Our employees – specialists in RE

We are very proud of this, because this award stands for one thing above all else: active requirements engineering expertise within our team. The IREB/CPRE partner levels reflect how many employees in a company hold an IREB certification themselves – and we have so many that we have reached platinum, the highest level. For our customers, this means that we practice what we preach: RE expertise at the highest level. At the same time, we pass on our knowledge: as a training partner, we offer practical training courses in requirements engineering that enable participants to prepare specifically for IREB/CPRE certification.

About IREB/CPRE

CPRE (Certified Professional for Requirements Engineering) is a globally recognized personal certification in the field of RE. The certificate is recognized proof of skills and documents the required level of knowledge, thus guaranteeing high quality.

Each of the four levels of the certification program concludes with its own certificate, documenting professional competence in handling requirements.

The certificate is:

  1. Valid for life
  2. Practically relevant
  3. Internationally recognized and valued
  4. A competitive advantage on the job market

Founding member Karl Schott
Did you know that…

… Spirit in Projects has been associated with IREB® certification from the very beginning? Managing director and founding member of Spirit in Projects Karl Schott wanted to professionalize the job profile of requirements engineers and founded the D-A-CH RE Board in 2003. This later developed into the globally active International Requirements Engineering Board (IREB®).

Today, more than 90,000 people in 102 countries worldwide have been certified as Certified Professionals for Requirements Engineering (CPRE). Spirit in Projects is the leading provider of this international certification in Austria and one of the top training providers and consultants internationally. We were the first company worldwide to offer and conduct training courses for the advanced levels of IREB® certification.

Advantage for our customers

Our claim remains clear: to collect, structure, and secure (project) requirements in such a way that projects become more successful—with fewer misunderstandings, less rework, and more clarity for all involved.

Introduction of the Scaled Agile Framework (SAFe): Agile Transformation at AGES

The Austrian Agency for Health and Food Safety (AGES) is a company of the Republic of Austria. For over 20 years, AGES has been committed to the health of humans, animals, and plants. In doing so, it performs a wide range of legally defined tasks – such as in the areas of food and drug safety or animal health.

The Challenge

AGES operates a number of IT systems to fulfill its tasks. In the area of software development, SCRUM methods had been used for some time. After successfully participating in a seminar by Spirit in Projects on the Scaled Agile Framework (SAFe), the management of AGES decided to implement methods for scaling agility in the company. With a company size of over 1750 employees and 10 locations, this presented a significant challenge for sustainable integration into the organization. In close cooperation with the client, Spirit in Projects developed a comprehensive coaching and mentoring program to effectively support the integration of SAFe elements into the organizational structure.

Our Approach

From the beginning, it was clear that a blind rollout of the ‘standard’ SAFe framework was not an option. Here, the comprehensive experience of Spirit in Projects with agile frameworks in different company and training contexts was useful. In the course of strategic consulting, suitable methods from SAFe were identified, which were then specifically tailored to the needs of AGES. For the entire project, Spirit in Projects planned an agile approach in close coordination with the client.

Spirit in Projects offered targeted hands-on support for the introduction and implementation of the SAFe framework, from initial conception to operational execution. Particular emphasis was placed on coaching and mentoring measures for individual employees and teams. These were supported in effectively taking on and filling the roles required for the SAFe framework. With their expertise, the Spirit consultants were available as sparring partners for technical and methodological questions – always with the goal of conveying an understanding of agile principles and practices in order to develop solutions for future challenges independently. Together with development teams, agile practices such as SCRUM, Kanban, and the Built-In Quality approach were further established to optimize the product development process.

Relevant KPIs were developed and implemented for the framework to enable organizational control. This was done in close coordination with the organizational core team (Lean-Agile Change Agents) to define objectives and next steps.

The Benefits

The year-long support by Spirit in Projects during the implementation of SAFe enabled AGES employees to effectively apply new methods and techniques and to critically reflect on their work processes. The support in selecting and applying methods as well as targeted feedback significantly contributed to the improvement of quality, structure, efficiency, and effectiveness.

The intensive support through planning workshops and retrospectives as well as individual coaching deepened the understanding and application of the SAFe methodology. Additionally, a practice-oriented guide for the methods was developed, and the communication structures as well as reporting within AGES were optimized.

The success is also confirmed by the head of AGES IT Services, Ing. Gottfried Scheck, BSc MSc: “The implementation and support of the transformation from SCRUM to the SAFe framework was carried out to our full satisfaction. In addition to methodological support, the necessary organizational adjustments were accompanied and supported.”

Would you also like to take the next step in your company and benefit from scaling agile methods? We offer specialized training courses on Scaled Agile and will be happy to advise you with our practical knowledge on choosing the right frameworks and implementing them for your business. Contact us – we will be happy to advise you!

AGI – An Overrated Goal?

The AGI Hype and Its Promises

In the media and at tech conferences, one term dominates the discussion about the future of artificial intelligence: AGI – Artificial General Intelligence. The vision of a superintelligent machine that can handle any intellectual task a human can do fascinates AI users and investors alike. Billions are flowing into AGI research and the provision of the necessary computing capacity. In companies, people are wondering how to prepare for the day when AGI is achieved, whether they can still jump on this bandwagon, and how to bear the costs.

In the effort to achieve artificial superintelligence soon, one critical question remains largely unanswered: Do we even need human-like AGI in businesses?

The short answer: No. In this article, I would like to highlight why an AGI aligned with human performance is an overrated goal for broad AI application and how the fixation on it distracts us from the true potentials of today’s AI technologies.

What is AGI and Why is it So Tempting?

Definition and Delimitation

Artificial General Intelligence (AGI) refers to hypothetical AI systems that are capable of understanding, learning, and performing any intellectual task – similar to a human. In contrast, there is Narrow AI (also called Weak AI), which is specialized in specific tasks.

Characteristics of AGI are:

  • Broad applicability across various domains
  • Human-like understanding and context capture
  • Autonomous learning and development

Characteristics of Narrow AI are:

  • Specialization in defined task areas
  • Excellent performance in specific domains
  • Scalable and practical to use
  • Already economically successful today

The Promises of AGI Proponents

Prominent voices from Silicon Valley promise a revolutionary future through AGI:

  • Solving complex global problems (climate change, diseases, poverty)
  • Explosive economic growth
  • Scientific breakthroughs in all disciplines
  • A new era of human history

The positive visions are tempting – but are they realistic? And above all: Are they necessary for economic progress?

The Fundamental Problem with AGI

A fundamental problem with the AGI discussion is the lack of agreement on what AGI actually is. There is no clear definition of when a system should be considered ‘generally intelligent.’ The goalposts for AGI are constantly shifting: what was considered AGI yesterday is dismissed as ‘mere’ Narrow AI today. This makes AGI a moving target that may even remain unattainable.

Current AI systems, particularly Large Language Models, are based on statistical correlations in vast datasets. They recognize patterns but do not understand causal relationships.

A common argument among AGI proponents is: “If we only invest more data and more computing power, we will achieve AGI.” This assumption – often referred to as Scaling Laws – is increasingly being criticized.

A study published in Nature in 2025 concludes: Scaling alone does not lead to AGI. It requires fundamentally new architectures – which may not even exist. [1]

Why the AGI Fixation is Problematic

Distraction from Practical Solutions

The obsession with AGI distracts resources, attention, and talent from technologies that already work today and create economic value.

An article in Foreign Affairs (September 2025) criticizes the pursuit of AGI as “The Cost of the AGI Delusion” – a dangerous illusion that causes economic and strategic misjudgments. [2]

Providing Applications Efficiently and Flexibly: Introduction of OpenShift for the City of Vienna

The rapid and reliable provision of applications is, in times of rapid digitization, an important prerequisite for bringing digital solutions to customers. The city of Vienna, as a provider of countless digital services, has recognized this challenge. As part of the technology renewal program, the city is therefore using Kubernetes based on Red Hat OpenShift to orchestrate container applications. This significantly increases the efficiency and flexibility in software development and provision.

The Challenge

The numerous applications in the environment of the city of Vienna must be horizontally scalable on the one hand, to absorb sporadic load peaks (for example due to new applications, legal application deadlines or the like). On the other hand, the services must be continuously developed to meet current requirements (new needs, legal changes, security, etc.). New applications and versions must therefore be provided quickly and on demand, which requires a high degree of automation. In addition, the solution must fit into the complex enterprise technology environment and the multi-site data center architecture of the city of Vienna. It should be future-proof and offer the simplest possible administration and precise scalability.

Our Approach

The experts from Spirit in Projects were entrusted by Wien Digital (MA 01) with the project management and enterprise architecture. Initially, with the help of the enterprise architects from Spirit in Projects, a comprehensive concept for the integration of OpenShift into the existing enterprise landscape of the city of Vienna was created. The main focus was on building an infrastructure in several data centers of the city of Vienna that ensures the high availability of all applications operated there by the city of Vienna. Thanks to the in-depth technology expertise of Spirit in Projects, it was ensured that this concept meets current standards, internal regulations, and the concepts from the overarching technology renewal program. The project management by Spirit in Projects ensured that there was always close coordination with other ongoing projects within the city administration. Thanks to the synergy of program management and project management (both experts from Spirit in Projects), the desired solutions could be advanced holistically.

The Benefit

With the support of Spirit in Projects, Wien Digital (MA 01) was able to implement a platform with Red Hat OpenShift that optimally utilizes container technologies. During the introduction of the technology in the city of Vienna, it became particularly clear what advantage the deep roots of Spirit in Projects in technology and engineering bring. Thus, even during the project, the first applications were successfully put into operation in the newly created infrastructure. This milestone on the way to a modern and flexible infrastructure will enable the city of Vienna to design its digital services more efficiently in the future and to better respond to customer needs. Read the City of Vienna's blog post here.

SLM vs. LLM: Why Small Language Models Can Be the Better Choice for Businesses

The Changing AI Landscape

Artificial intelligence has become indispensable in the modern business world. While in recent years, Large Language Models (LLMs) such as GPT-4, Claude, or Gemini have dominated the headlines, an interesting trend has emerged since 2025: Small Language Models (SLMs) are gaining increasing importance – especially in the enterprise environment.

For many business applications, SLMs are not just ‘good enough’ – they are the better choice. With careful planning and the right know-how, even complex AI projects can be successfully implemented.

But what distinguishes SLMs from their larger counterparts? And why should companies take a closer look at small language models right now? In this article, we highlight the advantages of SLMs and show when they represent the more economically and technically sensible choice.

What Are Small Language Models?

Small Language Models are compact AI systems for natural language processing that operate with significantly fewer parameters than their larger counterparts:

• LLMs: Typically 100 billion to over 1 trillion parameters (e.g., GPT-4, DeepSeek, Claude)
• SLMs: Usually a few million to the low tens of billions of parameters (e.g., Phi-3, Mistral 7B, Gemma 2, GPT-OSS-20b)

SLMs are often created through knowledge distillation – a process in which the knowledge of larger models is transferred into more compact structures. The result: specialized models that are optimized for specific tasks and require only a fraction of the resources.
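The core idea of knowledge distillation fits in a few lines: the student model is trained to match the teacher's softened output distribution. The sketch below shows only the central loss term, using plain Python lists in place of real model tensors.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits into a probability distribution; a higher
    temperature 'softens' the distribution, exposing more of the
    teacher's knowledge about wrong-but-plausible answers."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the softened teacher and student
    distributions - the core term minimised during distillation."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

In an actual training loop, this term (scaled by the squared temperature) is typically combined with the ordinary cross-entropy loss on the ground-truth labels; the loss is zero exactly when the student reproduces the teacher's distribution.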

Small models demonstrate capabilities that were only recently achievable with large models. Study linked here

The Seven Decisive Advantages of SLMs

1. Cost Efficiency: Drastic Reduction in Operating Costs

The financial advantages of SLMs are remarkable:

• Potentially 10 – 100 times lower inference costs compared to LLMs
• No expensive GPU clusters necessarily required – SLMs run even on standard hardware (CPUs or small GPUs with 8 – 32 GB of memory)
• Reduced cloud costs due to lower resource consumption

For companies, this means: AI projects become economically feasible without breaking the budget. The low entry costs also enable smaller organizations to access AI technology.
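A rough back-of-the-envelope calculation shows what this cost range means in practice. The prices and token volume below are invented for illustration; actual prices vary widely by provider and model.

```python
# Hypothetical price points to make the "10 - 100x" range concrete.
llm_cost_per_m_tokens = 10.00   # USD per million tokens (assumed)
slm_cost_per_m_tokens = 0.20    # USD per million tokens (assumed)

monthly_tokens = 500_000_000    # e.g. an internal chatbot's monthly volume

llm_monthly = monthly_tokens / 1_000_000 * llm_cost_per_m_tokens
slm_monthly = monthly_tokens / 1_000_000 * slm_cost_per_m_tokens

print(f"LLM: ${llm_monthly:,.2f}/month, SLM: ${slm_monthly:,.2f}/month, "
      f"factor: {llm_monthly / slm_monthly:.0f}x")
```

Even with conservative assumptions, the difference is the gap between a rounding error and a budget line item.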

2. Resource Efficiency: Sustainability Meets Performance

In times of rising energy costs and growing environmental awareness, SLMs score points with their efficiency, as using smaller models consumes significantly less energy than large models. This advantage makes SLMs not only economically but also ecologically the more responsible choice.

3. Speed: Real-Time Performance for Time-Critical Applications

The compact architecture of SLMs enables significantly faster response times:

• Significantly shorter inference times in specialized applications
• Low latency for real-time applications (e.g., chatbots, fraud detection algorithms)

For use cases such as customer service chatbots, voice assistants, or IoT devices, this speed is a decisive competitive advantage.

4. Data Privacy and Security: Full Control Over Sensitive Data

A critical factor for European companies is data sovereignty:

• On-premise deployment – data never leaves the company premises
• Edge-computing capability – processing directly on end devices possible
• Reduced risk due to smaller attack surface
• GDPR compliance through local data processing

This advantage is particularly crucial for regulated industries such as finance, healthcare, or public administration. SLMs enable AI deployment without compromising on data privacy.

5. Specialization: Higher Accuracy in the Target Domain

While LLMs are designed as ‘all-rounders,’ SLMs impress with their focus:

• Higher accuracy in specialized tasks related to specific and trained business applications
• Fewer hallucinations in domain-specific tasks are achievable when SLMs are operated with high-quality, curated corporate data via RAG and/or lightweight fine-tuning.
• Faster adaptation through simple fine-tuning

For companies, this means: better results in exactly the areas that are relevant to the business – without the ‘noise’ of unnecessary general knowledge.
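As a toy illustration of the retrieval step behind such a RAG setup, the snippet below ranks documents against a query. It uses simple bag-of-words vectors in place of real embeddings, and the document texts are invented.

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query (bag-of-words)."""
    q = Counter(query.lower().split())
    ranked = sorted(documents,
                    key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return ranked[:k]

docs = [
    "Holiday requests must be approved by the team lead.",
    "Time entries are locked at the end of each month.",
]
print(retrieve("who approves holiday requests", docs))
```

In a production setup, the Counter vectors would be replaced by embeddings from the SLM itself or a dedicated embedding model, and the retrieved passages would be prepended to the prompt.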

6. Deployment Flexibility: AI Everywhere It’s Needed

SLMs open up new deployment possibilities:

• Mobile devices – AI on smartphones without cloud connection
• Edge devices – IoT sensors, smart manufacturing
• Local servers – complete control in your own infrastructure
• Offline operation – AI even without internet connection

This flexibility is particularly valuable for production environments, field service scenarios, or regions with limited connectivity.

7. Compliance and Governance: Control in Regulated Environments

For companies in highly regulated industries, SLMs offer decisive advantages:

• Traceability through simpler architecture
• Auditing – easier documentation of decision-making processes
• Control over data flows and model behavior
• Compliance with regulations such as NIS 2, GDPR, or industry-specific standards

The current development of AI regulation (EU AI Act) makes these properties increasingly business-critical.

When Are SLMs the Right Choice?

SLMs are particularly suitable for:

• Specialized applications – customer service, document analysis, process automation
• Budget-conscious projects – SMEs, start-ups, pilot projects
• Data privacy-critical scenarios – healthcare, finance, public sector
• Edge and IoT applications – smart manufacturing, mobile apps
• Fast time-to-market – agile development with short iteration cycles
• Multi-agent architectures – multiple specialized models in combination

Conclusion: SLMs as Enablers of Pragmatic AI Innovation

Small Language Models are not a ‘scaled-down’ version of LLMs – they are a conscious strategic alternative for companies that want to use AI technology efficiently, securely, and purposefully.

The advantages are compelling:

• Economical due to low costs
• Sustainable due to low resource consumption
• Secure through local deployment options
• Precise through domain-specific optimization