AI Deployment Models: Which Solution Fits Your Business?

The Right Deployment Strategy for Your AI Applications

The use of artificial intelligence offers enormous potential for businesses – but choosing the right deployment model is crucial for success. Should the AI run locally on your own computer, on a server in your own data center, via specialized services like OpenRouter, or directly via the providers’ APIs? Each option has its own advantages and disadvantages. In this article, we present the most important deployment models and show which solution is suitable for which requirements.

1. Local AI on Your Own Computer

Description

In this model, AI models are executed directly on the user’s hardware – be it on the desktop PC, laptop, or a dedicated workstation. Tools like Ollama or LM Studio enable easy installation and use of open-source models such as Llama, Mistral, or DeepSeek.
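
Tools like Ollama expose a simple local HTTP API (by default on port 11434). The following minimal sketch, which assumes Ollama is installed and a model such as llama3 has already been pulled, shows how a prompt can be sent to it using only the Python standard library:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_request(model: str, prompt: str) -> dict:
    """Assemble the JSON payload Ollama expects for a single, non-streamed completion."""
    return {"model": model, "prompt": prompt, "stream": False}


def generate(model: str, prompt: str) -> str:
    """Send a prompt to the locally running Ollama server and return the answer text."""
    data = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Usage (requires a running Ollama instance, e.g. after `ollama pull llama3`):
# print(generate("llama3", "Summarize the GDPR in one sentence."))
```

Because everything runs on localhost, no data leaves the machine; the trade-off is that throughput is bounded by the local hardware.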

Advantages

In addition to maximum data privacy, another significant advantage is that there are no ongoing costs. The models can also be used offline and are not subject to external content filtering. You retain full control and, with appropriate hardware, achieve fast response times thanks to low latency.

Disadvantages

The disadvantages include high hardware requirements (a modern GPU is needed, e.g., an Nvidia RTX 4080 with 16+ GB of VRAM), limited model size, and the need for in-house technical expertise. Local applications do not scale automatically: they are limited by local hardware capacity and carry increased maintenance effort.
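
How much VRAM a model needs can be estimated with a rule of thumb: parameter count times bits per weight, divided by eight, plus overhead for the KV cache and activations. The sketch below is illustrative only; the 1.2 overhead factor is an assumption, and real requirements depend on context length and runtime.

```python
def estimated_vram_gb(params_billion: float, bits_per_weight: int = 4,
                      overhead_factor: float = 1.2) -> float:
    """Rough VRAM estimate for loading a model's weights.

    weights_gb = parameters * bits_per_weight / 8; the overhead factor
    (an assumption) covers KV cache and activations.
    """
    weights_gb = params_billion * bits_per_weight / 8
    return round(weights_gb * overhead_factor, 1)


# A 7B model quantized to 4 bits fits comfortably in 16 GB of VRAM,
# while a 70B model at the same quantization does not.
```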

Typical Use Cases

  • Personal assistants for sensitive tasks (including software development for critical systems)
  • Development and prototyping of AI applications
  • Processing of highly sensitive or confidential data
  • Offline use cases (e.g., in the field)

Suitable For

Local AI is therefore suitable for individuals and small teams with technical know-how, data privacy-conscious users in regulated industries, developers who want to experiment and customize, and companies with sporadic AI needs and low volume.


2. On-Premise in Your Own Data Center

Description

AI models are hosted on the company’s own servers. This can range from individual AI appliances to complete GPU server clusters. The infrastructure is managed entirely internally. Some AI appliances use proprietary software. Open-source or self-developed models are usually used, but closed-source models can also be licensed and used.

Advantages

On-premise solutions offer digital sovereignty, high security, predictable costs, and no bandwidth limitations. They can be scaled in volume and are cost-efficient in the long term.

Disadvantages

Alongside these advantages, on-premise solutions also have disadvantages: high initial investment, required IT expertise, longer implementation time, limited elasticity, and increased responsibility for updates.

Typical Use Cases

  • Automated document analysis (e.g., contract review)
  • Mass document processing (e.g., invoice processing)
  • Screening and analysis of sensitive company data
  • AI-based video surveillance and analysis
  • Internal knowledge databases and enterprise search

Suitable For

This solution is well-suited for medium-sized to large companies with constant, high AI needs in regulated industries, as well as organizations with strict compliance requirements (GDPR, NIS2) and companies with established IT infrastructure and data center expertise.


3. Cloud-based API Services (Direct)

Description

Direct use of AI models via the APIs of providers such as OpenAI (GPT-5), Anthropic (Claude), Google (Gemini), or Mistral AI. Access is via API key; billing is pay-per-token.
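
Most of these providers follow a similar pattern: an HTTPS POST carrying a bearer API key and a JSON body of chat messages. The sketch below uses only the standard library; the endpoint URL and model name are illustrative, so check the respective provider's documentation for exact details.

```python
import json
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"  # provider-specific endpoint


def build_chat_request(api_key: str, model: str, user_message: str):
    """Return (headers, payload) for an OpenAI-style chat completion call."""
    headers = {
        "Authorization": f"Bearer {api_key}",  # pay-per-token billing hangs off this key
        "Content-Type": "application/json",
    }
    payload = {"model": model, "messages": [{"role": "user", "content": user_message}]}
    return headers, payload


def chat(api_key: str, model: str, user_message: str) -> str:
    """Perform the call and extract the assistant's reply."""
    headers, payload = build_chat_request(api_key, model, user_message)
    req = urllib.request.Request(API_URL, data=json.dumps(payload).encode(), headers=headers)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```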

Advantages

The advantages include immediate availability, no infrastructure management needed, access to the latest models, flexible scaling, low entry barriers, and low initial investment.

Disadvantages

Cloud-based solutions incur ongoing costs (API fees can rise sharply with high volume), there may be data privacy concerns, and there is a potential vendor lock-in risk. Latency is network-dependent, which makes this option less suitable for real-time applications. Additionally, customization options are limited (no control over, or transparency into, model behavior or filtering).

Typical Use Cases

  • Chatbots and customer support
  • Content generation (marketing, social media)
  • Translations and text processing
  • Rapid prototyping and MVP development
  • Sporadic or experimental AI usage

Suitable For

This solution is ideal for startups and small companies without IT infrastructure, projects in the proof-of-concept phase, teams with variable or unpredictable workloads, and applications with non-sensitive data.


4. Aggregation Services (OpenRouter, Portkey, etc.)

Description

Platforms like OpenRouter provide unified access to more than 300 AI models from various providers. Instead of managing multiple API keys, users work with a standardized interface that supports automatic fallback and routing.

Advantages

This approach offers independence from any single provider, since more than 300 models are accessible via one API. Automatic fallbacks are possible (if one provider fails, the system switches to another), integration is simplified, and transparent routing automatically selects the cheapest or fastest model. The setup is experiment-friendly: a single API key covers everything from central management to billing.
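
Conceptually, the fallback behavior reduces to a loop over a preference-ordered model list. The sketch below is provider-agnostic: `call_fn` stands in for whatever client function actually performs the request.

```python
def call_with_fallback(prompt: str, models: list, call_fn):
    """Try each model in order; return (model, answer) from the first success.

    `call_fn(model, prompt)` is any callable that raises on failure,
    e.g. a thin wrapper around an aggregator or provider API.
    """
    errors = {}
    for model in models:
        try:
            return model, call_fn(model, prompt)
        except Exception as exc:  # provider down, rate-limited, timeout, ...
            errors[model] = str(exc)
    raise RuntimeError(f"All models failed: {errors}")
```

An aggregator performs essentially this loop server-side, which is why a single outage rarely takes the whole application down.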

Disadvantages

The main disadvantage is the higher price: a surcharge of roughly 10-15% on the original API costs may apply. There may be minor delays due to additional latency, the system depends on the aggregator's availability, and data privacy must account for an additional party through which the data flows.

Typical Use Cases

  • Multi-model applications (different models for different tasks)
  • Evaluation and benchmarking of various models
  • Development of flexible AI assistants
  • Rapid prototyping with model switching

Suitable For

This solution is primarily suitable for development and tech teams that value flexibility, but also for companies with changing requirements, as well as projects that use multiple models in parallel and teams that want to avoid vendor lock-in.


5. Hybrid Deployment

Description

Combination of several approaches: sensitive data is processed locally or on-premise, while less critical workloads are outsourced to the cloud. Tools like Microsoft Azure ML or AWS SageMaker support hybrid architectures.
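
At its core, a hybrid architecture needs one routing decision: may this workload leave the building? The sketch below is deliberately simplified; a real deployment would rely on proper data classification rather than the illustrative keyword markers assumed here.

```python
# Illustrative markers only; real systems would use a data classification service.
SENSITIVE_MARKERS = {"patient", "iban", "salary", "diagnosis"}


def route(document_text: str) -> str:
    """Route sensitive content on-premise and everything else to the cloud."""
    lowered = document_text.lower()
    if any(marker in lowered for marker in SENSITIVE_MARKERS):
        return "on-premise"
    return "cloud"
```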

Advantages

The biggest advantage of the hybrid model is that it combines the best of both worlds – the flexibility of the cloud and the security of on-premise deployment. Additionally, it offers an optimized cost structure, supports compliance, is scalable, and provides high redundancy through distribution.

Disadvantages

The disadvantages include higher complexity, increased integration effort for the connection between on-premise and cloud, and potentially higher costs due to redundancy.

Typical Use Cases

  • Financial institutions with a mix of sensitive and public data
  • Healthcare organizations (patient data on-premise, research in cloud)
  • Enterprise AI with seasonal fluctuations

Suitable For

The hybrid model is well-suited for large companies with complex requirements, industries with partially sensitive data, and organizations in transformation phases.


6. Integrated AI in Applications (AI as a Service)

Description

AI functions are directly integrated into existing software products, e.g., via Microsoft Copilot, Salesforce Einstein, GitHub Copilot, or specialized industry solutions. The AI is invisibly embedded in the user’s workflow.

Advantages

The advantages include a seamless user experience, no separate platform required, context-awareness (AI has access to relevant company data), easier introduction with no complex technical integration, and continuous updates and support included at no additional cost.

Disadvantages

The major disadvantages are vendor lock-in, limited customization, often additional costs (premium features at a surcharge), and data privacy concerns, since data is processed by the platform provider.

Typical Use Cases

  • Code completion in IDEs (GitHub Copilot)
  • CRM assistance and sales insights (Salesforce Einstein)
  • Automated office tasks (Microsoft 365 Copilot)
  • Email intelligence and meeting summaries

Suitable For

Integrated AI in applications is suitable for companies already using the base platform, non-tech users without AI expertise, and teams that want quick, uncomplicated AI support.


Comparison Table: Deployment Models at a Glance

Conclusion: There is no “one” right solution

The choice of the optimal AI deployment model depends heavily on your specific requirements. While cloud APIs offer a low-barrier entry, on-premise solutions are often more economical and secure in the long term for data-intensive, regulated industries. Hybrid approaches combine the best of both worlds but entail greater management complexity.

Our Recommendation

  1. Start with a clear analysis of your requirements: data privacy, volume, budget, technical expertise.
  2. Start small: PoC with cloud APIs to test feasibility.
  3. Evaluate long-term: For increasing volume (>50 million tokens/month), consider on-premise.
  4. Plan for flexibility: Avoid early commitment to a provider.
  5. Think about compliance: In regulated industries, prefer local or on-premise options from the start.
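
Point 3 can be made concrete with a rough break-even calculation comparing pay-per-token fees against on-premise hardware. All figures below are illustrative assumptions, not price quotes:

```python
def monthly_api_cost(tokens_millions: float, price_per_million: float) -> float:
    """Pay-per-token cost for one month."""
    return tokens_millions * price_per_million


def breakeven_months(onprem_capex: float, onprem_monthly_opex: float,
                     tokens_millions: float, price_per_million: float) -> float:
    """Months until on-premise hardware pays for itself versus API billing."""
    savings = monthly_api_cost(tokens_millions, price_per_million) - onprem_monthly_opex
    if savings <= 0:
        return float("inf")  # the API stays cheaper at this volume
    return round(onprem_capex / savings, 1)


# Example with assumed numbers: 100M tokens/month at 10 EUR per million tokens,
# against 12,000 EUR of hardware and 200 EUR/month of operating costs.
```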

The AI landscape is evolving rapidly – 2026 will be the year when many companies rethink and optimize their deployment strategies. Stay flexible and adapt your strategy to growing requirements.

What is NIS 2 – and why does it affect (almost) everyone?

The new EU Directive NIS 2 is changing the rules of the game for IT security in Europe. Since October 2024, member states must have implemented it into national law. It obliges companies and authorities to systematically organize their cyber security – with clear processes, reporting obligations, and responsibility at the management level.

For many organizations, this is akin to a turning point: what was previously ‘best practice’ is now mandatory. And many more institutions are affected than before – from energy suppliers to municipal utilities and hospitals to IT service providers and local authorities.

What is behind NIS 2?

NIS 2 stands for “Network and Information Security Directive 2”. It replaces the first NIS Directive from 2016 and pursues a clear goal: 

a high, uniform level of cybersecurity across the entire EU.

New is the significantly expanded scope and the binding liability for management. NIS 2 is intended to ensure that not only technical defense measures are in place, but also that governance, risk management, and supply chain control are practiced.

The directive has been in force since January 16, 2023.

  • In Germany, the NIS2 Implementation and Cybersecurity Strengthening Act (NIS2UmsuCG) was passed in the summer of 2025.
  • In Austria, the amendment to the NIS Act (NISG) was delayed; a new version is being prepared for 2025.

Even if national laws are still in progress, the requirements are fixed. Organizations that act now secure valuable time and a compliance advantage.

What does NIS 2 specifically require?

The core of the directive consists of ten mandatory security and organizational measures that all affected institutions must implement:

  1. Risk analysis & security strategy
  2. Incident handling – detection, response, recovery
  3. Business continuity & backup management
  4. Supply chain security
  5. Secure development & maintenance (including vulnerability management)
  6. Effectiveness evaluation of security measures
  7. Cyber hygiene & training
  8. Access and identity management
  9. Cryptography & encryption policies
  10. Policies for system and network security

In addition, there is a strict reporting obligation in the event of security incidents, the so-called 24/72/30 rule:

  • Within 24 hours: “Early Warning” to authority or CSIRT
  • Within 72 hours: Incident report with initial assessment
  • No later than 30 days: Final or progress report
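
These deadlines translate directly into dates an incident playbook can compute from the moment of detection, for example:

```python
from datetime import datetime, timedelta


def nis2_deadlines(detected_at: datetime) -> dict:
    """Reporting deadlines under the NIS 2 '24/72/30' rule, counted from detection."""
    return {
        "early_warning": detected_at + timedelta(hours=24),
        "incident_report": detected_at + timedelta(hours=72),
        "final_report": detected_at + timedelta(days=30),
    }


# Usage: deadlines = nis2_deadlines(datetime(2025, 3, 1, 14, 30))
```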

Management responsibility

One of the most serious changes affects the management. According to Article 20 of NIS 2, management must:

  • approve and monitor security risk management,
  • introduce appropriate measures, and
  • regularly participate in training.

In case of violations, there is a threat of severe fines – and personal liability at the management level:

  • Essential Entities: up to 10 million € or 2% of global annual turnover
  • Important Entities: up to 7 million € or 1.4% of turnover

This makes cybersecurity a matter for the boss – comparable to occupational safety or data protection.

Am I affected? – The quick self-check

  1. Active in one of the 18 sectors according to NIS 2?
  2. More than 50 employees or > 10 million € turnover?
  3. Do I provide critical services for citizens or the economy?
  4. Am I a supplier to an affected company or authority?
  5. Do I have documented processes for risk and incident management?

If you answer yes to any of these questions, your organization is most likely within the scope of NIS 2.

What to do now

  1. Conduct a gap analysis: Which requirements are already met, which are missing?
  2. Adapt governance: roles, responsibilities, approval processes.
  3. Create an incident playbook: clear reporting paths and escalation levels.
  4. Review supplier contracts: anchor cybersecurity requirements.
  5. Train management & employees: raise awareness, avoid liability.

Conclusion

NIS 2 is much more than a regulatory issue – it is an impulse for professional security and risk management. Organizations that start today gain not only legal certainty but also trust and resilience in an increasingly digital world.

NIS 2 is here to stay – those who start now have an advantage

Your first step

NIS-2 Readiness Check

Our experts analyze the maturity level of your organization – concrete, practical, and confidential.

Requirements for AI Systems: How a Well-Designed Process Model Leads Projects to Success

For years, Spirit in Projects has supported companies and public sector clients in designing, tendering, and implementing AI systems. One thing has been consistently confirmed: A structured, data-driven, and value-oriented process model is the key to success—regardless of whether the project follows V-Modell XT, Scrum, or a custom hybrid approach.

Many AI projects fail not because of the technology, but due to a lack of methodological rigor in gathering and implementing requirements.

From Gut Feeling to Structured Requirements

In traditional projects, specifications with clear functionalities often suffice—but AI systems require a different mindset. Key questions include:

  • What is the AI’s actual objective?
  • What data is truly available?
  • How precise must the model be to deliver real value?

We use an internal methodological framework inspired by CRISP-DM, adapted for modern AI projects with needs like MLOps, retraining, and continuous improvement. This model structures our workshops and stakeholder interviews—from Business Understanding to Data Understanding to concrete evaluation criteria for AI models.

A Planned Approach—Even in Agile or Classical Contexts

Our clients work with different methodologies: some use Scrum, others the V-Modell XT. For AI projects, however, we recommend adding an AI-specific structure. We integrate our CRISP-DM-based logic into the existing framework:

  • In agile projects, it brings order to exploratory data work.
  • In classical environments (e.g., public tenders), it serves as a blueprint for AI-specific requirements.

The key is: We create a common language between business units, data scientists, and IT—ensuring requirements are not just documented but understood and verifiable later.

Requirements Need Iteration—and Data

Another guiding principle: AI requirements don’t emerge from a blank sheet but through iteration, discussion, and what the data reveals. An ideal process model accounts for this:

  • Data exploration often leads to new insights that reshape requirements.
  • Model testing shows whether target accuracy is realistic—or if adjustments are needed.
  • Feedback loops with stakeholders are not a nuisance but a valuable asset.

In our projects, we deliberately plan for these iterations—not as exceptions, but as the norm. This improves both result quality and user acceptance during deployment.

Conclusion: Structure Provides Security

An AI project without a structured approach is like a flight without a flight plan—you rarely end up where you intended. With a well-designed, AI-tailored process model, we help our clients gain early clarity, define realistic requirements, and create reliable tenders.

Requirements for AI Systems: Why They Differ from Classical Software Systems

At Spirit in Projects, we have spent years helping companies define requirements for complex IT systems and professionally prepare tenders. In recent years, a clear trend has emerged: Artificial intelligence (AI) is being integrated into more and more projects—and brings with it unique challenges.

When it comes to AI systems, defining the right requirements from the outset is crucial. Many companies still rely on the same methods used for classical software projects—and later struggle with poor quality, lack of explainability, or unrealistic expectations.

At Spirit in Projects, we specifically support our customers in formulating AI system requirements in a way that ensures they are robust for implementation, operation, and tendering.

Learning Instead of Programming

Classical software systems follow clearly defined rules: If input X, then output Y. Their behavior is deterministic and predictable.

AI systems work fundamentally differently: They learn from data and develop a model. The “rules” do not come from programming but from machine learning. This means: Data is the key to success. Even in the requirements definition phase, critical questions must be answered, such as:

  • What data is available?
  • What is its quality and volume?
  • How is the data currently maintained and made accessible?

Our experience shows: This aspect is often underestimated in practice—with dire consequences for the project.

Probabilities instead of Guarantees

Another fundamental difference: AI systems deliver probabilistic results. For example, an image recognition system might report, “This object is a cat with 95% probability.” However, it does not guarantee flawless detection of every cat.

Requirements for result quality (accuracy, precision, recall, confidence intervals) must therefore be explicitly defined. In our AI tendering projects, we ensure these quality parameters are clearly specified—to avoid later disappointment.
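
Once a confusion matrix is available from an evaluation run, such quality parameters can be checked mechanically. A minimal sketch of verifying a requirement like "precision at least 0.85 and recall at least 0.90" (the thresholds are examples):

```python
def classification_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Quality metrics that an AI requirement should pin down explicitly."""
    return {
        "precision": round(tp / (tp + fp), 3),  # of all reported positives, how many were right
        "recall": round(tp / (tp + fn), 3),     # of all real positives, how many were found
        "accuracy": round((tp + tn) / (tp + fp + fn + tn), 3),
    }


def meets_requirement(metrics: dict, min_precision: float, min_recall: float) -> bool:
    """Acceptance check as it might appear in a tender's evaluation criteria."""
    return metrics["precision"] >= min_precision and metrics["recall"] >= min_recall
```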

Dynamic Behavior

Classical software remains largely stable after deployment. AI models, however, often require regular retraining as data and environments change (concept drift).

This leads to specific requirements for:

  • Lifecycle management
  • Model monitoring and maintenance
  • Governance and responsibilities
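
In its simplest form, model monitoring compares live quality against the baseline accepted at deployment and flags the model for retraining once the drift exceeds a tolerance. A minimal sketch (the 5% tolerance is an assumption to be set per project):

```python
def needs_retraining(baseline_accuracy: float, recent_accuracy: float,
                     max_drop: float = 0.05) -> bool:
    """Flag a model for retraining when live accuracy drifts below the baseline.

    `max_drop` is the tolerated degradation; it is a project-specific assumption.
    """
    return (baseline_accuracy - recent_accuracy) > max_drop
```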

Spirit in Projects helps customers integrate these aspects into their requirements definitions and tender documents—a critical point often neglected in many projects.

Explainability and Traceability

Especially in regulated sectors (e.g., financial services, public sector, healthcare), explainability is crucial. AI systems must not be black boxes.

Requirements must clearly define:

  • To what extent must explainability be ensured?
  • For whom must traceability be provided (e.g., internal audits, external regulators)?

In our practice, we ensure these aspects are structured in the requirements catalog—balancing technical feasibility with regulatory demands.

Conclusion

AI system requirements are not just an extension of classical software requirements. They must explicitly account for learning capabilities, probabilistic nature, and data dependency.

Spirit in Projects brings extensive experience from numerous AI projects and tenders. We help companies formulate sustainable and reliable requirements—laying the foundation for successful AI initiatives.

Artificial Intelligence in Project Management: Success Factors and Pitfalls

The integration of Artificial Intelligence (AI) in businesses promises significant advancements in automation and efficiency. However, successfully managing an AI project requires more than just technical expertise. Project leaders must navigate a range of pitfalls and standards to overcome the complex challenges involved.

Setting Realistic Goals

One of the most common mistakes in AI projects is vague or unrealistic objectives. Many companies expect AI to deliver quick, far-reaching successes—yet these expectations are often unrealistic. While AI can automate and improve processes, it is not a universal solution for every problem.

Success depends on clearly defined, measurable, and realistic goals.
Project managers should:

  • Identify concrete use cases
  • Align expectations with achievable outcomes

Data Quality and Privacy

High-quality, comprehensive datasets are the foundation of any successful AI application. Without them, even the most advanced AI will underperform. At the same time, data privacy cannot be overlooked.

In Europe, the General Data Protection Regulation (GDPR) plays a central role. Companies must ensure that:

  • data is lawfully collected and processed,
  • data can be deleted upon request, and
  • compliance is continuously monitored.

Bias and Fairness

A critical challenge in AI projects is bias and fairness. If training data contains biases, AI systems may make unfair decisions—with serious consequences in sensitive areas like hiring, lending, or law enforcement.

Project leaders should:

  • implement bias detection mechanisms,
  • ensure fairness in model training and validation, and
  • conduct regular audits to mitigate discriminatory outcomes.
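
One common heuristic for such audits is the "four-fifths rule": the selection rate of any group should be at least 80% of the most favored group's rate. A minimal sketch of that check:

```python
def selection_rates(outcomes) -> dict:
    """Selection rate per group from (group, was_selected) pairs."""
    totals, selected = {}, {}
    for group, was_selected in outcomes:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}


def disparate_impact_ok(rates: dict, threshold: float = 0.8) -> bool:
    """Four-fifths rule: min rate / max rate should not fall below the threshold."""
    return min(rates.values()) / max(rates.values()) >= threshold
```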

Ethical and Regulatory Requirements

Beyond technical hurdles, ethical and legal considerations must be addressed. International standards, such as ISO/IEC 23894, provide guidelines for the AI system lifecycle, while organizations like the European Ethics Commission offer frameworks to navigate ethical dilemmas.

Interdisciplinary Collaboration

AI projects require cross-functional expertise, bringing together:

  • Data scientists (model development)
  • Engineers (system integration)
  • Legal experts (compliance & risk management)

Project leaders must foster clear communication between teams, coordinate workflows efficiently, and ensure alignment on project goals.

Conclusion

A successful AI project requires:

  • clear, realistic objectives,
  • high-quality, ethically sourced data,
  • compliance with privacy and ethical standards, and
  • strong interdisciplinary collaboration.

Project leaders who proactively address these challenges—balancing technical, ethical, and regulatory aspects—will unlock AI’s full potential while avoiding common pitfalls.

Agile methods and Requirements Engineering: A guide for Project Managers

Project managers often find themselves at an interface between constantly changing market requirements and the technical challenges faced by developer teams. While the agile model promises flexibility and adaptability, requirements engineering (RE) offers structured clarity. But how do project managers achieve a balance between these two approaches to ensure efficient, successful project management?

Scaling Frameworks in an Agile Environment: What you as an IT Manager need to know

Without a doubt, the agile methodology has made its mark on the software development landscape. However, whenever you’re faced with the task of scaling agility in your department or even your entire company, you can ask yourself: “Which framework is right for us?”

Experience varies by organization, culture, type of project, and chosen implementation. Following are a few general experiences companies have had with various scaling frameworks, along with what they've learned.

SCALED AGILE FRAMEWORK (SAFe)

This framework provides a structured approach which is often highly valued by large organizations. But beware – although SAFe provides clear roles and rituals, some teams find it to be bureaucratic to a fault. As a result, it could turn out that you need to make some adaptations before you integrate it into your own corporate culture.

LARGE SCALE SCRUM (LeSS)

LeSS might be the right framework for you if you want to keep the charm and simplicity of scrum, even on a large scale. Nevertheless, note that its minimalist approach might not appeal to everyone in your company, especially those who are looking for detailed structure.

DISCIPLINED AGILE DELIVERY (DAD)

This flexible approach takes the entire software lifecycle into consideration. Its adaptability can be a blessing, but make sure you’ve defined clear guidelines for your team so you don’t wind up drowning in ambiguities.

Finally, here are a few key findings we’ve learned from our work as consultants:

  1. Adaptability is your best friend: No framework will be a perfect match for your team. Be ready to make adaptations which correspond to your team and their culture.
  2. Culture matters: You need to foster a true agile mindset. If you don’t do that, even the best frameworks won’t deliver the results you desire.
  3. Your role as a manager is crucial: Your commitment and your support are essential for a successful agile transformation.

Agile approaches can be (and should often be) used in large software projects, but this requires careful planning, adaptation, selection of the right scaling framework and strong organizational support.

Usability Engineering – a useful addon for requirements engineers

Requirements analysts will find it worth their while to make a quick detour in the direction of UX/UI, since user interfaces are what essentially determine a solution’s acceptance.

No, usability engineering is not concerned with making graphical interfaces pretty. Yes, requirements engineers, especially those involved with applications for end customers or applications with a large number of internal users, should always be concerned with usability engineering.

UX Challenges in Requirements Management

A growing challenge for requirements engineers is to functionally describe applications that are not only correct and efficient, but also easy to learn (ideally requiring no initial training at all), clear and understandable to users, fault-tolerant toward both bad input and incorrect use, and, as a whole, attractive and helpful to users.

A pretty user interface is just one aspect of a system's understandability, operability, clarity, learnability, and attractiveness. Although the design of this aspect is the responsibility of the user interface designer, the process is carried out using traceable methods; it also requires taste and a touch of artistic design.

The Solution: Usability Engineering

Usability engineering, by contrast, is concerned with deriving and optimizing a system's functional and non-functional characteristics by examining the user's overall environment. Another aspect is to model the entire user journey and include it in system design. For example, a web shop user's experience with the system does not end when an order is placed, but continues until the package arrives or any complaints are resolved.

In requirements engineering, we deliberately set the system apart from the environment not directly associated with it, as we learned over the course of creating context diagrams. As a result, we can sometimes only understand with difficulty the user's motives behind individual activities. In addition, requirements engineering does not examine user satisfaction over the entire usage life cycle (nor beyond system limits).

Methods of requirements engineering and usability engineering complement each other perfectly in this area, and create real added value for many tasks. Spirit in Projects’ training portfolio offers basic and advanced courses for both directions, so you can selectively expand your own skill set.



Different Levels of Business Analysis

Business analysis is not a single, uniform discipline. In reality, different variants exist, which need to be applied according to the given examination aspect and goals.

These different applications result from the various tasks uncovered over the course of digitalizing business processes and supporting processes. As a starting point, these variants can be divided into:

  • Sub-business process analysis/IT system analysis
  • Business process analysis
  • Enterprise analysis

We take a closer look at these areas below.

Sub-Business Process Analysis

Sub-business process analysis focuses on a section of a business process. As a result, in practice it is best applied when individual process steps need to be digitalized. In this way, it is very similar to conventional requirements engineering, since it focuses on the requirements placed on an IT system. Although this focus makes it possible to come up with an optimal design of a system’s functionality, the process as a whole is not optimized.

This variant is the one that comes up most frequently in practice, since it is generally intended to further optimize existing processes without shaking up the basic process framework. Its advantage lies in its clear, manageable scope. However, the local optimization of a process means that the full potential of process optimization is not exploited in the long run.

Business Process Analysis

By contrast, business process analysis focuses on an end-to-end examination of an entire process. This variant is most often used when creating entire processes from scratch. Examples include:

  • A process is being completely digitalized for the very first time.
  • The IT system for a process is being replaced or significantly changed, e.g. multiple systems are being merged together or the technology of individual process steps is being changed.
  • The performance data for a process is no longer adequate, and needs to undergo a fundamental redesign involving the use of IT systems.

Unfortunately, in practical implementation, many companies find that this variant still involves barriers which need to be overcome: those between responsibility for process design and optimization on the one hand, and responsibility for designing and developing IT systems on the other. Given that business processes are being extensively digitalized wherever possible, maintaining this separation does not seem advisable. At the very least, close collaboration, or even integration, is desirable in order to meet future challenges.

Enterprise Analysis

Enterprise analysis in turn has an even larger scope. This variant examines entire process groups, suborganizations, or in extreme cases even the entire organization itself, then works out optimizations. To put it roughly, company objectives are associated with processes, services, systems and technologies and examined as a whole. This approach cannot be implemented with normal process analysis; instead, enterprise analysis methods (TOGAF, ArchiMate) must be applied. Furthermore, preparations for the analysis must be more than just technical. First, objectives must be clearly defined by corporate management. In addition, successful implementation also requires close collaboration with internal stakeholders and/or external consultants from the areas of finance and costs, marketing and sales, production, research and development, etc.

In the opinion of business analysts, this approach will lead to creation of an IT service and system landscape which is based on and is tailored to optimized processes. The advantage of this method lies in its holistic examination of the organization and the focusing on process digitalization which is integrated in the method. Its disadvantage is clearly its high complexity. In practice, we have also frequently noted the lack of clear company objectives and also the problem that designing processes is an operation which cannot always be handled in a purely rational manner, especially when changes are being made to the organizational structure.

The challenge: Getting the right level of detail

The experts at Spirit in Projects are here to advise you on how you can apply business analysis for your company’s success. In addition, we provide training which can help you build up your company’s know-how in the area of business analysis.

Agile Methods With Purpose

The agile approach in all of its manifestations has become the most-used development method in the world. There is rarely a new development project that is implemented without agile, kanban, or DevOps methods, or a mixture of these. However, is agile being applied with restraint, and only for good reason?

By now, modern development tools are geared almost completely to the use of agile methods. We hear from many of our clients that developers often insist on the use of such methods in their job applications.

However, despite the use of these modern methods, projects in practice still struggle with delays and cost overruns, although quality-related problems have, in our view, dropped significantly.

The Downside: Agility Without Purpose

Unfortunately, it can be observed that many projects are started without appropriate preparation, even though this is considered proper from a purely agile view of the world. Nevertheless, the internal or external client's product owner bears responsibility for this, at least for the timely clarification of requirements.

However, in practice exercising this responsibility often requires that the product owner be supported by at least a business analyst or requirements engineer. In the case of complex systems, fundamental decisions on system architecture must be made before starting a project, something which can hardly be asked of the product owner.

It must also be kept in mind that since projects must be budgeted and scheduled, many decisions must be made prior to the agile implementation.

The Answer Lies in the Combination

The answer is to use proven methods of planning and analysis, so that, for example, a requirements specification can be created for an agile project and used to prepare for its implementation. This preparation is well worth it, and helps the agile implementation team to respond adaptively to changes within the specified budget and schedule.

The experts at Spirit in Projects are ready to support you in a successful, lasting implementation of agile methods and in their appropriate use. You can also take advantage of our training portfolio on agile methods to give your employees the right qualifications in that area.