The GenAI Divide: 95% of Enterprise Generative AI Projects Fail, According to MIT (2025)

Introduction: generative AI, a revolution... but not without obstacles

Since 2022, generative AI has become an integral part of our daily lives. Generative artificial intelligence systems such as ChatGPT or Copilot can create text, images, code, and even music from a simple instruction.

Concrete examples:

  • A sales representative uses AI to write a personalized email in 30 seconds.
  • A developer automates the generation of unit tests using a generative model.
  • A marketing team boosts its productivity by creating campaign visuals.

These uses clearly demonstrate the potential of these digital technologies for automation and productivity gains. However, according to an MIT study (2025), 95% of enterprise generative AI projects fail. This paradox is called the GenAI Divide: a lot of adoption, very little transformation.


AI and business: massive adoption but few results

The MIT report, entitled State of AI in Business 2025, is clear: companies are investing considerable AI budgets, between $30 and $40 billion, yet returns on investment are almost non-existent.

  • Over 80% of organizations say they have tested or explored a tool such as ChatGPT or Copilot.
  • 40% claim to have officially deployed one.
  • However, only 5% have truly transformed their processes.

This means that while a minority is seizing the opportunities for transformation, the vast majority remain stuck at the experimentation stage, widening the gap between them and the rest of the market.

In short: AI could transform working methods, but it has to be integrated intelligently.


What are the risks behind the GenAI Divide?

1. The illusion of immediate productivity

Employees love tools like ChatGPT because they are simple, flexible, and well suited to small tasks. But in a professional setting, these uses remain limited to individual productivity, which widens the gap between employee enthusiasm and the expectations of business leaders.

2. Projects stalled at the pilot stage

Only 5% of solutions reach full deployment. Most fail because of a lack of integration with existing workflows or a poorly adapted work organization.

3. Data risks

Political decision-makers and IT managers point out that data security and job quality must be guaranteed. This is not just a technical issue: it also touches on the digital divide, information security, and online privacy.

4. Jobs and the job market

The impact of AI on the job market is a cause for concern. The International Labour Organization believes that automation could transform certain professions, while reinforcing economic inequalities if productivity gains are not redistributed.


2025: "Shadow AI" in the enterprise

A key lesson from MIT is the rise of a parallel AI economy.
👉 In practice, employees use AI in their day-to-day tasks without waiting for official approval.

  • 90% of employees say they use an AI tool (often a personal account) at work.
  • But only 40% of organizations have purchased an official license.

This points to a paradox: individuals move faster than their companies. AI adoption happens from below, further deepening the Divide between what is actually done in practice and what is financed by management.


How to successfully adopt generative artificial intelligence?

MIT insists that it is not the quality of the models that is holding companies back, but their approach. Best practices include:

1. Start with concrete cases

Target specific processes (contracts, email sorting, repetitive code generation) rather than large abstract projects.

2. Integrate AI into existing workflows

A tool that isn't connected to the CRM, ERP, or other internal systems quickly becomes useless, and can create new regulatory and security risks.

3. Train employees

Success depends on training and expertise: learning to write clear instructions, checking ChatGPT's answers, and protecting sensitive data.

4. Establish social dialogue

As the International Labour Organization points out, the adoption of AI requires social dialogue: how can these technologies be used without damaging job quality?

5. Work with reliable partners

Projects carried out with external partners are twice as successful as in-house initiatives, according to MIT.


Jobs and automation: AI can be a lever, not a threat

The MIT study shows that generative AI is not cutting jobs on a massive scale. It can even be a lever for improving work organization.

  • It helps reduce repetitive tasks.
  • It boosts productivity in customer support and document management.
  • It paves the way for job creation in emerging professions, linked to training programs and data governance.

AI can be a tool to relieve people of tasks, not to replace them.


Conclusion: from the digital divide to responsible leadership

MIT's big lesson is clear: the GenAI Divide is not only technological, but also human and organizational. It reflects the digital divide between those who move forward and those who remain stuck.

👉 Employees will not be replaced by AI; they will be boosted by it.
👉 Those whose jobs are at risk are those who refuse to learn how to use AI.

This is why governments and companies must act together: strengthen security, ensure fair and sustainable development, guarantee access to the tools for everyone, and support skills development through training.

Our conviction: crossing the Divide means investing in practical applications and, above all, in training. Generative AI can transform work, boost productivity, and sustain growth, but only if we learn to use it efficiently and safely.
