“I think AI systems will continue to become more autonomous when performing specific, well-defined tasks.” Eric Doyen, Founder and CEO of Id Interactive

It is with great pleasure that I welcome Eric Doyen, founder and CEO of Id Interactive.

Part I:

Introduction

Alexandre Martin, Times of AI™

Hi, could you tell the listeners of Times of AI™ about your professional background and your current work?

What sparked your interest in artificial intelligence (AI)?

What projects and business ideas are you working on? What are your goals and objectives? Who are they aimed at, and in which fields are they being implemented?

Eric Doyen, Id Interactive

A graduate of ESC Toulouse, I have led ID Interactive since 2006. The agency, based in Vannes, specializes in the design of web platforms, business applications, and digital acquisition strategies. My career has been shaped at the intersection of technology, web development, and digital marketing, with a strong focus on innovation and custom solutions.

For several years now, we have seen a dramatic increase in digital complexity within companies: a surge in data, growing automation needs, productivity demands, and challenges related to internal knowledge. It was against this backdrop that my interest in artificial intelligence began to grow.

We support around 100 small and medium-sized businesses. Our shift involves deploying 100% on-premise AI infrastructure—no cloud—with everything running on our servers in Vannes.

Our monthly SEO reports have gone from taking 30 minutes to just 3 minutes per client, without any client data ever leaving our infrastructure.

We also work more broadly on a local and hybrid AI approach for businesses, combining locally executed open-source models with cloud tools when it adds real value.

The goal is to provide AI systems that are useful, secure, and truly integrated into business operations:

  • Document assistance;

  • Knowledge structuring;

  • SEO/SEA analysis;

  • Decision support;

  • Process automation;

  • Software development assistance…

Our ambition is to make AI more tangible, more autonomous, and more practical for French and Breton companies.

I am also working on developments in organic search engine optimization (SEO) in light of the emergence of AI engines and conversational assistants.

SEO is gradually evolving toward what is known as GEO (Generative Engine Optimization), that is, the optimization of content for generative artificial intelligence.

This involves rethinking content structure, data quality, source authority, and how brands appear in AI-generated responses. For us, this represents a major transformation of the web and of companies’ digital visibility.
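One concrete, if partial, GEO lever is machine-readable structured data. The sketch below emits schema.org JSON-LD for an article; the field values are illustrative, and treating explicit authorship, dates, and cited sources as signals generative engines reward is an assumption, not an established standard:

```python
import json

# Illustrative schema.org Article markup: explicit authorship, publication
# date, and cited sources make content easier for generative engines to
# attribute. Headline and URLs are placeholders.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Local AI infrastructure for SMEs",  # hypothetical article
    "author": {"@type": "Organization", "name": "Id Interactive"},
    "datePublished": "2026-01-15",
    "citation": ["https://example.com/source-study"],  # placeholder source
}

json_ld = json.dumps(article, indent=2)
print(json_ld)
```

Embedding such a block in a page's `<script type="application/ld+json">` tag is one way to expose structure to crawlers and answer engines alike.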

Part II:

Artificial Intelligence (AI)

Alexandre Martin, Times of AI™

Artificial intelligence is a multidisciplinary field. Just as its applications vary, there are several definitions. How would you define artificial intelligence?

Eric Doyen, Id Interactive

For me, artificial intelligence is a tool that simulates certain aspects of human intelligence for specific purposes. In particular, it enables data analysis, content generation, task automation, and decision-making support.

The challenge today is not just technological: it is also economic, organizational, and strategic for businesses.

Alexandre Martin, Times of AI™

In your professional life or at your company, do you use Large Language Models (LLMs)? In what contexts do you use LLMs?

Eric Doyen, Id Interactive

Yes, we use Large Language Models on a daily basis at the agency, in two distinct contexts.

For employee support, we rely on cloud-based LLMs, primarily in generative AI and technical production:

  • Image and video generation;

  • Advertising creation;

  • HTML code generation;

  • Functional prototypes;

  • Assisted software development;

  • Code review;

  • Document structuring.

We are also exploring MCP protocols (Model Context Protocol) to enable AI to interact with third-party tools, databases, or business software.

The goal: to make AI an intelligent complement to traditional automation tools, capable of understanding context, analyzing information, and supporting decision-making.
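To make the MCP idea concrete, here is a minimal sketch of what a tool invocation looks like on the wire. MCP messages follow JSON-RPC 2.0; the tool name and arguments below are hypothetical, not from an actual Id Interactive integration:

```python
import json

# Sketch of an MCP (Model Context Protocol) tool invocation. MCP transports
# JSON-RPC 2.0 messages; an agent asking a server to execute a tool sends a
# "tools/call" request. The tool and its arguments here are invented.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "crm_lookup",  # hypothetical tool exposed by a CRM server
        "arguments": {"client_id": "acme-42"},
    },
}
print(json.dumps(request))
```

The server's response carries the tool's result back in the same JSON-RPC envelope, which is what lets one agent drive many third-party tools through a single protocol.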

When it comes to private or sensitive data, the rule is simple: customer data never leaves the agency. We therefore switch to open-source models run locally, within our own infrastructure. This applies to the analysis of SEO reports, sales and lead analysis, and certain internal business processes.

Specifically, our monthly SEO reports are generated by Qwen3-30B-A3B, a MoE (Mixture of Experts) model installed on our own server. Its architecture activates only the “experts” needed for each task, making it an ideal solution for high-performance local inference.
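The routing idea behind such MoE models can be sketched in a few lines. This is a toy illustration only, not Qwen's actual router: a learned scorer ranks the experts for each token, and only the top-k run, with their outputs mixed by softmax-normalised gate weights:

```python
import math

def moe_layer(token, experts, router_weights, k=2):
    """Toy Mixture-of-Experts step: score every expert for the incoming
    token, activate only the top-k, and mix their outputs."""
    scores = [sum(t * w for t, w in zip(token, col)) for col in router_weights]
    top = sorted(range(len(experts)), key=lambda i: scores[i], reverse=True)[:k]
    exp = [math.exp(scores[i]) for i in top]
    gates = [e / sum(exp) for e in exp]         # softmax over the k winners only
    outputs = [experts[i](token) for i in top]  # the other experts stay idle
    return [sum(g * o[d] for g, o in zip(gates, outputs))
            for d in range(len(token))]

# Four toy "experts" on a 2-d token; only k=2 of them run per token.
experts = [lambda t: [x + 1 for x in t], lambda t: [x * 2 for x in t],
           lambda t: [-x for x in t], lambda t: [0.0 for _ in t]]
router = [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]]
out = moe_layer([3.0, 1.0], experts, router, k=2)
```

In a real MoE transformer the experts are feed-forward sub-networks and routing happens per token per layer, but the sparsity principle is the same: most parameters sit idle on any given token.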

Our internal document RAG also runs entirely on-premises. Today, this is what enables us to produce monthly reports for our 100 clients while keeping 100% of the data within our organization.
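The core retrieval step of such a RAG pipeline can be sketched as follows. This is a minimal illustration in which bag-of-words cosine scoring stands in for real embeddings; the documents and the question are invented:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, docs, k=1):
    """Rank documents against the question; a production RAG would use
    embedding vectors and a vector store instead of word counts."""
    q = Counter(question.lower().split())
    ranked = sorted(docs, key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return ranked[:k]

# Invented documents standing in for an internal knowledge base.
docs = [
    "The SEO report for client Acme covers organic traffic and keywords.",
    "Holiday schedule for the Vannes office, updated each January.",
]
context = retrieve("what does the acme seo report cover", docs)[0]
prompt = f"Answer using only this context:\n{context}\n\nQuestion: ..."
```

The retrieved passage is then placed in the model's prompt, which is what grounds the locally run LLM in company data without that data ever leaving the infrastructure.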

This hybrid cloud approach for creative assistance, combined with on-premise storage for sensitive data, allows us to fully leverage the capabilities of AI while maintaining control over our data and costs.

Alexandre Martin, Times of AI™

What is your perspective on AI agents and agentic AI?

Eric Doyen, Id Interactive

I think we are witnessing a shift in the nature of AI. We’ve moved from conversational tools that we ask questions to systems capable of using tools, accessing data, and performing tasks semi-autonomously. This is the most transformative development of the past 18 months.

Alexandre Martin, Times of AI™

Does your company use AI agents? Why?

Eric Doyen, Id Interactive

Yes, we are actively experimenting with them, particularly in document automation, SEO workflows, and certain internal business tasks.

In R&D, we’re working specifically with OpenClaw, the open-source project that took off in early 2026. Within a few months, it became one of the leading examples of autonomous agent-based AI. NVIDIA has even made it the foundation of its NemoClaw stack, designed to run on DGX Spark, which is precisely the type of infrastructure we operate at the agency.

OpenClaw is not easy to implement; it remains a technically demanding project. But we chose open source for two reasons: data control and independence from major cloud platforms.

Alexandre Martin, Times of AI™

Do you see potential for businesses? Why?

Eric Doyen, Id Interactive

There is certainly great potential. To boost productivity, automate repetitive tasks, and, above all, make better use of internal knowledge — the kind that currently lies dormant on corporate servers. However, the data must first be cleaned up and made accessible.

Next, I believe that the AI agents most useful to companies will be those that run locally or in a controlled environment. An agent that executes code, reads files, and sends emails on your behalf cannot be hosted by a third party without raising serious questions of sovereignty and security.

Alexandre Martin, Times of AI™

What do you think of contextual AI?

Eric Doyen, Id Interactive

In my opinion, it’s the key to making everything we’ve just discussed actually work. Generic AI produces generic responses.

AI that understands the business, its objectives, data, history, constraints, and the company’s style and tone produces coherent and actionable responses. That’s exactly what RAG and protocols like MCP enable: feeding the AI with the context specific to each company.

Without that context, we’re stuck at the demonstration stage. With it, we move into production.

Part III:

The Future of AI

Alexandre Martin, Times of AI™

Questions About General Artificial Intelligence (GAI)

Do you think AI systems will be capable of achieving a high level of autonomy? Why?

Eric Doyen, Id Interactive

I believe that AI systems will continue to become more autonomous when performing specific, well-defined tasks.

However, I’m not sure that the pursuit of general artificial intelligence is the most relevant path. It seems more realistic, more reliable, and probably less expensive to develop specialized intelligences, each an expert in a specific field and capable of achieving a very high level of performance with few errors.

Ultimately, AI agents could coordinate these various specialized intelligences to produce highly efficient systems, without necessarily seeking to replicate comprehensive human intelligence. This approach seems to me to be more pragmatic today for businesses and professional applications.

Generally speaking, I believe that AI should never be completely autonomous. It can and should have a limited degree of autonomy, because that is exactly what makes it useful:

  • Execute;

  • Make decisions within a defined scope;

  • Act without being micromanaged.

But unlimited autonomy, without human oversight, without an off switch?

No. Otherwise, it’s “The Terminator,” and beyond the reference, it’s above all the end of human responsibility for decisions that should be ours to make.

Alexandre Martin, Times of AI™

Would you and your company be interested in using an AI agent within your department? What benefits would this bring to the company?

Eric Doyen, Id Interactive

Not as things stand. Deploying a GAI in an agency like ours would mean introducing a general-purpose system where we have specifically chosen to build specialized, controlled, and well-managed models.

I prefer three AIs that perfectly do their jobs—SEO report writing, documentation assistance, and code generation—to a general-purpose AI that would require constant monitoring and produce less predictable results.

That doesn’t mean I’m closing the door for good. If tomorrow a GAI system were to demonstrate that it can, within a strict governance framework, bring real added value to our business lines while ensuring data control, I would take a look. But my priority today is contextualization and operational efficiency, not the race for power.

As for the benefits to society at large—I imagine that’s also the point of your question—I think they will exist, but only if this technology isn’t concentrated in the hands of three or four global players.

A GAI that belongs to everyone, that can be hosted locally, governed democratically, and controlled by users: yes! A hegemonic AI that makes decisions for us: no!

The real societal benefit will depend entirely on the governance we are able to collectively put in place.

Alexandre Martin, Times of AI™

What emerging trend(s) do you believe in?

Eric Doyen, Id Interactive

I see 5 major convictions that will shape the coming years.

  • 1st conviction:

The end of general-purpose models, the rise of specialized models. Value will no longer lie in the raw power of LLMs, but in intelligence that does one thing—and does it perfectly.

Beyond performance gains, it is also a matter of the quality of the knowledge produced. A specialized model, grounded in verified sources of truth from scientific studies, industry standards, and validated data, produces more reliable, better-reasoned answers with far fewer hallucinations than a generalist model trained on the entire web.

It’s also a matter of energy efficiency. A model precisely sized for its task consumes far less energy than a generalist model handling the same query. On architectures like MoEs, only the relevant “experts” are activated for each token—a fraction of the model’s total parameters. Combined with local inference, this radically changes the energy footprint of AI in the enterprise.
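As a rough back-of-the-envelope check of that claim (the figures are approximate, taken from the model's own naming convention rather than an official specification):

```python
# Back-of-the-envelope: the "A3B" suffix of Qwen3-30B-A3B denotes roughly
# 3 billion active parameters per token out of roughly 30 billion total.
# Exact counts are approximate and vary slightly between sources.
total_params = 30.5e9
active_params = 3.3e9

active_fraction = active_params / total_params
print(f"~{active_fraction:.0%} of parameters are active per token")
```

So on the order of one tenth of the network does the work for any given token, which is the arithmetic behind both the speed and the energy argument for local MoE inference.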

More useful knowledge, more domain-specific reasoning, fewer fabricated errors, and less energy consumed. This is the most transformative development in my view, because it changes not only the business model of AI in the enterprise, but also the reliability of the decisions we can delegate to it and its environmental footprint.

  • 2nd conviction:

The rise of local and hybrid infrastructure. Sovereignty, privacy, data control: these are no longer marginal issues; they are key business decision-making criteria. Every company I work with is now asking the question: “Where does my data go when I use ChatGPT?”

  • 3rd conviction:

The one we don’t hear enough about: the wake-up call regarding the true cost of cloud AI. Many uses seem “free” today. But business models will evolve toward highly granular billing for queries, agents, and complex prompts. Today’s cheap AI will be tomorrow’s costly addiction. As companies delegate more tasks to AI, on-premise infrastructure will no longer be just an ethical choice—it will be an economic one.

  • 4th conviction:

SEO is becoming GEO.

It’s my line of work, so I see it firsthand. When an internet user asks a question, they no longer go solely to Google; they ask ChatGPT, Perplexity, and Gemini. And visibility in these responses isn’t achieved through the same strategies as traditional SEO. This is likely the fastest-paced shift digital marketing has seen in the past 20 years.

  • 5th conviction:

The fifth and most disruptive: MCP redefines the value of SaaS business software. Today, a CRM or ERP system derives its value from its interface and workflows.

With MCP protocols, AI agents access the data and actions of these tools directly, making the interface optional. SaaS platforms that open up via MCP become layers of actionable data. Those that resist by keeping their interface as a barrier will lose value.

For businesses, this means that a software choice must now incorporate a new question: “Will this tool be interoperable with my AI agents in two years?”

And beyond these 5 convictions:

A fundamental insight: the most transformative uses of AI will not be the most spectacular ones. Not the flashy demos, not the generated videos that go viral on social media.

But rather the quiet integration of AI into business tools and daily workflows. It may be less photogenic, but that is where the real value will be created.

Alexandre Martin, Times of AI™

From your perspective, what would be the ideal future of AI for you and your company?

Eric Doyen, Id Interactive

For me, the ideal evolution of AI is one that is useful, context-aware, and integrated into real-world business operations—not one that floods the web with empty content. The next two years will force us to collectively choose between two opposing paths.

The approach I advocate:

It is an AI that assists humans, facilitates the flow of knowledge within the company, and frees up teams to focus on tasks that require judgment and creativity. An AI that is well governed, managed locally, energy-efficient, economically sustainable, and that creates value—and thereby enriches the country.

The trend that concerns me:

It’s slop AI — the mass production of content aimed at generating volume, grabbing attention, or manipulating public opinion. Videos churned out en masse, articles written by models to be read by other models, images designed to generate clicks without adding any value.

A veritable underground economy has even formed around this phenomenon: operators based in low-cost countries mass-produce AI-generated content for a few dollars, which then floods social media. All of this is monetized through social media’s viral reward mechanisms, with no quality control whatsoever.

This type of AI (slop AI) clutters the information ecosystem and consumes a considerable amount of energy to produce disposable content. The greatest risk isn’t that AI will become intelligent; it’s that it will make us collectively lazier. We’ll become more dependent on content produced without intention and more passive in the face of information. This is a societal issue, not just a technological one.

The challenge, therefore, is no longer whether to use AI—everyone will use it—but rather to choose what kind of AI we build and how we use it. AI that elevates us, or AI that dulls us.

By establishing a clear regulatory framework and steering its use toward what is useful, lawmakers bear a heavy responsibility: to guide citizens toward practices that are individually useful and collectively beneficial.

Part IV:

Regulation of AI

Alexandre Martin, Times of AI™

In your opinion, is the implementation of regulations on artificial intelligence a solution for better regulating artificial intelligence? Why?

Eric Doyen, Id Interactive

Yes, I believe a regulatory framework is essential to guide the development of artificial intelligence on a global scale. This revolutionary technology is too powerful and too transformative to be left solely in the hands of actors driven by financial interests or those with malicious intent.

The stakes go far beyond technology:

  • Sovereignty;

  • Disinformation;

  • Security;

  • Employment;

  • Concentration of power;

  • Economic dependence.

The issue of the creation and, above all, the distribution of the wealth generated by AI will also become central in the coming years. It will therefore be necessary to strike a balance between innovation, the protection of citizens, and the collective interest.

I also believe that training and education regarding these technologies will be essential. Citizens and businesses need to understand how AI works, its limitations, and its impacts in order to truly take ownership of it—rather than remaining mere users dependent on tools they don’t fully understand. Technology is a bit like politics: if you don’t take care of it, it ends up taking care of you.

Alexandre Martin, Times of AI™

Among the various existing regulations on artificial intelligence, which one do you think is the most effective for regulating artificial intelligence? Why?

Eric Doyen, Id Interactive

I believe that European regulations are currently the most advanced in the world when it comes to governing artificial intelligence, particularly with the AI Act. Europe is striving to strike a balance between innovation, citizen protection, and the accountability of technology companies.

This regulation also has the merit of raising essential questions regarding transparency, high-risk uses, data governance, and human oversight. Even though it will likely need to evolve as innovations advance, I believe it provides a solid foundation for preventing abuses and fostering a more responsible development of AI.
