Founder of Euklydia, Dr. Oumeima Laifa Defends a Simple Idea
The value of AI lies not in its demonstration effects, but in its ability to improve decision-making. In this interview, she explains how to shift from a replacement logic to an augmentation logic, where governance, roles, KPIs, and a hybrid model make all the difference.
How to Shift from a Replacement Logic to an Augmentation Logic
If replacement is a technological obsession, augmentation is a true leadership strategy. In Tunisia, the debate is often poorly framed: the issue is not AI itself, but the organization it requires. Replacing employees assumes written processes, clear roles, clean data, and solid governance. Otherwise, we do not replace a human: we replace ambiguity with emptiness.

The paradigm shift is quite simple. In "replacement" mode, we reason as follows: "This task costs X. AI costs Y. If Y is less than X, we replace." This is a subtractive logic. In "augmentation" mode, the question becomes: "This person has potential Z. With AI, they can do three times Z. What is the ROI?" Here, we are in a multiplicative logic. The medical analogy helps: we do not replace a doctor with an MRI; we augment their diagnostic capacity. AI is a bit like an algorithmic microscope: it detects the invisible and frees up cognitive time for what really matters.

Hence a golden rule: diagnose before buying. The most frequent error is adopting a tool... and then looking for a use for it. As a result, the company remains stuck at the demo stage, never reaching impact. The right logic is not "augmenting a task" but augmenting a decision. Useful AI is not "another chatbot"; it is a system that improves a critical decision: lead times, margins, risk, quality, or customer experience.

Above all, we must embrace a hybrid model: humans retain responsibility, nuance, and context; AI takes on detection, simulation, and speed. The shift can be made in five steps: clarifying roles, targeting the high-value zones (where decision and relationship are at stake), training for the use case (not for the tool), creating human/AI pairs, and measuring value, not usage volume. Talking about replacement today is a bit like talking about autonomous cars when the rules of the road are not respected: human infrastructure precedes technological infrastructure.
The real question, ultimately, is: do we have the courage to clarify what each human really brings? Because augmentation requires lucidity... and lucidity, sometimes, is disturbing.
What Decisions Are Most "Augmentable" for a Manager: Prioritization, Arbitration, Control, Execution?
The principle is simple: the more strategic and structuring the decision, the more AI "augments" and creates value. Conversely, the more operational and repetitive the task, the more we shift towards automation. The most profitable AI for a manager is not the one that does their job, but the one that reveals what they did not see before making a decision. The classic error is to believe that AI is mainly there to "boost execution." In reality, it shines when it illuminates the decision, at several levels.

First, prioritization: this is the jackpot. Faced with noise and incomplete information, AI captures weak signals, reduces 100 topics to three actionable priorities, and reveals dependencies. Judgment remains human, but the decision becomes faster, clearer, and less blind.

Next, arbitration: very "augmentable," never delegable. Quality-time-cost trade-offs, resources, conflicts: AI models the trade-offs, tests scenarios, and quantifies impacts (ROI, risks, compliance, reputation). It does not choose; it avoids blind spots. Leadership remains human, only better informed.

Then, control: useful, if intelligent. Anomalies, early warnings, consistency audits, dashboards oriented toward "what to do now" rather than 200 notifications: AI strengthens steering. But more control without trust weakens the team. Augmented control must reduce micro-management, not industrialize it.

Finally, execution: automatable, therefore risky if poorly framed. Summaries, drafts, tickets, reminders: yes, AI accelerates. But without governance, it also accelerates damage: silent errors, hallucinations, data leaks... In Tunisia, where processes are often not documented, automation stalls. And yet, this is often where we invest first, with a sometimes low ROI.
What Signals Indicate that AI is Used as a Gadget Rather than a Management Lever?
We can see the difference quickly: a gadget impresses, a lever transforms. The right question for a manager is simple: does AI improve the quality of decisions and KPIs... or just the formatting of deliverables? When we pile up tools without integration, without steering, and without governance, we put on a show; we do not make an impact. Seven signals indicate that AI is being used as a gadget:
- We talk about the tool, never about the problem. "We have ChatGPT/Copilot," but what decision is really improved? If it's unclear: gadget.
- No business KPIs. "It's faster" is not enough: a lever is measured (lead times, errors, margin, quality, compliance, satisfaction).
- Many demos, zero routine. If AI does not enter rituals (weekly review, committee, decision pipeline), it remains a spectacle.
- Uncontrolled shadow AI. Each person tests in their corner, sometimes with sensitive data: risks, inconsistencies, impossible responsibilities.
- AI automates execution... in a fuzzy organization. Without clear processes, automation accelerates disorder.
- No criticism, no traceability. If no one challenges the results and nothing is documented (data, hypotheses, reasons), it's not a copilot: it's a generator.
- No business ownership. If it's an "IT topic" or a "communications topic" with no sponsorship from general management or operations: gadget.
How to Design an AI Copilot: What Tasks Does It Take On Before, During, and After the Decision?
Designing an AI copilot means giving it a clear role at each stage. The most frequent error is using it "a bit everywhere"... and therefore, effectively, nowhere.

Before the decision, it prepares the ground. It gathers dispersed information, detects weak signals, simulates scenarios, quantifies risks and impacts, and delivers a structured brief. The manager frames the questions, challenges, and validates.

During the decision, it illuminates the choice. It does not decide: it reduces blind spots. It surfaces key data at the right time, alerts to biases, and proposes options along with their consequences. The manager decides, with human nuance (political, relational, ethical).

After the decision, AI ensures follow-up: a neglected phase. This is where the organization learns. It documents, tracks indicators, alerts when results deviate, and capitalizes on lessons for future decisions. The manager interprets, adjusts, and transmits. Three months later, they receive a report, not a vague intuition.

A well-designed copilot does not make the manager less necessary; it makes their judgment more powerful: decision → action → measurement → improvement.
What are the Most Frequent Governance Errors: Shadow AI, Lack of Framework, Poorly Defined KPIs, etc.?
The most frequent governance errors are invisible at first... then explosive.

The first error is letting AI enter without architecture: no perimeter, no criticality levels, so the tool ends up touching sensitive topics by accident. The second is treating data as a detail: no single source of truth, contradictory definitions, uncontrolled quality. AI does not create clarity; it amplifies contradictions. The third is forgetting auditability: no trace of inputs, versions, hypotheses, or confidence levels, so it becomes impossible to defend a decision or correct it properly. The fourth is poorly designing validation: either we let everything pass, or we block everything; in both cases, adoption dies. The fifth is measuring the wrong indicators: we optimize "content produced" instead of optimizing the real impact on quality, risk, lead times, and margin.

Finally, the fatal error: no sponsor who decides. When no one has the authority to say yes or no, AI becomes a playground, then a minefield. Well-governed AI is not more tools, but clear limits, controlled data, and assumed responsibility.