Interview between Loïc Debeugny (ArianeGroup) and Maxim De Clercq (EDGX)
As part of the ENLIGHTEN project, EDGX is developing AI-based tools to detect anomalies in rocket engines in real time, supporting faster and more reliable test operations.
During the webinar “From Lab to Launcher: Training AI to Assess Performance,” Loïc Debeugny (LD) of ArianeGroup spoke with Maxim De Clercq (MDC) of EDGX about the AI Health Monitor: a system that watches the engine, flags early anomalies, and runs entirely on edge hardware.
LD: In one sentence, what does the AI health monitor do?
MDC: It watches the engine in real time and flags early anomalies—so test teams get a fast, confidence-scored heads-up while the controller keeps full authority.
LD: What did your latest tests show?
MDC: We’ve run hardware-in-the-loop with injected faults and exercised the pipeline on simulated test data. Results meet our timing budgets; we’re now improving detection reliability and repeatability to satisfy strict requirements.
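The injected-fault idea can be pictured with a small sketch. The actual HIL rig and fault models are project internals; everything below is illustrative: replay a telemetry channel and superimpose a known fault, here a constant sensor bias, from a chosen sample onward.

```python
def inject_bias_fault(stream, start_idx, bias):
    """Replay telemetry samples, adding a constant sensor bias from
    start_idx onward (one common class of injected fault; values illustrative)."""
    for i, x in enumerate(stream):
        yield x + bias if i >= start_idx else x

# Example: a flat 10-sample channel with a +2.0 bias injected at sample 5.
clean = [1.0] * 10
faulty = list(inject_bias_fault(clean, start_idx=5, bias=2.0))
```

A detector that passes such replays with known fault onsets gives a ground truth against which detection latency and false-alarm rates can be measured.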
LD: Why do you use different AI models for start-up versus steady running, and how are they optimized for edge hardware?
MDC: Because the engine behaves very differently at ignition and during steady-state. Start-up is a burst of rapid change; steady running is about small drifts. That naturally pushes us toward different model families: quick-to-react models for the transient window and drift-sensing models for the stable window. For operators it stays simple: the system spots “this looks unusual,” checks if it matters, and names the issue. Everything runs on a small onboard computer under strict timing, so alerts arrive in real time while the controller remains fully in charge.
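The split between a quick-to-react transient detector and a drift-sensing steady-state detector can be sketched with two classical stand-ins (the real system uses learned models; the class names, windows, and thresholds here are hypothetical): a jump detector for the ignition window and a one-sided CUSUM for slow drift in steady state.

```python
from collections import deque


class TransientDetector:
    """Flags abrupt changes: a sample-to-sample jump much larger than
    the typical jump over a short recent history (start-up window)."""

    def __init__(self, window=20, k=6.0):
        self.hist = deque(maxlen=window)
        self.k = k

    def update(self, x):
        if len(self.hist) >= 2:
            h = list(self.hist)
            diffs = [abs(b - a) for a, b in zip(h, h[1:])]
            typical = sum(diffs) / len(diffs) or 1e-9
            anomalous = abs(x - self.hist[-1]) > self.k * typical
        else:
            anomalous = False
        self.hist.append(x)
        return anomalous


class DriftDetector:
    """One-sided CUSUM: accumulates small deviations from a nominal mean
    until they cross a threshold (steady-state window)."""

    def __init__(self, mean, slack=0.5, threshold=5.0):
        self.mean, self.slack, self.threshold = mean, slack, threshold
        self.pos = self.neg = 0.0

    def update(self, x):
        self.pos = max(0.0, self.pos + (x - self.mean - self.slack))
        self.neg = max(0.0, self.neg + (self.mean - x - self.slack))
        return self.pos > self.threshold or self.neg > self.threshold
```

The transient detector reacts within one sample to a large step, while the CUSUM deliberately ignores single samples and only fires once a small offset has persisted, which is the behavioural split described above.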
LD: How do you keep inference fast and predictable, and what was the toughest technical challenge/innovation?
MDC: We budget latency per model and use Triton’s per-model scheduling to keep execution deterministic; the system returns an advisory with a confidence score while the engine controller retains full authority. The hardest part was synchronising many high-rate channels while keeping latency deterministic; our key innovation is a Triton-based stack on Orin with strict timing envelopes and explicit uncertainty outputs.
LD: Why include cameras and microphones as well as telemetry?
MDC: Cameras and microphones sit on the launch platform and test stand; they are not connected to the in-flight health monitor. We use them to give the ground team an independent check alongside telemetry: they can reveal brief visual or acoustic cues (plume changes, pad acoustics) earlier in some cases, or confirm an event seen in the sensors. They are most useful for pad and test operations and post-test analysis, not for pinpointing something inside the engine during flight.
LD: Why does this matter for industry?
MDC: Earlier detection can prevent aborted tests, shorten post-test investigations, and support faster turnaround. Because it’s an add-on alongside the controller, it fits upgrade paths toward lower-cost, more reusable propulsion.
LD: Where are you on TRL?
MDC: Today we’re around TRL-5 (validated in a relevant environment via HIL). ENLIGHTEN-ED is the next step: an integrated engine demonstration targeting TRL-6, with on-engine hot-fire runs planned in the next phase to prove end-to-end real-time performance and harden the models for operations.
Watch the full webinar recording here: youtu.be/nEGhqWpGu3s