Zharis
Sovereign AI Compute — 10 Gb/s in 1U
A high-density embedded AI compute platform whose hardware was built entirely by Thelis — ingesting and processing video and audio streams at 10 Gb/s, on-premises, in a single rack unit. No cloud. No data leaving the premises. Full sovereignty.
Real-time AI on video and audio — at scale, on-premises, without the cloud.
The audiovisual industry — broadcast, live events, surveillance, archiving — has an exponentially growing need for real-time AI analysis of video and audio streams. The dominant answer has been the cloud: send the data upstream, process it remotely, receive results. But cloud processing comes with fundamental limitations: latency, bandwidth costs, dependency on connectivity, and above all, data sovereignty. For organizations handling sensitive footage — security operations, broadcasting rights-protected content, regulated environments — sending raw video and audio to external infrastructure is not always acceptable, and sometimes legally impossible.
The alternative — running AI inference directly on local hardware — has historically required compromises: either limited processing power, or large, complex, expensive infrastructure. The ReconnAIssance project set out to solve this: build a complete embedded AI solution capable of processing video and audio at 10 Gb/s, entirely on-premises, in a single 1U rack unit.
Either the bandwidth is there, or the system does not work.
One rack unit. Full hardware ownership. 10 Gb/s of AI inference — no cloud required.
Zharis is the hardware product that emerged from the ReconnAIssance R&D program — a Walloon consortium labelled by Pôle MecaTech and co-funded by the Walloon Region. Thelis owns the hardware entirely: architecture, electronics design, PCB, mechanical integration, thermal management, and production. Phoenix AI leads the software and AI layer. The result is a high-density compute platform packaged into a single 1U rack unit, built for edge deployment with no cloud dependency.
Full hardware ownership
Thelis owns the complete hardware stack — architecture, electronics design, PCB, mechanical integration, thermal management, and production. Every layer of the physical system was designed and built in-house, with no dependency on off-the-shelf integration platforms.
10 Gb/s edge processing
Zharis processes video and audio streams at 10 gigabits per second — entirely on-premises. No data leaves the hardware. For organizations where cloud transfer is impractical, cost-prohibitive, or legally impossible, this is the architecture that makes sovereign AI inference viable at scale.
Multi-modal AI platform
The platform supports standard and custom AI functions across a range of audiovisual applications — live broadcast, video surveillance, voice recognition, and archival analysis. Both video and audio modalities are supported simultaneously within the same hardware envelope.
Consortium model — hardware & AI
ReconnAIssance brought together seven partners across industry and academia. Thelis held full responsibility for the hardware stack. Phoenix AI led the software and AI layer. UCLouvain, ULiège, Sirris, ACIC, and WNM contributed research and expertise. The Walloon Region co-funded the program under Pôle MecaTech labelling.
A sovereign, rack-deployable AI compute platform — built in Wallonia, running anywhere.
Zharis positions Wallonia at the frontier of embedded AI hardware — demonstrating that state-of-the-art AI inference does not require cloud infrastructure, and that data sovereignty and processing performance are not mutually exclusive. The platform is deployable in any environment where connectivity is constrained, data is sensitive, or latency is non-negotiable.
Beyond the platform itself, ReconnAIssance represents a model for industrial R&D: a consortium of seven partners — hardware engineers, AI specialists, academic researchers — delivering a physical product with clear ownership boundaries and a shared technical ambition. Thelis contributed the layer that makes everything else possible.
Key Challenges
High-speed communication at 10 Gb/s
Processing video and audio streams at 10 gigabits per second is not a software problem — it is a hardware architecture problem. At that throughput, every interconnect, every bus, every interface becomes a potential bottleneck. Designing the internal communication fabric of Zharis required careful selection and validation of high-speed protocols, precise signal integrity work on the PCB, and a system architecture where data can flow at full rate without creating latency or packet loss. This is the kind of engineering constraint that cannot be approximated — either the bandwidth is there, or the system does not work.
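To give a sense of what 10 Gb/s means in stream counts, the sketch below does the back-of-envelope arithmetic. The per-stream bitrates are illustrative assumptions typical of each format class, not Zharis specifications, and protocol overhead is ignored:

```python
# Back-of-envelope check: how many AV streams fit in a 10 Gb/s ingest link?
# All per-stream bitrates below are illustrative assumptions, not Zharis specs.

LINK_GBPS = 10.0  # total ingest bandwidth

# Assumed per-stream bitrates, in Mb/s
STREAMS = {
    "uncompressed 1080p60 (ST 2110-style)": 2500.0,
    "HEVC 4K broadcast contribution": 40.0,
    "H.264 1080p surveillance": 8.0,
    "AAC stereo audio": 0.256,
}

def max_streams(link_gbps: float, stream_mbps: float) -> int:
    """Whole streams that fit in the link, ignoring protocol overhead."""
    return int((link_gbps * 1000) // stream_mbps)

for name, mbps in STREAMS.items():
    print(f"{name}: {max_streams(LINK_GBPS, mbps)} streams")
```

The spread is the point: the same link carries four uncompressed broadcast feeds or over a thousand compressed surveillance cameras, so the internal fabric has to sustain full line rate regardless of how the bandwidth is sliced.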
Thermal management in a 1U enclosure
High-density compute generates heat. Packing the processing power required to run real-time AI inference on 10 Gb/s streams into a single rack unit — a form factor defined by strict height constraints — makes thermal management one of the central design problems. The challenge is not just dissipating heat, but doing so reliably under sustained load, in a sealed enclosure, without active cooling becoming a noise or reliability liability. Every component placement, every airflow path, every thermal interface decision contributes to whether the system can run continuously at full performance or degrades over time.
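A first-order version of that sizing problem can be written down directly: the airflow a chassis needs is set by the dissipated power and the allowable rise between intake and exhaust air. The power and temperature figures below are illustrative assumptions, not Zharis measurements:

```python
# First-order forced-air sizing for a 1U chassis: Q = P / (rho * cp * dT).
# The 400 W load and 15 K air-temperature rise are illustrative assumptions.

AIR_DENSITY = 1.2           # kg/m^3, air at roughly 20 degC, sea level
AIR_SPECIFIC_HEAT = 1005.0  # J/(kg*K)

def required_airflow_cfm(power_w: float, delta_t_k: float) -> float:
    """Volumetric airflow needed to carry power_w watts away with a
    delta_t_k kelvin rise from intake to exhaust air."""
    m3_per_s = power_w / (AIR_DENSITY * AIR_SPECIFIC_HEAT * delta_t_k)
    return m3_per_s * 2118.88  # convert m^3/s to cubic feet per minute

# Example: 400 W of sustained compute, allowing a 15 K rise
print(f"{required_airflow_cfm(400, 15):.1f} CFM")  # roughly 47 CFM
```

The formula is only a starting point — it says nothing about impedance from densely packed boards or fan acoustics, which is exactly where the real 1U design work lies — but it shows why a tighter temperature budget or a higher power draw pushes airflow, and therefore fan noise and wear, up fast.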
Mechanical integration at the system level
A 1U rack unit imposes extreme spatial constraints on every component. Integrating high-speed interconnects, compute modules, power delivery, and thermal management into a chassis where millimeters matter — while maintaining accessibility for maintenance and respecting the mechanical standards that data center environments require — is a discipline that sits at the intersection of electronics engineering and industrial design. On Zharis, these constraints were not handled sequentially but co-designed from the start.
Full hardware ownership in a research consortium
ReconnAIssance is a live R&D program — the specifications evolve as the research progresses, and the boundary between what is known and what needs to be discovered shifts continuously. Thelis held full responsibility for the hardware stack — from component selection and PCB design to mechanical integration and production — within a consortium where the software, AI models, and academic research were handled by other partners. Maintaining clear ownership of that layer, while staying tightly coupled to the software constraints Phoenix AI was defining, required continuous technical alignment across the full system.
