SPAES is claimed to overcome depth-sensing performance limitations for robotics and XR technologies, enabling machines to understand both the physical world and human behavior from the user’s point of view, advancing Physical AI.

“After five years of developing our technology, we see our vision being realised through optimisations with Snapdragon XR Platforms,” says VoxelSensors CEO Johannes Peeters. “With our sensors, which are ideally suited for next-generation 3D sensing and eye-tracking systems, and our inference engine for capturing users’ egocentric data, we see great potential in enabling truly personal AI agent interactions only available on XR devices.”

SPAES sensors detect individual photons and generate ultra-high-resolution 3D spatial data points at a rate of up to 100 million per second, with a refresh interval of 10 nanoseconds, resulting in ultra-low latency and ultra-low power consumption for depth sensing, imaging, and gaze tracking.

VoxelSensors has also developed PERCEPT, a software sensing and inference system designed for wearable and mobile XR and AR devices.

The SPAES technology works by localizing active laser points in space using a laser beam scanning triangulation method. This approach produces a continuous, serialised data stream of 3D points (voxels) without the need for complex stereo image matching, resulting in simplified processing and robustness against ambient light and concurrent optical interference.
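As an illustration of the triangulation principle described above (a generic sketch, not VoxelSensors’ actual implementation), the depth of a single scanned laser point can be recovered from the baseline between emitter and sensor and the two angles at which each sees the point. All function and parameter names here are hypothetical:

```python
import math

def triangulate_depth(baseline_m, scan_angle_rad, detect_angle_rad):
    """Recover the depth of a laser spot by triangulation.

    baseline_m: distance between the laser scanner and the sensor.
    scan_angle_rad: angle of the outgoing laser beam, measured from the baseline.
    detect_angle_rad: angle at which the sensor observes the spot, from the baseline.
    """
    # The emitter, the sensor, and the laser spot form a triangle.
    # The law of sines gives the sensor-to-spot range; depth is the
    # component of that range perpendicular to the baseline.
    spot_angle = math.pi - scan_angle_rad - detect_angle_rad
    range_from_sensor = baseline_m * math.sin(scan_angle_rad) / math.sin(spot_angle)
    return range_from_sensor * math.sin(detect_angle_rad)

# Example: a 10 cm baseline with both angles at 60 degrees forms an
# equilateral triangle, so the depth equals the triangle's height.
depth = triangulate_depth(0.10, math.radians(60), math.radians(60))
```

Because each detected photon directly yields one such angle pair, every point can be resolved independently as it arrives, which is what allows the serialised voxel stream without stereo image matching.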

The high temporal resolution and sensitivity are claimed to enable up to 100 times faster depth data acquisition than traditional technologies while requiring minimal energy, fewer than 10 detected photons per 3D point.

VoxelSensors says its technology extends the operating limits of current-day sensors while collecting human point-of-view data to better train physical AI models.

The first products from the collaboration will be available to select customers and partners by December 2025.