ByteHD: Efficient Byte-Level Hypervector Compression for Memory-Constrained Embedded Systems
Abstract
Hyperdimensional Computing (HDC) has proven effective on a wide range of classification tasks, often outperforming traditional Machine Learning techniques in robustness to noise and computational cost, and lending itself to hardware-efficient implementations thanks to its highly parallelizable algebra. However, the main barrier to HDC adoption in memory-constrained embedded systems is its high memory requirement, on the order of O(n × D), where n is the number of hypervectors stored in memory and D is their dimension. In this work we present ByteHD, a lightweight compression library that reduces the memory footprint of bipolar hypervectors through efficient byte-level encoding. To test our approach, we implemented an HDC-based framework for anomaly detection in emergency lighting devices and evaluated its impact on memory usage, execution time, and classification accuracy. Experimental results show that ByteHD enables HDC even on highly resource-constrained embedded systems of the Internet of Things, e.g., the Raspberry Pi Pico 2, while maintaining classification performance comparable to uncompressed implementations. These findings underscore the potential of HDC as a practical and energy-efficient learning paradigm for edge intelligence, bridging the gap between theoretical advances and real-world embedded deployments.
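To illustrate the core idea behind byte-level encoding of bipolar hypervectors, the sketch below packs the components of a {-1, +1} vector into bits, eight per byte, cutting storage from D bytes (or more, if components are stored as integers) to ⌈D/8⌉ bytes. The function names and the use of NumPy are illustrative assumptions, not ByteHD's actual API; similarity on the packed form is computed with XOR plus popcount, which is equivalent (up to an affine transform) to the bipolar dot product.

```python
import numpy as np

def pack_bipolar(hv: np.ndarray) -> np.ndarray:
    # Map {-1, +1} components to {0, 1} bits and pack 8 per byte:
    # a D-dimensional hypervector becomes ceil(D / 8) bytes.
    bits = (hv > 0).astype(np.uint8)
    return np.packbits(bits)

def unpack_bipolar(packed: np.ndarray, dim: int) -> np.ndarray:
    # Inverse: expand the bits and map {0, 1} back to {-1, +1}.
    bits = np.unpackbits(packed)[:dim]
    return bits.astype(np.int8) * 2 - 1

def packed_similarity(a: np.ndarray, b: np.ndarray, dim: int) -> float:
    # Cosine-like similarity computed directly on the packed form:
    # XOR marks differing components, popcount totals them, and
    # 1 - 2 * hamming / D equals the normalized bipolar dot product.
    hamming = int(np.unpackbits(np.bitwise_xor(a, b)).sum())
    return 1.0 - 2.0 * hamming / dim

# Example: a D = 10000 hypervector shrinks from 10000 int8 entries
# to 1250 bytes, an 8x reduction, with a lossless round trip.
rng = np.random.default_rng(0)
hv = rng.choice(np.array([-1, 1], dtype=np.int8), size=10000)
packed = pack_bipolar(hv)
```

Because packing is lossless for bipolar values, query hypervectors can be encoded once and compared against stored class hypervectors entirely in the compressed domain, which is where the O(n × D) memory saving materializes at inference time.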