The central idea of the VisioBone project is to leverage the brain's neuroplasticity to reproduce vision through tactile stimuli.
To achieve this, a device has been developed with three main components: a camera that captures the surrounding environment, image-interpretation software, and a series of tactile stimulators that deliver the processed visual information to the body.
But how does it work? Using a depth camera embedded in a glasses-like frame, the device gathers information about the distance of nearby objects and converts this data into tactile stimuli. The haptic feedback is delivered to the user through stimulators positioned on specially designed garments, giving the wearer a sensory map of the environment.
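The idea of turning a depth frame into a coarse "sensory map" can be sketched in a few lines. The code below is an illustrative sketch, not the project's actual software: the grid size, sensing range, and the linear nearer-is-stronger mapping are all assumptions made for the example.

```python
import numpy as np

def depth_to_map(depth, grid=(4, 4), max_range=4.0):
    """Downsample a depth frame (metres) into a coarse grid of
    stimulus intensities in [0, 1]; nearer objects -> stronger.
    Grid size and max_range are illustrative assumptions."""
    h, w = depth.shape
    gh, gw = grid
    out = np.zeros(grid)
    for i in range(gh):
        for j in range(gw):
            # Average distance inside one cell of the grid
            cell = depth[i*h//gh:(i+1)*h//gh, j*w//gw:(j+1)*w//gw]
            d = np.clip(cell.mean(), 0.0, max_range)
            out[i, j] = 1.0 - d / max_range
    return out

# A flat scene 2 m away yields a uniform mid-strength map
frame = np.full((480, 640), 2.0)
print(depth_to_map(frame))
```

Each cell of the resulting grid would then drive one tactile stimulator on the garment.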
The first prototype of the system, already tested successfully, divides the camera-captured image into four main areas, each representing an average object distance. Four vibration-motor actuators translate these distances into vibration frequency and intensity, enabling perception not only of the physical layout of the environment but also of additional information, such as object color.
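The four-area encoding described above can be sketched as follows. This is a hypothetical reconstruction for illustration only: the frequency range, the 4 m sensing range, and the linear distance-to-vibration mapping are assumptions, not the prototype's published parameters.

```python
import numpy as np

F_MIN, F_MAX = 50.0, 250.0   # assumed motor frequency range (Hz)
MAX_RANGE = 4.0              # assumed sensing range (m)

def region_distances(depth):
    """Average distance (m) in each image quadrant, ordered
    top-left, top-right, bottom-left, bottom-right."""
    h, w = depth.shape
    return [depth[:h//2, :w//2].mean(), depth[:h//2, w//2:].mean(),
            depth[h//2:, :w//2].mean(), depth[h//2:, w//2:].mean()]

def to_vibration(d):
    """Map one distance to a (frequency Hz, intensity 0-1) pair;
    nearer objects vibrate faster and more strongly."""
    closeness = 1.0 - min(max(d, 0.0), MAX_RANGE) / MAX_RANGE
    return F_MIN + closeness * (F_MAX - F_MIN), closeness

# Empty scene except for an obstacle 1 m away in the top-left quadrant
depth = np.full((480, 640), MAX_RANGE)
depth[:240, :320] = 1.0
for freq, power in map(to_vibration, region_distances(depth)):
    print(f"{freq:.0f} Hz at intensity {power:.2f}")
```

One actuator per quadrant keeps the encoding easy to learn; a richer channel (e.g. a distinct vibration pattern) could carry the color information mentioned above.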
The next phase will focus on developing a market-ready product with an increased number of stimulation points and an optimized design, integrated into a practical and lightweight garment that complies with medical device standards. It will also be built with biocompatible, hypoallergenic, and breathable materials, essential for applications requiring short- to medium-term skin contact.
With this innovative device, the MeDiTech team is making a significant advance in assisting visually impaired individuals, providing them with a new form of spatial perception that fosters autonomy and inclusion.