We hear over 30,000 words per day, yet we have no way to structure them. MEMOR.AI is an innovative solution for structuring and navigating sound information for users of all abilities, inspired by blind people. Blind people depend heavily on sound, taking in roughly three to four times as much information from voices each day as sighted people. However, because sound is linear, searching and retrieving audio information by ear is time-consuming. MEMOR.AI combines machine learning with real-time physiological data and intuitive controls to give sound information instant structure.
Co-designers: Carolina Hermenegildo, Chao Dai, Tom Pais, Yu Hu
Collaboration: CERN (Geneva, Switzerland)