2016 Code for the Mission
Research: mHealth: Sensors, Wearables & Data Mashups
To many parents, the sound of their baby’s voice is music to their ears. For parents who are deaf or hearing-impaired, this music is silent. To allow every baby to be heard, we are creating “ChatterBaby,” a baby monitor for deaf parents, delivered as an application (app) on the iPhone and Android platforms. The app will translate an infant’s vocalizations into “states” such as hungry, fussy (fatigue, irritability), pain, neutral (babble, coo), or happy (laughter, squeal), creating a novel interpretation device that is both inexpensive and accessible for deaf parents and their babies aged 0-2 years.

Research has consistently shown that a baby’s vocalizations can promote bonding, decrease parental stress levels, and communicate needs such as pain or hunger. Far from a random biophysical phenomenon, the cries of infants show remarkable predictability; acoustic features such as formants and tenseness can help discriminate among cries of hunger, pain, and fussiness. An increased rate of parental response to vocal cues is associated with increased language comprehension. Translating a child’s vocal cues to inform and encourage parental responses may therefore stimulate language comprehension during a critical period of development.

Our brains are remarkable in their ability to efficiently use all available sensory inputs. In neuroimaging studies of the blind, the visual cortex region of the brain is recruited to hear and to feel. Similarly, deaf individuals repurpose parts of their auditory cortex for vision and somatosensation. Sensory substitution allows deaf individuals to use contextual information in place of auditory information. ChatterBaby will substitute labeled states for infant sound, so that deaf parents can interpret and respond appropriately to their infants’ and toddlers’ vocal cues.
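To make the acoustic-feature idea concrete, the sketch below shows, in minimal pure Python, two simple cues that cry-classification systems commonly start from: an autocorrelation-based fundamental-frequency (pitch) estimate and the zero-crossing rate. This is an illustrative assumption on our part, not ChatterBaby's actual pipeline; the sample rate, the synthetic tone standing in for a recorded cry segment, and all function names are invented for the example.

```python
import math

SAMPLE_RATE = 8000  # Hz; assumed for this sketch, not a project requirement


def synth_tone(freq_hz, duration_s=0.5):
    """Generate a pure tone as a stand-in for a recorded cry segment."""
    n = int(SAMPLE_RATE * duration_s)
    return [math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE) for i in range(n)]


def zero_crossing_rate(signal):
    """Fraction of adjacent-sample pairs that change sign."""
    crossings = sum(1 for a, b in zip(signal, signal[1:]) if (a >= 0) != (b >= 0))
    return crossings / (len(signal) - 1)


def estimate_pitch(signal):
    """Crude fundamental-frequency estimate: pick the autocorrelation peak
    over lags corresponding to 80-1000 Hz."""
    lo = SAMPLE_RATE // 1000  # shortest lag considered (1000 Hz)
    hi = SAMPLE_RATE // 80    # longest lag considered (80 Hz)
    best_lag = max(
        range(lo, hi),
        key=lambda lag: sum(signal[i] * signal[i + lag]
                            for i in range(len(signal) - lag)),
    )
    return SAMPLE_RATE / best_lag


if __name__ == "__main__":
    cry = synth_tone(400)  # infant cry fundamentals often fall near 300-600 Hz
    print(f"pitch ~ {estimate_pitch(cry):.0f} Hz, "
          f"zero-crossing rate = {zero_crossing_rate(cry):.3f}")
```

In a real classifier these cues would be computed per frame over recorded audio, combined with richer features such as formants, and fed to a trained model that outputs the labeled states above.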