There’s long been talk in medicine about the need for doctors to listen to patients. Now that advice is being taken literally to diagnose concussions and other elusive brain disorders. Whether caused by a blow to the head during a football game or in an accident, concussions are notoriously difficult to identify.
A Utah-based startup is easing that process using AI and the patient’s voice to detect telltale shifts in vocal patterns — shifts human ears can’t pick up — to help doctors make the right call. The company, Canary Speech, is building voice tests that use GPU-accelerated deep learning to pick up the subtle voice tremors, slower speech and gaps between words that may reveal brain injuries, or warn of diseases such as Parkinson’s or Alzheimer’s.
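Canary Speech’s models and feature set are proprietary, but the kinds of signals described above, such as pauses between words, speaking rate and pitch instability, can be illustrated with ordinary audio tooling. The sketch below is a minimal, hypothetical example using the open-source librosa library; the feature choices, thresholds and the file name "patient_sample.wav" are assumptions for illustration, not the company’s pipeline.

```python
# Illustrative sketch only: not Canary Speech's actual method.
# Shows how crude vocal-timing features (pauses, speaking rate, pitch
# variability) might be extracted from a recording before being passed
# to a deep learning classifier.
import numpy as np
import librosa

def vocal_timing_features(audio_path: str, sr: int = 16000, top_db: int = 30) -> dict:
    """Extract rough pause, rate and pitch-variability features from one recording."""
    y, sr = librosa.load(audio_path, sr=sr)

    # Non-silent intervals (sample indices); the gaps between them approximate pauses.
    intervals = librosa.effects.split(y, top_db=top_db)
    speech_time = np.sum(intervals[:, 1] - intervals[:, 0]) / sr
    gaps = (intervals[1:, 0] - intervals[:-1, 1]) / sr if len(intervals) > 1 else np.array([])

    # Onset density over voiced time is a rough proxy for speaking rate.
    onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time")
    rate = len(onsets) / speech_time if speech_time > 0 else 0.0

    # Frame-level pitch; its spread is a crude stand-in for tremor or instability.
    f0 = librosa.yin(y, fmin=librosa.note_to_hz("C2"),
                     fmax=librosa.note_to_hz("C5"), sr=sr)
    f0 = f0[np.isfinite(f0)]

    return {
        "mean_pause_s": float(np.mean(gaps)) if gaps.size else 0.0,
        "max_pause_s": float(np.max(gaps)) if gaps.size else 0.0,
        "onsets_per_s": float(rate),
        "f0_std_hz": float(np.std(f0)) if f0.size else 0.0,
    }

# Hypothetical usage:
# features = vocal_timing_features("patient_sample.wav")
```

In practice, a production system would learn such representations directly from audio with GPU-accelerated deep networks rather than rely on a handful of hand-built features; the sketch is only meant to make the underlying signals concrete.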