🌐 We loved presenting at the Out-Of-Pocket Gen AI x Healthcare Ops Hackathon. Our very own Davis Liang, Staff Machine Learning Scientist, demoed Abridge and spoke about the importance of multilinguality in health tech.

🗣️ Did you know?
• Over 350 languages are spoken in the United States.
• 20% of Americans speak two or more languages.
• More than 11% of patients in California-licensed hospitals prefer to speak Spanish (CA Dept. of Health Care Access and Information, 2021).

Yet multilingual performance remains a significant challenge today, even for state-of-the-art models like GPT-4 (The Belebele Benchmark, Bandarkar et al., ACL 2024). Tokenizers are often biased towards English, making inference more expensive and less efficient for languages like Arabic, Hindi, and Chinese.

Davis also discussed ways we can start tackling this issue:
• Leverage both English and non-English data for training.
• Consider up-weighted sampling of important languages during training.
• Increase the rank for parameter-efficient fine-tuning on multilingual data.
• Construct intelligent multilingual vocabularies (XLM-V, Liang et al., EMNLP 2023).

At Abridge, we care deeply about multilingual performance. Our speech recognition is tuned to handle medical conversations across 14+ languages, coping with cross-talk, background noise, and an evolving landscape of maladies, medications, and practice patterns.

Learn more about our AI here: https://www.abridge.com/ai
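A toy sketch of the tokenizer-bias point above (an illustration we wrote for this post, not Abridge's actual pipeline): byte-level BPE tokenizers start from UTF-8 bytes, and scripts like Devanagari or Chinese need several bytes per character, so before any merges — and especially with English-heavy merge tables — non-English text is split into many more units.

```python
# Illustrative only: compare the UTF-8 byte count (the unit count a
# byte-level tokenizer starts from, before any BPE merges) across scripts.
greetings = {
    "English": "hello",   # Latin script: 1 byte per character
    "Arabic":  "مرحبا",    # Arabic script: 2 bytes per character
    "Hindi":   "नमस्ते",     # Devanagari: 3 bytes per character
    "Chinese": "你好",     # CJK: 3 bytes per character
}

def byte_count(text: str) -> int:
    """UTF-8 bytes in `text` — the pre-merge unit count for byte-level BPE."""
    return len(text.encode("utf-8"))

for lang, word in greetings.items():
    print(f"{lang:8s} {len(word)} chars -> {byte_count(word)} bytes")
```

English "hello" is 5 bytes for 5 characters, while the 6-character Hindi greeting takes 18 bytes — roughly 3x the raw units per character, which translates into longer sequences and higher cost unless the vocabulary is built with those scripts in mind.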
Davis—the man, the myth, the legend. 🎤
Learned a ton, and the live demo was also pretty mind-blowing - especially loved the audience participation!
WTG, Davis!
I learned a lot! Thanks for coming and presenting!
❤️❤️❤️
LET’S GO Davis Liang 🙌🏼
Way to go!
Exciting!
I recently listened to the Heart of Healthcare podcast with Shivdev Rao (https://www.heartofhealthcarepodcast.com/episodes/10x-the-incumbent-ai-dr-shiv-rao-md-abridge), where Abridge's multi-language capabilities were discussed towards the end. I'm curious about any advancements in this technology since the podcast was published (which was only a month ago 😄), particularly regarding the quality assurance of summaries when translated back into the patient's native language. Were there any updates on a timeline for this topic in the recent presentation? I'd love to hear more, or even be able to watch this presentation somehow!