It’s Thursday, which means it’s time to catch up on all the latest in voice tech for kids!
This week: an inside look at acoustic models, a webinar invite to share with your edtech colleagues, privacy as a priority in smart toys, and how our language models differentiate homophones. Read on!
Lessons from Our Voice Engine #5: Acoustic Models
Speech Recognition Engineer Armin Saeb brings you the fifth installment in our “Lessons from Our Voice Engine” series. Armin explains acoustic models and why they’re an important component in a speech recognition engine, particularly an engine like ours that caters to kids’ variable and spontaneous speech.
Over 120 edtech professionals have registered for our Voice in EdTech webinar with VP of Global Sales Jon Hume, Founder Dr. Patricia Scanlon, and VP of Speech Technology Dr. Amelia Kelly. Already registered? There’s still time to share the link with your colleagues so they don’t miss out. We go live next Tuesday, May 25 at 12 noon ET, 9 a.m. PT.
It’s so good to see that privacy will be a key factor in selecting the winners at the World Economic Forum's Smart Toys Awards, an inaugural event celebrating ethically and responsibly designed AI-powered toys. As a privacy-by-design company, we only work with AI and toy companies that share our commitment to protecting kids’ fundamental right to privacy. The awards are happening live on Saturday, May 22. See you there!
Q: How does your voice engine differentiate between homophones, like “hare” vs. “hair”?
A: Great question! Our voice technology uses language models, which predict how likely each word is given its context, to work out which word is meant. So if a child says, “I have long hair,” our voice engine draws on what it has learnt about human language and how words are used together to determine that the child means “hair,” not “hare.”
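For the curious, here’s a minimal sketch of the idea in Python. It is not our production engine: the bigram probabilities below are invented for illustration, and a real language model learns millions of such statistics from data. But it shows the core trick of scoring each homophone against its context and picking the likelier one.

```python
import math

# Toy bigram probabilities, P(word | previous word). These numbers are
# made up for illustration only; a real language model estimates them
# from large amounts of transcribed speech and text.
BIGRAM_PROBS = {
    ("long", "hair"): 0.08,
    ("long", "hare"): 0.0001,
    ("a", "hare"): 0.002,
    ("a", "hair"): 0.0005,
}

def score(prev_word: str, candidate: str) -> float:
    """Log-probability of a candidate word given the previous word."""
    # Unseen pairs get a tiny floor probability instead of zero.
    return math.log(BIGRAM_PROBS.get((prev_word, candidate), 1e-9))

def pick_homophone(prev_word: str, candidates: list[str]) -> str:
    """Choose the homophone the language model finds likeliest in context."""
    return max(candidates, key=lambda w: score(prev_word, w))

# "I have long ___" -> context favours "hair"
print(pick_homophone("long", ["hair", "hare"]))  # hair
# "I saw a ___"     -> context favours "hare"
print(pick_homophone("a", ["hair", "hare"]))     # hare
```

The same context flips the answer the other way, too: after “I saw a,” the model prefers “hare.” Context, not sound, is what breaks the tie.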
Thanks for catching up with SoapBox Labs. Until next week!
Communications @ SoapBox Labs
PS: Did someone forward this to you? You can hop on the newsletter right here.