Adding Speech Recognition Capabilities to your NativeScript app
It's 2017 and speech recognition on phones and tablets finally no longer sucks. Without even needing external libraries. This blog post shows how to use a plugin to add speech-to-text capabilities to your NativeScript app.
Does speech recognition still suck?
It doesn't. Watch this 24 sec video so you can literally take my word for it:
Wow, iOS speech recognition is really impressive!
I know, right?! The nice thing is it works equally well on Android, and neither platform requires any external SDK - it's all built into the mobile operating systems nowadays.
I'm convinced, let's replace all text input by speech!
Sure, knock yourself out! Add the plugin like any other and read on:
$ tns plugin add nativescript-speech-recognition
Availability check
With the plugin installed, let's make sure the device has speech recognition capabilities before trying to use it (certain older Android devices may not):
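A sketch of what that check could look like, assuming the plugin's `SpeechRecognition` class and its `available()` method (check the plugin's README for the exact API of the version you installed):

```typescript
import { SpeechRecognition } from "nativescript-speech-recognition";

export class SpeechDemo {
  private speechRecognition = new SpeechRecognition();

  public checkAvailability(): void {
    // available() resolves to a boolean telling us whether we can use speech recognition
    this.speechRecognition.available().then(
      (available: boolean) => console.log(available ? "This device can listen 👂" : "No speech recognition here 😢"),
      (err: string) => console.log("Error checking availability: " + err)
    );
  }
}
```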
Starting 👂 and stopping 🙉 listening
Now that we've made sure the device supports speech recognition, we can start listening for voice input. To help the device recognize what the user says, we need to tell it which language to expect; by default the plugin uses the device language.
We also pass in a callback that gets invoked whenever the device has interpreted one or more spoken words.
This example builds on the previous one and shows how to start and stop listening:
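Roughly like this - assuming the `startListening` / `stopListening` methods and the `SpeechRecognitionTranscription` type described in the plugin's README, so double-check the names against the version you install:

```typescript
import { SpeechRecognition, SpeechRecognitionTranscription } from "nativescript-speech-recognition";

export class SpeechDemo {
  private speechRecognition = new SpeechRecognition();

  public startListening(): void {
    this.speechRecognition.startListening({
      // optional: when omitted, the device language is used
      locale: "en-US",
      // invoked every time the recognizer has interpreted one or more spoken words
      onResult: (transcription: SpeechRecognitionTranscription) => {
        console.log(`User said: ${transcription.text}`);
        console.log(`User finished?: ${transcription.finished}`);
      }
    }).then(
      (started: boolean) => console.log("Listening... 👂"),
      (errorMessage: string) => console.log(`Error: ${errorMessage}`)
    );
  }

  public stopListening(): void {
    this.speechRecognition.stopListening().then(
      () => console.log("Stopped listening 🙉"),
      (errorMessage: string) => console.log(`Stop error: ${errorMessage}`)
    );
  }
}
```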
iOS user consent
On iOS the startListening function will trigger two prompts: one to request permission for Apple to analyze voice input, and another one to request permission to use the microphone.
The contents of these "consent popups" can be amended by adding fragments like these to app/App_Resources/iOS/Info.plist:
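For example (these are the standard iOS usage-description keys; the strings are whatever you want your users to read):

```xml
<key>NSSpeechRecognitionUsageDescription</key>
<string>My speech recognition usage description</string>
<key>NSMicrophoneUsageDescription</key>
<string>My microphone usage description</string>
```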
Have feedback?
As usual, compliments and marriage proposals can be added to the comments. Problems related to the plugin can go to the GitHub repository. Enjoy!