I had to build a website for Lab 33 that uses voice recognition and outputs the speech as a transcript inside a p tag. To do this lab, I had to use the browser's built-in SpeechRecognition constructor, assigning a new instance to a constant called recognition. Later I used the recognition constant to add an event listener for the 'result' event, with a callback function that pulls the recognized text out of the event as a transcript (event.results[0][0].transcript). The rest was quite simple: I just had to display that text on the page.
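The steps above can be sketched roughly like this. This is a minimal sketch of the Web Speech API setup I described, not the exact lab code; the element id transcript and the en-US language setting are my own assumptions:

```javascript
// Some browsers (Chrome) still expose this under a webkit prefix.
const SpeechRecognition =
  window.SpeechRecognition || window.webkitSpeechRecognition;

// The constant holding the new SpeechRecognition instance.
const recognition = new SpeechRecognition();
recognition.lang = 'en-US'; // assumed language setting

// The 'result' event fires when the recognizer has text to report.
recognition.addEventListener('result', (event) => {
  // Pull the recognized text out of the results list as a transcript.
  const transcript = event.results[0][0].transcript;
  // Show it on the page; assumes an empty <p id="transcript"> exists.
  document.querySelector('#transcript').textContent = transcript;
});

recognition.start();
```

Note that this only works in browsers that support the Web Speech API, and usually only over HTTPS with microphone permission granted.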
I did this lab as a browser built-ins exercise. In this lab, I learned how to use built-in browser features to add voice detection to my sites. In the future, I plan to use these skills whenever I need to add voice detection to a website, perhaps as an accessibility feature for people with disabilities. One recommendation I'd offer is to experiment with other languages and maybe do some reading on browser-based voice detection.
