Is Automated Transcription Software Good Enough? Not for the New York Times

If you are ChristineMcM, a New York Times commenter, you probably know all too well how automatic transcription software can mess things up for you.

As reported by The Daily Dot, she had something to say about a recent Trump article but had to take a phone call in the middle of writing her comment. Her automatic transcription software heard the whole conversation and posted it.

Yes, you read that right.

This is what it ended up posting.

This might be funny, but it shows where we currently stand with automated transcription.

Transcription is still mostly done by humans precisely to avoid such gaffes.

Although she later clarified the mistake, it left her followers and those close to her confused. Some even suspected that she might be having a neurological episode.

Here is her clarification:

Understanding these problems, Scribie is not looking to go down the same route.

Instead, we use technology and AI to help humans transcribe faster and better.

The industry is a long way from completely eliminating the human factor in the transcription chain (unless you can afford such a gaffe).

For the time being, the best way to get your file transcribed is a human transcriber equipped with cutting-edge technology that enables efficiency and high accuracy.

 

 

 

Building a Custom Deep Learning Rig

Deep learning is a very exciting field to be part of right now. New model architectures, especially those trained with Graphics Processing Units (GPUs), have enabled machines to do everything from defeating the world’s best human Go players to composing “classical music”. We wanted to take advantage of its applications in speech and language modeling, and started with AWS G2 instances. We soon found that training even very simple models on a small portion of our data took days at a time, so we decided to build our own rig with specialized hardware. … 
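The full post walks through the parts list; for a rough sense of the gap that pushed us off CPU-bound training, here is a minimal timing sketch. It assumes PyTorch and a toy model (the post above names neither), so treat it as an illustration rather than our actual training code.

```python
# Minimal sketch (assumed PyTorch; not the actual training setup described above):
# time a few hundred training steps of a toy model on CPU vs. GPU.
import time

import torch
import torch.nn as nn

def time_training(device: torch.device, steps: int = 200) -> float:
    model = nn.Sequential(nn.Linear(512, 2048), nn.ReLU(), nn.Linear(2048, 10)).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()
    x = torch.randn(256, 512, device=device)         # dummy batch of features
    y = torch.randint(0, 10, (256,), device=device)  # dummy labels
    start = time.time()
    for _ in range(steps):
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
    if device.type == "cuda":
        torch.cuda.synchronize()                      # wait for queued GPU work to finish
    return time.time() - start

print(f"CPU: {time_training(torch.device('cpu')):.2f}s")
if torch.cuda.is_available():
    print(f"GPU: {time_training(torch.device('cuda')):.2f}s")
```

Even on a toy model the difference is visible; on real speech and language models trained over many hours of audio, it is the difference between days and hours.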

 

Google Introduces Dictation in Google Docs

Google provides us with a variety of services and tools to make our lives easier. One tool in particular, voice dictation, is now available in Google Docs. It’s an easy feature that makes life run a little smoother for those who use it. Need to get an email sent? How about the notes for your next business meeting? Google Docs voice dictation makes that possible without you having to lift too many fingers.


To get started, you will need the latest version of Google Chrome installed and a microphone for your computer. With these set up, head to Google Drive and open a new Google Docs word processing document. Go to the top menu and select Tools, then Voice typing. A pop-up window will appear with a dark microphone icon in the middle. Once you click the microphone, it will turn red to signify that it’s recording, and you can start to speak.


It’s okay if you need to pause and think about your words as you’re speaking; Google will wait. When you’ve completed your dictation, click the microphone again to turn it off. It is important to note that punctuation needs to be dictated; for example, say “period” or “comma.”

An added benefit of voice dictation is that you can edit and format as well. Take the sentence, “I like pie.” To edit or format it, just say “select ‘I like pie’” and follow that with whatever formatting change you need to make. That could include “apply heading” or “apply underline.”

You can also create itemized lists by saying “create numbered list” or “create bullet list.” When you need to go to the next item on the list, just say “new line,” and say “new line” twice to finish the list. And no fear if you mess up! You can simply say “undo” to correct any mistakes.

For transcribers, these features can be a great time saver. Not only that, but they can reduce the amount of effort you have to put into typing up your latest project. Life made simple by Google. It’s as though Google just provided you with your own free secretary. For those of you who wonder what all you can do with your voice, Google has even made a complete list of commands for your viewing pleasure.
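If you are curious what speech-to-text looks like outside of Docs, here is a minimal sketch using the open-source SpeechRecognition Python package as a stand-in. It is not part of Google Docs or its Voice typing feature; it is simply an assumed example of dictating into a script from your microphone.

```python
# Minimal dictation sketch using the open-source SpeechRecognition package
# (pip install SpeechRecognition pyaudio). An assumed stand-in for illustration;
# unrelated to Google Docs' built-in Voice typing.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)  # calibrate for background noise
    print("Speak now...")
    audio = recognizer.listen(source)            # record until a pause is detected

try:
    # Sends the audio to Google's free web speech endpoint; no punctuation commands here.
    print("You said:", recognizer.recognize_google(audio))
except sr.UnknownValueError:
    print("Could not understand the audio.")
except sr.RequestError as err:
    print("Speech service unavailable:", err)
```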

 

Why Do We Still Need Humans for Transcribing Speech?

So, how is Siri doing on your iPhone? Would you happily have her replace your secretary?

Personally, I wouldn’t, because there are just too many ‘misses’ and ‘trouble spots’ that I wouldn’t want in my business.

The case is much the same when you count on software to transcribe your audio files instead of its ‘time-consuming’ human counterparts. Unfortunately, despite several attempts, science has not yet come up with a software solution that works like Aladdin’s magic lamp. And from what it seems, the genie isn’t coming out any time soon. Why? The reasons are many.

The English language can be very tricky and hence very difficult to master, especially when the learner in question is transcription software. Homophones pose a problem that most programs find impossible to overcome. For instance, will it be sale or sail, no or know, fair or fare? The list goes on. Unlike humans, who are blessed with critical thinking skills, software cannot comprehend the difference. Moreover, making these finer distinctions may be very difficult without context, which might not appear until further into the conversation.

The problem gets worse when the software needs to transcribe an interview or a dialogue involving many speakers. It is easy to guess why. Each of us has a unique style of speaking, and that personal style is heavily influenced and shaped by our geographical location, our culture, and our upbringing, to name a few factors. It is impossible to ‘teach’ software to recognize speech at that level of accuracy.

Audio quality is yet another issue, and a very important one. Any speech recognition or transcription software needs a clear piece of audio, and anyone in the transcription business knows that an impeccable audio file is a rare phenomenon.

On the accuracy of a human transcriptionist versus a software-driven one, Xuedong Huang, a senior scientist at Microsoft, says, “If you have people transcribe conversational speech over the telephone, the error rate is around 4 percent. If you put all the systems together—IBM and Google and Microsoft and all the best combined—amazingly the error rate will be around 8 percent.”
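The “error rate” Huang refers to is conventionally measured as word error rate: the number of substituted, inserted, and deleted words needed to turn the machine’s transcript into the human reference, divided by the length of the reference. Here is a minimal Python sketch of that calculation, written for illustration rather than taken from any of the systems above.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(dp[i - 1][j] + 1,   # deletion
                           dp[i][j - 1] + 1,   # insertion
                           substitution)
    return dp[len(ref)][len(hyp)] / len(ref)

# One wrong word out of five: a 20% word error rate.
print(word_error_rate("the boat is for sale", "the boat is for sail"))  # 0.2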

Now the real question is, would you settle for something that is twice as bad as humans? We know the answer. That is why we offer a transcription service that is among the best in the industry. Start uploading your files now!

 

Automatic Audio Transcription

Humans Are Better at Transcribing Than Robots

Audio transcription can be a long process, especially if you are a newbie in the field. For many, automatic audio transcription offers an easy alternative. But is the shortcut worth taking? The statistics suggest not.

Express Scribe, an automatic transcription program, offers an accuracy of around 40-60% when integrated with Microsoft Speech Recognition. Google Voice, on the other hand, offers approximately 80% accuracy, but only while transcribing voicemails; that figure drops significantly for conversational speech. The poor performance of automatic audio transcription and speech recognition software, even today, makes one wonder why that is. The reasons are plentiful.

The software fails to factor in the various styles of speaking

A language changes its character depending on who speaks it. For instance, the way English is spoken in the US is different from how people in India speak it. Teaching a software program to recognize the variations in human intonation and accent can be very challenging, and the problem multiplies when groups of speakers are involved. Analyzing voices can be equally frustrating for a program: the ease with which the human ear deciphers spoken words across a variety of voice qualities, such as hoarse, soft, or deep, does not carry over to software. In an ideal world, every speaker would speak clearly and carefully so that an automatic audio transcription system could transcribe them accurately. Unfortunately, we don’t get to work in an ideal-world scenario.

English can be a tricky language

Sale, sail. Year, ear. Feet, feat. You get the drift. Homophones can be quite tricky, and in spoken language they sometimes become impossible to tell apart without context. Quite obviously, this is a lot to expect from software, and it naturally leads to undesirable mistakes.

The better alternative

Hiring a transcription service with a team of experienced transcribers is still the best option. Old is gold when it comes to accuracy, at least in this context. Scribie is completely powered by humans and is hence able to consistently maintain an accuracy level of 99% or higher.

Want to find out for yourself? Start uploading your files now.