The people behind Amazon's voice assistant say that artificial-intelligence programs will need to see and explore the world before they can achieve real understanding.
Rohit Prasad, head scientist of Amazon's Alexa AI organization, has been wrestling with this problem. It is a difficult question, and one that shows both how much progress we have made in artificial intelligence and how far we still have to go. Prasad outlined the techniques behind Alexa and the intellectual limitations of all AI assistants.
Amazon's virtual assistant has been anything but a failure. The company launched Alexa in 2014 as the tireless, cheerful female interface to its Echo smart speakers, desktop devices that zero in on your voice and respond to spoken queries and commands.
With more than 100 million Echo devices sold since 2014, the product line's success has prompted Google and Apple to flood the market with competitors. Virtual assistants are now found in hundreds of different devices, including TVs, cars, headphones, baby monitors, and even toilets.
This popularity demonstrates how effective the software is at responding to simple requests; users would have little patience for a virtual assistant that was much dumber. But spend much time with one and the technology's shortcomings quickly show. Alexa is easily confused by follow-up questions or a stray "hmm," and it cannot hold a real conversation because it is tripped up by the ambiguity of language.
Prasad said that Alexa stumbles because the words we use carry more meaning than we often realize. Whenever you speak to another person, that person draws on everything they already understand about the world to interpret what you said. "Language is, by definition, complicated and ambiguous," he said in an interview before the meeting. "It requires reasoning and context."
Rather than simulating a human brain, Alexa relies on access to a vast encyclopedia of useful facts. By querying this knowledge base, Alexa can determine whether you are talking about a person, a place, or a product. But this is more hack than path to true intelligence; in many cases, the meaning of a statement remains ambiguous.
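The kind of knowledge-base lookup described above can be sketched roughly as follows. This is an illustrative toy, not Alexa's actual implementation: the data, the scores, and the `resolve_entity` function are all invented for the example.

```python
# Toy knowledge base: each surface form maps to candidate entities,
# each with a type (person / place / product) and a rough popularity
# score. All entries here are made up for illustration.
KNOWLEDGE_BASE = {
    "washington": [
        {"type": "person", "name": "George Washington", "score": 0.6},
        {"type": "place", "name": "Washington, D.C.", "score": 0.9},
    ],
    "echo": [
        {"type": "product", "name": "Amazon Echo", "score": 0.8},
        {"type": "place", "name": "Echo, Oregon", "score": 0.1},
    ],
}

def resolve_entity(mention, expected_type=None):
    """Pick the most plausible entity for a mention.

    If the surrounding request hints at a type (e.g. "buy an X"
    suggests a product), restrict candidates to that type first,
    then fall back to overall popularity.
    """
    candidates = KNOWLEDGE_BASE.get(mention.lower(), [])
    if expected_type is not None:
        typed = [c for c in candidates if c["type"] == expected_type]
        candidates = typed or candidates
    if not candidates:
        return "unknown"
    return max(candidates, key=lambda c: c["score"])["name"]

print(resolve_entity("Echo", expected_type="product"))  # Amazon Echo
print(resolve_entity("Washington"))                     # Washington, D.C.
```

Even in this tiny sketch the article's point shows through: when no type hint is available, the lookup can only fall back on popularity, and the genuinely ambiguous cases stay ambiguous.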
Even a simple question like "What is the temperature?" requires Alexa to do some reasoning. You might be asking about the weather outside, or you might want a reading from a thermostat or some other internet-connected device.
Prasad explained that Alexa has ways of trying to iron out these wrinkles: it knows where you are and what time it is, and it can draw on every question you have asked before, as well as the questions of other people in the same city. For example, if you ask it to play a particular song, Alexa might guess that you want a cover version rather than the original if enough people nearby have been listening to that cover.
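The contextual re-ranking described above can be sketched as a simple scoring function. Again, this is a hedged illustration under invented assumptions: the signal names (`local_play_counts`, `user_history`) and the weights are hypothetical, not Amazon's actual system.

```python
def rank_interpretations(candidates, context):
    """Re-rank candidate interpretations of an ambiguous request
    using contextual signals, as the article describes.
    """
    def score(c):
        s = c["base_popularity"]
        # Boost versions that are currently popular in the user's city.
        s += context["local_play_counts"].get(c["version_id"], 0) * 0.01
        # Boost versions this user has asked for before.
        if c["version_id"] in context["user_history"]:
            s += 0.5
        return s
    return sorted(candidates, key=score, reverse=True)

# Two interpretations of "play <song>": the original recording is
# more popular overall, but the cover is trending locally.
candidates = [
    {"version_id": "original", "base_popularity": 0.7},
    {"version_id": "cover", "base_popularity": 0.3},
]
context = {
    "local_play_counts": {"cover": 80, "original": 10},
    "user_history": set(),
}

best = rank_interpretations(candidates, context)[0]
print(best["version_id"])  # cover
```

Here the local signal (0.80 boost for the cover versus 0.10 for the original) outweighs the global popularity gap, so the cover wins, mirroring the article's example.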
But such situational information only gets Alexa so far. Decoding some statements requires a deeper understanding of the world, what we call "common sense."
Some researchers are studying how to give computers ways to build and maintain their own stores of common sense. And a growing number of practitioners believe that machines will not master language until they experience the world for themselves.