Sam Liang, CEO of AISense, a speech recognition technology and product start-up in Los Altos, says: “I totally agree with Vinod in terms of data. We focus on voice data. If you look at how people communicate, while you’re spending two days at a conference, for example, most of the time you are either speaking or listening. When we started the company a few years ago I was thinking, I can search my email from 10 years ago within seconds, but I can’t search anything I heard this morning; I keep forgetting things. Some say that 30% of an employee’s time is spent in meetings.
“But if you are a higher level manager, you probably spend more than 50% of your time in meetings, phone calls, video conferences. Where is this data? People have to take notes diligently, you have to type, you have to write on paper, notebooks. You can’t search anything on your paper notebook. We are trying to automate this process.”
Greg Fitzgerald, chief marketing officer at JASK says: “We’re looking at AI being applied for better situational analysis. The over-abundance of data means that a human physically cannot interpret it all on their own. We’re applying it in such a way that it is just enhancing and augmenting the intelligence that a human has to then make this subjective decision on whether they need to act on that information, and it’s starting to change the world to enhance what we do as people. In the financial trading world it will make a big difference to accuracy. The human touch will ensure that the machines don’t make mistakes.”
In terms of the challenges that AI faces, what are some of the key roadblocks? Is it just, on the human side, the sort of scepticism we have about the rise of the machines, or could there be technical roadblocks as well? What are the limitations to progress?
Fitzgerald says: “In my mind AI is like a child; it’s still learning. There’s supervised learning and unsupervised. For example, I taught my 11-year-old daughter what a bus looks like so she can get on the school bus. It’s big and long and yellow and has big wheels. But when she goes out to the street and there’s a selection of buses – it could be a city bus or other – she has to make that decision unsupervised, and she finds lots of variables that I didn’t mention. AI is still developing in terms of its intelligence. The challenge is still feeding it the information and trying to help it to learn the right way.”

www.ibsintelligence.com | © IBS Intelligence 2018
Liang says: “You should look at AI relatively. People say, what is AI? AI is anything that hasn’t been done yet. If we go back 50 years, everything we have today could be considered AI 50 years ago. Once it’s done, it’s not attractive anymore. You can do OCR pretty reliably today, but 10-20 years ago it was still a fantasy. It’s the same with speech recognition; we can recognise most of the words today, but still not perfectly. Regarding privacy, for our product privacy is one of the major challenges. People have a psychological resistance to being recorded. ‘Are you going to use this against me some time later?’ ‘Are you going to sue me some time later?’
“This question of whether we want AI to mimic human behaviour well enough to fool humans into thinking that AI is human has become a bar that is set, but it’s not really clear whether we want to achieve it,” says Liang. “You see that especially in the consumer-oriented world. Even though we realise that when we’re interacting with Alexa or one of the similar consumer-oriented products we’re interacting with a machine, it’s still becoming more and more human-like, and we now have this human-nature response of reacting to something that resembles a human in a human way. The same is true of robots: even if they are not particularly intelligent, if they are soft and cuddly then we treat them like animals or children, even though we realise they are machines. So this is one of these challenges. Then you see products like Sophia, a human-like robot with rudimentary AI that has got a lot of press recently. Or, more recently, Google Duplex.”
Peris says: “There’s nothing wrong with mimicking humans. If you think about it, if you want the machine to interface with a human, it’s only natural that it does so through our interfaces, which are speech, listening, natural language and so forth. Where I think we cross the line, from an ethics perspective, is deception. You as a human being want to know when you’re talking to a human and when you’re talking to a machine. I’m sure many of you have been pissed off like me when you get these robo-calls from banks and financial institutions, because they’re getting more and more sophisticated these days; they try to act like they drop the phone in the middle of the call just to hook you in, right? That’s where I would draw the line. You don’t want to deceive. It’s the same thing with Duplex – a couple of the criticisms I’ve seen are, one, that they didn’t announce upfront that this was Google Assistant or something like that, and two, that they were mimicking some of the pauses and things that human beings do, which seemed deceptive. I think you want to draw the line at being deceptive. Otherwise, just using it to interface with humans is perfectly fine.”