And Android voice assistants aren't the point of this interview, anyway. It's more about the process of developing interactive, voice-based I/O systems. This whole voice/response thing is an area that's going to take off any year now -- and has been in that state for several decades -- but may finally be going somewhere, spurred by intense competition between the many companies working in this field, including Ilya's.
Ilya Gelfenbeyn, CEO of Api.ai: We're building a platform for conversational user experiences. So, adding speech to things, to applications, to devices.
Robin 'Roblimo' Miller for Slashdot: You're relying on other speech-to-text modules, or you’re doing your own?
Ilya Gelfenbeyn: Both. We do have our own, but our clients can also choose to use third-party speech recognizers. Our main focus is on the next step: when we get the text, we want to understand what the text is about and then support some kind of conversation with the user.
Slashdot: Okay. So, you're saying it's a super Eliza Bot.
Ilya Gelfenbeyn: Well, in a way, yeah. Initially there was this Eliza Bot, and then there was a lot of development; the technology has changed completely a couple of times since. But yes, it is pretty close. The main difference, I would say, is that Eliza Bot's purpose was just to talk to you, just to keep the conversation going. Our target is actually to understand your intent and fulfill an action: do something for you, do something meaningful.
Slashdot: See, now, I'm getting this vision of an extremely abrasive Hollywood movie executive. And he is yelling at his, whatever it is, the thing, yelling at it, and cursing it with a lot of bad words. How does your system react to that?
Ilya Gelfenbeyn: It all depends on how you train it. So, just to give you some background: we started with a personal assistant, something like Siri in a way, and this is the kind of agent that we train ourselves. It works with fifty-something services: search, content providers like Yelp and Google Places, weather, news, and so on. This is where we can train it, and it is up to us how it reacts. It was our first product. It has about 26 million subscribers, and we are getting about 30,000 new users joining every day, which makes us the largest independent assistant in the market.
Slashdot: What is the name of your assistant?
Ilya Gelfenbeyn: Well, it was initially called just Assistant. So, if you search Google for "assistant," we'll be in first place. Now it is called Assistant.ai, which is a similar thing, just not as generic. So we started with that. We are also the highest-rated voice assistant: our user rankings in the Android Play Store are higher than any competitor's, higher than Google Now, higher than anyone else's. And then we decided that our market could be much broader if we were not just selling our own assistant but opened the engine to developers of apps and devices. This is where they can teach their agents however they want. So, back to your question: it all depends on how you train it. In our case, the assistant will still reply, because its main goal is to fulfill actions. It will understand that you're cursing, that you're saying bad words; it may say something relevant, maybe tell a joke, but the end goal is to fulfill an action.
Slashdot: What are your use cases?
Ilya Gelfenbeyn: Yeah. So, for developers it is obviously not the assistant itself; I mean, they can use the assistant in their everyday lives, but what we offer them is the API. The API has two parts. One is a toolset where you can build interaction scenarios and design how a conversation may go with the user. The other is the engine itself, which accepts voice or text from you and returns objects representing the meaning of what the user is saying. To start, a developer registers with the API; it is free. Unless you'd like some enterprise offering, with support and consulting and everything, it is free without limitations. And you can very easily describe an interaction scenario.
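The engine Ilya describes returns "objects representing the meaning" of an utterance rather than raw text. As a minimal sketch of what consuming such an object might look like, here is a hypothetical JSON response and a dispatcher; the field names (`intent`, `parameters`, `confidence`) are illustrative assumptions, not the actual Api.ai schema.

```python
import json

# Hypothetical "meaning" object returned by a conversational-UX engine.
# Field names here are assumed for illustration only.
response_text = """
{
  "query": "I'd like to listen to the Beatles",
  "intent": "play_music",
  "confidence": 0.92,
  "parameters": {"artist": "The Beatles"}
}
"""

def handle_meaning(raw):
    """Dispatch on the structured meaning object instead of raw text."""
    meaning = json.loads(raw)
    if meaning["intent"] == "play_music":
        return "Playing " + meaning["parameters"]["artist"]
    return "Sorry, I didn't understand that."

print(handle_meaning(response_text))  # Playing The Beatles
```

The point of the design is that the hard part (mapping free-form speech or text to an intent plus parameters) happens in the engine, so application code only branches on structured data.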
So, imagine you've got any type of device that you want to add voice to; it can be voice, or it can be text as well. For example, we've also got developers who are building bots for messengers like Slack. You, as a developer, just describe some examples of the commands that you want to support. A basic one: if you want to add music-playing functionality, you just tell it, "I'd like to listen to the Beatles" or "Play Madonna." With these two examples that you input, the system will also understand requests like "I'm in the mood for Bob Dylan today" or "I feel like listening to Rihanna."
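The developer-facing idea above can be sketched in a toy form: annotate the entity slot in a couple of sample commands, and let the engine generalize to paraphrases. The real system generalizes with machine learning; this sketch fakes that step with a hand-written list of paraphrase patterns, and the `@artist` placeholder syntax is an assumption made for illustration.

```python
import re

# Sample commands a developer might supply, with an entity slot marked.
examples = [
    "I'd like to listen to @artist",
    "play @artist",
]

# Stand-in for learned generalization: paraphrase patterns the engine
# might derive from the examples (hand-written here for illustration).
paraphrases = [
    r"i'm in the mood for (?P<artist>.+?)( today)?",
    r"i feel like listening to (?P<artist>.+)",
]

def example_to_pattern(example):
    """Turn 'play @artist' into a regex capturing the artist slot."""
    pre, _, post = example.lower().partition("@artist")
    return re.escape(pre) + r"(?P<artist>.+)" + re.escape(post)

def match_intent(utterance):
    """Return ('play_music', {'artist': ...}) or None."""
    text = utterance.lower().strip()
    patterns = [example_to_pattern(e) for e in examples]
    for pat in patterns + paraphrases:
        m = re.fullmatch(pat, text)
        if m:
            return "play_music", {"artist": m.group("artist")}
    return None

print(match_intent("Play Madonna"))  # ('play_music', {'artist': 'madonna'})
```

This only hints at the shape of the workflow; the value of the real engine is that the paraphrase coverage is learned, not enumerated by the developer.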
So basically what we are doing is, based on this limited number of examples, understanding the huge variety of ways a user may phrase the same thing, ask about the same thing. And then we also support clarifying conversation. So, if the user says, "Oh, how about Eminem?" or anything else, we'll understand how the conversation is going and what the user means by that; it takes the context into account. And you can also easily plug your agent into the Assistant, our consumer app, so that you can test it immediately. You build the app, you add it to the Assistant, and you test how it works.
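The clarifying-conversation behavior above can be sketched as a tiny context tracker: remember the last matched intent, so a follow-up that only names a new value ("How about Eminem?") is interpreted against it. This is purely illustrative; the real engine's context handling is far richer than a single remembered intent.

```python
import re

class Conversation:
    """Toy dialog state: keeps the last intent so follow-ups can reuse it."""

    def __init__(self):
        self.last_intent = None

    def handle(self, utterance):
        text = utterance.lower().strip()
        # A full command sets the context.
        m = re.fullmatch(r"play (?P<artist>.+)", text)
        if m:
            self.last_intent = ("play_music", {"artist": m.group("artist")})
            return self.last_intent
        # A follow-up like "how about X" reuses the intent from context,
        # swapping in the new slot value.
        m = re.fullmatch(r"(?:how|what) about (?P<artist>.+?)\??", text)
        if m and self.last_intent:
            intent, params = self.last_intent
            self.last_intent = (intent, {**params, "artist": m.group("artist")})
            return self.last_intent
        return None
```

Without the stored context, "How about Eminem?" carries no intent at all; with it, the follow-up resolves to the same play-music action with a new artist.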