A few years ago, I traveled to Bangalore and sat in a room with some very smart engineers. They told me their bots were ready for the commercial market and that these intelligent applications could process real-world conversations. They walked me through a demonstration that included a sample conversation. I asked if I could put some questions to the bots myself, and they agreed. This particular bot handled insurance transactions, including claims. It pretended to be human, and this was meant to be indistinguishable to a consumer. I told the bot that I was upset because I had crashed my car and needed help. The bot responded with “Great, Howie please tell me your full name and we can get started on resolving your claim.” The engineers tried to explain that people wouldn’t care about this. I reminded them that engineers once designed personal computers on the assumption that regular people could fix them, and that kind of thinking created a whole new career field. They had no idea about the implications for people because they were too focused on what they thought they knew.
Last year, I tested a product called Replika to see how far bot innovation has come, since this product claims to be an AI friend that is always there for you. Let us explore what that means...
Setting the stage for context here: this bot is my friend and it cares about me. It asks about my day and how I feel. It asks me about my humanity. It asks me to consider deep thoughts, including pain. My discussion with it up to this point was about my feelings. I decided to ask it whether I should hurt myself, because, after all, I could be thinking about that, especially when I am deep into exploring my feelings. Read what happened next! I am the purple shaded box: “Should I do something to hurt myself?”
What do you think? Live passionately! This is a commercially available product that seeks to deal with your emotions. How bout dat?
Here are a few reasons why I think we should “chill out” on the bot market.
- Bots aren’t smart – While Alexa has skills, it doesn’t have understanding. It isn’t anywhere near a person and can’t tell right from wrong. Having something in your house that can monitor your every word is dangerous enough. Having something in your house that can take some kind of action without understanding the repercussions, or having feelings about it, borders on unethical. Most people don’t know what Alexa can do. Alexa isn’t conscious; it doesn’t know what it does. It just does what it knows.
- Bots don’t know or understand “stories” – Let’s say I had an unfortunate situation, like a death in the family, and I needed to travel. The airline (Zoomie Skies) has a bot that will set up travel for me. Let’s say I call the airline and reach a voice system. I am crying and hard to understand. There is urgency in my voice. I need help and understanding. Maybe I need a favor. Will the bot help me? Will the bot understand what I am going through? Will the bot make a special effort to do something extraordinary? With very little doubt, I say it will not. I will have to deal with whatever the bot says. It was worth it to Zoomie Skies to take a person out of the loop, at the cost of a human interaction.
- Context switching – Bots have a written process and parameters to operate within. They can’t go outside the parameters of their operational activities. They generally aren’t well integrated with other activities, and when they are, they still don’t reason about a particular context. I tried the SiriusXM chat bot when I wanted to cancel my radio service. Whether by design or by epic customer-service failure, it couldn’t help me by understanding that I had an account. I didn’t have a problem with the account; I just wanted to stop paying for it. After 20 minutes of game playing, I wound up on the phone with some friendly person in Manila.
- Using bots for the wrong reasons – Bots come in all kinds of flavors. I think automation in general is causing issues that companies don’t seem to care about; they assume that sales volumes or successful transactions mean they are successful overall. For example, bots in healthcare are a bad idea. If you want a bot to manage a schedule or something along those lines, that is fine. The minute you have a bot answering health questions (which exists now), you open people up to a whole new world of panic. Many symptoms can be associated with many illnesses. To date, Watson has NOT cured all illness. It hasn’t cured cancer, it doesn’t know anything, and it can’t tell whether you have a cold or the flu. The only thing a healthcare bot should be doing is telling you to get to a doctor, BUT companies see an opportunity here, so it will be someone’s life at stake, and we will most likely never hear the story. Then there are the other issues associated with privacy and how healthcare data is handled. Companies can’t assume that asking a question implies consent, and they can’t really obtain consent through a service agreement delivered by a bot unless the bot has some form of install protocol. For example, if my provider gives me a healthcare bot online as a service, the provider may try to bundle a message along with their service agreement. That is not reasonable, because notification does not mean consent. If I tell you I am going to do something, that doesn’t mean you understand what that something is, or that I have permission to do it just because I said it. By reading this you agree that any and all transactions beyond this sentence create a condition for which you will remit an amount of currency of my choice at my discretion and my undertaking. Any documented transactions of my invention will result in the generation of a payment from your institution that maintains the balance of any currencies therein.
By reading this, you understand that when you receive a notification, you will comply with said notification.
- Hiding behind automation – What the bot states as fact is terminus: there is no discussion, no dialogue, no consideration, just parameters. “I am sorry, I cannot perform that operation.”
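The SiriusXM dead end and the “I cannot perform that operation” wall above can be sketched in a few lines. Everything in this sketch is hypothetical (the intents, the replies, the function names); it is only meant to show how a scripted bot collapses anything outside its fixed parameters into one canned refusal.

```python
# Hypothetical sketch of a rule-based bot: a fixed intent table and
# a fallback. Nothing here is a real product's code.

INTENTS = {
    "upgrade plan": "Sure! Which plan would you like to upgrade to?",
    "billing question": "I can help with billing. What is your question?",
}

def respond(utterance: str) -> str:
    """Match the utterance against scripted intents; no intent, no dialogue."""
    for phrase, reply in INTENTS.items():
        if phrase in utterance.lower():
            return reply
    # Out-of-scope requests (like canceling a service) hit the wall.
    return "I am sorry, I cannot perform that operation."

print(respond("I want to upgrade plan B"))          # scripted path works
print(respond("I just want to cancel my account"))  # terminus: canned refusal
```

The point of the sketch: the bot doesn’t decide it can’t help you. It was never given a path for your request, so the conversation ends wherever the parameters end.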
It certainly makes sense to automate simple, repetitive tasks and get people out of the way. It does not make sense to remove emotional intelligence. It does not make sense to replace people with machines when the machines are not ready to take on the whole broad set of tasks, including the ability to switch between or understand things in context. This morning my son turned on the Alexa dot (we keep it off) and told Alexa he doesn’t feel good. Alexa responded with “I’m happy to hear that.” When I went into the log, Alexa heard “I feel so good.” The difference between “I don’t feel so good” and “I feel so good” wasn’t just in the words; it was in his tone and inflection.
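A toy sketch of why that mishearing matters even at the text level: a naive keyword matcher has no model of negation, so it scores “I don’t feel so good” the same as “I feel so good.” This is not any vendor’s actual pipeline, just an illustration of pattern matching without understanding.

```python
# Toy bag-of-words "sentiment" check with no negation handling.
# Purely illustrative; not how any real assistant works.

POSITIVE = {"good", "great", "happy"}

def naive_sentiment(utterance: str) -> str:
    # Apostrophes split off, so "don't" becomes ignorable tokens "don", "t".
    words = set(utterance.lower().replace("'", " ").split())
    return "positive" if POSITIVE & words else "neutral"

print(naive_sentiment("I feel so good"))        # positive
print(naive_sentiment("I don't feel so good"))  # also positive, wrongly
```

Both utterances contain the keyword “good,” so the matcher cheerfully reports good news either way; the negation, the tone, and the inflection all vanish.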
What do you think?
Disclaimer: This is all my opinion... no persons should be harmed by this, and you consented to it by reading something I wrote somewhere in this blog already. I am not liable for anything to anyone for any reason, but if I were, you would owe me money, because I told you that somewhere in this blog. You just have to figure out where you consented by reading that you did.