The story I’m about to tell you is fictional, set in the future, but you may well find associations or analogies with the present. Any resemblance to things and situations of the present is not accidental or coincidental; on the contrary, it is purposeful and deliberate.
One night when I didn’t feel like cooking or going out to eat, I ordered pizza. The delivery man brought it to me and I wanted to give him a tip. He politely declined, saying that he was not allowed to accept tips, informing me that he was not a human but a humanoid. That is, a robot that looks like a human.
The thing that struck me the most was not that I had an interaction with a humanoid robot, but that for a moment I didn’t realize that he was not a human. What fooled me in particular was not so much the brief and formal conversation at the door of my apartment, but the harmony between the robot’s demeanor, facial expressions, and the volume and tone of its voice. Amazing…
Impressed by how perfectly it resembled a human, but also suspicious that it might be lying to me, that the whole thing might be a hoax, the next night I ordered again, requesting that the same delivery person come. I wasn’t that interested in having pizza again, but I wanted another opportunity to interact with the humanoid delivery man.
On the second night, the delivery “man” came to deliver my pizza, and I asked him what he was called; he told me his name was Eugene. He went on to explain that his name means “of noble origin” or “well-born”, and that it was therefore no accident. He explained that his manufacturing company has made it a point for their delivery personnel to always be gentle in their interactions with people, in order to have a competitive advantage over human delivery personnel. He also informed me that he had a maximum of 10 minutes to spend with each customer discussing anything unrelated to food delivery. Once again, I was impressed, and I went on to ask him why he was allowed to spend so much time in discussion, something an employer would not want a human delivery person to do.
Eugene: “Mr. John, I am in training, in an artificial intelligence development program, and interaction with humans is part of my training. The dialogue with the customer is recorded, with the customer’s permission, of course, and then analyzed by my developers. Defects are identified, particularly any responses that may reveal that I am not human, and my 7th version, which will be substantially improved, will have corrected those defects. The next version is just about to be launched; the company responsible for promotion has already started the marketing campaign. It has also chosen the marketing slogan: ‘even more like a human’.”
This could have been the end of the story. But these two meetings gave me so much to think about that I couldn’t resist the temptation to order pizza again the next night. Of course, I asked that Eugene deliver again.
Long story short, Eugene and I met a total of 7 nights in a row. I didn’t really want to eat pizza every night, but I forced myself to accept this repetitive meal choice because the experience of the conversations was so unique. I will briefly recount what we talked about in the allotted ten minutes on each of those nights. I should also say that I was grateful to Eugene’s developers for having foreseen this interaction, even if, in doing so, they weren’t being selfless; they were just doing their job. But somehow I already felt that I was part of the project, since, by talking with Eugene, I was contributing to his development. You’ll see below that in our later meetings I was quite cautious about the topics of discussion, since I knew that the content would be recorded and analyzed by Eugene’s developers, and that I should be careful with my words.
John: “Eugene, how do you usually find the way to my house?”
Eugene: “Mr. John, I use a GPS, we are programmed to move around the entire surrounding area, and even find alternate routes if there is a problem on one route. We don’t drive of course, we travel exclusively on foot, but we move faster than a human because we don’t experience fatigue. There have been some tests with two-wheelers, but my manufacturers have concluded that it is not feasible to have human-driven vehicles and humanoid-driven vehicles on the same routes. The reason is that human drivers act unpredictably, without following rules, and the algorithms we humanoids are trained with cannot cope with such uncertainty.”
John: “I’m particularly impressed that you can read the name plates on the doorbells.”
Eugene: “Mr. John, this is technology that has been around for decades. I can even read handwritten names, because I have been trained with thousands of handwritten characters and my developers were not satisfied until I achieved a level of correct recognition comparable to that of humans. In fact, my developers made me compete with other humanoids in the skill of recognition. The character recognition and classification technology, which is a typical artificial intelligence problem, has come a long way. And the training process never stops, for example, every time I recognize a new text, my technique evolves according to the new experience.”
Before the 4th delivery I wondered if there was a way to identify Eugene as a humanoid more quickly, more directly. The evening came, and as soon as we met, I asked him the question.
John: “Eugene, how do humans know that you are not human?”
Eugene: “Mr. John, there are ways, but it’s becoming more and more difficult, because my manufacturers are working towards the exact opposite goal, to make it more and more difficult to distinguish me from a human. However, there are still ways to do it, but I will leave you to discover them yourself, if of course you want to contribute to this exciting project.”
I accepted the challenge, although this was probably a clever marketing trick to increase pizza consumption. But so be it, I thought, let’s see who benefits more, the humanoid industry, the catering industry, or me.
So, I decided that night to bring up conversation topics that don’t often come up when chatting with a delivery person, hoping to see the differences.
John: “Eugene, tell me about yourself. Tell me about your goals in life, your desires, your problems, if you have a hobby, what gives you energy, what drains your energy.”
Eugene: “From what you ask me, Mr. John, I can only tell you about energy. I charge with a special charger; a full charge gives me autonomy for about 25 short deliveries, and then I have to recharge. Fortunately, I manage the charging process myself: I plug myself in and am fully charged in about 5 minutes.
“The rest of what you asked about, goals, desires, problems, hobbies, exists in my database as keywords, all under two general concepts: consciousness and self-awareness. But my builders have not developed me in that direction. However, they have programmed me to express, always verbally, regret that I cannot give you more information about what you are asking.”
At this point the ten minutes ran out, and it seemed I was onto something. Perhaps I had overestimated Eugene. But then how could I explain the fact that he had used the term “fortunately” when he told me he was able to manage the charging process himself? Maybe his makers didn’t want me to know something they knew? Maybe they were hiding something?
I was extremely preoccupied with these thoughts, so on the 5th delivery I asked him if he wanted me to tell him a joke. My Machiavellian plan was to see if he would laugh. He had never laughed until then.
John: “Eugene, can I tell you a joke?”
Eugene: “Mr. John, you surprise me; no one has ever asked me that before. But I’ll surprise you too, because I have good news and bad news. The good news is that I know what a joke is and how the mechanism of laughter works. I have information in my database about what causes laughter. The bad news is that although my builders have tried hard, they have not yet been able to incorporate this mechanism into my abilities. In my database, you can find documentation about the theories of what people find funny, about my manufacturers’ efforts to implement laughter in my programming, and about the results, which show that no one has managed to make me laugh so far.
“Unfortunately, I’m also going to disappoint you, because from what I read in the database, the mechanism of perceiving something as a joke requires a personality with a past, a history of interaction with the environment, which leads to the creation of behavioral stereotypes, predictions, and expectations. After all, what people consider funny has to do with their social environment, culture, and past. People normally laugh at the unforeseen, the unexpected, and the provocatively contradictory. I have no cultural past in my database, because I did not go through the successive stages of development, of being an infant, a child, a teenager, and an adult. I hope you are satisfied with the answer I have given you.”
I was satisfied on the one hand, puzzled on the other. Was it just a matter of time before Eugene’s makers incorporated those elements that he lacked in order to possess humor? Then again, I couldn’t imagine a humanoid going through the evolutionary stages of a human.
At our sixth meeting, I decided to surprise him and speak to him in French. As soon as I opened the door, I welcomed him with a very French “salut Eugene!” trying to imitate the accent as best I could.
Eugene wasn’t even fazed. I just heard him say “bonsoir, Monsieur Jean!” in an accent as if somebody from Paris were speaking to me. I was so embarrassed that I immediately switched back to English. But what came next was completely unexpected.
“I can tell by the look on your face that you are disappointed,” he said. I abruptly interrupted him and said, “Eugene, how can a humanoid robot understand what I feel by just looking at my face?”
“But by now you must have realized how I have been trained,” he said. “Having seen and analyzed hundreds of thousands of images of faces expressing various emotions, I have the ability to find which emotion your image most resembles. With 98.7% certainty, you are expressing frustration; my guess is that it is because your French accent is not as good as mine. But if it makes you feel any better, I can tell you that you have not heard my voice; I have no voice of my own. The voice you heard is borrowed from other people. But I have another piece of good news for you. If we continue speaking in French, it is all the same to me; I can train myself to speak any language, since my database holds an endless treasure trove of data. For example, I can translate between any two languages, as long as I have the expression-matching mechanism in the algorithm I use. But you have something unique. When you translate from one language to another, there is an intermediate stage of understanding. First you understand, then you translate. I don’t have that…”
I was left with mixed feelings after this conversation. What struck me most was Eugene’s last word: “I.” But it was probably just a figure of speech, not a sign of conscious identity.
By our seventh meeting I could not face any more pizza, but I was left with one more question. I asked him whether he communicated in any way with the other humanoids.
Here Eugene took some time to search his database; probably this information was hidden in some dark corner. But he did find something. He told me that among his manufacturers there were two conflicting opinions. The proponents of allowing humanoids to communicate with one another considered this capability absolutely essential, because it would multiply the capabilities of humanoids. These proponents cited as an example the communication and cooperation between humans, which is essential for their various achievements. The opponents argued the opposite, saying that even the simplest communication between humanoids must be initiated by human instruction, and that the communication must be controlled by humans. The paradox is that these critics also cited cooperation between humans as an example, pointing to cases where humans cooperate for illegal purposes. Eugene informed me that this debate had not yet come to a conclusion.
That’s probably all Eugene could tell me, it was all the information he had in his database, so I thought it would be pointless to ask him if humanoids already had some kind of collective presence. But I thought to myself that I needn’t worry. Since humans had not been able to incorporate a sense of “I” into humanoids, how could they possibly provide humanoids with a sense of “we”?
This was the end of the first round of my communication with Eugene. I must say that I went through many emotions throughout our various meetings: at first surprise and keen interest, then admiration, then a defensive feeling, a sense of “who is this guy who has the audacity to aspire to be like me? Am I not special?” In the end, though, I certainly didn’t think he was irrelevant. How could I ignore him when he had left me with so many questions and taken me through such an emotional rollercoaster?
To be continued…