Can computers make mistakes?

This blog post is going to be a short one for a topic that could take a bunch of different directions. I just hope we can start a small conversation about this specific issue of artificial intelligence, so please share your views!

I just started a new series called Westworld. The premise, inspired by the movie of the same name, is amazing: human-like robots have been placed into a Wild West theme park. Visitors can pay 40,000 dollars a day to do whatever they want: kill, torture, rape, or fall in love with the androids. I haven’t finished it, but so far I highly recommend it.

I was watching episode 3 yesterday, in which there is a scene where one of the androids asks one of ‘her’ programmers whether she has made a mistake. At that moment, I believe the writers of the show want us to ask ourselves the question: “Can machines make mistakes?”

If a computer program goes wrong, the normal reaction is to say that there is a bug because the programmers have made a mistake. With today’s technologies, it would be impossible to say that a machine has made a mistake, because a machine is purely following orders.

However, we do have a tendency to personify machines. If you can’t install new software on your old computer, you will say “I couldn’t install it because of my stupid computer”. The blame is not put on the programmer who did not prepare the machine to deal with future possibilities; it is the machine that is blamed for being too old. So we do, to a certain extent, hold objects accountable (there is some great research on personification and anthropomorphism; if you get a chance, you should read more of it!).

Is it possible that objects will one day be able to make mistakes, or will we always trace responsibility back to the initial mistake of the creator? At what point can we consider the object’s accountability detached from its creator’s? Instinctively, I thought that it would never be fully detached. Then I started to think about other creations with personalities that we have already been dealing with for centuries: companies.

Through legal fiction, companies have been granted a personality. The creation of a company implies the artificial creation of a person. When a company is first created, its actions are highly correlated with those of its creator, so even if there is a legal separation, there is still a de facto union of accountability. However, what happens when the company grows, more people come on board, and different people make the key decisions, until the creator is actually out? Well, it would be ridiculous to link the accountability of the company to its creator’s.

If we go back to the comparison with computers, we could say that all the actions and decisions the founder of a company has made are the source code. There is a causal link between this source code and everything that happens after the creator is gone. The reason we don’t think about that causality when thinking about the accountability of the company is that the company has grown independently from its creator.

Could the same one day happen with computers in the context of artificial intelligence? The machine will indeed be the product of a source code whose creator will always hold genesis causality. Then the artificial intelligence will evolve through a multitude of interactions that cannot be predicted by the creator (otherwise there would be no point in developing AI!).

So the next question is: what would qualify as a mistake? I honestly am not exactly sure what a mistake is. I think that, to make a mistake, the machine must be given a purpose. If a machine’s purpose is to recycle, a mistake would be to throw batteries in the wrong recycling bin. STOP right there: artificial intelligence must make mistakes to progress; that’s a keystone of the expansion of intelligence.

I am confused now, because if making mistakes is necessary for artificial intelligence to serve its purpose, are mistakes really mistakes, or simply anticipated learning milestones? For a machine, to fail to serve its purpose once is maybe not a mistake but a necessity. What would be a mistake would be to fail to serve its purpose twice in the same context. If the machine has learned the hard way that batteries are not recyclable, then throwing away batteries again is a mistake.
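To make that definition a bit more concrete, here is a toy sketch in Python (the agent, its names, and the recycling setup are entirely hypothetical, just to illustrate the idea): the first failure in a given context is treated as learning, and any repeated failure in the same context is counted as a mistake.

```python
# Toy sketch: a "mistake" as a repeated failure in the same context.
# The first failure in a given context is treated as learning;
# any subsequent failure in that same context counts as a mistake.

class RecyclingAgent:
    def __init__(self):
        # Contexts in which the agent has already failed once.
        self.failed_contexts = set()

    def report_failure(self, context: str) -> str:
        """Classify a failure as 'learning' (first time) or 'mistake' (repeat)."""
        if context in self.failed_contexts:
            return "mistake"        # failed twice in the same context
        self.failed_contexts.add(context)
        return "learning"           # first failure: a necessary step

agent = RecyclingAgent()
print(agent.report_failure("battery in paper bin"))  # learning
print(agent.report_failure("battery in paper bin"))  # mistake
print(agent.report_failure("glass in metal bin"))    # learning
```

Of course, everything hinges on what counts as the “same context”, which is exactly where the definition gets slippery.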

This answer still feels somewhat too trivial. When the android asks “Have I made a mistake?”, there was something more profound to it. There was a sensation that the machine had not served its purpose well, or perhaps had gone beyond its purpose. How can AI go beyond what it is programmed to do? I’m not sure, and even if I were, this post is already way longer than I was hoping it would be.

 
