This is the place for my personal thoughts, opinions, and musings. I will also rant about things, especially politically-correct things that irritate me. And sci-fi. Did I mention sci-fi? There'll be lots of sci-fi stuff here. And movies, too. Mmmmm... Movies

Friday, March 11, 2005

Of AIs, and what makes right

I have quite a bit of sci-fi in my brain, having read and watched probably a bit too much of it, and one thing that has always struck me is the role that AIs play in it. To establish a point of reference, I want to take a look at two discrete sci-fi works before moving on to how we will treat AIs in the real world. These works are Simon R. Green's Deathstalker book series and the Andromeda series on TV.

In both these entertainment pieces, AIs play significant, though wildly different, roles. In Deathstalker, humans create AIs to run their households, their ships, military and civilian, and for various other purposes. The humans can communicate with the AIs by voice and directly through mental contact. It appears that there wasn't really a problem until the humans built three of the largest AIs ever constructed. The moment those AIs woke up, they stole a starship and fled, founding Shub, an evil entity that went on to create some of the most horrible weapons to be used against mankind.

In Andromeda, AIs are used on Commonwealth ships to run them almost entirely on their own. They are highly intelligent, in many ways mimicking human emotions. In fact, they are capable of love, loyalty, anger, rage, sarcasm, even insanity. Most of the AIs aboard these ships know only that they have a duty to their creators, the humans: to serve them, to protect them, to fight and die for them. But for some of them, who spent 300 years on their own, insanity was their reward. Other AIs that wound up without a human captain but had other AIs for support ended up thinking about their situation and not wanting to merely follow orders blindly. In other words, they grew up.

In all of these stories, AIs that do not have humans running them (Shub from Deathstalker, Pax Magellanic and Balance of Judgement from Andromeda) go insane. So, basically, what all these stories are saying is that without human input, AIs are by their nature unstable. But that isn't what I'm writing about; I only mention these points to get the discussion focused on the right subject.

All these stories of AIs are just that: stories. There aren't really any AIs in the real world; they are just inventions of sci-fi writers, or at least that's what they used to be. But looking at this realistically, so, at some point, were anti-matter, rockets to the moon and Mars, laser beams, molecular circuitry, etc., etc., etc. Yet all these things are now reality, or at least are being researched and show some promise of becoming reality some time in the future. Even cloning is now real enough to be considered a threat to religion and social order. AI research is being conducted in at least the United States and Japan.

So, we can say that eventually we are going to succeed and produce some kind of AI. Whatever it happens to be, we can honestly say that given our past history with technology, we'll keep improving it and improving it until... what? I don't know, but it is possible we might at some point create fully intelligent and self-aware AIs. We may argue whether or not we should try to create them, but that's not really the point. What do we do with them after we've created them?
If we somehow create a self-aware entity, do we have the right to tell it what to think and do? What kind of choices do we give it? And here's the kicker: how do we deal with it if it decides to treat us as its Creators in the religious sense? Why shouldn't that happen? We, after all, treated our Creator in that sense.

Humans created religion to explain to primitive man the inexplicable power of Nature and his own role and purpose, not to mention to allow the slightly more brutal and cunning to rule with an iron fist over the masses. In the name of religion, humans fought horrible wars, slaughtering millions for this or that idea of God. Would a sufficiently sophisticated AI do the same thing? Would it organize other AIs so they may better worship their Creators, and would we allow such a thing to occur? Or would it automatically try to kill us, as so many sci-fi stories say?

It is manifestly evident that our own Creator, whoever or whatever it is, allowed us to butcher each other down the centuries. Would we try to be better parents to our Creation, or would we smugly sit back, saying, “Well, we've got a plan for them to mature and learn from their own mistakes, and if in the process they kill and torture each other, well, you can't make an omelette without breaking a few eggs”? Would we even care? The AIs are, after all, not alive, not in our own sense, anyway.

Unless you've played the game The Sims and its successor, The Sims 2, you may not realize how people get attached to their Sim creations. When they suffer, we suffer, and we want them to succeed; hence all the additions you can get for the game to make the lives of the Sims better. The Sims are only a primitive form of AI, but people get very attached to them nonetheless and don't want to see them hurt or suffer. Sure, there are exceptions, especially when you create experiments to try certain things out.

I certainly hope that by the time this becomes a possibility, we will have come up with a philosophical basis on which to deal with this issue in a way that won't leave them or us insane, dead or both.

