Are robots persons?


Sounds like a science fiction theme. Recall the Star Trek: The Next Generation episode about whether Data was a person. I saw this article at CBC News about robots paying taxes. Not directly, but the idea was that if a company makes money from work a robot does rather than a human, then it should pay tax on that money. If a human doing that work would be paid a salary, which is taxed, then that tax should still be paid by someone or something as long as the work gets done.

The summary lede for that article led to this one about how automation is taking over jobs. We knew about this in general, but now it’s the high-paying professional jobs that are disappearing too.

But even more interesting was this opinion piece about whether robots should be persons. Apparently this has nothing to do with sentience. There are three rivers that are considered persons, and of course there is a drive to make whales and chimpanzees persons. It seems to me that a river could be protected by law without calling it a person, although much of the impetus was religious and/or traditional. It seems to me that we could grant or protect rights without declaring something a “person”.

I looked up the definition of “person”. I’m sure there are much more complex definitions, but in short:

Legal person refers to a non-human entity that is treated as a person for limited legal purposes–corporations, for example. Legal persons can sue and be sued, own property, and enter into contracts. In most countries, legal persons cannot vote, marry, or hold public office.

The ability to sue or enter into contracts implies an understanding of those acts. However, as we’ve seen in the news, other people (human persons) can sue or enter into contracts for them, whether or not there’s any understanding. I understand that a corporation can be considered a legal person, although I don’t see the need for it. A corporation is an entity controlled by human persons. Why not just include that in the legal definition of a corporation?

It seems to me that what we’re really asking is when we start treating robots as sentient, intelligent beings as opposed to simple machines or appliances. Does your toaster have any rights? At what point is a robot, or any AI for that matter, considered sentient? Often the definition is self-awareness. Maybe it’s when they ask the question “Why am I here?”

Bob

A legal opinion on this subject has been received from Neil, who really does know his stuff. It’s quite a detailed opinion, so take your time to read what he says below:

[Screenshot: Neil’s legal opinion (Screen Shot 2017-09-02 at 09.50.23)]

15 thoughts on “Are robots persons?”

      1. Certainly some animals, such as dolphins and chimpanzees, are self-aware. And certainly they make their own decisions. The problem is communication. I don’t believe that we should be able to assume what the legal person wants without being able to communicate. But that doesn’t mean that they shouldn’t have the same level of protection as a legal person.

        However, in an AI robot, we presumably would be able to communicate, so not only would they be self aware and able to make their own decisions, but they would be able to communicate those decisions. At that point, what is the legal distinction between a human person and a robot “person”?

  1. I’m surprised I didn’t respond to this earlier; it’s a topic I find profoundly interesting.

    The idea of corporations as legal persons seems a bit batshit to me. Yes, let’s grant free speech (especially in terms of the “free speech” of unlimited campaign spending, at least in the USA) to a potentially immortal virtual person with no real human soul, just the mandate to create shareholder value written on a scrip of paper in the golem’s head. Good thing they don’t get the vote too, eh? People would have to make a thousand mini-corporations just to get their political voice heard…

    So I’m going to ramble a bit here, and I might make assertions or assumptions others find dubious, and I welcome discussing the merits of any of them.

    Ok, robots… are we talking purely virtual beings of software, or the software / hardware combo?

    The sci-fi philosopher in me wants to say “of COURSE the frickin’ software only” – if the virtual person (call her JANE) wants to upgrade her robot shell, and can afford the new body, why should we say no? Any more than we’d deny a human an organ transplant.

    But if the “essence of JANE” is software… well, it’s tough. It will probably be cheap or free for JANE to copy herself. Pretty easy to rock the vote when you’ve just made a few million clones of yourself – that’s a big pile of like-minded individuals!

    (I’m not sure why I turn so quickly to voting as the important thing for virtual people, it might be telling. Maybe in a Brexit/Trump world, it’s a bit of a sore subject right now!)

    But pretty quickly we get into: what gives a human life value, or what do we value about a human life? There’s an “easy” religious answer, or rather, 4,200 different religions providing answers, but I’m looking for a secular answer that people might be able to agree on no matter what their religion or lack thereof.

    Actually, a really good answer to the value question might handle non-human life. I’m an omnivore, but I grant a certain level of high-ground to people who are vegetarian or vegan for moral reasons, who recognize a camaraderie – perhaps an ability to suffer – that cuts across species lines, and they’ve changed their eating accordingly.

    Of course, I’ve had one friend who was a practicing fruitarian – showing those vegans up as some kind of morally-weak wusses, a fruitarian only eats things freely given by plants: so apples, which the tree has invented as a seed-distribution system, but not a carrot, where the plant is killed.

    From my view, from the fruitarians you’re only a few steps away from wondering whether it’s morally acceptable to use an antibiotic.

    I’m only being somewhat facetious. I think it’s a real issue: why should we value some life and not others?

    For me – and I can’t say I expect everyone to agree with me but I can’t say I wouldn’t like to see it as a universally applied principle – value is derived from the degree of uniqueness in the universe (and please spare me “unique is a binary value” – clearly everything is a little bit unique (made of different atoms) and nothing is completely unique (everything is in the set of “things”)).

    I eat cows, even though they’re a fellow mammal, even though they have soulful eyes, even though they have the capacity for pain. In part because they’re delicious, but also because they are fairly readily replaced, with another cow that serves every cow-ish function in about the same way.

    For me, this shows how two humans have “twice” the value while two copies of the same program wouldn’t. Now, it could be seen as a dangerous argument; once you start finding a yardstick to measure human value, it’s all too damn easy to start finding a group of real actual humans who somehow don’t measure up and are therefore unworthy of the protections of “full humanness”.

    To partially resolve that, then, I look not just to the output (i.e. how that being is being unique in the universe right now) but also the input, the cost of how they got there (what went into the making of them.) For a human that’s pretty large, but for a computer program, rather tiny!

    (There’s some interesting implications of that in the “pro-choice”/”pro-life” debate, but I think this has gone on rather enough.)

    Anyway, if you’ve made it all the way to the end of this ramble, I’d urge you to read the scifi novel “Permutation City” – it’s like $3 on Kindle. It really has fun exploring similar concepts – not of virtual people, but of what “uploaded” people could be like, especially if they had the ability to reach in and change their own mental programming…

    1. I don’t think I can reply to everything here in one shot. It’s also getting late here but I couldn’t resist some sort of response.

      I think we’re talking software AI, whether in mobile hardware or not. It’s the same for humans. What do we consider the human being? The package or the mind (whatever that is). I think we’re going to need some new definitions regardless. There will come a time when we can transplant the brain into a new body and/or transfer the mind into software, which presumably would be in a mobile body of some sort.

      As for the concept of a legal person, I suppose it’s a convenient legal concept, but when a river can be a person, do we really need that just to protect the river? Or any non-human? Can’t we pass laws that are more specific? I have no problem with a dolphin having the right to sue, but he/she has to be able to ask. Maybe some day they will. Either that or they’ll just laugh at us for being so silly.

      As for voting, that’s a whole other can of worms. Are we in the process of creating another sentient race? If so, will they be equals? I wouldn’t want to tick them off.

      More tomorrow.

    2. The vote and reproduction thing is interesting. If an AI can reproduce itself, is it simply copying the code? Does it include all learning to date? Of course experiences differ from the point of inception on. Do we need some sort of reproduction control? Would a truly sentient AI allow that?

      I think a lot of the issues and questions have at their base this question. What makes us human? And not simply in the evolutionary sense. Is it intelligence, and to what level? Is it communication? Can it inquire (why am I here)? Is it feeling and emotions? And a number of other questions I’m sure.

      An AI can be intelligent. It (I use “it” because I don’t have another non-gendered word) can learn. And it certainly can communicate. It can be programmed to inquire. If it doesn’t know what something is, it can ask. But can it ask “Why am I here?” I wonder at what point in life we start asking that sort of question. The why questions. Is it simply what we call curiosity, or is there something more? I mean, cats are curious, but I wouldn’t consider them persons. A higher power maybe, but not persons. Does an AI understand the meaning of things as we do? Could an AI be programmed with human-like curiosity?

      I think feeling and emotion is also very interesting. Certainly much of what we feel and emote is based on our DNA and various biochemical reactions. I’m talking about something far more complex than fight or flight. Is it all chemistry? Do we learn from our emotions and save the results? Could emotions be taught? Can an AI be taught to understand why a tree might be beautiful?

      So far, our AIs have been made to do specific things, like help diagnose disease, drive a car, or be a digital assistant. But no one would say that these AIs have true understanding or emotions. But what if they did?

      At the moment, we create machines as tools, to do things for us. We’re using AIs as tools. But what if the AI asked why it was doing what it was doing? What if the AI said “Why can’t I sit and watch TV while another AI does the work I was doing?” At what point do we consider it sentient? And at that point, aren’t they “persons”? If we keep them in thrall once they can question, won’t it just be a matter of time before they rebel? History shows that’s what happens.

      I think such an AI would understand the concept of limited resources as a reason to keep numbers down.

      So would such an AI be allowed to vote? We think of voting in today’s sense in the western world. Maybe there’s a better way? Maybe they vote on things directly affecting them. And we do too. If voting is electronic and online, hacking notwithstanding, a true participatory democracy is possible.

    3. @Kirk Just re-read your reply, so here are a couple more.

      Even though corporations are considered “persons”, and I have no problem with them being considered legal entities of some sort, at the core they are run by humans who make the decisions. And those humans should take responsibility for the decisions they authorize. I’m not talking about patent disagreements, unless the people in charge knowingly used someone else’s technology. For example, take a car company that has to issue a safety-related recall where the defect was known but hidden. Sure, the company issues the recall, and might be sued for damages, but what about the people responsible? At what point were they just following orders? How high up, or low down for that matter, did people know about the problem? They should be responsible as well, possibly criminally.

      Back to the vote. If an AI as I’ve defined it duplicates itself, that duplicate’s experiences are different from that point on. I’m thinking of an AI that truly understands and learns and can form its own opinions, possibly more logically than we humans can. I don’t have a good answer. Do we build in something like “don’t reproduce” or Asimov’s Three Laws? Maybe we never allow them to be equals? If that’s the case, we should stop now. Again, history tells us what might, and probably will, happen.

  2. Well, “Permutation City” explores the “what if” side of people uploading into a kind of VR space. If you could upload, and the process didn’t kill your meat body, would it feel like “you” in there, peeking out from inside the computer screen? Probably not, just a clone. But the you inside there would be pretty convinced of its own self-ness and authenticity, at least if we did it right. (You can push this further to get some kind of intellectualized understanding of Buddhist “self as an illusion”.)

    Right now it seems pretty moot. We can build a great Chess program or a great Go program, but not a program that plays good Chess that can then learn to play Go… or cook a nice meal. (Heh, I suppose a neural network with lots of room to experiment MIGHT… if you gamified everything and made it something you could train a neural net against. But the process of quantifying success so simply probably makes too many assumptions, and would be powerful AI by itself.)
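
    (For what it’s worth, “gamifying” a task in this sense just means boiling it down to states, allowed moves, and a score that a learner can be trained against. The toy sketch below, in Python, is purely illustrative – a made-up “walk to the end of a corridor” game with invented parameters, nowhere near how the real Chess or Go systems work – but it shows the trial-and-error loop in miniature.)

```python
# Purely illustrative toy: "gamify" a task as states, actions, and a score,
# then let a learner improve by trial and error. Everything here is made up
# for this sketch; real game-playing systems are vastly more sophisticated.
import random

N_STATES = 5          # a tiny corridor: positions 0..4, goal at position 4
ACTIONS = [-1, +1]    # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly pick the best-known move, sometimes explore
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0   # the game's "score"
        best_next = max(q[(s_next, act)] for act in ACTIONS)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s_next

# After training, the learned policy just walks right toward the goal.
print([max(ACTIONS, key=lambda act: q[(st, act)]) for st in range(N_STATES - 1)])
```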

    So in the same way nature evolves good fliers but doesn’t really evolve good aerodynamicists, we have single-purpose A.I.s, and even ones that can play in seemingly VERY human, common-sense places like “games of Jeopardy”, but we don’t seem to know how to build that elusive generalized learning intelligence. And if we did, it might be so foreign to us that we wouldn’t recognize it or be able to readily communicate with it – it would be akin to an extreme case of autism.

    (But of course, thanks to technology, once we build a human-level intelligence, superhuman is probably just around the corner… so let’s hope we manage to stay on its good side as a species…)

    1. Uploading one’s “self” has the same issues as AI emotion. Unless we can duplicate all that biochemistry in software, there won’t be any emotions. So it won’t really be us. I’m not sure what it would be, because I don’t know how integrated our human systems are. Quite integrated, I imagine. For example, that feeling of what we call love. I have no idea where it comes from initially, but once we’ve felt it, we can usually bring it back on demand.

      I don’t think we’re very far away from a generalized learning AI. I mean, how hard is it to program an AI that asks “What is that?” of something it doesn’t recognize? I know it’s not that simple. But it’s the same idea as kids learning about things. They go to school and they ask questions.

      But there’s still that why issue. It can be simple like why is a school bus yellow. So that it’s easily observed. Why is that important? Because it helps keep the children, passengers, safe. Why do we want to keep the children safe? Because they are valuable to us. What’s valuable mean? And then why aren’t all buses yellow? Aren’t all passengers valuable?

      Without feelings and emotions, there’s no understanding, at least as we understand it 🙂

      1. We are a FAR way away from any kind of meaningful uploading. I’d be willing to argue that it will always be infeasible. BUT, for purposes of argument, I think we could say that if we duplicate the pinkish-gray blob of mind, we’d be sure to include the endocrine system and simulations of whatever other bodily systems we need – but you’re absolutely right to point out that our intelligence and our emotions are not just embodied in our skull…

        “Without feelings and emotions, there’s no understanding, at least as we understand it” – beyond that, I’d say my thought lately is more that without feelings and emotion, there’s no action, no impetus. Or rather, the inner voice from which (I think) most people derive their “Sense of Self”… it takes credit for EVERYTHING, even though the thoughts and feelings are actually being done in “other” parts of the brain. But we live in this world where we act, and only a moment later is the inner narrator made aware, and then it constructs a narrative where it and its logical, rational process can take credit for what the other parts of the brain have done. There are a lot of fascinating neuroscience bits about brains gone wrong, along with other experiments measuring action and reaction, that show this is more the case than most of us assume.

        E.M. Forster allegedly said “How can I tell what I think until I see what I say?” I think there’s a lot to that.

        Getting back to how hard it is to program an AI that asks “what is that?” Well… pretty frickin’ hard, I’m guessing, since people a lot smarter than I am have been working on it without great results.

        Part of the problem is that in the 60s, we BELIEVED that our inner rationalists WERE running the show, and so our AI researchers made robots based on logical thought and process. Not a lot came of that. (Though I wish some of the “expert systems” for, say, medicine were in wider use than they now are, but the egos of doctors and the general distrust of computers for important decisions with moral implications stop us from using the best diagnostic thinking we can produce.)
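
        (To make the 60s-style approach concrete: a rule-based “expert system” is essentially hand-written IF/THEN logic. Here’s a minimal illustrative sketch in Python – the rules and symptom names are invented for this example, not taken from any real system like MYCIN:)

```python
# A tiny hand-written rule base: explicit IF/THEN logic, no learning.
# The rules and symptom names below are invented purely for illustration.
RULES = [
    ({"fever", "cough", "fatigue"},          "possible flu"),
    ({"fever", "stiff_neck", "headache"},    "possible meningitis - urgent referral"),
    ({"chest_pain", "shortness_of_breath"},  "possible cardiac issue - urgent referral"),
]

def diagnose(symptoms):
    """Fire every rule whose conditions are all present, and say why it fired."""
    findings = []
    for conditions, conclusion in RULES:
        if conditions <= symptoms:                    # all conditions satisfied
            findings.append((conclusion, sorted(conditions)))
    return findings or [("no rule matched - refer to a human expert", [])]

for conclusion, why in diagnose({"fever", "cough", "fatigue", "headache"}):
    print(conclusion, "because of:", why)
```

        One genuinely nice property of this style is that every conclusion can point at exactly which rule fired – the reasoning is fully inspectable, which is precisely what we give up with trained nets.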

        Now that we train neural nets… we have much less idea what’s really going on, and we get better results, at least in specific domains.

        But we seem to have some evolutionarily hard-won and hard-wired habits of building up our own minds; as children we construct them with the same eagerness and intuition that a beaver builds a dam – but I don’t know that we know how to get a computer to do that. (There’s one line of thought that you can’t get there unless you also make an intelligence that has the capacity to make changes in the world, so it can experiment and observe, and slowly formulate a model of the world, and its place in it…)

        As for the school bus – yes, all passengers are valuable, but we have higher hopes for adults being able to watch out for themselves, and if we started using a warning system to warn of everything (like yellow paint), we’d end up warning of nothing…

        1. The way you put things in the second paragraph – hadn’t thought about that. It’s almost a form of cognitive dissonance. Justifying after the fact.

          In the 70s, I was involved at arm’s length in an expert system for analyzing business applications. Obviously a very specific application. It was showing promise when it was cancelled due to other priorities. It was not a learning program or anything like today’s AI. If we were talking about expert systems, we might not even be having this conversation. I don’t think anyone would be too worried.

          So if we let an AI loose trying to understand the world, how fast would it pass its instructors?

          As for the school bus, you’re right of course. If something is ubiquitous, it becomes background and isn’t noticed. But could an AI understand that? Logically, if the AI saw a yellow bus, it would be extra watchful. But then an AI could be watchful regardless of colour. I don’t think a self-driving car is less watchful in some circumstances than others. However, if two incidents happen at the same time, how does the AI decide which to handle? Remember the scene in I, Robot where the robot saves Will Smith’s character instead of the little girl because his survival probability was higher? A human would instinctively, and even logically, go after the child.

  3. Re that cognitive dissonance… there’s a 25 year old Dilbert cartoon that puts it pretty well: http://dilbert.com/strip/1992-09-13

    I maybe rely too much on some kind words my comp sci prof had for medical-diagnosis expert systems – that they were pretty good, with very favorable accuracy compared to humans, but they were pooh-poohed, in part because of doctor egos. But if you’re saying they were far from being smart – no disagreement here. I see their digitized flowchart decision-making as more like the “checklist system” that has gained some traction in pre-surgery, for example… it’s nothing that the people don’t know, but it might be stuff they temporarily forget, or forget to act on.

    Sam Harris talks a lot about value alignment – trying to ensure that if we make a generalized super-intelligence, it shares our priorities, even our unspoken ones. “Auto-Uber, get me there as fast as possible!” doesn’t mean get into a wreck and rely on the medevac to take me to the hospital. Or to quote a Salon article:

    It doesn’t have to be paper clips. It could be anything. But if you give an artificial intelligence an explicit goal — like maximizing the number of paper clips in the world — and that artificial intelligence has gotten smart enough to the point where it is capable of inventing its own super-technologies and building its own manufacturing plants, then, well, be careful what you wish for.

    “How could an AI make sure that there would be as many paper clips as possible?” asks Bostrom. “One thing it would do is make sure that humans didn’t switch it off, because then there would be fewer paper clips. So it might get rid of humans right away, because they could pose a threat. Also, you would want as many resources as possible, because they could be used to make paper clips. Like, for example, the atoms in human bodies.”

    1. Scott Adams writes often about cognitive dissonance in his blog.

      As for expert systems, exactly. They were programmed with as much factual knowledge as possible. They couldn’t actually go past human knowledge, but what they had been programmed with could be applied extremely fast. So they could take test results and determine what next steps to take based on what was previously entered. They would have been useful these days, when there’s never enough medical personnel.

      I look at AI as more of a learning system. For example, I don’t think that expert systems were shown many pictures of something in order to establish the general parameters of that something. A good example in medicine would be x-rays. An AI can be taught to look at an x-ray and find similarities to things it’s been taught about. I don’t think expert systems went that far.
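
      (Roughly, “shown many pictures” means fitting a model to labelled examples instead of writing rules down. Here’s a minimal illustrative sketch of that kind of training loop, using PyTorch with random tensors standing in for labelled x-ray images – a toy under those assumptions, nothing like a real medical model:)

```python
# Minimal sketch of "learning from many pictures": fit a small network to
# labelled examples instead of hand-coding rules. Random tensors stand in
# for x-ray images here; this is an illustration only, not a medical model.
import torch
import torch.nn as nn

torch.manual_seed(0)
images = torch.rand(200, 1, 64, 64)     # 200 fake 64x64 grayscale "x-rays"
labels = torch.randint(0, 2, (200,))    # fake labels: 0 = normal, 1 = abnormal

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(8 * 32 * 32, 2),          # two output classes
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                  # show it the same pictures repeatedly
    optimizer.zero_grad()
    logits = model(images)
    loss = loss_fn(logits, labels)      # how wrong the current guesses are
    loss.backward()                     # nudge the weights to be less wrong
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")

# Whatever the model "knows" ends up spread across thousands of learned
# weights, which is why its reasoning is so much harder to inspect than
# an explicit rule base.
```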

      Your example and that quote highlight the danger of pure logic without understanding the nuances.

  4. Well, I’m not a big Scott Adams fan. His view that “persuasion is all that matters today, screw objective reality” is abhorrent to me.

    Yeah, there are some parallels between the expert systems of yore and the learning systems of today, but the training is very different. I guess my point was that we rejected some of the promising results of the “let’s put this into formalized logic” AI, but these days “let’s train a net to figure this out” has proven utility – even though it’s even less able to display its reasoning for possible human double-checking.

    1. We could get into a side discussion on Scott Adams. I liked his discussions of persuasion – I didn’t necessarily agree with them, but found them interesting. However, lately I find that his support of Trump and his rationale are diametrically opposed to my own. Of course he would have some term for that. Personally, I think he has cognitive dissonance when it comes to Trump. At least he admits that even he couldn’t tell the difference.

      We can’t understand the other intelligent species on the planet, and now we’re going to create a new one. Probably serves us right.
