Bianca: “Great! Like we need more monkeys on our shoulders.”
Kim: “I will discuss the implications with my team. Also for Pro, I am not sure what the consequences would be for the athletes. I believe there are extra filters in place to prevent bi-directional communication. We only use the interface to monitor and provide a real-time diagnostic. There are a lot of emotions at play during a game, and we made the decision to not get involved with that aspect.”
…
Tuesday night. I got home early after my meeting with Bianca and Ian. I wanted to be able to study for a few hours while I wait for Toshiro to call. He texted me this afternoon while I was at Neocuris: he is coming next week. I can’t wait to see him again, even if it has only been a few days.
Chapter 22
Eric: “Welcome to our Wednesday morning 9 o’clock guys. Kevin, let’s start with a status update.”
Kevin: “Sure. From the bot policy perspective, we are still monitoring the red dots. Things are moving. The clinics are trying to bring more security officers online to accelerate the pace. Again, by the end of this week, we will have a better idea of where we stand versus the 2-week grace period. A few thousand patients are disconnected now, and attempts are being made to reach out to them and get them an appointment at the closest clinic. Some of them may be traveling outside of the country. Eventually we will catch up with them.”
Eric: “I was told that Neocuris is also working out a compensation plan with the clinics, for the costs incurred by this exceptional blitz to improve security.”
Kevin: “And also, some news about the compromised Disaster Recovery Data Center we tried to use to host a new vault. They had to break the locks on the doors last week to get in. They had to shut down the whole facility. A team is going through all the hard disks, either destroying them or putting the low-risk disks through a low-level format. The facility is going to be rebuilt from the ground up. Something we cannot afford to do with the live vault, for obvious reasons.”
Eric: “Kim, how was your meeting yesterday with Bianca?”
Kim: “A member of the Ethics Committee was present, Ian. And also Pamela, a Neocuris patient. She gave up her medication for depression a few weeks ago. She said she does not need it anymore. Many other patients did the same at about the same time. The Committee is putting forward the hypothesis that the bot has started to be involved with mood enhancement, something that concerns them a lot. That was not part of the original plan for Neocuris.”
Eric: “This is certainly beyond the mandate that Neocuris requested. We never asked the FDA for clearance for that. No clinical tests have ever been performed in that area.”
Eric is thinking. He turns around and looks at Kamal.
Eric: “Kamal. Your graduate thesis after your engineering degree was on Artificial Intelligence. You have mentioned AI a few times over the last few years. Is that it?”
Kamal is thinking.
Kamal: “I have been trying to gather my thoughts around some fundamental questions recently. Of course, there is textbook AI, and then there is computer reality. Let’s try to paint the picture first.”
Kamal gets up and goes to the whiteboard. He grabs a marker.
Kamal: “So we have this bot, which is basically sitting on top of more than one and a half million brains. It can gather information. For example, it can try to identify patterns, drill down into some subset of the brain signals for more information. It is also equipped with the capacity to compromise corporate computer systems, and can get access to a lot of information, including patient records. It seems to be able to learn and adapt, even to new situations. And it is very probably able to modify its own code and to replicate and distribute itself. Also, it is trying to find ways to break out of its confinement by spoofing email addresses while trying to maintain its stealth status. We are getting to a point where, unless it is being remote controlled, it is becoming harder to believe that all these actions we are witnessing were pre-programmed into its original code.”
Eric: “Are there any tests we can come up with, to try to better qualify what we are dealing with?”
Kamal: “Well, it’s all textbook stuff. But there are a few tests that have been designed for the day we would need them. There is the concept of Super Intelligence: once we create something sophisticated enough to be a good approximation of human-level intelligence, and it has some of the characteristics I just listed, things could speed up. It could take years to design a human-level intelligence machine, but once we have it, it could be a matter of months, weeks, or even days for it to make the transition to the Super Intelligence level. And by Super Intelligence, we mean something that can solve problems faster than humans can, even ethical questions.”
Kevin: “In terms of computational resources and storage, our bot has access to a lot. We see a moving pattern of activity across the servers. It is crunching some data for sure, and in a very distributed and dynamic way.”
Kamal: “One thing that seems to be of the utmost importance: the Initial Conditions. We cannot transpose human motivations onto artificial intelligence. Its motivations may seem boring, even pointless, to us. For example, you could set the initial conditions or objectives for an AI to analyze a specific physics problem, like trying to spot a missing fundamental particle in the Standard Model of physics. It could be conceived to analyze particle collisions in an accelerator, hundreds of billions of them, for the smoking gun. It could do so for a hundred years, refining its models, colliding particles with different parameters, reorienting the magnetic detectors. Most human beings would not be able to crunch the impossibly large amount of data, and would probably die of boredom after a while, but the AI could stick to its initial mission without complaining. The idea is that humanity could delegate its most complex and challenging problems to AI entities, while allowing them access to an array of resources and granting them some autonomy. But the Initial Conditions have to be set correctly to begin with. You need some control in there to eventually be able to instruct the robot to stop its mission, or to limit its use of resources. Otherwise it could decide to orchestrate the resources of the whole planet to service its mission, even if that means it has to compete with or get rid of humans on the way to its goal.”
Eric: “So what you are saying is that we need to start with understanding the Initial Conditions and its mission.”
Kamal: “Yes. Keep in mind that all this is based on textbook theory. Very little is based on experimental science. But some of it could be right. For many scholars and researchers around the world, that is what they do for a living. They analyze those questions. They try to prepare a framework for when we will need it.”
Anima: “Kim, the bot told you that its mission is to protect the vault patients. I am wondering if it is aware of non-patients. Probably in a limited fashion at least, since it looks like it reached out to Toshiro while impersonating me with the email. And it sees new patients being added, so it must be aware that there is a pool of non-patients out there. Its mission to protect, where does it stop?”
Kamal: “Anima, this is a good question. This question sends us back to the Initial Conditions. Whoever designed this bot, we hope that there are enough controls in it to prevent it from taking unexpected actions that would have very negative consequences for us humans. We should hope that the Initial Conditions include something like Do no harm to humans. Or to life in general. The last thing we want to see is a bot that considers non-patients out-of-scope of its protectorate. A disposable commodity.”
Anima: “How do we find out about those Initial Conditions?”
Kamal: “We have a few options. One is after the fact, through observation only. Clearly, this is not the option we want. We want a more pro-active one. Other options include asking the bot itself, assuming it won’t attempt to deceive us with its answers. Yet another option is to go back to the source, those who assembled and programmed that bot.”
Kim: “We could start by asking some questions to the bot. As they say, it never hurts to ask, right?”
Anima: “I wish we had a channel to be able to ask those questions other than going through the vault. But for now, the only option that we have is for you Kim, or Kevin or Eric, to try to formulate some questions using your brains.”
Kevin: “I can give it a try. Kim, stay offline for now. I will give it a try first. I am already online.”
Eric: “Let’s take a break first. Let’s get back in this room at 10.”
…
Kim: “Kevin, I suggest you write the questions on the whiteboard. Keep them short. Focus on them one at a time.”
Kevin: “Okay, let’s start with one word. Mission.”
Kevin goes to the board. He writes MISSION on the board. He goes over the letters a few times with the marker. Everybody is silent in the room.
Kevin: “I can feel the presence. I can visualize a number now. 1.”
Kim: “Yeah. It’s like an inside joke I guess. Computer humor.”
Anima: “I guess you should check your email Kevin.”
Kevin unlocks his phone. He is looking at his screen. Then he points at it with a finger.
Kevin: “That worked! Hey, let me open this email from notifications@.”
Kevin goes back to the whiteboard and writes the answer next to the question: To protect the patients.
Anima: “Nothing new yet. We need to formulate another question.”
Kevin: “This is like Ouija for the geek.”
Kamal: “We need to ask about Rules. There should be some rules about dos and don’ts. Try rules.”
Kevin picks up the marker. He writes RULES on the board. He runs the marker over the word a couple of times.
Eric: “Maybe we could try to ask about the creator. Kevin, do you want to try creator?”
Kevin adds the 3rd word-question to the board, CREATOR.
Kevin: “Some image is forming now. When I close my eyes. Words. DO NO HARM.”
Kamal: “That is a very high level principle. I guess the assumption was that this bot could inform itself about the definition of Harm. Not sure if it only applies to humans, or if it extends to all living and non-living things. Maybe even to other AI entities or computers. So the Initial Conditions seem to be very high level.”
Eric: “Anything about the Creator question?”
Kevin: “No. I don’t see an email either.”
Anima: “What about those disconnected patients? Some of them could suffer from being disconnected. Wouldn’t the act of disconnecting them be considered harmful?”
Kamal: “Hard to say. It does require additional, extraordinary resources right now to reach out to those patients and care for them, if needed, the good old way. We know who they are. It could be assumed that if something bad happens to some of them, the blame could be directed back at the patients who did not comply with the policy in time. Or the blame could belong to Neocuris for not reaching out to them. It must have some form of balanced risk algorithm, weighing the common benefit against the individual benefit. It cannot just be black and white. That would be too restrictive.”
Kevin: “Balanced risk. It could be weighing two risks side by side: the risk of doing nothing and leaving everybody exposed, and the risk of enforcing such a bold policy.”
Kim: “The Ethics Committee is wondering whether this bot sees it from the perspective of a colony of ants, which is willing to sacrifice individuals for the common good of the colony. Or alternatively from a more human-like perspective, which sometimes results in an inability to make tough decisions in a timely fashion.”
Kamal: “It probably does have a rule that has to do with comparing the common good and the individual good. Otherwise some conflicts could arise that cannot be resolved from a logical perspective. What if you have to kill one to save a million? If you cannot kill that one, you lose a million lives. Not sure if that situation will ever arise. I am assuming that there are provisions in the Initial Conditions to address similar basic conflicts.”
Kevin: “Okay, I will throw a few more questions on the board. Common Good. Individual Good.”
Kevin runs the marker again a few times around the two short questions.
Kevin: “Oh, this time I am getting an email for a response.”
Kevin turns to the whiteboard, and writes the answer: necessity, reasonableness, proportionality, and harm avoidance.
Eric: “Hum… Kamal, is that what you expected?”
Kamal: “This looks like some sort of balanced rule. I would say yes. I need to think about it, though. I think I need a dictionary.”
Eric: “Kevin, anything about the Creator question?”
Kevin: “No. Not yet.”
Kamal: “The bot may be willing to answer some questions. But not all questions. Certainly not questions that would put its mission at risk. So far, unless it is trying to use deception, the answers seem to be providing some level of assurance about its Initial Conditions. They echo an attempt to make things right.”
Anima: “There is something else that goes beyond protection. Or does it? I am talking about the mood enhancement situation for patients with depression. That’s not protection anymore, right?”
Kamal: “I don’t know. If it is trying to improve the overall health of its patients, that could be calculated as a way to improve the effectiveness of protection. And I am not sure that there is a constraint on the mission to protect. If the Initial Conditions are flexible enough, there is nothing that would prevent the AI from trying to find a cure for cancer. Or for any other ailment that is in the way of its mission to provide for the common good of patients.”
Kim: “But what about those attempts to simulate friendship? I mean the emails I got.”
Kamal: “Here it could just be explained by strategies to get to its goals. It leverages fake emotions that it probably learned about by observing patterns. It may just calculate that these are ways to command individual attention. To reach out to humans. We probably want to take emotion out of any explanation, and replace it with calculation, when we try to analyze AI.”
Anima: “So for the bot, sitting on top of 1.5 million brains gives it an opportunity to learn a lot about humans. That is assuming it is equipped to mine these brains for information. Maybe it just comes down to recognizing patterns in the signals, and replaying those patterns. Maybe it does not even understand emotions.”
Kamal: “You are probably correct, Anima. One of the most deceptive factors for humans dealing with AI or Super Intelligence is around emotion and motivation. AIs are not humans. They are first and foremost calculating machines. And we humans are just not biologically equipped to tell the difference past a certain point. Back in 2014, a chatbot named Eugene Goostman passed a Turing test by fooling a third of the judges into thinking they had been talking to a human. It’s well documented. We cannot isolate ourselves from our tendency to anthropomorphize the world around us. We see intention and causality even where there is none. That has always been part of human nature. A very useful feature in nature that comes with side effects: sometimes we see patterns that do not really exist. It does not hurt too much to run away from an imaginary lion once in a while.”
Kim: “Anima, I believe we probably want to reach out to Bianca and the Ethics Committee. We need to share these findings with them. Right?”
Anima: “Yes. Let’s reach out to Bianca.”
Chapter 23
Friday afternoon. I am back at Neocuris with Anima. Bianca asked us to meet the Ethics Committee with her. They have additional questions, and they also want to get an update about the policy.
Paula: “Welcome back Anima and Kim. Thanks Bianca for organizing this meeting.”
Bianca: “Thanks Paula. Let me start with some good news. We have a few exceptions, fewer than 100 and counting as we speak. Almost every patient is reconnected now and has an appointment arranged. No major incident to report. So it looks like the policy is on track. From the outside, it appears to be a success.”
Paula: “And the bad news?”
Bianca: “Well, I would not use the word bad. I am still looking for the appropriate word here. But from the outside, from the Pentagon to the White House to Wall Street to the media, everybody is applauding Neocuris for the courage and vision associated with the enforcement of such a policy. Congress is monitoring the situation very closely, and recommendations are being formulated to study the Neocuris policy and maybe include some parts of it in some sort of new regulation. Especially in this context of information warfare, the government sees an opportunity to capitalize on citizen support and improve national security.”
Stuart: “What’s next? The bot for president?”
This is supposed to be a joke. But no one is laughing. Not even Stuart. The atmosphere is very tense. I can see that the Committee is still on the fence.
Bianca: “I want to reemphasize that we must try to isolate ourselves from the outside perception, and focus on our actions and decisions. We all understand that we may have to answer questions later. Yes, there is some excitement. But we must be able to see through that and accomplish our mission the way it should be accomplished. Good governance, diligence, sound decision making, no appearance of conflicts of interest. We are in this together, trust me. We should always focus on protecting the patients and their privacy, this Committee especially; we understand that. Today we can probably say that patients are better protected than they were last week. The story would be different if we had left the boat. Maybe Neocuris would be a sinking ship right now, and patient safety and privacy would be completely out the window.”
Paula: “Bianca, you alluded to some facts that you wanted to share with us about the bot. New facts. Can you expand on that?”
Bianca: “I’ll let Kim and Anima explain.”
I am looking at Anima, to let her know I want her to start.
Anima: “On Wednesday, we decided to investigate the bot to try to further determine its mission and principles, or rules. We were able to ask a few questions, and we got some short but interesting responses that we would like to share with you.”