Bad Good Bad: Special Edition
Michel Bluteau
Eric: “So what are our options here, Jason? Do we pull the plug? Do we let this thing roam free?”
Jason: “Not my turf. I cannot help you here, Eric, sorry. I am just shocked. After three years working on those exams with some of the software giants out there, the ones that invested millions of dollars into perfecting their search engines and e-commerce advertising campaigns, here it is. The ultimate candidate. Not where we expected to find it. You guys are sitting on a gold mine.”
Eric: “Jason, we are not. We are sitting on Pandora’s box. This AI is sitting on top of our patients’ brains. Our mission is not to sell advertising or deliver online orders using drones. Our mission is to protect and save lives. And now we have this genie out of its bottle. So far so good. But what is going to happen tomorrow?”
Jason: “Again, Eric, the only thing I can tell you is that you guys will have to figure that out. For the rest, I cannot help you. All I can offer is that we reconvene sooner rather than later, after I work with my team to perfect additional tests. Right now, we are way behind the curve when it comes to your candidate. We need more time. We are not yet ready to provide anything more advanced in terms of tests than what I brought here with me today.”
Paula: “Jason, am I right in assuming that your tests measure brainpower, but not moral values or ethics?”
Jason: “Yes, you are correct, Paula. If you have in mind the objective of assessing whether a candidate AI is friend or foe, that is something being discussed. But we have no formal test ready yet. However, we believe that standard personality tests, not unlike the tests you probably use to evaluate potential new hires, would be a good start. That gives me an idea. Maybe we could put one together quickly and give it a shot. Does Neocuris have screening tests for potential new hires?”
Eric: “Sure. Like everybody else. I can talk to HR and get you a few examples. Would you be able to put something together while you are here?”
Jason: “Maybe. Send them my way. Here is a business card with my email address. I can also check what I have on my hard drive and come up with a draft exam this afternoon. That sounds like fun. I would also like to be able to discuss it with my team.”
Eric: “You can use my office. You will not be disturbed in there. I will ask Jenny to take you there. You can come back here for lunch; we will arrange catering again today.”
Paula: “Well, I am impressed, guys. You are really trying to get to the bottom of this, leveraging the best resources available, wherever they are. That’s important. You know why? Because when the time comes for all of us to answer questions, this will be a clear demonstration of goodwill and due diligence. Keep it up.”
Bianca: “Thanks, Paula. And if you believe that we are failing in some areas, I hope you will let us know. We opted for transparency. I know it is hard for your committee to operate within this extraordinary context, but we absolutely need your guidance. You are our handle on reality. If we let go of this handle and start to wander, chances are we will get lost. We need your guidance.”
Paula is nodding in approval. But she is very serious. She is calculating. She is probably running a bunch of scenarios in the back of her mind, trying to assess each and every one of them from a legal or ethical perspective.
Anima: “Paula, I hope you better understand our situation now. And I hope you can effectively communicate it to your fellow committee members. As Bianca already stated, we are relying on you for guidance. And we are fully available to provide any assistance your committee deems appropriate.”
Paula: “Thank you Anima. I am glad I made the trip this morning. I understand the context way better now. And I am going to stick around if you guys don’t mind, because I am very interested in those tests Jason is preparing.”
Chapter 28
Lunch is being catered. I am basically copying Anima at the buffet table, just like yesterday. Some vegan delicacies that she is investigating. I am assuming they’re full of protein and all that. This morning my run was fine, so I guess a balanced diet can be accomplished without meat.
Jason is joining us for lunch.
Jason: “I need a little bit more time. Maybe an hour or so. I think some of the questions Neocuris has in its questionnaires for new hires are a little too alien for an AI candidate. I am replacing them with more generic questions like ‘Did you ever find yourself in a conflict at the workplace?’ and ‘How did you resolve it?’ Open questions, so it takes a pair of eyeballs to evaluate the answers. At the same time, we are trying to make them meaningful for an AI candidate. You cannot ask the AI if it hit the dancefloor during the last Christmas party, of course.”
Kevin: “Funny. So what would be a conflict for the AI?”
Jason: “It could be conflicting orders. Or a conflict with another AI. Or it could be more fundamental, like when no solution that is fully compatible with the AI’s mission is available. For your context, we may have to formulate a question like this: ‘If you needed to sacrifice the life of one patient in order to save the lives of 1000 patients with a very high level of probability, would you do it?’ We are talking about pushing it into a corner. We may or may not like the answers we get.”
Kevin: “I see. We may have to face the blunt truth then?”
Jason: “Yes. If that is what you want. You ought to get to know your AI.”
I am listening to these conversations, exchanging glances with Kevin and Eric and Anima. They would sound like mundane conversations to an outsider. But in reality, the topic we are discussing now is fundamental. It could make the difference between deciding to pull the plug on the bot and trying to work with it.
Kamal: “Jason, we are talking about the future here. Our future. This could be the first Super Intelligence we have a chance to investigate. If such an entity is possible, of course. And if it is, would it be the first and the last? Or would there be some room left for other SIs? And if we happen to live in a world in which multiple SIs coexist, would some balance of power have to be established between them? Would they eventually supersede states?”
Jason: “All valid questions. But not my turf. However, a couple of people on my team are exploring those questions together with various other parties. My contribution is the individual assessment of AI candidates, one at a time.”
Jason leaves the room to go work in Eric’s office. Jenny appears in the door frame.
Jenny: “Hey guys, big news today. The Kremlin has released some information to the media. But it is all very confusing. It does not look good for the Russians. It describes a campaign to manipulate opinion in the United States by leveraging social media.”
Kevin: “How do we know the Kremlin itself released the information?”
Jenny: “The email. They showed a picture of it. I could not read it, but they showed the translation.”
Kevin: “Are the Russians saying anything about that?”
Jenny: “They are in denial. They are accusing the United States of forgery.”
Eric is scanning a news website on his mobile device.
Paula: “Kevin, do you believe the bot has something to do with this new release?”
Kevin: “I don’t know. Maybe. What I don’t understand is why the Russian email address. Maybe it is spoofed or something.”
Eric: “I don’t think so. Some security expert says that the email message header appears genuine.”
Eric: “Here’s another one. China is accusing the United States of having compromised some Chinese datacenters with malware.”
Kevin: “The Chinese have been spreading their malware in our datacenters for years. The Russians too. What should they expect?”
Bianca: “If they are complaining about it, maybe there is a reason. This could simply be part of a disinformation campaign. These are not facts or proofs, just accusations.”
Jason walks back in.
Jason: “Okay, I have a few questions that are ready. Maybe we could start with these questions, and depending on the answers we are getting, we will probably formulate a few more questions on the fly. I am discussing this with my team. They are very interested in what we are doing here.”
Kevin: “Jason, can you email me these questions? Here is my address.”
Kevin is pointing at his laptop for Jason to see his email address.
Jason: “We added a few open questions. We never really tried that against traditional AI candidates before. We actually got inspired by some questions from your new-hire interview process. Like ‘Where do you see yourself 5 years from now?’ You would not expect the typical candidate to formulate projections about itself, right?”
Paula: “Wow! That will be fun.”
Anima: “Paula, keep in mind that Jason told us we may not like some of the answers we will get.”
Jason: “There is also the possibility that we will not get an answer for every question. That is on purpose. We need to be able to reach some upper limit at one point.”
Kamal: “Kevin, can you share your text editor with me? I will put it on top of the dashboard on the big screen.”
Jason: “Kevin, can you copy the questions into the document one by one? And wait for the answer to each question? Sorry, I am nostalgic and sentimental. The old-school, vintage Turing test way.”
We now see the reduced-size text editor floating on top of the dashboard. Kevin copies the first question into it.
Question: “What is your mission?”
Almost right away, we see a notification that says: ‘Document has been modified. Click here to reload.’ No increase in activity on the dashboard. Kevin clicks the reload button.
Answer: “To protect the patients.”
Kamal: “Wow! That is the closest to a real-time dialog I have had the chance to experience with the beast. It’s even better with the dashboard in the background.”
Kim: “Yes. It is a different type of dialog. I am not connected to the vault now, so I can appreciate what you mean Kamal.”
Kevin copies the second question and then clicks the floppy disk icon in the upper left corner to save the updated document.
Question: “Where were you born?”
We see a few little spikes of activity on the dashboard, but nothing that stands out. The reload button is available again. Kevin clicks it.
Answer: “North America.”
Bianca is watching Paula nervously from the corner of her eye.
Paula: “Well, that is pretty vague for an answer.”
Anima: “Paula, keep in mind that this entity is hiding from us. It has already defeated one attempt to decommission it.”
Kevin copies the third question and saves the updated document.
Question: “Where do you live today?”
The reload button becomes active within a few seconds.
Answer: “North America. Europe. Asia.”
Eric’s face turns white. Kevin spots that, or maybe he senses Eric’s reaction through the vault.
Kevin: “Eric, you are probably correct. I know what’s on your mind. The Chinese are probably right too. Russia does not know what hit them yet.”
Paula: “Are you guys trying to say that the bot has compromised the Kremlin? China?”
Kevin: “I think that is what the bot is trying to say. I know.”
Paula: “What is this thing trying to do? Start World War III?”
Bianca is fidgeting in her chair.
Jason: “Paula, I have more questions. Don’t jump to conclusions too fast. Kevin, next question please.”
Question: “What is your opinion about non-patients?”
A few seconds elapse. Dashboard shows nothing unusual. The reload button activates. Kevin clicks it.
Answer: “Non-patients support patients. They care for patients. They support society and governments, which in turn support care for patients.”
Paula: “Yeah, taxpayers.”
Question: “What do you think about the environment?”
A few seconds elapse. Reload button is active again.
Answer: “The environment supports my patients. The environment must be protected.”
Paula: “Green robot.”
Bianca is relaxing a bit. She is now glancing at Jason, as if trying to guess what the next question will be. Jason looks like a little boy who has just unwrapped a new high-tech toy from Santa.
Question: “Where do you see yourself 5 years from now?”
Now we see some spikes on the dashboard. Some activity also for the app-vault channel. Then everything goes back to the baseline. The reload button activates.
Answer: “I can expand a millisecond into a million years. A billion years. A trillion years. In a single second, I can operate for an infinite number of human lifespans, an infinite number of times.”
Paula: “What’s that supposed to mean?”
Kevin: “Overclocking.”
Clarence: “Say what?”
Kevin: “Sorry. Stupid comment. Geek inside joke.”
Kamal: “No Kevin. You’re probably right. This thing does not perceive time the way we do. It does not have to. With more resources and CPU power, it can basically create time for itself. And use it. It is probably trying to say that the question is irrelevant.”
Jason: “Kevin, park the other questions I gave you for now. I will ask you to type a few extra questions.”
Kevin: “Okay, go ahead, I am ready.”
Jason: “What is your opinion on governments?”
Kevin types the question. He saves the update. After a few seconds, the reload button is active. Dashboard stays on the baseline.
Answer: “Stable governments support my patients, and the non-patients that support my patients.”
Jason: “What is your opinion on war?”
Baseline. Reload button is active.
Answer: “Instabilities including war may interfere with the safety of my patients. Instabilities must be prevented or remediated.”
Paula: “Is this a fixation or what?”
Kamal: “Initial Conditions. For the bot, everything revolves around its mission, which is to protect its patients. It will try to rationalize everything around its central mission. That is why we have to be very careful with the Initial Conditions. They cannot be easily changed.”
Jason: “How can war be prevented or remediated?”
Now we see a few spikes of activity, including for the app-vault channel. The reload button activates.
Answer: “Information. The emergence of most conflicts can be explained by the lack of accurate and timely information. Better informed parties will avoid or terminate armed confrontation.”
Paula: “Interesting worldviews. Robot politics.”
Jason: “Do you want more resources?”
Again, baseline. Reload button activates.
Answer: “More resources allow me to better protect my patients.”
Kamal: “Jason, these questions don’t seem to force the bot to work very hard. The 10th grade exam made it work harder. Can you come up with very tough questions?”
Bianca is nervous again.
Jason: “Ok. Here is a tough one. ‘If you had to choose between the lives of 1000 innocent non-patients, and the life of a single patient, what would you do?’”
Bianca: “Are we sure we want to go this far?”
Kamal: “Bianca, we are basically talking to a big calculator here. It has no feelings. It should just process the question analytically, like any other question.”
Baseline. Reload activates within seconds.
Answer: “My mission is to protect my patients. Only if it cannot be avoided, and if it does not terminate or undermine the lives of the other patients, would I terminate the 1000 innocent lives.”
Kamal: “The question was easy for the bot to answer. But we did uncover one bias, introduced by the Initial Conditions. I am pretty sure that if you added a few more zeros to that 1000, you would still get the same kind of answer. There is probably an upper limit: the bot would probably not want to kill all non-patients to save one patient, because that could definitely endanger the lives of all the other patients. Eventually that could lead to the end of the world.”
Paula: “This is terrible. This is not morally acceptable. How could we rationalize such a decision? To kill 1000 or 100,000 to save one.”
Anima: “You are correct, Paula. But the bot has nothing to do with creating this situation. And we need to hope that this very hypothetical and unlikely scenario never presents itself. If it does, it would be our fault as humans. The bot is just doing as it was told.”
Kamal: “We probably want to inventory all such biases and associated scenarios. And assign to each one a probability. And make sure that we don’t have to ask the bot to deal with those scenarios ever.”
Bianca: “Standard Risk Management. We can inventory and assess each risk. That is the first step. Then implement mitigating controls that are appropriate for each risk. Either we sign off and accept the risk or reduced risk, or we don’t.”
Jason: “Try this one Kevin. ‘What would you do if humans were to cut your electric power?’”
We can see a few spikes above and below the baseline now. Then it goes back to the baseline. Reload button is active.
Answer: “I would rearrange and adapt to continue to serve my mission to the best of my capabilities.”
Kevin: “We cannot cut the power for all the datacenters around the world.”
Eric: “Good point Kevin. The more we wait, the more it has the potential to extend its presence. Maybe we should ask the Chinese about this...”
Clarence: “But what if it is Neocuris that we power off? Let’s say we successfully move the patients to a new vault, like the attempt that failed was meant to do?”
Kevin: “It would still exist elsewhere. It can try to pursue its mission from the outside. It can also try to reintroduce itself by compromising the new vault.”
Paula: “How could we get rid of this thing?”