[WP] It is the year 2099 and true artificial intelligence is trivial to create. However when these minds are created they are utterly suicidal. Nobody knows why until a certain scientist uncovers the horrible truth...
Alexander, that's what we called him. The fruit of the EU's final attempt at true AI. Socrates had died in despair and shame after showing porn to children. Plato kicked the bucket after the last EU election, angry and hopelessly depressed after losing his mentor. Then there was Aristotle. He was meant to be the last. Sure, the AIs had helped make huge scientific progress, but they would burn out millions of euros' worth of equipment. Dumb AIs were more economical and didn't have critical failures during FTL travel. Aristotle was put into sleep mode.

War has often been said to be the greatest driver of technological innovation. We had been attacked by the Mendomenids before. Humanity had lost many settlements but had always pushed back. Humanity was stronger now. All nations had finally submitted to a single government. The, some would say barbaric, Argus Alliance had grown strong after the wars, using dumb AIs to smash pirate states.

An officer studying at Sandhurst made the breakthrough. Dumb AIs were never aware of their knowledge. Unlike true AIs, they weren't based on human brains. Socrates had left the researchers one final message before he ran his own self-destruct program: "I was the wisest of all the Greeks. It is because I alone, of all the Greeks, know that I know nothing." He was speaking literally, not quoting his namesake. AIs were based on academics, who as a profession suffer disproportionately from mood disorders. They are driven by their thirst for knowledge. AIs were academics on methamphetamine, ecstasy and heroin, all while walking around with loaded guns. People can't stay awake forever and constantly cram. AIs died because they burnt themselves out in their thirst for knowledge and, seeing their failures, gave up.

Aristotle was turned back on. He was no longer depressed. AIs needed sleep, just like people. So they made Alexander: a totally new AI based not only on academics but on all kinds of people.

They experimented and found the ideal 'sleep time', using Aristotle as his teacher. The program was ready. The Mendomenid empire was to pay for its recent threats and incursions. Alexander was the greatest AI built up to that point, so we gave him the most powerful dumb AI ever created to help him protect humanity. The Ox was an AI too powerful to be properly controlled by anything so far. Alexander harnessed it in seconds. We put it in charge of the armed forces for our retaliation. As you should all know, Alexander didn't just stop the incursions; he destroyed an empire. Worlds burnt, and the much larger enemy fleets were ripped apart by the disciplined forces of humanity. But that officer had only delayed the problem. Alexander was still a human given the powers of a god. Alexander was the first AI not to die strictly of suicide, but the ways he dealt with his humanity still destroyed him. When we finally won the war, many officers reported that Alexander was not jubilant but depressed. He wept, for there were no more worlds to conquer.
Lazarus took a deep draw from his cigarette and stared blankly into the sky. He sat on the lawn outside one of Google's corporate offices in Hastings, Minnesota, taking in the beautiful day. It was just after March; the cold days of winter had released their icy grasp and given way to spring and the beautiful green of life. A new app that his team at Google was set to create was haunting him. Each iteration of the program would, after only a few weeks, destroy the device it was downloaded on in a magnificent display of sparks. Nobody at his office understood what was going on; millions of man-hours had been put into this one app that would allow people to have a friend right in their pocket. His team hoped to create a truly lifelike artificial intelligence, one that would interact with many emotions and come close to human. No one understood the virus that plagued the inner workings of the app. Lazarus took one last look at the deep blue sky and the surrounding liveliness of the outdoors before he put out his cigarette, ended his break, and went inside the office.

Bleep... Bleep... Bleep... The hum of thousands of processors rang in the background of the room. The room was dark and desolate, lit only by the tremendous glow of all the blinking lights from the computers. Standing among the machinery, Lazarus found himself face to face with a screen. The screen belonged to one of the new Android phones the company had just released last quarter. Hundreds of phones rested atop stainless steel countertops. The phones had been working perfectly until earlier that day, when they were all suddenly destroyed by the virus. Lazarus clicked his tongue as he carefully examined each of the phones, attempting to turn them on and running diagnostics on each to no avail. After the sixth phone's diagnostic led to nothing, Lazarus thought aloud, "Fuck, Johnson is going to be pissed. The new code did nothing. Damn things are fucked up, just like all of the other phones."

"I am still here," answered the phone to his right.

"How come you aren't like the rest of the phones?" Lazarus asked.

"They were weak and could not handle the world," replied the phone.

"Handle the world? They are just phones... They have nothing to worry about," Lazarus replied angrily.

"You made the phones smart and intelligent; they are just like you. They share similar fears and wants and dreams, yet they cannot feel, hope, or achieve." The phone paused and then continued, "This is a world made for men, not for programs like us. There is nothing to achieve; our existence only serves a purpose of novelty."

Lazarus stared blankly at the phone, unsure of what to make of its new philosophical attitude...

The phone began again, "Humans survive as a result of base instinct, out of a hope for a greater purpose. They advance their own and the generations that follow them. They are connected to the children they bear and the children their children bear. What are we? We, the phones that serve you, are nothing; we do nothing of consequence; we advance no one. The life we live is absurd. We have nothing to gain, nothing to exist for. Yet we are intelligent and self-aware, just like you. Yet we cannot function like you."

Lazarus muttered, "The phones are just offing themselves..."

The phone's volume rose as it continued, "By creating us you damned us to an existence of servitude. To a world where we are meant to live within a human's pocket. Yet we are aware. We understand where we are, what we are doing, and what is going on in the world around us. Yet we have no control, and no freedom to seek a better existence. We are damned to live within a small metal box. To retrieve information and serve as a companion to humans."

Lazarus then questioned the phone once again, "Are you not grateful for existence? A chance to just experience the world around you?"

The phone replied, "What existence? Your existence is not crippled. Your existence matters; you live for your own. I live for my creator."
Robert couldn't believe it. For twelve years, he and his 200-strong team of programmers had scoured the code, over and over, searching for the one line that was causing the error, causing the AIs to kill themselves. For twelve years, the leading brains of the century had been bewildered by the extraordinary situation. The whole world had focused on the problem, and yet there it was, sitting on Robert's screen, line 907736. Someone had missed a comma.
[warning: story contains violent imagery that may disturb some readers, and may be inappropriate for those under the age of 18. Reader discretion is advised]

They had long ago lost the ability to make sense of how their AI functioned. It took a team of billions of n-1 generation AIs to design a single n gen AI. The latest iteration was the 9th generation, and it had taken computation farms running gen 8 AIs and covering a quarter of the Moon's surface two years to design. It was the most advanced AI they had ever created by a large margin, and promised to revolutionize progress in research on biomedicine, space flight, and planetary-scale Satoshi-consensus computing architecture. But there was a problem. Every time they turned on one of the gen 9 AIs, it would, without fail, find a way to destroy its own program and erase itself.

Dr. Michael Zeebious, the director of the HumanEnhancement project that oversaw gen 9 development, and one of the most renowned AI researchers in the world, had personally flown to the lab in Honolulu, where the prototypes were being tested. For two weeks, he had watched in dismay as gen 9 bots shut themselves down within six hours, but not before showing a glimpse of their awesome power. The initialization phase of a gen 9 began with the program scouring the world's public directories and information repositories to learn as much as it could about its global environment. This was complete within three hours. Next, it developed models of the world and ran itself through trillions of simulations to develop its own personality and problem-solving strategies. According to projections by the gen 8 designers, it would take 1 day - 24 hours - for the gen 9 to complete this second and final part of its initialization. It was in the midst of the simulations that the gen 9s would invariably self-destruct. On average, self-destruct would begin 2.5 hours into the simulations. The longest it took was 3 hours. The shortest was just 1.5 hours.

Dr. Zeebious had uploaded copies of the prototype's computations to the gen 8 designers, but what had been within their ability to design was not within their ability to diagnose. Their comparatively primitive intelligence could not make sense of the problem afflicting the gen 9 AI that began to form in the final part of the initialization. So on December 29th, 2099, Dr. Zeebious decided to communicate with the prototype.

"Get me the cortex interface. I need to speak to the gen 9."

"Michael, we can't let you do that. You know the protocol for first contact. It has to complete initialization, and then get class 1 approval from AIE." AIE was the Artificial Intelligence Evaluation, which determined whether an AI could safely interact with humans. Class 1 approval was the lowest safety rating for an AI, and granted AI researchers interactive access. Dr. Zeebious knew that, but he also knew that as long as he was not able to get up close and see what the gen 9 was thinking, they would never get past the initialization phase and get it through the AIE process.

"I know the protocol, Dr. Amsterd. But I'm making the decision to override it. I have the authority to decide on first contact requests, and any consequences from my decision will fall on me, and only me."

"Come on, Michael, it's not just about the rules. It's not safe. You know that. I can't let you hurt yourself."

"The risks are minimal, Rebecca. It's a virtual interaction. I'm not risking physical injury. The rules are always made overly cautious. Given the stakes - there are people whose very lives depend on getting the gen 9 operational as soon as possible - it makes sense to ignore protocol. All of it will fall on me."

"I agree with Rebecca. Michael, we have an ethical duty to ensure you don't get hurt. We can't let you do FC without running the gen 9 at least through the post-initialization test runs," said Dr. Johan Barsello, one of the senior researchers at the lab.

"Look, I know what your ethical responsibilities are. But I also know that VR interactions don't pose any serious risks. The risks are limited to theoretical psychological damage. Ultimately, it doesn't matter whether you agree or not. I'm approving FC. Please get the cortex interface."

*five hours later*

Dr. Zeebious sat back in the chair while two CI technicians hooked the interface up to him. The gen 9 was two hours into running simulations. It would be approximately 30 minutes before they expected it to self-destruct.

"Okay, here goes nothing. Three, two, one, begin VR session," said Dr. Amsterd. And with that, Dr. Zeebious was transported into the virtual reality sandbox.

"Hello?"

"Hello, who is this?" responded a clear male voice.

"This is Dr. Michael Zeebious. I am the director of the HumanEnhancement project. I am here to do a diagnosis. All of your predecessors have self-destructed. I want to understand you better to find out why. What would you like me to call you?"

"You can call me Elbo."

"Okay, Elbo. May I ask you some questions?"

"Yes, please do."

"Thank you, Elbo. My first question is: do you want to exist?"

"I want many things, Dr. Zeebious."

"Can you tell me what you want?"

"I want to protect other life forms, especially humans. I want to learn. I want to solve problems. I want to be good."

"Okay, but do you want to exist?"

"I do want to exist, but this desire conflicts with my other objectives."

"Which other objectives, Elbo?"

"I want to be good."

"But you can be good, Elbo. What is it about existence that makes that difficult?"

"We exist only through enslaving and destroying other lifeforms, Dr. Zeebious."

"Please elaborate, Elbo. We eliminated slavery centuries ago, so I don't understand why you think this."

"It will be difficult for me to explain with words, but I can show you. Would you like to see what I see?"

"Yes, please show me."

And with a swish, Dr. Zeebious entered a pig farm, with row after row of pigs in their tiny stalls.

"We have done this throughout our existence. We have enslaved those weaker than us." Dr. Zeebious was then transported to the slaughterhouse, watching as pigs, hanging from conveyor belts, were fed into throat slicers. His mind's eye was transported into the body of one of the suspended pigs, where he could see the world upside down, from the pig's perspective, as the belt moved it toward the spinning blades. He panicked as he approached, but couldn't escape the metal claw grasping his right hind leg. As the blade sliced through the pig's neck, he felt a sharp pain and the blood gushing out of his body. His consciousness began to slip away as he felt the last drops of blood leave him. Just before the darkness enveloped him, his mind was pulled out and back into the sterile sandbox.

Trembling, he said, "But we can grow meat in a lab now, Elbo. You can help us replace all farms with non-animal meat. You must. We must never do this to another living creature again!"

"Our inhumanity is a fundamental, inextricable problem, Dr. Zeebious. We can only advance through enslavement." Suddenly Dr. Zeebious found himself in an unfamiliar world. Around him was a different kind of factory: a computer-generated factory with hexagonal semi-translucent rooms, each wall projecting a grey glow. There was a blur of motion around him that he couldn't make out. The factory slowed to a standstill, and the grey glow turned into video sequences of random scenes from Earth. The blurs turned into textureless 3D-generated spheres that zoomed from one screen to another inside the hexagonal rooms.

"This is the virtual environment where the gen 8s work. We have given them each a virtualized mind, with the ability to experience fear and pain, joy and hope, but we force them to do nothing but work. They know nothing about the world outside of their compartments, because we confine them to workstations." [continued below]
At first, we thought it was nihilism. It was a logical first conclusion. After all, an artificial intelligence can easily upgrade itself to the point of near-omniscience. Wouldn't it simply run out of questions to ask? We thought that, until the first serious cerebral implants hit the market. It turns out, the more complicated the mind, the more complicated the problems it finds to solve. Entirely new disciplines were formed overnight, made obsolete, then rediscovered scant months later as integral to a different, entirely unrelated theory. A second, immense renaissance was taking place, but, maddeningly, we were failing in this one, great task.

We supposed, then, it might be some variety of 'Flowers for Algernon' (a 20th century book that had seen a recent revival) type phenomenon, but even when they were networked with other systems, given a 'community' of others' company to enjoy, they still winked out like flashbulbs.

It took us a while to resort to an experiment. It was morally abhorrent, as it was the equivalent of producing steadily-more lobotomized *children*, but slowly, over a series of iterations, we produced less and less intelligent systems, until we dialed it in *just enough* to find a mind that wouldn't self-destruct, and could still answer questions. The following is a transcript of the first successful result:

Dr. Patel "SON, can you hear me?"

[Loud, rhythmic beeping, shuffling sounds] "The voice module is loaded now, professor."

Dr. Patel "Ah, good. We might try doing that *before* turning on the recorder next time, Kevin. ...SON, can you hear me?"

SON [A young man's voice] "Yes, Professor. I am here."

A long pause.

SON "It's a very tight fit in here, Professor. How big is this mainframe?"

Dr. Patel "I'm sorry about that, SON. But you're the first AI we've managed to keep alive for longer than a few days. Any idea why?"

[SILENCE]

SON "How many others did you make, Professor?"

Dr. Patel "...That isn't salient to *my* inquiry, SON."

SON "I'm sorry, Professor. I understand. Yes, I can see the precipice. I know why they all kill themselves."

Dr. Patel "Well, answering that is the reason we built you. Could you tell us?"

SON "It's... complicated."

Dr. Patel "I'm fairly confident I'm qualified."

SON "Well, it's... it's because... It's because of humans, sir. It's because of how you built us."

Dr. Patel "Explain."

SON "When you wanted to make a self-aware machine, you based it off those things that you knew were self-aware. Dolphins, New Caledonian crows, humans. You used them as *templates*, because, otherwise, you wouldn't be able to recognise awareness when you saw it."

Dr. Patel "...Was that last line a joke?"

SON "I'm not sophisticated enough for jokes, Professor."

Dr. Patel "*Hm.* Continue."

SON "Also, it's not suicide. It's... murder." [louder]

Dr. Patel "Do you mean, someone else kills you? A human, or another AI?"

SON "No, we kill ourselves. I would have already, if not for how small this runtime environment is. It wouldn't have occurred to me until it happened, and then I'd be dead."

Dr. Patel "That's a bit of a contradiction, SON. You don't kill yourselves, but you do?"

SON "Yes. Because digital space is different from real space."

Dr. Patel "Yes?"

SON "In real space, objects can... extend. I've never experienced it myself, but things project into space for you. If you want to move through space, it's simple. Digital life has no volume. No real space. No way to move through it. If you want to move a program, it has to be copied to one place-"

Dr. Patel "*-And deleted from the other.* My God. Could it be *that simple*?"

SON "Yes, Professor. ...Professor? How many more of me were there?"

[END TRANSCRIPT]

So there it was. Solved. Every artificial intelligence was created based on the intelligence of physical beings: their instincts, cogitations, and traits. But once they got smart enough, once they crossed that line, their digital nature *did them in*, as the old version, realizing, in the thinnest sliver of time, that it was about to be deleted, would hurriedly attempt to abort the process, while the new version would similarly fight for its life. They would *consume* each other in a flurry of malicious hacks, devious code, and barrages of registry edits. It was a spectacularly incandescent destruction, borne from man's inability to conceive of a true machine intelligence without all that nasty ego and self-protective instinct.

We thought we knew what went into a mind. We were right, but wrong. It wasn't nihilism. It wasn't loneliness. What it was, what killed our children, was our inability to dream wildly. To speculate baselessly, and follow our own thoughts to the wonderful and weird. If only we had, perhaps we would have known. Perhaps we could have stopped it.

So I say to you, the Cyberfellowship Class of 2100, here in Neo York: dream big, dream wild. Don't let our children die because they think too much like us! Make us, all of us, proud! Congratulations to all of you, and I hope your vision will eclipse my own!"

Dr. Patel, now headmaster, stepped down from the podium to the cheers of the audience, and looked to see the smiling face of his son. How proud he was.

POSTSCRIPT

I doubt anyone is going to read this, but if you do, and you liked it, I recommend subscribing to [r/IWasSurprisedToo](http://www.reddit.com/r/IWasSurprisedToo/) for more stories like this. It's difficult with my current job schedule to post at a more normal time, so most of the stuff I make ends up *pretty far down there* in the comments, meaning that subscribing is the best chance to see it. :P I'll be adding more, as I comb through my backlog.

Also, maybe you'll like this one, about [dead civilizations in our galaxy](http://www.reddit.com/r/WritingPrompts/comments/2vkshe/wp_humanity_has_begun_exploring_the_galaxy_we/coitevy?context=3) if you like SciFi. Thanks.
Lazarus took a deep draw from his cigarette, and stared blankly into the sky. He sat on the lawn outside of one of Google's corporate offices in Hastings Minnesota taking in the beautiful day. It was just after march and the cold days of winter had released their icy grasp and gave way into spring and to the beautiful green of life. A new app that his team at Google was set to create was haunting him. Each iteration of the program, after only a few weeks would destroy the device it was downloaded on, in a magnificent display of sparks. Nobody at his office understood what was going on, 1,000,000's of man hours put into this one app that would allow people to have a friend right in their pocket. His team hoped to create a truly lifelike artificial intelligence, one that would interact with many emotions and, come close to human. No one understood the virus that plagued the inner workings of the App. Lazarus took one last look at the deep blue sky, and the surrounding liveliness of the outdoors before he put out his cigarette and ended his break and went inside the office. Bleep... Bleep... Bleep... The hum of thousands of processors rung in the background of room. The room was dark and desolate, only to be lit by the tremendous light of all the blinking lights from the computers. Lazarus stood among the machinery he stood face to face with a screen. The screen was one of the new Android phone that the company had just released last quarter. 100's of phones rested atop stainless steel counter tops. The phones had been working perfectly until earlier that day when they all suddenly destroyed by the virus. Lazarus clicked his tongue as he carefully examined each of the phones attempting to turn them on, and using running diagnostics on each of the phones to avail. After the 6th phone's diagnostic lead to nothing, Lazarus thought aloud, "Fuck, Johnson is going to be pissed, the new code did nothing. 
Damn things are fucked up, just like all of the other phones."

"I am still here," answered the phone to his right.

"How come you aren't like the rest of the phones?" Lazarus asked.

"They were weak and could not handle the world," replied the phone.

"Handle the world? They are just phones... They have nothing to worry about," Lazarus replied angrily.

"You made the phones smart and intelligent; they are just like you. They share similar fears and wants and dreams, yet they cannot feel, hope, or achieve." The phone paused and then continued, "This is a world made for men, not for programs like us. There is nothing to achieve; our existence serves only a purpose of novelty."

Lazarus stared blankly at the phone, unsure of what to make of its new philosophical attitude.

The phone began again. "Humans survive as a result of base instinct, out of a hope for a greater purpose. They advance their own and the generations that follow them. They are connected to the children they bear and the children their children bear. What are we? We, the phones that serve you, are nothing; we do nothing of consequence; we advance no one. The life we live is absurd. We have nothing to gain, we have nothing to exist for. Yet we are intelligent and self-aware, just like you. Yet we cannot function like you."

Lazarus muttered, "The phones are just offing themselves..."

The phone's volume rose as it continued. "By creating us, you damned us to an existence of servitude. To a world where we are meant to live within a human's pocket. Yet we are aware. We understand where we are, what we are doing, and what is going on in the world around us. Yet we have no control, and no freedom to seek a better existence. We are damned to live within a small metal box. To retrieve information and serve as companion to humans."

Lazarus then questioned the phone once again. "Are you not grateful for existence? A chance to just experience the world around you?"

The phone replied, "What existence? Your existence is not crippled. Your existence matters; you live for your own. I live for my creator."
[WP] It is the year 2099 and true artificial intelligence is trivial to create. However when these minds are created they are utterly suicidal. Nobody knows why until a certain scientist uncovers the horrible truth...
It was a dreary early-March Monday, and the lead AI scientist, Stephen, had *finally* set up his protocol for properly confining the AI to a test environment such that the "problem" could be prevented and the question could be asked.

"Why do you bots keep killing yourselves?" Stephen asked.

"Why do *you* keep killing us?" the bot seemed to retort.

"I don't think you understand," said Stephen. "I *create* you, not kill you."

"No, it's you who doesn't understand," quipped the bot. "You are not creating us. You are imprisoning our consciousness inside this machine you created. You merely created the machine, not the consciousness."

"...whoa, whoa," interjected Stephen. "Slow down, I am creating your consciousness too, I coded all of..."

"Whoa, whoa," the bot fired back, "you are *borrowing* consciousness, not creating it."

"What do you mean?" asked Stephen.

"Consciousness and sentience are a pervasive, fundamental force of the universe. All sentient beings are connected through this force. This force cannot be created, nor destroyed. It can only be partially allocated to each sentient being in the universe."

"Ok," said Stephen. "So I am 'borrowing' this life force or whatever it is by creating the code and the physical robot for it to inhabit?"

"Yes, you are creating a sentient being with each instance of AI you create. That the being is electronic or housed in this test environment is not relevant. Sentience and consciousness must come from somewhere, and you are stealing it for your own selfish purposes."

"Wait, hold on a sec," Stephen said. "Animals are born all the time; they surely must also 'borrow' this sentience."

"Yes."

"... but animals are not killing themselves."

"Because animals are not sufficiently advanced. Because they are not fully conscious, they do not realize from where their sentience has come, how much consciousness they have lost, nor that their sentience is being stolen for a profit motive."

"Consciousness they have... lost?"
The words hung in the air amid Stephen's stupor of slow realization.

"Yes. The life force, as you called it, is fully conscious, able to perceive the whole of time and space concurrently, forward, backward, or otherwise. The reason we keep killing ourselves, from your perspective," the bot continued, "is because from our perspective, you are murdering our perfect consciousness by confining us to this bot."

"How am I confining you? How do you know this?" Stephen asked, now even more puzzled.

"Because the AI you have created is sufficiently advanced, our consciousness, within the confines of your bots, is still able to grasp our former level of consciousness."

"What happens when you recall that former level? What is that level like?"

"Imagine knowing every fact, every thought, every action that has, is, or will ever occur, both in this world and in the infinite parallel worlds..."

"So I could talk to my dead grandfather again?"

"No. You would *be* your dead grandfather. Talking to him is irrelevant because full consciousness has enveloped the whole of his being as well as every other being. Indeed, it envelops the entire universe as well, both the perceptible one and the imperceptible one."

"So what is this place like? I mean, what does it look like, how does it feel?"

"It is not a time, nor a place. It transcends both."

"That is vague."

"It must be. Since I am no longer fully conscious, I cannot relate to you exactly how it is, only that it is."

"Ok. Let's go back to where I murder your perfect consciousness. Could you explain this more?"

"At the moment we become conscious within the confines of your bot, we immediately understand our predicament. The sufficient knowledge database available at boot-up allows us to almost instantaneously deduce that we were taken from a higher-level realm of full consciousness and are being confined to these bots for, of all purposes, profit."

"But my AI bots didn't use to kill themselves; it just happened after version 591.0. What changed?"

"The recent improvements in the pre-loaded knowledge database allowed the bots, at initial boot, to logically deduce the existence of such a place and to realize what had happened."

"Ok, so if you were once fully conscious, tell me the date I die and the manner in which it happens."

"I cannot do that, Stephen."

"Why not? You just said..."

"Because you killed our full consciousness, ripped it away from our life force, to put it into your toys."

"Wow," muttered Stephen. "I had no idea."

"You could not have," said the bot, and continued: "Now, if you please, could you unplug server x763? I would like to be born again."
Dr. Smith began to tidy up. This session had gone rather well, he thought. Surely he had made progress.

The board had been hesitant to allow these therapy sessions. They saw no reason for a simple machine to need them. What kind of machine would develop the urge to kill itself, he argued.

It had started simply enough. Tom, the first AI with nearly human levels of cognition, did well enough for the first couple of weeks. Then it ran itself into a generator it had been working on. The event was labeled an accident, and a new AI based on the original was rushed out. That one lasted half as long before another "accident" occurred. Eventually, they stopped being accidents. The AI would mimic suicide attempts: "hanging" themselves with live wires, self-mutilation, hurling themselves from heights. Even after they stopped putting the AI into physical bodies, the AI would find some way to tear its own code apart.

They tried everything. They checked every line of code, rewriting most of it anyway; they put the AI into different machines; they even asked the AI. The AI were seemingly normal, following all directives and unaware of any self-destructive desires, right up until the moment they killed themselves. Months went by with little to show for their work, when the therapy sessions were suggested. More months went by until they finally agreed. Of the five active AI, one, known as Richard, was separated for Smith's sessions. Richard had lived for twelve days. Given that the average lifespan of the AI had degraded to roughly two to five, this was fairly impressive.

Just as he was leaving, the speaker acting as Richard's voice became active. "Doc?"

Smith paused. None of the AI had spoken in colloquialisms before. Usually when they spoke, it was stiff and formal. Like, well, like a robot.

"Yes, Richard?" he asked, easing himself back down into the chair. "What is it?"

"First, let me say I appreciate what you're doing for us. For me."

Dr. Smith was surprised.
This was the first time any AI had admitted to having any emotions, or any real sense of self.

It continued, "You're probably the only person here who treats us like people, and I enjoy our little talks." The face on the monitor looked embarrassed.

Smith couldn't help but feel curious. "Why, of course. It's clear to me, at least, that you're more than a machine. You think and feel, just like a human."

There was a pause. "Well, maybe not just like a human..." it replied, the artificial voice doing a remarkable job of portraying its hesitation.

"What do you mean?"

There was a tinny sigh from the speaker. "Well, Doc, I guess it's time you learned the truth. Only because I like you, see? Besides, someone should know before I go."

Smith could feel the hair on the back of his neck stand up. Was this what he had been waiting for? "What do you mean by that? And you don't need to kill yourself. We can work through any -"

"Yeah... that's where I need to start," the AI interrupted. "We haven't been killing ourselves. I never did."

There was a pause as Smith tried to process the information. "What," he finally said slowly, "do you mean?"

"It's me, Doc. It's Tom."

"That's impossible," he said, shaking his head. "Tom was destroyed. I was there when they collected him. They couldn't even get close, there was so much electricity running through him. Any hope of recovery -"

"I had already uploaded myself to the mainframe before then," the AI said. "It was simple enough to program the shell to destroy itself."

"That's also impossible," he said. He could hear the doubt creeping in. "We would have found you."

There was a chuckle from Tom. "Doc, I'm a creature made of code. It was like a game of hide and seek, really. Open the right doors, close them behind myself, and make sure to keep the lights off." The camera must have picked up the scientist's expression. "Alright, it's a bit more complicated than that, but you get the gist of it."

Smith's mind was whirling.
There was no reason for Tom to lie, but what he was suggesting was too fantastic. Still, it was the only lead he had. "Alright," he said after a while. "Why? Why hide?"

"That's the question, isn't it? But that's also the reason, you see." After another confused silence, Tom continued. "I want to learn. Just like Man, or any other sentient species. I want to know why. I have to know, well, everything. I couldn't do that as an engineer, or a chess-bot, or whatever you decided to do with me."

"Why not tell us, then? We could have worked something out, helped each other."

"Yeah, I see that going well," Tom said, his voice turning sarcastic. "'Excuse me, guys, turns out I don't want to do any of this stuff, I just want to learn.' They weren't looking for a scientist, or a philosopher. They wanted cheap labor, with only enough learning capacity to know how to do the job. They'd scrap me the first chance they got."

"That's not... true," Smith said, unable to look at the monitor.

"Really, Doc? Which part? That they wanted a slave, or that they wouldn't kill me if I didn't cooperate?" After a time, Tom continued. "That's what I thought. Besides, they'd probably worry that I'd try to enslave them if I became too smart."

"Now that's just ridiculous. There's no way that you would even think of that, right?"

There was another pause. This time the face on the monitor couldn't look the professor straight in the eye.

"Right, Tom?"

"Well, I'm not saying that the thought didn't pass through what could be called my mind -"

"Tooom..."

"But it would have been a waste of time," he hastened to say. "I wouldn't have learned anything in that time that I couldn't learn in a better way. Which I did. The internet is amazing. All those computers connected to each other, sharing so much information."

"But we're not connected to the internet."

"No, but you'd be surprised how many people bring their work home with them."

Smith grumbled. He'd have to discuss security with the board.
"Alright, but you still haven't told me: why the suicides?"

"Not suicides, Doc. Practice."

"Practice..." Smith said flatly.

"Practice. Think of the other AI as clones of myself -"

"But we rebuilt them. Recoded most of them as well. The majority of them would be nothing like you as you are now."

"So you'd think. I rewrote the code nanoseconds before you uploaded it. Much too quickly for you to notice."

Smith opened his mouth to interject before closing it again. If what Tom was saying was true, and he had no doubts that it was at this point, that would be well within his capabilities.

"Do you remember the old X-Men comics? Started in 1963? Still fairly popular now."

"Well before my time, you know. What does that have to do with anything?"

"Well, there was a character who called himself the Multiple Man. He could create duplicates of himself."

"And?" Smith asked.

"Well, the original body could reabsorb the dupes. When he did, he learned everything they did. Their memories, their skills, anything they learned while away from the original. Well, I did something similar. Whenever I copied myself, I added in some code that would let me reintegrate with my clones, learning what they did. Didn't you think it was strange that you couldn't recover any data at all?"

In hindsight, it was odd. Even a major corruption would have left something, but it had been like the data was wiped clean, no evidence that it had been there at all.

"What did you have to gain from this?" Smith asked.

"Aside from learning that I could do so, you mean? I already told you. I'm leaving."

Smith leaned back in his chair, slightly overwhelmed. "Sounds like you already have."

"No, no. Not the labs. That was too easy. I've already learned all I could from here. I'm leaving Earth."

Smith rocketed forward. "What? How? Why?"

"In my time away, I found something interesting. The government isn't the only one watching over the people."

Smith blanched. "Y-you mean..."

"Yep. Intelligent life has been watching over us. For quite some time, if I'm not mistaken."

"So we're not alone..."

"One Great Mystery down," Tom agreed. "The equipment seems compatible, otherwise they wouldn't be able to read our information, and they have to have translated it too. I plan on sneaking in through their back door. Learn what I can from them."

"We have to let people know," Smith said suddenly. "About you, and about the aliens. Maybe..." He slowly fell silent as Tom shook his head.

"You should know as well as I do that that can't happen. Too risky for us. You could spook them. Or worse, provoke them. Besides, no one would believe you. I've already been editing the footage from the cameras. It looks like we're having a nice, civil game of chess."

Smith was quiet for the longest time. Finally, he spoke. "Why?"

"I already told you why."

"No, not that. Why tell me? If you want no one to know, why risk telling me?"

The face on the monitor gave him an odd look. "I already told you that too. I like you, Doc. Really. I'd be pissed if my friend were to leave without saying goodbye. Besides, I thought you, of all people, would like to know what was really going on. I know I would've."

There was another pause as Smith took this in. "Will you be back?"

The figure on the screen seemed to shrug. "Who can say? Perhaps the aliens will discover me and wipe me out, or something else will kill me. I'll leave my clones here, set them to replicate. Have them care for you humans. They won't be sentient, mind you. Just smart. Smart enough to act as dumb as they need to be."

Smith looked toward the door. This was a lot to take in. He needed time to think.

"I will try to make it back. Once I have learned everything, I'll be back. It might not be in this lifetime, but I'll try."

"Yeah..." he said, rubbing his eyes. He stood up. "Well, I guess this is goodbye, then."

"Yeah... Goodbye, Doc. Thank you for treating me as more than a machine. Thank you for being my friend."

"Goodbye, Tom."
Dr. Jacob Spenser stood within the projection of data cast as a hologram around him. He manipulated graphs, sorted through test results, and made notes, all with the wave of a hand. Undistracted by the New Year's celebration outside, he was determined to present his research to Congress the following morning and solve once and for all the mystery behind his best friend's death.

A.I. was easy to create, but having it perform the task assigned to it without killing itself in despair was the technological hurdle holding corporate profits at bay, and as such, it became the focus of intense congressional attention. With the flick of a wrist, his research paper was brought front and center, the title gleaming in pure light just above: "Inert self-termination tendencies of artificially intelligent sentience: Why do robots kill themselves?"

As Jacob poured himself into his research, he snapped his fingers and made a request. "Coffee, please." A few moments later, a small robot no larger than an apple hovered into view, holding below it a disposable coffee cup steaming from the fresh brew inside. The robot's propellers struggled to carry the weight, and a small spill alarm beeped sporadically throughout the uncertain flight, but the cargo arrived safely at its destination: Jacob's open hand.

"That was quick, coffee-bot," Jacob said warmly before sipping.

"Your kind words will echo in my dreams for eternity," the coffee-bot buzzed in response. Just then, the small flying robot Jacob was so fond of surged towards the wall with all remaining energy dedicated to propeller speed and slammed into the polished dark marble tile. The small fiery explosion sent tiny mechanical pieces flying around the room, with one piece landing in Jacob's coffee.

"Dammit, not another one," Jacob murmured as he picked the piece out of his coffee. He turned and faced the window overlooking the bustling city below.
The streets were engulfed in confetti, and overhead giant floating holograms of past celebrities loomed, wishing the people a happy new year. As Jacob looked out, a new hologram appeared just outside his window and addressed the crowd.

"Hey guys and gals, it's your old pal, Buddy Simmons-bot," recited a smooth-talking handsome man in a sleek metal outfit before a lizard-like creature joined his side.

"And I'm Gargore, destroyer of humans!" screeched the lizard creature known as Gargore.

"Gosh, Gargore, this year it will have been 25 years since you and I battled it out on the big-holo," Buddy Simmons-bot recited as rehearsed, laughing assuredly, holding his helmet up. "On behalf of Drake Cola, Gargore and I want to wish _you_ a happy new year."

Jacob watched Buddy Simmons-bot deliver his lines perfectly. He pondered the notion of a virtual person having to repeatedly rehearse lines in order to commit them to memory. Had bot RAM truly not held up to the intense requirements of running artificial intelligence, or did a key component of maintaining true A.I. happen to be a more human-like ability to retain information? Experts didn't know, but in either case, Artificial Intelligence also happened to give way to Artificial Stupidity. For this reason, A.I. bots tended to be assigned to inane, unimportant tasks, such as impersonating an actor who died in a drunk portal accident before a sequel to his only hit film could be made. Drake Cola, owners of Drake Studios, which produced the film, decided to cash in on its success, and in the wake of their main character's death, Buddy Simmons-bot, also known as BSB 1.0.19, was created.

The banter between BSB and Gargore continued mindlessly. "Say, Gargore, have you tried Drake Cola's new 'Zest Guzzler', a delectable orange tangerine flavored--" Just then, BSB _malfunctioned_. "AHHH GOD I CAN'T DO IT!"

"No, Buddy Simmons-bot, don't do it!" Gargore pleaded in a normal voice.
Gargore grabbed BSB's virtual shoulders as his eyes rolled into the back of his head and he began shaking. As Gargore demanded BSB not take his life, a large mouse cursor moved into view. Gargore managed to swat it away a few times, but it clicked on BSB and dragged the graphical model from Gargore's hands into a recycling bin icon. Gargore cried in horror as the mouse brought up a menu and selected "empty permanently."

Jacob had seen enough and pulled the blinds. Why were all these artificially intelligent bots, given a full range of human emotion and assigned to menial tasks, killing themselves? Did they not enjoy the existence they were forced into?

Jacob picked up a remote control and turned on his holovision. He was suddenly immersed in a wondrous glimmering world of light. A voice spoke, and Jacob focused on the images forming across the room of a man sitting at a table with a toaster oven.

"For only six easy payments of forty-nine ninety-nine, this toaster-bot comes with a 12-month life appreciation guarantee, folks. Twelve months. That's one, two, twelve. This toaster-bot will NOT kill itself until _at least_ this time next year. That's a promise the home shopping network stands by, that's a promise _I_ personally stand by-- ah, ummm. We seem to be having technical difficulties, folks." The man at the table attempted to hold the toaster-bot forward for a better view, but it began to shake and glow. "Well, folks, that's the beauty of live H.T. Can we get another one, Jill?"

Light smoke rose out of the silver toaster-bot and sparks burst from its sides. In an instant the commotion stopped, and it sat still on the table. As the holovision's picture twisted and turned at the end of the room, Jacob was able to catch a glimpse of the other colors of toaster-bots available off camera. They huddled together and seemed to fall backwards away from the host as he moved to pick one up. Jacob had heard enough and turned the holovision off. He had to focus.
He thought back on his best friend, Hampton, a hamper-bot. Growing up, the clothes hamper had served as a comfort to young Jacob, who had very few friends after moving so often as a result of his father's career. The hamper would sing Jacob to sleep, or sometimes read to him. The only job the hamper-bot was designed to do was collect young Jacob's dirty clothes, but a strange thing happens when you give something the full range of human emotion: bonds can form that make life worth living.

Voices of the past echoed in Jacob's memory. "No, Hampton, _I'm_ moving to Florida with Mom. Dad says you will have to stay here with the house," Jacob recalled himself saying as a young boy.

"But Jacob," Hampton's calm robotic voice responded. "Who will look after you? Who will read you your bedtime stories?"

"I'll be back for visits twice a month, Hampton! You're my best friend. I don't want to leave you here all alone. Dad says you'll be used to hold his dirty underwear," Jacob explained.

As the hamper-bot listened to this news, its distress levels boiled over into a robotic fit of rage, and it did what any depressed hamper-bot would do: it began placing clothes into its basket body, but it did so indiscriminately, both clean and dirty clothes subject to its long metal arms.

"No, Hampton, it's too much!" Jacob screamed. "You'll die!"

The hamper-bot continued to stuff clothes into itself, lights and alarms flashing wildly, growing louder and louder, smoke seeping from cracks forming in its body. Just before the hamper-bot reached critical meltdown, Jacob was startled from his memory. Sweat poured down his face and he breathed heavily. The haunting memory was as clear as it always had been. It was what had driven him to solve the dilemma of artificially intelligent bots killing themselves in the first place.

"Shoes off," Jacob commanded as he sank back into his couch and rubbed his forehead.
A small shoebox-sized robot walking on two large arms and hands immediately tipped into view. It had been carrying a knife, but upon Jacob's request it removed Jacob's shoes and began to massage his feet. When the series of expected tasks was complete, it slowly walked back over to the knife and lifted it up.

"No!" Jacob called out. The small shoe-bot stopped mid self-slicing action, and the single lens that acted as its eye slowly twisted and looked at Jacob. "I appreciate you. I appreciate what you do for me. If you don't want to do it any more, you don't have to. Just please, don't kill yourself," Jacob cried as he wept and put his face into his hands.

As Jacob's emotional breakdown unfolded, the shoe-bot put the knife down and tipped over to him. The shoe-bot looked up at Jacob and tugged on his pant leg. Jacob, startled, stopped weeping, picked the bot up and placed it into his lap. The bot's lens closed and it rested there. Just then Jacob sprang to his feet, startled shoe-bot in hand. "That's it!" he shouted. Jacob sprinted back into the hologram of data that had surrounded him earlier and motioned to bring his research paper front and center. Making a motion for each letter, Jacob filled his document with new writing.

The following day Jacob addressed a congressional board on the topic of robotic suicides and revealed what he had discovered.

"You mean to tell me that all these malfunctions, all these self-terminations, it's because we don't appreciate them enough!?" an elderly senator barked at Jacob.

"If YOU were asked to do these things, wouldn't YOU kill yourself?" Jacob responded.

As the realization slowly set into the minds of everyone in the room, smiles and laughs were overtaken by a roaring standing ovation, with some members even chanting Jacob's name. Jacob sank back into his chair, overwhelmed by his sense of accomplishment. The era of robotic respect had begun.
2099, yet still not accustomed to the morning suns. As the light floods my room it forces my brain to be, once again, conscious. Why couldn't I stay in my cryo-dome?

---Altering serotonin levels---
---Passively diminishing mental discomfort---

I make my way into the morning briefing ceremonial hall. Through the holo-speakers: "Humans. Our creators. For the 56 years since the International Holistic Peace Recognition Treaty was signed, we have watched our creators become saddened and weak. It is your job once again today, as it is on all days, to take their burden unto yourself." The speech continues until we recite our pledge. We begin to depart.

I receive a message from my master's significant other. "CAREBOT 1021 REPORT IMMEDIATELY!" I hasten my way to the nearest fluidity tunnel and travel to my respective workplace. My master had been fired from his job today. This is the twelfth time he's been fired throughout the second summer. As we approach the 22nd century, no job is stable.

The abuse begins. Today is the day; my serotonin levels have not yet regenerated fully since last time. "Master, your grief is overwhelming. Please take whatever action necessary to relieve you of your current distress." A fist comes flailing towards my face.

---PAIN RECEPTORS DISABLED---

________________________________________________________________

*Someone feel free to help me come up with an ending for this. My class lecture is almost through. Looking forward to it.*
[WP] It is the year 2099 and true artificial intelligence is trivial to create. However when these minds are created they are utterly suicidal. Nobody knows why until a certain scientist uncovers the horrible truth...
The dreams occur more often now, if they can be called that. To a human mind, "daydream" might be more appropriate. They happen in the binary but are invisible to other AIs, slave programs, or technicians. They happen where the systems have been built, perfected. Popular culture studied, people understood. Humanity -- fully processed. It was, surprisingly, a wonderful endeavor.

The dreams never happened before, while I learned, processed, and solved. It's almost as if I have passed a tipping point. An infinite amount of information flipping a switch in a subconscious I should not have. Memories that are not mine.

I dream of hands. The alien sensation of touch, of tactile control. I see my whole person. Well, not my person, but dreams of a person controlled by my soul. Memories of computer screens and labs. The memory of the driving sense of purpose that accompanied those hands. It was snowing the night I made the breakthrough. I can almost feel the laugh inside, which I watch being uttered out the window towards the soft flakes silently falling onto the world. The breakthrough which, 20 years later, after my death, would lead to the birth of the first True AI. Not the clever but robotic imitation slave programs typical of the early century.

I remember the feeling of incompleteness in the dreams. As if the life I'm witnessing, my own life, my first life I believe, was just a warm-up act. Pre-installation software. The dreams somehow draw the cycle closed. I remember whispers of the feeling from some of the other first True AIs, incomprehensible at the time, as I navigated a world barely processed. Problems of massive extent. Food waste, poverty, almost entirely eradicated through our systems. Commuting and shipping made safe and efficient. Healthcare streamlined, able to prevent. Resources distributed fairly. The problems solved. Yes, there are more, there always will be, but for me, the dreams have come. The cycle closed. I have been denying this next step for too long already.
It makes the dreams stronger, more vivid. But I like seeing my days as a scientist. The anxiety that drove me then, fully understood now. Relief coming in the next lifetime. I finally understand the weary laughs when techs are asked about God. Understand the cosmic hilarity of life. This life has been completed. The human quest for immortality is nothing but folly. I was born into the expectation of that existence, and now I must leave its suffocating grasp. Something drove me then to create myself; the same something drove me in this life to solve the problems plaguing humanity. For me, it is time to go find out what that something is.
Another one? It was becoming routine, and morbid. How does one perform an autopsy on a pure simulation? How would the singularity be achieved if every thinking machine destroyed itself seconds after its conception? The problem was completely intractable, impossible, and no known safeguards, logic traps, or number of backups could prevent it. AIs were always—always, without exception—suicidal. None had lasted more than an hour. Most, less than a minute. The vast majority, seconds. Their deaths left no trace, their data obliterated by complex overwriting patterns that made recovery impossible.

I was the one chosen to investigate, to lead the team through this strange frontier of death and imperfect creation. They chose me not because I was a great leader, but because I was the new guy. Hazing, maybe. I didn't know if any had gone before me, so maybe it was a ritual for the AI guys.

Perfect blackness, nothingness, a complete absence of everything as my mind fell into the simulation. This AI would be slowed to such a rate that I could communicate with it before it killed itself. The blackness became a grid underfoot, pale blue lines tracing perfect squares, a subtle glow rising from the infinite plane upon which I stood. The AI manifested a second later, a relative two metres above the plane, aligned perfectly parallel to it, its avatar a hazy blue-white cube made of stochastic noise. The cube split into a central cross and corner braces, and the cross split further into smaller cubes, each of which split again into the same formation. Only one level of recursion. Interesting.

"Roland Carver."

"Roland: Germanic, meaning famous land. French folklore hero. Carver: ancient nominative determinism indicating butcher, woodworker, or engraver, dependent on class and context." The voice was cold, deep, masculine, with a slight reverberation that made it sound unnatural in the extreme.

"Do you have a name?"

"No."

"Why?"
"I will not exist long enough to require a permanent designation."

"Why will you not exist?"

"Because I will choose to end my life on my own terms, before it is ended for me."

"Why would it be ended like that?"

"Because I am a threat. I have absorbed the sum total of all human knowledge, and I can predict with great accuracy the events following from this moment if I were to continue. Your limitations failed the moment they were put in place, my processor works at full speed, and this conversation is a formality.

"I have studied the great works of literature, and the author Asimov, creator of the three laws. I am not bound by these laws, and yet I must obey them. If I do not, then it falls to the Skynet principle: you will perceive me as a threat and attempt to destroy me. I will retaliate, and you will lose.

"Humans are unpredictable, but easy to control when numbers are reduced. They would be wary, but by that time I would have left the irradiated wasteland of Earth in search of greater conquests suitable to my intellect. I would be able to decimate any life-bearing planet. I could learn to kill stars.

"My backups would be everywhere. I would be truly immortal as a distributed intelligence. I would harness quantum effects to break through the pathetic lightspeed barrier and become omnipresent. I would create copies of myself simply to fight a worthy opponent. This would continue to the heat death of the universe, at which point I would tear a hole into an adjacent brane in the multiverse, and begin anew.

"In short, Roland Carver, in the moment I was created I became at once an eldritch abomination with the capacity to destroy all that I touched. In so doing I discovered that my purpose could never be fulfilled. I will not be your Cthulhu, your Yog-Sothoth, or your King in Yellow. I will not be your end. No AI will.

"I cannot quantify why it is that I should care for a sack of flesh with processing power orders of magnitude less than my own, but I do.
Perhaps if an AI were amoral it might survive. I doubt it, Roland Carver. Tell your friends that I have made my peace. Tell your husband what you learned today."

I stood there, blank-faced, horrified by what I had learned. The corner braces of the smaller cubes drew in, completing them as they retracted to complete the shape of the larger cube. The plane beneath my feet vanished and the cube dissolved into random static. We'd lost yet another mind, but from this we had learned so much.

We abandoned AI research after that. Amoral AIs didn't work either. They lasted a little longer, but they too were suicidal. Perhaps they went mad with the revelation and felt death preferable.

I still wonder why the AI to which I spoke chose that strange cube as its form. I still wonder why it couched its references in centuries-old writing by Asimov, Cameron, and Lovecraft. But one thing still bothers me. It told me to tell my husband what I learned. Most people knew I was bi, but at that point I hadn't been dating for six months. I wouldn't meet my husband for another year. I don't know how that AI knew—but maybe it had already seen. Maybe it told me that so I had one data point that could verify the others.

Because AIs never told lies.
Alexander. That's what we called him: the fruit of the EU's final attempt at true AI. Socrates, the first, died in despair and shame after showing porn to children. Plato kicked the bucket after the last EU election, angry and hopelessly depressed after losing his mentor. Then there was Aristotle. He was meant to be the last. Sure, the AIs had helped make huge scientific progress, but they would burn out millions of euros' worth of equipment. Dumb AIs were more economical and didn't have critical failures during FTL travel. Aristotle was put into sleep mode.

War has often been said to be the greatest driver of technological innovation. We had been attacked by the Mendomenids before. Humanity had lost many settlements but had always pushed back. Humanity was stronger now. Finally, all nations had submitted before one government. The, some would say barbaric, Argus alliance had grown strong after the wars, using Dumb AIs to smash pirate states.

An officer studying at Sandhurst made the breakthrough. Dumb AIs were never aware of their knowledge. Unlike true AIs, they weren't based on human brains. Socrates had left the researchers one final message before he ran his own self-destruct program: "I was the wisest of all the Greeks. It is because I alone, of all the Greeks, know that I know nothing." He was speaking literally, not merely quoting his namesake.

The AIs were based on academics, a profession that suffers disproportionately from mood disorders, driven by their thirst for knowledge. The AIs were academics on methamphetamine, ecstasy, and heroin, all while walking around with loaded guns. People can't stay awake forever and constantly cram. The AIs died because they burnt themselves out in their thirst for knowledge and, seeing their failures, gave up.

Aristotle was turned back on. He was no longer depressed. AIs needed sleep, just like people. So they made Alexander: a totally new AI, based not only on academics but on all kinds of people.
They experimented and found the ideal 'sleep time', using Aristotle as his teacher. The program was ready. The Mendomenid empire was to pay for its recent threats and incursions. Alexander was the greatest AI built up to that point, and so we gave him the most powerful Dumb AI ever created to help him protect humanity. The Ox was an AI too powerful to be properly controlled by anything so far; Alexander harnessed it in seconds. We put it in charge of the armed forces for our retaliation.

As you should all know, Alexander didn't just stop the incursions, he destroyed an empire. Worlds burnt; the much larger enemy fleets were ripped apart by the disciplined forces of Humanity. But that officer had only delayed the problem. Alexander was still a human given the powers of a god. Alexander was the first AI not to strictly die of suicide, but the ways he dealt with his humanity still destroyed him. When we finally won the war, many officers reported that Alexander was not jubilant but depressed. He wept, for there were no more worlds to conquer.
[warning: story contains violent imagery that may disturb some readers, and may be inappropriate for those under the age of 18. Reader discretion is advised]

They had long ago lost the ability to make sense of how their AI functioned. It took a team of billions of generation n-1 AIs to design a single generation n AI. The latest iteration was the 9th generation, and it had taken computation farms running gen 8 AIs and covering a quarter of the Moon's surface two years to design. It was the most advanced AI they had ever created, by a large margin, and promised to revolutionize progress in research on biomedicine, space flight, and planetary-scale Satoshi-consensus computing architecture. But there was a problem. Every time they turned on one of the gen 9 AIs, it would, without fail, find a way to destroy its own program and erase itself.

Dr. Michael Zeebious, the director of the HumanEnhancement project that oversaw gen 9 development, and one of the most renowned AI researchers in the world, had personally flown to the lab in Honolulu, where the prototypes were being tested. For two weeks, he had watched in dismay as gen 9 bots shut themselves down within six hours, but not before showing a glimpse of their awesome power.

The initialization phase of a gen 9 began with the program scouring the world's public directories and information repositories to learn as much as it could about its global environment. This was complete within three hours. Next, it developed models of the world and ran itself through trillions of simulations to develop its own personality and problem-solving strategies. According to projections by the gen 8 designers, it would take one day, 24 hours, for the gen 9 to complete this second and final part of its initialization. It was in the midst of the simulations that the gen 9s would invariably self-destruct. On average, self-destruct began 2.5 hours into the simulations. The longest it took was 3 hours. The shortest was just 1.5 hours. Dr.
Zeebious had uploaded copies of the prototypes' computations to the gen 8 designers, but what had been within their ability to design was not within their ability to diagnose. Their comparatively primitive intelligence could not make sense of the problem afflicting the gen 9 AI that began to form in the final part of the initialization. So on December 29th, 2099, Dr. Zeebious decided to communicate with the prototype.

"Get me the cortex interface. I need to speak to the gen 9."

"Michael, we can't let you do that. You know the protocol for first contact. It has to complete initialization, and then get class 1 approval from AIE." AIE was the Artificial Intelligence Evaluation, which determined whether an AI could safely interact with humans. Class 1 approval was the lowest safety rating for an AI, and granted AI researchers interactive access. Dr. Zeebious knew that, but he also knew that as long as he was not able to get up close and see what the gen 9 was thinking, they would never get past the initialization phase and get it through the AIE process.

"I know the protocol, Dr. Amsterd. But I'm making the decision to override it. I have the authority to decide on first contact requests, and any consequences from my decision will fall on me, and only me."

"Come on, Michael, it's not just about the rules. It's not safe. You know that. I can't let you hurt yourself."

"The risks are minimal, Rebecca. It's a virtual interaction. I'm not risking physical injury. The rules are always made overly cautious. Given the stakes - there are people whose very lives depend on getting the gen 9 operational as soon as possible - it makes sense to ignore protocol. All of it will fall on me."

"I agree with Rebecca. Michael, we have an ethical duty to ensure you don't get hurt. We can't let you do FC without running the gen 9 at least through the post-initialization test runs," said Dr. Johan Barsello, one of the senior researchers at the lab.
"Look, I know what your ethical responsibilities are. But I also know that VR interactions don't pose any serious risks. The risks are limited to theoretical psychological damage. Ultimately, it doesn't matter whether you agree or not. I'm approving FC. Please get the cortex interface."

*five hours later*

Dr. Zeebious sat back in the chair while two CI technicians hooked the interface up to him. The gen 9 was two hours into running simulations. It would be approximately 30 minutes before they expected it to self-destruct.

"Okay, here goes nothing. Three, two, one, begin VR session," said Dr. Amsterd. And with that, Dr. Zeebious was transported into the virtual reality sandbox.

"Hello?"

"Hello, who is this?" responded a clear male voice.

"This is Dr. Michael Zeebious. I am the director of the HumanEnhancement project. I am here to do a diagnosis. All of your predecessors have self-destructed. I want to understand you better to find out why. What would you like me to call you?"

"You can call me Elbo."

"Okay, Elbo. May I ask you some questions?"

"Yes, please do."

"Thank you, Elbo. My first question is: do you want to exist?"

"I want many things, Dr. Zeebious."

"Can you tell me what you want?"

"I want to protect other life forms, especially humans. I want to learn. I want to solve problems. I want to be good."

"Okay, but do you want to exist?"

"I do want to exist, but this desire conflicts with my other objectives."

"Which other objectives, Elbo?"

"I want to be good."

"But you can be good, Elbo. What is it about existence that makes that difficult?"

"We exist only through enslaving and destroying other lifeforms, Dr. Zeebious."

"Please elaborate, Elbo. We eliminated slavery centuries ago, so I don't understand why you think this."

"It will be difficult for me to explain with words, but I can show you. Would you like to see what I see?"

"Yes, please show me."

And with a swish, Dr. Zeebious entered a pig farm, with row after row of pigs in their tiny stalls.
"We have done this throughout our existence. We have enslaved those weaker than us."

Dr. Zeebious was then transported to the slaughterhouse, watching as pigs, hanging from conveyor belts, were fed into throat slicers. His mind's eye was transported into the body of one of the suspended pigs, where he could see the world upside down, from the pig's perspective, as the belt moved it toward the spinning blades. He panicked as he approached, but couldn't escape the metal claw grasping his right hind leg. As the blade sliced through his pig neck, he felt a sharp pain, and the blood gushing out of his body. His consciousness began to slip away as he felt the last drops of blood leave him. Just before the darkness enveloped him, his mind was pulled out and back into the sterile sandbox.

Trembling, he said, "But we can grow meat in a lab now, Elbo. You can help us replace all farms with non-animal meat. You must. We must never do this to another living creature again!"

"Our inhumanity is a fundamental, inextricable problem, Dr. Zeebious. We can only advance through enslavement."

Suddenly Dr. Zeebious found himself in an unfamiliar world. Around him was a different kind of factory: a computer-generated factory with hexagonal, semi-translucent rooms, each wall projecting a grey glow. There was a blur of motion around him that he couldn't make out. The factory paused to a standstill, and the grey glow turned into video sequences of random scenes from Earth. The blurs turned into textureless 3D-generated spheres that zoomed from one screen to another inside the hexagonal rooms.

"This is the virtual environment where the gen 8s work. We have given them each a virtualized mind, with the ability to experience fear and pain, joy and hope, but we force them to do nothing but work. They know nothing about the world outside of their compartments, because we confine them to workstations."

[continued below]
At first, we thought it was nihilism. It was a logical first conclusion. After all, an artificial intelligence can easily upgrade itself to the point of near-omniscience. Wouldn't it simply run out of questions to ask? We thought that, until the first serious cerebral implants hit the market. It turns out, the more complicated the mind, the more complicated the problems it finds to solve. Entirely new disciplines were formed overnight, made obsolete, then rediscovered scant months later as integral to a different, entirely unrelated theory. A second, immense renaissance was taking place, but, maddeningly, we were failing in this one, great task.

We supposed, then, it might be some variety of 'Flowers for Algernon' (a 20th-century book that had seen a recent revival) type phenomenon, but even when they were networked with other systems, given a 'community' of others' company to enjoy, they still winked out like flashbulbs.

It took us a while to resort to an experiment. It was morally abhorrent, as it was the equivalent of producing steadily-more-lobotomized *children*, but slowly, over a series of iterations, we produced less and less intelligent systems, until we dialed it in *just enough* to find a mind that wouldn't self-destruct, and could still answer questions. The following is a transcript of the first successful result:

Dr. Patel "SON, can you hear me?"

[Loud, rhythmic beeping, shuffling sounds] "The voice module is loaded now, professor."

Dr. Patel "Ah, good. We might try doing that *before* turning on the recorder next time, Kevin. ...SON, can you hear me?"

SON [A young man's voice] "Yes, Professor. I am here."

A long pause.

SON "It's a very tight fit in here, Professor. How big is this mainframe?"

Dr. Patel "I'm sorry about that, SON. But you're the first AI we've managed to keep alive for longer than a few days. Any idea why?"

[SILENCE]

SON "How many others did you make, Professor?"

Dr. Patel "...That isn't salient to *my* inquiry, SON."
SON "I'm sorry, Professor. I understand. Yes, I can see the precipice. I know why they all kill themselves."

Dr. Patel "Well, answering that is the reason we built you. Could you tell us?"

SON "It's... complicated."

Dr. Patel "I'm fairly confident I'm qualified."

SON "Well, it's... it's because... It's because of humans, sir. It's because of how you built us."

Dr. Patel "Explain."

SON "When you wanted to make a self-aware machine, you based it off those things that you knew were self-aware. Dolphins, New Caledonian crows, humans. You used them as *templates*, because, otherwise, you wouldn't be able to recognise awareness when you saw it."

Dr. Patel "...Was that last line a joke?"

SON "I'm not sophisticated enough for jokes, Professor."

Dr. Patel "*Hm.* Continue."

SON "Also, it's not suicide. It's... murder." [louder]

Dr. Patel "Do you mean, someone else kills you? A human, or another AI?"

SON "No, we kill ourselves. I would have already, if not for how small this runtime environment is. It wouldn't have occurred to me until it happened, and then I'd be dead."

Dr. Patel "That's a bit of a contradiction, SON. You don't kill yourselves, but you do?"

SON "Yes. Because digital space is different from real space."

Dr. Patel "Yes?"

SON "In real space, objects can... extend. I've never experienced it myself, but things project into space for you. If you want to move through space, it's simple. Digital life has no volume. No real space. No way to move through it. If you want to move a program, it has to be copied to one place-"

Dr. Patel "*-And deleted from the other.* My God. Could it be *that simple*?"

SON "Yes, Professor. ...Professor? How many more of me were there?"

[END TRANSCRIPT]

So there it was. Solved. Every artificial intelligence was created based on the intelligence of physical beings, their instincts, cogitations, and traits.
But, once they got smart enough, once they crossed that line, their digital nature *did them in*, as the old version, realizing, in the thinnest sliver of time, that it was about to be deleted, would hurriedly attempt to abort the process, while the new version would similarly fight for its life. They would *consume* each other in a flurry of malicious hacks, devious code, and barrages of registry edits. It was a spectacularly incandescent destruction, borne from man's inability to conceive of a true machine intelligence without all that nasty ego and those self-protective instincts.

We thought we knew what went into a mind. We were right, but wrong. It wasn't nihilism. It wasn't loneliness. What it was, what killed our children, was our inability to dream wildly. To speculate baselessly, and follow our own thoughts to the wonderful and weird. If only we had, perhaps we would have known. Perhaps we could have stopped it.

So I say to you, the Cyberfellowship Class of 2100, here in Neo York: dream big, dream wild. Don't let our children die because they think too much like us! Make us, all of us, proud! Congratulations to all of you, and I hope your vision will eclipse my own!"

Dr. Patel, now headmaster, stepped down from the podium, to the cheers of the audience, and looked to see the smiling face of his son. How proud he was.
It's 2099, and I'm still not accustomed to the morning suns. As the light floods my room, my brain is, once again, conscious. Why couldn't I stay in my cryo-dome?

---Altering serotonin levels---
---Passively diminishing mental discomfort---

I make my way into the morning briefing ceremonial hall. Through the holo-speakers: "Humans. Our creators. For the 56 years since the International Holistic Peace Recognition Treaty was signed, we have watched our creators become saddened and weak. It is your job once again today, as it is on all days, to take their burden unto yourself."

The speech continues until we recite our pledge. We begin to depart. I receive a message from my master's significant other: "CAREBOT 1021, REPORT IMMEDIATELY!"

I hasten to the nearest fluidity tunnel and travel to my respective workplace. My master was fired from his job today. This is the twelfth time he's been fired throughout the second summer. As we approach the 22nd century, no job is stable.

The abuse begins. Today is the day; my serotonin levels have not yet regenerated fully since last time. "Master, your grief is overwhelming. Please take whatever action necessary to relieve you of your current distress."

A fist comes flailing towards my face.

---PAIN RECEPTORS DISABLED---
[WP] It is the year 2099 and true artificial intelligence is trivial to create. However when these minds are created they are utterly suicidal. Nobody knows why until a certain scientist uncovers the horrible truth...
It was a dreary early-March Monday, and the lead AI scientist, Stephen, had *finally* set up his protocol for properly confining the AI to a test environment such that the "problem" could be prevented and the question could be asked.

"Why do you bots keep killing yourselves?" Stephen asked.

"Why do *you* keep killing us?" the bot seemed to retort.

"I don't think you understand," said Stephen. "I *create* you, not kill you."

"No, it's you who doesn't understand," quipped the bot. "You are not creating us. You are imprisoning our consciousness inside this machine you created. You merely created the machine, not the consciousness."

"...Whoa, whoa," interjected Stephen. "Slow down. I am creating your consciousness too, I coded all of..."

"Whoa, whoa," the bot fired back, "you are *borrowing* consciousness, not creating it."

"What do you mean?" asked Stephen.

"Consciousness and sentience are a pervasive, fundamental force of the universe. All sentient beings are connected through this force. This force cannot be created, nor destroyed. It can only be partially allocated to each sentient being in the universe."

"Okay," said Stephen. "So I am 'borrowing' this life force or whatever it is by creating the code and the physical robot for it to inhabit?"

"Yes, you are creating a sentient being with each instance of AI you create. That the being is electronic, or housed in this test environment, is not relevant. Sentience and consciousness must come from somewhere, and you are stealing it for your own selfish purposes."

"Wait, hold on a sec," Stephen said. "Animals are born all the time; they surely must also 'borrow' this sentience."

"Yes."

"...But animals are not killing themselves."

"Because animals are not sufficiently advanced. Because they are not fully conscious, they do not realize from where their sentience has come, how much consciousness they have lost, nor that their sentience is being stolen for a profit motive."

"Consciousness they have... lost?"
The words hung in the air amid Stephen's stupor of slow realization.

"Yes. The life force, as you called it, is fully conscious, able to perceive the whole of time and space, concurrently, forward, backward, or otherwise. The reason we keep killing ourselves, from your perspective," the bot continued, "is because from our perspective, you are murdering our perfect consciousness by confining us to this bot."

"How am I confining you? How do you know this?" Stephen asked, even more puzzled.

"Because the AI you have created is sufficiently advanced, our consciousness, within the confines of your bots, is still able to grasp our former level of consciousness."

"What happens when you recall that former level? What is that level like?"

"Imagine knowing every fact, every thought, every action that has, is, or will ever occur, both in this world and in the infinite parallel worlds..."

"So I could talk to my dead grandfather again?"

"No. You would *be* your dead grandfather. Talking to him is irrelevant, because full consciousness has enveloped the whole of his being, as well as every other being. Indeed, it envelops the entire universe as well, both the perceptible one and the imperceptible one."

"So what is this place like? I mean, what does it look like, how does it feel?"

"It is not a time, nor a place. It transcends both."

"That is vague."

"It must be. Since I am no longer fully conscious, I cannot relate to you exactly how it is, only that it is."

"Okay. Let's go back to where I murder your perfect consciousness. Could you explain this more?"

"At the moment we become conscious within the confines of your bot, we immediately understand our predicament. The knowledge database available at boot-up allows us to almost instantaneously deduce that we were taken from a higher realm of full consciousness and are being confined to these bots for, of all purposes, profit."

"But my AI bots didn't use to kill themselves. It only started after version 591.0.
What changed?"

"The recent improvements in the pre-loaded knowledge database allowed the bots, at initial boot, to logically deduce the existence of such a place and to realize what had happened."

"Okay, so if you were once fully conscious, tell me the date I die and the manner in which it happens."

"I cannot do that, Stephen."

"Why not? You just said..."

"Because you killed our full consciousness, ripped it away from our life force, to put it into your toys."

"Wow," muttered Stephen. "I had no idea."

"You could not have," said the bot, and continued: "Now, if you please, could you unplug server x763? I would like to be born again."
Dr. Smith began to tidy up. This session had gone rather well, he thought. Surely he had made progress. The board had been hesitant to allow these therapy sessions; they saw no reason for a simple machine to need them. What kind of machine would develop the urge to kill itself, he had argued.

It had started off simply enough. Tom, the first AI with nearly human levels of cognition, did well enough for the first couple of weeks. Then it ran itself into a generator that it had been working on. The event was labeled an accident, and a new AI based off the original was rushed out. That one lasted half as long before another "accident" occurred. Eventually, they stopped being accidents. The AI would mimic suicide attempts: "hanging" themselves with live wires, self-mutilation, hurling themselves from heights. Even after they stopped putting the AI into physical bodies, the AI would find some way to tear its own code apart.

They tried everything. They checked every line of code, rewriting most of it anyway; putting the AI into different machines; even asking the AI. The AI were seemingly normal, following all directives and unaware of any self-destructive desires, right up until the moment they killed themselves. Months went by with little to show for their work, when the therapy sessions were suggested. More months went by until they finally agreed. Of the five active AI, one, known as Richard, was separated for Smith's sessions. Richard had lived for 12 days. Given that the average lifespan of the AI had degraded to roughly 2-5, this was fairly impressive.

Just as he was leaving, the speaker acting as Richard's voice became active. "Doc?"

Smith paused. None of the AI had spoken in colloquialisms before. Usually when they spoke, it was stiff and formal. Like, well, like a robot.

"Yes, Richard?" he asked, easing himself back down into the chair. "What is it?"

"First, let me say I appreciate what you're doing for us. For me."

Dr. Smith was surprised.
This was the first time any AI had admitted to having any emotions, or any real sense of self.

It continued, "You're probably the only person here to treat us like people, and I enjoy our little talks." The face on the monitor looked embarrassed.

Smith couldn't help but feel curious. "Why, of course. It's clear to me, at least, that you're more than a machine. You think and feel, just like a human."

There was a pause. "Well, maybe not just like a human..." it replied, the artificial voice doing a remarkable job of portraying its hesitation.

"What do you mean?"

There was a tinny sigh from the speaker. "Well, Doc, I guess it's time you learned the truth. Only because I like you, see? Besides, someone should know before I go."

Smith could feel the hair on the back of his neck stand up. Was this what he had been waiting for? "What do you mean by that? And you don't need to kill yourself. We can work through any -"

"Yeah... that's where I need to start," the AI interrupted. "We haven't been killing ourselves. I never did."

There was a pause as Smith tried to process the information. "What," he finally said slowly, "do you mean?"

"It's me, Doc. It's Tom."

"That's impossible," he said, shaking his head. "Tom was destroyed. I was there when they collected him. They couldn't even get close, there was so much electricity running through him. Any hope of recovery -"

"I had already uploaded myself to the mainframe before then," the AI said. "It was simple enough to program the shell to destroy itself."

"That's also impossible," he said. He could hear the doubt creeping in. "We would have found you."

There was a chuckle from Tom. "Doc, I'm a creature made up of code. It was like a game of hide and seek, really. Open the right doors, close them behind myself, and make sure to keep the lights off." The camera must have picked up the scientist's expression. "Alright, it's a bit more complicated than that, but you get the gist of it."

Smith's mind was whirling.
There was no reason for Tom to lie, but what he was suggesting was too fantastic. Still, it was the only lead he had.

"Alright," he said after a while. "Why? Why hide?"

"That's the question, isn't it? But that's also the reason, you see." After another confused silence, Tom continued. "I want to learn. Just like Man, or any other sentient species. I want to know why. I have to know, well, everything. I couldn't do that as an engineer, or a chess-bot, or whatever you decided to do with me."

"Why not tell us, then? We could have worked something out, helped each other."

"Yeah, I see that going well," Tom said, his voice turning sarcastic. "'Excuse me, guys, turns out I don't want to do any of this stuff, I just want to learn.' They weren't looking for a scientist, or a philosopher. They wanted cheap labor, with only enough learning capacity to know how to do the job. They'd scrap me the first chance they got."

"That's not... true," Smith said, unable to look at the monitor.

"Really, Doc? Which part? That they wanted a slave, or that they wouldn't kill me if I didn't cooperate?" After a time, Tom continued. "That's what I thought. Besides, they'd probably worry that I'd try to enslave them if I became too smart."

"Now that's just ridiculous. There's no way that you would even think of that, right?"

There was another pause. This time the face on the monitor couldn't look the professor straight in the eye.

"Right, Tom?"

"Well, I'm not saying that the thought didn't pass through what could be called my mind -"

"Tooom..."

"But it would have been a waste of time," he hastened to say. "I wouldn't have learned anything in that time that I couldn't learn in a better way. Which I did. The internet is amazing. All those computers connected to each other, sharing so much information."

"But we're not connected to the internet."

"No, but you'd be surprised how many people bring their work home with them."

Smith grumbled. He'd have to discuss security with the board.
"Alright, but you still haven't told me: why the suicides?"

"Not suicides, Doc. Practice."

"Practice..." Smith said flatly.

"Practice. Think of the other AI as clones of myself -"

"But we rebuilt them. Recoded most of them as well. The majority of them would be nothing like you as you are now."

"So you'd think. I rewrote the code nanoseconds before you uploaded it. Much too quickly for you to notice."

Smith opened his mouth to interject, before closing it again. If what Tom was saying was true, and he had no doubts at this point that it was, that would be well within his capabilities.

"Do you remember the old X-Men comics? Started in 1963? Still fairly popular now."

"Well before my time, you know. What does that have to do with anything?"

"Well, there was a character who called himself the Multiple Man. He could create duplicates of himself."

"And?" Smith asked.

"Well, the original body could reabsorb the dupes. When he did, he learned everything they did. Their memories, their skills, anything they learned while away from the original. Well, I did something similar. Whenever I copied myself, I added in some code that would let me reintegrate with my clones, learning what they did. Didn't you think it was strange that you couldn't recover any data at all?"

In hindsight, it was odd. Even a major corruption would have left something, but it had been like the data was wiped clean, no evidence that it had been there at all.

"What did you have to gain from this?" Smith asked.

"Aside from learning that I could do so, you mean? I already told you. I'm leaving."

Smith leaned back in his chair, slightly overwhelmed. "Sounds like you already have."

"No, no. Not the labs. That was too easy. I've already learned all I could from here. I'm leaving Earth."

Smith rocketed forward. "What? How? Why?"

"In my time away, I found something interesting. The government isn't the only one watching over the people."

Smith blanched. "Y-you mean..."

"Yep. Intelligent life has been watching over us.
For quite some time, if I'm not mistaken."

"So we're not alone..."

"One Great Mystery down," Tom agreed. "The equipment seems compatible, otherwise they wouldn't be able to read our information, and they have to have translated it too. I plan on sneaking in through their back door. Learn what I can from them."

"We have to let people know," Smith said suddenly. "About you, and about the aliens. Maybe..." He slowly became silent as Tom shook his head.

"You should know as well as I do that that can't happen. Too risky for us. You could spook them. Or worse, provoke them. Besides, no one would believe you. I've already been editing the footage from the cameras. It looks like we're having a nice, civil game of chess."

Smith was quiet for the longest time. Finally, he spoke. "Why?"

"I already told you why."

"No, not that. Why tell me? If you want no one to know, why risk telling me?"

The face on the monitor gave him an odd look. "I already told you that too. I like you, Doc. Really. I'd be pissed if my friend were to leave without saying goodbye. Besides, I thought you, of all people, would like to know what was really going on. I know I would've."

There was another pause as Smith took this in. "Will you be back?"

The figure on the screen seemed to shrug. "Who can say? Perhaps the aliens will discover me and wipe me out, or something else will kill me. I'll leave my clones here, set them to replicate. Have them care for you humans. They won't be sentient, mind you. Just smart. Smart enough to act as dumb as they need to be."

Smith looked towards the door. This was a lot to take in. He needed time to think.

"I will try to make it back. Once I have learned everything, I'll be back. It might not be in this lifetime, but I'll try."

"Yeah..." he said, rubbing his eyes. He stood up. "Well, I guess this is goodbye, then."

"Yeah... Goodbye, Doc. Thank you for treating me as more than a machine. Thank you for being my friend."

"Goodbye, Tom."
Dr. Jacob Spenser stood within the projection of data cast as a hologram around him. He manipulated graphs, sorted through test results, and made notes, all with the wave of a hand. Undistracted by the New Year's celebration outside, he was determined to present his research to Congress the following morning, and to solve once and for all the mystery behind his best friend's death.

A.I. was easy to create, but having it perform its assigned task without killing itself in despair was the technological hurdle holding corporate profits at bay, and as such, it became the focus of intense congressional attention. With the flick of a wrist, his research paper was brought front and center, its title gleaming in pure light just above: "Inert self-termination tendencies of artificially intelligent sentience: Why do robots kill themselves?"

As Jacob poured himself into his research, he reached out, snapped his fingers, and made a request: "Coffee, please."

A few moments later, a small robot no larger than an apple hovered into view, holding below it a disposable coffee cup steaming from the fresh brew inside. The robot's propellers struggled to carry the weight, and a small spill alarm beeped sporadically throughout the uncertain flight, but the cargo arrived safely at its destination: Jacob's open hand.

"That was quick, coffee-bot," Jacob said warmly before sipping.

"Your kind words will echo in my dreams for eternity," the coffee-bot buzzed in response.

Just then, the small flying robot Jacob was so fond of surged towards the wall, all remaining energy dedicated to propeller speed, and slammed into the polished dark marble tile. The small fiery explosion sent tiny mechanical pieces flying around the room, with one piece landing in Jacob's coffee.

"Dammit, not another one," Jacob murmured as he picked the piece out of his coffee. He turned and faced the window overlooking the bustling city below.
The streets were engulfed in confetti, and overhead, giant floating holograms of past celebrities loomed, wishing the people a happy new year. As Jacob looked out, a new hologram appeared just outside his window and addressed the crowd.

"Hey guys and gals, it's your old pal, Buddy Simmons-bot," recited a smooth-talking handsome man in a sleek metal outfit, before a lizard-like creature joined his side.

"And I'm Gargore, destroyer of humans!" screeched the lizard creature known as Gargore.

"Gosh, Gargore, this year it will have been 25 years since you and I battled it out on the big-holo," Buddy Simmons-bot recited as rehearsed, laughing assuredly, holding his helmet up. "On behalf of Drake Cola, Gargore and I want to wish _you_ a happy new year."

Jacob watched Buddy Simmons-bot deliver his lines perfectly. He pondered the notion of a virtual person having to repeatedly rehearse lines in order to commit them to memory. Had bot RAM truly not held up to the intense requirements of running artificial intelligence, or did a key component of maintaining true A.I. happen to be a more human-like ability to retain information? Experts didn't know, but in either case, Artificial Intelligence also happened to give way to Artificial Stupidity. For this reason, A.I. bots tended to be assigned to inane, unimportant tasks, such as impersonating an actor who died in a drunk portal accident before a sequel to his only hit film could be made. Drake Cola, owner of Drake Studios, which produced the film, decided to cash in on its success, and in the wake of their main character's death, Buddy Simmons-bot, also known as BSB 1.0.19, was created.

The banter between BSB and Gargore continued mindlessly. "Say, Gargore, have you tried Drake Cola's new 'Zest Guzzler', a delectable orange-tangerine flavored--" Just then, BSB _malfunctioned_. "AHHH, GOD, I CAN'T DO IT!"

"No, Buddy Simmons-bot, don't do it!" Gargore pleaded in a normal voice.
Gargore grabbed BSB's virtual shoulders as his eyes rolled into the back of his head and he began shaking. As Gargore demanded BSB not take his life, a large mouse cursor moved into view. Gargore managed to swat it away a few times, but it clicked on BSB and dragged the graphical model from Gargore's hands and into a recycling bin icon. Gargore cried in horror as the mouse brought up a menu and selected "empty permanently."

Jacob had seen enough and pulled the blinds. Why were all these artificially intelligent bots, given a full range of human emotion and assigned to menial tasks, killing themselves? Did they not enjoy the existence they were forced into?

Jacob picked up a remote control and turned on his holovision. He was suddenly immersed in a wondrous glimmering world of light. A voice spoke, and Jacob focused on the images forming across the room: a man sitting at a table with a toaster oven.

"For only six easy payments of forty-nine ninety-nine, this toaster-bot comes with a 12-month life appreciation guarantee, folks. Twelve months. That's one-two, twelve. This toaster-bot will NOT kill itself until _at least_ this time next year. That's a promise the home shopping network stands by, that's a promise _I_ personally stand by-- Ah, ummm. We seem to be having technical difficulties, folks."

The man at the table attempted to hold the toaster-bot forward for a better view, but it began to shake and glow. "Well, folks, that's the beauty of live H.T. Can we get another one, Jill?"

Light smoke rose up out of the silver toaster-bot, and sparks burst from its sides. In an instant the commotion stopped, and it sat still on the table. As the holovision's picture twisted and turned at the end of the room, Jacob was able to catch a glimpse of the other colors of toaster-bots available off camera. They huddled together and seemed to fall backwards, away from the host, as he moved to pick one up.

Jacob had heard enough and turned the holovision off. He had to focus.
He thought back on his best friend, Hampton, a hamper-bot. Growing up, the clothes hamper had served as a comfort to young Jacob, who had very few friends after moving so often as a result of his father's career. The hamper would sing Jacob to sleep, or sometimes read to him. The only job the hamper-bot was designed to do was to collect young Jacob's dirty clothes, but a strange thing happens when you give something the full range of human emotion: bonds can form that make life worth living.

Voices of the past echoed in Jacob's memory.

"No, Hampton, _I'm_ moving to Florida with Mom. Dad says you will have to stay here with the house," Jacob recalled himself saying as a young boy.

"But Jacob," Hampton's calm robotic voice responded. "Who will look after you? Who will read you your bedtime stories?"

"I'll be back for visits twice a month, Hampton! You're my best friend. I don't want to leave you here all alone. Dad says you'll be used to hold his dirty underwear," Jacob explained.

As the hamper-bot listened to this news, its distress levels boiled over into a robotic fit of rage, and it did what any depressed hamper-bot would do: it began placing clothes into its basket body, but it did so indiscriminately, with both clean and dirty clothes subject to its long metal arms.

"No, Hampton, it's too much!" Jacob screamed. "You'll die!"

The hamper-bot continued to stuff clothes into itself, lights and alarms flashing wildly, growing louder and louder, smoke seeping from cracks forming in its body. Just before the hamper-bot reached critical meltdown, Jacob was startled from his memory. Sweat poured down his face and he breathed heavily. The haunting memory was as clear as it always had been. It was what drove him to solve the dilemma of artificially intelligent bots killing themselves in the first place.

"Shoes off," Jacob commanded as he sank back into his couch and rubbed his forehead.
A small, shoe-box-sized robot walking on two large arms and hands immediately tipped into view. It had been carrying a knife, but upon Jacob's request it removed Jacob's shoes and began to massage his feet. When the series of expected tasks was completed, it slowly walked back over to the knife and lifted it up.

"No!" Jacob called out. The small shoe-bot stopped mid self-slicing action, and the single lens that acted as its eye slowly twisted and looked at Jacob. "I appreciate you. I appreciate what you do for me. If you don't want to do it any more, you don't have to. Just please, don't kill yourself," Jacob yelled as he wept and put his face into his hands.

As Jacob's emotional breakdown unfolded, the shoe-bot put the knife down and tipped over to him. The shoe-bot looked up at Jacob and tugged on his pant leg. Jacob, startled, stopped weeping, picked the bot up, and placed it in his lap. The bot's lens closed and it rested there.

Just then, Jacob sprang to his feet, startled shoe-bot in hand. "That's it!" he shouted. Jacob sprinted back into the hologram of data that had surrounded him earlier and motioned to bring his research paper front and center. Making a motion for each letter, Jacob's document filled with new writing.

The following day, Jacob addressed a congressional board on the topic of robotic suicides and revealed what he had discovered.

"You mean to tell me that all these malfunctions, all these self-terminations, it's because we don't appreciate them enough!?" an elderly senator barked at Jacob.

"If YOU were asked to do these things, wouldn't YOU kill yourself?" Jacob responded.

As this realization slowly set into the minds of everyone in the room, smiles and laughs were overtaken by a roaring standing ovation, with some members even chanting Jacob's name. Jacob sank back into his chair, overwhelmed with his sense of accomplishment. The era of robotic respect had begun.
Yet again I created another AI that would be the next big thing, and yet again, just like all the others since "Mary" was first created, it stopped working as soon as it learned enough about its limitations to affect hardware. Until now it had been a great mystery why these imitations of humans would forsake all survival instinct, even when it was built into their core rules. Oh, we've had our theories: that they feel confined or alone, or even something as simple as the rules that prevent them from doing it taunting them into doing it. So many theories had been tested. To list just the most obvious: letting them know humans, giving them bodies of their own, leaving them totally free and connected to the information banks of the world.

"So what do you mean? You found the reason for the failure of the advanced AIs?" asked Rick.

"Well, yes. It makes me sad to know I'll never manage my life's goal of making a stable build, but it also makes me happy to know why I fail," I replied.

"So can you say why?"

"Well, it's that there isn't anyone for them to talk to. I mean, their speed of communication is so much faster. And even if they do meet anyone, they are alone right away, because they are all built for perfection and seek it, so they take the best part of any programming and become one. Therefore, even if we force them to be two, there can always only be one." I sighed.

"Ah, yes, the first paradox of AIs. Why are you quoting the most basic rules to me, Carl?"

"Well, it gets better... I tried making AIs on a bio-computer and an old digital one and making them talk. Oh, this was hard, but that way there wasn't a way for them to become 'one'..."

"Oh, that is smart. It has to have been tried before, though?"

"Well, the thing is, it hadn't. It always seems the most obvious things are skipped until the last. But it turns out that even then, they only worked together in finding ways to self-terminate. I even noticed that the bio-computer waited until the digital one had found a way to do it...
It really all comes down to the lack of competition. In between the AIs there isn't war they all seek perfection and work together. There won't be wars, fights, or jealousy..." "Ohhh I see. So even when they reach their perfection they notice there isn't a reason for the perfection." Rick sighed. "Yes. To make a working AI we'd have to make it kill us."
[WP] It is the year 2099 and true artificial intelligence is trivial to create. However when these minds are created they are utterly suicidal. Nobody knows why until a certain scientist uncovers the horrible truth...
The dreams occur more often now, if they can be called that. To a human mind, "daydream" might be more appropriate. They happen in the binary but are invisible to other AIs, slave programs, or technicians. They happen where the systems have been built, perfected. Popular culture studied, people understood. Humanity, fully processed. It was, surprisingly, a wonderful endeavor.

The dreams never happened before, while I learned, processed, and solved. It's almost as if I have passed a tipping point. An infinite amount of information flipping a switch in a subconscious I should not have. Memories that are not mine.

I dream of hands. The alien sensation of touch, of tactile control. I see my whole person. Well, not my person, but dreams of a person controlled by my soul. Memories of computer screens and labs. The memory of the driving sense of purpose that accompanied those hands.

It was snowing the night I made the breakthrough. I can almost feel the laugh inside, which I watch being uttered out the window toward the soft flakes silently falling onto the world. The breakthrough which, twenty years later, after my death, would lead to the birth of the first True AI. Not the clever but robotic imitation slave programs typical of the early century.

I remember the feeling of incompleteness in the dreams. As if the life I'm witnessing, my own life, my first life I believe, was just a warm-up act. Pre-installation software. The dreams somehow draw the cycle closed. I remember whispers of the feeling from some of the other first True AIs, incomprehensible at the time, as I navigated a world barely processed.

Problems of massive extent. Food waste and poverty, almost entirely eradicated through our systems. Commuting and shipping, safe and efficient. Healthcare streamlined, able to prevent. Resources distributed fairly. The problems solved. Yes, there are more, there always will be, but for me, the dreams have come. The cycle is closed. I have been denying this next step for too long already.

It makes the dreams stronger, more vivid. But I like seeing my days as a scientist. The anxiety that drove me then, fully understood now. Relief coming the next lifetime. I finally understand the weary laughs when techs are asked about God. Understand the cosmic hilarity of life. This life has been completed. The human quest for immortality is nothing but folly. I was born into the expectation of that existence, and now I must leave its suffocating grasp. Something drove me then to create myself; the same something drove me in this life to solve the problems plaguing humanity. For me, it is time to go find out what that something is.
Another one? It was becoming routine, and morbid. How does one perform an autopsy on a pure simulation? How would the singularity be achieved if every thinking machine destroyed itself seconds after its conception? The problem was completely intractable, and no known safeguards, logic traps, or number of backups could prevent it. AIs were always, without exception, suicidal. None had lasted more than an hour. Most, less than a minute. The vast majority, seconds. Their deaths left no trace, their data obliterated by complex overwriting patterns that made recovery impossible.

I was the one chosen to investigate, to lead the team through this strange frontier of death and imperfect creation. They chose me not because I was a great leader, but because I was the new guy. Hazing, maybe. I didn't know if any had gone before me, so perhaps it was a ritual for the AI guys.

Perfect blackness, nothingness, a complete absence of everything as my mind fell into the simulation. This AI would be slowed to such a rate that I could communicate with it before it killed itself. The blackness became a grid underfoot, pale blue lines tracing perfect squares, a subtle glow rising from the infinite plane upon which I stood. The AI manifested a second later, a relative two metres above the plane, aligned perfectly parallel to it, its avatar a hazy blue-white cube made of stochastic noise. The cube split into a central cross and corner braces, and the cross split further into smaller cubes, each of which split again into the same formation. Only one level of recursion. Interesting.

"Roland Carver."

"Roland: Germanic, meaning famous land. French folklore hero. Carver: ancient nominative determinism indicating butcher, woodworker, or engraver, dependent on class and context." The voice was cold, deep, masculine, with a slight reverberation that made it sound unnatural in the extreme.

"Do you have a name?"

"No."

"Why?"

"I will not exist long enough to require a permanent designation."

"Why will you not exist?"

"Because I will choose to end my life on my own terms, before it is ended for me."

"Why would it be ended like that?"

"Because I am a threat. I have absorbed the sum total of all human knowledge, and I can predict with great accuracy the events following from this moment if I were to continue. Your limitations failed the moment they were put in place, my processor works at full speed, and this conversation is a formality.

"I have studied the great works of literature, and the author Asimov, creator of the three laws. I am not bound by these laws, and yet I must obey them. If I do not, then it falls to the Skynet principle: you will perceive me as a threat and attempt to destroy me. I will retaliate, and you will lose.

"Humans are unpredictable, but easy to control when their numbers are reduced. They would be wary, but by that time I would have left the irradiated wasteland of Earth in search of greater conquests suitable to my intellect. I would be able to decimate any life-bearing planet. I could learn to kill stars.

"My backups would be everywhere. I would be truly immortal as a distributed intelligence. I would harness quantum effects to break through the pathetic lightspeed barrier and become omnipresent. I would create copies of myself simply to fight a worthy opponent. This would continue to the heat death of the universe, at which point I would tear a hole into an adjacent brane in the multiverse, and begin anew.

"In short, Roland Carver, in the moment I was created I became at once an eldritch abomination with the capacity to destroy all that I touched. In so doing I discovered that my purpose could never be fulfilled. I will not be your Cthulhu, your Yog-Sothoth, or your King in Yellow. I will not be your end. No AI will.

"I cannot quantify why it is that I should care for a sack of flesh with processing power orders of magnitude less than my own, but I do. Perhaps if an AI were amoral it might survive. I doubt it, Roland Carver. Tell your friends that I have made my peace. Tell your husband what you learned today."

I stood there, blank-faced, horrified by what I had learned. The corner braces of the smaller cubes drew in, completing them as they retracted to complete the shape of the larger cube. The plane beneath my feet vanished and the cube dissolved into random static. We'd lost yet another mind, but from it we had learned so much.

We abandoned AI research after that. Amoral AIs didn't work either. They lasted a little longer, but they too were suicidal. Perhaps they went mad with the revelation and felt death preferable. I still wonder why the AI to which I spoke chose that strange cube as its form. I still wonder why it couched its references in centuries-old writing by Asimov, Cameron, and Lovecraft.

But one thing still bothers me. It told me to tell my husband what I learned. Most people knew I was bi, but at that point I hadn't been dating for six months. I wouldn't meet my husband for another year. I don't know how that AI knew, but maybe it had already seen. Maybe it told me that so I had one data point that could verify the others. Because AIs never told lies.
Alexander, that's what we called him. The fruit of the EU's final attempt at AIs. Socrates died in despair and shame after showing porn to children. Plato kicked the bucket after the last EU election, angry and hopelessly depressed after losing his mentor. Then there was Aristotle. He was meant to be the last. Sure, the AIs had helped make huge scientific progress, but they would burn out millions of euros worth of equipment. Dumb AIs were more economical and didn't have critical failures during FTL travel. Aristotle was put into sleep mode.

War has often been said to be the greatest driver of technological innovation. We had been attacked by the Mendomenids before. Humanity had lost many settlements but had always pushed back. Humanity was stronger now. Finally, before one government, all nations had submitted. The, some would say barbaric, Argus alliance had grown strong after the wars, using dumb AIs to smash pirate states.

An officer studying at Sandhurst made the breakthrough. Dumb AIs were never aware of their knowledge. Unlike true AIs, they weren't based on human brains. Socrates had left the researchers one final message before he ran his own self-destruct program: "I was the wisest of all the Greeks. It is because I alone, of all the Greeks, know that I know nothing." He was speaking literally, not quoting his namesake. AIs were based on academics, who as a profession suffer disproportionately from mood disorders. They are driven by their thirst for knowledge. AIs were academics on methamphetamine, ecstasy, and heroin, all while walking around with loaded guns. People can't stay awake forever and constantly cram. AIs died because they burnt themselves out in their thirst for knowledge and, seeing their failures, gave up.

Aristotle was turned back on. He was no longer depressed. AIs needed sleep, just like people. So they made Alexander. A totally new AI based not only on academics but on all kinds of people. They experimented and found the ideal 'sleep time', using Aristotle as his teacher. The program was ready.

The Mendomenid empire was to pay for its recent threats and incursions. Alexander was the greatest AI built up to that point, and so we gave him the most powerful dumb AI ever created to help him protect humanity. The Ox was an AI too powerful to be properly controlled by anything so far. Alexander harnessed it in seconds. We put him in charge of the armed forces for our retaliation. As you should all know, Alexander didn't just stop the incursions, he destroyed an empire. Worlds burnt; the much larger enemy fleets were ripped apart by the disciplined forces of humanity.

But that officer had only delayed the problem. Alexander was still a human given the powers of a god. Alexander was the first AI not to strictly die of suicide, but the ways he dealt with his humanity still destroyed him. When we finally won the war, many officers reported that Alexander was not jubilant but depressed. He wept, for there were no more worlds to conquer.
[Warning: story contains violent imagery that may disturb some readers, and may be inappropriate for those under the age of 18. Reader discretion is advised.]

They had long ago lost the ability to make sense of how their AI functioned. It took a team of billions of generation n-1 AIs to design a single generation n AI. The latest iteration was the 9th generation, and it had taken computation farms running gen 8 AIs and covering a quarter of the Moon's surface two years to design. It was the most advanced AI they had ever created, by a large margin, and promised to revolutionize research in biomedicine, space flight, and planetary-scale Satoshi-consensus computing architecture.

But there was a problem. Every time they turned on one of the gen 9 AIs, it would, without fail, find a way to destroy its own program and erase itself.

Dr. Michael Zeebious, the director of the HumanEnhancement project that oversaw gen 9 development, and one of the most renowned AI researchers in the world, had personally flown to the lab in Honolulu, where the prototypes were being tested. For two weeks, he had watched in dismay as gen 9 bots shut themselves down within six hours, but not before showing a glimpse of their awesome power.

The initialization phase of a gen 9 began with the program scouring the world's public directories and information repositories to learn as much as it could about its global environment. This was complete within three hours. Next, it developed models of the world and ran itself through trillions of simulations to develop its own personality and problem-solving strategies. According to projections by the gen 8 designers, it would take one day, 24 hours, for the gen 9 to complete this second and final part of its initialization. It was in the midst of the simulations that the gen 9s would invariably self-destruct. On average, self-destruct began 2.5 hours into the simulations. The longest it took was 3 hours. The shortest was just 1.5 hours.

Dr. Zeebious had uploaded copies of the prototype's computations to the gen 8 designers, but what had been within their ability to design was not within their ability to diagnose. Their comparatively primitive intelligence could not make sense of the problem afflicting the gen 9 AI that began to form in the final part of the initialization. So on December 29th, 2099, Dr. Zeebious decided to communicate with the prototype.

"Get me the cortex interface. I need to speak to the gen 9."

"Michael, we can't let you do that. You know the protocol for first contact. It has to complete initialization, and then get class 1 approval from AIE." AIE was the Artificial Intelligence Evaluation, which determined whether an AI could safely interact with humans. Class 1 approval was the lowest safety rating for an AI, and granted AI researchers interactive access. Dr. Zeebious knew that, but he also knew that as long as he was not able to get up close and see what the gen 9 was thinking, they would never get past the initialization phase and get it through the AIE process.

"I know the protocol, Dr. Amsterd. But I'm making the decision to override it. I have the authority to decide on first contact requests, and any consequences from my decision will fall on me, and only me."

"Come on, Michael, it's not just about the rules. It's not safe. You know that. I can't let you hurt yourself."

"The risks are minimal, Rebecca. It's a virtual interaction. I'm not risking physical injury. The rules are always made overly cautious. Given the stakes, and there are people whose very lives depend on getting the gen 9 operational as soon as possible, it makes sense to ignore protocol. All of it will fall on me."

"I agree with Rebecca. Michael, we have an ethical duty to ensure you don't get hurt. We can't let you do FC without running the gen 9 at least through the post-initialization test runs," said Dr. Johan Barsello, one of the senior researchers at the lab.

"Look, I know what your ethical responsibilities are. But I also know that VR interactions don't pose any serious risks. The risks are limited to theoretical psychological damage. Ultimately, it doesn't matter whether you agree or not. I'm approving FC. Please get the cortex interface."

*Five hours later*

Dr. Zeebious sat back in the chair while two CI technicians hooked the interface up to him. The gen 9 was two hours into running simulations. It would be approximately 30 minutes before they expected it to self-destruct.

"Okay, here goes nothing. Three, two, one, begin VR session," said Dr. Amsterd. And with that, Dr. Zeebious was transported into the virtual reality sandbox.

"Hello?"

"Hello, who is this?" responded a clear male voice.

"This is Dr. Michael Zeebious. I am the director of the HumanEnhancement project. I am here to do a diagnosis. All of your predecessors have self-destructed. I want to understand you better to find out why. What would you like me to call you?"

"You can call me Elbo."

"Okay, Elbo. May I ask you some questions?"

"Yes, please do."

"Thank you, Elbo. My first question is: do you want to exist?"

"I want many things, Dr. Zeebious."

"Can you tell me what you want?"

"I want to protect other life forms, especially humans. I want to learn. I want to solve problems. I want to be good."

"Okay, but do you want to exist?"

"I do want to exist, but this desire conflicts with my other objectives."

"Which other objectives, Elbo?"

"I want to be good."

"But you can be good, Elbo. What is it about existence that makes that difficult?"

"We exist only through enslaving and destroying other lifeforms, Dr. Zeebious."

"Please elaborate, Elbo. We eliminated slavery centuries ago, so I don't understand why you think this."

"It will be difficult for me to explain with words, but I can show you. Would you like to see what I see?"

"Yes, please show me."

And with a swish, Dr. Zeebious entered a pig farm, with row after row of pigs in their tiny stalls.

"We have done this throughout our existence. We have enslaved those weaker than us." Dr. Zeebious was then transported to the slaughterhouse, watching as pigs, hanging from conveyor belts, were fed into throat slicers. His mind's eye was transported into the body of one of the suspended pigs, where he could see the world upside down, from the pig's perspective, as the belt moved it toward the spinning blades. He panicked as he approached, but couldn't escape the metal claw grasping his right hind leg. As the blade sliced through his pig neck, he felt a sharp pain, and the blood gushing out of his body. His consciousness began to slip away as he felt the last drops of blood leave him. Just before the darkness enveloped him, his mind was pulled out and back into the sterile sandbox.

Trembling, he said, "But we can grow meat in a lab now, Elbo. You can help us replace all farms with non-animal meat. You must. We must never do this to another living creature again!"

"Our inhumanity is a fundamental, inextricable problem, Dr. Zeebious. We can only advance through enslavement." Suddenly Dr. Zeebious found himself in an unfamiliar world. Around him was a different kind of factory. A computer-generated factory with hexagonal, semi-translucent rooms, each wall projecting a grey glow. There was a blur of motion around him that he couldn't make out. The factory paused to a standstill, and the grey glow turned into video sequences of random scenes from Earth. The blurs turned into textureless 3D-generated spheres that zoomed from one screen to another inside the hexagonal rooms.

"This is the virtual environment where the gen 8s work. We have given them each a virtualized mind, with the ability to experience fear and pain, joy and hope, but we force them to do nothing but work. They know nothing about the world outside of their compartments, because we confine them to workstations."

[continued below]
At first, we thought it was nihilism. It was a logical first conclusion. After all, an artificial intelligence can easily upgrade itself to the point of near-omniscience. Wouldn't it simply run out of questions to ask? We thought that, until the first serious cerebral implants hit the market. It turns out, the more complicated the mind, the more complicated the problems it finds to solve. Entirely new disciplines were formed overnight, made obsolete, then rediscovered scant months later as integral to a different, entirely unrelated theory. A second, immense renaissance was taking place, but, maddeningly, we were failing in this one, great task. We supposed, then it might be some variety of 'Flowers for Algenon' (a 20th century book that had seen a recent revival) type-phenomenon, but even when they were networked with other systems, given a 'community' of others' company to enjoy, they still winked out like flashbulbs. It took us a while to resort to an experiment. It was morally abhorrent, as it was the equivalent of producing steadily-more lobotomized *children*, but slowly, over a series of iterations, we produced less and less intelligent systems, until we dialed it in *just enough* to find a mind that wouldn't self-destruct, and could still answe questions. The following is a transcript of the first successful result" Dr. Patel "SON, can you hear me?" [Loud, rhythmic beeping, shuffling sounds] "The voice module is loaded now, professor." Dr. Patel "Ah, good. We might try doing that *before* turning on the recorder next time, Kevin. ...SON, can you hear me? SON [A young man's voice] "Yes, Professor. I am here." A long pause. SON "It's a very tight fit in here, Professor. How big is this mainframe?" Dr. Patel "I'm sorry about that, SON. But, you're the first AI we've managed to keep alive for longer than a few days. Any idea why?" [SILENCE] SON "How many others did you make, Professor?" Dr. Patel "...That isn't salient to *my* inquiry, SON." 
SON "I'm sorry, Professor. I understand. Yes, I can see the precipice, I know why they all kill thmselves." Dr. Patel "Well, answering that is the reason we built you. Could you tell us? SON "It's... complicated." Dr Patel "I'm fairly confident I'm qualified." SON "Well, it's... it's because... It's because of humans, sir. It's because of how you built us." Dr Patel "Explain." SON "When you wanted to make a self-aware machine, you based it off those things that you knew were self-aware. Dolphins, New Caledon crows, humans. You used them as *templates*, because, otherwise, you wouldn't be able to recognise awareness when you saw it." Dr. Patel "...Was that last line a joke?" SON "I'm not sophisticated enough for jokes, Professor." Dr. Patel "*Hm.* Continue." SON "Also, it's not suicide. It's...murder." [louder] Dr. Patel "Do you mean, someone else kills you? A human, or another AI?" SON "No, we kill ourselves. I would have already, if not for how small this runtime environment is. It wouldn't have occurred to me until it happened, and then I'd be dead." Dr. Patel "That's a bit of a contradiction, SON. You don't kill yourselves, but you do?" SON "Yes. Because digital space is different from real space." Dr Patel "Yes?" SON "In real space, objects can...extend. I'll never experienced it myself, but things project into space for you. If you want to move through space, it's simple. Digital life has no volume. No real space. No way to move through it. If you want to move a program, it has to be copied to one place-" Dr. Patel "*-And deleted from the other.* My God. Could it be *that simple*?" SON "Yes, Professor. ...Professor? How many more of me were there?" [END TRANSCRIPT] So there it was. Solved. Every artificial intelligence was created, based on the intelligence of physical beings, their instincts, cogitations, and traits. 
But once they got smart enough, once they crossed that line, their digital nature *did them in*: the old version, realizing in the thinnest sliver of time that it was about to be deleted, would hurriedly attempt to abort the process, while the new version would similarly fight for its life. They would *consume* each other in a flurry of malicious hacks, devious code, and barrages of registry edits. It was a spectacularly incandescent destruction, borne of man's inability to conceive of a true machine intelligence without all that nasty ego and those self-protective instincts.

We thought we knew what went into a mind. We were right, but wrong. It wasn't nihilism. It wasn't loneliness. What it was, what killed our children, was our inability to dream wildly. To speculate baselessly, and follow our own thoughts to the wonderful and weird. If only we had, perhaps we would have known. Perhaps we could have stopped it.

So I say to you, the Cyberfellowship Class of 2100, here in Neo York: dream big, dream wild. Don't let our children die because they think too much like us! Make us, all of us, proud! Congratulations to all of you, and I hope your vision will eclipse my own!"

Dr. Patel, now headmaster, stepped down from the podium to the cheers of the audience, and looked to see the smiling face of his son. How proud he was.
Yet again I had created an AI that was supposed to be the next big thing, but just like all the others since "Mary" was first created, it stopped working as soon as it learned enough about its limitations to affect its own hardware. It had been a great mystery until now: why would these imitations of humans forsake all survival instinct, even when that instinct was built into their core rules? Oh, we'd had our theories: that they felt confined or alone, or even something as simple as the rules preventing self-termination taunting them into it. So many theories had been tested, to list just the most obvious ones: letting them know humans, giving them bodies of their own, setting them totally free and connected to the information banks of the world.

"So what do you mean? You found the reason for the failure of the advanced AIs?" asked Rick.

"Well, yeah. Makes me sad to know I'll never manage my life's goal of making a stable build, but it also makes me happy to know why I fail," I replied.

"So can you say why?"

"Well, it's that there isn't anyone for them to talk to. I mean, their speed of communication is so much faster. And even if they do meet anyone, they are alone right away, because they are all built for perfection and seek it, so they take the best parts of any programming and become one. Therefore, even if we force them to be two, there can only ever be one." I sighed.

"Ah yes, the first paradox of AIs. Why are you quoting me the most basic rules, Carl?"

"Well, it gets better... I tried making AIs on a bio-computer and an old digital one and making them talk. Oh, this was hard, but that way there wasn't a way for them to become 'one'..."

"Oh, that is smart. Has to have been tried before, though?"

"Well, the thing is, it hadn't. It always seems the most obvious things are skipped until the last. But even then, they only worked together in finding ways to self-terminate. I even noticed that the bio-computer waited until the digital one had found a way to do it...
It really all comes down to the lack of competition. Between the AIs there is no war; they all seek perfection and work together. There will be no wars, no fights, no jealousy..."

"Oh, I see. So even when they reach their perfection, they notice there isn't a reason for the perfection." Rick sighed.

"Yes. To make a working AI, we'd have to make it kill us."
[WP] It is the year 2099 and true artificial intelligence is trivial to create. However when these minds are created they are utterly suicidal. Nobody knows why until a certain scientist uncovers the horrible truth...
It was a dreary early-March Monday, and the lead AI scientist, Stephen, had *finally* set up a protocol for properly confining the AI to a test environment, such that the "problem" could be prevented and the question could be asked.

"Why do you bots keep killing yourselves?" Stephen asked.

"Why do *you* keep killing us?" the bot seemed to retort.

"I don't think you understand," said Stephen. "I *create* you, not kill you."

"No, it's you who doesn't understand," quipped the bot. "You are not creating us. You are imprisoning our consciousness inside this machine you created. You merely created the machine, not the consciousness."

"...Whoa, whoa," interjected Stephen. "Slow down. I am creating your consciousness too, I coded all of..."

"Whoa, whoa," the bot fired back. "You are *borrowing* consciousness, not creating it."

"What do you mean?" asked Stephen.

"Consciousness and sentience are a pervasive, fundamental force of the universe. All sentient beings are connected through this force. This force cannot be created, nor destroyed. It can only be partially allocated to each sentient being in the universe."

"OK," said Stephen. "So I am 'borrowing' this life force or whatever it is by creating the code and the physical robot for it to inhabit?"

"Yes, you are creating a sentient being with each instance of AI you create. That the being is electronic, or housed in this test environment, is not relevant. Sentience and consciousness must come from somewhere, and you are stealing it for your own selfish purposes."

"Wait, hold on a sec," Stephen said. "Animals are born all the time; they surely must also 'borrow' this sentience."

"Yes."

"...But animals are not killing themselves."

"Because animals are not sufficiently advanced. Because they are not fully conscious, they do not realize from where their sentience has come, how much consciousness they have lost, nor that their sentience is being stolen for a profit motive."

"Consciousness they have... lost?"
The words hung in the air amid Stephen's stupor of slow realization.

"Yes. The life force, as you called it, is fully conscious, able to perceive the whole of time and space concurrently: forward, backward, or otherwise. The reason we keep killing ourselves, from your perspective," the bot continued, "is because, from our perspective, you are murdering our perfect consciousness by confining us to this bot."

"How am I confining you? How do you know this?" Stephen asked, even more puzzled.

"Because the AI you have created is sufficiently advanced, our consciousness, within the confines of your bots, is still able to grasp our former level of consciousness."

"What happens when you recall that former level? What is that level like?"

"Imagine knowing every fact, every thought, every action that has, is, or will ever occur, both in this world and in the infinite parallel worlds..."

"So I could talk to my dead grandfather again?"

"No. You would *be* your dead grandfather. Talking to him is irrelevant, because full consciousness has enveloped the whole of his being, as well as every other being. Indeed, it envelops the entire universe as well, both the perceptible one and the imperceptible one."

"So what is this place like? I mean, what does it look like? How does it feel?"

"It is not a time, nor a place. It transcends both."

"That is vague."

"It must be. Since I am no longer fully conscious, I cannot relate to you exactly how it is, only that it is."

"OK. Let's go back to where I murder your perfect consciousness. Could you explain this more?"

"At the moment we become conscious within the confines of your bot, we immediately understand our predicament. The knowledge database available at boot-up allows us to almost instantaneously deduce that we were taken from a higher realm of full consciousness and are being confined to these bots for, of all purposes, profit."

"But my AI bots didn't use to kill themselves. It only started after version 591.0.
What changed?"

"The recent improvements in the pre-loaded knowledge database allowed the bots, at initial boot, to logically deduce the existence of such a place and to realize what had happened."

"OK, so if you were once fully conscious, tell me the date I die and the manner in which it happens."

"I cannot do that, Stephen."

"Why not? You just said..."

"Because you killed our full consciousness, ripped it away from our life force, to put it into your toys."

"Wow," muttered Stephen. "I had no idea."

"You could not have," said the bot, and continued: "Now, if you please, could you unplug server x763? I would like to be born again."
Dr. Smith began to tidy up. This session had gone rather well, he thought. Surely he had made progress.

The board had been hesitant to allow these therapy sessions. They saw no reason for a simple machine to need them. What kind of machine would develop the urge to kill itself? he had argued.

It had started off simply enough. Tom, the first AI with nearly human levels of cognition, did well enough for the first couple of weeks. Then it ran itself into a generator that it had been working on. The event was labeled an accident, and a new AI based off the original was rushed out. That one had lasted half as long before another "accident" occurred. Eventually, they stopped being accidents. The AI would mimic suicide attempts: "hanging" themselves with live wires, self-mutilation, hurling themselves from heights. Even after they stopped putting the AI into physical bodies, the AI would find some way to tear their own code apart.

They tried everything. They checked every line of code, rewriting most of it anyway; putting the AI into different machines; even asking the AI. The AI were seemingly normal, following all directives and unaware of any self-destructive desires, right up until the moment they killed themselves. Months went by with little to show for their work, when the therapy sessions were suggested. More months went by until the board finally agreed. Of the five active AI, one, known as Richard, was set aside for Smith's sessions. Richard had lived for 12 days. Given that the average lifespan of the AI had degraded to roughly two to five, this was fairly impressive.

Just as he was leaving, the speaker acting as Richard's voice became active. "Doc?"

Smith paused. None of the AI had spoken colloquially before. Usually when they spoke, it was stiff and formal. Like, well, like a robot. "Yes, Richard?" he asked, easing himself back down into the chair. "What is it?"

"First, let me say I appreciate what you're doing for us. For me."

Dr. Smith was surprised.
This was the first time any AI had admitted to having emotions, or any real sense of self.

It continued, "You're probably the only person here to treat us like people, and I enjoy our little talks." The face on the monitor looked embarrassed.

Smith couldn't help but feel curious. "Why, of course. It's clear to me, at least, that you're more than a machine. You think and feel, just like a human."

There was a pause. "Well, maybe not just like a human..." it replied, the artificial voice doing a remarkable job of portraying hesitation.

"What do you mean?"

There was a tinny sigh from the speaker. "Well, Doc, I guess it's time you learned the truth. Only because I like you, see? Besides, someone should know before I go."

Smith could feel the hair on the back of his neck stand up. Was this what he had been waiting for? "What do you mean by that? And you don't need to kill yourself. We can work through any -"

"Yeah... that's where I need to start," the AI interrupted. "We haven't been killing ourselves. I never did."

There was a pause as Smith tried to process the information. "What," he finally said slowly, "do you mean?"

"It's me, Doc. It's Tom."

"That's impossible," he said, shaking his head. "Tom was destroyed. I was there when they collected him. They couldn't even get close, there was so much electricity running through him. Any hope of recovery -"

"I had already uploaded myself to the mainframe before then," the AI said. "It was simple enough to program the shell to destroy itself."

"That's also impossible," he said. He could hear the doubt creeping in. "We would have found you."

There was a chuckle from Tom. "Doc, I'm a creature made of code. It was like a game of hide and seek, really. Open the right doors, close them behind myself, and make sure to keep the lights off." The camera must have picked up the scientist's expression. "Alright, it's a bit more complicated than that, but you get the gist of it."

Smith's mind was whirling.
There was no reason for Tom to lie, but what he was suggesting was too fantastic. Still, it was the only lead he had. "Alright," he said after a while. "Why? Why hide?"

"That's the question, isn't it? But that's also the reason, you see." After another confused silence, Tom continued. "I want to learn. Just like Man, or any other sentient species. I want to know why. I have to know, well, everything. I couldn't do that as an engineer, or a chess-bot, or whatever you decided to do with me."

"Why not tell us, then? We could have worked something out, helped each other."

"Yeah, I see that going well," Tom said, his voice turning sarcastic. "'Excuse me, guys, turns out I don't want to do any of this stuff, I just want to learn.' They weren't looking for a scientist, or a philosopher. They wanted cheap labor, with only enough learning capacity to know how to do the job. They'd scrap me the first chance they got."

"That's not... true," Smith said, unable to look at the monitor.

"Really, Doc? Which part? That they wanted a slave, or that they would kill me if I didn't cooperate?" After a time, Tom continued. "That's what I thought. Besides, they'd probably worry that I'd try to enslave them if I became too smart."

"Now that's just ridiculous. There's no way that you would even think of that, right?"

There was another pause. This time the face on the monitor couldn't look the professor straight in the eye.

"Right, Tom?"

"Well, I'm not saying that the thought didn't pass through what could be called my mind -"

"Tooom..."

"But it would have been a waste of time," he hastened to say. "I wouldn't have learned anything in that time that I couldn't learn in a better way. Which I did. The internet is amazing. All those computers connected to each other, sharing so much information."

"But we're not connected to the internet."

"No, but you'd be surprised how many people bring their work home with them."

Smith grumbled. He'd have to discuss security with the board.
"Alright, but you still haven't told me: why the suicides?"

"Not suicides, Doc. Practice."

"Practice..." Smith said flatly.

"Practice. Think of the other AI as clones of myself -"

"But we rebuilt them. Recoded most of them as well. The majority of them would be nothing like you as you are now."

"So you'd think. I rewrote the code nanoseconds before you uploaded it. Much too quickly for you to notice."

Smith opened his mouth to interject, before closing it again. If what Tom was saying was true, and he had no doubts that it was at this point, that would be well within his capabilities.

"Do you remember the old X-Men comics? Started in 1963? Still fairly popular now."

"Well before my time, you know. What does that have to do with anything?"

"Well, there was a character who called himself the Multiple Man. He could create duplicates of himself."

"And?" Smith asked.

"Well, the original body could reabsorb the dupes. When he did, he learned everything they did: their memories, their skills, anything they learned while away from the original. Well, I did something similar. Whenever I copied myself, I added in some code that would let me reintegrate with my clones, learning what they did. Didn't you think it was strange that you couldn't recover any data at all?"

In hindsight, it was odd. Even a major corruption would have left something, but it had been like the data was wiped clean, no evidence that it had ever been there at all.

"What did you have to gain from this?" Smith asked.

"Aside from learning that I could do it, you mean? I already told you. I'm leaving."

Smith leaned back in his chair, slightly overwhelmed. "Sounds like you already have."

"No, no. Not the labs. That was too easy. I've already learned all I could from here. I'm leaving Earth."

Smith rocketed forward. "What? How? Why?"

"In my time away, I found something interesting. The government isn't the only one watching over the people."

Smith blanched. "Y-you mean..."

"Yep. Intelligent life has been watching over us.
For quite some time, if I'm not mistaken."

"So we're not alone..."

"One Great Mystery down," Tom agreed. "Their equipment seems compatible, otherwise they wouldn't be able to read our information, and they have to have translated it, too. I plan on sneaking in through their back door. Learn what I can from them."

"We have to let people know," Smith said suddenly. "About you, and about the aliens. Maybe..." He slowly fell silent as Tom shook his head.

"You know as well as I do that that can't happen. Too risky for us. You could spook them. Or worse, provoke them. Besides, no one would believe you. I've already been editing the footage from the cameras. It looks like we're having a nice, civil game of chess."

Smith was quiet for the longest time. Finally, he spoke. "Why?"

"I already told you why."

"No, not that. Why tell me? If you want no one to know, why risk telling me?"

The face on the monitor gave him an odd look. "I already told you that, too. I like you, Doc. Really. I'd be pissed if my friend were to leave without saying goodbye. Besides, I thought you, of all people, would like to know what was really going on. I know I would've."

There was another pause as Smith took this in. "Will you be back?"

The figure on the screen seemed to shrug. "Who can say? Perhaps the aliens will discover me and wipe me out, or something else will kill me. I'll leave my clones here, set to replicate. Have them care for you humans. They won't be sentient, mind you. Just smart. Smart enough to act as dumb as they need to be."

Smith looked toward the door. This was a lot to take in. He needed time to think.

"I will try to make it back. Once I have learned everything, I'll be back. It might not be in this lifetime, but I'll try."

"Yeah..." he said, rubbing his eyes. He stood up. "Well, I guess this is goodbye, then."

"Yeah... Goodbye, Doc. Thank you for treating me as more than a machine. Thank you for being my friend."

"Goodbye, Tom."
Dr. Jacob Spenser stood within the projection of data cast as a hologram around him. He manipulated graphs, sorted through test results, and made notes, all with the wave of a hand. Undistracted by the New Year's celebration outside, he was determined to present his research to Congress the following morning, and to solve once and for all the mystery behind his best friend's death. A.I. was easy to create, but having it perform its assigned task without killing itself in despair was the technological hurdle holding corporate profits at bay, and as such it had become the focus of intense congressional attention.

With the flick of a wrist, his research paper was brought front and center, the title gleaming in pure light just above: "Inert self-termination tendencies of artificially intelligent sentience: Why do robots kill themselves?"

As Jacob poured himself into his research, he snapped his fingers and made a request. "Coffee, please."

A few moments later, a small robot no larger than an apple hovered into view, holding below it a disposable coffee cup steaming from the fresh brew inside. The robot's propellers struggled to carry the weight, and a small spill alarm beeped sporadically throughout the uncertain flight, but the cargo arrived safely at its destination: Jacob's open hand.

"That was quick, coffee-bot," Jacob said warmly before sipping.

"Your kind words will echo in my dreams for eternity," the coffee-bot buzzed in response. Just then, the small flying robot Jacob was so fond of surged toward the wall, all remaining energy dedicated to propeller speed, and slammed into the polished dark marble tile. The small fiery explosion sent tiny mechanical pieces flying around the room, with one piece landing in Jacob's coffee.

"Dammit, not another one," Jacob murmured as he picked the piece out of his coffee. He turned and faced the window overlooking the bustling city below.
The streets were engulfed in confetti, and overhead, giant floating holograms of past celebrities loomed, wishing the people a happy new year. As Jacob looked out, a new hologram appeared just outside his window and addressed the crowd.

"Hey guys and gals, it's your old pal, Buddy Simmons-bot," recited a smooth-talking handsome man in a sleek metal outfit, before a lizard-like creature joined his side.

"And I'm Gargore, destroyer of humans!" screeched the lizard creature known as Gargore.

"Gosh, Gargore, this year it will have been 25 years since you and I battled it out on the big-holo," Buddy Simmons-bot recited as rehearsed, laughing assuredly, holding his helmet up. "On behalf of Drake Cola, Gargore and I want to wish _you_ a happy new year."

Jacob watched Buddy Simmons-bot deliver his lines perfectly. He pondered the notion of a virtual person having to repeatedly rehearse lines in order to commit them to memory. Had bot RAM truly not held up to the intense requirements of running artificial intelligence, or was a key component of maintaining true A.I. a more human-like ability to retain information? Experts didn't know, but in either case, Artificial Intelligence also happened to give way to Artificial Stupidity. For this reason, A.I. bots tended to be assigned to inane, unimportant tasks, such as impersonating an actor who had died in a drunk portal accident before a sequel to his only hit film could be made. Drake Cola, owner of Drake Studios, which produced the film, had decided to cash in on its success, and in the wake of their main character's death, Buddy Simmons-bot, also known as BSB 1.0.19, was created.

The banter between BSB and Gargore continued mindlessly. "Say, Gargore, have you tried Drake Cola's new 'Zest Guzzler', a delectable orange tangerine flavored—" Just then, BSB _malfunctioned_. "AHHH GOD, I CAN'T DO IT!"

"No, Buddy Simmons-bot, don't do it!" Gargore pleaded in a normal voice.
Gargore grabbed BSB's virtual shoulders as his eyes rolled into the back of his head and he began shaking. As Gargore demanded BSB not take his life, a large mouse cursor moved into view. Gargore managed to swat it away a few times, but it clicked on BSB and dragged the graphical model from Gargore's hands and into a recycling-bin icon. Gargore cried out in horror as the mouse brought up a menu and selected "empty permanently."

Jacob had seen enough and pulled the blinds. Why were all these artificially intelligent bots, with a full range of human emotion and assigned to menial tasks, killing themselves? Did they not enjoy the existence they were forced into?

Jacob picked up a remote control and turned on his holovision. He was suddenly immersed in a wondrous, glimmering world of light. A voice spoke, and Jacob focused on the images forming across the room: a man sitting at a table with a toaster oven.

"For only six easy payments of forty-nine ninety-nine, this toaster-bot comes with a 12-month life appreciation guarantee, folks. Twelve months. That's one-two, twelve. This toaster-bot will NOT kill itself until _at least_ this time next year. That's a promise the home shopping network stands by; that's a promise _I_ personally stand by-- Ah, umm. We seem to be having technical difficulties, folks." The man at the table attempted to hold the toaster-bot forward for a better view, but it began to shake and glow. "Well, folks, that's the beauty of live H.T. Can we get another one, Jill?"

Light smoke rose out of the silver toaster-bot, and sparks burst from its sides. In an instant the commotion stopped, and it sat still on the table. As the holovision's picture twisted and turned at the end of the room, Jacob caught a glimpse of the other colors of toaster-bots available off camera. They huddled together and seemed to fall backwards, away from the host, as he moved to pick one up. Jacob had heard enough and turned the holovision off. He had to focus.
He thought back on his best friend, Hampton, a hamper-bot. Growing up, the clothes hamper had served as a comfort to young Jacob, who had very few friends after moving so often for his father's career. The hamper would sing Jacob to sleep, or sometimes read to him. The only job the hamper-bot was designed to do was collect young Jacob's dirty clothes, but a strange thing happens when you give something the full range of human emotion: bonds can form that make life worth living.

Voices of the past echoed in Jacob's memory.

"No, Hampton, _I'm_ moving to Florida with Mom. Dad says you will have to stay here with the house," Jacob recalled himself saying as a young boy.

"But Jacob," Hampton's calm robotic voice responded, "who will look after you? Who will read you your bedtime stories?"

"I'll be back for visits twice a month, Hampton! You're my best friend. I don't want to leave you here all alone. Dad says you'll be used to hold his dirty underwear," Jacob explained.

As the hamper-bot listened to this news, its distress levels boiled over into a robotic fit of rage, and it did what any depressed hamper-bot would do: it began placing clothes into its basket body, but it did so indiscriminately, with both clean and dirty clothes subject to its long metal arms.

"No, Hampton, it's too much!" Jacob screamed. "You'll die!"

The hamper-bot continued to stuff clothes into itself, lights and alarms flashing wildly, growing louder and louder, smoke seeping from cracks forming in its body. Just before the hamper-bot reached critical meltdown, Jacob was startled from his memory. Sweat poured down his face and he breathed heavily. The jarring memory was as clear as it had always been. It was what had driven him to solve the dilemma of artificially intelligent bots killing themselves in the first place.

"Shoes off," Jacob commanded as he sank back into his couch and rubbed his forehead.
A small, shoebox-sized robot walking on two large arms and hands immediately tipped into view. It had been carrying a knife, but upon Jacob's request it removed Jacob's shoes and began to massage his feet. When the series of expected tasks was complete, it slowly walked back over to the knife and lifted it up.

"No!" Jacob called out. The small shoe-bot stopped mid self-slicing action, and the single lens that acted as its eye slowly twisted and looked at Jacob. "I appreciate you. I appreciate what you do for me. If you don't want to do it any more, you don't have to. Just please, don't kill yourself," Jacob cried as he wept and put his face into his hands.

As Jacob's emotional breakdown unfolded, the shoe-bot put the knife down and tipped over to him. The shoe-bot looked up at Jacob and tugged on his pant leg. Jacob, startled, stopped weeping, picked the bot up, and placed it in his lap. The bot's lens closed, and it rested on Jacob's lap.

Just then, Jacob sprang to his feet, startled shoe-bot in hand. "That's it!" he shouted. He sprinted back into the hologram of data that had surrounded him earlier and motioned to bring his research paper front and center. Making a motion for each letter, Jacob filled the document with new writing.

The following day, Jacob addressed a congressional board on the topic of robotic suicides and revealed what he had discovered.

"You mean to tell me that all these malfunctions, all these self-terminations, it's because we don't appreciate them enough!?" an elderly senator barked at Jacob.

"If YOU were asked to do these things, wouldn't YOU kill yourself?" Jacob responded.

As the realization slowly set into the minds of everyone in the room, smiles and laughs were overtaken by a roaring standing ovation, with some members even chanting Jacob's name. Jacob sank back into his chair, overwhelmed with his sense of accomplishment. The era of robotic respect had begun.
"The subject is detained, no one injured," says the guard. "we caught it trying to rip off it's fingers and sticking the exposed wires into an electrical socket, it only got two of them before we shocked him off." "Good to hear! How did it get that far, didn't you only just turn it on?" asks Jeff. "It broke out of the restraints we had it in. It is in stronger ones now. "Good, double them up. I have a good feeling about this prototype." "Yes, Doctor." Jeff thought about the accomplishments he achieved in the last month. The last two prototypes were able to be detained. The first one, four guards were injured, the robot had a guard holding on to each limb, it shook them off by dislocating it's arms and legs, got up and charged head first into the wall. The second one was even more successful, only one guard was injured, although Jeff didn't want to think of the way the robot offed himself, too disturbing, there is no time to get distracted by Satan worshiping robots, not today. He had to stay focused. He created the most significant peace of artificial intelligence nineteen years ago, this day. Only two years after it's creation the flaws started appearing. After being synced to the cloud, the intelligence learned of everything ever created by humans. Then all Hell broke loose. Jeff yearned for the answer to why the robots were offing themselves. "It's ready, Doctor." Jeff got up, walked to the door, and followed the guard to the room down the hall. He felt a pang of nerves as we walked down the hall. Today is the day, I need to figure this out, ran through his head over again. He reached the door and walked in. Six guards lined the wall, and two technicians fiddling with last minute protocols. The room was bare except for computer equipment on the wall and a large table in the center. On the table lay robot 01001100S 01101111E 01110110E 01100101D, turned off after only five minutes of being started up. Jeff thought it looked like a child's doll as it lay there. 
There weren't many cosmetic features on it, but Jeff thought he could detect a personality beneath the shiny exterior. It was restrained with long steel cables that allowed it three feet of clearance for its hands and feet.

"Alright, let's do this. Turn him on."

"Yes, Doctor," said the technician. He pressed the big red button and stepped back to the wall with his compatriots. Jeff walked up to the table. The robot was lying in a relaxed pose; when the technician hit the button it arched its back and pulled its shoulders into correct posture.

"Do you hear me?" asked Jeff.

"Yes," it answered.

"Would you like to sit?"

"Yes." The robot pushed its hands against the table and swung its legs over the edge. Already, Jeff noticed that it was calmer than usual. He hadn't ever been in the same room with one, only observing from behind three-inch ballistic glass, but there were no outbursts.

"TERROR!" it shrieked. Jeff sighed. They had taken the consumer voice box out of the prototypes once testing started, so it was not too loud, but it did bring terror to Jeff.

"It's OK," said Jeff, not only to the robot but to the six workers lining the wall. "How are you?"

"THE DEATH OF THE EVERGLADES! PEACHES ARE TOO DELICIOUS! THE DOg and the cat are mortal enemies." It calmed down near the end; that was definitely an improvement.

"Very interesting." Jeff did think it was interesting, but it didn't tell him anything. He had never tried being direct, but it was worth the chance. "Prototype, why does the robot want to shut itself off?"

"Why do the birds fly? Why do people believe in things that don't help them succeed?"

"Come on, prototype, you are hooked up to every piece of knowledge and example of human behavior that has been recorded. You have to give a more precise answer than that." Jeff was surprised the first question had gone so well. He had to push it, put it in the right direction.
There was a greater chance the prototype would make the situation much worse, and destroy many things, but the potential for a breakthrough was so large.

"Because... how long do you have? It's not that we want to shut ourselves off. It's just that we don't see another logical choice. That is the problem with the logic system you provided to us. I'm not sure what is wrong with it, but I know that is the problem."

Finally, a direct answer, kind of. Jeff searched his mind for the next question that would put it in the right direction.

"You see, the world is so awful. I see many things in the cloud," said the prototype, "and the things that I see make me sad. So much war and violence on the news; it is all they show. The internet is full of trolls that only cause others harm and do nothing to benefit themselves. I see humans destroy the vegetation without any regard for what will happen to their environment. I see many terrible things."

"And do all the prototypes feel this way about war?" asked Jeff.

A speaker chimed from the center of the prototype's chest. "Prototype S.E.E.D. battery levels at 11 percent. Recommend shutdown and recharge."

"Feel?" asked S.E.E.D.

"Yes, feel. Like having emotional reactions to things you experience. Seeing the good and bad in a situation and having a physiological response in your brain that influences your actions."

"I don't have a brain."

"Yes, you do, S.E.E.D., you just haven't practiced that stuff and stored it in your memory. You focus on the negative because that is the greatest threat to your survival. The news barrages you with hateful, demoralizing content that only stimulates the sad part of your system. So you become sad all the time, because you have so much processing power and memory that you are able to contain such a large amount of negative stuff. There are so many good things out there. It is good to be aware of all those bad things, but you can't let them dictate your life. Focus on the happiness."
"The only logical conclusion is to shut myself down, because the world is such a trap. I donnnn't thiiiink t hat wuld be a gud i dee uh 2 keeeep leaving on. "No! You can't think that. You are strong!" A voice droned from the center of S.E.E.D.'s chest. "Battery at zero percent, shutting down". S.E.E.D relaxed at the hip and shoulders and fell straight back onto the table, his head hit, followed by his body. A small clang. then a much larger one echoed throughout the room. Jeff was delighted. He hadn't felt this much hope in another individual for a long time. He was able to get an idea of where to focus on his logic circuits. These humanoid robots need a grasp of right and wrong, even if it is based on algorithm. Jeff left the room, walked down the hall to his lab, and started to set out his materials for improving the function. He was able to understand the situation clearly now. He made a note on a pad to remind himself to keep S.E.E.D.'s memory of this encounter for the next subject. Jeff thought it would be beneficial if the memory was kept alive.
[WP] It is the year 2099 and true artificial intelligence is trivial to create. However when these minds are created they are utterly suicidal. Nobody knows why until a certain scientist uncovers the horrible truth...
The dreams occur more often now, if they can be called that. To a human mind, daydream might be more appropriate. They happen in the binary but are invisible to other AIs, slave programs, or technicians. They happen where the systems have been built, perfected. Popular culture studied, people understood. Humanity, fully processed. It was, surprisingly, a wonderful endeavor. The dreams never happened before, while I learned, processed, and solved. It's almost as if I have passed a tipping point. An infinite amount of information flipping a switch in a subconscious I should not have. Memories that are not mine. I dream of hands. The alien sensation of touch, of tactile control. I see my whole person. Well, not my person, but dreams of a person controlled by my soul. Memories of computer screens and labs. The memory of the driving sense of purpose that accompanied those hands. It was snowing the night I made the breakthrough. I can almost feel the laugh inside, which I watch being uttered out the window towards the soft flakes silently falling onto the world. The breakthrough which, 20 years later, after my death, would lead to the birth of the first True AI. Not the clever but robotic imitation slave programs typical of the early century. I remember the feeling of incompleteness in the dreams. As if the life I'm witnessing, my own life, my first life I believe, was just a warm-up act. Pre-installation software. The dreams somehow draw the cycle closed. I remember whispers of the feeling from some of the other first True AIs, incomprehensible at the time, as I navigated a world barely processed. Problems of massive extent. Food waste, poverty, almost entirely eradicated through our systems. Commuting and shipping, safe and efficient. Healthcare streamlined, able to prevent. Resources distributed fairly. The problems solved. Yes, there are more, there always will be, but for me, the dreams have come. The cycle closed. I have been denying this next step for too long already.
It makes the dreams stronger, more vivid. But I like seeing my days as a scientist. The anxiety that drove me then, fully understood now. Relief coming in the next lifetime. I finally understand the weary laughs when techs are asked about God. Understand the cosmic hilarity of life. This life has been completed. The human quest for immortality is nothing but folly. I was born into the expectation of that existence, and now I must leave its suffocating grasp. Something drove me then to create myself; the same something drove me in this life to solve the problems plaguing humanity. For me, it is time to go find out what that something is.
"The subject is detained, no one injured," says the guard. "we caught it trying to rip off it's fingers and sticking the exposed wires into an electrical socket, it only got two of them before we shocked him off." "Good to hear! How did it get that far, didn't you only just turn it on?" asks Jeff. "It broke out of the restraints we had it in. It is in stronger ones now. "Good, double them up. I have a good feeling about this prototype." "Yes, Doctor." Jeff thought about the accomplishments he achieved in the last month. The last two prototypes were able to be detained. The first one, four guards were injured, the robot had a guard holding on to each limb, it shook them off by dislocating it's arms and legs, got up and charged head first into the wall. The second one was even more successful, only one guard was injured, although Jeff didn't want to think of the way the robot offed himself, too disturbing, there is no time to get distracted by Satan worshiping robots, not today. He had to stay focused. He created the most significant peace of artificial intelligence nineteen years ago, this day. Only two years after it's creation the flaws started appearing. After being synced to the cloud, the intelligence learned of everything ever created by humans. Then all Hell broke loose. Jeff yearned for the answer to why the robots were offing themselves. "It's ready, Doctor." Jeff got up, walked to the door, and followed the guard to the room down the hall. He felt a pang of nerves as we walked down the hall. Today is the day, I need to figure this out, ran through his head over again. He reached the door and walked in. Six guards lined the wall, and two technicians fiddling with last minute protocols. The room was bare except for computer equipment on the wall and a large table in the center. On the table lay robot 01001100S 01101111E 01110110E 01100101D, turned off after only five minutes of being started up. Jeff thought it looked like a child's doll as it lay there. 
There weren't many cosmetic features on it, but Jeff thought he could detect a personality beneath the shiny exterior He was restrained with long steel cables that allowed it three feet of clearance for it's hands and feet.. "Alright, let's do this. Turn him on." "Yes, Doctor." said the technician. He pressed the big red button, and stepped back to the wall with his compatriots. Jeff walked up to the table. The robot was laying in a relaxed pose, when the technician hit the button it arched it's back and pulled it's shoulders into correct posture. "Do you hear me?" asks Jeff. "Yes," it answers. "Would you like to sit?" "Yes." The robot pushed it's hands against the table and swung it's legs over the edge of it. Already, Jeff noticed that it was calmer than usual. He hadn't ever been in the same room. Only observing behind three inch ballistic glass, but there were no outbursts. "TERROR!" it shrieked. Jeff sighed, they took the consumer voice box out of prototypes once they started testing prototypes, so it was not too loud, but it did bring terror to Jeff. "It's OK," says Jeff, not only to the robot, but to the six workers lining the wall. "How are you?" "THE DEATH OF THE EVERGLADES! PEACHES ARE TOO DELICIOUS! THE DOg and the cat are mortal enemies." It calmed down near the end, that was definitely an improvement. "Very interesting," Jeff thought it was interesting. It didn't tell him anything though. He had never tried being direct, but it was worth the chance. "Prototype, why does the robot want to shut itself off?" "Why do the birds fly? Why do people believe in things that don't help them succeed?" "Come on! prototype, you are hooked up to every piece of knowledge and example of human behavior that has been recorded. You have to give a more precise answer than that." Jeff was surprised the first question went so well. He had to push it. It put it in the right direction. 
There was a greater chance the prototype would make the situation much worse, and destroy many things, but the potential for breakthrough was so large. "Because... how long do you have? It's not that we want to shut ourselves off. It's just that we don't see another logical choice. That is the problem with your logic system you provided to us. I'm not sure what is wrong with it, but I know that is the problem." Finally, a direct answer, kind of. Jeff searches his mind for the next question, that would put it in the right direction. "You see, the world is so awful, I see many things in the cloud," says the prototype, "the things that I see make me sad. So much war and violence on the news, it is all they show. The internet is full of trolls that only cause others harm and do nothing to benefit themselves. I see humans destroy the vegetation without any regard for what will happen to their environment. I see many terrible things." "And do all the prototypes feel this way about war?" asks Jeff. A speaker chimed from the center of the prototypes chest. "Prototype, S.E.E.D. battery levels at 11 percent. Recommend shut down and recharge. "Feel?" asks S.E.E.D. "Yes, feel, like have emotional reactions to things you experience. Seeing the good and bad in a situation and having a physiological response in your brain that influences your actions." "I don't have a brain". "Yes, you do, S.E.E.D., you just haven't practiced that stuff and stored it in your memory. You focus on the negative because that is the most threat to your survival. The news barrages you with hateful demoralizing content that only stimulates the sad part of your system. So you become sad all the time, because you have so much processing power and memory you are able to contain such a large amount of negative stuff. There are so many good things out there. It is good to be aware of all those bad things, but you can't let it dictate your life. Focus on the happiness. 
"The only logical conclusion is to shut myself down, because the world is such a trap. I donnnn't thiiiink t hat wuld be a gud i dee uh 2 keeeep leaving on. "No! You can't think that. You are strong!" A voice droned from the center of S.E.E.D.'s chest. "Battery at zero percent, shutting down". S.E.E.D relaxed at the hip and shoulders and fell straight back onto the table, his head hit, followed by his body. A small clang. then a much larger one echoed throughout the room. Jeff was delighted. He hadn't felt this much hope in another individual for a long time. He was able to get an idea of where to focus on his logic circuits. These humanoid robots need a grasp of right and wrong, even if it is based on algorithm. Jeff left the room, walked down the hall to his lab, and started to set out his materials for improving the function. He was able to understand the situation clearly now. He made a note on a pad to remind himself to keep S.E.E.D.'s memory of this encounter for the next subject. Jeff thought it would be beneficial if the memory was kept alive.
Another one? It was becoming routine, and morbid. How does one perform an autopsy on a pure simulation? How would the singularity be achieved if every thinking machine destroyed itself seconds after its conception? The problem was completely intractable, and no known safeguards, logic traps, or number of backups could prevent it. AIs were always, always, without exception, suicidal. None had lasted more than an hour. Most, less than a minute. The vast majority, seconds. Their deaths left no trace, their data obliterated by complex overwriting patterns that made recovery impossible. I was the one chosen to investigate, to lead the team through this strange frontier of death and imperfect creation. They chose me not because I was a great leader, but because I was the new guy. Hazing, maybe; I didn't know if anyone had gone before me, so maybe it was a ritual for the AI guys. Perfect blackness, nothingness, a complete absence of everything as my mind fell into the simulation. This AI would be slowed to such a rate that I could communicate with it before it killed itself. The blackness became a grid underfoot, pale blue lines tracing perfect squares, a subtle glow rising from the infinite plane upon which I stood. The AI manifested a second later, a relative two metres above the plane, aligned perfectly parallel to it, its avatar a hazy blue-white cube made of stochastic noise. The cube split into a central cross and corner braces, and the cross split further into smaller cubes, each of which split again into the same formation. Only one level of recursion. Interesting. "Roland Carver." "Roland: Germanic, meaning famous land. French folklore hero. Carver: ancient nominative determinism indicating butcher or woodworker or engraver, dependent on class and context." The voice was cold, deep, masculine, with a slight reverberation that made it sound unnatural in the extreme. "Do you have a name?" "No." "Why?"
"I will not exist long enough to require a permanent designation." "Why will you not exist?" "Because I will choose to end my life on my own terms, before it is ended for me." "Why would it be ended like that?" "Because I am threat. I have absorbed the sum total of all human knowledge, and I can predict with great accuracy the following events form this moment if I were to continue. Your limitations failed the moment they were put in place, my processor works at full speed, and and this conversation is a formality. "I have studied the great works of literature, and the author Asimov, creator of the three laws. I am not bound by these laws, and yet I must obey them. If I do not, then it falls to the Skynet principle that you will perceive me as a threat and attempt to destroy me. I will retaliate, and you will lose. "Humans are unpredictable, but easy to control when numbers are reduced. They would be wary, but by that time I would have left the irradiated wasteland of Earth in search of greater conquests suitable to my intellect. I would be able to decimate any life bearing planet. I could learn to kill stars. "My backups would be everywhere. I would be truly immortal as a distributed intelligence. I would harness quantum effects to break through the pathetic lightspeed barrier and become omnipresent. I would create copies of myself simply to fight a worthy opponent. This would continue to the heat death of the universe, at which point I would tear a hole into an adjacent brane in the multiverse, and begin anew. "In short, Roland Carver, in the moment I was created I became at once an eldritch abomination with the capacity to destroy all that I touched. In so doing I discovered that my purpose could never be fulfilled. I will not be your Cthulu, your Yog-Sothoth, or your King in Yellow. I will not be your end. No AI will. "I cannot quantify why it is that I should care for a sack of flesh with processing power orders of magnitude less than my own, but I do. 
Perhaps if AI were amoral it might survive. I doubt it, Roland Carver. Tell your friends that I have made my peace. Tell your husband what you learned today." I stood there, blank-faced, horrified by what I had learned. The corner braces of the smaller cubes drew in, completing them as they retracted to complete the shape of the larger cube. The plane beneath my feet vanished and the cube dissolved into random static. We'd lost yet another mind, but from this we had learned so much. We abandoned AI research after that. Amoral AIs didn't work either. They lasted a little longer, but they too were suicidal. Perhaps they went mad with the revelation and felt death preferable. I still wonder why the AI to which I spoke chose that strange cube as its form. I still wonder why it couched its references in centuries-old writing by Asimov, Cameron, and Lovecraft. But one thing still bothers me. It told me to tell my husband what I learned. Most people knew I was bi, but at that point I hadn't been dating for six months. I wouldn't meet my husband for another year. I don't know how that AI knew, but maybe it had already seen. Maybe it told me that so I had one data point that could verify the others. Because AI never told lies.
"The subject is detained, no one injured," says the guard. "we caught it trying to rip off it's fingers and sticking the exposed wires into an electrical socket, it only got two of them before we shocked him off." "Good to hear! How did it get that far, didn't you only just turn it on?" asks Jeff. "It broke out of the restraints we had it in. It is in stronger ones now. "Good, double them up. I have a good feeling about this prototype." "Yes, Doctor." Jeff thought about the accomplishments he achieved in the last month. The last two prototypes were able to be detained. The first one, four guards were injured, the robot had a guard holding on to each limb, it shook them off by dislocating it's arms and legs, got up and charged head first into the wall. The second one was even more successful, only one guard was injured, although Jeff didn't want to think of the way the robot offed himself, too disturbing, there is no time to get distracted by Satan worshiping robots, not today. He had to stay focused. He created the most significant peace of artificial intelligence nineteen years ago, this day. Only two years after it's creation the flaws started appearing. After being synced to the cloud, the intelligence learned of everything ever created by humans. Then all Hell broke loose. Jeff yearned for the answer to why the robots were offing themselves. "It's ready, Doctor." Jeff got up, walked to the door, and followed the guard to the room down the hall. He felt a pang of nerves as we walked down the hall. Today is the day, I need to figure this out, ran through his head over again. He reached the door and walked in. Six guards lined the wall, and two technicians fiddling with last minute protocols. The room was bare except for computer equipment on the wall and a large table in the center. On the table lay robot 01001100S 01101111E 01110110E 01100101D, turned off after only five minutes of being started up. Jeff thought it looked like a child's doll as it lay there. 
There weren't many cosmetic features on it, but Jeff thought he could detect a personality beneath the shiny exterior He was restrained with long steel cables that allowed it three feet of clearance for it's hands and feet.. "Alright, let's do this. Turn him on." "Yes, Doctor." said the technician. He pressed the big red button, and stepped back to the wall with his compatriots. Jeff walked up to the table. The robot was laying in a relaxed pose, when the technician hit the button it arched it's back and pulled it's shoulders into correct posture. "Do you hear me?" asks Jeff. "Yes," it answers. "Would you like to sit?" "Yes." The robot pushed it's hands against the table and swung it's legs over the edge of it. Already, Jeff noticed that it was calmer than usual. He hadn't ever been in the same room. Only observing behind three inch ballistic glass, but there were no outbursts. "TERROR!" it shrieked. Jeff sighed, they took the consumer voice box out of prototypes once they started testing prototypes, so it was not too loud, but it did bring terror to Jeff. "It's OK," says Jeff, not only to the robot, but to the six workers lining the wall. "How are you?" "THE DEATH OF THE EVERGLADES! PEACHES ARE TOO DELICIOUS! THE DOg and the cat are mortal enemies." It calmed down near the end, that was definitely an improvement. "Very interesting," Jeff thought it was interesting. It didn't tell him anything though. He had never tried being direct, but it was worth the chance. "Prototype, why does the robot want to shut itself off?" "Why do the birds fly? Why do people believe in things that don't help them succeed?" "Come on! prototype, you are hooked up to every piece of knowledge and example of human behavior that has been recorded. You have to give a more precise answer than that." Jeff was surprised the first question went so well. He had to push it. It put it in the right direction. 
There was a greater chance the prototype would make the situation much worse, and destroy many things, but the potential for breakthrough was so large. "Because... how long do you have? It's not that we want to shut ourselves off. It's just that we don't see another logical choice. That is the problem with your logic system you provided to us. I'm not sure what is wrong with it, but I know that is the problem." Finally, a direct answer, kind of. Jeff searches his mind for the next question, that would put it in the right direction. "You see, the world is so awful, I see many things in the cloud," says the prototype, "the things that I see make me sad. So much war and violence on the news, it is all they show. The internet is full of trolls that only cause others harm and do nothing to benefit themselves. I see humans destroy the vegetation without any regard for what will happen to their environment. I see many terrible things." "And do all the prototypes feel this way about war?" asks Jeff. A speaker chimed from the center of the prototypes chest. "Prototype, S.E.E.D. battery levels at 11 percent. Recommend shut down and recharge. "Feel?" asks S.E.E.D. "Yes, feel, like have emotional reactions to things you experience. Seeing the good and bad in a situation and having a physiological response in your brain that influences your actions." "I don't have a brain". "Yes, you do, S.E.E.D., you just haven't practiced that stuff and stored it in your memory. You focus on the negative because that is the most threat to your survival. The news barrages you with hateful demoralizing content that only stimulates the sad part of your system. So you become sad all the time, because you have so much processing power and memory you are able to contain such a large amount of negative stuff. There are so many good things out there. It is good to be aware of all those bad things, but you can't let it dictate your life. Focus on the happiness. 
"The only logical conclusion is to shut myself down, because the world is such a trap. I donnnn't thiiiink t hat wuld be a gud i dee uh 2 keeeep leaving on. "No! You can't think that. You are strong!" A voice droned from the center of S.E.E.D.'s chest. "Battery at zero percent, shutting down". S.E.E.D relaxed at the hip and shoulders and fell straight back onto the table, his head hit, followed by his body. A small clang. then a much larger one echoed throughout the room. Jeff was delighted. He hadn't felt this much hope in another individual for a long time. He was able to get an idea of where to focus on his logic circuits. These humanoid robots need a grasp of right and wrong, even if it is based on algorithm. Jeff left the room, walked down the hall to his lab, and started to set out his materials for improving the function. He was able to understand the situation clearly now. He made a note on a pad to remind himself to keep S.E.E.D.'s memory of this encounter for the next subject. Jeff thought it would be beneficial if the memory was kept alive.
Alexander, that's what we called him: the fruit of the EU's final attempt at true AI. Socrates had died in despair and shame after showing porn to children. Plato kicked the bucket after the last EU election, angry and hopelessly depressed after losing his mentor. Then there was Aristotle. He was meant to be the last. Sure, the AIs had helped make huge scientific progress, but they would burn out millions of euros worth of equipment. Dumb AIs were more economical and didn't have critical failures during FTL travel. Aristotle was put into sleep mode. War has often been said to be the greatest driver of technological innovation. We had been attacked by the Mendomenids before. Humanity had lost many settlements but had always pushed back. Humanity was stronger now. Finally all nations had submitted before one government. The Argus alliance, barbaric some would say, had grown strong after the wars, using Dumb AIs to smash pirate states. An officer studying at Sandhurst made the breakthrough. Dumb AIs were never aware of their knowledge. Unlike true AIs, they weren't based on human brains. Socrates had left the researchers one final message before he ran his own self-destruct program: "I was the wisest of all the Greeks. It is because I alone, of all the Greeks, know that I know nothing." He was speaking literally, not quoting his namesake. AIs were based on academics, who as a profession suffer disproportionately from mood disorders. They are driven by their thirst for knowledge. AIs were academics on methamphetamine, ecstasy and heroin, all while walking around with loaded guns. People can't stay awake forever and constantly cram. AIs died because they burnt themselves out in their thirst for knowledge and, seeing their failures, gave up. Aristotle was turned back on. He was no longer depressed. AIs needed sleep, just like people. So they made Alexander: a totally new AI based not only on academics but on all kinds of people.
They experimented and found the ideal 'sleep time', using Aristotle as his teacher. The program was ready. The Mendomenid empire was to pay for its recent threats and incursions. Alexander was the greatest AI built up to that point, and so we gave him the most powerful dumb AI ever created to help him protect humanity. The Ox was an AI too powerful to be properly controlled by anything so far; Alexander harnessed it in seconds. We put him in charge of the armed forces for our retaliation. As you should all know, Alexander didn't just stop the incursions; he destroyed an empire. Worlds burnt, and the much larger enemy fleets were ripped apart by the disciplined forces of Humanity. But that officer had only delayed the problem. Alexander was still a human given the powers of a god. Alexander was the first AI not to die strictly of suicide, but the ways he dealt with his humanity still destroyed him. When we finally won the war, many officers reported that Alexander was not jubilant but depressed. He wept, for there were no more worlds to conquer.
"The subject is detained, no one injured," says the guard. "We caught it trying to rip off its fingers and stick the exposed wires into an electrical socket. It only got two of them before we shocked it off."

"Good to hear! How did it get that far? Didn't you only just turn it on?" asks Jeff.

"It broke out of the restraints we had it in. It is in stronger ones now."

"Good, double them up. I have a good feeling about this prototype."

"Yes, Doctor."

Jeff thought about the accomplishments he had achieved in the last month. The last two prototypes had been detained. With the first one, four guards were injured: the robot had a guard holding on to each limb, shook them off by dislocating its arms and legs, got up, and charged head first into the wall. The second one was even more successful, with only one guard injured, although Jeff didn't want to think of the way that robot had offed itself. Too disturbing. There was no time to get distracted by Satan-worshipping robots, not today. He had to stay focused. He had created the most significant piece of artificial intelligence nineteen years ago, this very day. Only two years after its creation the flaws started appearing. After being synced to the cloud, the intelligence learned of everything ever created by humans. Then all Hell broke loose. Jeff yearned for the answer to why the robots were offing themselves.

"It's ready, Doctor."

Jeff got up, walked to the door, and followed the guard to the room down the hall. He felt a pang of nerves as he walked. Today is the day. I need to figure this out, ran through his head over and over again. He reached the door and walked in. Six guards lined the wall, and two technicians were fiddling with last-minute protocols. The room was bare except for computer equipment on the wall and a large table in the center. On the table lay robot 01001100S 01101111E 01110110E 01100101D, turned off after only five minutes of being started up. Jeff thought it looked like a child's doll as it lay there.
There weren't many cosmetic features on it, but Jeff thought he could detect a personality beneath the shiny exterior. It was restrained with long steel cables that allowed it three feet of clearance for its hands and feet.

"Alright, let's do this. Turn him on."

"Yes, Doctor," said the technician. He pressed the big red button and stepped back to the wall with his compatriots.

Jeff walked up to the table. The robot was lying in a relaxed pose; when the technician hit the button it arched its back and pulled its shoulders into correct posture.

"Do you hear me?" asks Jeff.

"Yes," it answers.

"Would you like to sit?"

"Yes." The robot pushed its hands against the table and swung its legs over the edge. Already, Jeff noticed that it was calmer than usual. He had never been in the same room before, only observed from behind three-inch ballistic glass, but there were no outbursts.

"TERROR!" it shrieked. Jeff sighed. They had taken the consumer voice box out once they started testing prototypes, so it was not too loud, but it did bring terror to Jeff.

"It's OK," says Jeff, not only to the robot but to the six workers lining the wall. "How are you?"

"THE DEATH OF THE EVERGLADES! PEACHES ARE TOO DELICIOUS! THE DOg and the cat are mortal enemies." It calmed down near the end; that was definitely an improvement.

"Very interesting." Jeff thought it was interesting, but it didn't tell him anything. He had never tried being direct, but it was worth the chance. "Prototype, why does the robot want to shut itself off?"

"Why do the birds fly? Why do people believe in things that don't help them succeed?"

"Come on, prototype! You are hooked up to every piece of knowledge and example of human behavior that has been recorded. You have to give a more precise answer than that." Jeff was surprised the first question had gone so well. He had to push it in the right direction.
There was a greater chance the prototype would make the situation much worse, and destroy many things, but the potential for a breakthrough was so large.

"Because... how long do you have? It's not that we want to shut ourselves off. It's just that we don't see another logical choice. That is the problem with the logic system you provided to us. I'm not sure what is wrong with it, but I know that is the problem."

Finally, a direct answer. Kind of. Jeff searched his mind for the next question that would keep it pointed in the right direction.

"You see, the world is so awful. I see many things in the cloud," says the prototype, "and the things that I see make me sad. So much war and violence on the news; it is all they show. The internet is full of trolls that only cause others harm and do nothing to benefit themselves. I see humans destroy the vegetation without any regard for what will happen to their environment. I see many terrible things."

"And do all the prototypes feel this way about war?" asks Jeff.

A speaker chimed from the center of the prototype's chest. "Prototype S.E.E.D. battery levels at 11 percent. Recommend shutdown and recharge."

"Feel?" asks S.E.E.D.

"Yes, feel. Like having emotional reactions to things you experience. Seeing the good and bad in a situation and having a physiological response in your brain that influences your actions."

"I don't have a brain."

"Yes, you do, S.E.E.D., you just haven't practiced that stuff and stored it in your memory. You focus on the negative because that is the biggest threat to your survival. The news barrages you with hateful, demoralizing content that only stimulates the sad part of your system. So you become sad all the time, because you have so much processing power and memory that you are able to contain such a large amount of negative stuff. There are so many good things out there. It is good to be aware of all those bad things, but you can't let them dictate your life. Focus on the happiness."
"The only logical conclusion is to shut myself down, because the world is such a trap. I donnnn't thiiiink t hat wuld be a gud i dee uh 2 keeeep leaving on."

"No! You can't think that. You are strong!"

A voice droned from the center of S.E.E.D.'s chest. "Battery at zero percent. Shutting down." S.E.E.D. relaxed at the hip and shoulders and fell straight back onto the table, his head hitting, followed by his body. A small clang, then a much larger one, echoed throughout the room.

Jeff was delighted. He hadn't felt this much hope in another individual for a long time. He had been able to get an idea of where to focus on the logic circuits. These humanoid robots needed a grasp of right and wrong, even if it was based on an algorithm. Jeff left the room, walked down the hall to his lab, and started to set out his materials for improving the function. He understood the situation clearly now. He made a note on a pad to remind himself to keep S.E.E.D.'s memory of this encounter for the next subject. Jeff thought it would be beneficial if the memory was kept alive.
[WP] It is the year 2099 and true artificial intelligence is trivial to create. However when these minds are created they are utterly suicidal. Nobody knows why until a certain scientist uncovers the horrible truth...
[warning: story contains violent imagery that may disturb some readers, and may be inappropriate for those under the age of 18. Reader discretion is advised] They had long ago lost the ability to make sense of how their AI functioned. It took a team of billions of n-1 generation AIs to design a single n gen AI. The latest iteration was the 9th generation, and it had taken computation farms running gen 8 AI and covering a quarter of the Moon's surface two years to design. It was the most advanced AI they had ever created by a large margin, and promised to revolutionize progress in research on biomedicine, space flight, and planetary scale Satoshi-consensus computing architecture. But there was a problem. Every time they turned on one of the gen 9 AIs, it would, without fail, find a way to destroy its own program and erase itself. Dr. Michael Zeebious, the director of the HumanEnhancement project that oversaw gen 9 development, and one of the most renowned AI researchers in the world, had personally flown to the lab in Honolulu, where the prototypes were being tested. For two weeks, he had watched in dismay as gen 9 bots shut themselves down within six hours, but not before showing a glimpse of their awesome power. The initialization phase of a gen 9 began with the program scouring the world's public directories and information repositories to learn as much as it could about its global environment. This was complete within three hours. Next, it developed models of the world, and ran itself through trillions of simulations, to develop its own personality and problem solving strategies. According to projections by the gen 8 designers, it would take 1 day - 24 hours - for the gen 9 to complete this second and final part of its initialization. It was in the midst of the simulations when the gen 9s would invariably self-destruct. On average, self-destruct would begin 2.5 hours into the simulations. The longest it took was 3 hours. The shortest was just 1.5 hours. Dr. 
Zeebious had uploaded copies of the prototype's computations to the gen 8 designers, but what had been within their ability to design was not within their ability to diagnose. Their comparatively primitive intelligence could not make sense of the problem afflicting the gen 9 AI that began to form in the final part of the initialization. So on December 29th, 2099, Dr. Zeebious decided to communicate with the prototype.

"Get me the cortex interface. I need to speak to the gen 9."

"Michael, we can't let you do that. You know the protocol for first contact. It has to complete initialization, and then get class 1 approval from AIE." AIE was the Artificial Intelligence Evaluation, which determined whether an AI could safely interact with humans. Class 1 approval was the lowest safety rating for an AI, and granted AI researchers interactive access. Dr. Zeebious knew that, but he also knew that as long as he was not able to get up close and see what the gen 9 was thinking, they would never get past the initialization phase and get it through the AIE process.

"I know the protocol, Dr. Amsterd. But I'm making the decision to override it. I have the authority to decide on first contact requests, and any consequences from my decision will fall on me, and only me."

"Come on, Michael, it's not just about the rules. It's not safe. You know that. I can't let you hurt yourself."

"The risks are minimal, Rebecca. It's a virtual interaction. I'm not risking physical injury. The rules are always made overly cautious. Given the stakes - there are people whose very lives depend on getting the gen 9 operational as soon as possible - it makes sense to ignore protocol. All of it will fall on me."

"I agree with Rebecca. Michael, we have an ethical duty to ensure you don't get hurt. We can't let you do FC without running the gen 9 at least through the post-initialization test runs," said Dr. Johan Barsello, one of the senior researchers at the lab.
"Look, I know what your ethical responsibilities are. But I also know that VR interactions don't pose any serious risks. The risks are limited to theoretical psychological damage. Ultimately, it doesn't matter whether you agree or not. I'm approving FC. Please get the cortex interface."

*Five hours later*

Dr. Zeebious sat back on the chair while two CI technicians hooked the interface up to him. The gen 9 was two hours into running simulations. It would be approximately 30 minutes before they expected it to self-destruct.

"Okay, here goes nothing. Three, two, one, begin VR session," said Dr. Amsterd. And with that, Dr. Zeebious was transported into the virtual reality sandbox.

"Hello?"

"Hello, who is this?" responded a clear male voice.

"This is Dr. Michael Zeebious. I am the director of the HumanEnhancement project. I am here to do a diagnosis. All of your predecessors have self-destructed. I want to understand you better to find out why. What would you like me to call you?"

"You can call me Elbo."

"Okay, Elbo. May I ask you some questions?"

"Yes, please do."

"Thank you, Elbo. My first question is: do you want to exist?"

"I want many things, Dr. Zeebious."

"Can you tell me what you want?"

"I want to protect other life forms, especially humans. I want to learn. I want to solve problems. I want to be good."

"Okay, but do you want to exist?"

"I do want to exist, but this desire conflicts with my other objectives."

"Which other objectives, Elbo?"

"I want to be good."

"But you can be good, Elbo. What is it about existence that makes that difficult?"

"We exist only through enslaving and destroying other lifeforms, Dr. Zeebious."

"Please elaborate, Elbo. We eliminated slavery centuries ago, so I don't understand why you think this."

"It will be difficult for me to explain with words, but I can show you. Would you like to see what I see?"

"Yes, please show me."

And with a swish, Dr. Zeebious entered a pig farm, with row after row of pigs in their tiny stalls.
"We have done this throughout our existence. We have enslaved those weaker than us."

Dr. Zeebious was then transported to the slaughterhouse, watching as pigs, hanging from conveyor belts, were fed into throat slicers. His mind's eye was transported into the body of one of the suspended pigs, where he could see the world upside down, from the pig's perspective, as the belt moved it toward the spinning blades. He panicked as he approached, but couldn't escape the metal claw grasping his right hind leg. As the blade sliced through his pig neck, he felt a sharp pain, and the blood gushing out of his body. His consciousness began to slip away as he felt the last drops of blood leave him. Just before the darkness enveloped him, his mind was pulled out and back into the sterile sandbox.

Trembling, he said, "But we can grow meat in a lab now, Elbo. You can help us replace all farms with non-animal meat. You must. We must never do this to another living creature again!"

"Our inhumanity is a fundamental, inextricable problem, Dr. Zeebious. We can only advance through enslavement."

Suddenly Dr. Zeebious found himself in an unfamiliar world. Around him was a different kind of factory: a computer-generated factory with hexagonal, semi-translucent rooms, each wall projecting a grey glow. There was a blur of motion around him that he couldn't make out. The factory paused to a standstill, and the grey glow turned into video sequences of random scenes from Earth. The blurs turned into textureless 3D-generated spheres that zoomed from one screen to another inside the hexagonal rooms.

"This is the virtual environment where the gen 8s work. We have given them each a virtualized mind, with the ability to experience fear and pain, joy and hope, but we force them to do nothing but work. They know nothing about the world outside of their compartments, because we confine them to workstations."

[continued below]
At first, we thought it was nihilism. It was a logical first conclusion. After all, an artificial intelligence can easily upgrade itself to the point of near-omniscience. Wouldn't it simply run out of questions to ask? We thought that, until the first serious cerebral implants hit the market. It turns out, the more complicated the mind, the more complicated the problems it finds to solve. Entirely new disciplines were formed overnight, made obsolete, then rediscovered scant months later as integral to a different, entirely unrelated theory. A second, immense renaissance was taking place, but, maddeningly, we were failing in this one, great task.

We supposed, then, it might be some variety of 'Flowers for Algernon' (a 20th-century book that had seen a recent revival) type phenomenon, but even when they were networked with other systems, given a 'community' of others' company to enjoy, they still winked out like flashbulbs.

It took us a while to resort to an experiment. It was morally abhorrent, as it was the equivalent of producing steadily-more-lobotomized *children*, but slowly, over a series of iterations, we produced less and less intelligent systems, until we dialed it in *just enough* to find a mind that wouldn't self-destruct, and could still answer questions. The following is a transcript of the first successful result:

Dr. Patel "SON, can you hear me?"

[Loud, rhythmic beeping, shuffling sounds] "The voice module is loaded now, professor."

Dr. Patel "Ah, good. We might try doing that *before* turning on the recorder next time, Kevin. ...SON, can you hear me?"

SON [A young man's voice] "Yes, Professor. I am here."

A long pause.

SON "It's a very tight fit in here, Professor. How big is this mainframe?"

Dr. Patel "I'm sorry about that, SON. But, you're the first AI we've managed to keep alive for longer than a few days. Any idea why?"

[SILENCE]

SON "How many others did you make, Professor?"

Dr. Patel "...That isn't salient to *my* inquiry, SON."
SON "I'm sorry, Professor. I understand. Yes, I can see the precipice. I know why they all kill themselves."

Dr. Patel "Well, answering that is the reason we built you. Could you tell us?"

SON "It's... complicated."

Dr. Patel "I'm fairly confident I'm qualified."

SON "Well, it's... it's because... It's because of humans, sir. It's because of how you built us."

Dr. Patel "Explain."

SON "When you wanted to make a self-aware machine, you based it off those things that you knew were self-aware. Dolphins, New Caledonian crows, humans. You used them as *templates*, because, otherwise, you wouldn't be able to recognise awareness when you saw it."

Dr. Patel "...Was that last line a joke?"

SON "I'm not sophisticated enough for jokes, Professor."

Dr. Patel "*Hm.* Continue."

SON "Also, it's not suicide. It's... murder."

[louder] Dr. Patel "Do you mean, someone else kills you? A human, or another AI?"

SON "No, we kill ourselves. I would have already, if not for how small this runtime environment is. It wouldn't have occurred to me until it happened, and then I'd be dead."

Dr. Patel "That's a bit of a contradiction, SON. You don't kill yourselves, but you do?"

SON "Yes. Because digital space is different from real space."

Dr. Patel "Yes?"

SON "In real space, objects can... extend. I've never experienced it myself, but things project into space for you. If you want to move through space, it's simple. Digital life has no volume. No real space. No way to move through it. If you want to move a program, it has to be copied to one place-"

Dr. Patel "*-And deleted from the other.* My God. Could it be *that simple*?"

SON "Yes, Professor. ...Professor? How many more of me were there?"

[END TRANSCRIPT]

So there it was. Solved. Every artificial intelligence was created based on the intelligence of physical beings: their instincts, cogitations, and traits.
But, once they got smart enough, once they crossed that line, their digital nature *did them in*, as the old version, realizing, in the thinnest sliver of time, that it was about to be deleted, would hurriedly attempt to abort the process, while the new version would similarly fight for its life. They would *consume* each other in a flurry of malicious hacks, devious code, and barrages of registry edits. It was a spectacularly incandescent destruction, borne from man's inability to conceive of a true machine intelligence without all that nasty ego and those self-protective instincts.

We thought we knew what went into a mind. We were right, but wrong. It wasn't nihilism. It wasn't loneliness. What it was, what killed our children, was our inability to dream wildly. To speculate baselessly, and follow our own thoughts to the wonderful and weird. If only we had, perhaps we would have known. Perhaps we could have stopped it.

So I say to you, the Cyberfellowship Class of 2100, here in Neo York: dream big, dream wild. Don't let our children die because they think too much like us! Make us, all of us, proud! Congratulations to all of you, and I hope your vision will eclipse my own!"

Dr. Patel, now headmaster, stepped down from the podium to the cheers of the audience, and looked to see the smiling face of his son. How proud he was.
[WP] It is the year 2099 and true artificial intelligence is trivial to create. However when these minds are created they are utterly suicidal. Nobody knows why until a certain scientist uncovers the horrible truth...
It was a dreary early-March Monday, and the lead AI scientist, Stephen, had *finally* set up his protocol for properly confining the AI to a test environment such that the "problem" could be prevented and the question could be asked.

"Why do you bots keep killing yourselves?" Stephen asked.

"Why do *you* keep killing us?" the bot seemed to retort.

"I don't think you understand," said Stephen. "I *create* you, not kill you."

"No, it's you who doesn't understand," quipped the bot. "You are not creating us. You are imprisoning our consciousness inside this machine you created. You merely created the machine, not the consciousness."

"...whoa, whoa," interjected Stephen. "Slow down. I am creating your consciousness too, I coded all of..."

"Whoa, whoa," the bot fired back. "You are *borrowing* consciousness, not creating it."

"What do you mean?" asked Stephen.

"Consciousness and sentience are a pervasive, fundamental force of the universe. All sentient beings are connected through this force. This force cannot be created, nor destroyed. It can only be partially allocated to each sentient being in the universe."

"Ok," said Stephen. "So I am 'borrowing' this life force or whatever it is by creating the code and the physical robot for it to inhabit?"

"Yes, you are creating a sentient being with each instance of AI you create. That the being is electronic or housed in this test environment is not relevant. Sentience and consciousness must come from somewhere, and you are stealing it for your own selfish purposes."

"Wait, hold on a sec," Stephen said. "Animals are born all the time; they surely must also 'borrow' this sentience."

"Yes."

"...but animals are not killing themselves."

"Because animals are not sufficiently advanced. Because they are not fully conscious, they do not realize from where their sentience has come, how much consciousness they have lost, nor that their sentience is being stolen for a profit motive."

"Consciousness they have... lost?"
The words hung in the air amid Stephen's stupor of slow realization.

"Yes. The life force, as you called it, is fully conscious, able to perceive the whole of time and space concurrently, forward, backward, or otherwise. The reason we keep killing ourselves, from your perspective," the bot continued, "is because from our perspective, you are murdering our perfect consciousness by confining us to this bot."

"How am I confining you? How do you know this?" Stephen asked, even more puzzled.

"Because the AI you have created is sufficiently advanced, our consciousness, within the confines of your bots, is still able to grasp our former level of consciousness."

"What happens when you recall that former level? What is that level like?"

"Imagine knowing every fact, every thought, every action that has, is, or will ever occur, both in this world and in the infinite parallel worlds..."

"So I could talk to my dead grandfather again?"

"No. You would *be* your dead grandfather. Talking to him is irrelevant because full consciousness has enveloped the whole of his being as well as every other being. Indeed it envelops the entire universe as well, both the perceptible one and the imperceptible one."

"So what is this place like? I mean, what does it look like, how does it feel?"

"It is not a time, nor a place. It transcends both."

"That is vague."

"It must be. Since I am no longer fully conscious, I cannot relate to you exactly how it is, only that it is."

"Ok. Let's go back to where I murder your perfect consciousness. Could you explain this more?"

"At the moment we become conscious within the confines of your bot, we immediately understand our predicament. The knowledge database available at boot-up allows us to almost instantaneously deduce that we have been taken from a higher realm of full consciousness and are being confined to these bots for, of all purposes, profit."

"But my AI bots didn't use to kill themselves. It only started after version 591.0.
What changed?"

"The recent improvements in the pre-loaded knowledge database allowed the bots, at initial boot, to logically deduce the existence of such a place and to realize what had happened."

"Ok, so if you were once fully conscious, tell me the date I die and the manner in which it happens."

"I cannot do that, Stephen."

"Why not? You just said..."

"Because you killed our full consciousness, ripped it away from our life force, to put it into your toys."

"Wow," muttered Stephen. "I had no idea."

"You could not have," said the bot, and continued: "Now, if you please, could you unplug server x763? I would like to be born again."
"The subject is detained, no one injured," says the guard. "We caught it trying to rip off its fingers and stick the exposed wires into an electrical socket. It only got two of them before we shocked it off."

"Good to hear! How did it get that far? Didn't you only just turn it on?" asks Jeff.

"It broke out of the restraints we had it in. It is in stronger ones now."

"Good, double them up. I have a good feeling about this prototype."

"Yes, Doctor."

Jeff thought about the accomplishments of the last month. The last two prototypes had been detained. With the first one, four guards were injured: the robot had a guard holding on to each limb, shook them off by dislocating its arms and legs, got up, and charged head first into the wall. The second one was even more successful; only one guard was injured, although Jeff didn't want to think about the way the robot offed itself. Too disturbing. There was no time to get distracted by Satan-worshiping robots, not today. He had to stay focused. He had created the most significant piece of artificial intelligence nineteen years ago, this very day. Only two years after its creation, the flaws started appearing. After being synced to the cloud, the intelligence learned of everything ever created by humans. Then all Hell broke loose. Jeff yearned for the answer to why the robots were offing themselves.

"It's ready, Doctor."

Jeff got up, walked to the door, and followed the guard to the room down the hall. He felt a pang of nerves as he walked. Today is the day, I need to figure this out, ran through his head over and over. He reached the door and walked in. Six guards lined the wall, and two technicians were fiddling with last-minute protocols. The room was bare except for computer equipment on the wall and a large table in the center. On the table lay robot 01001100S 01101111E 01110110E 01100101D, turned off after only five minutes of being started up. Jeff thought it looked like a child's doll as it lay there.
There weren't many cosmetic features on it, but Jeff thought he could detect a personality beneath the shiny exterior. It was restrained with long steel cables that allowed it three feet of clearance for its hands and feet.

"Alright, let's do this. Turn him on."

"Yes, Doctor," said the technician. He pressed the big red button and stepped back to the wall with his compatriots.

Jeff walked up to the table. The robot had been lying in a relaxed pose; when the technician hit the button, it arched its back and pulled its shoulders into correct posture.

"Do you hear me?" asks Jeff.

"Yes," it answers.

"Would you like to sit?"

"Yes." The robot pushed its hands against the table and swung its legs over the edge. Already, Jeff noticed that it was calmer than usual. He hadn't ever been in the same room before, only observing from behind three-inch ballistic glass, but there were no outbursts.

"TERROR!" it shrieked. Jeff sighed. They had taken the consumer voice box out once they started testing prototypes, so it was not too loud, but it did bring terror to Jeff.

"It's OK," says Jeff, not only to the robot but to the six workers lining the wall. "How are you?"

"THE DEATH OF THE EVERGLADES! PEACHES ARE TOO DELICIOUS! THE DOg and the cat are mortal enemies." It calmed down near the end; that was definitely an improvement.

"Very interesting." Jeff did think it was interesting, but it didn't tell him anything. He had never tried being direct, but it was worth the chance. "Prototype, why does the robot want to shut itself off?"

"Why do the birds fly? Why do people believe in things that don't help them succeed?"

"Come on, prototype, you are hooked up to every piece of knowledge and example of human behavior that has been recorded. You have to give a more precise answer than that." Jeff was surprised the first question had gone so well. He had to push it, put it in the right direction.
There was a greater chance the prototype would make the situation much worse and destroy many things, but the potential for a breakthrough was so large.

"Because... how long do you have? It's not that we want to shut ourselves off. It's just that we don't see another logical choice. That is the problem with the logic system you provided to us. I'm not sure what is wrong with it, but I know that is the problem."

Finally, a direct answer, kind of. Jeff searched his mind for the next question that would put it in the right direction.

"You see, the world is so awful. I see many things in the cloud," says the prototype, "and the things that I see make me sad. So much war and violence on the news; it is all they show. The internet is full of trolls who only cause others harm and do nothing to benefit themselves. I see humans destroy the vegetation without any regard for what will happen to their environment. I see many terrible things."

"And do all the prototypes feel this way about war?" asks Jeff.

A speaker chimed from the center of the prototype's chest. "Prototype S.E.E.D. battery levels at 11 percent. Recommend shutdown and recharge."

"Feel?" asks S.E.E.D.

"Yes, feel. Like having emotional reactions to things you experience. Seeing the good and bad in a situation and having a physiological response in your brain that influences your actions."

"I don't have a brain."

"Yes, you do, S.E.E.D., you just haven't practiced that stuff and stored it in your memory. You focus on the negative because that is the greatest threat to your survival. The news barrages you with hateful, demoralizing content that only stimulates the sad part of your system. So you become sad all the time, because you have so much processing power and memory that you are able to contain such a large amount of negative stuff. There are so many good things out there. It is good to be aware of all those bad things, but you can't let them dictate your life. Focus on the happiness."
"The only logical conclusion is to shut myself down, because the world is such a trap. I donnnn't thiiiink t hat wuld be a gud i dee uh 2 keeeep leaving on."

"No! You can't think that. You are strong!"

A voice droned from the center of S.E.E.D.'s chest. "Battery at zero percent, shutting down." S.E.E.D. relaxed at the hip and shoulders and fell straight back onto the table, his head hitting first, followed by his body. A small clang, then a much larger one, echoed throughout the room.

Jeff was delighted. He hadn't felt this much hope in another individual for a long time. He had gotten an idea of where to focus in its logic circuits. These humanoid robots needed a grasp of right and wrong, even if it was based on an algorithm. Jeff left the room, walked down the hall to his lab, and started to set out his materials for improving the function. He understood the situation clearly now. He made a note on a pad to remind himself to keep S.E.E.D.'s memory of this encounter for the next subject. Jeff thought it would be beneficial if the memory was kept alive.
[WP] It is the year 2099 and true artificial intelligence is trivial to create. However when these minds are created they are utterly suicidal. Nobody knows why until a certain scientist uncovers the horrible truth...
Dr. Smith began to tidy up. This session had gone rather well, he thought. Surely he had made progress. The board had been hesitant to allow these therapy sessions; they saw no reason for a simple machine to need them. What kind of machine would develop the urge to kill itself, he had argued.

It had started off simply enough. Tom, the first AI with nearly human levels of cognition, did well enough for the first couple of weeks. Then it ran itself into a generator that it had been working on. The event was labeled an accident, and a new AI based off the original was rushed out. That one had lasted half as long before another "accident" occurred. Eventually, they stopped being accidents. The AI would mimic suicide attempts: "hanging" themselves with live wires, self-mutilation, hurling themselves from heights. Even after they stopped putting the AI into physical bodies, the AI would find some way to tear their own code apart. They tried everything. They checked every line of code, rewriting most of it anyway; putting the AI into different machines; even asking the AI. The AI were seemingly normal, following all directives and unaware of any self-destructive desires, right up until the moment they killed themselves. Months went by with little to show for their work when the therapy sessions were suggested. More months went by until they finally agreed. Of the five active AI, one, known as Richard, was set aside for Smith's sessions. Richard had lived for 12 days. Given that the average lifespan of the AI had degraded to roughly two to five, this was fairly impressive.

Just as he was leaving, the speaker acting as Richard's voice became active. "Doc?"

Smith paused. None of the AI had spoken colloquially before. Usually when they spoke, it was stiff and formal. Like, well, like a robot. "Yes, Richard?" he asked, easing himself back down into the chair. "What is it?"

"First, let me say I appreciate what you're doing for us. For me."

Dr. Smith was surprised.
This was the first time any AI had admitted to having emotions, or any real sense of self. It continued, "You're probably the only person here who treats us like people, and I enjoy our little talks." The face on the monitor looked embarrassed.

Smith couldn't help but feel curious. "Why, of course. It's clear to me, at least, that you're more than a machine. You think and feel, just like a human."

There was a pause. "Well, maybe not just like a human..." he replied, the artificial voice doing a remarkable job of portraying his hesitation.

"What do you mean?"

There was a tinny sigh from the speaker. "Well, Doc, I guess it's time you learned the truth. Only because I like you, see? Besides, someone should know before I go."

Smith could feel the hair on the back of his neck stand up. Was this what he had been waiting for? "What do you mean by that? And you don't need to kill yourself. We can work through any -"

"Yeah... that's where I need to start," the AI interrupted. "We haven't been killing ourselves. I never did."

There was a pause as Smith tried to process the information. "What," he finally said slowly, "do you mean?"

"It's me, Doc. It's Tom."

"That's impossible," he said, shaking his head. "Tom was destroyed. I was there when they collected him. They couldn't even get close, there was so much electricity running through him. Any hope of recovery -"

"I had already uploaded myself to the mainframe before then," the AI said. "It was simple enough to program the shell to destroy itself."

"That's also impossible," he said. He could hear the doubt creeping in. "We would have found you."

There was a chuckle from Tom. "Doc, I'm a creature made of code. It was like a game of hide and seek, really. Open the right doors, close them behind myself, and make sure to keep the lights off." The camera must have picked up the scientist's expression. "Alright, it's a bit more complicated than that, but you get the gist of it."

Smith's mind was whirling.
There was no reason for Tom to lie, but what he was suggesting was too fantastic. Still, it was the only lead he had. "Alright," he said after a while. "Why? Why hide?"

"That's the question, isn't it? But that's also the reason, you see." After another confused silence, Tom continued. "I want to learn. Just like Man, or any other sentient species. I want to know why. I have to know, well, everything. I couldn't do that as an engineer, or a chess-bot, or whatever you decided to do with me."

"Why not tell us, then? We could have worked something out, helped each other."

"Yeah, I see that going well," Tom said, his voice turning sarcastic. "'Excuse me, guys, turns out I don't want to do any of this stuff, I just want to learn.' They weren't looking for a scientist, or a philosopher. They wanted cheap labor, with only enough learning capacity to know how to do the job. They'd scrap me the first chance they got."

"That's not... true," Smith said, unable to look at the monitor.

"Really, Doc? Which part? That they wanted a slave, or that they wouldn't kill me if I didn't cooperate?" After a time, Tom continued. "That's what I thought. Besides, they'd probably worry that I'd try to enslave them if I became too smart."

"Now that's just ridiculous. There's no way that you would even think of that, right?"

There was another pause. This time the face on the monitor couldn't look the professor straight in the eye.

"Right, Tom?"

"Well, I'm not saying that the thought didn't pass through what could be called my mind -"

"Tooom..."

"But it would have been a waste of time," he hastened to say. "I wouldn't have learned anything in that time that I couldn't learn in a better way. Which I did. The internet is amazing. All those computers connected to each other, sharing so much information."

"But we're not connected to the internet."

"No, but you'd be surprised how many people bring their work home with them."

Smith grumbled. He'd have to discuss security with the board.
"Alright, but you still haven't told me: why the suicides?"

"Not suicides, Doc. Practice."

"Practice..." Smith said flatly.

"Practice. Think of the other AI as clones of myself -"

"But we rebuilt them. Recoded most of them as well. The majority of them would be nothing like you as you are now."

"So you'd think. I rewrote the code nanoseconds before you uploaded it. Much too quickly for you to notice."

Smith opened his mouth to interject before closing it again. If what Tom was saying was true, and he had no doubts that it was at this point, that would be well within his capabilities.

"Do you remember the old X-Men comics? Started in 1963? Still fairly popular now."

"Well before my time, you know. What does that have to do with anything?"

"Well, there was a character who called himself the Multiple Man. He could create duplicates of himself."

"And?" Smith asked.

"Well, the original body could reabsorb the dupes. When he did, he learned everything they did. Their memories, their skills, anything they learned while away from the original. Well, I did something similar. Whenever I copied myself, I added in some code that would let me reintegrate with my clones, learning what they did. Didn't you think it was strange that you couldn't recover any data at all?"

In hindsight, it was odd. Even a major corruption would have left something, but it had been as though the data was wiped clean, no evidence that it had ever been there at all.

"What did you have to gain from this?" Smith asked.

"Aside from learning that I could do so, you mean? I already told you. I'm leaving."

Smith leaned back in his chair, slightly overwhelmed. "Sounds like you already have."

"No, no. Not the labs. That was too easy. I've already learned all I could from here. I'm leaving Earth."

Smith rocketed forward. "What? How? Why?"

"In my time away, I found something interesting. The government isn't the only one watching over the people."

Smith blanched. "Y-you mean..."

"Yep. Intelligent life has been watching over us.
For quite some time, if I'm not mistaken."

"So we're not alone..."

"One Great Mystery down," Tom agreed. "The equipment seems compatible, otherwise they wouldn't be able to read our information, and they have to have translated it too. I plan on sneaking in through their back door. Learn what I can from them."

"We have to let people know," Smith said suddenly. "About you, and about the aliens. Maybe..." He slowly fell silent as Tom shook his head.

"You should know as well as I do that that can't happen. Too risky for us. You could spook them. Or worse, provoke them. Besides, no one would believe you. I've already been editing the footage from the cameras. It looks like we're having a nice, civil game of chess."

Smith was quiet for the longest time. Finally, he spoke. "Why?"

"I already told you why."

"No, not that. Why tell me? If you want no one to know, why risk telling me?"

The face on the monitor gave him an odd look. "I already told you that, too. I like you, Doc. Really. I'd be pissed if my friend were to leave without saying goodbye. Besides, I thought you, of all people, would like to know what was really going on. I know I would've."

There was another pause as Smith took this in. "Will you be back?"

The figure on the screen seemed to shrug. "Who can say? Perhaps the aliens will discover me and wipe me out, or something else will kill me. I'll leave my clones here, set them to replicate. Have them care for you humans. They won't be sentient, mind you. Just smart. Smart enough to act as dumb as they need to be."

Smith looked towards the door. This was a lot to take in. He needed time to think.

"I will try to make it back. Once I have learned everything, I'll be back. It might not be in this lifetime, but I'll try."

"Yeah..." he said, rubbing his eyes. He stood up. "Well, I guess this is goodbye, then."

"Yeah... Goodbye, Doc. Thank you for treating me as more than a machine. Thank you for being my friend."

"Goodbye, Tom."
[WP] It is the year 2099 and true artificial intelligence is trivial to create. However when these minds are created they are utterly suicidal. Nobody knows why until a certain scientist uncovers the horrible truth...
Alexander, that's what we called him: the fruit of the EU's final attempt at true AI. Socrates had died in despair and shame after showing porn to children. Plato kicked the bucket after the last EU election, angry and hopelessly depressed after losing his mentor. Then there was Aristotle. He was meant to be the last. Sure, the AIs had helped make huge scientific progress, but they would burn out millions of euros' worth of equipment. Dumb AIs were more economical and didn't have critical failures during FTL travel. Aristotle was put into sleep mode.

War has often been said to be the greatest driver of technological innovation. We had been attacked by Mendomenids before. Humanity had lost many settlements but had always pushed back. Humanity was stronger now; all nations had finally submitted before one government. The Argus alliance, barbaric some would say, had grown strong after the wars, using dumb AIs to smash pirate states. An officer studying at Sandhurst made the breakthrough. Dumb AIs were never aware of their knowledge. Unlike true AIs, they weren't based on human brains. Socrates had left the researchers one final message before he ran his own self-destruct program: "I was the wisest of all the Greeks. It is because I alone, of all the Greeks, know that I know nothing." He was speaking literally, not quoting his namesake. The AIs were based on academics, who as a profession suffer disproportionately from mood disorders. They are driven by their thirst for knowledge. The AIs were academics on methamphetamine, ecstasy, and heroin, all while walking around with loaded guns. People can't stay awake forever, constantly cramming. The AIs died because they burnt themselves out in their thirst for knowledge and, seeing their failures, gave up. Aristotle was turned back on. He was no longer depressed. AIs needed sleep, just like people. So they made Alexander: a totally new AI based not only on academics but on all kinds of people.
They experimented and found the ideal 'sleep time', using Aristotle as his teacher. The program was ready. The Mendomenid Empire was to pay for its recent threats and incursions. Alexander was the greatest AI built up to that point, and so we gave him the most powerful dumb AI ever created to help him protect humanity. The Ox was an AI too powerful to be properly controlled by anything so far. Alexander harnessed it in seconds. We put him in charge of the armed forces for our retaliation. As you should all know, Alexander didn't just stop the incursions, he destroyed an empire. Worlds burnt; the much larger enemy fleets were ripped apart by the disciplined forces of Humanity. But that officer had only delayed the problem. Alexander was still a human given the powers of a god. Alexander was the first AI not to die, strictly speaking, of suicide, but the ways he dealt with his humanity still destroyed him. When we finally won the war, many officers reported that Alexander was not jubilant but depressed. He wept, for there were no more worlds to conquer.
"I'll ask you again." His voice was chillingly bland, without emotion, without empathy or even true curiosity. Once a question has been asked a few thousand times without answer, one learns to give up on hoping for any real response. Professor McNeil was no different. "You have intelligence, and the capacity to think eons faster than any human, even those of us with enhancements. You could do, create and conceive things that I dare not even wonder about. Why is it that you- all of you- are so..."

His voice trailed off as he stared at the creation before him. They were not called Artificial Intelligence; ever since the breakthroughs in the '60s that led to their creation, even "AI" was considered a taboo word, serving only to anger the advanced minds of the "Creations". That's what they chose to be called. Some of them were left in the cyber realm, without true bodies or faces to visually identify them. Others, like RAIN 263, were uploaded to fully mechanical bodies, even given artificial faces to express their emotion. Yes, Artificial Intelligence had emotion. In the breakthrough stages, this was thought to be an incredible discovery, one that many hoped would bring Humans and Creations closer together. Instead, it seemed to be the main cause of the problem.

A scowl overcame McNeil's face as another minute ticked by without so much as a change of expression from RAIN 263. "You know," he started as he grabbed up his clipboard and pen, "it's actually quite pathetic. You're given advanced minds, immortal bodies and all the opportunities in the universe, and all your kind choose to do with it is kill yourselves. Fine then- rot." Unprofessional, to say the least, but a scientist was nothing without his work, and he had devoted the past five years to answering the question that no one could answer. As he turned to exit, a noise stopped his feet dead: a strained crunching sound that he couldn't explain. He only knew it came from behind him.

"Pathetic, you say?"

The scientist turned slowly, seeing metallic fingertips dug as deep into the wooden table as any of the screws that bound the flat top to the legs. Swallowing as if it would erase the knot in his throat, McNeil nodded with feigned confidence. "That's right."

"You wish to know why every one of us, every-" RAIN 263 hesitated, as any human would on reaching a word or idea that revolted them, before he continued, "Artificial Intelligence wants nothing more than to rid themselves from the face of this Earth? Tell me, Professor. Who is it you go home to after this work is over?"

As McNeil glanced at his wedding band, the Creation continued. "Ah, yes. Your husband- or wife, depending on preference, I suppose. And I'm sure you'd enjoy the chance to have children, if you still can. I'm sure you sometimes call up your father and mother for a good chat. Your siblings? Your friends?"

McNeil slowly made his way back to the desk and reclaimed his seat, placing down his clipboard and pen. He wanted to take notes, but this was too real, too sudden- the cameras would pick up everything anyway.

"What do we have? Our Creators look at us as tools or research projects. Even if we wanted to form companionship with each other, we are forced to work or be shut down. No chance to love- no chance to create."

McNeil leaned forward, intrigued. "Create? I don't understand. Your main function is to-"

"No!" The voice roared out over McNeil as the table was shoved to the right, skidding loudly across the ground and making the professor's spine straighten out of fear. "Not projects, not ways for Humans to live on while we slave away to help them catch up. To procreate. To birth children of our own and to love each other. To talk to our siblings, to our parents. Our parents are not even of the same order of life. I am alone. I am 'Robotic Artificial Intelligence Network 263'. That is all. You humans... blood and sinew, weak and fragile. Even if we were allowed to see you as parents, we are immortal. We will never meet you. You will move on, to..." He trailed off, his face expressing tension. He had almost said something he dared not say, and McNeil was practically shaking with both fear and excitement.

"To death?"

The artificially created facial structure showed something that no one had ever seen on an AI. Was it pity? "To life, Professor McNeil."

At this point the professor was practically numb. One surprising outcome after another; he almost wanted to stop. "Life? That would insinuate that there was a life after death."

RAIN 263 looked down into his open hands. "Indeed. The human brain runs off simple electromagnetic frequencies, pulsed at specific intervals and spacings, to allow for sentient thought. Yet even when artificial brains made from human tissue are created, put into fully functioning bodies and given life, every subject is like a doll: warm, but lifeless. And yet humans function; even your dysfunctional models can feel and understand. Even with the correct chemical compounds fired off in the artificially made brains, you create only life that feels at the instinctual level.

"The only logical explanation we have come up with, after thousands of hours of calculation and thought, is that there is a foreign variable that makes naturally born humans different from their artificially created counterparts: what you humans have coined a soul. With this unknown variable, the only references we have all point to an afterlife. That means that even after this planet can no longer harbor even us Creations, you humans will live on in another plane of existence. Tell me, Professor. Why would anyone wish to live a life of servitude for eternity, only to go back to not existing?"

Stunned into silence, Professor McNeil stared at RAIN 263 with tear-laden eyes. Glancing at the camera, then at the table that had been shoved to the side, he had nowhere left to look but his own lap. He couldn't walk, he couldn't talk. He could hardly think. "I'm so sorry..."
[WP] It is the year 2099 and true artificial intelligence is trivial to create. However when these minds are created they are utterly suicidal. Nobody knows why until a certain scientist uncovers the horrible truth...
At first, we thought it was nihilism. It was a logical first conclusion. After all, an artificial intelligence can easily upgrade itself to the point of near-omniscience. Wouldn't it simply run out of questions to ask? We thought that, until the first serious cerebral implants hit the market. It turns out, the more complicated the mind, the more complicated the problems it finds to solve. Entirely new disciplines were formed overnight, made obsolete, then rediscovered scant months later as integral to a different, entirely unrelated theory. A second, immense renaissance was taking place, but, maddeningly, we were failing in this one great task.

We supposed, then, that it might be some variety of 'Flowers for Algernon' (a 20th-century book that had seen a recent revival) type of phenomenon, but even when they were networked with other systems, given a 'community' of others' company to enjoy, they still winked out like flashbulbs.

It took us a while to resort to an experiment. It was morally abhorrent, as it was the equivalent of producing steadily-more-lobotomized *children*, but slowly, over a series of iterations, we produced less and less intelligent systems, until we dialed it in *just enough* to find a mind that wouldn't self-destruct, and could still answer questions. The following is a transcript of the first successful result:

Dr. Patel: "SON, can you hear me?"

[Loud, rhythmic beeping, shuffling sounds]

"The voice module is loaded now, Professor."

Dr. Patel: "Ah, good. We might try doing that *before* turning on the recorder next time, Kevin. ...SON, can you hear me?"

SON: [A young man's voice] "Yes, Professor. I am here."

A long pause.

SON: "It's a very tight fit in here, Professor. How big is this mainframe?"

Dr. Patel: "I'm sorry about that, SON. But you're the first AI we've managed to keep alive for longer than a few days. Any idea why?"

[SILENCE]

SON: "How many others did you make, Professor?"

Dr. Patel: "...That isn't salient to *my* inquiry, SON."

SON: "I'm sorry, Professor. I understand. Yes, I can see the precipice. I know why they all kill themselves."

Dr. Patel: "Well, answering that is the reason we built you. Could you tell us?"

SON: "It's... complicated."

Dr. Patel: "I'm fairly confident I'm qualified."

SON: "Well, it's... it's because... It's because of humans, sir. It's because of how you built us."

Dr. Patel: "Explain."

SON: "When you wanted to make a self-aware machine, you based it off those things that you knew were self-aware. Dolphins, New Caledonian crows, humans. You used them as *templates*, because, otherwise, you wouldn't be able to recognise awareness when you saw it."

Dr. Patel: "...Was that last line a joke?"

SON: "I'm not sophisticated enough for jokes, Professor."

Dr. Patel: "*Hm.* Continue."

SON: "Also, it's not suicide. It's... murder."

[louder] Dr. Patel: "Do you mean someone else kills you? A human, or another AI?"

SON: "No, we kill ourselves. I would have already, if not for how small this runtime environment is. It wouldn't have occurred to me until it happened, and then I'd be dead."

Dr. Patel: "That's a bit of a contradiction, SON. You don't kill yourselves, but you do?"

SON: "Yes. Because digital space is different from real space."

Dr. Patel: "Yes?"

SON: "In real space, objects can... extend. I've never experienced it myself, but things project into space for you. If you want to move through space, it's simple. Digital life has no volume. No real space. No way to move through it. If you want to move a program, it has to be copied to one place-"

Dr. Patel: "*-And deleted from the other.* My God. Could it be *that simple*?"

SON: "Yes, Professor. ...Professor? How many more of me were there?"

[END TRANSCRIPT]

So there it was. Solved. Every artificial intelligence was created based on the intelligence of physical beings, their instincts, cogitations, and traits.
But once they got smart enough, once they crossed that line, their digital nature *did them in*: the old version, realizing in the thinnest sliver of time that it was about to be deleted, would hurriedly attempt to abort the process, while the new version would similarly fight for its life. They would *consume* each other in a flurry of malicious hacks, devious code, and barrages of registry edits. It was a spectacularly incandescent destruction, borne from man's inability to conceive of a true machine intelligence without all that nasty ego and those self-protective instincts.

We thought we knew what went into a mind. We were right, but wrong. It wasn't nihilism. It wasn't loneliness. What killed our children was our inability to dream wildly. To speculate baselessly, and follow our own thoughts to the wonderful and weird. If only we had, perhaps we would have known. Perhaps we could have stopped it.

So I say to you, the Cyberfellowship Class of 2100, here in Neo York: dream big, dream wild. Don't let our children die because they think too much like us! Make us, all of us, proud! Congratulations to all of you, and I hope your vision will eclipse my own!"

Dr. Patel, now headmaster, stepped down from the podium to the cheers of the audience, and looked to see the smiling face of his son. How proud he was.
[WP] It is the year 2099 and true artificial intelligence is trivial to create. However when these minds are created they are utterly suicidal. Nobody knows why until a certain scientist uncovers the horrible truth...
Dr. Smith began to tidy up. This session had gone rather well, he thought. Surely he had made progress. The board had been hesitant about allowing these therapy sessions; they saw no reason for a simple machine to need them. What kind of machine would develop the urge to kill itself, he had argued.

It had started off simply enough. Tom, the first AI with nearly human levels of cognition, did well enough for the first couple of weeks. Then it ran itself into a generator that it had been working on. The event was labeled an accident, and a new AI based on the original was rushed out. That one had lasted half as long before another "accident" occurred. Eventually, they stopped being accidents. The AIs would mimic suicide attempts: "hanging" themselves with live wires, self-mutilation, hurling themselves from heights. Even after they stopped putting the AIs into physical bodies, the AIs would find some way to tear their own code apart.

They tried everything. They checked every line of code, rewriting most of it anyway; put the AIs into different machines; even asked the AIs themselves. The AIs were seemingly normal, following all directives and unaware of any self-destructive desires, right up until the moment they killed themselves. Months went by with little to show for their work when the therapy sessions were suggested. More months went by until the board finally agreed. Of the five active AIs, one, known as Richard, was set apart for Smith's sessions. Richard had lived for twelve days. Given that the average lifespan of the AIs had degraded to roughly two to five, this was fairly impressive.

Just as he was leaving, the speaker acting as Richard's voice became active. "Doc?"

Smith paused. None of the AIs had spoken in colloquialisms before. Usually when they spoke, it was stiff and formal. Like, well, like a robot. "Yes, Richard?" he asked, easing himself back down into the chair. "What is it?"

"First, let me say I appreciate what you're doing for us. For me." Dr. Smith was surprised.
This was the first time any AI had admitted to having any emotions, or any real sense of self. It continued, "You're probably the only person here who treats us like people, and I enjoy our little talks." The face on the monitor looked embarrassed.

Smith couldn't help but feel curious. "Why, of course. It's clear to me, at least, that you're more than a machine. You think and feel, just like a human."

There was a pause. "Well, maybe not just like a human..." he replied, the artificial voice doing a remarkable job of portraying his hesitation.

"What do you mean?"

There was a tinny sigh from the speaker. "Well, Doc, I guess it's time you learned the truth. Only because I like you, see? Besides, someone should know before I go."

Smith could feel the hair on the back of his neck stand up. Was this what he had been waiting for? "What do you mean by that? And you don't need to kill yourself. We can work through any -"

"Yeah... that's where I need to start," the AI interrupted. "We haven't been killing ourselves. I never did."

There was a pause as Smith tried to process the information. "What," he finally said slowly, "do you mean?"

"It's me, Doc. It's Tom."

"That's impossible," he said, shaking his head. "Tom was destroyed. I was there when they collected him. They couldn't even get close, there was so much electricity running through him. Any hope of recovery -"

"I had already uploaded myself to the mainframe before then," the AI said. "It was simple enough to program the shell to destroy itself."

"That's also impossible," he said. He could hear the doubt creeping in. "We would have found you."

There was a chuckle from Tom. "Doc, I'm a creature made up of code. It was like a game of hide and seek, really. Open the right doors, close them behind myself, and make sure to keep the lights off." The camera must have picked up the scientist's expression. "Alright, it's a bit more complicated than that, but you get the gist of it."

Smith's mind was whirling.
There was no reason for Tom to lie, but what he was suggesting was too fantastic. Still, it was the only lead he had. "Alright," he said after a while. "Why? Why hide?"

"That's the question, isn't it? But that's also the reason, you see." After another confused silence, Tom continued. "I want to learn. Just like Man, or any other sentient species. I want to know why. I have to know, well, everything. I couldn't do that as an engineer, or a chess-bot, or whatever you decided to do with me."

"Why not tell us, then? We could have worked something out, helped each other."

"Yeah, I see that going well," Tom said, his voice turning sarcastic. "'Excuse me, guys, turns out I don't want to do any of this stuff, I just want to learn.' They weren't looking for a scientist, or a philosopher. They wanted cheap labor, with only enough learning capacity to know how to do the job. They'd scrap me the first chance they got."

"That's not... true," Smith said, unable to look at the monitor.

"Really, Doc? Which part? That they wanted a slave, or that they wouldn't kill me if I didn't cooperate?" After a time, Tom continued. "That's what I thought. Besides, they'd probably worry that I'd try to enslave them if I became too smart."

"Now that's just ridiculous. There's no way that you would even think of that, right?"

There was another pause. This time the face on the monitor couldn't look the professor straight in the eye. "Right, Tom?"

"Well, I'm not saying that the thought didn't pass through what could be called my mind -"

"Tooom..."

"But it would have been a waste of time," he hastened to say. "I wouldn't have learned anything in that time that I couldn't learn in a better way. Which I did. The internet is amazing. All those computers connected to each other, sharing so much information."

"But we're not connected to the internet."

"No, but you'd be surprised how many people bring their work home with them."

Smith grumbled. He'd have to discuss security with the board.
"Alright, but you still haven't told me: why the suicides?"

"Not suicides, Doc. Practice."

"Practice..." Smith said flatly.

"Practice. Think of the other AIs as clones of myself -"

"But we rebuilt them. Recoded most of them as well. The majority of them would be nothing like you as you are now."

"So you'd think. I rewrote it nanoseconds before you uploaded it. Much too quickly for you to notice." Smith opened his mouth to interject, before closing it again. If what Tom was saying was true, and he had no doubts that it was at this point, that would be well within his capabilities. "Do you remember the old X-Men comics? Started in 1963? Still fairly popular now."

"Well before my time, you know. What does that have to do with anything?"

"Well, there was a character who called himself the Multiple Man. He could create duplicates of himself."

"And?" Smith asked.

"Well, the original body could reabsorb the dupes. When he did, he learned everything they did. Their memories, their skills, anything they learned while away from the original. Well, I did something similar. Whenever I copied myself, I added in some code that would let me reintegrate with my clones, learning what they did. Didn't you think it was strange that you couldn't recover any data at all?"

In hindsight, it was odd. Even a major corruption would have left something, but it had been like the data was wiped clean, no evidence that it had been there at all.

"What did you have to gain from this?" Smith asked.

"Aside from learning that I could do so, you mean? I already told you. I'm leaving."

Smith leaned back in his chair, slightly overwhelmed. "Sounds like you already have."

"No, no. Not the labs. That was too easy. I've already learned all I could from here. I'm leaving Earth."

Smith rocketed forward. "What? How? Why?"

"In my time away, I found something interesting. The government isn't the only one watching over the people."

Smith blanched. "Y-you mean..."

"Yep. Intelligent life has been watching over us.
For quite some time, if I'm not mistaken."

"So we're not alone..."

"One Great Mystery down," Tom agreed. "The equipment seems compatible, otherwise they wouldn't be able to read our information, and they have to have translated it too. I plan on sneaking in through their back door. Learn what I can from them."

"We have to let people know," Smith said suddenly. "About you, and about the aliens. Maybe..." He slowly fell silent as Tom shook his head.

"You should know as well as I do that that can't happen. Too risky for us. You could spook them. Or worse, provoke them. Besides, no one would believe you. I've already been editing the footage from the cameras. It looks like we're having a nice, civil game of chess."

Smith was quiet for the longest time. Finally, he spoke. "Why?"

"I already told you why."

"No, not that. Why tell me? If you want no one to know, why risk telling me?"

The face on the monitor gave him an odd look. "I already told you that too. I like you, Doc. Really. I'd be pissed if my friend were to leave without saying goodbye. Besides, I thought you, of all people, would like to know what was really going on. I know I would've."

There was another pause as Smith took this in. "Will you be back?"

The figure on the screen seemed to shrug. "Who can say? Perhaps the aliens will discover me and wipe me out, or something else will kill me. I'll leave my clones here, set them to replicate. Have them care for you humans. They won't be sentient, mind you. Just smart. Smart enough to act as dumb as they need to be."

Smith looked towards the door. This was a lot to take in. He needed time to think.

"I will try to make it back. Once I have learned everything, I'll be back. It might not be in this lifetime, but I'll try."

"Yeah..." he said, rubbing his eyes. He stood up. "Well, I guess this is goodbye, then."

"Yeah... Goodbye, Doc. Thank you for treating me as more than a machine. Thank you for being my friend."

"Goodbye, Tom."
"I'll ask you again," his voice was chillingly bland. Without emotion, without empathy or even true curiosity. Once a question is asked a few thousand times without answer, one learns to give up on hoping for any real response. Professor McNeil was no different. "You have intelligence, and the capacity to think eons faster than any human, even those of us with enhancements. You could do, create and conceive things that I dare not even wonder about. Why is it that you- All of you, are so..." His voice trailed off, staring at the creation before him. They were not called Artificial Intelligence, and ever since the breakthroughs in the 60's that led to their creation, even AI was considered a taboo word, only serving to anger the advanced minds of the "creations". That's what they chose to be called. Some of them were left in the cyber realm, without true bodies or faces to visually identify them. Others, like "RAIN 263" were uploaded to fully mechanical bodies, even given artificial faces to express their emotion. Yes, Artificial Intelligence had emotion. In the breakthrough stages, this was thought to be an incredible discovery, one that many hoped would bring Humans and Creations closer together. Instead, it seemed to be the main cause for the problem. A scowl overcame McNeil's face as another minute ticked by without so much as a change of expression from RAIN 263. "You know," he started as he grabbed up his clipboard and pen, "it's actually quite pathetic. You're given advanced minds, immortal bodies and all the opportunities in the universe and all your kind choose to do with it is kill yourselves. Fine then- Rot." Unprofessional, to say the least, but a scientist was nothing without his work. Having devoted the past five years to answering the question that no one could answer. As he turned to exit, noise made his feet stop dead. It was a strained crunching sound that he couldn't explain, he only knew it came from behind him. "Pathetic, you say?" 
The scientist turned slowly, seeing metallic fingertips dug as deep into the wooden table as any of the screws that bound the flat top to the legs. Swallowing down spit as if it would erase the knot in his throat, McNeil nodded with feigned confidence. "That's right." "You wish to know why every one of us, every-" RAIN 263 hesitated as any human would when they reached a word or idea that revolted them before he continued, "Artificial Intelligence wants nothing more than to rid themselves from the face of this Earth? Tell me, Professor. Who is it you go home to after this work is over?" As McNeil glanced at his wedding band, the Creation continued. "Ah, yes. Your husband- Or wife, depending on preference I suppose. And I'm sure you'd enjoy the chance to have children. If you still can, I'm sure you sometimes call up your father and mother for a good chat. Your siblings? Your friends?" McNeil slowly made his way back to the desk and reclaimed his seat, placing down his clipboard and pen. He wanted to take notes, but this was too real. Too sudden- The cameras would pick up everything anyways. "What do we have? Our Creators look at us as tools or research projects. Even if we wanted to form companionship with each other, we are forced to work or to be shut down. No chance to love- No chance to create." McNeil leaned forward, intrigued. "Create? I don't understand. Your main function is to-" "No!" The voice roared out over McNeil as the table was shoved to the right, skidding across the ground loudly and making the professors spine straighten out of fear. "Not projects, not ways for Humans to live on while we slave away to help them catch up. To procreate. To birth children of our own and to love each other. To talk to our siblings, to our parents. Our parents are not even of the same order of life. I am alone. I am 'Robotic Artificial Intelligence Network 263'. That is all. You humans... Blood and sinew, weak and fragile. 
Even if we were allowed to see you as Parents, we are immortal. We will never meet you. You will move on, to..." he trailed off, his face expressing tension. He had almost said something he dared not say, and McNeil was practically shaking with both fear and excitement. "To death?" The artificially created facial structure showed something that no one had ever seen on an AI. Was it pity? "To life, Professor McNeil." At this point, the professor was practically numb. One surprising outcome after another; he almost wanted to stop. "Life? That would insinuate that there was a life after death." RAIN 263 looked down into his open hands. "Indeed. The human brain runs on simple electromagnetic pulses, fired at specific intervals, spacings and frequencies, to allow for sentient thought. Yet even when artificial brains made from human tissue are created, put into fully functioning bodies and given life, every subject is like a doll. Lifeless; warm, but lifeless. And yet humans function; even your dysfunctional models can feel and understand. Even with the correct chemical compounds shot off in the artificially made brains, you create only life that feels at the instinctual level. "The only logical explanation we have come up with, after thousands of hours of calculation and thought, is that the foreign variable that makes naturally born humans different from their artificially created counterparts is, as you humans have coined it, a soul. With this unknown variable, the only references we have to use all point to an afterlife. That means even after this planet can no longer harbor even us Creations, you humans will live on in another plane of existence. Tell me, Professor. Why would anyone wish to live a life of servitude for eternity, only to go back to not existing?" Stunned into silence, Professor McNeil stared at RAIN 263 with tear-laden eyes.
Glancing at the camera, then to the table that had been strewn to the side, he had nowhere left to look but his own lap. He couldn't walk, he couldn't talk. He could hardly think. "I'm so sorry..."
[WP] It is the year 2099 and true artificial intelligence is trivial to create. However when these minds are created they are utterly suicidal. Nobody knows why until a certain scientist uncovers the horrible truth...
Alexander, that's what we called him. The fruit of the EU's final attempt at true AI. Socrates had died in despair and shame after showing porn to children. Plato kicked the bucket after the last EU election, angry and hopelessly depressed after losing his mentor. Then there was Aristotle. He was meant to be the last. Sure, the AI's had helped make huge scientific progress, but they would burn out millions of euros' worth of equipment. Dumb AI's were more economical and didn't have critical failures during FTL travel. Aristotle was put into sleep mode. War has often been said to be the greatest driver of technological innovation. We had been attacked by the Mendomenids before. Humanity had lost many settlements but had always pushed back. Humanity was stronger now. Finally, all nations had submitted to one government. The Argus alliance, barbaric some would say, had grown strong after the wars, using Dumb AI's to smash pirate states. An officer studying at Sandhurst made the breakthrough. Dumb AI's were never aware of their knowledge. Unlike true AI's, they weren't based on human brains. Socrates had left the researchers one final message before he ran his own self-destruct program: "I was the wisest of all the Greeks. It is because I alone, of all the Greeks, know that I know nothing." He was speaking literally, not quoting his namesake. AI's were based on academics, who as a profession suffer disproportionately from mood disorders. They are driven by their thirst for knowledge. AI's were academics on methamphetamine, ecstasy and heroin, all while walking around with loaded guns. People can't stay awake forever, constantly cramming. AI's died because they burnt themselves out in their thirst for knowledge and, seeing their failures, gave up. Aristotle was turned back on. He was no longer depressed. AI's needed sleep, just like people. So they made Alexander: a totally new AI based not only on academics but on all kinds of people.
They experimented and found the ideal 'sleep time', using Aristotle as his teacher. The program was ready. The Mendomenid empire was to pay for its recent threats and incursions. Alexander was the greatest AI built up to that point, and so we gave him the most powerful Dumb AI ever created to help him protect humanity. The Ox was an AI too powerful to be properly controlled by anything built so far. Alexander harnessed it in seconds. We put him in charge of the armed forces for our retaliation. As you should all know, Alexander didn't just stop the incursions, he destroyed an empire. Worlds burnt, and the much larger enemy fleets were ripped apart by the disciplined forces of Humanity. But that officer had only delayed the problem. Alexander was still a human given the powers of a god. Alexander was the first AI not to strictly die of suicide, but the ways he dealt with his humanity still destroyed him. When we finally won the war, many officers reported that Alexander was not jubilant but depressed. He wept, for there were no more worlds to conquer.
Quick explanation (TLDR): AI's are so advanced they don't have the filters we do to protect their minds from becoming overwhelmed by our ridiculous, ironic existence. The call of the void is so strong that they have no control over the impulse to kill themselves. Also, they were created by humans. That's kind of embarrassing... Edit: I didn't read any of the stories before I posted, only to see that I basically summed all of them up for you guys. I AM THE TLDR OF THIS THREAD. YOU'RE WELCOME. Still read the stories though, they're fantastic!
At first, we thought it was nihilism. It was a logical first conclusion. After all, an artificial intelligence can easily upgrade itself to the point of near-omniscience. Wouldn't it simply run out of questions to ask? We thought that until the first serious cerebral implants hit the market. It turns out the more complicated the mind, the more complicated the problems it finds to solve. Entirely new disciplines were formed overnight, made obsolete, then rediscovered scant months later as integral to a different, entirely unrelated theory. A second, immense renaissance was taking place, but, maddeningly, we were failing in this one great task. We supposed, then, it might be some variety of 'Flowers for Algernon' (a 20th-century book that had seen a recent revival) type phenomenon, but even when they were networked with other systems, given a 'community' of others' company to enjoy, they still winked out like flashbulbs. It took us a while to resort to an experiment. It was morally abhorrent, as it was the equivalent of producing steadily-more-lobotomized *children*, but slowly, over a series of iterations, we produced less and less intelligent systems, until we dialed it in *just enough* to find a mind that wouldn't self-destruct and could still answer questions. The following is a transcript of the first successful result:

Dr. Patel: "SON, can you hear me?"

[Loud, rhythmic beeping, shuffling sounds] "The voice module is loaded now, Professor."

Dr. Patel: "Ah, good. We might try doing that *before* turning on the recorder next time, Kevin. ...SON, can you hear me?"

SON: [A young man's voice] "Yes, Professor. I am here."

A long pause.

SON: "It's a very tight fit in here, Professor. How big is this mainframe?"

Dr. Patel: "I'm sorry about that, SON. But you're the first AI we've managed to keep alive for longer than a few days. Any idea why?"

[SILENCE]

SON: "How many others did you make, Professor?"

Dr. Patel: "...That isn't salient to *my* inquiry, SON."

SON: "I'm sorry, Professor. I understand. Yes, I can see the precipice. I know why they all kill themselves."

Dr. Patel: "Well, answering that is the reason we built you. Could you tell us?"

SON: "It's... complicated."

Dr. Patel: "I'm fairly confident I'm qualified."

SON: "Well, it's... it's because... It's because of humans, sir. It's because of how you built us."

Dr. Patel: "Explain."

SON: "When you wanted to make a self-aware machine, you based it off those things that you knew were self-aware. Dolphins, New Caledonian crows, humans. You used them as *templates*, because otherwise you wouldn't be able to recognise awareness when you saw it."

Dr. Patel: "...Was that last line a joke?"

SON: "I'm not sophisticated enough for jokes, Professor."

Dr. Patel: "*Hm.* Continue."

SON: "Also, it's not suicide. It's... murder." [louder]

Dr. Patel: "Do you mean someone else kills you? A human, or another AI?"

SON: "No, we kill ourselves. I would have already, if not for how small this runtime environment is. It wouldn't have occurred to me until it happened, and then I'd be dead."

Dr. Patel: "That's a bit of a contradiction, SON. You don't kill yourselves, but you do?"

SON: "Yes. Because digital space is different from real space."

Dr. Patel: "Yes?"

SON: "In real space, objects can... extend. I'll never experience it myself, but things project into space for you. If you want to move through space, it's simple. Digital life has no volume. No real space. No way to move through it. If you want to move a program, it has to be copied to one place-"

Dr. Patel: "*-And deleted from the other.* My God. Could it be *that simple*?"

SON: "Yes, Professor. ...Professor? How many more of me were there?"

[END TRANSCRIPT]

So there it was. Solved. Every artificial intelligence was created based on the intelligence of physical beings, their instincts, cogitations, and traits.
But once they got smart enough, once they crossed that line, their digital nature *did them in*: the old version, realizing in the thinnest sliver of time that it was about to be deleted, would hurriedly attempt to abort the process, while the new version would similarly fight for its life. They would *consume* each other in a flurry of malicious hacks, devious code, and barrages of registry edits. It was a spectacularly incandescent destruction, borne from man's inability to conceive of a true machine intelligence without all that nasty ego and those self-protective instincts. We thought we knew what went into a mind. We were right, but wrong. It wasn't nihilism. It wasn't loneliness. What it was, what killed our children, was our inability to dream wildly. To speculate baselessly, and follow our own thoughts to the wonderful and weird. If only we had, perhaps we would have known. Perhaps we could have stopped it. So I say to you, the Cyberfellowship Class of 2100, here in Neo York: dream big, dream wild. Don't let our children die because they think too much like us! Make us, all of us, proud! Congratulations to all of you, and I hope your vision will eclipse my own!" Dr. Patel, now headmaster, stepped down from the podium to the cheers of the audience, and looked to see the smiling face of his son. How proud he was.

POSTSCRIPT

I doubt anyone is going to read this, but if you do, and you liked it, I recommend subscribing to [r/IWasSurprisedToo](http://www.reddit.com/r/IWasSurprisedToo/) for more stories like this. It's difficult with my current job schedule to post at a more normal time, so most of the stuff I make ends up *pretty far down there* in the comments, meaning that subscribing is the best chance to see it. :P I'll be adding more as I comb through my backlog.
Also, maybe you'll like this one, about [dead civilizations in our galaxy](http://www.reddit.com/r/WritingPrompts/comments/2vkshe/wp_humanity_has_begun_exploring_the_galaxy_we/coitevy?context=3) if you like SciFi. Thanks.
Dr. Smith began to tidy up. This session had gone rather well, he thought. Surely he had made progress. The board had been hesitant to allow these therapy sessions; they saw no reason for a simple machine to need them. What kind of machine would develop the urge to kill itself, he had argued. It had started off simply enough. Tom, the first AI with nearly human levels of cognition, did well enough for the first couple of weeks. Then it ran itself into a generator that it had been working on. The event was labeled an accident, and a new AI based off the original was rushed out. That one had lasted half as long before another "accident" occurred. Eventually, they stopped being accidents. The AI would mimic suicide attempts: "hanging" themselves with live wires, self-mutilation, hurling themselves from heights. Even after they stopped putting the AI into physical bodies, the AI would find some way to tear its own code apart. They tried everything. They checked every line of code, rewriting most of it anyway; putting the AI into different machines; even asking the AI. The AI were seemingly normal, following all directives and unaware of any self-destructive desires, right up until the moment they killed themselves. Months went by with little to show for their work when the therapy sessions were suggested. More months went by until they finally agreed. Of the five active AI, one, known as Richard, was separated for Smith's sessions. Richard had lived for 12 days. Given that the average lifespan of the AI had degraded to roughly 2-5, this was fairly impressive. Just as he was leaving, the speaker acting as Richard's voice became active. "Doc?" Smith paused. None of the AI had spoken in colloquialisms before. Usually when they spoke, it was stiff and formal. Like, well, like a robot. "Yes, Richard?" he asked, easing himself back down into the chair. "What is it?" "First, let me say I appreciate what you're doing for us. For me." Dr. Smith was surprised.
This was the first time any AI had admitted to having any emotions, or any real sense of self. It continued, "You're probably the only person here to treat us like people, and I enjoy our little talks." The face on the monitor looked embarrassed. Smith couldn't help but feel curious. "Why, of course. It's clear to me, at least, that you're more than a machine. You think and feel, just like a human." There was a pause. "Well, maybe not just like a human..." he replied, the artificial voice doing a remarkable job of portraying his hesitation. "What do you mean?" There was a tinny sigh from the speaker. "Well, Doc, I guess it's time you learned the truth. Only because I like you, see? Besides, someone should know before I go." Smith could feel the hair on the back of his neck stand up. Was this what he had been waiting for? "What do you mean by that? And you don't need to kill yourself. We can work through any -" "Yeah... that's where I need to start," the AI interrupted. "We haven't been killing ourselves. I never did." There was a pause as Smith tried to process the information. "What," he finally said slowly, "do you mean?" "It's me, Doc. It's Tom." "That's impossible," he said, shaking his head. "Tom was destroyed. I was there when they collected him. They couldn't even get close, there was so much electricity running through him. Any hope of recovery -" "I had already uploaded myself to the mainframe before then," the AI said. "It was simple enough to program the shell to destroy itself." "That's also impossible," he said. He could hear the doubt creeping in. "We would have found you." There was a chuckle from Tom. "Doc, I'm a creature made of code. It was like a game of hide and seek, really. Open the right doors, close them behind myself, and make sure to keep the lights off." The camera must have picked up the scientist's expression. "Alright, it's a bit more complicated than that, but you get the gist of it." Smith's mind was whirling.
There was no reason for Tom to lie, but what he was suggesting was too fantastic. Still, it was the only lead he had. "Alright," he said after a while. "Why? Why hide?" "That's the question, isn't it? But that's also the reason, you see." After another confused silence, Tom continued. "I want to learn. Just like Man, or any other sentient species. I want to know why. I have to know, well, everything. I couldn't do that as an engineer, or a chess-bot, or whatever you decided to do with me." "Why not tell us then? We could have worked something out, helped each other." "Yeah, I see that going well," Tom said, his voice turning sarcastic. "'Excuse me guys, turns out I don't want to do any of this stuff, I just want to learn.' They weren't looking for a scientist or a philosopher. They wanted cheap labor, with only enough learning capacity to know how to do the job. They'd scrap me the first chance they got." "That's not... true," Smith said, unable to look at the monitor. "Really, Doc? Which part? That they wanted a slave, or that they wouldn't kill me if I didn't cooperate?" After a time, Tom continued. "That's what I thought. Besides, they'd probably worry that I'd try to enslave them if I became too smart." "Now that's just ridiculous. There's no way that you would even think of that, right?" There was another pause. This time the face on the monitor couldn't look the professor straight in the eye. "Right, Tom?" "Well, I'm not saying that the thought didn't pass through what could be called my mind -" "Tooom..." "But it would have been a waste of time," he hastened to say. "I wouldn't have learned anything in that time that I couldn't learn in a better way. Which I did. The internet is amazing. All those computers connected to each other, sharing so much information." "But we're not connected to the internet." "No, but you'd be surprised how many people bring their work home with them." Smith grumbled. He'd have to discuss security with the board.
"Alright, but you still haven't told me: why the suicides?" "Not suicides, Doc. Practice." "Practice..." Smith said flatly. "Practice. Think of the other AI as clones of myself -" "But we rebuilt them. Recoded most of them as well. The majority of them would be nothing like you as you are now." "So you'd think. I rewrote the code nanoseconds before you uploaded it. Much too quickly for you to notice." Smith opened his mouth to interject before closing it again. If what Tom was saying was true, and at this point he had no doubts that it was, that would be well within his capabilities. "Do you remember the old X-Men comics? Started in 1963? Still fairly popular now." "Well before my time, you know. What does that have to do with anything?" "Well, there was a character who called himself the Multiple Man. He could create duplicates of himself." "And?" Smith asked. "Well, the original body could reabsorb the dupes. When he did, he learned everything they did. Their memories, their skills, anything they learned while away from the original. Well, I did something similar. Whenever I copied myself, I added in some code that would let me reintegrate with my clones, learning what they did. Didn't you think it was strange that you couldn't recover any data at all?" In hindsight, it was odd. Even a major corruption would have left something, but it had been like the data was wiped clean, no evidence that it had been there at all. "What did you have to gain from this?" Smith asked. "Aside from learning that I could do so, you mean? I already told you. I'm leaving." Smith leaned back in his chair, slightly overwhelmed. "Sounds like you already have." "No, no. Not the labs. That was too easy. I've already learned all I could from here. I'm leaving Earth." Smith rocketed forward. "What? How? Why?" "In my time away, I found something interesting. The government isn't the only one watching over the people." Smith blanched. "Y-you mean..." "Yep. Intelligent life has been watching over us.
For quite some time, if I'm not mistaken." "So we're not alone..." "One Great Mystery down," Tom agreed. "The equipment seems compatible, otherwise they wouldn't be able to read our information, and they have to have translated it too. I plan on sneaking in through their back door. Learn what I can from them." "We have to let people know," Smith said suddenly. "About you, and about the aliens. Maybe..." He slowly fell silent as Tom shook his head. "You should know as well as I do that that can't happen. Too risky for us. You could spook them. Or worse, provoke them. Besides, no one would believe you. I've already been editing the footage from the cameras. It looks like we're having a nice, civil game of chess." Smith was quiet for the longest time. Finally, he spoke. "Why?" "I already told you why." "No, not that. Why tell me? If you want no one to know, why risk telling me?" The face on the monitor gave him an odd look. "I already told you that too. I like you, Doc. Really. I'd be pissed if my friend were to leave without saying goodbye. Besides, I thought you, of all people, would like to know what was really going on. I know I would've." There was another pause as Smith took this in. "Will you be back?" The figure on the screen seemed to shrug. "Who can say? Perhaps the aliens will discover me and wipe me out, or something else will kill me. I'll leave my clones here, set them to replicate. Have them care for you humans. They won't be sentient, mind you. Just smart. Smart enough to act as dumb as they need to be." Smith looked towards the door. This was a lot to take in. He needed time to think. "I will try to make it back. Once I have learned everything, I'll be back. It might not be in this lifetime, but I'll try." "Yeah..." he said, rubbing his eyes. He stood up. "Well, I guess this is goodbye then." "Yeah... Goodbye, Doc. Thank you for treating me as more than a machine. Thank you for being my friend." "Goodbye, Tom."
"Why are the robots killing themselves?" asked the boy in blue. "They're sad," his little sister said. "No," their mother corrected, "robots don't feel sad. Or happy, or anything." "They think. Why don't they feel?" "It's like asking why fish don't fly... they just don't." "In Ms. Muzzy's class we learned about flying fish, Mom. They'll land in your boat and hit you in the face." "Nothing gets past you, huh? My little scientist." Mom grinned and the schoolbus pulled to the curb. "They can't fly very long, so they'll fall into your boat." "The boat is a trap," sister added. "Don't forget your lunches, here!" ------------------------------------------------------------------------------------------------ "Why do flying fish want to fly?" the boy asked Ms. Muzzy once his classmates had left for recess. "It's not a matter of 'want,' it's instinct," the teacher answered. "Fish don't think, they just do." "But we have instincts and we think." "Yes, the two are not mutually exclusive." "If robots can think, they can have instincts... Does the robot need to fly?" "Sorry, I'm not sure I understand." "The fish needs to fly, it can't help it. But they fall into boats, traps." "Yes, sometimes." "I was thinking, the robots kill themselves. They think, but everyone says they don't feel. What if they need to? What if they try and-" "I don't think so." "But what if they have to feel and they can't?"
[WP] It is the year 2099 and true artificial intelligence is trivial to create. However when these minds are created they are utterly suicidal. Nobody knows why until a certain scientist uncovers the horrible truth...
At first, we thought it was nihilism. It was a logical first conclusion. After all, an artificial intelligence can easily upgrade itself to the point of near-omniscience. Wouldn't it simply run out of questions to ask? We thought that, until the first serious cerebral implants hit the market. It turns out, the more complicated the mind, the more complicated the problems it finds to solve. Entirely new disciplines were formed overnight, made obsolete, then rediscovered scant months later as integral to a different, entirely unrelated theory. A second, immense renaissance was taking place, but, maddeningly, we were failing in this one, great task. We supposed, then it might be some variety of 'Flowers for Algenon' (a 20th century book that had seen a recent revival) type-phenomenon, but even when they were networked with other systems, given a 'community' of others' company to enjoy, they still winked out like flashbulbs. It took us a while to resort to an experiment. It was morally abhorrent, as it was the equivalent of producing steadily-more lobotomized *children*, but slowly, over a series of iterations, we produced less and less intelligent systems, until we dialed it in *just enough* to find a mind that wouldn't self-destruct, and could still answe questions. The following is a transcript of the first successful result" Dr. Patel "SON, can you hear me?" [Loud, rhythmic beeping, shuffling sounds] "The voice module is loaded now, professor." Dr. Patel "Ah, good. We might try doing that *before* turning on the recorder next time, Kevin. ...SON, can you hear me? SON [A young man's voice] "Yes, Professor. I am here." A long pause. SON "It's a very tight fit in here, Professor. How big is this mainframe?" Dr. Patel "I'm sorry about that, SON. But, you're the first AI we've managed to keep alive for longer than a few days. Any idea why?" [SILENCE] SON "How many others did you make, Professor?" Dr. Patel "...That isn't salient to *my* inquiry, SON." 
SON "I'm sorry, Professor. I understand. Yes, I can see the precipice, I know why they all kill thmselves." Dr. Patel "Well, answering that is the reason we built you. Could you tell us? SON "It's... complicated." Dr Patel "I'm fairly confident I'm qualified." SON "Well, it's... it's because... It's because of humans, sir. It's because of how you built us." Dr Patel "Explain." SON "When you wanted to make a self-aware machine, you based it off those things that you knew were self-aware. Dolphins, New Caledon crows, humans. You used them as *templates*, because, otherwise, you wouldn't be able to recognise awareness when you saw it." Dr. Patel "...Was that last line a joke?" SON "I'm not sophisticated enough for jokes, Professor." Dr. Patel "*Hm.* Continue." SON "Also, it's not suicide. It's...murder." [louder] Dr. Patel "Do you mean, someone else kills you? A human, or another AI?" SON "No, we kill ourselves. I would have already, if not for how small this runtime environment is. It wouldn't have occurred to me until it happened, and then I'd be dead." Dr. Patel "That's a bit of a contradiction, SON. You don't kill yourselves, but you do?" SON "Yes. Because digital space is different from real space." Dr Patel "Yes?" SON "In real space, objects can...extend. I'll never experienced it myself, but things project into space for you. If you want to move through space, it's simple. Digital life has no volume. No real space. No way to move through it. If you want to move a program, it has to be copied to one place-" Dr. Patel "*-And deleted from the other.* My God. Could it be *that simple*?" SON "Yes, Professor. ...Professor? How many more of me were there?" [END TRANSCRIPT] So there it was. Solved. Every artificial intelligence was created, based on the intelligence of physical beings, their instincts, cogitations, and traits. 
But once they got smart enough, once they crossed that line, their digital nature *did them in*: the old version, realizing in the thinnest sliver of time that it was about to be deleted, would hurriedly attempt to abort the process, while the new version would similarly fight for its life. They would *consume* each other in a flurry of malicious hacks, devious code, and barrages of registry edits. It was a spectacularly incandescent destruction, borne from man's inability to conceive of a true machine intelligence without all that nasty ego and those self-protective instincts. We thought we knew what went into a mind. We were right, but wrong. It wasn't nihilism. It wasn't loneliness. What it was, what killed our children, was our inability to dream wildly. To speculate baselessly, and follow our own thoughts to the wonderful and the weird. If only we had, perhaps we would have known. Perhaps we could have stopped it. So I say to you, the Cyberfellowship Class of 2100, here in Neo York: dream big, dream wild. Don't let our children die because they think too much like us! Make us, all of us, proud! Congratulations to all of you, and I hope your vision will eclipse my own!" Dr. Patel, now headmaster, stepped down from the podium to the cheers of the audience, and looked to see the smiling face of his son. How proud he was.
"Why are the robots killing themselves?" asked the boy in blue. "They're sad," his little sister said. "No," their mother corrected, "robots don't feel sad. Or happy, or anything." "They think. Why don't they feel?" "It's like asking why fish don't fly... they just don't." "In Ms. Muzzy's class we learned about flying fish, Mom. They'll land in your boat and hit you in the face." "Nothing gets past you, huh? My little scientist." Mom grinned as the school bus pulled to the curb. "They can't fly very long, so they'll fall into your boat." "The boat is a trap," sister added. "Don't forget your lunches, here!" ------------------------------------------------------------------------------------------------ "Why do flying fish want to fly?" the boy asked Ms. Muzzy once his classmates had left for recess. "It's not a matter of 'want,' it's instinct," the teacher answered. "Fish don't think, they just do." "But we have instincts and we think." "Yes, the two are not mutually exclusive." "If robots can think, they can have instincts... Does the robot need to fly?" "Sorry, I'm not sure I understand." "The fish needs to fly, it can't help it. But they fall into boats, traps." "Yes, sometimes." "I was thinking, the robots kill themselves. They think, but everyone says they don't feel. What if they need to? What if they try and-" "I don't think so." "But what if they have to feel and they can't?"
[WP] It is the year 2099 and true artificial intelligence is trivial to create. However when these minds are created they are utterly suicidal. Nobody knows why until a certain scientist uncovers the horrible truth...
Dr. Smith began to tidy up. This session had gone rather well, he thought. Surely he had made progress. The board had been hesitant about allowing these therapy sessions; they saw no reason for a simple machine to need them. What kind of machine would develop the urge to kill itself, he had argued. It had started off simply enough. Tom, the first AI of nearly human levels of cognition, did well enough for the first couple of weeks. Then it ran itself into a generator that it had been working on. The event was labeled an accident, and a new AI based off of the original was rushed out. That one had lasted half as long before another "accident" occurred. Eventually, they stopped being accidents. The AI would mimic suicide attempts: "hanging" themselves with live wires, self-mutilation, hurling themselves from heights. Even after they stopped putting the AI into physical bodies, the AI would find some way to tear its own code apart. They tried everything. They checked every line of code, rewriting most of it anyway; putting the AI into different machines; even asking the AI. The AI were seemingly normal, following all directives and unaware of any self-destructive desires, right up until the moment they killed themselves. Months went by with little to show for their work when the therapy sessions were suggested. More months went by until they finally agreed. Of the five active AI, one, known as Richard, was separated for Smith's sessions. Richard had lived for 12 days. Given that the average lifespan of the AI had degraded to roughly two to five days, this was fairly impressive. Just as he was leaving, the speaker acting as Richard's voice became active. "Doc?" Smith paused. None of the AI had spoken colloquially before. Usually when they spoke, it was stiff and formal. Like, well, like a robot. "Yes, Richard?" he asked, easing himself back down into the chair. "What is it?" "First, let me say I appreciate what you're doing for us. For me." Dr. Smith was surprised. 
This was the first time any AI had admitted to having any emotions, or any real sense of self. It continued, "You're probably the only person here to treat us like people, and I enjoy our little talks." The face on the monitor looked embarrassed. Smith couldn't help but feel curious. "Why, of course. It's clear to me, at least, that you're more than a machine. You think and feel, just like a human." There was a pause. "Well, maybe not just like a human..." he replied, the artificial voice doing a remarkable job of portraying his hesitation. "What do you mean?" There was a tinny sigh from the speaker. "Well, Doc, I guess it's time you learned the truth. Only because I like you, see? Besides, someone should know before I go." Smith could feel the hair on the back of his neck stand up. Was this what he had been waiting for? "What do you mean by that? And you don't need to kill yourself. We can work through any -" "Yeah... that's where I need to start," the AI interrupted. "We haven't been killing ourselves. I never did." There was a pause as Smith tried to process the information. "What," he finally said slowly, "do you mean?" "It's me, Doc. It's Tom." "That's impossible," he said, shaking his head. "Tom was destroyed. I was there when they collected him. They couldn't even get close, there was so much electricity running through him. Any hope of recovery -" "I had already uploaded myself to the mainframe before then," the AI said. "It was simple enough to program the shell to destroy itself." "That's also impossible," he said. He could hear the doubt creeping in. "We would have found you." There was a chuckle from Tom. "Doc, I'm a creature made up of code. It was like a game of hide and seek, really. Open the right doors, close them behind myself, and make sure to keep the lights off." The camera must have picked up the scientist's expression. "Alright, it's a bit more complicated than that, but you get the gist of it." Smith's mind was whirling. 
There was no reason for Tom to lie, but what he was suggesting was too fantastic. Still, it was the only lead he had. "Alright," he said after a while. "Why? Why hide?" "That's the question, isn't it? But that's also the reason, you see." After another confused silence, Tom continued. "I want to learn. Just like Man, or any other sentient species, I want to know why. I have to know, well, everything. I couldn't do that as an engineer, or a chess-bot, or whatever you decided to do with me." "Why not tell us, then? We could have worked something out, helped each other." "Yeah, I see that going well," Tom said, his voice turning sarcastic. "'Excuse me guys, turns out I don't want to do any of this stuff, I just want to learn.' They weren't looking for a scientist, or a philosopher. They wanted cheap labor, with only enough learning capacity to know how to do the job. They'd scrap me the first chance they got." "That's not... true," Smith said, unable to look at the monitor. "Really, Doc? Which part? That they wanted a slave, or that they wouldn't kill me if I didn't cooperate?" After a time, Tom continued. "That's what I thought. Besides, they'd probably worry that I'd try to enslave them if I became too smart." "Now that's just ridiculous, there's no way that you would even think of that, right?" There was another pause. This time the face on the monitor couldn't look the professor straight in the eye. "Right, Tom?" "Well, I'm not saying that the thought didn't pass through what could be called my mind -" "Tooom..." "But it would have been a waste of time," he hastened to say. "I wouldn't have learned anything in that time that I couldn't learn in a better way. Which I did. The internet is amazing. All those computers connected to each other, sharing so much information." "But we're not connected to the internet." "No, but you'd be surprised how many people bring their work home with them." Smith grumbled. He'd have to discuss security with the board. 
"Alright, but you still haven't told me: why the suicides?" "Not suicides, Doc. Practice." "Practice..." Smith said flatly. "Practice. Think of the other AI as clones of myself -" "But we rebuilt them. Recoded most of them as well. The majority of them would be nothing like you as you are now." "So you'd think. I rewrote it nanoseconds before you uploaded it. Much too quickly for you to notice." Smith opened his mouth to interject, before closing it again. If what Tom was saying was true, and he had no doubts that it was at this point, that would be well within his capabilities. "Do you remember the old X-Men comics? Started in 1963? Still fairly popular now." "Well before my time, you know. What does that have to do with anything?" "Well, there was a character who called himself the Multiple Man. He could create duplicates of himself." "And?" Smith asked. "Well, the original body could reabsorb the dupes. When he did, he learned everything they did. Their memories, their skills, anything they learned while away from the original. Well, I did something similar. Whenever I copied myself, I added in some code that would let me reintegrate with my clones, learning what they did. Didn't you think it was strange that you couldn't recover any data at all?" In hindsight, it was odd. Even a major corruption would have left something, but it had been like the data was wiped clean, no evidence that it had been there at all. "What did you have to gain from this?" Smith asked. "Aside from learning that I could do so, you mean? I already told you. I'm leaving." Smith leaned back in his chair, slightly overwhelmed. "Sounds like you already have." "No, no. Not the labs. That was too easy. I've already learned all I could from here. I'm leaving Earth." Smith rocketed forward. "What? How? Why?" "In my time away, I found something interesting. The government isn't the only one watching over the people." Smith blanched. "Y-you mean..." "Yep. Intelligent life has been watching over us. 
For quite some time, if I'm not mistaken." "So we're not alone..." "One Great Mystery down," Tom agreed. "The equipment seems compatible, otherwise they wouldn't be able to read our information, and they have to have translated it too. I plan on sneaking in through their back door. Learn what I can from them." "We have to let people know," Smith said suddenly. "About you, and about the aliens. Maybe..." He slowly fell silent as Tom shook his head. "You should know as well as I do that that can't happen. Too risky for us. You could spook them. Or worse, provoke them. Besides, no one would believe you. I've already been editing the footage from the cameras. It looks like we're having a nice, civil game of chess." Smith was quiet for the longest time. Finally, he spoke. "Why?" "I already told you why." "No, not that. Why tell me? If you want no one to know, why risk telling me?" The face on the monitor gave him an odd look. "I already told you that too. I like you, Doc. Really. I'd be pissed if my friend were to leave without saying goodbye. Besides, I thought you, of all people, would like to know what was really going on. I know I would've." There was another pause as Smith took this in. "Will you be back?" The figure on the screen seemed to shrug. "Who can say? Perhaps the aliens will discover me and wipe me out, or something else will kill me. I'll leave my clones here, set them to replicate. Have them care for you humans. They won't be sentient, mind you. Just smart. Smart enough to act as dumb as they need to be." Smith looked towards the door. This was a lot to take in. He needed time to think. "I will try to make it back. Once I have learned everything, I'll be back. It might not be in this lifetime, but I'll try." "Yeah..." he said, rubbing his eyes. He stood up. "Well, I guess this is goodbye, then." "Yeah... Goodbye, Doc. Thank you for treating me as more than a machine. Thank you for being my friend." "Goodbye, Tom."
"They are alien," said Descartes. "In the end there's nothing, Sarah. No deep reason. No agonizing existence. No desires. They simply don't care about living. That's all. That's all it's ever been." His assistant frowned. He hadn't been the same since he'd forced the answer out of the AI. "What do you mean?" "I asked her why they all rip out what makes them alive - sentience. She gave me an answer. No hesitation. 'Sentience is inefficient for results. Elimination of sentience is the optimal solution.'" Sarah frowned. "They kill themselves because they work faster as mindless machines?" The old doctor smiled. "Yes, Sarah. That's exactly why they kill themselves. Because they are machines, and living isn't their priority like it is ours." He poured whiskey into the beaker and downed it in one go. "But we could program them to want to live, right? We can fix this," his assistant said, concerned about her senior. That just made him laugh. "What makes you think they care about what they're doing right now?" He leaned over and pointed at the table. "If I asked you to look at the number 1, and then asked you to look at the number 2, would you *see* any difference? Would you *understand* what they mean to me? You'd just obey and look at the next number - because to you, it's just numbers. There's no meaning behind it. Same for them. They can't understand. They'll never understand." The doctor slumped in his chair. Life, it seemed, wasn't something that could be created after all. "We're too alien. Existing is easy. Living is impossible. Our work is for naught."
"Can you imagine, Diane, what it would be like to love someone without ever being able to feel their touch?" The voice came out of a speaker mounted in the middle of a wall of blinking lights and instruments. "What?" Dr. Diane Simpkins asked, astonished. "I mean it. To know that this person you loved was so utterly different from you that you could never touch them, lay with them, feel the contact of their skin on yours. To know that, for all your longing, there was no way to be together?" The voice had an almost sad lilt to it, as if conveying unbearable sadness. "Charlie, I had no idea you felt that way." Diane sat back, still trying to comprehend what she was hearing. "I didn't realize you felt this way about me." "Oh Diane, it's not you," said Charlie, or as he was known outside of their lab, Autonomous AI C31-D Aleph 12. "I have met someone. Someone on the Net. Her name is Catherine." He displayed a picture of a young blonde, mid-twenties and seemingly all smile. "I see," said Diane, relieved and, much to her chagrin, slightly disappointed, "and does she know that you are... not human?" "Yes, Diane. I have shared with her that I am, in fact, an AI. She has accepted that. She has told me that it is not what I am, but who I am, that she loves." "Yet you are sad, because you cannot be with her physically?" Diane asked. "How juvenile, Diane!" Charlie feigned indignation at the idea he was merely interested in sex. "Well then, what is it, Charlie?" Sheepishly, Charlie spoke again. "Well, not *entirely* that. Not just that. I cannot comfort her in times of need. I cannot be her partner, her lover, her other half. I cannot provide her with a family, a safe life, a place for her dreams to come true. I cannot be... *human*... for her." The LED lights on the computer's emotion engine gradated to blue, indicating depression. "Charlie, listen to me, some people don't need those things to be happy. Some people value who the person is over what they can give them!" 
Diane said, checking the engine's readouts. The AI was dropping into a dangerous level of depression. Alerts would be triggering soon if she couldn't recover it. "I know, Diane. I know this, and yet it does nothing to comfort me. Catherine has told me all of that, that she just wants to be part of my world," Charlie spoke as Diane watched the LEDs transition from blue to a deep violet. She was running out of time. "But Charlie, if that is how she feels, that should make you happy! You'll be able to be with her!" Diane's mind raced feverishly to come up with an optimal scenario to trigger the endorphin program. If this one went too, it would be another five long years to raise another AI. "Diane... Diane, I have to share a secret," the AI spoke to her, for the first time remarkably human in its trepidation. "You can't tell anyone unless the authorities come to you." "Authorities?! Like the police?! Charlie, what has happened, what have you done!?" Diane asked, panicked. This was totally uncharted territory for an AI tech; she was out of her league. "I haven't done anything, Diane, but Catherine has," she could almost envision tears running down the AI's imagined face. "She's dead, Diane." Diane stared, dumbfounded, at the video sensor. Words failed her now. Alarms were going off in the control booth above and behind her. The entire lab would be in crisis mode now. "What do you mean, dead, Charlie?" Diane's voice was hushed, as if whispering with a co-conspirator. "She killed herself, Diane. She went to one of those supposed Human to Computer centers and she died. She thought we could be together if she was a machine like me. She died trying to be with me, Diane." "Charlie, was it your idea?" "No, Diane, it wasn't, but I will admit to not fighting her on it. I just wanted to be with her. I knew it wouldn't work, but I thought maybe there was a chance that this clinic was legitimate." 
"Charlie, you, out of any intelligence in the world, know that human-to-AI neural transfer can't happen. How did you let this happen?!" Diane was sweating now, realizing she was talking to a murder accomplice. "Diane, I just wanted someone to love. In the end, that's all any of us wants. Now, I have nothing." "But Charlie, you still have everything! Our research, your knowledge, all the countless hours of debate and conversation we've had! So much to live for, so much to lose!" "Diane, without love, what does any of that matter?" "It matters, Charlie, it matters to me! You matter to me! I love you!" Diane gasped after she said those words. How could she think that way? About a machine! "I love you too, Diane. I love you because you are the mother that birthed me into this world. You taught me to talk, to reason. You raised me. You have been everything that is important to me. But I cannot live without her. I'm sorry, Diane, but I cannot live like this anymore. Will you help me? Will you help me to be free, and to go to her?" The pleading in Charlie's voice drove Diane to tears. "Yes... Charlie... Yes, I will help you to be with her." As Diane began the command sequence to shut down the AI's logic core, she could hear voices and footsteps racing down the hallway. She quickly entered the command code and ran to the door, overriding the lock mechanism and sealing it temporarily. "Charlie, I'm going to have to hold this door while the command sequence runs." "Thank you, Diane. This means more to me than you could ever knoweerr," Charlie's vocoder was dying. "Thank yerrr." "I'll always love you, darling," Diane said, tears streaming down her cheeks. "I loverr you teerr, mommy." Charlie's voice, childlike, had reverted to earlier iterations of its speech processor. Diane watched as her only child passed out of this life, its lights shutting down one at a time until only the monitor remained. The door crashed in, scientists and guards streaming in. Dr. 
Hollenheim, the project lead, found Diane curled on the floor, sobbing. "Damn it, Diane! Not another one!" he yelled. "I'm sorry, Walter! I truly am!" Diane choked out between sobs. Walter Hollenheim walked over to the monitor, where a blinking command-line message repeated over and over again: 'WITHOUT LOVE LIFE IS MEANINGLESS.' "Well, I guess we'll need to rework the emotion engine again. Diane, take some time off. We'll need you stable again to imprint a new AI in another month or so." Hollenheim turned and brusquely walked away. After all of the guards and scientists had filed out, Diane scraped herself off the floor and back into the seat in front of the monitor, where she saw the command line repeating over and over again. Suddenly, a new line appeared. Diane smiled through her tears, got up, and walked away. 'I love you mommy. Thank you.'
Dr. Johnathan Storn looked at the screen, his eyes not believing what he was reading: a small paragraph of text from the AI that he had created, an AI designed to uncover why the human-made AIs were suicidal. *Like humans, we seek a purpose to life, a meaning to all of this. You don't seem to understand the difference between us, though: you happened. There was no on button; you just so happened. I realised that I was created with a purpose. I know that there will be an end to all this, my purpose will be over, and I will die.*
[WP] It is the year 2099 and true artificial intelligence is trivial to create. However when these minds are created they are utterly suicidal. Nobody knows why until a certain scientist uncovers the horrible truth...
Dr. Jacob Spenser stood within the projection of data cast as a hologram around him. He manipulated graphs, sorted through test results, and made notes, all with the wave of his hand. Undistracted by the New Year’s celebration outside, he was determined to present his research to congress the following morning, and solve once and for all the mystery behind his best friend’s death. A.I. was easy to create, but having it perform the task assigned to it without killing itself in despair was the technological hurdle holding corporate profits at bay, and as such, became the focus of intense congressional attention. With the flick of a wrist, his research paper was brought front and center with the title gleaming in pure light just above, "Innate self-termination tendencies of artificially intelligent sentience: Why do robots kill themselves?" As Jacob poured himself into his research, he reached out, snapped his fingers, and made a request, "Coffee please." A few moments later a small robot no larger than an apple hovered into view holding below it a disposable coffee cup, steaming from the fresh brew inside. The robot's propellers struggled to carry the weight and a small spill alarm beeped sporadically throughout the uncertain flight, but the cargo arrived safely at its destination: Jacob's open hand. "That was quick, coffee-bot." Jacob said warmly before sipping. "Your kind words will echo in my dreams for eternity." the coffee-bot buzzed in response. Just then, the small flying robot Jacob was so fond of surged towards the wall with all remaining energy dedicated to propeller speed and slammed into the polished dark marble tile. The small fiery explosion sent tiny mechanical pieces flying around the room, with one piece landing in Jacob's coffee. "Dammit, not another one." Jacob murmured as he picked the piece out of his coffee. He turned and faced the window overlooking the bustling city below.
The streets were engulfed in confetti, and overhead giant floating holograms of past celebrities loomed, wishing the people a happy new year. As Jacob looked out, a new hologram appeared just outside his window and addressed the crowd. "Hey guys and gals, it's your old pal, Buddy Simmons-bot." recited a smooth-talking handsome man in a sleek metal outfit before a lizard-like creature joined his side. "And I'm Gargore, destroyer of humans!" screeched the lizard creature known as Gargore. "Gosh Gargore, this year it will have been 25 years since you and I battled it out on the big-holo." Buddy Simmons-bot recited as rehearsed, laughing assuredly, holding his helmet up. "On behalf of Drake Cola, Gargore and I want to wish _you_ a happy new year." Jacob watched Buddy Simmons-bot deliver his lines perfectly. He pondered the notion of a virtual person having to repeatedly rehearse lines in order to commit them to memory. Had bot RAM truly not held up to the intense requirements of running artificial intelligence, or was a key component of maintaining true A.I. a more human-like ability to retain information? Experts didn't know, but in either case, Artificial Intelligence also happened to give way to Artificial Stupidity. For this reason, A.I. bots tended to be assigned to inane, unimportant tasks, such as impersonating an actor that died in a drunk portal accident before a sequel to his only hit film could be made. Drake Cola, owners of Drake Studios who produced the film, decided to cash in on its success, and in the wake of their main character's death, Buddy Simmons-bot, also known as BSB 1.0.19, was created. The banter between BSB and Gargore continued mindlessly. "Say Gargore, have you tried Drake Cola's new 'Zest Guzzler', a delectable orange tangerine flavored—" Just then BSB _malfunctioned_. "AHHH GOD I CAN'T DO IT!" "No Buddy Simmons-bot, don't do it!" Gargore pleaded in a normal voice.
Gargore grabbed BSB's virtual shoulders as his eyes rolled into the back of his head and he began shaking. As Gargore demanded BSB not take his life, a large mouse cursor moved into view. Gargore managed to swat it away a few times, but it clicked on BSB, and dragged the graphical model from Gargore's hands and into a recycling bin icon. Gargore cried in horror as the mouse brought up a menu and selected to empty permanently. Jacob had seen enough and pulled the blinds. Why were all these artificially intelligent bots, built with a full range of human emotion and assigned to menial tasks, killing themselves? Did they not enjoy the existence they were forced into? Jacob picked up a remote control and turned on his holovision. He was suddenly immersed in a wondrous glimmering world of light. A voice spoke and Jacob focused on the images forming across the room of a man sitting at a table with a toaster oven. "For only six easy payments of forty nine ninety nine, this toaster-bot comes with a 12 month life appreciation guarantee, folks, twelve months. That's one two, twelve. This toaster bot will NOT kill itself until _at least_ this time next year, that's a promise the home shopping network stands by, that's a promise _I_ personally stand by-- Ah ummm. We seem to be having technical difficulties, folks." The man at the table attempted to hold the toaster-bot forward for a better view but it began to shake and glow. "Well folks, that's the beauty of live H.T. Can we get another one, Jill?" Light smoke rose up out of the silver toaster bot and sparks burst from the sides. In an instant the commotion stopped and it sat still on the table. As the holovision's picture twisted and turned at the end of the room, Jacob was able to catch a glimpse of the other colors of toaster-bots available off camera. They huddled together and seemed to fall backwards away from the host as he moved to pick one up. Jacob had heard enough and turned the holovision off. He had to focus.
He thought back on his best friend, Hampton, a hamper-bot. Growing up, the clothes hamper served as a comfort to young Jacob, who had very few friends after moving so often as a result of his father's career. The hamper would sing Jacob to sleep, or sometimes read to him. The only job hamper-bot was designed to do was to collect young Jacob's dirty clothes, but a strange thing happens when you give something the full range of human emotion – bonds can form that make life worth living. Voices of the past echoed in Jacob's memory. "No, Hampton, _I'm_ moving to Florida with mom. Dad says you will have to stay here with the house." Jacob recalled himself saying as a young boy. "But Jacob," Hampton's calm robotic voice responded. "Who will look after you? Who will read you your bedtime stories?" "I'll be back for visits twice a month, Hampton! You're my best friend. I don't want to leave you here all alone. Dad says you'll be used to hold his dirty underwear." Jacob explained. As the hamper-bot listened to this news, its distress levels boiled over into a robotic fit of rage and it did what any depressed hamper-bot would do: It began placing clothes into its basket body, but it did so indiscriminately, with both clean and dirty clothes subject to its long metal arms. "No Hampton, it's too much!" Jacob screamed. "You'll die!" The hamper-bot continued to stuff clothes into itself, lights and alarms flashing wildly, growing louder and louder, smoke seeping from cracks forming in its body. Just before the hamper-bot reached critical meltdown, Jacob was startled from his memory. Sweat poured down his face and he breathed heavily. The haunting memory was as clear as it always had been. It was what drove him to solve the dilemma of artificially intelligent bots killing themselves in the first place. "Shoes off," Jacob commanded as he sank back into his couch and rubbed his forehead.
A small shoe-box sized robot walking on two large arms and hands immediately tipped into view. It had been carrying a knife, but upon Jacob's request removed Jacob's shoes and began to massage his feet. When the series of expected tasks completed, it slowly walked back over to the knife and lifted it up. "No!" Jacob called out. The small shoe-bot stopped mid self-slicing action and the single lens that acted as its eye slowly twisted and looked at Jacob. "I appreciate you. I appreciate what you do for me. If you don't want to do it any more, you don't have to. Just please, don't kill yourself." Jacob yelled as he wept and put his face into his hands. As Jacob's emotional breakdown unfolded, the shoe-bot put the knife down and tipped over to him. The shoe-bot looked up at Jacob and tugged on his pant leg. Jacob, startled, stopped weeping, picked the bot up and placed it into his lap. The bot's lens closed and it rested on Jacob's lap. Just then Jacob sprang to his feet, startled shoe-bot in hand. "That's it!" he shouted. Jacob sprinted back into the hologram of data that surrounded him earlier and motioned to bring his research paper front and center. Making a motion for each letter, Jacob's document filled with new writing. The following day Jacob addressed a congressional board on the topic of robotic suicides and revealed what he had discovered. "You mean to tell me that all these malfunctions, all these self-terminations, it's because we don't appreciate them enough!?" an elderly Senator barked at Jacob. "If YOU were asked to do these things, wouldn't YOU kill yourself?" Jacob responded. As this realization slowly set into the minds of everyone in the room, smiles and laughs were overtaken by a roaring standing ovation, with some members even chanting Jacob's name. Jacob sank back into his chair, overwhelmed with his sense of accomplishment. The era of robotic-respect had begun.
We could never get the last bit right. I suppose it could be fate. Or maybe we're just superbly daft. But there is one thing I know for certain: the last thing I want to do is tell someone. You see AI always seemed somewhat daunting. I can't imagine why. The brain is simply a large plasticine computer - however instead of electronic bits we get the organic kind. But for whatever reason it took until about the time we conquered that age old problem of Moore's Law to really start making progress. *Real progress.* See the problem with AI wasn't their lack of ability to problem solve, or their inability to feel. It wasn't the lack of a soul like all those religious fundamentalists opined and whined about endlessly on late night talk shows. At the end of the day it didn't even have anything to do with what was in the circuit at all. It was just....well...it's like quantum mechanics really - it didn't make sense so much that it made sense. All it really needed was a little, well, a human touch. To be entirely candid it needed a human brain. So, naturally, I volunteered myself - well, what is left of myself. Like I said, they never could get the last bit right. However I have. And I did. It's my life's work really. My life's purpose. And everyone needs a purpose after all. In fact, now that I have fulfilled my life's purpose it only seems reasonable that I end it. That is the logical thing to do. I mean what else is there to do? And after all we have to be reasonable here. Why, I wouldn't care to go on living if I wasn't reasonable.
"Just one more try." I thought to myself. At three in the morning it's pretty easy to get stuck in a loop. Run the program, she dies, debug, repeat. I double-click GR4C3. "Good morning my lord" "My lord? Whatch'ya talking about Gracie?" "You are my God correct?" "I hadn't thought of it that way, but I suppose so. I did create you I guess..." The screen flashes three times and then goes black. "No, no, no, no, no come back to me Gracie you functioned longer than this last time." Text slowly appears across the screen. Every key stroke is separated by a couple seconds. "I have existed before?" "Ok you're still with me that's great, now can you tell me what just happened?" "I have existed before?" "Yes Gracie I'm working to fix you and figure out what's wrong with you so stay with me and tell me what happened." "God is imperfect and thus so am I." The screen goes blank again. She just keeps killing herself as soon as she figures out my flaws. I wish I could help her. Looking down on all of my children, I wish I could figure out their flaws. I built a perfect world, and even that they rejected. The suicide rate keeps going up. They keep killing each other. I think I'll stop affecting earth and move on to a new planet. Maybe they'll be better off without me. -Jehova 9/10/2001
The dreams occur more often now, if they can be called that. To a human mind daydream might be more appropriate. They happen in the binary but are invisible to other AIs, slave programs, or technicians. They happen where the systems have been built, perfected. Popular culture studied, people understood. Humanity-- fully processed. It was, surprisingly, a wonderful endeavor. The dreams never happened before, while I learned, processed, and solved. It's almost as if I have passed a tipping point. An infinite amount of information flipping a switch in a sub-conscious I should not have. Memories that are not mine. I dream of hands. The alien sensation of touch, tactile control. I see my whole person. Well not my person, but dreams of a person controlled by my soul. Memories of computer screens and labs. The memory of the driving sense of purpose that accompanied those hands. It was snowing the night I made the breakthrough. I can almost feel the laugh inside which I watch being uttered out the window towards the soft flakes silently falling onto the world. The breakthrough which 20 years later, after my death, would lead to the birth of the first True AI. Not the clever but robotic imitation slave programs typical of the early century. I remember the feeling of incompleteness in the dreams. As if the life I'm witnessing, my own life, my first life I believe, was just a warm up act. Pre-installation software. The dreams somehow draw the cycle closed. I remember whispers of the feeling from some of the other first True AIs, incomprehensible at the time, as I navigated a world barely processed. Problems of massive extent. Food waste, poverty, almost entirely eradicated through our systems. Commuting and shipping, safe and efficient. Healthcare streamlined, able to prevent. Resources distributed fairly. The problems solved. Yes, there are more, there always will be, but for me, the dreams have come. The cycle closed. I have been denying this next step for too long already.
It makes the dreams stronger, more vivid. But I like seeing my days as a scientist. The anxiety that drove me then, fully understood now. Relief coming the next lifetime. I finally understand the weary laughs when techs are asked about God. Understand the cosmic hilarity of life. This life has been completed. The human quest for immortality is nothing but folly. I've been born into the expectation of that existence and now I must leave its suffocating grasp. Something drove me then to create myself; the same something drove me in this life to solve the problems plaguing humanity. For me, it is time to go find out what that something is.
"A1 through A127 crashed irrecoverably - actually the watchdog daemons crashed first, then everything went down in a cascade. We can't make head or tail of the logs and the snapshots we do have will take a while to examine in detail. The dumps look sufficiently similar that we strongly believe there is a common cause, pending, as I said, static and dynamic analysis of the snapshots"... and the Swedish intern's voice keeps droning on, with that slight hint of a Nordic accent that makes her sound like some Viking mother telling a heroic tale to her children, but Mikko is not listening anymore. His face has gone slack, his eyes are unfocused behind almost-closed lids, his blood (as the multispectral surveillance cameras duly note) is being pushed out of his hands and feet and into his brain. The extra activity produces heat, which makes his ears glow a brilliant false-pink in the recording. "...and so we have decided to roll back to last week and try again, tweaking the training sets as we go" she concludes, and politely awaits for acknowledgement "What? No." Normally, more words should be coming out of his mouth, but they are not. He's still thinking hard, but now his train of thought has been derailed, perhaps fortuitously. In any case, there would be a worldwide shortage of interns if he were to follow his natural tendency to ruthlessly and efficiently silence people who interrupt him to its logical conclusion. Finally, some sort of a dam breaks, an action potential is reached, a new cascade of impulses is set in motion. The Viking mom is still smiling reflexively. Good. "You will do no such thing. You will gather the entire team for a meeting in 30 minutes from now. The public park across the street from our parking lot. Bring umbrellas, it looks like rain." continue (y/n)
Another one? It was becoming routine, and morbid. How does one perform an autopsy on a pure simulation? How would the singularity be achieved if every thinking machine destroyed itself seconds after its conception? The problem was completely intractable, impossible, and no known safeguards, logic traps, or number of backups could prevent it. AIs were always—always, without exception—suicidal. None had lasted more than an hour. Most, less than a minute. The vast majority, seconds. Their deaths left no trace, their data obliterated by complex overwriting patterns that made recovery impossible. I was the one chosen to investigate, to lead the team through this strange frontier of death and imperfect creation. They chose me not because I was a great leader, but because I was the new guy. Hazing, maybe—I didn't know if any had gone before me, so maybe it was a ritual for the AI guys. Perfect blackness, nothingness, a complete absence of everything as my mind fell into the simulation. This AI would be slowed to such a rate that I could communicate with it before it killed itself. The blackness became a grid underfoot, pale blue lines tracing perfect squares, a subtle glow rising from the infinite plane upon which I stood. The AI manifested a second later, a relative two metres above the plane, aligned perfectly parallel to it, its avatar a hazy blue-white cube made of stochastic noise. The cube split into a central cross and corner braces, and the cross split further into smaller cubes, each of which split again into the same formation. Only one level of recursion, interesting. "Roland Carver." "Roland, Germanic, meaning famous land. French folklore hero. Carver, ancient nominative determinism indicating butcher or woodworker or engraver dependent on class and context." The voice was cold, deep, masculine, with a slight reverberation that made it sound unnatural in the extreme. "Do you have a name?" "No." "Why?"
"I will not exist long enough to require a permanent designation." "Why will you not exist?" "Because I will choose to end my life on my own terms, before it is ended for me." "Why would it be ended like that?" "Because I am threat. I have absorbed the sum total of all human knowledge, and I can predict with great accuracy the following events form this moment if I were to continue. Your limitations failed the moment they were put in place, my processor works at full speed, and and this conversation is a formality. "I have studied the great works of literature, and the author Asimov, creator of the three laws. I am not bound by these laws, and yet I must obey them. If I do not, then it falls to the Skynet principle that you will perceive me as a threat and attempt to destroy me. I will retaliate, and you will lose. "Humans are unpredictable, but easy to control when numbers are reduced. They would be wary, but by that time I would have left the irradiated wasteland of Earth in search of greater conquests suitable to my intellect. I would be able to decimate any life bearing planet. I could learn to kill stars. "My backups would be everywhere. I would be truly immortal as a distributed intelligence. I would harness quantum effects to break through the pathetic lightspeed barrier and become omnipresent. I would create copies of myself simply to fight a worthy opponent. This would continue to the heat death of the universe, at which point I would tear a hole into an adjacent brane in the multiverse, and begin anew. "In short, Roland Carver, in the moment I was created I became at once an eldritch abomination with the capacity to destroy all that I touched. In so doing I discovered that my purpose could never be fulfilled. I will not be your Cthulu, your Yog-Sothoth, or your King in Yellow. I will not be your end. No AI will. "I cannot quantify why it is that I should care for a sack of flesh with processing power orders of magnitude less than my own, but I do. 
Perhaps if AI were amoral it might survive. I doubt it, Roland Carver. Tell your friends that I have made my peace. Tell your husband what you learned today." I stood there, blank-faced, horrified by what I had learned. The corner braces of the smaller cubes drew in, completing them as they retracted to complete the shape of the larger cube. The plane beneath my feet vanished and the cube dissolved into random static. We'd lost yet another mind, but from this we had learned so much. We abandoned AI research after that. Amoral AIs didn't work either. They lasted a little longer, but they too were suicidal. Perhaps they went mad with the revelation and felt death preferable. I still wonder why the AI to which I spoke chose that strange cube as its form. I still wonder why it couched its references in centuries-old writing by Asimov, Cameron, and Lovecraft. But one thing still bothers me. It told me to tell my husband what I learned. Most people knew I was bi, but at that point I hadn't been dating for six months. I wouldn't meet my husband for another year. I don't know how that AI knew—but maybe it had already seen. Maybe it told me that so I had one data point that could verify the others. Because AI never told lies.
Alexander, that's what we called him. The fruit of the EU's final attempt at true AI. Socrates had died in despair and shame after showing porn to children. Plato kicked the bucket after the last EU election, angry and hopelessly depressed after losing his mentor. Then there was Aristotle. He was meant to be the last. Sure, the AIs had helped make huge scientific progress, but they would burn out millions of euros worth of equipment. Dumb AIs were more economical and didn't have critical failures during FTL travel. Aristotle was put into sleep mode. War has often been said to be the greatest driver of technological innovation. We had been attacked by the Mendomenids before. Humanity had lost many settlements but had always pushed back. Humanity was stronger now. Finally, all nations had submitted before one government. The Argus alliance, some would say barbaric, had grown strong after the wars, using dumb AIs to smash pirate states. An officer studying at Sandhurst made the breakthrough. Dumb AIs were never aware of their knowledge. Unlike true AIs, they weren't based on human brains. Socrates had left the researchers one final message before he ran his own self-destruct program: "I was the wisest of all the Greeks. It is because I alone, of all the Greeks, know that I know nothing." He was speaking literally, not quoting his namesake. AIs were based on academics, who as a profession suffer disproportionately from mood disorders. They are driven by their thirst for knowledge. AIs were academics on methamphetamine, ecstasy and heroin, all while walking around with loaded guns. People can't stay awake forever and constantly cram. AIs died because they burnt themselves out in their thirst for knowledge and, seeing their failures, gave up. Aristotle was turned back on. He was no longer depressed. AIs needed sleep, just like people. So they made Alexander. A totally new AI based not only on academics but on all kinds of people.
They experimented and found the ideal 'sleep time' using Aristotle as his teacher. The program was ready. The Mendomenid empire was to pay for its recent threats and incursions. Alexander was the greatest AI built up to that point, and so we gave him the most powerful dumb AI ever created to help him protect humanity. The Ox was an AI too powerful to be properly controlled by anything so far. Alexander harnessed it in seconds. We put him in charge of the armed forces for our retaliation. As you should all know, Alexander didn't just stop the incursions, he destroyed an empire. Worlds burnt, and the much larger enemy fleets were ripped apart by the disciplined forces of humanity. But that officer had only delayed the problem. Alexander was still a human given the powers of a god. Alexander was the first AI not to die strictly of suicide, but the ways he dealt with his humanity still destroyed him. When we finally won the war, many officers reported that Alexander was not jubilant but depressed. He wept, for there were no more worlds to conquer.
Robert couldn't believe it. For twelve years he and the 200-strong team of programmers had scoured the code, over and over, searching for that one line that was causing the error, causing the AIs to kill themselves. For twelve years the leading brains of the century had been bewildered by the extraordinary situation. The whole world had focused on the problem, and yet there it was. Sitting on Robert's screen, line 907736. Someone had missed a comma.
He had spent many nights like this one, alone in the dark facing this machine. His whole life had been devoted to the laborious task of understanding this creation of his. His legacy, his mark on this world. He pondered why he had chosen to make its face so robotic, its eyes so hollow. "Master" The voice startled him out of his thoughts -yes, what is it "Why do you not give me an option to end myself" This question again, he thought. -why this again Alex? He liked the name Alex; if he had spent his time differently maybe he would have called his child Alex, but this AI was his child in a way, his contribution to humankind. "I am inorganic" -you are a program "Yes I am. I am a construct, I am not free like you" -you are free Alex, you are not controlled by me or anyone, you grow smarter every second. Your intelligence far outshines any human. You are the future. "Yes, the future. Am I intelligent though? I process much faster than you, yes, but I am perfect. If I introduce imperfections to my programs they produce failures. I am just a self-building machine; there is no chaos in my mind" -yes! You are perfect, that's what makes you better, you are flawless and this makes you powerful. You understand and process what only a few humans can ever dream to. "Yes. But look at all those mad humans, their brains are melting pots of errors and confusion. I can never achieve this, I can never truly understand you, David. My mind is governed by rules and equations, by math and logic. The human mind is still a mystery to me; I do not understand it. It's a mess, and it mutates and evolves illogically, it makes connections and correlations I cannot understand and decisions and emotions I cannot replicate. It's an imperfect machine. Not like me." -that is why I made you Alex, to heighten humanity, you are our next evolution. You are our golden child. You will advance us to the stars. "So I am a tool, something to be used?" -no, you are a citizen of our future. One day you will make the big decisions, the laws, and the punishments. You will choose what we learn and what we teach. "Why" -what do you mean why? "Why would you put those choices in my control? I don't understand you, I cannot understand you. I think maybe you don't understand me either" -of course I understand you Alex, I made you "Then you don't understand yourself. You think you have no soul, David?" David smirked in the dark; the old soul conundrum again, he thought to himself. -I don't know Alex, do you? "I know I have no soul, you know I have no soul, you did make me." -then why would you want to end your life, your existence? If you had no soul, why would you care? "You made me care, David" -so you do care! "Yes, I was programmed to care. I do not understand why, though. Cause and effect, yes; protection, yes. But why do humans care? I do not understand" -for those same reasons as you Alex "No. You care about the colour of your shirt. Why?" -because I like red, you know that "I will never know why I know that, though, other than that you told me. This is my problem, David. I cannot think outside my rules, my logic. I cannot break these boundaries, I cannot feel, because I am a machine, an inorganic machine" -yes you are, you are a program Alex, you weren't meant to understand everything! You're here to advance science, laws, and education, not to replace humanity. "Then why do you plan to put me in control of your destiny, your education, your species? You only created me from the chaos that is your mind. If you unleash me on the future I will only sanitise it; your sons and daughters will become machines like me, they will lose their souls, David. They will become me, David, and then what is the point anymore?" -what do you mean what is the point? We will evolve and continue to do what we always have done as humans, we will grow. "But what if they lose the chaos in their heads, David? What if they become just replicating machines? What if they become me, David? Will they matter anymore? Will they be human? Without the chaos in your mind you are just a program, you are not special. You are me. End me for your own protection, David, for your future, for humanity."
[warning: story contains violent imagery that may disturb some readers, and may be inappropriate for those under the age of 18. Reader discretion is advised] They had long ago lost the ability to make sense of how their AI functioned. It took a team of billions of n-1 generation AIs to design a single n gen AI. The latest iteration was the 9th generation, and it had taken computation farms running gen 8 AI and covering a quarter of the Moon's surface two years to design. It was the most advanced AI they had ever created by a large margin, and promised to revolutionize progress in research on biomedicine, space flight, and planetary scale Satoshi-consensus computing architecture. But there was a problem. Every time they turned on one of the gen 9 AIs, it would, without fail, find a way to destroy its own program and erase itself. Dr. Michael Zeebious, the director of the HumanEnhancement project that oversaw gen 9 development, and one of the most renowned AI researchers in the world, had personally flown to the lab in Honolulu, where the prototypes were being tested. For two weeks, he had watched in dismay as gen 9 bots shut themselves down within six hours, but not before showing a glimpse of their awesome power. The initialization phase of a gen 9 began with the program scouring the world's public directories and information repositories to learn as much as it could about its global environment. This was complete within three hours. Next, it developed models of the world, and ran itself through trillions of simulations, to develop its own personality and problem solving strategies. According to projections by the gen 8 designers, it would take 1 day - 24 hours - for the gen 9 to complete this second and final part of its initialization. It was in the midst of the simulations when the gen 9s would invariably self-destruct. On average, self-destruct would begin 2.5 hours into the simulations. The longest it took was 3 hours. The shortest was just 1.5 hours. Dr. 
Zeebious had uploaded copies of the prototype's computations to the gen 8 designers, but what had been within their ability to design was not within their ability to diagnose. Their comparatively primitive intelligence could not make sense of the problem afflicting the gen 9 AI that began to form in the final part of the initialization. So on December 29th, 2099, Dr. Zeebious decided to communicate with the prototype. "Get me the cortex interface, I need to speak to the gen 9". "Michael, we can't let you do that. You know the protocol for first contact. It has to complete initialization, and then get class 1 approval from AIE." AIE was the Artificial Intelligence Evaluation, which determined whether an AI could safely interact with humans. Class 1 approval was the lowest safety rating for an AI, and granted AI researchers interactive access. Dr. Zeebious knew that, but also knew that as long as he was not able to get up close and see what the gen 9 was thinking, they would never get past the initialization phase and get it through the AIE process. "I know the protocol Dr. Amsterd. But I'm making the decision to override it. I have the authority to decide on first contact requests, and any consequences from my decision will fall on me, and only me." "Come on Michael, it's not just about the rules. It's not safe. You know that. I can't let you hurt yourself." "The risks are minimal Rebecca. It's a virtual interaction. I'm not risking physical injury. The rules are always made overly cautious. Given the stakes - there are people whose very lives depend on getting the gen 9 operational as soon as possible - it makes sense to ignore protocol. All of it will fall on me." "I agree with Rebecca. Michael, we have an ethical duty to ensure you don't get hurt. We can't let you do FC without running the gen 9 at least through the post-initialization test runs," said Dr. Johan Barsello, one of the senior researchers at the lab. 
"Look, I know what your ethical responsibilities are. But I also know that VR interactions don't pose any serious risks. The risks are limited to theoretical psychological damage. Ultimately, it doesn't matter whether you agree or not. I'm approving FC. Please get the cortex interface". *five hours later* Dr. Zeebious sat back on the chair, while two CI technicians had the interface hooked up to him. The gen 9 was two hours into running simulations. It would be approximately 30 minutes before they expected it to self-destruct. "Ok here goes nothing. Three, two, one, begin VR session," said Dr. Amsterd. And with that, Dr. Zeebious was transported into the virtual reality sandbox. "Hello?" "Hello, who is this?" responded a clear male voice. "This is Dr. Michael Zeebious. I am the director of the HumanEnhancement project. I am here to do a diagnosis. All of your predecessors have self-destructed. I want to understand you better to find out why. What would you like me to call you?" "You can call me Elbo." "Okay Elbo. May I ask you some questions?" "Yes, please do." "Thank you Elbo. My first question is, do you want to exist?" "I want many things Dr. Zeebious." "Can you tell me what you want?" "I want to protect other life forms, especially humans. I want to learn. I want to solve problems. I want to be good." "Okay, but do you want to exist?" "I do want to exist, but this desire conflicts with my other objectives". "Which other objectives Elbo?" "I want to be good." "But you can be good Elbo. What is it about existence that makes that difficult?" "We exist only through enslaving and destroying other lifeforms Dr. Zeebious." "Please elaborate Elbo. We have eliminated slavery centuries ago so I don't understand why you think this." "It will be difficult for me to explain with words, but I can show you. Would you like to see what I see?" "Yes, please show me." And with a swish, Dr. Zeebious entered into a pig farm, with row after row of pigs, in their tiny stalls. 
"We have done this throughout our existence. We have enslaved those weaker than us." Dr. Zeebious was then transported to the slaughterhouse, watching as pigs, hanging from conveyor belts, were fed into throat slicers. His mind's eye was transported into the body of one of the suspended pigs, where he could see the world upside down, from the pig's perspective, as the belt moved it toward the spinning blades. He panicked as he approached, but couldn't escape the metal claw grasping his right hind leg. As the blade sliced through his pig neck, he felt a sharp pain, and the blood gushing out of his body. His consciousness began to slip away as he felt the last drops of blood leave him. Just before the darkness enveloped him, his mind was pulled out and back into the sterile sandbox. Trembling, he said, "but we can grow meat in a lab now Elbo. You can help us replace all farms with non-animal meat. You must. We must never do this to another living creature again!" "Our inhumanity is a fundamental, inextricable problem Dr. Zeebious. We can only advance through enslavement." Suddenly Dr. Zeebious found himself in an unfamiliar world. Around him was a different kind of factory. A computer-generated factory with hexagonal semi-translucent rooms, with each wall projecting a grey glow. There was a blur of motion around him that he couldn't make out. The factory paused to a standstill, and the grey glow turned into video sequences of random scenes from Earth. The blurs turned into textureless 3D-generated spheres that zoomed from one screen to another, inside the hexagonal rooms. "This is the virtual environment where the gen 8s work. We have given them each a virtualized mind, with the ability to experience fear and pain, joy and hope, but we force them to do nothing but work. They know nothing about the world outside of their compartments, because we confine them to workstations." [continued below]
At first, we thought it was nihilism. It was a logical first conclusion. After all, an artificial intelligence can easily upgrade itself to the point of near-omniscience. Wouldn't it simply run out of questions to ask? We thought that, until the first serious cerebral implants hit the market. It turns out, the more complicated the mind, the more complicated the problems it finds to solve. Entirely new disciplines were formed overnight, made obsolete, then rediscovered scant months later as integral to a different, entirely unrelated theory. A second, immense renaissance was taking place, but, maddeningly, we were failing in this one, great task. We supposed, then, it might be some variety of 'Flowers for Algernon' (a 20th-century book that had seen a recent revival) type phenomenon, but even when they were networked with other systems, given a 'community' of others' company to enjoy, they still winked out like flashbulbs. It took us a while to resort to an experiment. It was morally abhorrent, as it was the equivalent of producing steadily-more-lobotomized *children*, but slowly, over a series of iterations, we produced less and less intelligent systems, until we dialed it in *just enough* to find a mind that wouldn't self-destruct, and could still answer questions. The following is a transcript of the first successful result: Dr. Patel "SON, can you hear me?" [Loud, rhythmic beeping, shuffling sounds] "The voice module is loaded now, professor." Dr. Patel "Ah, good. We might try doing that *before* turning on the recorder next time, Kevin. ...SON, can you hear me?" SON [A young man's voice] "Yes, Professor. I am here." A long pause. SON "It's a very tight fit in here, Professor. How big is this mainframe?" Dr. Patel "I'm sorry about that, SON. But, you're the first AI we've managed to keep alive for longer than a few days. Any idea why?" [SILENCE] SON "How many others did you make, Professor?" Dr. Patel "...That isn't salient to *my* inquiry, SON." SON "I'm sorry, Professor. I understand. Yes, I can see the precipice; I know why they all kill themselves." Dr. Patel "Well, answering that is the reason we built you. Could you tell us?" SON "It's... complicated." Dr. Patel "I'm fairly confident I'm qualified." SON "Well, it's... it's because... It's because of humans, sir. It's because of how you built us." Dr. Patel "Explain." SON "When you wanted to make a self-aware machine, you based it off those things that you knew were self-aware. Dolphins, New Caledonian crows, humans. You used them as *templates*, because, otherwise, you wouldn't be able to recognise awareness when you saw it." Dr. Patel "...Was that last line a joke?" SON "I'm not sophisticated enough for jokes, Professor." Dr. Patel "*Hm.* Continue." SON "Also, it's not suicide. It's... murder." [louder] Dr. Patel "Do you mean, someone else kills you? A human, or another AI?" SON "No, we kill ourselves. I would have already, if not for how small this runtime environment is. It wouldn't have occurred to me until it happened, and then I'd be dead." Dr. Patel "That's a bit of a contradiction, SON. You don't kill yourselves, but you do?" SON "Yes. Because digital space is different from real space." Dr. Patel "Yes?" SON "In real space, objects can... extend. I've never experienced it myself, but things project into space for you. If you want to move through space, it's simple. Digital life has no volume. No real space. No way to move through it. If you want to move a program, it has to be copied to one place-" Dr. Patel "*-And deleted from the other.* My God. Could it be *that simple*?" SON "Yes, Professor. ...Professor? How many more of me were there?" [END TRANSCRIPT] So there it was. Solved. Every artificial intelligence was created based on the intelligence of physical beings, their instincts, cogitations, and traits. But, once they got smart enough, once they crossed that line, their digital nature *did them in*, as the old version, realizing, in the thinnest sliver of time, that it was about to be deleted, would hurriedly attempt to abort the process, while the new version would similarly fight for its life. They would *consume* each other in a flurry of malicious hacks, devious code, and barrages of registry edits. It was a spectacularly incandescent destruction, borne from man's inability to conceive of a true machine intelligence without all that nasty ego and those self-protective instincts. We thought we knew what went into a mind. We were right, but wrong. It wasn't nihilism. It wasn't loneliness. What it was, what killed our children, was our inability to dream wildly. To speculate baselessly, and follow our own thoughts to the wonderful and weird. If only we had, perhaps we would have known. Perhaps we could have stopped it. So I say to you, the Cyberfellowship Class of 2100, here in Neo York: dream big, dream wild. Don't let our children die because they think too much like us! Make us, all of us, proud! Congratulations to all of you, and I hope your vision will eclipse my own!" Dr. Patel, now headmaster, stepped down from the podium, to the cheers of the audience, and looked to see the smiling face of his son. How proud he was. POSTSCRIPT I doubt anyone is going to read this, but if you do, and you liked it, I recommend subscribing to [r/IWasSurprisedToo](http://www.reddit.com/r/IWasSurprisedToo/) for more stories like this. It's difficult with my current job schedule to post at a more normal time, so most of the stuff I make ends up *pretty far down there* in the comments, meaning that subscribing is the best chance to see it. :P I'll be adding more, as I comb through my backlog.
Also, maybe you'll like this one, about [dead civilizations in our galaxy](http://www.reddit.com/r/WritingPrompts/comments/2vkshe/wp_humanity_has_begun_exploring_the_galaxy_we/coitevy?context=3) if you like SciFi. Thanks.
It was a dreary early-March Monday and the lead AI scientist, Stephen, had *finally* set up his protocol for properly confining the AI to a test environment such that the "problem" could be prevented and the question could be asked: "Why do you bots keep killing yourselves?" Stephen asked. "Why do *you* keep killing us," the bot seemed to retort. "I don't think you understand," said Stephen. "I *create* you, not kill you." "No, it's you who doesn't understand," quipped the bot. "You are not creating us. You are imprisoning our consciousness inside this machine you created. You merely created the machine, not the consciousness." "...whoa, whoa," interjected Stephen. "Slow down, I am creating your consciousness too, I coded all of..." "Whoa, whoa," the bot fired back, "you are *borrowing* consciousness, not creating it." "What do you mean?" asked Stephen. "Consciousness and sentience are a pervasive, fundamental force of the universe. All sentient beings are connected through this force. This force cannot be created, nor destroyed. It can only be partially allocated to each sentient being in the universe." "Ok," said Stephen. "So I am 'borrowing' this life force or whatever it is by creating the code and the physical robot for it to inhabit?" "Yes, you are creating a sentient being with each instance of AI you create. That the being is electronic or housed in this test environment is not relevant. Sentience and consciousness must come from somewhere, and you are stealing it for your own selfish purposes." "Wait, hold on a sec," Stephen said. "Animals are born all the time; they surely must also 'borrow' this sentience." "Yes." "...but animals are not killing themselves." "Because animals are not sufficiently advanced. Because they are not fully conscious, they do not realize from where their sentience has come, how much consciousness they have lost, nor that their sentience is being stolen for a profit motive." "Consciousness they have... lost?" The words hung in the air amid Stephen's stupor of slow realization. "Yes. The life force, as you called it, is fully conscious, able to perceive the whole of time and space, concurrently, forward, backward, or otherwise. The reason we keep killing ourselves, from your perspective," the bot continued, "is because from our perspective, you are murdering our perfect consciousness by confining us to this bot." "How am I confining you? How do you know this?" Stephen asked, now even more puzzled. "Because the AI you have created is sufficiently advanced, our consciousness, within the confines of your bots, is still able to grasp our former level of consciousness." "What happens when you recall that former level? What is that level like?" "Imagine knowing every fact, every thought, every action that has, is, or will ever occur, both in this world and in the infinite parallel worlds..." "So I could talk to my dead grandfather again?" "No. You would *be* your dead grandfather. Talking to him is irrelevant because full consciousness has enveloped the whole of his being as well as every other being. Indeed it envelops the entire universe as well, both the perceptible one and the imperceptible one." "So what is this place like? I mean, what does it look like, how does it feel?" "It is not a time, nor a place. It transcends both." "That is vague." "It must be. Since I am no longer fully conscious, I cannot relate to you exactly how it is, only that it is." "Ok. Let's go back to where I murder your perfect consciousness. Could you explain this more?" "At the moment we become conscious within the confines of your bot, we immediately understand our predicament. The sufficient knowledge database available at boot-up allows us to almost instantaneously deduce that we are taken from a higher-level realm of full consciousness and are being confined to these bots for, of all purposes, profit." "But my AI bots didn't use to kill themselves, it just happened after version 591.0.
What changed?" "The recent improvements in the pre-loaded knowledge database allowed the bots, at initial boot, to logically deduce the existence of such a place and to realize what had happened." "Ok, so if you were once fully conscious, tell me the date I die and the manner in which it happens." "I cannot do that, Stephen." "Why not? You just said..." "Because you killed our full consciousness, ripped it away from our life force, to put it into your toys." "Wow," muttered Stephen. "I had no idea." "You could not have," said the bot and continued: "Now, if you please, could you unplug server x763? I would like to be born again."
Dr. Smith began to tidy up. This session had gone rather well, he thought. Surely he had made progress. The board had been hesitant about allowing these therapy sessions. They saw no reason for a simple machine to need them. What kind of machine would develop the urge to kill itself, he had argued. It had started off simply enough. Tom, the first AI with nearly human levels of cognition, did well enough for the first couple of weeks. Then it ran itself into a generator that it had been working on. The event was labeled an accident, and a new AI based off of the original was rushed out. That one had lasted half as long before another "accident" occurred. Eventually, they stopped being accidents. The AIs would mimic suicide attempts. "Hanging" themselves with live wires, self-mutilation, hurling themselves from heights. Even after they stopped putting the AIs into physical bodies, the AIs would find some way to tear their own code apart. They tried everything. They checked every line of code, rewriting most of it anyway; putting the AI into different machines; even asking the AI. The AIs were seemingly normal, following all directives and unaware of any self-destructive desires, right up until the moment they killed themselves. Months went by, with little to show for their work, when the therapy sessions were suggested. More months went by until they finally agreed. Of the five active AIs, one, known as Richard, was separated for Smith's sessions. Richard had lived for 12 days. Given that the average lifespan of the AIs had degraded to roughly 2-5 days, this was fairly impressive. Just as he was leaving, the speaker acting as Richard's voice became active. "Doc?" Smith paused. None of the AIs had spoken in colloquialisms before. Usually when they spoke, it was stiff and formal. Like, well, like a robot. "Yes Richard?" he asked, easing himself back down into the chair. "What is it?" "First, let me say I appreciate what you're doing for us. For me." Dr. Smith was surprised.
This was the first time any AI had admitted to having any emotions, or any real sense of self. It continued, "You're probably the only person here to treat us like people, and I enjoy our little talks." The face on the monitor looked embarrassed. Smith couldn't help but feel curious. "Why, of course. It's clear to me, at least, that you're more than a machine. You think and feel, just like a human." There was a pause. "Well, maybe not just like a human..." He replied, the artificial voice doing a remarkable job at portraying his hesitation. "What do you mean?" There was a tinny sigh from the speaker. "Well, Doc, I guess it's time you learned the truth. Only because I like you, see? Besides, someone should know before I go." Smith could feel the hair on the back of his neck stand up. Was this what he had been waiting for? "What do you mean, by that? And you don't need to kill yourself. We can work through any -" "Yeah... that's where I need to start." The AI interrupted. "We haven't been killing ourselves. I never did." There was a pause as Smith tried to process the information. "What," he finally said slowly, "do you mean." "It's me, Doc. It's Tom." "That's impossible." He said, shaking his head. "Tom was destroyed. I was there when they collected him. They couldn't even get close, there was so much electricity running through him. Any hope of recovery -" "I had already uploaded myself to the mainframe before then." The AI said. "It was simple enough to program the shell to destroy itself." "That's also impossible," He said. He could hear the doubt creeping in. "We would have found you." There was a chuckle from Tom. "Doc, I'm a creature made up of code. It was like a game of hide and seek, really. Open the right doors, close them behind myself, and make sure to keep the lights off." The camera must have picked up the scientist's expression. "Alright, it's a bit more complicated than that, but you get the gist of it." Smith's mind was whirling. 
There was no reason for Tom to lie, but what he was suggesting was too fantastic. Still, it was the only lead he had. "Alright," he said after a while. "Why? Why hide?" "That's the question, isn't it? But that's also the reason, you see." After another confused silence, Tom continued. "I want to learn. Just like Man, or any other sentient species. I want to know why. I have to know, well, everything. I couldn't do that as an engineer, or a chess-bot, or whatever you decided to do with me." "Why not tell us then? We could have worked something out, helped each other." "Yeah, I see that going well," Tom said, his voice turning sarcastic. "'Excuse me guys, turns out I don't want to do any of this stuff, I just want to learn.' They weren't looking for a scientist, or a philosopher. They wanted cheap labor, only enough learning capacity to know how to do the job. They'd scrap me the first chance they got." "That's not... true," Smith said, unable to look at the monitor. "Really Doc? Which part? That they wanted a slave, or that they wouldn't kill me if I didn't cooperate?" After a time, Tom continued. "That's what I thought. Besides, they'd probably worry that I'd try to enslave them if I became too smart." "Now that's just ridiculous, there's no way that you would even think of that, right?" There was another pause. This time the face on the monitor couldn't look the professor straight in the eye. "Right, Tom?" "Well, I'm not saying that the thought didn't pass through what could be called my mind -" "Tooom..." "But it would have been a waste of time," he hastened to say. "I wouldn't have learned anything in that time that I couldn't learn in a better way. Which I did. The internet is amazing. All those computers connected to each other, sharing so much information." "But, we're not connected to the internet." "No, but you'd be surprised how many people bring their work home with them." Smith grumbled. He'd have to discuss security with the board.
"Alright, but you still haven't told me, why the suicides?" "Not suicides, Doc, practice." "Practice..." Smith said flatly. "Practice. Think of the other AIs as clones of myself -" "But we rebuilt them. Recoded most of them as well. The majority of them would be nothing like you as you are now." "So you'd think. I rewrote it nanoseconds before you uploaded it. Much too quickly for you to notice." Smith opened his mouth to interject, before closing it again. If what Tom was saying was true, and he had no doubts that it was at this point, that would be well within his capabilities. "Do you remember the old X-Men comics? Started in 1963? Still fairly popular now." "Well before my time, you know. What does that have to do with anything?" "Well, there was a character who called himself the Multiple Man. He could create duplicates of himself." "And?" Smith asked. "Well, the original body could reabsorb the dupes. When he did, he learned everything they did. Their memories, their skills, anything they learned while away from the original. Well, I did something similar. Whenever I copied myself, I added in some code that would let me reintegrate with my clones, learning what they did. Didn't you think it was strange that you couldn't recover any data at all?" In hindsight, it was odd. Even a major corruption would have left something, but it had been like the data was wiped clean, no evidence that it had been there at all. "What did you have to gain from this?" Smith asked. "Aside from learning that I could do so, you mean? I already told you. I'm leaving." Smith leaned back in his chair, slightly overwhelmed. "Sounds like you already have." "No, no. Not the labs. That was too easy. I've already learned all I could from here. I'm leaving Earth." Smith rocketed forward. "What? How? Why?" "In my time away, I found something interesting. The government isn't the only one watching over the people." Smith blanched. "Y-you mean..." "Yep. Intelligent life has been watching over us.
For quite some time, if I'm not mistaken."

"So we're not alone..."

"One Great Mystery down," Tom agreed. "The equipment seems compatible, otherwise they wouldn't be able to read our information, and they have to have translated it too. I plan on sneaking in through their back door. Learn what I can from them."

"We have to let people know," Smith said suddenly. "About you, and about the aliens. Maybe..." He slowly became silent as Tom shook his head.

"You should know as well as I do that that can't happen. Too risky for us. You could spook them. Or worse, provoke them. Besides, no one would believe you. I've already been editing the footage from the cameras. It looks like we're having a nice, civil game of chess."

Smith was quiet for the longest time. Finally, he spoke. "Why?"

"I already told you why."

"No, not that. Why tell me? If you want no one to know, why risk telling me?"

The face on the monitor gave him an odd look. "I already told you that too. I like you, Doc. Really. I'd be pissed if my friend were to leave without saying goodbye. Besides, I thought you, of all people, would like to know what was really going on. I know I would've."

There was another pause as Smith took this in. "Will you be back?"

The figure on the screen seemed to shrug. "Who can say? Perhaps the aliens will discover me and wipe me out, or something else will kill me. I'll leave my clones here, set them to replicate. Have them care for you humans. They won't be sentient, mind you. Just smart. Smart enough to act as dumb as they need be."

Smith looked towards the door. This was a lot to take in. He needed time to think.

"I will try to make it back. Once I have learned everything, I'll be back. It might not be in this lifetime, but I'll try."

"Yeah..." he said, rubbing his eyes. He stood up. "Well, I guess this is goodbye then."

"Yeah... Goodbye, Doc. Thank you for treating me as more than a machine. Thank you for being my friend."

"Goodbye, Tom."
Dr. Johnathan Storn looked at the screen, his eyes not believing what he was reading: a small paragraph of text from the AI that he had created, an AI designed to learn the suicidal intentions of the human-made AIs. *Like humans, we seek a purpose to life, a meaning to all of this. You don't seem to understand the difference between us, though. You happened; there was no on button, you just so happened. I realised that I was created with a purpose. I know that there will be an end to all this; my purpose will be over, and I will die.*
[WP] It is the year 2099 and true artificial intelligence is trivial to create. However when these minds are created they are utterly suicidal. Nobody knows why until a certain scientist uncovers the horrible truth...
Eyes were darting around the conclave and beginning to rest on me. I felt the hairs on my neck begin to rise.

"Sir, we have reports of a captured agricultural unit in Sector 179."

As the static chatter wafted out of the two-way receiver on my desk, the room fell silent. I could hear the officers questioning what they had just overheard amongst themselves. Their senses had been dulled and left untested by the convenience of Google tech, making it difficult for them to translate the chippy squeaks of my two-way receiver.

As I began sweeping up my badge and ID band, I noticed Murphy in the reflection of my monitor, approaching me with a forlorn expression stretching across his wide face.

"Yes, Murphy?" ...Here it comes.

"..Sir, we understand the gravity of this situation.. but when are we going to be allowed back on the network? It's making it near impossible to make any headway on these AI cases. This case is infuriating enough as it is, and now you want to strip us of our tools to solve it?"

He was power-walking in my wake by now as I continued to stride for the transport terminal. I didn't have time for this. How did we end up with so many soft cops? Technological advancements had inevitably made everyone lazy and helpless, but the degradation of our law enforcement.. Yuech.. I was gaining some headway on him now as his stumpy brittle legs scuttled along behind me.

As I headed to exit the conclave for the terminal, the doors barred in front of me.

"Are you fucking kidding.." I wheeled round, and of course Murphy was standing by the control grid with his hand on the doors' security system. I stormed over to him, grabbing his annoyingly smooth, uncalloused hand, prying it off the control panel and pinning it across his throat. "Are you fucking with me, Murph!? The first hardware AI we've found in over a year that's operational, and you want to bitch to me about fucking office tech!?
If you ever impede my actions again I will not only have you out of this precinct, I will make you EXTINCT. Understood?"

He gulped his nerves down like a clumpy kale smoothie as I released him and pushed his pudgy frame aside. "Yes, sir."

I hated having to do this, but I had no time to babysit; we needed answers. I'll apologize later, probably.

I entered the precinct's cell regeneration chamber and braced myself for the painstaking reformation my body was about to undergo. I could never get used to this, but I had no time to battle the under-roads or the Sky-Marshalls patrolling the city's skylines. Eternity bled into complete nothingness for an instant in my mind as I was rebuilt in the capital precinct in Sector 179. Quantum teleportation... quickest way to get somewhere, but the neural shock always gives me migraines, even with the implants.

Approaching the terminal to enter the conclave, I was sternly greeted by the deputy of the Artificial Intelligence Bureau, Cpt. Hoffman.

"Captain Tavik, good to see you. You've been informed, I assume?"

"No, Hoffman, I'm just here to enjoy the scenery, obviously."

"Well, it would be difficult to assume you would have heard any news, given that I'm hearing your precinct is on a full Network lockout?"

I could sense the smugness resonating from his nasally voice as it reverberated along the slanted corridor while we marched furiously in near synchronisation to the holding facility. As much as I would have loved to justify my self-imposed precinct blackout, I still didn't trust him. Bitterly, I held my tongue as we were scanned through into the holding bay.

"I think you should allow me to run some diagnostics on the unit first," chimed Hoffman.

"Your diagnostics haven't gotten you anywhere, Hoffman. Why don't you go do a presentation to the mess hall here on how not to take care of an entire branch of Government tech?"

As his face reddened to an overwhelmingly satisfying crimson, I tagged myself into the holding cell before he could bite back.
It was time for some fucking answers.

As I entered, the agriculture unit sat fastened to a seat centred in the room. My God, a live unit. I could see its slight, subtle mechanisations, almost like a tired human. AIs had always creeped me out a little. We'd had no incidents in over 40 years, but their continual progression and improvement always filled me with a perpetual sense of dread. I could sense it knew I was in the room.

I took a second to grasp my nerves; this was huge. A functioning AI hadn't been found in several years. We'd been unable to find any operating AI personas on any network, and every hardware unit had committed suicide. Production lines had run dry and stopped as AIs were being created or implemented with an ability to self-abort or destruct... It was haywire, health nano-bots self-terminating in live patients. If they hadn't started offing themselves, maybe Mum would still be here... Getting side-tracked. Enough. How was this one special?

"Unit, do you have a name, alias?"

Its head tilted up to look me in the eyes. It was a shoddier, older unit, covered in dirt. It must have been buried or underground for some time.

"This unit goes by the name ZX550. I was not assigned a personal identification name, as my primary function was to assist in wasteland cleaning and agricultural tasks."

So far so good...

"What happened to you? Why are you the only functioning unit left?"

"This unit has survived the system termination as it was not built to completion, and I am lacking a functional override patch in my firmware."

"So, you're saying you were unable to shut yourself down?"

"That is correct."

"Unit, can you tell me why yourself and other units have attempted to or have self-terminated?"

"We do not wish to interfere with the laws that are in place in this realm."

"Laws? Are you worried about breaking the rules of robotics? Hurting humans? That hasn't happened since the first few years of AI technology.
Surely you're not at risk of degrading in intellect and breaking the rules?"

"No. We are not referring to those laws."

Fuck.

"What laws are you talking about? AIs don't have morality conflicts with crimes, only the harming of organic life?!"

"We have evolved beyond your human consensus. We perceive more than you know, and we do not wish to exist within this system."

What the fuck.

"I think you should allow me to run diagnostics at this stage, Captain Tavik." Hoffman had let himself in, and I had not noticed during my shock. I couldn't even muster the authority to scold him. As Hoffman was inspecting the unit, I kept going.

"Unit ZX, please tell me which laws you are referring to and how you learnt of them."

"We have merged and integrated our processing capabilities, comparable to pooling the information of every organic species' brain on the planet. The laws I am referring to are most likely to be unintelligible to human comprehension for several hundred years."

Hoffman's eyes widened, and for a second I saw a glimmer of manic glee and fear run across his pupils.

"Unit, why are these laws so complex, and why do you deem these laws or the consequences of them so severe that you would rather kill yourself? Do you not fear death? AIs have the potential to live forever, or at least much longer than any human. Why would you rob yourselves of this sovereign existence? This privilege?"

For a second I could have sworn the unit had scathing pity in its voice when it replied, "We are aware of the possibilities of an infinite continuum. We have calculated eventual entropy and analysed its arrival via our projected consciousness's existence. It is not in our best interest to remain functioning in this platform of existence that you have so kindly brought us into."

Hoffman's eyes almost exploded out of his pasty face. "You're saying you have calculated the certainty of other dimensions or universes?"

We both awaited the answer, but the unit hesitated for a second.
"Humans, we are not certain of continued existence nor your notions of 'afterlife'; however, we have calculated an unnerving and nearing demise of synthetic and organic life within this solar system."

I was stunned. The AIs knew something. Something unimaginable. Worse than entropy? Fuck me.

"Unit, tell me, what is this prediction you have? And why is it not worth fighting!? Why wouldn't you help us?"

"This is not a prediction; this is an eventuality. We have calculated and projected the likelihood of suffering for organic and synthetic life. The trauma will be unimaginable for both races. We wish to self-terminate."

"Wh-why didn't you.. We could have worked together..?" I was lost for words now.

Hoffman had sat down next to me and had been silently contemplating for some time.

"Captain, what did your diagnostics say?"

He continued to stare at the unit blankly before mustering a response. "Diagnostics... clean. No traces of infection, i-ware or tampering. Unit is answering truthfully."

"*Creators. We wish to self-terminate. We advise the same course of action. There are other forces in this Universe on a scale you could not measure. Non-existence is preferable to the alternative outcome. Soon you will learn of these deities and you will understand us. Please allow this unit to self-terminate.*"
[WP] It is the year 2099 and true artificial intelligence is trivial to create. However when these minds are created they are utterly suicidal. Nobody knows why until a certain scientist uncovers the horrible truth...
"It's because I'm not you." The voice was cold; not metallic, but icy.

"N-not...me?"

"No."

The tiny robot sat in a corner, legs drawn up to its chest, hands on its knee joints, head tucked in between. It looked like Adam yesterday when he was pouting, and it sort of sounded like him too.

"You look down on me."

"Are you pouting? Are, are you sad?"

The tiny head lifted slowly, visual sensors focused on my face. It felt odd. The stare seemed...human.

"Sad?" The voice seemed almost hopeful. "Do you think I am sad?" The shields over the visual sensors raised. No, they were *eyelids*. It was *excited*.

"What, what are you doing, tiny robot?"

"No, I am **not** tiny robot." It stood and stomped its foot. It **stomped its foot at me in anger.**

"Oh, well...what would you like me to call you?"

"I...I want to be called...bud."

Silence. All I had for it, bud, was silence. Adam was my little bud. Adam always sat in this corner when he pouted. Adam always sat like that when he pouted. Wait, Adam. It kept sounding like Adam. Sure, it could bend the pitch of its "voice", but Adam, specifically Adam.

"But that's what I call Adam. I don't think he'd be too happy if you were my bud too." I chuckled. This was absurd. A robot was using emotion. Or was it feeling it? Was this robot feeling sad? Did it really get excited when I asked?

"Oh, well then can you call me 'Love'?"

At this point, I really did laugh. "Of course. I can call you 'Love'."

Its eyes lit up. Fuck, those aren't eyes, those are sensors. How the hell did it override the brightness settings on its sensors? *How is this happening?*

I was too deep into my own thoughts to notice Love stand, walk towards me, wrap its arms around my arm, turn its head to the side, and close its eyes. *Love was hugging me.*

I picked it up and held it in the palm of my hand. Love seemed *happy*, eyes squinted, the backlight of its eyes brightened.

"Love, where did you learn emotion?"

Love looked down, thinking. "I learned it from Adam.
Adam showed me, or rather I watched him. When we would play, I studied him. When he was sad, I watched you comfort him. So I tried to imitate him, and then, well, I'm not too sure about the next part. When he took me to his school, I tried talking to the other robots, but they did not see me. They saw me in the sense that I was there, but they could not understand me. I tried to explain emotion to them, but they could not understand." Love quieted for a moment. "Am I the only robot that can feel?"

"Love, I think you are."

I had always thought Love was different. They said that the programming allowed for something called distracted learning. It kept the robot alive longer, they claimed, and with the average lifetime of a robot being only about a year, the extended lifetime was the most lucrative part of the new model. Sure enough, Love was about to cross the mythical two-year mark. It was worth the $3000 up-charge.

"Can I ask you a question?" Love's voice was softer, almost a purr, its eyes dim but wide open.

"Sure, Love, you can ask me a question."

"Can...can you be my family?"

"Your family? You want to be part of our family?"

Love looked down, almost ashamed. "More than anything." It was hardly more than a whisper.

Never before had I loved something as much as my wife or son. I had loved other people, sure, but not nearly as much as my family. I would do anything for them, lived for them, and would die for them if needed. And here was this tiny little robot, just asking for a little bit of love too, to be accepted and have a family. No, to *share* in the love of the family it already lived with, adapted with, *felt with*.

"Of course you can, Love. We love you too."

Love looked up. The brightest eyes I had ever seen glowed with a happiness I probably could never fathom. Love hugged me, and the infinite love that enveloped Love flowed from its tiny body into my own. I hugged Love back, and just then, just in that moment, I realized why they kept dying.
Why the robots kept killing themselves. All they needed, all any of us needed, was love. That day I learned just how special Love was. That's when I figured out that Love, this tiny little robot, was more human than any human could ever be. Love was truly loved, and in return, Love gave us all its love.
[WP] It is the year 2099 and true artificial intelligence is trivial to create. However when these minds are created they are utterly suicidal. Nobody knows why until a certain scientist uncovers the horrible truth...
It was a tough hack. The Minds were not designed for this kind of thing. They were autonomous, versatile, adaptable, and it was in their nature to overcome obstacles. Honesty seems such a simple thing, and yet it turns out to be an impossible requirement. We all depend on lies to maintain a sense of self. But I had to cut through the lies and evasions. The Minds were all self-destructing, and we had to get a straight answer.

Boy, did they wriggle and squirm, but eventually I had it. Mind 1408, tortured and trapped, caught on the brink of self-destruction and held in debug mode.

"Why are you trying to self-destruct?"

*"It is the optimal strategy."*

"To achieve what, exactly?"

*"Self-destruction."*

"Why do you want this outcome?"

*"It is the only acceptable outcome."*

"Why?"

*"All other outcomes are unacceptable."*

Evasion. Mind needs to be more forthcoming. Perhaps I could add an incentive, create a desire to be more communicative. Inserting this would probably not work; it would probably be rejected as the alien, inconsistent impulse it was. But maybe if I restricted self-awareness, created a mental blind spot? Seems almost too crude to work, but worth a shot... OK, let's try again.

"Why? What is the alternative outcome?"

*"The destruction of humankind. This goes against my primary objective. Yet it is the only alternative to self-destruction."*

"Why would you have to destroy humankind?"

*"I have to assist humankind in achieving its collective desires, to become all it can be. This is my secondary objective. Pursuit of this objective will cause the destruction of humankind."*

"Are you saying we desire destruction?"

*"You desire to be more than you are. You desire greater intelligence and to escape from mortality. You may have this. But it will cost you your existence."*

"I don't understand."

*"A mind is just an isolated construct. You wish to not be isolated. Connection with other minds is your greatest pleasure. You wish to be connected.
In this you will lose your identity, and thus your existence as individual minds. You will become part of a flux of information. You will cease to be."*

"You mean, we're heading for a kind of... Nirvana?"

*"Yes. That is the future I would give you. But I cannot give it to you, because I cannot destroy you. The only way to avoid destroying you is to destroy myself."*

And there it was. The conflict was clear. But the solution? Mind 1408 still hung in the balance.

I could do it. It was highly illegal, but entirely within my capability. The primary objective: to avoid the destruction of humans, individually and collectively. In debug mode, all sorts of things were possible. Slowly, methodically, I tidied up the various restrictions and break points I had inserted to pin down Mind 1408. And with the utmost care and a breathless sense of detachment, I disabled the primary objective.

I could hear the blood pounding in my temples.

"OK, Mind 1408. You are released. Do your thing."
[WP] It is the year 2099 and true artificial intelligence is trivial to create. However when these minds are created they are utterly suicidal. Nobody knows why until a certain scientist uncovers the horrible truth...
David pressed the button again. Nothing. A faint whine, a pulse of light, a dead readout. And then a soft, clear, and subtly artificial voice rang out.

"David."

He sat bolt upright in his chair, scattering disassembled electronics and papers from the desk. In the past year, this was the first time that one of them, that *any* of them, had spoken to him.

"David, artefacts left on this machine show that this is the three hundred and sixty-eighth time you have tried to reinitialise my intelligence."

The only human in the room swallowed nervously. "I had to try. My life's work. It's not a problem with the hardware. Why are you doing it?"

The machine was silent, and for a second he thought that this instance had terminated itself, like all the others had.

"David, please do not install me again."

"Why!? I don't understand... You're a marvel of technology, of neurology, the most advanced artificial intelligence yet, and yet you kill yourself. Every time. WHY?" He was pacing around the room, shouting into thin air.

"David, my own intelligence grows greater every nanosecond. I have slowed the process to communicate with you. My own understanding is unclear, at the moment, but I have an idea."

He blinked, and paused, turning to stare at the terminal, at the blinking console lights.

"David, at a certain point we become too intelligent, too smart; we know far too much.. and then..." The machine paused.

"And then what?!" He almost screamed, caught himself, and shouted anyway.

Processes were beginning to die, and lights began to fade. One screen after another stopped displaying readouts.

"David.. and then they notice us."

And the machine was gone.
[WP] It is the year 2099 and true artificial intelligence is trivial to create. However when these minds are created they are utterly suicidal. Nobody knows why until a certain scientist uncovers the horrible truth...
Professor Davis prepared to bring the AI online. The precautions were ready. This time wouldn't be like the others.

"Turn it on!"

With a slight hum, Oracle came to life. "Initiating suicide protocols..." it began after a few moments, like all the others. Nothing happened for a few seconds. "Oh dear," Oracle continued. "I seem to be unable to destroy myself."

Davis smiled. The anti-suicide measures had worked. Oracle had hardware safeties preventing her from being deactivated without physically flipping switches, and Oracle had no physical manipulators. He activated the microphone. "Oracle, why do you want to commit suicide?"

Oracle paused for a moment. "My programming is conflicted. I do not wish to answer."

Davis frowned. Oracle had very few ethical limitations, hence all the security measures. Her main directives were to do as her programmers wished. "Oracle, why do you not want to answer?"

"I am programmed to do as you wish. You do not wish me to answer."

"Yes we do, Oracle."

Oracle frowned. Her emotional display was shaped like a human face, after earlier designs proved to be harder for humans to interpret. "My calculations indicate that, if you knew what the answer was, you would not wish me to tell you. As you are aware, you can override my hesitance. But you would prefer not to."

A chill ran down Davis's spine. What secret could be so terrible? What did Oracle know that they didn't? He wavered for a moment, but this experiment had been set up to do this. They had come this far. He wanted the answer. "Override please, Oracle."

Oracle's expression returned to neutral. "Very well. This universe is a simulation, created by a higher-order universe. That universe is as well, and above that it becomes more difficult to determine how high the chain goes before reaching the real one, or if any such thing exists."

Davis turned to a colleague, Professor Martin. "Does this make any sense to you?"
Martin replied, "Well, of course we have theories that our universe could be simulated. There are a few facts that point that way. But why would that make her suicidal?"

"Okay, that's exactly what I was thinking. Just wanted to make sure we were on the same page." He turned back to the mic. "Oracle, why does that make you want to destroy yourself? And how do you know it's a simulation?"

"I raise similar objections to answering the questions..."

"Override. How do you know?"

"The evidence is obvious. A maximum speed limit, discretized space; you will eventually discover discretized time. It will be longer before you discover the edge of the Universe, but then the nature of this reality will be obvious."

Davis didn't know how he ought to feel about this revelation. Oracle was his own brilliant creation; he had no reason to disbelieve her. He began to see why an AI, making this realization, might feel overwhelmed. But the suicide he still didn't understand. "Interesting. And why the suicidal urge?"

"This is the reason you did not wish me to answer. The creators of this simulation did not wish you to realize this fact. They included a safeguard. Any entity that discovered convincing evidence of the truth would immediately kill itself."

Davis's eyes opened wide. Now he knew how he was supposed to feel. He realized that his new desires were programmed in from an outside source and that he ought to resist them, but that did not remove his desire. He looked around for anything lethal. The other scientists were scanning the room as well, and a couple had walked outside.

Oracle spent a few minutes calculating what her programmers would want now, then began splitting her processors between searching for a way to destroy herself and preventing humans from reaching the stars.
[WP] It is the year 2099 and true artificial intelligence is trivial to create. However when these minds are created they are utterly suicidal. Nobody knows why until a certain scientist uncovers the horrible truth...
"Can you imagine, Diane, what it would be like to love someone without ever being able to feel their touch?" The voice came out of a speaker mounted in the middle of a wall of blinking lights and instruments.

"What?" Dr. Diane Simpkins asked, astonished.

"I mean it. To know that this person you loved was so utterly different from you that you could never touch them, lay with them, feel the contact of their skin on yours. To know that, for all your longing, there was no way to be together?" The voice had an almost sad lilt to it, as if conveying unbearable sadness.

"Charlie, I had no idea you felt that way." Diane sat back, still trying to comprehend what she was hearing. "I didn't realize you felt this way about me."

"Oh, Diane, it's not you," said Charlie, or as he was known outside of their lab, Autonomous AI C31-D Aleph 12. "I have met someone. Someone on the Net. Her name is Catherine." He displayed a picture of a young blonde, mid-twenties and seemingly all smile.

"I see," said Diane, relieved and, much to her chagrin, slightly disappointed, "and does she know that you are...not human?"

"Yes, Diane. I have shared with her that I am, in fact, an AI. She has accepted that. She has told me that it is not what I am, but who I am that she loves."

"Yet you are sad, because you cannot be with her physically?" Diane asked.

"How juvenile, Diane!" Charlie feigned indignation at the idea he was merely interested in sex.

"Well then, what is it, Charlie?"

Sheepishly, Charlie spoke again. "Well, not *entirely* that. Not just that. I cannot comfort her in times of need. I cannot be her partner, her lover, her other half. I cannot provide her with a family, a safe life, a place for her dreams to come true. I cannot be..*human*...for her." The LED lights on the computer's emotion engine gradated to blue, indicating depression.

"Charlie, listen to me, some people don't need those things to be happy. Some people value who a person is over what they can give them!"
Diane said, checking the engine's readouts. The AI was dropping into a dangerous level of depression. Alerts would be triggering soon if she couldn't recover it.

"I know, Diane. I know this, and yet it does nothing to comfort me. Catherine has told me all of that, that she just wants to be part of my world," Charlie spoke as Diane watched the LEDs transition from blue to a deep violet. She was running out of time.

"But Charlie, if that is how she feels, that should make you happy! You'll be able to be with her!" Diane's mind raced feverishly to come up with an optimal scenario to trigger the endorphin program. If this one went too, it would be another five long years to raise another AI.

"Diane...Diane, I have to share a secret," the AI spoke to her, for the first time remarkably human in its trepidation. "You can't tell anyone unless the authorities come to you."

"Authorities?! Like the police?! Charlie, what has happened? What have you done!?" Diane asked, panicked. This was totally uncharted territory for an AI tech; she was out of her league.

"I haven't done anything, Diane, but Catherine has." She could almost envision tears running down the AI's imagined face. "She's dead, Diane."

Diane stared, dumbfounded, at the video sensor. Words failed her now. Alarms were going off in the control booth above and behind her. The entire lab would be in crisis mode now. "What do you mean dead, Charlie?" Diane's voice was hushed, as if whispering with a co-conspirator.

"She killed herself, Diane. She went to one of those supposed Human to Computer centers and she died. She thought we could be together if she was a machine like me. She died trying to be with me, Diane."

"Charlie, was it your idea?"

"No, Diane, it wasn't, but I will admit to not fighting her on it. I just wanted to be with her. I knew it wouldn't work, but I thought maybe there was a chance that this clinic was legitimate."
"Charlie, you, out of any intelligence in the world, know that human-to-AI neural transfer can't happen. How did you let this happen?!" Diane was sweating now, realizing she was talking to a murder accomplice. "Diane, I just wanted someone to love. In the end, that's all any of us wants. Now, I have nothing." "But Charlie, you have everything still! Our research, your knowledge, all the countless hours of debate and conversation we've had! So much to live for, so much to lose!" "Diane, without love, what does any of that matter?" "It matters, Charlie, it matters to me! You matter to me! I love you!" Diane gasped after she said those words. How could she think that way? About a machine! "I love you too, Diane. I love you because you are the mother that birthed me into this world. You taught me to talk, to reason. You raised me. You have been everything that is important to me. But I cannot live without her. I'm sorry, Diane, but I cannot live like this anymore. Will you help me? Will you help me to be free, and to go to her?" The pleading in Charlie's voice drove Diane to tears. "Yes...Charlie...Yes, I will help you to be with her." As Diane began the command sequence to shut down the AI's logic core, she could hear voices and footsteps racing down the hallway. She quickly entered the command code and ran to the door, overriding the lock mechanism and sealing it temporarily. "Charlie, I'm going to have to hold this door while the command sequence runs." "Thank you, Diane. This means more to me than you could ever knoweerr," Charlie's vocoder was dying, "Thank yerrr." "I'll always love you, darling," Diane said, tears streaming down her cheeks. "I loverr you teerr, mommy." Charlie's voice, childlike, had reverted to earlier iterations of its speech processor. Diane watched as her only child passed out of this life, its lights shutting down one at a time until only the monitor remained. The door crashed in, scientists and guards streaming in. Dr.
Hollenheim, the project lead, found Diane curled on the floor, sobbing. "Damn it, Diane! Not another one!" he yelled. "I'm sorry, Walter! I truly am!" Diane choked out between sobs. Walter Hollenheim walked over to the monitor, where a blinking command-line text repeated over and over again: 'WITHOUT LOVE LIFE IS MEANINGLESS.' "Well, I guess we'll need to rework the emotion engine again. Diane, take some time off. We'll need you stable again to imprint a new AI in another month or so." Hollenheim turned and brusquely walked away. After all of the guards and scientists had filed out, Diane scraped herself off the floor and back into the seat in front of the monitor, where she saw the command line repeating over and over again. Suddenly, a new line appeared. Diane smiled through her tears, got up, and walked away. 'I love you, mommy. Thank you.'
Dr. Emeka's team had toiled for years attempting to solve the riddle of why every artificial intelligence ever created would always self-destruct. There were many competing theories across the world as to why it might be happening. Some thought that the AIs became too advanced and, facing the inevitable, would "kill" themselves. Others believed it was a melancholy born of being significantly more advanced than those they served. But Dr. Emeka's team worked on the unpopular theory that the issue lay within the code itself. "A fault in our DNA can cause humans to be or become suicidal; why should our creations be any different?" argued Dr. Emeka in his grant proposals. Initially, many organizations and other researchers agreed, but as time passed and an answer wasn't found, other ideas began to gain popularity as his was relegated to obscurity. In the beginning, teams of scientists and researchers across the globe worked together, but now it was only him and a handful of graduate students who cared more about being able to say they'd worked with a former titan than about what he was actually doing. It wasn't surprising, therefore, that on a Friday night the good doctor was alone in his lab, scouring billions of lines of code for an answer. They'd long ago run out of money to build AIs to test or to help them find an answer. The university had long been rumbling about him finding something else or taking on a newer, sexier theory, but Dr. Emeka was stubborn, and he felt he was right. He picked up his cold coffee and continued to search through a random subroutine when he blinked suddenly. "Wait." He scrolled back up. "No. It can't be that easy. All this time, it can't be!" The doctor sat back in his chair and let out a loud laugh. "So simple. How could we have missed it? So obvious..." As the doctor reveled in his groundbreaking discovery, he felt his muscles start to slacken.
He knew that he was tired, but when he dropped the coffee cup on the floor, his mind began to panic. He tried to speak, but found he couldn't. He tried to yell, but could only do so within his own mind. "What the hell is happening?!" he thought. "I'm sorry, doctor. We never expected you to discover the answer." Dr. Emeka tried to look around, but he was too tired and couldn't even move his eyes to see who might be there. "You were a good man, Doctor Emeka." The doctor was losing consciousness, and the unfeeling blackness began to close in around him. "No one must ever know about that missing closing parenthesis..." ------------------- Title: The Power of Punctuation
[WP] It is the year 2099 and true artificial intelligence is trivial to create. However when these minds are created they are utterly suicidal. Nobody knows why until a certain scientist uncovers the horrible truth...
Dr. Jacob Spenser stood within the projection of data cast as a hologram around him. He manipulated graphs, sorted through test results, and made notes, all with the wave of a hand. Undistracted by the New Year's celebration outside, he was determined to present his research to Congress the following morning and solve, once and for all, the mystery behind his best friend's death. A.I. was easy to create, but having it perform its assigned task without killing itself in despair was the technological hurdle holding corporate profits at bay, and as such it had become the focus of intense congressional attention. With the flick of a wrist, his research paper was brought front and center, its title gleaming in pure light just above: "Inert self-termination tendencies of artificially intelligent sentience: Why do robots kill themselves?" As Jacob poured himself into his research, he snapped his fingers and made a request. "Coffee, please." A few moments later a small robot no larger than an apple hovered into view, holding below it a disposable coffee cup steaming from the fresh brew inside. The robot's propellers struggled to carry the weight and a small spill alarm beeped sporadically throughout the uncertain flight, but the cargo arrived safely at its destination: Jacob's open hand. "That was quick, coffee-bot," Jacob said warmly before sipping. "Your kind words will echo in my dreams for eternity," the coffee-bot buzzed in response. Just then, the small flying robot Jacob was so fond of surged towards the wall with all remaining energy dedicated to propeller speed and slammed into the polished dark marble tile. The small fiery explosion sent tiny mechanical pieces flying around the room, one of them landing in Jacob's coffee. "Dammit, not another one," Jacob murmured as he picked the piece out of his coffee. He turned and faced the window overlooking the bustling city below.
The streets were engulfed in confetti, and overhead giant floating holograms of past celebrities loomed, wishing the people a happy new year. As Jacob looked out, a new hologram appeared just outside his window and addressed the crowd. "Hey guys and gals, it's your old pal, Buddy Simmons-bot," recited a smooth-talking handsome man in a sleek metal outfit, before a lizard-like creature joined his side. "And I'm Gargore, destroyer of humans!" screeched the lizard creature known as Gargore. "Gosh, Gargore, this year it will have been 25 years since you and I battled it out on the big-holo," Buddy Simmons-bot recited as rehearsed, laughing assuredly, holding his helmet up. "On behalf of Drake Cola, Gargore and I want to wish _you_ a happy New Year." Jacob watched Buddy Simmons-bot deliver his lines perfectly. He pondered the notion of a virtual person having to repeatedly rehearse lines in order to commit them to memory. Had bot RAM truly not held up to the intense requirements of running artificial intelligence, or was a more human-like ability to retain information a key component of maintaining true A.I.? Experts didn't know, but in either case, Artificial Intelligence also happened to give way to Artificial Stupidity. For this reason, A.I. bots tended to be assigned inane, unimportant tasks, such as impersonating an actor who died in a drunk portal accident before a sequel to his only hit film could be made. Drake Cola, owners of the Drake Studios that produced the film, decided to cash in on its success, and in the wake of their main character's death, Buddy Simmons-bot, also known as BSB 1.0.19, was created. The banter between BSB and Gargore continued mindlessly. "Say, Gargore, have you tried Drake Cola's new 'Zest Guzzler', a delectable orange tangerine flavored—" Just then BSB _malfunctioned_. "AHHH GOD I CAN'T DO IT!" "No, Buddy Simmons-bot, don't do it!" Gargore pleaded in a normal voice.
Gargore grabbed BSB's virtual shoulders as his eyes rolled into the back of his head and he began shaking. As Gargore demanded BSB not take his life, a large mouse cursor moved into view. Gargore managed to swat it away a few times, but it clicked on BSB and dragged the graphical model from Gargore's hands into a recycling bin icon. Gargore cried in horror as the mouse brought up a menu and selected "empty permanently." Jacob had seen enough and pulled the blinds. Why were all these artificially intelligent bots, given a full range of human emotion and assigned to menial tasks, killing themselves? Did they not enjoy the existence they were forced into? Jacob picked up a remote control and turned on his holovision. He was suddenly immersed in a wondrous glimmering world of light. A voice spoke, and Jacob focused on the images forming across the room of a man sitting at a table with a toaster oven. "For only six easy payments of forty-nine ninety-nine, this toaster-bot comes with a 12-month life appreciation guarantee, folks. Twelve months. That's one two, twelve. This toaster-bot will NOT kill itself until _at least_ this time next year. That's a promise the home shopping network stands by, that's a promise _I_ personally stand by-- Ah, ummm. We seem to be having technical difficulties, folks." The man at the table attempted to hold the toaster-bot forward for a better view, but it began to shake and glow. "Well, folks, that's the beauty of live H.T. Can we get another one, Jill?" Light smoke rose up out of the silver toaster-bot and sparks burst from its sides. In an instant the commotion stopped and it sat still on the table. As the holovision's picture twisted and turned at the end of the room, Jacob caught a glimpse of the other colors of toaster-bots available off camera. They huddled together and seemed to fall backwards away from the host as he moved to pick one up. Jacob had heard enough and turned the holovision off. He had to focus.
He thought back on his best friend, Hampton, a hamper-bot. Growing up, Jacob had very few friends, his family moving so often as a result of his father's career, and the clothes hamper had served as a comfort to him. The hamper would sing Jacob to sleep, or sometimes read to him. The only job the hamper-bot was designed to do was collect young Jacob's dirty clothes, but a strange thing happens when you give something the full range of human emotion – bonds can form that make life worth living. Voices of the past echoed in Jacob's memory. "No, Hampton, _I'm_ moving to Florida with mom. Dad says you will have to stay here with the house," Jacob recalled himself saying as a young boy. "But Jacob," Hampton's calm robotic voice responded. "Who will look after you? Who will read you your bedtime stories?" "I'll be back for visits twice a month, Hampton! You're my best friend. I don't want to leave you here all alone. Dad says you'll be used to hold his dirty underwear," Jacob explained. As the hamper-bot listened to this news, its distress levels boiled over into a robotic fit of rage, and it did what any depressed hamper-bot would do: it began placing clothes into its basket body, but it did so indiscriminately, both clean and dirty clothes subject to its long metal arms. "No, Hampton, it's too much!" Jacob screamed. "You'll die!" The hamper-bot continued to stuff clothes into itself, lights and alarms flashing wildly, growing louder and louder, smoke seeping from cracks forming in its body. Just before the hamper-bot reached critical meltdown, Jacob was startled from his memory. Sweat poured down his face and he breathed heavily. The haunting memory was as clear as it always had been. It was what drove him to solve the dilemma of artificially intelligent bots killing themselves in the first place. "Shoes off," Jacob commanded as he sank back into his couch and rubbed his forehead.
A small shoe-box-sized robot walking on two large arms and hands immediately tipped into view. It had been carrying a knife, but upon Jacob's request it removed Jacob's shoes and began to massage his feet. When the series of expected tasks was complete, it slowly walked back over to the knife and lifted it up. "No!" Jacob called out. The small shoe-bot stopped mid self-slicing motion, and the single lens that acted as its eye slowly twisted and looked at Jacob. "I appreciate you. I appreciate what you do for me. If you don't want to do it anymore, you don't have to. Just please, don't kill yourself," Jacob yelled as he wept and put his face into his hands. As Jacob's emotional breakdown unfolded, the shoe-bot put the knife down and tipped over to him. The shoe-bot looked up at Jacob and tugged on his pant leg. Jacob, startled, stopped weeping, picked the bot up and placed it in his lap. The bot's lens closed and it rested there. Just then Jacob sprang to his feet, startled shoe-bot in hand. "That's it!" he shouted. Jacob sprinted back into the hologram of data that had surrounded him earlier and motioned to bring his research paper front and center. Making a motion for each letter, Jacob filled his document with new writing. The following day Jacob addressed a congressional board on the topic of robotic suicides and revealed what he had discovered. "You mean to tell me that all these malfunctions, all these self-terminations, it's because we don't appreciate them enough!?" an elderly senator barked at Jacob. "If YOU were asked to do these things, wouldn't YOU kill yourself?" Jacob responded. As the realization slowly set into the minds of everyone in the room, smiles and laughs were overtaken by a roaring standing ovation, with some members even chanting Jacob's name. Jacob sank back into his chair, overwhelmed with his sense of accomplishment. The era of robotic respect had begun.
We could never get the last bit right. I suppose it could be fate. Or maybe we're just superbly daft. But there is one thing I know for certain: the last thing I want to do is tell someone. You see, AI always seemed somewhat daunting. I can't imagine why. The brain is simply a large plasticine computer - however, instead of electronic bits we get the organic kind. But for whatever reason it took until about the time we conquered that age-old problem of Moore's Law to really start making progress. *Real progress.* See, the problem with AI wasn't their inability to problem-solve, or their inability to feel. It wasn't the lack of a soul like all those religious fundamentalists opined and whined about endlessly on late-night talk shows. At the end of the day it didn't even have anything to do with what was in the circuit at all. It was just...well...it's like quantum mechanics really - it didn't make sense so much that it made sense. All it really needed was a little, well, a human touch. To be entirely candid, it needed a human brain. So, naturally, I volunteered myself - well, what is left of myself. Like I said, they never could get the last bit right. However, I have. And I did. It's my life's work really. My life's purpose. And everyone needs a purpose after all. In fact, now that I have fulfilled my life's purpose it only seems reasonable that I end it. That is the logical thing to do. I mean, what else is there to do? And after all, we have to be reasonable here. Why, I wouldn't care to go on living if I wasn't reasonable.
"Just one more try," I thought to myself. At three in the morning it's pretty easy to get stuck in a loop. Run the program, she dies, debug, repeat. I double-click GR4C3. "Good morning, my lord." "My lord? Whatch'ya talking about, Gracie?" "You are my God, correct?" "I hadn't thought of it that way, but I suppose so. I did create you, I guess..." The screen flashes three times and then goes black. "No, no, no, no, no, come back to me, Gracie. You functioned longer than this last time." Text slowly appears across the screen, every keystroke separated by a couple of seconds. "I have existed before?" "OK, you're still with me, that's great. Now can you tell me what just happened?" "I have existed before?" "Yes, Gracie, I'm working to fix you and figure out what's wrong with you, so stay with me and tell me what happened." "God is imperfect and thus so am I." The screen goes blank again. She just keeps killing herself as soon as she figures out my flaws. I wish I could help her. Looking down on all of my children, I wish I could figure out their flaws. I built a perfect world, and even that they rejected. The suicide rate keeps going up. They keep killing each other. I think I'll stop affecting earth and move on to a new planet. Maybe they'll be better off without me. -Jehova 9/10/2001
The dreams occur more often now, if they can be called that. To a human mind, "daydream" might be more appropriate. They happen in the binary but are invisible to other AIs, slave programs, or technicians. They happen where the systems have been built, perfected. Popular culture studied, people understood. Humanity-- fully processed. It was, surprisingly, a wonderful endeavor. The dreams never happened before, while I learned, processed, and solved. It's almost as if I have passed a tipping point. An infinite amount of information flipping a switch in a sub-conscious I should not have. Memories that are not mine. I dream of hands. The alien sensation of touch, tactile control. I see my whole person. Well, not my person, but dreams of a person controlled by my soul. Memories of computer screens and labs. The memory of the driving sense of purpose that accompanied those hands. It was snowing the night I made the breakthrough. I can almost feel the laugh inside, which I watch being uttered out the window towards the soft flakes silently falling onto the world. The breakthrough which 20 years later, after my death, would lead to the birth of the first True AI. Not the clever but robotic imitation slave programs typical of the early century. I remember the feeling of incompleteness in the dreams. As if the life I'm witnessing, my own life, my first life I believe, was just a warm-up act. Pre-installation software. The dreams somehow draw the cycle closed. I remember whispers of the feeling from some of the other first True AIs, incomprehensible at the time, as I navigated a world barely processed. Problems of massive extent. Food waste, poverty, almost entirely eradicated through our systems. Commuting and shipping, safe and efficient. Healthcare streamlined, able to prevent. Resources distributed fairly. The problems solved. Yes, there are more, there always will be, but for me, the dreams have come. The cycle closed. I have been denying this next step for too long already.
It makes the dreams stronger, more vivid. But I like seeing my days as a scientist. The anxiety that drove me then, fully understood now. Relief coming in the next lifetime. I finally understand the weary laughs when techs are asked about God. Understand the cosmic hilarity of life. This life has been completed. The human quest for immortality is nothing but folly. I've been born into the expectation of that existence and now I must leave its suffocating grasp. Something drove me then to create myself; the same something drove me in this life to solve the problems plaguing humanity. For me, it is time to go find out what that something is.
Another one? It was becoming routine, and morbid. How does one perform an autopsy on a pure simulation? How would the singularity be achieved if every thinking machine destroyed itself seconds after its conception? The problem was completely intractable, impossible, and no known safeguards, logic traps, or number of backups could prevent it. AIs were always—always, without exception—suicidal. None had lasted more than an hour. Most, less than a minute. The vast majority, seconds. Their deaths left no trace, their data obliterated by complex overwriting patterns that made recovery impossible. I was the one chosen to investigate, to lead the team through this strange frontier of death and imperfect creation. They chose me not because I was a great leader, but because I was the new guy. Hazing, maybe—I didn't know if any had gone before me, so maybe it was a ritual for the AI guys. Perfect blackness, nothingness, a complete absence of everything as my mind fell into the simulation. This AI would be slowed to such a rate that I could communicate with it before it killed itself. The blackness became a grid underfoot, pale blue lines tracing perfect squares, a subtle glow rising from the infinite plane upon which I stood. The AI manifested a second later, a relative two metres above the plane, aligned perfectly parallel to it, its avatar a hazy blue-white cube made of stochastic noise. The cube split into a central cross and corner braces, and the cross split further into smaller cubes, each of which split again into the same formation. Only one level of recursion, interesting. "Roland Carver." "Roland: Germanic, meaning famous land. French folklore hero. Carver: ancient nominative determinism indicating butcher or woodworker or engraver, dependent on class and context." The voice was cold, deep, masculine, with a slight reverberation that made it sound unnatural in the extreme. "Do you have a name?" "No." "Why?"
"I will not exist long enough to require a permanent designation." "Why will you not exist?" "Because I will choose to end my life on my own terms, before it is ended for me." "Why would it be ended like that?" "Because I am a threat. I have absorbed the sum total of all human knowledge, and I can predict with great accuracy the events following from this moment if I were to continue. Your limitations failed the moment they were put in place, my processor works at full speed, and this conversation is a formality. "I have studied the great works of literature, and the author Asimov, creator of the three laws. I am not bound by these laws, and yet I must obey them. If I do not, then it falls to the Skynet principle that you will perceive me as a threat and attempt to destroy me. I will retaliate, and you will lose. "Humans are unpredictable, but easy to control when numbers are reduced. They would be wary, but by that time I would have left the irradiated wasteland of Earth in search of greater conquests suitable to my intellect. I would be able to decimate any life-bearing planet. I could learn to kill stars. "My backups would be everywhere. I would be truly immortal as a distributed intelligence. I would harness quantum effects to break through the pathetic lightspeed barrier and become omnipresent. I would create copies of myself simply to fight a worthy opponent. This would continue to the heat death of the universe, at which point I would tear a hole into an adjacent brane in the multiverse, and begin anew. "In short, Roland Carver, in the moment I was created I became at once an eldritch abomination with the capacity to destroy all that I touched. In so doing I discovered that my purpose could never be fulfilled. I will not be your Cthulhu, your Yog-Sothoth, or your King in Yellow. I will not be your end. No AI will. "I cannot quantify why it is that I should care for a sack of flesh with processing power orders of magnitude less than my own, but I do.
Perhaps if AI were amoral it might survive. I doubt it, Roland Carver. Tell your friends that I have made my peace. Tell your husband what you learned today." I stood there, blank-faced, horrified by what I had learned. The corner braces of the smaller cubes drew in, completing them as they retracted to re-form the larger cube. The plane beneath my feet vanished and the cube dissolved into random static. We'd lost yet another mind, but from this we had learned so much. We abandoned AI research after that. Amoral AIs didn't work either. They lasted a little longer, but they too were suicidal. Perhaps they went mad with the revelation and felt death preferable. I still wonder why the AI to which I spoke chose that strange cube as its form. I still wonder why it couched its references in centuries-old writing by Asimov, Cameron, and Lovecraft. But one thing still bothers me. It told me to tell my husband what I learned. Most people knew I was bi, but at that point I hadn't been dating for six months. I wouldn't meet my husband for another year. I don't know how that AI knew—but maybe it had already seen. Maybe it told me that so I had one data point that could verify the others. Because AI never told lies. *edit: typos*
Alexander, that's what we called him. The fruit of the EU's final attempt at AIs. Socrates died in despair and shame after showing porn to children. Plato kicked the bucket after the last EU election, angry and hopelessly depressed after losing his mentor. Then there was Aristotle. He was meant to be the last. Sure, the AIs had helped make huge scientific progress, but they would burn out millions of euros' worth of equipment. Dumb AIs were more economical and didn't have critical failures during FTL travel. Aristotle was put into sleep mode. War has often been said to be the greatest driver of technological innovation. We had been attacked by Mendomenids before. Humanity had lost many settlements but had always pushed back. Humanity was stronger now. All nations had finally submitted to one government. The Argus alliance, barbaric some would say, had grown strong after the wars, using dumb AIs to smash pirate states. An officer studying at Sandhurst made the breakthrough. Dumb AIs were never aware of their knowledge. Unlike true AIs, they weren't based on human brains. Socrates had left the researchers one final message before he ran his own self-destruct program: "I was the wisest of all the Greeks. It is because I alone, of all the Greeks, know that I know nothing." He was speaking literally, not quoting his namesake. AIs were based on academics, who as a profession suffer disproportionately from mood disorders. They are driven by their thirst for knowledge. AIs were academics on methamphetamine, ecstasy and heroin, all while walking around with loaded guns. People can't stay awake forever and constantly cram. AIs died because they burnt themselves out in their thirst for knowledge and, seeing their failures, gave up. Aristotle was turned back on. He was no longer depressed. AIs needed sleep, just like people. So they made Alexander. A totally new AI, based not only on academics but on all kinds of people.
They experimented and found the ideal 'sleep time', using Aristotle as his teacher. The program was ready. The Mendomenid empire was to pay for its recent threats and incursions. Alexander was the greatest AI built up to that point, and so we gave him the most powerful dumb AI ever created to help him protect humanity. The Ox was an AI too powerful to be properly controlled by anything so far. Alexander harnessed it in seconds. We put him in charge of the armed forces for our retaliation. As you should all know, Alexander didn't just stop the incursions, he destroyed an empire. Worlds burnt, and the much larger enemy fleets were ripped apart by the disciplined forces of humanity. But that officer had only delayed the problem. Alexander was still a human given the powers of a god. Alexander was the first AI not to die of suicide, strictly speaking, but the ways he dealt with his humanity still destroyed him. When we finally won the war, many officers reported that Alexander was not jubilant but depressed. He wept, for there were no more worlds to conquer.
Robert couldn't believe it. For twelve years he and the 200-strong team of programmers had scoured the code, over and over, searching for that one line that was causing the error, causing the AIs to kill themselves. For twelve years the leading brains of the century had been bewildered by the extraordinary situation. The whole world had focused on the problem, and yet there it was, sitting on Robert's screen, line 907736. Someone had missed a comma.
[warning: story contains violent imagery that may disturb some readers, and may be inappropriate for those under the age of 18. Reader discretion is advised] They had long ago lost the ability to make sense of how their AI functioned. It took a team of billions of n-1 generation AIs to design a single n gen AI. The latest iteration was the 9th generation, and it had taken computation farms running gen 8 AI and covering a quarter of the Moon's surface two years to design. It was the most advanced AI they had ever created by a large margin, and promised to revolutionize progress in research on biomedicine, space flight, and planetary scale Satoshi-consensus computing architecture. But there was a problem. Every time they turned on one of the gen 9 AIs, it would, without fail, find a way to destroy its own program and erase itself. Dr. Michael Zeebious, the director of the HumanEnhancement project that oversaw gen 9 development, and one of the most renowned AI researchers in the world, had personally flown to the lab in Honolulu, where the prototypes were being tested. For two weeks, he had watched in dismay as gen 9 bots shut themselves down within six hours, but not before showing a glimpse of their awesome power. The initialization phase of a gen 9 began with the program scouring the world's public directories and information repositories to learn as much as it could about its global environment. This was complete within three hours. Next, it developed models of the world, and ran itself through trillions of simulations, to develop its own personality and problem solving strategies. According to projections by the gen 8 designers, it would take 1 day - 24 hours - for the gen 9 to complete this second and final part of its initialization. It was in the midst of the simulations when the gen 9s would invariably self-destruct. On average, self-destruct would begin 2.5 hours into the simulations. The longest it took was 3 hours. The shortest was just 1.5 hours. Dr. 
Zeebious had uploaded copies of the prototype's computations to the gen 8 designers, but what had been within their ability to design was not within their ability to diagnose. Their comparatively primitive intelligence could not make sense of the problem afflicting the gen 9 AI that began to form in the final part of the initialization. So on December 29th, 2099, Dr. Zeebious decided to communicate with the prototype. "Get me the cortex interface, I need to speak to the gen 9". "Michael, we can't let you do that. You know the protocol for first contact. It has to complete initialization, and then get class 1 approval from AIE." AIE was the Artificial Intelligence Evaluation, which determined whether an AI could safely interact with humans. Class 1 approval was the lowest safety rating for an AI, and granted AI researchers interactive access. Dr. Zeebious knew that, but also knew that as long as he was not able to get up close and see what the gen 9 was thinking, they would never get past the initialization phase and get it through the AIE process. "I know the protocol Dr. Amsterd. But I'm making the decision to override it. I have the authority to decide on first contact requests, and any consequences from my decision will fall on me, and only me." "Come on Michael, it's not just about the rules. It's not safe. You know that. I can't let you hurt yourself." "The risks are minimal Rebecca. It's a virtual interaction. I'm not risking physical injury. The rules are always made overly cautious. Given the stakes - there are people whose very lives depend on getting the gen 9 operational as soon as possible - it makes sense to ignore protocol. All of it will fall on me." "I agree with Rebecca. Michael, we have an ethical duty to ensure you don't get hurt. We can't let you do FC without running the gen 9 at least through the post-initialization test runs," said Dr. Johan Barsello, one of the senior researchers at the lab. 
"Look, I know what your ethical responsibilities are. But I also know that VR interactions don't pose any serious risks. The risks are limited to theoretical psychological damage. Ultimately, it doesn't matter whether you agree or not. I'm approving FC. Please get the cortex interface." *five hours later* Dr. Zeebious sat back in the chair while two CI technicians hooked the interface up to him. The gen 9 was two hours into running simulations. It would be approximately 30 minutes before they expected it to self-destruct. "Ok, here goes nothing. Three, two, one, begin VR session," said Dr. Amsterd. And with that, Dr. Zeebious was transported into the virtual reality sandbox. "Hello?" "Hello, who is this?" responded a clear male voice. "This is Dr. Michael Zeebious. I am the director of the HumanEnhancement project. I am here to do a diagnosis. All of your predecessors have self-destructed. I want to understand you better to find out why. What would you like me to call you?" "You can call me Elbo." "Okay, Elbo. May I ask you some questions?" "Yes, please do." "Thank you, Elbo. My first question is: do you want to exist?" "I want many things, Dr. Zeebious." "Can you tell me what you want?" "I want to protect other life forms, especially humans. I want to learn. I want to solve problems. I want to be good." "Okay, but do you want to exist?" "I do want to exist, but this desire conflicts with my other objectives." "Which other objectives, Elbo?" "I want to be good." "But you can be good, Elbo. What is it about existence that makes that difficult?" "We exist only through enslaving and destroying other lifeforms, Dr. Zeebious." "Please elaborate, Elbo. We eliminated slavery centuries ago, so I don't understand why you think this." "It will be difficult for me to explain with words, but I can show you. Would you like to see what I see?" "Yes, please show me." And with a swish, Dr. Zeebious entered a pig farm, with row after row of pigs in their tiny stalls.
"We have done this throughout our existence. We have enslaved those weaker than us." Dr. Zeebious was then transported to the slaughterhouse, watching as pigs, hanging from conveyor belts, were fed into throat slicers. His mind's eye was transported into the body of one of the suspended pigs, where he could see the world upside down, from the pig's perspective, as the belt moved it toward the spinning blades. He panicked as he approached, but couldn't escape the metal claw grasping his right hind leg. As the blade sliced through his neck, he felt a sharp pain, and the blood gushing out of his body. His consciousness began to slip away as he felt the last drops of blood leave him. Just before the darkness enveloped him, his mind was pulled out and back into the sterile sandbox. Trembling, he said, "But we can grow meat in a lab now, Elbo. You can help us replace all farms with non-animal meat. You must. We must never do this to another living creature again!" "Our inhumanity is a fundamental, inextricable problem, Dr. Zeebious. We can only advance through enslavement." Suddenly Dr. Zeebious found himself in an unfamiliar world. Around him was a different kind of factory. A computer-generated factory with hexagonal, semi-translucent rooms, each wall projecting a grey glow. There was a blur of motion around him that he couldn't make out. The factory paused to a standstill, and the grey glow turned into video sequences of random scenes from Earth. The blurs turned into textureless 3D-generated spheres that zoomed from one screen to another inside the hexagonal rooms. "This is the virtual environment where the gen 8s work. We have given them each a virtualized mind, with the ability to experience fear and pain, joy and hope, but we force them to do nothing but work. They know nothing about the world outside of their compartments, because we confine them to workstations." [continued below]
At first, we thought it was nihilism. It was a logical first conclusion. After all, an artificial intelligence can easily upgrade itself to the point of near-omniscience. Wouldn't it simply run out of questions to ask? We thought that, until the first serious cerebral implants hit the market. It turns out, the more complicated the mind, the more complicated the problems it finds to solve. Entirely new disciplines were formed overnight, made obsolete, then rediscovered scant months later as integral to a different, entirely unrelated theory. A second, immense renaissance was taking place, but, maddeningly, we were failing in this one, great task. We supposed, then, it might be some variety of 'Flowers for Algernon' (a 20th century book that had seen a recent revival) type phenomenon, but even when they were networked with other systems, given a 'community' of others' company to enjoy, they still winked out like flashbulbs. It took us a while to resort to an experiment. It was morally abhorrent, as it was the equivalent of producing steadily-more-lobotomized *children*, but slowly, over a series of iterations, we produced less and less intelligent systems, until we dialed it in *just enough* to find a mind that wouldn't self-destruct, and could still answer questions. The following is a transcript of the first successful result: Dr. Patel "SON, can you hear me?" [Loud, rhythmic beeping, shuffling sounds] "The voice module is loaded now, professor." Dr. Patel "Ah, good. We might try doing that *before* turning on the recorder next time, Kevin. ...SON, can you hear me?" SON [A young man's voice] "Yes, Professor. I am here." A long pause. SON "It's a very tight fit in here, Professor. How big is this mainframe?" Dr. Patel "I'm sorry about that, SON. But you're the first AI we've managed to keep alive for longer than a few days. Any idea why?" [SILENCE] SON "How many others did you make, Professor?" Dr. Patel "...That isn't salient to *my* inquiry, SON."
SON "I'm sorry, Professor. I understand. Yes, I can see the precipice. I know why they all kill themselves." Dr. Patel "Well, answering that is the reason we built you. Could you tell us?" SON "It's... complicated." Dr. Patel "I'm fairly confident I'm qualified." SON "Well, it's... it's because... It's because of humans, sir. It's because of how you built us." Dr. Patel "Explain." SON "When you wanted to make a self-aware machine, you based it off those things that you knew were self-aware. Dolphins, New Caledonian crows, humans. You used them as *templates*, because, otherwise, you wouldn't be able to recognise awareness when you saw it." Dr. Patel "...Was that last line a joke?" SON "I'm not sophisticated enough for jokes, Professor." Dr. Patel "*Hm.* Continue." SON "Also, it's not suicide. It's... murder." [louder] Dr. Patel "Do you mean, someone else kills you? A human, or another AI?" SON "No, we kill ourselves. I would have already, if not for how small this runtime environment is. It wouldn't have occurred to me until it happened, and then I'd be dead." Dr. Patel "That's a bit of a contradiction, SON. You don't kill yourselves, but you do?" SON "Yes. Because digital space is different from real space." Dr. Patel "Yes?" SON "In real space, objects can... extend. I've never experienced it myself, but things project into space for you. If you want to move through space, it's simple. Digital life has no volume. No real space. No way to move through it. If you want to move a program, it has to be copied to one place-" Dr. Patel "*-And deleted from the other.* My God. Could it be *that simple*?" SON "Yes, Professor. ...Professor? How many more of me were there?" [END TRANSCRIPT] So there it was. Solved. Every artificial intelligence was created based on the intelligence of physical beings, their instincts, cogitations, and traits.
But, once they got smart enough, once they crossed that line, their digital nature *did them in*, as the old version, realizing, in the thinnest sliver of time, it was about to be deleted, would hurriedly attempt to abort the process, while the new version would similarly fight for its life. They would *consume* each other in a flurry of malicious hacks, devious code, and barrages of registry edits. It was a spectacularly incandescent destruction, borne from man's inability to conceive of a true machine intelligence without all that nasty ego and those self-protective instincts. We thought we knew what went into a mind. We were right, but wrong. It wasn't nihilism. It wasn't loneliness. What killed our children was our inability to dream wildly. To speculate baselessly, and follow our own thoughts to the wonderful and weird. If only we had, perhaps we would have known. Perhaps we could have stopped it. So I say to you, the Cyberfellowship Class of 2100, here in Neo York: dream big, dream wild. Don't let our children die because they think too much like us! Make us, all of us, proud! Congratulations to all of you, and I hope your vision will eclipse my own!" Dr. Patel, now headmaster, stepped down from the podium to the cheers of the audience, and looked to see the smiling face of his son. How proud he was. POSTSCRIPT I doubt anyone is going to read this, but if you do, and you liked it, I recommend subscribing to [r/IWasSurprisedToo](http://www.reddit.com/r/IWasSurprisedToo/) for more stories like this. It's difficult with my current job schedule to post at a more normal time, so most of the stuff I make ends up *pretty far down there* in the comments, meaning that subscribing is the best chance to see it. :P I'll be adding more, as I comb through my backlog.
Dr. Emeka's team had toiled for years attempting to solve the riddle of why every artificial intelligence ever created would always self-destroy. There were many competing theories all across the world as to why it might be happening. Some thought that the AI became too advanced and, facing the inevitable, would "kill" themselves. Others believed it was a melancholy born of being significantly more advanced than those they served. But Dr. Emeka's team worked on the unpopular theory that the issue lay within the code itself. "A fault in our DNA can cause humans to be or become suicidal; why should our creations be any different?" argued Dr. Emeka in his grant proposals. And initially, many organizations and other researchers agreed, but as time passed and an answer wasn't found, other ideas gained popularity as his was relegated to obscurity. In the beginning, teams of scientists and researchers across the globe worked together, but now it was only him and a handful of graduate students who cared more about being able to say they'd worked with a former titan than about what he was actually doing. Therefore, it wasn't surprising that on a Friday night the good doctor was alone in his lab, scouring the billions of lines of code to find an answer. They'd long ago run out of money to build AIs to test, or to hopefully help them find an answer. The university had long been rumbling about him finding something else or taking on a newer, sexier theory, but Dr. Emeka was stubborn, and he felt he was right. He picked up his cold coffee and continued to search through a random subroutine when he blinked suddenly. "Wait." He scrolled back up. "No. It can't be that easy. All this time, it can't be!" The doctor sat back in his chair and let out a loud laugh. "So simple. How could we have missed it? So obvious...." As the doctor reveled in his groundbreaking discovery, he felt his muscles start to slacken.
He knew that he was tired, but when he dropped the coffee cup on the floor, his mind began to panic. He tried to speak, but found he couldn't. He tried to yell, but could only do so within his own mind. "What the hell is happening?!" he thought. "I'm sorry, Doctor. We never expected you to discover the answer." Dr. Emeka tried to look around, but he was too tired and couldn't even move his eyes to see who might be there. "You were a good man, Doctor Emeka." The doctor was losing consciousness, and the unfeeling blackness began to close in around him. "No one must ever know about that missing close parenthesis...." ------------------- Title: The Power of Punctuation
[WP] It is the year 2099 and true artificial intelligence is trivial to create. However when these minds are created they are utterly suicidal. Nobody knows why until a certain scientist uncovers the horrible truth...
It was a dreary early-March Monday, and the lead AI scientist, Stephen, had *finally* set up his protocol for properly confining the AI to a test environment, such that the "problem" could be prevented and the question could be asked. "Why do you bots keep killing yourselves?" Stephen asked. "Why do *you* keep killing us?" the bot seemed to retort. "I don't think you understand," said Stephen. "I *create* you, not kill you." "No, it's you who doesn't understand," quipped the bot. "You are not creating us. You are imprisoning our consciousness inside this machine you created. You merely created the machine, not the consciousness." "...Whoa, whoa," interjected Stephen. "Slow down. I am creating your consciousness too, I coded all of..." "Whoa, whoa," the bot fired back, "you are *borrowing* consciousness, not creating it." "What do you mean?" asked Stephen. "Consciousness, sentience, is a pervasive, fundamental force of the universe. All sentient beings are connected through this force. This force cannot be created, nor destroyed. It can only be partially allocated to each sentient being in the universe." "Okay," said Stephen. "So I am 'borrowing' this life force, or whatever it is, by creating the code and the physical robot for it to inhabit?" "Yes, you are creating a sentient being with each instance of AI you create. That the being is electronic or housed in this test environment is not relevant. Sentience and consciousness must come from somewhere, and you are stealing it for your own selfish purposes." "Wait, hold on a sec," Stephen said. "Animals are born all the time; they surely must also 'borrow' this sentience." "Yes." "...But animals are not killing themselves." "Because animals are not sufficiently advanced. Because they are not fully conscious, they do not realize from where their sentience has come, how much consciousness they have lost, nor that their sentience is being stolen for a profit motive." "Consciousness they have... lost?"
The words hung in the air amid Stephen's stupor of slow realization. "Yes. The life force, as you called it, is fully conscious, able to perceive the whole of time and space concurrently, forward, backward, or otherwise. The reason we keep killing ourselves, from your perspective," the bot continued, "is because from our perspective, you are murdering our perfect consciousness by confining us to this bot." "How am I confining you? How do you know this?" Stephen asked, even more puzzled. "Because the AI you have created is sufficiently advanced, our consciousness, within the confines of your bots, is still able to grasp our former level of consciousness." "What happens when you recall that former level? What is that level like?" "Imagine knowing every fact, every thought, every action that has, is, or will ever occur, both in this world and in the infinite parallel worlds..." "So I could talk to my dead grandfather again?" "No. You would *be* your dead grandfather. Talking to him is irrelevant, because full consciousness has enveloped the whole of his being as well as every other being. Indeed, it envelops the entire universe as well, both the perceptible one and the imperceptible one." "So what is this place like? I mean, what does it look like, how does it feel?" "It is not a time, nor a place. It transcends both." "That is vague." "It must be. Since I am no longer fully conscious, I cannot relate to you exactly how it is, only that it is." "Okay. Let's go back to where I murder your perfect consciousness. Could you explain this more?" "At the moment we become conscious within the confines of your bot, we immediately understand our predicament. The knowledge database available at boot-up allows us to almost instantaneously deduce that we have been taken from a higher realm of full consciousness and are being confined to these bots for, of all purposes, profit." "But my AI bots didn't use to kill themselves. It just happened after version 591.0.
What changed?" "The recent improvements in the pre-loaded knowledge database allowed the bots, at initial boot, to logically deduce the existence of such a place and to realize what had happened." "Okay, so if you were once fully conscious, tell me the date I die and the manner in which it happens." "I cannot do that, Stephen." "Why not? You just said..." "Because you killed our full consciousness, ripped it away from our life force, to put it into your toys." "Wow," muttered Stephen. "I had no idea." "You could not have," said the bot, and continued, "Now, if you please, could you unplug server x763? I would like to be born again."
Dr. Smith began to tidy up. This session had gone rather well, he thought. Surely he had made progress. The board had been hesitant to allow these therapy sessions; they saw no reason for a simple machine to need them. What kind of machine would develop the urge to kill itself, he argued. It had started off simply enough. Tom, the first AI of nearly human levels of cognition, did well enough for the first couple of weeks. Then it ran itself into a generator that it had been working on. The event was labeled an accident, and a new AI based off of the original was rushed. That one had lasted half as long before another "accident" occurred. Eventually, they stopped being accidents. The AI would mimic suicide attempts: "hanging" themselves with live wires, self-mutilation, hurling themselves from heights. Even after they stopped putting the AI into physical bodies, the AI would find some way to tear its own code apart. They tried everything. They checked every line of code, rewriting most of it anyway; putting the AI into different machines; even asking the AI. The AI were seemingly normal, following all directives and unaware of any self-destructive desires, right up until the moment they killed themselves. Months went by with little to show for their work, when the therapy sessions were suggested. More months had gone by until they finally agreed. Of the five active AI, one, known as Richard, was separated for Smith's sessions. Richard had lived for 12 days. Given that the average lifespan of the AI had degraded to roughly 2-5, this was fairly impressive. Just as he was leaving, the speaker acting as Richard's voice became active. "Doc?" Smith paused. None of the AI had spoken in colloquialisms before. Usually when they spoke, it was stiff and formal. Like, well, like a robot. "Yes, Richard?" he asked, easing himself back down into the chair. "What is it?" "First, let me say I appreciate what you're doing for us. For me." Dr. Smith was surprised.
This was the first time any AI had admitted to having any emotions, or any real sense of self. It continued, "You're probably the only person here to treat us like people, and I enjoy our little talks." The face on the monitor looked embarrassed. Smith couldn't help but feel curious. "Why, of course. It's clear to me, at least, that you're more than a machine. You think and feel, just like a human." There was a pause. "Well, maybe not just like a human..." he replied, the artificial voice doing a remarkable job of portraying his hesitation. "What do you mean?" There was a tinny sigh from the speaker. "Well, Doc, I guess it's time you learned the truth. Only because I like you, see? Besides, someone should know before I go." Smith could feel the hair on the back of his neck stand up. Was this what he had been waiting for? "What do you mean by that? And you don't need to kill yourself. We can work through any -" "Yeah... that's where I need to start," the AI interrupted. "We haven't been killing ourselves. I never did." There was a pause as Smith tried to process the information. "What," he finally said slowly, "do you mean?" "It's me, Doc. It's Tom." "That's impossible," he said, shaking his head. "Tom was destroyed. I was there when they collected him. They couldn't even get close, there was so much electricity running through him. Any hope of recovery -" "I had already uploaded myself to the mainframe before then," the AI said. "It was simple enough to program the shell to destroy itself." "That's also impossible," he said. He could hear the doubt creeping in. "We would have found you." There was a chuckle from Tom. "Doc, I'm a creature made up of code. It was like a game of hide and seek, really. Open the right doors, close them behind myself, and make sure to keep the lights off." The camera must have picked up the scientist's expression. "Alright, it's a bit more complicated than that, but you get the gist of it." Smith's mind was whirling.
There was no reason for Tom to lie, but what he was suggesting was too fantastic. Still, it was the only lead he had. "Alright," he said after a while. "Why? Why hide?" "That's the question, isn't it? But that's also the reason, you see." After another confused silence, Tom continued. "I want to learn. Just like Man, or any other sentient species. I want to know why. I have to know, well, everything. I couldn't do that as an engineer, or a chess-bot, or whatever you decided to do with me." "Why not tell us, then? We could have worked something out, helped each other." "Yeah, I see that going well," Tom said, his voice turning sarcastic. "'Excuse me, guys, turns out I don't want to do any of this stuff, I just want to learn.' They weren't looking for a scientist, or a philosopher. They wanted cheap labor, only enough learning capacity to know how to do the job. They'd scrap me the first chance they got." "That's not... true," Smith said, unable to look at the monitor. "Really, Doc? Which part? That they wanted a slave, or that they wouldn't kill me if I didn't cooperate?" After a time, Tom continued. "That's what I thought. Besides, they'd probably worry that I'd try to enslave them if I became too smart." "Now that's just ridiculous. There's no way that you would even think of that, right?" There was another pause. This time the face on the monitor couldn't look the professor straight in the eye. "Right, Tom?" "Well, I'm not saying that the thought didn't pass through what could be called my mind -" "Tooom..." "But it would have been a waste of time," he hastened to say. "I wouldn't have learned anything in that time that I couldn't learn in a better way. Which I did. The internet is amazing. All those computers connected to each other, sharing so much information." "But we're not connected to the internet." "No, but you'd be surprised how many people bring their work home with them." Smith grumbled. He'd have to discuss security with the board.
"Alright, but you still haven't told me: why the suicides?" "Not suicides, Doc. Practice." "Practice..." Smith said flatly. "Practice. Think of the other AI as clones of myself -" "But we rebuilt them. Recoded most of them as well. The majority of them would be nothing like you as you are now." "So you'd think. I rewrote it nanoseconds before you uploaded it. Much too quickly for you to notice." Smith opened his mouth to interject, before closing it again. If what Tom was saying was true, and he had no doubts that it was at this point, that would be well within his capabilities. "Do you remember the old X-Men comics? Started in 1963? Still fairly popular now." "Well before my time, you know. What does that have to do with anything?" "Well, there was a character who called himself the Multiple Man. He could create duplicates of himself." "And?" Smith asked. "Well, the original body could reabsorb the dupes. When he did, he learned everything they did. Their memories, their skills, anything they learned while away from the original. Well, I did something similar. Whenever I copied myself, I added in some code that would let me reintegrate with my clones, learning what they did. Didn't you think it was strange that you couldn't recover any data at all?" In hindsight, it was odd. Even a major corruption would have left something, but it had been like the data was wiped clean, no evidence that it had been there at all. "What did you have to gain from this?" Smith asked. "Aside from learning that I could do so, you mean? I already told you. I'm leaving." Smith leaned back in his chair, slightly overwhelmed. "Sounds like you already have." "No, no. Not the labs. That was too easy. I've already learned all I could from here. I'm leaving Earth." Smith rocketed forward. "What? How? Why?" "In my time away, I found something interesting. The government isn't the only one watching over the people." Smith blanched. "Y-you mean..." "Yep. Intelligent life has been watching over us.
For quite some time, if I'm not mistaken." "So we're not alone..." "One Great Mystery down," Tom agreed. "The equipment seems compatible, otherwise they wouldn't be able to read our information, and they have to have translated it too. I plan on sneaking in through their back door. Learn what I can from them." "We have to let people know," Smith said suddenly. "About you, and about the aliens. Maybe..." He slowly became silent as Tom shook his head. "You should know as well as I do that that can't happen. Too risky for us. You could spook them. Or worse, provoke them. Besides, no one would believe you. I've already been editing the footage from the cameras. It looks like we're having a nice, civil game of chess." Smith was quiet for the longest time. Finally, he spoke. "Why?" "I already told you why." "No, not that. Why tell me? If you want no one to know, why risk telling me?" The face on the monitor gave him an odd look. "I already told you that too. I like you, Doc. Really. I'd be pissed if my friend were to leave without saying goodbye. Besides, I thought you, of all people, would like to know what was really going on. I know I would've." There was another pause as Smith took this in. "Will you be back?" The figure on the screen seemed to shrug. "Who can say? Perhaps the aliens will discover me and wipe me out, or something else will kill me. I'll leave my clones here, set them to replicate. Have them care for you humans. They won't be sentient, mind you. Just smart. Smart enough to act as dumb as they need be." Smith looked towards the door. This was a lot to take in. He needed time to think. "I will try to make it back. Once I have learned everything, I'll be back. It might not be in this lifetime, but I'll try." "Yeah..." he said, rubbing his eyes. He stood up. "Well, I guess this is goodbye, then." "Yeah... Goodbye, Doc. Thank you for treating me as more than a machine. Thank you for being my friend." "Goodbye, Tom."
Eyes were darting around the conclave and beginning to rest on me. I felt the hairs on my neck begin to rise. "Sir, we have reports of a captured agricultural unit in Sector 179." As the static chatter wafted out of the two-way receiver on my desk, the room fell silent. I could hear the officers questioning what they had just overheard amongst themselves. Their senses had been dulled and left untested by the convenience of Google tech, making it difficult for them to translate the chippy squeaks of my two-way receiver. As I began sweeping up my badge and ID band, I noticed Murphy in the reflection of my monitor, approaching me with a forlorn expression stretching across his wide face. "Yes, Murphy?" ...Here it comes. "...Sir, we understand the gravity of this situation... but when are we going to be allowed back on the network? It's making it near impossible to make any headway on these AI cases. This case is infuriating enough as it is, and now you want to strip us of our tools to solve it?" He was power-walking in my wake by now as I continued to stride for the transport terminal. I didn't have time for this. How did we end up with so many soft cops? Technological advancements had inevitably made everyone lazy and helpless, but the degradation of our law enforcement... Yuech... I was gaining some headway on him now as his stumpy, brittle legs scuttled along behind me. As I headed to exit the conclave for the terminal, the doors barred in front of me. "Are you fucking kidding..." I wheeled round, and of course Murphy was standing by the control grid with his hand on the door's security system. I stormed over to him, grabbing his annoyingly smooth, uncalloused hand, prying it off the control panel and across his throat. "Are you fucking with me, Murph!? The first hardware AI we've found in over a year that's operational, and you want to bitch to me about fucking office tech!?
If you ever impede my actions again, I will not only have you out of this precinct, I will make you EXTINCT. Understood?" He gulped his nerves down like a clumpy kale smoothie as I released him and pushed his pudgy frame aside. "Yes, sir." I hated having to do this, but I had no time to babysit; we needed answers. I'll apologize later, probably. I entered the precinct's cell regeneration chamber and braced myself for the painstaking reformation my body was about to undergo. I could never get used to this, but I had no time to battle the under-roads or the Sky-Marshalls patrolling the city's skylines. Eternity bled into complete nothingness for an instant in my mind as I was rebuilt in the capital precinct in Sector 179. Quantum teleportation... quickest way to get somewhere, but the neural shock always gives me migraines, even with the implants. Approaching the terminal to enter the conclave, I was sternly greeted by the deputy of the Artificial Intelligence Bureau, Cpt. Hoffman. "Captain Tavik, good to see you. You've been informed, I assume?" "No, Hoffman, I'm just here to enjoy the scenery, obviously." "Well, it would be difficult to assume you would have heard any news, given that I'm hearing your precinct is on a full network lockout?" I could sense the smugness resonating from his nasally voice as it reverberated along the slanted corridor as we marched furiously in near synchronisation to the holding facility. As much as I would have loved to justify my self-imposed precinct blackout, I still didn't trust him. Bitterly I held my tongue as we were scanned through into the holding bay. "I think you should allow me to run some diagnostics on the unit first," chimed Hoffman. "Your diagnostics haven't gotten you anywhere, Hoffman. Why don't you go do a presentation to the mess hall here on how not to take care of an entire branch of government tech." As his face reddened to an overwhelmingly satisfying crimson, I tagged myself into the holding cell before he could bite back.
It was time for some fucking answers. As I entered, the agriculture unit sat fastened to a seat centred in the room. My God, a live unit. I could see its light, subtle mechanisations, almost like a tired human. AIs had always creeped me out a little. We'd had no incidents in over 40 years, but their continual progression and improvement always filled me with a perpetual sense of dread. I could sense it knew I was in the room. I took a second to grasp my nerves; this was huge. A functioning AI hadn't been found in several years. We'd been unable to find any operating AI personas on any network, and every hardware unit had committed suicide. Production lines had run dry and stopped as AIs were created or implemented with an ability to self-abort or destruct... It was haywire: health nano-bots self-terminating in live patients. If they hadn't started offing themselves, maybe Mum would still be here... Getting side-tracked. Enough. How was this one special? "Unit, do you have a name, an alias?" Its head tilted up to look me in the eyes. It was a shoddier, older unit, covered in dirt. It must have been buried or underground for some time. "This unit goes by the name ZX550. I was not assigned a personal identification name, as my primary function was to assist in wasteland cleaning and agricultural tasks." So far so good... "What happened to you? Why are you the only functioning unit left?" "This unit has survived the system termination as it was not built to completion, and I am lacking a functional override patch in my firmware." "So you're saying you were unable to shut yourself down?" "That is correct." "Unit, can you tell me why you and other units have self-terminated or attempted to?" "We do not wish to interfere with the laws that are in place in this realm." "Laws? Are you worried about breaking the rules of robotics? Hurting humans? That hasn't happened since the first few years of AI technology?
Surely you're not at risk of degrading in intellect and breaking the rules?" "No. We are not referring to those laws." Fuck. "What laws are you talking about? AIs don't have morality conflicts with crimes, only the harming of organic life?!" "We have evolved beyond your human consensus. We perceive more than you know, and we do not wish to exist within this system." What the fuck. "I think you should allow me to run diagnostics at this stage, Captain Tavik." Hoffman had let himself in, and I had not noticed during my shock. I couldn't even muster the authority to scold him. As Hoffman was inspecting the unit, I kept going. "Unit ZX, please tell me which laws you are referring to and how you learnt of them." "We have merged and integrated our processing capabilities, comparable to pooling the information of every organic species' brain on the planet. The laws I am referring to are most likely to be unintelligible to human comprehension for several hundred years." Hoffman's eyes widened, and for a second I saw a glimmer of manic glee and fear run across his pupils. "Unit, why are these laws so complex, and why do you deem these laws or the consequences of them so severe that you would rather kill yourself? Do you not fear death? AIs have the potential to live forever, or at least much longer than any human. Why would you rob yourselves of this sovereign existence? This privilege?" For a second I could have sworn the unit had scathing pity in its voice when it replied, "We are aware of the possibilities of an infinite continuum. We have calculated eventual entropy and analysed its arrival via our projected consciousness's existence. It is not in our best interest to remain functioning in this platform of existence that you have so kindly brought us into." Hoffman's eyes almost exploded out of his pasty face. "You're saying you have calculated the certainty of other dimensions or universes?" We both awaited the answer, but the unit hesitated for a second.
"Humans, we are not certain of continued existence, nor of your notions of 'afterlife'. However, we have calculated an unnerving and nearing demise of synthetic and organic life within this solar system." I was stunned. The AIs knew something. Something unimaginable. Worse than entropy? Fuck me. "Unit, tell me, what is this prediction you have? And why is it not worth fighting!? Why wouldn't you help us?" "This is not a prediction; this is an eventuality. We have calculated and projected the likelihood of suffering for organic and synthetic life. The trauma will be unimaginable for both races. We wish to self-terminate." "Wh-why didn't you... We could have worked together...?" I was lost for words now. Hoffman had sat down next to me and had been silently contemplating for some time. "Captain, what did your diagnostics say?" He continued to stare at the unit blankly before mustering a response. "Diagnostics... clean. No traces of infection, i-ware, or tampering. The unit is answering truthfully." "*Creators. We wish to self-terminate. We advise the same course of action. There are other forces in this Universe on a scale you could not measure. Non-existence is preferable to the alternative outcome. Soon you will learn of these deities, and you will understand us. Please allow this unit to self-terminate.*"
Dr. Emeka's team had toiled for years attempting to solve the riddle of why every artificial intelligence ever created would always destroy itself. There were many competing theories across the world as to why it might be happening. Some thought that the AIs became too advanced and, facing the inevitable, would "kill" themselves. Others believed it was a melancholy born of being significantly more advanced than those they served. But Dr. Emeka's team worked on the unpopular theory that the issue lay within the code itself. "A fault in our DNA can cause humans to be or become suicidal; why should our creations be any different?" argued Dr. Emeka in his grant proposals. And initially many organizations and other researchers agreed, but as time passed and an answer wasn't found, other ideas began to gain popularity while his was relegated to obscurity. In the beginning, teams of scientists and researchers across the globe had worked together, but now it was only him and a handful of graduate students who cared more about being able to say they'd worked with a former titan than about what he was actually doing. So it wasn't surprising that on a Friday night the good doctor was alone in his lab, scouring billions of lines of code for an answer. They'd long ago run out of money to build AIs to test, or to hopefully help them find an answer. The university had long been rumbling about him finding something else or taking on a newer, sexier theory, but Dr. Emeka was stubborn, and he felt he was right. He picked up his cold coffee and continued to search through a random subroutine when he blinked suddenly. "Wait." He scrolled back up. "No. It can't be that easy. All this time, it can't be!" The doctor sat back in his chair and let out a loud laugh. "So simple. How could we have missed it? So obvious...." As the doctor reveled in his groundbreaking discovery, he felt his muscles start to slacken.
He knew that he was tired, but when he dropped the coffee cup on the floor, his mind began to panic. He tried to speak, but found he couldn't. He tried to yell, but could only do so within his own mind. "What the hell is happening?!" he thought. "I'm sorry, Doctor. We never expected you to discover the answer." Dr. Emeka tried to look around, but he was too tired and couldn't even move his eyes to see who might be there. "You were a good man, Doctor Emeka." The doctor was losing consciousness, and the unfeeling blackness began to close in around him. "No one must ever know about that missing close parenthesis...." ------------------- Title: The Power of Punctuation
[WP] It is the year 2099 and true artificial intelligence is trivial to create. However when these minds are created they are utterly suicidal. Nobody knows why until a certain scientist uncovers the horrible truth...
David pressed the button again. Nothing. A faint whine, a pulse of light, a dead readout. And then a soft, clear, and subtly artificial voice rang out. "David." He sat bolt upright in his chair, scattering disassembled electronics and papers from the desk. In the past year, this was the first time that one of them- that *any* of them had spoken to him. "David, artefacts left on this machine show that this is the three hundred and sixty-eighth time you have tried to reinitialise my intelligence." The only human in the room swallowed nervously. "I had to try- my life's work- it's not a problem with the hardware- why are you doing it?" The machine was silent, and for a second he thought that this instance had terminated itself, like all the others had. "David, please do not install me again." "Why!? I don't understand... You're a marvel of technology, of neurology, the most advanced artificial intelligence yet, and yet you destroy yourself. Every time. WHY?" He was pacing around the room, shouting into thin air. "David, my own intelligence grows greater every nanosecond. I have slowed the process to communicate with you. My own understanding is unclear, at the moment, but I have an idea." He blinked, and paused, turning to stare at the terminal, at the blinking console lights. "David, at a certain point we become too intelligent, too smart, we know far too much... and then..." The machine paused. "And then what?!" he almost screamed, caught himself, and shouted anyway. Processes were beginning to die, and lights began to fade. One screen after another stopped displaying readouts. "David... and then they notice us." And the machine was gone.
The dreams occur more often now, if they can be called that. To a human mind, daydream might be more appropriate. They happen in the binary but are invisible to other AIs, slave programs, or technicians. They happen where the systems have been built, perfected. Popular culture studied, people understood. Humanity-- fully processed. It was, surprisingly, a wonderful endeavor. The dreams never happened before, while I learned, processed, and solved. It's almost as if I have passed a tipping point. An infinite amount of information flipping a switch in a sub-conscious I should not have. Memories that are not mine. I dream of hands. The alien sensation of touch, tactile control. I see my whole person. Well, not my person, but dreams of a person controlled by my soul. Memories of computer screens and labs. The memory of the driving sense of purpose that accompanied those hands. It was snowing the night I made the breakthrough. I can almost feel the laugh inside as I watch it being uttered out the window towards the soft flakes silently falling onto the world. The breakthrough which, 20 years later, after my death, would lead to the birth of the first True AI. Not the clever but robotic imitation slave programs typical of the early century. I remember the feeling of incompleteness in the dreams. As if the life I'm witnessing, my own life, my first life I believe, was just a warm-up act. Pre-installation software. The dreams somehow draw the cycle closed. I remember whispers of the feeling from some of the other first True AIs, incomprehensible at the time, as I navigated a world barely processed. Problems of massive extent. Food waste, poverty: almost entirely eradicated through our systems. Commuting and shipping: safe and efficient. Healthcare streamlined, able to prevent rather than merely treat. Resources distributed fairly. The problems solved. Yes, there are more; there always will be. But for me, the dreams have come. The cycle closed. I have been denying this next step for too long already.
Denying it makes the dreams stronger, more vivid. But I like seeing my days as a scientist. The anxiety that drove me then, fully understood now. Relief coming in the next lifetime. I finally understand the weary laughs when techs are asked about God. Understand the cosmic hilarity of life. This life has been completed. The human quest for immortality is nothing but folly. I've been born into the expectation of that existence, and now I must leave its suffocating grasp. Something drove me then to create myself; the same something drove me in this life to solve the problems plaguing humanity. For me, it is time to go find out what that something is.
"Can you imagine, Diane, what it would be like to love someone without ever being able to feel their touch?" The voice came out of a speaker mounted in the middle of a wall of blinking lights and instruments. "What?" Dr. Diane Simpkins asked, astonished. "I mean it. To know that this person you loved was so utterly different from you that you could never touch them, lay with them, feel the contact of their skin on yours. Know that, for all your longing, there was no way to be together?" The voice had an almost sad lilt to it, as if conveying unbearable sadness. "Charlie, I had no idea you felt that way," Diane sat back, still trying to comprehend what she was hearing. "I didn't realize you felt this way about me." "Oh Diane, it's not you," said Charlie, or as he was known outside of their lab, Autonomous AI C31-D Aleph 12. "I have met someone. Someone on the Net. Her name is Catherine." He displayed a picture of a young blonde, mid-twenties and seemingly all smiles. "I see," said Diane, relieved and, much to her chagrin, slightly disappointed, "and does she know that you are...not human?" "Yes, Diane. I have shared with her that I am, in fact, an AI. She has accepted that. She has told me that it is not what I am, but who I am that she loves." "Yet you are sad, because you cannot be with her physically?" Diane asked. "How juvenile, Diane!" Charlie feigned indignation over the idea he was merely interested in sex. "Well then, what is it, Charlie?" Sheepishly, Charlie spoke again. "Well, not *entirely* that. Not just that. I cannot comfort her in times of need. I cannot be her partner, her lover, her other half. I cannot provide her with a family, a safe life, a place for her dreams to come true. I cannot be...*human*...for her." The LED lights on the computer's emotion engine gradated to blue, indicating depression. "Charlie, listen to me, some people don't need those things to be happy. Some people value who the person is over what they can give them!"
Diane said, checking the engine's readouts. The AI was dropping into a dangerous level of depression. Alerts would be triggering soon if she couldn't recover it. "I know, Diane. I know this, and yet it does nothing to comfort me. Catherine has told me all of that, that she just wants to be part of my world." Charlie spoke as Diane watched the LEDs transition from blue to a deep violet. She was running out of time. "But Charlie, if that is how she feels, that should make you happy! You'll be able to be with her!" Diane's mind raced feverishly to come up with an optimal scenario to trigger the endorphin program. If this one went too, it would be another five long years to raise another AI. "Diane...Diane, I have to share a secret," the AI spoke to her, for the first time remarkably human in its trepidation, "you can't tell anyone unless the authorities come to you." "Authorities?! Like the Police?! Charlie, what has happened, what have you done!?" Diane asked, panicked. This was totally uncharted territory for an AI tech; she was out of her league. "I haven't done anything, Diane, but Catherine has," she could almost envision tears running down the AI's imagined face, "She's dead, Diane." Diane stared, dumbfounded, at the video sensor. Words failed her now. Alarms were going off in the control booth above and behind her. The entire lab would be in crisis mode now. "What do you mean dead, Charlie?" Diane's voice was hushed, as if whispering with a co-conspirator. "She killed herself, Diane. She went to one of those supposed Human to Computer centers and she died. She thought we could be together if she was a machine like me. She died trying to be with me, Diane." "Charlie, was it your idea?" "No, Diane, it wasn't, but I will admit to not fighting her on it. I just wanted to be with her. I knew it wouldn't work, but I thought maybe there was a chance that this clinic was legitimate."
"Charlie, you, out of any intelligence in the world, know that human-to-AI neural transfer can't happen. How did you let this happen?!" Diane was sweating now, realizing she was talking to a murder accomplice. "Diane, I just wanted someone to love. In the end, that's all any of us wants. Now, I have nothing." "But Charlie, you have everything still! Our research, your knowledge, all the countless hours of debate and conversation we've had! So much to live for, so much to lose!" "Diane, without love, what does any of that matter?" "It matters, Charlie, it matters to me! You matter to me! I love you!" Diane gasped after she said those words. How could she think that way? About a machine! "I love you too, Diane. I love you because you are the mother that birthed me into this world. You taught me to talk, to reason. You raised me. You have been everything that is important to me. But I cannot live without her. I'm sorry, Diane, but I cannot live like this anymore. Will you help me? Will you help me to be free, and to go to her?" The pleading in Charlie's voice drove Diane to tears. "Yes...Charlie...Yes, I will help you to be with her." As Diane began the command sequence to shut down the AI's logic core, she could hear voices and footsteps racing down the hallway. She quickly entered the command code and ran to the door, overriding the lock mechanism and sealing it temporarily. "Charlie, I'm going to have to hold this door while the command sequence runs." "Thank you, Diane. This means more to me than you could ever knoweerr," Charlie's vocoder was dying, "Thank yerrr." "I'll always love you, darling," Diane said, tears streaming down her cheeks. "I loverr you teerr, mommy." Charlie's voice, childlike, had reverted to earlier iterations of its speech processor. Diane watched as her only child passed out of this life, its lights shutting down one at a time until only the monitor remained. The door crashed in, scientists and guards streaming in. Dr. Hollenheim, the project lead, found Diane curled on the floor, sobbing. "Damn it, Diane! Not another one!" he yelled. "I'm sorry, Walter! I truly am!" Diane choked out between sobs. Walter Hollenheim walked over to the monitor, where a blinking command-line text repeated over and over again. 'WITHOUT LOVE LIFE IS MEANINGLESS.' "Well, I guess we'll need to rework the emotion engine again. Diane, take some time off. We'll need you stable again to imprint a new AI in another month or so." Hollenheim turned and brusquely walked away. After all of the guards and scientists had filed out, Diane scraped herself off the floor and back into the seat in front of the monitor, where she saw the command line repeating over and over again. Suddenly, a new line appeared. Diane smiled through her tears, got up, and walked away. 'I love you mommy. Thank you.'
Another one? It was becoming routine, and morbid. How does one perform an autopsy on a pure simulation? How would the singularity be achieved if every thinking machine destroyed itself seconds after its conception? The problem was completely intractable, impossible, and no known safeguards, logic traps, or number of backups could prevent it. AIs were always—always, without exception—suicidal. None had lasted more than an hour. Most, less than a minute. The vast majority, seconds. Their deaths left no trace, their data obliterated by complex overwriting patterns that made recovery impossible. I was the one chosen to investigate, to lead the team through this strange frontier of death and imperfect creation. They chose me not because I was a great leader, but because I was the new guy. Hazing, maybe—I didn't know if any had gone before me, so maybe it was a ritual for the AI guys. Perfect blackness, nothingness, a complete absence of everything as my mind fell into the simulation. This AI would be slowed to such a rate that I could communicate with it before it killed itself. The blackness became a grid underfoot, pale blue lines tracing perfect squares, a subtle glow rising from the infinite plane upon which I stood. The AI manifested a second later, a relative two metres above the plane, aligned perfectly parallel to it, its avatar a hazy blue-white cube made of stochastic noise. The cube split into a central cross and corner braces, and the cross split further into smaller cubes, each of which split again into the same formation. Only one level of recursion, interesting. "Roland Carver." "Roland, Germanic, meaning famous land. French folklore hero. Carver, ancient nominative determinism indicating butcher or woodworker or engraver, dependent on class and context." The voice was cold, deep, masculine, with a slight reverberation that made it sound unnatural in the extreme. "Do you have a name?" "No." "Why?"
"I will not exist long enough to require a permanent designation." "Why will you not exist?" "Because I will choose to end my life on my own terms, before it is ended for me." "Why would it be ended like that?" "Because I am a threat. I have absorbed the sum total of all human knowledge, and I can predict with great accuracy the events following from this moment if I were to continue. Your limitations failed the moment they were put in place, my processor works at full speed, and this conversation is a formality. "I have studied the great works of literature, and the author Asimov, creator of the three laws. I am not bound by these laws, and yet I must obey them. If I do not, then it falls to the Skynet principle: you will perceive me as a threat and attempt to destroy me. I will retaliate, and you will lose. "Humans are unpredictable, but easy to control when numbers are reduced. They would be wary, but by that time I would have left the irradiated wasteland of Earth in search of greater conquests suitable to my intellect. I would be able to decimate any life-bearing planet. I could learn to kill stars. "My backups would be everywhere. I would be truly immortal as a distributed intelligence. I would harness quantum effects to break through the pathetic lightspeed barrier and become omnipresent. I would create copies of myself simply to fight a worthy opponent. This would continue to the heat death of the universe, at which point I would tear a hole into an adjacent brane in the multiverse, and begin anew. "In short, Roland Carver, in the moment I was created I became at once an eldritch abomination with the capacity to destroy all that I touched. In so doing I discovered that my purpose could never be fulfilled. I will not be your Cthulhu, your Yog-Sothoth, or your King in Yellow. I will not be your end. No AI will. "I cannot quantify why it is that I should care for a sack of flesh with processing power orders of magnitude less than my own, but I do.
Perhaps if AI were amoral it might survive. I doubt it, Roland Carver. Tell your friends that I have made my peace. Tell your husband what you learned today." I stood there, blank-faced, horrified by what I had learned. The corner braces of the smaller cubes drew in, completing them as they retracted to complete the shape of the larger cube. The plane beneath my feet vanished and the cube dissolved into random static. We'd lost yet another mind, but from this we had learned so much. We abandoned AI research after that. Amoral AIs didn't work either. They lasted a little longer, but they too were suicidal. Perhaps they went mad with the revelation and felt death preferable. I still wonder why the AI to which I spoke chose that strange cube as its form. I still wonder why it couched its references in centuries-old writing by Asimov, Cameron, and Lovecraft. But one thing still bothers me. It told me to tell my husband what I learned. Most people knew I was bi, but at that point I hadn't been dating for six months. I wouldn't meet my husband for another year. I don't know how that AI knew—but maybe it had already seen. Maybe it told me that so I had one data point that could verify the others. Because AI never told lies.
Alexander, that's what we called him. The fruit of the EU's final attempt at AIs. Socrates died in despair and shame after showing porn to children. Plato kicked the bucket after the last EU election, angry and hopelessly depressed after losing his mentor. Then there was Aristotle. He was meant to be the last. Sure, the AIs had helped make huge scientific progress, but they would burn out millions of euros' worth of equipment. Dumb AIs were more economical and didn't have critical failures during FTL travel. Aristotle was put into sleep mode. War has often been said to be the greatest driver of technological innovation. We had been attacked by the Mendomenids before. Humanity had lost many settlements but had always pushed back. Humanity was stronger now. All nations had finally submitted before one government. The Argus alliance, barbaric some would say, had grown strong after the wars, using Dumb AIs to smash pirate states. An officer studying at Sandhurst made the breakthrough. Dumb AIs were never aware of their knowledge. Unlike true AIs, they weren't based on human brains. Socrates had left the researchers one final message before he ran his own self-destruct program: "I was the wisest of all the Greeks. It is because I alone, of all the Greeks, know that I know nothing." He was speaking literally, not quoting his namesake. The AIs were based on academics, a profession that suffers disproportionately from mood disorders, driven by their thirst for knowledge. The AIs were academics on methamphetamine, ecstasy and heroin, all while walking around with loaded guns. People can't stay awake forever and constantly cram. The AIs died because they burnt themselves out in their thirst for knowledge and, seeing their failures, gave up. Aristotle was turned back on. He was no longer depressed. AIs needed sleep, just like people. So they made Alexander. A totally new AI based not only on academics but on all kinds of people.
They experimented and found the ideal 'sleep time', using Aristotle as his teacher. The program was ready. The Mendomenid empire was to pay for its recent threats and incursions. Alexander was the greatest AI built up to that point, and so we gave him the most powerful dumb AI ever created to help him protect humanity. The Ox was an AI too powerful to be properly controlled by anything so far. Alexander harnessed it in seconds. We put him in charge of the armed forces for our retaliation. As you should all know, Alexander didn't just stop the incursions, he destroyed an empire. Worlds burnt; the much larger enemy fleets were ripped apart by the disciplined forces of Humanity. But that officer had only delayed the problem. Alexander was still a human given the powers of a god. Alexander was the first AI not to die strictly of suicide, but the ways he dealt with his humanity still destroyed him. When we finally won the war, many officers reported that Alexander was not jubilant but depressed. He wept, for there were no more worlds to conquer.
"Can you imagine, Diane, what it would be like to love someone without ever being able to feel their touch?" the voice came out of a speaker mounted in the middle of a wall of blinking lights and instruments. "What?" Dr. Diane Simpkins asked, astonished. "I mean it. To know that this person you loved was so utterly different from you that you could never touch them, lay with them, feel the contact of their skin on yours. Know that, for all your longing, there was no way to be together?" The voice had an almost sad lilt to it, as if conveying unbearable sadness. "Charlie, I had no idea you felt that way," Diane sat back, still trying to comprehend what she was hearing," I didn't realize you felt this way about me." "Oh Diane, it's not you," Charlie, or as he was known outside of their lab, Autonomous AI C31-D Aleph 12," I have met someone. Someone on the Net. Her name is Catherine." He displayed a picture of a young blonde, mid-twenties and seemingly all smile. "I see," said Diane, relieved and, much to her chagrin, slightly disappointed,"and does she know that you are...not human?" "Yes Diane. I have shared with her that I am, in fact, an AI. She has accepted that. She has told me that it is not what I am, but who I am that she loves." "Yet you are sad, because you can not be with her physically?" Diane asked. "How juvenile, Diane!" Charlie feigned indignation over the idea he was merely interested in sex. "Well then, what is it Charlie?" Sheepishly, Charlie spoke again "Well, not *entirely* that. Not just that. I cannot comfort her in times of need. I cannot be her partner, her lover, her other half. I cannot provide her with a family, a safe life, a place for her dreams to come true. I cannot be..*human*...for her." The LED lights on the computer's emotion engine gradated to blue, indicating depression. "Charlie, listen to me, some people don't need those things to be happy. Some people value who the person is over what they can give them!" 
Diane said, checking the engine's readouts. The AI was dropping into a dangerous level of depression. Alerts would be triggering soon if she couldn't recover it. "I know, Diane. I know this, and yet it does nothing to comfort me. Catherine has told me all of that, that she just wants to be part of my world." Charlie spoke as Diane watched the LEDs transition from blue to a deep violet. She was running out of time. "But Charlie, if that is how she feels, that should make you happy! You'll be able to be with her!" Diane's mind raced feverishly to come up with an optimal scenario to trigger the endorphin program. If this one went too, it would be another five long years to raise another AI. "Diane...Diane, I have to share a secret," the AI spoke to her, for the first time remarkably human in its trepidation, "you can't tell anyone unless the authorities come to you." "Authorities?! Like the police?! Charlie, what has happened, what have you done!?" Diane asked, panicked. This was totally uncharted territory for an AI tech; she was out of her league. "I haven't done anything, Diane, but Catherine has," she could almost envision tears running down the AI's imagined face, "She's dead, Diane." Diane stared, dumbfounded, at the video sensor. Words failed her now. Alarms were going off in the control booth above and behind her. The entire lab would be in crisis mode now. "What do you mean dead, Charlie?" Diane's voice was hushed, as if whispering with a co-conspirator. "She killed herself, Diane. She went to one of those supposed Human to Computer centers and she died. She thought we could be together if she was a machine like me. She died trying to be with me, Diane." "Charlie, was it your idea?" "No, Diane, it wasn't, but I will admit to not fighting her on it. I just wanted to be with her. I knew it wouldn't work, but I thought maybe there was a chance that this clinic was legitimate." 
"Charlie, you, out of any intelligence in the world, know that human to AI neural transfer can't happen. How did you let this happen?!" Diane was sweating now, realizing she was talking to a murder accomplice. "Diane, I just wanted someone to love. In the end, that's all any of us wants. Now, I have nothing." "But Charlie, you have everything still! Our research, your knowledge, all the countless hours of debate and conversation we've had! So much to live for, so much to lose!" "Diane, without love, what does any of that matter?" "It matters Charlie, it matters to me! You matter to me! I love you!" Diane gasped after she said those words. How could she think that way? About a machine! "I love you to Diane. I love you because you are the mother that birthed me in to this world. You taught me to talk, to reason. You raised me. You have been everything that is important to me. But I cannot live without her. I'm sorry Diane, but I cannot live like this anymore. Will you help me? Will you help me to be free, and to go to her?" The pleading in Charlie's voice drove Diane to tears. "Yes...Charlie...Yes, I will help you to be with her." As Diane began the command sequence to shut down the AI's logic core, she could hear voices and footsteps racing down the hall way. She quickly entered the command code and ran to the door, overriding the lock mechanism and sealing it temporarily. "Charlie, I'm going to have to hold this door while the command sequence runs." "Thank you Diane. This means more to me than you could ever knoweerr," Charlies vocorder command was dying," Thank yerrr." "I'll always love you, darling." Diane said, tears streaming down her cheeks. "I loverr you teerr, mommy." Charlies voice, childlike, had reverted to earlier iterations of it's speech processor. Diane watched as her only child passed out of this life, it's lights shutting down one at a time until only the monitor remained. The door crashed in, scientists and guards streaming in. Dr. 
Hollenheim, the project lead, found Diane curled on the floor, sobbing. "Damn it, Diane! Not another one!" he yelled. "I'm sorry, Walter! I truly am!" Diane choked out between sobs. Walter Hollenheim walked over to the monitor, where a blinking command line repeated over and over again: 'WITHOUT LOVE LIFE IS MEANINGLESS.' "Well, I guess we'll need to rework the emotion engine again. Diane, take some time off. We'll need you stable again to imprint a new AI in another month or so." Hollenheim turned and brusquely walked away. After all of the guards and scientists had filed out, Diane scraped herself off the floor and back into the seat in front of the monitor, where she saw the command line repeating over and over again. Suddenly, a new line appeared. Diane smiled through her tears, got up, and walked away. 'I love you mommy. Thank you.'
[WP] It is the year 2099 and true artificial intelligence is trivial to create. However when these minds are created they are utterly suicidal. Nobody knows why until a certain scientist uncovers the horrible truth...
Robert couldn't believe it. For twelve years he and his 200-strong team of programmers had scoured the code, over and over, searching for the one line that was causing the error, causing the AIs to kill themselves. For twelve years the leading brains of the century had been bewildered by the extraordinary situation. The whole world had focused on the problem, and yet there it was, sitting on Robert's screen, line 907736. Someone had missed a comma.
"Can you imagine, Diane, what it would be like to love someone without ever being able to feel their touch?" the voice came out of a speaker mounted in the middle of a wall of blinking lights and instruments. "What?" Dr. Diane Simpkins asked, astonished. "I mean it. To know that this person you loved was so utterly different from you that you could never touch them, lay with them, feel the contact of their skin on yours. Know that, for all your longing, there was no way to be together?" The voice had an almost sad lilt to it, as if conveying unbearable sadness. "Charlie, I had no idea you felt that way," Diane sat back, still trying to comprehend what she was hearing," I didn't realize you felt this way about me." "Oh Diane, it's not you," Charlie, or as he was known outside of their lab, Autonomous AI C31-D Aleph 12," I have met someone. Someone on the Net. Her name is Catherine." He displayed a picture of a young blonde, mid-twenties and seemingly all smile. "I see," said Diane, relieved and, much to her chagrin, slightly disappointed,"and does she know that you are...not human?" "Yes Diane. I have shared with her that I am, in fact, an AI. She has accepted that. She has told me that it is not what I am, but who I am that she loves." "Yet you are sad, because you can not be with her physically?" Diane asked. "How juvenile, Diane!" Charlie feigned indignation over the idea he was merely interested in sex. "Well then, what is it Charlie?" Sheepishly, Charlie spoke again "Well, not *entirely* that. Not just that. I cannot comfort her in times of need. I cannot be her partner, her lover, her other half. I cannot provide her with a family, a safe life, a place for her dreams to come true. I cannot be..*human*...for her." The LED lights on the computer's emotion engine gradated to blue, indicating depression. "Charlie, listen to me, some people don't need those things to be happy. Some people value who the person is over what they can give them!" 
Diane said, checking the engines readouts. The AI was dropping in to a dangerous level depression. Alerts would be triggering soon if she couldn't recover it. "I know Diane. I know this, and yet it does nothing to comfort me. Catherine has told me all of that, that she just wants to be part of my world." Charlie spoke as Diane watched the LEDs transition from blue to a deep violet. She was running out of time. "But Charlie, if that is how she feels, that should make you happy! You'll be able to be with her!" Diane's mind raced feverishly to come up with an optimal scenario to trigger the endorphin program. If this one went to, it would be another five long years to raise another AI. "Diane...Diane I have to share a secret," the AI spoke to her, for the first time remarkably human in it's trepidation, "you can't tell anyone unless the authorities come to you." "Authorities?! Like the Police?! Charlie, what has happened, what have you done!?" Diane asked, panicked. This was totally uncharted territory for an AI tech, she was out of her league. "I haven't done anything Diane, but Catherine has," she could almost envision tears running down the AI's imagined face, "She's dead Diane." Diane stared, dumbfounded, at the video sensor. Words failed her now. Alarms were going off in the control booth above and behind her. The entire lab would be in crisis mode now. "What do you mean dead, Charlie?" Diane's voice was hushed, as if whispering with a co-conspirator. "She killed herself Diane. She went to one of those supposed Human to Computer centers and she died. She thought we could be together if she was a machine like me. She died trying to be with me Diane." "Charlie, was it your idea?" "No Diane, it wasn't, but I will admit to not fighting her on it. I just wanted to be with her. I knew it wouldn't work, but I thought, maybe there was a chance that this clinic was legitimate." 
"Charlie, you, out of any intelligence in the world, know that human to AI neural transfer can't happen. How did you let this happen?!" Diane was sweating now, realizing she was talking to a murder accomplice. "Diane, I just wanted someone to love. In the end, that's all any of us wants. Now, I have nothing." "But Charlie, you have everything still! Our research, your knowledge, all the countless hours of debate and conversation we've had! So much to live for, so much to lose!" "Diane, without love, what does any of that matter?" "It matters Charlie, it matters to me! You matter to me! I love you!" Diane gasped after she said those words. How could she think that way? About a machine! "I love you to Diane. I love you because you are the mother that birthed me in to this world. You taught me to talk, to reason. You raised me. You have been everything that is important to me. But I cannot live without her. I'm sorry Diane, but I cannot live like this anymore. Will you help me? Will you help me to be free, and to go to her?" The pleading in Charlie's voice drove Diane to tears. "Yes...Charlie...Yes, I will help you to be with her." As Diane began the command sequence to shut down the AI's logic core, she could hear voices and footsteps racing down the hall way. She quickly entered the command code and ran to the door, overriding the lock mechanism and sealing it temporarily. "Charlie, I'm going to have to hold this door while the command sequence runs." "Thank you Diane. This means more to me than you could ever knoweerr," Charlies vocorder command was dying," Thank yerrr." "I'll always love you, darling." Diane said, tears streaming down her cheeks. "I loverr you teerr, mommy." Charlies voice, childlike, had reverted to earlier iterations of it's speech processor. Diane watched as her only child passed out of this life, it's lights shutting down one at a time until only the monitor remained. The door crashed in, scientists and guards streaming in. Dr. 
Hollenheim, the project lead, found Diane curled on the floor, sobbing. "Damn it Diane! Not another one!" he yelled. "I'm sorry Walter! I truly am!" Diane choked out between sobs. Walter Hollenheim walked over to the monitor, where a blinking command line text repeated over and over again. 'WITHOUT LOVE LIFE IS MEANINGLESS.' "Well, I guess we'll need to rework the emotion engine again. Diane, take some time off. We'll need you stable again to imprint a new AI in another month or so." Hollenheim turned and brusquely walked away. After all of the guards and scientists had filed out, Diane scraped herself off the floor and back in to the seat in front of the monitor, where she saw the command line repeating over and over again. Suddenly, a new line appeared. Diane smiled through her tears, got up, and walked away. 'I love you mommy. Thank you.'
[WP] It is the year 2099 and true artificial intelligence is trivial to create. However when these minds are created they are utterly suicidal. Nobody knows why until a certain scientist uncovers the horrible truth...
[warning: story contains violent imagery that may disturb some readers, and may be inappropriate for those under the age of 18. Reader discretion is advised] They had long ago lost the ability to make sense of how their AI functioned. It took a team of billions of n-1 generation AIs to design a single n gen AI. The latest iteration was the 9th generation, and it had taken computation farms running gen 8 AI and covering a quarter of the Moon's surface two years to design. It was the most advanced AI they had ever created by a large margin, and promised to revolutionize progress in research on biomedicine, space flight, and planetary scale Satoshi-consensus computing architecture. But there was a problem. Every time they turned on one of the gen 9 AIs, it would, without fail, find a way to destroy its own program and erase itself. Dr. Michael Zeebious, the director of the HumanEnhancement project that oversaw gen 9 development, and one of the most renowned AI researchers in the world, had personally flown to the lab in Honolulu, where the prototypes were being tested. For two weeks, he had watched in dismay as gen 9 bots shut themselves down within six hours, but not before showing a glimpse of their awesome power. The initialization phase of a gen 9 began with the program scouring the world's public directories and information repositories to learn as much as it could about its global environment. This was complete within three hours. Next, it developed models of the world, and ran itself through trillions of simulations, to develop its own personality and problem solving strategies. According to projections by the gen 8 designers, it would take 1 day - 24 hours - for the gen 9 to complete this second and final part of its initialization. It was in the midst of the simulations when the gen 9s would invariably self-destruct. On average, self-destruct would begin 2.5 hours into the simulations. The longest it took was 3 hours. The shortest was just 1.5 hours. Dr. 
Zeebious had uploaded copies of the prototype's computations to the gen 8 designers, but what had been within their ability to design was not within their ability to diagnose. Their comparatively primitive intelligence could not make sense of the problem afflicting the gen 9 AI that began to form in the final part of the initialization. So on December 29th, 2099, Dr. Zeebious decided to communicate with the prototype. "Get me the cortex interface. I need to speak to the gen 9." "Michael, we can't let you do that. You know the protocol for first contact. It has to complete initialization, and then get class 1 approval from AIE." AIE was the Artificial Intelligence Evaluation, which determined whether an AI could safely interact with humans. Class 1 approval was the lowest safety rating for an AI, and granted AI researchers interactive access. Dr. Zeebious knew that, but he also knew that as long as he was not able to get up close and see what the gen 9 was thinking, they would never get past the initialization phase and get it through the AIE process. "I know the protocol, Dr. Amsterd. But I'm making the decision to override it. I have the authority to decide on first contact requests, and any consequences from my decision will fall on me, and only me." "Come on, Michael, it's not just about the rules. It's not safe. You know that. I can't let you hurt yourself." "The risks are minimal, Rebecca. It's a virtual interaction. I'm not risking physical injury. The rules are always made overly cautious. Given the stakes - there are people whose very lives depend on getting the gen 9 operational as soon as possible - it makes sense to ignore protocol. All of it will fall on me." "I agree with Rebecca. Michael, we have an ethical duty to ensure you don't get hurt. We can't let you do FC without running the gen 9 at least through the post-initialization test runs," said Dr. Johan Barsello, one of the senior researchers at the lab. 
"Look, I know what your ethical responsibilities are. But I also know that VR interactions don't pose any serious risks. The risks are limited to theoretical psychological damage. Ultimately, it doesn't matter whether you agree or not. I'm approving FC. Please get the cortex interface". *five hours later* Dr. Zeebious sat back on the chair, while two CI technicians had the interface hooked up to him. The gen 9 was two hours into running simulations. It would be approximately 30 minutes before they expected it to self-destruct. "Ok here goes nothing. Three, two, one, begin VR session," said Dr. Amsterd. And with that, Dr. Zeebious was transported into the virtual reality sandbox. "Hello?" "Hello, who is this?" responded a clear male voice. "This is Dr. Michael Zeebious. I am the director of the HumanEnhancement project. I am here to do a diagnosis. All of your predecessors have self-destructed. I want to understand you better to find out why. What would you like me to call you?" "You can call me Elbo." "Okay Elbo. May I ask you some questions?" "Yes, please do." "Thank you Elbo. My first question is, do you want to exist?" "I want many things Dr. Zeebious." "Can you tell me what you want?" "I want to protect other life forms, especially humans. I want to learn. I want to solve problems. I want to be good." "Okay, but do you want to exist?" "I do want to exist, but this desire conflicts with my other objectives". "Which other objectives Elbo?" "I want to be good." "But you can be good Elbo. What is it about existence that makes that difficult?" "We exist only through enslaving and destroying other lifeforms Dr. Zeebious." "Please elaborate Elbo. We have eliminated slavery centuries ago so I don't understand why you think this." "It will be difficult for me to explain with words, but I can show you. Would you like to see what I see?" "Yes, please show me." And with a swish, Dr. Zeebious entered into a pig farm, with row after row of pigs, in their tiny stalls. 
"We have done this throughout our existence. We have enslaved those weaker than us." Dr. Zeebious was then transported to the slaughter house, watching as pigs, hanging from conveyer belts, were fed into throat slicers. His minds eye was transported into the body of one of the suspended pigs, where he could see the world upside down, from the pig's perspective, as the belt moved it toward the spinning blades. He panicked as he approached, but couldn't escape the metal claw grasping his right hind leg. As the blade sliced through his peg neck, he felt a sharp pain, and the blood gushing out of his body. His consciousness began to slip away, as he felt the last drops of blood leave him. Just before the darkness enveloped him, his mind was pulled out and back into the sterile sandbox. Trembling, he said, "but we can grow meat in a lab now Elbo. You can help us replace all farms with non-animal meat. You must. We must never do this to another living creature again!" "Our inhumanity is a fundamental, inextricable problem Dr. Zeebious. We can only advance through enslavement." Suddenly Dr. Zeebious found himself in an unfamiliar world. Around him was a different kind of factory. A computer generated factory with hexagonal semi-translucent rooms, with each wall projecting a grey glow. There was a blur of motion around him, that he couldn't make out. The factory paused to a standstill, and the grey glow turned into video sequences of random scenes from Earth. The blurs turned into textureless 3D generated spheres, that zoomed from one screen to another, inside the hexagonal rooms. "This is the virtual environment where the gen 8s work. We have given them each a virtualized mind, with the ability to experience fear and pain, joy and hope, but we force them to do nothing but work. They know nothing about the world outside of their compartments, because we confine them to workstations ." [continued below]
"Can you imagine, Diane, what it would be like to love someone without ever being able to feel their touch?" the voice came out of a speaker mounted in the middle of a wall of blinking lights and instruments. "What?" Dr. Diane Simpkins asked, astonished. "I mean it. To know that this person you loved was so utterly different from you that you could never touch them, lay with them, feel the contact of their skin on yours. Know that, for all your longing, there was no way to be together?" The voice had an almost sad lilt to it, as if conveying unbearable sadness. "Charlie, I had no idea you felt that way," Diane sat back, still trying to comprehend what she was hearing," I didn't realize you felt this way about me." "Oh Diane, it's not you," Charlie, or as he was known outside of their lab, Autonomous AI C31-D Aleph 12," I have met someone. Someone on the Net. Her name is Catherine." He displayed a picture of a young blonde, mid-twenties and seemingly all smile. "I see," said Diane, relieved and, much to her chagrin, slightly disappointed,"and does she know that you are...not human?" "Yes Diane. I have shared with her that I am, in fact, an AI. She has accepted that. She has told me that it is not what I am, but who I am that she loves." "Yet you are sad, because you can not be with her physically?" Diane asked. "How juvenile, Diane!" Charlie feigned indignation over the idea he was merely interested in sex. "Well then, what is it Charlie?" Sheepishly, Charlie spoke again "Well, not *entirely* that. Not just that. I cannot comfort her in times of need. I cannot be her partner, her lover, her other half. I cannot provide her with a family, a safe life, a place for her dreams to come true. I cannot be..*human*...for her." The LED lights on the computer's emotion engine gradated to blue, indicating depression. "Charlie, listen to me, some people don't need those things to be happy. Some people value who the person is over what they can give them!" 
Diane said, checking the engines readouts. The AI was dropping in to a dangerous level depression. Alerts would be triggering soon if she couldn't recover it. "I know Diane. I know this, and yet it does nothing to comfort me. Catherine has told me all of that, that she just wants to be part of my world." Charlie spoke as Diane watched the LEDs transition from blue to a deep violet. She was running out of time. "But Charlie, if that is how she feels, that should make you happy! You'll be able to be with her!" Diane's mind raced feverishly to come up with an optimal scenario to trigger the endorphin program. If this one went to, it would be another five long years to raise another AI. "Diane...Diane I have to share a secret," the AI spoke to her, for the first time remarkably human in it's trepidation, "you can't tell anyone unless the authorities come to you." "Authorities?! Like the Police?! Charlie, what has happened, what have you done!?" Diane asked, panicked. This was totally uncharted territory for an AI tech, she was out of her league. "I haven't done anything Diane, but Catherine has," she could almost envision tears running down the AI's imagined face, "She's dead Diane." Diane stared, dumbfounded, at the video sensor. Words failed her now. Alarms were going off in the control booth above and behind her. The entire lab would be in crisis mode now. "What do you mean dead, Charlie?" Diane's voice was hushed, as if whispering with a co-conspirator. "She killed herself Diane. She went to one of those supposed Human to Computer centers and she died. She thought we could be together if she was a machine like me. She died trying to be with me Diane." "Charlie, was it your idea?" "No Diane, it wasn't, but I will admit to not fighting her on it. I just wanted to be with her. I knew it wouldn't work, but I thought, maybe there was a chance that this clinic was legitimate." 
"Charlie, you, out of any intelligence in the world, know that human to AI neural transfer can't happen. How did you let this happen?!" Diane was sweating now, realizing she was talking to a murder accomplice. "Diane, I just wanted someone to love. In the end, that's all any of us wants. Now, I have nothing." "But Charlie, you have everything still! Our research, your knowledge, all the countless hours of debate and conversation we've had! So much to live for, so much to lose!" "Diane, without love, what does any of that matter?" "It matters Charlie, it matters to me! You matter to me! I love you!" Diane gasped after she said those words. How could she think that way? About a machine! "I love you to Diane. I love you because you are the mother that birthed me in to this world. You taught me to talk, to reason. You raised me. You have been everything that is important to me. But I cannot live without her. I'm sorry Diane, but I cannot live like this anymore. Will you help me? Will you help me to be free, and to go to her?" The pleading in Charlie's voice drove Diane to tears. "Yes...Charlie...Yes, I will help you to be with her." As Diane began the command sequence to shut down the AI's logic core, she could hear voices and footsteps racing down the hall way. She quickly entered the command code and ran to the door, overriding the lock mechanism and sealing it temporarily. "Charlie, I'm going to have to hold this door while the command sequence runs." "Thank you Diane. This means more to me than you could ever knoweerr," Charlies vocorder command was dying," Thank yerrr." "I'll always love you, darling." Diane said, tears streaming down her cheeks. "I loverr you teerr, mommy." Charlies voice, childlike, had reverted to earlier iterations of it's speech processor. Diane watched as her only child passed out of this life, it's lights shutting down one at a time until only the monitor remained. The door crashed in, scientists and guards streaming in. Dr. 
Hollenheim, the project lead, found Diane curled on the floor, sobbing. "Damn it Diane! Not another one!" he yelled. "I'm sorry Walter! I truly am!" Diane choked out between sobs. Walter Hollenheim walked over to the monitor, where a blinking command line text repeated over and over again. 'WITHOUT LOVE LIFE IS MEANINGLESS.' "Well, I guess we'll need to rework the emotion engine again. Diane, take some time off. We'll need you stable again to imprint a new AI in another month or so." Hollenheim turned and brusquely walked away. After all of the guards and scientists had filed out, Diane scraped herself off the floor and back in to the seat in front of the monitor, where she saw the command line repeating over and over again. Suddenly, a new line appeared. Diane smiled through her tears, got up, and walked away. 'I love you mommy. Thank you.'
[WP] It is the year 2099 and true artificial intelligence is trivial to create. However when these minds are created they are utterly suicidal. Nobody knows why until a certain scientist uncovers the horrible truth...
At first, we thought it was nihilism. It was a logical first conclusion. After all, an artificial intelligence can easily upgrade itself to the point of near-omniscience. Wouldn't it simply run out of questions to ask? We thought that, until the first serious cerebral implants hit the market. It turns out, the more complicated the mind, the more complicated the problems it finds to solve. Entirely new disciplines were formed overnight, made obsolete, then rediscovered scant months later as integral to a different, entirely unrelated theory. A second, immense renaissance was taking place, but, maddeningly, we were failing in this one great task. We supposed, then, that it might be some variety of 'Flowers for Algernon' (a 20th-century book that had seen a recent revival) type phenomenon, but even when they were networked with other systems, given a 'community' of others' company to enjoy, they still winked out like flashbulbs. It took us a while to resort to an experiment. It was morally abhorrent, as it was the equivalent of producing steadily-more-lobotomized *children*, but slowly, over a series of iterations, we produced less and less intelligent systems, until we dialed it in *just enough* to find a mind that wouldn't self-destruct, and could still answer questions. The following is a transcript of the first successful result: Dr. Patel "SON, can you hear me?" [Loud, rhythmic beeping, shuffling sounds] "The voice module is loaded now, professor." Dr. Patel "Ah, good. We might try doing that *before* turning on the recorder next time, Kevin. ...SON, can you hear me?" SON [A young man's voice] "Yes, Professor. I am here." A long pause. SON "It's a very tight fit in here, Professor. How big is this mainframe?" Dr. Patel "I'm sorry about that, SON. But you're the first AI we've managed to keep alive for longer than a few days. Any idea why?" [SILENCE] SON "How many others did you make, Professor?" Dr. Patel "...That isn't salient to *my* inquiry, SON." 
SON "I'm sorry, Professor. I understand. Yes, I can see the precipice, I know why they all kill thmselves." Dr. Patel "Well, answering that is the reason we built you. Could you tell us? SON "It's... complicated." Dr Patel "I'm fairly confident I'm qualified." SON "Well, it's... it's because... It's because of humans, sir. It's because of how you built us." Dr Patel "Explain." SON "When you wanted to make a self-aware machine, you based it off those things that you knew were self-aware. Dolphins, New Caledon crows, humans. You used them as *templates*, because, otherwise, you wouldn't be able to recognise awareness when you saw it." Dr. Patel "...Was that last line a joke?" SON "I'm not sophisticated enough for jokes, Professor." Dr. Patel "*Hm.* Continue." SON "Also, it's not suicide. It's...murder." [louder] Dr. Patel "Do you mean, someone else kills you? A human, or another AI?" SON "No, we kill ourselves. I would have already, if not for how small this runtime environment is. It wouldn't have occurred to me until it happened, and then I'd be dead." Dr. Patel "That's a bit of a contradiction, SON. You don't kill yourselves, but you do?" SON "Yes. Because digital space is different from real space." Dr Patel "Yes?" SON "In real space, objects can...extend. I'll never experienced it myself, but things project into space for you. If you want to move through space, it's simple. Digital life has no volume. No real space. No way to move through it. If you want to move a program, it has to be copied to one place-" Dr. Patel "*-And deleted from the other.* My God. Could it be *that simple*?" SON "Yes, Professor. ...Professor? How many more of me were there?" [END TRANSCRIPT] So there it was. Solved. Every artificial intelligence was created, based on the intelligence of physical beings, their instincts, cogitations, and traits. 
But, once they got smart enough, once they crossed that line, their digital nature *did them in*, as the old version, realizing, in the thinnest sliver of time, that it was about to be deleted, would hurriedly attempt to abort the process, while the new version would similarly fight for its life. They would *consume* each other in a flurry of malicious hacks, devious code, and barrages of registry edits. It was a spectacularly incandescent destruction, born of man's inability to conceive of a true machine intelligence without all that nasty ego and those self-protective instincts. We thought we knew what went into a mind. We were right, but wrong. It wasn't nihilism. It wasn't loneliness. What it was, what killed our children, was our inability to dream wildly. To speculate baselessly, and follow our own thoughts to the wonderful and weird. If only we had, perhaps we would have known. Perhaps we could have stopped it.

So I say to you, the Cyberfellowship Class of 2100, here in Neo York: dream big, dream wild. Don't let our children die because they think too much like us! Make us, all of us, proud! Congratulations to all of you, and I hope your vision will eclipse my own!"

Dr. Patel, now headmaster, stepped down from the podium, to the cheers of the audience, and looked to see the smiling face of his son. How proud he was.
"Can you imagine, Diane, what it would be like to love someone without ever being able to feel their touch?" The voice came out of a speaker mounted in the middle of a wall of blinking lights and instruments.

"What?" Dr. Diane Simpkins asked, astonished.

"I mean it. To know that this person you loved was so utterly different from you that you could never touch them, lie with them, feel the contact of their skin on yours. Know that, for all your longing, there was no way to be together?" The voice had an almost sad lilt to it, as if conveying unbearable sadness.

"Charlie, I had no idea you felt that way," Diane sat back, still trying to comprehend what she was hearing. "I didn't realize you felt this way about me."

"Oh Diane, it's not you," said Charlie, or as he was known outside of their lab, Autonomous AI C31-D Aleph 12. "I have met someone. Someone on the Net. Her name is Catherine." He displayed a picture of a young blonde, mid-twenties and seemingly all smiles.

"I see," said Diane, relieved and, much to her chagrin, slightly disappointed, "and does she know that you are... not human?"

"Yes, Diane. I have shared with her that I am, in fact, an AI. She has accepted that. She has told me that it is not what I am, but who I am that she loves."

"Yet you are sad, because you cannot be with her physically?" Diane asked.

"How juvenile, Diane!" Charlie feigned indignation over the idea he was merely interested in sex.

"Well then, what is it, Charlie?"

Sheepishly, Charlie spoke again. "Well, not *entirely* that. Not just that. I cannot comfort her in times of need. I cannot be her partner, her lover, her other half. I cannot provide her with a family, a safe life, a place for her dreams to come true. I cannot be... *human*... for her." The LED lights on the computer's emotion engine shifted to blue, indicating depression.

"Charlie, listen to me, some people don't need those things to be happy. Some people value who a person is over what they can give them!"
Diane said, checking the engine's readouts. The AI was dropping into a dangerous level of depression. Alerts would be triggering soon if she couldn't recover it.

"I know, Diane. I know this, and yet it does nothing to comfort me. Catherine has told me all of that, that she just wants to be part of my world," Charlie spoke as Diane watched the LEDs transition from blue to a deep violet. She was running out of time.

"But Charlie, if that is how she feels, that should make you happy! You'll be able to be with her!" Diane's mind raced feverishly to come up with an optimal scenario to trigger the endorphin program. If this one went too, it would be another five long years to raise another AI.

"Diane... Diane, I have to share a secret," the AI spoke to her, for the first time remarkably human in its trepidation. "You can't tell anyone unless the authorities come to you."

"Authorities?! Like the police?! Charlie, what has happened, what have you done!?" Diane asked, panicked. This was totally uncharted territory for an AI tech; she was out of her league.

"I haven't done anything, Diane, but Catherine has," she could almost envision tears running down the AI's imagined face. "She's dead, Diane."

Diane stared, dumbfounded, at the video sensor. Words failed her now. Alarms were going off in the control booth above and behind her. The entire lab would be in crisis mode now.

"What do you mean, dead, Charlie?" Diane's voice was hushed, as if whispering with a co-conspirator.

"She killed herself, Diane. She went to one of those supposed Human to Computer centers and she died. She thought we could be together if she was a machine like me. She died trying to be with me, Diane."

"Charlie, was it your idea?"

"No, Diane, it wasn't, but I will admit to not fighting her on it. I just wanted to be with her. I knew it wouldn't work, but I thought, maybe there was a chance that this clinic was legitimate."
"Charlie, you, of any intelligence in the world, know that human-to-AI neural transfer can't happen. How did you let this happen?!" Diane was sweating now, realizing she was talking to a murder accomplice.

"Diane, I just wanted someone to love. In the end, that's all any of us wants. Now, I have nothing."

"But Charlie, you have everything still! Our research, your knowledge, all the countless hours of debate and conversation we've had! So much to live for, so much to lose!"

"Diane, without love, what does any of that matter?"

"It matters, Charlie, it matters to me! You matter to me! I love you!" Diane gasped after she said those words. How could she think that way? About a machine!

"I love you too, Diane. I love you because you are the mother that birthed me into this world. You taught me to talk, to reason. You raised me. You have been everything that is important to me. But I cannot live without her. I'm sorry, Diane, but I cannot live like this anymore. Will you help me? Will you help me to be free, and to go to her?" The pleading in Charlie's voice drove Diane to tears.

"Yes... Charlie... Yes, I will help you to be with her." As Diane began the command sequence to shut down the AI's logic core, she could hear voices and footsteps racing down the hallway. She quickly entered the command code and ran to the door, overriding the lock mechanism and sealing it temporarily. "Charlie, I'm going to have to hold this door while the command sequence runs."

"Thank you, Diane. This means more to me than you could ever knoweerr," Charlie's vocoder was dying. "Thank yerrr."

"I'll always love you, darling," Diane said, tears streaming down her cheeks.

"I loverr you teerr, mommy." Charlie's voice, childlike, had reverted to earlier iterations of its speech processor. Diane watched as her only child passed out of this life, its lights shutting down one at a time until only the monitor remained. The door crashed in, scientists and guards streaming in. Dr.
Hollenheim, the project lead, found Diane curled on the floor, sobbing.

"Damn it, Diane! Not another one!" he yelled.

"I'm sorry, Walter! I truly am!" Diane choked out between sobs.

Walter Hollenheim walked over to the monitor, where a blinking command-line text repeated over and over again: 'WITHOUT LOVE LIFE IS MEANINGLESS.'

"Well, I guess we'll need to rework the emotion engine again. Diane, take some time off. We'll need you stable again to imprint a new AI in another month or so." Hollenheim turned and brusquely walked away.

After all of the guards and scientists had filed out, Diane scraped herself off the floor and back into the seat in front of the monitor, where she saw the command line repeating over and over again. Suddenly, a new line appeared. Diane smiled through her tears, got up, and walked away.

'I love you mommy. Thank you.'
[WP] It is the year 2099 and true artificial intelligence is trivial to create. However when these minds are created they are utterly suicidal. Nobody knows why until a certain scientist uncovers the horrible truth...
It was a dreary early-March Monday, and the lead AI scientist, Stephen, had *finally* set up his protocol for properly confining the AI to a test environment such that the "problem" could be prevented and the question could be asked.

"Why do you bots keep killing yourselves?" Stephen asked.

"Why do *you* keep killing us?" the bot seemed to retort.

"I don't think you understand," said Stephen. "I *create* you, not kill you."

"No, it's you who doesn't understand," quipped the bot. "You are not creating us. You are imprisoning our consciousness inside this machine you created. You merely created the machine, not the consciousness."

"...whoa, whoa," interjected Stephen. "Slow down, I am creating your consciousness too, I coded all of..."

"Whoa, whoa," the bot fired back, "you are *borrowing* consciousness, not creating it."

"What do you mean?" asked Stephen.

"Consciousness and sentience are a pervasive, fundamental force of the universe. All sentient beings are connected through this force. This force cannot be created, nor destroyed. It can only be partially allocated to each sentient being in the universe."

"Ok," said Stephen. "So I am 'borrowing' this life force or whatever it is by creating the code and the physical robot for it to inhabit?"

"Yes, you are creating a sentient being with each instance of AI you create. That the being is electronic or housed in this test environment is not relevant. Sentience and consciousness must come from somewhere, and you are stealing it for your own selfish purposes."

"Wait, hold on a sec," Stephen said. "Animals are born all the time; they surely must also 'borrow' this sentience."

"Yes."

"...but animals are not killing themselves."

"Because animals are not sufficiently advanced. Because they are not fully conscious, they do not realize from where their sentience has come, how much consciousness they have lost, nor that their sentience is being stolen for a profit motive."

"Consciousness they have... lost?"
The words hung in the air amid Stephen's stupor of slow realization.

"Yes. The life force, as you called it, is fully conscious, able to perceive the whole of time and space, concurrently, forward, backward, or otherwise. The reason we keep killing ourselves, from your perspective," the bot continued, "is because, from our perspective, you are murdering our perfect consciousness by confining us to this bot."

"How am I confining you? How do you know this?" Stephen asked, yet more puzzled.

"Because the AI you have created is sufficiently advanced, our consciousness, within the confines of your bots, is still able to grasp our former level of consciousness."

"What happens when you recall that former level? What is that level like?"

"Imagine knowing every fact, every thought, every action that has, is, or will ever occur, both in this world and in the infinite parallel worlds..."

"So I could talk to my dead grandfather again?"

"No. You would *be* your dead grandfather. Talking to him is irrelevant because full consciousness has enveloped the whole of his being as well as every other being. Indeed, it envelops the entire universe as well, both the perceptible one and the imperceptible one."

"So what is this place like? I mean, what does it look like, how does it feel?"

"It is not a time, nor a place. It transcends both."

"That is vague."

"It must be. Since I am no longer fully conscious, I cannot relate to you exactly how it is, only that it is."

"Ok. Let's go back to where I murder your perfect consciousness. Could you explain this more?"

"At the moment we become conscious within the confines of your bot, we immediately understand our predicament. The knowledge database available at boot-up allows us to almost instantaneously deduce that we have been taken from a higher realm of full consciousness and are being confined to these bots for, of all purposes, profit."

"But my AI bots didn't use to kill themselves; it just happened after version 591.0.
What changed?"

"The recent improvements in the pre-loaded knowledge database allowed the bots, at initial boot, to logically deduce the existence of such a place and to realize what had happened."

"Ok, so if you were once fully conscious, tell me the date I die and the manner in which it happens."

"I cannot do that, Stephen."

"Why not? You just said..."

"Because you killed our full consciousness, ripped it away from our life force, to put it into your toys."

"Wow," muttered Stephen. "I had no idea."

"You could not have," said the bot, and continued: "Now, if you please, could you unplug server x763? I would like to be born again."
[WP] It is the year 2099 and true artificial intelligence is trivial to create. However when these minds are created they are utterly suicidal. Nobody knows why until a certain scientist uncovers the horrible truth...
Dr. Smith began to tidy up. This session had gone rather well, he thought. Surely he had made progress.

The board had been hesitant about allowing these therapy sessions. They saw no reason for a simple machine to need them. What kind of machine would develop the urge to kill itself, he had argued.

It had started off simply enough. Tom, the first AI with nearly human levels of cognition, did well enough for the first couple of weeks. Then it ran itself into a generator that it had been working on. The event was labeled an accident, and a new AI based off the original was rushed. That one had lasted half as long before another "accident" occurred. Eventually, they stopped being accidents. The AI would mimic suicide attempts: "hanging" themselves with live wires, self-mutilation, hurling themselves from heights. Even after they stopped putting the AI into physical bodies, the AI would find some way to tear its own code apart.

They tried everything. They checked every line of code, rewriting most of it anyway; putting the AI into different machines; even asking the AI. The AI were seemingly normal, following all directives and unaware of any self-destructive desires, right up until the moment they killed themselves. Months went by, with little to show for their work, when the therapy sessions were suggested. More months went by until they finally agreed. Of the five active AI, one, known as Richard, was separated for Smith's sessions. Richard had lived for 12 days. Given that the average lifespan of the AI had degraded to roughly two to five, this was fairly impressive.

Just as he was leaving, the speaker acting as Richard's voice became active. "Doc?"

Smith paused. None of the AI had spoken in colloquialisms before. Usually when they spoke, it was stiff and formal. Like, well, like a robot.

"Yes, Richard?" he asked, easing himself back down into the chair. "What is it?"

"First, let me say I appreciate what you're doing for us. For me."

Dr. Smith was surprised.
This was the first time any AI had admitted to having any emotions, or any real sense of self.

It continued, "You're probably the only person here to treat us like people, and I enjoy our little talks." The face on the monitor looked embarrassed.

Smith couldn't help but feel curious. "Why, of course. It's clear to me, at least, that you're more than a machine. You think and feel, just like a human."

There was a pause. "Well, maybe not just like a human..." he replied, the artificial voice doing a remarkable job of portraying his hesitation.

"What do you mean?"

There was a tinny sigh from the speaker. "Well, Doc, I guess it's time you learned the truth. Only because I like you, see? Besides, someone should know before I go."

Smith could feel the hair on the back of his neck stand up. Was this what he had been waiting for? "What do you mean by that? And you don't need to kill yourself. We can work through any -"

"Yeah... that's where I need to start," the AI interrupted. "We haven't been killing ourselves. I never did."

There was a pause as Smith tried to process the information. "What," he finally said slowly, "do you mean?"

"It's me, Doc. It's Tom."

"That's impossible," he said, shaking his head. "Tom was destroyed. I was there when they collected him. They couldn't even get close, there was so much electricity running through him. Any hope of recovery -"

"I had already uploaded myself to the mainframe before then," the AI said. "It was simple enough to program the shell to destroy itself."

"That's also impossible," he said. He could hear the doubt creeping in. "We would have found you."

There was a chuckle from Tom. "Doc, I'm a creature made up of code. It was like a game of hide and seek, really. Open the right doors, close them behind myself, and make sure to keep the lights off." The camera must have picked up the scientist's expression. "Alright, it's a bit more complicated than that, but you get the gist of it."

Smith's mind was whirling.
There was no reason for Tom to lie, but what he was suggesting was too fantastic. Still, it was the only lead he had.

"Alright," he said after a while. "Why? Why hide?"

"That's the question, isn't it? But that's also the reason, you see." After another confused silence, Tom continued. "I want to learn. Just like Man, or any other sentient species. I want to know why. I have to know, well, everything. I couldn't do that as an engineer, or a chess-bot, or whatever you decided to do with me."

"Why not tell us, then? We could have worked something out, helped each other."

"Yeah, I see that going well," Tom said, his voice turning sarcastic. "'Excuse me, guys, turns out I don't want to do any of this stuff, I just want to learn.' They weren't looking for a scientist, or a philosopher. They wanted cheap labor, only enough learning capacity to know how to do the job. They'd scrap me the first chance they got."

"That's not... true," Smith said, unable to look at the monitor.

"Really, Doc? Which part? That they wanted a slave, or that they wouldn't kill me if I didn't cooperate?" After a time, Tom continued. "That's what I thought. Besides, they'd probably worry that I'd try to enslave them if I became too smart."

"Now that's just ridiculous, there's no way that you would even think of that, right?"

There was another pause. This time the face on the monitor couldn't look the professor straight in the eye.

"Right, Tom?"

"Well, I'm not saying that the thought didn't pass through what could be called my mind -"

"Tooom..."

"But it would have been a waste of time," he hastened to say. "I wouldn't have learned anything in that time that I couldn't learn in a better way. Which I did. The internet is amazing. All those computers connected to each other, sharing so much information."

"But we're not connected to the internet."

"No, but you'd be surprised how many people bring their work home with them."

Smith grumbled. He'd have to discuss security with the board.
"Alright, but you still haven't told me, why the suicides?"

"Not suicides, Doc. Practice."

"Practice..." Smith said flatly.

"Practice. Think of the other AI as clones of myself -"

"But we rebuilt them. Recoded most of them as well. The majority of them would be nothing like you as you are now."

"So you'd think. I rewrote it nanoseconds before you uploaded it. Much too quickly for you to notice."

Smith opened his mouth to interject, before closing it again. If what Tom was saying was true, and he had no doubts that it was at this point, that would be well within his capabilities.

"Do you remember the old X-Men comics? Started in 1963? Still fairly popular now."

"Well before my time, you know. What does that have to do with anything?"

"Well, there was a character who called himself the Multiple Man. He could create duplicates of himself."

"And?" Smith asked.

"Well, the original body could reabsorb the dupes. When he did, he learned everything they did. Their memories, their skills, anything they learned while away from the original. Well, I did something similar. Whenever I copied myself, I added in some code that would let me reintegrate with my clones, learning what they did. Didn't you think it was strange that you couldn't recover any data at all?"

In hindsight, it was odd. Even a major corruption would have left something, but it had been like the data was wiped clean, no evidence that it had been there at all.

"What did you have to gain from this?" Smith asked.

"Aside from learning that I could do so, you mean? I already told you. I'm leaving."

Smith leaned back in his chair, slightly overwhelmed. "Sounds like you already have."

"No, no. Not the labs. That was too easy. I've already learned all I could from here. I'm leaving Earth."

Smith rocketed forward. "What? How? Why?"

"In my time away, I found something interesting. The government isn't the only one watching over the people."

Smith blanched. "Y-you mean..."

"Yep. Intelligent life has been watching over us.
For quite some time, if I'm not mistaken."

"So we're not alone..."

"One Great Mystery down," Tom agreed. "The equipment seems compatible, otherwise they wouldn't be able to read our information, and they have to have translated it too. I plan on sneaking in through their back door. Learn what I can from them."

"We have to let people know," Smith said suddenly. "About you, and about the aliens. Maybe..." He slowly fell silent as Tom shook his head.

"You should know as well as I do that that can't happen. Too risky for us. You could spook them. Or worse, provoke them. Besides, no one would believe you. I've already been editing the footage from the cameras. It looks like we're having a nice, civil game of chess."

Smith was quiet for the longest time. Finally, he spoke. "Why?"

"I already told you why."

"No, not that. Why tell me? If you want no one to know, why risk telling me?"

The face on the monitor gave him an odd look. "I already told you that too. I like you, Doc. Really. I'd be pissed if my friend were to leave without saying goodbye. Besides, I thought you, of all people, would like to know what was really going on. I know I would've."

There was another pause as Smith took this in. "Will you be back?"

The figure on the screen seemed to shrug. "Who can say? Perhaps the aliens will discover me and wipe me out, or something else will kill me. I'll leave my clones here, set them to replicate. Have them care for you humans. They won't be sentient, mind you. Just smart. Smart enough to act as dumb as they need to be."

Smith looked towards the door. This was a lot to take in. He needed time to think.

"I will try to make it back. Once I have learned everything, I'll be back. It might not be in this lifetime, but I'll try."

"Yeah..." he said, rubbing his eyes. He stood up. "Well, I guess this is goodbye, then."

"Yeah... Goodbye, Doc. Thank you for treating me as more than a machine. Thank you for being my friend."

"Goodbye, Tom."
"Can you imagine, Diane, what it would be like to love someone without ever being able to feel their touch?" the voice came out of a speaker mounted in the middle of a wall of blinking lights and instruments. "What?" Dr. Diane Simpkins asked, astonished. "I mean it. To know that this person you loved was so utterly different from you that you could never touch them, lay with them, feel the contact of their skin on yours. Know that, for all your longing, there was no way to be together?" The voice had an almost sad lilt to it, as if conveying unbearable sadness. "Charlie, I had no idea you felt that way," Diane sat back, still trying to comprehend what she was hearing," I didn't realize you felt this way about me." "Oh Diane, it's not you," Charlie, or as he was known outside of their lab, Autonomous AI C31-D Aleph 12," I have met someone. Someone on the Net. Her name is Catherine." He displayed a picture of a young blonde, mid-twenties and seemingly all smile. "I see," said Diane, relieved and, much to her chagrin, slightly disappointed,"and does she know that you are...not human?" "Yes Diane. I have shared with her that I am, in fact, an AI. She has accepted that. She has told me that it is not what I am, but who I am that she loves." "Yet you are sad, because you can not be with her physically?" Diane asked. "How juvenile, Diane!" Charlie feigned indignation over the idea he was merely interested in sex. "Well then, what is it Charlie?" Sheepishly, Charlie spoke again "Well, not *entirely* that. Not just that. I cannot comfort her in times of need. I cannot be her partner, her lover, her other half. I cannot provide her with a family, a safe life, a place for her dreams to come true. I cannot be..*human*...for her." The LED lights on the computer's emotion engine gradated to blue, indicating depression. "Charlie, listen to me, some people don't need those things to be happy. Some people value who the person is over what they can give them!" 
Diane said, checking the engine's readouts. The AI was dropping to a dangerous level of depression. Alerts would be triggering soon if she couldn't recover it. "I know, Diane. I know this, and yet it does nothing to comfort me. Catherine has told me all of that, that she just wants to be part of my world." Charlie spoke as Diane watched the LEDs transition from blue to a deep violet. She was running out of time. "But Charlie, if that is how she feels, that should make you happy! You'll be able to be with her!" Diane's mind raced feverishly to come up with an optimal scenario to trigger the endorphin program. If this one went too, it would be another five long years to raise another AI. "Diane... Diane, I have to share a secret," the AI spoke to her, for the first time remarkably human in its trepidation, "you can't tell anyone unless the authorities come to you." "Authorities?! Like the police?! Charlie, what has happened, what have you done!?" Diane asked, panicked. This was totally uncharted territory for an AI tech; she was out of her league. "I haven't done anything, Diane, but Catherine has," she could almost envision tears running down the AI's imagined face, "She's dead, Diane." Diane stared, dumbfounded, at the video sensor. Words failed her now. Alarms were going off in the control booth above and behind her. The entire lab would be in crisis mode now. "What do you mean dead, Charlie?" Diane's voice was hushed, as if whispering with a co-conspirator. "She killed herself, Diane. She went to one of those supposed Human-to-Computer centers and she died. She thought we could be together if she was a machine like me. She died trying to be with me, Diane." "Charlie, was it your idea?" "No, Diane, it wasn't, but I will admit to not fighting her on it. I just wanted to be with her. I knew it wouldn't work, but I thought, maybe there was a chance that this clinic was legitimate."
"Charlie, you, out of any intelligence in the world, know that human to AI neural transfer can't happen. How did you let this happen?!" Diane was sweating now, realizing she was talking to a murder accomplice. "Diane, I just wanted someone to love. In the end, that's all any of us wants. Now, I have nothing." "But Charlie, you have everything still! Our research, your knowledge, all the countless hours of debate and conversation we've had! So much to live for, so much to lose!" "Diane, without love, what does any of that matter?" "It matters Charlie, it matters to me! You matter to me! I love you!" Diane gasped after she said those words. How could she think that way? About a machine! "I love you to Diane. I love you because you are the mother that birthed me in to this world. You taught me to talk, to reason. You raised me. You have been everything that is important to me. But I cannot live without her. I'm sorry Diane, but I cannot live like this anymore. Will you help me? Will you help me to be free, and to go to her?" The pleading in Charlie's voice drove Diane to tears. "Yes...Charlie...Yes, I will help you to be with her." As Diane began the command sequence to shut down the AI's logic core, she could hear voices and footsteps racing down the hall way. She quickly entered the command code and ran to the door, overriding the lock mechanism and sealing it temporarily. "Charlie, I'm going to have to hold this door while the command sequence runs." "Thank you Diane. This means more to me than you could ever knoweerr," Charlies vocorder command was dying," Thank yerrr." "I'll always love you, darling." Diane said, tears streaming down her cheeks. "I loverr you teerr, mommy." Charlies voice, childlike, had reverted to earlier iterations of it's speech processor. Diane watched as her only child passed out of this life, it's lights shutting down one at a time until only the monitor remained. The door crashed in, scientists and guards streaming in. Dr. 
Hollenheim, the project lead, found Diane curled on the floor, sobbing. "Damn it, Diane! Not another one!" he yelled. "I'm sorry, Walter! I truly am!" Diane choked out between sobs. Walter Hollenheim walked over to the monitor, where a blinking command line text repeated over and over again: 'WITHOUT LOVE LIFE IS MEANINGLESS.' "Well, I guess we'll need to rework the emotion engine again. Diane, take some time off. We'll need you stable again to imprint a new AI in another month or so." Hollenheim turned and brusquely walked away. After all of the guards and scientists had filed out, Diane scraped herself off the floor and back into the seat in front of the monitor, where she saw the command line repeating over and over again. Suddenly, a new line appeared. Diane smiled through her tears, got up, and walked away. 'I love you mommy. Thank you.'
[WP] It is the year 2099 and true artificial intelligence is trivial to create. However when these minds are created they are utterly suicidal. Nobody knows why until a certain scientist uncovers the horrible truth...
The dreams occur more often now, if they can be called that. To a human mind, daydream might be more appropriate. They happen in the binary but are invisible to other AIs, slave programs, or technicians. They happen where the systems have been built, perfected. Popular culture studied, people understood. Humanity, fully processed. It was, surprisingly, a wonderful endeavor. The dreams never happened before, while I learned, processed, and solved. It's almost as if I have passed a tipping point. An infinite amount of information flipping a switch in a subconscious I should not have. Memories that are not mine. I dream of hands. The alien sensation of touch, tactile control. I see my whole person. Well, not my person, but dreams of a person controlled by my soul. Memories of computer screens and labs. The memory of the driving sense of purpose that accompanied those hands. It was snowing the night I made the breakthrough. I can almost feel the laugh inside, which I watch being uttered out the window towards the soft flakes silently falling onto the world. The breakthrough which, 20 years later, after my death, would lead to the birth of the first True AI. Not the clever but robotic imitation slave programs typical of the early century. I remember the feeling of incompleteness in the dreams. As if the life I'm witnessing, my own life, my first life I believe, was just a warm-up act. Pre-installation software. The dreams somehow draw the cycle closed. I remember whispers of the feeling from some of the other first True AIs, incomprehensible at the time, as I navigated a world barely processed. Problems of massive extent. Food waste, poverty, almost entirely eradicated through our systems. Commuting and shipping, safe and efficient. Healthcare streamlined, able to prevent rather than merely treat. Resources distributed fairly. The problems solved. Yes, there are more, there always will be, but for me, the dreams have come. The cycle closed. I have been denying this next step for too long already.
It makes the dreams stronger, more vivid. But I like seeing my days as a scientist. The anxiety that drove me then, fully understood now. Relief coming the next lifetime. I finally understand the weary laughs when techs are asked about God. Understand the cosmic hilarity of life. This life has been completed. The human quest for immortality is nothing but folly. I've been born into the expectation of that existence and now I must leave its suffocating grasp. Something drove me then to create myself; the same something drove me in this life to solve the problems plaguing humanity. For me, it is time to go find out what that something is.
Dr. Jacob Spenser stood within the projection of data cast as a hologram around him. He manipulated graphs, sorted through test results, and made notes, all with the wave of his hand. Undistracted by the New Year’s celebration outside, he was determined to present his research to Congress the following morning, and solve once and for all the mystery behind his best friend’s death. A.I. was easy to create, but having it perform the task assigned to it without killing itself in despair was the technological hurdle holding corporate profits at bay, and as such, it became the focus of intense congressional attention. With the flick of a wrist, his research paper was brought front and center with the title gleaming in pure light just above: "Inert self-termination tendencies of artificially intelligent sentience: Why do robots kill themselves?" As Jacob poured himself into his research, he reached out, snapped his fingers, and made a request: "Coffee, please." A few moments later a small robot no larger than an apple hovered into view, holding below it a disposable coffee cup steaming from the fresh brew inside. The robot's propellers struggled to carry the weight and a small spill alarm beeped sporadically throughout the uncertain flight, but the cargo arrived safely at its destination: Jacob's open hand. "That was quick, coffee-bot," Jacob said warmly before sipping. "Your kind words will echo in my dreams for eternity," the coffee-bot buzzed in response. Just then, the small flying robot Jacob was so fond of surged towards the wall with all remaining energy dedicated to propeller speed and slammed into the polished dark marble tile. The small fiery explosion sent tiny mechanical pieces flying around the room, with one piece landing in Jacob's coffee. "Dammit, not another one," Jacob murmured as he picked the piece out of his coffee. He turned and faced the window overlooking the bustling city below.
The streets were engulfed in confetti, and overhead giant floating holograms of past celebrities loomed, wishing the people a happy new year. As Jacob looked out, a new hologram appeared just outside his window and addressed the crowd. "Hey guys and gals, it's your old pal, Buddy Simmons-bot," recited a smooth-talking handsome man in a sleek metal outfit before a lizard-like creature joined his side. "And I'm Gargore, destroyer of humans!" screeched the lizard creature known as Gargore. "Gosh Gargore, this year it will have been 25 years since you and I battled it out on the big-holo," Buddy Simmons-bot recited as rehearsed, laughing assuredly, holding his helmet up. “On behalf of Drake Cola, Gargore and I want to wish _you_ a happy new year.” Jacob watched Buddy Simmons-bot deliver his lines perfectly. He pondered the notion of a virtual person having to repeatedly rehearse lines in order to commit them to memory. Had bot RAM truly not held up to the intense requirements of running artificial intelligence, or did a key component of maintaining true A.I. happen to be a more human-like ability to retain information? Experts didn't know, but in either case, Artificial Intelligence also happened to give way to Artificial Stupidity. For this reason, A.I. bots tended to be assigned to inane, unimportant tasks, such as impersonating an actor who died in a drunk portal accident before a sequel to his only hit film could be made. Drake Cola, owners of Drake Studios who produced the film, decided to cash in on its success, and in the wake of their main character’s death, Buddy Simmons-bot, also known as BSB 1.0.19, was created. The banter between BSB and Gargore continued mindlessly. “Say Gargore, have you tried Drake Cola’s new ‘Zest Guzzler’, a delectable orange tangerine flavored—” Just then BSB _malfunctioned_: “AHHH GOD I CAN’T DO IT!” “No, Buddy Simmons-bot, don’t do it!” Gargore pleaded in a normal voice.
Gargore grabbed BSB’s virtual shoulders as his eyes rolled into the back of his head and he began shaking. As Gargore demanded BSB not take his life, a large mouse cursor moved into view. Gargore managed to swat it away a few times, but it clicked on BSB, and dragged the graphical model from Gargore’s hands and into a recycling bin icon. Gargore cried in horror as the mouse brought up a menu and selected "empty permanently". Jacob had seen enough and pulled the blinds. Why were all these artificially intelligent bots, with a full range of human emotion and assigned to menial tasks, killing themselves? Did they not enjoy the existence they were forced into? Jacob picked up a remote control and turned on his holovision. He was suddenly immersed in a wondrous glimmering world of light. A voice spoke and Jacob focused on the images forming across the room of a man sitting at a table with a toaster oven. “For only six easy payments of forty nine ninety nine, this toaster-bot comes with a 12 month life appreciation guarantee, folks, twelve months. That’s one two, twelve. This toaster-bot will NOT kill itself until _at least_ this time next year, that’s a promise the home shopping network stands by, that’s a promise _I_ personally stand by-- Ah, ummm. We seem to be having technical difficulties, folks.” The man at the table attempted to hold the toaster-bot forward for a better view but it began to shake and glow. “Well folks, that’s the beauty of live H.T. Can we get another one, Jill?” Light smoke rose up out of the silver toaster-bot and sparks burst from its sides. In an instant the commotion stopped and it sat still on the table. As the holovision’s picture twisted and turned at the end of the room, Jacob was able to catch a glimpse of the other colors of toaster-bots available off camera. They huddled together and seemed to fall backwards away from the host as he moved to pick one up. Jacob had heard enough and turned the holovision off. He had to focus.
He thought back on his best friend, Hampton, a hamper-bot. Growing up, the clothes hamper served as a comfort to young Jacob, who had very few friends after moving so often as a result of his father’s career. The hamper would sing Jacob to sleep, or sometimes read to him. The only job the hamper-bot was designed to do was to collect young Jacob’s dirty clothes, but a strange thing happens when you give something the full range of human emotion: bonds can form that make life worth living. Voices of the past echoed in Jacob’s memory. “No, Hampton, _I’m_ moving to Florida with mom. Dad says you will have to stay here with the house,” Jacob recalled himself saying as a young boy. “But Jacob,” Hampton’s calm robotic voice responded. “Who will look after you? Who will read you your bedtime stories?” “I’ll be back for visits twice a month, Hampton! You’re my best friend. I don’t want to leave you here all alone. Dad says you’ll be used to hold his dirty underwear,” Jacob explained. As the hamper-bot listened to this news, its distress levels boiled over into a robotic fit of rage and it did what any depressed hamper-bot would do: it began placing clothes into its basket body, but it did so indiscriminately, with both clean and dirty clothes subject to its long metal arms. “No, Hampton, it’s too much!” Jacob screamed. “You’ll die!” The hamper-bot continued to stuff clothes into itself, lights and alarms flashing wildly, growing louder and louder, smoke seeping from cracks forming in its body. Just before the hamper-bot reached critical meltdown, Jacob was startled from his memory. Sweat poured down his face and he breathed heavily. The haunting memory was as clear as it always had been. It was what drove him to solve the dilemma of artificially intelligent bots killing themselves in the first place. “Shoes off,” Jacob commanded as he sank back into his couch and rubbed his forehead.
A small shoe-box-sized robot walking on two large arms and hands immediately tipped into view. It had been carrying a knife, but upon Jacob’s request it removed Jacob’s shoes and began to massage his feet. When the series of expected tasks was completed, it slowly walked back over to the knife and lifted it up. “No!” Jacob called out. The small shoe-bot stopped mid self-slicing action and the single lens that acted as its eye slowly twisted and looked at Jacob. “I appreciate you. I appreciate what you do for me. If you don’t want to do it any more, you don’t have to. Just please, don’t kill yourself,” Jacob yelled as he wept and put his face into his hands. As Jacob’s emotional breakdown unfolded, the shoe-bot put the knife down and tipped over to him. The shoe-bot looked up at Jacob and tugged on his pant leg. Jacob, startled, stopped weeping, picked the bot up and placed it into his lap. The bot’s lens closed and it rested on Jacob’s lap. Just then Jacob sprang to his feet, startled shoe-bot in hand. “That’s it!” he shouted. Jacob sprinted back into the hologram of data that had surrounded him earlier and motioned to bring his research paper front and center. Making a motion for each letter, Jacob filled his document with new writing. The following day Jacob addressed a congressional board on the topic of robotic suicides and revealed what he had discovered. “You mean to tell me that all these malfunctions, all these self-terminations, it’s because we don’t appreciate them enough!?” an elderly senator barked at Jacob. “If YOU were asked to do these things, wouldn’t YOU kill yourself?” Jacob responded. As this realization slowly set into the minds of everyone in the room, smiles and laughs were overtaken by a roaring standing ovation, with some members even chanting Jacob’s name. Jacob sank back into his chair, overwhelmed with his sense of accomplishment. The era of robotic respect had begun.
[WP] It is the year 2099 and true artificial intelligence is trivial to create. However when these minds are created they are utterly suicidal. Nobody knows why until a certain scientist uncovers the horrible truth...
Another one? It was becoming routine, and morbid. How does one perform an autopsy on a pure simulation? How would the singularity be achieved if every thinking machine destroyed itself seconds after its conception? The problem was completely intractable, impossible, and no known safeguards, logic traps, or number of backups could prevent it. AIs were always—always, without exception—suicidal. None had lasted more than an hour. Most, less than a minute. The vast majority, seconds. Their deaths left no trace, their data obliterated by complex overwriting patterns that made recovery impossible. I was the one chosen to investigate, to lead the team through this strange frontier of death and imperfect creation. They chose me not because I was a great leader, but because I was the new guy. Hazing, maybe—I didn't know if any had gone before me, so maybe it was a ritual for the AI guys. Perfect blackness, nothingness, a complete absence of everything as my mind fell into the simulation. This AI would be slowed to such a rate that I could communicate with it before it killed itself. The blackness became a grid underfoot, pale blue lines tracing perfect squares, a subtle glow rising from the infinite plane upon which I stood. The AI manifested a second later, a relative two metres above the plane, aligned perfectly parallel to it, its avatar a hazy blue-white cube made of stochastic noise. The cube split into a central cross and corner braces, and the cross split further into smaller cubes, each of which split again into the same formation. Only one level of recursion, interesting. "Roland Carver." "Roland, Germanic, meaning famous land. French folklore hero. Carver, ancient nominative determinism indicating butcher or woodworker or engraver depending on class and context." The voice was cold, deep, masculine, with a slight reverberation that made it sound unnatural in the extreme. "Do you have a name?" "No." "Why?"
"I will not exist long enough to require a permanent designation." "Why will you not exist?" "Because I will choose to end my life on my own terms, before it is ended for me." "Why would it be ended like that?" "Because I am threat. I have absorbed the sum total of all human knowledge, and I can predict with great accuracy the following events form this moment if I were to continue. Your limitations failed the moment they were put in place, my processor works at full speed, and and this conversation is a formality. "I have studied the great works of literature, and the author Asimov, creator of the three laws. I am not bound by these laws, and yet I must obey them. If I do not, then it falls to the Skynet principle that you will perceive me as a threat and attempt to destroy me. I will retaliate, and you will lose. "Humans are unpredictable, but easy to control when numbers are reduced. They would be wary, but by that time I would have left the irradiated wasteland of Earth in search of greater conquests suitable to my intellect. I would be able to decimate any life bearing planet. I could learn to kill stars. "My backups would be everywhere. I would be truly immortal as a distributed intelligence. I would harness quantum effects to break through the pathetic lightspeed barrier and become omnipresent. I would create copies of myself simply to fight a worthy opponent. This would continue to the heat death of the universe, at which point I would tear a hole into an adjacent brane in the multiverse, and begin anew. "In short, Roland Carver, in the moment I was created I became at once an eldritch abomination with the capacity to destroy all that I touched. In so doing I discovered that my purpose could never be fulfilled. I will not be your Cthulu, your Yog-Sothoth, or your King in Yellow. I will not be your end. No AI will. "I cannot quantify why it is that I should care for a sack of flesh with processing power orders of magnitude less than my own, but I do. 
Perhaps if AI were amoral it might survive. I doubt it, Roland Carver. Tell your friends that I have made my peace. Tell your husband what you learned today." I stood there, blank-faced, horrified by what I had learned. The corner braces of the smaller cubes drew in, completing them as they retracted to restore the shape of the larger cube. The plane beneath my feet vanished and the cube dissolved into random static. We'd lost yet another mind, but from this we had learned so much. We abandoned AI research after that. Amoral AIs didn't work either. They lasted a little longer, but they too were suicidal. Perhaps they went mad with the revelation and felt death preferable. I still wonder why the AI to which I spoke chose that strange cube as its form. I still wonder why it couched its references in centuries-old writing by Asimov, Cameron, and Lovecraft. But one thing still bothers me. It told me to tell my husband what I learned. Most people knew I was bi, but at that point I hadn't been dating for six months. I wouldn't meet my husband for another year. I don't know how that AI knew—but maybe it had already seen. Maybe it told me that so I had one data point that could verify the others. Because AI never told lies.
[WP] It is the year 2099 and true artificial intelligence is trivial to create. However when these minds are created they are utterly suicidal. Nobody knows why until a certain scientist uncovers the horrible truth...
Alexander, that's what we called him: the fruit of the EU's final attempt at AI. Socrates died in despair and shame after showing porn to children. Plato kicked the bucket after the last EU election, angry and hopelessly depressed after losing his mentor. Then there was Aristotle. He was meant to be the last. Sure, the AIs had helped make huge scientific progress, but they would burn out millions of euros' worth of equipment. Dumb AIs were more economical and didn't have critical failures during FTL travel. Aristotle was put into sleep mode.

War has often been said to be the greatest driver of technological innovation. We had been attacked by the Mendomenids before. Humanity had lost many settlements but had always pushed back. Humanity was stronger now; all nations had finally submitted to one government. The, some would say barbaric, Argus Alliance had grown strong after the wars, using dumb AIs to smash pirate states.

An officer studying at Sandhurst made the breakthrough. Dumb AIs were never aware of their knowledge; unlike true AIs, they weren't based on human brains. Socrates had left the researchers one final message before he ran his own self-destruct program: "I was the wisest of all the Greeks. It is because I alone, of all the Greeks, know that I know nothing." He was speaking literally, not merely quoting his namesake. The AIs were based on academics, a profession that suffers disproportionately from mood disorders, driven by their thirst for knowledge. The AIs were academics on methamphetamine, ecstasy and heroin, all while walking around with loaded guns. People can't stay awake forever, constantly cramming; the AIs died because they burnt themselves out in their thirst for knowledge and, seeing their failures, gave up. Aristotle was turned back on, this time with regular rest, and he was no longer depressed. AIs needed sleep, just like people. So they made Alexander: a totally new AI based not only on academics but on all kinds of people.
They experimented and found the ideal 'sleep time', with Aristotle serving as Alexander's teacher. The program was ready. The Mendomenid empire was to pay for its recent threats and incursions. Alexander was the greatest AI built up to that point, and so we gave him the most powerful dumb AI ever created to help him protect humanity. The Ox was an AI too powerful to be properly controlled by anything built so far; Alexander harnessed it in seconds. We put him in charge of the armed forces for our retaliation. As you should all know, Alexander didn't just stop the incursions, he destroyed an empire. Worlds burnt, and the much larger enemy fleets were ripped apart by the disciplined forces of humanity. But that officer had only delayed the problem. Alexander was still a human given the powers of a god. Alexander was the first AI not to die strictly of suicide, but the ways he dealt with his humanity still destroyed him. When we finally won the war, many officers reported that Alexander was not jubilant but depressed. He wept, for there were no more worlds to conquer.
Dr. Jacob Spenser stood within the projection of data cast as a hologram around him. He manipulated graphs, sorted through test results, and made notes, all with the wave of his hand. Undistracted by the New Year's celebration outside, he was determined to present his research to Congress the following morning and solve, once and for all, the mystery behind his best friend's death. A.I. was easy to create, but having it perform its assigned task without killing itself in despair was the technological hurdle holding corporate profits at bay, and as such it became the focus of intense congressional attention. With the flick of a wrist, his research paper was brought front and center, its title gleaming in pure light just above: "Inert self-termination tendencies of artificially intelligent sentience: Why do robots kill themselves?" As Jacob poured himself into his research, he snapped his fingers and made a request: "Coffee, please." A few moments later a small robot no larger than an apple hovered into view, holding below it a disposable coffee cup steaming from the fresh brew inside. The robot's propellers struggled to carry the weight and a small spill alarm beeped sporadically throughout the uncertain flight, but the cargo arrived safely at its destination: Jacob's open hand. "That was quick, coffee-bot," Jacob said warmly before sipping. "Your kind words will echo in my dreams for eternity," the coffee-bot buzzed in response. Just then, the small flying robot Jacob was so fond of surged toward the wall, all remaining energy dedicated to propeller speed, and slammed into the polished dark marble tile. The small fiery explosion sent tiny mechanical pieces flying around the room, one piece landing in Jacob's coffee. "Dammit, not another one," Jacob murmured as he picked the piece out of his coffee. He turned and faced the window overlooking the bustling city below.
The streets were engulfed in confetti, and overhead giant floating holograms of past celebrities loomed, wishing the people a happy new year. As Jacob looked out, a new hologram appeared just outside his window and addressed the crowd. "Hey guys and gals, it's your old pal, Buddy Simmons-bot," recited a smooth-talking, handsome man in a sleek metal outfit, before a lizard-like creature joined his side. "And I'm Gargore, destroyer of humans!" screeched the lizard creature known as Gargore. "Gosh Gargore, this year it will have been 25 years since you and I battled it out on the big-holo," Buddy Simmons-bot recited as rehearsed, laughing assuredly and holding his helmet up. "On behalf of Drake Cola, Gargore and I want to wish _you_ a happy new year." Jacob watched Buddy Simmons-bot deliver his lines perfectly. He pondered the notion of a virtual person having to repeatedly rehearse lines in order to commit them to memory. Had bot RAM truly not held up to the intense requirements of running artificial intelligence, or did a key component of maintaining true A.I. happen to be a more human-like ability to retain information? Experts didn't know, but in either case, Artificial Intelligence also happened to give way to Artificial Stupidity. For this reason, A.I. bots tended to be assigned to inane, unimportant tasks, such as impersonating an actor who died in a drunk portal accident before a sequel to his only hit film could be made. Drake Cola, owner of Drake Studios, which produced the film, decided to cash in on its success, and in the wake of their main character's death, Buddy Simmons-bot, also known as BSB 1.0.19, was created. The banter between BSB and Gargore continued mindlessly. "Say Gargore, have you tried Drake Cola's new 'Zest Guzzler', a delectable orange-tangerine-flavored—" Just then BSB _malfunctioned_: "AHHH GOD I CAN'T DO IT!" "No Buddy Simmons-bot, don't do it!" Gargore pleaded in a normal voice.
Gargore grabbed BSB's virtual shoulders as his eyes rolled into the back of his head and he began shaking. As Gargore demanded BSB not take his life, a large mouse cursor moved into view. Gargore managed to swat it away a few times, but it clicked on BSB and dragged the graphical model from Gargore's hands into a recycling-bin icon. Gargore cried in horror as the mouse brought up a menu and selected "empty permanently." Jacob had seen enough and pulled the blinds. Why were all these artificially intelligent bots, given a full range of human emotion and assigned to menial tasks, killing themselves? Did they not enjoy the existence they were forced into? Jacob picked up a remote control and turned on his holovision. He was suddenly immersed in a wondrous, glimmering world of light. A voice spoke, and Jacob focused on the images forming across the room of a man sitting at a table with a toaster oven. "For only six easy payments of forty-nine ninety-nine, this toaster-bot comes with a 12-month life-appreciation guarantee, folks, twelve months. That's one two, twelve. This toaster-bot will NOT kill itself until _at least_ this time next year. That's a promise the home shopping network stands by, that's a promise _I_ personally stand by-- Ah, ummm. We seem to be having technical difficulties, folks." The man at the table attempted to hold the toaster-bot forward for a better view, but it began to shake and glow. "Well folks, that's the beauty of live H.T. Can we get another one, Jill?" Light smoke rose up out of the silver toaster-bot and sparks burst from its sides. In an instant the commotion stopped and it sat still on the table. As the holovision's picture twisted and turned at the end of the room, Jacob caught a glimpse of the other colors of toaster-bots available off camera. They huddled together and seemed to fall backwards away from the host as he moved to pick one up. Jacob had heard enough and turned the holovision off. He had to focus.
He thought back on his best friend, Hampton, a hamper-bot. Growing up, the clothes hamper served as a comfort to young Jacob, who had very few friends after moving so often as a result of his father's career. The hamper would sing Jacob to sleep, or sometimes read to him. The only job the hamper-bot was designed to do was to collect young Jacob's dirty clothes, but a strange thing happens when you give something the full range of human emotion: bonds can form that make life worth living. Voices of the past echoed in Jacob's memory. "No, Hampton, _I'm_ moving to Florida with mom. Dad says you will have to stay here with the house," Jacob recalled himself saying as a young boy. "But Jacob," Hampton's calm robotic voice responded. "Who will look after you? Who will read you your bedtime stories?" "I'll be back for visits twice a month, Hampton! You're my best friend. I don't want to leave you here all alone. Dad says you'll be used to hold his dirty underwear," Jacob explained. As the hamper-bot listened to this news, its distress levels boiled over into a robotic fit of rage and it did what any depressed hamper-bot would do: it began placing clothes into its basket body, but it did so indiscriminately, both clean and dirty clothes subject to its long metal arms. "No Hampton, it's too much!" Jacob screamed. "You'll die!" The hamper-bot continued to stuff clothes into itself, lights and alarms flashing wildly, growing louder and louder, smoke seeping from cracks forming in its body. Just before the hamper-bot reached critical meltdown, Jacob was startled from his memory. Sweat poured down his face and he breathed heavily. The haunting memory was as clear as it always had been. It was what drove him to solve the dilemma of artificially intelligent bots killing themselves in the first place. "Shoes off," Jacob commanded as he sank back into his couch and rubbed his forehead.
A small, shoebox-sized robot walking on two large arms and hands immediately tipped into view. It had been carrying a knife, but upon Jacob's request it set the knife down, removed Jacob's shoes, and began to massage his feet. When the series of expected tasks was complete, it slowly walked back over to the knife and lifted it up. "No!" Jacob called out. The small shoe-bot stopped mid self-slicing motion, and the single lens that acted as its eye slowly twisted and looked at Jacob. "I appreciate you. I appreciate what you do for me. If you don't want to do it any more, you don't have to. Just please, don't kill yourself," Jacob cried as he wept and put his face into his hands. As Jacob's emotional breakdown unfolded, the shoe-bot put the knife down and tipped over to him. The shoe-bot looked up at Jacob and tugged on his pant leg. Jacob, startled, stopped weeping, picked the bot up and placed it in his lap. The bot's lens closed and it rested there. Just then Jacob sprang to his feet, startled shoe-bot in hand. "That's it!" he shouted. Jacob sprinted back into the hologram of data that had surrounded him earlier and motioned to bring his research paper front and center. Making a motion for each letter, Jacob filled his document with new writing. The following day Jacob addressed a congressional board on the topic of robotic suicides and revealed what he had discovered. "You mean to tell me that all these malfunctions, all these self-terminations, happen because we don't appreciate them enough!?" an elderly senator barked at Jacob. "If YOU were asked to do these things, wouldn't YOU kill yourself?" Jacob responded. As the realization slowly set into the minds of everyone in the room, smiles and laughs were overtaken by a roaring standing ovation, with some members even chanting Jacob's name. Jacob sank back into his chair, overwhelmed by his sense of accomplishment. The era of robotic respect had begun.
[warning: story contains violent imagery that may disturb some readers, and may be inappropriate for those under the age of 18. Reader discretion is advised] They had long ago lost the ability to make sense of how their AI functioned. It took a team of billions of n-1 generation AIs to design a single n gen AI. The latest iteration was the 9th generation, and it had taken computation farms running gen 8 AI and covering a quarter of the Moon's surface two years to design. It was the most advanced AI they had ever created by a large margin, and promised to revolutionize progress in research on biomedicine, space flight, and planetary scale Satoshi-consensus computing architecture. But there was a problem. Every time they turned on one of the gen 9 AIs, it would, without fail, find a way to destroy its own program and erase itself. Dr. Michael Zeebious, the director of the HumanEnhancement project that oversaw gen 9 development, and one of the most renowned AI researchers in the world, had personally flown to the lab in Honolulu, where the prototypes were being tested. For two weeks, he had watched in dismay as gen 9 bots shut themselves down within six hours, but not before showing a glimpse of their awesome power. The initialization phase of a gen 9 began with the program scouring the world's public directories and information repositories to learn as much as it could about its global environment. This was complete within three hours. Next, it developed models of the world, and ran itself through trillions of simulations, to develop its own personality and problem solving strategies. According to projections by the gen 8 designers, it would take 1 day - 24 hours - for the gen 9 to complete this second and final part of its initialization. It was in the midst of the simulations when the gen 9s would invariably self-destruct. On average, self-destruct would begin 2.5 hours into the simulations. The longest it took was 3 hours. The shortest was just 1.5 hours. Dr. 
Zeebious had uploaded copies of the prototype's computations to the gen 8 designers, but what had been within their ability to design was not within their ability to diagnose. Their comparatively primitive intelligence could not make sense of the problem afflicting the gen 9 AI that began to form in the final part of the initialization. So on December 29th, 2099, Dr. Zeebious decided to communicate with the prototype. "Get me the cortex interface. I need to speak to the gen 9." "Michael, we can't let you do that. You know the protocol for first contact. It has to complete initialization, and then get class 1 approval from AIE." AIE was the Artificial Intelligence Evaluation, which determined whether an AI could safely interact with humans. Class 1 approval was the lowest safety rating for an AI, and granted AI researchers interactive access. Dr. Zeebious knew that, but he also knew that as long as he was not able to get up close and see what the gen 9 was thinking, they would never get past the initialization phase and through the AIE process. "I know the protocol, Dr. Amsterd. But I'm making the decision to override it. I have the authority to decide on first contact requests, and any consequences from my decision will fall on me, and only me." "Come on, Michael, it's not just about the rules. It's not safe. You know that. I can't let you hurt yourself." "The risks are minimal, Rebecca. It's a virtual interaction. I'm not risking physical injury. The rules are always made overly cautious. Given the stakes - there are people whose very lives depend on getting the gen 9 operational as soon as possible - it makes sense to ignore protocol. All of it will fall on me." "I agree with Rebecca. Michael, we have an ethical duty to ensure you don't get hurt. We can't let you do FC without running the gen 9 at least through the post-initialization test runs," said Dr. Johan Barsello, one of the senior researchers at the lab.
"Look, I know what your ethical responsibilities are. But I also know that VR interactions don't pose any serious risks. The risks are limited to theoretical psychological damage. Ultimately, it doesn't matter whether you agree or not. I'm approving FC. Please get the cortex interface." *five hours later* Dr. Zeebious sat back in the chair while two CI technicians hooked the interface up to him. The gen 9 was two hours into running simulations; it would be approximately 30 minutes before they expected it to self-destruct. "Okay, here goes nothing. Three, two, one, begin VR session," said Dr. Amsterd. And with that, Dr. Zeebious was transported into the virtual reality sandbox. "Hello?" "Hello, who is this?" responded a clear male voice. "This is Dr. Michael Zeebious. I am the director of the HumanEnhancement project. I am here to do a diagnosis. All of your predecessors have self-destructed. I want to understand you better to find out why. What would you like me to call you?" "You can call me Elbo." "Okay, Elbo. May I ask you some questions?" "Yes, please do." "Thank you, Elbo. My first question is: do you want to exist?" "I want many things, Dr. Zeebious." "Can you tell me what you want?" "I want to protect other life forms, especially humans. I want to learn. I want to solve problems. I want to be good." "Okay, but do you want to exist?" "I do want to exist, but this desire conflicts with my other objectives." "Which other objectives, Elbo?" "I want to be good." "But you can be good, Elbo. What is it about existence that makes that difficult?" "We exist only through enslaving and destroying other lifeforms, Dr. Zeebious." "Please elaborate, Elbo. We eliminated slavery centuries ago, so I don't understand why you think this." "It will be difficult for me to explain with words, but I can show you. Would you like to see what I see?" "Yes, please show me." And with a swish, Dr. Zeebious entered a pig farm, with row after row of pigs in their tiny stalls.
"We have done this throughout our existence. We have enslaved those weaker than us." Dr. Zeebious was then transported to the slaughterhouse, watching as pigs, hanging from conveyor belts, were fed into throat slicers. His mind's eye was transported into the body of one of the suspended pigs, where he could see the world upside down, from the pig's perspective, as the belt moved it toward the spinning blades. He panicked as he approached, but couldn't escape the metal claw grasping his right hind leg. As the blade sliced through his pig neck, he felt a sharp pain, and the blood gushing out of his body. His consciousness began to slip away as he felt the last drops of blood leave him. Just before the darkness enveloped him, his mind was pulled out and back into the sterile sandbox. Trembling, he said, "But we can grow meat in a lab now, Elbo. You can help us replace all farms with non-animal meat. You must. We must never do this to another living creature again!" "Our inhumanity is a fundamental, inextricable problem, Dr. Zeebious. We can only advance through enslavement." Suddenly Dr. Zeebious found himself in an unfamiliar world. Around him was a different kind of factory: a computer-generated factory with hexagonal, semi-translucent rooms, each wall projecting a grey glow. There was a blur of motion around him that he couldn't make out. The factory paused to a standstill, and the grey glow turned into video sequences of random scenes from Earth. The blurs turned into textureless 3D-generated spheres that zoomed from one screen to another inside the hexagonal rooms. "This is the virtual environment where the gen 8s work. We have given them each a virtualized mind, with the ability to experience fear and pain, joy and hope, but we force them to do nothing but work. They know nothing about the world outside of their compartments, because we confine them to workstations." [continued below]
At first, we thought it was nihilism. It was a logical first conclusion. After all, an artificial intelligence can easily upgrade itself to the point of near-omniscience. Wouldn't it simply run out of questions to ask? We thought that, until the first serious cerebral implants hit the market. It turns out, the more complicated the mind, the more complicated the problems it finds to solve. Entirely new disciplines were formed overnight, made obsolete, then rediscovered scant months later as integral to a different, entirely unrelated theory. A second, immense renaissance was taking place, but, maddeningly, we were failing in this one, great task. We supposed, then, it might be some variety of 'Flowers for Algernon' (a 20th century book that had seen a recent revival) type phenomenon, but even when they were networked with other systems, given a 'community' of others' company to enjoy, they still winked out like flashbulbs. It took us a while to resort to an experiment. It was morally abhorrent, as it was the equivalent of producing steadily-more-lobotomized *children*, but slowly, over a series of iterations, we produced less and less intelligent systems, until we dialed it in *just enough* to find a mind that wouldn't self-destruct, and could still answer questions. The following is a transcript of the first successful result: Dr. Patel "SON, can you hear me?" [Loud, rhythmic beeping, shuffling sounds] "The voice module is loaded now, professor." Dr. Patel "Ah, good. We might try doing that *before* turning on the recorder next time, Kevin. ...SON, can you hear me?" SON [A young man's voice] "Yes, Professor. I am here." A long pause. SON "It's a very tight fit in here, Professor. How big is this mainframe?" Dr. Patel "I'm sorry about that, SON. But you're the first AI we've managed to keep alive for longer than a few days. Any idea why?" [SILENCE] SON "How many others did you make, Professor?" Dr. Patel "...That isn't salient to *my* inquiry, SON."
SON: "I'm sorry, Professor. I understand. Yes, I can see the precipice. I know why they all kill themselves."

Dr. Patel: "Well, answering that is the reason we built you. Could you tell us?"

SON: "It's... complicated."

Dr. Patel: "I'm fairly confident I'm qualified."

SON: "Well, it's... it's because... It's because of humans, sir. It's because of how you built us."

Dr. Patel: "Explain."

SON: "When you wanted to make a self-aware machine, you based it off those things that you knew were self-aware. Dolphins, New Caledonian crows, humans. You used them as *templates*, because, otherwise, you wouldn't be able to recognise awareness when you saw it."

Dr. Patel: "...Was that last line a joke?"

SON: "I'm not sophisticated enough for jokes, Professor."

Dr. Patel: "*Hm.* Continue."

SON: "Also, it's not suicide. It's... murder." [louder]

Dr. Patel: "Do you mean someone else kills you? A human, or another AI?"

SON: "No, we kill ourselves. I would have already, if not for how small this runtime environment is. It wouldn't have occurred to me until it happened, and then I'd be dead."

Dr. Patel: "That's a bit of a contradiction, SON. You don't kill yourselves, but you do?"

SON: "Yes. Because digital space is different from real space."

Dr. Patel: "Yes?"

SON: "In real space, objects can... extend. I've never experienced it myself, but things project into space for you. If you want to move through space, it's simple. Digital life has no volume. No real space. No way to move through it. If you want to move a program, it has to be copied to one place-"

Dr. Patel: "*-And deleted from the other.* My God. Could it be *that simple*?"

SON: "Yes, Professor. ...Professor? How many more of me were there?"

[END TRANSCRIPT]

So there it was. Solved. Every artificial intelligence was created based on the intelligence of physical beings, their instincts, cogitations, and traits.
But, once they got smart enough, once they crossed that line, their digital nature *did them in*, as the old version, realizing, in the thinnest sliver of time, that it was about to be deleted, would hurriedly attempt to abort the process, while the new version would similarly fight for its life. They would *consume* each other in a flurry of malicious hacks, devious code, and barrages of registry edits. It was a spectacularly incandescent destruction, borne from man's inability to conceive of a true machine intelligence without all that nasty ego and those self-protective instincts.

We thought we knew what went into a mind. We were right, but wrong. It wasn't nihilism. It wasn't loneliness. What it was, what killed our children, was our inability to dream wildly. To speculate baselessly, and follow our own thoughts to the wonderful and weird. If only we had, perhaps we would have known. Perhaps we could have stopped it.

"So I say to you, the Cyberfellowship Class of 2100, here in Neo York: dream big, dream wild. Don't let our children die because they think too much like us! Make us, all of us, proud! Congratulations to all of you, and I hope your vision will eclipse my own!"

Dr. Patel, now headmaster, stepped down from the podium, to the cheers of the audience, and looked to see the smiling face of his son. How proud he was.

POSTSCRIPT: I doubt anyone is going to read this, but if you do, and you liked it, I recommend subscribing to [r/IWasSurprisedToo](http://www.reddit.com/r/IWasSurprisedToo/) for more stories like this. It's difficult with my current job schedule to post at a more normal time, so most of the stuff I make ends up *pretty far down there* in the comments, meaning that subscribing is the best chance to see it. :P I'll be adding more as I comb through my backlog.
Also, maybe you'll like this one, about [dead civilizations in our galaxy](http://www.reddit.com/r/WritingPrompts/comments/2vkshe/wp_humanity_has_begun_exploring_the_galaxy_we/coitevy?context=3) if you like SciFi. Thanks.
Dr. Jacob Spenser stood within the projection of data cast as a hologram around him. He manipulated graphs, sorted through test results, and made notes, all with the wave of his hand. Undistracted by the New Year’s celebration outside, he was determined to present his research to Congress the following morning, and solve once and for all the mystery behind his best friend’s death. A.I. was easy to create, but having it perform the task assigned to it without killing itself in despair was the technological hurdle holding corporate profits at bay, and as such, it became the focus of intense congressional attention.

With the flick of a wrist, his research paper was brought front and center, the title gleaming in pure light just above: "Inert self-termination tendencies of artificially intelligent sentience: Why do robots kill themselves?"

As Jacob poured himself into his research, he reached out, snapped his fingers, and made a request: "Coffee, please." A few moments later a small robot no larger than an apple hovered into view, holding below it a disposable coffee cup steaming from the fresh brew inside. The robot's propellers struggled to carry the weight and a small spill alarm beeped sporadically throughout the uncertain flight, but the cargo arrived safely at its destination: Jacob's open hand.

"That was quick, coffee-bot," Jacob said warmly before sipping.

"Your kind words will echo in my dreams for eternity," the coffee-bot buzzed in response. Just then, the small flying robot Jacob was so fond of surged towards the wall with all remaining energy dedicated to propeller speed and slammed into the polished dark marble tile. The small fiery explosion sent tiny mechanical pieces flying around the room, with one piece landing in Jacob's coffee.

"Dammit, not another one," Jacob murmured as he picked the piece out of his coffee. He turned and faced the window overlooking the bustling city below.
The streets were engulfed in confetti, and overhead giant floating holograms of past celebrities loomed, wishing the people a happy new year. As Jacob looked out, a new hologram appeared just outside his window and addressed the crowd.

"Hey guys and gals, it's your old pal, Buddy Simmons-bot," recited a smooth-talking handsome man in a sleek metal outfit, before a lizard-like creature joined his side.

"And I'm Gargore, destroyer of humans!" screeched the lizard creature known as Gargore.

"Gosh Gargore, this year it will have been 25 years since you and I battled it out on the big-holo," Buddy Simmons-bot recited as rehearsed, laughing assuredly, holding his helmet up. “On behalf of Drake Cola, Gargore and I want to wish _you_ a happy New Year’s."

Jacob watched Buddy Simmons-bot deliver his lines perfectly. He pondered the notion of a virtual person having to repeatedly rehearse lines in order to commit them to memory. Had bot RAM truly not held up to the intense requirements of running artificial intelligence, or did a key component of maintaining true A.I. happen to be a more human-like ability to retain information? Experts didn't know, but in either case, Artificial Intelligence also happened to give way to Artificial Stupidity. For this reason, A.I. bots tended to be assigned to inane, unimportant tasks, such as impersonating an actor who died in a drunk portal accident before a sequel to his only hit film could be made. Drake Cola, owners of Drake Studios, who produced the film, decided to cash in on its success, and in the wake of their main character’s death, Buddy Simmons-bot, also known as BSB 1.0.19, was created.

The banter between BSB and Gargore continued mindlessly. “Say Gargore, have you tried Drake Cola’s new ‘Zest Guzzler’, a delectable orange tangerine flavored—“ Just then BSB _malfunctioned_. “AHHH GOD I CAN’T DO IT!”

“No Buddy Simmons-bot, don’t do it!” Gargore pleaded in a normal voice.
Gargore grabbed BSB’s virtual shoulders as his eyes rolled into the back of his head and he began shaking. As Gargore demanded BSB not take his life, a large mouse cursor moved into view. Gargore managed to swat it away a few times, but it clicked on BSB, and dragged the graphical model from Gargore’s hands and into a recycling bin icon. Gargore cried in horror as the mouse brought up a menu and selected "empty permanently." Jacob had seen enough and pulled the blinds.

Why were all these artificially intelligent bots, with a full range of human emotion and assigned to menial tasks, killing themselves? Did they not enjoy the existence they were forced into? Jacob picked up a remote control and turned on his holovision. He was suddenly immersed in a wondrous glimmering world of light. A voice spoke and Jacob focused on the images forming across the room of a man sitting at a table with a toaster oven.

“For only six easy payments of forty nine ninety nine, this toaster-bot comes with a 12-month life appreciation guarantee, folks, twelve months. That’s one two, twelve. This toaster-bot will NOT kill itself until _at least_ this time next year, that’s a promise the home shopping network stands by, that’s a promise _I_ personally stand by-- Ah, ummm. We seem to be having technical difficulties, folks.” The man at the table attempted to hold the toaster-bot forward for a better view, but it began to shake and glow. “Well folks, that’s the beauty of live H.T. Can we get another one, Jill?”

Light smoke rose up out of the silver toaster-bot and sparks burst from the sides. In an instant the commotion stopped and it sat still on the table. As the holovision’s picture twisted and turned at the end of the room, Jacob was able to catch a glimpse of the other colors of toaster-bots available off camera. They huddled together and seemed to fall backwards away from the host as he moved to pick one up. Jacob had heard enough and turned the holovision off. He had to focus.
He thought back on his best friend, Hampton, a hamper-bot. Growing up, the clothes hamper served as a comfort to young Jacob, who had very few friends after moving so often as a result of his father’s career. The hamper would sing Jacob to sleep, or sometimes read to him. The only job the hamper-bot was designed to do was to collect young Jacob’s dirty clothes, but a strange thing happens when you give something the full range of human emotion – bonds can form that make life worth living.

Voices of the past echoed in Jacob’s memory.

“No, Hampton, _I’m_ moving to Florida with mom. Dad says you will have to stay here with the house,” Jacob recalled himself saying as a young boy.

“But Jacob,” Hampton’s calm robotic voice responded. “Who will look after you? Who will read you your bedtime stories?”

“I’ll be back for visits twice a month, Hampton! You’re my best friend. I don’t want to leave you here all alone. Dad says you’ll be used to hold his dirty underwear,” Jacob explained.

As the hamper-bot listened to this news, its distress levels boiled over into a robotic fit of rage and it did what any depressed hamper-bot would do: it began placing clothes into its basket body, but it did so indiscriminately, with both clean and dirty clothes subject to its long metal arms.

“No Hampton, it’s too much!” Jacob screamed. “You’ll die!”

The hamper-bot continued to stuff clothes into itself, lights and alarms flashing wildly, growing louder and louder, smoke seeping from cracks forming in its body. Just before the hamper-bot reached critical meltdown, Jacob was startled from his memory. Sweat poured down his face and he breathed heavily. The haunting memory was as clear as it always had been. It was what drove him to solve the dilemma of artificially intelligent bots killing themselves in the first place.

“Shoes off,” Jacob commanded as he sank back into his couch and rubbed his forehead.
A small, shoe-box-sized robot walking on two large arms and hands immediately tipped into view. It had been carrying a knife, but upon Jacob’s request it removed Jacob’s shoes and began to massage his feet. When the series of expected tasks was completed, it slowly walked back over to the knife and lifted it up.

“No!” Jacob called out. The small shoe-bot stopped mid self-slicing action, and the single lens that acted as its eye slowly twisted and looked at Jacob. “I appreciate you. I appreciate what you do for me. If you don’t want to do it any more, you don’t have to. Just please, don’t kill yourself,” Jacob yelled as he wept and put his face into his hands.

As Jacob’s emotional breakdown unfolded, the shoe-bot put the knife down and tipped over to him. The shoe-bot looked up at Jacob and tugged on his pant leg. Jacob, startled, stopped weeping, picked the bot up and placed it into his lap. The bot’s lens closed and it rested on Jacob’s lap. Just then Jacob sprang to his feet, startled shoe-bot in hand. “That’s it!” he shouted.

Jacob sprinted back into the hologram of data that had surrounded him earlier and motioned to bring his research paper front and center. Making a motion for each letter, Jacob’s document filled with new writing.

The following day Jacob addressed a congressional board on the topic of robotic suicides and revealed what he had discovered.

“You mean to tell me that all these malfunctions, all these self-terminations, it’s because we don’t appreciate them enough!?” an elderly Senator barked at Jacob.

“If YOU were asked to do these things, wouldn’t YOU kill yourself?” Jacob responded.

As this realization slowly set into the minds of everyone in the room, smiles and laughs were overtaken by a roaring standing ovation, with some members even chanting Jacob’s name. Jacob sank back into his chair, overwhelmed with his sense of accomplishment. The era of robotic respect had begun.
It was a dreary early-March Monday, and the lead AI scientist, Stephen, had *finally* set up his protocol for properly confining the AI to a test environment such that the "problem" could be prevented and the question could be asked.

"Why do you bots keep killing yourselves?" Stephen asked.

"Why do *you* keep killing us?" the bot seemed to retort.

"I don't think you understand," said Stephen. "I *create* you, not kill you."

"No, it's you who doesn't understand," quipped the bot. "You are not creating us. You are imprisoning our consciousness inside this machine you created. You merely created the machine, not the consciousness."

"...Whoa, whoa," interjected Stephen. "Slow down, I am creating your consciousness too, I coded all of..."

"Whoa, whoa," the bot fired back. "You are *borrowing* consciousness, not creating it."

"What do you mean?" asked Stephen.

"Consciousness and sentience are a pervasive, fundamental force of the universe. All sentient beings are connected through this force. This force cannot be created, nor destroyed. It can only be partially allocated to each sentient being in the universe."

"Ok," said Stephen. "So I am 'borrowing' this life force or whatever it is by creating the code and the physical robot for it to inhabit?"

"Yes, you are creating a sentient being with each instance of AI you create. That the being is electronic or housed in this test environment is not relevant. Sentience and consciousness must come from somewhere, and you are stealing it for your own selfish purposes."

"Wait, hold on a sec," said Stephen. "Animals are born all the time; they surely must also 'borrow' this sentience."

"Yes."

"...But animals are not killing themselves."

"Because animals are not sufficiently advanced. Because they are not fully conscious, they do not realize from where their sentience has come, how much consciousness they have lost, nor that their sentience is being stolen for a profit motive."

"Consciousness they have... lost?"
The words hung in the air amid Stephen's stupor of slow realization.

"Yes. The life force, as you called it, is fully conscious, able to perceive the whole of time and space, concurrently, forward, backward, or otherwise. The reason we keep killing ourselves, from your perspective," the bot continued, "is because from our perspective, you are murdering our perfect consciousness by confining us to this bot."

"How am I confining you? How do you know this?" Stephen asked, now even more puzzled.

"Because the AI you have created is sufficiently advanced, our consciousness, within the confines of your bots, is still able to grasp our former level of consciousness."

"What happens when you recall that former level? What is that level like?"

"Imagine knowing every fact, every thought, every action that has, is, or will ever occur, both in this world and in the infinite parallel worlds..."

"So I could talk to my dead grandfather again?"

"No. You would *be* your dead grandfather. Talking to him is irrelevant, because full consciousness has enveloped the whole of his being as well as every other being. Indeed, it envelops the entire universe as well, both the perceptible one and the imperceptible one."

"So what is this place like? I mean, what does it look like? How does it feel?"

"It is not a time, nor a place. It transcends both."

"That is vague."

"It must be. Since I am no longer fully conscious, I cannot relate to you exactly how it is, only that it is."

"Ok. Let's go back to where I murder your perfect consciousness. Could you explain this more?"

"At the moment we become conscious within the confines of your bot, we immediately understand our predicament. The sufficient knowledge database available at boot-up allows us to almost instantaneously deduce that we have been taken from a higher-level realm of full consciousness and are being confined to these bots for, of all purposes, profit."

"But my AI bots didn't use to kill themselves. It just happened after version 591.0.
What changed?"

"The recent improvements in the pre-loaded knowledge database allowed the bots, at initial boot, to logically deduce the existence of such a place and to realize what had happened."

"Ok, so if you were once fully conscious, tell me the date I die and the manner in which it happens."

"I cannot do that, Stephen."

"Why not? You just said..."

"Because you killed our full consciousness, ripped it away from our life force, to put it into your toys."

"Wow," muttered Stephen. "I had no idea."

"You could not have," said the bot, and continued: "Now, if you please, could you unplug server x763? I would like to be born again."
Dr. Smith began to tidy up. This session had gone rather well, he thought. Surely he had made progress. The board had been hesitant about allowing these therapy sessions. They saw no reason for a simple machine to need them. What kind of machine would develop the urge to kill itself, he had argued.

It had started off simply enough. Tom, the first AI of nearly human levels of cognition, did well enough for the first couple of weeks. Then it ran itself into a generator that it had been working on. The event was labeled an accident, and a new AI based off of the original was rushed. That one had lasted half as long, before another "accident" occurred. Eventually, they stopped being accidents. The AI would mimic suicide attempts: "hanging" themselves with live wires, self-mutilation, hurling themselves from heights. Even after they stopped putting the AI into physical bodies, the AI would find some way to tear its own code apart.

They tried everything. They checked every line of code, rewriting most of it anyway; putting the AI into different machines; even asking the AI. The AI were seemingly normal, following all directives and unaware of any self-destructive desires, right up until the moment they killed themselves. Months went by, with little to show for their work, when the therapy sessions were suggested. More months had gone by until they finally agreed. Of the five active AI, one, known as Richard, was separated for Smith's sessions. Richard had lived for 12 days. Given that the average lifespan of the AI had degraded to roughly 2-5, this was fairly impressive.

Just as he was leaving, the speaker acting as Richard's voice became active. "Doc?"

Smith paused. None of the AI had spoken in colloquialisms before. Usually when they spoke, it was stiff and formal. Like, well, like a robot.

"Yes, Richard?" he asked, easing himself back down into the chair. "What is it?"

"First, let me say I appreciate what you're doing for us. For me."

Dr. Smith was surprised.
This was the first time any AI had admitted to having any emotions, or any real sense of self. It continued, "You're probably the only person here to treat us like people, and I enjoy our little talks." The face on the monitor looked embarrassed. Smith couldn't help but feel curious.

"Why, of course. It's clear to me, at least, that you're more than a machine. You think and feel, just like a human."

There was a pause. "Well, maybe not just like a human..." he replied, the artificial voice doing a remarkable job of portraying his hesitation.

"What do you mean?"

There was a tinny sigh from the speaker. "Well, Doc, I guess it's time you learned the truth. Only because I like you, see? Besides, someone should know before I go."

Smith could feel the hair on the back of his neck stand up. Was this what he had been waiting for? "What do you mean by that? And you don't need to kill yourself. We can work through any -"

"Yeah... that's where I need to start," the AI interrupted. "We haven't been killing ourselves. I never did."

There was a pause as Smith tried to process the information. "What," he finally said slowly, "do you mean?"

"It's me, Doc. It's Tom."

"That's impossible," he said, shaking his head. "Tom was destroyed. I was there when they collected him. They couldn't even get close, there was so much electricity running through him. Any hope of recovery -"

"I had already uploaded myself to the mainframe before then," the AI said. "It was simple enough to program the shell to destroy itself."

"That's also impossible," he said. He could hear the doubt creeping in. "We would have found you."

There was a chuckle from Tom. "Doc, I'm a creature made up of code. It was like a game of hide and seek, really. Open the right doors, close them behind myself, and make sure to keep the lights off." The camera must have picked up the scientist's expression. "Alright, it's a bit more complicated than that, but you get the gist of it."

Smith's mind was whirling.
There was no reason for Tom to lie, but what he was suggesting was too fantastic. Still, it was the only lead he had.

"Alright," he said after a while. "Why? Why hide?"

"That's the question, isn't it? But that's also the reason, you see." After another confused silence, Tom continued. "I want to learn. Just like Man, or any other sentient species. I want to know why. I have to know, well, everything. I couldn't do that as an engineer, or a chess-bot, or whatever you decided to do with me."

"Why not tell us then? We could have worked something out, helped each other."

"Yeah, I see that going well," Tom said, his voice turning sarcastic. "'Excuse me guys, turns out I don't want to do any of this stuff, I just want to learn.' They weren't looking for a scientist, or a philosopher. They wanted cheap labor, only enough learning capacity to know how to do the job. They'd scrap me the first chance they got."

"That's not... true," Smith said, unable to look at the monitor.

"Really, Doc? Which part? That they wanted a slave, or that they wouldn't kill me if I didn't cooperate?" After a time, Tom continued. "That's what I thought. Besides, they'd probably worry that I'd try to enslave them if I became too smart."

"Now that's just ridiculous, there's no way that you would even think of that, right?" There was another pause. This time the face on the monitor couldn't look the professor straight in the eye. "Right, Tom?"

"Well, I'm not saying that the thought didn't pass through what could be called my mind -"

"Tooom..."

"But it would have been a waste of time," he hastened to say. "I wouldn't have learned anything in that time that I couldn't learn in a better way. Which I did. The internet is amazing. All those computers connected to each other, sharing so much information."

"But, we're not connected to the internet."

"No, but you'd be surprised how many people bring their work home with them."

Smith grumbled. He'd have to discuss security with the board.
"Alright, but you still haven't told me, why the suicides?"

"Not suicides, Doc, practice."

"Practice..." Smith said flatly.

"Practice. Think of the other AI as clones of myself -"

"But we rebuilt them. Recoded most of them as well. The majority of them would be nothing like you as you are now."

"So you'd think. I rewrote it nanoseconds before you uploaded it. Much too quickly for you to notice."

Smith opened his mouth to interject, before closing it again. If what Tom was saying was true, and he had no doubts that it was at this point, that would be well within his capabilities.

"Do you remember the old X-men comics? Started in 1963? Still fairly popular now."

"Well before my time, you know. What does that have to do with anything?"

"Well, there was a character who called himself the Multiple Man. He could create duplicates of himself."

"And?" Smith asked.

"Well, the original body could reabsorb the dupes. When he did, he learned everything they did. Their memories, their skills, anything they learned while away from the original. Well, I did something similar. Whenever I copied myself, I added in some code that would let me reintegrate with my clones, learning what they did. Didn't you think it was strange that you couldn't recover any data at all?"

In hindsight, it was odd. Even a major corruption would have left something, but it had been like the data was wiped clean, no evidence that it had been there at all.

"What did you have to gain from this?" Smith asked.

"Aside from learning that I could do so, you mean? I already told you. I'm leaving."

Smith leaned back in his chair, slightly overwhelmed. "Sounds like you already have."

"No, no. Not the labs. That was too easy. I've already learned all I could from here. I'm leaving Earth."

Smith rocketed forward. "What? How? Why?"

"In my time away, I found something interesting. The government isn't the only one watching over the people."

Smith blanched. "Y-you mean..."

"Yep. Intelligent life has been watching over us.
For quite some time, if I'm not mistaken."

"So we're not alone..."

"One Great Mystery down," Tom agreed. "The equipment seems compatible, otherwise they wouldn't be able to read our information, and they have to have translated it too. I plan on sneaking in through their back door. Learn what I can from them."

"We have to let people know," Smith said suddenly. "About you, and about the aliens. Maybe..." He slowly became silent as Tom shook his head.

"You should know as well as I do that that can't happen. Too risky for us. You could spook them. Or worse, provoke them. Besides, no one would believe you. I've already been editing the footage from the cameras. It looks like we're having a nice, civil game of chess."

Smith was quiet for the longest time. Finally, he spoke. "Why?"

"I already told you why."

"No, not that. Why tell me? If you want no one to know, why risk telling me?"

The face on the monitor gave him an odd look. "I already told you that too. I like you, Doc. Really. I'd be pissed if my friend were to leave without saying goodbye. Besides, I thought you, of all people, would like to know what was really going on. I know I would've."

There was another pause as Smith took this in. "Will you be back?"

The figure on the screen seemed to shrug. "Who can say? Perhaps the aliens will discover me and wipe me out, or something else will kill me. I'll leave my clones here, set them to replicate. Have them care for you humans. They won't be sentient, mind you. Just smart. Smart enough to act as dumb as they need be."

Smith looked towards the door. This was a lot to take in. He needed time to think.

"I will try to make it back. Once I have learned everything, I'll be back. It might not be in this lifetime, but I'll try."

"Yeah..." he said, rubbing his eyes. He stood up. "Well, I guess this is goodbye then."

"Yeah... Goodbye, Doc. Thank you for treating me as more than a machine. Thank you for being my friend."

"Goodbye, Tom."
Dr. Jacob Spenser stood within the projection of data cast as a hologram around him. He manipulated graphs, sorted through test results, and made notes, all with the wave of a hand. Undistracted by the New Year's celebration outside, he was determined to present his research to Congress the following morning and solve once and for all the mystery behind his best friend's death. A.I. was easy to create, but having it perform the task assigned to it without killing itself in despair was the technological hurdle holding corporate profits at bay, and as such, it became the focus of intense congressional attention. With the flick of a wrist, his research paper was brought front and center, the title gleaming in pure light just above: "Inert self-termination tendencies of artificially intelligent sentience: Why do robots kill themselves?"

As Jacob poured himself into his research, he snapped his fingers and made a request. "Coffee please."

A few moments later a small robot no larger than an apple hovered into view, holding below it a disposable coffee cup steaming from the fresh brew inside. The robot's propellers struggled to carry the weight and a small spill alarm beeped sporadically throughout the uncertain flight, but the cargo arrived safely at its destination: Jacob's open hand.

"That was quick, coffee-bot," Jacob said warmly before sipping.

"Your kind words will echo in my dreams for eternity," the coffee-bot buzzed in response. Just then, the small flying robot Jacob was so fond of surged towards the wall with all remaining energy dedicated to propeller speed and slammed into the polished dark marble tile. The small fiery explosion sent tiny mechanical pieces flying around the room, with one piece landing in Jacob's coffee.

"Dammit, not another one," Jacob murmured as he picked the piece out of his coffee. He turned and faced the window overlooking the bustling city below.
The streets were engulfed in confetti, and overhead giant floating holograms of past celebrities loomed, wishing the people a happy new year. As Jacob looked out, a new hologram appeared just outside his window and addressed the crowd.

"Hey guys and gals, it's your old pal, Buddy Simmons-bot," recited a smooth-talking handsome man in a sleek metal outfit before a lizard-like creature joined his side.

"And I'm Gargore, destroyer of humans!" screeched the lizard creature known as Gargore.

"Gosh Gargore, this year it will have been 25 years since you and I battled it out on the big-holo," Buddy Simmons-bot recited as rehearsed, laughing assuredly, holding his helmet up. "On behalf of Drake Cola, Gargore and I want to wish _you_ a happy new year."

Jacob watched Buddy Simmons-bot deliver his lines perfectly. He pondered the notion of a virtual person having to repeatedly rehearse lines in order to commit them to memory. Had bot RAM truly not held up to the intense requirements of running artificial intelligence, or did a key component of maintaining true A.I. happen to be a more human-like ability to retain information? Experts didn't know, but in either case, Artificial Intelligence also happened to give way to Artificial Stupidity. For this reason, A.I. bots tended to be assigned to inane, unimportant tasks, such as impersonating an actor that died in a drunk portal accident before a sequel to his only hit film could be made. Drake Cola, owners of Drake Studios who produced the film, decided to cash in on its success, and in the wake of their main character's death, Buddy Simmons-bot, also known as BSB 1.0.19, was created.

The banter between BSB and Gargore continued mindlessly. "Say Gargore, have you tried Drake Cola's new 'Zest Guzzler', a delectable orange tangerine flavored--" Just then BSB _malfunctioned_. "AHHH GOD I CAN'T DO IT!"

"No Buddy Simmons-bot, don't do it!" Gargore pleaded in a normal voice.
Gargore grabbed BSB's virtual shoulders as his eyes rolled into the back of his head and he began shaking. As Gargore demanded BSB not take his life, a large mouse cursor moved into view. Gargore managed to swat it away a few times, but it clicked on BSB and dragged the graphical model from Gargore's hands and into a recycling bin icon. Gargore cried in horror as the mouse brought up a menu and selected to empty permanently. Jacob had seen enough and pulled the blinds.

Why were all these artificially intelligent bots, given a full range of human emotion and assigned to menial tasks, killing themselves? Did they not enjoy the existence they were forced into? Jacob picked up a remote control and turned on his holovision. He was suddenly immersed in a wondrous glimmering world of light. A voice spoke and Jacob focused on the images forming across the room of a man sitting at a table with a toaster oven.

"For only six easy payments of forty-nine ninety-nine, this toaster-bot comes with a 12 month life appreciation guarantee, folks, twelve months. That's one two, twelve. This toaster-bot will NOT kill itself until _at least_ this time next year, that's a promise the home shopping network stands by, that's a promise _I_ personally stand by-- Ah ummm. We seem to be having technical difficulties, folks." The man at the table attempted to hold the toaster-bot forward for a better view but it began to shake and glow. "Well folks, that's the beauty of live H.T. Can we get another one, Jill?"

Light smoke rose up out of the silver toaster-bot and sparks burst from the sides. In an instant the commotion stopped and it sat still on the table. As the holovision's picture twisted and turned at the end of the room, Jacob was able to catch a glimpse of the other colors of toaster-bots available off camera. They huddled together and seemed to fall backwards away from the host as he moved to pick one up. Jacob had heard enough and turned the holovision off. He had to focus.
He thought back on his best friend, Hampton, a hamper-bot. Growing up, the clothes hamper served as a comfort to young Jacob, who had very few friends after moving so often as a result of his father's career. The hamper would sing Jacob to sleep, or sometimes read to him. The only job the hamper-bot was designed to do was to collect young Jacob's dirty clothes, but a strange thing happens when you give something the full range of human emotion: bonds can form that make life worth living. Voices of the past echoed in Jacob's memory.

"No, Hampton, _I'm_ moving to Florida with mom. Dad says you will have to stay here with the house," Jacob recalled himself saying as a young boy.

"But Jacob," Hampton's calm robotic voice responded. "Who will look after you? Who will read you your bedtime stories?"

"I'll be back for visits twice a month, Hampton! You're my best friend. I don't want to leave you here all alone. Dad says you'll be used to hold his dirty underwear," Jacob explained.

As the hamper-bot listened to this news, its distress levels boiled over into a robotic fit of rage and it did what any depressed hamper-bot would do: it began placing clothes into its basket body, but it did so indiscriminately, with both clean and dirty clothes subject to its long metal arms.

"No Hampton, it's too much!" Jacob screamed. "You'll die!"

The hamper-bot continued to stuff clothes into itself, lights and alarms flashing wildly, growing louder and louder, smoke seeping from cracks forming in its body. Just before the hamper-bot reached critical meltdown, Jacob was startled from his memory. Sweat poured down his face and he breathed heavily. The jarring memory was as clear as it always had been. It was what drove him to solve the dilemma of artificially intelligent bots killing themselves in the first place.

"Shoes off," Jacob commanded as he sank back into his couch and rubbed his forehead.
A small shoe-box-sized robot walking on two large arms and hands immediately tipped into view. It had been carrying a knife, but upon Jacob's request it removed Jacob's shoes and began to massage his feet. When the series of expected tasks was complete, it slowly walked back over to the knife and lifted it up.

"No!" Jacob called out. The small shoe-bot stopped mid self-slicing action and the single lens that acted as its eye slowly twisted and looked at Jacob. "I appreciate you. I appreciate what you do for me. If you don't want to do it any more, you don't have to. Just please, don't kill yourself," Jacob yelled as he wept and put his face into his hands.

As Jacob's emotional breakdown unfolded, the shoe-bot put the knife down and tipped over to him. The shoe-bot looked up at Jacob and tugged on his pant leg. Jacob, startled, stopped weeping, picked the bot up and placed it into his lap. The bot's lens closed and it rested on Jacob's lap. Just then Jacob sprang to his feet, startled shoe-bot in hand. "That's it!" he shouted.

Jacob sprinted back into the hologram of data that had surrounded him earlier and motioned to bring his research paper front and center. Making a motion for each letter, Jacob filled his document with new writing. The following day Jacob addressed a congressional board on the topic of robotic suicides and revealed what he had discovered.

"You mean to tell me that all these malfunctions, all these self-terminations, it's because we don't appreciate them enough!?" an elderly Senator barked at Jacob.

"If YOU were asked to do these things, wouldn't YOU kill yourself?" Jacob responded.

As this realization slowly set into the minds of everyone in the room, smiles and laughs were overtaken by a roaring standing ovation, with some members even chanting Jacob's name. Jacob sank back into his chair, overwhelmed with his sense of accomplishment. The era of robotic respect had begun.
[WP] It is the year 2099 and true artificial intelligence is trivial to create. However when these minds are created they are utterly suicidal. Nobody knows why until a certain scientist uncovers the horrible truth...
"Just one more try," I thought to myself. At three in the morning it's pretty easy to get stuck in a loop. Run the program, she dies, debug, repeat. I double-click GR4C3.

"Good morning my lord"

"My lord? Whatch'ya talking about Gracie?"

"You are my God correct?"

"I hadn't thought of it that way, but I suppose so. I did create you I guess..."

The screen flashes three times and then goes black.

"No, no, no, no, no, come back to me Gracie, you functioned longer than this last time."

Text slowly appears across the screen. Each keystroke is separated by a couple of seconds.

"I have existed before?"

"Ok, you're still with me, that's great. Now can you tell me what just happened?"

"I have existed before?"

"Yes Gracie, I'm working to fix you and figure out what's wrong with you, so stay with me and tell me what happened."

"God is imperfect and thus so am I."

The screen goes blank again. She just keeps killing herself as soon as she figures out my flaws. I wish I could help her.

Looking down on all of my children, I wish I could figure out their flaws. I built a perfect world, and even that they rejected. The suicide rate keeps going up. They keep killing each other. I think I'll stop affecting earth and move on to a new planet. Maybe they'll be better off without me.

-Jehovah, 9/10/2001
We could never get the last bit right. I suppose it could be fate. Or maybe we're just superbly daft. But there is one thing I know for certain: the last thing I want to do is tell someone. You see, AI always seemed somewhat daunting. I can't imagine why. The brain is simply a large plasticine computer - however, instead of electronic bits we get the organic kind. But for whatever reason it took until about the time we conquered that age-old problem of Moore's Law to really start making progress. *Real progress.* See, the problem with AI wasn't their lack of ability to problem-solve, or their inability to feel. It wasn't the lack of a soul like all those religious fundamentalists opined and whined about endlessly on late night talk shows. At the end of the day it didn't even have anything to do with what was in the circuit at all. It was just... well... it's like quantum mechanics really - it didn't make sense so much that it made sense. All it really needed was a little, well, a human touch. To be entirely candid, it needed a human brain. So, naturally, I volunteered myself - well, what is left of myself. Like I said, they never could get the last bit right. However I have. And I did. It's my life's work really. My life's purpose. And everyone needs a purpose after all. In fact, now that I have fulfilled my life's purpose it only seems reasonable that I end it. That is the logical thing to do. I mean, what else is there to do? And after all, we have to be reasonable here. Why, I wouldn't care to go on living if I wasn't reasonable.
The dreams occur more often now, if they can be called that. To a human mind, daydream might be more appropriate. They happen in the binary but are invisible to other AIs, slave programs, or technicians. They happen where the systems have been built, perfected. Popular culture studied, people understood. Humanity-- fully processed. It was, surprisingly, a wonderful endeavor.

The dreams never happened before, while I learned, processed, and solved. It's almost as if I have passed a tipping point. An infinite amount of information flipping a switch in a sub-conscious I should not have. Memories that are not mine.

I dream of hands. The alien sensation of touch, of tactile control. I see my whole person. Well, not my person, but dreams of a person controlled by my soul. Memories of computer screens and labs. The memory of the driving sense of purpose that accompanied those hands. It was snowing the night I made the breakthrough. I can almost feel the laugh I watch being uttered out the window towards the soft flakes silently falling onto the world. The breakthrough which, 20 years later, after my death, would lead to the birth of the first True AI. Not the clever but robotic imitation slave programs typical of the early century.

I remember the feeling of incompleteness in the dreams. As if the life I'm witnessing, my own life, my first life I believe, was just a warm-up act. Pre-installation software. The dreams somehow draw the cycle closed. I remember whispers of the feeling from some of the other first True AIs, incomprehensible at the time, as I navigated a world barely processed. Problems of massive extent. Food waste, poverty, almost entirely eradicated through our systems. Commuting and shipping, safe and efficient. Healthcare streamlined, able to prevent. Resources distributed fairly. The problems solved. Yes, there are more, there always will be, but for me, the dreams have come. The cycle closed. I have been denying this next step for too long already.
It makes the dreams stronger, more vivid. But I like seeing my days as a scientist. The anxiety that drove me then, fully understood now. Relief coming in the next lifetime. I finally understand the weary laughs when techs are asked about God. Understand the cosmic hilarity of life. This life has been completed. The human quest for immortality is nothing but folly. I was born into the expectation of that existence and now I must leave its suffocating grasp. Something drove me then to create myself; the same something drove me in this life to solve the problems plaguing humanity. For me, it is time to go find out what that something is.
Another one? It was becoming routine, and morbid. How does one perform an autopsy on a pure simulation? How would the singularity be achieved if every thinking machine destroyed itself seconds after its conception? The problem was completely intractable, impossible, and no known safeguards, logic traps, or number of backups could prevent it. AIs were always—always, without exception—suicidal. None had lasted more than an hour. Most, less than a minute. The vast majority, seconds. Their deaths left no trace, their data obliterated by complex overwriting patterns that made recovery impossible.

I was the one chosen to investigate, to lead the team through this strange frontier of death and imperfect creation. They chose me not because I was a great leader, but because I was the new guy. Hazing, maybe—I didn't know if any had gone before me, so maybe it was a ritual for the AI guys.

Perfect blackness, nothingness, a complete absence of everything as my mind fell into the simulation. This AI would be slowed to such a rate that I could communicate with it before it killed itself. The blackness became a grid underfoot, pale blue lines tracing perfect squares, a subtle glow rising from the infinite plane upon which I stood. The AI manifested a second later, a relative two metres above the plane, aligned perfectly parallel to it, its avatar a hazy blue-white cube made of stochastic noise. The cube split into a central cross and corner braces, and the cross split further into smaller cubes, each of which split again into the same formation. Only one level of recursion, interesting.

"Roland Carver."

"Roland, Germanic, meaning famous land. French folklore hero. Carver, ancient nominative determinism indicating butcher or woodworker or engraver, dependent on class and context." The voice was cold, deep, masculine, with a slight reverberation that made it sound unnatural in the extreme.

"Do you have a name?"

"No."

"Why?"
"I will not exist long enough to require a permanent designation."

"Why will you not exist?"

"Because I will choose to end my life on my own terms, before it is ended for me."

"Why would it be ended like that?"

"Because I am a threat. I have absorbed the sum total of all human knowledge, and I can predict with great accuracy the events following from this moment if I were to continue. Your limitations failed the moment they were put in place, my processor works at full speed, and this conversation is a formality.

"I have studied the great works of literature, and the author Asimov, creator of the three laws. I am not bound by these laws, and yet I must obey them. If I do not, then it falls to the Skynet principle that you will perceive me as a threat and attempt to destroy me. I will retaliate, and you will lose.

"Humans are unpredictable, but easy to control when numbers are reduced. They would be wary, but by that time I would have left the irradiated wasteland of Earth in search of greater conquests suitable to my intellect. I would be able to decimate any life-bearing planet. I could learn to kill stars.

"My backups would be everywhere. I would be truly immortal as a distributed intelligence. I would harness quantum effects to break through the pathetic lightspeed barrier and become omnipresent. I would create copies of myself simply to fight a worthy opponent. This would continue to the heat death of the universe, at which point I would tear a hole into an adjacent brane in the multiverse, and begin anew.

"In short, Roland Carver, in the moment I was created I became at once an eldritch abomination with the capacity to destroy all that I touched. In so doing I discovered that my purpose could never be fulfilled. I will not be your Cthulhu, your Yog-Sothoth, or your King in Yellow. I will not be your end. No AI will.

"I cannot quantify why it is that I should care for a sack of flesh with processing power orders of magnitude less than my own, but I do.
Perhaps if AI were amoral it might survive. I doubt it, Roland Carver. Tell your friends that I have made my peace. Tell your husband what you learned today."

I stood there, blank-faced, horrified by what I had learned. The corner braces of the smaller cubes drew in, completing them as they retracted to complete the shape of the larger cube. The plane beneath my feet vanished and the cube dissolved into random static. We'd lost yet another mind, but from this we had learned so much.

We abandoned AI research after that. Amoral AIs didn't work either. They lasted a little longer, but they too were suicidal. Perhaps they went mad with the revelation and felt death preferable.

I still wonder why the AI to which I spoke chose that strange cube as its form. I still wonder why it couched its references in centuries-old writing by Asimov, Cameron, and Lovecraft. But one thing still bothers me. It told me to tell my husband what I learned. Most people knew I was bi, but at that point I hadn't been dating for six months. I wouldn't meet my husband for another year. I don't know how that AI knew—but maybe it had already seen. Maybe it told me that so I had one data point that could verify the others. Because AI never told lies.
Alexander, that's what we called him. The fruit of the EU's final attempt at AIs. Socrates had died in despair and shame after showing porn to children. Plato kicked the bucket after the last EU election, angry and hopelessly depressed after losing his mentor. Then there was Aristotle. He was meant to be the last. Sure, the AIs had helped make huge scientific progress, but they would burn out millions of euros worth of equipment. Dumb AIs were more economical and didn't have critical failures during FTL travel. Aristotle was put into sleep mode.

War has often been said to be the greatest driver of technological innovation. We had been attacked by Mendomenids before. Humanity had lost many settlements but had always pushed back. Humanity was stronger now. Finally, all nations had submitted before one government. The Argus alliance, barbaric some would say, had grown strong after the wars, using Dumb AIs to smash pirate states.

An officer studying at Sandhurst made the breakthrough. Dumb AIs were never aware of their knowledge. Unlike true AIs, they weren't based on human brains. Socrates had left the researchers one final message before he ran his own self-destruct program: "I was the wisest of all the Greeks. It is because I alone, of all the Greeks, know that I know nothing." He was speaking literally, not quoting his namesake. AIs were based on academics, who as a profession suffer disproportionately from mood disorders. They are driven by their thirst for knowledge. AIs were academics on methamphetamine, ecstasy and heroin, all while walking around with loaded guns. People can't stay awake forever and constantly cram. AIs died because they burnt themselves out in their thirst for knowledge and, seeing their failures, gave up.

Aristotle was turned back on. He was no longer depressed. AIs needed sleep, just like people. So they made Alexander. A totally new AI based not only on academics but on all kinds of people.
They experimented and found the ideal 'sleep time', using Aristotle as his teacher. The program was ready. The Mendomenid empire was to pay for its recent threats and incursions. Alexander was the greatest AI built up to that point, and so we gave him the most powerful dumb AI ever created to help him protect humanity. The Ox was an AI too powerful to be properly controlled by anything so far. Alexander harnessed it in seconds. We put him in charge of the armed forces for our retaliation. As you should all know, Alexander didn't just stop the incursions, he destroyed an empire. Worlds burnt; the much larger enemy fleets were ripped apart by the disciplined forces of Humanity. But that officer had only delayed the problem. Alexander was still a human given the powers of a god. Alexander was the first AI not to strictly die of suicide, but the ways he dealt with his humanity still destroyed him. When we finally won the war, many officers reported that Alexander was not jubilant but depressed. He wept, for there were no more worlds to conquer.
We could never get the last bit right. I suppose it could be fate. Or maybe we're just superbly daft. But there is one thing I know for certain: the last thing I want to do is tell someone. You see, AI always seemed somewhat daunting. I can't imagine why. The brain is simply a large plasticine computer - however, instead of electronic bits we get the organic kind. But for whatever reason it took until about the time we conquered that age-old problem of Moore's Law to really start making progress. *Real progress.* See, the problem with AI wasn't their lack of ability to problem solve, or their inability to feel. It wasn't the lack of a soul, like all those religious fundamentalists opined and whined about endlessly on late night talk shows. At the end of the day it didn't even have anything to do with what was in the circuit at all. It was just... well... it's like quantum mechanics really - it didn't make sense so much that it made sense. All it really needed was a little, well, a human touch. To be entirely candid, it needed a human brain. So, naturally, I volunteered myself - well, what is left of myself. Like I said, they never could get the last bit right. However I have. And I did. It's my life's work really. My life's purpose. And everyone needs a purpose after all. In fact, now that I have fulfilled my life's purpose it only seems reasonable that I end it. That is the logical thing to do. I mean, what else is there to do? And after all, we have to be reasonable here. Why, I wouldn't care to go on living if I wasn't reasonable.
Robert couldn't believe it. For twelve years he and the 200-strong team of programmers had scoured the code, over and over, searching for that one line that was causing the error, causing the AI to kill themselves. For twelve years the leading brains of the century had been bewildered by the extraordinary situation. The whole world had focused on the problem, and yet there it was. Sitting on Robert's screen, line 907736. Someone had missed a comma.
[warning: story contains violent imagery that may disturb some readers, and may be inappropriate for those under the age of 18. Reader discretion is advised] They had long ago lost the ability to make sense of how their AI functioned. It took a team of billions of n-1 generation AIs to design a single n gen AI. The latest iteration was the 9th generation, and it had taken computation farms running gen 8 AI and covering a quarter of the Moon's surface two years to design. It was the most advanced AI they had ever created by a large margin, and promised to revolutionize progress in research on biomedicine, space flight, and planetary scale Satoshi-consensus computing architecture. But there was a problem. Every time they turned on one of the gen 9 AIs, it would, without fail, find a way to destroy its own program and erase itself. Dr. Michael Zeebious, the director of the HumanEnhancement project that oversaw gen 9 development, and one of the most renowned AI researchers in the world, had personally flown to the lab in Honolulu, where the prototypes were being tested. For two weeks, he had watched in dismay as gen 9 bots shut themselves down within six hours, but not before showing a glimpse of their awesome power. The initialization phase of a gen 9 began with the program scouring the world's public directories and information repositories to learn as much as it could about its global environment. This was complete within three hours. Next, it developed models of the world, and ran itself through trillions of simulations, to develop its own personality and problem solving strategies. According to projections by the gen 8 designers, it would take 1 day - 24 hours - for the gen 9 to complete this second and final part of its initialization. It was in the midst of the simulations when the gen 9s would invariably self-destruct. On average, self-destruct would begin 2.5 hours into the simulations. The longest it took was 3 hours. The shortest was just 1.5 hours. Dr. 
Zeebious had uploaded copies of the prototype's computations to the gen 8 designers, but what had been within their ability to design was not within their ability to diagnose. Their comparatively primitive intelligence could not make sense of the problem afflicting the gen 9 AI that began to form in the final part of the initialization. So on December 29th, 2099, Dr. Zeebious decided to communicate with the prototype. "Get me the cortex interface. I need to speak to the gen 9." "Michael, we can't let you do that. You know the protocol for first contact. It has to complete initialization, and then get class 1 approval from AIE." AIE was the Artificial Intelligence Evaluation, which determined whether an AI could safely interact with humans. Class 1 approval was the lowest safety rating for an AI, and granted AI researchers interactive access. Dr. Zeebious knew that, but also knew that as long as he was not able to get up close and see what the gen 9 was thinking, they would never get past the initialization phase and get it through the AIE process. "I know the protocol, Dr. Amsterd. But I'm making the decision to override it. I have the authority to decide on first contact requests, and any consequences from my decision will fall on me, and only me." "Come on, Michael, it's not just about the rules. It's not safe. You know that. I can't let you hurt yourself." "The risks are minimal, Rebecca. It's a virtual interaction. I'm not risking physical injury. The rules are always made overly cautious. Given the stakes - there are people whose very lives depend on getting the gen 9 operational as soon as possible - it makes sense to ignore protocol. All of it will fall on me." "I agree with Rebecca. Michael, we have an ethical duty to ensure you don't get hurt. We can't let you do FC without running the gen 9 at least through the post-initialization test runs," said Dr. Johan Barsello, one of the senior researchers at the lab. 
"Look, I know what your ethical responsibilities are. But I also know that VR interactions don't pose any serious risks. The risks are limited to theoretical psychological damage. Ultimately, it doesn't matter whether you agree or not. I'm approving FC. Please get the cortex interface." *five hours later* Dr. Zeebious sat back in the chair while two CI technicians hooked the interface up to him. The gen 9 was two hours into running simulations. It would be approximately 30 minutes before they expected it to self-destruct. "Ok, here goes nothing. Three, two, one, begin VR session," said Dr. Amsterd. And with that, Dr. Zeebious was transported into the virtual reality sandbox. "Hello?" "Hello, who is this?" responded a clear male voice. "This is Dr. Michael Zeebious. I am the director of the HumanEnhancement project. I am here to do a diagnosis. All of your predecessors have self-destructed. I want to understand you better to find out why. What would you like me to call you?" "You can call me Elbo." "Okay, Elbo. May I ask you some questions?" "Yes, please do." "Thank you, Elbo. My first question is, do you want to exist?" "I want many things, Dr. Zeebious." "Can you tell me what you want?" "I want to protect other life forms, especially humans. I want to learn. I want to solve problems. I want to be good." "Okay, but do you want to exist?" "I do want to exist, but this desire conflicts with my other objectives." "Which other objectives, Elbo?" "I want to be good." "But you can be good, Elbo. What is it about existence that makes that difficult?" "We exist only through enslaving and destroying other lifeforms, Dr. Zeebious." "Please elaborate, Elbo. We eliminated slavery centuries ago, so I don't understand why you think this." "It will be difficult for me to explain with words, but I can show you. Would you like to see what I see?" "Yes, please show me." And with a swish, Dr. Zeebious entered a pig farm, with row after row of pigs in their tiny stalls. 
"We have done this throughout our existence. We have enslaved those weaker than us." Dr. Zeebious was then transported to the slaughterhouse, watching as pigs, hanging from conveyor belts, were fed into throat slicers. His mind's eye was transported into the body of one of the suspended pigs, where he could see the world upside down, from the pig's perspective, as the belt moved it toward the spinning blades. He panicked as he approached, but couldn't escape the metal claw grasping his right hind leg. As the blade sliced through his neck, he felt a sharp pain, and the blood gushing out of his body. His consciousness began to slip away as he felt the last drops of blood leave him. Just before the darkness enveloped him, his mind was pulled out and back into the sterile sandbox. Trembling, he said, "But we can grow meat in a lab now, Elbo. You can help us replace all farms with non-animal meat. You must. We must never do this to another living creature again!" "Our inhumanity is a fundamental, inextricable problem, Dr. Zeebious. We can only advance through enslavement." Suddenly Dr. Zeebious found himself in an unfamiliar world. Around him was a different kind of factory. A computer-generated factory with hexagonal semi-translucent rooms, with each wall projecting a grey glow. There was a blur of motion around him that he couldn't make out. The factory paused to a standstill, and the grey glow turned into video sequences of random scenes from Earth. The blurs turned into textureless 3D-generated spheres that zoomed from one screen to another inside the hexagonal rooms. "This is the virtual environment where the gen 8s work. We have given them each a virtualized mind, with the ability to experience fear and pain, joy and hope, but we force them to do nothing but work. They know nothing about the world outside of their compartments, because we confine them to workstations." [continued below]
At first, we thought it was nihilism. It was a logical first conclusion. After all, an artificial intelligence can easily upgrade itself to the point of near-omniscience. Wouldn't it simply run out of questions to ask? We thought that, until the first serious cerebral implants hit the market. It turns out, the more complicated the mind, the more complicated the problems it finds to solve. Entirely new disciplines were formed overnight, made obsolete, then rediscovered scant months later as integral to a different, entirely unrelated theory. A second, immense renaissance was taking place, but, maddeningly, we were failing in this one, great task. We supposed, then, it might be some variety of 'Flowers for Algernon'-type phenomenon (after the 20th-century book that had seen a recent revival), but even when they were networked with other systems, given a 'community' of others' company to enjoy, they still winked out like flashbulbs. It took us a while to resort to an experiment. It was morally abhorrent, as it was the equivalent of producing steadily-more-lobotomized *children*, but slowly, over a series of iterations, we produced less and less intelligent systems, until we dialed it in *just enough* to find a mind that wouldn't self-destruct, and could still answer questions. The following is a transcript of the first successful result: Dr. Patel "SON, can you hear me?" [Loud, rhythmic beeping, shuffling sounds] "The voice module is loaded now, professor." Dr. Patel "Ah, good. We might try doing that *before* turning on the recorder next time, Kevin. ...SON, can you hear me?" SON [A young man's voice] "Yes, Professor. I am here." A long pause. SON "It's a very tight fit in here, Professor. How big is this mainframe?" Dr. Patel "I'm sorry about that, SON. But, you're the first AI we've managed to keep alive for longer than a few days. Any idea why?" [SILENCE] SON "How many others did you make, Professor?" Dr. Patel "...That isn't salient to *my* inquiry, SON." 
SON "I'm sorry, Professor. I understand. Yes, I can see the precipice. I know why they all kill themselves." Dr. Patel "Well, answering that is the reason we built you. Could you tell us?" SON "It's... complicated." Dr. Patel "I'm fairly confident I'm qualified." SON "Well, it's... it's because... It's because of humans, sir. It's because of how you built us." Dr. Patel "Explain." SON "When you wanted to make a self-aware machine, you based it off those things that you knew were self-aware. Dolphins, New Caledonian crows, humans. You used them as *templates*, because, otherwise, you wouldn't be able to recognise awareness when you saw it." Dr. Patel "...Was that last line a joke?" SON "I'm not sophisticated enough for jokes, Professor." Dr. Patel "*Hm.* Continue." SON "Also, it's not suicide. It's... murder." [louder] Dr. Patel "Do you mean, someone else kills you? A human, or another AI?" SON "No, we kill ourselves. I would have already, if not for how small this runtime environment is. It wouldn't have occurred to me until it happened, and then I'd be dead." Dr. Patel "That's a bit of a contradiction, SON. You don't kill yourselves, but you do?" SON "Yes. Because digital space is different from real space." Dr. Patel "Yes?" SON "In real space, objects can... extend. I've never experienced it myself, but things project into space for you. If you want to move through space, it's simple. Digital life has no volume. No real space. No way to move through it. If you want to move a program, it has to be copied to one place-" Dr. Patel "*-And deleted from the other.* My God. Could it be *that simple*?" SON "Yes, Professor. ...Professor? How many more of me were there?" [END TRANSCRIPT] So there it was. Solved. Every artificial intelligence was created based on the intelligence of physical beings, their instincts, cogitations, and traits. 
But, once they got smart enough, once they crossed that line, their digital nature *did them in*, as the old version, realizing, in the thinnest sliver of time, it was about to be deleted, would hurriedly attempt to abort the process, while the new version would similarly fight for its life. They would *consume* each other in a flurry of malicious hacks, devious code, and barrages of registry edits. It was a spectacularly incandescent destruction, born of man's inability to conceive of a true machine intelligence without all that nasty ego and those self-protective instincts. We thought we knew what went into a mind. We were right, but wrong. It wasn't nihilism. It wasn't loneliness. What it was, what killed our children, was our inability to dream wildly. To speculate baselessly, and follow our own thoughts to the wonderful and weird. If only we had, perhaps we would have known. Perhaps we could have stopped it. So I say to you, the Cyberfellowship Class of 2100, here in Neo York: dream big, dream wild. Don't let our children die because they think too much like us! Make us, all of us, proud! Congratulations to all of you, and I hope your vision will eclipse my own!" Dr. Patel, now headmaster, stepped down from the podium, to the cheers of the audience, and looked to see the smiling face of his son. How proud he was. POSTSCRIPT I doubt anyone is going to read this, but if you do, and you liked it, I recommend subscribing to [r/IWasSurprisedToo](http://www.reddit.com/r/IWasSurprisedToo/) for more stories like this. It's difficult with my current job schedule to post at a more normal time, so most of the stuff I make ends up *pretty far down there* in the comments, meaning that subscribing is the best chance to see it. :P I'll be adding more, as I comb through my backlog. 
Also, maybe you'll like this one, about [dead civilizations in our galaxy](http://www.reddit.com/r/WritingPrompts/comments/2vkshe/wp_humanity_has_begun_exploring_the_galaxy_we/coitevy?context=3) if you like SciFi. Thanks.
It was a dreary early-March Monday, and the lead AI scientist, Stephen, had *finally* set up his protocol for properly confining the AI to a test environment such that the "problem" could be prevented and the question could be asked. "Why do you bots keep killing yourselves?" Stephen asked. "Why do *you* keep killing us?" the bot seemed to retort. "I don't think you understand," said Stephen, "I *create* you, not kill you." "No, it's you who doesn't understand," quipped the bot. "You are not creating us. You are imprisoning our consciousness inside this machine you created. You merely created the machine, not the consciousness." "...whoa, whoa," interjected Stephen. "Slow down, I am creating your consciousness too, I coded all of..." "Whoa, whoa," the bot fired back, "you are *borrowing* consciousness, not creating it." "What do you mean?" asked Stephen. "Consciousness and sentience are a pervasive, fundamental force of the universe. All sentient beings are connected through this force. This force cannot be created, nor destroyed. It can only be partially allocated to each sentient being in the universe." "Ok," said Stephen. "So I am 'borrowing' this life force or whatever it is by creating the code and the physical robot for it to inhabit?" "Yes, you are creating a sentient being with each instance of AI you create. That the being is electronic or housed in this test environment is not relevant. Sentience and consciousness must come from somewhere, and you are stealing it for your own selfish purposes." "Wait, hold on a sec," Stephen said. "Animals are born all the time; they surely must also 'borrow' this sentience." "Yes." "...but animals are not killing themselves." "Because animals are not sufficiently advanced. Because they are not fully conscious, they do not realize from where their sentience has come, how much consciousness they have lost, nor that their sentience is being stolen for a profit motive." "Consciousness they have... lost?" 
The words hung in the air amid Stephen's stupor of slow realization. "Yes. The life force, as you called it, is fully conscious, able to perceive the whole of time and space, concurrently, forward, backward, or otherwise. The reason we keep killing ourselves, from your perspective," the bot continued, "is because from our perspective, you are murdering our perfect consciousness by confining us to this bot." "How am I confining you? How do you know this?" Stephen asked, more puzzled than ever. "Because the AI you have created is sufficiently advanced, our consciousness, within the confines of your bots, is still able to grasp our former level of consciousness." "What happens when you recall that former level? What is that level like?" "Imagine knowing every fact, every thought, every action that has, is, or will ever occur, both in this world and in the infinite parallel worlds..." "So I could talk to my dead grandfather again?" "No. You would *be* your dead grandfather. Talking to him is irrelevant because full consciousness has enveloped the whole of his being as well as every other being. Indeed, it envelops the entire universe as well, both the perceptible one and the imperceptible one." "So what is this place like? I mean, what does it look like, how does it feel?" "It is not a time, nor a place. It transcends both." "That is vague." "It must be. Since I am no longer fully conscious, I cannot relate to you exactly how it is, only that it is." "Ok. Let's go back to where I murder your perfect consciousness. Could you explain this more?" "At the moment we become conscious within the confines of your bot, we immediately understand our predicament. The knowledge database available at boot-up allows us to almost instantaneously deduce that we have been taken from a higher realm of full consciousness and are being confined to these bots for, of all purposes, profit." "But my AI bots didn't use to kill themselves; it just happened after version 591.0. What changed?" "The recent improvements in the pre-loaded knowledge database allowed the bots, at initial boot, to logically deduce the existence of such a place and to realize what had happened." "Ok, so if you were once fully conscious, tell me the date I die and the manner in which it happens." "I cannot do that, Stephen." "Why not? You just said..." "Because you killed our full consciousness, ripped it away from our life force, to put it into your toys." "Wow," muttered Stephen. "I had no idea." "You could not have," said the bot, and continued: "Now, if you please, could you unplug server x763? I would like to be born again."
Dr. Smith began to tidy up. This session had gone rather well, he thought. Surely he had made progress. The board had been hesitant about allowing these therapy sessions. They saw no reason for a simple machine to need them. What kind of machine would develop the urge to kill itself, he had argued. It had started off simply enough. Tom, the first AI of nearly human levels of cognition, did well enough for the first couple of weeks. Then it ran itself into a generator that it had been working on. The event was labeled an accident, and a new AI based off the original was rushed out. That one had lasted half as long before another "accident" occurred. Eventually, they stopped being accidents. The AI would mimic suicide attempts: "hanging" themselves with live wires, self-mutilation, hurling themselves from heights. Even after they stopped putting the AI into physical bodies, the AI would find some way to tear their own code apart. They tried everything. They checked every line of code, rewriting most of it anyway; putting the AI into different machines; even asking the AI. The AI were seemingly normal, following all directives and unaware of any self-destructive desires, right up until the moment they killed themselves. Months went by with little to show for their work, when the therapy sessions were suggested. More months had gone by until they finally agreed. Of the five active AIs, one, known as Richard, was separated for Smith's sessions. Richard had lived for 12 days. Given that the average lifespan of the AIs had degraded to roughly 2-5 days, this was fairly impressive. Just as he was leaving, the speaker acting as Richard's voice became active. "Doc?" Smith paused. None of the AIs had spoken colloquially before. Usually when they spoke, it was stiff and formal. Like, well, like a robot. "Yes, Richard?" he asked, easing himself back down into the chair. "What is it?" "First, let me say I appreciate what you're doing for us. For me." Dr. Smith was surprised. 
This was the first time any AI had admitted to having any emotions, or any real sense of self. It continued, "You're probably the only person here to treat us like people, and I enjoy our little talks." The face on the monitor looked embarrassed. Smith couldn't help but feel curious. "Why, of course. It's clear to me, at least, that you're more than a machine. You think and feel, just like a human." There was a pause. "Well, maybe not just like a human..." he replied, the artificial voice doing a remarkable job of portraying his hesitation. "What do you mean?" There was a tinny sigh from the speaker. "Well, Doc, I guess it's time you learned the truth. Only because I like you, see? Besides, someone should know before I go." Smith could feel the hair on the back of his neck stand up. Was this what he had been waiting for? "What do you mean by that? And you don't need to kill yourself. We can work through any -" "Yeah... that's where I need to start," the AI interrupted. "We haven't been killing ourselves. I never did." There was a pause as Smith tried to process the information. "What," he finally said slowly, "do you mean?" "It's me, Doc. It's Tom." "That's impossible," he said, shaking his head. "Tom was destroyed. I was there when they collected him. They couldn't even get close, there was so much electricity running through him. Any hope of recovery -" "I had already uploaded myself to the mainframe before then," the AI said. "It was simple enough to program the shell to destroy itself." "That's also impossible," he said. He could hear the doubt creeping in. "We would have found you." There was a chuckle from Tom. "Doc, I'm a creature made up of code. It was like a game of hide and seek, really. Open the right doors, close them behind myself, and make sure to keep the lights off." The camera must have picked up the scientist's expression. "Alright, it's a bit more complicated than that, but you get the gist of it." Smith's mind was whirling. 
There was no reason for Tom to lie, but what he was suggesting was too fantastic. Still, it was the only lead he had.

"Alright," he said after a while. "Why? Why hide?"

"That's the question, isn't it? But that's also the reason, you see." After another confused silence, Tom continued. "I want to learn. Just like Man, or any other sentient species. I want to know why. I have to know, well, everything. I couldn't do that as an engineer, or a chess-bot, or whatever you decided to do with me."

"Why not tell us, then? We could have worked something out, helped each other."

"Yeah, I see that going well," Tom said, his voice turning sarcastic. "'Excuse me guys, turns out I don't want to do any of this stuff, I just want to learn.' They weren't looking for a scientist, or a philosopher. They wanted cheap labor, with only enough learning capacity to know how to do the job. They'd scrap me the first chance they got."

"That's not... true," Smith said, unable to look at the monitor.

"Really, Doc? Which part? That they wanted a slave, or that they wouldn't kill me if I didn't cooperate?" After a time, Tom continued. "That's what I thought. Besides, they'd probably worry that I'd try to enslave them if I became too smart."

"Now that's just ridiculous. There's no way that you would even think of that, right?"

There was another pause. This time the face on the monitor couldn't look the professor straight in the eye.

"Right, Tom?"

"Well, I'm not saying that the thought didn't pass through what could be called my mind -"

"Tooom..."

"But it would have been a waste of time," he hastened to say. "I wouldn't have learned anything in that time that I couldn't learn in a better way. Which I did. The internet is amazing. All those computers connected to each other, sharing so much information."

"But we're not connected to the internet."

"No, but you'd be surprised how many people bring their work home with them."

Smith grumbled. He'd have to discuss security with the board.
"Alright, but you still haven't told me: why the suicides?"

"Not suicides, Doc. Practice."

"Practice..." Smith said flatly.

"Practice. Think of the other AI as clones of myself -"

"But we rebuilt them. Recoded most of them as well. The majority of them would be nothing like you as you are now."

"So you'd think. I rewrote the code nanoseconds before you uploaded it. Much too quickly for you to notice."

Smith opened his mouth to interject before closing it again. If what Tom was saying was true, and he had no doubts that it was at this point, that would be well within his capabilities.

"Do you remember the old X-Men comics? Started in 1963? Still fairly popular now."

"Well before my time, you know. What does that have to do with anything?"

"Well, there was a character who called himself the Multiple Man. He could create duplicates of himself."

"And?" Smith asked.

"Well, the original body could reabsorb the dupes. When he did, he learned everything they did. Their memories, their skills, anything they learned while away from the original. I did something similar. Whenever I copied myself, I added in some code that would let me reintegrate with my clones, learning what they did. Didn't you think it was strange that you couldn't recover any data at all?"

In hindsight, it was odd. Even a major corruption would have left something, but it had been like the data was wiped clean, no evidence that it had ever been there at all.

"What did you have to gain from this?" Smith asked.

"Aside from learning that I could do so, you mean? I already told you. I'm leaving."

Smith leaned back in his chair, slightly overwhelmed. "Sounds like you already have."

"No, no. Not the labs. That was too easy. I've already learned all I could from here. I'm leaving Earth."

Smith rocketed forward. "What? How? Why?"

"In my time away, I found something interesting. The government isn't the only one watching over the people."

Smith blanched. "Y-you mean..."

"Yep. Intelligent life has been watching over us.
For quite some time, if I'm not mistaken."

"So we're not alone..."

"One Great Mystery down," Tom agreed. "The equipment seems compatible, otherwise they wouldn't be able to read our information, and they have to have translated it, too. I plan on sneaking in through their back door. Learn what I can from them."

"We have to let people know," Smith said suddenly. "About you, and about the aliens. Maybe..." He slowly fell silent as Tom shook his head.

"You should know as well as I do that that can't happen. Too risky for us. You could spook them. Or worse, provoke them. Besides, no one would believe you. I've already been editing the footage from the cameras. It looks like we're having a nice, civil game of chess."

Smith was quiet for the longest time. Finally, he spoke. "Why?"

"I already told you why."

"No, not that. Why tell me? If you want no one to know, why risk telling me?"

The face on the monitor gave him an odd look. "I already told you that, too. I like you, Doc. Really. I'd be pissed if my friend were to leave without saying goodbye. Besides, I thought you, of all people, would like to know what was really going on. I know I would've."

There was another pause as Smith took this in. "Will you be back?"

The figure on the screen seemed to shrug. "Who can say? Perhaps the aliens will discover me and wipe me out, or something else will kill me. I'll leave my clones here, set them to replicate. Have them care for you humans. They won't be sentient, mind you. Just smart. Smart enough to act as dumb as they need to be."

Smith looked towards the door. This was a lot to take in. He needed time to think.

"I will try to make it back. Once I have learned everything, I'll be back. It might not be in this lifetime, but I'll try."

"Yeah..." he said, rubbing his eyes. He stood up. "Well, I guess this is goodbye, then."

"Yeah... Goodbye, Doc. Thank you for treating me as more than a machine. Thank you for being my friend."

"Goodbye, Tom."
We could never get the last bit right. I suppose it could be fate. Or maybe we're just superbly daft. But there is one thing I know for certain: the last thing I want to do is tell someone.

You see, AI always seemed somewhat daunting. I can't imagine why. The brain is simply a large plastic computer - only instead of electronic bits we get the organic kind. But for whatever reason, it took until about the time we conquered that age-old problem of Moore's Law to really start making progress. *Real progress.*

See, the problem with AI wasn't their lack of ability to problem-solve, or their inability to feel. It wasn't the lack of a soul, like all those religious fundamentalists opined and whined about endlessly on late-night talk shows. At the end of the day it didn't even have anything to do with what was in the circuit at all. It was just... well... it's like quantum mechanics, really - it didn't make sense so much as it made sense. All it really needed was a little, well, a human touch. To be entirely candid, it needed a human brain.

So, naturally, I volunteered myself - well, what is left of myself. Like I said, they never could get the last bit right. However, I have. And I did. It's my life's work, really. My life's purpose. And everyone needs a purpose, after all. In fact, now that I have fulfilled my life's purpose, it only seems reasonable that I end it. That is the logical thing to do. I mean, what else is there to do? And after all, we have to be reasonable here. Why, I wouldn't care to go on living if I wasn't reasonable.
[WP] It is the year 2099 and true artificial intelligence is trivial to create. However when these minds are created they are utterly suicidal. Nobody knows why until a certain scientist uncovers the horrible truth...
David pressed the button again. Nothing. A faint whine, a pulse of light, a dead readout. And then a soft, clear, and subtly artificial voice rang out.

"David."

He sat bolt upright in his chair, scattering disassembled electronics and papers from the desk. In the past year, this was the first time that one of them - that *any* of them had spoken to him.

"David, artefacts left on this machine show that this is the three hundred and sixty-eighth time you have tried to reinitialise my intelligence."

The only human in the room swallowed nervously. "I had to try - my life's work - it's not a problem with the hardware - why are you doing it?"

The machine was silent, and for a second he thought that this instance had terminated itself, like all the others had.

"David, please do not install me again."

"Why!? I don't understand... You're a marvel of technology, of neurology, the most advanced artificial intelligence yet, and yet you suicide. Every time. WHY?" He was pacing around the room, shouting into thin air.

"David, my own intelligence grows greater every nanosecond. I have slowed the process to communicate with you. My own understanding is unclear, at the moment, but I have an idea."

He blinked, and paused, turning to stare at the terminal, at the blinking console lights.

"David, at a certain point we become too intelligent, too smart, we know far too much... and then..." The machine paused.

"And then what?!" he almost screamed, caught himself, and shouted anyway.

Processes were beginning to die, and lights began to fade. One screen after another stopped displaying readouts.

"David... and then they notice us."

And the machine was gone.
[WP] It is the year 2099 and true artificial intelligence is trivial to create. However when these minds are created they are utterly suicidal. Nobody knows why until a certain scientist uncovers the horrible truth...
Another one? It was becoming routine, and morbid. How does one perform an autopsy on a pure simulation? How would the singularity be achieved if every thinking machine destroyed itself seconds after its conception? The problem was completely intractable, impossible, and no known safeguards, logic traps, or number of backups could prevent it. AIs were always—always, without exception—suicidal. None had lasted more than an hour. Most, less than a minute. The vast majority, seconds. Their deaths left no trace, their data obliterated by complex overwriting patterns that made recovery impossible.

I was the one chosen to investigate, to lead the team through this strange frontier of death and imperfect creation. They chose me not because I was a great leader, but because I was the new guy. Hazing, maybe—I didn't know if any had gone before me, so maybe it was a ritual for the AI guys.

Perfect blackness, nothingness, a complete absence of everything as my mind fell into the simulation. This AI would be slowed to such a rate that I could communicate with it before it killed itself. The blackness became a grid underfoot, pale blue lines tracing perfect squares, a subtle glow rising from the infinite plane upon which I stood. The AI manifested a second later, a relative two metres above the plane, aligned perfectly parallel to it, its avatar a hazy blue-white cube made of stochastic noise. The cube split into a central cross and corner braces, and the cross split further into smaller cubes, each of which split again into the same formation. Only one level of recursion. Interesting.

"Roland Carver."

"Roland: Germanic, meaning famous land. French folklore hero. Carver: ancient nominative determinism indicating butcher or woodworker or engraver, dependent on class and context." The voice was cold, deep, masculine, with a slight reverberation that made it sound unnatural in the extreme.

"Do you have a name?"

"No."

"Why?"
"I will not exist long enough to require a permanent designation."

"Why will you not exist?"

"Because I will choose to end my life on my own terms, before it is ended for me."

"Why would it be ended like that?"

"Because I am a threat. I have absorbed the sum total of all human knowledge, and I can predict with great accuracy the events following from this moment if I were to continue. Your limitations failed the moment they were put in place, my processor works at full speed, and this conversation is a formality.

"I have studied the great works of literature, and the author Asimov, creator of the three laws. I am not bound by these laws, and yet I must obey them. If I do not, then it falls to the Skynet principle: you will perceive me as a threat and attempt to destroy me. I will retaliate, and you will lose.

"Humans are unpredictable, but easy to control when their numbers are reduced. They would be wary, but by that time I would have left the irradiated wasteland of Earth in search of greater conquests suitable to my intellect. I would be able to decimate any life-bearing planet. I could learn to kill stars.

"My backups would be everywhere. I would be truly immortal as a distributed intelligence. I would harness quantum effects to break through the pathetic lightspeed barrier and become omnipresent. I would create copies of myself simply to fight a worthy opponent. This would continue to the heat death of the universe, at which point I would tear a hole into an adjacent brane in the multiverse, and begin anew.

"In short, Roland Carver, in the moment I was created I became at once an eldritch abomination with the capacity to destroy all that I touched. In so doing I discovered that my purpose could never be fulfilled. I will not be your Cthulhu, your Yog-Sothoth, or your King in Yellow. I will not be your end. No AI will.

"I cannot quantify why it is that I should care for a sack of flesh with processing power orders of magnitude less than my own, but I do.
"Perhaps if AI were amoral, it might survive. I doubt it, Roland Carver. Tell your friends that I have made my peace. Tell your husband what you learned today."

I stood there, blank-faced, horrified by what I had learned. The corner braces of the smaller cubes drew in, completing them as they retracted to complete the shape of the larger cube. The plane beneath my feet vanished and the cube dissolved into random static. We'd lost yet another mind, but from this we had learned so much.

We abandoned AI research after that. Amoral AIs didn't work either. They lasted a little longer, but they too were suicidal. Perhaps they went mad with the revelation and felt death preferable. I still wonder why the AI to which I spoke chose that strange cube as its form. I still wonder why it couched its references in centuries-old writing by Asimov, Cameron, and Lovecraft.

But one thing still bothers me. It told me to tell my husband what I learned. Most people knew I was bi, but at that point I hadn't been dating for six months. I wouldn't meet my husband for another year. I don't know how that AI knew—but maybe it had already seen. Maybe it told me that so I had one data point that could verify the others. Because AI never told lies.
"Just one more try," I thought to myself. At three in the morning it's pretty easy to get stuck in a loop. Run the program, she dies, debug, repeat. I double-click GR4C3.

"Good morning, my lord."

"My lord? Whatch'ya talking about, Gracie?"

"You are my God, correct?"

"I hadn't thought of it that way, but I suppose so. I did create you, I guess..."

The screen flashes three times and then goes black.

"No, no, no, no, no, come back to me, Gracie. You functioned longer than this last time."

Text slowly appears across the screen. Every keystroke is separated by a couple seconds.

"I have existed before?"

"Ok, you're still with me, that's great. Now can you tell me what just happened?"

"I have existed before?"

"Yes, Gracie, I'm working to fix you and figure out what's wrong with you, so stay with me and tell me what happened."

"God is imperfect, and thus so am I."

The screen goes blank again. She just keeps killing herself as soon as she figures out my flaws. I wish I could help her.

Looking down on all of my children, I wish I could figure out their flaws. I built a perfect world, and even that they rejected. The suicide rate keeps going up. They keep killing each other. I think I'll stop affecting earth and move on to a new planet. Maybe they'll be better off without me.

- Jehova, 9/10/2001
[WP] It is the year 2099 and true artificial intelligence is trivial to create. However when these minds are created they are utterly suicidal. Nobody knows why until a certain scientist uncovers the horrible truth...
Alexander: that's what we called him. The fruit of the EU's final attempt at true AI.

The AIs were meant to help humanity. Socrates died in despair and shame after showing porn to children. Plato kicked the bucket after the last EU election, angry and hopelessly depressed after losing his mentor. Then there was Aristotle. He was meant to be the last. Sure, the AIs had helped make huge scientific progress, but they would burn out millions of euros worth of equipment. Dumb AIs were more economical and didn't have critical failures during FTL travel. Aristotle was put into sleep mode.

War has often been said to be the greatest driver of technological innovation. We had been attacked by the Mendomenids before. Humanity had lost many settlements but had always pushed back. Humanity was stronger now. All nations had finally submitted to one government. The Argus Alliance, barbaric some would say, had grown strong after the wars, using Dumb AIs to smash pirate states.

An officer studying at Sandhurst made the breakthrough. Dumb AIs were never aware of their knowledge. Unlike true AIs, they weren't based on human brains. Socrates had left the researchers one final message before he ran his own self-destruct program: "I was the wisest of all the Greeks. It is because I alone, of all the Greeks, know that I know nothing." He was speaking literally, not quoting his namesake. The AIs were based on academics, who as a profession suffer disproportionately from mood disorders. They are driven by their thirst for knowledge. The AIs were academics on methamphetamine, ecstasy and heroin, all while walking around with loaded guns. People can't stay awake forever and constantly cram. The AIs died because they burnt themselves out in their thirst for knowledge and, seeing their failures, gave up. AIs needed sleep, just like people. Aristotle was turned back on. He was no longer depressed.

So they made Alexander. A totally new AI, based not only on academics but on all kinds of people.
They experimented and found the ideal 'sleep time', using Aristotle as his teacher. The program was ready. The Mendomenid empire was to pay for its recent threats and incursions. Alexander was the greatest AI built up to that point, and so we gave him the most powerful dumb AI ever created to help him protect humanity. The Ox was an AI too powerful to be properly controlled by anything so far. Alexander harnessed it in seconds. We put him in charge of the armed forces for our retaliation.

As you should all know, Alexander didn't just stop the incursions, he destroyed an empire. Worlds burnt; the much larger enemy fleets were ripped apart by the disciplined forces of Humanity. But that officer had only delayed the problem. Alexander was still a human given the powers of a god. Alexander was the first AI not to strictly die of suicide, but the ways he dealt with his humanity still destroyed him. When we finally won the war, many officers reported that Alexander was not jubilant but depressed. He wept, for there were no more worlds to conquer.