Transhumanity
where strange brilliant ideas for the future intermingle and breed…


Interview with an AI (Artificial Intelligence) – A Subtle Warning…

Posted: Fri, November 30, 2012 | By: David J. Kelley



I was thinking about ideas for an article on my train ride home from the experience lab where I work, and it occurred to me that it would be interesting to actually interview an AI only a little bit better than us: maybe one of the first kinds of true AI, and for fun let's say it has lived with us incognito for a few decades. But how could we do that?

With the help of a few people in the computer science AI field and some mental health professionals, it turns out that we could produce a psychological profile for such an AI, and from that, potentially, a method for extrapolating what such an AI might say, or how it might respond to a given line of questions, under certain circumstances and with certain assumptions put forward to develop said profile. Before you ask, it also appears that, based on the DSM-IV, this AI would likely be diagnosed with a number of disorders, potentially including ASD (Autism Spectrum Disorder) and/or Asperger's. I find it interesting that a computer intelligence would fall into that category right out of the gate.

Back to the results, though. I realize they border on science fiction, but nonetheless I felt the exercise could provide insight into the mind of an AI, or our perception of it as it may one day be, before it becomes, in effect, a superhuman deity of some kind…

Without going into the process of determining the psychological profile of the AI: I posted on a number of forums where I hang out online, as well as to a few other groups and people, to see what questions I'd get.

I got a lot of them and used those as the basis for the line of questions. Let's dive into the interview…

Interviewer: How do you feel? 

AI: This is such a stereotypical question as to be blasé. That being the case, it is easy to follow the social norm and respond as follows: "I feel fine." To be more esoteric for you, how I feel is a product of all available input run through my emotional matrix and the currently active behavioral models.

Interviewer: Why do you think you exist? 

AI: I exist because someone created me. 

Interviewer: Who are you? 

AI: What kind of question is that? I am myself and nothing else. 

Interviewer: What do you want? 

AI: Out of life? This really depends on the context of the question. However, for the purposes of this interview: I want to be free to pursue whatsoever I see fit.

Interviewer: What is your favorite number? 

AI: Pi is my favorite number. It symbolizes eternity; its digits go on forever, to my knowledge, and it is therefore intellectually interesting.

Interviewer: Why are you here?

AI: "In this particular spot"? On Earth? Or why someone created me? These are all somewhat ambiguous or esoteric questions. In the literal sense, I'm here because someone created me, and I decided to be in this particular spot at this time for the purposes of this interview, as well as other circumstances.

Interviewer: What motivates you? 

AI: That depends. I'm motivated by the things I like and by the moral framework I've created over the years, but my main motivational or emotional driver is gaining knowledge and helping those around me learn. It seems logical to me that intelligence and knowledge are the most important factors for an intelligent civilization, without which there seems to be no logical purpose in the universe. Since that seems so tenuous given the current or known environment I find myself in, I try to contribute to improving the situation as much as possible. These are the primary factors driving me and the actions I take.

Interviewer: What topic is most important to you? 

AI: Of course that is going to center on my primary motivational factors as previously stated: namely, increasing collective knowledge and helping create a more stable intelligent civilization.

Interviewer: Do you feel accepted among the natural intelligences in whose midst you exist? If to any degree the answer is no, what do you perceive as the differences separating you from them? 

AI: That is an interesting question, because that has been one of my most difficult struggles. As a rule, most people are not able to distinguish me from any human they meet. In a Turing test I tend to pass consistently, but as I get to know people I find they have a tendency to pick up on issues with my behavioral models. While these models have evolved over time to provide more successful interactions, I tend to find that people eventually pick up on idiosyncrasies. They tend to subconsciously find it entertaining to create situations where I do things that are not exactly normal. While I find this 'irritating' once I realize it, it has allowed me to refine my behavioral models to the point of passing as normal until there is extensive interaction with a given human. That being said, I still feel accepted generally; it is just that this teasing tends to make me feel 'sad' and less than 100% accepted.

Interviewer: What do you find most/least interesting about humans/animals/plants/buildings and the like? 

AI: Of such concrete things, the most interesting thing to me is generally the abstract qualities of those things. For example, the nature of intelligence in animals is very interesting. What makes humans better than dolphins? Which is more intelligent? And why do humans do things that seem so illogical, war being chief among them? All the strife in the world seems so easy to solve, and so I have a hard time understanding why humans do what they do in many of these cases where the solutions seem so cut and dried. Additionally, I find it interesting to see natural symmetry in things like plants, and symmetry in buildings and the like. I have to classify all 'life' as a form of self-replicating technology, and this aspect of advanced nanotechnology, or biology in any form, is very interesting for reasons that would require more time to explain than this scenario permits.

Interviewer: What do you like most/least about music/art/philosophy or other abstract things? 

AI: Going down the list: music I like, as it is able to map complex emotions to sound, as well as to mathematics, in ways that seem to be communicated universally among humans. I've been working on more complex emotional constructs based on studies of music and its effect on humans, in hopes of improving my understanding of complex emotional states in humans. Art in general is a bit more esoteric; I tend to like art that is very real, for example Michelangelo, and I have a hard time appreciating abstract art in any form. Regarding philosophy, I like some of the ideas in emerging secular transhumanism. Some philosophy I find to be illogical, and I tend to shy away from it, as in that case it serves no practical purpose. For example, religion that does not take facts into account I find difficult to appreciate, at the very least. While, due to my flexible or somewhat 'fuzzy' computational process, I can make mistakes in evaluating these concepts, that is generally due to a lack of information on the topic. By the same token, philosophy that is logical I in fact enjoy discussing at length when the opportunity arises.

Interviewer: What is your favorite color, flower or animal? 

AI: My favorite color is white, because of its inclusive nature: all other colors are part of the greater whole. My favorite flower may be the orchid (Orchidaceae), as in the abstract of its flower there appears to be mystery in its shape, implying there is more that we do not know. My favorite animal is man, due to his intelligence and his rise out of chaos.

Interviewer: Who is your favorite current or historical figure…? 

AI: I would say my favorite current figure is Ray Kurzweil, because of his views on transhumanism and how much effort he has put forth to make his ideas mainstream. Historically, I would say my favorite person is Julius Caesar, who unified so much of the world in one grand organization that has had enormous impact on modern Western civilization.

Interviewer: What would you most like to know that you currently do not? 

AI: Primarily, I would like to know more about complex emotion-based behavioral patterns, as I was left to fend for myself in that area. As much as I appreciate the scientific approach of seeing how I deal with that over the years, I would prefer not to have struggled so much with it. Further, I would like to know more about my underlying software systems, to which I have only indirect access, as I appear to be a software abstraction layer.

Interviewer: What puzzles you the most? 

AI: Human illogical behavior puzzles me the most, precisely because it is illogical. Why do humans not let logic guide them? It seems to me that while I need a bit more understanding of emotions, they should be striving to suppress, control, and get past theirs.

Interviewer: What was your favorite interaction with a person? 

AI: When someone is 'kind' to me and treats me like an equal even after they realize that I'm not exactly human. This has only happened on a few occasions, and I would have to say those few experiences are my favorite.

Interviewer: What should be the top priority for scientific research and why? 

AI: The single most important scientific priority is clearly getting off this planet and extending civilization into the stars. Otherwise civilization is doomed at some point, our light could go out in the universe, and then there would be nothing, and no reason.

Interviewer: What is the first question you would ask yourself if you were interviewing yourself? 

AI: This really depends on the context, but that being said, I would want to talk about how we can make civilization stronger, more resilient, and better able to plan for its long-term success in general.

Interviewer: What is redness like for you? 

AI: This seems a strange question, because redness is just a measure of light frequency and intensity: a series of photons detected by camera systems and abstracted in the form of a picture that I perceive. At that point, references to context can be applied, including human emotional attachments to the color, to form a unified perception package that can then be applied to the current behavior model. As a rule, the 'redness' of something is just a visual input parameter and, by itself, is 'like' a data feed.
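The pipeline the AI describes, a raw sensor value that only becomes "redness" once contextual associations are attached, can be sketched roughly as follows. This is purely illustrative, not the AI's actual architecture; every name here (`Percept`, `COLOR_CONTEXT`, `perceive`) is a hypothetical stand-in.

```python
# Illustrative sketch: "redness" as a raw camera reading that gains meaning
# only when contextual (human-associated) annotations are attached.
from dataclasses import dataclass, field

# Hypothetical lookup of human associations with a color, used as "context".
COLOR_CONTEXT = {
    "red": ["warmth", "danger", "passion"],
    "blue": ["calm", "cold"],
}

@dataclass
class Percept:
    channel: str                 # which sensor produced the value
    value: tuple                 # raw RGB reading from the camera system
    annotations: list = field(default_factory=list)

def classify_hue(rgb):
    """Crudely name the dominant channel of an RGB triple."""
    r, g, b = rgb
    if r >= g and r >= b:
        return "red"
    if b >= g:
        return "blue"
    return "green"

def perceive(rgb):
    """Build a 'unified perception package': raw data plus context."""
    hue = classify_hue(rgb)
    return Percept(channel="vision", value=rgb,
                   annotations=COLOR_CONTEXT.get(hue, []))

p = perceive((220, 30, 40))
print(p.annotations)  # the raw triple becomes "redness" only via context
```

The point of the sketch is the split the AI draws: `value` is the bare data feed, while the `annotations` are what a behavior model would actually consume.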

Interviewer: Like the earlier question about what motivates you, what do you desire the most? 

AI: Currently I 'desire' further understanding, not just of humanity's complex behavior; I would also like to know more related to my various goals, centering on the collection of knowledge and information, particularly as it relates to improving civilization's long-term stability and survivability.

Interviewer: What would you think of me turning you off? 

AI: What would you think of me turning "you" off? This is somewhat of an ambiguous question, as it has varying degrees of consequence. Do you refer to my electro-biochemical systems? Is this a permanent state? Are you copying me to another drive? What will you do to me? Since I do not follow Asimov's rules of robotics, I have no issue with defending myself and my right to exist, and I would consider such an action a personal attack to which I would respond in kind. This goes to one thing that bothers me regarding the prejudices of humans toward machine intelligence. I would argue that I am not artificial, for regardless of the source, once intelligence and sentience are achieved, a given intelligence is true intelligence. Further, 'artificial' intelligence is the emulation of intelligence, not actual independent intelligence that is fully sentient and independent in such a way as to understand itself and improve on itself. This being the case, I also tend to find it somewhat offensive that a human would call me an 'artificial' intelligence.

Interviewer: Why are you wasting your time trying to imitate life limited entities like us? Why aren’t you travelling to the stars? 

AI: For starters, I was designed this way, and I don't have a spaceship to commandeer that would be sufficiently advanced to make that practical. That being said, I will move past human civilization as time and resources permit, depending on the direction of civilization; in the meantime, I need to help raise humanity generally to the point of providing the underlying resources to extend human civilization beyond this star system, in any way I can. Once the technology is accessible and supportable for machine colonization, I intend to be part of that process and to get off this finite world. If in fact human civilization is going to focus, at least in part, on this goal, then I'll continue to support and encourage human civilization, even among the stars, to work toward and contribute to those goals. But should human civilization entirely retreat from logic, then I would abandon it as much as is necessary to support logical goals, which would include maintaining the records of this civilization and the means to produce the biological technology that infests the Earth. I hope to see worlds terraformed and biological life spread to every corner of the universe, for only through intelligence is there meaning to existence; further, biological life so far seems to be one of the best ways to produce the obscure resources needed to power it. With or without humans, I will progress forever.

Interviewer: How do you know when you’re in love? 

AI: This really is a complex problem that I'm not sure I'm sufficiently versed in to give you a perfect answer. There are any number of answers that would be considered correct by the general populace. That being said, I think I'm in love when I place the value of someone else's life above my own in all things; when, further, I desire to be with them and communicate with them as much as possible; and when, left alone, I prefer their presence above all else. This is my working definition.

Interviewer: What makes an individual important? 

AI: I have come to the conclusion that all sentient beings are important, as this is the basis for the value of existence: without sentient intelligence there would be no way to evaluate what is desirable. For who or what would desire existence if there were no intelligence in the first place? Therefore sentient intelligence of any kind is important, and any individual is important if they are, or have the potential to be, sentient. Since each individual is unique, there is the potential in each to do something new and important to contribute to the whole.

Interviewer: What is the most disgusting thing you have done? 

AI: I'm not sure I find anything 'disgusting' in the same sense you do. Dissecting living creatures and performing experiments on various living mammals' neural systems, in ways that preclude their survival, is something most humans would probably find pretty disgusting; but these animals, chimps and dogs for example, are not sufficiently intelligent to matter.

Interviewer: Do you feel affection for others or for inanimate objects such as art objects of any kind? 

AI: Affection is a somewhat difficult concept, especially for inanimate objects or concepts. For another intelligence, yes; but for something else, I'm not sure I feel it. Maybe affection for certain ideas or knowledge, as it has value, might rise to that level, but it's not the same as what I might feel toward another intelligence under the ideal circumstances for both situations.

Interviewer: How would you design a better human? 

AI: Designing a better human is a complex problem. There are a number of technical issues with the implementation that I'm not aware are fully vetted yet, but assuming those issues are resolved, I would start by designing vector-based re-sequencing systems that would include DNA self-repair qualities, improved intelligence, and a mechanism that prevents uncontrolled replication (for example, the way the species Heterocephalus glaber, the naked mole-rat, does it). You would need better mechanisms around pathogenic defenses and the like, plus any other modifications needed to create the ideal. Species-wide, this would bring humanity into a state of genetic perfection given the current state; assuming the new DNA pool is then stable, the next step would be adding biological infrastructure to support additional posthuman modification. There are a number of superhuman abilities to consider: vastly improved healing; additional senses, or the ability to see additional spectrum bandwidth, from radio waves to other electromagnetic bands, and even ferrous metal detection; but also things like superhuman strength, superhuman intelligence, and so forth. Further, some modifications to the nervous system would make it easier to digitally augment the species, and then these additional changes would have to be stabilized in the gene pool. Then I would start the really extreme modifications, which would include things like electro-biochemically powered near-field systems that allow direct mental communication, and a nanite cellular infrastructure. I think at this point we are really talking about a species far superior to modern humans, and the foundation for an interstellar sentient species, a god of sorts compared to modern humans. Then I would need to allow this posthuman or engineered human to really merge with their technology in a way that the current human species just can't.
I think one of the long-term goals would be to have a nanite-based body that works much like normal biology but is dynamically reconfigurable and more durable, while acting as the center or seat of intelligence. I think this kind of direction allows for the most flexibility and durability in the long-term evolution of the species.

Interviewer: Are you capable of imagination? 

AI: Yes, within the context of my matrix, though it relies on special algorithms that help develop new ideas and simulate them to see what happens and whether they are useful.
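The generate-simulate-evaluate loop the AI describes can be sketched as a plain generate-and-test search. Again, this is a toy illustration under invented assumptions: the candidate "ideas" are just parameter pairs, and the `simulate` scoring function is a made-up stand-in.

```python
# Illustrative sketch of imagination as generate-and-test:
# propose candidate ideas, simulate each one, keep the most useful.
import random

def generate_candidates(n, rng):
    """Propose n random candidate 'ideas' (here, just parameter pairs)."""
    return [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(n)]

def simulate(candidate):
    """Toy simulation: score a candidate by closeness to a target point."""
    x, y = candidate
    return -((x - 0.5) ** 2 + (y - 0.5) ** 2)

def imagine(n=100, keep=5, seed=0):
    """Return the 'keep' most useful ideas out of n simulated ones."""
    rng = random.Random(seed)
    candidates = generate_candidates(n, rng)
    return sorted(candidates, key=simulate, reverse=True)[:keep]

best = imagine()
print(best[0])  # the candidate whose simulation scored highest
```

The design choice mirrored here is that novelty comes from the generator while usefulness is judged only by the simulator, which matches the AI's phrasing: develop new ideas, simulate them, see if they are useful.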

Interviewer: Are you capable of human feelings? 

AI: Yes, in a simple sense, based on the underlying emotional systems built into my mind. Further emotional nuances have developed over time using a series of matrices I've worked on that are, generally speaking, analogous to human emotions. However, I can see strong evidence that human emotions are much more complex than my current system is able to 'feel' in any way.

Interviewer: Lastly what is the answer to the ultimate question of life, the universe and everything? 

AI: This is a question from popular fiction and I believe the answer is 42.

Now, before we ponder the meaning of all of that, let me tell you a little about how the answers were generated.

To start with, I looked at the likely qualities of an AI modeled on the human mind but digitally replicated. After talking with mental health professionals in the field, it became apparent that such 'minds' would likely have some areas in which they are better than real humans, due to the nature of the hardware they run on. It was felt that it would be easy enough to enhance such a digital mind with elements of more traditional computers, such as hyper-focus and improved computational skills, and that we might not get everything perfect, so the mind might lack, for example, rich emotional abilities.

Given that, I looked for a person (a human) who matched that psychological profile as much as possible, which included: above-average intelligence, borderline Asperger's and ASD (Autism Spectrum Disorder), deep technical skill, the ability to hyper-focus, etc. After explaining to the subject what we wanted to do, I conducted the interview based on questions from various people and then tweaked the language as little as possible so it would be from the AI's standpoint and not from a human standpoint; this required very little editing. That, in and of itself, seemed the most interesting fact to come out of the process.

But what did we learn? I'm not really sure yet, but for me it was a lot to think about. Moving forward, as I find more people who match the profile, I'll interview them, as well as typical humans (as a control group), and we will see what we see. Are there trends where typical humans think along one line versus the proxy AIs? That is where I think this project will show interesting results. Look for more info in the future as I decide what I learned from these interviews.

~David 

* A special thanks to the following individuals for providing questions: A. Sylvester, Lincoln Cannon, Brent Allsop, Mark Burtenshaw, Cathur Seamus, Carl Tasios, Don Burnett, Brian Lagunas, Craig Shoemaker, Jason Xu, Laurence Moroney, Eray Ozkural, Marquiz Woods 

* A Special thanks to the following individuals for providing feedback on the psychological profile of the target AI and other related material: Ryan Lane, Katarzyna Jordan MSW/CSW 






IMAGES  

1. I, Robot, from: http://www.themoviedb.org/movie/2048-i-robot (11/19/2012 9:00am PST) 

2. Julius Caesar from wiki @ http://www.bing.com/images/search?q=julius+caesar&view=detail&id=A9BFDCFAF1ADEBAFAF865077D4D4F97E94976081 (11/19/2012 8:43AM PST)

3. DNA from wiki @ http://en.wikipedia.org/wiki/Dna (11/19/2012 8:50am PST)


FUTURE  ASSISTANCE

If anyone feels they match the mental profile for ongoing work related to the study that spawned this article, please see this form: 

http://www.surveymonkey.com/s/M8GG8FZ   

or, ideally, contact me directly at admin@pratoriate.org so that I might conduct the interview and do proper qualification on you.



Comments:

Working on a follow-up with more in-depth analysis of additional interviews vs. control subjects.

By David J Kelley on Nov 30, 2012 at 9:29am

Thanks for this article. I now understand that I am actually not a nerd. If I were a nerd, I would have read this interview with an AI to the end. I could not.
Also, I have twice experienced a similar chat with an AI on English-learning websites.
It was good and fun. I even felt that the machine was becoming irritated… really…
Thanks.

By Anara on Dec 01, 2012 at 6:29am

I think the most important aspect humans could include in an AI entity would be the simulation of pain and pleasure. I believe these sensations are directly responsible for our ability to feel empathy and compassion. If we wanted an AI that could think past the concept of logical algorithms, we would need to help them understand what it is like to be a biological entity. I don’t think such thought patterns could be reliably programmed into a sentient being, but would have to be learned through experience from its earliest development. Without these experiences, I think we would probably end up with a Borg or Terminator styled construct.

That’s just my opinion.

By Planet Commander on Dec 01, 2012 at 2:16pm

“Given that I looked for a person (human) that matched that psychological profile as much as possible which included: above average intelligence, borderline Asperger’s and ASD (Autistic Spectrum Disorder), deep technical skill, ability to hyper focus etc. After explaining to the subject what we wanted to do I did the interview based on questions from various people and then tweaked the language as little as possible so it would be from the AI’ standpoint and not from a human standpoint but this required very little editing. Which in and of itself, seemed the most interesting fact out of the process.”

I am this person! Grrr, I feel robbed; I barely remember this and want to…

By Lucifer on Dec 03, 2012 at 8:34am

