Is "Artificial Life" a myth?

FulltimeDefendent
Scientist
Posts: 455
Joined: 2007-10-02
Is "Artificial Life" a myth?

Perhaps "synthetic life" is a more appropriate term. I wonder about this because I've been considering the question of artificial intelligence recently, especially after reading up on neuropsychology and the sort of assumptions that are hard-wired into us, like functional categorization (not sure if that's the popular term). What I mean by that is that we need our emotions and our gut instincts to compensate for what "intellect" alone wouldn't be capable of. As an author I read recently put it, Star Trek's Mr. Data is a logical impossibility. Without emotions, whether the product of organic or cybernetic processes, Data would have no motivation to put on his uniform, to go to his duty station, or to function in humanoid society at all. Think about it: why do we respond to people we don't know when they ask us a question? Most of us respond, even if we have to say we don't know the answer. Why don't we just signal our lack of an answer by not responding? Sure, there are mentally unbalanced people who are the exception to this, and yes, if you make a conscious effort you are perfectly capable of ignoring another person's request, but that requires a conscious effort, unless you simply can't hear the person and aren't intentionally ignoring them.

 

If we could create artificial intelligence that could interact on a personable level with human beings and could be demonstrated to be both sapient and sentient (having higher level reasoning skills as well as self awareness) would it by design have to possess the ability to perceive (at least emotional) pain? It seems like a sufficiently advanced AI would have to be able to at least mimic the utilitarian drive of seemingly all animal life to seek pleasure and avoid pain, and that would require the equivalent of emotions.

 

In this thought experiment, then, is there a difference between the "artificial intelligence" I am describing and what we might call "actual intelligence?" Can "actual intelligence" be synthetic in nature? We've already stitched together a synthetic bacterium in a laboratory (covered elsewhere on this site, Mycoplasma laboratorium) based on a known genome. What if we could do the same with a gamete, literally create a synthetic gamete with no biomechanical barriers to fertilization?

 

Artificial or synthetic life, beyond just intelligence, is hard to define, considering that we define a living thing as something more than simply a repeating pattern like fire or crystals, or even viruses, whose status as "alive" is certainly debatable. If an artificial intelligence cannot reproduce itself and cannot evolve on its own, can it be considered to be alive?

 

I would like to find out, personally, if microevolutionary studies have been conducted on M. laboratorium yet. I'd be very interested to see what they've found.

“It is true that in the land of the blind, the one-eyed man is king. It is equally true that in the land of the blind, the two-eyed man is an enemy of the state, the people, and domestic tranquility… and necessarily so. Someone has to rearrange the furniture.”


FulltimeDefendent
Scientist
Posts: 455
Joined: 2007-10-02
I just want to elaborate

I just want to elaborate briefly on categorization. What I mean is that we're hard-wired to generalize. If we're walking down a street and we see two differently shaped and differently colored rocks in our way, we may acknowledge the difference, but they're still both rocks. Dogs, too: we generally don't have to consciously remind ourselves that Great Danes and Chihuahuas are the same species. This hard-wiring works against us in diverse social contexts, however, and arguably leads to the formation of racial or sexual prejudice. In the long term, though, it has been necessary for our evolution as creatures capable of abstract reasoning, and most of that abstract reasoning isn't even on a conscious level. So if we were to program an artificial intelligence with the same capacity for abstract reasoning, wouldn't that intelligence have to have conscious and subconscious components that mimic the functions of the human brain? If so, I don't see a functional distinction between actual and artificial intelligence (theoretically).
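The generalization idea above can be sketched as a toy "nearest prototype" categorizer: classification happens against an abstract prototype rather than against every remembered instance, which is (very loosely) why a Great Dane and a Chihuahua both read as "dog" without conscious effort. This is purely an illustrative sketch; the feature names and numbers are invented.

```python
def categorize(features, prototypes):
    """Return the category whose prototype is closest (Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(prototypes, key=lambda name: dist(features, prototypes[name]))

# Hypothetical, normalized features: (relative size, furry?, barks?)
prototypes = {"dog": (0.5, 1.0, 1.0), "rock": (0.3, 0.0, 0.0)}

great_dane = (0.9, 1.0, 1.0)
chihuahua = (0.1, 1.0, 1.0)   # wildly different size, same abstract category
granite = (0.25, 0.0, 0.0)

print(categorize(great_dane, prototypes))  # dog
print(categorize(chihuahua, prototypes))   # dog
print(categorize(granite, prototypes))     # rock
```

The point of the toy: the two "dogs" differ from each other more than the granite sample differs from the rock prototype, yet the categories come out right because matching is done against the abstraction, not the instances.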



Wonderist
atheist
Posts: 2479
Joined: 2006-03-19
Depends on your usage of

Depends on your usage of 'artificial'. If you mean 'as distinct from nature', then of course, since everything is ultimately 'natural', there is no difference between artificial life/intelligence and natural life/intelligence. However, if 'artificial' simply means 'made by humans' (cf. artifact, artifice), then the obvious difference is that artificial intelligence/life was created by humans.

Wonderist on Facebook — Support the idea of wonderism by 'liking' the Wonderism page — or join the open Wonderism group to take part in the discussion!

Gnu Atheism Facebook group — All gnu-friendly RRS members welcome (including Luminon!) — Try something gnu!


Balrogoz
Posts: 173
Joined: 2008-05-02
 It's an interesting

It's an interesting question.  I tend to apply the Turing benchmark in thinking about it.  I think this topic will boil down, at some point, to what you choose to define into existence.

 

The Turing Test -- This paper was in a lot of ways the kick-off for the popular debate about "machine intelligence", and Turing discusses many interesting aspects of the debate (like what exactly constitutes a "machine").

 

As for generalizability, I think it's somewhat egocentric to think that intelligence must be human-like.  To extend your Data example, there was an episode in which his 'mother' talks about having to program a 'sub-routine' to make him keep his clothes on.  He simply couldn't grasp the concept of clothing.  I often wonder what an intelligence on our planet would be like if it evolved from a predator instead of a scavenger.  Thing is, I can't know, since I think the way I think.

 

I also think, on the score of emotional recognition, that many humans fall short on this mark.  There are even conditions in which people have difficulty or even find it impossible to read the emotional state of another. 

If I have gained anything by damning myself, it is that I no longer have anything to fear. - JP Sartre


FulltimeDefendent
Scientist
Posts: 455
Joined: 2007-10-02
Balrogoz wrote: As for

Balrogoz wrote:

 

As for generalizability, I think it's somewhat egocentric to think that intelligence must be human-like.  To extend your Data example, there was an episode in which his 'mother' talks about having to program a 'sub-routine' to make him keep his clothes on.  He simply couldn't grasp the concept of clothing.  I often wonder what an intelligence on our planet would be like if it evolved from a predator instead of a scavenger.  Thing is, I can't know, since I think the way I think.

 

I also think, on the score of emotional recognition, that many humans fall short on this mark.  There are even conditions in which people have difficulty or even find it impossible to read the emotional state of another. 

I did not mean to apply an anthropocentric (not egocentric; that would refer to my ego) definition to non-human intelligence. Clearly, if a species evolved intelligence without clothing, that particular example wouldn't apply. I was trying to make a more general point about the things we do every day while regarding them as matter-of-fact. If we are asked to describe the reasons we brush our teeth in the morning, or shower for seven minutes instead of five, or any of these common activities specific to our particular culture, then we could certainly provide a rational explanation, but neurologically we have been shaped by our cultural circumstances, and one particular set of behaviors (schemas) would not apply in another cultural environment. My point with the Data example was to explain, without touching off a whole separate debate, why the Enlightenment-era concept of rationality fails to account for so many discoveries coming out of the behavioral and neurological sciences that suggest we are hard-wired to do things like mentally compartmentalize intellect. That's one example. Data's "subroutines" are just a technobabble justification for having Data do things that, given other statements about his nature, he shouldn't be able to do... like empathize with other sentient creatures, for example. Unlike an actor playing the sci-fi version of the Tin Man/Pinocchio on TV, a real artificial intelligence designed to interact with humans and emulate human nature (not an anthropocentric claim if humans are the only known species who might be capable of such an endeavor in the not-too-distant future) would have to be able to empathize with human beings if it was intended to learn like a human. I haven't gotten to the paper you linked yet (it's bookmarked; I'm looking forward to it), but it occurs to me that the challenge lies in designing an intelligence that also has intuition and the ability to learn like a human being.
The human brain is quite different from the brains of almost all other species in terms of its developmental patterns. We're born with immature brains that continue to grow outside the womb, to a size that couldn't possibly fit through the mother's vaginal canal. So much of our behavior is learned rather than instinctive because of this evolutionary trade-off. A truly "intelligent" artificial organism, if based on a human model, would have to be able to literally form its own neural circuitry in the same manner in which humans do. This may not be the only way to be intelligent in the universe, but it's the best one we know of. Personally, I think this could actually be accomplished if we could grow "designer" brain tissue as computer parts.
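The "forms its own neural circuitry" idea is often illustrated with a Hebbian update rule ("cells that fire together wire together"). The toy below is nowhere near real neuroscience; it's only a sketch of connectivity that is grown out of experience instead of being programmed in advance, and the numbers are arbitrary.

```python
def hebbian_step(weights, pre, post, lr=0.1):
    """Strengthen weights[i][j] whenever pre-neuron i and post-neuron j co-fire."""
    return [[w + lr * pre[i] * post[j] for j, w in enumerate(row)]
            for i, row in enumerate(weights)]

weights = [[0.0, 0.0], [0.0, 0.0]]   # no wiring at all to start
for _ in range(20):                  # repeated, paired experience
    weights = hebbian_step(weights, pre=[1, 0], post=[1, 0])

print(round(weights[0][0], 6))  # ~2.0: the co-active pair has wired up
print(weights[1][1])            # 0.0: never co-active, never wired
```

Nothing in the initial state distinguishes the two connections; only the history of correlated activity does, which is the crude analogue of "learned rather than instinctive."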

 

But I'm something of a transhumanist/posthumanist, so I tend to look for the soft-tech solutions that would be compatible with human DNA. Remember my postulation of artificial sperm and eggs? Artificial humans. We've done it with bacteria, let's see if we can do it with gametes.

 



FulltimeDefendent
Scientist
Posts: 455
Joined: 2007-10-02
natural wrote:Depends on

natural wrote:

Depends on your usage of 'artificial'. If you mean 'as distinct from nature', then of course, since everything is ultimately 'natural', there is no difference between artificial life/intelligence and natural life/intelligence. However, if 'artificial' simply means 'made by humans' (cf. artifact, artifice), then the obvious difference is that artificial intelligence/life was created by humans.

 

I agree (about the products of man-made processes being "natural" in that sense), and I agree that at the core of this topic are the definitions of "life" and "intelligence." That's why I posted it here instead of in the Science forum.



Ken G.
Bronze Member
Posts: 1352
Joined: 2008-03-20
HAL in the movie 2001

I didn't read the whole post, but right away that movie came to mind (2001: A Space Odyssey). Did you ever see it? Artificial intelligence is just that: "ARTIFICIAL" does not compute...

 

Signature ? How ?


Thomathy
Superfan, Bronze Member
Posts: 1861
Joined: 2007-08-20
FulltimeDefendent wrote:As

FulltimeDefendent wrote:
As an author I read recently put it, Star Trek's Mr. Data is a logical impossibility. Without emotions- whether the product of organic or cybernetic processes, Data would have no motivation to put on his uniform or to go to his duty station or to function in humanoid society at all.
Don't make me pull out my nitpicker's guide!  Data doesn't have emotions exactly, but he does have his positronic equivalent (don't knock the fiction, his brain is supposed to be 'positronic', whatever that means).  In a particular episode, Troi asks Data a question about his experience of relationship and he responds, 'It's just that our mental pathways have become accustomed... to your sensory input patterns.'  There are more examples, but Data is able to, more or less, effectively tell what someone is feeling.  He also seems to be able to approximate that feeling within himself, though in no way similar to a human's experience of emotions.  Besides, he gets the emotion chip later on.

BigUniverse wrote,

"Well the things that happen less often are more likely to be the result of the supper natural. A thing like loosing my keys in the morning is not likely supper natural, but finding a thousand dollars or meeting a celebrity might be."


deludedgod
Rational VIP! Scientist, Deluded God
Posts: 3221
Joined: 2007-01-28
Quote:whatever that means 

Quote:

whatever that means

 Nothing.

“Physical reality” isn’t some arbitrary demarcation. It is defined in terms of what we can systematically investigate, directly or not, by means of our senses. It is preposterous to assert that the process of systematic scientific reasoning arbitrarily excludes “non-physical explanations” because the very notion of “non-physical explanation” is contradictory.

-Me

Books about atheism


Thomathy
Superfan, Bronze Member
Posts: 1861
Joined: 2007-08-20
deludedgod

deludedgod wrote:

Quote:

whatever that means

 Nothing.

Well, of course it means nothing.



Balrogoz
Posts: 173
Joined: 2008-05-02
 Quote:would have to be

 

Quote:
would have to be able to empathize with human beings if it was intended to learn like a human.

 

OOOhhh....  So, to clarify, we are talking about a computer learning like a human and not that an AI would have to learn like that.

 I'm still not sure that we couldn't just tell the computer how to act.  It would be interesting to see if there is software out there that can correlate facial expressions to emotion (maybe even accounting for inflection at this point).  With the applications they are running in Vegas on face recognition alone, I would be surprised to learn that nobody has gone ahead and created the code to recognize emotional states (on some level) and react to it.  

 What I am wondering is whether there is a difference between experiencing emotion and acting empathetically, and honestly acting empathetically just because you're told to.  I've never drilled into it, but my suspicion is that the 'it's a duck' argument works here.  After all, I can easily imagine consoling a friend who is experiencing some level of emotion I have never experienced.  That doesn't make me any less human (or intelligent) for interacting with that friend.  What about humans who cannot experience emotion correctly (by my social standards), or at all?  I wonder if there is a difference in their cognition / learning habits.



inspectormustard
atheist
Posts: 537
Joined: 2006-11-21
Balrogoz wrote: Quote:would

Balrogoz wrote:

 

Quote:
would have to be able to empathize with human beings if it was intended to learn like a human.

OOOhhh....  So, to clarify, we are talking about a computer learning like a human and not that an AI would have to learn like that.

 I'm still not sure that we couldn't just tell the computer how to act.  It would be interesting to see if there is software out there that can correlate facial expressions to emotion (maybe even accounting for inflection at this point).  With the applications they are running in Vegas on face recognition alone, I would be surprised to learn that nobody has gone ahead and created the code to recognize emotional states (on some level) and react to it.  

 What I am wondering is whether there is a difference between experiencing emotion and acting empathetically, and honestly acting empathetically just because you're told to.  I've never drilled into it, but my suspicion is that the 'it's a duck' argument works here.  After all, I can easily imagine consoling a friend who is experiencing some level of emotion I have never experienced.  That doesn't make me any less human (or intelligent) for interacting with that friend.  What about humans who cannot experience emotion correctly (by my social standards), or at all?  I wonder if there is a difference in their cognition / learning habits.

As far as emotional expression goes I would suggest you check out the robot called Kismet. We're still in the process of refining facial recognition and haven't had a need for systems which recognize emotion, though such software could be useful in certain fields of medicine.

With artificial intelligence it really depends on the model you're working from, and perhaps your definition of intelligent. For example, the Turing Test is based on the premise that something sufficiently complex to be indistinguishable from a person in conversation is also "intelligent." I would contend that internet chat has lowered the bar somewhat, and that a proper chatbot is right around the corner for any idiot willing to program the typical 13 year old inter-tard.
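The "lowered bar" point is easy to demonstrate: an ELIZA-style chatbot needs nothing but keyword matching and canned templates to sustain the appearance of conversation. A minimal sketch (the rules below are invented for illustration, not taken from any real chatbot):

```python
import re

# Each rule pairs a keyword pattern with a canned reply template.
# There is no model of meaning anywhere: first matching rule wins.
RULES = [
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\byou\b", re.IGNORECASE), "We were discussing you, not me."),
]

def respond(utterance):
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Tell me more."  # content-free fallback

print(respond("I feel confused"))     # Why do you feel confused?
print(respond("What do you think?"))  # We were discussing you, not me.
```

A handful of such rules can carry a surprisingly long exchange, which is exactly why conversational mimicry alone is a weak proxy for intelligence.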

On the other hand, working with neural networks has led me to believe that basic emotion could be considered a side effect of such systems. The expert system model (see the movie Colossus: The Forbin Project for how that is supposed to work) does not require emotion, and seems to be the idea from which Data is extrapolated.


Balrogoz
Posts: 173
Joined: 2008-05-02
inspectormustard wrote:On

inspectormustard wrote:
On the other hand, working with neural networks has led me to believe that basic emotion could be considered a side effect of such systems. The expert system model (see the movie Colossus: The Forbin Project for how that is supposed to work) does not require emotion, and seems to be the idea from which Data is extrapolated.

 

To clarify..  you are thinking that emotions are emergent properties?



inspectormustard
atheist
Posts: 537
Joined: 2006-11-21
Balrogoz

Balrogoz wrote:

inspectormustard wrote:
On the other hand, working with neural networks has led me to believe that basic emotion could be considered a side effect of such systems. The expert system model (see the movie Colossus: The Forbin Project for how that is supposed to work) does not require emotion, and seems to be the idea from which Data is extrapolated.

 

To clarify..  you are thinking that emotions are emergent properties?

Yes. More specifically, emotions are our perception of our own deep-seated neural machinery, which is built out of the same model as everything else. A more advanced version of us would experience emotions more akin to cognitive language, and thus have a wider range of emotion and be able to be very specific about what and why they "felt" a certain way.

Things like "gut feelings" and anger with no apparent source are sometimes the result of multiple sensations which have been correlated by disparate systems, passed on to the social-processing systems (the most powerful part of the mind, as far as I know), and then translated into thought as "bad mojo." Since this all happens in parallel with our everyday thoughts, regardless of whether we actively think about these things, it creates the perceived link between particularly emotional people and so-called psychic phenomena.
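A toy version of that aggregation story, just to make the shape of the claim concrete: several low-level signals, none of them individually conscious, get summed into a single felt valence. The signal names, weights, and thresholds are invented for illustration.

```python
def felt_valence(signals, weights):
    """Collapse weighted sub-conscious signals into one crude 'gut feeling'."""
    score = sum(weights[name] * value for name, value in signals.items())
    if score < -0.5:
        return "bad mojo"
    if score > 0.5:
        return "good mojo"
    return "neutral"

# Hypothetical inputs: elevated heart rate, loud surroundings, few familiar faces.
weights = {"heart_rate": -0.4, "ambient_noise": -0.3, "familiar_faces": 0.6}
signals = {"heart_rate": 0.9, "ambient_noise": 0.8, "familiar_faces": 0.1}

print(felt_valence(signals, weights))  # bad mojo
```

The output label carries none of the underlying detail, which mirrors the point above: the feeling arrives in consciousness already summarized, with its sources hidden.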

Submitted for your approval: the autistic savant. Nearly non-existent empathic ability usually as a result of the social workings that make up the average brain being co-opted for an entirely different use. All the computational power of recognizing subtle body language, facial expression, and tonality put toward things like math, music, fact association, whatever.


FulltimeDefendent
Scientist
Posts: 455
Joined: 2007-10-02
Balrogoz wrote: Quote:would

Balrogoz wrote:

 

Quote:
would have to be able to empathize with human beings if it was intended to learn like a human.

 

OOOhhh....  So, to clarify, we are talking about a computer learning like a human and not that an AI would have to learn like that.

 I'm still not sure that we couldn't just tell the computer how to act.  It would be interesting to see if there is software out there that can correlate facial expressions to emotion (maybe even accounting for inflection at this point).  With the applications they are running in Vegas on face recognition alone, I would be surprised to learn that nobody has gone ahead and created the code to recognize emotional states (on some level) and react to it.  

 What I am wondering is whether there is a difference between experiencing emotion and acting empathetically, and honestly acting empathetically just because you're told to.  I've never drilled into it, but my suspicion is that the 'it's a duck' argument works here.  After all, I can easily imagine consoling a friend who is experiencing some level of emotion I have never experienced.  That doesn't make me any less human (or intelligent) for interacting with that friend.  What about humans who cannot experience emotion correctly (by my social standards), or at all?  I wonder if there is a difference in their cognition / learning habits.

 

I'm sort of positing that the most advanced computer we can possibly build would have to mimic the human brain (or the brains of whatever creatures built it) in such specific ways as to erode any meaningful distinction between "actual" and "artificial" intelligence.



FulltimeDefendent
Scientist
Posts: 455
Joined: 2007-10-02
Thomathy

Thomathy wrote:

FulltimeDefendent wrote:
As an author I read recently put it, Star Trek's Mr. Data is a logical impossibility. Without emotions- whether the product of organic or cybernetic processes, Data would have no motivation to put on his uniform or to go to his duty station or to function in humanoid society at all.
Don't make me pull out my nitpicker's guide!  Data doesn't have emotions exactly, but he does have his positronic equivalent (don't knock the fiction, his brain is supposed to be 'positronic', whatever that means).  In a particular episode, Troi asks Data a question about his experience of relationship and he responds, 'It's just that our mental pathways have become accustomed... to your sensory input patterns.'  There are more examples, but Data is able to, more or less, effectively tell what someone is feeling.  He also seems to be able to approximate that feeling within himself, though in no way similar to a human's experience of emotions.  Besides, he gets the emotion chip later on.

 

"Time's Arrow" Part I, Season 5 Cliffhanger. Yes, I'm really that much of a nerd. But I think you missed my point: Data is a character. I was contrasting the plot-relevant details of his nature to what I'm suggesting might have to be the essential nature of any human-designed AI sophisticated enough to be called "intelligent."



FulltimeDefendent
Scientist
Posts: 455
Joined: 2007-10-02
Thomathy

Thomathy wrote:

FulltimeDefendent wrote:
As an author I read recently put it, Star Trek's Mr. Data is a logical impossibility. Without emotions- whether the product of organic or cybernetic processes, Data would have no motivation to put on his uniform or to go to his duty station or to function in humanoid society at all.
Don't make me pull out my nitpicker's guide!  Data doesn't have emotions exactly, but he does have his positronic equivalent (don't knock the fiction, his brain is supposed to be 'positronic', whatever that means).  In a particular episode, Troi asks Data a question about his experience of relationship and he responds, 'It's just that our mental pathways have become accustomed... to your sensory input patterns.'  There are more examples, but Data is able to, more or less, effectively tell what someone is feeling.  He also seems to be able to approximate that feeling within himself, though in no way similar to a human's experience of emotions.  Besides, he gets the emotion chip later on.

 

I recall reading somewhere that a positronic brain would have to run on antimatter, but I don't remember where. Anyway, I always figured "positronic" came from the fact that Data is an electronic organism and was designed by a Positivist (Gene Roddenberry or Dr. Soong, take your pick). The point is that Data and all AIs on TV are played either by human actors or by inanimate blinking lights or CGI with voice acting. So you might as well be talking about a human character. A real AI would not be a box with an actor inside it.



FulltimeDefendent
Scientist
Posts: 455
Joined: 2007-10-02
Balrogoz wrote: Quote:would

Balrogoz wrote:

 

Quote:
would have to be able to empathize with human beings if it was intended to learn like a human.

 

OOOhhh....  So, to clarify, we are talking about a computer learning like a human and not that an AI would have to learn like that.

 I'm still not sure that we couldn't just tell the computer how to act.  It would be interesting to see if there is software out there that can correlate facial expressions to emotion (maybe even accounting for inflection at this point).  With the applications they are running in Vegas on face recognition alone, I would be surprised to learn that nobody has gone ahead and created the code to recognize emotional states (on some level) and react to it.  

 What I am wondering is whether there is a difference between experiencing emotion and acting empathetically, and honestly acting empathetically just because you're told to.  I've never drilled into it, but my suspicion is that the 'it's a duck' argument works here.  After all, I can easily imagine consoling a friend who is experiencing some level of emotion I have never experienced.  That doesn't make me any less human (or intelligent) for interacting with that friend.  What about humans who cannot experience emotion correctly (by my social standards), or at all?  I wonder if there is a difference in their cognition / learning habits.

 

FYI, there has been some interesting research correlating facial muscle movements with emotional states. Mindcore has mentioned this on his podcast a couple of times, actually, and it struck me as exactly the kind of stepping stone to the development of an "empathic" computer (or a computer that can approximate empathy).
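That line of research is in the spirit of Ekman's Facial Action Coding System (FACS), which decomposes expressions into numbered "action units." A crude nearest-template sketch of the idea follows; the activation values and templates are invented placeholders, not real FACS data.

```python
EMOTION_TEMPLATES = {
    # AU6 = cheek raiser, AU12 = lip corner puller, AU4 = brow lowerer
    "happiness": {"AU6": 1.0, "AU12": 1.0, "AU4": 0.0},
    "anger":     {"AU6": 0.0, "AU12": 0.0, "AU4": 1.0},
}

def classify(observed):
    """Return the emotion whose action-unit template is closest to the observation."""
    def dist(template):
        return sum((observed.get(au, 0.0) - v) ** 2 for au, v in template.items())
    return min(EMOTION_TEMPLATES, key=lambda e: dist(EMOTION_TEMPLATES[e]))

smile = {"AU6": 0.9, "AU12": 0.8, "AU4": 0.1}
scowl = {"AU6": 0.1, "AU12": 0.0, "AU4": 0.9}
print(classify(smile))  # happiness
print(classify(scowl))  # anger
```

Real systems would have to extract the action-unit activations from video first, which is the hard part; the mapping from muscle movements to emotion labels is comparatively simple, which is why this looks like a plausible stepping stone.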



FulltimeDefendent
Scientist
Posts: 455
Joined: 2007-10-02
inspectormustard

inspectormustard wrote:

Balrogoz wrote:

 

Quote:
would have to be able to empathize with human beings if it was intended to learn like a human.

OOOhhh....  So, to clarify, we are talking about a computer learning like a human and not that an AI would have to learn like that.

 I'm still not sure that we couldn't just tell the computer how to act.  It would be interesting to see if there is software out there that can correlate facial expressions to emotion (maybe even accounting for inflection at this point).  With the applications they are running in Vegas on face recognition alone, I would be surprised to learn that nobody has gone ahead and created the code to recognize emotional states (on some level) and react to it.  

 What I am wondering is whether there is a difference between experiencing emotion and acting empathetically, and honestly acting empathetically just because you're told to.  I've never drilled into it, but my suspicion is that the 'it's a duck' argument works here.  After all, I can easily imagine consoling a friend who is experiencing some level of emotion I have never experienced.  That doesn't make me any less human (or intelligent) for interacting with that friend.  What about humans who cannot experience emotion correctly (by my social standards), or at all?  I wonder if there is a difference in their cognition / learning habits.

As far as emotional expression goes I would suggest you check out the robot called Kismet. We're still in the process of refining facial recognition and haven't had a need for systems which recognize emotion, though such software could be useful in certain fields of medicine.

With artificial intelligence it really depends on the model you're working from, and perhaps your definition of intelligent. For example, the Turing Test is based on the premise that something sufficiently complex to be indistinguishable from a person in conversation is also "intelligent." I would contend that internet chat has lowered the bar somewhat, and that a proper chatbot is right around the corner for any idiot willing to program the typical 13 year old inter-tard.

On the other hand, working with neural networks has led me to believe that basic emotion could be considered a side effect of such systems. The expert system model (see the movie Colossus: The Forbin Project for how that is supposed to work) does not require emotion, and seems to be the idea from which Data is extrapolated.

 

I love Colossus: The Forbin Project. Quite an underrated movie that seems to be popular today only with scientists and film students. By the way, I started an Atheist Film Review thread here. I should definitely review The Forbin Project at some point.

 

Agreed about the chatbots, but I think the next step in artificial intelligence would be to create a 13 year old inter-tard that can actually "mature" based on inputs from interaction with live users.

“It is true that in the land of the blind, the one-eyed man is king. It is equally true that in the land of the blind, the two-eyed man is an enemy of the state, the people, and domestic tranquility… and necessarily so. Someone has to rearrange the furniture.”


Balrogoz
Posts: 173
Joined: 2008-05-02
User is offlineOffline
 inspectormustard

 

inspectormustard wrote:
Submitted for your approval: the autistic savant. Nearly non-existent empathic ability usually as a result of the social workings that make up the average brain being co-opted for an entirely different use. All the computational power of recognizing subtle body language, facial expression, and tonality put toward things like math, music, fact association, whatever.

 

Right!  I would say that they are still intelligent; emotions are certainly a different thing.

 

Balrogoz wrote:
To clarify..  you are thinking that emotions are emergent properties?
inspectormustard wrote:
Yes. More specifically, emotions are our perception of our own deep-seated neural machinery, which is built out of the same model as everything else. A more advanced version of us would experience emotions more akin to cognitive language, and thus have a wider range of emotion and be able to be very specific about what and why they "felt" a certain way.

 That's an interesting take.  My understanding is that there are well-known structures and processes in the brain that control emotions.

 

 

 

If I have gained anything by damning myself, it is that I no longer have anything to fear. - JP Sartre


FulltimeDefendent
Scientist
FulltimeDefendent's picture
Posts: 455
Joined: 2007-10-02
User is offlineOffline
Balrogoz

Balrogoz wrote:

 

inspectormustard wrote:
Submitted for your approval: the autistic savant. Nearly non-existent empathic ability usually as a result of the social workings that make up the average brain being co-opted for an entirely different use. All the computational power of recognizing subtle body language, facial expression, and tonality put toward things like math, music, fact association, whatever.

 

Right!  I would say that they are still intelligent; emotions are certainly a different thing.

 

Balrogoz wrote:
To clarify..  you are thinking that emotions are emergent properties?
inspectormustard wrote:
Yes. More specifically, emotions are our perception of our own deep-seated neural machinery, which is built out of the same model as everything else. A more advanced version of us would experience emotions more akin to cognitive language, and thus have a wider range of emotion and be able to be very specific about what and why they "felt" a certain way.

 That's an interesting take.  My understanding is that there are well-known structures and processes in the brain that control emotions.

 

 

 

 

You have a very good point, InspectorMustard. Human beings, in addition to our functional brain asymmetry, are quite neurologically variable. I'm not saying that savants aren't intelligent in any way; in fact, I think you bring up an interesting point: what if the "autistic savant" model were also a viable model for artificial intelligence mimicking human intelligence? Hence, we return to the question of how to define intelligence. I would say that, evolutionarily speaking, autistic savants are an exception that proves a rule regarding the importance of empathy in the evolution of human social groups, but variation exists, and some of that variation falls outside the expected range. We're not all autistic savants, after all.

 

“It is true that in the land of the blind, the one-eyed man is king. It is equally true that in the land of the blind, the two-eyed man is an enemy of the state, the people, and domestic tranquility… and necessarily so. Someone has to rearrange the furniture.”


FulltimeDefendent
Scientist
FulltimeDefendent's picture
Posts: 455
Joined: 2007-10-02
User is offlineOffline
Balrogoz

Balrogoz wrote:

 

inspectormustard wrote:
Submitted for your approval: the autistic savant. Nearly non-existent empathic ability usually as a result of the social workings that make up the average brain being co-opted for an entirely different use. All the computational power of recognizing subtle body language, facial expression, and tonality put toward things like math, music, fact association, whatever.

 

Right!  I would say that they are still intelligent; emotions are certainly a different thing.

 

Balrogoz wrote:
To clarify..  you are thinking that emotions are emergent properties?
inspectormustard wrote:
Yes. More specifically, emotions are our perception of our own deep-seated neural machinery, which is built out of the same model as everything else. A more advanced version of us would experience emotions more akin to cognitive language, and thus have a wider range of emotion and be able to be very specific about what and why they "felt" a certain way.

 That's an interesting take.  My understanding is that there are well-known structures and processes in the brain that control emotions.

 

 

 

 

I fall somewhere in between on this. Yes, there are structures in the brain identified with emotions like "fear," "anxiety," etc., but there is also our abstract ability to define and redefine categories of feelings. Our abstractions don't always translate completely into unrelated languages. The question isn't where our emotions come from; it's where "how we think about our emotions" comes from. That's a different issue, and probably too complicated for a neurological explanation alone; any explanation would have to take environmental factors into account.

“It is true that in the land of the blind, the one-eyed man is king. It is equally true that in the land of the blind, the two-eyed man is an enemy of the state, the people, and domestic tranquility… and necessarily so. Someone has to rearrange the furniture.”


FulltimeDefendent
Scientist
FulltimeDefendent's picture
Posts: 455
Joined: 2007-10-02
User is offlineOffline
But Balrogoz, autistic

But Balrogoz, autistic persons have emotions. Autism is a communicative disorder. It's not that they don't have emotions; they just lack much of what we would consider empathy.

“It is true that in the land of the blind, the one-eyed man is king. It is equally true that in the land of the blind, the two-eyed man is an enemy of the state, the people, and domestic tranquility… and necessarily so. Someone has to rearrange the furniture.”


I AM GOD AS YOU
Superfan
Posts: 4793
Joined: 2007-09-29
User is offlineOffline
Is "Artificial Life" a

Is "Artificial Life" a myth?

     Damn words !  "Artificial" .... Where can I find that , non existent stuff ? 

             What can we imagine that is not real ? Is imagining not real ? !!! 

                  Is anything of consciousness something not real ?

                                      Mind is not real energy ?

  So them buddha's laughed .... it was written .... on that particle they called earth

                                of,  "I can't get me no satisfaction"

 

                                               Dogma is a LIE

                                          No CURE Exists for Life

                                   So help the kids enjoy their stay

                                  as you know you are almost dead

                                              as birth is eternal

                      http://www.youtube.com/watch?v=OGWfLiEoG98

                                                  Eat the Rich

                                                 Fix my Words

                                                 A World Party

                                           Who would deny us ?

                                              What day is this ?


inspectormustard
atheist
inspectormustard's picture
Posts: 537
Joined: 2006-11-21
User is offlineOffline
FulltimeDefendent wrote:. .

FulltimeDefendent wrote:

. . .

I recall reading somewhere that a positronic brain would have to run on antimatter, but I don't remember where. Anyway, I always figured "positronic" came from the fact that Data is an electronic organism and was designed by a Positivist (Gene Roddenberry, or Dr. Soong. Take your pick).

. . .

Yes, a positron is the antimatter counterpart of the electron; it carries a positive charge of (if I remember correctly) the same magnitude as the electron's. It was actually the first bit of antimatter discovered, as a result of predictions made by quantum physics.

I've been thinking about this for a while now, and I think I might have found a better test of human-equivalent intellect. Some of you may have seen the series hosted by Stephen Hawking called Masters of Science Fiction. There was one episode where a particularly dumb android was called upon to prove its sentience, which it did by being compelled to lie. Based on this, it may be possible to gauge the intelligence of a machine by its ability to lie convincingly.

If we ever construct a powerful machine with the ability to lie, the ability to read (and thus play into) human emotions, and a complete lack of internal emotions such as remorse or guilt, we might have a problem on our hands.


ronin-dog
Scientist
ronin-dog's picture
Posts: 419
Joined: 2007-10-18
User is offlineOffline
I don't think that

I don't think that artificial intelligence necessarily has to have emotions.

Neural networks can learn, so I think that advanced neural networks are the secret to AI. Yes, there needs to be some basic programming to give it the drive to learn, but we are born with basic programming too: instincts.
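That "basic programming plus learning" split can be shown with the simplest possible neural network, a single perceptron: the only hard-wired "instinct" is the update rule, and the weights that do the actual classifying are learned from examples (toy code, not any real AI system):

```python
# A single perceptron learning the AND function. The only hard-wired
# "instinct" is the update rule; the weights are learned from the data.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - out  # 0 when correct, +/-1 when wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)

def predict(x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
```

Swap in the OR truth table and the same "instinct" learns a different behavior; XOR, famously, is beyond any single perceptron and needs a multi-layer network.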

Emotions are more complex, but I agree with mustard: it is doable. For a full AI you might need to add some programming to give it the drive to want to communicate/emote with us, but it should be possible.

Children can't fully emote with others, a lot of it they learn along the way.

The only thing is, the more freedom for self-learning and development you give an AI, the more likely it is to evolve into an intelligence that is not quite like ours.

Zen-atheist wielding Occam's katana.

Jesus said, "Suppose ye that I am come to give peace on earth? I tell you, Nay; but rather division." - Luke 12:51


Kevin R Brown
Superfan
Kevin R Brown's picture
Posts: 3142
Joined: 2007-06-24
User is offlineOffline
I think the most obvious

I think the most obvious route would be to simply follow the evolutionary model:

Make a self-replicating program, subject it to selection pressure, and watch science work its magic.
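A toy version of that route fits in a few lines: imperfect self-replication plus selection pressure toward an arbitrary target string. This is obviously a cartoon of real artificial-life systems, where fitness is not spelled out in advance:

```python
import random

random.seed(42)  # deterministic run for the example

TARGET = "survive"  # stand-in "environment": fitness is similarity to this string
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(genome: str) -> int:
    # Selection pressure: number of positions matching the target.
    return sum(a == b for a, b in zip(genome, TARGET))

def mutate(genome: str, rate: float = 0.1) -> str:
    # Imperfect self-replication: each character may be copied wrong.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in genome)

def evolve(pop_size: int = 100, generations: int = 200) -> str:
    # Start from random genomes, then repeat: select the fittest, replicate with errors.
    population = ["".join(random.choice(ALPHABET) for _ in TARGET)
                  for _ in range(pop_size)]
    for _ in range(generations):
        best = max(population, key=fitness)
        if best == TARGET:
            return best
        population = [mutate(best) for _ in range(pop_size)]
    return max(population, key=fitness)
```

Nothing in `mutate` knows the target; selection alone pushes the population toward it, which is the whole trick.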

Quote:
"Natasha has just come up to the window from the courtyard and opened it wider so that the air may enter more freely into my room. I can see the bright green strip of grass beneath the wall, and the clear blue sky above the wall, and sunlight everywhere. Life is beautiful. Let the future generations cleanse it of all evil, oppression and violence, and enjoy it to the full."

- Leon Trotsky, Last Will & Testament
February 27, 1940