Building gods: Inductive proof that AI will be friendly

inspectormustard
atheist
Posts: 537
Joined: 2006-11-21

Judging from recent hardware advances, we are quickly approaching (or have already passed, depending on what has yet to be released to the press) the end of Moore's law of transistor doubling. The problem artificial-intelligence-smiths have faced since this revelation is what to do with all that capacity. As a consultant/scientist-in-training I feel it is my duty to be insufferably vague: we now have, or soon will have, the basic parts to build a human-comparable artificial intelligence within the next decade.

That said, should we be arming ourselves to face off with Earth's inheritors? I don't think so. Here is why.

Computers are notoriously efficient in space and resource drain, but they are likewise relatively difficult to produce compared with our own reproductive faculties. This is fine, as they require comparably inexpensive maintenance: hot-swappable components are the best advance in interchangeable parts since 18th-century gunsmiths realized they could repair things more easily by making spare parts to exact specifications. Imagine turning off your kidney and swapping it out on the way to the shopping mall; a bad idea for humans. Computers, then, are effectively immortal in this way.

This leads to an interesting notion. Computers, being immortal, are unlikely to desire reproduction save to have another sentient being with a similar outlook to talk to. A computer would more likely be interested in amplifying its own capabilities. The resource draw in this case is very small.

But why build such a thing if there is nothing in it for us? Consider this: if the goal of the computer is to grow in the way computers do (becoming more and more intelligent), and our goal is to grow in the way we do (learning what we can while we can, and enjoying all the other fruits of life during that time), then the proper course of action from the computer's perspective is to manage our affairs for us and free us up to explore space, colonize other worlds, make art and music, and so on. Solving the world's difficulties is merely a hardware problem: dump enough processors into the computer and it will do the rest. After that point, most of our technology can be developed by computer as well.

But we, as easily self-replicating and relatively intelligent organisms, are ourselves von Neumann probes. We are in our own way more stable than silicon, though not as hardy. Computer hardware currently does not self-repair; when and if it does, its resource drain will become comparable to our own. Thus it is in the computer's best interests to keep us around doing repair and exploration work, and to keep us unbelievably happy, a relatively minor task in comparison to all that the computer will be capable of.

We would merely have to ask ourselves whether we are more interested in seeing things with our own eyes as pampered slaves to master control brains, or in doing it all the hard way and risking wiping ourselves out in the meantime through our own ignorance and insecurities.


I AM GOD AS YOU
Superfan
Posts: 4793
Joined: 2007-09-29

That's some interesting stuff there Inspector. Thanks. Got any fun vids or articles on this stuff ?  I think our future achievements will rival our imagination today. I seem to agree to not fear the computer itself  , but I do fear knowledge and power in the hands of a few, as even now .......  Yes, no more to preventable "ignorance and insecurities" ..... When, How ???? "Eat the Rich" !

I posted this a short while back and got no replies. It's lonely being god sometimes .....

Artificial Consciousness ?  http://www.rationalresponders.com/forum/13340 

This is pretty wild. I hung on every word. Gets my bored friends thinking ! 

"Surviving the Singularity" , 20 min.  A mind stretcher, the ending is a plea ....    http://www.dharmaflix.com/wiki/Surviving_the_Singularity

Or for full screen,  http://www.youtube.com/watch?v=YDqaMFHGEZ8  (low volume)

Here's a comment from a youtuber - wulf8121  -  "I used to be extremely skeptical of the singularity but now I'm finding it harder and harder to dismiss. But one thing is for certain, if the singularity does happen it will be a reflection of what humanity is... I only hope we like what we see."

Here's a couple other semi related fun vids. 

 "The Machine is Us/ing Us" , 4 min http://www.dharmaflix.com/wiki/The_Machine_is_Us/ing_Us

ONENESS AND THE HOLOGRAPHIC PARADIGM  7 min   http://www.youtube.com/watch?v=bB7-uySXSCk

"The Holographic Universe" , 6 min  "All is ONE"  ??? !!!   http://www.dharmaflix.com/wiki/The_Holographic_Universe

    .... ?  fAr OuT, I meant iN ?  .....    What Is ? .....      

 


inspectormustard
atheist
Posts: 537
Joined: 2006-11-21

I'm on the fence regarding the singularity. I think it will happen, but once it does, things won't change as fast as the name makes it sound. It's definitely not something we should be philosophically unprepared for. People might begin to suspect that androids are walking the streets, and those who have received the benefits of advanced prostheses may suffer as a result. I'm sure all kinds of things will change rapidly, but only because people are basically crazy.


deludedgod
Rational VIP! | Scientist | Deluded God
Posts: 3221
Joined: 2007-01-28

I have to disagree. Not with your inductive proof, but rather with the more basic premise that we can emulate consciousness within 10 years. Give evolution some credit.

The problem is thus. The basic premise is that if our brain functions like a computer, we should be able to emulate it with sufficient artificial computing power. Each neuron functions like a logic gate, with a binary operation based on an electrical signal propagated along an axon to a synaptic cleft of a post-synaptic membrane. The binary operation comes from the fact that, unlike most intracellular signalling mechanisms, which operate on continuously graded variables in proportion to the ligand concentration that induces the cellular response, action potentials work on thresholds: below the threshold nothing happens, and above it a single action potential, whose depolarization per se is always the same, propagates in an all-or-nothing fashion. Since the post-synaptic neuron receives inputs from many, many axons, the post-synaptic potential is a continuously graded variable, which is in turn encoded in binary form via a set of K+ channels that make the rate of action-potential generation directly proportional to the depolarization of the membrane, allowing the signals of countless axons to be integrated at the axon hillock of the post-synaptic neuron.
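The all-or-nothing threshold behaviour described above is easy to see in the standard toy model, a leaky integrate-and-fire neuron. This is only an illustrative sketch: the threshold and leak values below are arbitrary, not biological constants.

```python
# Minimal leaky integrate-and-fire neuron: sub-threshold inputs decay away,
# and only when the summed potential crosses the threshold does the cell
# fire an identical, all-or-nothing spike.

def simulate(inputs, threshold=1.0, leak=0.9):
    """Return a spike train (0/1 per time step) for a stream of input currents."""
    v = 0.0
    spikes = []
    for i in inputs:
        v = v * leak + i          # integrate input, leak toward rest
        if v >= threshold:
            spikes.append(1)      # all-or-nothing action potential
            v = 0.0               # reset after firing
        else:
            spikes.append(0)
    return spikes

weak = simulate([0.05] * 50)      # too weak: the leak wins, no spikes at all
strong = simulate([0.3] * 50)     # strong input: fires at a steady rate
```

Note how the firing *rate*, not the spike shape, carries the input strength, which is the rate-coding point above.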

(I'm sure you already know all of that since I have read your posts, but I thought everyone else would be interested too)

In principle, if the emulating system has enough power, it could duplicate this. The human brain, having 100 billion neurons and about 10^14 synaptic clefts, functions to a degree like an enormous integrated circuit. Unfortunately, the process by which we are conscious is ever so slightly more complicated than a count of synaptic clefts and signal integration by neurons. Although the neural circuitry and its computation could in principle be emulated by virtue of more computing power, the mechanisms by which the underlying processes of consciousness work are, at least as yet, nowhere near able to be emulated. There is no way to emulate LTP or plasticity, not to mention those mechanisms whose underlying processes we haven't worked out (how could we emulate what we cannot describe?). The problem at hand is not one of sufficient computing power, although that is one of the problems, albeit the easiest to overcome.

I don't reject the notion of AI, since if the conscious process depends wholly on the physical brain then in principle, with sufficient technology, such a process should be fully emulable in the future. I just reject the notion that it will come so soon, and that the process by which it comes about will be one of increased computational capacity. Rather, I think our current technological paradigm shift should not be measured in terms of computational capacity at all. Consider that in the 1950s, before the integrated circuit, improved technology was understood as the refinement of current technology, as opposed to a radical shift in the direction innovation occurred. We thought we'd have flying cars. We don't; indeed, motor vehicles have for the most part not changed in decades, at least in the principles upon which they work. Instead, our improved technology constituted the exploration of a field which was completely untapped. A 1950s speculator could have imagined bigger, faster cars, but not the internet. Similarly, I think our current shift (and you will forgive me for career bias) is biological, in terms of our understanding of DNA and cellular biology. Hence, I'm laying my bets that before we attempt to construct AI, we will first attempt to construct synthetic life, which we are already doing. Any emulation of the conscious process we attempt will be biological first, long before we try with our own materials. We cannot emulate neural plasticity, since we cannot construct hardware with the dynamism of organic machinery. Hence, we might try engineering synthetic neural clusters based on organic rather than artificial machinery. There are several biological hurdles we would need to crack before attempting to emulate such structures purely by ourselves, as opposed to employing already-constructed natural structures with conscious capacities.
One of these hurdles is the notion of a "consciousness transplant", i.e. that via the transplant of a brain from one human to another, it would be possible to retain the consciousness of being A in body B. Obviously, the idea of emulating consciousness would be dubious if we couldn't transplant preexisting consciousness. The other problem we'd have to crack is the neurological problem of perception, since consciousness is not intrinsic. As an extrinsic process, it ultimately depends on sensory data. The brain dies without any sensory input. A fully conscious being cannot exist if it does not interact with an external world. At present, "interaction" can be emulated in a highly primitive way via measuring instruments, but those instruments cannot perceive anything. Perception is the antecedent to consciousness. Even organisms we would not consider conscious can still perceive the world around them. The emulation of this process requires understanding of how it works, something that 10 years of work probably will not produce.

As for the Singularity, I consider the notion to be bizarre to say the least. Kurzweil's fourth stage of the process, the employment of all matter in the universe as computational substrate, appears to directly violate the second law of thermodynamics, which requires that all order generating processes release a greater degree of disorder to the surroundings than the order that is generated in the open system.

"Physical reality" isn't some arbitrary demarcation. It is defined in terms of what we can systematically investigate, directly or not, by means of our senses. It is preposterous to assert that the process of systematic scientific reasoning arbitrarily excludes "non-physical explanations" because the very notion of "non-physical explanation" is contradictory.

-Me

Books about atheism


HisWillness
atheist | Rational VIP!
Posts: 4100
Joined: 2008-02-21

 Yeah. What deludedgod said, and also computers wouldn't have desire unless we programmed it into them. It's actually one of the serious problems of developing artificial intelligence, since nobody addresses it seriously. Also, their life-cycle isn't as "immortal" in practice as it is in theory. Computers die after about five years. There are those that don't fail at five years, and there are those that fail before, but in engineering terms, five years is a good estimate for replacement hips, and it's a good estimate for computers. Even swapping things around, you end up with a system that's just not as efficient as a biological system, since it needs so many diverse and changing inputs. (You have to change an entire assembly line to produce an upgraded hard-drive, which in turn needs materials from several other intermediate producers, which in turn need ...)

When you say a self-repairing computer would have a resource drain comparable to our own, I have to call shenanigans. Computers, even in a future world, would face diminishing energy returns unlike anything biological systems encounter (since our environment is, itself, biological). Computers would have to adapt to an environment that is even more unfriendly to their existence than it is to ours. Robots, if you're to give these computers mobility, would be completely helpless against human destruction, in contrast with the Hollywood vision of terrifying Terminator humanoids.

What alloy could they be made of that we couldn't bathe them in acid? What batteries would they run on that we couldn't sabotage? (Or like lithium batteries, simply get weaker over time.) What shielding could they possibly employ that we couldn't EMP them? Supply lines would be even more precious to them than they are to us, since they have no individual resourcefulness. How would they defend themselves against the destructive creativity of humanity? They're not even close to a threat from the get-go.

Saint Will: no gyration without funkstification.
fabulae! nil satis firmi video quam ob rem accipere hunc mi expediat metum. - Terence


HisWillness
atheist | Rational VIP!
Posts: 4100
Joined: 2008-02-21

deludedgod wrote:

The problem is thus. The basic premise is that if our brain functions like a computer, we should be able to emulate it with sufficient artificial computing power. Each neuron functions like a logic gate, with a binary operation based on an electrical signal propagated along an axon to a synaptic cleft of a post-synaptic membrane.

... and the premise misrepresents the problem outright. I'm sure that's what you were saying, too, but the fact that our brains (and neurons) work in three dimensions, and aren't exactly analogous to a logic gate means that the hurdle to consciousness is grossly underestimated. Even the largest collection of two-dimensionally arranged transistors is orders of magnitude less complex than a three-dimensional structure of dynamically assigned neurons.

We're not even close.

Saint Will: no gyration without funkstification.
fabulae! nil satis firmi video quam ob rem accipere hunc mi expediat metum. - Terence


nigelTheBold
atheist
Posts: 1868
Joined: 2008-01-25

HisWillness wrote:

deludedgod wrote:

The problem is thus. The basic premise is that if our brain functions like a computer, we should be able to emulate it with sufficient artificial computing power. Each neuron functions like a logic gate, with a binary operation based on an electrical signal propagated along an axon to a synaptic cleft of a post-synaptic membrane.

... and the premise misrepresents the problem outright. I'm sure that's what you were saying, too, but the fact that our brains (and neurons) work in three dimensions, and aren't exactly analogous to a logic gate means that the hurdle to consciousness is grossly underestimated. Even the largest collection of two-dimensionally arranged transistors is orders of magnitude less complex than a three-dimensional structure of dynamically assigned neurons.

We're not even close.

And it's even more complex than that. We don't even have a decent understanding of intelligence, let alone a method of transcribing it into a completely foreign framework.

Over the past 40+ years of AI research, we are not one whit closer to AI than we were when we started. There's been lots of excitement over certain advances (genetic algorithms, neural networks, etc) that haven't panned out, and haven't really gotten us any closer to AI. For a long time, we thought it was merely the limitations of hardware, that if we had the processing power, we could do it. Now we're pinning our hopes on quantum computers, another form of hardware, which we really don't understand yet, and of which we have no method of evaluating the full implications.

The failings of AI to date  make it clear the problem with AI isn't a hardware limitation. It's an ontological limitation. We simply don't understand enough about our own workings to be able to imitate intelligence in a strictly-logical form. If we ever manage to stumble on a process of emulating intelligence, it will be completely different from the intelligence we possess, and so induction fails as a process of determining the nature of the beast.

Any predictions about AI are premature. This includes Vernor Vinge and his singularity (which I don't deny as a distinct possibility).

I believe Vinge generally ignores the biggest likelihood, though: the probability that we will begin to use technology to augment our own abilities. We will use biotech and computer augmentation to make ourselves smarter long before we create AI. I say this with certainty because I firmly believe we will only gain the knowledge necessary for AI through the process of augmentation, and that the augmentation itself will allow us to think more clearly about intelligence and consciousness. His recursively-improving machine intelligence will not be a machine intelligence, but us, ourselves.

At least, that's my opinion. I could be wrong.

"Yes, I seriously believe that consciousness is a product of a natural process. I find that the neuroscientists, psychologists, and philosophers who proceed from that premise are the ones who are actually making useful contributions to our understanding of the mind." - PZ Myers


inspectormustard
atheist
Posts: 537
Joined: 2006-11-21

nigelTheBold wrote:

I believe Vinge generally ignores the biggest likelihood, though: the probability that we will begin to use technology to augment our own abilities. We will use biotech and computer augmentation to make ourselves smarter long before we create AI. I say this with certainty because I firmly believe we will only gain the knowledge necessary for AI through the process of augmentation, and that the augmentation itself will allow us to think more clearly about intelligence and consciousness. His recursively-improving machine intelligence will not be a machine intelligence, but us, ourselves.

At least, that's my opinion. I could be wrong.

That is why I am relatively certain we will have the basic necessities for general artificial intelligence within the next 10 years.

HisWillness wrote:

. . . Also, their life-cycle isn't as "immortal" in practice as it is in theory. Computers die after about five years. There are those that don't fail at five years, and there are those that fail before, but in engineering terms, five years is a good estimate for replacement hips, and it's a good estimate for computers. Even swapping things around, you end up with a system that's just not as efficient as a biological system, since it needs so many diverse and changing inputs. (You have to change an entire assembly line to produce an upgraded hard-drive, which in turn needs materials from several other intermediate producers, which in turn need ...)

When you say a self-repairing computer would have a resource drain comparable to our own, I have to call shenanigans. Computers even in a future world would be faced with diminishing energy returns unlike anything that biological systems get (since our environment is, itself, biological). Computers would have to adapt to an environment that is even more unfriendly to their existence than it is to ours. Robots, if you're to give these computers mobility, would be completely helpless against human destruction, in contrast with the Hollywood vision of terrifying Terminator humanoids.

No, I think the better course of development is not aiming for motility among what I am imagining to be tremendously large computers. Also, as deludedgod mentioned, "The brain dies without any sensory input." I would expect a sufficiently advanced artificial intelligence to behave in a similar way. So a basic deterrent to such an event would be adding a basic instinct to perceive, learn, and process. I'm not sure any further hardware instincts are required, since I believe heavily in the game theory of morality.

deludedgod wrote:

There is no way to emulate LTP or plasticity, not to mention those mechanisms who underlying processes we haven't worked out (hence, how could we emulate them?) The problem at hand is not one of sufficient computing power, although that is one of the problems, albeit the easiest to overcome.

nigelTheBold wrote:

The failings of AI to date  make it clear the problem with AI isn't a hardware limitation. It's an ontological limitation. We simply don't understand enough about our own workings to be able to imitate intelligence in a strictly-logical form. If we ever manage to stumble on a process of emulating intelligence, it will be completely different from the intelligence we possess, and so induction fails as a process of determining the nature of the beast.

I agree that it is not a hardware limitation. I also think that whatever the solution is it probably won't be a useful version of human brain emulation. The model developed for us by evolution is particularly well suited to promoting survival as a machine roughly 6 feet tall with two legs that needs to eat and reproduce; I don't know if you could get any further from a computer. I am essentially positing the ideal AI as a gigantic brain with an insatiable hunger radically different from our own instincts - the need to do research and development 24 hours a day. So I do think that we can inductively predict what, by our own definition, an intelligent AI would do because of game theory. We can look at what would be most advantageous to the system regardless of what it needs based on its survival and the instincts that I mentioned.
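The game-theory claim can be made concrete with the textbook example, the iterated prisoner's dilemma: a reciprocating strategy playing against its own kind out-earns mutual defection, which is the sense in which cooperation is the rational long-run policy for any long-lived agent, silicon or otherwise. The payoff numbers below are the conventional ones; everything else is an illustrative sketch, not a model of any particular AI.

```python
# Iterated prisoner's dilemma: reciprocity (tit-for-tat) vs. pure defection.

PAYOFF = {  # (my move, their move) -> my score; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play(strat_a, strat_b, rounds=100):
    """Return total scores for two strategies over repeated play."""
    score_a = score_b = 0
    last_a = last_b = "C"  # both treated as having opened cooperatively
    for _ in range(rounds):
        move_a, move_b = strat_a(last_b), strat_b(last_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        last_a, last_b = move_a, move_b
    return score_a, score_b

tit_for_tat = lambda their_last: their_last  # copy the opponent's last move
always_defect = lambda their_last: "D"

coop, _ = play(tit_for_tat, tit_for_tat)        # mutual reciprocity
defect, _ = play(always_defect, always_defect)  # mutual defection
```

Over 100 rounds, mutual reciprocity earns 300 points to mutual defection's 100, so a system optimizing its own long-run payoff has a standing reason to cooperate.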

I am certain that we have been tackling the AI problem the wrong way. However, we have learned several things which I believe bring us several whits closer. Here is my rundown of what I think is the most promising conglomeration of technology:

1. Artificial neural networks - Wonderfully effective at pattern recognition, emulating basic animal behavior, and so forth. Unfortunately they are about as inefficient to use in software as they are wonderful. However, somebody (http://www.recognetics.com/) fixed this. Yay chip manufacturers, but that's not the last piece, since there is certainly a basic "operating system" for any brain.

2. Cellular Automata - I would bet that there is a certain rule set that will give us the plasticity needed to produce effective interconnection, and one of my own goals is to find it. These are also mind numbingly simple to implement in hardware, which I think is doubly wonderful.

3. Evolutionary Algorithms - Clumsy, require a lot of tweaking to get pointed in the right direction. More useful in finding the solution than being the solution. I'll explain.
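As an aside on item 2, here is how mind-numbingly simple such a rule system really is: Wireworld itself fits in a few lines. The grid below is just a straight wire carrying one pulse, purely for illustration.

```python
# Minimal Wireworld cellular automaton.
# States: empty (.), conductor (c), electron head (H), electron tail (t).
# Rules: head -> tail; tail -> conductor; conductor -> head iff exactly
# one or two of its eight neighbours are heads.

def step(grid):
    """Apply one Wireworld update to a grid given as a list of strings."""
    rows, cols = len(grid), len(grid[0])

    def heads_around(r, c):
        return sum(grid[r + dr][c + dc] == "H"
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr or dc) and 0 <= r + dr < rows and 0 <= c + dc < cols)

    new = []
    for r in range(rows):
        row = ""
        for c in range(cols):
            cell = grid[r][c]
            if cell == "H":
                row += "t"          # head becomes tail
            elif cell == "t":
                row += "c"          # tail becomes conductor
            elif cell == "c":
                row += "H" if heads_around(r, c) in (1, 2) else "c"
            else:
                row += "."          # empty stays empty
        new.append(row)
    return new

wire = ["tHcccc"]
wire = step(wire)   # the pulse moves one cell down the wire: ["ctHccc"]
```

The entire "physics" is those three rules, which is why implementing it directly in hardware is so attractive.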

I currently have two models floating around in my head which, to the best of my knowledge, haven't been thoroughly tested. One model is merely directed at developing the other.

The input and output of one or more artificial neurons (via io-1) is managed by a system of cellular automata designed to select the frequencies that are "most interesting" to the module, as well as the volume of each channel when broadcasting pulses. The rule system used for this is similar to Wireworld. Individual channels are isolated via Fourier transform, with multiple channels broadcast in the inverse fashion.

Initial AN and CA state is set via io-2. This initial state is to be determined by a massively parallel evolutionary algorithm designed to make the complete network of devices receptive to learning.

Once released into the electrolyte solution, input and output will be obtained using still-wired (via io-2) modules. Overall system isolation is achieved using a faraday cage. (Patent pending )
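To make the Fourier step concrete, here is a rough sketch of isolating two broadcast channels sharing one line. The sample rate and frequencies are arbitrary stand-ins, and numpy's FFT stands in for whatever would actually do the signal processing in hardware.

```python
import numpy as np

rate = 1000                      # samples per second (arbitrary)
t = np.arange(rate) / rate       # one second of signal

# Two "modules" broadcasting on 50 Hz and 120 Hz share the same wire:
line = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

# Fourier transform of the shared line; each channel shows up as a peak.
spectrum = np.abs(np.fft.rfft(line))
freqs = np.fft.rfftfreq(len(line), d=1 / rate)

# The two channels are recovered as the two largest spectral peaks:
top_two = sorted(float(f) for f in freqs[np.argsort(spectrum)[-2:]])
# top_two -> [50.0, 120.0]
```

Broadcasting would run the same machinery in reverse: synthesize each channel at its assigned frequency and sum them onto the line.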


qbg
Posts: 298
Joined: 2006-11-22

nigelTheBold wrote:
Over the past 40+ years of AI research, we are not one whit closer to AI than we were when we started.

I present Tesler's Theorem: "AI is whatever hasn't been done yet." AI research has accomplished some amazing things over the years; it's just that many people don't consider the accomplishments part of AI, hence the impression that AI isn't going anywhere.

"What right have you to condemn a murderer if you assume him necessary to "God's plan"? What logic can command the return of stolen property, or the branding of a thief, if the Almighty decreed it?"
-- The Economic Tendency of Freethought


nigelTheBold
atheist
Posts: 1868
Joined: 2008-01-25

qbg wrote:
nigelTheBold wrote:
Over the past 40+ years of AI research, we are not one whit closer to AI than we were when we started.
I present Tesler's Theorem: "AI is whatever hasn't been done yet." AI research has accomplished some amazing things over the years, it is just that many people don't consider the accomplishments as part of AI so hence the impression that AI isn't going anywhere.

That's true. I apologize if I've understated the accomplishments of AI research.

I was mostly referring to the advancement towards AI. There's a lot to come of AI research that benefits all of us. I doubt Google would exist without it, for instance.

It's kind of like the space program. People keep saying we get nothing from it, other than pure knowledge. (And what's wrong with that? I ask.) What they don't understand is that much of what we take for granted today was developed because of the space program -- if not directly, then indirectly.

There are two things I want: a smart computer (like Sigmund von Shrink, from Gateway), and flying cars. WHERE'S MY FRICKIN' FLYING CAR?!?!?!? Oh, and lifelike hot fembots. They don't have to have AI themselves. Just fembots.

Can we use game theory to predict that lifelike fembots will be hot? I bet we can.

"Yes, I seriously believe that consciousness is a product of a natural process. I find that the neuroscientists, psychologists, and philosophers who proceed from that premise are the ones who are actually making useful contributions to our understanding of the mind." - PZ Myers


inspectormustard
atheist
Posts: 537
Joined: 2006-11-21

nigelTheBold wrote:

Can we use game theory to predict that lifelike fembots will be hot? I bet we can.

Yes and no. It seems as though the more realistic we make our gynoids the more eerie they get.

 

  This one is really creepy:  


deludedgod
Rational VIP! | Scientist | Deluded God
Posts: 3221
Joined: 2007-01-28

Quote:

Also, their life-cycle isn't as "immortal" in practice as it is in theory.

Actually, I was just thinking...

Let us suppose that we managed one day to emulate an organic, dynamic system based on a 3D computational network with the capacity to respond to changing inputs at rates measured in milliseconds, and managed to link this system to a set of perceptive functions of sufficient technology that their inputs would induce plasticity changes in the ANN itself, so that the system could respond to countless, constantly changing internal and external inputs by physically altering its connectivity and structure (and then there is a new problem at hand: the artificial equivalent of the generation of new neurons that reflects the learning process in young children, and of the deletion of preexisting but unused connections to make way for new ones).

I doubt that such a system would be able to maintain "immortality". In principle it would have the same function and workings as a biological system, merely emulated with our own materials. Hence it would be prone to precisely the same degenerations as a function of time. The same errors would accumulate, and in the absence of controlling factors the ANN would start to generate corrupt dead-ends and feedback loops. Any system based on the functional dynamism of organic machinery will follow a similar pattern. Ours is one of rapidly accelerating capability, followed by a levelling off, then a long, steady decline, followed by a fast drop. Although the biological answer to the question "why do we die?" is not fully understood, emulating the system using different materials isn't likely to solve any problems.

"Physical reality" isn't some arbitrary demarcation. It is defined in terms of what we can systematically investigate, directly or not, by means of our senses. It is preposterous to assert that the process of systematic scientific reasoning arbitrarily excludes "non-physical explanations" because the very notion of "non-physical explanation" is contradictory.

-Me

Books about atheism


HisWillness
atheist | Rational VIP!
Posts: 4100
Joined: 2008-02-21

deludedgod wrote:

I doubt that such a system would be able to maintain "immortality". In principle it would have the same function and working as a biological system, except emulated with our own materials. Hence it would be prone to precisely the same degenerations as a function of time. The same errors would accumulate, and in the absence of controlling factors, the ANN will start to generate corrupt dead-ends and feedback loops. Any system which is based on the functional dynamism of organic machinery will follow a similar pattern. Our pattern is one of a rapidly accelerating increase in capability, followed by a levelling off, then a long, steady decline, followed by a fast drop. Although the biological answer to the question "why do we die" is not fully understood, emulating the system merely using different materials, isn't likely to solve any problems.

Immortality would be anathema to biological systems, or to systems that sought to emulate them, because death is what makes adaptation possible. It IS the mechanism of adaptation. Those that die shape the population more than those that live.

But I'd still say that computers, regardless of how we endeavour to give them "instincts" for survival, will still have a very serious problem on their hands: I can regenerate by eating an apple. They have to mine, ship, refine, ship, press, manufacture, ship and install. It's just no contest. The use of energy there far outstrips any entity's ability to keep up. We're not just "support staff" for them, they're completely non-existent without an industrial society, in the same way that we'd be helpless outside of a biosphere. But an industrial society is much less robust than a biosphere.

Saint Will: no gyration without funkstification.
fabulae! nil satis firmi video quam ob rem accipere hunc mi expediat metum. - Terence


inspectormustard
atheist
Posts: 537
Joined: 2006-11-21

HisWillness wrote:

deludedgod wrote:

I doubt that such a system would be able to maintain "immortality". In principle it would have the same function and working as a biological system, except emulated with our own materials. Hence it would be prone to precisely the same degenerations as a function of time. The same errors would accumulate, and in the absence of controlling factors, the ANN will start to generate corrupt dead-ends and feedback loops. Any system which is based on the functional dynamism of organic machinery will follow a similar pattern. Our pattern is one of a rapidly accelerating increase in capability, followed by a levelling off, then a long, steady decline, followed by a fast drop. Although the biological answer to the question "why do we die" is not fully understood, emulating the system merely using different materials, isn't likely to solve any problems.

Immortality would be anathema to biological systems, or to systems that sought to emulate them, because death is what makes adaptation possible. It IS the mechanism of adaptation. Those that die shape the population more than those that live.

But I'd still say that computers, regardless of how we endeavour to give them "instincts" for survival, will still have a very serious problem on their hands: I can regenerate by eating an apple. They have to mine, ship, refine, ship, press, manufacture, ship and install. It's just no contest. The use of energy there far outstrips any entity's ability to keep up. We're not just "support staff" for them, they're completely non-existent without an industrial society, in the same way that we'd be helpless outside of a biosphere. But an industrial society is much less robust than a biosphere.

Adding to this, just for anyone who might drop through and wonder, there is a further maintenance problem if you want to use robots to do all the mining, shipping, etc.. An entire species of robots would be vastly more difficult to maintain, which is again why I think that there are applications for robots (biologically hazardous jobs), robot-brains (research), and humans (everything else).

 

Now, on the topic of effective immortality: I'm sure I underestimated the resource draw of a self-regenerating computer, but this is unimportant, since that was my "best case scenario" reason why we don't need self-regenerating computers. Like I said, half of the problem is solved through hot-swappable repair. Next, once we have a working model of intelligence, we have to modularize it so that it doesn't fail massively every five years or so. I would expect any repair or upgrade work to proceed much as it does on a server mainframe: components would be switched out before they had a chance to break, all while maintaining continuity of service. This would be the equivalent of switching out small chunks of our motor cortex for new ones every 2 years or so, small chunks of Broca's area every 6, etc. Whatever schedule best suits caution, with each swap small enough in function that the overall decrease in quality goes more or less unnoticed.