I think I found a way for the infinite consciousness to be falsifiable

Cpt_pineapple
atheist
Posts: 5492
Joined: 2007-04-12
I think I found a way for the infinite consciousness to be falsifiable

The Black Hole information paradox.

Basically, the Hawking radiation emitted by a black hole is thermal and independent of what formed the hole, and hence cannot carry information about what fell into it.

Now, if data/information can be destroyed, then the infinite consciousness is in a rut.

However, the paradox could be resolved by transferring the information to other universes (via Einstein-Rosen bridges); if that is the case, then I'm still good to go. Once the LHC is turned on, we may get better insight into the nature of black holes (since it may produce miniature ones).
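For a rough sense of scale, here is a minimal back-of-the-envelope sketch using the standard Hawking temperature and evaporation-time formulas (the script and the masses chosen are my own illustrative assumptions, not anything from the LHC program):

import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
C    = 2.99792458e8      # speed of light, m/s
G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
K_B  = 1.380649e-23      # Boltzmann constant, J/K

def hawking_temperature(mass_kg):
    # T = hbar c^3 / (8 pi G M k_B): hotter as the hole gets lighter
    return HBAR * C**3 / (8 * math.pi * G * mass_kg * K_B)

def evaporation_time(mass_kg):
    # t = 5120 pi G^2 M^3 / (hbar c^4): lifetime under pure Hawking emission
    return 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4)

for label, m in [("solar-mass hole", 1.989e30), ("~1 TeV micro hole", 1.783e-24)]:
    print(label, hawking_temperature(m), "K,", evaporation_time(m), "s")

A solar-mass hole radiates at about 6e-8 K and outlives the universe many times over, while a TeV-scale micro hole would evaporate in something like 10^-88 seconds, which is why any LHC-made holes would be both harmless and very hard to catch in the act.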

 

Well, what do you think? 


Cpt_pineapple
atheist
Posts: 5492
Joined: 2007-04-12
inspectormustard

inspectormustard wrote:
Cpt_pineapple wrote:

Is that your definition of consciousness? The ability to adapt one's learning process?

 

My Oxford Dictionary states that consciousness is the state of being conscious as well as one's awareness or perception of something. Typically sentience, or the ability to feel emotively about things, is implied as well. Producing something that can learn is pretty easy; producing something that can learn how to learn (metalearn) is harder than hard. Adding emotional states is trivial and usually neglected, as they are not useful for solving anything but cognitive problems, and prompting them is difficult (though well understood) to implement.

While I don't disagree per se, our level of consciousness allows for emotional feelings. I don't think consciousness can be pigeonholed into one level.

According to evolution, our level of consciousness developed in stages, from the first single-celled organisms (which I would consider conscious, just not on our level) to modern-day humans, who can feel emotions and are sentient.
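To put the learn-versus-metalearn distinction from the quote above in concrete terms, here is a toy sketch (my own illustrative example with made-up numbers): a plain learner updates its estimate with a fixed rule, while a metalearner also revises the rule itself based on how learning is going.

def learner(target, steps=50, lr=0.1):
    # Plain learner: fixed learning rate; only the estimate changes.
    estimate = 0.0
    for _ in range(steps):
        error = estimate - target
        estimate -= lr * error
    return estimate

def metalearner(target, steps=50, lr=0.1):
    # Metalearner: also adjusts HOW it learns (the learning rate itself).
    estimate, prev_error = 0.0, None
    for _ in range(steps):
        error = estimate - target
        if prev_error is not None:
            # Same-signed errors mean progress is too slow: grow the step.
            # A sign flip means we overshot: shrink it.
            lr = lr * 1.1 if error * prev_error > 0 else lr * 0.5
        estimate -= lr * error
        prev_error = error
    return estimate

print(learner(42.0), metalearner(42.0))

The plain learner can only ever follow the rule it was given; the metalearner is rewriting its own rule as it goes, which is the "harder than hard" part.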

 

 

Quote:

Cpt_pineapple wrote:
Quote:

Even something as "amazing" as the ability to learn does not generate consciousness; most animals are not conscious. Some are barely conscious. What sets us apart is our internal simulative and predictive abilities, and our inclusion of ourselves in those simulations and predictions.

I disagree. I think all animals are conscious, just not at our level.

If it is your opinion that at least everything up to mice is even semi-conscious, then I would direct you to the work of the scientists who recently completed a simulation of the equivalent of half a mouse's brain at one-tenth speed. The simulation will grow as technology progresses. If that is consciousness, then we're already playing god and may begin to ponder the ghost-in-the-machine questions.

Nope, because the computer program still needs input. The computer won't develop on its own; it needs researchers and such (conscious entities) to analyze the data for it and compile it into an algorithm or whatever they're using.

I don't think the computer has a choice. It won't independently reject programs that it would otherwise accept. That would put the computer on our level of consciousness. (Unless of course you're talking about Windows Vista >_>.) Mice do have choices.
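For reference, large brain simulations like the half-mouse one mentioned above are assembled from simple spiking-neuron models. Here is a minimal leaky integrate-and-fire neuron as an illustration (my own sketch with made-up parameters, not the actual mouse-cortex code):

def simulate_lif(input_current, t_max=0.1, dt=1e-4):
    # Leaky integrate-and-fire: voltage leaks toward rest, input drives it
    # up, and crossing threshold produces a spike followed by a reset.
    v_rest, v_thresh, v_reset = -70e-3, -54e-3, -70e-3   # volts
    tau_m, r_m = 20e-3, 1e7                              # time constant (s), resistance (ohm)
    v, spikes, t = v_rest, [], 0.0
    while t < t_max:
        v += (-(v - v_rest) + r_m * input_current) / tau_m * dt
        if v >= v_thresh:
            spikes.append(t)   # record the spike time
            v = v_reset
        t += dt
    return spikes

print(len(simulate_lif(2e-9)), "spikes in 100 ms at 2 nA of input")

Wire up millions of these with realistic connectivity and you have roughly what those researchers ran; whether any of it amounts to consciousness is exactly what's in dispute here.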

 

 

Quote:

Seeing as consciousness did not unexpectedly arise, we must wait until technology catches up with our cognitive conjecture. Had it shown up in such a model, it would have been a great surprise: there would have been a media explosion, politics would be in an uproar (do androids have human rights?), and we'd have to figure out why it broke the model.

This reminds me of the book 'Do Androids Dream of Electric Sheep?' and the TV show Andromeda. In both, androids are pretty much indistinguishable from the human population.

I do doubt that technology will ever produce those kinds of results.

I'm not ruling out the possibility, of course, but unless I missed some dramatic revolution, I'll hold on to my doubts. I did hear Japan is making progress, but nowhere near android-level results.


inspectormustard
atheist
Posts: 537
Joined: 2006-11-21

Cpt_pineapple wrote:

I don't think the computer has a choice. It won't independently reject programs that it would otherwise accept. That would put the computer on our level of consciousness. (Unless of course you're talking about Windows Vista >_>.) Mice do have choices.

Yes; you could think of a conscious system as a type of operating system. However, instead of running programs the job of this operating system is to make predictions about the world and test them. Analogues of anger, sadness, and frustration would develop as those expectations are violated. In order to relieve the tension, the operating system must adjust its predictive methods in order to produce better predictions. If its predictions are accurate, the machine could be said to be happy.
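A toy version of that loop might look like this (my own illustrative sketch; treating "frustration" as accumulated prediction error is an assumption for the example, not a claim about real cognition):

import random

class PredictiveAgent:
    def __init__(self):
        self.model = 0.0        # current belief about the world's signal
        self.frustration = 0.0  # running measure of violated expectations

    def step(self, observation):
        error = observation - self.model
        # Violated expectations raise frustration; met ones let it decay.
        self.frustration = 0.9 * self.frustration + 0.1 * abs(error)
        # Relieve the tension: adjust the predictive model toward the world.
        self.model += 0.2 * error

    def happy(self):
        return self.frustration < 0.5   # accurate prediction ~ "happiness"

agent = PredictiveAgent()
for _ in range(200):
    agent.step(3.0 + random.gauss(0, 0.1))  # a noisy but learnable world
print(agent.model, agent.frustration, agent.happy())

After a couple hundred observations the model settles near the true signal, frustration decays, and the agent is "happy"; change the world abruptly and frustration spikes until the model catches up.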

As an operating system, there is no superior chain of command to return to. The program doesn't stop thinking until it is turned off, crashes, or its hardware breaks. If its memory is lost, it could be said to have died. This is a philosophical matter with no real truth to be found in it; ethically, we shouldn't wipe a sentient computer's memory. But I digress; the model is equivalent in function to cognition.

If the operating system were coded into the hardware then it would not be a computer, having no capacity to run programs or perform calculations at the whim of its operators.

Cpt_pineapple wrote:
This reminds me of the book 'Do Androids Dream of Electric Sheep?' and the TV show Andromeda. In both, androids are pretty much indistinguishable from the human population. 

I do doubt that technology will ever produce those kinds of results.

You're right. Unless the goal is to produce human analogues, it is unlikely that intelligences created by us will have any semblance of human features. As I mentioned, emotions are created by the pressures which drive us to change our thinking. It may be, and this is the craziest thing, that we will produce machines with a far greater range of emotions than we have. It would then be up to the machine to describe these emotions to us, something which could be a social problem.

With the advances we are making in artificial intelligence, saying that we will never achieve something like that is akin to saying that we would never find a cheaper light source than incandescent lighting, or that we would never split the atom. It's a bit of a leap for our time period, but far from an inconceivable design.