If the intelligence of the human mind is the product of a physical brain, and thus its actions are dictated by physical laws, then it is reasonable to conclude that an artificial mind of equal physical capacity would necessarily be intelligent. This is the principle of multiple realizability (1). For any process that produces an effect, there is no reason to assume that said process is the only means of producing that same effect. That a form produces or infers a specific action does not logically imply that no other form could produce the same action. While form must play a role in considering artificial intelligence, specific form need not be considered.
If the activities of the mind are not purely physical, and the emergent characteristics of the mind are not solely accounted for by those activities, then it would be unreasonable to conclude that an artificial mind would necessarily be intelligent. If the latter case is true, it may be that, while some aspects of the mind are partly or purely immaterial, others, such as intelligence, are not. If a specific causal relationship can be found between every aspect of intelligence and a physical process, then the hypothesis of the immaterial mind would be falsified. The hypothesis is worthy of some consideration because it explains everything we have observed so far. However, even prior to falsification, the scientific community should not take the hypothesis seriously.
If we presume the existence of an immaterial mind, we do so in spite of a large number of implausible, impractical, and confusing implications. The existence of both a forelife and an afterlife is implied: if the mind is, at least in part, separate from the physical realm, then biological death should not eliminate it. But once we begin to discuss the details of the nonphysical, we find ourselves in the realm of pure speculation. Which parts of our mind are preserved and which are not? Where did the mind come from, and where does it go? These are only a few of the unanswerable questions. Using the principle of Ockham’s razor, we can remove rationalism and dualism from scientific inquiry because they raise more questions than they answer, and most of the questions they raise cannot be answered at all (2). So, if we disregard the idea of an immaterial mind, we must acknowledge that an artificial mind is at least possible.
Humanity has created machines that can outwardly replicate many aspects of thought and reason. For example, the chess-playing computer Deep Blue observes its circumstances, the chessboard, and from a pool of all possible moves chooses the one most likely to ensure victory. So, prompted by external circumstance, the machine acts to produce an effect. However, this short description is equally applicable to the rational process and to the falling of an anvil from the sky. An anvil, prompted by its external circumstance, acts to produce an effect. The action of the agent, i.e., of the machine, the anvil, or the person, is determined by the properties of that agent. The machine acts to ensure victory because it is programmed to do so. Deep Blue can no more deviate from this course of action of its own accord than an anvil can, of its own accord, deviate from descending from the sky. In other words, the pool of potential actions for both of these non-human agents is limited by the agent’s own physical form.
It is no longer appropriate to say that external circumstances alone determine an agent’s actions, but rather that conditions both internal and external to the agent dictate the course of events. However, the very same thing that has been said of the falling anvil and the Deep Blue computer can also be said of the human being. So, unless we are content to say that an anvil has intelligence, further distinctions must be made in order to distinguish intellect from everything else.
I think it is important to make a distinction between indivisible and divisible properties. Gravity, the tendency of objects with mass to accelerate toward one another, is an indivisible property of matter because the only possible alterations to it are elimination and opposition. That objects might repel each other is the opposite of gravity, and therefore cannot be termed gravity. That objects might have no effect on each other at all is a simple lack of gravity, and therefore it too cannot be defined as gravity. Any alteration of the anvil itself that would prevent it from acting consistently with the laws of gravity would change it so completely that it would no longer be possible to call it an anvil. The ability to reason is also an indivisible property. Like gravity, reason is a process that mediates the relationship between causative conditions and resulting action or inaction. However, the ability to reason toward survival, such as it might be in a human being, and the ability to reason toward victory, such as it is for Deep Blue, are divisible. We can divide the application of reason with the intent of victory into two concepts: the first is invariable and indivisible reason, and the second is the potentially variable intention of the act. For the anvil, several hundred feet of thin air between it and the ground can lead to only one action and only one intention, i.e., compliance with the law of gravity. Any alteration to the intention of the act would force us either to alter the anvil so utterly that it could no longer be called an anvil, or to change or contradict the law of gravity. The intention of the act in the context of reason, though, can be varied without altering the property of reason. For Deep Blue, victory is the standard that the application of reason is meant to accomplish.
It is entirely possible that the intention of Deep Blue’s reasoning could be altered, say, to the intention of loss, while simultaneously preserving its ability to apply reason and play a game of chess, however poorly.
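This divisibility of reason and intention can be sketched in code. The following is a minimal illustration, not Deep Blue’s actual algorithm; the moves, scores, and function names here are all hypothetical:

```python
# A toy sketch of the claim above: the selection procedure (reason)
# is the same under both intentions; only the standard it serves changes.

def choose_move(moves, evaluate, intention):
    """Apply the same selection procedure under different intentions."""
    if intention == "victory":
        return max(moves, key=evaluate)   # pick the strongest move
    elif intention == "loss":
        return min(moves, key=evaluate)   # same procedure, opposite aim
    raise ValueError("unknown intention")

# Hypothetical evaluation: higher score = better for the player.
scores = {"e4": 0.6, "a3": 0.1, "Qh5": 0.4}
evaluate = scores.get

print(choose_move(list(scores), evaluate, "victory"))  # -> e4
print(choose_move(list(scores), evaluate, "loss"))     # -> a3
```

Reversing the intention leaves the reasoning machinery untouched: the agent still surveys the pool of moves and evaluates each one; it merely applies the opposite standard, and so plays to lose.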
Furthermore, the anvil’s actions are determined completely by its unchanging internal characteristics. Deep Blue, however, was capable of learning by analyzing possible circumstances and altering its reactions to certain stimuli. The programmers gave Deep Blue an intention and the potential to apply reason. Upon analyzing numerous chess games, Deep Blue altered its strategy in order to achieve success. With new information, Deep Blue was capable of self-directed changes to how it might react in the future. No alteration of the properties of an anvil can alter the mechanism or the efficiency with which it succeeds in responding to gravity. It must succeed, by virtue of its very nature, because success is the only option. While Deep Blue might be enslaved to the goal of success, it is still realistically possible for it to fail to achieve that goal. The anvil cannot fail or be made to fail, because gravity and matter are inseparable.
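This kind of self-directed change can likewise be sketched. The following toy is purely illustrative, not Deep Blue’s actual training method; the openings, outcomes, and update rule are hypothetical:

```python
# A toy sketch of self-directed change: after analyzing game outcomes,
# the agent revises the value it assigns to each opening, and so its
# future reaction to the same stimulus differs from its past one.

def update_value(old, outcome, rate=0.5):
    """Move the stored value toward the observed outcome (1 = win, 0 = loss)."""
    return old + rate * (outcome - old)

values = {"e4": 0.5, "a3": 0.5}                      # initial, uninformed estimates
analyzed_games = [("e4", 1), ("e4", 1), ("a3", 0)]   # hypothetical results

for opening, outcome in analyzed_games:
    values[opening] = update_value(values[opening], outcome)

# The agent now prefers "e4" on its own analysis; an anvil has no
# analogous mechanism for revising its response to gravity.
best = max(values, key=values.get)
```

The anvil’s “reaction function” is fixed by its nature; the point of the sketch is that the machine’s reaction function is itself a changeable internal state.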
At this point, I still do not think it is reasonable to conclude that Deep Blue is intelligent. Our intelligence might be an emergent byproduct of our mind’s processes, conferring no particular advantage, or it might be the intended result of specific biological agents. The latter case seems more likely because our consciousness seems to disregard the majority of our brain activity and seems restricted to certain regions of the brain. This suggests that because Deep Blue has not been intentionally programmed with a consciousness, it should not have one. Until we understand the causal mechanism of consciousness, artificial intelligence will likely elude us.
Even if we can replicate all the intelligent processes of the mind, it is still difficult to determine whether our efforts have succeeded. The Turing test, for example, identifies intelligence based on the symptoms of intelligence (1). Multiple realizability, as discussed earlier, tells us that a single symptom could be caused by either intelligent or non-intelligent processes. Unless we understand the specific causal relationship responsible, we have no way of knowing whether our machine is artificially intelligent.
Lawhead, F. The Philosophical Journey: An Interactive Approach, 3rd ed.; McGraw Hill: New York, NY, 2006; pp 197–240.