In analyzing functionalism, I will argue for the thesis as presented. I find that functionalism, especially when set alongside the other theories addressing the mind/body problem, offers a logically coherent account of the mind. However, when the theory is applied to the idea of artificial intelligence and the prospect of machines being able to think, I will argue that functionalism does not show that artificial intelligence is a plausible idea. This paper will first discuss functionalism in detail and compare it to both the behaviorist theory and the identity theory.

I will then relate functionalism to the idea of artificial intelligence by means of the concept of realization, and argue that functionalism does not provide a sufficient basis for an argument in favor of artificial intelligence. Through criticizing the other theories, functionalism has been able to adapt itself into a theory that makes up for what the others lack, which in turn makes it a strong theory. Functionalism keeps the strengths of behaviorism and identity theory while also making additions, thereby creating a new theory that enjoys considerable popularity in today's philosophy of mind.

Although functionalism takes different forms, all of these forms aim at resolving the mind/body problem. In considering mental states, the variations of functionalism all address the question of what determines these mental states. Ned Block, in discussing functionalism, describes three different types: decompositional functionalism (analytic functionalism), computation-representation functionalism, and metaphysical functionalism.

Decompositional/analytic functionalism refers to a research strategy that puts heavy weight on decomposing a system into its components and then analyzing the system in terms of these functional units. Analytic functionalism, more specifically, is a branch of functionalism that gives mental terms their meanings in terms of their causal relations. For example, in looking at pain, its meaning is determined by looking at the stimuli that cause pain, the responses pain produces, and the changes in mental state pain brings about. Thus, according to analytic functionalism, a mental state is the unique thing that satisfies the corresponding causal role. Computation-representation functionalism (also known as machine functionalism) is a type of functionalism that places more emphasis on the analogy of the "computer-as-mind".

In this type of functionalism, endorsed by Hilary Putnam, mental processes can be decomposed into simple, computer-like processes. Machine functionalism holds that mental states are functional states of a person under an appropriate description, where the description refers to a machine table that specifies, for each internal state, how sensory inputs (stimuli) lead to outputs (behavior). This means that whatever is experienced by an individual, pain for example, is experienced because that person satisfies a description whose table includes pain states. Thirdly, metaphysical functionalism is the form that advances the hypothesis that mental states simply are functional states. This type of functionalism concerns the causal relationships among the inputs, outputs, and other mental states of a system. The essence of functionalism lies in how it defines a mental state.
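
To make the notion of a machine table concrete, the sketch below lays out a toy table in which the state and stimulus names are purely illustrative assumptions, not drawn from Putnam; the point is only that each pairing of an internal state with a sensory input determines an output and a next state.

```python
# A minimal sketch of a machine table, in the spirit of machine functionalism.
# The state names ("CALM", "PAIN") and stimuli below are illustrative
# assumptions. Each entry maps (current internal state, sensory input)
# to (behavioral output, next internal state).
MACHINE_TABLE = {
    ("CALM", "tissue damage"): ("wince", "PAIN"),
    ("CALM", "aspirin"):       ("do nothing", "CALM"),
    ("PAIN", "tissue damage"): ("cry out", "PAIN"),
    ("PAIN", "aspirin"):       ("relax", "CALM"),
}

def step(state, stimulus):
    """Given the current state and a stimulus, return (output, next state)."""
    return MACHINE_TABLE[(state, stimulus)]

# Example: a system in the CALM state receiving tissue damage winces
# and transitions into the PAIN state.
output, state = step("CALM", "tissue damage")
print(output, state)  # wince PAIN
```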

One cannot simply make reference to behavior, or even to the relation between stimuli and behavior, in defining a mental state, but must go further and include other mental states. The functionalist claims that mental states consist in their causal relations to one another as well as to sensory inputs and behavioral outputs. The functionalist holds that mental states are translatable into functional states of the brain; however, functionalists would not agree that a description of the physical states of the brain is enough to understand what is occurring in terms of mental states.

With this in mind, the functionalist theory elaborates further by drawing a distinction as to what depends on what. In drawing a parallel to computers, we can say that the functions of a computer system depend upon the computer hardware. In the same sense, when viewing the brain, the functionalist would maintain that mental states are dependent upon brain activities.

This view depicts the mind as a complete functional system in which the total set of possible functional (mental) states interacts with inputs, allowing for modification through further internal interaction and eventually leading to behavioral outputs. Functionalism takes elements from both the behaviorist theory and the identity theory in formulating its own account. Like the other two theories, functionalism is generally considered a materialist theory, in that mental states can be reduced to material things. However, the point at which this theory differs from the others is predominantly in its definition of mental states.

The main difference between functionalism and the identity theory is that functionalism does not hold brain states to be mental states, whereas identity theory equates mental states with brain states. The functionalist interjects here and asserts that this aspect of identity theory is flawed, because making a statement about neurological processes, such as "The C-fibers in my brain are currently firing," is not the same as making the statement "I am in pain." Thus, it cannot be concluded that brain states and mental states are one and the same, since making a statement about one's brain state does not necessarily amount to making a statement about one's mental state. Hence, something else must be present beyond neurological processes. Behaviorism and functionalism differ in that behaviorism apparently lacks any explanation of mental states. The only way in which behaviorism accounts for the mind is through observable behavior, maintaining that mental states are behavioral states.

The problem here is that different behaviors can result from identical stimuli and that various stimuli can produce the same behaviors. In asking a behaviorist what may account for these differences in behavior, the response would likely be that different beliefs account for them. However, this leads to a contradiction within the theory, since beliefs cannot themselves be explained by means of overt behavior. Functionalism does maintain that brain states are responsible for mental states; however, functionalists do not hold, as identity theorists do, that the two are equivalent. Functionalists argue that neurological processes help determine mental states, which then lead to behavior. However, this does not mean that all mental states are brain states. This view mends the problems created by the other views, in that it incorporates both the neurological states and the behavioral states, filling in the gaps of each simultaneously.

A concept known as realization arises when looking at the computer analogy of functionalism, and it gives rise to the idea of artificial intelligence: the idea that it is possible for computers/machines to "think" in the same sense that people do. Calculations that are performed on a computer are transferable to other hardware systems; it is just a matter of having another system that can run the same program. Thus, a single machine table can be realized by different physical systems. Applying this concept to mental states and machines, it is hypothesized that a machine might one day be programmed to "think" like a human being; since the process of thinking would simply be viewed as a computer program able to run on different machines, it would just be a matter of finding the hardware able to run the program. This begins to outline the idea of artificial intelligence. Some people are avid believers in artificial intelligence.
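
The realization point can be illustrated with a short sketch: the same transition table is "run" by two deliberately different implementations, which stand in, hypothetically, for distinct physical substrates. The table entries, state labels, and class names here are illustrative assumptions.

```python
# One machine table, two different "realizers".
# The two classes below are hypothetical stand-ins for distinct physical
# substrates (say, neurons versus silicon); what they share is the
# functional organization given by the table.
TABLE = {
    ("S0", "input-a"): ("output-1", "S1"),
    ("S1", "input-a"): ("output-2", "S0"),
}

class RealizerOne:
    """Realizes the table by direct lookup."""
    def __init__(self):
        self.state = "S0"

    def receive(self, stimulus):
        output, self.state = TABLE[(self.state, stimulus)]
        return output

class RealizerTwo:
    """Realizes the same table with a different internal mechanism."""
    def __init__(self):
        self.flag = False  # False plays the role of S0, True the role of S1

    def receive(self, stimulus):
        if stimulus == "input-a":
            output = "output-2" if self.flag else "output-1"
            self.flag = not self.flag
            return output
        raise ValueError("undefined transition")

# Both realizers exhibit identical input/output behavior, which is the
# functionalist's point: one functional description, many physical systems.
for machine in (RealizerOne(), RealizerTwo()):
    print([machine.receive("input-a") for _ in range(4)])
```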

Their belief seems to rest on the idea that any possible outcome can arise from a computer, so long as it has the corresponding machine table. The strong artificial intelligence theorist seems to hold that machines can, after having enough experience, be programmed to make a decision in a situation they have never been in before, and to think like human beings, so long as they have been given the right program to do so. Although some forms of functionalism may suggest that this is possible (machine functionalism, for example), I will argue that it is not, and that artificial intelligence can never reach the point where computers can "think" as humans do. Human beings are able to bring emotions into situations and learn from experiences.

This crucial factor contributes to the way in which humans think and come to rationalize. Computers may be able to run under an established program and process their given inputs, thereby producing particular outputs. However, this process will always remain a routine compilation of what is input by an external source (in most cases a human, or in some cases another computer, which ultimately traces back to a human doing the input). Neither computers nor machines nor robots would be able to carry out the intricate process of rationalization, a component essential to thinking and unique to human beings. Another aspect that belongs essentially to human beings is the fact that we are able to make predictions based on very uncertain cues. Humans are able to draw on their surroundings, past experiences, and personal beliefs and values in order to assess particular situations in a non-systematic manner.

In addition to this, information reaches humans from the world in many different ways. Humans are able to come up with new methods and to bring creativity to the various situations they are in. In looking at functionalism, we can see that a specific input causes internal states to change and thus causes a behavioral output (as in Hilary Putnam's pop machine example). In this case, a specific input corresponds to a particular output. However, with humans, the same input will not necessarily lead to the same output in each circumstance. In principle, this concept of mental states as functional states can be applied to both humans and computers.
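
Putnam's pop machine is commonly reconstructed as a ten-cent soda machine accepting nickels and dimes; the exact pricing, coin names, and state labels below are assumptions for illustration. The sketch shows the point at issue: given the machine's current internal state, a specific input yields a specific output and a specific next state.

```python
# A reconstruction of Putnam's pop-machine example as a machine table.
# The pricing (ten cents), coin denominations, and state labels are
# illustrative assumptions; the point is that a specific input, given the
# current internal state, yields a determinate output and next state.
POP_MACHINE = {
    # State "A": nothing deposited yet; state "B": five cents deposited.
    ("A", "nickel"): ("nothing",              "B"),
    ("A", "dime"):   ("soda",                 "A"),
    ("B", "nickel"): ("soda",                 "A"),
    ("B", "dime"):   ("soda and nickel back", "A"),
}

def run(coins, state="A"):
    """Feed a sequence of coins into the machine and collect its outputs."""
    outputs = []
    for coin in coins:
        output, state = POP_MACHINE[(state, coin)]
        outputs.append(output)
    return outputs

print(run(["nickel", "nickel"]))  # ['nothing', 'soda']
print(run(["dime"]))              # ['soda']
print(run(["nickel", "dime"]))    # ['nothing', 'soda and nickel back']
```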

However, the inputs a computer receives are more limited, preventing computers from having access to the same variety of avenues that humans have. Humans are able to absorb information through many different avenues, including, but not limited to, emotions, arising ideas, and the acceptance of new, non-preexisting ideas, whereas the knowledge attained by machines is only an output of what is fed into them. How can there exist a machine that can think like this? A machine that is able to generate novel ideas, exercise creativity, or sporadically change its mind? The human mind seems to work in such a way that, in essence, nothing is routine or constant.

A perfect example of this is the change in people's beliefs and values regarding security after September 11. Overnight, our thinking, values, and judgments changed; a computer is not capable of this. The functionalist theory provides us with an outline in which a given input necessarily corresponds to a constant output. This is fine in the sense that computers are very useful for elaborate, routine tasks, for instance anything that requires a great deal of statistical output. By no means, though, is this equivalent to the workings of the human mind, which is a far more complex structure than inputs, internal mental (functional) states, and outputs.