The Metaphysics of Artificial Intelligence
----------------------------------------
By Jim J. McCrea
First published in the Iron Warrior (June 1987), the engineering paper of the University of Waterloo.
----------------------------------------
Republished with minor modifications
----------------------------------------
Many concerns of a philosophical nature are now being raised about artificial intelligence (AI). It is said that AI is replacing the human mind. The fears expressed are, at best, that when AI becomes sufficiently advanced a large pool of unemployed people will be created, and, at worst, that the computer will take over. However, it is my belief that these fears are completely groundless.
Firstly, it is a sheer impossibility for the computer ever to equal, let alone surpass, the ability of the human intellect. Artificial intelligence, no matter what level of sophistication it attains, will always be artificial. Although it can mimic many operations of the human mind, it is still only a finite approximation of an infinite process. It cannot equal the human mind (statically) in its complexity or (dynamically) in its operation.
It cannot equal the human mind in its complexity for the following reason: we can see that to construct anything, whether it be by man or machine, a coordinating principle is required. In the case of a man, that coordinating principle is his intellect. The intellect of man directs the assembly of basic components which confer on a final product a specific nature or essence (whether it be hardware or software). If his intellect is the efficient cause of this essence, the essence must be entirely contained within his intellect prior to the assembly of the product. One cannot give what one does not possess. Moreover, the essence of the most sophisticated device he might design may only be a small part of the complex web of ideas and relations which is the totality of his mind (arbitrarily small, in fact). Therefore, we can conclude that the human mind is not only more complex than any system that can be constructed, but infinitely more complex (complex only with respect to the ideas and relations contained within the intellect; as a substance the intellect is a unity).
We can see that AI cannot equal the human mind in its operation in light of the principle of causality: first with respect to the final cause, and then with respect to the efficient cause (in the terminology of philosophy, the final cause is the goal for the sake of which an event takes place, and the efficient cause is that which makes the event happen).
With respect to final cause, suppose one is given a concrete value to calculate; say it is the electrical impedance of a certain component. The goal of having this knowledge may be the increased cost effectiveness of a certain product, such as a stereo receiver. This would be a first order final cause, because it is in immediate relation to the knowledge gained by the calculation. The second order final cause would be, perhaps, increased sales of this product, because it is the goal of the first order final cause. Similarly, a third order final cause could be increased profits for the company producing the product, and so on. There may be many more orders in the chain, but we cannot proceed to infinity. Since the purpose of the computer is to serve humans, this series must ultimately terminate at a manifold which is a set of human needs and wants (all human activity, whether assisted by machine or not, has happiness as its ultimate goal, says Aristotle). The final cause of all computation must be this manifold of human needs or wants; therefore, no matter how advanced AI becomes, it must always be subordinated to human control.
With respect to efficient cause, let us again suppose we are given the impedance of an electrical component to compute. That which immediately allows this value to be known is an algebraic expression from which it is computed. Because it is immediate, it can be called a first order efficient cause. A second order efficient cause may be an operation in calculus that gives us that algebraic expression. We may continue in this series, going to ever-increasing levels of abstraction; but again, we cannot go to infinity in the number of terms of the series. It must terminate; but where? What would be the ultimate efficient cause of all computation?
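For concreteness (the particular circuit here is only an illustration; the argument does not depend on it), suppose the component is a series RLC circuit driven at angular frequency \omega. The first order efficient cause is then the algebraic expression

    |Z| = \sqrt{ R^{2} + \left( \omega L - \frac{1}{\omega C} \right)^{2} }

and a second order efficient cause is the calculus from which that expression is derived, namely the defining relations v_L = L \, di/dt and i_C = C \, dv/dt analysed for a sinusoidal source.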
This would be a manifold, again, which I call the axioms of pure reason (in traditional philosophy these are the first principles of thought and being). These are acquired necessarily and invariably by the human mind. Their acquisition is a natural function of the human intellect. One of these, which is ultimate because it inheres in all the others, is the law of non-contradiction. It states that something cannot both be and not be under the same relation at the same time. Another very fundamental axiom is the law of identity, which states that a thing is what it is: A is A. A third is the syllogism, which states that if all of a group x has the property e, and a is a member of x, then a has the property e.
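Stated in the notation of modern logic, these three axioms take roughly the following form:

    \neg (P \land \neg P)                                      % law of non-contradiction
    A = A                                                      % law of identity
    [(\forall y)(y \in x \to e(y)) \land (a \in x)] \to e(a)   % syllogism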
These axioms of pure reason are not something which the intellect manufactures, but are acquired because the mind mirrors the logical aspects of reality. These axioms cannot be computed, but are those on which all computation ultimately depends. They cannot be computed because the act of acquiring them is a function of understanding. Understanding is the act of perceiving an idea and, in the same act of cognition, knowing that one perceives this idea. Understanding is an exclusively human activity (in the material world). While in a machine one part can reflect upon another, only the human mind has the ability to totally reflect upon itself. Thus, because the ultimate efficient cause of all computation must be the axioms of pure reason, AI must again be subordinated to human control.
The example given above is the calculation of a specific numerical parameter, but the above-mentioned concepts are also valid for AI which deals with non-numeric computation involving expert systems and natural language synthesis/analysis. As AI advances, it will be able to perform increasingly subtle and complex operations. The effect will not be to dehumanize, as many people fear, but to free the mind from drudgery so that it can perform actions which are more and more human.
** End note (written in 2014) - At present, large software projects may involve the work of thousands of computer programmers. The work of these programmers is not cobbled together haphazardly; rather, there is a profound coordination behind it. The work is arranged in a hierarchy. At the bottom level are the programmers writing all the subroutines that take care of details. On the next level are programmers who use these subroutines to write subroutines for more general functions. At the top is the programmer who integrates those subroutines to write the program that performs the AI function needed. The programmer at the top needs to know the *essence* of the program (the essence is that which makes it what it is). Now this essence is only a part of the sum total of relations and notions in his mind, so any AI program that he writes must be significantly inferior to his mind. That is why AI can never be superior to the human mind.
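As a minimal sketch of such a hierarchy (the language, the function names, and the trivial task are chosen purely for illustration, not taken from any actual project), a layered program might look like this:

    # Illustrative sketch only: the task and the names are hypothetical.
    # Bottom level: subroutines that take care of details.
    def read_raw_value(sample):
        """Convert one raw text sample into a number."""
        return float(sample)

    def average(values):
        """Arithmetic mean of a list of numbers."""
        return sum(values) / len(values)

    # Middle level: a more general subroutine built from the detailed ones.
    def smoothed_reading(samples):
        """Combine the low-level subroutines into a more general function."""
        return average([read_raw_value(s) for s in samples])

    # Top level: the program that integrates the levels below it.
    # Only this level needs to hold the essence of the whole program in view.
    def main():
        samples = ["3.0", "3.2", "2.9", "3.1"]
        print("Smoothed reading:", smoothed_reading(samples))

    if __name__ == "__main__":
        main()

The point of the sketch is only the shape of the coordination: detail work at the bottom, integration of the whole at the top.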
----------------------------------------