Intelligence as We Know It

18 thoughts on “Intelligence as We Know It”

  1. This reads like a colloquial summary of John Searle’s work on the philosophy of mind, hiding behind a conclusion he would not draw. So I wonder if your use of the word “volition” is meant to be basically synonymous with the way Searle uses the words “understanding” or “intentionality,” because he does not reach the conclusion that it must necessarily be impossible for machines to be created with these traits. Searle simply argues that computation, as we program it now, does not seem likely to get us there.

    You initially show the same skepticism of our current technology in your opening paragraph, but then in the next paragraph you go much further. Regarding the creation of intelligence by humans, you say:

    “The problem with this idea, however, is that if so called intelligence were reducible to physical material, it would be so far removed from our understanding of intelligence that we would have no business calling it that—it would not be intelligence in so far as we conceive it as rational ability.”

    You support this sweeping claim with something similar to Searle’s famous “Chinese Room” argument, but I don’t see your basis for claiming that it must necessarily be impossible for us to manufacture volition or rational ability.

    1. If you notice, he introduces the idea that “there seem to be two essential elements to human intelligence that computers do not have: volition and the ability to ascribe value” but then explains them in reverse order. In order to be rational, discursive thought is required, which in turn requires will (which Ellison uses synonymously with volition). That which is the product of antecedent conditions has no will or volition, but is determined. If that’s all we are, we cannot know it (or anything else).

      1. I got that, but on what basis can we rule out the possibility that volition, too, can be manufactured? Is it because all manufacturing involves the use of antecedent conditions? That conclusion goes beyond the premises of his argument, which are all wedded to status quo computer programming. Forty years ago the same claim could be, and was, made about learning, but we can now program learning.

      2. I think that if volition cannot be manufactured because manufacturing requires working with antecedent conditions, then that applies to computer programming and any other application.
        As for computers “learning,” what exactly does that mean? That a computer can be programmed to respond to patterns? This seems to be an equivocation on what it means to learn. If knowledge requires volition, and learning (epistemically) means acquiring knowledge, then it is not strictly the case that learning can be programmed. A computer producing a certain output in response to a pattern of input may simulate “learning,” but it is not really learning.

      3. Your first sentence is correct: if we assume that volition cannot be manufactured, then of course that assumption would apply to computer programs as well. But on what basis would we make that assumption?

        The exact meaning of machine learning was first defined by Arthur Samuel in 1959 as giving computers “the ability to learn without being explicitly programmed.” A more formal definition that is used today was composed by Tom Mitchell in the 1990s. At the time, some thought this was impossible, making a similar extrapolation about computers always needing an antecedent condition of information, based solely on what computers were able to do at the time. In this case the prediction was that the only way a computer could ever respond to the existence of a fact was if the existence of that fact was programmed into the computer. On the matter of merely responding to inputs, there is a big difference between inputs programmed into the computer by its creator and inputs that the computer acquires on its own by observing the world.
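        Mitchell’s formal definition runs roughly: a program is said to learn from experience E with respect to a task T and a performance measure P if its performance at T, as measured by P, improves with E. To make that concrete, here is a minimal sketch in Python (a toy example of my own for illustration, not anything from Samuel’s or Mitchell’s actual programs, and the function names are mine) in which the classification rule is never explicitly written into the code but is induced from the examples:

        ```python
        # Toy illustration of Mitchell's definition:
        #   task T        = classifying points in the plane
        #   performance P = accuracy on labeled examples
        #   experience E  = the labeled examples themselves
        # The decision rule is never hard-coded; the weights are fit to data.

        def train_perceptron(examples, epochs=20, lr=0.1):
            """Learn a linear decision rule from ((x1, x2), label) pairs."""
            w, b = [0.0, 0.0], 0.0
            for _ in range(epochs):
                for (x1, x2), label in examples:
                    pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
                    error = label - pred  # experience drives each update
                    w[0] += lr * error * x1
                    w[1] += lr * error * x2
                    b += lr * error
            return w, b

        def accuracy(examples, w, b):
            """Performance measure P: fraction of examples classified correctly."""
            hits = sum(
                (1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0) == label
                for (x1, x2), label in examples
            )
            return hits / len(examples)

        # Points above the line x2 = x1 are labeled 1 -- a rule the program
        # is never told. Its accuracy at the task improves with the data.
        data = [((0, 1), 1), ((1, 2), 1), ((2, 3), 1),
                ((1, 0), 0), ((2, 1), 0), ((3, 2), 0)]
        w, b = train_perceptron(data)
        print(accuracy(data, w, b))  # reaches 1.0 on this toy data
        ```

        None of this implies volition, of course; the only claim is that the program’s performance at the task measurably improves with experience, which is all “machine learning” asserts.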

        How does the recognition of patterns equivocate on the word “learning”?

        Are you sure knowledge always requires volition? Again, you seem to assume that it does and then proceed. What basis is there to assume that all knowledge requires volition?

      4. I guess it depends on how you define knowledge. If knowledge is in some sense justified true belief, then when presented with data, the knower must decide if he/she/it thinks the data corresponds to that to which it refers. That itself requires volition. Even more so if presented with a potential defeater for the belief. Yet even with a computer, in what sense does it have “belief”? I can see how a computer can be designed to change how it responds according to a given set of input. But I think it does run into the “Chinese Room” problem. It is one thing to recognize a pattern; it is another thing to recognize meaning in a pattern.

      5. I asked several questions but am not sure which one you just now tried to answer. My guess is it was my last question: “What basis is there to assume that all knowledge requires volition?” By saying “it depends on how you define knowledge,” you simply beg the question as to why one would arbitrarily choose to define knowledge only as “some sense of justified true belief.” By that do you mean only the acquisition of normative knowledge constitutes learning? Would learning positive knowledge require volition?

        What do you mean by “a potential defeater for the belief”? Is that only an interlocutor holding a different normative belief? Or could it also include an interlocutor making a factual claim about some data the computer has never been exposed to before? Because in the positive case, we certainly do have programs today that can recognize what the interlocutor is referring to, look at the new data itself, respond accordingly, and even show the interlocutor why he’s wrong.

        One question that I was interested in getting a reply to was: How does the recognition of patterns equivocate on the word “learning”? Take a look at Webster’s definitions of “learning” and “knowledge.” Neither seems to imply the necessity of volition. Why erect proprietary definitions of these terms to arbitrarily exclude non-volitional learning?

        There is a possibility that we’re getting sidetracked here, because I never said machine learning shows that machines have volition. I used it as an example of the danger of concluding, from the fact that we currently lack the technology to do something as daunting as manufacturing volition, that we never will. Indeed, it was in response to Roger Schank’s work in machine learning that John Searle constructed his Chinese Room argument. In it Searle did not argue that Schank’s machines were not learning. His argument was that even if a machine learned to speak Chinese, it would never speak the Chinese it had learned with “intentionality,” given the way computers are being programmed.

        Also, Searle doesn’t argue that there will never be a different means of designing machines that will allow us to manufacture intentionality. His Chinese Room argument is insufficient to reach that conclusion. This article above is basically just a less rigorous delivery of Searle’s 1980 paper, but Ellison does try to go that extra step anyway.

        So the biggest question I was hoping you might try to answer is again: on what basis can you claim to know that volition can never be manufactured?

  2. Hello Eric,
    Thank you for your thoughtful engagement. What a fun topic this is! I think your question, “on what basis can you claim to know that volition can never be manufactured?” is an interesting one. I do not know how one could justify that claim, but notice that it is not a claim I made anywhere in this post. The thesis is that “human beings are not even close to being able to manufacture genuine intelligence.”
    Searle is a great thinker; I’ll happily accept a comparison with him, even if I am on the “less rigorous” end of the comparison. While I have not read his 1980 paper, I certainly have been influenced by his views. I think you have misunderstood the argument that I made, however. The first reason I gave for why actual intelligence as we know it is not even close to being manufactured by people (from which you drew my quote) is that materialism cannot even account for human knowledge, and therefore cannot account for a computer’s (since computers are entirely composed of physical parts). On materialism, all “apparently intelligent” conclusions have a comprehensive explanation in physical cause and effect. Physical cause and effect would be entirely sufficient to account for every thought that any thinking thing has. My argument is: if this were the case, there would be no reason to think that our thoughts are even rational. Therefore, we would not even have justification for thinking that our own thoughts are rational, much less a machine’s made by us. Searle does not make this argument; he is a materialist.
    My second argument draws from Searle’s work, but I haven’t heard Searle make it. Searle’s Chinese Room analogy (to which you compared my argument) is meant to show the distinction between syntax and semantics; namely, that no amount of syntax shuffling is sufficient to account for apprehending the meaning of terms. The argument that I made was: to arrive at a conclusion because it is rational is a different kind of thing from arriving at a conclusion because it was physically caused (even though the conclusion might be consistent with what is rational). Thus, computers (as they presently are) differ in kind, not in degree. For this reason, computers are not close to being instantiations of actual intelligence.
    As to whether they ever will be, or whether it is possible for people to one day manufacture a thing with volition or rational ability, I do not know. That is not something I attempted to address in my post. I’d love to dialogue more about this if you have further thoughts.

    1. It appears that I did not fully understand your argument, though not in the part that you just said I misunderstood. I think I correctly perceived that you were going BEYOND the conclusion that current technology merely has not yet manufactured volition. So I quoted, in my first comment, the sentence of yours that gave me that impression:

      “The problem with this idea, however, is that if so called intelligence were reducible to physical material, it would be so far removed from our understanding of intelligence that we would have no business calling it that—it would not be intelligence in so far as we conceive it as rational ability.”

      Because manufacturing by humans implies a scope of inputs limited to the physical world, it seems that you did indeed, in this sentence, make the claim that we will never be able to manufacture volition. You restated it just now this way:

      “The first reason I gave for why actual intelligence as we know it is not even close to being manufactured by people (from which you drew my quote) is that materialism cannot even account for human knowledge, and therefore, a computer’s (since they are entirely composed of physical parts).”

      If that’s merely a premise, then it’s as if you offered the premise “no material thing can ever have volition” to support the conclusion “a certain instance of a material thing therefore has no volition.” You meant this to be a premise supporting a more modest conclusion, but the premise is itself a more ambitious conclusion than what you identified as your thesis. It contains your modest thesis within it while implying much more.

      But perhaps my assumption that your comparing dogs to numbers was a colloquial way of comparing syntax with semantics was mistaken. Searle makes that distinction to describe how a machine with what he calls “intentionality” processes information qualitatively differently from a machine that lacks it. The way you use the words “volition,” “rationality,” and “the ability to ascribe value” sure sounds like they are all contained in Searle’s chosen word. So how is the rationality you’re describing different from Searle’s intentionality? You differ with him by rejecting his position that the brain fully accounts for intentionality, but do you differ with him on the meaning of intentionality?

      The most interesting thing in this article of yours, then, is that outlandish premise. It upstages your entire article, as if the article’s REAL conclusion is that physical matter is insufficient to account for human knowledge. If that’s true, then I think we would have a strong basis to predict that we’ll never manufacture volition, but given the revolution in neuroscience underway, perhaps we need some more premises to support an alternative account of human knowledge that’s a stronger explanans than the human brain. How is Searle’s position “so far removed from our understanding of intelligence that we would have no business calling it that—it would not be intelligence in so far as we conceive it as rational ability”?

      1. Eric,

        I claimed that mere physical parts are not sufficient to account for rational processes. You are mistaken in thinking that (through this affirmation) I claimed that we will never be able to manufacture volition. Who knows, maybe people will one day be able to manufacture non-material substances. If that is even possible, then simply stating that physical parts are not sufficient for rational ability is not making the claim that you claimed I made: “humans will never manufacture volition.”

        I do think that as long as computers remain merely physical, they will never be capable of actual intelligence. The fact that my premise may be less modest than my conclusion is irrelevant; all arguments from the greater to the lesser rely on less modest premises to support more modest conclusions. The fact that you think it is outlandish to say that “mere physical parts are insufficient for intelligence” is also irrelevant. (I don’t think it is outlandish; many Christians and non-Christians believe this, and the history of philosophy includes a plethora of people who would also affirm it. Anyone who affirms that an immaterial mind is necessary for intelligence would affirm this view.) Someone may think it is outrageous to assume in the premise of an argument that people have libertarian free will. I may think it is outrageous to assume in the premise of an argument that people don’t. Both of these simply reveal the level of justification one personally ascribes to these propositions, and one can dispute the premises accordingly. What is relevant is whether or not the premises are true and whether or not the conclusion follows from them.

        I do not differ from Searle on the definition of intentionality. Intentionality may be necessary for volition, rationality, and an ability to ascribe value, but it is still distinct from them. I do not think that simply being about something else is sufficient for rationality, volition, or an ability to ascribe value.

        Lastly, you asked: “How is Searle’s position ‘so far removed from our understanding of intelligence that we would have no business calling it that—it would not be intelligence in so far as we conceive it as rational ability’?” Well, it’s not! To think that I made this claim from what I said is to confuse ontology and epistemology. Searle’s view of intelligence is very much like mine and many others’. He simply does not think that materialism entails what I think it entails. If I am correct about what materialism entails, and materialism is true, then intelligence would be so far removed from what we presently conceive it as, regardless of Searle’s or my view of it.

      2. Perhaps much of the field of philosophy of mind simply rests on a given time period’s understanding of what matter is. If “people will one day be able to manufacture non-material substances,” wouldn’t that thing that we are now calling “non-material” no longer be a “non-material substance”? In the Amarna period of Egypt, when their knowledge of matter was more limited than ours, solar radiation was reasonably taken to be supernatural. But now, with our more advanced understanding of what constitutes matter, we have a materialistic understanding of solar radiation, and that previous era’s understanding stopped being reasonable.

        Since the set of things identified as “physical material” expands with scientific discovery and the set of things called “non-material substance” is contracting, aren’t you using a term that should be understood to be in flux? When you say volition will never be “reducible to physical material,” aren’t you ruling out the possibility that our understanding of what constitutes “physical material” will one day include things we now think of as “non-material substances”? A materialist who dogmatically asserts that our contemporary understanding of matter is the final analytical framework for explaining the physical universe probably lives only as a straw man.

        The outlandishness of that big premise of yours is relevant because the soundness of your argument depends on its truth value. Given the dynamic state of our understanding of the physical universe, how can anyone reasonably assume that:

        “if so called intelligence were reducible to physical material, it would be so far removed from our understanding of intelligence that we would have no business calling it that—it would not be intelligence in so far as we conceive it as rational ability.”

        Given how little folks in the past knew about the physical universe or even the physiology of the human body, we would expect the history of philosophy to include “a plethora of people that would also affirm this.” But they predated the current revolution in neuroscience, which has robbed us of the use of the word “plethora” to describe the prevalence of this position today.

        I did not say your use of this premise was invalid. We can set up valid arguments with all kinds of unreasonable assumptions. By “outlandish” I meant to say your article seemed lopsided, because your stated thesis is so trivial. That we lack the technical knowledge to manufacture “genuine intelligence” is a truism that no one would argue with. The most interesting thing you wrote is this big premise, a premise that deserves to be treated like a conclusion. What set of premises would support it?

        I think you just gave us one: “If I am correct about what materialism entails, and materialism is true, then intelligence would be so far removed from what we presently conceive it as.” Before that you said Searle “simply does not think that materialism entails what I think it entails.” Now we’re getting somewhere: how is your understanding of materialism different?

        Finally, how did I confuse ontology with epistemology when I asked: “How is Searle’s position ‘so far removed from our understanding of intelligence that we would have no business calling it that—it would not be intelligence in so far as we conceive it as rational ability’?” That question was mostly quoting you, contrasting your position with Searle’s materialist position, which has more or less been my consistent question from my very first comment.

  3. Ultimately, our intelligence will never be replicated by a machine. Perhaps the calculations will be faster or the logic quicker, but the intuitive nature of intelligence is something that cannot be programmed to perfection. We are wondrously made.

    1. Hello Eric,

      I hope you are doing well. Ontology has to do with what is. Epistemology has to do with our knowledge of what is. You asked: “If ‘people will one day be able to manufacture non-material substances,’ wouldn’t that thing that we are now calling ‘non-material’ no longer be a ‘non-material substance’?” The answer to this is no. This is because the actual existence of a substance (its ontological status) does not change because we discover it. It is what it is regardless of whether or not we discover it and regardless of the categories we designate it to.

      Similarly, the fact that people thought solar radiation was a different substance than it in fact is has no effect on the actual substance itself. The fact that people once categorized solar radiation as a non-material substance and now categorize it as a material substance is in the realm of epistemology; it has no effect on the actual ontology of solar radiation itself. My argument has to do with the ontological status of intelligence and physical cause and effect; namely, whether or not an immaterial substance is necessary for rationality. This is independent of the categories people may rightly or wrongly designate the phenomenon to. This is one example of how ontology and epistemology are being confused. The same issue occurs in your second paragraph: “Since the set of things identified as ‘physical material’…” Again, “the set of things identified as” is in the realm of epistemology. How people identify what is or is not a physical substance is irrelevant to the truth value of what a thing actually is. The definitions of a material substance and an immaterial substance are not ambiguous. The realities themselves are not in flux, and it is the realities to which I am referring.

      You are correct in saying that the soundness of my argument depends on the truth value of the premise, but you are mistaken if you think that the truth value of the premise is contingent on the categories people designate substances to. John may categorize intelligence as a physical substance and Jones may categorize intelligence as an immaterial substance. Neither of these determines the truth value of the actual ontology of intelligence. Notice this does not change if you increase the numbers on one side of the debate. Truth is not determined by counting noses, even if they are the noses of physicists.

      In the context of the “dynamic state of our understanding of the physical universe,” you asked: “How can anybody reasonably assume that, ‘if so called intelligence were reducible to physical material, it would be so far removed from our understanding of intelligence that we would have no business calling it that—it would not be intelligence in so far as we conceive it as rational ability’”? This is not relevant to the truth value of the premise (for the same reasons mentioned above), but here is the answer. My justification for this premise is not rooted in an empirical survey; it is rooted in a rational principle. That is, I am not drawing this conclusion by evaluating all the data as it comes in and then inferring my conclusion. If this is what my argument depended on, I could agree with you, Eric. If someone came up to you and said, “How do you know that a dog is a different kind of thing than a number? You have not surveyed all the numbers,” you could rightly reply that you don’t need to survey all the numbers to accurately arrive at the conclusion that a dog is a different kind of thing than a number; you can deduce it on the basis of rational principles.

      John Searle’s view of materialism is the same as mine and the same as what you will find in the dictionary; namely, that all that exists is physical material. Searle and I simply differ on what that entails.

      If you disagree with my premise, that’s fine, but you’re going to have to give some reasons for thinking that it is false. Can you show that rationality is not necessarily different from physical material, or give a reason for thinking this is plausible? Telling me that you think the premise is outlandish, which parts of my article you find interesting or trivial, how our understanding of physics is growing, and how people have wrongly categorized physical things as immaterial in the past: all of that is irrelevant to the truth value of this premise.

      1. I’m not sure it’s even possible for someone to confuse ontology with epistemology. I’ve met folks who did not know what those words mean, but never anyone who got such easy concepts confused. I certainly did not here. I described the compounding human achievement of discovering that what we once considered supernatural was actually a materialistic phenomenon all along. You then interpreted this as my declaring a change in physical matter’s existence. Who would even say that? So I ask again, in slightly tighter wording: If people will one day be able to manufacture what we today call non-material substances, wouldn’t that thing that we are now calling non-material no longer rightly be called a non-material substance?

        Of course solar radiation was never non-material. “Non-material” is just a word we use to describe the set of things we lack a technical understanding of. It’s not that ancient people had a different understanding of the Sun’s substance; it’s that they had NO knowledge of the Sun’s substance. So they posited a supernatural explanation for the Sun’s observable effects. From the 5th century BC to the 17th century AD we did the same thing with the mind.

        All of the things out there that we have yet to discover have always been a part of our physical universe, even though we once predominantly described the things we didn’t understand in otherworldly terms. When Baruch Spinoza figured this out, Plato’s era of dualism went the way of Aristotelian physics. Regarding the mind, the definitions of a material substance and an immaterial substance have always been ambiguous, and for centuries now this ambiguity has been shown to make the word “non-material” nonsensical.

        I made no claim that the truth value of your premise was contingent on the lopsided number of people on Spinoza’s side of the debate. I was simply putting some perspective on YOUR “counting noses” when you tried to describe your position as something other than an outlier by saying that in the history of philosophy there have been “a plethora of people that would also affirm this.” I’m not one to throw around the word “irrelevant.” I simply pointed out that if you’re going to make such claims, then emphasis needs to be placed on the word “history.”

        I used the word “outlandish” not to be pejorative, but to remind a guy who seems so reluctant to fully explain his position where the burden of proof belongs; it’s not on the guy asking clarifying questions in the comments section. Before we even consider proof, how about a full explanation of how you know that it’s impossible for physical processes to cause intentionality? Seemingly central to this claim of yours was this opaque quote:

        “if so called intelligence were reducible to physical material, it would be so far removed from our understanding of intelligence that we would have no business calling it that—it would not be intelligence in so far as we conceive it as rational ability.”

        You tried to answer my question as to how anyone could reasonably make this assumption, but before doing so, you said your answer is not relevant to the truth value of the premise. How is it not relevant? Where else have you even approached presenting a valid basis for your claim to know that it is impossible for physical material to cause intentionality?

        I certainly agree that your answer did not support the truth value of your premise. You said it’s not rooted in any empirical evidence. Such evidence might be helpful, but no, your answer is more of the dog-versus-number analogy. I’m still not sure if this is supposed to be a modified version of the Chinese Room argument or what. So we’re left wondering what two things you are saying are as qualitatively different as numbers and dogs, or even how that makes the cause of intentionality necessarily something other than physical matter. One can name almost any two qualitatively different things in the physical universe and show they are different even if one causes the other. That doesn’t imply that one must be an element outside the set of all things in the universe.

        Finally, why the staccato about how you differ from Searle? You claimed that he simply does not think materialism entails what you think it entails. So I asked how your and his understandings of materialism differ. You said: “John Searle’s view of materialism is the same as mine,” which is a strange answer from someone who just said: “He does not think it entails what I think it entails.” Then you simply repeated what you said before: “Searle and I simply differ on what that entails.” So how do you two differ on what materialism entails?

  4. Although value and volition work together, it is perhaps the ability to evaluate information that is the greater. There are categories of importance that are unapproachable from an information-processing standpoint. Whether true volition or only an approximation of volitional reason can be manufactured, it is necessary to evaluate whether the application of information is appropriate to supporting a volitional movement.

    It’s an important aside here to observe that the human will is hardly monolithic. It is the culmination of a jumble of competing desires, each of which has its own evaluation of each item of information at the disposal of a human intelligence. To work one’s way through all of that is a practice in emotive conditioning and physiological predisposition. Perhaps those things can be mechanically or mathematically reproduced.

    However, it also requires another category of evaluation that operates on a more transcendent level. That is to say that one may will something against all other reason for the sake of a greater cause. This is the evaluation of moral agency. It is something that I assert cannot be reproduced merely physically. My rationale behind the assertion is a biblical basis on the one hand, but also the fact that moral agency must achieve things that are not epistemically helpful to the intelligence. That is to say that doing what is right often involves intentionally trusting in what may momentarily be doubtful on the basis of nothing more than a promise. It is the ability to discern right from wrong.

    This is the point on which Hollywood products featuring artificial intelligence trade. The wrestling of an artificial intelligence with moral agency is typically used in fiction to investigate the human condition and to import a particular worldview for the purpose of indoctrinating the audience, that is, informing their moral agency. It is fiction, however, according to my assertion, to presume that artificial intelligence is capable of moral agency.
