Wednesday, July 19, 2017

The human perspective and its limits

A couple more points occurred to me after I hit "publish" on the previous post.  Both of them revolve around subjectivity versus objectivity, and to what extent we might be limited by our human perspective.


In trying to define whether a kind of behavior is simple or complex, I gave two different notions which I claimed were equivalent: how hard it is to describe and how hard it is to build something to copy it.

The first is, in a sense, subjective, because it involves our ability to describe and understand things.  Since we describe things using language, it's tied to what fits well with language.  The second is much more objective.  If I build a chess-playing robot, something with no knowledge of human language or of chess could figure out what it was doing, at least in principle.

One of the most fundamental results in computer science is that there are a number of very simple computing models (stack machines, lambda calculus, combinators, Turing machines, cellular automata, C++ templates ... OK, maybe not always so simple) which are "Turing complete".  That means that any of them can compute any "computable function" (in technical terms, any partial recursive function).  This covers a wide range of problems, from adding numbers to playing chess to finding cute cat videos and beyond.
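To make the "simple model" claim concrete, here's a sketch (mine, not part of the original argument) of Church numerals in Python.  The lambda calculus has nothing but functions and application, yet that's enough to encode numbers and arithmetic; the `church` and `unchurch` helpers are just scaffolding for illustration.

```python
def church(n):
    """Encode a non-negative integer as a Church numeral:
    the function that applies f to x exactly n times."""
    return lambda f: lambda x: x if n == 0 else f(church(n - 1)(f)(x))

def unchurch(c):
    """Decode a Church numeral back to a Python int by counting applications."""
    return c(lambda k: k + 1)(0)

# Addition: (m + n) applications of f is m applications after n applications.
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

print(unchurch(add(church(2))(church(3))))  # 5
```

Nothing here uses Python's own arithmetic to do the addition itself; the `+ 1` appears only when decoding the result back into a familiar form.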

It doesn't matter which model you choose.  Any of them can be used to simulate any of the others.  Even a quantum computer is still computing the same kinds of functions; quantum computing changes how fast certain things can be computed, not what can be computed at all.  The fuss there is about the possibility that a quantum computer could compute certain difficult functions exponentially faster than a non-quantum computer.
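The simulation point can be made concrete with a toy interpreter.  The `run_tm` function and the increment rules below are my own illustrative sketch, not any standard machine: a handful of (state, symbol) → (state, symbol, move) rules is enough to increment a binary number, and the interpreter itself is an ordinary program, so one model of computation is carrying out another.

```python
def run_tm(rules, tape, state="start", accept="halt"):
    """Run a one-tape Turing machine until it reaches the accept state.
    rules maps (state, symbol) -> (new state, written symbol, 'L' or 'R')."""
    cells = dict(enumerate(tape))  # sparse tape; blank cells read as ' '
    pos = 0
    while state != accept:
        sym = cells.get(pos, " ")
        state, write, move = rules[(state, sym)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, " ") for i in range(lo, hi + 1)).strip()

# Increment a binary number: scan right past the digits, then carry leftward.
rules = {
    ("start", "0"): ("start", "0", "R"),
    ("start", "1"): ("start", "1", "R"),
    ("start", " "): ("carry", " ", "L"),
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "0"): ("halt", "1", "R"),
    ("carry", " "): ("halt", "1", "R"),
}

print(run_tm(rules, "1011"))  # 1100
```

This particular machine only increments, but the same interpreter runs any transition table you hand it, which is the whole point of universality.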

Defining a computable function for a problem basically means translating it into mathematical terms, in other words, describing it objectively.  Computability theory says that if you can do that, you can write a program to compute it, essentially building something to perform the task (generally you tell a general-purpose computer to execute the code you wrote, but if you really want to you can build a physical circuit to do what the computer would do).

So the two notions, of describing a task clearly and producing something to perform it, are, provably, equivalent.  There are some technical issues with the notion of complexity here that I'm going to gloss over.  The whole P = NP question revolves around whether every problem whose solutions can be checked quickly can also be solved quickly, but when it comes to deciding whether recognizing faces is harder than walking, I'm going to claim we can leave that aside.

The catch here is that my notion of objectivity -- defining a computable function -- is ultimately based on mathematics, which in turn is based on our notion of what it means to prove something (the links between computing and theorem proving are interesting and deep, but we're already in deep enough as it is).  Proof, in turn, is -- at least historically -- based on how our minds work, and in particular how language works.  Which is what I called "subjective" at the top.

So, is our notion of how hard something is to do mechanically -- my ostensibly "objective" definition -- limited by our modes of reasoning, particularly verbal reasoning, or is verbal/mathematical reasoning a fundamentally powerful way of describing things that we happened to discover because we developed minds capable of apprehending it?  I'd tend to think the latter, but then maybe that's just a human bias.



Second, as to our tendency to think that particularly human things like language and house-building are special, that might not just be arrogance, even if we're not really as special as we'd like to think.  We have a theory of mind, and not just of human minds.  We attribute very human-like motivations to other animals, and I'd argue that in many, maybe most, cases we're right.  Moreover, we also attribute different levels of consciousness to different things (where "things" includes machines, which we also anthropomorphize).

There's a big asymmetry there: we actually experience our own consciousness, and we assume other people share the same level of consciousness, at least under normal circumstances, an assumption that's confirmed as we communicate with each other.  It's entirely natural, then, to see our own intelligence and consciousness, which we see from the inside in the case of ourselves and close up in the case of other people, as particularly rich and vivid.  This is difficult to let go of when trying to study other kinds of mind, but it seems to me it's essential at least to try.

3 comments:

  1. Just yesterday I heard on tv, in a scary discussion of AI, someone suggest that the fact that humans can do the things we do does not mean that we are the highest possible intelligence, only that we are the stupidest bunch who can do the things we do.

    1. And remember, all of us together are stupider than any one of us alone.

    2. As the Cowardly Lion said, "Ain't it da truth, Ain't it da truth!"
