One way to point out
flaws in a proposal is to describe it as "vague," meaning that the
proposer hasn't really spelled out what he or she wants to do. In those
cases, "vague"
is not a term of approval.
But coming from University
of Aberdeen professor Kees Van
Deemter, it just might be. The subtitle of his 2010 book Not Exactly
is "In Praise of Vaguenes," and Van Deemter suggests vagueness is actually an important part of our world and one
we can't really function very well without. Van Deemter has quite a bit
of interest in the subject, since his area of specialty is artificial
intelligence. For computers to think the way people do, they have to be
able to handle vague concepts and terms, as well as questions and
situations that have more than two possible answers.
The first section covers physical measurements, an area where we
may think precision rules. As Van Deemter points out, though, the scale
at which you measure determines the amount of precision you have
available to you. For most measurements people do in their ordinary
lives, things like rulers, tape measures and yardsticks work just fine
in giving them the precision they need. If something matches the ruler
mark at, say, two feet, three and three-sixteenths inches, then that's
how long or wide or deep it is for just about any everyday use you or I
could think of.
But if someone is doing something that
needs a greater degree of precision, they may be thrown off by something
as small as the actual width of the mark at two feet three and
three-sixteenths inches. Just as someone using a saw needs to take
the width of the saw blade into account when making a cut, precision
measurement needs to take the width of the mark itself into account. Van Deemter uses the
famed "metre bar" as an example. You can find an excerpt from Not
Exactly telling this story here.
Adopted as an international standard in the 19th century, the bar is
a piece of platinum-iridium alloy whose length was intended to be exactly
1/40,000,000 of the Earth's circumference as measured along the
specific meridian that passes through the Pantheon in Paris. Anyone who
wanted to create an exact meter measurement petitioned to have their
measuring device matched to the meter bar, which was kept in a vault in Paris.
Set aside why that meridian should be chosen over others,
and you still have the problem that measurements in the 20th century
showed that the bar was actually off by 0.00005 meters. No problem for
most everyday work, but a big problem for some of the incredibly tiny
distances with which scientists were beginning to work and the precision
such work required. The standard was changed to the wavelengths
of certain kinds of radiation, and then in 1983 to the distance traveled
by light through a vacuum in 1/299,792,458 of a second.
Van
Deemter leaves out the well-known Heisenberg
Uncertainty Principle,
which tells us there is a level of precision we can't reach no matter
how much we refine our equipment. But since he's shown the imprecision
or vagueness that's a part of the universe on a much larger scale than
Heisenberg worked with, he doesn't really need to explore it. Plus, Heisenberg's principle is actually a case of ambiguity, which differs from vagueness. Ambiguity, Van Deemter says, happens when we can't determine which of two or more equally clear cases or situations is true. Heisenberg showed that we can't know both the position and the momentum of a subatomic particle precisely at the same time, so his principle describes ambiguity rather than vagueness, because vagueness means we aren't all that clear about what the cases or situations are.
The next sections get into areas that start to affect Van Deemter's
own work with artificial intelligence -- language and logic. We might
think our language describes things in the real world, which is why we
use it. But our language actually makes a whole host of assumptions in
order to describe things; without the assumptions the words themselves
are vague. What, for example, do we mean when we say something is
"tall?" If we look at that statement, we know we don't have a specific
measurement in mind. A tall person is not the same height as a tall
tree, and a tall tree is not the same height as a tall building. We mean
"comparatively" or "relatively" tall. A six-footer is tall compared to
me, but short compared to a redwood. So then, Van Deemter asks, at what
height does the adjective need to change so that instead of calling
someone "short," we call them "tall?" The answer will depend on the
group of people being measured. Obviously "short" on an NBA team is
different than "short" on a middle-school basketball team.
Because their meanings depend on their surroundings, "tall" and other
words like it are actually vague, rather than precise (Van
Deemter uses the word "crisp"). In order for computers to begin to think
like people think, they have to be able to use those vague words and
the concepts they represent.
Most computers today use a different system, based on classical logic
that has just two values: true and not-true. A two-valued logic system
corresponds nicely with the binary numbering system that can be used to
write computer programs. A computer is told that if a certain thing is
true, then it should take one action. But if it is not true, then it
should take a different action. Each step in a program involves the computer making an either/or choice and taking the step it's told to take. Obviously doing something according to this process would take a long time. Computers can make such decisions so quickly, though, that they can do in seconds what you or I would need hours to finish.
On the other hand, they can't easily do what you and I can do, which is use vague concepts like "tall" in our reasoning. They run into what's called the "sorites paradox," named after the Greek word soros, or "heap" in English. It goes like this: Do you call one grain of wheat (or of sand, or one stone, or one of anything) a heap? Do you call two grains a heap? How about three? Most folks say no to those, and probably for a while longer, but at some point you've got a heap. What is that point? If you stick with the true/not-true or "yes/no" binary kind of logic computers use, you can't name it with any clarity. Nor can you define words like heap, tall, short, fat or others that we use all the time.
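Here is the same style of sketch -- again mine, with an invented cutoff -- showing why yes/no logic can't name the point where grains become a heap without picking some arbitrary number:

```python
# Under yes/no logic, any rule answering "is this a heap?" must flip from
# False to True at some exact grain count. The cutoff below is made up,
# and that arbitrariness is the sorites problem in miniature.

HEAP_CUTOFF = 1000  # why 1000 rather than 999? No principled reason.

def is_heap(grains):
    return grains >= HEAP_CUTOFF

print(is_heap(999))   # False: not a heap
print(is_heap(1000))  # True: one grain later, suddenly a heap
```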
Van Deemter uses a great deal of "symbolic logic" in this chapter to explore alternative systems of logic and how they might or might not handle vagueness. The symbolic part itself takes getting used to, because it uses shorthand for certain expressions the way that math equations use shorthand like "+" to show "is added to" or "-" to show "is subtracted from." Once you get to where you can keep the symbols straight, though, you can follow the ideas. The problem is that he spends a lot of time with those systems before setting them aside and moving towards the idea that seems to him best suited to handling a world that includes vagueness. That idea, which he refers to as "degrees of truth," allows for a logical system that can include context -- like designating someone as tall who is still shorter than an NBA player, because they're among people who are mostly shorter than they are.
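As I understand the "degrees of truth" idea, it trades True/False for a sliding scale, and the scale shifts with the comparison group. Here is a loose sketch of that -- the linear ramp and the group numbers are my own inventions, not Van Deemter's formal system:

```python
# "Tallness" as a degree between 0.0 and 1.0 instead of a yes/no verdict.
# The degree depends on the group you're being compared with; the ramp
# function and the group ranges below are made up purely for illustration.

def tallness(height, group_min, group_max):
    """Degree (0.0 to 1.0) to which `height` counts as tall in this group."""
    if height <= group_min:
        return 0.0
    if height >= group_max:
        return 1.0
    return (height - group_min) / (group_max - group_min)

# The same 74-inch person is "mostly tall" among average adults...
print(tallness(74, group_min=60, group_max=76))  # about 0.88
# ...but "barely tall at all" among NBA players.
print(tallness(74, group_min=72, group_max=88))  # about 0.13
```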
The linguistics section also seems overly extensive and unfocused -- Van Deemter seems to want to show that vagueness is a part of language at its deepest levels and thus underlies most if not all languages. I don't know much about linguistics, but I'm leery of anything that relies this heavily on Noam Chomsky and cites Chomsky's political bushwah as proof of his perceptive abilities. The inclusion of a Tony Blair slam by way of demonstrating a malicious use of vagueness in political discourse seems a little forced as well. Whether or not language is vague seems to rely less on its inherent structure and more on the fact that the world deals in vagueness just as often if not more often than it deals in precision.
These missteps don't harm a wonderful book exploring an amazing subject. For me, mired as I am in my traditional Christian theism, it's absolutely fascinating to see that some of the assumptions that people have made which seem to exclude my way of thinking may not be as warranted as previously believed. We've been told that the universe can be completely and precisely explained by descriptions of its physical processes, leaving no room for God -- or at least, no room for a God who mattered. But we find imprecision at all levels of our measurement. We find yes/no logic leaving great paradoxical gaps in our ability to reason, unless we assume (or "take on faith," as it were) certain things to be true.
Van Deemter doesn't really follow the religious implications of his work, and I have no idea if he'd consider the questions I believe it raises to be legitimate, so I won't put them in his mouth. And vagueness, by itself or combined with other uncertain aspects of the universe's existence, does not make the case that God must exist. A vague universe could be without a God as easily as could a crisp and precise one.
On the other hand, it means that the supposedly open-and-shut case against God may not be wrapped up as tightly as has been thought either. Which is just fine with me.