Brain Bits

If the brain really is a computer, it’s certainly a strange-looking one. Perhaps the marketing execs at IBM and Dell just traditionally avoid the soggy, squishy, wrinkled look.

Artificial Intelligence

But ask virtually any neuroscientist in the rapidly expanding fields of cognitive science and artificial intelligence, and he’ll tell you exactly that: the brain functions just like any digital computer. It’s the rigidly enforced orthodoxy at many of America’s most prestigious universities.

That should set off an urgently blinking yellow light in any Catholic’s theological sensibilities: after all, a computer does no more than slavishly hop through lists of instructions called “programs” — there’s no place for freedom there. If there’s no such thing as freedom, then everything you believe about morality is just a bad joke. And if I am just a machine, does an eternal soul really fit into the picture?

But you don’t have to be Catholic to object. Almost anyone will have the good sense to respond: “Wait a minute, my brain can’t be a computer. It never crashes, and I didn’t have to spend weeks learning how to use it. Besides, if my brain is a computer, does that make me a robot? If my friends and I are just interacting robots, that’s creepy at best and like living in the Matrix at worst.”

So what are the neuroscientists trying to tell us? For starters, if the brain is a thinking computer, then it must be possible to build computers that think — that’s what gets called artificial intelligence.

Someone once defined AI as “the science of how to get machines to do the things they do in the movies.” The billions of research dollars pumped into projects over the last fifty years bear witness that that’s much harder than it sounds. Will we see a thinking computer in our lifetimes? Or is it even possible at all?

A computer, in its broadest definition, is any machine that carries out a sequence of rules and instructions. Most of the ones we see run on silicon and electricity, but that’s just because it happens to work faster than making them out of tinkertoys and rubber bands.

Those rules and instructions form what is called a computer program. The simplest ones print out “Hello, World” on the screen; the most complex allow you to craft, save, and then lose word-processing documents, simulate the weather, or keep a fighter-jet flying straight. No matter what the task, the basic principles are always the same: input, manipulation of data according to fixed instructions, and output.
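
To make that input-process-output skeleton concrete, here is a minimal sketch in Python; the particular rule it follows (reversing the text) is invented purely for illustration. The program takes some input, manipulates it according to one fixed instruction, and prints the output, and it never deviates from the rule it was given.

```python
# A minimal sketch of the basic principle: input, manipulation of data
# according to fixed instructions, and output. The specific instruction
# (reversing the text) is invented purely for illustration.

def fixed_instruction(data: str) -> str:
    """Apply one rigid rule to the data: reverse its characters."""
    return data[::-1]  # the machine never deviates from this rule

if __name__ == "__main__":
    text = input("Type something: ")   # input
    result = fixed_instruction(text)   # manipulation by a fixed rule
    print(result)                      # output
```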

Here’s the trick: following that basic model, scientists have managed to coax behavior out of computers that, it was previously thought, only humans were capable of. It’s commonplace to see computers translating texts between languages, beating grandmasters at chess, buying and selling stocks (with each other), and even diagnosing medical conditions.

If the average desktop computer right now only equals the cognitive processing capacity of an insect’s nervous system, the argument runs, just wait until they hum along faster than the human brain: then you’ll be hard-pressed to claim any difference between human and artificial intelligence.

But are things really that simple? The key concept here is “simulation.” A parrot that repeats the words it hears is imitating, not speaking. Simulating the action of a tornado on your laptop is a lot different from seeing a real tornado outside your window. An actor simulates being a long-dead hero, but that doesn’t bring Julius Caesar back to life. So along those lines, can a computer ever really “think” and “understand,” or is it destined never to achieve more than acting like it?

For there really to be no bottom-line difference between human thinking and computer clicking, a computer needs to be replicating what a mind really does. Is that what’s happening?

Artificial Stupidity

John Searle, a philosopher at UC-Berkeley, asks you to imagine what it would be like to be sitting in a small room with lots of instruction manuals on your desk. Someone slips a piece of paper under the door with some squiggly marks on it. You go to your instruction manuals, look for matching squiggles, and work through elaborate directions (in English, of course) until you get to an instruction that tells you to write some other squiggles on the paper and send it back under the door.

Unbeknownst to you, those first squiggles formed a question written in Chinese, and what you wrote is a coherent answer in Chinese to that question. The person who slipped you the note figures that whoever is inside the room must understand Chinese too.

In this little parable, you’re the computer, the books are the program you follow, the person outside the room is the user, and the paper represents input and output. Note well: the key is that you don’t understand Chinese, and you certainly can’t give him any help on his question about Confucianism, but you act like it. You’re something like the little Einstein figure in Word who acts like he understands what you’re typing, but in reality is just hundreds of lines of computer code getting methodically chewed through to look for certain keystroke patterns. You are both perfect examples of artificial stupidity.
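
For anyone curious how little is really going on inside such a room, here is a toy sketch in Python; the “rule book” entries are invented for illustration only. The program matches the incoming squiggles against a lookup table and hands back whatever reply the table prescribes, producing a fluent-looking answer while understanding none of it.

```python
# A toy "Chinese Room" in code. The rule-book entries below are invented
# for illustration only: the program matches incoming squiggles against a
# lookup table and copies back the prescribed reply. At no point does it
# understand a word of Chinese.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "Fine, thanks."
    "孔子是谁？": "孔子是一位古代哲学家。",  # "Who is Confucius?" -> "An ancient philosopher."
}

def slip_back_under_the_door(squiggles: str) -> str:
    """Look the squiggles up in the manuals and return what they prescribe."""
    return RULE_BOOK.get(squiggles, "请再说一遍。")  # default: "Please say that again."

if __name__ == "__main__":
    note = "孔子是谁？"                      # the question slipped under the door
    print(slip_back_under_the_door(note))   # a fluent-looking answer, zero understanding
```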

Or look at it another way. When you stand with cultured awe in front of a Rembrandt or a Velazquez, your aesthetic experience goes totally beyond the capabilities of any machine. It is something like the difference between seeing a computer printout of the sound frequencies at the various moments of a Beethoven symphony and being transported into rapture on hearing the symphony live at Carnegie Hall. To the computer, those numbers are as meaningless as a table of measurements of silt levels in the Mississippi.

The fatal flaw with saying that the brain is just a computer is that this “computational theory” is entirely materialistic at its foundations.

Materialism claims that nothing exists except matter and that nothing escapes the explanation of science. What does that imply? Molecules and atoms and quarks, after all, slosh and bump around the world following the strictest laws ever concocted: physics. In a machine, they follow exactly the same laws, constrained even further by the shape and rules of the machine. The bottom line is that matter isn’t “free” to act the way we obviously are, so our minds can’t just be brains made out of matter. There has to be something else there.

That means that cognitive scientists have had to bend over backwards to redefine “meaning,” “freedom,” the “self,” and “consciousness,” and in extreme cases they have denied their very existence. In the end, materialism reduces the very highest things in man to the very lowest building blocks of the universe. Man is subjected to the subhuman; the programmer calls himself a program.

Materialism is fake science. In fact, as a theory it’s pure philosophy, and false at that. There is nothing conceivable that could prove materialism’s claim that no spiritual realities exist. Physical science, after all, is the study of the measurable and the observable. To define what is measurable and observable, and especially to claim that there is nothing more than that, one needs to step outside that realm and survey all that is. But materialism denies even the possibility of that.

God and the Matrix

So how can such an implausible and patently false theory have such a hold on the higher-education establishment? Because the real point of the debate goes much deeper than this. Many have never really given up the Enlightenment dream that human reason is the only source of truth, that every human problem is solvable by science, that morality is just a social construct, and that God, if He exists, can be pliably bent to our will.

All that can only be true if the soul doesn’t exist. That’s why the 18th-century thinker La Mettrie took Descartes’ vision of man, later caricatured as “a ghost in a machine,” and pared it down to man as simply “a machine,” the perfectly materialist vision.

If man is a machine, then the natural 21st-century response — in fact, the only philosophically consistent one — is that man runs like a computer. The downside is that everything that the soul explains — love and freedom, the joys and the sorrows of life, responsibility and the hope in the afterlife, the ability to think about the brain in the first place — has to be ruthlessly theorized into non-existence. Like it or not, that’s the inevitable consequence of materialism.

The Church, naturally, has had a far more realistic theory for 2000 years. It is the philosophical option that most researchers today are determined to leave unexplored.

The Catechism (numbers 362-365) teaches that man is at once corporeal and spiritual, a profound unity between the spiritual soul and the material body to which it gives life. The human person is a single nature made in the image of God. On that basis, even the most advanced scientific research on the brain will only ever uncover half of the puzzle.

That means that the matrix which I live in is the Creator’s matrix: where He has called me into being, where I am surrounded by His creation, where I exist because He loves me, and where I never have to go to Me.com to download the latest version of my software.

Thanks be to God: one less password to remember. Then again, if I were a computer, I would have no problem remembering my passwords…

© Copyright 2006 Catholic Exchange

Br. Shane Johnson, of the Legionaries of Christ, studies for the priesthood at Rome’s Regina Apostolorum Pontifical University. He is simultaneously pursuing graduate-level degrees in religion and science and in philosophy of mind. He can be reached at authors@arcol.org.
