My friend Scott is an expert river rafter. I went down the American River with him once. We had about five crew. It was a big raft, and well-behaved, so he decided he’d leave the driving to us, sit in the bottom of the boat, and go through one of the rapids with his eyes closed.
He was grinning afterwards. He said that when he closed his eyes the temperature instantly got ten degrees cooler, and everything became three-dimensional.
I’ve thought about his experience a lot.
I think that Scott’s built-in orientation processor, which was already in high gear due to being in the rapids, suddenly lost an input channel (vision). The senses he had to work with were touch and hearing. They jumped up to do the job, and the model of the environment changed character according to what those senses had to offer.
We are constantly building a model of our environment in our minds, as a way of orienting ourselves in it. It’s a vital survival skill, and it takes in data from all the senses. Each sense contributes something different to the model.
I’m posting now from a restaurant, and as I walked here, I paid attention to what each sense does.
Vision gives me a really detailed view of the front surface of everything in my field of view. I have to move my eyes and head to get a wider picture. I can’t see through most things.
Touch gives me the surface texture and slope of the sidewalk, the feel of the breeze on my skin (a clue to my speed), and the warmth of the sun, which must be coming from the west at this time of day.
Balance gives me the direction of gravity, which is really important to know, in San Francisco.
I stop and close my eyes, and listen.
I have a lot less information now. A hazy picture starts to form, of what is all around me — above, below, and 360 degrees around. My ears are like dragonfly eyes, seeing in all directions at once.
The longer I listen, the more detail appears. Leaves are rustling above my head. A tree. There is a bird in it. Two birds. A couple walks by, in conversation, and I can tell exactly where they are as they come into earshot behind and above me, pass me on the right a few feet away, and fade away down the hill, below and in front of me. I can tell that the woman on the left is taller than the one on the right.
Somebody is hammering a couple of blocks away to my left, and there is sawing further away. A scooter goes by on a street behind me.
Hearing gives me three-dimensional X-ray vision, and more. I can hear far away, and through objects. I can even feel textures at a distance — the rustling of the leaves is almost tactile.
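That left-right, near-far placement of the passing couple has a well-studied cue the post doesn't name: the tiny difference in arrival time between the two ears. A minimal sketch, assuming a simple straight-line geometry (the `angle_from_itd` helper, the 0.21 m ear spacing, and the example timing are my illustrative assumptions, not anything from the post):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, in air at room temperature
EAR_SPACING = 0.21      # m, rough distance between human ears (assumed)

def angle_from_itd(delta_t):
    """Estimate a source's bearing, in degrees off straight ahead,
    from the difference in arrival time between the two ears."""
    # The far ear hears the sound later by roughly d * sin(angle) / c.
    s = SPEED_OF_SOUND * delta_t / EAR_SPACING
    s = max(-1.0, min(1.0, s))  # clamp against measurement noise
    return math.degrees(math.asin(s))

# A sound arriving 0.3 ms sooner at the right ear sits well off to the right:
print(round(angle_from_itd(0.0003)))  # → 29 (degrees)
```

That fraction of a millisecond is all the timing difference the processor ever gets to work with, which makes the effortless precision of the placement all the more impressive.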
My hearing is building the big picture, the large-scale, full surround, 3D framework of the environmental model.
I think this is exactly why the organs of balance are located in the inner ear, right next to the organs of hearing. If the reference frame of my world model is created by my ears, what happens when I turn my head? The balance input must be fed to the processor right along with the hearing data, or the model will spin and I will have vertigo.
Somewhere in us, we have an audio processor that is capable of doing the following, in a few milliseconds:
- Start with two complex pressure waves, one for each ear. Each wave consists of hundreds or thousands of separate sound frequencies, made by many different objects, all mixed together into one single waveform.
- Reverse-engineer the waves, sorting them back into individual sound sources.
- Deduce from each sound the nature of the source.
- Build a three-dimensional model of the environment within hearing range, including objects, their locations, movements and physical properties.
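Those four steps are far beyond any toy program, but the flavor of the first two — unmixing a waveform into frequencies, then sorting the frequencies into source bundles — can be sketched. Everything below is an illustrative assumption of mine (the tone generator, the peak threshold, the `group_by_fundamental` helper), not a model of what the ear actually does:

```python
import numpy as np

SR = 8000                # sample rate, Hz
t = np.arange(SR) / SR   # one second of time samples

def harmonic_tone(f0, n_harmonics=4):
    """A toy 'object': a fundamental plus whole-number overtones."""
    return sum(np.sin(2 * np.pi * f0 * k * t) / k
               for k in range(1, n_harmonics + 1))

# One pressure wave, two objects mixed together -- what one ear receives.
mixture = harmonic_tone(100.0) + harmonic_tone(130.0)

# Step 1: unmix the waveform into its component frequencies.
spectrum = np.abs(np.fft.rfft(mixture))
freqs = np.fft.rfftfreq(len(mixture), d=1.0 / SR)
peaks = freqs[spectrum > 0.2 * spectrum.max()]

# Step 2: sort the frequencies back into bundles, assuming a bundle's
# members are whole-number multiples of its lowest frequency.
def group_by_fundamental(peaks, tol=2.0):
    sources = {}
    for f in sorted(peaks):
        for f0 in sources:
            ratio = f / f0
            if abs(ratio - round(ratio)) * f0 < tol:
                sources[f0].append(f)
                break
        else:
            sources[f] = [f]  # no bundle fits: start a new one
    return sources

for f0, bundle in group_by_fundamental(peaks).items():
    print(f"object at ~{f0:.0f} Hz: {np.round(bundle).astype(int).tolist()}")
# → object at ~100 Hz: [100, 200, 300, 400]
# → object at ~130 Hz: [130, 260, 390, 520]
```

The real processor works on continuous, noisy input in milliseconds; this only shows that the smoothie is, mathematically, un-blendable when the sources are harmonic.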
I don’t know how much computing power this represents, but I’m impressed. It’s like turning a smoothie back into peaches and strawberries. A quantum computer might be good at it — it could interpret the data in every possible way, all at once, and let the most likely, or lowest energy, scenario pop out.
Something in us, whatever is in that black box, takes in that huge mess of frequencies, sorts them into related bundles, and figures out what is making each bundle, in real time. I suggest that this processor is very, very good at recognizing and analyzing harmonic series.
Vibrating objects don’t just produce one frequency. They produce clusters of waves, with many frequencies that are small whole-number multiples of each other — 2x, 3x, 4x, 5x — the harmonic series. If the input sound contains frequencies of, say, 100, 200, 300, and 400 cycles per second, those waves almost certainly came from the same object.
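That "same object" inference is just arithmetic: a set of frequencies belongs to one harmonic series when they all share a common fundamental. A minimal sketch (the `common_fundamental` helper and the 20 Hz cutoff are assumptions of mine):

```python
from functools import reduce
from math import gcd

def common_fundamental(freqs):
    """Return the implied fundamental (Hz) if every frequency is a
    whole-number multiple of it, else None. Expects whole-Hz values."""
    f0 = reduce(gcd, freqs)  # greatest common divisor of all frequencies
    # A gcd of 1 or 2 Hz is numerical coincidence, not an audible
    # fundamental, so require something plausibly in hearing range.
    return f0 if f0 >= 20 else None

print(common_fundamental([100, 200, 300, 400]))  # → 100: one object
print(common_fundamental([100, 200, 313, 400]))  # → None: 313 doesn't fit
```

Real sounds are noisier than whole-Hz values, of course, which is part of why the built-in processor's tolerance for slop is so remarkable.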
And then, the harmonic content of a sound contains a huge amount of information about the object that made it. The overtones, and the way they change with time, are what tell us whether we’re hearing a barking dog, or a friend’s voice, or rustling leaves. They can give us information about what the source is made of, how big it is, its surface texture and motion.
This amazing harmonic processor must be built in at a deep level. I imagine any creature that hears can do it to some extent, and many better than we do. Imagine the hearing-model of a bat!
I think harmony feels good for the same reason sugar tastes good. Sugar is vital to our survival. The brain mostly runs on it. We usually have to work hard to get it, extracting small amounts from food, metabolizing it from starches. Our bodies are tuned to seek out sweetness, to find pleasure in it.
We humans have figured out how to concentrate and purify sugar, and the straight stuff tastes mighty good. It’s a focused shot to the pleasure center.
Same with harmony. Normally, our orientation processor gets a diet with a lot of roughage. There are dozens or hundreds of unrelated sound sources to sort out. The environment is full of noise, frequencies that aren’t in nice harmonic relationship to each other. The processor has to work hard to identify what’s making the noise.
When we are presented with music, we are getting a straight shot of undiluted harmonic information — a nutrient that is vital to our survival, in an easier-to-assimilate, more concentrated form than is found in nature.
Harmony is ear candy.