My new song video, Real Girl, contains many examples of consonance and dissonance, tension and resolution. In my last post, I extracted a phrase from the song and slowed it way down to illustrate how the bass and melody dance, creating and resolving tension in several different ways. Here is the last half of that analysis.
When we last left our heroes, they were on the 4 and b6, quite consonant relative to each other, but still unresolved because the ear remembers where the tonic is. Here is that clip:
Now the melody moves back to the 7. This interval, against the 4, is the dreaded tritone, the devil’s interval, and it’s dissonant indeed.
Then the bass moves up to the 1, lessening the dissonance, and the melody soon joins it, and all is consonant.
But there is still a sense of incompleteness, even though both the bass and melody are smack on the tonic, the most consonant interval of all. What’s up?
The answer is that the ear remembers that the root is still the 4, and we aren’t quite home yet. Getting there requires a cadence, or final resolution. Notice that in this next clip the bass note never moves, but the harmonies and the melody signal that the root has now moved to the 1 and we are home. The bass note has magically changed character.
This melodic passage occurs many times in the song, and it contains a rather dizzying series of tensions and resolutions. My friend Jody Mulgrew, who has an exquisite sense of pitch, experienced actual nausea the first time he heard the song. He told me, “I was wondering how to tell my friend Gary that I didn’t like his new song. Then, before the chorus, it started to sweeten up, and when the song was over I immediately hit the ‘replay’ button. I realized it was just tension and resolution.”
I think my friend was experiencing what I call tonal vertigo. His comment spurred some of my thinking on the nature of harmony, how it may be a byproduct of our orientation software. The “100 girlfriends” section is a roller coaster ride in the tonal gravity field. Here it is in its original form:
Now to slow it way down and take it apart.
The first dissonant melody move is to the 7, a half step down in pitch. The interval against the root is a major seventh, and the harmonic distance is great enough (3×5=15) that the note is quite dissonant. But the bass, alternating between 1 and 5 as so many bass lines do, quickly moves to resolve the dissonance.
Note that there is still an unresolved, unfinished feeling. Even though everything you can hear is beautifully consonant, the ear still remembers that the real root of the chord is the 1. This memory is crucial to tonal music.
The next move creates a different kind of dissonance. This is the tension of reverse polarity.
First the melody moves to the 1. This note is right next to that 5 in the bass, and beautifully harmonious. But there is tension, because it’s a reciprocal note. The way to get from a 5 to a 1 is to divide by 3 — it’s one move to the left on the lattice.
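The multiply-and-divide arithmetic can be made concrete with a tiny sketch in Python. The helper `lattice_ratio` is my own illustration, not part of the post: a note's lattice position is just a count of fifth-legs and third-legs, and its pitch ratio is the matching product of 3s and 5s, folded back into one octave.

```python
from fractions import Fraction

def lattice_ratio(fifths: int, thirds: int) -> Fraction:
    """Pitch ratio of a lattice position, folded into one octave [1, 2).
    `fifths` counts moves on the 3-axis (East = multiply by 3),
    `thirds` counts moves on the 5-axis (North = multiply by 5)."""
    r = Fraction(3) ** fifths * Fraction(5) ** thirds
    while r >= 2:
        r /= 2
    while r < 1:
        r *= 2
    return r

print(lattice_ratio(1, 0))   # the 5: 3/2
print(lattice_ratio(0, 0))   # one move West (divide by 3) lands on the 1: 1
print(lattice_ratio(-1, 0))  # one more move West is the reciprocal 4: 4/3
```

Dividing by 3 is exactly "one move to the left": it takes the 5 (3/2) home to the 1, and the 1 out to the reciprocal 4 (4/3).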
And, in two moves, the melody has covered a lot of harmonic territory, all in the reciprocal, Southwest direction. No wonder Jody felt nausea! It’s an E-ticket ride.
Once again, the bass moves to save the day. The chord changes too — that 4 in the bass is the new root. The melody note magically becomes a minor third, not fully consonant, not fully resolved, but a lot better.
In the next post, the famous tritone! Then full resolution.
Here is my third stop-motion animation of a full song.
Real Girl uses a custom nine-note scale. It occupies the Southeast quadrant of the lattice, the zone of the natural minor, with two added notes — the 7, which allows for a major V chord in the progression, and the 7b5, a blue note that is showcased often in the melody.
This scale contains a sharp dissonance, between the b6 and the 7. I go back and forth between those two notes a lot, with a stop on the 1 in between to help ease the transition.
Watch how the melody and bass chase each other around. In the next few blog posts, I’ll slow this dance down, and show how the polarity flips create tension and resolution. When the melody is below and to the left of the bass, the energy is reciprocal, tense. Then one or the other moves so that the melody is above and to the right; the energy becomes overtonal, and the tension resolves.
Another fun thing to watch is the alternating bass. Roots and fifths are right next to each other on the lattice. The red lens swings like a pendulum throughout the verses.
The 2- is a common melody note in my songs, and in the blues. It goes well with the blue note 7b3 — there is an extremely common melody that goes 7b3, 2-, 1. It’s a darker, more dissonant note than its comma sibling, the 2.
The b7 is dissonant and gorgeous — check out the sequence at the end of this post.
Each note is a compound of three legs on the lattice — two fifths, or a factor of 9, and a major third, a factor of 5. By the logic of the last post, the short leg should predominate, which would make the 2- slightly overtonal and stable, and the b7 slightly reciprocal and unstable.
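To put rough numbers on those compound legs, here is a quick sketch of my own (not from the post): build each ratio from its legs, fold it into the octave, and measure it in cents. It also shows the syntonic comma separating the 2- from its sibling the 2.

```python
from math import log2

def fold(r):
    """Fold a ratio into one octave [1, 2)."""
    while r >= 2:
        r /= 2
    while r < 1:
        r *= 2
    return r

def cents(r):
    """Size of a ratio in cents (1200 per octave)."""
    return 1200 * log2(r)

two_minus = fold(5 / 9)   # major third up, two fifths down -> 10/9
flat_seven = fold(9 / 5)  # two fifths up, major third down -> 9/5

print(round(cents(two_minus), 1))   # 182.4 cents
print(round(cents(flat_seven), 1))  # 1017.6 cents

# The gap between the 2 (9/8) and the 2- (10/9) is the syntonic comma, 81/80.
print(round(cents((9 / 8) / (10 / 9)), 1))  # 21.5 cents
```

The two notes really are mirror images around the octave: 182.4 + 1017.6 = 1200.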
I’m setting up here for a map of the tonal gravity field. I think I can put some numbers on this stuff. Coming soon. I’ll use that new song animation as a basis — it’s full of fleeting dissonances and polarity flips.
The 7 has its mirror twin too, the b2-, at 112 cents. Its ratio is 1/15 (16/15 when folded into the octave).
Here is how they sound:
For me, the pattern continues. The 7 is stable, but less so than the notes we’ve heard so far, and it’s getting dissonant as well, because it’s farther from the center. The b2- is both dissonant and unstable.
These notes each traverse two legs of the lattice, a 3 and a 5. The 7 is two legs “up,” or multiplying, and the b2- is two “down,” or dividing.
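Those mirror-twin positions can be checked numerically; this is a sketch of my own, not part of the post. Multiply the two legs out, fold into the octave, and convert to cents. The two results sum to exactly 1200, one full octave, which is what makes them mirror twins.

```python
from math import log2

def fold(r):
    """Fold a ratio into one octave [1, 2)."""
    while r >= 2:
        r /= 2
    while r < 1:
        r *= 2
    return r

seven = fold(3 * 5)                 # two legs up: 15 -> 15/8
flat_two_minus = fold(1 / (3 * 5))  # two legs down: 1/15 -> 16/15

print(round(1200 * log2(seven), 1))           # 1088.3 cents
print(round(1200 * log2(flat_two_minus), 1))  # 111.7 cents, the "112 cents" above
```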
My friend Scott is an expert river rafter. I went down the American River with him once. We had about five crew. It was a big raft, and well-behaved, so he decided he’d leave the driving to us, sit in the bottom of the boat, and go through one of the rapids with his eyes closed.
He was grinning afterwards. He said that when he closed his eyes the temperature instantly got ten degrees cooler, and everything became three-dimensional.
I’ve thought about his experience a lot.
I think that Scott’s built-in orientation processor, which was already in high gear due to being in the rapids, suddenly lost an input channel (vision). The senses he had to work with were touch and hearing. They jumped up to do the job, and the model of the environment changed character according to what those senses had to offer.
We are constantly building a model of our environment in our minds, as a way of orienting ourselves in it. It’s a vital survival skill, and it takes in data from all the senses. Each sense contributes something different to the model.
I’m posting now from a restaurant, and as I walked here, I paid attention to what each sense does.
Vision gives me a really detailed view of the front surface of everything in my field of view. I have to move my eyes and head to get a wider picture. I can’t see through most things.
Touch gives me the surface texture and slope of the sidewalk, the feel of the breeze on my skin (which is a clue to my speed), and the warmth of the sun, which must be coming from the West at this time of day.
Balance gives me the direction of gravity, which is really important to know, in San Francisco.
I stop and close my eyes, and listen.
I have a lot less information now. A hazy picture starts to form, of what is all around me — above, below, and 360 degrees around. My ears are like dragonfly eyes, seeing in all directions at once.
The longer I listen, the more detail appears. Leaves are rustling above my head. A tree. There is a bird in it. Two birds. A couple walks by, in conversation, and I can tell exactly where they are as they come into earshot behind and above me, pass me on the right a few feet away, and fade away down the hill, below and in front of me. I can tell that the woman on the left is taller than the one on the right.
Somebody is hammering a couple of blocks away to my left, and there is sawing further away. A scooter goes by on a street behind me.
Hearing gives me three-dimensional X-ray vision, and more. I can hear far away, and through objects. I can even feel textures at a distance — the rustling of the leaves is almost tactile.
My hearing is building the big picture, the large-scale, full surround, 3D framework of the environmental model.
I think this is exactly why the organs of balance are located in the ears. If the reference frame of my world model is created by my ears, what happens when I turn my head? The balance input must be fed to the processor right along with the hearing data, or the model will spin and I will have vertigo.
Somewhere in us, we have an audio processor that is capable of doing the following, in a few milliseconds:
1. Start with two complex pressure waves, one for each ear. Each wave consists of hundreds or thousands of separate sound frequencies, made by many different objects, all mixed together into one single waveform.
2. Reverse-engineer the waves, sorting them back into individual sound sources.
3. Deduce from each sound the nature of the source.
4. Build a three-dimensional model of the environment within hearing range, including objects, their locations, movements and physical properties.
I don’t know how much computing power this represents, but I’m impressed. It’s like turning a smoothie back into peaches and strawberries. A quantum computer might be good at it — it could interpret the data in every possible way, all at once, and let the most likely, or lowest energy, scenario pop out.
Something in us, whatever is in that black box, takes in that huge mess of frequencies, sorts them into related bundles, and figures out what is making each bundle, in real time. I suggest that this processor is very, very good at recognizing and analyzing harmonic series.
Vibrating objects don’t just produce one frequency. They produce clusters of waves, with many frequencies that are small, whole number multiples of each other — 2x, 3x, 4x, 5x — the harmonic series. If the input sound contains frequencies of, say, 100, 200, 300 and 400 cycles per second, those waves almost certainly came from the same object.
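A toy version of that sorting step, a deliberately crude sketch of my own (real auditory scene analysis is vastly more subtle): given a jumble of frequencies, collect the ones that sit near whole-number multiples of a candidate fundamental.

```python
def harmonic_group(freqs, fundamental, tolerance=0.02):
    """Return the frequencies in `freqs` that lie within a relative
    `tolerance` of a whole-number multiple of `fundamental`."""
    group = []
    for f in freqs:
        n = round(f / fundamental)
        if n >= 1 and abs(f - n * fundamental) <= tolerance * f:
            group.append(f)
    return group

# A mixed "waveform": a 100 Hz harmonic series plus unrelated partials.
mix = [100, 137, 200, 263, 300, 400, 451]
print(harmonic_group(mix, 100))  # [100, 200, 300, 400]
```

The 100, 200, 300 and 400 cycle-per-second waves clump together as one source; the stray 137, 263 and 451 do not fit the series and are left for some other bundle.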
And then, the harmonic content of a sound contains a huge amount of information about the object that made it. The overtones, and the way they change with time, are what tell us whether we’re hearing a barking dog, or a friend’s voice, or rustling leaves. They can give us information about what the source is made of, how big it is, its surface texture and motion.
This amazing harmonic processor must be built in at a deep level. I imagine any creature that hears can do it to some extent, and many better than we do. Imagine the hearing-model of a bat!
I think harmony feels good for the same reason sugar tastes good. Sugar is vital to our survival. The brain mostly runs on it. We usually have to work hard to get it, extracting small amounts from food, metabolizing it from starches. Our bodies are tuned to seek out sweetness, to find pleasure in it.
We humans have figured out how to concentrate and purify sugar, and the straight stuff tastes mighty good. It’s a focused shot to the pleasure center.
Same with harmony. Normally, our orientation processor gets a diet with a lot of roughage. There are dozens or hundreds of unrelated sound sources to sort out. The environment is full of noise, frequencies that aren’t in nice harmonic relationship to each other. The processor has to work hard to identify what’s making the noise.
When we are presented with music, we are getting a straight shot of undiluted harmonic information — a nutrient that is vital to our survival, in an easier-to-assimilate, more concentrated form than is found in nature.