Although much in animation can be communicated entirely through action - think of the pantomime-based performances of Charlie Chaplin's tramp character of silent picture fame, or Mr Bean - there are times when dialogue is the most efficient means of expressing the desires, needs and thoughts of a character in order to progress the storyline. Dialogue can be as profound as a speech that changes the lives of other characters in the plot, or as mundane as a character muttering to itself in a manner that fleshes out its personality, making it more believable to the audience.
Choosing the right voice is vital. Much of a character and its personality
traits can be quickly established by the performance of the actor
behind the drawings, thereby taking a huge load off the animator.
If the real-life actor who is supplying the voice to your drawings
understands the part, they can very often make significant contributions
to a scene through ad libs and asides that are always 'in character'.
If you have given your character something to do during the delivery
of their dialogue, you must inform the voice talent. If your character
is doing some action that requires effort, for example, that physical
strain should be reflected in the delivery of the line.
Just as the designs for any ensemble of animated characters should
look distinctive, so should their voices. Heavy, lightweight, male,
female, husky, smooth or accented voices are some of the dialogue
textures that need to be considered when thinking about animated
characters. Using professional talent who can tune and time their
performance to the animator's requirements usually pays dividends.
It is immensely inspiring to animate to well-acted and well-delivered
dialogue. It is interesting that if you ask practising animators
what they actually do, most will describe themselves as actors
whose on-camera performance is realised through their craft.
Unfortunately drawings, clay puppets
and computer meshes don't talk, so when our synthesised characters are required to say
something, their dialogue has to be recorded and analysed first
before we can begin to animate them speaking. Lip synchronisation or 'lip-sync' is the technique of
moving a mouth on an animated character in such a way that it appears
to speak in synchronism with the sound track. So how is this done?
Still in use today is a method of analysing sound frame by frame
which dates from the genesis of sound cartoons themselves during
the late 1920s. Traditionally, this involved transferring the dialogue
tracks for animated films onto sprocketed optical sound film and, from the 1950s, onto sprocketed magnetic film. The sprocket
holes on this sound film exactly match those of motion picture
film, enabling sound and image to be mechanically locked together
on editing and sound mixing machines.
A 'gang synchroniser' was used to locate individual components of
the dialogue track with great precision. This device consists of
a large sprocketed wheel over which the magnetic film can be threaded.
The sound film is driven by hand back and forth over a magnetic
pick-up head until each part of a word can be identified. This process
is called 'track reading'. The dialogue track is analysed and the
information is charted up onto camera exposure sheets, sometimes
called 'dope sheets' or 'camera charts', as a guide for the animator.
Dialogue can now be accurately analysed using digital sound tools
such as 'SoundEdit16' or 'Audacity', which allow you to 'scrub' back and forth over
a graphical depiction of a sound wave. When using a digital tool to do your track reading, it's vital that the frame rate or tempo is set to 25 fps (frames per second), otherwise your soundtrack may not synchronise with your animation.
The timeline of 'Flash' showing a sound waveform, individual frames and the 25 frames per second setting.
Dialogue is charted up in the sound column of the dope sheet. Each
dope sheet represents 100 frames of animation or 4 seconds of screen
time. Exposure sheets have frame numbers printed down one side making
it possible to locate any sound, piece of dialogue, music beat or
drawing against a frame number. This means that when the animation
is eventually photographed onto motion picture film, it will exactly
synchronise with the soundtrack.
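The arithmetic behind this frame-to-sheet bookkeeping is simple and worth internalising. Here is a minimal sketch in Python (the function names are mine, for illustration only), assuming the 25 fps rate and 100-frames-per-sheet layout described above:

```python
FPS = 25                 # PAL frame rate used throughout these notes
FRAMES_PER_SHEET = 100   # one dope sheet = 100 frames = 4 seconds

def time_to_frame(seconds):
    """Convert a time in the soundtrack to a frame number (1-based)."""
    return int(seconds * FPS) + 1

def locate_on_dope_sheet(frame):
    """Return (sheet number, row on that sheet), both 1-based."""
    sheet = (frame - 1) // FRAMES_PER_SHEET + 1
    row = (frame - 1) % FRAMES_PER_SHEET + 1
    return sheet, row

# A sound accent at 6.2 seconds into the track falls on frame 156,
# which is row 56 of the second dope sheet.
print(time_to_frame(6.2))          # 156
print(locate_on_dope_sheet(156))   # (2, 56)
```

This is exactly the lookup a track reader performs by eye when charting a sound against the printed frame numbers on an exposure sheet.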
Dope sheets and the information charted up on them provide an exact
means of communicating the animator's intent to those further down
the production chain so that everyone in the studio understands
how all the hundreds or thousands of drawings are to come together
and how they are to be photographed under the camera. (See your
'Exposure Sheet' notes for an example of a typical dope sheet).
Dope sheets employ a kind of standardised language and symbology
which is universally understood by animators around the world. Even
computer animators use dope sheets! Get to know and love them.
There is an art to analysing dialogue. Sentences are like a continuous
river of sounds with few obvious breaks. More often than
not, the end of one word's sound flows directly into the next. It
is our understanding of the rules of language that gives us the
key to unlock the puzzle and to resolve each individual word.
English is not a phonetic language and part of the art of good lip-sync
is the ability to interpret the sounds (phonetics) you are hearing
rather than attempting to animate each letter of a word. For example,
the word 'there' consists of five letters yet requires only two
mouth shapes to animate, the 'th' sound and the 'air' sound. The
word 'I' is a single letter in its written form but also requires
two mouth positions, 'Ah' and 'ee'. Accents can also determine which
mouth shapes you choose. It's actually easier to chart up dialogue
in a foreign language, even though we can't understand it.
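A track-reading chart is, in essence, a lookup from sounds to mouth shapes. The sketch below illustrates the idea in Python; the phoneme labels and shape names are my own invention, not a standard set, and real charts vary by studio, character design and accent:

```python
# Illustrative mapping from phonetic sounds to mouth shapes.
MOUTH_SHAPES = {
    'th': 'tongue-between-teeth',
    'air': 'open-mid',
    'ah': 'open-wide',
    'ee': 'stretched-narrow',
    'm': 'closed',
    'oo': 'pursed',
}

def chart_word(phonemes):
    """Turn the phonetic breakdown of a word into a list of mouth shapes."""
    return [MOUTH_SHAPES[p] for p in phonemes]

# 'there' (five letters) needs only two shapes; 'I' (one letter) also needs two.
print(chart_word(['th', 'air']))   # ['tongue-between-teeth', 'open-mid']
print(chart_word(['ah', 'ee']))    # ['open-wide', 'stretched-narrow']
```

Note that the keys are sounds, not letters - which is the whole point of phonetic track reading.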
The simplest lip-sync involves correctly timing the 'mouth-open'
and 'mouth-closed' positions. Think of the way the Muppets are forced
to talk. Their lips can't deform to make all of the complex mouth
shapes required for true dialogue, but the simple contrast of open
and shut makes for effective lip-sync if reasonably timed. More
convincing lip-sync requires about 8 to 10 mouths of various shapes.
(See the attached sheet for some typical mouth positions).
As you work through a dialogue passage, it quickly becomes apparent
that the key mouth shapes can be re-cycled in different combinations
over and over again so that we could keep our character talking
for as long as we like. We can use this to advantage to save ourselves
work. If a character's head remains static during a passage of dialogue,
we can simply draw a series of mouths onto a separate cel level
and place these over a drawing of a face without a mouth. Special
care should be taken to design a mouth so that it looks as though
it belongs to the character. Retain the same sort of perspective
view in the mouth as you have chosen for the face to avoid mouths
that look as though they are merely stuck on over the top of the
face. Remember too, that the top set of teeth is fixed to the skull
and it's the bottom teeth and jaw that do the moving.
Sometimes the whole head can be treated as the animating 'lip-sync'
component. This enables you to have a bottom jaw that actually opens
and drops lower and also allows you to work stretch and squash distortions
into the entire face. Rarely does any one mouth position have to
be on screen for fewer than two frames. Animating lip-sync on single
frames usually looks too busy. In-betweens from one mouth shape
to the next are mostly unnecessary in 'limited' animation unless
the character speaks particularly slowly. Therefore the mouth can
snap directly from one of the recognised key mouth shape positions
to the next.
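Because the shapes snap from one to the next with no in-betweens, a mouth exposure column is just each key shape repeated for its duration, with a two-frame minimum hold. A minimal sketch of that expansion (function and data names are mine, for illustration):

```python
MIN_HOLD = 2  # rarely show any one mouth shape for fewer than two frames

def expose(shapes_with_frames):
    """Expand (shape, frame-count) pairs into a per-frame exposure list,
    snapping from shape to shape with no in-betweens and enforcing
    the two-frame minimum hold."""
    column = []
    for shape, frames in shapes_with_frames:
        column.extend([shape] * max(frames, MIN_HOLD))
    return column

# 'th' held 2 frames, 'air' held 4: six frames of mouths, no in-betweens.
print(expose([('th', 2), ('air', 4)]))
# ['th', 'th', 'air', 'air', 'air', 'air']
```

The resulting list reads like the mouth column of a dope sheet: one entry per frame, recycled shapes and all.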
Talking heads can be boring and, without the richness of detail
and texture found in real-life faces, animated ones are even more
so. Gestures can tell us something about the personality of a particular
character and the way it is feeling. Give your character something
to do during the dialogue sequence. The use of hand, arm, body gestures
and facial expressions, in fact involving the whole body in the
delivery of dialogue, makes for something far richer to look at
than just watching the mouth itself move. These gestures may be wild
and extravagant - a jump for joy, large sweeps of the arms - or as
small and subtle as the raising of an eyebrow.
Pointing, banging the table, a shrug of the shoulders, anything
may be useful to emphasise a word in the dialogue or to pick up
a sound accent, which helps give the audience a clue as to what
the character is feeling and absolutely gives the animated character
ownership of the words. The delivery of the dialogue during recording
will often dictate where these accents should fall. Mannerisms help
establish character too: a scowl, a scratch of the ear, or some
uncontrollable twitch or other idiosyncratic behaviour.
Use quick thumbnail sketches to help you develop the key poses that you believe will best express the meaning and emotional content of the words and the way they have been delivered. Broadly phrasing the dialogue into sections where a key pose seems appropriate is a good starting point. Sometimes these visual accents (key poses) might occur just on one word that you want to emphasise. At other times the gesture might flow across an entire sentence.
Disney animator, Frank Thomas, uses rough thumbnail sketches to work out key poses for a dialogue sequence for Baloo in Jungle Book.
Character animators often refer to themselves as actors. All actors must understand what motivates their characters and what kind of emotional context is required for any given scene. More on this later, but suffice to say that you must try and animate from the inside out. That is, to know the inner thoughts and feelings of your character, and to try and express these externally.
When charting up 'dope sheets', always use a soft pencil and keep
an eraser at hand. You'll be making plenty of mistakes to start
with. The best way to begin mapping out a dialogue sequence is to
divide the dialogue into its natural phraseology. Draw a whole
lot of thumbnail sketches in various expressive poses and decide
which ones best relate to what is being said and which might usefully
underpin the way a line of dialogue, or a word, is delivered. Animate
gestures and body language first, then, when you are happy with
the action test, go back and add in the mouth afterwards.
Having arrived at several expressive gestural poses, don't throw
this effort away by having them appear on the screen for too short
a time. Save yourself work by wringing out as much value from these
strategic poses as you can before moving on. Disney rarely stopped
anything moving for long, exploiting a technique his studio developed
called the 'moving hold', in which the characters almost, but never
quite, stopped moving when they fell into a pose. Loose appendages
came to a stop after the main mass of the character had reached
its final position, and before any part of the character stopped entirely,
other parts began to move off again. That's great if you have a
vast studio to back up the production, where each animator has an
assistant and an inbetweener to do a lot of the hack work. You are
a one-person band, so learn the value of the 'hold'.
Unless your character is a particularly loud and overbearing soul,
most lip-sync is best underplayed, except for important accents
and vowel sounds. This is especially true where a film's style has
moved character design closer to realistic human proportions. In
this case minimal mouth movement is usually more successful. Much
lip-sync animation is spoiled not so much by inaccurate interpretation
of the mouth shapes required, but by undue emphasis on the size
and mechanics of the mouth. Been there, done that, to my embarrassment.
The audience often watches the eyes, particularly during close-ups, so emphasis
and accents can be initiated here even before the rest of the
face and mouth is considered. Speak to me with thine eyes - it's
a powerful way of getting a character to communicate inner feelings
without actually saying anything. Even the act of thinking of words to speak can be expressed in the eyes. (See the notes on animating eyes.)
Animated characters need to breathe too, especially where indicated
on the sound track. It's also a good idea to anticipate dialogue
with an open mouth shape that lets the character suck in some air
before forming the first word.
Approaches to lip-sync can be just as varied as the different stylistic
approaches to character design - simple, elaborate, restrained,
exaggerated - busy with teeth and tongue, or just a plain slit.
Every individual animator's approach to lip-sync is different too.
In large studios where more than one animator is in charge of the
same character, extensive notes and drawings will instruct the team
how to work the mouth to keep it looking the same throughout. The
way a mouth might work is very often determined by the design of
the head in character model sheets. Think of the five o'clock shadow on the faces of Homer Simpson or
Fred Flintstone and the way this bit of
design can be pulled off to make the mouth move. Sometimes mouths are
simply hidden behind a wiggling moustache.
The Simpsons, South Park,
Reboot, UPA stuff (Mr Magoo), Charlie Brown (you never see teeth),
the distinctive lip-sync of Nick Park's Creature Comforts and Wallace
and Gromit (since parodied by one of our graduates, Nick Donkin,
in a Yogo commercial) are all based on a stylistic solution that
fits their characters' designs. I'm always amused by the Japanese
approach to lip-sync. A petite young lady will have a tiny mouth
which occupies about .01% of her face, but sometimes it can open
up to become a gross 60% when she gets agitated!
Along with the application of computer technology to nearly every
aspect of animated film production, not only 3D but also in tools
for 2D animation, has come an increasing effort to automate the
process of lip-sync. "Why", software designers and producers are
asking, "can't the computer analyse a sound wave form automatically
and then determine which mouth shapes to use?" There are lip-sync
plug-ins for 3D animation that create a muscle-like structure in
the mouth area of a 3D character which can be made to deform according
to a predetermined library of shapes or 'morph targets'. The children's animated series 'Reboot' uses this technique.
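The idea behind morph targets can be sketched as a weighted blend of vertex positions: each vertex of the neutral mouth is pushed towards the corresponding vertex of one or more target shapes. This is a toy two-dimensional example of my own, not code from any actual plug-in:

```python
def blend_morph_targets(neutral, targets, weights):
    """Blend a neutral mouth 'mesh' (list of (x, y) vertices) towards
    one or more morph targets. Each vertex is offset by the weighted
    sum of (target vertex - neutral vertex)."""
    blended = []
    for i, (x, y) in enumerate(neutral):
        dx = sum(w * (t[i][0] - x) for t, w in zip(targets, weights))
        dy = sum(w * (t[i][1] - y) for t, w in zip(targets, weights))
        blended.append((x + dx, y + dy))
    return blended

# Two-vertex toy mesh: half-weighting an 'open mouth' target
# moves each vertex halfway towards it.
neutral = [(0.0, 0.0), (1.0, 0.0)]
open_mouth = [(0.0, -1.0), (1.0, -1.0)]
print(blend_morph_targets(neutral, [open_mouth], [0.5]))
# [(0.0, -0.5), (1.0, -0.5)]
```

Driving those weights frame by frame from a phonetic track reading is, in principle, all an automated lip-sync plug-in does.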
There are also tools which allow the animator to quickly try out
mouth shapes against a piece of dialogue. Check out 'Magpie'.
Well, blow me down and shut my mouth! Now there is a piece of software which will do the analysis for you and chart up the phonetic breakdown into an electronic dope sheet. You can throw away that old gang synchroniser. It's called dubAnimation.
Look at the way dubAnimation writes up its electronic exposure sheet. Some letters of the cursive writing are extended to indicate the length of that particular phonetic. This is just the way animators used to write up their exposure sheets. What a clever little tool!
When it comes to lipsync, beginners often get overly fussy with their mouth shapes. Experienced character animators usually work out the body gestures first and put in the actual mouths once the acting works. There is usually very little need to inbetween mouth shapes. There is certainly none in the example below. Each shape just pops to the next giving a very snappy look to the face. Besides, our brain does all the inbetweening for us. It is also usually unnecessary to animate on single frames unless the character is talking extremely fast indeed.
This is an example of very limited animation where nothing moves except the mouth. It
took only the 7 mouth shapes below to lip-sync the dialogue "Hello mum! How are you? Nice day isn't it? Hah hah hah"
The sound waveform used in the above example. You can see the phrasing of the words, including the three 'hah, hah, hahs' at the end. The waveforms that are greater in amplitude roughly correspond with the mouth shapes that are more wide open.
The character in this example says "How are ya" rather than "How are you", which would require a different set of mouth shapes. Note the way the 'A' mouth shape hangs after "...ya" until the next line, "Nice day..." It does not look natural to try and return the mouth to a neutral resting position between each line of dialogue.
Shere Khan the tiger from The Jungle Book
Medusa, the villain, from The Rescuers
The above three examples from Disney Studios demonstrate a very rich approach to animation in which the whole face and body are involved in the delivery of dialogue.
This is an example of limited posing
tied to the phrasing of sentences and the accent placed on specific words.
When working with paint-on-glass animation the lip-sync shapes are the same, but because of the technique the mouths are smudged away as other shapes are painted in anew. There is no library of shapes to reuse.
A lip-sync sequence by Julian Chaple using photographic collage and various mouths sourced from magazines.