By: admin                Categories: General

Tony Bancroft, director of Disney’s Mulan, will publish Directing for Animation with Focal Press this summer. Directing for Animation integrates Tony’s personal stories, experiences, and tips learned at Disney and other studios with interviews of A-list animation directors including Nick Park, Jennifer Yuh Nelson, John Musker, and more. In anticipation of Directing for Animation, we have decided to give you a sneak peek at some of the interviews captured for the book. Full interviews, tips, and techniques can be found in the forthcoming Directing for Animation.

A pioneer in computer animation filmmaking, Chris Wedge was there at the beginning of what is now a multi-billion-dollar business. With a small band of animation innovators, Wedge co-founded Blue Sky Studios, where he is Vice President of Creative Development. In 1998, Wedge and his Blue Sky crew received an Academy Award for their innovative and character-driven short Bunny. Wedge directed the first Ice Age, which was nominated for an Academy Award for Best Animated Feature and went on to become one of the most successful franchises of all time, spawning three sequels to date. He has since directed 2005’s Robots in between his responsibilities as executive producer on everything Blue Sky produces.

Tony: So, how did you get into the animation industry?

Chris: Well, I was interested in animation from the time I was a kid. You know, I grew up in the boondocks needing to, kind of, make my own fun and I got interested in animation. It was something I had total control over and I could just experiment and fuss with it from the time I was about twelve. I went to film school and studied animation in a film program that didn’t have too much animation going on at the time, but I had a lot of support and I just continued to do the same thing, make my own little movies on my own, and, you know, I would spend sometimes two or three years on one little movie and just keep it alive, keep it going.

Tony: What do you like most about working in animation?

Chris: What I like most about it is that it’s a technique where you can communicate the most complete version of a fantasy from your brain to another person. I just love that you can divorce yourself from the world of physics, the world of what things are supposed to look like and the way things are supposed to move, and just go to places you can’t see any other way. I mean, that just philosophically is what I like about it. You can heighten physics. You can heighten the color. You can stylize characters to exaggerate personalities, and…

Tony: Kind of make a new reality.

Chris: …Yeah. That’s what I like about it.

Tony: When people ask you, “What is a director for animation?” what do you tell them that you do?

Chris: I tell them somebody has to tell everybody what to do! You know, somebody has to be the person everybody can go to.

Tony: So, you see yourself as a creative supervisor then?

Chris: Well, yeah. I mean, it’s my idea that they’re doing. We make our films with three or four hundred people around us, and the films are so complex, there’s so much work to do. It’s a waste of my time to sit down and storyboard, or to sit down and animate, because all those disciplines are so time consuming that I can’t do that anymore, so that what I end up doing is talking the film to life. I talk, and talk, and talk, and talk, and talk, and I make little sketches every once in a while, and every once in a while I can pound out a page of screenplay, but, for the most part, I’m talking with other writers, or I’m talking with storyboard artists, or I’m talking with character designers, or I’m talking with editors, or animators and, you know, just coaxing the film to life by describing it to people.

Tony: I’ve never heard that translation of what you do on a day-to-day basis as a director. I like that. What is the best and worst part of your job as a director?

Chris: Well, you know, I can’t really, I mean, it’s all going to be relative, because, well, the worst part of my job would seem silly to someone that has a real job.

Tony: Like somebody that tars roofs for a living…

Chris: Yeah, I mean, it’s all relative but for me, the best part of directing is when you achieve something that is beyond what you imagined, and the worst part is when something doesn’t quite get to where you wanted it to be. That’s all it is. It’s creative.



By: admin                Categories: Animation, General

The following is an excerpt from Designing Sound for Animation, 2e by Robin Beauchamp. This nuts-and-bolts guide to sound design for animation explains the theory and workings behind sound for image, and provides an overview of the stems and production path to help you create your soundtrack. Here, Robin gives you some tips on recording dialogue and some of its common issues.

Whether recorded as scratch, final dialogue, or ADR, there are many objective criteria for evaluating recorded dialogue. It is the responsibility of the dialogue mixer to ensure that the dialogue is recorded to the highest possible standard. The following is a list of common issues associated with recorded dialogue.

Sibilance — Words that begin with s, z, ch, ph, sh, and th all produce a hissing sound that, if emphasized, can detract from a reading. Experienced voice talents are quick to move off these sounds to minimize sibilance. For example, instead of saying “sssssnake,” they would say “snaaaake,” maintaining the length while minimizing the offending sibilance. Moving the microphone slightly above or to the side of the talent’s mouth (off-axis) will also reduce sibilance. In rare cases, sibilance can be a useful tool for character development. Such was the case with Sterling Holloway’s hissing voice treatment of Kaa, the python in the Walt Disney feature The Jungle Book (1967).

Figure 3.9 Bass Roll-Off

Peak Distortion — A granular or pixelated sound caused by improper gain staging of the microphone pre-amp or by overloading the microphone with too much SPL (sound pressure level). To correct it, adjust the gain staging on the microphone pre-amps, place a pad on the microphone, or move the source further from the microphone; if using a condenser microphone, consider a dynamic microphone instead.

Plosives (Wind Distortion) — Words that begin with the letters b, p, k, d, t, and g produce a rapid release of air pressure that can cause the diaphragm to pop or distort. Plosives can be reduced or prevented with off-axis microphone placement or through the use of a pop-filter. Some takes can be improved by applying a high-pass filter set below the fundamental frequency of the dialogue to reduce plosives.
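The filtering tip above can be sketched in code. The following is a hypothetical illustration, not from the book: it assumes a Python environment with NumPy and SciPy, and the 80 Hz cutoff is an example value chosen to sit just below a typical speaking fundamental.

```python
# Hypothetical sketch: a high-pass filter set below the dialogue's
# fundamental frequency to tame low-frequency plosive "pops".
import numpy as np
from scipy.signal import butter, sosfilt

def highpass_dialogue(audio, sample_rate, cutoff_hz=80.0, order=4):
    """Attenuate energy below cutoff_hz while leaving the voice intact."""
    sos = butter(order, cutoff_hz, btype="highpass", fs=sample_rate,
                 output="sos")
    return sosfilt(sos, audio)

# Demo: a 30 Hz "plosive thump" mixed with a 220 Hz voiced tone.
sr = 44100
t = np.arange(sr) / sr
plosive = np.sin(2 * np.pi * 30 * t)
voice = np.sin(2 * np.pi * 220 * t)
cleaned = highpass_dialogue(plosive + voice, sr)
```

In this sketch the 30 Hz thump is attenuated by roughly 30 dB while the 220 Hz tone passes essentially untouched; on real dialogue the cutoff should be tuned to the individual voice.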

Nerve-related Problems — Recording in a studio is intimidating for many actors. The sense of permanence and a desire for perfection often produce levels of anxiety that can impact performance. Signs of anxiety include exaggerated breathing, dry mouth, and hurried reading. It is often helpful to show the talent how editing can be used to composite a performance. Once they learn that the final performance can be derived from the best elements of individual takes, they typically relax and take the risks needed to deliver a compelling performance.

Lip and Tongue Clacks — Air conditioning and nerves can cause the actor’s mouth to dry out. This in turn causes the lip and tongue tissue to stick to the inside of the mouth, creating an audible sound when they separate. Always provide water for the talent throughout the session and encourage voice actors to avoid dairy products prior to the session.

Extraneous Sounds — Sounds from computer fans, fluorescent lights, HVAC, and home appliances can often bleed into the recordings and should be addressed prior to each session. In addition, audible cloth and jewelry sounds may be captured due to close placement of the microphone to the talent. It is equally important to listen for unwanted sound when recording dialogue.

Phase Issues — Phase issues arise when the voice reflects off a surface such as a script, music stand, or window and is re-introduced into the microphone. The two signals, offset in time (phase), combine to produce a hollow or synthetic sound. Phase can be controlled by repositioning the microphone and placing sound-absorbing material on the music stand.
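The hollow, synthetic quality described here is comb filtering, and it can be demonstrated numerically. Below is a hypothetical sketch, not from the book, using NumPy; the 1 ms delay is an assumed path difference for a voice bouncing off a nearby surface such as a script.

```python
# Hypothetical sketch: summing a voice with a short-delayed reflection
# notches out frequencies where the two arrivals are out of phase.
import numpy as np

sr = 48000
delay_s = 0.001                     # ~1 ms reflection path difference
t = np.arange(sr) / sr

def combed_peak(freq_hz, delay_s):
    """Peak amplitude of a tone plus its delayed copy: near 2 where the
    arrivals reinforce, near 0 where they cancel."""
    direct = np.sin(2 * np.pi * freq_hz * t)
    reflected = np.sin(2 * np.pi * freq_hz * (t - delay_s))
    return float(np.max(np.abs(direct + reflected)))

notch = combed_peak(500.0, delay_s)   # half-period delay: cancellation
boost = combed_peak(1000.0, delay_s)  # full-period delay: reinforcement
```

Across a full voice spectrum these alternating notches and boosts repeat like the teeth of a comb, which is why repositioning the microphone or absorbing the reflection cures the problem.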

Extreme Variations in Dynamic Range — Variations in volume within a vocal performance contribute greatly to the expressive quality and interpretation. Unfortunately, dialogue performed at lower levels often gets lost in the mix. Equally problematic is dialogue performed at such high levels as to distort the signal at the microphone or pre-amp. A compressor is used to correct issues involving dynamic range.
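As a rough illustration of what that compressor does, here is a minimal static gain-reduction sketch. It is hypothetical and far simpler than a real unit (no attack, release, or soft knee), and the threshold and ratio are arbitrary example values.

```python
# Hypothetical sketch: above the threshold, each dB of input yields only
# 1/ratio dB of output, reining in extreme peaks while quiet passages
# pass through unchanged.
import numpy as np

def compress(audio, threshold_db=-20.0, ratio=4.0):
    eps = 1e-12                                  # avoid log10(0)
    level_db = 20 * np.log10(np.abs(audio) + eps)
    over_db = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over_db * (1.0 - 1.0 / ratio)     # reduce only the excess
    return audio * 10 ** (gain_db / 20.0)

quiet = np.array([0.05])   # ~-26 dBFS: below threshold, left alone
loud = np.array([0.9])     # ~-1 dBFS: pulled down toward the threshold
```

After compression, make-up gain can raise the whole performance so the quiet passages sit above the rest of the mix.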

Handling Noise — Handling noise results when the talent is allowed to hold the microphone. Subtle finger movements against the microphone casing translate to thuddy percussive sounds. The actors should not handle microphones during a dialogue session. Instead, the microphone should be hung in a shock-mounted microphone cradle attached to a quality microphone stand.

Excerpt from Designing Sound for Animation, 2e by Robin Beauchamp © 2013 Taylor and Francis Group. All Rights Reserved. Designing Sound for Animation can be bought on Amazon or from your favorite online retailer.



By: admin                Categories: Animation, Books, General, Interviews

The following is an excerpt from The Animated Life: A Lifetime of tips, tricks, techniques and stories from an animation Legend. In this book, legendary Disney animator Floyd Norman gives you a guided tour through an entire lifetime of techniques, practical hands-on advice, and insight into an entire industry. In this excerpt, you will learn the history of our childhood favorite, 101 Dalmatians.

Note: Because of increasing budget concerns, animation had to be reinvented at the Walt Disney Studios.

After six long years crafting the animated classic Sleeping Beauty, the Walt Disney Studios found itself at a crossroads. Walt’s brother Roy, who oversaw the company’s finances, presented an ultimatum to the studio boss: reduce cost, or the future of animation was questionable.

THE LEARNING CONTINUES A quick sketch of the spotted doggie.

Sleeping Beauty had taken a serious toll on the studio, with its lengthy production schedule and massive staff. Plus, a disappointing opening and the film’s failure to make back its cost didn’t bode well for the future. The market for Mickey, Donald, and Goofy shorts, once the bread and butter of the company, had dwindled as more and more kids watched cartoons on TV. Disney Animation needed to make some serious changes, and make them soon. Fortunately or unfortunately for all of us, the studio was prepared to do just that. First of all, the animation department was cut to half its size. That still left hundreds of workers in Disney’s Ink & Paint department, where the acetate sheets called “cels” were inked and painted by hand. However, that department would soon see a change.

Ub Iwerks was well known as Disney’s technical wizard. Ub ran the process lab, where the optical work and photographic effects were created. Most of us looked at this Disney facility as a “secret research and development lab” where technicians crafted masterful solutions to the studio’s technical challenges. The photocopier had emerged on the scene as an amazing new technology destined to revolutionize business. Could this device revolutionize putting animation drawings onto cels as well? Ub Iwerks decided it could. He deconstructed a Xerox photocopy machine and rebuilt his own. Animation tests were completed and photographed using the new process. When the time was right, the Disney technician screened the results for Walt and it appeared the experiment was successful. Animation cels would no longer be inked by hand. This meant considerable cost savings. Characteristically, Walt Disney gave his approval. “Look into it,” he replied.

WALT DISNEY STUDIOS The 1960s brought changes and a new technology to Disney Animation.

However, the new Xerox process meant changes for art direction as well. Disney’s art directors would have to consider how this new production process affected future motion pictures. The Xerox process lacked the subtlety of hand inking. Characters could no longer be done in “self-line”—with outlines inked in the same colors as the characters themselves to increase their three-dimensionality. Because the Xerox machines could not yet reproduce color, the studio would have to go back to a hard black outline, like that of the 1930s. Could there be a way to incorporate this new process into a film’s design? Art director Ken Anderson was convinced that he had a solution. Working with character designer Tom Oreb and color stylist Walt Peregoy, the team crafted a look that would move animation in an exciting new direction. Inspired by the brilliant work of British cartoonist Ronald Searle, Disney’s new film would feature a more linear design and a richly expressive “thick-thin” outline for the characters. Because the animation drawings were being photographed by the photocopier rather than traced by an inker, the outline could even have a rougher, less smooth texture, with some hint of construction lines. In addition, the color palette by Peregoy would be bold and provocative. This motion picture represented a whole new design approach for Disney.

Downstairs in the Animation Department, things were no less revolutionary, as the studio artists struggled to adapt to the new way of producing animation. The sizable crews of the previous feature film were reduced to a handful of animation artists. The animators would more than double their output, and the clean-up process would slowly evolve into something we eventually called “touch up.” The smaller units were clearly faster, cheaper, and more efficient. In almost no time, these new animation units functioned like a well-oiled machine cranking out reams of footage that would have been unimaginable on the previous feature, Sleeping Beauty.

With smaller crews and a greatly compressed production schedule, 101 Dalmatians was completed in a fraction of the time it took to create Sleeping Beauty. That meant a huge cost savings and a new lease on life for Disney’s Animation Department. Looking back on this remarkable film, I’m reminded of the stories I’ve heard over the years. Many still believe that it was the Xerox process that enabled the creation of multiple spots on the Dalmatians. Though the process did allow us to duplicate multiple drawings of the puppies, the spots on the dogs were still drawn by hand. Clever animation assistants worked out a system that allowed them to keep the multiple spots in the right doggie location. 101 Dalmatians was still very much a hand-drawn feature animated film, although the elimination of the venerable Inking Department and the incredibly talented women who traced the drawings would change animation forever.

101 Dalmatians was a turning point at Disney Animation. Though it seems like ancient history today, the film’s production represented technology’s first impact on the animation process. It changed the way we worked in animation and pushed styling in a bold new direction. Of course, this was only the beginning of the technological shifts in cartoon making. Though digital techniques were still decades away, it was clear that they would one day affect Disney Animation as well.

However, this Disney film also taught us all the importance of remaining flexible and adaptable—open and willing to change and to accept any challenge as an opportunity to become even more inventive and creative. I have little doubt that Walt Disney would have encouraged that.

Excerpt from The Animated Life: A Lifetime of tips, tricks, techniques and stories from an animation Legend by Floyd Norman © 2013 Taylor and Francis Group. All Rights Reserved. The Animated Life can be bought on Amazon or from your favorite online retailer.



By: admin                Categories: General

There are three main factors that contribute to the negative effects the two Visual Sins can have on the audience:

1.) Where is the audience looking? The Visual Sins can’t cause problems if the audience doesn’t look at them. Every shot has a subject and a lot of non-subjects. The audience spends most of its time, or all of its time, looking at the subject. The subject is the actor’s face, the speeding car, the alien creature, the adorable dog, etc. If the Visual Sins have impacted the subject, the audience sees the problem and gets brain strain.

But most of a scene is not the subject. Peripheral objects, backgrounds, unimportant characters, crowds, etc. are all non-subjects that the audience acknowledges but tends to ignore in favor of the subject. Non-subjects can tolerate most of the Visual Sins because the audience is looking elsewhere.

2.) What’s the screen size? The problems caused by the Visual Sins can occur on any size screen, but the problems become more severe as the screen gets larger.

3.) How long is the screen time? Time is critical. The longer the audience looks at the Visual Sins the greater the risk of brain strain. All of the Sins have degrees of strength and may cause instantaneous discomfort or take more time to have a negative effect on the audience. Brief 3D movies like those shown in theme park thrill-rides can get away with using the Visual Sins in ways that would be unsustainable in a feature-length movie. An audience can even tolerate the Visual Sins in a long movie if the Sins’ appearance is brief.

Fortunately, the Visual Sins can be avoided or controlled to create a comfortable 3D viewing situation. The following discussion assumes the 3D is being presented on a 40-foot theater screen.

Sin #1: Divergence

A stereoscopic 3D movie may require the audience’s eyes to diverge. This can be a serious viewing problem and can cause brain strain.

Divergence occurs when the viewer’s eyes turn outward in opposite directions to look at the subject in a scene. In real life, our eyes don’t diverge. Ever. Look at yourself in a mirror and try to simultaneously force your left eye to look at your left ear and your right eye to look at your right ear. It’s impossible to do. Both eyes want to look at the same ear at the same time.

In the real world, both eyes converge on the same object at the same time.

But when watching 3D, our eyes can be forced to diverge or angle outwards in opposite directions to look at an image pair. Divergence can be a problem when it involves the subject of the shot because that’s where the audience is looking.

Consider how our eyes see a stereoscopic image pair for a subject that appears behind the screen. The left eye sees the screen-left image and the right eye sees the screen-right image. Human eyes are separated by about 2.5 inches (the interocular, or IO, distance). If an image pair’s actual measured parallax on the screen surface is 2.5 inches or less, the audience’s eyes will not diverge.

On a 40-foot theater screen with 2K resolution, a 10-pixel parallax will measure 2.5 inches, or about 0.5 percent of the screen width. The 2.5-inch parallax separation forces the audience’s eyes to look in parallel, but that will not cause eyestrain. In real life, we do the same thing when we look at any object more than about 40 feet away.

As the measured parallax widens past 2.5 inches, divergence will occur. The tolerance for subject divergence varies, but most people can watch subject divergence up to about 7.5 inches of measured screen parallax without feeling eyestrain. A 7.5-inch parallax is +30 pixels or about 1.5 percent of the screen’s width.

A parallax separation greater than 7.5 inches is called hyper-divergence. It can be used briefly for extreme subject punctuations but sustained hyper-divergence for a subject can cause eyestrain and headaches. Hyper-divergence can be used successfully for peripheral non-subjects without causing eyestrain because the audience isn’t looking at them directly; it’s watching the subject. Non-subject divergence can add depth that would be difficult to assign to the subject.

Watching hyper-divergence can be aesthetically distracting and visually tiring. It’s like trying to hold a heavy weight. Initially, the weight feels tolerable, but as time passes your muscles fatigue, the weight feels heavier, and eventually you collapse. The same pattern occurs with hyper-divergence: it becomes visually stressful.

Hyper-divergence is less likely to occur on television screens. A 60-inch (measured diagonally) consumer HD 2K television has an actual measured screen width of approximately 52 inches. A parallax of +92 pixels (4.75 percent of the screen width) measures about 2.5 inches. Any background object with a +92 pixel parallax places that object at infinity, and will not cause divergence.

A pixel parallax up to +280, or 14.25 percent, is theoretically tolerable but is unusable in practice because other problems, like ghosting, occur. In practice, a television background object’s parallax of up to +92 pixels is tolerable, won’t cause eyestrain, and is extremely useful directorially. Placing objects farther away than +100 (5.25 percent of the screen width) isn’t necessary.
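The arithmetic behind these figures can be verified with a few lines of code. This is a hypothetical sketch, not from the book; it assumes a 2048-pixel-wide 2K projection and a 1920-pixel-wide HDTV picture, consistent with the numbers in the text.

```python
# Hypothetical sketch: converting a positive pixel parallax into the
# measured on-screen separation of the left/right image pair.
def parallax_inches(parallax_px, screen_width_in, resolution_px):
    return parallax_px * screen_width_in / resolution_px

# 40-foot (480-inch) theater screen at 2K (2048 pixels wide):
theater = parallax_inches(10, 480, 2048)   # ~2.3 in, close to the 2.5-in IO
# 60-inch-diagonal HDTV with a ~52-inch-wide picture at 1920 pixels:
tv = parallax_inches(92, 52, 1920)         # ~2.5 in: background at "infinity"
```

Separations near the 2.5-inch interocular distance keep the eyes parallel; beyond that the eyes must diverge, and past roughly 7.5 inches the hyper-divergence problems described above begin.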

Divergence’s eyestrain is actually due to a combination of screen-measured parallax and the viewer’s distance from the screen. See Appendix C for a full explanation.

Hyper-divergence can cause another problem for the audience. If an object’s image pair is too far apart, the audience won’t be able to fuse them into a single 3D image. Even when wearing 3D glasses, the image pair appears as two identical objects rather than a single, fused stereoscopic image. The non-fused image pair visually disconnects the stereoscopic depth and the 3D illusion collapses.

Sin #2: Ghosting

Ghosting (sometimes called cross-talk) appears because most 3D viewing systems cannot completely separate the left and right eye images of the stereoscopic pair. Each eye gets some “contamination” and sees a faint “ghost” of the image meant for the other eye. Ghosting is most visible in high contrast image pairs with a large parallax.

Put on your 3D glasses and look at these photos. Moon #1’s stereoscopic pair shows severe ghosting because it has high contrast and a large parallax. Even with your 3D glasses on, you can still see two moons instead of one. The ghosting is less noticeable in Moon #2 because there is less parallax. Moon #3’s ghosting has been eliminated by completely removing the parallax, but it has lost its depth.

Lowering the tonal contrast between Moon #4 and the background reduces the ghosting. Moon #5 uses a glow to decrease the contrast and minimize the ghosting.

Ghosting can be reduced by art direction and lighting. Avoiding high tonal contrast in sets, locations, set decoration, and costumes can reduce the problem. A fill light can reduce the tonal contrast and add light to deep shadows to avoid the ghosting.

Single person 3D viewing systems, like those pictured here, eliminate ghosting because their mechanics completely isolate the image for each eye.

Excerpt from 3D Storytelling: How Stereoscopic 3D Works and How to Use It by Bruce Block and Philip Captain 3D McNally © 2013 Taylor and Francis Group. All Rights Reserved.



By: Elyse                Categories: Animation, Books, General, Inspiration, Interviews

The following is an excerpt from Digital Art Masters: Volume 5. In this volume, you will meet some of the finest 2D and 3D artists working in the industry today and discover how they create some of the most innovative digital art in the world. Here, Mariusz Kozik shows you step-by-step how he created Charge of the Cuirassiers.

Software Used: Photoshop
Job Title: Concept Artist / Illustrator

Using the Smudge tool almost as a Paintbrush

After many years of working at the easel, I found it hard to switch to an electronic medium. My experience of working with physical materials certainly left its mark and a set of habits. Learning digital methods was not straightforward. I am still not entirely at ease with Photoshop, and for that reason I will not write much about the techniques of working with the program, but rather will focus on the painting itself.

Fig 1

When I first took up digital art I spent quite a long time looking for a way to easily manipulate a pen on a slippery surface and to discard any “computer stiffness”. I had to get used to drawing on a tablet whilst looking at a monitor, devoid of the ability to touch the canvas and feel the paint.

Fig 2

I found the Smudge tool (Fig.01) bore the closest resemblance to working with real paint. I like to start sketches with some black spots on a white background, which I then process with the Smudge tool. If, at any point, there is a need to add a different shade, I apply these as grays using the Dodge and Burn tools. I advise only using shades of gray when sketching, as using color causes strange gradient effects. In this case it is better to use brushes that leave texture; this will save time when it comes to adding textures later. For this piece, this method helped me to quickly build the outline of the cuirassier’s sheepskin saddle (a regular part of their equipment). I used the basic brushes: Linden Leaves for the Dodge tool and Maple Leaves for the Burn tool (see Fig.03).

Fig 3

Battle Scenes

Ever since I sat down at a tablet four years ago, I have been creating battle scenes. The most important issue for me is compliance with historical accuracy. However, I create the first sketches, composition and color from my imagination. Towards the end of the sketching process I start using instructional materials and references, through which I can develop the historical details. Creating battle scenes in art is a hard and laborious task. I have the advantage of a good knowledge of history and an extensive knowledge of weapons, uniforms, military tactics, etc.


The Charge of the Cuirassiers during the Battle of Waterloo was fought on very wet ground. After heavy rain, the earth and grass were saturated with water. I thought that light shining through splashes caused by the horses’ hooves would make a very interesting compositional element in the image. Black horses with streaks of light illuminating the drops and mud could create a very interesting rhythm in the composition. Obtaining strong contrasts would also heighten the dynamics of the frantic cavalry attack.

I had wanted to use a similar effect in other artworks, as demonstrated by some of my unfinished drafts. Celtic chariots scampering through the snow was the theme of Fig.02. With Charge of the Cuirassiers I was able to realize this idea in another environment.

Fig 4

Dynamism & Movement: Diversity in Unity; Unity in Diversity

One very important factor in a scene of charging cavalry is dynamism. A monotonous group of uniformed cavalry in straight and smart colonne serre formation can cause a lot of problems when it comes to achieving the right perspective. It is important to remember that no two cavalrymen should be moving in the same way. Their movement needs to be aimed in different directions, preferably centrifugally, as this helps to strengthen their dynamism.

A convergent perspective can help a lot. When the central characters are approaching from the front, the characters at the sides need to be in a three-quarter setup (Fig.03). This is one of the many possible ways of energizing a composition.

In this painting I tried not to repeat the masses, the size of spots (although it seems to me that the two horses in the middle compete with each other), directions of movement, the layout or indeed the composition of light. I did this according to the principle of: diversity in unity; unity in diversity. Dynamism can also be strengthened by sharp edges, contrast and color.

Color: Less is More

It might seem that the use of many meretricious colors, with maximum saturation, would be a good way of achieving a highly expressive dynamic. Nothing could be further from the truth.

Fig 5a

Getting a message across in an image is like getting a message across in everyday life. If everyone in a crowd starts to scream information, I can guarantee that you will understand nothing. The same is true with colors. Artists use colors to provide information. They create a logical world on a plane based on internal rules that should be close to being harmonious. Confidence and awareness in the use of colors is extremely important. Basic rules have to be remembered, such as the interaction between different colors and the ratio of mass to saturation. By reducing the overall saturation of the artwork we effectively expand the palette of colors, allowing us to emphasize the most important information contained in the image. This also creates a harmony between the colors. A color gains its true quality and value when it is next to a “calm neighbor.”
Fig.04 shows how a red color next to a “calm neighbor” reveals its true power.

The French cuirassiers and their environment imposed a range of colors on the painting. The most active parts were the red elements and the details of the uniforms, completed by the blue areas on the uniforms and the gold inlay of the equipment (Fig.05a).

Red, blue and gold colors in the background; a delicate sky; and dirty, heavy ground with an olive hue made for a good, solid range of colors. All of this, along with the rhythm of the black horses, streaks of light and drops of water, helped to give us an interesting effect.

Fig 5b

Why is the setting in Fig.05b good? Because the largest volume in this view is the background, the colors of which are unsaturated. They do not compete with or disturb the more important elements, the cuirassiers, who should be the focus of our attention. High contrasts, saturated colors, complex details and reflections of inlays ensure that these characters dominate the painting.

Reflections of Light on the Cuirasses

The reflections in the cuirasses and helmets are the most important elements in this scene. All the characters are shown with the sun behind them. Although each item reflects the light from the ground and other objects, the cuirasses and helmets act as mirrors and give the whole painting a high luminosity and clearer form.

Fig 6

Cuirasses and helmets are not flat, and so the reflections from the environment are deformed according to the spherical surfaces of these objects. In Fig.06 it is possible to see how the surrounding objects are mirrored. The sky reflected in the metal cuirass is more saturated and darker, as the cuirass reflects the darker sky behind the viewer. This is achieved using a blue color with a slightly purple hue. These mirrors introduce the luminosity of the reflected world into deep shadows.


It was only at the very end of the work, once I had already established a solid composition, that I developed the details. I arranged the masses, set the light and range of colors, and resolved the sense of movement (Fig.07).

It is often the case that there is no need for excessive detail. If all the above-mentioned elements are carried out correctly then the artwork will already convey its message well. Adding detail will only serve to highlight the most important elements. Unfortunately, in historical illustrations, descriptions of individual items such as uniforms and weapons are more important than artistic matters. This is a source of constant regret for me. The important thing is that detail should not be added where it would be superfluous: in the background, shadows, etc. An excess of small details makes a painting unreadable, heavy and stuffy, especially if it is small-scale. Obviously, when working in a format such as 120 x 80cm, it is necessary to carry out work with sufficient detail. What we see on the monitor can be misleading because we do not see the complete work displayed in 1:1 scale. Well-prepared detail can jump out at the viewer when printed and once we can see the entire work in full.

Fig 7

After analyzing my work I came to the conclusion that it should be improved, or even started again. I feel that I devoted too much attention to the form in detail, and because of this I lost control of the whole composition. Also, at some point I stopped focusing on the lighting, which establishes the form of the painting. This work is not quite in accordance with the principles of nature. It is a well-known principle of good painting that through the manipulation of light and temperature it is possible to create good form, and therefore a good painting. Line drawings can guide you, but ultimately should be subordinate to light and temperature. I don’t feel that I fully achieved the objectives I had in mind when I started this picture. What I can say is that the best way to achieve these objectives became clearer to me as the work progressed.
I hope that these few paragraphs about my work and experience will help some of you to avoid making similar mistakes.

Excerpt from Digital Art Masters: Volume 5 by 3dtotal.com © 2010 Taylor & Francis Group. All Rights Reserved. Digital Art Masters can be purchased online and wherever fine books can be found.



By: Adam Watkins                Categories: Animation

Using a structured and pragmatic approach, Getting Started in 3D with Maya begins with the basic theory behind fundamental 3D modeling techniques in Autodesk Maya, then builds on this knowledge with practical examples and projects that put your new skills to the test. In this excerpt, Adam provides a polygon primer for beginners.


Figure 01: A portrait of the star of the 3D show – the polygon. The polygon is both the star and the smallest of players – it is what all forms (that we see) are made of.

Parts of a Polygon

Polygons have several component parts. These components are labeled in Figure 01 above. Let’s talk about them for a minute:

Face: This is what we intuitively think of as the polygon. It’s the surface that we actually see. While it has a width and height, it has no depth – it’s infinitely thin.

Normal: A polygon’s normal is simply its front. The simplest way to think of this is that every polygon has a front and a back, and the normal (by default) runs perpendicular to the front of the face. This can be a little abstract until it’s seen in action (which we will examine in a little bit), but it becomes very important in situations like game creation because games (in order to draw things faster) don’t draw the backs of polygons. So, if the normal of a polygon is facing the wrong way, the polygon isn’t seen within a game engine. Normals can also be tough to understand because they aren’t shown by default when selecting a component and can be a little obscure to control. Not to worry though; we’ll spend some good time talking about them and especially getting them to face the direction they need to.

Edge: A face is surrounded by edges. These edges define the limitations of the polygon and the face. These edges also exist within 3D space, but actually contain no geometry of their own – they simply help describe the geometry of the polygon. When an edge is moved, rotated, or scaled, it changes the shape of the face and thus the polygon.

Vertex: Each edge has a vertex at either end. Vertices are dimensionless points that exist in 3D space. When a vertex is moved (a single vertex cannot be scaled or rotated), it changes the length of the edges it is a part of, thus changing the shape of the polygons those edges define. Do note that a collection of vertices can be rotated or scaled, which really just moves their locations relative to each other.

UVs: These are really less of a “what” and more of a “where.” They are a coordinate system that allows Maya (or any 3D program) to know how to attach a texture to a collection of polygons. They are not particularly modifiable in 3D space – and really need to be handled in 2D space – most particularly in something we call “texture space.”
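To make these parts concrete, the components above can be expressed as a tiny data structure. The following is an illustrative sketch in plain Python – not Maya’s internal representation, and the function names are my own: vertices are points in 3D space, a face is an ordered loop of vertex indices, and the normal falls out of the winding order via a cross product.

```python
# Hypothetical sketch of polygon components (not Maya's internal format).
# A face is an ordered loop of vertex indices; the normal is derived
# from the winding order via a cross product.

def subtract(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def face_normal(vertices, face):
    """Unnormalized normal of a planar face, from its first three vertices.
    Counter-clockwise winding (seen from the front) gives a normal pointing
    toward the viewer; reversing the winding flips the normal."""
    p0, p1, p2 = (vertices[i] for i in face[:3])
    return cross(subtract(p1, p0), subtract(p2, p0))

# A unit quad in the XY plane, wound counter-clockwise:
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
quad = [0, 1, 2, 3]
print(face_normal(verts, quad))        # (0, 0, 1): the front faces +Z
print(face_normal(verts, quad[::-1]))  # (0, 0, -1): reversed winding flips it
```

Note how the second call illustrates the game-engine problem described above: nothing about the vertex positions changed, only the winding order, yet the face now “fronts” the opposite direction.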

Traits of Polygons

To understand what polygons are and how they work, consider this metaphor. Polygons are like very thin (but very rigidly strong) plates of metal. An individual polygon cannot bend – it is planar. However, multiple polygons can be joined along their edges, and they can indeed bend where they connect. What this means is that if you take six polygons and attach them to each other, so that they share edges and vertices, you get a cube (Fig 02). Increase the number of polygons and the number of places where the shape can bend increases; this means a form can become more and more round as the polygon count increases.


Figure 02: Increasing polygon count increases curve possibilities.

But notice that even the seemingly smooth sphere on the far right of Figure 02 is still made of non-bending polygons. Check out the close-up of that sphere shown in Figure 03 below – see the edges of those polygons?


Figure 03: Close-up of a smooth sphere and the still non-smooth, rigid polygons.


So what does this mean for us? Well, polygons are not only the building blocks of shapes but also the building blocks of the data set that the computer must keep track of for any shape or scene. Especially in situations like games, this data set can be hugely important when considering frame rates (the rate, in frames per second, at which the video card is able to display the information of a scene). Too many polygons and the computer simply can’t process them, and the video card can’t draw them fast enough to allow for any sort of meaningful game-play.

Now, to be fair, polycount (the number of polygons in a scene) is rarely the most limiting factor of game-play. Textures and dynamic shadows usually have a bigger influence on that with today’s hardware. But get too many polys and even the most robust systems can grind to a halt, both in games and inside of Maya as the scene is being manipulated.

Thus, the age-old dilemma – and the craft of good 3D – is to use as many polygons as are needed to describe a form, but no more. How many is too many? The answer is tough and really a moving target. Too many for my machine as I’m writing this may be different for your machine when you read this. Not long ago, a scene with a million polys was way too many to work with; today that’s almost a trivial amount.

So the answer is: it depends. I know, terribly unsatisfying, but along the way in our tutorials (found in the book) we will always keep an eye on efficient use of polys, so that we can ensure a project that is most useful on the most machines.

This is an excerpt from Getting Started in 3D with Maya. Getting Started in 3D with Maya can be purchased online and wherever fine books can be found.

Adam Watkins

Adam Watkins is Associate Professor, 3D Animation, School of Interactive Media & Design at the University of the Incarnate Word. He is currently on a research sabbatical at the Los Alamos National Laboratory in New Mexico, where he is part of the VISIBLE effort creating virtual simulation games for use in non-proliferation exercises. Watkins has headed the 3D Animation program for over ten years and is the author of several books and over 100 articles on 3D Animation. His students are the winners of multiple national and international animation awards and festivals.



By: admin                Categories: Animation

The following is an excerpt from Chris Georgenes’ Pushing Pixels. Pushing Pixels is the real-world guide to developing dynamic and fun content from conception to deployment. Here, Chris, a renowned Flash expert, demonstrates the importance of designing exciting and interesting backgrounds.

I’ve always treated my backgrounds as another character. Backgrounds should have a unique style and personality that complements the characters. Backgrounds can also help provide a unique look and feel for the entire animation. For these reasons I typically put as much thought into the background design as I do into the character design.

The entire background image for this project was drawn using Flash’s vector drawing tools. This allowed me to scale it to any size I needed without a loss in quality. I kept the image relatively simple by using the Rectangle tool and fill colors without outlines. Ultimately I wanted the background to provide some visual contrast with the character; as it was, the character might get a little lost in all the flat, bright colors. Professor Needs will eventually be placed between the chair and the desk, and it would be nice to have some color contrast between him and his surroundings.

Here’s an exploded view of how the background layers were built. The desk is in its own layer above the chair and the rest of the office is on the bottom layer. The Professor Needs character symbol will be added to a fourth layer in between the desk and the chair. I only drew the back chair support because we never see any more detail than this.

The next step was to add texture, and to do this meant bringing the image into Adobe Photoshop. From Flash I exported the background to PNG format by going to File > Export Image.

Select Full Document Size from the Include drop-down menu to crop the image to the stage size. The Colors drop-down provides 8-, 24- and 32-bit options. The Background setting provides the choice between an opaque or transparent background, and the Smooth checkbox will antialias the image if checked.

Open the exported PNG file in Adobe Photoshop and then duplicate the Background layer by pressing Command + J. With the duplicate layer selected, go to Filter > Artistic > Smudge Stick.

Set the Stroke Length to 0, move the Highlight Area to a value of 5, and set the Intensity to 1. Click OK.

Set the Blend Mode of this layer to Overlay so it blends with the original layer below it. Overlay blends in a way that makes light areas lighter and dark areas darker.

The result of the blending is a little too saturated. Lower the Opacity of the top layer to 50% to soften the overall strength of the Overlay blend mode. As you can see, the Smudge Stick filter combined with the Overlay blend mode and transparency creates a subtle texture, as if the image was drawn on rough paper. For this project, though, the effect was a little too subtle.
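For readers curious about what Overlay is actually doing under the hood, here is a hedged sketch of the standard per-channel Overlay formula combined with layer opacity, on values normalized to the 0.0–1.0 range. This is the commonly published blend-mode math, not necessarily Photoshop’s exact internals.

```python
# Standard per-channel Overlay blend (values normalized to 0.0-1.0).
# Illustrative sketch only; Photoshop's exact internals may differ.

def overlay(bottom, top):
    """Darkens where the bottom layer is dark, lightens where it is light."""
    if bottom < 0.5:
        return 2.0 * bottom * top                     # multiply-like region
    return 1.0 - 2.0 * (1.0 - bottom) * (1.0 - top)   # screen-like region

def with_opacity(bottom, blended, opacity):
    """Mix the blended result back toward the untouched bottom layer."""
    return bottom * (1.0 - opacity) + blended * opacity

print(round(overlay(0.2, 0.3), 2))                        # 0.12: dark area pushed darker
print(round(overlay(0.8, 0.7), 2))                        # 0.88: light area pushed lighter
print(round(with_opacity(0.2, overlay(0.2, 0.3), 0.5), 2))  # 0.16: softened at 50% opacity
```

The third line shows why dropping the layer opacity to 50% “tones the filter down”: the result is simply averaged back toward the original pixel.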

It’s time to add some real texture to the background. These are two texture images from my collection. Similar textures can be found online, but you can easily create them yourself by crumpling up paper, taking a photo of it and bringing the photo into Photoshop to add contrast and possibly some color. You don’t even need a professional DSLR camera to shoot the image, as most smartphones come with quality cameras built in. I recommend shooting the textured images outside on a sunny day, as the sun provides a perfect source of natural light.

Open the first texture in Photoshop and paste it into the background file in a layer above the background layers. With the texture layer selected, apply the Divide blend mode. The Divide blend mode simply divides the color of the pixel values between the image in the layer and the image in the layer below it. The end result blends the texture image with the background image as seen below.
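The Divide math can be sketched in a few lines. This is the commonly published per-channel formula (base divided by blend, clamped at white) on normalized values, offered as an illustration rather than Photoshop’s exact implementation.

```python
# Standard per-channel Divide blend (values normalized to 0.0-1.0):
# the bottom (base) value is divided by the top (blend) value and
# clamped to 1.0. Illustrative sketch; not Photoshop's exact internals.

def divide(bottom, top):
    if top == 0.0:
        return 1.0                 # avoid division by zero; brightest result
    return min(1.0, bottom / top)

# A white texture pixel leaves the base unchanged, while a darker texture
# pixel brightens it -- which is how Divide lifts the paper texture's
# shading into the artwork.
print(divide(0.5, 1.0))               # 0.5: white texture changes nothing
print(round(divide(0.5, 0.8), 3))     # 0.625: darker texture brightens the base
```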

The background is starting to take shape. We have a pretty nice texture added to it but for me it’s just a little bit too distracting due to its contrast. The easiest way to remove some of the contrast of the texture is to lower the opacity of its layer. Using the Opacity slider I adjusted the amount of opacity to about 30%. Your texture and amount of opacity may differ based on your design preference.

Adding a single texture may be plenty in most cases, but for this project I added a second texture for an even richer look to the background. After opening the second texture image, copy and paste it into the background file in a layer above the background layers. This time select Color Burn from the blend mode drop-down menu. Color Burn darkens the bottom layer based on the top layer: it inverts the bottom layer’s values, divides them by the top layer’s values, and inverts the result. The darker the bottom layer, the stronger the burn. This produces a very dark and rich effect.
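As with the other modes, the Color Burn formula is easy to sketch. This is the commonly published per-channel version on normalized values (invert, divide, invert back, clamp at black), shown as an illustration rather than Photoshop’s exact implementation.

```python
# Standard per-channel Color Burn blend (values normalized to 0.0-1.0):
# invert the bottom value, divide by the top value, invert back, and
# clamp at black. Illustrative sketch; not Photoshop's exact internals.

def color_burn(bottom, top):
    if top == 0.0:
        return 0.0                                # burning with black yields black
    return max(0.0, 1.0 - (1.0 - bottom) / top)

print(color_burn(0.5, 1.0))               # 0.5: white texture changes nothing
print(round(color_burn(0.5, 0.8), 3))     # 0.375: darker texture burns toward black
print(color_burn(0.2, 0.5))               # 0.0: already-dark areas clip to black
```

The clamp at black in the last line is what gives Color Burn its characteristic dense shadows, and why the effect usually needs its layer opacity pulled back afterward.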

Color Burn produced a very dark and rich effect with the texture. Adjusting the opacity of the layer helped balance the texture evenly throughout the image. Now the background has a rich texture, giving it a lot more character and interest. The rich textures will provide a great backdrop to the vector character when brought back into Flash. Save the image as a PNG from Photoshop so it can be imported into Flash.

Excerpt from Pushing Pixels: Secret Weapons for the Modern Flash Animator by Chris Georgenes. © 2012 Taylor & Francis Group. All Rights Reserved. Pushing Pixels can be purchased online and wherever fine books can be found.



By: admin                Categories: General

In his new book Set the Action! Creating Backgrounds for Compelling Storytelling in Animation, Comics, and Games, Elvin A. Hernandez discusses designing backgrounds that make character and story development more dynamic and realistic.

In the last video installment, Elvin examines helpful tools, line weight, image blocking, various mediums and their relevant image ratios, and more.  Feel free to watch the other videos in this series.

Part 1: Thumbnails & Reference Material
Part 2: Using Two-Point Perspective
Part 3: Rules of Illusion
Part 4: Implied Realism

Thank you for checking out Elvin’s instructions. If you found this information helpful, Set the Action! Creating Backgrounds for Compelling Storytelling in Animation, Comics, and Games is in stores now!
