My thesis explores the practice-as-research method through the process of making a short 3D animation. The structure of animation filmmaking varies based on factors such as the duration of the film, the story structure, and the preferences of the director. In this research, I introduce my approach to producing a short 3D animation, Come with Me. The story revolves around the relationship between two characters: a mother and her baby, as the mother tries to teach the baby to walk.
This mother and child story has been a recurring theme in my creative research. I first developed a 2D animation that dealt with the interactions of the two characters ten years ago. In that earliest scenario, the mother appeared as an authority figure who impatiently pushes her child beyond the child’s abilities. The mother is entirely focused on what she expects from her child without realizing or considering the child’s capacities or willingness. The child’s attempt to break free of all constraints results in the child taking off, leaving the mother alone in surprise.
Over time, as I have matured, this story has evolved. In the recent version of Come with Me, there is a different approach toward the characters as well as the storyline. The role of the mother has changed to that of a caring and supportive companion, someone quite familiar with the outside world yet open to knowing her newborn. The child knows little about the outside world as well as herself and could be surprised by both. In this new scenario, the mother gives space to her child and accepts her for who she is. Burdens are lifted from the child by establishing a healthy relationship between the two characters.
Introducing a research method in 3D animation is quite challenging due to the nature of the subject. 3D animation is an ever-changing art form. There are always new updates to the software being used, and new features introduced in updates may change parts or all of the production workflow. 3D animation is also an interdisciplinary subject that belongs to, interacts with, and borrows from a variety of fields such as visual arts, acting, computer science, and physics.
In the pre-production process, as the storyline is being developed, an understanding of narratives and storytelling structures is needed. A general knowledge of visual arts and fine arts is required when designing the concept of the animation and its characters. It helps to know a little bit of coding when it comes to rigging the characters for the 3D animation. And a general knowledge of anatomy, physics, and acting is advised for character animators.
I started the MFA program in Creative Technologies with a fair amount of experience in 2D animation, but 3D animation was relatively new to me. As I was taking classes in 3D Modeling, Character Animation, Synthetic Photography, and History of Photography, I tried to work towards making a 3D animation instead of doing scattered, unrelated assignments. Taking this approach, the process of learning was more enjoyable, knowing that it would eventually take shape as a cohesive story. Having said that, it is easy to become overwhelmed doing practice-as-research, because not all attempts lead to instant results, or any results at all. There are many routes I took while attempting to make Come with Me that did not find their way into the final animation, but they certainly contributed to the progress of the project. These attempts were necessary for my learning process, and they provided me with experience and knowledge I can hopefully share with practitioners in the animation field.
The first lesson I learned doing practice-as-research was that making mistakes is part of the learning process, and many failed attempts may be made before arriving at the final result. In the early steps of creating Come with Me, I used to think of the process as a straightforward procedure in which there is a correct path that I should find and follow to get to the point I want. However, going further down each road I took, there would come a moment when I realized the route was not going to lead to the result I wanted and I needed to step back and change paths.
In theory, the workflow to produce a 3D animation is linear. In this linear workflow, each step of the production is completed and followed by the next step. The process starts with story development, is followed by 3D modeling and rigging, and ends with rendering. However, this linear workflow from story development to rendering only happens in theory. In the practice-as-research method, the workflow requires taking a zig-zag path between the steps of creating the animation rather than a straight path.
These zig-zag paths are in the nature of the practice-as-research method, because learning happens through practicing, and each step of the production influences the others. For example, when the 3D model of a character takes shape, the character may change and evolve, so the story changes consequently. In addition, there are parts of the process that need to be put into practice before reaching their final stage. For example, some attributes of the 3D character can only be tested once the character is rigged, and the rig itself can be tested only after it is animated.
At first, going through these zig-zag paths was frustrating for me, and there were times when my attempts would not lead to the result I had aimed for. In such cases, as the final result was not achieved, I used to think of my efforts as wasted. I remember one occasion when such frustration took over me. I took an individual study course in my second year in the program. It was close to the end of the semester, and I did not have the final result I had aimed for at the beginning (I was going for rigging my character from scratch but ended up with a custom face rig and many failed attempts at rigging the body). I went to my advisor, Dane Webster, and expressed my concern about not getting the result I wanted as the semester was ending. "It is not just about the final result," he told me. He mentioned that my attempts at finding a path to rig my character would be considered when my work was evaluated, because these failed attempts led me to the conclusion that I did not want to rig my character from scratch, and that was a step forward. It was there and then that I started to realize the importance of the process as a part of my research and practice. The attempts that may or may not lead to the expected result are part of the practice and are valuable in the learning process.
Then I found out about stories of failed attempts in the animation and film industries and realized that making failed attempts is a common practice not just in academia, but also in the animation industry. An example of such attempts that did not find their way into the final animation is the early design of Mike and Sullivan, characters from the Pixar film Monsters, Inc. As you can see in the video shared here, Mike doesn't have hands and Sullivan has tentacles instead of legs. It was only after animating a scene that the filmmakers realized the physicality of these characters needed to be changed.
While as an artist I very much enjoy displaying my best works and making people go "wow," I believe it is necessary to share my failed attempts and moments of frustration with my colleagues, because failed attempts play a significant role in the learning process.
I think the best practice that makes any attempt, even a failed one, worthwhile is keeping records of thought processes and what is learned while making those attempts. Taking notes helps me understand better, remember more easily, and be able to come back and find out where I went wrong in the process. In fact, I enjoy note taking very much. My notebooks are more like a sketchbook/diary and are full of cartoons of me pulling my hair in frustration or crying with the joy of learning something that made my life easier.
When working on a project for so long, there are moments when you become exhausted by the work. What I am about to say may seem like bad advice, but in my experience, stepping away from the work is the best solution in such moments. Whenever I am stuck in my project, and there seems to be no solution to the problem I am facing, taking a break from the current work helps me the most. By taking a break from the work, I mean stopping thinking about it entirely and engaging in a different activity. This stepping away from work could be interpreted in various ways by different people. For me, stepping away means working on another project that asks for skill sets different from those required for the current project. Creative processes that involve coding are oddly relaxing for me, so I find myself coding with Processing or working on X3D models whenever I feel overwhelmed with animation. As I step away from the animation project, my mind takes a break from the chaotic situation wherein I feel stuck, and with coding, my mind enters a state of order and control again. I have been doing this back and forth between coding and animation for so long that it has become a part of my creative process. Moreover, there were times when animation became my side project, and I would spend most of my time on the so-called side project that involved coding.
In fact, I spent a whole semester on my Processing project after I realized I was not able to create a character rig that suited my animation style. During that semester, my main project was creating a drawing machine with Processing, while 3D animation was my side project to fall back on whenever I was tired of coding.
Afterwards, I learned about a rigging tool that made me think about 3D animation as my main project again. Being flexible about my research subject and trying different paths helped me to be more confident about what I want to focus on at each stage of my journey as a graduate student.
After having to step back and start over so many times, I came to realize the importance of a workflow that allows me to work in steps and go back whenever needed. It became clear to me that I should save, arrange, and name my files in a way that lets me get back to them easily whenever needed. When working on large projects over time, saving files with a naming convention that is understandable to your future self becomes crucial.
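To make this concrete, one simple convention is to end every saved file with a version suffix and bump it on each save, so older stages stay available to go back to. The helper below is a hypothetical sketch of that idea; the project and asset names are made up for illustration, not the actual files of Come with Me.

```python
import re

def next_version(filename):
    """Return the next save name for a file that follows a
    project_asset_step_v###.ext convention (hypothetical example)."""
    match = re.search(r"_v(\d+)(\.\w+)$", filename)
    if not match:
        raise ValueError("filename does not follow the _v###.ext convention")
    version, ext = match.groups()
    # Keep the zero-padding width of the original version number
    bumped = str(int(version) + 1).zfill(len(version))
    return filename[:match.start()] + "_v" + bumped + ext

print(next_version("comewithme_azar_model_v012.ma"))
# comewithme_azar_model_v013.ma
```

Saving a new version with a helper like this, instead of overwriting one master file, is what makes the zig-zag workflow described above practical: any earlier step can be reopened by name.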
As stated in this chapter, the practice-as-research method is about learning through experience. While the research is conducted, not all attempts lead to desired results, but these failed attempts make valuable contributions to the process of learning. Considering the inevitability of failed attempts, developing a workflow that allows for mistakes becomes crucial. It is also important to enjoy the process of learning instead of focusing only on getting to the final result. One way of enjoying the process of learning is to make every attempt worthwhile by taking notes of thought processes and what is learned. Keeping records of what is learned helps with avoiding mistakes caused by forgetting earlier lessons. You may also contribute to the community of people who share your research interests by sharing those notes with them.
In any research project, there are moments of frustration when you feel lost or stuck in the process. In those moments, stepping away from the project might be the best solution. After taking a break, you may find the solution to the challenge you were facing, or realize that you need to change directions in your research path. Being flexible about the research path and open to changing directions is essential when doing practice-as-research.
In this chapter, my approach to the practice-as-research method was introduced. Highlights of the method are listed below.
- Learning through experience.
- Expecting many attempts before coming to an acceptable result.
- Enjoying the process of learning.
- Choosing a workflow that allows for mistakes.
- Keeping records of the process and thoughts.
- Being open to changing directions.
Rigging is the process in which a control system is created so that the character can be animated. A character’s rig consists of joints and a control system that drives the joints’ movement.
Come with Me is a story of a mother and her baby, as the mother tries to teach the baby to walk. I have found myself coming back to this story since I first developed a 2D version of it as the final project for my Directing Animation class in 2008. To me, Come with Me goes beyond a mother and child story.
The challenges the characters face could be translated into any relationship in which one person is more knowledgeable than the other person in some ways. In such relationships, the more experienced person, the guide, may lead the other one, the learner. The approach that the guide takes toward communicating her expectations to the learner is crucial.
Over time, as I learn and grow, the story and characters evolve and change as well. In Table 1, you can see the development of the characters and storyline in Come with Me.
| Early Version | Recent Version |
| --- | --- |
| Mother wants the baby to walk. | Mother wants the baby to walk. |
| Baby wants to get away and crawl. Baby is distracted. Baby wants to be hugged. | Baby wants to be hugged. |
| Mother is entirely focused on what she wants and pushes the baby beyond her abilities without paying attention to what the baby wants. | Mother acknowledges the baby’s needs, encourages her, and believes in her. |
| Mother appears as an authority figure who wants to be in control and have the baby walking quickly. | Mother appears as a patient and supportive companion to the baby. |
| Baby separates from the mother as soon as she becomes capable of doing so. | The baby stays with the mother even when she is capable of leaving her. |
A good practice for visualizing the story, before becoming entangled in technical matters, is to draw thumbnails. Drawing thumbnails is the step before drawing the storyboard. Here, a question may arise: what is the difference between thumbnails and storyboards? The level of visual detail distinguishes thumbnails from storyboards. However, I think the critical difference is in the thoughts and mental processes involved in drawing thumbnails versus drawing storyboards. When I draw thumbnails, I pay attention to the rhythm of scenes and think about the flow of the story through the size of each shot. I ask myself questions such as: is this shot better established as a full shot or a close-up? Do I need to show this action from beginning to end? Does the rhythm of the story flow naturally? As I move on from thumbnails to storyboards, I focus on establishing poses that tell the story in just one frame, and I might add as many key poses as needed for each scene. Finally, I take the frames of the storyboard into a compositing application and make an animatic. An animatic is a movie created by putting together frames of the storyboard and assigning each frame an approximate time.
Through drawing a storyboard, the animation is seen as a whole before the distraction of technical matters. Moreover, once the storyboard, or animatic, is created, you may show it to your peers and to professionals to get their critiques of your animation. My story went through significant changes once I was able to ask for my advisor’s comments by showing him my storyboard. I could not have made those corrections to my story if I had not adequately visualized it through storyboarding.
As the story and characters were evolving towards their final version, the appearance of the characters would go through changes as well. As mentioned in the method section, I was not always following the steps to create a 3D animation in their predicted order. There were many times when I needed to go back and fix some issues in the previous steps to be able to move forward. Instances of such cases happened as I was developing the mother character, Āzar.
3D modeling Āzar was my first encounter with translating a 2D character into the 3D world. Moreover, my 3D modeling skills were limited to inorganic shapes when I started modeling Āzar. I was also struggling with imagining my 2D design in a 3D space, which was especially hard when I was trying to model the face. Because of all that, it took me a whole semester to model Āzar, and more than a semester to have her ready for animation. After I worked on an animated scene with Āzar, I realized that her exaggerated features were causing problems: as she bent or sat, she looked out of proportion and awkward. Later, I found out that the character’s rig was also not up to the task when it came to animating in a cartoony style. I realized that not all the features of my 2D design could be translated into 3D, features such as Āzar’s long hands and neck, and her big thighs. As a result, before starting the rigging process for the second time, I remodeled Āzar.
Animating Āzar with her out-of-proportion legs and arms made me realize that my favorite style of squashy-stretchy animation works better with a character that is not so exaggerated herself. As a result, I gave the new Āzar a simpler look, with no wrinkles on her clothes and body proportions closer to a real human. Some features of the face also needed modifications. The lips were causing problems when forming a frown, because the default shape of the mouth was close to smiling.
As Jason Osipa says in his book, Stop Staring, it is important to make the default shape of the character bored: “It’s to say that everything refers to your base shape on a functional level, so leave the canvas open. The lack of muscle influence is, in my experience, always the best base, for that very reason: there’s no muscle influence. Nowhere to run from; every shape is going to its destination. If your default shape is smiling, then building a Narrow, a frown, and a Lips Up, etc., and mixing those all together-every one of them is likely to have an element of “un-Smile” (not a Frown, just less Smile) whether intentional or not” (Osipa 2003, 114-115).
Details such as this one might not be consciously seen, but they are definitely felt and make a huge difference in the facial gestures of the character. Another case of such minor fixes happened as I was working with the early version of Āzar. I noticed there was something unnatural about her smile when her teeth were showing. Then I realized it was the perfectly arranged teeth that made her look like a robot when she smiled. The recent Āzar has teeth that are not symmetrical in shape and size, and this almost invisible feature made her smile look more believable and human-like.
After modeling Āzar, I moved on to the baby, Ābān. As I have mentioned before, translating a 2D design into a 3D space was not simple for me. To overcome this challenge, I decided to sculpt the baby’s head with clay before making any attempt at the 3D character.
Doing so, I was able to generate the side and front image planes as references for 3D modeling. And since I had sculpted the head, I could rely on the shapes and forms of the sculpture and not just my imagination when making the 3D model. Above all, I had so much fun working with clay while being away from the computer.
As for the appearance of the baby, I did not want to emphasize the baby’s gender by having her wear a particular type of clothing. It is easier not to identify someone’s gender in Persian (Farsi) because it is a gender-neutral language in which the pronouns for males and females are the same. However, I needed to identify whether Ābān is a girl or a boy to be able to talk about her in my second language, English.
With Ābān, everything progressed smoothly and quickly. Modeling and rigging Ābān was less challenging because I had gained valuable experience and skills while working on Āzar.
As a result, while modeling Āzar for the first time took me a whole semester, I was able to model Ābān in about two weeks. This showed me that to master something, experiencing failure is key. I had to spend a great deal of time 3D modeling my first character to learn how to do it in less time. The same pattern is seen throughout many processes that employ the practice-as-research method.
[ɒːˈzæɾ] Āzar is an Iranian name, and is the ninth month of the Iranian calendar.
[ɒːˈbɒːn] Ābān is a unisex Iranian name, and is the eighth month of the Iranian calendar.
In the following chapter, my early attempts at designing a character for my animation Come with Me are described.
The 3D version of Come with Me started with the character design of the mother, Āzar, in Maya. Going from 2D to 3D was challenging because not everything that works in 2D can be translated into 3D. Some exaggerated features of the 2D design needed to be toned down. Designing a 3D character that is proportional when viewed from different angles can be a challenge as well.
I also came to realize there are differences between a character that is modeled to be presented in a static form and a character that will be a game or animation asset. The significant difference is that a character developed to be animated needs to have a correct edge flow corresponding to muscle movements. The model also needs to consist mostly of quads (four-sided faces) and triangles. Moreover, the polygon count of the model should not exceed a certain amount. A 3D model that has all the properties mentioned above is said to have a clean topology. Having a clean topology (especially for the face and hands) is the first step toward correct deformations for a character designed to be animated. There are some rules for achieving a clean topology, but each character has specific properties that bend the rules. As a result, I studied many characters before settling on the final topology of my character, Āzar.
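The face-type requirement above can be checked programmatically. The sketch below is a simplified, hypothetical illustration: it takes a mesh as a plain list of vertex-index tuples (a stand-in for the face data a 3D package would provide) and classifies each face, the kind of quick report a modeler might run before calling a topology clean.

```python
def topology_report(faces):
    """Classify the faces of a mesh by vertex count.

    `faces` is a list of vertex-index tuples; a face with 3 indices
    is a triangle, 4 is a quad, and anything larger is an n-gon
    (which a clean, animation-ready topology should avoid)."""
    report = {"triangles": 0, "quads": 0, "ngons": 0}
    for face in faces:
        if len(face) == 3:
            report["triangles"] += 1
        elif len(face) == 4:
            report["quads"] += 1
        else:
            report["ngons"] += 1
    return report

# A tiny made-up mesh: two quads and one triangle
faces = [(0, 1, 2, 3), (2, 3, 4, 5), (4, 5, 6)]
print(topology_report(faces))
# {'triangles': 1, 'quads': 2, 'ngons': 0}
```

A report dominated by quads, with a few triangles and no n-gons, matches the description of clean topology given above.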
Once my 3D model was ready, I moved on to texturing. To put a 2D texture on a 3D model, the model needs to have a UV map. Texturing a 3D model with a 2D image is like wrapping an object with wrapping paper. UV mapping provides a map so that any software that handles 3D models with textures knows how the 2D image is wrapped around the model.
Some might say Maya is not suitable for modeling organic shapes and that it is better to use applications such as ZBrush, which have strong sculpting tools and brushes. However, I think choosing a modeling software is only partially dependent on the application’s features. It is equally important that the artist feels comfortable with the medium being used to create the artwork. I favored Maya over ZBrush for some technical and personal reasons. To clarify these reasons, a brief comparison of the two applications follows.
To build a model in Maya, the artist needs to be mindful of the elements that shape the model. Should the face of the character be made from a sphere or a cube? Is it better to start modeling the lips by creating a curve or by extruding the edges of a plane? These are possible considerations as the 3D model is created in Maya. A clean topology can be achieved if these aspects of modeling are well considered from the beginning. As Jason Osipa, in his book Stop Staring, says: “it is important to be careful and think hard about what you’re actually doing with each shape-not just the look of the shape, but how you get there” (Osipa 2003, 115).
In ZBrush, making the 3D model can also be started from basic shapes. However, the artist doesn’t have much control over the elements that shape the model. 3D modeling in ZBrush is done mostly through sculpting tools and is as playful as sculpting with clay. The artist is free to focus on the appearance of the model and leave technical matters for the end of the process. ZBrush can handle high-polygon objects, and this feature makes it a suitable platform for working with models with a high level of detail.
As explained above, the 3D modeling process varies significantly between Maya and ZBrush. In my experience, it is best to work in Maya if the final model needs to be low-polygon and certain aspects of the model need to be controlled from the beginning. Maya is also a suitable platform for modeling assets that will later be animated. However, if the model is not animated and is presented in a static form, ZBrush might be the better choice, because the topology of the model is not important and only the polygon count needs to be watched. Having said that, there is no need to be dedicated to just one application when it comes to 3D modeling. The model can be exported from ZBrush to Maya and retopologized there. The best practice is to know the strengths of each application and transfer the model between them when needed. An example of employing the strengths of Maya and ZBrush while 3D modeling follows.
In this early version of my character, I wanted to have detailed wrinkles on the clothing. To do so, the model needed to have enough geometry, meaning the polygon count of the model had to be increased. On the other hand, the model should be low in polygon count so that it deforms and animates more conveniently. To work around this issue, 3D modelers take different approaches. My approach was to export the low-polygon model (which had already been UVed) from Maya and bring it into ZBrush. There, the polygon count of the model was increased, and details such as cloth wrinkles were sculpted. Once the high-polygon model was finished, a map of these details was generated as a normal map.
Afterwards, I went back to my low-polygon model in Maya and applied the normal map to it. When the model with a normal map is rendered, the rendering engine generates shading based on the information provided by the map, and the final result looks as if the model actually has all those details.
Maya is an Autodesk 3D software used for 3D animation, modeling, simulation, and rendering.
Some people say triangles should be avoided at all costs, but in my experience that is not always true.
“Retopology is the act of recreating an existing surface with more optimal geometry. A common use-case is creating a clean, quad-based mesh for animation, but it's also used for most any final object that needs to be textured, animated, or otherwise manipulated in a way that sculpted meshes are not conducive to” (Williamson).
When the character is modeled and appropriately textured, it is time to take the final step towards making it ready for animation. This final step is rigging. Rigging is the process in which a control system is created so that the character can be animated. A character’s rig consists of joints and a control system that drives the joints’ movement.
Depending on the range of movements expected from the character, the process of rigging and the resulting rig can each vary significantly. As a result, there are many ways to rig a character, and multiple skill sets are required to do so. Coding is a must-have skill if a character rig is created from scratch. Familiarity with anatomy and body movements is also needed for placing the joints in the right positions and skinning the model to the joints.
As the brief explanation above might suggest, rigging can be the most challenging part of making a 3D animation. Having said that, there are many tools available to assist the process, ranging from one-click rigging solutions to complex, step-by-step rigging tools. When I first decided to rig my character, Āzar, I took the easy path and went with Mixamo, which I categorize among the one-click rigging solutions.
Skinning is the process of binding the 3D model to the joint setup you created. This means that the joint setup will have an influence on the vertices of the model and move them accordingly (Pluralsight 2014).
Mixamo is an online auto-rigger that creates a character rig in a matter of minutes. As shown in (figure 17), the user just needs to define the body parts using the tool’s markers, and Mixamo takes care of the rest, producing a rigged model.
Mixamo is a fast and easy solution; however, it has some downsides. First of all, the placement of the joints is defined in the front view, so there is no way to modify the joints from any other view. As a result, the model cannot have the natural curves of the spine, elbows, and knees. The Mixamo rig also doesn’t allow modifications to the finger joints. Because of that, the character’s hand has to be modeled according to Mixamo’s hand joints so that the finger deformations are correct. Moreover, the character is skinned to the joints with a basic skinning method. If better deformations are desired, it is best to go through the process of painting the skin weights.
Painting skin weights is the process in which the influence of each joint on the skin of the character is defined. As the name implies, this process requires artistic skills with digital brushes. The weights need to be painted on the model, and the influence of each joint should change gradually to the next joint so that there are no awkward deformations.
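Under the hood, painting weights assigns each vertex a share of influence per joint, and those shares must sum to 1.0 so the skin follows the skeleton consistently. The sketch below is a simplified illustration of that normalization step (the joint names are made up); skinning tools perform an equivalent operation automatically as you paint.

```python
def normalize_weights(weights):
    """Normalize one vertex's joint weights so they sum to 1.0,
    which is what smooth skinning expects after painting.

    `weights` maps joint name -> painted influence for a single vertex."""
    total = sum(weights.values())
    if total == 0:
        raise ValueError("vertex has no joint influence")
    return {joint: w / total for joint, w in weights.items()}

# A hypothetical vertex near the elbow, influenced by two joints.
# Painted values 0.6 and 0.2 get rescaled to 0.75 and 0.25.
painted = {"upper_arm": 0.6, "forearm": 0.2}
print(normalize_weights(painted))
```

Because weights are renormalized like this, painting more influence for one joint implicitly takes it away from the others, which is why the gradual blending between neighboring joints described above matters.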
In addition to the toolset for painting skin weights in Maya, there are other tools and plugins to help with the process, such as “ngSkinTools.” However, I did not find them convenient enough to work with and used the default Maya toolset to paint the skin weights for the early version of Āzar. Before I found the best workflow, I spent a lot of time painting the weights for my character, only to realize that I needed to change some geometry or joint placements, and then I would be required to start the whole process over. Later, when I learned about another rigging tool (Advanced Skeleton5), I found helpful options there to speed up the process of painting skin weights. I also came up with a particular workflow that allows me to change the geometry or even the joint hierarchy and still be able to copy the skin weights from the old geometry to the new one. These methods are explained in detail in 6.2.2.
In the early attempt to rig Āzar, I used the Mixamo rig as a base and started building my custom rig by adding extra joints and controls. I added controls to the breast joints as well as the face joints. When the body rig was finished and all the extra joints were added, I started to make a custom facial rig for my character.
To create the face rig for my character, I used a combination of joints and blend shapes. Blend shapes are deformed duplicates of the original mesh (3D model) that are connected to it. A common use of blend shapes is to create facial emotions and expressions. I used Maya deformers and sculpting tools to create the blend shapes for Āzar. Here, I would like to point out a quick note about blend shapes: once you duplicate the mesh and create the blend shape, don’t freeze transforms on it, because the relative distance between the blend shape and the original mesh is also part of the calculation that happens when Maya deforms the original mesh.
When the blend shapes were created and connected to the mesh, and the joints were added with controls assigned to them, it was time to create the GUI (graphical user interface).
The GUI I designed for Āzar includes one- and two-dimensional sliders. One-dimensional sliders have one degree of freedom; here, this means they only move along the y-axis. There are also limits on the movement of the sliders, because they need to stay within their bounding boxes. For example, the eyebrows’ emotion slider can only move along the y-axis, and its range of movement is from -1 to 1.
These one-dimensional sliders are identified by narrow rectangular bounding boxes in my GUI design. The second set of sliders are those with two degrees of freedom, and the bounding box for them is square-shaped. These two-dimensional controls are either simple or complex.
Simple two-dimensional sliders can move along the x- and y-axes, and there is a one-to-one relation between the x and y movements and the objects they drive. The jaw control is a good example of these two-dimensional controls. Moving the slider along the y-axis opens the mouth, and moving it along the x-axis moves the jaw to the left and right. Complex two-dimensional sliders can also move along the x- and y-axes. However, the connection between the x and y movements and the objects they drive is not one-to-one. Here, moving along the x- and y-axes drives four objects instead of two. I only used these kinds of sliders to drive the blend shapes that create the shapes of the mouth.
I employed two methods to make the GUI drive the blend shapes, controls, and joints: Set Driven Keys and the Expression Editor. Set driven keys were used for one-dimensional sliders and simple two-dimensional sliders. The way a set driven key works is that one object is set to be the driver and another object is set to be the driven; a certain value on the desired channel of the driver sets the value of the driven to a certain number. The fantastic part of this method is that rotation values can be driven by translation values. For example, the y channel of the jaw slider is connected to the rotation channel of the jaw control that opens the mouth. As a result, instead of grabbing the jaw control on the face and rotating it to animate the jaw, the jaw slider in the GUI is selected and moved up and down. The same slider drives the jaw to the left and right by connecting the x channel of the slider to the corresponding rotation channel of the jaw control. In the same way, set driven keys are used to drive the different blend shapes.
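Under the hood, a set driven key is just an interpolated mapping from keyed driver values to driven values. The sketch below models that idea in plain Python (it is not the Maya API, and the jaw numbers are illustrative):

```python
# A toy model of a Set Driven Key: the driven value is interpolated
# between keyed (driver, driven) pairs -- e.g. the GUI jaw slider's
# translateY driving the jaw control's rotation. Not the Maya API.

def driven_value(keys, driver):
    """keys: sorted list of (driver, driven) pairs; linear interpolation,
    clamped to the first/last keys like Maya's default pre/post behavior."""
    if driver <= keys[0][0]:
        return keys[0][1]
    if driver >= keys[-1][0]:
        return keys[-1][1]
    for (x0, y0), (x1, y1) in zip(keys, keys[1:]):
        if x0 <= driver <= x1:
            t = (driver - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)

# Slider translateY 0 -> jaw rotation 0 (closed); translateY 1 -> 25 degrees
jaw_keys = [(0.0, 0.0), (1.0, 25.0)]
print(driven_value(jaw_keys, 0.5))  # half-open mouth
```

Note how a translation value on the left side of each pair freely drives a rotation value on the right; the two channels never need to share units.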
The Expression Editor was used to connect the complex two-dimensional mouth slider to four blend shapes that resemble the shapes of the mouth when “M,” “O,” “V,” and “E” are pronounced. This part of rigging required a little bit of coding and an understanding of mathematics. Writing an expression here was basically finding a mathematical function for each corner of the square bounding box of the lip-sync slider. These functions have two variables, x and y, and the numbers they output are used to drive the blend shapes.
There are advantages to having a complex two-dimensional lip-sync slider. First, the mouth can be animated using only one slider. Second, a variety of mouth shapes is created while only four blend shapes are designed. The reason is that whenever the slider is not in the center or in one of the corners of the bounding box, two blend shapes are activated at the same time. As a result, the shape of the mouth is a combination of the two blend shapes.
As explained above, I used a combination of blend shapes and joints to rig Āzar’s face. Then I used Set Driven Keys and the Expression Editor to drive the joints and blend shapes through a custom-designed GUI. Once the face rig and the GUI were created, my character was ready for animation.
Once Āzar was fully rigged, I used it in two project assignments. One of the projects was making an animated scene for my Synthetic Photography course.
This course was about the aesthetic and technical aspects of rendering in Maya using Pixar’s rendering engine, RenderMan. Learning about different lighting scenarios had a significant impact on my art practice. My favorite assignment was the still life assignment. A Maya scene including some cubes and spheres was given to the class, and we had to show one of the eight primary emotions, randomly assigned to us, just with the choice of composition and lighting.
Employing light to convey a mood in a scene with only cubes and spheres taught me the importance of lighting and composition in telling a story in just one frame. It was fascinating that in the critique session for the assignment, the class could guess the feeling of almost all rendered images correctly. You may want to look at my final render (figure 25) and guess the feeling I was going for with my still.
Sadness is the feeling I tried to capture in my still. However, I believe this still tells more than just sadness. I wanted the viewer to feel the loneliness of this blue-green sphere by framing her close to the corner and bottom of the image (I can’t help but refer to the sphere as a person rather than an object at this point). Placing the sphere in a white-blue environment, even colder than herself, was another attempt to depict her being enveloped in a cold atmosphere. However, thinking about this sphere, I guess she is not all sad and lonely. There is a warm yellow light coming through the narrow opening between the walls surrounding her that I think might shine on her at some point.
Taking the Synthetic Photography course, I learned the basics of lighting and rendering with RenderMan. However, the most important lesson for me was to understand that light tells a story. Now, when I am animating Come with Me, I always have a rendering window open and render the key moments of the scene to see if the poses of my characters and the lights are telling the same story.
The other project in which I used Āzar was the assignment for my History of Photography class. In this assignment, I recreated three photographs by Irving Penn from Vogue magazine, using my character, Āzar. In doing this assignment, I studied Irving Penn’s photography style and was amazed by his excellence in capturing human feelings. Looking at his photos, I was fascinated by the depth and feeling he depicted through posing and lighting his subjects. His choice of simple and empty backgrounds was also very appealing to me. I took Irving Penn’s advice about embracing the white space to heart: “The temptation to put too much in, to put in things that don’t tell anything, is very great. You have to be kind of surgeon and cut through this and not be afraid of white spaces, of emptiness.” (Hambourg, Rosenheim, and Dennett 2017, 325)
Even though the scene was not crowded with props and characters, achieving the final result for each of these still images took me a lot of time. As Jeremy Birn says in his book Digital Lighting and Rendering: “Most of the time spent on lighting is not spent setting up lights in your scene. More time is actually spent on adjusting your lighting, getting feedback on your work, and revising things, than on the initial set-up. Re-rendering scenes with better and better lighting, while getting feedback on your versions, is essential to perfecting your lighting skills.” (Birn 2006, 12)
All the final stills presented in this document are rendered in Maya, using the RenderMan rendering engine.
Working with Āzar in the synthetic photography projects, I came to realize the flaws of the model as well as the rig. From the modeling aspect, Āzar’s out-of-proportion legs and arms did not work well in many poses, especially when I had to have her sitting down. Also, Āzar’s default lip expression was close to a smile instead of being neutral, which made the deformation of the sad lips a little bit off. As for the rig, I realized it was going to limit the way I could animate my character. The rig did not have the option for squash and stretch, and since the controls for rotations could not be translated, the rig would cause awkward deformations when I wanted to pose Āzar close to the extremes of each control.
These matters made the animating process frustrating, and I couldn’t achieve the result I desired with that rig. I think these high expectations I have of a 3D rig come from my 2D animation background. In 2D animation, any deformation is possible because the animator draws the character. However, in 3D animation, the animator’s skill is bounded by the rig being animated. Although animators use tricks to fake the effects of exaggerated 2D animation, having a rig that allows for squash and stretch is still crucial for animating in a more cartoony style. It is essential for animators to know the limitations they face with each rig they are working with. Qualities of good rigs are explained in detail in 5.2.
After re-evaluating my rigged character by posing her and animating short scenes, I realized the rig I had so far was not good enough for the animation style I was going for. As a result, I decided to rig my character from scratch without using Mixamo, but this did not work either. After spending a month trying to learn the rigging process, I realized creating a high-quality rig from scratch was not an option for me.
Building a squashy-stretchy rig with controls that would allow me to pose the character with minimum limitation is a research path for someone who wants to pursue a career as a rigger, not an animator. So, I dropped the subject and gave up on the idea of making a 3D animation for a while. I focused more on the Processing projects that I was working on alongside the 3D animation. Then one day, I found out about another rigging tool, Advanced Skeleton. Watching its online tutorials and exploring some of its character rigs, I found new hope that I could probably make my dream rig with the help of this new tool.
After a short break from making Come with Me, I took a few steps back in the process and started again, this time with more knowledge and skills.
Animating a short scene and rendering still images of Āzar helped me evaluate the rig I had designed. My expectations of my characters’ movements became clear to me, as did the kind of rig that works best for my animation. In this chapter, I go through the critical aspects of my final attempts to rig and animate my characters.
Advanced Skeleton is an advanced modular rigging system that helps with rigging characters ranging from bipeds to birds and even octopuses. Advanced Skeleton is not an auto-rigger like Mixamo; it is a rigging tool that makes the rigging process more efficient by automating some steps of the rigging workflow. This tool provides the user with many options to choose from at each step of the rigging. That is why the user needs a basic understanding of the rigging process before starting to work with Advanced Skeleton.
Advanced Skeleton is constantly updated, and bugs are fixed with every update. Although this constant updating is great, figuring out the workflow that works best with each new update is not always straightforward, at least not for me. The developers of the tool provide many online tutorials. However, most of the subjects covered in older tutorials are outdated, and with every major update, they prepare a tutorial that focuses only on the new features of the current version. As a result, it is hard to get an overview of the Advanced Skeleton rigging workflow without watching all the tutorials, even the outdated ones.
Due to the ever-changing nature of Advanced Skeleton, the steps that I followed to rig with this tool are not explained here. Instead, I am going to introduce some characteristics to look for in a character rig to evaluate the quality of the rig.
Before I became more engaged with rigging, I had a hard time understanding what makes a good character rig. However, once I learned the basics of rigging and worked with Advanced Skeleton, the characteristics of a quality character rig became more apparent to me. These qualities are introduced in this chapter.
The most important control to examine in a character rig, in my opinion, is the foot control. When looking at the channel box of the foot controls, look for attributes other than Translate and Rotate. These extra attributes may have different names in different character rigs (Swivel, Toe, Roll, Anti Pop, etc.); however, their functionalities are pretty much the same. Some of them are responsible for the rotation of the feet around different pivot points. I think of these as the most important attributes: when it comes to animating a walk for the character, the ability to animate multiple rotations of the feet becomes crucial. Another set of extra attributes that may come in handy are those that let you control the knee position and the length of the leg.
I admit that this title seems odd and needs explanation. Let’s start by defining where to find these empty groups and then clarify why they are helpful.
When there is a character rig in a Maya scene, the hierarchy of all the elements that make up the character rig can be seen in the Outliner. Looking at the Outliner, you may find the control curves that are used to animate the character. The empty groups that I am talking about, if they exist in a character rig, are located on top of each control, as can be seen in (figure 36).
Now that we know where to look for these empty top-level groups of controls, let’s find out their functionality.
These empty groups come in handy when doing constraint animation. Constraint animation, as the name implies, is done by constraining the movement of one object, the Child (as Maya calls it), to the movement of another object, the Parent. In doing so, the Parent controls the movement of the Child. For example, in the first scene of my animation, I needed the mother to hold her baby and walk into the scene. To do that, I had to set up a constraint animation in which the wrist control of the mother would be the Parent of the Main control (the control that moves the whole body) of the baby. There is a downside to constraint animation: when an object becomes the Child, it cannot be animated by itself because the Child’s animation is constrained to its Parent.
The empty top-level groups reveal their value whenever you need to animate the constrained control, the Child. The empty group of the Child control can be used for the constraint instead. When the empty group is constrained, the control that lives underneath it follows the group’s animation. However, the control itself is not constrained and, as a result, can be freely animated.
As stated above, one of the helpful qualities of a character rig is having empty top-level groups for every control. These empty groups are crucial when it comes to constraint animation.
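The mechanism can be reduced to simple transform composition. In this minimal sketch (plain Python vector math, not the Maya API), the group’s position is dictated by the constraint to the Parent, while the control keeps its own freely keyable local offset on top of it:

```python
# A minimal model of constraining the EMPTY GROUP instead of the control:
# the group follows the Parent, and the control under it adds its own
# local, freely animatable offset. Names and values are illustrative.

def world_position(parent_pos, control_local):
    """Group is point-constrained to the parent; the control's world
    position is the group's position plus the control's local offset."""
    return tuple(p + c for p, c in zip(parent_pos, control_local))

# The mother's wrist (Parent) moves across three frames; the baby's Main
# control can still be keyed on top of the constrained group:
wrist_per_frame = [(0, 5, 0), (1, 5, 0), (2, 5, 0)]
baby_local_per_frame = [(0, 0, 0), (0, 0.5, 0), (0, 0, 0)]  # a little bounce

for wrist, local in zip(wrist_per_frame, baby_local_per_frame):
    print(world_position(wrist, local))
```

If the control itself were constrained, the `control_local` term would be locked out; constraining the empty group is what keeps that term available to the animator.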
In each character rig, some controls are designed to be rotated, such as the shoulder and FK wrist controls. Other controls are intended to be translated and rotated, such as the IK foot and IK wrist. While as humans we avoid dislocating our shoulder joints as we rotate our shoulders, having the option to do so in a character rig is quite helpful. Being able to translate, rotate, and scale all controls enables animators to pose characters more freely. That is to say, when looking for quality character rigs, check the channel box of each control to see how many of the channels are keyable and unlocked. More keyable channels mean more freedom for animating.
It is incredible how we can communicate without saying any words. There are many subtleties we read in each other’s faces that are hard to describe in words. For example, most people can tell a fake smile from a real one. Moreover, we perceive mood and feelings not only through the features of the face but also through the postures of the body. Most of the time, even a silhouette of the body is enough for us to guess the general feeling of a character. When people communicate with each other, the message sent through the face and body is more telling than the words.
Capturing these features of human nature has always fascinated me. But when it comes to 3D animation, these subtleties of human expression are not easy to depict. As my animation doesn’t have dialogue, showing the feelings and emotions of the characters through their poses and gestures becomes even more important. As an animator, I tend to pay attention to the movements and emotions of people. However, just observing people is not enough. Before starting to animate, I read the book Acting for Animators by Ed Hooks, hoping it would help me improve my acting skills because I was going to film myself as a video reference for my animation.
Unfortunately, my acting skills are beyond help, although I did record the reference videos. Reading Acting for Animators taught me that: “The problem with doing your own live-action reference is that you already know the kind of movement you are trying to animate. So, you get up in front of camera and sort of act out the way you think that kind of movement will work. The thing is that you are trying to act and also be aware of your movement at the same time, which is almost a guarantee that the movement you record will be unnatural and stiff” (Hooks 2011, 53). To overcome this problem, Ed Hooks suggests getting a friend to perform the live-action reference for you (Hooks 2011, 55).
At every step of making a 3D animation, challenges have at least two aspects: the first is conceptual and artistic, and the second is technical. In the following chapter, I introduce you to the most challenging part of my first scene, creating a walk cycle.
“Why is that we recognize our Uncle Charlie even though we haven’t seen him for ten years – walking – back view – out of focus – far away? Because everyone’s walk is as individual and distinctive as their face. And one tiny detail will alter everything. There is a massive amount of information in a walk and we read it instantly.”(Williams 2009, 104)
For the mother, Āzar, I wanted her to walk in a happy and hopeful manner that would show her excitement as she was going to teach her baby to walk. However, I always start a 3D walk cycle with what I call a boring walk cycle. This boring walk cycle is basic and only includes the placement of the feet and the up and down movements of the hip. Then I start to build the characteristics of the walk, one step at a time, on top of that boring walk cycle. The reason for doing so is to maintain a workflow that allows for mistakes, as discussed in the method chapter. In 3D animation, the key to success is to work in layers and to break the work into steps as you animate. Animating in steps was not easy for me when I began 3D animating. Because of my 2D background, I tended to bring each pose of the character to its final stage before moving on to the next pose. Over time, I learned that this approach causes many errors, and fixing the animation afterward becomes hard and sometimes even impossible. Taking the step-by-step approach to complete a 3D animated scene helps with identifying mistakes as they occur in each step instead of facing a cluster of errors at the end.
As Ken Harris says: “A walk is the first thing to learn. Learn walk of all kinds, cause walks are about the toughest thing to do right” (Williams 2009, 102).
Animating a walk cycle in 3D is even more challenging because even when you see the problem in the scene, figuring out the technical matters to tackle the problem may take a while.
The breakdown of the up and down movements of the hip in a walk cycle is shown in (figure 41). The hip’s rotation curves are shown in (figure 42). As you may have noticed, the peaks of the rotation curves do not happen at the same time. Moreover, the peaks of the hip’s up and down movements occur at different times than the peaks of the rotation curves.
To understand the reason, we need to think of the walk as a force that moves like a wave through the body. As Richard Williams, in the book Animator’s Survival Kit, says: “Walking is a process of falling over and catching yourself just in time. We try to keep from falling over as we move forward. If we don’t put our foot down, we’ll fall flat on our face. We’re going through a series of controlled falls. We lean forward with our upper bodies and throw out a leg just in time to catch ourselves. Step, catch. Step, catch. Step catch.” (Williams 2009, 102).
Right after the foot in the air touches the ground, a wave spreads through the body, originating from the point of contact. After the step is taken, the wave reaches the hip, then moves to the upper body, causing the stomach to go up before reaching the chest and shoulders. This force starts at the contact of the foot with the ground and spreads throughout the body. There is also a second force that originates from within the character’s body and tries to keep the character balanced. To keep our balance, we transfer our body weight onto the foot that stays on the ground as we walk. That causes the hip to move from side to side. It also makes the shoulders rotate in the opposite direction to maintain balance.
To create a walk cycle, all these forces need to be considered. They might not be seen as you watch the walk, but they are felt. Animating a walk cycle while considering all these forces is surely overwhelming.
That is why the only way to do it, in my opinion, is to think of it in steps, as I have mentioned before. Honestly, even with the right approach, animating a walk cycle takes a lot of time and patience. It would have been easier if I could have skipped animating all these forces. But unfortunately, everyone could spot the floating, fake quality of my 3D walk cycle before I added details such as exaggerated up and down movements of the upper body parts. As Richard Williams says: “when we trace off a live action walk (the fancy word is rotoscoping), it doesn’t work very well. Obviously, it works in the live action – but when you trace it accurately, it floats. Nobody really knows why. So we increase the ups and downs – accentuate or exaggerate the up and downs – and it works” (Williams 2009, 106).
Creating a walk cycle in 3D requires time and patience. There are many forces to consider in order to animate the movements with the correct timing. The best approach to animating in 3D, in my experience, is working in steps. Breaking down the forces and applying their effects one step at a time makes the process of animating more manageable.
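The offset peaks described above can be sketched as phase-shifted sine waves (a toy model of the "boring walk cycle" layer; the frame counts, amplitudes, and phase offsets below are illustrative assumptions, not values from the thesis):

```python
# A sketch of layered walk-cycle timing: the hip's up/down translation
# and its side-to-side rotation are sine waves whose peaks fall on
# different frames. All numbers are illustrative.

import math

CYCLE = 24  # frames per full walk cycle (two steps)

def hip_translate_y(frame, amp=1.0):
    # the hip bobs down and up once per STEP, so twice per cycle
    return amp * math.sin(4 * math.pi * frame / CYCLE)

def hip_rotate_z(frame, amp=5.0, phase=5):
    # the side-to-side weight shift swaps once per cycle and peaks
    # a few frames after the up/down peak
    return amp * math.sin(2 * math.pi * (frame + phase) / CYCLE)

for frame in range(0, CYCLE, 4):
    print(f"frame {frame:2d}: "
          f"translateY {hip_translate_y(frame):+.2f}  "
          f"rotateZ {hip_rotate_z(frame):+.2f}")
```

Working in steps means keying one such channel at a time: first the foot placements, then the up/down bob, then the rotations, each layered on top of the last.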
Thinking of light as an element of storytelling helps with understanding its importance in a 3D animation. “There is more to lighting a scene than simply running a simulation of real-world parameters. Lighting is also designed to achieve certain visual goals that help a viewer better appreciate a scene. How well a lighting artist accomplishes these goals determines how the lighting will enhance or detract from a shot” (Birn 2006, 9).
My approach to lighting my scenes was to separate the mother and child from their empty surroundings by creating a warm and cozy atmosphere just around them. To do so, I used the three-point lighting method throughout the scenes. The breakdown of the lights in one of my stills is shown in (figure 41).
Three-point lighting is a lighting technique that uses three sets of lights: key, fill, and rim.
Working on my 3D animation, Come with Me, I eventually developed workflows that would allow me to go back and fix errors whenever needed. In the following, I am going to introduce you to some of the best practices that helped me maintain such workflows at each step of producing Come with Me.
As mentioned in the method section, through working on a project over time, I realized the importance of arranging and preserving Maya files in such a way that they can be easily accessed later. The most important part of arranging files is the way they are named.
The naming convention I use starts with a number, followed by a short description, and then another number. When the starting number of the file changes, it means a major update has occurred and I have moved from one step of the work to another. For example, when I am done with painting the skin weights in file “01_PaintWeight_#”, I save the file under the name “02_BlendShape_#” and start creating blend shapes. While changing the starting number marks major updates to the file, changing the ending number marks minor updates within files that share the same starting number. For instance, when I am done with painting the weights of the spine joints in file “#_PaintWeight_01”, I save the file as “#_PaintWeight_02” and start painting the skin weights for the legs.
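This convention is regular enough to automate. The small helper below is my own sketch of the scheme, parsing names of the form `<major>_<Description>_<minor>`:

```python
# A small helper following the naming convention described above:
# <major>_<Description>_<minor>, e.g. "01_PaintWeight_02".
# The parsing and formatting are my own sketch of that scheme.

import re

PATTERN = re.compile(r"^(\d+)_([A-Za-z]+)_(\d+)$")

def next_minor(name):
    """Minor update: same step, bump the trailing number."""
    major, desc, minor = PATTERN.match(name).groups()
    return f"{major}_{desc}_{int(minor) + 1:02d}"

def next_major(name, new_desc):
    """Major update: new step, bump the leading number, reset the minor."""
    major, _, _ = PATTERN.match(name).groups()
    return f"{int(major) + 1:02d}_{new_desc}_01"

print(next_minor("01_PaintWeight_01"))                # 01_PaintWeight_02
print(next_major("01_PaintWeight_02", "BlendShape"))  # 02_BlendShape_01
```

Sorting such names alphabetically is the same as sorting them chronologically, which is what makes it easy to pinpoint the file where a mistake was introduced.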
This naming convention helps with arranging files in chronological order. It also contributes to the workflow that allows for mistakes. Whenever an error occurs, it is quite easy to pinpoint the file wherein the mistake has been made and to start over.
An important part of maintaining a workflow that allows for mistakes is using referencing when working with Maya scenes. Whenever a Maya scene is referenced in another Maya scene, the two Maya files become connected. After such a connection is made through referencing, whenever the referenced file changes, the file containing the reference updates accordingly. The referencing workflow saves a great deal of time and effort when working with Maya projects. In the following, instances of processes that benefit from the referencing workflow are introduced.
Starting the rigging process, I eventually realized that many steps of rigging are irreversible. By irreversible steps, I mean that going back and fixing mistakes made in earlier steps requires starting the whole process over. For example, as a 3D model is created, making changes to it adds to its construction history. Before the rigging process starts, the history can easily be deleted. However, once the model is in the rigging process, deleting the history breaks the rig. This means that after rigging starts, you are hardly able to make changes to the model. Not being able to modify the 3D model once the rigging process starts can be frustrating, especially if you are doing practice-as-research, because being able to practice, to make mistakes, and to go back and fix the mistakes is the defining characteristic of the practice-as-research method. Even in the animation industry, there are occasions when the art director asks for modifications to the 3D model after the rig is created, since some issues of a 3D model are not revealed until it is rigged and animated.
Fortunately, there is a way around this rigging challenge. Referencing Maya files can help in many cases in which the 3D model needs to be modified after the rigging process starts. When the 3D model is created and ready for rigging, save it as a Maya Binary file; something like model.mb. Then open an empty Maya file, and in this new file, reference the “model.mb” file. You will notice that as you reference “model.mb”, the character appears in your Maya scene as if you had imported it. Now you may save this new file as a Maya ASCII file, such as rig.ma. Through this process, you have created a connection between the 3D model and the rigged model in such a way that modifications to the 3D model will be applied to the rigged model.
Now that the 3D model is referenced in the rig file, you may change textures, UV sets, and even the edge flow or the geometry of your 3D model without breaking the rig.
Referencing is significantly beneficial to the rigging workflow. However, you need to be very cautious as you employ referencing in any workflow. The fact that the rig file updates based on the 3D model file may sometimes cause you to lose data. Changing the geometry of the 3D model may cause nasty errors in the rig file, especially after you have painted the skin weights on the 3D model in the rig file (the solution to this problem is explained in 6.2.2). Moreover, the geometry of the 3D model should never change after blend shapes are assigned to it in the rig file. In my experience, changing textures, materials, or UV sets almost never causes issues with the rig file.
Although I have mentioned all these scary aspects of the referencing workflow, if you plan ahead before making major changes to the 3D model file, everything will be fine. Here I will show you my referencing workflow, which helps with managing the files and not losing data.
The first thing to be cautious about is keeping the name and location of the reference file untouched. Maya knows the referenced file only by its name and location; when the file’s name changes or the file is missing from the specified path, Maya is not able to read the file automatically. On the other hand, this means that you may change the content of the reference file, and as long as you keep the file’s name and path untouched, Maya won’t mind. We can use this feature to back up our reference file before going through risky changes that we might want to undo later. What I do to back up reference files is create a folder next to my reference file and save a copy of the reference file in that folder. This folder is a placeholder that contains a copy of my reference before it goes through major changes. Now that I have a backup of my reference, I can change the reference and see the results in my rig file. If the changes lead to the desired result, great! If not, I retrieve the backup reference saved in the placeholder folder and continue without losing precious data.
Having multiple versions of the reference saved in placeholder folders gives you a great opportunity to work with multiple reference files for a variety of purposes. For example, whenever I am animating or running test renders, I use a reference file that includes the model with basic textures. This way, performance is speedier as I animate or run test renders. However, when I am ready for the final render, I swap the reference with one that includes the model with high-quality textures.
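The backup-and-restore step itself is ordinary file copying, so it can be sketched outside of Maya entirely. In this sketch (the folder name `backup` is my own choice), the key point is that the reference file’s own name and path never change, only its content:

```python
# A sketch of the placeholder-folder backup described above, using plain
# Python file operations. Folder and file names are illustrative; the
# reference file's own name and path are never touched.

import shutil
from pathlib import Path

def backup_reference(ref_path, backup_dir="backup"):
    """Copy the reference file into a placeholder folder next to it."""
    ref = Path(ref_path)
    folder = ref.parent / backup_dir
    folder.mkdir(exist_ok=True)
    dest = folder / ref.name
    shutil.copy2(ref, dest)
    return dest

def restore_reference(ref_path, backup_dir="backup"):
    """If the risky change failed, put the backed-up copy back in place."""
    ref = Path(ref_path)
    shutil.copy2(ref.parent / backup_dir / ref.name, ref)
```

Swapping between a basic-texture and a high-quality-texture reference works the same way: keep each version in its own folder and copy the one you want onto the unchanging reference path before opening the rig file.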
Using Maya referencing in the rigging process is a good practice for maintaining a workflow that allows for mistakes, because it provides the opportunity to modify and update steps of the work without affecting the rest of the steps. However, considerable caution is advised while working in the referencing workflow. Since Maya files become connected as you reference them, changing the path to the reference file or corrupting the reference file will lead to losing data in other Maya files.
Anyone who has experienced the process of painting skin weights on a 3D model will wish to do it only once. It is a time-consuming process, and being able to preserve the skin weights while the 3D model changes saves a lot of time and effort.
In a referencing workflow, because the rig and the model are connected through referencing, if the model in the reference file is modified, the rig file is updated as well. However, if you modify the referenced model in such a way that the number of its vertices changes while the model is skinned to the skeleton in the rig file, it will cause issues. I call this issue messed-up skin weights. A case of messed-up skin weights is shown in (figure 46).
Problems like this happen due to the nature of the skin weight painting process. As you paint the skin weights on a model in the rig file, you define the influence of each joint on every vertex of the model. As a result, when you modify the model in the reference file by adding vertices, the rig file doesn’t know how much weight to assign to these new vertices, which causes the case of messed-up skin weights.
Fortunately, there is a way to work around this issue with the help of the Copy Skin Weights option in Maya. In the following, I will explain how copying skin weights works; then I will introduce my approach to working around cases of messed-up skin weights.
Maya lets you copy skin weights from one model to another. There are conditions that must be met to copy skin weights between models. First, both models should be at approximately the same place in the Maya scene. Second, the models need to be attached to one joint hierarchy, or to two very similar joint hierarchies. Once these conditions are met, you may copy skin weights between two models. Maya copies the skin weights between models that share the same space and joint hierarchy and, amazingly, doesn’t care that much about the details of the geometry or the edge flow of the models.
Do you remember what causes the issue of messed-up skin weights? Minor changes to the model! Now that we know Maya is not strict about minor differences between the two models as it copies the weights, let’s go back and find a solution for messed-up skin weights.
To solve the problem, my approach is to preserve the painted skin weights on a duplicate of the model that doesn’t update when we change the reference. This way, whenever the updated reference causes problems with the skin weights, we can copy the weights from the duplicate of the model to the referenced model.
To preserve the painted skin weights, I make a duplicate of the model’s geometry when I am done with painting the skin weights on the model. When a duplicate of any part of a referenced file is created, the duplicate becomes disconnected from the reference file. This means the duplicate will not update if the reference model changes. Once the duplicate is created, I attach it to the joint hierarchy using any of the basic skinning methods. Then I copy the skin weights from the original model to its duplicate (you can apply a different material to this duplicate, or hide it from the scene, to prevent confusion). This way, I preserve the painted skin weights on a duplicate that stays untouched as the reference model changes. As a result, whenever I make modifications to the reference that mess up the skin weights, I can simply fix them by copying the skin weights from the duplicate back to the original model. Notice that copying skin weights works well in this case because the duplicate and the reference model are at the same location, attached to the same joint hierarchy, and very similar to each other.
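Why the copy tolerates a changed vertex count becomes clear with a toy version of the operation. This is not Maya’s actual algorithm; it is a simplified nearest-vertex transfer that captures the relevant behavior, with data laid out as plain Python lists and dicts:

```python
# A toy version of copying skin weights between two overlapping meshes
# (NOT Maya's real algorithm): each target vertex takes the weights of
# the closest source vertex. This is why the operation tolerates small
# geometry changes as long as both models occupy the same space.

def copy_skin_weights(src_verts, src_weights, dst_verts):
    """src_weights[i] is a dict {joint: weight} for src_verts[i]."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    out = []
    for dv in dst_verts:
        nearest = min(range(len(src_verts)),
                      key=lambda i: dist2(src_verts[i], dv))
        out.append(dict(src_weights[nearest]))
    return out

src = [(0, 0, 0), (0, 1, 0)]
weights = [{"hip": 1.0}, {"spine": 1.0}]
# the modified model gained a vertex between the two originals:
dst = [(0, 0, 0), (0, 0.4, 0), (0, 1, 0)]
print(copy_skin_weights(src, weights, dst))
```

The new in-between vertex simply inherits sensible weights from its nearest preserved neighbor, which is exactly what rescues a messed-up skin weight case.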
Preserving the painted skin weights of a rigged character is an essential part of the rigging process. To do so, it is best to create a duplicate of the model and copy the skin weights to it. This way, a backup of the skin weights is always available, and this backup is not affected by changes to the reference file.
When I started animating Come with Me, I learned that it is important to break the animating process into steps to maintain a workflow that allows for mistakes. To do so, I needed to plan ahead before starting to animate. In my first scene, the mother walks as she holds the baby. My approach to breaking this scene into steps was to separate the walk cycle from the rest of the scene. I separated the walk because I knew the walk cycle would be the most challenging part, one that I would want to go back and fix several times. To separate the walk cycle, I animated the walk in a separate Maya scene that included only the mother; then I used the Maya Time Editor to create a clip from the walk cycle animation. In the Time Editor, I selected the lower-body controls of the character (hips, legs, etc.) and added their animation to a clip. Then I imported the lower-body walk cycle clip into my first scene and applied it to the character. When doing so, the animation saved in the clip overrides the animation on the same controls of the character in the scene. Following these steps, I was able to keep working on the walk and replace the walk cycle clip as often as needed without affecting the rest of my scene.
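The override behavior of a clip can be illustrated with a small sketch. This is not the Time Editor's API, just a model of the rule it applies: animation saved in a clip replaces the keys on the controls the clip contains, while every other control keeps its existing animation. The control names and key values below are invented for the example.

```python
def apply_clip(scene_anim, clip):
    """Merge clip animation over scene animation.

    Both arguments map a control name to its keyframes
    ({frame: value}). Controls present in the clip are fully
    overridden; all other controls are left untouched.
    """
    merged = {ctrl: dict(keys) for ctrl, keys in scene_anim.items()}
    for ctrl, keys in clip.items():
        merged[ctrl] = dict(keys)  # the clip wins on its own controls
    return merged
```

For instance, re-importing a fixed walk cycle clip replaces only the hips animation, leaving the arm animation of the scene intact:

```python
scene = {"hips": {1: 0.0, 12: 1.4}, "arm_L": {1: 10.0, 24: 35.0}}
walk_clip = {"hips": {1: 0.0, 12: 2.0}}  # re-exported walk cycle
apply_clip(scene, walk_clip)
# hips keys come from the clip; arm_L is unaffected
```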
To maintain a workflow that allows for mistakes while animating, the key is planning the scene ahead of time. After analyzing the scene, it is best to think about ways to break the animating process into steps. If a critical step is identified, a good practice is to separate it from the scene in such a way that replacing the animation for that step does not affect the rest of the scene.
Many qualities of a 3D character are revealed only once it is rigged and posed. Can the character reach her toes? What does she look like when she sits down? Does the baby look balanced as she crawls? To answer questions like these, the character needs to be rigged. However, while the 3D model is still going through major changes, you don't want to go through a complicated, high-quality rigging process only to realize the character cannot reach her toes. To work around this, it is best to first rig the character with a quick rigging solution such as Mixamo, because you don't need a high-quality rig to test the physicality of the character. With Mixamo, you can rig the character in a matter of minutes, then pose and test it to see whether the proportions and the geometry are right.
Taking an in-between step, rigging with an auto-rigger before starting a complicated high-quality rigging process, allows mistakes to surface earlier in the process, when fixing them is easier and less time-consuming.
In the attempt to make a short 3D animation, Come with Me, I explored 3D subjects ranging from character modeling to rigging, animating, lighting, and rendering. Considering the ever-changing and interdisciplinary nature of 3D animation, the practice of making the animation was interwoven with doing research. As a result, Come with Me progressed as part of a practice-led research project and developed as I became more experienced. Utilizing a practice-as-research methodology emphasizes the process rather than just the final result. It also provides a framework for keeping records of the process and thoughts, and for expecting and accepting failed attempts, which improves openness, agility in changing direction, and the adoption of creative solutions whenever needed. Through this research, I acquired the 3D skills to make a 3D animation. Moreover, I developed non-linear workflows for each step of the 3D animation production. Each production step was introduced along with my thought process as Come with Me developed from the story to the final animated scenes. The failed attempts were also included as an important part of the learning process.
At every step of making a 3D animation, the practitioner fights at least two battles: the first is facing the conceptual and artistic challenges, and the second is tackling the technical obstacles. The practice-as-research method and the workflows suggested in this thesis will hopefully help practitioners in the animation field tackle both the conceptual and artistic challenges and the technical ones.