
Organic Modeling Tutorial - Realistic Portrait

Software used for this tutorial: Blender & YafRay + Photoshop (or Gimp)


Building the base of the model

First of all, we need some photographs of the subject to use as references while building the model. At a minimum, we need:

  • A general three-quarter view, which will serve as an approximate reference for the final image:

Image:Ref1.jpg


  • Both a side and a front view, which will be used as backgrounds for modeling. A good tip for these shots is to place the camera as far as possible from the subject and, if possible, to use a zoom or, better, a telephoto lens. The goal is to get as close as possible to a hypothetical orthographic view.

Image:Ref.jpg


A few notes about using an image as a reference in a 3D view: you can place an image or a blueprint in the background of a 3D view with the Use Background Image option. The Blend option controls the transparency of the image, and Size and X/Y Offset control its size and position in the 3D view. (Note that background images do not appear in renders.) Image:Auto-(back).jpg


To begin, we will make an approximate 'template' from the front and side views, using Bezier curves. This rough model is not essential for modeling the face itself, but it will be used to outline the contours and principal features of the face before the accurate modeling phase, as a kind of "sketch" in 3D. It is a small trick that nevertheless saves a lot of time in the accurate modeling phase. Image:img3.jpg


Bezier curve options:

  • Press E (Extrude) to duplicate a point.
  • Press S and R (Scale, Rotate) for adjustments.

(You must enable the 3D option in the Curve and Surface panel to edit Bezier curves in 3D.)

Image:img4.jpg

Image:Auto_(26).jpg


A wireframe view of the final sketch:


Image:Auto_(25).jpg

Image:Auto_(24).jpg

Modeling

We are now going to start the modeling part. Keep in mind that there is no single way to model a human head, and that this tutorial is more a report of this particular work than a universal method. Everybody should proceed according to his or her own preferences and sensibility.

For this modeling, we will start by making the right half of the face. Once it is finished, we will create the left half with a symmetrical duplicate, and by connecting the two halves we will obtain the complete head.

In practice, we start from a single vertex (for example, a mesh plane from which 3 vertices have been removed), which is extruded repeatedly following the contour of the eye in the front view. Close the contour by selecting the first and last vertices and pressing Alt+M (Merge) or F (join with an edge).

We then obtain a first edgeloop for the eye.
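What this step builds is just a closed ring of vertices connected by edges. The sketch below illustrates the idea in plain Python (not Blender's actual API), with the eye contour idealized as an ellipse; the vertex count and radii are arbitrary example values.

```python
import math

def ellipse_loop(n, rx, ry, cx=0.0, cy=0.0):
    """Place n vertices on an ellipse, mimicking the closed
    eye-contour edgeloop traced in the front view."""
    verts = [(cx + rx * math.cos(2 * math.pi * i / n),
              cy + ry * math.sin(2 * math.pi * i / n))
             for i in range(n)]
    # Closing the loop (Alt+M / F in Blender) means edge i -> i+1,
    # with the last vertex joined back to the first:
    edges = [(i, (i + 1) % n) for i in range(n)]
    return verts, edges

verts, edges = ellipse_loop(12, 2.0, 1.0)
```

A real eye contour is of course traced vertex by vertex over the background photo rather than generated; the point is only that the result is a single closed loop.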

Image:Auto-(23).jpg

We switch to Edge mode (Ctrl+Tab), select the complete edgeloop (Alt+RMB), then extrude (E) and scale (S) outwards, which gives us the beginning of our mesh. Notice the pivot point used for the scaling operation in the image below.
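The extrude-and-scale step duplicates the loop and pushes the copy away from the pivot point. Geometrically, scaling about a pivot is just this (a plain-Python illustration, with made-up example coordinates):

```python
def scale_about_pivot(points, pivot, factor):
    """Scale 2D points away from a pivot point, as the S key does
    with the chosen pivot in the 3D view."""
    px, py = pivot
    return [(px + (x - px) * factor, py + (y - py) * factor)
            for x, y in points]

ring = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
outer = scale_about_pivot(ring, (0.0, 0.0), 1.5)
```

This is why the choice of pivot matters: a pivot at the loop's center grows the ring concentrically, while an off-center pivot pushes it more to one side.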

Image:Auto-(29).jpg

Next, we will perform two operations:

  • Applying a subsurf modifier to the model.

Image:Auto-(sub).jpg

  • Recomputing the normals of the mesh to point outside by selecting all (A) and pressing Ctrl+N. (To check that the normals point outside, switch to Face mode with Ctrl+Tab and enable Draw VNormals.)

Image:Auto-(nor2).jpg

Image:Auto-(nor).jpg
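A face normal is simply the cross product of two of the face's edge vectors, and "pointing outside" means it agrees with the direction away from the mesh interior. A minimal plain-Python sketch of that test (not Blender code; using the mesh center as a rough inside reference, which works for convex shapes like this early eye region):

```python
def face_normal(a, b, c):
    """Normal of a triangle via the cross product of two edges."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def points_outward(normal, face_center, mesh_center):
    """True if the normal agrees with the direction from the
    mesh center toward the face (what Ctrl+N enforces)."""
    out = [face_center[i] - mesh_center[i] for i in range(3)]
    return sum(n * o for n, o in zip(normal, out)) > 0
```

Flipping a face (what Ctrl+N does to wrongly oriented ones) amounts to reversing its vertex order, which negates the cross product.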

The Subsurf modifier necessarily pulls the surface away from the vertices' original positions. All the modeling will be done with this modifier active, and each vertex displacement affects its immediate neighbors, so it will often be necessary to reposition vertices to stay faithful to the references (template and/or background image).
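Why does the surface sit inside the control cage, and why does moving one vertex affect its neighbors? Any smoothing subdivision averages neighboring control points. As a rough illustration (not Catmull-Clark, which Subsurf actually uses on meshes, but the analogous Chaikin corner-cutting scheme on a 2D polyline):

```python
def chaikin(points):
    """One corner-cutting subdivision step: each segment is replaced
    by two points at 1/4 and 3/4 along it. Like Subsurf, the smoothed
    result lies inside the control cage, and moving one control point
    shifts every nearby output point."""
    out = []
    for i in range(len(points) - 1):
        (x0, y0), (x1, y1) = points[i], points[i + 1]
        out.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
        out.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
    return out

cage = [(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)]   # a sharp control "tent"
smooth = chaikin(cage)                        # points hugging the cage from inside
```

Because every output point is a weighted average of two control points, nudging one cage vertex moves all the smoothed points it contributes to, which is exactly the "displacement affects its neighbors" behavior described above.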

Image:Auto-(22).jpg

We will repeat the operation (extrude + scale + adjust) as many times as necessary to obtain the contours of the eyes and the mouth.

This technique of concentric edgeloops, mainly used for the openings of the face, produces a very clean mesh that contains no triangles. This eliminates distortion and irregularity problems in the render, which matters in particular if you plan to work on facial expressions or to animate the model later.

Image:Auto (21).jpg

We will continue modeling the base of the face, the nose and the cheeks while following the template. For these operations we will use, among others, the following tools:

  • Make Face (F) to create a face from a selection of four vertices.
  • Rip (V) to detach surfaces.
  • Extrude (E) to extrude edges, faces or vertices.

The goal is to get close to the general shape of the face while using as few quad faces as possible.

Image:Auto_(20).jpg

The next image shows the modeling of the nose (and an edgeloop for the nostril).

Image:Auto_(19).jpg

Then the attachment of the upper lip to the cheek.

Image:Auto_(18).jpg

In this type of modeling, it helps to pay attention to the number of vertices in each edge loop in order to avoid holes or mismatches when attaching different parts of the mesh.

Image:Auto-(numvert).jpg

Next we look at modeling the back of the skull.

Image:Auto-(17).jpg

Then the ears, still following the edgeloop principle, but this time working from the exterior of the ear inward toward the auditory canal. (Note that to follow the forms of the cartilage you may have to add or remove faces, but always try to avoid creating triangular faces.)

Image:Auto-(16).jpg

We now have half of the complete portrait.

Image:Auto_(15).jpg

Now, in Edit mode with vertex select (Tab / Ctrl+Tab), duplicate this half of the portrait: select all (A), duplicate (Shift+D), then mirror along the X axis (Ctrl+M, option 1).
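Mirroring along the X axis is nothing more than negating every x coordinate; vertices that sit exactly on the center line (x = 0) stay where they are, which is what lets the two halves meet cleanly. A plain-Python illustration with made-up coordinates:

```python
def mirror_x(verts):
    """Mirror vertices across the YZ plane (Ctrl+M, X axis):
    only the x coordinate changes sign."""
    return [(-x, y, z) for x, y, z in verts]

half = [(1.0, 0.5, 0.2), (0.0, 0.3, 0.1)]  # x == 0.0 lies on the center line
other_half = mirror_x(half)
```

This is also why the duplicated half ends up with inverted normals, as noted further on: negating one axis reverses the winding order of every face.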

Image:Auto_(14).jpg

Move the duplicated half of the head toward the original half until they touch along the center line (G + X), then select all the vertices along the seam on both sides (Alt+Shift+RMB) and use the scale-X tool (S + X), moving from the outside toward the seam, so that the paired vertices are exactly superimposed on each other.

All that's left is to weld the two parts together using the Remove Doubles command.
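Remove Doubles merges every pair of vertices closer than a distance threshold and remaps the faces to the surviving vertices. The core of the operation, sketched naively in plain Python (Blender's implementation is of course faster; the threshold and vertex data here are example values):

```python
def remove_doubles(verts, faces, limit=1e-4):
    """Merge vertices closer than `limit` on every axis and remap
    face indices, as Blender's Remove Doubles does along the seam."""
    merged, remap = [], {}
    for i, v in enumerate(verts):
        for j, m in enumerate(merged):
            if all(abs(a - b) <= limit for a, b in zip(v, m)):
                remap[i] = j          # v collapses onto existing vertex j
                break
        else:
            remap[i] = len(merged)    # v survives as a new vertex
            merged.append(v)
    new_faces = [tuple(remap[i] for i in f) for f in faces]
    return merged, new_faces

# Two seam vertices nearly coincident -> they become one:
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.00001, 0.0, 0.0)]
faces = [(0, 1, 2)]
merged, new_faces = remove_doubles(verts, faces)
```

This is why superimposing the seam vertices exactly in the previous step matters: only vertices within the threshold get welded.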

Image:Auto-(remd).jpg

Duplicating and mirroring inverts the normals of the second half of the portrait (you will see a black line after the Remove Doubles operation), so you need to recalculate the normals outside with Ctrl+N.

Image:Auto_(13).jpg

Here is the completed portrait model.

Image:Auto_(10).jpg

Texturing

Now let's put some textures on the model.

Before we get into UV mapping, we want to cut the mesh using the Mark Seam command (Ctrl+E, option 1).

Image:Auto_(9).jpg

Once that's done, we switch to the UV Image Editor and UV Face Select mode and unwrap the mesh using the Unwrap command (U).

Image:Auto-(30).jpg

We then arrange the UV net to cover the surface that we will use for textures later on.

Some useful tools for working with UV nets:

  • (V) weld adjacent vertices
  • (O) proportional ("soft") deformation of the UV net
  • (P) Pin, which fixes vertex locations and lets the rest of the UV net deform proportionally (enable Live Unwrap Transform)

Once this is done, take a screenshot of the UV Image Editor window, or use the script that saves the UV wireframe as an image file.

This image will be used as a base to make the color map texture.

Image:Auto-(31).jpg

We can go directly to making the actual color map texture.

To do this, we need an image editor such as Photoshop (or GIMP) and a series of detailed photos of the subject from as many different angles as possible, to capture as much detail as possible, down to the level of skin pores. We assemble these images in our image editor, using the UV wireframe image to position the pieces correctly.

There are no magic formulas for this operation, just painstaking cutting, assembling, fitting and filling in missing pieces, erasing shadows, toning down the hair, until we get a usable texture.

Image:Panotext.gif

Image:color.jpg

Then it's back to Blender, and we put the texture on the model using the appropriate mapping coordinates.

Image:Auto-(col).jpg

The UV net may need some tweaking to fit the texture to the mesh perfectly.

Image:Auto_(8).jpg

Image:Auto_(7).jpg

Once the color map is in place, it can be used as a base to make both the normal map and the specularity map.

As before, there are no magic tricks, just close observation of the subject and a lot of test renders to see what looks best.

Image:Bump.jpg

Image:Shine.jpg

A little peek at the button panels

Image:Auto-(textdet).jpg

To fake subsurface scattering (SSS) on semi-transparent areas such as the ear cartilage, we use vertex paint with a very dark (almost black) red, applied very lightly, with a slightly more intense touch of red on the translucent spots.

  • Don't forget to enable VCol Light in the material panel (and in the XML if you're going to render with YafRay).

Image:Auto_(4).jpg

Lighting, shadows & reflections

To light the scene, we use two lamps and a spotlight, plus GI from a skydome.

Here's a small trick to point the camera at a fixed target and to get constant lighting regardless of the camera angle:

Add an empty at the center of the scene's objects (Camera_target) and give the camera a Track To constraint targeting Camera_target (To: -Z, Up: Y). The camera will then always point at the center of the scene, no matter where it is positioned.

Next, add another empty (Lamp_target) and give it a Track To constraint targeting the camera (-Z, Y). All that remains is to parent the three light sources to Lamp_target, and the lighting will follow the camera when the camera is moved.
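Under the hood, a Track To constraint just keeps aiming the object's chosen axis along the direction to its target. The direction computation is a one-liner, sketched here in plain Python with example positions (Blender additionally converts this direction into a full rotation using the Up axis):

```python
def track_direction(obj_pos, target_pos):
    """Unit vector from an object toward its Track To target,
    i.e. the direction the tracked axis (-Z for a camera) is aimed."""
    d = [t - o for o, t in zip(obj_pos, target_pos)]
    length = sum(c * c for c in d) ** 0.5
    return [c / length for c in d]

cam = (0.0, -5.0, 0.0)      # camera in front of the scene
target = (0.0, 0.0, 0.0)    # the Camera_target empty
aim = track_direction(cam, target)
```

Because the constraint re-evaluates this direction every time the camera moves, the empty-plus-constraint setup gives the "always pointed at the subject" behavior without any manual re-aiming.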

Image:Auto-(lamp).jpg

We also use an HDRI map for the reflections: beach_probe.hdr[1]

Image:Auto-(hdr).jpg

Hair

To make the hair and the beard, we use a static particle system emitted by the mesh. For this we add a second, semi-transparent material that will be used only by the hair and beard particles.

Image:Auto-(part).jpg

After roughly selecting the part of the mesh that will emit particles and making it a vertex group (hairs), we proceed to Weight Paint mode in order to specify the density and precise placement of the particles.

Image:Auto_(6).jpg

Then we use curve guides to make the particles move in the desired direction and thus determine the direction and length of the hair on various parts of the skull.

Image:Auto-(curves).jpg

A screenshot of the hair.

(Given the enormous amount of system resources used by the particle system, I recommend doing test renders with the Disp setting (the percentage of particles shown on screen) as low as possible.)

Image:Auto_(5).jpg

Last changes

We will now cover some final details of the model, particularly the addition of the eyes, the glasses and the clothing.

Image:Auto_(3).jpg

For the eyeballs, we start from a UV sphere comprising the eye bulb and the cornea, with the iris and the pupil extruded inside (see the images below). The mesh is separated into distinct vertex groups so that different shaders (sphere, iris, cornea) can be applied to them.

Image:Auto-(eyes).jpg

For the glasses, we use the cranium as a reference. The photographs of the face will be useful as blueprints, and possibly as a texture source too. (Don't hesitate to use procedural textures for the temples and the frame; the shader used earlier for the cornea of the eyes should, in theory, also work for the lenses.)

For the clothing, the modeling techniques are similar to those used for the face, except that you only need to model what is visible (it would be useless to model and texture the cufflinks if only the collar appears in the final render). Although the clothes are modeled as a separate mesh here, nothing prevents you from modeling them directly in the main mesh. For textures we use a UV map as a Col+Ref modulator (you can scan or photograph your own sweater and shirt to obtain the textures).

Image:Auto_(2).jpg

Image:Auto_(1).jpg

Render with YafRay

For rendering the scene, we use the YafRay engine. Below is a screenshot of the render parameters used:

Image:Auto-(rend).jpg

And here is the final rendering...

Image:Auto.jpg

Full format 1280/1024: Media:Autohd.jpg

Tutorial by Alt-ligury 11-2006. Translation by Orinoco and Samo.

http://www.terrier-infographie.com




