
Portal:Hardware

Hardware


Portal for Areas of Hardware used in Computer Graphics


Subportals

Input Devices:

Portal for the Devices Used to Help Create Computer Graphics.

Displays:

Portal for the Devices Used to Display Computer Graphics in a Digital Format.

Output Devices (non-display):

Portal for the Devices Used to Create or Display Computer Graphics Content in a Non-Digital Format.

Computational Hardware (CPU/GPU related):

Portal for the Processing Hardware Used to Aid in the Development of Computer Graphics Content.



7 Apr 2004


This article is a bit dated, but it should serve the CGSociety Wiki well enough until somebody writes something of greater depth or relevance. - Cheney


Software

Operating System

Choosing the correct operating system is the first important step, as it determines which art software you can and cannot run and narrows your potential selection of hardware. At the moment there is a variety of art software for the three main operating systems (MS Windows, OS X, and Linux-based OSes). On Linux the native 2D painting software tends to be limited to open-source tools such as GIMP, CinePaint, Krita, Inkscape, and Xara; however, older versions of Photoshop can be run via the WINE compatibility layer. Photoshop, Corel Painter, ArtRage, Paint.NET, and Illustrator are some of the important native paint applications for Windows. Photoshop, Illustrator, ArtRage, Corel Painter, and Studio Artist are some of the important paint tools for Mac OS X.

Creating art for the web demands little consideration of hardware; print art and 3D art, however, place very different demands on a machine. High-resolution print work makes heavy use of the processor cache, so a large cache is important, whereas 3D art puts greater demands on raw processing power. Knowing the type of art you wish to focus on should help determine your 'hardware ideal', as explained further in this article.

Patches

Manufacturers of operating systems often offer updates to their products. These are called patches and usually serve several purposes:

  • Fixing bugs
  • Closing security holes
  • Adjustment to hardware changes

The purpose most often talked about when it comes to patches is security fixes. Often enough, security holes do not come from bugs in the software but from oversights in the design or lazy implementation.
Fixes for security holes are especially important if the system has an internet connection, be it for web surfing, mail, games, or anything else. Security holes that allow intruders to access information on the system, or even to take it over, are known for nearly every use of the internet.
Patches alone cannot ensure that the system cannot be accessed by intruders. It is important to know which regular channels of communication are open (and need to remain open for the system to do its job) and to protect them, for example with a firewall.

Sometimes patches introduce new problems: either parts of the software stop working, or security fixes do more than expected and prevent the intended use of the system. Having a working and up-to-date backup is the best preparation here.

Photoshop's Demands

How Photoshop uses hardware and writes to memory is not always understood, even by the most elite of digital artists. Many artists believe that using large numbers of layers requires enormous amounts of memory. In fact, layers are only as data-heavy as the contents they contain. Using hundreds of layers can actually result in lower memory demands than using fewer than ten layers. If a layer contains a complex texture that spans the entire image, it is going to be far heavier than several layers that each contain only a small area of a single color. I once created all the graphics for a massive website inside a single Photoshop document containing over 2500 layers; that image was nearly 100 times less memory-demanding than my last project, which contained a mere 20 layers full of textures for a large print image.
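To put rough numbers on this, here is a minimal back-of-the-envelope sketch. It assumes an uncompressed 8-bit RGBA canvas and that a layer costs roughly the pixels it actually covers; Photoshop's real storage (tiles, compression, history states) behaves differently, and the coverage fractions are invented for illustration, so treat the output as order-of-magnitude only.

```python
# Back-of-the-envelope layer memory estimate.
# Assumes uncompressed 8-bit RGBA; real applications use tiling,
# compression, and history states, so this is order-of-magnitude only.

BYTES_PER_PIXEL = 4  # R, G, B, alpha at one byte each


def layer_bytes(width_px, height_px, coverage=1.0):
    """Approximate size of one layer.

    coverage is the fraction of the canvas the layer actually paints:
    a small colour swatch might be 0.002, a full texture is 1.0.
    """
    return int(width_px * height_px * coverage * BYTES_PER_PIXEL)


canvas = (6000, 9000)  # a large print canvas

small_swatch = layer_bytes(*canvas, coverage=0.002)  # tiny colour patch
full_texture = layer_bytes(*canvas, coverage=1.0)    # canvas-filling texture

print(f"one small layer  : {small_swatch / 2**20:7.1f} MiB")
print(f"one full texture : {full_texture / 2**20:7.1f} MiB")
print(f"500 small layers : {500 * small_swatch / 2**20:7.1f} MiB")
print(f"20 texture layers: {20 * full_texture / 2**30:7.2f} GiB")
```

The point of the comparison is that hundreds of lightweight layers can still weigh far less than a handful of canvas-filling texture layers.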

Most filters, plug-ins, tools, and other features of Photoshop that produce graphics through a generation process, called rendering, demand almost no memory at all. These features all benefit from processing power in the same way, though to a lesser degree, that a 3D rendering program benefits. Most tools in Photoshop use all the memory they are ever going to use when Photoshop initially loads them as it opens, and most require barely noticeable amounts of memory to do so. The only exceptions are large numbers of installed fonts and large collections of custom brushes.

Plug-in filters tend to be far more complex than standard filters, so they require more memory to load initially. Even so, the memory requirements of plug-in filters are insignificant compared to the processing power these plug-ins often demand, and both vary considerably from one plug-in to another.

3D Rendering Software

There are differences in hardware demands between the various brand names of 3D software, but since they all have similar demands I will group them together for this article. 3D software does require large amounts of memory to store texture data for the actual rendering process, but this pales in comparison to the memory required for print graphics. Most noticeable is the 3D rendering engine's sheer hunger for processing power; CPU power is the unquestioned king for 3D animation or high-resolution graphics. A term commonly kicked around among 3D artists, render farm, refers to using multiple computers networked together to render the data simultaneously and therefore more quickly.

Most 3D programs use data-caching tricks to take as much weight off the CPUs as possible so that the CPUs can concentrate on rendering. This is where memory demands for caching textures come into play.

In addition to the most commonly used rendering software that runs on off-the-shelf standard systems, there are some render engines that rely on specific hardware, or are implemented directly in hardware. They usually have a speed advantage, but at the same time they restrict the user to their specific feature set.

There are three main categories of rendering hardware:

  • GPU-based (e.g. Gelato)
  • Vendor-specific hardware (e.g. ART VPS)
  • FPGA

All but FPGAs suffer from the problem that hardware develops at a fast rate, and general-purpose CPUs usually surpass special hardware in speed within months or, at worst, a few years. Another drawback for all but FPGAs is that changing or improving basic functionality requires replacing the hardware. The main drawbacks of FPGAs are the comparably high price and the complexity of programming them, since you essentially have to create the software and the hardware at the same time.

Hardware Drivers

All hardware requires drivers. With modern hardware standardization and later OS revisions the user is mostly unaware of them, as they are integrated into Windows and Linux. Macintosh systems require virtually no driver installations, since their hardware is bound to their software as proprietary technology. Users do need to be aware of the latest driver releases for their hardware, since they likely address security, compatibility, and performance issues. Before installing the latest drivers it is wise to check which issues they address, to determine whether they are actually needed.

Printing

Art Considerations

The idea of making art for print is to work as large as possible. A nice article, written by Faderhead for Raster, explains the rough concept of image sizes and resolutions. High resolutions are extremely important if superior-quality prints are to represent your work as an artist. I am an artist with an obsession for details, because I have superior near-sighted vision (and otherwise horrible eyes) that lets me notice the details others might take for granted. I enjoy knowing that my art looks good on paper, with sharpness and precision that rivals the most experienced of the competition. If a professional artist wishes to be successful, then this is the minimum level of dedication and mentality that should be achieved.


Resolution

Knowing how certain resolutions look when printed should determine the acceptable level of quality to choose for a print project. Text documents are the least demanding of all printing projects. The standard quality level for printing text is 72 dpi (dots per inch); as a result, most software, such as word processors, internet browsers, and other productivity applications, has 72 dpi locked in as an unconfigurable default. Text looks perfect at this minimal resolution because it is output as anti-aliased, rasterized vector imagery: it is stored in vector files called fonts. The result is that no matter how the text is used, it will always look sharp when printed.

All other art is not as simple as text and demands higher print resolutions as a result. The least complex form of digital art, after text, is vector art. Vector art is defined entirely as graphed algebraic formulae that set lines and fill areas across a plane. Since vector art is limited to lines and areas, it is usually identified by imagery containing textures made up of flat colors that do not appear to overlap. At its most complex, vector art can use a two-color gradient in fill areas over an existing color field, but no real texturing or dynamic lighting will exist. Modern vector programs do allow the use of layering, but upon publication the vector art will still appear as flat colors pieced together on a plane. There are clear artistic limitations in this style of art, but it also carries the fewest technical limitations. Since vector art is generated from math formulae designated over areas, it does not have the limitation of pixel size. These images can be easily upsized without loss of quality, though detail flaws will become more visible. As a result, this is the ideal image standard for printing, since it can easily be scaled up. Vector art files also tend to have far smaller file sizes than raster art files of similar detail. A common example of vector art is Macromedia's Flash.

Raster art is much more complicated than vector art. Raster art is imagery mapped across a grid of digital units called pixels, and it is the most common form of digital art. Modern raster applications offer the ability to create photo-realistic art that appears to have no artistic limitations, but there are technical usage limitations: raster image files are heavy, and they demand high resolutions to print well. The standard for printing at lower quality is 150 dpi; high-quality art is 300 dpi for large prints and 600 dpi for the highest-quality small prints. An untrained eye will not notice any differences above 300 dpi and will not care to analyze the faults of 150 dpi. Untrained viewers are quick to appreciate 150 dpi prints because they rarely see high-quality prints against which to notice the difference; this is why most publications get away with printing at 150 dpi. At 200 dpi the four-color separation points used for printing color are no longer visible to most people, and at 300 dpi they are not visible to the human eye. This is why 300 dpi prints look like photo-processed prints while 150 dpi prints clearly do not. Just for thrills, look at a magazine cover closely to see if there is any visible color separation.
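As a quick aid for these dpi figures, the following sketch computes the pixel dimensions an image needs for a given physical print size. The example print sizes are assumptions chosen for illustration, not recommendations from this article.

```python
import math


def pixels_needed(width_in, height_in, dpi):
    """Pixel dimensions required to print a given physical size at a given dpi."""
    return math.ceil(width_in * dpi), math.ceil(height_in * dpi)


# Example jobs at the quality levels discussed above (sizes are illustrative).
jobs = [
    ("CD insert (4.75 x 4.75 in)", 4.75, 4.75, 600),
    ("Magazine page (8.5 x 11 in)", 8.5, 11, 300),
    ("Poster (24 x 36 in)", 24, 36, 150),
]

for name, w, h, dpi in jobs:
    px_w, px_h = pixels_needed(w, h, dpi)
    print(f"{name} at {dpi} dpi -> {px_w} x {px_h} px "
          f"({px_w * px_h / 1e6:.1f} megapixels)")
```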

The reason small prints typically demand higher resolution than large prints is a matter of detail analysis. People tend to view small prints much more closely, so details become more obvious to the eye. Large posters are typically viewed from a distance; as a result they are scrutinized far less than prints such as CD art.

3D art includes the strengths and limitations of both raster and vector imagery. 3D art is created as wire meshes representing objects in space as models. While in the 3D software, the view can be moved in or out to make the objects appear larger or smaller in space; at this point the image is only defined as a construct for the 3D software. To be converted into a finished image, the 3D data must go through a render process, where the processors calculate the math definitions of the object models, lighting, atmosphere, textures, reflections, and so forth. The render process can output the 3D elements at any defined pixel size. Once the render process is complete, the image is recreated as a raster image. The final result is that 3D imagery allows full customization before it is rendered, but after the rendering process it has the same limitations as raster imagery. The rendering process is extremely time-consuming, especially at high resolutions, so all aspects and considerations for the final image must be planned before rendering is started.

Colors

There are two digital color models, called additive and subtractive color. Colors in reality are obtained from partial reflections of white light off scattered surfaces, which is subtractive color. Digitally, this process is represented by the colors cyan, magenta, and yellow; black is added as a fourth color to define darkness in a way that the other three cannot do on their own. This process is called CMYK for simplicity. Digital displays use a native color process where colors are created by adding red, green, and blue in certain combinations, which is additive color, referred to as RGB for simplicity. Additive color represents everything displayed by a monitor, and subtractive color represents everything produced by a color printer. CMYK is a standardized color set, so images can be printed on different printers without harm to the intentions of the print. RGB colors, on the other hand, have several different standards. It is important for print artists to know the differences between these standards so that the color definitions of the print can be better controlled. Please read the detailed explanation of RGB standards (link requires Adobe Acrobat).
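For readers who want to see the relationship between the two models, here is a minimal sketch of the textbook RGB-to-CMYK conversion. It deliberately ignores ICC profiles and press-specific color management, which is what real print work actually relies on, so it is only an illustration of additive versus subtractive arithmetic.

```python
def rgb_to_cmyk(r, g, b):
    """Naive RGB (0-255, additive) to CMYK (0-1, subtractive) conversion.

    Textbook formula only; real print work goes through ICC colour
    profiles rather than this direct arithmetic.
    """
    if (r, g, b) == (0, 0, 0):
        return 0.0, 0.0, 0.0, 1.0           # pure black: rely on the K plate
    c = 1 - r / 255
    m = 1 - g / 255
    y = 1 - b / 255
    k = min(c, m, y)                        # pull the shared darkness into K
    c, m, y = ((x - k) / (1 - k) for x in (c, m, y))
    return c, m, y, k


print(rgb_to_cmyk(255, 0, 0))    # pure red  -> (0.0, 1.0, 1.0, 0.0)
print(rgb_to_cmyk(128, 128, 0))  # olive     -> roughly (0, 0, 1, 0.5)
```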

Printers

There are three primary types of printers and a fourth, more expensive print process. The lowest-quality standard printers are dot-matrix printers, but they offer the fastest possible print speeds. Ink-jet printers are the most common type; they offer superior print quality to dot-matrix printers and have far fewer components than laser printers, which makes them the ideal choice for portable printing. Digital artists will want to use laser printers for artistic quality. Color laser printers are the most expensive of the printers listed here, but they also deliver stunning prints up to 1200 dpi. It is extremely rare for digital artists to create art large enough to need 1200 dpi given current hardware limitations, so there is plenty of headroom until a hardware revolution allows digital artists to work at larger sizes.

Industrial printing does not use a printer such as those mentioned, but a larger process called offset printing. Offset printing allows up to 1200-2400 dpi and 9000 completed products per hour. Offset printing is very expensive to set up, because plates must be etched for each page and each color of a book before printing begins. Even though this process is incredibly expensive to initiate, it is the least costly per-page printing method.
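A rough sketch of why offset wins on long runs despite the setup cost: a fixed plate charge amortized over the run quickly undercuts a cheaper-to-start but costlier-per-copy process. All the dollar figures below are invented purely to illustrate the break-even behaviour.

```python
def cost_per_copy(setup_cost, run_cost_per_copy, copies):
    """Unit cost once a fixed plate/setup charge is spread over the run."""
    return setup_cost / copies + run_cost_per_copy


# Illustrative (made-up) numbers: offset has a big setup fee but cheap copies,
# while a digital/laser process has no setup but costs more per copy.
offset = dict(setup_cost=1500.0, run_cost_per_copy=0.05)
digital = dict(setup_cost=0.0, run_cost_per_copy=0.60)

for copies in (100, 1_000, 10_000, 100_000):
    o = cost_per_copy(copies=copies, **offset)
    d = cost_per_copy(copies=copies, **digital)
    cheaper = "offset" if o < d else "digital"
    print(f"{copies:>7} copies: offset ${o:.2f}/copy, "
          f"digital ${d:.2f}/copy -> {cheaper}")
```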


Paper

Professional print paper is standardized by weight and dimension. Even though there are standards for paper size, many jobs require unique dimensions for the final product, so it is wise to be aware of the print sizes supported and demanded by the job or print shop. Paper weight is used to measure the density and thickness of a sheet; heavier paper is more expensive, but of higher quality.

There is a variety of paper available for print and art use, but photo-finish paper is the best-quality choice for digital prints. Any high-quality heavyweight paper will hold ink well without running, but photo-finish paper reflects light in a way that looks more professional and expensive compared to standard fiber-stock papers; as a result, photo-finish papers are the ideal choice for high-resolution digital art. There are several different types of photo-finish paper, each of which reflects light differently. My personal favorite is pearl finish, but knowing how these papers differ will help determine when one type might work better than another.

Photo-finish paper will not work for all occasions. Jobs such as business cards and gift cards look better on more natural-looking paper. Artistic papers can be made with a variety of physical and visual textures, as well as different colors, source fibers, opacities, and durabilities. Different paper producers customize most of these advanced options, so it requires a bit of proactivity to be aware of the variety of papers available. Many print shops and paper producers have large catalogues available to advertise the range of their product selection. Knowing this will add a brilliant spark of creativity and excitement to any common, and otherwise boring, print job.

Data Caching

Importance of Caching

The only areas of computing that take a truly extreme approach to data caching are large-scale database administration and high-resolution digital art for print. Clearly, database issues are not what concern most digital artists. Try to imagine an image of 6000x9000 pixels with several layers and heavy textures: such an image can easily occupy several gigabytes of cached data. I have gone so far as to have an image exceed 35 GB of cached space spread over multiple hard drives. Constantly moving this much data around on demand is a horrible strain that would easily overwhelm most machines. This is why an understanding of caching is vital to print artists.


Memory

There are two aspects of memory that should be considered. Memory is marketed by frequency, such as 266 MHz. This is deceiving, since motherboard chipset engineers use different tricks to increase memory throughput as much as possible. As a result it is best to measure memory speed in gigabytes per second; using this measurement will bypass all those tricks.
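For reference, converting a marketed memory frequency into bytes per second is simple arithmetic, assuming a standard 64-bit memory bus; real sustained throughput is lower than this theoretical peak.

```python
def peak_bandwidth_gb_s(transfers_per_second, bus_width_bits=64):
    """Theoretical peak memory bandwidth in GB/s.

    transfers_per_second is the effective data rate (e.g. DDR-266 moves
    266 million transfers per second over a 64-bit bus).
    """
    return transfers_per_second * (bus_width_bits / 8) / 1e9


print(peak_bandwidth_gb_s(266e6))   # DDR-266 / PC2100 -> ~2.1 GB/s
print(peak_bandwidth_gb_s(400e6))   # DDR-400 / PC3200 -> ~3.2 GB/s
```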

The other consideration is the maximum physical amount of memory allowed by the motherboard. In Windows, a single process can use a maximum of 2 GB of memory by default. This can be raised to 3 GB with a known Windows tweak, but this is not recommended for stability. This is where a RAM disk comes into play.

RAM Disk

A RAM disk is a virtual disk drive created entirely out of RAM. A RAM disk is given a drive letter specified by the user, and the operating system sees it as just another hard drive. A RAM disk works exactly like a hard drive except that it is many times faster and all its data is lost every time the operating system reboots.

The RAM disk driver for Windows 2000, ramdisk.sys, is open source, but not yet for Windows XP. If a RAM disk is to be set up on any Windows-based operating system, reading the instructions on this page is recommended for familiarity, even if this open-source driver is not used. This RAM disk tutorial explains how to easily set up a RAM disk in any version of Windows with default configurations.

It is my personal recommendation that all memory above 2.5 GB be set aside for use as a RAM disk. So, for instance, if your computer has 12 GB of RAM on the motherboard, I recommend that 9.5 GB be set up as RAM disk space. The reasoning behind this is that Photoshop will only recognize a maximum of 2 GB of RAM when installed on a Windows OS kernel. Typically, newer computers set aside 256 MB of RAM as a graphics aperture: a dedicated cache buffer in main memory reserved for the graphics card in case it needs more memory in addition to the memory installed on the card itself.

Disk Arrays

Disk arrays are used for several different reasons:

  • Minimum cost
  • Maximum availability
  • Maximum speed

The first two in particular contradict each other and usually cannot be combined.

Minimum cost is achieved by bundling several inexpensive drives and creating a larger virtual disk out of them. This allows many cheap drives to be combined into one large volume instead of paying for disproportionately more expensive large drives. The main drawback is that with each added drive the probability of complete data loss increases, since the failure of one drive invalidates the complete virtual drive. This setup is usually called JBOD (just a bunch of disks).

A variation of this is RAID 0. It needs identical drives to work and spreads data across them so it can be accessed in parallel, which greatly improves read and write speeds. Again, the drawback is the higher risk of losing data due to drive failure.

Maximum availability is usually achieved by adding redundancy. For workstations and small servers, RAID 1 is often used. In a RAID 1, two disks hold exactly the same information, so if one fails the other still works. The drawback is that you lose half of the total available hard disk capacity.
RAID 5 is basically a RAID 0 with added redundancy. In addition to spreading the main data, control information (checksums) is calculated (usually by a dedicated RAID 5 controller) and stored with the rest of the data. The checksums are spread over all drives, just like the main data. If one of the drives fails, the missing information can be calculated from the remaining data and the checksum information. This usually slows the system down a bit, but data integrity is preserved. Otherwise a RAID 5 has a speed advantage similar to that of RAID 0. Because of the need to store checksum data, a RAID 5 does not offer the full capacity of all the drives added; overall, the equivalent of one drive's capacity is given up (not an actual single drive, but that much capacity spread over the array).
RAID 0 and RAID 5 setups (the latter depending on the processing power available for checksums) are usually the fastest possible solutions.
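The parity idea behind RAID 5 can be shown with a toy example: the checksum block is the XOR of the data blocks, and any one missing block can be rebuilt by XOR-ing the survivors with the parity. Real controllers do this per stripe in hardware; the byte strings here are just a demonstration of the arithmetic.

```python
from functools import reduce


def xor_blocks(*blocks):
    """Byte-wise XOR of equally sized blocks."""
    return bytes(reduce(lambda a, b: a ^ b, group) for group in zip(*blocks))


# Three "drives" worth of data in one stripe, plus a parity block (the RAID 5 idea).
d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks(d1, d2, d3)

# Pretend drive 2 dies: its contents can be rebuilt from the survivors + parity.
rebuilt_d2 = xor_blocks(d1, d3, parity)
assert rebuilt_d2 == d2
print("recovered:", rebuilt_d2)
```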

In the past, RAID arrays were mainly a SCSI affair, but they have been available for ATA and SATA systems for some time now. Due to its need for complex controllers, SCSI is used less and less; for desktop and workstation systems it has mostly been replaced by SATA, and for servers by SAS devices. The basic principles of RAID arrays remain the same, though. Many current desktop systems come with built-in RAID controllers. Using RAID 1 or RAID 5 is recommended for everyone who really depends on the safety of the data stored on their hard disks.
RAID 1 and 5 cannot replace a working backup, since they do not allow access to deleted information.

Photoshop's Scratch Disk

It is Photoshop's easily configurable scratch disk engine that allows an artist to take full advantage of a RAM disk. Basically, the scratch disk is a cached page file written by Photoshop for data that does not fit into main memory. This information is temporary; once Photoshop closes or the computer restarts, it is deleted.

Photoshop only allows the user to specify up to five locations for scratch disk, and each location must be a hard disk partition or RAM disk. Removable media cannot be used as scratch disks, nor would this be practical. The goal of scratch disk management is to make the scratch disks work as quickly and efficiently as possible, which is accomplished by keeping the data in as few locations as possible, ordered from fastest to slowest. If you have a RAM disk, it should definitely be first in the list; the next should be a SCSI drive, if you have one. The more locations the scratch data is written to, the slower access becomes, because information has to be tasked across multiple locations. Scratch disk does not work like a SCSI RAID disk array, where information can be read and written from multiple disks simultaneously.

If a computer shuts off while Photoshop is open, without closing properly, the scratch disk information can sometimes survive as fixed data if it resides on a physical hard disk. This does not happen very often, but when it does, the user is left with a huge chunk of hard disk space filled with hard-to-find data. This is not a problem when using a RAM disk, because everything in a RAM disk is lost when the system restarts.


Processing

Processors

Outside of memory bandwidth, as previously mentioned, there are only three aspects of a processor that determine its overall power: the size of the CPU cache memory, the speed as measured in gigahertz, and the ability of the processor to communicate with other processors. The cache size is the real power measurement of a single CPU. The cache is a block of memory built onto the CPU that stores the instructions and addresses most critical to it; it can be considered short-term memory, where RAM is slower long-term memory. Modern CPUs are many times faster than modern memory, but CPU cache operates at the speed of the CPU.

Processor speed is also an important factor. Faster processors tend to be more powerful for small tasks and multitasking operations, because they complete single tasks more quickly. CPU speed is often a misleading term: it is constrained by memory bandwidth, but it is also boosted, in certain instances, by the advantages of CPU instruction sets (please read the Audio/Video Encoding section below). When multiprocessing becomes a consideration, CPU speed becomes a less important factor. This is where scalability comes into play.

Scalability is the ability of CPUs to communicate with each other when more than one CPU resides on the same motherboard or shares local hardware resources. AMD's Opteron CPUs are an excellent example of the benefits of scalability. These CPUs are far slower than their rival, Intel's Xeon, and less powerful overall when both are compared in single-CPU setups. However, when these CPUs are used in dual- or quad-processor configurations, Opteron shows considerable gains over Xeon across a variety of computing demands. This is because Opterons use an independent pathway to memory for each CPU and communicate with each other more easily, while Xeon chips share a single, though larger, pipe to memory. The result is CPUs fighting each other for memory bandwidth, and it is especially evident in a quad-processor environment. The following article demonstrates how scalability makes CPUs more powerful, as noted in this [http://www.aceshardware.com/read.jsp?id=60000275 Xeon vs. Opteron review]. Intel is hoping to rectify this competitive loss with a new memory standard it has developed, called FB-DIMM.

10-9-06: Much of the previous three paragraphs is highly inaccurate and not very meaningful; it was clearly written by someone who does not understand CPU design. For starters, there are many factors that come into play when determining the overall speed of a CPU. In recent years the focus has been on large MHz numbers, and we saw clock speeds jump from 400 MHz all the way to 3800 MHz, but these large numbers are highly misleading. The following should clear up much of the mess created by the above paragraphs.

There are many factors that go into the overall speed of a CPU, more than were mentioned above. But, for the sake of simplicity, we'll ignore most of them :)

To begin, let's look at some main contributors to speed. First is the ever-so-loved MHz and GHz. Marketers love these numbers because they are so absolutely huge, but what do they mean? In short, not much on their own; let's find out why. 1 Hz is one cycle per second; similarly, 1 MHz is one million cycles per second. The more cycles per second your computer completes, the more instructions it can crunch. So why do I say this number has little meaning? The answer: parallelism. For the most part, in order to increase the MHz of a CPU, the design team must sacrifice another very important contributor to speed: IPC, or instructions per clock (cycle). The IPC number is the number of instructions the CPU can complete per cycle. Hopefully you realize at this point that a CPU running at 100 Hz (that is, 100 cycles per second) with an IPC of, say, 10 (10 instructions per cycle) can execute 1000 instructions per second. Now let's say I have a second CPU with an IPC of 2 that runs at 500 Hz. It too performs 1000 instructions in one second. But using what we learned from the marketing folk at CPU companies, the 500 Hz processor should be faster! At five times the Hz you'd think so, but in the end the two should be relatively close in performance (assuming everything else is identical). So the real trick is a careful balance between IPC and Hz. It is very difficult and expensive to increase both, so most designers (AMD and Intel) favor one over the other.
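The 100 Hz versus 500 Hz example above boils down to one line of arithmetic: throughput is clock rate multiplied by IPC.

```python
def instructions_per_second(clock_hz, ipc):
    """Throughput = clock rate x instructions retired per cycle."""
    return clock_hz * ipc


cpu_a = instructions_per_second(clock_hz=100, ipc=10)  # low clock, wide core
cpu_b = instructions_per_second(clock_hz=500, ipc=2)   # high clock, narrow core

print(cpu_a, cpu_b)  # both execute 1000 instructions per second
```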

As a side note, there is a drawback to very high Hz numbers: power consumption and heat. The more cycles the CPU completes in a second, the hotter it runs and the more power it requires. In today's energy-sensitive environment, this is a bad thing, which is why current AMD and Intel designs focus on higher IPC rather than more MHz.

The next factor that contributes to overall speed in more noticeable ways is the memory subsystem. This includes the bandwidth to RAM, the latency to RAM, and the size of the cache on the CPU die. I'm going to use a rather strange analogy here.

Let's say you have a lumber business. You need a mill to process the lumber into usable structures, a forest to get lumber from, and trucks to move your lumber around. In this analogy, the forest represents RAM, the highway the trucks drive on to deliver the lumber to the mill is the memory bus, and the mill represents the processor. Additionally, let's say your mill is based in the center of a city (the CPU die). This means land is scarce and expensive, whereas out in the forest your lumber camp has plenty of land and is very cheap. The road into town is a two-lane road. So now trucks are hauling lumber to your mill, but since this is a two-lane road with other traffic on it, your trucks do not come as regularly as you'd like, and you find that your mill is sitting around waiting for lumber to arrive. Because you're in a position of power, the two-lane road gets upgraded to a four-lane highway (the memory bus has more bandwidth). Great! But now you have a new problem: there is so much lumber coming into the mill that you cannot process it fast enough, so you end up wasting a lot of lumber by leaving it out on the street curb to rot. You then purchase a warehouse on the outskirts of town (the edge of the CPU die). Land here is not as expensive as in the center of town, but not as cheap as the lumber camp, so you have a moderately sized warehouse (the CPU's cache). Now your trucks haul a lot of lumber to the warehouse outside of town, and your mill can quickly retrieve the lumber from the warehouse when it needs it.

This represents a simple memory topology. Basically you have RAM <--> CPU cache <--> CPU, but there are more complex versions. Take for example Intel's Pentium 4, whose layout is slightly different from this model. It looks something more like this:

RAM <--> L2 Cache <--> L1 Cache <--> CPU

Each level of cache (L2, L1) is faster than the previous one, L1 being fastest. But L1 cache is very expensive and thus small, usually only a few hundred kilobytes. L2 is larger, going up to 2 or even 4 megabytes, whereas RAM is usually 1-4 gigabytes.

So again, like Hz and IPC, there is a balance, this time between cost, speed, space on the CPU die, and memory size. The cache on a CPU is meant to buffer data from RAM, and there are many aspects of this that can make things faster or slower, which I will not get into. Basically, though, you want as big a cache as you can get. The number usually advertised is the L2 cache.
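One standard way to see why a larger, better-hitting cache matters is the average memory access time formula; the latencies below are assumed round numbers for illustration, not measurements of any particular CPU.

```python
def average_access_time(l1_ns, l1_hit_rate, l2_ns, l2_hit_rate, ram_ns):
    """Average time (ns) to fetch data through an L1 -> L2 -> RAM hierarchy."""
    l2_and_beyond = l2_ns + (1 - l2_hit_rate) * ram_ns
    return l1_ns + (1 - l1_hit_rate) * l2_and_beyond


# Assumed round numbers: L1 = 1 ns, L2 = 5 ns, RAM = 100 ns.
big_l2 = average_access_time(1, 0.95, 5, 0.90, 100)    # large L2, high hit rate
small_l2 = average_access_time(1, 0.95, 5, 0.50, 100)  # small L2, low hit rate

print(f"large L2: {big_l2:.2f} ns, small L2: {small_l2:.2f} ns")
```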

One thing to notice here is that the cache size on AMD chips is generally smaller than on Intel chips. This is because of a fundamental difference in the two CPUs' designs. Athlon64-based CPUs have the memory controller built into the CPU itself, whereas Intel CPUs have their memory controller built into the motherboard chipset. This means that the Athlon64 talks directly to memory, while Intel CPUs have to talk to a "middle man" before retrieving or sending data to RAM. This creates a higher-latency situation, and thus the more cache an Intel chip has, the less often it has to go all the way out to RAM for data. AMD's Athlon64 and other K8 CPUs do not have this weakness and can get by on smaller caches, which also makes the CPUs cheaper.

The last thing that really has a big impact on CPU performance is its connection to the outside world. For an Intel CPU this means the FSB speed; for AMD, the HyperTransport speed. The two are quite different. First let's look at Intel's method, the traditional design: the CPU talks to the rest of the system through the FSB. You'll note that I have already mentioned the FSB when talking about RAM; it can be considered the "other traffic on the highway" in my lumber mill example, because all data going into or out of the CPU in the traditional design must cross the FSB. As you can imagine, this is not ideal, and it slows highly I/O-dependent operations down.

AMD's design is somewhat better. HyperTransport is a standard that AMD and IBM designed to create dedicated links between the CPU and other devices on the motherboard (the chipset, another CPU socket, etc.). AMD-based CPUs, as mentioned above, already have a dedicated link to RAM (think of a four-lane highway whose only purpose is bringing lumber to the mill in town, with no other traffic on it); HyperTransport creates the link to the rest of the world. It is also a serial interface, which is good, but I won't get into that. Intel is still a year or so away from an equivalent technology. There are many things that make this type of interconnect very attractive, but that's beyond the scope of what I'm writing here today.

Now for CPU recommendations. For the last few years, AMD's Athlon64 and Opteron have been dominating in performance across the board. Coupled with their low-power, high-IPC design, low-latency, high-bandwidth memory interface, and the scalability that HyperTransport offers, AMD's Athlon64 was a force to be reckoned with. In recent months Intel has finally released its competitors, the Core Duo and Core 2 Duo. The Core CPUs bring many advantages, some of which fix all the major problems that existed in the Pentium 4: low IPC, high power consumption, a poor memory interface, and small caches.

At this point in time, I would recommend the Core 2 Duo E6600 as offering a good price/performance ratio. You will get performance on par with an Athlon64 FX-62 at a fraction of the cost.

Multiprocessing

Nearly every operating system I have ever used shows incredible benefits from using at least two CPUs instead of one. The most noticeable benefit is increased long-term stability. Most of us have heard of the infamous Blue Screen of Death errors that come up when Windows crashes. These crashes are most typically due to faulty hardware drivers for components integrated into the motherboard, or to problems with memory addressing. Since Windows 2000 Service Pack 2 these crashes have become far rarer, and this is even more true on a multiprocessor computer, because the multiprocessor kernel used by later versions of Windows addresses memory with far fewer faults. This is not entirely a glory of the OS, but a matter of overcoming the complexities of getting CPUs to talk to each other.

Greatly increased operating system stability is an appreciated benefit of multiprocessing, but it is one of the least noticed. More noticeable is the gain in processing power resulting from the additional CPUs. Photoshop is a multi-threaded application written to take great advantage of symmetric multiprocessing (SMP) technology, and it can see as much as a 70% gain in processing performance over a single CPU of the same type.
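The 70% figure can be read through Amdahl's law, a standard model not named in the original article: a 1.7x speedup on two CPUs implies that roughly 82% of the work runs in parallel, and the same formula shows why adding further CPUs yields diminishing returns.

```python
def amdahl_speedup(parallel_fraction, cpus):
    """Amdahl's law: overall speedup when only part of the work runs in parallel."""
    return 1 / ((1 - parallel_fraction) + parallel_fraction / cpus)


# If a workload is ~82% parallel, two CPUs give roughly the 1.7x (70% gain)
# quoted above, while four CPUs give noticeably less than 4x.
for n in (1, 2, 4):
    print(f"{n} CPU(s) -> {amdahl_speedup(0.82, n):.2f}x")
```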

Multithreading

In addition to SMP, Intel was the first to release SMT (simultaneous multithreading) technology. This is the ability of a single CPU to execute an additional task with its leftover resources while processing its primary task. Software has to be written specifically for multithreaded operation to boost its own performance from this technology. It carries little direct benefit for digital artists, but it goes a great distance toward making life easier on us as users. When Intel initially released the technology, approximately 15% of each CPU's resources could be used to execute additional tasks. Most software sees an SMT-enabled CPU as two different CPUs even though it is a single physical processor; an operating system's kernel has to be written to specifically identify SMT, or it will simply be treated as SMP.

This technology only boosts a user's ability to multitask between different applications unless the software is written directly to take advantage of multithreading. So, for example, with SMT turned off a user might execute a large-scale plug-in filter that uses 80% of both CPUs for six minutes. While the CPUs are busy executing the Photoshop filter they are completely tied up, and the user is powerless to use the computer for anything else until the process completes. Now assume the user turns SMT on before initiating the filter: the filter still renders, but the user retains enough resources to use the computer for other things while it does. Intel's rival, AMD, has since released its first attempt at SMT technology in its Opteron line of CPUs; Intel seems to have a slight advantage here from its longer experience with the technology.

Audio/Video Encoding

Audio and video encoding processes are most similar to 3D rendering, since these tasks are almost entirely processor-driven. Numerous benchmarks indicate that Intel-based processors are superior to rival AMD processors for media processing, thanks to the SSE2 and SSE3 processor extensions. This benchmark shows Intel's media superiority in cases where the CPU would otherwise be less powerful. Some media-encoding applications are multi-threaded and others are not, so it is difficult to weigh raw CPU strength against scalability. At this time it is rare that anybody needs to encode anything so large that scalability becomes a factor, so if video or audio production is your goal, it would be wise to choose processors for media superiority.

Display

Graphics Cards

Graphics cards matter little for 2D artists, and they only become an important consideration for high-end 3D artists. The most a 2D artist needs to be concerned with, as far as graphics cards go, is the maximum supported resolution and the quality of the color output. Dual-monitor support is a luxury that is well worth the extra money when it comes to high-resolution imagery; I will explain why this is important in the monitor section below. Beyond this, everything else is excessive.

High-end 3D artists will find great relief in professional-grade graphics cards. The purpose of these cards is to take load off the CPUs so that they can focus on the complicated math that goes into the 3D rendering process. Considering that high-dollar supercomputers, or render farms composed of many high-end workstations, are required to render modern 3D animation over days or even weeks for movies and television, anything that can save time saves huge amounts of money and production cost. These cards tend to feature more onboard memory and higher memory bandwidth. Some examples of professional-grade cards include the Quadro from NVIDIA, the FireGL from ATI, and the industry-leading Wildcat series from 3Dlabs.

Data Backup

Purpose and Importance

Backing up your saved data should be a regular part of your routine as a digital artist. Hard drives fail; it is a sad and common problem, and data loss results in lost time and money. Backing up data is vital to ensure that work is recoverable despite any thinkable cause of failure. Timely data backups are a necessary concern for any business that wishes to survive a disaster; the New York World Trade Center disaster serves as a perfect example of why data backups can make or break a business in an instant.

The ideal for backing up critical business data is to back up all data files once a day, store the backups on site, and keep a copy off site in a different building, with a fireproof safe used at both locations. This is a bit extreme for a freelance digital artist, but routine backups are still necessary to protect a client's work from loss or corruption.

DVD Burners

DVD burners are becoming cheap and practical, and they will soon completely replace CD burners. A DVD can quickly store 4.7 GB of data on an optical disc, where a CD can store a mere 700 MB. The discs are resistant to all sorts of environmental concerns: unlike hard disks and cassettes, optical discs such as CDs and DVDs do not degrade from chemical oxidation or lose data from exposure to magnetic materials. This makes DVDs the optimal choice for backing up data, compounded by the sad realization that certain print project files can easily exceed the 700 MB limit of a CD.
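The capacity gap translates directly into disc counts; here is the quick arithmetic, using the 35 GB cache example from earlier in the article as a stand-in project size.

```python
import math

DVD_GB = 4.7   # single-layer DVD capacity
CD_GB = 0.7    # standard CD capacity

project_gb = 35  # roughly the multi-drive cache example mentioned earlier

print("DVDs needed:", math.ceil(project_gb / DVD_GB))  # 8
print("CDs needed: ", math.ceil(project_gb / CD_GB))   # 50
```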

Traditional Backup Methods

Traditional methods of data backup still rely heavily on magnetic media. Years ago, magnetic tape strung around large reels (they look like movie-industry reels) was used. A more modern adaptation of this technology still exists in the form of data cassettes, which look like the audiocassettes of the 1980s but tend to be a bit bulkier. The advantage of magnetic cassettes over optical media such as DVDs is that they store large amounts of data and are written at incredible speeds. If an organization needed to back up more than 50 GB per night, it would still be more time-efficient to use magnetic cassettes over DVD burners, simply because it takes less time to write the media. DVD burners are continually becoming faster, so in the near future large-scale backups are likely to shift rapidly to DVD-like media, since it is a more reliable storage medium.

Recordable DVDs are a good way to store finalized projects or major steps of projects, but they are usually not useful for incremental backups of complete systems.

Red Ink

Documentation

The documentation that comes with hardware should fully detail all user-level and installation information. The installation instructions should be written at a low level, with visuals, so that even a young child could easily understand them. The documentation should also provide all the information needed to make the customer fully aware of the product specifications and what those specifications mean. If the hardware documentation leaves you feeling uninformed, then it is lacking necessary information.

Warranties

Ensure that the hardware comes with a lengthy warranty. If the hardware manufacturer does not offer a warranty with a long enough service period, or does not offer to fix or replace components upon notice of a defect, then don't purchase it. Sometimes companies will release flawed hardware with a reduced warranty in a shady effort to avoid a recall. Know the limitations and specifications of the warranty before considering the hardware a viable purchase.

Conclusion

Focus and Discipline

The focus of hardware considerations should directly reflect the art discipline most heavily pursued. A high-resolution print artist will want optimal memory and storage at high bandwidth for the fastest possible I/O (input/output), while a 3D artist will need the greatest possible CPU power. Hardware created with computer gaming in mind rarely has any place as a tool for a professional artist, unless the artist needs a computer capable of testing computer games. Knowing, and continually questioning, why certain approaches to hardware are viable solutions for an artist's needs is the most important consideration an artist can make when evaluating hardware.

Education Equals Liberation and Success

Never trust the opinions of others when your money and time are at stake. Opinions from others may help lead you in the proper direction, but always rely on your own comparative research for the final decision. Never throw money at a product without knowing explicitly why you are buying it and what the alternatives are. To purchase hardware for a high-end workstation you must be educated about what is available and why you need it. Hardware is always changing, so when the time comes to buy, be aware of the trends, the latest solutions, and price comparisons, and then judge your needs accordingly. Allow some growing room when making hardware decisions: don't buy the minimum level of technology needed at the present moment, because that hardware will fail to meet future software needs.

References

Listed in order of appearance.

  • Operating System - OSNews: The Definitive Desktop Environment Comparison
  • Art Considerations - Raster: A dpi Tutorial from Faderhead
  • Resolution - Macromedia: Flash
  • Colors - Ecole Polytechnique Federale de Lausanne: Standard RGB Color Spaces
  • Colors - Adobe: Adobe Acrobat
  • Printers - Datamation: Dot-Matrix Printer
  • Printers - Datamation: Ink-Jet Printer
  • Printers - Datamation: Laser Printer
  • Printers - How Stuff Works: How Offset Printing Works
  • Paper - Archive Builders: Paper Sizes and Paper Weight: Metric and US Standards
  • Paper - University of Arizona: Paper Finishes
  • RAM Disk - Microsoft: Ramdisk.sys Sample Driver for Windows 2000
  • RAM Disk - TechTV: Make a RAM Drive
  • SCSI/SATA Arrays - Webopedia: What is RAID?
  • Processors - Ace's Hardware: Dual Xeon, Dual Opteron and Quad Opteron
  • Audio/Video Encoding - Intel: Intel
  • Audio/Video Encoding - AMD: AMD
  • Audio/Video Encoding - Wikipedia: SSE2
  • Audio/Video Encoding - Wikipedia: SSE3
  • Audio/Video Encoding - AnandTech: AMD Opteron Coverage - Part 4: Desktop Performance
  • Graphics Cards - NVIDIA: NVIDIA Quadro FX
  • Graphics Cards - ATI: FireGL Workstation Graphics Accelerators
  • Graphics Cards - 3Dlabs: Wildcat Series
  • Monitors - LaCie: electronblue IV
  • Tablets - Wacom: Wacom
  • Traditional Backup Methods - USByte: Magneto-Optical Storage




