Wednesday, September 16, 2009

48. Gamers' Graphics Tuning Guide

By: Pulp (Dustin Sklavos)
From: NotebookReview.com

Even if you've got the latest and greatest CPU, graphics card, widescreen monitor and cutting edge video game, odds are you're not enjoying the ultimate gaming experience. Why? Because you're gaming on the default graphics settings. No matter how old or new your gaming rig, you can probably squeeze even more performance from the system by following the tips in this Gamers' Graphics Tuning Guide.

This information applies to both desktop and notebook users alike. We'll cover how to make your in-game visuals look their best, and how to do some basic performance optimization for your games by slightly (or significantly) tweaking your graphics settings.

BASIC KNOWLEDGE

There are two things you need to know about your system: What is the monitor's native resolution? And what is the make and model of the graphics card?

In lay terms, your monitor's native resolution is the number of pixels (measured as width by height) the screen is designed to produce. The specification is available online for either your monitor or your notebook. The model number proper will be on the bottom of the notebook or the back of the monitor. If the specification given by the manufacturer uses a letter abbreviation like WXGA, you can check our How it Works: Screens guide to translate these codes into actual resolution numbers.

Your monitor will deliver its best picture at its native resolution. You can run a lower resolution, but for more information on that (and other tasty tidbits) it would be helpful to go back and check the aforementioned Screens guide.

The model of your video card or "display adapter" is listed on your system's specification sheet. If your desktop was custom-built you probably know this already. To learn more about the fundamentals of graphics hardware, you'll want to check out the first page of our Mobile Graphics Guide for 2009, which is largely applicable to desktop hardware as well. Optimizing your graphics output based on your graphics card is much, much trickier than just changing your software resolution to match your hardware resolution. We'll take it step by step in the sections below.

A BRIEF ANATOMY OF A GPU

Basically, with any graphics hardware -- also known as a graphics processing unit (GPU) or video card -- there are three characteristics that most greatly define performance: Memory size, memory bandwidth, and architecture.

Memory size accounts for how much data can actually fit in the video card's memory, space reserved largely for textures and filtering operations. I want to be clear here: More graphics card memory does not always equal better graphics. Some cards simply aren't fast enough to properly handle more than the currently preferred 512MB of on-card memory. Higher-end cards (Radeon HD 4800 series, GeForce GTX and GTS series) are powerful enough to make use of a full 1GB of on-card memory, but even they see virtually no improvement going past one gig.

With memory size comes memory bandwidth, defined by the type of memory used and the memory bus width. More details are available in our graphics guide, but it basically boils down to this: Higher memory bandwidth improves performance. A GPU equipped with DDR2 RAM or (heaven help you) memory shared from system RAM will be severely crippled in this respect. Lack of memory bandwidth is the single biggest graphics performance bottleneck I continue to see on any system.
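For a rough feel of the numbers, bandwidth is just the effective memory transfer rate multiplied by the bus width. The figures below are illustrative examples rather than the specs of any particular card:

```python
# Rule of thumb: bandwidth (GB/s) = effective transfer rate (MT/s) x bus width (bits) / 8 / 1000
def memory_bandwidth_gbs(effective_mts, bus_width_bits):
    return effective_mts * bus_width_bits / 8 / 1000

print(memory_bandwidth_gbs(800, 64))    # DDR2 on a 64-bit bus:   ~6.4 GB/s
print(memory_bandwidth_gbs(3600, 256))  # GDDR5 on a 256-bit bus: ~115.2 GB/s
```

A gap that wide is exactly why a starved memory bus can sink an otherwise capable GPU.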

Finally, there's the least tangible graphics factor: Architecture. ATI/AMD and Nvidia's designs differ from generation to generation and between one another, and these differences can have a direct impact on graphics performance. Certain games run better on Radeons, others thrive on GeForces, based on how developers structure their code. Dead Space, for example, runs nearly 100 percent faster on some GeForce GPUs. For what it's worth, Radeons still run that particular game beautifully, unless you feel like you can perceive the difference between 60 frames-per-second and 120.

While Nvidia's cards have scaled pretty linearly from generation to generation since they released their 8000 series so long ago, ATI/AMD has a massive break between the 2000/3000 generation and the 4000 generation. Parts from the 4000 generation are far more efficient than their predecessors. I personally own a Radeon HD 4870, and previously owned not one but a pair of Radeon HD 3850s, and the 4870 completely outclasses them in every way. The desktop Radeon HD 4670, even with half the memory bus of the 3850, still performs on par with it.

So how do these three characteristics ultimately shake down when you're configuring your game? Let's find out.

IN-GAME SETTINGS

One of the joys of PC games is that they can look better than their console versions, provided you know how to tune your gaming rig. Most games offer some variant on these settings, some more and some less, but the analogs will be present more often than not. While we may namecheck a few hard limits and settings levels, the best way to tune your game rig is to tweak each of these settings individually and then observe the effect on actual game performance. In almost every case, turning these settings down will improve game performance at the cost of visual appeal, so only you can know where your sweet spot for speed-versus-pretty is located.

Resolution

In-game resolution settings can have the strongest impact on your graphics performance. Typically, the biggest limiter here is memory size, but if your GPU is equipped with DDR2 memory, shared memory, or a small memory bus (think 64-bit), you will hit a hard limit on usable resolution very quickly. The memory bandwidth simply won't be enough to handle the amount of information streaming between the memory and the GPU, and I've seen otherwise decent hardware take a swan dive in performance when game resolution was raised too high. If your graphics hardware is relatively low-end (as in no on-card memory) you may be confined to running at 1024x768 or under.
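To see why, here's a back-of-the-envelope sketch of the raw framebuffer cost alone, assuming 32-bit color (double-buffered) plus a 32-bit depth/stencil buffer and ignoring textures and everything else the card has to hold:

```python
def framebuffer_mb(width, height, bytes_per_pixel=4):
    # Two 32-bit color buffers (front and back) plus one 32-bit depth/stencil buffer.
    color = width * height * bytes_per_pixel * 2
    depth = width * height * bytes_per_pixel
    return (color + depth) / (1024 * 1024)

for res in [(1024, 768), (1680, 1050), (1920, 1200)]:
    print(res, round(framebuffer_mb(*res), 1), "MB")
# (1024, 768)  ->  9.0 MB
# (1680, 1050) -> 20.2 MB
# (1920, 1200) -> 26.4 MB
```

And every one of those bytes has to be read and written every frame, which is where memory bandwidth comes back into the picture.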

Texture Detail

Texture detail is affected most heavily by memory size. If you don't have a lot of memory to go around, consider reducing texture detail to spruce up game performance. The game world around you -- such as details on the ground, walls, rocks, and other large solid objects -- will appear a bit blurrier as a result, but your game will run faster and with more fluidity.
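As a rough sketch of what the setting is doing: many games simply swap in smaller versions of their textures at lower detail levels, and halving a texture's dimensions cuts its memory footprint to a quarter. The figures below assume uncompressed 32-bit textures; compressed formats are smaller but scale the same way:

```python
def texture_mb(width, height, bytes_per_pixel=4):
    return width * height * bytes_per_pixel / (1024 * 1024)

print(texture_mb(2048, 2048))  # 16.0 MB at high detail
print(texture_mb(1024, 1024))  #  4.0 MB one notch down
print(texture_mb(512, 512))    #  1.0 MB lower still
```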

Shadows

Shadows and shadow detail have had a significant influence on the performance of recent games. Typically, diminishing or disabling shadows will directly speed up game play. That said, shadow performance often scales with GPU processing power. A GPU with a high amount of shader power (related to architecture) isn't going to break as much of a sweat with a lot of shadows, but mid and low-end cards may start bawling with Shadows set too high.

Shaders

The term "shaders" is a bit all-encompassing, but generally shaders make things "prettier" overall. Shader performance is tied almost entirely to GPU processing power and architecture. On a higher-end GPU, this setting is pretty safe to turn up since there's likely power to spare, but when you're on the mid-end or lower, you may need to turn this a ways down.

Lighting

This setting will affect the general quality of lighting within the game, sometimes altering the color of light as well as how crisp it is. Lighting is often highly shader-dependent and as such will fall in line with the shaders setting itself, though architecture can have a lot to do with this. Older GPUs are going to struggle a lot more with effects like HDR (High Dynamic Range) lighting and bloom, while newer GPUs tend to be better optimized for these effects.

Anti-Aliasing

Anti-aliasing is a process where the GPU smooths out jagged edges in the image, such as the barrel of the gun, character models, and so on. It's not supremely essential and many games don't even let you enable it, but it can improve image quality considerably depending on whether the game is edge-heavy (F.E.A.R., I'm looking at you).

Unfortunately, next to resolution, anti-aliasing is the ultimate resource hog. Anti-aliasing demands good memory bandwidth, and at higher resolutions requires a great deal of memory as well. We've noticed that around 1920x1200 is where modern GPUs tend to run out of video memory, especially if they have 512MB of on-card memory or less. These cards may run perfectly fine before anti-aliasing is enabled, but the performance impact once it's enabled can be devastating.
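A simplified sketch of why: with multisample anti-aliasing the GPU keeps several color and depth samples per pixel, so the buffers from the resolution example above get multiplied by the sample count (real drivers add resolve buffers and compression tricks, but the scaling is the point):

```python
def msaa_buffers_mb(width, height, samples, bytes_per_pixel=4):
    # Multisampled color plus multisampled depth, at 'samples' samples per pixel.
    return width * height * bytes_per_pixel * samples * 2 / (1024 * 1024)

print(round(msaa_buffers_mb(1920, 1200, 1), 1))  # no AA:  ~17.6 MB
print(round(msaa_buffers_mb(1920, 1200, 4), 1))  # 4x AA:  ~70.3 MB
print(round(msaa_buffers_mb(1920, 1200, 8), 1))  # 8x AA: ~140.6 MB
```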

Anti-aliasing is also one of those points where architecture makes its appearance, particularly in ATI's Radeons. Simply put, the 4000 series performs substantially better when anti-aliasing is enabled than do previous generations. ATI has never confirmed that anti-aliasing was "broken" in the hardware of the 2000 and 3000 series, but generally speaking they produced much more precipitous performance drops even in the flagship 3000-gen GPUs when anti-aliasing was enabled. The 4000 series fixes the problem and brings anti-aliasing performance more or less in line with the GeForces on the market.

Texture Filtering

Texture filtering is my bread and butter, and I'll generally preserve this setting at the expense of almost all others.

Games will use different texture qualities depending on the distance from the "camera" in order to improve performance, in a technique called mip-mapping. The textures closest to you will look best, while those farther away will have their detail sometimes dramatically reduced. With filtering disabled, you'll often see the mip-mapping in effect, where there'll be a hard line in front of you that moves with you. The same texture will be clearer near you, and suddenly drop in quality past that line.
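Storing the mip chain itself is cheap, incidentally: each level is half the width and height of the one above it, so the whole chain only adds about a third on top of the base texture. A hypothetical 1024x1024 texture is used below purely for illustration:

```python
def mip_chain_mb(width, height, bytes_per_pixel=4):
    # Sum the full chain of mip levels, halving the dimensions down to 1x1.
    total = 0
    while True:
        total += width * height * bytes_per_pixel
        if width == 1 and height == 1:
            break
        width, height = max(width // 2, 1), max(height // 2, 1)
    return total / (1024 * 1024)

print(round(mip_chain_mb(1024, 1024), 2))  # ~5.33 MB, versus 4.0 MB for the base level alone
```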

There are three basic texture-filter settings: Bilinear, trilinear, and anisotropic. Anisotropic filtering generally offers settings of x2, x4, x8, and x16. These settings "move" the texture drop-off line in some cases, but also blur the border, allowing for smooth transitions between each level of detail on the texture so you don't see that hard line. Indeed, anisotropic filtering in particular will radically improve the detail of the texture on the ground in the distance, which keeps the look of the game consistent.

Mercifully, modern GPUs have optimized the texture filtering process extremely well, and it's typically safe to enable regardless of hardware class. You can always turn it down to see if there's an improvement in performance, and I've found in some cases that a setting of x8 anisotropic filtering will provide an adequate trade-off between performance and picture quality.

V-Sync

Last but not least, there's V-Sync, the setting that many gamers leave turned off but I, personally, cannot do without. V-Sync is short for Vertical Synchronization, and it synchronizes the frames rendered by the video card with the refresh rate of your monitor (59 or 60 refreshes per second for the overwhelming majority of LCD screens). Left disabled, the game can perform much faster, but something called "tearing" can also occur, where a new frame starts getting drawn before the old one is finished, and a distinct line appears in the image where this happens. A lot of modern games are pretty good about minimizing tearing, but nonetheless it exists.

V-Sync gets rid of tearing entirely, but introduces a couple of wrinkles of its own. It locks the frame rate to 60, and if the graphics hardware can't keep up, the frame rate drops to an even divisor of 60 (30, then 20, then 15 or worse), which is essentially unplayable. This can be averted by enabling an option called Triple Buffering, but nonetheless, V-Sync incurs a considerable performance penalty not limited to any single hardware factor.
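Here's a sketch of why the drop is so abrupt: with plain double-buffered V-Sync, a finished frame can only be shown on a refresh boundary, so the effective frame rate snaps to the refresh rate divided by a whole number. The frame times below are purely illustrative:

```python
import math

def vsynced_fps(render_time_ms, refresh_hz=60):
    # A frame that takes even slightly longer than one refresh interval
    # has to wait for the next refresh before it can be displayed.
    interval_ms = 1000 / refresh_hz
    refreshes_per_frame = math.ceil(render_time_ms / interval_ms)
    return refresh_hz / refreshes_per_frame

print(vsynced_fps(15))  # 15 ms per frame -> 60 fps
print(vsynced_fps(17))  # 17 ms per frame -> 30 fps (just missed the ~16.7 ms window)
print(vsynced_fps(40))  # 40 ms per frame -> 20 fps
```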

Another downside of V-Sync is that it can introduce sometimes noticeable latency, or "input lag," between the commands you issue and the game itself. This can cause a game to feel somewhat sluggish.

So with these horrendous downsides, why enable it at all? For me, it's because tearing is very distracting, and my games perform plenty fast enough with it enabled anyhow. Remember, this isn't about how fast you can get the game to run, but about what's comfortable for you and what trade-offs you're willing to make.

CPU

One last note: Your CPU impacts graphics performance, too. If your processor is on the slower end, it will limit how much work is fed to your graphics hardware. Some games run fine with a slower processor (Crysis, for example), while others can be surprisingly punishing. Those of you with older CPUs will notice that Left 4 Dead slows to a crawl when large numbers of zombies are on-screen, and Far Cry 2 seems to be heavily processor-reliant in general. If changing the software settings one way or the other doesn't have an appreciable effect on performance, odds are your processor is holding things up.

CONCLUSION

Hopefully this guide has given you some idea of what all those settings in your games do. I know how daunting they can seem, but an understanding of how to set them can make your gaming experience much more enjoyable overall. Games, for me at least, do hinge somewhat on the graphics. A game with excellent graphics can feel very immersive while a game with blocky models and blurry textures can remind you that you're just playing a game.

I'm a big fan of first-person shooters, in particular, and I've found they can be very sensitive to video performance. For me, a high frame rate when playing these definitely improves my performance and makes it much easier to make more precise shots. It doesn't necessarily make up for my being a miserable shot to begin with, but it helps. For you, it could be all the difference in the world.