If you spend much time at your computer, you've probably wished there were a way to stick certain things on your desktop so you could keep an eye on them while you do something else. If you're like me, you've noticed that Microsoft's desktop gadgets are invariably ugly and massive RAM whores, like everything else they've ever made. Gadgets are also violently limited by the fact that they suck worse than a 10-cent hooker.
So how can you fix this? Well, the good people over at www.rainmeter.net have a solution for you: a free desktop customization program. It runs fairly light in the background, comes with tons of default themes, and can be modified heavily. Track anything from your system resource usage to your Google Calendar to your webmail inbox. Have a clock, a regular calendar, notes, an RSS reader, and a media player controller, all on your desktop. You can set it up so you can click through to your icons, you can make it hide on mouseover, you can arrange it all to your liking, and you can make your own skins if you can't find one of the fifteen or twenty out there that does what you want.
It's not really all that complicated to use, whether you just want basic plug-and-play downloads or you want to tinker. Plug-and-play is done with an installer, and it even has a self-extractor that works on skins and themes that are set up properly.
If you want to tinker, you open up your Skins folder, make a new text document, save it as a .ini file, and edit away. There are tons of instructions available, and with all the skins already out there, tons of reference material too. Yesterday I modded a skin rather heavily to fit in with my new desktop background of a sexy EVGA motherboard. Today, I'm making a better CPU monitor for it.
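For the curious, here's a rough sketch of about the smallest skin that does anything: a once-a-second CPU readout. Treat it as illustration only; the section names after [Rainmeter] are whatever you choose, and real skins pile on fonts, backgrounds, and fancier measures.

    [Rainmeter]
    ; update everything once per second (milliseconds)
    Update=1000

    [MeasureCPU]
    ; built-in measure; with no Processor key it averages all cores
    Measure=CPU

    [MeterCPU]
    ; a text meter that prints the measure's value where %1 appears
    Meter=String
    MeasureName=MeasureCPU
    FontSize=12
    FontColor=255,255,255
    Text=CPU: %1%

Drop that in its own folder under Skins, refresh Rainmeter, and you've got a live CPU percentage on your desktop. From there it's just adding sections until it does what you want.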
With all my stuff running, including the monitors that have to update quickly, my Rainmeter is only using 18MB of RAM. In this day and age, that's nothing. So get it, try it, and play with it. If you don't like it, you don't have to use it, but there's no excuse for not seeing what it can do for you.
Here is a link to the skin I customized yesterday if you want a look. Try it, have fun with it. (Note: it requires Rainmeter to work; otherwise it's just a bunch of plain-text files.)
Here is a link to the self-extracting Rainstaller file for the skin pack.
This pretty picture displays it without needing Rainmeter.
Wednesday, October 19, 2011
Sampling Different Types of Anti-Aliasing
Anyone who pays much attention to settings that can be forced in drivers or games has probably noticed by now that there's a whole lot of different types of Anti-Aliasing. The main two things people know about it though, are that it murders GPU and VRAM, and that it makes things look smoother. The method behind the madness, however, is generally regarded with the same apprehension usually reserved for Voodoo rituals.
In a recent post, I discussed the benefits of Anti-Aliasing and how they apply to the average gamer. That doesn't change the fact that there's a whole helluva heap of acronyms; apparently some of them are easier on your hardware than others, they all theoretically do the same thing in different ways using different amounts of resources, and for some reason, we always refer to it in multiples, like x4, x8, and so on.
Just as a quick recap, Aliasing is what we call it when a line that isn't perfectly vertical or horizontal gets depicted in pixels and picks up a little staircase effect. Anti-Aliasing is just there to make your eye think that isn't happening, so things look pretty.
One of the earliest forms of AA in gaming was SuperSampling AA, or SSAA. It just happened to be a bit too brutal for the graphics cards of the day, and got phased out for a while, but it's making a comeback now. Supersampling basically means rendering the scene at a higher resolution, so that each pixel you'll actually see is composed of multiple rendered pixels. These pixels then get blended based on various algorithms, which were either determined by throwing darts or by someone way smarter than me. There's also adaptive supersampling, which mostly seems to involve a combination of witchcraft and tarot to determine which pixels actually need to be fully supersampled, which means your GPU takes longer to explode trying to do all that work.
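If "more pixels blended down" sounds vague, here's a toy sketch of the resolve step in Python, assuming the dartboard landed on the simplest blending algorithm, a plain box filter. Real drivers use smarter sample positions and weights; the function name is mine.

    import numpy as np

    def resolve_2x_supersample(hi_res):
        # hi_res: a (2H, 2W, 3) image rendered at double resolution.
        # Average each 2x2 block of rendered pixels down into the one
        # final pixel you actually see on screen.
        h, w = hi_res.shape[0] // 2, hi_res.shape[1] // 2
        return hi_res.reshape(h, 2, w, 2, 3).mean(axis=(1, 3))

So "x4" supersampling in this scheme just means four rendered pixels feeding every one you see, which is exactly why your graphics card cries.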
FSAA, or Full-Scene Anti-Aliasing, is just another name for supersampling; they needed something new to call it so it wouldn't scare the pants off people who watched SSAA turn games into slideshows back in the day.
MultiSampling Anti-Aliasing, or MSAA, one of the versions we see more often, is essentially a refinement of SSAA that uses less GPU horsepower by only taking its extra samples where they matter most, mainly polygon edges, instead of everywhere in the scene. The best I've managed to understand of the specifics implies some sort of mathematical formula involving the cosine of the square root of negative infinity minus pi. Or some such nonsense. Basically, it isn't quite as pretty, does part of the same job, and beats less of the shit out of your graphics card. Got it? Good, now help me figure it out, it gets more confusing every time I try to understand it.
Of course, there's still one thing we haven't covered. Where the hell do the x4, x8, etc. come from? Well, roughly, that tells it how many "samples" get taken for each pixel it decides needs them. Then it promptly goes back to the roulette wheel to decide which pixels to make prettier, and hey presto, it automagically looks better!
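To make the "samples" thing slightly less like roulette, here's a toy sketch of what an MSAA resolve does at a polygon edge: the triangle gets shaded once for the pixel, but coverage gets tested at several sample points, and the final color is blended by how many of those samples the triangle actually covered. All the names and numbers here are mine, purely for illustration.

    def msaa_resolve(tri_color, bg_color, samples_covered, total_samples=4):
        # total_samples=4 is "x4" MSAA; 8 would be "x8".
        coverage = samples_covered / total_samples
        return tuple(coverage * t + (1.0 - coverage) * b
                     for t, b in zip(tri_color, bg_color))

    # A red triangle edge covering 3 of 4 sample points in a pixel:
    print(msaa_resolve((1.0, 0.0, 0.0), (0.0, 0.0, 0.0), 3))  # (0.75, 0.0, 0.0)

More samples means finer gradations along edges, which is why x8 looks a little smoother and costs a little more than x4.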
I hope this has been either educational or entertaining; if not, go back and re-read the parts that confused you (paragraphs 1-7?) while I go take a Tylenol.
Saturday, October 15, 2011
Practical DX11, What It Does, and What It Means
As we all know, the current generation of Microsoft's ubiquitous and ambiguous DirectX runtime is DX11. Just to make that sound less like techno-babble designed to keep software engineers in a job, DirectX is the overall software package for handling graphics in Windows. (There's also some sound stuff, but that's outside the scope of this article.) Since DirectX includes features for 3D gaming, it can be kind of important to gamers and enthusiasts to have hardware supporting at least recent versions.
Most games right now, mind you, only require DX9 to play, thanks to PC gaming's red-headed stepbrother, consoles. DX10, well, yeah, that got released at some point, and did some stuff, and nobody really gave a shit. DX11, on the other hand, has this wonky thing called tessellation, which mostly sounds like it fell out of a science fiction novel, possibly as some really painful way to die.
Basically, to explain the portion of tessellation that makes sense beyond "ooh, shiny" without two or three different degrees in graphics design, software, and who knows what else: it's just a different way of representing shapes. Tessellation, roughly, lets you break shapes down into smaller shapes, and its primary use is smoothing out certain kinds of detail.
Roughly, tessellation breaks shapes down into tiny triangles, letting you make what's called a displacement map. To make this make sense, do you remember the little plexiglass box toy with a whole lot of pins that could slide up and down, so when you set it on top of something, from the top it would be a 3-D representation of the object?
Roughly, that's what displacement mapping does in 3-D rendering. Not exactly, but it lets you take a shape and break it down into smaller shapes that can be pushed around like those pins. That way, as you get closer to something, the renderer can work its way up to the full-detail model, instead of trying to render all the models and textures in full detail all the way out to max viewing distance.
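Here's a toy sketch of that pin-box idea in Python, assuming a flat square and a simple height map. A real tessellator does this on triangle patches in dedicated hardware, so treat the names and the nearest-sample lookup as my own simplifications.

    import numpy as np

    def tessellate_and_displace(n, height_map, scale=1.0):
        # Split a unit square into an n-by-n grid of vertices (the "pins"),
        # then push each vertex up by the height map sampled under it.
        u = np.linspace(0.0, 1.0, n)
        uu, vv = np.meshgrid(u, u)
        rows = (vv * (height_map.shape[0] - 1)).astype(int)
        cols = (uu * (height_map.shape[1] - 1)).astype(int)
        zz = height_map[rows, cols] * scale
        return np.stack([uu, vv, zz], axis=-1)  # (n, n, 3) vertex positions

The level-of-detail trick is just choosing n: a big n for the rock right in front of you, a tiny one for the mountain on the horizon.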
I hope this makes some sense; it all sounds smarter in my head than it looks once I type it, but I can't decide if lack of caffeine is affecting my reading or my writing. Maybe I'm just crazy. Or all of the above.
Don't get me wrong, this is a very basic and generalized explanation, which by nature will be somewhat wonky. If you want to read the fancy version, you can check it out here.
OK, so, here's the question: what does this mean for me? Smoother surfaces and better poly mapping, making things look better all around, for starters. Pretty simple, and nice. The second major thing you'll see out of it is models being able to look good up close without soaking up a ton of GPU resources at long distance. No, this doesn't sound huge, but what it does is allow an increased view distance, since the card can avoid rendering so much detail far away, meaning less stuff pops out of thin air.
I hope this has cleared things up reasonably well for some people. I know it's not that useful for everyone, but understanding graphics can help you understand what sort of system you personally need for the graphics quality you desire. I'm hoping to do some other similar ones over the next few days, so stay tuned.
Saturday, October 8, 2011
Multi-GPU and Multi-Monitor Un-Confused
Eyefinity, SLI, 3DSurround, 3DVision, and CrossfireX. Multiple monitors. What does it all mean? What's the difference, in this epic pile of confusing terminology, and what do I need to make this work?
SLI and CrossfireX, for starters. These are multi-GPU configurations designed for extra graphics power on a single display. These are the ones most people think of first in multi-GPU configurations, and the terms most likely to be generically thrown in as placeholders. SLI is the nVidia variant; CrossfireX is AMD's. CrossfireX only requires multiple PCIe slots, while SLI requires an SLI-ready motherboard, which, as of recently, can include AMD motherboards.
3DVision: This is using funky technology and woogy glasses to make things pop out of the screen, like a bad '80s sci-fi flick. This hasn't been implemented all that well in a lot of games. It does, however, occasionally get mixed up with 3DSurround, due to the stupid similarity in naming.
3DSurround and Eyefinity are the methods of playing games across multiple displays. Not gaming on one and having a browser or other stuff going on a spare display; that's just a normal multiple-display configuration. Some AMD cards will support Eyefinity off a single card, but it's pretty stupid. No nVidia solutions do that, which is really perfectly reasonable, since playing across multiple displays doesn't make a whole lot of sense if you're having to drop the settings to "fuzzy blob" to get the extra screen real estate. By the way, use an odd number of displays per row for this; you don't want bezel lines in the middle of the game area.
Plain multiple-monitor setups just mean using two or more monitors, with at most one for gaming. This is supported by almost everything these days, since it's not all that harsh for non-3D use. It can be handy if you actually have a use for the screen real estate, but for a lot of people it's basically just a fancy thing.
Remember, for SLI, Eyefinity, 3DSurround, and CrossfireX, you need identical cards. You can't SLI a GTX 560 Ti and a 9800 GT. You could use the 9800 GT for dedicated PhysX or for extra monitor outputs, but you can't get two different cards running SLI for alternate frame rendering.
No, this isn't long, really, but it's something that drives me nuts occasionally.
Monday, September 26, 2011
Benchmarking for Fun (Or for Info.)
So if you frequent tech forums, you might hear something along the lines of "...only matters in synthetic benchmarks." or "You can't tell the difference without a benchmark." And, if you like the idea of having a slightly overpowered system, you may just want to know how to go about those benchmarks. Even if that's not the case, you may be wondering what will help your performance in a particular game the most, or wanting to find out just how effective your CPU or GPU is for a certain game.
If you read a lot of reviews, you've probably seen all the methodology, all the different names of software, and seen tons of numbers with pretty bars and graphs. Well, I hate to tell you this, but generally, the pretty bars and graphs don't just come with the software. Luckily, it's the numbers that matter anyway.
So, for starters, let's discuss benching for fun. The most commonly used benchmarks for gaming performance are Futuremark's 3DMark series. The "standard" comparison points are the default settings, which you can run in the free trial version; that keeps the database of comparable scores nice and big.
If you're going to bench for fun, the most important thing to remember is that you need a consistent set of benching processes. The best way to do this is to have a secondary account on your PC with the absolute minimum of automatic services, to make sure memory use is consistent. Keep drivers updated, although you usually won't want to use beta drivers unless you need a specific feature. (Always test beta drivers prior to sustained use; some graphics drivers have been known to cause thermal issues.)
So, now that you have your consistent stuff, you get a baseline. That's just how it performs at your normal settings. After that, you can try tweaking things to see if they gain you performance. Whatever you decide to tweak, make sure you test with the same processes as before; background clutter really can make a huge difference in scores, enough to skew data to the point of unreliability.
Generally, the GPU is the easiest thing to tweak, since there are simple GUI-based OC utilities that work reasonably well, like MSI Afterburner, or (to a point) EVGA Precision. Remember to push it up a little at a time, and test for stability before starting the benchmark. If you raise your GPU OC and suddenly lose performance, you either need to raise your GPU voltage (at your own risk) or lower the clock back down a bit. Generally, in graphics cards, the memory clock gains you less performance for the voltage than the core and shader clocks, so you usually won't want to bother with it.
The CPU is, of course, one of those things people really think about overclocking, and a lot of people are scared of it. If you do your homework, have the right components, and do it carefully, it can be a perfectly safe way to gain performance. You only void your warranty if you do damage that can definitely be attributed to overclocking, so usually you're safe if you keep the OC within the CPU manufacturer's specified voltage range.
This isn't an overclocking guide, but I will say that on recent i7s, it's usually best to raise the clock to the desired level first, then see if you can squeeze Hyper-Threading back on safely. Hyper-Threading will raise your score in 3DMark, but less than most clock speed gains. RAM speed and timings can also affect it, as can the IMC clock, and basically all the usual culprits.
But now, I'm sure, you're wondering about the benching for information thing. After all, E-peenery is fun, but there's only so much you can say about it. If you want to learn something through benchmarking, you're going to need a few things.
1: Consistent Monitoring. Be it FRAPS for FPS/frametimes, HWMonitor for temps, or something else, you must use software; eyeballing it is worse than useless. (There's a quick sketch of crunching a frametime log right after this list.)
2: Consistent Playthrough. You need a replay, or a specific save file, or something that you always use for benchmarking. If you're getting an FPS recording of different things, it doesn't tell you much.
3: Objectivity. This is the key. If you aren't objective, you're useless. You need to be willing to get results you didn't want or expect.
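As promised above, here's a rough sketch of crunching a frametime log, assuming the simplest possible format of one frame time in milliseconds per line. Whatever your recording tool actually writes, adapt the parsing to match; the names here are mine.

    import sys

    def summarize(times_ms):
        # Average FPS over the whole run, plus the single worst frame,
        # expressed as the FPS it would correspond to.
        total_s = sum(times_ms) / 1000.0
        avg_fps = len(times_ms) / total_s
        worst_ms = max(times_ms)
        return avg_fps, 1000.0 / worst_ms

    if __name__ == "__main__":
        times = [float(line) for line in open(sys.argv[1]) if line.strip()]
        avg, low = summarize(times)
        print("avg: %.1f FPS, worst frame: %.1f FPS" % (avg, low))

The worst frame matters because a run that averages 60 FPS with nasty hitches feels worse than a steady 45.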
Well, as for methodology, it depends a bit, based on what exactly you're wanting to test, but essentially you need to isolate a component. For CPU/RAM testing, graphics settings not requiring the CPU should be turned down, to include resolution. This ensures that the limiting factor will be your CPU. For RAM, you just get a baseline at a "Typical" setting, like CL9 1333 for DDR3, and work from there, using the same CPU clock.
If you want to isolate GPU, you crank the desired graphics settings, turn down anything you can involving the CPU, and hope you aren't trying to isolate the graphics card on an RTS. Won't happen unless you have a really screwy rig.
If you want to monitor temps, you need a consistent ambient temperature, and consistency in everything except what you're testing. Including what you use to heat up the PC, and how long you run it prior to measuring.
I know this hasn't really been long on details, but it's really more to get you thinking the right way.
Monday, September 19, 2011
Mac, Why I Hate It, and Why That's OK.
So, there's this funny misconception amongst Mac users. Apparently, there's something wrong with people hating Mac, even those of us with a legitimate reason. After all, disliking Mac is clearly a personal attack against all Mac users, or at least that's how they respond. Now, I'll admit, there are PC users who exacerbate the problem, and I've been one of them. But these days, I'd say my reasons to hate Mac are legit, and people who disagree should be willing to accept them.
For starters, the iMac. Take it out of the box, plug it in, and turn it on. Great. But it's a laptop GPU pushing more display than it should be. You can't open the case, you can't stick in a new GPU, you can't do anything with it aside from turning it on and maybe setting it on your coffee table as some sort of centerpiece. But it is decorative. Can't change anything, can't tinker... yuck.
Second, OpenGL and game support. You just can't game on a Mac. Now I know not everybody plays a lot of games, and since most Valve and Blizzard games are available for Mac, some people don't need more. I do, and I need it to look good. DX11 with a high framerate doesn't happen on a Mac.
Pricing. The most frequently cited reason, and the least understood. Mac comes with good support, good build quality, good cooling, and good displays. But me, I'm fine with manufacturer warranty for components, and I can arrange good cooling. As far as displays? I don't need perfect color, just decent color and low response times. That's it. I can do that without a Mac, and get the stuff I do want with it.
Hardware... Yeah, you can't upgrade it, you can't customize it much, and it costs a fortune for what you get, from a pure performance standpoint.
Now, to revisit that feature thing. Can't tinker, but it just works. That's good for some people. Expensive, but you get awesome customer service. Can't upgrade, but you get amazing build quality. OpenGL and game support? Not everybody needs that, and for what Mac does, it's awesome. It just doesn't game.
Thursday, September 15, 2011
Customer Reviews, or, Stupidity as an Art Form.
So, we've all been there. Looking for whatever electronics or PC components online, at Newegg or TigerDirect, and we see something that might fit our needs. Go to whatever search engine floats your boat and start looking for reviews or benchmarks. Fast forward 30 minutes: for whatever reason, you couldn't find a good review or an unbiased benchmark. Why not just use customer reviews? It's rated 4 or even 5 stars, it should be good enough, right?
Of course, the correct answer to this question is hell no. Absolutely never, under any circumstances, should you trust an e-tailer's customer reviews on anything. Why? Well, first and foremost, the statistical sampling is garbage. People are much more likely to complain than to say something positive, especially when their expectation was "function". Also, when someone does give a positive review, it's almost always going to be a very generous one, usually because they expected it to turn on, and OH MY GOD THE SHINY LIGHTS WORK!
But, you think, doesn't that mean that if it's rated high, it should be even better than the rating implies? Not really, no. Remember, most people only buy new electronics when it's an upgrade. As such, of course it feels faster, that would be the whole point of an upgrade, yes? Hell, odds are good they don't know what component actually matters anyway.
That means your best bet is to search for the people who say they're a high tech level, right? Wrong. Most technically inclined folks think a little too literally, and know that on a scale of idiot to Einstein, they're at best a 3-4/5, and won't put themselves as a 5. Anyone saying they're an expert thinks they are, and those people are much more dangerous to listen to. They're the ones with a metric asston of anecdotal evidence supporting all kinds of ridiculous stories and theories about how things work.
So, what good are the reviews? Well, they're great for figuring out how the unit is most likely to die in the event it does. That can be handy for a lot of things. They aren't any good for determining how well something works, because if someone who knew what they were doing had benched the components properly, you wouldn't be this desperate for reviews, now would you? They're also fairly good entertainment, if you're decent with the stuff, you can sit down with a beer and laugh until you cry.
In summation, e-tailer customer reviews suck.
Max FPS Makes My Head Hertz
We've all seen the argument: "How many frames per second can the human eye see?" The trouble is, while everybody seems to have a theory, nobody seems to be able to show concrete evidence to back it up. So instead of trying to convince you to agree with me based on no information, I'm going to discuss hardware limitations, and why they render the argument moot for most people.
If you know much about displays, you know they have a refresh rate, listed in Hertz (Hz). Hertz means cycles per second, so a 60Hz refresh rate means your display refreshes 60 times per second. In other words, no matter how many frames per second your GPU draws, only 60 can be shown on your screen. 60Hz is by far the most common refresh rate currently, although 3D-capable displays and some others are capable of a 120Hz refresh rate.
Current video card drivers let you set the video output rate, usually limiting you to a maximum of your display's refresh rate, but you can also lower it, which has some advantages for certain video enthusiasts. That's beside the point, however. What's much more important is this: your refresh rate is a hardware limitation on frame rate. There is no physical way to properly display a larger number of FPS. You can, however, improperly display them.
Improper display shows itself as screen tearing. Essentially, your display is trying to show parts of two different frames at the same time, because it's receiving too many frames per second. This is where Vertical Sync comes into play. Vertical Sync (also known as Vsync) limits your rendered FPS to your display's refresh rate, or a fraction of it. In other words, the most FPS you can get with Vsync enabled on a 60Hz display is 60 FPS.
You may be wondering, perhaps, exactly why I'm blathering on about vertical sync and refresh rates. Well, it becomes much more useful when you consider that no matter what actual frame rates the human eye is capable of differentiating, the eye absolutely does notice sudden change. It detects movement, and it is quite capable of noticing if the displayed frame rate suddenly drops from 60 to 45, which can happen quite easily with Vsync if your rendered FPS drops below 60.
Since we don't want rendered FPS to drop below 60, some overhead in average frame rate is needed to keep 60 as the minimum if we're going to use Vsync. This is where "excessive" graphics power comes in on a display capped at 60Hz. In other words, while being able to average 90 FPS may not make the game look any better than it does at 60 FPS, it does make the game look better by keeping the displayed rate from ever making a sudden sharp dip. Past that point, though, the refresh rate is a hard hardware limitation: any extra smoothness you think you perceive above your display's refresh rate is purely a placebo effect. Your mind is playing tricks on you, plain and simple.
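For the curious, here's a toy sketch of why the dip is so sharp, assuming strict double-buffered Vsync where every finished frame has to wait for the next refresh tick. Triple buffering muddies this, which is part of how averages like 45 show up; the function name is mine.

    import math

    def vsynced_fps(refresh_hz, render_ms):
        # Each frame occupies a whole number of refresh intervals, so the
        # displayed rate snaps to refresh_hz / 1, / 2, / 3, and so on.
        interval_ms = 1000.0 / refresh_hz
        ticks = max(1, math.ceil(render_ms / interval_ms))
        return refresh_hz / ticks

    print(vsynced_fps(60, 15.0))  # 60.0 FPS: under the ~16.7ms budget
    print(vsynced_fps(60, 18.0))  # 30.0 FPS: barely over, and the rate halves

That cliff from 60 straight to 30 is exactly why you want the rendering headroom described above.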
Oh, and if you want to know my thoughts on perceived framerate? It depends on the game, your PC, and you. If your PC has low input lag and doesn't stutter, a fairly low framerate can look good in most cases. However, depending on how fast the on-screen motion is, a higher framerate may be needed to make it appear smooth, unless the game renders motion blur.