
nVidia's GTX 600 Series


Lupin:

--- Quote from: vuzedome on March 23, 2012, 10:26:48 AM ---But do enlighten me, what is texture decompression and why is it that you use it to judge computational performances?
I'm really in the dark on that, google didn't help much.

--- End quote ---
Since this wasn't answered: texture compression lets you cram more textures (or more detailed ones) into your video RAM. Most games compress their textures to save space and to make them fit in VRAM. Imagine playing a game at Eyefinity resolutions with uncompressed textures: they won't all fit in the 2 GB (or more) of VRAM on a high-end card, so the game has to hit your drive to stream more textures in, and it stutters while it waits for them to load. Texture decompression is computation-heavy because it's much like decompressing an archive.
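To put rough numbers on the savings described above, here is a back-of-the-envelope sketch. The bit depths are the standard figures for uncompressed RGBA8 and the common BC1/DXT1 block-compressed format; the 4096x4096 resolution is just an illustration, not taken from any particular game.

```python
# Rough VRAM footprint of one 4096x4096 texture, uncompressed vs BC1/DXT1.

def texture_bytes(width, height, bits_per_pixel):
    """Size in bytes of a single mip level at the given bit depth."""
    return width * height * bits_per_pixel // 8

W = H = 4096
uncompressed = texture_bytes(W, H, 32)  # RGBA8: 32 bits per pixel
bc1 = texture_bytes(W, H, 4)            # BC1/DXT1: 4 bits per pixel

print(f"uncompressed: {uncompressed / 2**20:.0f} MiB")  # 64 MiB
print(f"BC1/DXT1:     {bc1 / 2**20:.0f} MiB")           # 8 MiB
```

At an 8:1 ratio, the same 2 GB of VRAM holds eight times as many textures, which is why games ship them compressed.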

vuzedome:
--- Quote from: Lupin on March 23, 2012, 01:29:16 PM ---
--- Quote from: vuzedome on March 23, 2012, 10:26:48 AM ---But do enlighten me, what is texture decompression and why is it that you use it to judge computational performances?
I'm really in the dark on that, google didn't help much.

--- End quote ---
Since this wasn't answered: texture compression lets you cram more textures (or more detailed ones) into your video RAM. Most games compress their textures to save space and to make them fit in VRAM. Imagine playing a game at Eyefinity resolutions with uncompressed textures: they won't all fit in the 2 GB (or more) of VRAM on a high-end card, so the game has to hit your drive to stream more textures in, and it stutters while it waits for them to load. Texture decompression is computation-heavy because it's much like decompressing an archive.

--- End quote ---
Thanks, I pretty much understood what it was after reading about it on Wikipedia.
kitamesume, regardless of those benchmark tests, most of us buy graphics cards for their main function: playing games.
And most gaming results I've seen show the GTX 680 performing better than its predecessor.
The GTX 680 is a new card, so it'll take a driver update or two, plus mainstream software picking it up, before we start seeing its true potential.

kitamesume:
I think you missed my point. I'm saying they could've done better on performance if they'd left the Fermi design alone and just gone for the 28 nm die shrink. That would've taken fewer months to develop, since the design already exists, apart from a few tweaks so it could profit from the 28 nm process.

The switch to 28 nm should already bring a lot of bonus effects even without changing the core design. The benefits of the 28 nm die shrink are better performance per watt, plus more headroom on core clocks, and since the die shrinks, they could fit in more cores.
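The "more cores in the same area" point can be sketched with ideal scaling. This is a simplification: real process shrinks never hit the ideal figure, and the node names (Fermi's 40 nm vs Kepler's 28 nm) are marketing labels rather than exact feature sizes.

```python
# Ideal transistor-density gain from a 40 nm to a 28 nm process.
# Area scales with the square of the linear feature size.

old_node, new_node = 40.0, 28.0  # nm: GTX 500 series vs GTX 600 series

density_gain = (old_node / new_node) ** 2
print(f"ideal density gain: {density_gain:.2f}x")  # ~2.04x
```

So even a straight die shrink of the same design could, in the ideal case, roughly double the core count in the same silicon area, or halve the die size at the same core count.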

Then they could've started the real overhaul with the GT? 7## series. Going that route, they could've been first to reach 28 nm fabrication, had room for a price hike (bringing prices down later once competition rose), and ultimately had the time to develop the GT? 7## series properly.

As of now I see AMD as the better pick, since they now hold the overall performance edge, unless you're craving PhysX. The only chance Nvidia has of taking the top-dog crown back is fine-tuning their drivers, though AMD is fine-tuning theirs as well. Also, if Nvidia can get their lower-tier cards to perform well, this shouldn't hurt them too much.

Also, if AMD thinks fast, and if it's possible, they could drop the prices of the HD 7970 and HD 7950 by $50 or so, and that would hurt Nvidia a lot.
It would hurt because Nvidia just released their card: if AMD applies pressure now, Nvidia's profit from this launch would be severely crippled, while AMD, which had a roughly three-month head start and already profited from it, wouldn't be hurt much by the move.

TMRNetShark:
They have a water-cooled one? What what?

So theoretically now (I'm not up to date on all the high-end cards), you can run a monster rig in relative silence (except for the hard drive and PSU)?

*Now patiently awaits a water-cooled PSU*

On topic: I think this card isn't as good as it could have been. I remember the old GeForce 6800s. Those were definitely a more significant leap forward in graphics than the GTX 680 (regression in naming? XD)... At least Nvidia knows how to get their drivers right *cough ATI cough*

mgz:

--- Quote from: Tatsujin on March 23, 2012, 05:30:59 AM ---
--- Quote from: kitamesume on March 23, 2012, 05:08:47 AM ---idk, based on reviews from sites I think Nvidia failed in the performance department, though their performance-per-watt ratio improved drastically.

Well, maybe I was just used to seeing Nvidia's performance improve by >20% in most of their newer models.

Though I'm curious what the HD 7770's equivalent will be, how much it'll cost, and what TDP it'll have.
I'd like an HD 7770 equal at <$130 while hovering under 70 W TDP.

--- End quote ---
I seriously feel like going with that aquarium + baby oil setup and dumping my computer inside it. I don't like heat; I want this whole thing at like 10 °C or 15 °C, no more than that.

I'd probably want a GTX 690 for the dual-GPU - just one card would be enough.

--- End quote ---
Running your computer submerged in oil doesn't actually keep your temps low. What it does is make everything incredibly stable: the system sits at an average temperature and fluctuates very little under load. Radiators like those used in water cooling aren't as effective at dissipating heat from oil.
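The stability comes from the sheer thermal mass of the oil bath. Here is a rough sketch of why: the tank volume and heat load below are assumptions picked for illustration, and the density and specific heat are approximate textbook values for mineral oil, not measurements of any particular setup.

```python
# Why oil submersion smooths temperature swings: a tub of mineral oil
# soaks up a lot of energy per degree of temperature rise.

volume_l = 40.0        # assumed tank volume, litres
density = 0.85         # mineral oil, g/cm^3 (approximate)
specific_heat = 1.67   # J/(g*K) (approximate for mineral oil)
heat_watts = 400.0     # assumed steady heat dumped in by CPU + GPU

mass_g = volume_l * 1000 * density
joules_per_kelvin = mass_g * specific_heat
seconds_per_kelvin = joules_per_kelvin / heat_watts

print(f"{joules_per_kelvin / 1000:.0f} kJ to warm the oil by 1 K")
print(f"~{seconds_per_kelvin / 60:.1f} min per 1 K rise at {heat_watts:.0f} W")
```

Even a heavy gaming load takes minutes to move the bath one degree, which is why the components ride at a near-constant temperature; but that same bath still needs a radiator to shed the heat eventually, and oil transfers heat to a radiator less readily than water does.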
