Graphics Cards and Moore’s Law

After seeing the new GTX 1050 and 1050 Ti, I'm intrigued.

These graphics cards are being called the kings of 1080p gaming, running Battlefield 1 in the low 60s FPS (frames per second). They do the same with most other graphically demanding titles out there, such as the new Doom reboot. But I think it's more than just the specs (which are very impressive) that make these cards special.

This card (the Ti edition) has 768 CUDA cores, which is very impressive. It has 4 GB of GDDR5 VRAM, which is more than my 970 can boast (to sum it up, NVIDIA gave the 970 3.5 GB of full-speed GDDR5 plus a 0.5 GB segment with roughly one-seventh the bandwidth, a quirk of how the GM204-200 chip was cut down).

But what does it cost (yes, I know most of you already know, but I'll still ask as a courtesy)? Well, you can grab one of these for just 139.99 USD. Pretty impressive, if I do say so myself. With this, you could build a very capable 1080p gaming rig with 8 GB of DDR4 and an i3-6320 dual-core at 3.9 GHz for around 350 USD. Or, for the light content editor by day, gamer by night, you could pair it with an i5-6600K (3.5 GHz base clock, 3.9 GHz turbo) for a 450 USD solution. All in all, this card lets you have a tubular setup at a respectable cost.

To put its power in perspective against modern gaming consoles, it is around 1.5x more powerful than the Xbox One's GPU and on par with the PS4's (certainly not the PS4 Pro's, however; the equivalent to that GPU would be somewhere in the ballpark of a GTX 980).

This reminds me of Moore’s Law. Gordon Moore, co-founder of Intel, famously observed that the number of components per integrated circuit would double every two years (originally every year, later revised to two). People have since extended the idea to other technologies, and also used it to predict that the cost of a given amount of computing power would halve every two years.
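If you want to play with the doubling rule yourself, here's a quick Python sketch. The function and its inputs are just illustrative, not tied to any particular chip:

```python
def moores_law(start_count, years, doubling_period=2):
    """Project a component count forward, doubling every `doubling_period` years."""
    return start_count * 2 ** (years / doubling_period)

# e.g. a chip with 1.87 billion transistors, projected two years out:
print(moores_law(1.87e9, 2))  # -> 3.74 billion (one full doubling)
```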

So, let’s put the 1050 Ti to the test compared to the 2014 NVIDIA line-up.

We’ll start with cost. No 2014 card matched the 1050 Ti’s clock speed; the fastest of that bunch was the 770. The 750 Ti has 640 CUDA cores, making it the closest match, and at 1020 MHz it is also second-closest in clock speed, so we’ll use it for comparison. On cost, if we average the 750 Ti with the GTX 760 (the next-closest card in terms of performance), we get around 150 USD. So we see roughly a 20-dollar price drop (if you only factor in the 750 Ti, they cost the same, for lesser specs). Not a big win for the 1050 Ti here.

Now, let’s look at it how Moore intended. The 750 Ti packs 1.87 billion transistors on a 148 mm² die. The 1050 Ti has 3.3 billion transistors on a 135 mm² die. Pretty big difference, right?
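To make that concrete, here's the density math in Python, using only the figures quoted above:

```python
# Transistor counts and die areas (mm^2) quoted above.
transistors_750ti, die_750ti = 1.87e9, 148    # GTX 750 Ti
transistors_1050ti, die_1050ti = 3.3e9, 135   # GTX 1050 Ti

density_750ti = transistors_750ti / die_750ti      # ~12.6 million per mm^2
density_1050ti = transistors_1050ti / die_1050ti   # ~24.4 million per mm^2

print(density_1050ti / density_750ti)  # ~1.93x the density
```

A ~1.93x density jump over two generations is remarkably close to the one doubling Moore's two-year cadence predicts.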


So, should you spend your hard-earned money on a card that costs pretty much the same as one from two years ago?

Short answer is, yes.

Here comes the long answer: while I can’t tell you about my own experience with the card (because I do not own one), it sounds pretty promising. It supports DirectX 12, which will soon become a requirement for most games. It has roughly 1.5x the GFLOPS of the 750 Ti (1,981 vs. 1,320), plus more CUDA cores. The card has been shown to handle all of today’s biggest titles at 1080p and 60 FPS. Other advantages include a higher clock speed, higher effective memory clock, higher pixel rate, higher turbo clock, and a slightly higher texture rate.
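And in case you want to sanity-check the "roughly 1.5x" claim, the arithmetic from the quoted numbers:

```python
# GFLOPS figures quoted above for each card.
gflops_750ti, gflops_1050ti = 1320, 1981

ratio = gflops_1050ti / gflops_750ti
print(round(ratio, 2))  # -> 1.5
```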

I hope you enjoyed my first blog post. Thank you for visiting Random Tech Blog.

P.S. Featured Image source is here