Ethereum’s Byzantium hard fork is behind us and the NVIDIA GeForce GTX 1070 Ti graphics card has now launched. The GeForce GTX 1070 Ti is an interesting card for cryptocurrency fans as it has 512 more CUDA cores than the GeForce GTX 1070 while still using GDDR5 memory. When mining Ethereum, the speed and latency of the memory subsystem is the most important part of the silicon. That is why the mighty NVIDIA GeForce GTX 1080 with GDDR5X doesn’t perform as well as the lower-priced GeForce GTX 1070 when it comes to mining Ether. The NVIDIA GeForce GTX 1070 Founders Edition runs $399 ($379 for non-FE cards) and the new NVIDIA GeForce GTX 1070 Ti Founders Edition runs $449. Not bad considering the NVIDIA GeForce GTX 1080 FE runs $549 ($499 non-FE) and the NVIDIA GeForce GTX 1080 Ti FE runs $699.

Since we have all of these graphics cards on hand, we figured we’d put the new NVIDIA GeForce GTX 1070 Ti, with its 2,432 CUDA cores and 8GB of GDDR5 memory running at 8Gbps, to work mining some Ethereum.

In stock form we found the GeForce GTX 1070 Ti was just 0.2 MH/s faster than the older GeForce GTX 1070. It looks like the extra CUDA cores don’t add up to more mining performance; the cards’ identical memory bandwidth (256 GB/s, thanks to the 8Gbps memory speed on a 256-bit wide memory interface) meant performance was essentially the same.
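The bandwidth figure above falls straight out of the bus width and data rate. A minimal sketch of that arithmetic, assuming the usual convention of data rate in Gb/s per pin across the full bus width:

```python
# Sanity check of the memory-bandwidth math: both the GTX 1070 and
# GTX 1070 Ti pair an 8 Gbps effective GDDR5 data rate with a 256-bit bus.
def memory_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s: data rate per pin * bus width / 8 bits per byte."""
    return data_rate_gbps * bus_width_bits / 8

print(memory_bandwidth_gbs(8, 256))  # 256.0 GB/s for both cards
```

Since Ethereum mining is bandwidth-bound, matching bandwidth is what makes the two cards land on nearly identical hashrates despite the core-count gap.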

When we looked at the memory subsystem on the GeForce GTX 1070/1070 Ti cards we noticed that some were dropping down to 1900MHz while mining while others stayed at 2000MHz. If we closed the mining application and opened other 3D applications, the clocks jumped back up to 2000MHz. The memory clocks shouldn’t drop like that, and we’ve shared what we found with NVIDIA. They could not replicate what we were seeing, so we manually set the memory clocks on the cards running at 1900MHz to get them properly back to 2000MHz for the charts above. Keep this in mind when overclocking, as it takes a +200 MHz offset in EVGA Precision X to get the 100MHz boost displayed in GPU-Z, since the memory runs at quad data rate. We just wanted to point this out as it may be happening to others. The good news is that manually overclocking the memory to its maximum will still get the highest clock possible from the card.
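The offset arithmetic described above can be confusing, so here is a small sketch of how we interpret the numbers. The 2:1 relationship between the Precision X offset and the GPU-Z reading is an assumption drawn from the behavior we observed, not documented tool behavior:

```python
# Assumption based on the observed numbers: the Precision X memory offset
# appears to apply at half the rate shown by GPU-Z's base memory clock,
# so a +200 MHz offset produces a +100 MHz change in the GPU-Z reading.
def gpuz_clock_mhz(gpuz_base_mhz: int, precision_x_offset_mhz: int) -> int:
    """Base memory clock GPU-Z would report after applying a Precision X offset."""
    return gpuz_base_mhz + precision_x_offset_mhz // 2

# A card stuck at 1900MHz needs a +200 MHz offset to read 2000MHz again:
print(gpuz_clock_mhz(1900, 200))  # 2000
```

In other words, double whatever bump you want to see in GPU-Z when entering the offset in Precision X.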

We also applied some overclocks to the cards and, as you can see, overclocking really does help the hashrate you can get from each card. The NVIDIA GeForce GTX 1070 FE we have was a monster, able to take a +640 MHz overclock on its Samsung-branded GDDR5 memory for 33.0 MH/s. The new NVIDIA GeForce GTX 1070 Ti FE we used has Micron GDDR5 memory and could only manage a +425 MHz memory overclock for 31.4 MH/s. We don’t have enough cards on hand to get into a Samsung versus Micron GDDR5 debate, but those are the results.

We also have an EVGA GeForce GTX 1070 Ti FTW2 graphics card at our disposal and ran it overnight for 8.5 hours with the memory overclocked as high as we could with complete stability. We managed to drop the power target down to 60% and overclock the memory by +625 MHz on the Micron-branded GDDR5 (an actual +425 MHz due to the bug we covered above). This took our scores from around 28.3 MH/s in stock form up to 31.3 MH/s. The NVIDIA GeForce GTX 1070 Ti FE memory also reached this level and was running 31.4 MH/s, so it looks like that is roughly the hashrate you’ll get from an overclocked 1070 Ti video card. That is slightly more than a 10% performance increase, and we’ll take it.
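For anyone checking the "slightly more than 10%" figure, the gain works out from the stock and overclocked hashrates quoted above:

```python
# Percentage hashrate gain on the EVGA GTX 1070 Ti FTW2 from the overnight run.
stock_mhs = 28.3        # stock hashrate from the article
overclocked_mhs = 31.3  # overclocked hashrate from the article

gain_pct = (overclocked_mhs - stock_mhs) / stock_mhs * 100
print(round(gain_pct, 1))  # 10.6 (percent)
```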

The EVGA GeForce GTX 1070 Ti FTW2 card had the GPU sitting at 65C, with the memory and VRM temps also at 65C, after almost 9 hours of mining on an open-air bench. Those are not bad numbers, and the fans were spinning at just 634 RPM, so this was nearly silent mining. Power consumption on the GeForce GTX 1070 Ti was respectable. Our Intel X99 workstation board is power hungry, so at idle the whole system was pulling 149 Watts. With the card at stock settings we were seeing 291 Watts. Lowering the power target to 65% dropped that down to 262 Watts; a further reduction to 50% got the number down to 242 Watts, but it wasn’t stable. Both 1070 Ti cards deliver the same general performance, but the EVGA GeForce GTX 1070 Ti FTW2 Gaming runs $499.99 shipped while the NVIDIA GeForce GTX 1070 Ti Founders Edition runs $449.99 shipped. For mining, the blower-style card at $50 less seems the easy choice.
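Subtracting the idle system draw from the wall-power readings above gives a rough card-only estimate, which in turn gives a ballpark mining efficiency. This is an approximation, since system overhead at idle and under load won't be identical:

```python
# Rough card-only power draw and mining efficiency from the wall readings.
IDLE_SYSTEM_W = 149  # whole-system idle draw measured at the wall

def card_watts(system_watts: int) -> int:
    """Estimated card draw: wall power under load minus idle system draw."""
    return system_watts - IDLE_SYSTEM_W

def efficiency_mhs_per_watt(hashrate_mhs: float, system_watts: int) -> float:
    """Hashrate per estimated card watt."""
    return hashrate_mhs / card_watts(system_watts)

print(card_watts(291))  # ~142 W with the card at stock settings
print(card_watts(262))  # ~113 W with the power target at 65%
print(round(efficiency_mhs_per_watt(28.3, 291), 2))  # ~0.2 MH/s per watt at stock
```

The sizeable wall-power drop for a small hashrate cost is why miners routinely run lowered power targets.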

At the end of the day, the best bang for the buck remains with the NVIDIA GeForce GTX 1070 series. With a suggested retail price of $379 on board-partner cards, it remains the best NVIDIA card for mining in our opinion. We got 33 MH/s from our overclocked GeForce GTX 1070 and weren’t able to match that with the new GeForce GTX 1070 Ti cards. The extra CUDA cores on the 1070 Ti don’t do anything significant for cryptocurrency mining, so there’s no need to pay extra for them! This is also good news for gamers, as it should mean the GeForce GTX 1070 Ti won’t see inflated pricing and should remain available to purchase.

Be sure to check out our other articles on Ethereum mining: