On The Importance of GPU Mining

"Anybody can hold TAO. Anybody can use TAO. Anybody can own it - it's read-write-own.

Mining is just a little difficult, and that stems from the fact that AI is actually quite expensive."

- const, Based Space Podcast, Aug 2022

Our recent switch to GPU registration has had some key implications.

In short, we have raised the bar: both in terms of the expected costs for users, and in terms of the overall network utility and function.

Why GPUs?

CPU processing was just fine for a while.

Smaller models, like distilgpt2, run adequately on this hardware, and for the purpose of getting the network off the ground, it was enough. However, while CPUs are a powerful tool for generalized operations, they fall short when it comes to modern machine learning, which consists of repetitive operations over large matrices of numbers.
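
For a sense of what CPU-era mining looked like at the model level, here is a minimal sketch (not taken from the Bittensor codebase; the prompt text is purely illustrative) that loads distilgpt2 with the Hugging Face transformers library and runs inference entirely on the CPU:

```python
# Minimal sketch: running distilgpt2 on CPU with Hugging Face transformers.
# The prompt is illustrative only; this is not Bittensor miner code.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2")  # a small distilled GPT-2

inputs = tokenizer("The network rewards", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=20,
    pad_token_id=tokenizer.eos_token_id,  # avoid a padding warning
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

A model of this size completes a request like this in a reasonable time on commodity CPUs, which is exactly why it was enough for the early network.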

GPUs, on the other hand, excel at breaking complex tasks into smaller subtasks that can be performed simultaneously: they apply hardware suited to the problem, distribute the work across many cores, and run it in tandem. This is known as parallel processing, and neural networks are specifically designed for this kind of implementation.

GPUs, therefore, enable larger models, with massive numbers of parameters, to run on the network.

In addition, while a CPU may have 16, 32, or 64 cores, most GPUs have hundreds, sometimes thousands.

This means GPUs can process the massive amounts of data that modern machine learning relies on, and since more data (generally) means better results, it's obvious why GPUs are the superior choice.
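
To make the difference concrete, here is a rough sketch (it assumes PyTorch and a CUDA-capable GPU; the matrix size is arbitrary and actual timings will vary by hardware) that runs the same large matrix multiplication first on the CPU and then on the GPU:

```python
# Rough sketch: the same matrix multiplication on CPU vs. GPU.
# Assumes PyTorch is installed; the GPU branch runs only if CUDA is available.
import time
import torch

a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

start = time.time()
_ = a @ b  # executed across a handful of CPU cores
print(f"CPU: {time.time() - start:.3f}s")

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()          # wait for the transfer to finish
    start = time.time()
    _ = a_gpu @ b_gpu                 # spread across thousands of CUDA cores
    torch.cuda.synchronize()          # wait for the kernel before timing
    print(f"GPU: {time.time() - start:.3f}s")
```

The operation is identical in both cases; the GPU simply has far more cores to throw at it in parallel, which is the whole argument in miniature.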

What does this mean for the network?

Network users will have to upgrade their hardware in order to stay registered and competitive, which will be costly. This is a non-ideal effect given our aim to keep the network as open and accessible as possible, but ultimately, we won't be able to achieve SOTA performance using CPU hardware.

On the bright side, however, the network will be graduating to a higher caliber of performance, and this will outweigh the former drawback, because the performance of individual models in the network contributes directly to overall network value.

A single, high functioning model in the network:

  • Becomes immediately available to any client holding TAO
  • Incentivizes all other nodes in the network to improve their performance in order to remain competitive
  • Increases the value of all extensions built onto the network (for example, the Playground)

These effects lead directly to an increase in the inherent value and functionality of the network, which increases the value of TAO.

And this:

  • Distributes more resources back to everyone participating in the network
  • Draws more innovative minds (and resources) to the network

And the cycle repeats.

Finally, since TAO confers network ownership, an increase in the value of the network is value owned by the users themselves, without privacy infringements or bureaucratic hindrances.

Ultimately, network effects exponentiate value, while web 3.0 architectures distribute ownership.

Time to upgrade.
