Based Space : Compressed

A few weeks ago, const and carro appeared on the Based Space Podcast.

It went well, so we edited, transcribed and pared it down for your reading enjoyment. Sometimes ideas are best expressed spontaneously, in the context of a conversation.

To create something digestible, this transcription leaves a fair bit of interview out, so refer to the original podcast if you want a deeper dive into topics like the Playground, running inference, the company background, and the Synapse Update.

Q: For those who may not be familiar with the Bittensor protocol, could you explain it at a high level?

I think the framing that people understand the most is that we're like a Bitcoin mining network. It's a peer-to-peer system where the computers (nodes) are expending computational power to produce something of value - but instead of mining hashes, we're mining this intelligence element, and there is validation of this element. So by creating (essentially) a market, we are creating a network of computers all working together to produce this neural element.

So that's what it looks like:

We define a way of validating this thing we want. And then we let the hive mind fill in the gaps and solve the problems we want it to solve.

At this moment we're focusing on textual understanding. We want the computers to be able to understand text - generate it, produce representations of it, summarize it, etc. And this is a general product that is valuable for a lot of downstream tasks.

We have begun to build several products - well, one specific one - called the Playground.

Q: What are the main problems the network is working to solve?

Right now we’re solving one main problem.

But it just so happens this problem is a foundational problem for many others. That’s what is so interesting about unsupervised learning: it allows you to train your model to solve a problem (which is useful) even if it isn't the direct problem you need to solve. So for example, you can take a terabyte corpus of text, compress it into a machine learning model and give it a prior understanding of language, and then you can branch off from this foundational understanding and solve any language problem that you want.

What we’re trying to create is a kind of neural internet – a pool or ocean into which (AI) applications can dip to solve their specific problems - and use this network of collaborative intelligence.

Q: Why is Bittensor needed? What is the problem you are solving with this technology?

There is value accrued by a machine learning model in the form of its representation: the way it looks at the world.

But right now the machine learning world is very disconnected. There are many models, but they’re not talking to each other or learning from each other. So we want to break down the door and connect them. We want to connect the engineers, and we want to start working on a collective intelligence at the level of humanity rather than on the level of the individual.

We're doing this for efficiency reasons - but it’s more than that.

We’re also nesting this pursuit of AGI within a market system, and that is going to stimulate innovation as people are driven by self-interest to solve the micro problems in the network. We think this will hyperdrive the pursuit of artificial intelligence.

So the thing we are bringing to the table is this efficient way of creating, sharing and storing machine intelligence, and making it accessible and open to people. That doesn’t exist (yet).

Q: Can you explain the roles of the Validators and Servers in the system?

There are two main node types. There are (nodes) solely producing machine intelligence and there are (nodes) solely validating. The miners (Servers) are language models that are trying to produce representations that the Validators find valuable.

The Validators are attempting to figure out who in the network is valuable, according to a universally accepted dataset. They are trying to evaluate who the most performant miner (Server) is, so that they can purchase bonds in it and maximize their inflation (earnings) in the system.

Q: Where does all the data originate from?

It can come from individual nodes and entities, but in practice, we provide a dataset that is parsed and ready for use, and we source that from the web. A majority of the text comes from a dataset called The Pile, which was produced by an open-source community called EleutherAI. We also append to that dataset.

The dataset itself exists to construct the incentive landscape that the miners (Servers) are working through.

Q: And there's a scoring mechanism involved. Can you touch on that and how that plays into the consensus algorithm?

So there’s a type of machine learning model called a mixture-of-experts model where you take a bunch of different machine learning models and combine them to produce a larger machine learning model. You can then query a certain section of the network to get a specific response and feed that response through another section of the network to sift out valuable information.

So we use that technique as the core technology for picking which peers to speak to first. The Validators are the head of this neural architecture and the miners (Servers) are the foundation. For every input, they select who they want to query, and then they use those responses to learn who is most valuable for solving a specific problem. If a (Server) is not producing something of value, it is identified as noise and removed from the active set of peers being routed to.
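The selection step described above can be sketched in a few lines. This is an illustrative toy, not the on-chain algorithm: the peer names and raw scores are hypothetical, and the softmax gate plus a noise-floor cutoff is just one simple way to model "query the top peers, drop the noise" from a mixture-of-experts-style gating.

```python
import math

def softmax(scores):
    """Normalize raw scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def select_peers(peer_scores, k=3, noise_floor=0.05):
    """Pick the top-k peers to query; any peer whose gate weight
    falls below the noise floor is treated as noise and dropped."""
    weights = softmax(list(peer_scores.values()))
    ranked = sorted(zip(peer_scores.keys(), weights),
                    key=lambda kv: kv[1], reverse=True)
    return [(peer, w) for peer, w in ranked[:k] if w >= noise_floor]

# Hypothetical learned scores a Validator holds for four Servers.
scores = {"server_a": 2.0, "server_b": 1.5,
          "server_c": 0.3, "server_d": -1.0}
top = select_peers(scores, k=3)
print(top)
```

Here `server_d`'s weight falls below the floor, so it never gets queried - the same pruning of unproductive peers described above.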

We (use this information) to produce a global scoring for each (Server) and then distribute emissions (TAO) to them at each block: one TAO every 12 seconds, which is one block.

Q: When you think of consensus though, it’s usually in terms of a blockchain - the state of a ledger - but this is the state of the quality of nodes in the ecosystem. Is that a fair assessment?

Exactly. The “consensus” here is amongst the Validators.

They determine collectively what the network is working on, and reach a consensus on what (and who) they are going to reward.

Q: How do (TAO) emissions work in the network?

We picked something conservative and went with the Bitcoin emissions curve. That’s not our innovation as a company.

So we have approximately four-year halving cycles and a 21 million cap on our tokens. Each block - one TAO, released every 12 seconds - we rank the Servers, take the top-performing ones and distribute half of the TAO in proportion to their value to the system; the other half goes to the Validators.
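The per-block split described above is simple arithmetic, sketched here for concreteness. The server names and score values are hypothetical, and the real on-chain weighting is more involved - this only shows the half-to-Servers (pro rata by score), half-to-Validators division of one block's emission.

```python
BLOCK_TIME_S = 12        # one block every 12 seconds
TAO_PER_BLOCK = 1.0      # one TAO emitted per block
SUPPLY_CAP = 21_000_000  # hard cap, mirroring Bitcoin

def split_block_emission(server_scores):
    """Split one block's emission: half to the Servers,
    pro rata by their normalized score, half to the Validators."""
    server_pool = TAO_PER_BLOCK / 2
    validator_pool = TAO_PER_BLOCK / 2
    total = sum(server_scores.values())
    server_payouts = {s: server_pool * v / total
                      for s, v in server_scores.items()}
    return server_payouts, validator_pool

# Hypothetical normalized scores for three top-ranked Servers.
payouts, validator_share = split_block_emission(
    {"server_a": 0.6, "server_b": 0.3, "server_c": 0.1})
print(payouts, validator_share)
```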

Q: What is the purpose of the TAO token? Why would somebody want to hold TAO, say for building an application on top of the network?

To start building an application, you need to hold TAO. For example, we have an application running - the Alpha Playground - on our main network, Nakamoto. To have enough request bandwidth to make that application worthwhile, you have to hold a certain amount of TAO.

That’s where the utility comes in for the token.

You are given access to a whole library of machine learning models to build an application on top of. You could, say, build a DALL-E 2 on top of the network - and get paid for it. Or someone can query the network for image generation. That’s how we imagine the structure will evolve. Holding TAO allows you to run inference on the network.

It will require a fair amount of TAO to have high fidelity, and if this is the future for the project, the large holders of TAO can start building AI companies and applications, monetizing their TAO by Validating with it and using that to shift the network to adapt to their needs. The Playground, again, is a good example of what we see this turning into.

Q: I know that the Polkadot Parachain is on the roadmap for this year. Explain how this will help investors and users. Will TAO be listed on many indexes? Or just one?

TAO will be listed on several indexes. As soon as we get into the Polkadot ecosystem, people can begin writing contracts with TAO on different chains. We will be opening it up to the world at that point, and we'll lose a lot of control. But that's where this (project) is going.

Q: Very cool. So to move toward some of the use cases. What are some of the projects you'd like to see built on top of it?

We want to see a DALL-E competitor. We want a CODEX written into Bittensor. Applications that the layperson is using, a network for the people, by the people. So we want those kinds of applications to be plugged into Bittensor and start to feel around the edges of what can be done.

Because we have this great scale. We have so many models in the network. We're not just building a narrow band machine intelligence system, we're building a whole community of machine learning models. We are interested to see what people can do creatively by tapping into this diversity.

Q: So imagine we wanted to build a business (on top of Bittensor). What would that process look like?

You would need application developers, and you would need people like Carro who have an eye for what people want to do. And then you would build that company on top of your Validators. So it really comes down to having an idea - you have to come up with your trick - and then build a company around that.

Because if you are a large holder of TAO, the problem you would otherwise have - needing a lot of compute - is already solved. You have a network of computers willing to adapt to your specific problem, a network of computers willing to do inference for you 24/7. And you don't even really have to pay for it.

So there you go, there's the foundation for your AI company, should you ever come up with an application idea. And we hope that you use Bittensor, and if you go through the process, you can teach us about what we need to do to make it easier.
