On Vitalik Buterin on Trust Models
In a recent article, Vitalik Buterin has offered an interesting analysis of trust. While I think his article is a good starting point for discussion, I take issue with his definition of trust, as well as with his vision for the way forward.
What is Trust?
According to Vitalik, we should understand trust as follows (we will call it vTrust, for Vitalik Trust).
- vTrust: trust is the use of any assumptions about the behavior of other people.
This is a perfectly fine operational definition of “trust” for certain applications, but it doesn’t work in general, and it doesn’t seem to work in the context of cryptocurrencies and blockchain technology.
The principal problem is that having expectations about someone’s behavior is not the same as trusting them to behave that way. For example, I expect everyone reading this to use the bathroom today, but it would be weird to say that I trust them to use the bathroom. That is because when we trust people to do something, we don’t merely expect them to do that thing. Our expectation is that they will follow a given norm (for example, don’t rug pull) even lacking external incentives to do so (and possibly given external incentives to do otherwise).
For example, take a famous recent case from DeFi, in which a dev named Chef Nomi exited his protocol with 14 million USD worth of dev funds (he has since returned them). Up until that exit, many people presumably trusted Chef Nomi to do the right thing, and this despite the fact that he was heavily incentivized to rug pull (14 million dollars is a lot of money!). Our expectation was that he would act contrary to his incentives. Why? Because that is how trust works.
Let’s formalize this definition of trust, calling it kTrust (for Kantian trust, as the core idea is basically Kantian).
- kTrust: trust is the expectation that an agent will act in accordance with a given norm (e.g. don’t rug pull) even when incentivized to do otherwise.
We will find it useful to think in terms of degrees of trust. We might trust a dev to not rug pull for a measly 10 thousand dollars, but we might run out of trust if the external incentives increased to a million or a billion dollars. When we say we trust someone “to a degree,” we mean that we trust the person to follow norms in spite of contrary external incentives, but there are limits. Perhaps all trust is conditional in this way. We only trust people to certain limits.
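This idea of conditional trust can be put in a simple threshold form. The following sketch is a toy formalization of my own (the dollar figures are the illustrative ones from above, not empirical data): we trust an agent to follow a norm only while the external incentive to defect stays below some personal limit.

```python
# Toy model of kTrust with limits: we trust an agent to follow a norm
# only while the incentive to defect stays under a trust threshold.
# The threshold and amounts are illustrative assumptions, not data.

def trusted_to_follow_norm(incentive_to_defect: float, trust_limit: float) -> bool:
    """Return True if we still expect norm-following at this incentive level."""
    return incentive_to_defect < trust_limit

# We might trust a dev against a $10k temptation, but not a $14M one.
trust_limit = 1_000_000
print(trusted_to_follow_norm(10_000, trust_limit))      # True
print(trusted_to_follow_norm(14_000_000, trust_limit))  # False
```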
Given that people can and do violate trust, it is no wonder that we seek to engineer around the need for trust in others. For example, we are told that the new crypto applications are “trustless”. But this is just wishful thinking. Elsewhere I have argued that these new protocols don’t eliminate the need for trust (kTrust); they merely reassign it. So, for example, we no longer trust bankers to verify our money transfers, but we still need to trust code auditors to reliably evaluate and report on the soundness of the code in our DeFi applications. Even genius devs have to rely on auditors to verify the reliability of their code and not to do a half-assed job in those evaluations.
So, what does this tell us about Vitalik’s take on trust? The problem with Vitalik’s vision is that it seeks to engineer around trust, even when this is not possible. We must also be prepared to evaluate the ethical character of the people we do business with. This is not an engineering problem. Or in any case, if it is an engineering problem, it is one that is “AI-complete”: to solve it, you would first have to solve every problem in AI.
This leads us to Vitalik’s notion of trust models. Vitalik reasons that the real issue is not eliminating trust but minimizing the amount of trust necessary for the system to work. In the ideal scenario, one would need zero points of trust. In the worst-case scenario, one relies on a single person and must trust that person (a potential Bernie Madoff scenario). In general, the smaller the percentage of agents we must trust, the better off we are, as represented in the following diagram.
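Vitalik’s spectrum of trust models can be sketched as “k of N” assumptions: the system works if at least k of its N participants are honest, and it is safer the smaller the fraction k/N. The sketch below is a rough paraphrase of that spectrum, not Vitalik’s exact taxonomy; the numbers are illustrative.

```python
# Sketch of the "k of N" trust-model spectrum: a system is safer
# the smaller the fraction k/N of participants that must be honest.
# Labels and numbers are illustrative, not Vitalik's exact taxonomy.

def trust_fraction(k: int, n: int) -> float:
    """Fraction of participants whose honesty the system depends on."""
    return k / n

models = {
    "1 of 1 (single trusted party, Madoff scenario)": (1, 1),
    "N of N (everyone must be honest)":               (100, 100),
    "majority of N":                                  (51, 100),
    "1 of N (one honest participant suffices)":       (1, 100),
    "0 of N (the 'trustless' ideal)":                 (0, 100),
}

# Print models from least to most demanding of trust.
for label, (k, n) in sorted(models.items(), key=lambda kv: trust_fraction(*kv[1])):
    print(f"{trust_fraction(k, n):5.2f}  {label}")
```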
The way Vitalik sees it, we can’t actually eliminate trust, but we can steer ourselves into the green zone, mitigating by degrees our reliance on trust.
The Problem of MetaTrust
What can we say about this? I believe that the point of failure in Vitalik’s analysis arises with the issue of metaTrust. We might have a system that requires 0 of N trustworthy agents, but how do we know that this is in fact the constitution of our system? We rely on code auditors and tech friends to be trustworthy in supplying these evaluations. A system that needs only one in one million agents to be reliable is useless if we cannot be sure that we are in such a system. And thus we trust auditors to confirm that our system is so engineered.
In a nutshell, this is the problem: It doesn’t matter if we believe we are in a system that needs only one in one million trustworthy actors if we can’t trust that we are in such a system. And we typically only know we are in such a system (when we do know) because one or a handful of auditors have confirmed it is so. Indeed, we may find that the systems being audited are so complex that we actually need N out of N auditors to be reliable in order for us to know that we are in a trustworthy state. Thus metaTrust, which is necessary for the whole system to work, would be in Vitalik’s red zone.
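The metaTrust point can be made concrete with a toy weakest-link calculation (my own illustration, assuming a nominally trustless protocol verified by three auditors who must all be reliable): the effective trust demand of the whole arrangement is at least that of its audit layer.

```python
# Toy illustration of metaTrust: the effective trust fraction of a
# system is bounded below by the trust fraction of its audit layer,
# since we need both the protocol and its verification to hold up.

def effective_trust(system_fraction: float, audit_fraction: float) -> float:
    """Weakest link: both layers are required, so take the larger demand."""
    return max(system_fraction, audit_fraction)

system = 0.0    # "0 of N": the protocol itself is nominally trustless
audits = 3 / 3  # but all three auditors must be reliable (N of N)
print(effective_trust(system, audits))  # 1.0 -- back in the red zone
```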
We have, in effect, swept our need for trust under the rug, where it is difficult to see, but it is still there, and is now more dangerous precisely because it is difficult to see.
Cultivating a Culture of kTrustworthiness
If the need for kTrust can’t be eliminated and it can’t be engineered around, is there anything to be done? Yes!
Traditional finance has not failed because it has too many points of trust; it has failed because it has cultivated a culture in which greed is good and trust is for schlubs. “If you get burned, it is your fault.”
The problem for us is that this attitude has been taken on board by too many people in the DeFi space. “Code is law, and if you didn’t see the rug pull coming, that is on you.” But kTrust is ineliminable. DeFi, and crypto in general, is not trustless; it merely offers a reassignment of trust.
Members of the DeFi community today have the opportunity to show that, unlike traditional finance, they value trustworthiness and plan to inculcate it as an important norm, one taken up by the community as a whole.
If the community cannot do this, DeFi will fail just as surely as Centralized Finance is now failing.