Coordination, that is, a large group of actors working together for the common good, is one of the most powerful forces in the universe. For example, if we all work together to keep global temperatures down, the total temperature rise will be much smaller. It is this force that allows companies, countries, and indeed any social organization larger than a few people to exist at all.
Coordination can be improved in many ways: faster spread of information; better norms identifying which behaviors count as cheating, along with more effective punishments; stronger organizations; tools such as smart contracts that allow interactions with reduced levels of trust; governance technologies (voting, shares, decision markets...); and so on. Indeed, we as a species have been getting better at all of these in recent decades.
However, coordination also has a deeply counterintuitive dark side. While it is true that "everyone coordinating with everyone" is far better than "every man for himself," that does not mean that every single step toward more coordination is beneficial. If coordination improves unevenly, the results can easily be harmful.
So what do these dangerous forms of partial coordination look like, and when does a group coordinating among themselves turn deeply harmful to everyone else? This is best illustrated by examples:
Citizens of a nation bravely sacrificing themselves for the national interest in war... when that nation is WW2-era Germany or Japan;
A lobbyist bribing a politician in exchange for that politician adopting the lobbyist's preferred policies;
Someone selling their vote in an election;
All the sellers of a product in a market colluding to raise prices at the same time;
Large miners of a blockchain colluding to launch a 51% attack.
In all of these cases, we see a group of people coming together and cooperating with each other, but to the great detriment of those outside the circle of coordination, and hence to the net detriment of the world as a whole. In the first case, the victims were everyone outside the aggressor nations, who suffered heavily as a result. In the second and third cases, it is the people affected by the decisions of corrupted voters and politicians; in the fourth case, the customers; and in the fifth case, the non-participating miners and the blockchain's users. This is not an individual defecting against a group; it is a group defecting against a broader group, often the world as a whole.
This kind of partial coordination is often called "collusion," but it is worth noting that the range of behaviors we are discussing is quite broad. In everyday language, the word "collusion" usually describes relatively symmetrical relationships, but several of the cases above are strongly asymmetric. Even an extortionate relationship ("vote for my preferred policy, or I will publicly reveal your affair") is a form of collusion in this sense. In the rest of this article, we will use "collusion" generally to mean "unwanted coordination."
Evaluate intent, not behavior (!!)
An important feature of the milder collusion cases is that one cannot determine whether an action is part of an unwanted collusion just by observing the action itself. The reason is that the action a person takes is a combination of that person's internal knowledge, goals, and preferences with the incentives imposed on them from outside. As a result, the actions people take when colluding often overlap with the actions they would take voluntarily (or when coordinating in benign ways).
For example, consider a case of collusion between sellers (a violation of antitrust law). Operating independently, each of three sellers might set the price of a product somewhere between $5 and $10, the differences in the range reflecting hard-to-see factors such as the sellers' internal costs, their willingness to work at different wages, supply-chain issues, and so on. If the sellers collude, however, they might set the price between $8 and $13, the range once again reflecting different possibilities for internal costs and other hard-to-see factors. If you see someone selling the product for $8.75, are they doing something wrong? Without knowing whether they coordinated with the other sellers, you cannot tell! Enacting a law that forbids selling the product for more than $8 would be a bad idea: there may be legitimate reasons why prices have to be high right now. But enacting a law against collusion, and successfully enforcing it, gives the ideal outcome: you get the $8.75 price if prices must be that high to cover sellers' costs, but you do not get it if the factors driving prices up are naturally low.
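To make the overlap concrete, here is a minimal Python sketch using the hypothetical price ranges from the example above. Both the honest range and the colluding range contain $8.75, so that observation alone cannot distinguish the two cases; only prices outside the overlap are informative.

```python
# Hypothetical price ranges from the example: independent sellers price
# in [5, 10], colluding sellers in [8, 13]. The ranges overlap on [8, 10],
# so any price in that window is consistent with either behavior.
HONEST_RANGE = (5.0, 10.0)
COLLUDING_RANGE = (8.0, 13.0)

def could_be_honest(price: float) -> bool:
    lo, hi = HONEST_RANGE
    return lo <= price <= hi

def could_be_colluding(price: float) -> bool:
    lo, hi = COLLUDING_RANGE
    return lo <= price <= hi

for price in (6.00, 8.75, 12.00):
    print(price, could_be_honest(price), could_be_colluding(price))
```

Only a price like $6.00 (clearly honest) or $12.00 (only explainable by collusion) identifies the regime; $8.75 is consistent with both, which is exactly why the law must target the act of colluding rather than the price itself.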
The same applies to bribery in elections: it is quite likely that some people will vote for the Orange Party legitimately, while others vote for the Orange Party because they were paid to. From the perspective of those designing the voting mechanism, they do not know in advance whether the Orange Party is good or bad. What they do know is that an election where people vote their honest feelings works reasonably well, while an election where voters can freely buy and sell their votes works terribly. This is because vote selling suffers from a tragedy of the commons: each voter captures only a small share of the benefit of voting correctly, but would capture the entire bribe by voting the way the briber wants. Hence, elections that allow vote selling quickly collapse into a game for the rich.
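The arithmetic behind this tragedy of the commons can be sketched with toy numbers (invented for illustration, not from the article): each voter captures only a millionth of the public benefit of a correct outcome, so a bribe that is small relative to the total damage is still individually attractive, and buying a majority costs the briber far less than the harm inflicted.

```python
# Toy numbers (illustrative only): a correct outcome is worth 10,000,000
# to a community of 1,000,000 voters, so each voter's personal share of
# the public benefit is just 10. A bribe of 15 per vote beats that share.
community_value = 10_000_000   # total public value of the right outcome
num_voters = 1_000_000
per_voter_benefit = community_value / num_voters  # 10.0

bribe = 15  # per-vote payment offered by the briber

# The individual voter's comparison: full bribe vs. a tiny public slice.
sell_payoff = bribe
honest_payoff = per_voter_benefit
print("selling is individually rational:", sell_payoff > honest_payoff)

# Yet the briber can buy a majority for less than the damage done:
cost_to_buy_majority = bribe * (num_voters // 2 + 1)
print("majority costs less than the harm:",
      cost_to_buy_majority < community_value)
```

Each voter rationally sells, the briber wins, and the community as a whole loses far more than the bribes cost: a textbook commons failure.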
Understanding game theory
We can zoom in further and look at this through the lens of game theory. In the version of game theory that focuses on individual choice (the version that assumes each participant makes decisions independently and does not allow groups of agents to work as one for their mutual benefit), there are mathematical proofs that at least one stable Nash equilibrium must exist in any game, and mechanism designers have very wide latitude to "engineer" games to achieve specific outcomes. But in the version of game theory that allows coalitions to cooperate (that is, to "collude"), called cooperative game theory, one can prove that there are large classes of games with no stable outcome (called a "core") from which no coalition can profitably deviate.
Majority games are an important part of this set of inherently unstable games. A majority game is formally described as a game of N agents in which any subset of more than half of them can capture a fixed payoff and split it among themselves, a setup eerily similar to many situations in corporate governance, politics, and human life in general. That is to say, if there is a fixed pool of resources and some currently established mechanism for distributing them, and it is unavoidably possible for 51% of the participants to conspire to seize control of the resources, then no matter what the current configuration is, some conspiracy always exists that is profitable for its participants. That conspiracy, however, would in turn be vulnerable to potential new conspiracies, possibly including combinations of previous conspirators and victims... and so on.
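This instability can be demonstrated directly. The sketch below (a standard majority-game setup with invented numbers) shows that for any allocation of a fixed pool among three agents, the majority coalition of the currently worst-paid agents can always deviate so that every member strictly gains, and the resulting allocation is itself unstable in exactly the same way.

```python
# Illustration of why majority games have an empty core: any coalition
# of more than half the agents can seize a fixed pool and re-split it,
# so every allocation can be profitably overturned.
POOL = 100.0

def profitable_majority_deviation(allocation):
    """Given an allocation of POOL among n agents, return a majority
    coalition plus a new allocation under which every coalition member
    strictly gains, or None if no such deviation exists."""
    n = len(allocation)
    majority = n // 2 + 1
    # The cheapest coalition to assemble: the agents currently paid least.
    members = sorted(range(n), key=lambda i: allocation[i])[:majority]
    surplus = POOL - sum(allocation[i] for i in members)
    if surplus <= 0:
        return None  # coalition already holds the whole pool
    bonus = surplus / majority
    new_allocation = [0.0] * n
    for i in members:
        new_allocation[i] = allocation[i] + bonus
    return members, new_allocation

# Start from an equal split among 3 agents: deviations chain forever,
# since each new "winning" allocation excludes someone who held coins.
alloc = [POOL / 3] * 3
for _ in range(3):
    members, alloc = profitable_majority_deviation(alloc)
    print(members, [round(x, 2) for x in alloc])
```

Running this cycles through allocations like [50, 50, 0], [75, 0, 25], [0, 37.5, 62.5], and so on: no configuration is ever safe from the next conspiracy.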
This fact, the instability of majority games under cooperative game theory, is arguably a highly underrated simplified mathematical model of why there may well be no "end of history" in politics, and why no system ever proves fully satisfactory. I personally find it far more useful than the more famous Arrow impossibility theorem, for example.
Note once again that the core dichotomy here is not "individual versus group"; for a mechanism designer, "individual versus group" is surprisingly easy to handle. The real challenge is "group versus broader group."
Decentralization as anti-collusion
This line of thinking leads to another, smarter and more operational conclusion: if we want to build stable mechanisms, an important ingredient is finding ways to make collusion, especially large-scale collusion, more difficult to arise and to sustain. In the case of voting, we have the secret ballot, a mechanism that ensures voters have no way to prove to a third party how they voted, even if they want to (MACI is a project that uses cryptography to extend secret-ballot principles to an online environment). This breaks trust between voters and bribers and greatly limits the unwanted collusion that can occur. In antitrust and other cases, we often rely on whistleblowers, and even reward them, explicitly incentivizing participants in harmful collusions to defect. And for broader public infrastructure, we have one deeply important concept: decentralization.
A naive view holds that decentralization is valuable because it reduces the risk of a single point of technical failure. In traditional "enterprise" distributed systems this is often actually true, but in many other cases we know it is not enough to explain what is going on. Blockchains are instructive here: a large mining pool publicly demonstrating how it distributes its nodes and network dependencies internally does nothing to quell community members' fear of mining centralization.
From the "decentralization is fault tolerance" point of view, large miners being able to talk to each other causes no harm. But if we view "decentralization" as a barrier against harmful collusion, the picture becomes quite scary, because it shows that the barrier is not as strong as we thought. In reality, these miners can easily coordinate technically; they may well all be in the same WeChat group, which would actually mean that Bitcoin is "not much better than a centralized company."
So what are the remaining barriers to collusion? Some of the main ones include:
Moral barriers. In Liars and Outliers, Bruce Schneier reminds us that many "security systems" also serve a moral function, reminding potential misbehavers that they are about to commit a serious transgression and that, if they want to be a good person, they should not. Decentralization arguably helps serve this function.
Internal negotiation failure. Individual companies may start demanding concessions in exchange for participating in the collusion, which can cause negotiations to stall outright (see "holdout problems" in economics).
Anti-coordination. The fact that the system is decentralized makes it easy for participants outside the collusion to make a fork, drive out the colluding attackers, and continue the system from there. The barrier for users to join the fork is very low.
Defection risk. It is much harder for five companies to join together to do something widely regarded as bad than to join together for an uncontroversial or benign purpose. The five companies do not know each other that well, so there is a risk that one of them refuses to participate and blows the whistle quickly, and participants have a hard time judging that risk. Individual employees within the companies may also blow the whistle.
Taken together, these barriers are substantial indeed, often substantial enough to stop potential attacks, even when those same five companies are simultaneously perfectly capable of quickly coordinating to do legitimate things. Ethereum blockchain miners, for example, are perfectly capable of coordinating increases to the gas limit, but that does not mean they could anywhere near as easily collude to attack the chain.
The blockchain experience shows that designing a protocol as a decentralized system, even when it is known in advance that most activity will be dominated by a few companies, is often a very valuable thing. This idea is not limited to blockchains; it can be applied in other contexts as well (for example, see here for applications to antitrust).
Forking as anti-coordination
But we cannot always effectively prevent harmful collusions from arising. To handle the cases where they do arise, it is best to make systems more robust against them: more costly for those colluding, and easier for the system to recover from.
We can achieve this with two core operating principles: (1) supporting anti-coordination and (2) skin in the game. The idea behind anti-coordination is this: we know we cannot design systems that are passively robust to collusion, in large part because there are so many ways to organize a collusion and no passive mechanism that can detect them all. But what we can do is respond to collusions actively and strike back.
In digital systems such as blockchains (this also applies to more mainstream systems such as DNS), the single major and crucially important form of anti-coordination is forking.
If a system gets taken over by a harmful coalition, the dissidents can come together and create an alternative version of the system with (mostly) the same rules, except that it removes the attacking coalition's power to control the system. Forking is very easy in an open-source software context. The main challenge in creating a successful fork is usually gathering the legitimacy (in game-theoretic terms, a form of "common knowledge") needed to get everyone who disagrees with the main coalition's direction to follow along.
This is not just theory; it has been carried out successfully in reality. The best-known example is the Steem community's resistance to a hostile takeover attempt, which led to a new blockchain called Hive in which the original antagonists hold no power.
Markets and skin in the game
Another class of collusion-resistance strategy is the idea of skin in the game: in this context, essentially any mechanism that holds individual participants individually accountable for their contributions. If a group makes a bad decision, those who approved the decision must suffer more than those who tried to oppose it. This avoids the tragedy of the commons inherent in voting systems.
Forking is a powerful form of anti-coordination precisely because it introduces skin in the game. In Hive, the community fork of Steem that cast off the hostile takeover attempt, the coins that voted in favor of the takeover were deleted in the fork; that is, the key players who participated in the attack personally suffered losses.
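A minimal sketch of this kind of skin-in-the-game settlement (the participant names and the full-slashing rule are illustrative assumptions, loosely modeled on the Hive fork described above): stakers who backed a decision later judged harmful lose their stake, while dissenters keep theirs.

```python
# Illustrative sketch: stake-weighted participants back or oppose a
# decision; if the decision is later judged harmful, backers are
# slashed, mirroring how the Hive fork zeroed out the attackers' coins.
SLASH_FRACTION = 1.0  # Hive-style: backers' coins removed entirely

def settle(stakes, votes, decision_was_harmful):
    """stakes: {participant: coins}; votes: {participant: True if they
    backed the decision}. Returns post-settlement balances."""
    balances = dict(stakes)
    if decision_was_harmful:
        for participant, backed in votes.items():
            if backed:
                balances[participant] -= SLASH_FRACTION * stakes[participant]
    return balances

stakes = {"attacker": 1000, "bystander": 200, "dissenter": 300}
votes = {"attacker": True, "bystander": True, "dissenter": False}
print(settle(stakes, votes, decision_was_harmful=True))
# everyone who backed the harmful decision is slashed; the dissenter is not
```

Note the asymmetry this creates: opposing a bad decision is free, while supporting one is expensive, which is exactly the individual accountability that plain one-coin-one-vote systems lack.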
All of this gives us an interesting view of what people who build social systems actually do. One of the goals of building an effective social system is, to a large extent, determining the structure of coordination: which groups, in which configurations, can come together to advance their group goals, and which groups cannot?
Different coordination structures, different results
Sometimes, more coordination is good: it is better when people can work together to collectively solve their problems. At other times, more coordination is dangerous: some participants may coordinate to disenfranchise everyone else. And at still other times, more coordination is necessary for a different reason: to enable the broader community to "strike back" against a collusion attacking the system.
In all three cases, there are different mechanisms for achieving these ends. Of course, it is very difficult to prevent communication outright, and it is very difficult to make coordination perfect; but between those two extremes there are many options with powerful effects.
Here are some possible coordination-structuring techniques:
Technologies and norms that protect privacy;
Technological means that make it difficult to prove how you behaved (secret ballots, MACI, and similar technologies);
Deliberate decentralization, distributing control of a mechanism across a large group of people who are known not to be well coordinated;
Decentralization in physical space, separating different functions (or different shares of the same function) into different physical locations (see, for example, Samo Burja on the connection between urban decentralization and political decentralization);
Separating different functions (or different shares of the same function) between different kinds of actors (for example, in a blockchain: "core developers," "miners," "coin holders," "application developers," "users");
Schelling points, allowing large groups of people to quickly coordinate around a single path forward;
Speaking a common language (or, alternatively, splitting control between multiple constituencies that speak different languages);
Using per-person voting instead of per-(coin/share) voting, to greatly increase the number of people who would need to collude to sway a decision;
Encouraging and relying on defectors to alert the public to upcoming collusions.
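As a rough illustration of the per-person voting point above (the coin distribution is invented for the example): under coin voting over a skewed distribution, a handful of whales can control a decision, while per-person voting always requires a majority of actual people to collude.

```python
# Toy comparison: how many *people* must collude to control a decision
# under coin voting vs. per-person voting, given a skewed distribution.

# A few whales plus many small holders (1004 people in total).
holdings = [5000, 3000, 2000, 500] + [10] * 1000

def min_colluders_coin_vote(holdings):
    """Fewest holders whose combined coins exceed half the total supply."""
    total = sum(holdings)
    count, acc = 0, 0
    for h in sorted(holdings, reverse=True):
        acc += h
        count += 1
        if acc > total / 2:
            return count
    return count

def min_colluders_person_vote(num_people):
    """Per-person voting: a majority of people is always required."""
    return num_people // 2 + 1

print(min_colluders_coin_vote(holdings))         # a handful of whales
print(min_colluders_person_vote(len(holdings)))  # hundreds of people
```

With these numbers, four whales suffice under coin voting, versus 503 people under per-person voting; larger collusions are harder to organize, harder to hide, and easier for defectors to expose.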
None of these strategies is perfect, but each can be used in various contexts with differing levels of success. Additionally, these techniques can and should be combined with mechanism design that makes harmful collusion less profitable and riskier; skin in the game is a very powerful tool in this regard. Which combination works best ultimately depends on your specific use case.