The VCG Mechanism
Posted: 2018-06-22.
This post reviews the famous Vickrey-Clarke-Groves mechanism: the canonical truthfulness-inducing auction (procedure involving monetary transfers) for making a group decision.
Setup
Here's the scenario: you're in a group of people -- $n$ of them -- who need to collectively make a decision from a set of alternatives $A$.
For example, maybe there's an item you all want, and the decision is whom to give the item (so $A = \{\text{Alex, Barbara, Cathy, etc}\}$). Or you're trying to elect the next president of your planet, and the alternatives are $\{\text{Lizard 1}, \text{Lizard 2}\}$. Or maybe you're trying to select a route through a network, and the alternatives are the set of all paths from $s$ to $t$.
Now, assume each of you "bidders" $i\in \{1,\dots,n\}$ has a valuation function $v_i: A \to \mathbb{R}$ where $v_i(a) = $ the value that bidder $i$ has for selecting alternative $a$. This is the utility or total amount of happy feelings you would get if we choose $a$. For example, if we are allocating a single item, you might have $v_i(a) = 100$ if $a=i$ (that is, if you get the item) and $v_i(a) = 0$ if $a\neq i$ (that is, somebody else gets it).
Next, assume each of you has quasilinear utility: If alternative $a$ is selected and you get a net amount of $c$ dollars, then your utility is $$ \text{utility} = v_i(a) + c $$ and the term quasilinear comes from the fact that utility = value plus money. For example, if you have $v_i(\text{Lizard 1}) = 8$ and $v_i(\text{Lizard 2}) = 10$, then you are indifferent between a world where Lizard 1 is selected and you get paid a dollar, and a world where Lizard 2 is selected and you pay a dollar. Either way, your utility is $9$. This assumption is very strong, but very common in mechanism design theory. We mostly expect it to hold if values $v_i$ are small relative to the individual's budget, and it is probably not a good assumption otherwise.
Now assume each individual's goal is always to maximize (expected) utility. But as a society, we want to design a mechanism -- that is, a game in the sense of game theory -- that selects a socially good alternative. In this post, the objective we want to maximize is the so-called "social welfare", the sum of the valuations:
$$ \arg\max_{a\in A} \sum_{i=1}^n v_i(a) $$
So to sum up, we need to design an algorithm to do the following (a small code sketch of the selection step follows the list):
- Take as input $v_1,\dots,v_n$
- Output the alternative $a$ that maximizes social welfare (above)
- Also output, for each player $i$, the amount $\pi_i$ that they must pay
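To make the input/output contract concrete, here is a minimal Python sketch of the selection step alone (the payments come next). Representing each $v_i$ as a dictionary mapping alternatives to numbers, and the name `argmax_welfare`, are just illustrative choices, not part of any standard API.

```python
def argmax_welfare(valuations, alternatives):
    """Return the alternative maximizing the sum of reported values.

    valuations: a list of dicts, one per bidder, mapping alternative -> value.
    alternatives: the set A of alternatives (any re-iterable collection).
    """
    return max(alternatives, key=lambda a: sum(v[a] for v in valuations))
```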
The payments can be kept by the auctioneer, or destroyed (burned). They can perhaps be otherwise redistributed to society, although one should be careful about how the redistribution interacts with incentives. Papers such as Cavallo (2006) and Guo and Conitzer (2008) investigate incentive-compatible ways to redistribute as much of the payments as possible.
Truthfulness
The question is how to choose the payments of the players. We want the mechanism to be dominant-strategy truthful: each player $i$, regardless of what everyone else does, maximizes her utility by reporting $v_i$ truthfully rather than lying in any way.
Notice: if we don't have truthfulness, then it's not clear what the mechanism is achieving: It picks the alternative maximizing social welfare according to the reported valuations, but these are lies and the final choice may be very poor overall.
The Mechanism
There are two key ideas behind VCG. First, we pay $i$ the total value that all other agents get from the chosen alternative, and then charge her an amount $X$ that is outside her control; this aligns her incentives exactly with the social welfare objective. Second, $X$ will be the welfare that the other agents would have gotten if $i$ had never existed, so that $i$ ends up paying for the externality she imposes on others.
Given $v_1,\dots,v_n$, let:
- $v_{-i}$ denote all valuations except $i$'s.
- $W(a) = \sum_{j=1}^n v_j(a)$, the social welfare of choosing alternative $a$.
- $W_{-i}(a) = \sum_{j\neq i} v_j(a)$, the welfare from all agents but $i$ for choosing $a$.
- $a^* = \arg\max_a W(a)$, the optimal alternative.
- $a^*_{-i} = \arg\max_a W_{-i}(a)$, the optimal alternative if we ignored $i$.
The VCG mechanism is: given inputs $v_1,\dots,v_n$, select alternative $a^*$, then charge each bidder $i$ this amount:
$$ W_{-i}(a^*_{-i}) - W_{-i}(a^*) $$
The payment rule can be interpreted as charging $i$ for the decrease in the other players' welfare that results from $i$'s preferences. With $i$ present, $a^*$ is selected and the other players' total welfare is $W_{-i}(a^*)$; in a world where $i$ has no preferences and value zero for everything, the alternative $a^*_{-i}$ would have been selected and the others' total welfare would be $W_{-i}(a^*_{-i})$.
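Putting the selection rule and the payment rule together, here is a minimal sketch, reusing the hypothetical `argmax_welfare` from above (again, the data representation is just an illustrative choice):

```python
def vcg(valuations, alternatives):
    """Run the VCG mechanism: choose a* and charge each bidder i
    the amount W_{-i}(a*_{-i}) - W_{-i}(a*)."""
    a_star = argmax_welfare(valuations, alternatives)
    payments = []
    for i in range(len(valuations)):
        others = valuations[:i] + valuations[i + 1:]
        # Welfare the others would get in a world without bidder i...
        a_minus_i = argmax_welfare(others, alternatives)
        welfare_without_i = sum(v[a_minus_i] for v in others)
        # ...minus the welfare the others actually get at a*.
        welfare_at_a_star = sum(v[a_star] for v in others)
        payments.append(welfare_without_i - welfare_at_a_star)
    return a_star, payments
```

Note that this makes $n+1$ calls to the welfare maximizer, one for $a^*$ and one for each $a^*_{-i}$; this is the computational cost discussed further below.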
Theorem. The VCG mechanism is dominant-strategy truthful.
Proof. Fix everyone else's reports $v_{-i}$. If $i$ reports truthfully, then $a^*$ is chosen by the mechanism and her utility is \begin{align} \text{utility} &= v_i(a^*) - \text{payment} \\ &= v_i(a^*) + W_{-i}(a^*) - W_{-i}(a^*_{-i}) \\ &= W(a^*) - W_{-i}(a^*_{-i}) . \end{align} Now, what is $i$'s utility if she lies? The term $W_{-i}(a^*_{-i})$ stays the same regardless, since it depends only on the others' reports. If she lies in such a way that $a'$ is selected instead of $a^*$, her utility equals her true value for $a'$ minus her payment: \begin{align} \text{utility} &= v_i(a') - \text{payment} \\ &= v_i(a') + W_{-i}(a') - W_{-i}(a^*_{-i}) \\ &= W(a') - W_{-i}(a^*_{-i}) . \end{align} Of course, the whole point is that $W(a^*) \geq W(a')$ for every alternative $a'$, because $a^*$ maximizes the social welfare $W$. So truthful reporting yields utility at least as high as any lie.
Examples:
- Allocating a single item to one of the bidders, where each bidder $i$ has value $w_i$ for getting the item and value zero otherwise. In this case, if $a=i$ (we give the item to $i$), then $W(a) = w_i$. So $a^* = \arg\max_i w_i$: give the item to the highest bidder. Let $i$ be the highest bidder; then everyone else gets zero welfare since none of them gets the item, but if $i$ didn't exist, the second-highest bidder would have gotten the item. So the highest bidder pays the second-highest bid, and you can check that everyone else pays zero, since nothing would change if they hadn't shown up. So this is just the second-price auction (verified numerically in the sketch after this list).
- Voting for Lizard 1 or Lizard 2. We pick whichever lizard has the higher sum of values. Often, nobody is pivotal: if you take any one bidder out, the same lizard wins. In that case, none of the bidders pay anything! None of them individually imposes any externality on anyone else. But if, say, Lizard 1 wins by a total of $7$, and bidder $i$ has $v_i(\text{Lizard 1}) = 10, v_i(\text{Lizard 2}) = 0$, then without bidder $i$, Lizard 2 would have won and everyone's utility would have been different. In fact, bidder $i$ pays 3, since when we tally up everyone else's valuations, Lizard 2 wins by 3. Similarly, any other bidder whose preference for Lizard 1 exceeds the overall margin of 7 is also pivotal and makes a payment.
- Choosing a route in a network, where the bidders are the edges. Say it costs edge $i$ a fixed amount $w_i$ to route traffic, so its valuation is $v_i(\text{route}) = 0$ if the route doesn't use it and $v_i(\text{route}) = -w_i$ if it does. In this case, edges that get routed over are paid according to how much efficiency they bring to the route (their VCG payment comes out negative). For instance, if the best route without edge $i$ has total cost $200$, but with edge $i$ available the best route has total cost only $120$, of which $i$'s own cost is $20$, then $i$ improves everyone else's total cost by $100$, so that's how much it is paid.
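Plugging the first two examples into the hypothetical `vcg` sketch above reproduces the payments described in the bullets (the specific numbers below are just one concrete instantiation):

```python
# Single item, three bidders with bids 30, 100, 60; alternative i means "give the item to i".
bids = [30, 100, 60]
valuations = [{i: (bids[j] if i == j else 0) for i in range(3)} for j in range(3)]
print(vcg(valuations, alternatives=range(3)))
# -> (1, [0, 60, 0]): the highest bidder wins and pays the second-highest bid.

# Lizard election in which Lizard 1 wins by 7 overall and bidder 0 is pivotal.
valuations = [{"L1": 10, "L2": 0}, {"L1": 0, "L2": 2}, {"L1": 4, "L2": 5}]
print(vcg(valuations, alternatives=["L1", "L2"]))
# -> ('L1', [3, 0, 0]): without bidder 0, Lizard 2 would win by 3, so bidder 0 pays 3.
```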
Notes on Computation
VCG is famously difficult to compute in general. For one, the valuation functions themselves may be very large or complex objects. And if we consider routes through a network, the number of alternatives may be exponential in the size of the network. (Although in the example above, valuation functions had the very simple form "zero if the route does not pass through my edge, $-w_i$ if it does".) Generally, the key algorithm needed to implement VCG is one for finding the welfare-maximizing alternative $a^*$. Then, for each $i$, we must run the same algorithm on $v_{-i}$ to find $a^*_{-i}$, for $n+1$ welfare-maximization computations in total.
Other Manipulation
While VCG satisfies dominant-strategy truthfulness, it is not always resistant to other forms of manipulation. First, VCG is not group strategyproof, meaning that a group of individuals can sometimes all benefit by colluding. Consider a second-price auction with just two bidders: Alex has value 100 and Barbara has value 60. If both are truthful, Alex wins and pays 60, getting utility 40. But if Alex bribes Barbara $\epsilon$ to bid $0$, then Alex wins and pays zero; both are strictly better off.
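The bribe is easy to check numerically with the same hypothetical `vcg` sketch (assuming, as above, that Alex and Barbara are the only bidders):

```python
# Alternative i means "give the item to bidder i"; Alex is bidder 0, Barbara is bidder 1.
truthful = [{0: 100, 1: 0}, {0: 0, 1: 60}]
print(vcg(truthful, alternatives=[0, 1]))   # -> (0, [60, 0]): Alex wins and pays 60.

# Barbara, bribed, reports a value of zero: Alex still wins but now pays nothing.
collusive = [{0: 100, 1: 0}, {0: 0, 1: 0}]
print(vcg(collusive, alternatives=[0, 1]))  # -> (0, [0, 0])
```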
Second, VCG is not necessarily false-name proof, meaning that an individual can sometimes benefit by creating false identities and bidding under each one separately. In the voting-for-two-lizards example, suppose a bidder enters under two identities, both of which bid a kajillion dollars for her favored candidate. That candidate wins, and since no single identity is pivotal, none of them has to pay anything at all! (Note that without the ability to create fake identities, the bidder could only increase her own bid to two kajillion, but she might still be pivotal.)
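The false-name manipulation can be checked the same way, again reusing the hypothetical sketch (a large number stands in for "a kajillion"):

```python
# Two honest voters who together prefer Lizard 2 by a margin of 3.
others = [{"L1": 0, "L2": 2}, {"L1": 4, "L2": 5}]

# One huge bid under a single identity: the bidder is pivotal and pays 3, however large the bid.
single_identity = [{"L1": 2 * 10**9, "L2": 0}]
print(vcg(others + single_identity, alternatives=["L1", "L2"]))   # -> ('L1', [0, 0, 3])

# The same total bid split across two fake identities: no one is pivotal, so no one pays.
fake_identities = [{"L1": 10**9, "L2": 0}, {"L1": 10**9, "L2": 0}]
print(vcg(others + fake_identities, alternatives=["L1", "L2"]))   # -> ('L1', [0, 0, 0, 0])
```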
In the combinatorial auction setting (selling multiple items), VCG has been found to have false-name-proofness properties at least when the items are substitutes; see e.g. Yokoo et al. 2004.
References
(1961) William Vickrey. Counterspeculation, auctions, and competitive sealed tenders. The Journal of Finance.
(1971) Edward H. Clarke. Multipart pricing of public goods. Public Choice.
(1973) Theodore Groves. Incentives in teams. Econometrica.
(2004) Makoto Yokoo, Yuko Sakurai, and Shigeo Matsubara. The effect of false-name bids in combinatorial auctions: New fraud in Internet auctions. Games and Economic Behavior.
(2006) Ruggiero Cavallo. Optimal decision-making with minimal waste: strategyproof redistribution of VCG payments. Proceedings of the 5th International Joint Conference on Autonomous Agents and Multiagent Systems.
(2008) Mingyu Guo and Vincent Conitzer. Undominated VCG redistribution mechanisms. Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems.