Review of T. N. Tideman's book: Collective Decisions and Voting, Ashgate 2006.

Review by Warren D. Smith. This review was responded to by Tideman in email, and he had some good responses, so the review has been modified to note them.

Tideman kindly sent me a free copy of his book. This is an important book both because it summarizes a lifetime of thinking about collective decisions and voting, and also because on some topics it offers the best available discussion. However:

  1. As a mathematician, I often find the style annoyingly wordy, non-concise, and/or imprecise.
  2. Its treatment often is idiosyncratic rather than comprehensive.
  3. Sometimes I view the book's treatment as flawed, and/or as missing some important development (not covered in the book) which may even render its treatment obsolete.

This review will unabashedly reflect my personal points of view and mention a variety of ways the RangeVoting.org website goes beyond Tideman's book.

Social Utility – chapters 1-3

Although one often thinks of "voting" as synonymous with "collective decision making," Tideman begins by arguing that there are other important ways to make collective decisions. There are

  1. true (unanimous) consensus,
  2. false (non-unanimous) consensus,
  3. trading or monetary compensation schemes,
  4. and extortionary procedures
and these can all be either deterministic or randomized. If one agrees with Pareto (according to Tideman's recounting) that the utilities possessed by two different human beings are incomparable, then only unanimous consensus decisions can be definitively regarded as socially good.

Most people, however, regard that Pareto viewpoint as asinine and believe that some kind of cross-human summable utility function exists (at least approximately). [For axiomatic developments see puzzles 36-39 and this and this.] If so, it becomes possible to speak of the "socially best" collective decision, namely the one that maximizes the utility-sum over all members of society. "Utility" does not even occur in Tideman's index, but he does discuss the "social welfare function" on pages 27-31. He says that Hicks and Kaldor in 1939 identified maximizing "social utility" with maximizing "aggregate value measured in money" or "physical productivity = aggregate real income." But this, Tideman observes, has numerous problems.

Here, in my view, Tideman was groping toward the "Bayesian regret" framework for evaluating and comparing social choice procedures, but he never got there, and this whole framework is nowhere discussed in his book. (A large number of criteria, some precise and some muddled, are discussed instead in, e.g., his chapters 6 & 12.) A severe price will be paid for that omission.

Voting methods – chapters 7-11 & 13

Ch.8: Tideman notes at great length that "majority rule" is the unique voting procedure for a 2-candidate election satisfying certain axiom sets.

Ch.9: Tideman notes that Condorcet cycles can exist in rank-order voting methods – although he does not note that such cycles cannot exist in voting systems like range voting, which are based on numerical ratings instead of rank-orderings. He gives a rather annoyingly wordy and informal discussion of Black's 1948 single-peakedness theorem, which states that if the voters have single-peaked (increasing then decreasing) utility functions for candidates located along a line, then Condorcet cycles cannot occur. Furthermore, any candidate located in a certain subinterval of the line (the one containing the median voter) will be a Condorcet winner.

Black's is a 1-dimensional theorem. I have recently proved the following version that is valid in all dimensions: Suppose each voter has a utility function which is a decreasing function of distance in some Euclidean space in which all voters and candidates are located. Suppose the voters are distributed within that space according to a probability density which is centrally symmetric about some centerpoint and decreasing along rays from the centerpoint. Then (in the limit of a large number of voters and ignoring exact ties) a Condorcet winner will always exist (and cycles cannot exist), and it is always the candidate whose distance to the centerpoint is least.
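As a sanity check on that claim, here is a minimal simulation sketch (my own illustration, not from the book): voters are drawn from a spherically symmetric Gaussian, each voter prefers whichever candidate is nearer, and we check that the pairwise-unbeaten candidate is the one closest to the distribution's center. With a large but finite electorate the agreement is statistical rather than guaranteed.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_voters, n_cands = 3, 100001, 6            # odd voter count avoids exact ties
voters = rng.normal(size=(n_voters, d))         # centrally symmetric about the origin
cands = rng.normal(size=(n_cands, d))

# distance from every voter to every candidate
dist = np.linalg.norm(voters[:, None, :] - cands[None, :, :], axis=2)

def beats(i, j):
    """True if a majority of voters are strictly closer to candidate i than to j."""
    return np.sum(dist[:, i] < dist[:, j]) > n_voters / 2

condorcet = [i for i in range(n_cands)
             if all(beats(i, j) for j in range(n_cands) if j != i)]
nearest_center = int(np.argmin(np.linalg.norm(cands, axis=1)))
print("Condorcet winner(s):", condorcet, " candidate nearest the center:", nearest_center)
```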

Tideman then asks (and this is new and important): what is the frequency, in real-life ranked-ballot elections, of (a) lack of a Condorcet winner and (b) intransitive social orderings? Tideman assiduously collected 87 real-world elections. In 61 of them, a transitive ordering existed, and in the other 26, a unique "best" ordering "closest to being transitive" existed. (It is possible for that ordering to be non-unique, but that fortunately did not occur.) Based on this, Tideman then constructed a statistical model whose aim was to generate pairwise-victory-margins matrices for pseudo-real-life elections. If I understand it right, the model is: each entry of the upper triangle of the pairwise-victory-margins matrix is an independently sampled random variable of beta type, and the parameters of all the beta distributions are functions of the coordinates of that entry in the matrix, where those functions are obtained by a 4-parameter fit of a certain peculiar ansatz to Tideman's election data (page 113). Tideman then uses this model to estimate the answers to (a) and (b), finding, e.g., that the probability that a Condorcet winner exists is 99.0% in 3-candidate ∞-voter elections, but 76.7% in 30-candidate ∞-voter elections. However, in the latter class of elections, the probability that no cycle exists anywhere is 3×10⁻⁹.
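A hedged sketch of the kind of Monte Carlo computation involved follows. The margin distribution used here (a symmetric Beta rescaled to [-1,1], with made-up parameters a=b=2) is only a placeholder for Tideman's fitted 4-parameter model, so it will not reproduce his 99.0% and 76.7% figures; it merely shows how "probability a Condorcet winner exists" can be estimated from such a model.

```python
import numpy as np

rng = np.random.default_rng(1)

def has_condorcet_winner(n_cands, a=2.0, b=2.0):
    """Sample a random antisymmetric margins matrix and test for a beats-all winner."""
    m = np.zeros((n_cands, n_cands))
    iu = np.triu_indices(n_cands, k=1)
    m[iu] = 2.0 * rng.beta(a, b, size=len(iu[0])) - 1.0   # upper-triangle margins in (-1, 1)
    m = m - m.T                                           # make the matrix antisymmetric
    wins = (m > 0).sum(axis=1)                            # pairwise victories per candidate
    return bool(np.any(wins == n_cands - 1))

for k in (3, 30):
    freq = np.mean([has_condorcet_winner(k) for _ in range(2000)])
    print(f"{k} candidates: Condorcet winner exists in about {freq:.1%} of trials")
```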

However, it should be noted that in legislative votes (a different kind of "election," which Tideman did not have in his dataset) Condorcet cycles are far more common than this. The whole reason that "poison pills" (an unfortunately common legislative tactic) work is the intentional creation of a Condorcet cycle. (Tideman points out that he does mention this on pp.99-100.)

Ch.10 & 11: cover the Arrow and Gibbard-Satterthwaite impossibility theorems, and omit a great many interesting sidelights. And, again, these theorems could be viewed as a good reason to prefer rating-based over ranking-based voting rules; but again, that is an idea Tideman fails to pursue.

Criteria for evaluating social choice procedures – chapters 4-5 & 12

Tideman gives a long list of criteria, most of them simply and precisely mathematically definable, but some not, for comparing voting systems. I sometimes find his names for these criteria to be poorly chosen and in disagreement with the practice of other authors. The simple & precise ones are:

"Consistency" criteria (a misnomer, in my view; a better name would have been "goes-with-majority criteria"; I consider these criteria not to be about self-consistency of a voting method, but rather about agreement with externally imposed notions of how it ought to behave) which include
  1. Majority winner should be elected (if exists).
  2. "Mutual Majority" and "Smith set" and "Schwartz set"; some member should be elected (if set exists and nonempty)
  3. Condorcet winner should be elected (if exists).
  4. Condorcet loser (if exists) should not be elected.
I regard all of these criteria as wrong-headed. That is, in a perfect world, if 90% of voters honestly prefer A by intensity 0.001, whereas 10% prefer B with intensity 999, then I claim it is B who ideally should be elected, all of these so-called consistency criteria notwithstanding!
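To spell out the arithmetic behind that claim (per 100 voters): electing A gains 90 × 0.001 = 0.09 units of summed utility for the majority, while electing B gains 10 × 999 = 9990 units for the minority, so the utility-maximizing winner is B by an enormous margin.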
"Responsiveness" criteria which include
  1. If X moves up in a vote, and keeps doing so in various votes sequentially, eventually a state should be reached in which X wins.
  2. Ties are breakable to make any tied-victor win (if suitable extra votes are appended).
I agree all of these are desirable.
"Stability" criteria (But I consider these to be about self-consistency!) which include
  1. Monotonicity: If X moves up in a vote then X should not stop winning (or more generally if we consider probabilistic voting methods, "X's winning chances should not decrease").
  2. If X wins in district A and wins in district B, then in the combined A∪B country, X still should win.
  3. Clone independence.
I agree all of these are desirable. I would also add "non-negative responsiveness": If new votes are added with X top, X should not stop winning; if new votes are added with X bottom, X should not start winning.
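To make criterion 1 concrete, here is a sketch (my own illustration, not from the book) of what a monotonicity failure means operationally: a random search over 3-candidate IRV profiles for a case in which raising the eventual winner one place on a single ballot makes that candidate lose. IRV is used here purely because it is a simple rule known to fail this criterion.

```python
import itertools, random
from collections import Counter

CANDS = "ABC"
ORDERS = ["".join(p) for p in itertools.permutations(CANDS)]

def irv_winner(ballots):
    """Plain instant runoff: repeatedly drop the remaining candidate with fewest first preferences."""
    remaining = set(CANDS)
    while True:
        tally = Counter(next(c for c in b if c in remaining) for b in ballots)
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > len(ballots):
            return leader
        remaining.remove(min(remaining, key=lambda c: tally.get(c, 0)))

random.seed(2)
found = False
for _ in range(200000):
    ballots = [random.choice(ORDERS) for _ in range(15)]
    w = irv_winner(ballots)
    i = random.randrange(len(ballots))
    b = ballots[i]
    if b[0] == w:
        continue                                  # w already top on this ballot; nothing to raise
    k = b.index(w)
    raised = b[:k-1] + w + b[k-1] + b[k+1:]       # move w up one place on ballot i
    if irv_winner(ballots[:i] + [raised] + ballots[i+1:]) != w:
        print("monotonicity failure: raising", w, "on ballot", b, "->", raised, "makes", w, "lose")
        print("profile:", sorted(Counter(ballots).items()))
        found = True
        break
if not found:
    print("no failure found in this random search")
```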

Notably missing from Tideman's precise & simple criteria are any dealing with strategic voting. For example, the following have been suggested:

Strategy-related criteria:
  1. Favorite betrayal criterion: Voters should have no incentive to vote their honest-favorite below top.
  2. Majority-defense criterion: If X is honest-top-choice of a majority of voters, then those voters should have a way to vote strategically (and without ranking X below anybody), that forces X's election.
  3. Pairwise majority-defense criterion: If X is honestly preferred over Y by a majority of voters, then those voters should have a way to vote strategically, that prevents Y's election (and without any of them voting Y above a more-preferred candidate; or in a stronger version, not voting Y equal to a more-preferred candidate either).
  4. Indifference: it should be possible to express "no opinion" about a candidate or candidate-pair in your vote, in which case it is as though you did not vote at all. (Another possibility, which is not the same, is to express the opinion that two candidates are equal.)
  5. Semi-honesty criterion: Voters should have a way to maximize their expected utility without ever misordering a candidate pair (i.e. without ever voting B>A when honestly they think A>B).
  6. Semi-honesty criterion in 3-candidate incomplete-info elections: Voters should have a way to maximize their expected utility without ever misordering a candidate pair (voting B>A when honestly they think A>B) in 3-candidate elections, even if there is imperfect information about how the other voters voted.
  7. Zero-info honesty criterion: If all the other voters vote randomly independently (all votes equally likely) then it should be strategically optimal for you to cast an honest vote.
The lack of precise strategic-criteria is a major flaw in Tideman's treatment. He instead lumps all of his strategic eggs into one basket – a single number he calls "strategy resistance" which he obtains in a very complicated – but unfortunately flawed – way.

Tideman notes certain implications among his criteria on page 163 – but unfortunately there are many other such implications in the literature, which Tideman appears unaware of.

Among the imprecisely defined criteria (each a rating by Tideman on a 0-10 subjective scale) are
  1. Lucidity (ease of describing the voting method)
  2. Ease of use of the voting method
  3. Computational cost of use of the voting method
  4. "Resistance to strategy"
The first three of these imprecise notions could, in fact, be precisely formalized and made objective if we had a precise underlying computational model. (Also, one could use data on human-voter-error rates, or psychological tests, to assess understandability.) But Tideman doesn't do this. It should also be noted that some voting methods cannot be counted "in parallel," at least in certain computational models; Tideman ignores parallelizability. Tideman also ignores security issues.

Tideman's "strategy resistance" measure is flawed

This is a magic way of assigning any voting method a number from 0 to 10 with greater numbers being "better." It incorporates statistics from 87 real ranked-ballot elections. Unfortunately I must criticize this as very flawed. It probably is a somewhat better measure of voting-system quality than a random number – but not by much.

My suspicions of this number were first stimulated when I observed that, according to Tideman's measure, the "strategy resistances" of Plurality, Range, and Approval voting were 6.3, 4.0, and 3.9 respectively (table p.237), where larger numbers are better. These three numbers are ordered exactly oppositely to what I would expect based on, e.g., the fact that Approval was the only voting method in that table explicitly designed to be strategy-resistant (although Tideman's measure, insanely, gives it the worst strategy-resistance score of all 25 voting methods in table 13.1!)

One reason underlying that is the following. Consider a 9-way plurality election such as the 2000 US Presidential election contested by Gore, Bush, Browne, Nader, Hagelin, Moorehead, Phillips, McReynolds, and Buchanan. In this race, it is known from NES polls that over 90% of the Nader- and Buchanan-favoring voters actually strategically voted for somebody else (generally Bush or Gore). So strategic voting was tremendous, as it is in every US presidential election. But Tideman's measure would not count this as strategic voting at all! Specifically, if different-feeling voters (say Nader>Gore>Bush voters, Moorehead>Gore>Bush voters, and Nader>Gore>Moorehead>Bush voters) adopt a common strategy (voting "Gore"), that is not counted as strategic voting in Tideman's reckoning. And if the strategy does not affect the election winner, it similarly does not count (even if the voters do it in vast multitudes).

Tideman assesses strategy post-mortem, i.e. his voter-cofeeling-sets are asked "in view of the way the election went, and the exactly known number of voters who feel the same as you, would you prefer now [in collusion with them] to change your votes (all in some identical way) so as to alter the election result?" In reality, though, voters must make (and do make) strategic decisions before the election, and thus with incomplete information, and they necessarily do so largely independently of everybody else. This easily can alter the "number of strategic votes" by a factor of order 100.

Tideman considers a method more resistant if larger numbers of co-feeling voters (out of the given fixed total number of voters) need to collude to affect the election – but there is reason to doubt that this is relevant in large elections, since in reality explicit collusions of the necessary size essentially never happen (voters all make decisions highly independently, perhaps with the aid of propaganda/advice from parties or the media), in which case size makes little or no difference.

More generally, if, in some election, some voters are going to vote strategically, then probably other kinds of voters also will, and those strategies will tend to interact and to reinforce one another and/or cancel each other out. All that is ignored by Tideman's measure – it is only interested in the effects of (and possibility of) a single kind of voter-feeling type somehow finding each other and collaborating to alter the election in the absence of anybody else strategizing. (In reality, none of that happens.)

All that is totally unrealistic. But it gets worse. Some strategies affect the election in minor ways, some in major ways. (For example, if the result is "the candidate unanimously agreed to be worst is elected," as in the DH3 pathology, that is a major effect.) But Tideman ignores all that – for him, all changes have equal weight!
[Tideman in email remarks "for my strategy calculation, there is no way that a candidate ranked last by all voters could be elected strategically." This does not affect my point above, and indicates yet another flaw in Tideman's measure, since this can happen in real life, and Tideman's measure acts as though it cannot.]

Also, in, for example, Borda, Instant Runoff (IRV, which Tideman calls the "alternative vote"), and Schulze-beatpaths (with minmax as tiebreak) voting – even following Tideman's rules of post-mortem assessment and considering only a single co-feeling, co-voting, colluding voter subset – there is another problem: the best strategy can be for that group to split into two differently voting subsets, thus accomplishing an election-altering feat that they could not accomplish by altering their votes all in the same manner. (And political parties have arranged such strategies in the past by distributing pre-randomized [in the right proportions] "how to vote" cards to their supporters – an example was discussed in Lakeman's book.)

Tideman's measure ignores that too (with the net effect of discriminating in favor of those systems but against Range Voting, in which such splitting is never strategically necessary).
[Tideman in email points out, however, that he was aware of the possibility of 2-pronged strategy, see p.235, just his strategy-resistance measure was not.]

As our final shot, we note that the idea of comparing 50 voting methods based on 87 real elections seems dubious. It is like fitting a 50-parameter curve model to 87 datapoints. Are 87 real elections (which, by the way, are treated as having consisted of "honest" votes, when in fact they were undoubtedly strategic, using strategies designed for the particular voting method used in them) really enough for good coverage of the full set of election-pathology behaviors? I doubt it. (Indeed, Tideman himself reckoned 3-candidate elections with a Condorcet cycle would only occur 1% of the time.) If we had 50,000 real elections and truly honest votes, then this effort would start to look plausibly useful. (Tideman, however, counter-argues that he really is using the 87 to construct a model of voter behavior, and then using this model to compare voting methods, and that this is statistically legitimate – or at least more so than making such a statistical model not based on real data. Perhaps. The question of statistical legitimacy is not analysed by Tideman or anybody else. It could have been partially examined by a "cross-validation" study, e.g., comparing what would have happened if only random subsets of the 87 elections had been used. No such study was done. Tideman also responds that there are not really 50 voting methods; there are only 25 in his summary table, which he regards as falling into only 11 classes. He asks: is it legitimate to judge 11 weather-models based on their performance on 87 storms? But again I emphasize, only by some sort of cross-validation study can we assess statistical legitimacy; not by waving our hands about weather models. It might be legitimate, it might not, and how much error there is, is not at all obvious.)

[Tideman also responds that he believes his strategy calculations were flawed, but mainly not for the reasons I indicate; instead he believes it is because most of his 87 underlying elections were conducted using STV, in which strategy is difficult, and they were mostly multiwinner, not single-winner, elections. If he were re-doing it, Tideman says he wishes he could get a broader sample of real single-winner ranked-ballot elections, including some in which strategy was more important.]

Tideman's measure considers range and approval voting to be the two worst. But there are two provable senses 1 & 2 in which these systems enjoy very pleasant – many would even say "optimum" – behavior in the presence of (certain reasonable kinds of) strategic voters. And the Gibbard-Satterthwaite dishonesty theorem does not apply to these systems (uniquely, of those in Tideman's list), see paper #97 here. These strategy-resistance facts are unmatched by any other voting system mentioned by Tideman, and they give important senses in which these voting systems are not the worst, but arguably the best, of those Tideman lists with respect to strategy resistance.

So what are the best voting methods?

Tideman's chapter 13 is one of the book's best in the sense that it offers an extensive survey (one of the best, if not the best, yet published) of different voting methods and their properties, with proofs. (However, there are not enough references to previously published surveys of this kind.) If you read through all of it, you'll learn a lot.

But I disparage the entire idea (dating back to Arrow) of comparing voting methods using their logical "properties" or "criteria." I have no objection to – in fact encourage – understanding voting system properties; it is just that in the absence of quantitative frequency and severity information, purely qualitative properties are nearly useless for comparing two methods. That is why I advocate the Bayesian Regret framework, which is quantitative and automatable, not qualitative and non-automatable.

However, Tideman leaves Bayesian Regret unaddressed and follows the older school. According to his property-satisfaction table, plurality voting and approval voting are the best methods (both satisfying every property except "universal domain" [and, with a tiny amount of fudging, arguably that too]). Oddly, Tideman gives Plurality top scores for "lucidity" and "ease of use" (despite the fact that Approval voting – which is just Plurality with the "overvotes are illegal" rule discarded – is simpler to describe and to use)! And since on top of that Tideman considers Plurality far more "resistant to strategy" than Approval, it must be better.

Not!

Well, of course, approval is a better voting method than plurality. I don't think any serious student besides Tideman has ever disputed that proposition. (Actually, Tideman doesn't dispute it either: in email, "Actually, I agree that approval is better than plurality... [and basically say so on p.240]." Tideman points out that he views the properties merely as input to his judgment process, but that process is not a mere count; rather it is more mysterious! The problem with that is that then it is just Tideman's subjective opinion at the end of the day!)

And of course Tideman himself doesn't think plurality is the best in spite of the fact his comparison techniques "proved" it to be the best. Obviously, then, what he's really shown is that his comparison techniques are inadequate...

This should have rung a warning bell to Tideman. He knew Approval was superior to Plurality voting. He knew the properties he considered did not say so. Therefore he should have realized more properties needed to be added to his set. Ditto about strategy-resistance: Tideman knew (or should have!) that Approval has superior strategy-resistance to plurality, but his measure said the opposite. Again, that should have been a warning bell telling him his strategy-resistance measure was flawed or at least by itself inadequate.

Anyhow, now that we have cast enough doubt on all of Tideman's conclusions – what are they? On the basis of a combination of property satisfactions and his highly flawed strategy-resistance score, Tideman concludes that the best voting methods appear to be these 5:

Simpson-Kramer Maximin:
if a Condorcet (beats-all) winner exists, elect him. Otherwise the winner is the candidate whose strongest-pairwise-defeat is the weakest.
Tideman's Ranked Pairs Condorcet method:
We find the candidate pair AB with the largest pairwise margin of victory and "lock it in" by drawing an arrow from A to B. We proceed through all victories in decreasing-magnitude order, "locking them in" if so doing does not create a directed cycle in the directed graph we are drawing. The root of the resulting directed-graph (the only candidate with no arrows pointing to him) then is the winner.
Markus Schulze's beatpath Condorcet method:
A "beatpath" from A to B is a chain of pairwise victories "A beats X beats Y beats Z ... beats B." The "strength" of the beatpath (like the strength of a chain being the strength of its weakest link) is the least-convincing victory in the chain. S(AB) denotes the maximum strength of all AB beatpaths. Schulze proves that a candidate W must exist such that S(WX)≥S(XW) for all X. If that W is unique (and it generically will be), it wins, otherwise there is a tiebreaking scheme among those W.
Smith+IRV:
Have the voters rank the candidates. Find the smallest set of candidates such that everyone inside the set beats everyone outside in head-to-head comparisons (the Smith set). If the Smith set has only one candidate in it, he or she is the winner. If the Smith set has more than one candidate in it, then eliminate all candidates not in the Smith set and the candidate in the Smith set with the fewest first-place votes. With the remaining candidates, continue as if you were starting the count of an IRV election in which these were the only candidates.
Schwartz+IRV:
Similar.
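Of these five methods, Schulze's beatpath rule is the one whose mechanics are least obvious from the prose, so here is a minimal sketch of the beatpath-strength computation, done as a Floyd-Warshall-style widest-path pass. It measures defeat strength by winning votes; using margins instead is a variant the description above does not pin down.

```python
def schulze_winners(support):
    """support[a][b] = number of voters ranking candidate a over candidate b."""
    n = len(support)
    # S[a][b] = strength of the strongest beatpath from a to b,
    # initialized with direct pairwise victories only
    S = [[support[a][b] if support[a][b] > support[b][a] else 0
          for b in range(n)] for a in range(n)]
    for k in range(n):
        for a in range(n):
            for b in range(n):
                if a != b and a != k and b != k:
                    S[a][b] = max(S[a][b], min(S[a][k], S[k][b]))
    return [w for w in range(n)
            if all(S[w][x] >= S[x][w] for x in range(n) if x != w)]

# tiny 9-voter example: a cycle A>B (6-3), B>C (7-2), C>A (5-4)
support = [[0, 6, 4],
           [3, 0, 7],
           [5, 2, 0]]
print(schulze_winners(support))   # -> [0]: the weakest link of the cycle (C>A) is overridden, A wins
```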

Tideman (e.g., on p.240) brands plain IRV as "unsupportable" (provided it is "feasible" to construct a pairwise-table, which it is) on the grounds that it is dominated by other methods in terms of its (precise and imprecise) properties.

Of these five, Tideman appears to prefer the last three, especially (partly on simplicity-of-description grounds) Smith+IRV. Tideman remarks (slightly paraphrased):

The ranked pairs rule and the Schulze method are noticeably more susceptible to strategy than IRV. What gives IRV its greater resistance to strategy is the fact that it eliminates options based on first-place votes rather than paired comparisons. Strategants cannot reduce the number of first-place votes of a leading option that they are seeking to defeat. This suggests that, to create rules that satisfy as many non-qualitative criteria as the ranked pairs rule and the Schulze method, while also having as much resistance to strategy as IRV, one might begin by eliminating all options that are not in the Smith set or the GOCHA set, and then eliminate the option with the fewest first-place votes.

But Chris Benham has pointed out [following Douglas R. Woodall: Monotonicity of single seat preferential election rules, Discrete Applied Mathematics 77,1 (1997) 81-98] that Tideman's Smith+IRV rule appears to be inferior to the following closely related (and actually simpler) rule:

Woodall (& Benham's) inequivalent Smith+IRV-type voting method (WBSIRV):
Proceed by successively eliminating the candidates with the fewest top-rank votes (just as in IRV) except that before each IRV elimination, check to see if there is a single candidate X with no (among remaining candidates) pairwise losses. As soon as such an X appears, elect X.
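A minimal sketch of that rule, under the assumptions that unranked candidates count as coequal bottom and that a pairwise tie does not count as a loss (if several remaining candidates are simultaneously loss-free, this sketch just takes the first, since the description above does not say):

```python
from collections import Counter

def wbsirv_winner(ballots, candidates):
    """ballots: list of rankings (best first); candidates omitted from a ballot tie for last."""
    remaining = list(candidates)

    def rank(b, c):                      # smaller is better; unranked = worst
        return b.index(c) if c in b else len(candidates)

    def pref_count(x, y):                # number of voters preferring x to y
        return sum(1 for b in ballots if rank(b, x) < rank(b, y))

    while True:
        # elect any remaining candidate with no pairwise losses among remaining candidates
        for x in remaining:
            if all(pref_count(x, y) >= pref_count(y, x) for y in remaining if y != x):
                return x
        # otherwise do one IRV elimination by fewest top-rank votes
        tops = Counter()
        for b in ballots:
            first = next((c for c in b if c in remaining), None)
            if first is not None:
                tops[first] += 1
        remaining.remove(min(remaining, key=lambda c: tops.get(c, 0)))

# the 23-voter profile discussed below; WBSIRV elects A, the same as plain IRV here
ballots = ([["A","B","C","D"]] * 10 + [["B","C","D","A"]] * 6
           + [["C"]] * 2 + [["D","C","A","B"]] * 5)
print(wbsirv_winner(ballots, ["A","B","C","D"]))   # -> 'A'
```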

On what basis can we claim that WBSIRV appears superior? Well, WBSIRV appears to be easier to describe and appears to obey the same set of Tideman's properties. And it also obeys these two properties that Tideman's Smith+IRV method fails: "mono-append" and "mono-add-plump." We demonstrate both property failures in one example:

#voters   their vote
10        A>B>C>D
 6        B>C>D>A
 2        C
 5        D>C>A>B

All the candidates are in the Smith set (which Woodall calls the "top tier"), and the IRV winner is A. But if you add two extra ballots that plump for A (i.e. vote for A and leave the rest unranked; unranked candidates being regarded as ranked coequal bottom) or which append A to the two C ballots, then the top tier becomes {A,B,C}, and (when you dutifully delete D from all the ballots before applying IRV) then C wins.
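For concreteness, here is the pairwise arithmetic (my tallies; 23 voters initially, unranked candidates counted as coequal bottom). Originally A beats B 15-6, B beats C 16-7, B beats D 16-5, C beats A 13-10, C beats D 18-5, and D beats A 11-10; the cycle A>B>C>A together with D's win over A forces the top tier to be all four candidates. Adding the two A-plump ballots changes only the contests involving A: A now beats B 17-6 and beats D 12-11 while still losing to C 12-13, so D loses to everyone and drops out, shrinking the top tier to {A,B,C}. (Appending A to the two C ballots flips the A-vs-D contest in the same way, which is all that matters.)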

So, at least based on Tideman's properties and its improved simplicity of description (and provided the WBSIRV modification does not injure the method's "strategy resistance"), Woodall's WBSIRV modification seems clearly superior to Tideman's "best" method.

Hey! What about Range Voting?

Range voting is the only common system Tideman considers that allows voters to express intensity of preference. But Tideman does not mention that as a criterion. Range voting obeys all the criteria listed above (including the strategy-related ones not considered by Tideman, except for the last, zero-info honesty), with the sole exception of Tideman's "consistency" criteria (which, as I said, I believe are actually not ideally desirable). Thus Range Voting should really get the highest score of any of Tideman's voting methods, based on these properties, in my view. Tideman's flawed strategy-resistance measure discriminates unfairly against range voting, as we have already discussed. As a result of all these facts, I believe range voting ought to get a considerably higher score than Tideman awards it.

The main defects of Range Voting are its failure of zero-info honesty (which approval voting, arguably, obeys, and Borda definitely does, and presumably so do most IRV-like and Condorcet methods, although nobody has ever proved the latter); and the fact that in range voting one political class of voters can gain a large advantage over an opposed political class if the first class employs strategy while the second does not. (If both are honest, or if both are strategic, or presumably if both are mixed honest+strategic in the same proportions, then both Tideman and I agree Range Voting works well. The problems arise when the mixtures differ.) It is unclear to me whether this "different-mixture" scenario will ever actually arise in large elections (and if it does, one might presume the phenomenon would self-correct with time, because it is extremely easy to recognize and correct for). Therefore, it is unclear to me that we should take it seriously.

What I definitely consider unrealistic is the worry that just one colluding and exactly co-feeling voter group will devise a strategy, both in the absence of any strategy from rival groups and with perfect knowledge of everybody else's votes (both assumptions underlie Tideman's measure).

Chapter 14: Continuum candidate-sets
Here Tideman gets original and considers voting where the "candidates" actually form a continuum-infinite set. However, the treatment leaves much to be desired. Suppose we are voting for the best location for the capital of our country. All voters vote for the location where they live on the shore of a circular lake (voters roughly equiangularly distributed along the waterfront). Result (with Tideman's voting methods): the capital should be underwater in the center of the lake. Hence I might suggest a more expressive kind of vote than "name a single point."

Also, Tideman did not mention, but should have, that many of his methods are "convex programming" problems and hence algorithmically efficient.

Were we to make the capital the point with minimum sum of p-th powers of overland distances to the vote-points, then we'd get the same result as Tideman's position-average in waterless cases if p=2, and we'd get a convex-programming problem in cases where the land was simply-connected and p≥1.
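A minimal numerical sketch of that proposal, under the simplifying assumption of plain Euclidean distances on unobstructed land (scipy is used here for the minimization): for p=2 the optimum coincides with the coordinate average of the votes, while p=1 gives the geometric median.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
votes = rng.uniform(-1.0, 1.0, size=(500, 2))      # each voter names a point on the map

def cost(x, p):
    """Sum of p-th powers of distances from candidate capital x to all voted points."""
    return np.sum(np.linalg.norm(votes - x, axis=1) ** p)

for p in (1.0, 2.0):
    res = minimize(cost, x0=votes.mean(axis=0), args=(p,))
    print(f"p={p}: optimal capital {res.x},  plain average {votes.mean(axis=0)}")
```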

Chapter 15: Proportional representation
Tideman discusses the usual party-list and reweighted-STV (single transferable vote) systems that are in wide use, as well as getting into some subtle STV design issues and little-known STV variants, some invented by him. (Some other ideas are also mentioned that are not currently in use in governments, such as congressmen with continuously variable voting-weights in the legislature, depending on their voter-support; this could enable a finer degree of "proportionality.") This is probably the best available published discussion in terms of both technical content and historical details, but it is not easy to follow. It would have been easier to follow if Tideman had employed formal algorithm descriptions as opposed to just English prose (the same is true of other parts of the book too).

But in my opinion reweighted-STV type voting is almost completely obsoleted now by my own reweighted range voting (RRV) and perhaps Asset Voting. (Tideman was unaware of these systems, which seem both simpler than and better than all STV-type systems.) But I think the whole area of multiwinner voting systems remains very much in its infancy.

Chapter 16
In this chapter, Tideman discusses the very interesting topics of Edward Clarke's "demand revealing (economic) process" [which can be used for voting, as was first pointed out by Tideman himself] and Earl Thompson's "insurance" mechanism. Again this is one of the best published treatments, but again it is a complicated topic and a more formal treatment might have helped. The Clarke idea is that votes are monetary bids. Election outcomes with more money bid for them win. In the event that certain election outcomes happen, each voter may then be required to pay some or all of her bid ("Clarke taxes"). The ingenious point is that the payment formula is cleverly contrived in such a way that voters have no economic incentive to lie in their vote – they rationally should bid the true economic value, to them, of that election outcome happening. If everybody does so, the best outcome for all of society is chosen. Furthermore, in large elections, Clarke tax payments are rare and on average are very small.
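Here is a minimal sketch of the Clarke ("pivotal") payment idea for the simplest case, a two-outcome vote; it is my own illustration of the principle, not Tideman's pairwise scheme. Each voter bids the monetary value to her of outcome A winning rather than B (negative if she prefers B); the outcome with the larger total declared value wins; and a voter pays a Clarke tax only if removing her bid would have flipped the result, in which case she pays the net amount by which everyone else preferred the other outcome.

```python
def clarke_referendum(bids):
    """bids[i] = voter i's declared value (possibly negative) of A winning rather than B."""
    total = sum(bids)
    outcome = "A" if total > 0 else "B"
    taxes = []
    for i, b in enumerate(bids):
        others = total - b                              # everyone else's net preference
        outcome_without_i = "A" if others > 0 else "B"
        # a pivotal voter pays the net harm her bid imposes on everyone else
        taxes.append(abs(others) if outcome_without_i != outcome else 0.0)
    return outcome, taxes

# three voters: one strongly favors A, two mildly favor B
print(clarke_referendum([100.0, -30.0, -40.0]))    # -> ('A', [70.0, 0.0, 0.0])
```

Bidding one's true value is optimal here because the tax never depends on the size of one's own bid, only on whether that bid tips the outcome.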

But unfortunately, all that only happens in an ideal world – in practice certain problems undercut these ideal properties. One of them is multi-voter collusions that allow "gaming the system." Others (little discussed by Tideman) might include corruption, transfer costs, and severe distortion of tiny incentives by much larger effects that are supposed to be "negligible."

For some reason I do not understand, Tideman prefers to handle N-candidate elections by means of bids about every pair of candidates. If candidate A wins (thanks to your vote=bid) instead of candidate B, then you pay an amount generated by formula from your AB-pair bid. There are two problems with this:

  1. There are (N-1)N/2 candidate-pairs, so if N is large, each voter has to specify a huge amount of information;
  2. There can be "Condorcet cycles" preventing determining a winner, although Tideman thinks that in practice these will be rare.

I do not understand why the voters cannot simply cast N, not (N-1)N/2, bids, one for each candidate X, expressing that voter's opinion of X's monetary worth, and then base her payment on the difference between her A- and B-bids. This eliminates both of the above problems.
[Later note: Tideman in email to me has provided some justification for his stance on this. Both pairwise quadratic-info and linear-info Clarke-schemes are possible and both arguably have advantages over the other. The reasoning is somewhat subtle and involved for why one might prefer the quadratic scheme, hence will not be discussed here, but I now agree with Tideman it may have merit.]

The second important idea is "Thompson insurance." This idea is that voters could buy "insurance" to protect them against unfavorable election outcomes. Furthermore, the election outcome could be decided by the amount of insurance purchased (to minimize payouts, maximize profits for the insurance agency, and minimize societal harm – all of which are equivalent). All this depends on somehow knowing "fair odds" about election outcomes (on which to base all insurance decisions and prices).

A possible mechanism for that is the market mechanism often used by bookies. That is, initially, odds are quoted by the bookie. Bets are placed. If, in the collective judgment of bettors, the odds are unfair, then the bets will be substantially in one direction. The bookie can then re-adjust his quoted odds in such a way as to avoid monetary loss. This (done iteratively) automatically causes the odds to become fair (if the market is rational and informed enough). Essentially, the bettors then are not betting against the bookie, but against each other. Again, Tideman proposes handling elections with more than two candidates in peculiar ways. I do not understand why they cannot, more simply, just be handled in the same way that bookies handle bets on horse races with more than two horses.
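A simplified parimutuel sketch of that horse-race idea (my illustration; any real bookmaking scheme, and Tideman's proposal, differ in detail): all bets are pooled, the pool is paid out to those who backed the actual winner, and the implied "fair" probability of each candidate is simply that candidate's share of the total money bet, so the bettors are effectively betting against each other.

```python
def parimutuel(pools):
    """pools[name] = total money bet on that candidate winning."""
    total = sum(pools.values())
    implied_prob = {c: amount / total for c, amount in pools.items()}
    payout_per_unit = {c: total / amount for c, amount in pools.items()}   # ignoring any bookie cut
    return implied_prob, payout_per_unit

# hypothetical betting pools, just to show the arithmetic
print(parimutuel({"X": 600.0, "Y": 350.0, "Z": 50.0}))
# implied probabilities: 0.6, 0.35, 0.05; payouts per unit staked: ~1.67, ~2.86, 20.0
```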

An important question is whether (and how) Thompson insurance and Clarke bid-based elections should be combined. Tideman makes the interesting point that Thompson insurance alone would offer voters no incentive to make moral judgements (only financial ones), and hence some amount of Clarking seems necessary. But what should be the exact recipe for combining the two? Tideman does not clearly say; and there are worries that, e.g., combining the two will allow/encourage speculators to try to manipulate the Clarke election so as to win Thompson bets, etc. I would like to have seen a formal analysis of that, but none is provided, either here or anywhere else in the literature.

The Index
Is not very good. I tried to look up several topics which I knew were in the book since I'd read them, but failed to find them via the index.

