Pairs Statistics

I can measure a player's card play ability (CPA) within a partnership. The score has a maximum value of 10,000 and is not weighted for the speed of a claim (i.e., it ignores how fast a claim is made). This measures card play, not bidding. The basis is the number of "good" (according to double dummy) cards played on each board; if a claim is made, then all remaining cards are counted as "good" cards. The percentage of good cards is multiplied by 100 to generate the rating.
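The rating described above can be sketched as follows. This is a hypothetical illustration only: the function name, the input shape, and the card-by-card double-dummy comparison are my assumptions, not the author's actual implementation.

```python
def cpa_rating(boards):
    """Sketch of the CPA rating described above (assumed interface).

    Each board is a (good_cards, total_cards) pair, where a card is
    "good" if it matches a double-dummy-optimal play; on a claim, all
    remaining cards are counted as good.
    """
    good = sum(g for g, _ in boards)
    total = sum(t for _, t in boards)
    # Percentage of good cards times 100, so a perfect score is 10,000.
    return round(10_000 * good / total)

# Example: 12 good plays out of 13 on one board, then a perfect board.
print(cpa_rating([(12, 13), (13, 13)]))  # → 9615
```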

This entire section is not about cheating (I use other tests for that); it is about how good different players are in different aspects of the game.

Disclaimer: The data for face-to-face (FTF) play is taken from more than 300 top competitions from 1955-2020. The list of tournaments is subjective but includes the Bermuda Bowl, Spingold, Vanderbilt, and EBL and WBF events. Some of the data is from The Vugraph Project (TVP), which covers the older tournaments and is known to have errors, for example, played cards recorded in the wrong hand. More recent data is from Vugraph, where similar problems are known: human Vugraph operators may miss the exact card played. For the more recent tournaments from BBO, I have a known bug in the collection of data where some boards may be misassigned if the players changed during a session. I do not claim to have all data on all players, particularly before May 20, 2020. There is also an occasional bug with BBO data where the contract in the traveller does not match the contract in the hand record.

The general presumption is that the Law of Large Numbers applies and that these errors are consistent across all players, with a negligible impact on the data; therefore these numbers should not be used for work purposes. I have other tools that can generate the correct data on a single player/pair. There are many different ways of measuring a player's ability; I use the same algorithm for both team play and pair matchpoint play, though arguably different criteria should be used.

Note: the data for FTF and online play has different cut-off thresholds.


Top team events (1955-2020)
Top online team events (October 1, 2020 - July 31, 2021) - 250 total board threshold (error rate)
Top online team events (March - July 31, 2020) - 250 total board threshold (error rate)
Top online team events (March - July 15, 2020) - 250 total board threshold (error rate)
Top online team events (March-June 2020) - 250 total board threshold (error rate) (Preferred)
Top online team events (March-June 2020) - 250 total board threshold (accuracy rate)
Top online team events (March-June 2020) - 175 total board threshold
ACBL BBO events (March-August 2020) - 1,000 total board threshold
VACB BBO events (March-August 2020) - 1,000 total board threshold

For the following, I use the error rate instead of an accuracy rate.
ACBL BBO events (2020)
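Since the tables above switch between accuracy rates and error rates, the relationship between the two is worth making explicit. This is a minimal sketch under my own assumption that both are computed from the same good/total card counts; the function names are illustrative, not from the source.

```python
def accuracy_rate(good, total):
    # Fraction of cards matching a double-dummy-optimal play.
    return good / total

def error_rate(good, total):
    # The error rate is simply the complement of the accuracy rate:
    # the fraction of plays that deviate from double dummy.
    return (total - good) / total

# The two rates always sum to 1 for the same set of boards.
print(accuracy_rate(24, 26) + error_rate(24, 26))  # → 1.0
```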