The results indicate that there are top active players who have not been caught. The software can analyze suspicious pairs and identify pairs that are not cheating, thus saving countless hours of watching videos.
The software can also identify many active players in the American Contract Bridge League (ACBL) who are in violation of the Laws of Bridge, which is a polite way of saying they are cheating.
More importantly, the statistics that can now be generated can be used to help improve the game of honest bridge players.
It is possible to detect collusive cheating in Bridge using statistical methods. This had been an unsolved problem in Bridge for over 90 years. It was solved about four years ago, in 2015. More importantly, it does not require humans with expert Bridge logic to analyze individual hands to determine if a pair is cheating. Computers with sophisticated software do the work.
To be more precise, it is possible to detect pairs that collusively cheat, without knowing their cheating methods, within statistically acceptable limits. Given sufficient data, these same methods can be used to detect historical cheating.
Using similar methods, it can be statistically determined how good a player and/or partnership is. Similarly, it is possible to use the same techniques to determine weaknesses in an individual player or partnership for training purposes.
Detecting cheating in Bridge has been a problem since Harold Vanderbilt introduced the current rules in 1925. Several pairs have been caught, but proving cheating has required knowledge of their cheating code or methods. Since 2015 several top pairs have been caught cheating by deducing some of their code through videos. Future cheaters will use varying codes (different meaning on different board numbers) making it highly unlikely that any future cheaters will be caught using videos. Statistics becomes the only method to detect cheating. This book shows the many different statistical methods that can be used, and also how these methods are able to detect the known cheating pairs from the past, without knowing how they were cheating.
I played Bridge briefly as a teenager growing up in Nottingham, England. I re-started playing in 2002 in Atlanta. I was very surprised that, unlike nearly all major sports, there are no statistics or meaningful rankings that are applied to Bridge. I wanted to improve my game so I created an Excel spreadsheet to track various statistics. I have this personal data back to 2004. This data has helped me validate some of the work in this book. The statistics were quite detailed and would usually take me 15-30 minutes after a pair session, or 5-10 minutes after a team game, to create. I was able to track my progress and the ability of my partners with this data. I found it very useful. For example, if I was at a tournament and my declarer rating went down, I would read a chapter on declarer play.
At the summer American Contract Bridge League (ACBL) North American Bridge Championship (NABC) in Chicago in 2015, someone cheated against me. It was in a limited masterpoints national event at the end of the tournament. Declarer played a card from dummy, finessed through my queen with their jack, and then played to the next trick without checking if my partner had won the trick with the queen. The whole time they were glaring at me with a huge smile on their face enjoying their moment of trickery. Declarer did not even glance at my partner.
It was pointless to report. Recorder forms had little effect at the time. Before anyone claims that this could have happened for a number of reasons, it was early in the board so no counting of points could be done. We had not bid. I had kept my cards underneath the table. My partner kept their cards close to themself. Neither declarer nor dummy had peeked at our hands.
The card backs are colored. I thought back to who had dealt the different boards and realized that declarer had shuffled and dealt that board. It is simple for even an amateur magician to track one card in a deal and know who is dealt that card. The smile and gleam in declarer's eye gave the game away - in fact, I think they wanted me to know that they had pulled one over on me and there was nothing I could do about it. I recall smiling back at them at how effective it was. They had made their point.
At my next tournament, I took a deck of cards and had someone film me shuffling and dealing the cards to show how easy it is to cheat in an ACBL team event. I also wrote up how easy it is to fix this type of cheating; see https://www.bridgewinners.com/article/view/cheating-in-acbl-tournaments. A small change in the procedures and this problem goes away. There has been no change in how ACBL National Swiss events, both open and limited masterpoints, are handled. It is still trivial to cheat in the same way. I am sure that the same player continues to play this way at ACBL Swiss events, including National events.
The problem is shown at https://www.youtube.com/watch?v=QQvwZ1ncDZY (to follow such a reference, type the URL into a browser). The video is one minute long and shows a typical shuffle/deal at an ACBL Swiss event, except that the location of one card (the queen of clubs) is revealed at the end. To see how this is done, see https://www.youtube.com/watch?v=1DLqoGbrDGM. This video is also a minute long. The easiest solution to this problem is to shuffle, then pass the cards to an opponent for them to cut and deal. This change in procedure should be required in all ACBL Swiss events. It has been suggested to ACBL. They rejected the idea.
Update: I played in the 0-10K Swiss at the Memphis NABC in March 2019. It is still a player shuffle-and-deal event, with no passes and no cuts. My partner was aware that I am involved in detecting cheating. When we went to start a new match, I saw who we were playing and told my partner, before we arrived at the table, what was going to happen. Sure enough, what I predicted happened. There were no cameras in the room. There was no point in filing any recorder forms.
Later in the same event, we were sitting at another table and starting to shuffle/deal the cards for our next match. My partner was now attuned to how players cheat and she nodded her head to the table next to us. South was about to start dealing his second board. My partner had seen him deal his first board. He picked up the cards, shuffled, and looked at the bottom card. He was trying to distract his opponents when he was “shuffling” and looking at the bottom card, but his clumsy shuffle technique meant that North could see the bottom card, and so could both of us as we were behind North. It was the eight of clubs. He shuffled again, turned the deck over to look at the bottom card, it was the seven of clubs. He shuffled again, looked again, the bottom card was now the ace of spades. He then shuffled six more times but did not look at the bottom of the deck. After each shuffle the ace of spades remained on the bottom of the deck. We could easily see the bottom card because of how he was shuffling and so could North. He then dealt the board. The current ACBL President was at the next table diagonally opposite from us. No cameras in the room, no point in filing a recorder form, but another name added to my list of players that I will require a cut before they deal (Bridge Law 6 allows me to request this in Swiss events).
In my teenage years, I appeared on "The Paul Daniels Magic Show". This was a British Saturday night television show that was watched by over 15 million people. Every segment was rehearsed multiple times, including my segment, without the audience present. I was taught where to stand, and more importantly where not to stand, to make sure I did not leave the camera view and to make sure I did not block the audience or camera view of the trick. I also had the chance to watch the other tricks that night. After watching enough rehearsals, it became clear how each trick was done, including the trick that Paul did with me. My parents were in the audience during all the rehearsals. They watched the same rehearsals as me, but were unable to see how any of the tricks were done. I had the advantage of being on stage to see up close.
Both Paul and Ali Bongo (real name William Wallace, he was the magic consultant for the show, not the Ali Bongo who became President of Gabon in 2009) would occasionally stop a rehearsal to improve the trick, or to make it less likely that the real magic could be discovered. It was always important that the camera was on the action to prevent the television audience from thinking that something was happening outside of the screen.
Paul was an impressive magician. Most impressive was his sleight of hand and redirection where your eyes are focused on where you think the trick is, but in fact something else is going on elsewhere, that is the magic.
I do not claim to be more aware than others in playing Bridge. However, having been exposed to both Paul and Ali Bongo, I am probably a little more aware than others when it comes to spotting misdirection.
On the way home from the Chicago tournament in 2015, I wondered if it would be possible to detect cheating players using mathematics and statistics. I had an epiphany on the train to the airport and later explored the idea.
No one, as far as I knew, had attempted to solve the problem of detecting cheating in Bridge using a mathematical model. It was considered too complex a problem. However, I had some tools and background that others did not have.
My company had developed ACBLscore+, since renamed Bridgescore+. ACBL had paid $1,500,000 to develop this software and my company had rights to it. Towards the end of the contract, ACBL found out that they did not own the copyright to the code, and their outside legal counsel told them that without the copyright they should not use the code. Rather embarrassing for ACBL. My company owned both the copyright and, more importantly, full rights to continue developing the code.
Bridgescore+ can import files from various data sources, including the ACBL, Bridge Base Online (BBO), the European Bridge League (EBL) and the World Bridge Federation (WBF).
In my youth, I was very adept at solving the Rubik’s Cube. For a while I was the fastest in the world; I appeared on many television shows and was mentioned in several publications, including Scientific American. I developed one of the earliest software programs to solve the Rubik’s Cube and also did some work on the mathematics of the cube. In other words, I was (and am still?) quite good at solving problems and quickly spotting patterns.
I won an academic scholarship to study Mathematics at Gonville & Caius College, Cambridge. Professor Stephen Hawking gave one of the earliest lectures to the ten of us studying mathematics at Caius that year. I changed to Computer Science after one year of mathematics, and my professional career has been in the computer industry.
I set up the security of the world’s first on-line bank, Security First Network Bank, and made the first ever Internet banking transaction. In 1995, I started a company that performed computer security audits; it had audited about 150 of the first 500 on-line banks, credit unions and stock markets before I sold the company. All of my work was white-gloved. We created the industry of security audits. Our methodology and technology were used by other vendors.
It is important to know how to think like a bank robber if you want to defend a bank against one. In a similar vein, it is important to be able to think like a cheat if you want to detect cheating in Bridge. I was also one of two people who had the “keys to the [Internet] kingdom”. I had developed the secure web site that protected software keys that allowed anyone to break into various sites across the Internet. This became one of the most attacked sites on the Internet and, to the best of my knowledge, was never broken into. The front end of the software handled $100M+ of business. The back end was connected to warehouses on multiple continents and accounting systems in multiple countries. Therefore, I had early exposure to some of the most sophisticated attacks on the Internet and always had to make sure the site was safe, as well as handling a moderately complex data set. Computer security and cheating in Bridge are similar – you always want to be one step ahead. Sadly, detecting cheating in Bridge has been years behind, never ahead, until now.
I have had moderate success in Bridge. I have won limited masterpoints ACBL national events in both pairs and teams. I have played in the finals of an American Bridge Association (ABA) national knockout event. I won the consolation World Mixed Pairs event in Wroclaw, 2016 and qualified for the finals of the World Mixed Pairs in Orlando 2018. I am not in the world’s elite, but I can occasionally take my tricks.
My background includes the necessary tools to attack the problem of detecting cheating in Bridge. I have mathematical and computer skills, a reasonable Bridge background, experience of handling "Big data", experience in working in a new unknown field, Bridge software and a personal motive for wanting to tackle the cheating problem in Bridge.
Figure 6: Skills needed for detecting cheating in Bridge

I have provided more background on myself than I intended, but I think it is important that you understand the credentials of who is writing this book and why.
In August 2015, I started to develop tools to validate the ideas that occurred to me on that train in Chicago. There were five generations of cheating detection that I could see being developed:
The remaining two generations have been partially developed but are not listed for proprietary reasons. Within each generation of software there are multiple software modules that can be written.
On August 19, 2015, I made the first mention of using statistics to detect cheating, see https://www.bridgewinners.com/view/article/technical-solutions-to-stop-cheating. By then I already had some rudimentary tools in place.
Little did I know what was about to happen in the world of top level Bridge ...
Bridge is a game of mistakes. All players make mistakes. Very importantly, cheating players also make mistakes - lots of them. The difference between a cheating pair and a world class pair is that the cheating pair makes fewer mistakes in certain areas of the game because the cheating player has additional information. Those mistakes are, within reason, quantifiable and measurable.
... more in book ...
The data is from EBL/WBF events from 1997-2018 with a minimum of 2,900 boards on Vugraph. The code is a two-digit year followed by B=Bermuda Bowl, E=European Bridge Team Championship, O=WBF Open Teams, W=EBL Winter Games. The Y axis is the number of boards from Vugraph for selected tournaments. The X axis is the MF value. The arrows show what happens when data from the known cheating pairs is removed. In all cases, the value increases, as expected.
For the Bermuda Bowl, we see that 2009, 2011 and 2013 are all to the left of the chart. We assume that no one was willing to risk cheating in the 2015 Bermuda Bowl because of the cheating revelations in the summer of 2015. The 2015 MF value is way off to the right of the chart. This is what happens when all players are on notice that cheaters may be caught. This is the value from an honest Bridge tournament. For the 2017 Bermuda Bowl, the value has moved back to the center.
The 18O value is from the 2018 WBF Open Teams event in Orlando.
For the European events, we see that 2006-2014 are all to the middle or left of the chart. The MF value starts out below 1.30. The 2014 EBTC shows the most cheating. The 2016 EBTC and 2018 EBTC are significantly to the right. The EBL events are all over 1.38 since 2015.
Look at the tournaments on the right side of the chart, to the right of the 1.3 line. These are the recent tournaments, 2015-2018. Look at the tournaments on the left side of the chart, to the left of the 1.3 line. These are the pre-2015 tournaments. This dramatic change in the MF value cannot be explained by removing only the known cheating pairs. There was a dramatic change in the amount of cheating in top level Bridge after 2015. This last chart is the most damning evidence of that fact. The MF value was designed to detect cheating.
The chart illustrates the impact of cheating at the top levels of Bridge over the last twenty years. The chart shows the effect of removing the known cheating pairs. The MF value moves to the right when cheating pairs are removed. The chart shows the impact on the undetected cheating players when there is suspicion that they may be caught. The values for 2015-2018 are significantly to the right of previous years. Even when we remove the known cheating pairs from earlier data, the values for 2015-2018 are still significantly to the right. This indicates that there are many cheating pairs from earlier years that have not yet been caught. Examining the data shows who they are.
I took the data from top tournaments, which includes the major ACBL, EBL and WBF championships, and generated statistics. This is Vugraph data; therefore, there is additional information available on each board. I then sorted the pairs based on the amount of data I had on each pair and took the top 120 pairs. An odd number, but there were some pairs in the 100-120 range that I wanted to include in the data. Twenty-four of the 120 pairs are from USA/Canada.
I redid the results from the same tournaments, but only included data when one of these top pairs played against another. I generated statistics on this data. I repeated this step but excluded all players from the naughty list.
I now have three sets of data: top tournaments, top tournaments with only the top 120 players playing against each other, top tournaments with the top 120 players but excluding players from the naughty list.
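The construction of these three data sets can be sketched in code. This is a minimal illustration only: the record layout (`ns_pair`/`ew_pair` keys) and the `build_datasets` function are hypothetical, not the actual Bridgescore+ internals.

```python
def build_datasets(boards, naughty_list, top_n=120):
    """Build the three data sets described above.

    boards: list of board records; each record is assumed to carry
        the two pair names under 'ns_pair' and 'ew_pair'.
    naughty_list: set of pair names known to have cheated.
    """
    # Rank pairs by how much data we hold on them.
    counts = {}
    for b in boards:
        for pair in (b["ns_pair"], b["ew_pair"]):
            counts[pair] = counts.get(pair, 0) + 1
    top_pairs = set(sorted(counts, key=counts.get, reverse=True)[:top_n])

    # Data set 2: only boards where a top pair faced another top pair.
    top_vs_top = [b for b in boards
                  if b["ns_pair"] in top_pairs and b["ew_pair"] in top_pairs]

    # Data set 3: as above, but with naughty-list pairs excluded.
    clean = [b for b in top_vs_top
             if b["ns_pair"] not in naughty_list
             and b["ew_pair"] not in naughty_list]

    return boards, top_vs_top, clean
```

The same statistics generator can then be run unchanged over each of the three returned lists.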
For the following diagram, I removed the bottom 15 pairs. There is not sufficient data for these 15 pairs to have meaningful results when they played against the other pairs. However, I added back in Hamman/Wolff and Belladonna/Garozzo. I am sure everyone is curious where these superstars from the past appear in the chart. For presentation purposes, I then removed an additional 20 pairs. These 20 pairs are spectacularly bad at defense compared to their peers. Their data-points all appear to the right of the chart under the 600 line.
This chart shows how the top pairs play against other top pairs on defense. It only includes data from top tournaments. Lower values on the X (horizontal) axis indicate better defenders. The value on the Y (vertical) axis is the amount of data for each pair; the higher the number, the more data there is. The Law of Large Numbers (LLN) implies that as more data is added, the quality of the result on the X axis improves.
The first five annotated pairs from the left are Buratti/Lanzarotti (BL), Fisher/Schwartz (FS), Piekarek/Smirnov (PS), Fantoni/Nunes (FN) and Balicki/Zmudzinski (BZ). All of these pairs are on the naughty list. All of these pairs are outliers to the left of the chart – the further left, the more likely they are to be cheating. This chart is highly indicative of cheating pairs. The pairs with the most data that have very similar MF values are Helgemo/Helness (HH), Lauria/Versace (LV), Meckstroth/Rodwell (MR). The difference in MF values between the three pairs is very small.
Bob Hamman is shown with three partners: Soloway (HS), Wolff (HW) and Zia (HZ). Boye Brogeland is shown with two partners: Espen Lindqvist (BE) and Erik Saelensminde (BS). Belladonna/Garozzo is shown as BG. We see from this chart that Belladonna/Garozzo were not quite as good on defense, using this statistic, as the modern-day superstars Helgemo/Helness, Meckstroth/Rodwell and Lauria/Versace. Balicki/Zmudzinski (BZ) are known to cheat on defense and signal their distribution and high cards in suits. Despite this, we see that some pairs are able to defend better than Balicki/Zmudzinski.
It is interesting to compare the data for top pairs before 2015 and after 2015. For a proper comparison, I only compared the data when top players played against each other. For somewhat obvious reasons, I am not publishing the data. There are some pairs that suffered a “dip” in playing ability after the summer of 2015, then recovered after time to become statistically close to their previous playing ability. Amazing how that happened.
... more in book ...
The opening lead is one of the most critical decisions in Bridge. Almost 20% of the opening leads at top team events give up a trick. The elite World Class pairs, Meckstroth/Rodwell, Lauria/Versace and Helgemo/Helness, give up a trick 17-19% of the time. Thousands of books have been written about bidding, but very few about opening leads. Improving your opening leads by even a small amount will produce the biggest improvement in your game. This chapter contains the best tip for improving your opening leads.
Data created by players for the opening lead cannot be trusted. I analyzed the opening leads from the Pairs events at the 2016 World Bridge Federation (WBF) tournament in Wroclaw, Poland and the 2018 WBF events in Orlando, USA. Far too many players do not accurately record the opening lead. In Wroclaw, on about 1 in 8 boards the alleged opening lead was not in the opening leader’s hand. As a result, few meaningful statistics can be generated, and no cheating detection can be done, on data provided by players.
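The sanity check that catches such impossible records is simple. A minimal sketch follows; the record layout is assumed for illustration, and real Vugraph or player-entered formats differ.

```python
def lead_is_valid(record):
    """Return True if the recorded opening lead is actually a card
    in the opening leader's hand.

    record: assumed layout, e.g.
      {"leader": "W",
       "lead": ("S", "Q"),               # (suit, rank)
       "hands": {"W": {"S": "AQ52", "H": "K3",
                       "D": "962", "C": "JT84"}, ...}}
    """
    leader_hand = record["hands"][record["leader"]]
    suit, rank = record["lead"]
    # The lead is valid only if that rank appears in the leader's
    # holding in that suit.
    return rank in leader_hand.get(suit, "")
```

Running a check of this kind across an event's records is how one discovers what fraction of recorded leads could not have come from the leader's hand.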
I took the data from all tournaments, including Women and Seniors, in my database. There are over 5,000 pairs. I took the 100 pairs with the most data. I then ran double dummy analysis on the opening lead to see if it gave up a trick or not.
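The per-pair statistic can then be computed from the double dummy results. A minimal sketch, assuming each lead is summarized as the defenders' double dummy trick count before and after the card is played (the function and its input format are illustrative, not the book's actual pipeline):

```python
def safe_lead_pct(leads):
    """Percentage of opening leads that were 'safe', i.e. did not
    give up a trick per double dummy analysis.

    leads: list of (tricks_before, tricks_after) for the defending
        side, where tricks_before is the double dummy trick count
        before the lead and tricks_after the count after it.
    """
    if not leads:
        raise ValueError("no leads to analyze")
    # A lead is safe if the defenders' trick potential is unchanged.
    safe = sum(1 for before, after in leads if after >= before)
    return 100.0 * safe / len(leads)
```

For example, `safe_lead_pct([(5, 5), (5, 4), (6, 6), (7, 6)])` returns 50.0: two of the four leads preserved the defenders' trick count.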
The vertical axis in the next chart shows the number of boards for each partnership. The horizontal axis is the percentage of opening leads that were safe (did not give up a trick per double dummy analysis). The further to the right, the better the percentage of safe opening leads. The higher in the chart, the more data on the partnership. Some players appear more than once because they have played with multiple partners; Bob Hamman appears three times, with Soloway, Wolff and Zia. Fisher/Schwartz are known to cheat on opening leads. Can you spot them in the next chart (hint: I have between 500-1,000 records on them)?
So, where were Fisher/Schwartz? This is the same data but annotated with the pairs on the naughty list:
... more in book ...
If you are going to play against Jeff/Eric, Lorenzo/Alfredo, Steve/Bobby, Geir/Tor, Chip/Lew, Andrey/Alexander, Franck/Pierre, Nick/Ralph, Espen/Boye, Boye/Erik, Zia/Michael, Fred/Kit, Bob/Zia, Sylvie/Benedicte, Geoff/Eric, Roy/Sabine, Bob/Bobby, Sally/Nicola, Marion/Meike, David/Alan, Tony/Andrew or David/David, where should you sit?
Obviously, you want to be able to tell the best stories in the bar afterwards, so you want the weaker opening leader on your left so they are on lead when you declare – more chance of a better story to regale others for years to come. So, who is the stronger opening leader in these well-known partnerships?
... answer in the book ...
Did the outing of cheaters in the summer of 2015 change Bridge?
I took the Vugraph data from top tournaments and compared the data from before September 2015 (pre-2015) against the data after September 2015 (post-2015). These are the same tournaments, and we would therefore expect similar statistical results. However, we know that at least four pairs from pre-2015 are not playing post-2015. What we do not know is how many additional pairs may have been cheating pre-2015, and how the knowledge that cheating codes can be deduced from video has changed their behavior.
[... Data removed from snippet ...]
The “Pre-2015 (-n)” row shows the data from the same tournaments but without any data from any player on the naughty list. The amount of data has dropped by over 10%. The MF value has increased. The MF formula was designed to help detect cheating, and an increase in its value is exactly what is expected when cheating pairs are removed from the data. The percentage of makeable contracts decreased by about 0.2%. This means that players were either bidding worse or were less likely to compete or sacrifice; a pair that cheats in the bidding is more likely to sacrifice. The overall percentage of contracts made remained the same, as did the percentage of makeable contracts that were made. This implies that the level of declarer play remained the same: if you are in a makeable contract, the onus is more on the declarer than the defense to make an error.
We would expect, assuming that all the cheating pairs have been identified, that the MF values for “Pre-2015 (-n)” and “Post-2015” would be similar. These are the same tournaments and mostly the same players. But they are not the same. The MF value has increased dramatically. It should have remained the same. Why? The MF formula detects cheating. If you were a cheating pair prior to 2015, and you know that you are now more likely to get caught, you are likely to stop cheating. Or find a different partner.
The difference in MF values between the “Pre-2015 (-n)” and “Post-2015” can only be explained by:
I cannot emphasize enough the importance of the change in MF value from “Pre-2015 (-n)” to “Post-2015”. The known cheating pairs are not playing. The unknown cheating players are still playing, but just not cheating as much. The change cannot be explained by only a few pairs.
I took the top 150 pairs based on amount of data from all tournaments from pre-2015. I sorted based on double dummy error percentage on defense. For post-2015, I took the amount of data I had for pair #150 from pre-2015 and made that the minimum amount of data for pairs from post-2015 tournaments. I ended up with 64 pairs. I sorted these 64 pairs based on double dummy error percentage on defense. I should have equivalent data sets. Pre-2015, there are ten pairs that have a double dummy error rate less than my cheating threshold. Five of these pairs are known naughty pairs. What about the other five? Simply brilliant Bridge players – what else would you think? Post-2015, there are none. Bridge became a lot harder after 2015. Must be climate change.
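The pair-selection procedure described above can be sketched as follows. The function name and the count dictionaries are illustrative; the point is only to show how the two sets are made comparable by applying the same minimum-data threshold to both eras.

```python
def select_comparable_pairs(pre_counts, post_counts, top_n=150):
    """Select comparable pre- and post-2015 pair sets.

    pre_counts / post_counts: {pair_name: number_of_boards}.
    Take the top_n pre-2015 pairs by data volume, then use the board
    count of pair #top_n as the minimum data requirement for the
    post-2015 pairs, so both sets satisfy the same threshold.
    """
    pre_top = sorted(pre_counts, key=pre_counts.get, reverse=True)[:top_n]
    min_boards = pre_counts[pre_top[-1]]  # data held on pair #top_n
    post_top = [p for p, n in post_counts.items() if n >= min_boards]
    return pre_top, post_top
```

Each selected set is then sorted by double dummy error percentage on defense, and the pairs below the cheating threshold are examined.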