After two days of training, the model produced the following final contracts on a sample of 1000 random deals:
| Contract | Probability | Count |
|---|---|---|
| 7NTxx | 74.80% | 748 |
| 7Sx | 7.40% | 74 |
| 7Sxx | 6.50% | 65 |
| 2C | 3.30% | 33 |
| PASS | 1.10% | 11 |
| 6NT | 1.10% | 11 |
| 1Hx | 0.90% | 9 |
| 1H | 0.80% | 8 |
| 7D | 0.70% | 7 |
| 1C | 0.60% | 6 |
| 6Sx | 0.40% | 4 |
| 1Cx | 0.30% | 3 |
| 6C | 0.30% | 3 |
| 6Cx | 0.30% | 3 |
| 2Hx | 0.20% | 2 |
| 7Hx | 0.20% | 2 |
| 6NTx | 0.20% | 2 |
| 5S | 0.10% | 1 |
| 5D | 0.10% | 1 |
| 6H | 0.10% | 1 |
| 3NT | 0.10% | 1 |
| 2Cx | 0.10% | 1 |
| 1Dx | 0.10% | 1 |
| 6D | 0.10% | 1 |
| 7NTx | 0.10% | 1 |
| 1S | 0.10% | 1 |
Clearly, this bidding is far too optimistic.
The learning idea is as follows. I initialize the RNN with random weights, then loop:
1. Generate N random deals and have the network bid them against itself (each player sees only its own hand, of course).
2. Compute the actual result of the final contract (real points) with DDS (a double-dummy solver).
3. Compare the real result with a reference value. Ideally this would be a minimax value, but to keep things simple I compare it with the result expected from the Milton Work point count. This yields a score for the contract reached in the bidding.
4. For each prefix of the auction, assign a value to each bid: the score for the declaring pair, and minus the score for the defending pair.
5. Train the network on the resulting examples.
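The loop above can be sketched in Python. This is a minimal illustration under my own assumptions, not the post's actual code: the bidding policy is a random stand-in for the RNN, `fake_dds_score` replaces the real DDS call, and the declarer is simplified to the seat that made the last non-pass bid.

```python
import random

CARDS = list(range(52))
BIDS = ["PASS"] + [f"{level}{strain}" for level in range(1, 8)
                   for strain in ("C", "D", "H", "S", "NT")]

def deal():
    """Shuffle the deck into four 13-card hands (N, E, S, W)."""
    cards = CARDS[:]
    random.shuffle(cards)
    return [sorted(cards[i * 13:(i + 1) * 13]) for i in range(4)]

def random_auction():
    """Stand-in policy: random legal calls. A real run queries the RNN here."""
    auction, last_bid = [], 0
    while True:
        # legal calls: PASS, or any bid higher than the current one
        b = random.choice([0] + list(range(last_bid + 1, len(BIDS))))
        auction.append(BIDS[b])
        if b != 0:
            last_bid = b
        done_all_pass = auction == ["PASS"] * 4
        done_three_pass = len(auction) >= 4 and auction[-3:] == ["PASS"] * 3
        if done_all_pass or done_three_pass:
            return auction

def fake_dds_score(hands, auction):
    """Placeholder for the double-dummy result; a real pipeline calls DDS."""
    return random.randint(-2000, 2000)

def training_examples(auction, score):
    """Step 4: label every auction prefix with +score for the declaring
    pair's calls and -score for the defenders' calls."""
    bid_indices = [i for i, b in enumerate(auction) if b != "PASS"]
    if not bid_indices:  # passed out: no declarer, neutral targets
        return [(auction[:i], b, 0) for i, b in enumerate(auction)]
    p = bid_indices[-1] % 4 % 2          # simplified declarer's pair: 0/2 or 1/3
    declaring_pair = {p, p + 2}
    return [(auction[:i], b, score if i % 4 in declaring_pair else -score)
            for i, b in enumerate(auction)]

def one_iteration(n_deals=10):
    """One pass of the loop: deal, bid, score, label; returns examples to train on."""
    examples = []
    for _ in range(n_deals):
        hands = deal()
        auction = random_auction()
        score = fake_dds_score(hands, auction)
        examples.extend(training_examples(auction, score))
    return examples
```

The hypothetical `one_iteration` collects (auction prefix, call, value) triples; step 5 would feed these to the RNN as supervised targets.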
So why does the software converge on 7NTxx?
Clearly the double is correct in these auctions, while bidding 7NT and redoubling are not.
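To see what the reward signal looks like for such a contract, here is a helper of my own (not from the post) that computes standard duplicate bridge scores. One contributing factor may be the payoff asymmetry at the grand-slam level: making 7NT redoubled nonvulnerable scores 2280, while going down one redoubled costs only 200.

```python
def duplicate_score(level, strain, doubling, tricks_made, vulnerable=False):
    """Standard duplicate-bridge score from declarer's side.
    strain in {'C','D','H','S','NT'}; doubling: 1, 2 (doubled) or 4 (redoubled)."""
    need = 6 + level                      # tricks required to make the contract
    per = {"C": 20, "D": 20, "H": 30, "S": 30, "NT": 30}[strain]
    if tricks_made >= need:
        trick_score = (per * level + (10 if strain == "NT" else 0)) * doubling
        score = trick_score
        # game bonus if the (multiplied) trick score reaches 100, else partscore
        score += (500 if vulnerable else 300) if trick_score >= 100 else 50
        if level == 6:                    # small slam bonus
            score += 750 if vulnerable else 500
        elif level == 7:                  # grand slam bonus
            score += 1500 if vulnerable else 1000
        over = tricks_made - need
        if doubling == 1:
            score += per * over
        else:
            score += over * (doubling // 2) * (200 if vulnerable else 100)
            score += 25 * doubling        # 50/100 bonus for making (re)doubled
        return score
    down = need - tricks_made
    if doubling == 1:
        return -down * (100 if vulnerable else 50)
    if vulnerable:                        # doubled: 200, then 300 each
        pen = 200 + 300 * (down - 1)
    else:                                 # doubled: 100, 200, 200, then 300 each
        pen = 100 + 200 * min(down - 1, 2) + 300 * max(down - 3, 0)
    return -pen * (doubling // 2)         # redoubled penalties are doubled again

print(duplicate_score(7, "NT", 4, 13))    # 7NTxx made, NV -> 2280
print(duplicate_score(7, "NT", 4, 12))    # 7NTxx down one, NV -> -200
```

With per-bid targets of simply +result for the declaring pair, such occasional huge positive scores can dominate the averages unless deep sets are penalized accordingly.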
