Computer and Human Evaluations


Computer and Human Evaluations

Post by spacious_mind »

Anyone who has followed my Division 3 league will have noticed that I am trying to compare the computers in the league to humans. Recently I started working on evaluating human performance using the Lichess Stockfish 8 evaluation function. Below are the results from 125 games, which amount to 250 human evaluations.

1400 ELO Human Evaluations

[Image: 1400 ELO human evaluations table]

I have removed the player names to protect the innocent.

From the above chart you can see that I categorized the games by year, sex, age and country, which means that I can sort and evaluate the results under many categories.

[Image: summary by sex and age group]

The above is a summary of performance by sex and age group. It is interesting that so far the male players perform slightly better than the female players. Also, young players perform better than adults, and adults perform better than senior players.

I can understand younger players scoring well relative to their rating, as they are on a fast track to improvement, especially when their games are all taken from championship and tournament play.

Since I believe there are outliers at both the high and low ends, I have removed the top 10% and the bottom 10% of performances. Doing this gives a final average total score of 68.59 for a 1400-rated player.
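Roughly, that trimming step works like the following minimal Python sketch; the function name and the per-game scores are made up for illustration and are not the actual data.

Code: Select all

def trimmed_average(scores, trim=0.10):
    """Average the per-game scores after dropping the top and bottom `trim` fraction."""
    ordered = sorted(scores)
    cut = int(len(ordered) * trim)            # number of results dropped at each end
    kept = ordered[cut:len(ordered) - cut]    # the middle 80% of the sample
    return sum(kept) / len(kept)

# Illustrative per-game scores only, not the real data:
sample_scores = [15.0, 42.5, 55.0, 61.0, 68.0, 72.0, 80.0, 95.0, 120.0, 190.0]
print(round(trimmed_average(sample_scores), 2))   # -> 74.19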

Breaking it down by country, you can also see how the average 1400-rated adult player in Germany scores:

[Image: average scores for 1400-rated adult players in Germany]

The samples were taken by searching ChessBase for players rated between 1376 and 1425 ELO who played each other, and each game had to be 40 moves or longer for both the White and the Black player.
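For anyone wanting to reproduce that kind of filter without ChessBase, a minimal sketch using the open-source python-chess library follows; the PGN file name is a placeholder, and the rating and move-count criteria are taken straight from the description above.

Code: Select all

import chess.pgn

MIN_ELO, MAX_ELO, MIN_MOVES = 1376, 1425, 40

def qualifying_games(pgn_path):
    """Yield games where both players are rated 1376-1425 and the game reaches move 40."""
    with open(pgn_path, encoding="utf-8") as handle:
        while True:
            game = chess.pgn.read_game(handle)
            if game is None:
                break
            try:
                white = int(game.headers.get("WhiteElo", 0))
                black = int(game.headers.get("BlackElo", 0))
            except ValueError:
                continue                      # skip games with missing or odd rating tags
            if not (MIN_ELO <= white <= MAX_ELO and MIN_ELO <= black <= MAX_ELO):
                continue
            if game.end().board().fullmove_number < MIN_MOVES:
                continue                      # too short for the sample
            yield game

for g in qualifying_games("sample_1400_games.pgn"):   # placeholder file name
    print(g.headers["White"], "-", g.headers["Black"], g.headers.get("Date", "?"))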

I was planning on doing a sample of 250 for each category, but I find it interesting to see the different age groups showing up as well, for which I don't yet have a large enough sample. Therefore, for the 1400 evaluations I will continue working through the years to 2014 to see if I can get a good number for everything before moving on to the next skill level.

You can compare these performances to the computers that are playing in Division 3.

http://www.hiarcs.net/forums/viewtopic. ... 3&start=30

Best regards
Nick

Post by scandien »

Hello,

I always find these analyses very interesting. The basis of this work is to analyse each move with one of the very best chess engines and to give the move a rating.

I think this is quite good, but there is one problem (not a big one, but a real problem).

Take two great players of the past: Fischer and Tal.

Both had very different playing styles, but both will be underrated by this system.

Tal, at his best, was able to complicate the game in a way that his opponent would not find the good move at the board. But Tal was aware that most of his sacrifices were dubious! He was simply very confident in his superior calculation skill. And indeed, some of his moves were refuted years later with the help of computers!

For Fischer it is not the same. Fischer was very confident in "simple", technical positions where his opponent could find no counterplay. If he had the choice between two variations, one leading quickly to victory but giving the opponent counterplay along the way, and another leading to a sure victory without counterplay but by a longer road, he would choose the second one.
Fischer was confident in his technical skill and his patience.

The choices of these two players will be considered by the machine as bad (or weaker) moves. Stockfish, Komodo or Houdini cannot understand that a weaker move was knowingly chosen by a player because, at that moment, with that position and that opponent, it was really the best move for a human.

Obviously, most of us are not Fischer or Tal!

Best regards.

Nicolas

Post by spacious_mind »

scandien wrote: Take two great players of the past: Fischer and Tal. Both had very different playing styles, but both will be underrated by this system. [...] Stockfish, Komodo or Houdini cannot understand that a weaker move was knowingly chosen by a player because, at that moment, with that position and that opponent, it was really the best move for a human. [...]
Good and valid points. However, if I am going to rate GMs, I would probably follow the same principles as below:

1) Plenty of games from each player
2) Remove 10% from the top end and remove 10% from the bottom end to get the best average.

Tal did not always play dubiously. He picked those positions when he felt he had a great chance of success with them. In any case, any outlying eccentricities would be removed by the 10% trimming, and they could make for an interesting side discussion.
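For reference, figures like the ones below (inaccuracies, mistakes, blunders, average centipawn loss) are what the Lichess analysis reports per player per game. Roughly, counts of this kind can be derived from engine evaluations as in the following sketch; the centipawn thresholds are common approximations, not necessarily Lichess's exact cutoffs, and the sample evaluations are invented.

Code: Select all

def classify_moves(evals_cp):
    """evals_cp: (before, after) engine scores in centipawns, from the mover's
    point of view, for each of one player's moves."""
    inaccuracies = mistakes = blunders = 0
    losses = []
    for before, after in evals_cp:
        loss = max(0, before - after)          # centipawns given away by the move
        losses.append(loss)
        if loss >= 300:                        # assumed blunder threshold
            blunders += 1
        elif loss >= 100:                      # assumed mistake threshold
            mistakes += 1
        elif loss >= 50:                       # assumed inaccuracy threshold
            inaccuracies += 1
    avg_loss = round(sum(losses) / len(losses)) if losses else 0
    return inaccuracies, mistakes, blunders, avg_loss

# Invented evaluations for six moves by one player:
print(classify_moves([(20, 15), (15, -40), (-40, -45), (-45, -160), (-160, -150), (-150, -470)]))
# -> (1, 1, 1, 83)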

Some Tal games from his World Championship match against Botvinnik:

Botvinnik, Mikhail
4 Inaccuracies
2 Mistakes
0 Blunders
27 Average centipawn loss

Tal, Mihail
5 Inaccuracies
3 Mistakes
1 Blunders
37 Average centipawn loss

Botvinnik, Mikhail
3 Inaccuracies
5 Mistakes
1 Blunders
37 Average centipawn loss

Tal, Mihail
1 Inaccuracies
4 Mistakes
0 Blunders
27 Average centipawn loss

Tal, Mihail
0 Inaccuracies
0 Mistakes
0 Blunders
9 Average centipawn loss

Botvinnik, Mikhail
0 Inaccuracies
0 Mistakes
0 Blunders
9 Average centipawn loss

Tal, Mihail
0 Inaccuracies
0 Mistakes
0 Blunders
15 Average centipawn loss

Botvinnik, Mikhail
3 Inaccuracies
3 Mistakes
1 Blunders
31 Average centipawn loss

Some Fischer versus Spassky games:

Spassky, Boris V (2660)
0 Inaccuracies
0 Mistakes
0 Blunders
13 Average centipawn loss

Fischer, Robert James (2785)
2 Inaccuracies
2 Mistakes
2 Blunders
31 Average centipawn loss

Spassky, Boris V (2660)
3 Inaccuracies
0 Mistakes
1 Blunders
25 Average centipawn loss

Fischer, Robert James (2785)
1 Inaccuracies
0 Mistakes
0 Blunders
12 Average centipawn loss

Fischer, Robert James (2785)
2 Inaccuracies
0 Mistakes
0 Blunders
8 Average centipawn loss

Spassky, Boris V (2660)
2 Inaccuracies
0 Mistakes
0 Blunders
9 Average centipawn loss

Fischer, Robert James (2785)
0 Inaccuracies
0 Mistakes
0 Blunders
12 Average centipawn loss

Spassky, Boris V (2660)
6 Inaccuracies
1 Mistakes
1 Blunders
37 Average centipawn loss

Best regards
Nick

Post by herO »

spacious_mind wrote: [...] Some Tal games from his World Championship match against Botvinnik: [...] Some Fischer versus Spassky games: [...]
Could you please share a PGN file of these games? Thank you.

Post by Volodymyr »

Nicolas, good post.
Nick, that is a very large amount of work; you clearly believe in this method.
I do not understand why. Just out of interest?
Only three games.
Explain the difference. I will repeat, only three games.
My impression is that you do not play chess, but you know the rules of the game. :D

***************************
Only 10 minutes per game. What is it? A game between two supercomputers? A perfect game. A masterpiece!

https://lichess.org/FfexoNpu

doktorOLEG Inaccuracies-0 Mistakes-0 Blunders-0 Average centipawn loss-8
Waldos Inaccuracies-0 Mistakes-0 Blunders-0 Average centipawn loss-7

***************************
Kramnik is much weaker than Waldos and doktorOLEG.
Ehlvest is a loser. A bad player.

https://lichess.org/e3JsVNo9

Kramnik Inaccuracies-0 Mistakes-0 Blunders-0 Average centipawn loss-19
Ehlvest Inaccuracies-2 Mistakes-1 Blunders-2 Average centipawn loss-60

***************************
Jones is much stronger than Ehlvest.
Noseinbook is by far the weakest player.

https://lichess.org/SmKVxswZ

JHONS Inaccuracies-3 Mistakes-2 Blunders-0 Average centipawn loss-36
noseinbook Inaccuracies-4 Mistakes-5 Blunders-1 Average centipawn loss-75

Post by spacious_mind »

Volodymyr wrote: [...] My impression is that you do not play chess, but you know the rules of the game. :D [...] Only 10 minutes per game. What is it? A game between two supercomputers? A perfect game. A masterpiece!
Hi Volodymyr,

I don't know why you want to insult my chess knowledge; it is insulting and silly.

The games I showed are just a couple of wins and losses from Tal and Fischer. They show the highs and lows very well: they show when these players make mistakes and when they play well. The point I was making is that the evaluations show Fischer's mistakes, and they will show Tal's mistakes, or the mistakes of any other grandmaster.

I have examples where a 1300-rated player in a particular game scores as well as Fischer or Tal, so I accept that this can happen in a single game's evaluation, but it doesn't mean the 1300 player would win. If that is what your response is about, it is silly.

What I am trying to establish is the average score based on a lot of games played over the board in real matches, not online, which is why I have done 250 evaluations so far for players around 1400 ELO. Those are factual. Online games I consider flawed, because I don't know whether the player had his iPhone next to him feeding him the moves he entered into Lichess. They are therefore irrelevant and uninteresting for the tests I am doing, as are the examples you use to argue your point.

Of course, in any given game the final score could be good or not so good; it depends on the game played by the two people. You don't have to question my chess knowledge over that; common sense is all that is needed.

If I wanted to test Tal, you can be assured that I would put several hundred of his games through a thorough test. I don't play around with half-assed examples and make assumptions based on them.

You can rest assured that before taking SF8 at 18 ply seriously for evaluating humans, I tested SF8's strength at 18 ply. Even more so because in the past I have evaluated human games 30 to 50 ply deep for other rating tests of mine, which are accurate. So of course I would look into SF8 and its accuracy at 18 ply as used by Lichess.

SF8 18 Ply 1.5 - 0.5 Komodo 1.0 (2865 ELO) 40/40
SF8 18 Ply 1.5 - 0.5 Komodo 1.3 JA (2918 ELO) 40/40
SF8 18 Ply 2.5 - 1.5 Hiarcs 14WCSC (2963 ELO) 40/40
SF8 18 Ply 1.0 - 1.0 Critter_1.6a_32bit (3069 ELO) 40/40
====================================================
SF8 18 PLY 6.5 - 3.5 Engines

The above chess programs played at 1 minute per move against SF8 at 18 ply. These programs are better than Deep Blue and ran on my modern 2.8 GHz laptop in 32-bit mode on 1 core (exactly the conditions under which the CCRL 40/40 ratings for these programs were established). They search faster and deeper than Deep Blue, and overall they searched deeper than SF8's 18 ply, yet SF8 at 18 ply beat them. Therefore I have every confidence in SF8 at 18 ply evaluating human mistakes accurately, as it is better than every human alive today. There is no human who thinks 18 ply deep on every single move. In summary, I don't believe there is a human who would beat SF8 at 18 ply under real match conditions.
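For anyone who wants to try a fixed-depth search like this themselves, here is a minimal sketch of asking a UCI engine for an 18-ply move through the python-chess library; the engine path is a placeholder, and this only illustrates the idea, not my actual test setup.

Code: Select all

import chess
import chess.engine

def fixed_depth_move(board, engine_path="./stockfish", depth=18):
    """Return the engine's chosen move after a fixed-depth search."""
    engine = chess.engine.SimpleEngine.popen_uci(engine_path)   # path is a placeholder
    try:
        result = engine.play(board, chess.engine.Limit(depth=depth))
    finally:
        engine.quit()
    return result.move

print(fixed_depth_move(chess.Board()))   # move chosen from the starting position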

Therefore, for the purposes of this exercise, which is meant to identify the average strength of players at different rating grades, or even to evaluate the average strength of a Tal, Fischer or Carlsen, I consider SF8 at 18 ply more than capable of providing a decent value. Hence the tests.

However, the proof of any test is in the results. I have not completed the tests for all the different strengths yet, nor have I tested a large set of grandmaster games.

You can rest assured that, when tested, a GM will show far fewer mistakes than a 1400 player. :P But you can also be assured that they will show mistakes, because they are human.

Best regards
Nick

Post by spacious_mind »

herO wrote: Could you please share a PGN file of these games? Thank you.
You can get these games anywhere, for example from ChessBase or from books. The examples I showed are the first 4 games over 40 moves from Tal's first World Championship match against Botvinnik, which he won. In the rematch he was trounced by Botvinnik (an old man :) ), and it wasn't even close. Fischer's games are likewise the first 4 games against Spassky, excluding game 2, which was a default win for Spassky when Fischer didn't show up. No magic and no cherry-picking.

Best regards
Nick

Post by Volodymyr »

Nick, you are a researcher; I do not know whether it is work or a hobby. I do not understand why it is done this way. What information does it give?

Here are two games

TC Standard
https://lichess.org/6aonL8lE
Alexander Morozevich Inaccuracies-9 Mistakes-4 Blunders-2 Average centipawn loss-42
Maxime Vachier-Lagrave Inaccuracies-4 Mistakes-3 Blunders-1 Average centipawn loss-28

It's a 5 minute blitz, about 1300-1400 FIDE.
https://lichess.org/Y73YhQkR
uola Inaccuracies-5 Mistakes-4 Blunders-3 Average centipawn loss-46
Gandalfbsc Inaccuracies-9 Mistakes-3 Blunders-1 Average centipawn loss-30

These magic numbers from the analysis are just dust in the eyes.
These are games of completely different levels.
In Morozevich - Vachier-Lagrave, after the knight sacrifice it is very difficult to calculate the moves.
For me this is a hard puzzle. In this game the players' class is visible.

OK. Two puzzles, one easy and one hard. Mate in two moves, White to move.
Someone who does not play chess can only answer in terms of more pieces or fewer pieces, or say that the pieces are completely different.
It is obvious.

[Image: two mate-in-two puzzles]

Post by spacious_mind »

Volodymyr wrote: [...] Here are two games.

TC Standard
https://lichess.org/6aonL8lE
Alexander Morozevich Inaccuracies-9 Mistakes-4 Blunders-2 Average centipawn loss-42
Maxime Vachier-Lagrave Inaccuracies-4 Mistakes-3 Blunders-1 Average centipawn loss-28
When you do the analysis, ask yourself whether SF8 evaluated correctly. Morozevich played a novelty with 13. Nf4?!, which totally upset his opponent, and by move 26 he had a forced mate in hand (if he had been able to see 20 moves deep, which of course he was not). He then blundered on moves 26 and 27 and, with more mistakes from both sides, ultimately lost a game in which he had a forced checkmate after Black's 25th move. So how is SF8 not evaluating this game correctly?

Now ask yourself: if this game is so complicated, would it not be even more complicated for a 1400 ELO player, and therefore the evaluations at the end of the game would be even worse than they were? That is assuming a 1400 player could play even worse moves than the ones Morozevich played on moves 26 and 27 :).

If you evaluate 250 Morozevich games, don't you think that in the bigger picture his GM average will come out better than that of a non-GM who was also tested over 250 games?

My bigger concern is whether an 18-ply SF8 analysis is deep enough. I am pretty sure that in about 5 years, when you can search 35 ply or more as quickly as SF8 searches 18 ply today, you will get even better results. But I won't know that without trying 18 ply first and then, perhaps in 5 years, comparing it against 35 ply.

Volodymyr wrote: It's a 5 minute blitz, about 1300-1400 FIDE.
https://lichess.org/Y73YhQkR
uola Inaccuracies-5 Mistakes-4 Blunders-3 Average centipawn loss-46
Gandalfbsc Inaccuracies-9 Mistakes-3 Blunders-1 Average centipawn loss-30
You have to ask yourself the same question: if Morozevich had played this particular game, would he have made as many mistakes? The answer is probably no. Again, it is the average over many games that counts!
Volodymyr wrote: [...] OK. Two puzzles, one easy and one hard. Mate in two moves, White to move. [...]
[Image: two mate-in-two puzzles]
Here you have to ask yourself the same question, if a situation like this were to happen in a game. Would a 1400 player spot position 1? Yes. Would he spot position 2? Probably not. He is playing under match conditions, and position 2, which is a composed test position, is not so easy to spot with the clock ticking down. The same goes for GMs: do you really think every GM in the world would spot position 2 under clock pressure?

If you keep worrying about individual games and example positions, you will drive yourself crazy with my test. The concept of this test has nothing to do with individual examples.

Best regards
Nick

Post by Volodymyr »

Stockfish evaluates the game correctly.
But it does not assess the complexity. Positions have different levels of complexity.
And the errors are different.

Morozevich would not play that game; he played games like that at the age of 6-7.
A 1400 player would not play move 26 or 27 any better even after an hour of thought; his head would spin.

Morozevich's better result in the analysis comes only from his percentage of victories.
Morozevich is an attacking player. His main task is to create problems for the opponent, sometimes by making an objectively incorrect move (dubious in the analysis).
He does not play passively. Some 2400-2600 players will have the best indicators in the analysis. The correct sampling of games is important.
For example, Player X (2500 Elo): 10 wins, 10 draws, 10 losses, against opponents whose average Elo (this is important) is within +/-50.

Passive play, exchanges and endgames look ideal in the analysis.
Aggressive play, sacrifices and attacks on the king look questionable in the analysis.

I repeat, Nick, you do not play chess. :D
Sometimes a dubious move is the way to victory.
It is the way to confuse the opponent.

Whoever does not play against a human chess player will not understand this either; at least play online, then.
The strategy of a game against a computer and against a human chess player is different.

Post by spacious_mind »

Volodymyr wrote: [...] Passive play, exchanges and endgames look ideal in the analysis. Aggressive play, sacrifices and attacks on the king look questionable in the analysis. [...] Sometimes a dubious move is the way to victory. It is the way to confuse the opponent. [...]
You are still missing how this works. Look at my U1400 list so far: there are games where the evaluation finished at 15.00 and games where it finished at 190.00. The evaluation depends on the game. Why is this so hard for you to understand? The average ends up being 70.00.

Morozevich will not finish at 70.00 :). So why is this so difficult for you? I don't care whether Morozevich plays that other game or not. The point is that if he had, he would score better as a GM; that is the point of the test.

By the way, Morozevich played like a beginner on move 26: grabbing the highest-value piece and missing an easy win, which is exactly what a beginner would do. GMs are also capable of making beginner mistakes, regardless of complexity. On move 26 it is not difficult to see that grabbing the rook on a8 is not the best move, when instinct should tell you that a checkmate might be lurking somewhere with the king trapped on g8 as it was. I doubt time pressure was an excuse, as this was move 26 in a prepared variation that Morozevich had hand-picked and was ready for when the situation arose. It is obvious that 13. Nf4?! was a prepared move. The game is from 2009, and GMs had already been using computers for a while to find opening novelties to upset their opponents' planning; this move was one of them. Therefore I would bet that by move 26 Morozevich had plenty of time on his clock to study the position properly, especially since many of the preceding moves were forced, meaning that most of the time only one good move was possible.

What is your worry? Are you worried that Morozevich or Tal will finish lower than other GMs because of their playing style? If so, you are worrying about something that has not been shown yet. Even if it were shown, so what? Perhaps then you could argue that their risky style creates more errors. Your game example shows the high-risk, high-reward trade-off very well: Morozevich missed badly in this game, a game he had in the bag. If this had been one of our dedicated computers, someone here would have posted and complained about it.

Best regards
Nick

Post by Volodymyr »

A beginner will not find this move. It is too difficult for a beginner.
Even after 10 hours!

Nick, I'm worried because I have seen other messages of yours.

There is no evolution of the brain - so there can be no progress in chess players' play.
Experience and memory are important. Now there are other possibilities.
One good book can give 100-150 Elo, if you master it in practice.
Game databases and analysis, online coaching lessons, more than 1000 chess books available, chess engines for sparring, etc.
Nick wrote - no brain evolution. There is no evolution - no progress.

CCRL is not the correct rating.
CCRL has more than 2,000,000 games.
You can download them and build a rating list for any engine using BayesElo or Ordo.

My rating is the most correct.
There are many questionable positions.
Where is the archive with the games?

Nick, this is your idea. Is this the evidence base?
There is no progress, and my rating is the most correct one.

Nick, but you do not play chess.
What questions can there be? Or what correct ratings?

That's Nick. It's the Internet. :)

Post by spacious_mind »

Volodymyr wrote: [...] Nick wrote - no brain evolution. There is no evolution - no progress. [...] CCRL is not the correct rating. [...] My rating is the most correct. [...] Nick, but you do not play chess. [...] That's Nick. It's the Internet. :)
Are you sure you have the right Nick? You will need to show me where I posted that; it doesn't seem like something I would have posted. Show us all the link to where I said it.

Volodymyr, I refuse to be goaded into discussions about my chess-playing ability. Since I got into the computer chess hobby, I have never posted any games of myself playing against a computer. I find it tacky, as it leaves open the interpretation that I might be inflating my skill by showing only a best game or by not showing where a takeback occurred. So I keep myself neutral at all times. You will also see, if you look back at my posts, that when someone posts their own game I stay neutral and don't comment.

I can promise you without doubt that I wish a GM would give me a position like the one Morozevich had on move 26. It is not difficult for most people in this forum, especially those who grew up in the Fischer and Tal era, to see the beauty of sacrifices. It is not difficult to see the very simple simplification 26. Rxf8 Rxf5 27. Qxe5 (doing this from the top of my head, since I am posting while driving to work). After that the GM would probably have resigned; three moves is all it takes. Believe me, most people in this forum would see it if they took a couple of minutes to study the position.

You have to stop insulting my abilities because it is starting to get annoying. You do not KNOW what my capabilities are in chess.

In all honesty, I don't even know how you could consider 26. Rxf8 Rxf5 27. Qxe5 difficult, because to me it is not!

Best regards
Nick

Post by Volodymyr »

Nick, you found it! But you had already watched the game and looked at the analysis.
Your rating is the most correct!
But where is the archive with the games?

Post by spacious_mind »

Volodymyr wrote: Nick, you found it! But you had already watched the game and looked at the analysis.
Your rating is the most correct!
But where is the archive with the games?
I don't know what to say. I am starting to think that you are so fixated on the horror of your countrymen making mistakes that you have to react by questioning my chess abilities, getting personal with attacks on me to the point of harassment. You remain blind as to why I posted the 4 Tal games and the 4 Fischer games, and your reaction is unfathomable. It is the same reaction you gave recently when you reacted in horror to the Fritz 3 blitz table I posted, where it beat every top grandmaster in the world. So much so that you are turning this into a personal attack on me rather than understanding what I posted, which is the beginning of a new test.

I can't help the scores that come out of an SF8 evaluation; I just post them. The Fischer and Tal games were a response to Nicolas, who expressed concerns that are valid but as yet unproven.

So I will help you one last time. Why did I post those games? Let me make it very easy for you.

1) Count the number of players who scored less than 20 out of the 250 evaluations I showed in the first table of this post. Unless my counting is off, which it may be, I make it 5. 5/250 = 2% of player evaluations. Right? Do I need to be a GM to know this?
2) Look again at the 8 games I posted. As I said, I responded to Nicolas with 4 Tal games from his World Championship match and 4 Fischer games from his World Championship match, in both cases the first 4 games over 40 moves, and to keep it equal, if I remember correctly, it was 2 wins, 1 draw and 1 loss for both Fischer and Tal.
3) Now look at those evaluations again and do the same: count the player evaluations where the score is less than 20 (the same threshold as for the U1400s). If my count is correct, I make it 8 out of 16 = 50%.

So why I even have to put up with your insults is beyond me. :P

Best regards
Nick