Your Country Needs You!

This forum is for general discussions and questions, including Collectors Corner and anything to do with Computer chess.

Moderators: Harvey Williamson, Steve B, Watchman

User avatar
spacious_mind
Senior Member
Posts: 3999
Joined: Wed Aug 01, 2007 10:20 pm
Location: Alabama
Contact:

Post by spacious_mind »

fourthirty wrote:Great work Nick!

Have you tested your Novag Citrine yet?
No, it has not been tested yet. If you have one, then you are welcome to try the tests on it.

Regards
Nick

Post by spacious_mind »

New additions this week are the Mephisto London 68000 in its three style settings: Active, Risky and Solid.

Three more Gavons are also added:

GAVON GNUCHESS V. 5.07 BY CHIA KONG SIAN
GAVON FAILE V. 1.4 BY ADRIAN REGIMBALD
GAVON PEPITO V. 1.59.2 BY CARLOS DEL CACHO

Image

Interestingly, Mephisto London did not score as well as Mephisto Vancouver; perhaps Vancouver is the better of the two programs. Also interesting is that the "RISKY" setting scores better than the "ACTIVE" setting for both Vancouver and London.

Best regards
Nick
User avatar
paulwise3
Senior Member
Posts: 1505
Joined: Tue Jan 06, 2015 10:56 am
Location: Eindhoven, Netherlands

Post by paulwise3 »

Interesting, these style strength differences. Last week I got the Excalibur Chess Station (in as-new condition) and did some tests with test game 3. I noticed that at level 16 (which should be 32 secs/move) it seldom used more than 24 seconds. So I also tested it with level 18 (which should be 36 secs/move), which comes very close to an average of 30 secs/move. This machine has a Fast option, which seems to be some kind of selective search. Only with Fast=On and at level 18 did it find the mating move 21. c4+.
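The timing check above can be sketched in a few lines of Python, assuming you have written down each move's thinking time by hand while the machine plays (the numbers below are made up for illustration, not measurements from the Chess Station):

```python
# Hypothetical per-move thinking times in seconds for one test game,
# transcribed by hand. Values are illustrative only.
move_times = [28, 31, 24, 35, 29, 33, 27, 30]

nominal = 36  # level 18 is nominally 36 secs/move

# Average observed time per move, to compare against the nominal level speed.
average = sum(move_times) / len(move_times)

print(f"average: {average:.1f} s/move (nominal: {nominal} s/move)")
```

With enough moves logged this gives a fair picture of whether a level really uses its advertised time.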

Does anyone else have experience with the average timing of these levels on the Chess Station?

Fast=On regards,
Paul

Post by spacious_mind »

I completed the tests with the Chess King Triomphe.

Image

It actually did a lot better than I expected, finding some moves that I did not think it would find, but at the same time playing some very poor moves in certain positions. Still, it finished last amongst the currently tested computers, with a final score of 1665 ELO.

Image

Amazingly, and it is really hard to believe, it scored better in game 4 than the following computers:

MEPHISTO LONDON 68000 RISKY 1694
MEPHISTO VANCOUVER 68000 RISKY 1667
SAITEK SPARC 1656
MEPHISTO VANCOUVER 68000 SOLID 1651
CHESS KING TRIOMPHE 1650
NOVAG SUPER EXPERT A 1631
HP VECTRA DX2-66MHZ SARGON V 1630
KRYPTON REGENCY 1618
SAITEK RENAISSANCE 1611
MEPHISTO LONDON 68000 ACTIVE 1605
NOVAG CONSTELLATION QUATTRO 1602
GAVON ADROIT CHESS V. 0.3 1590
MEPHISTO BERLIN 68000 1576
GAVON JFRESH V.0.1A 1526
MEPHISTO ROMA 68000 1467

Game 4 seems to be really hard for Richard Lang, David Kittinger and the Spracklens?

Best regards
Nick
User avatar
mclane
Senior Member
Posts: 1600
Joined: Sun Jul 29, 2007 9:04 am
Location: Luenen, germany, US of europe
Contact:

Post by mclane »

The fact that the listed computers get only 1600 Elo in your test is not a good sign for your test.
What seems like a fairy tale today may be reality tomorrow.
Here we have a fairy tale of the day after tomorrow....

Post by spacious_mind »

mclane wrote:The fact that the listed computers get only 1600 Elo in your test is not a good sign for your test.
Why? It's what they play in that game. Should that game be hidden, or should only games be picked to suit the author's means? Is that how it works?

It's a closed game, and it shows the programs' weaknesses in those situations. If you look at all the individual games, programs over-perform and under-perform; it is how they play. It's how you play, it's how I play. Bad moves cannot be hidden away.

These 5 test games are good; the results already show it, and the relative standing between programs can be seen as well. I am actually surprised that only 5 games can already show so much.

If you have read the complete post then you would also know that more test games will be coming.

Maybe these poor Langs will do better, maybe they won't. I don't know; it is not me that plays the moves. :wink:

Best regards
Nick

Post by spacious_mind »

Here are some more tests that I had recently completed.

486DX2-66 MHZ CHESS FRIEND

The below photo shows Chess Friend just completing Test Game 2.

Image

Chess Friend was written by Gyula Horvath in 1993. It is based on his Pandix program, which over the years played in several tournaments and world championships between 1987 and 2013.

With a final score of 2214 ELO, Chess Friend currently lies 35th in the rating table.

Image

I also added another Gavon, with Jazz, written by Evert Glebbeek. Jazz scored 2330 ELO and currently lies in 20th place.

Previously I only had ChessMachine's King 2.54 playing on the 32-bit 30 MHz 512K card with the NORMAL setting, while Gideon 3.1 was playing with the NORMAL setting on the 32 MHz 512K Gold Card. So I wanted to see how well King 2.54 plays on this 32 MHz Gold Card, and for all of the above I also tried out the AGGRESSIVE setting to see if it made any difference.

CHESSMACHINE TESTS

Image

Well, all the settings made the TOP 15. Gold Card King 2.54 is now the best ChessMachine program, overtaking Gideon 3.1 with a great score of 2467 ELO! I still have to test the AGGRESSIVE setting with King 2.54 on the Gold Card.

With Gideon 3.1, the AGGRESSIVE setting scored slightly less than the NORMAL setting.

Amazingly, on the 32-bit 30 MHz 512K card, King 2.54 scored exactly the same final score of ELO 2357 with the AGGRESSIVE setting and the NORMAL setting. But as you can see from the individual game scores, every game scored differently; the two settings play completely differently, hence the surprise. It therefore probably makes little difference in playing strength whether you choose AGGRESSIVE or NORMAL with King 2.54, but you are guaranteed a different game experience. The same applies to Gideon 3.1.
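A quick arithmetic illustration of how this can happen. Assuming (this is my assumption about the method, not something stated in the thread) that the final score is simply the average of the five per-game ELO scores, two completely different game-by-game sequences can still land on the same final number. The per-game figures below are invented for illustration:

```python
# Two hypothetical per-game ELO sequences (invented numbers) that differ
# in every single game yet share the same sum, and therefore the same mean.
normal     = [2400, 2300, 2350, 2380, 2355]
aggressive = [2250, 2450, 2330, 2400, 2355]

final_normal = sum(normal) / len(normal)
final_aggressive = sum(aggressive) / len(aggressive)

print(final_normal, final_aggressive)  # both come out to 2357.0
```

So an identical final score only tells you the totals matched, not that the settings played alike.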

CURRENT RANKING OF ALL TESTED PROGRAMS

Image

Best regards
Nick

Post by spacious_mind »

ChessMachine King 2.54 on the Gold Card with the AGGRESSIVE setting did not quite perform as well as with the NORMAL setting, finishing with a score of 2425 ELO.

Image

This places it right in between Gideon 3.1 NORMAL and AGGRESSIVE settings.

PERI BETA

Peri Beta completed the test with a final score of 1600 ELO. Peri was an Austrian company that tried to bring some Italian fashion styling to some Fidelity chess computers. BETA was designed by someone called Dabljuupi, which sounds more like a Finnish name to me than an Italian one. BETA is supposed to be exactly the same as the Fidelity Designer 1500; the program is written by Ron Nelson. BETA came out in 1989 and has an 8-bit 6 MHz 80C50 processor with 4 KB ROM and 256 bytes of RAM. You can run it on batteries or a standard 9 V adapter.

Image

It is a beautifully designed computer, but not very strong. Its final score of 1600 ELO is the lowest score tested so far.

Image

Best regards
Nick

Post by spacious_mind »

You know, there is someone up there who wants us to continue debating forever. I just completed Excalibur Igor at level 53, and it finished with a final score of ELO 2054. Guess who else happened to have that exact same final score out of the 85 currently tested programs? You guessed it: Horvath, with CXG Legend & Concerto. Every single game scored differently, but the final score of ELO 2054 ends up being exactly the same. What are the chances of that?

Image

Best regards
Nick
User avatar
Steve B
Site Admin
Posts: 10140
Joined: Sun Jul 29, 2007 10:02 am
Location: New York City USofA
Contact:

Post by Steve B »

spacious_mind wrote:You know there is someone up there that wants us to continue debating for ever. I just completed Excalibur Igor at Level 53 and it finished with a Final Score of ELO 2054. Guess who else happened to have that exact same final score already out of 85 currently tested programs? You guessed it. Horvath with CXG Legend & Concerto
Actually i recall that happening a few times to me last year when i put all of those computers through your games 1 and 2
some scores were the same or very very close but the moves were different in arriving at the final score

anyway the Legend and Igor have different books and hardware specs but i decided to check the two computers just to see if they played and evaluated some positions the same at fixed time 60 seconds
i used fixed time because as you know Igor will ponder
hey...now that i think about it ..that makes 3 Nelson computers that ponder
(OK..Just Kidding)

they played different moves and returned different evals

i know i know...Legend can be set to play at billions and billions of different playing variations with the different style combinations and piece weightings etc. etc.

Carl Sagan Sends his regards
Steve
User avatar
Bryan Whitby
Senior Member
Posts: 1001
Joined: Wed Feb 18, 2009 9:57 pm
Location: England

Post by Bryan Whitby »

A few weeks ago I contacted David Levy for Horvath's email address, telling him that I intended to ask Horvath about his involvement with dedicated chess computers. David replied that he would forward my email to Horvath, but advised me that he was very seriously ill and doubted that I would get a reply, which I haven't.
Bryan

Post by spacious_mind »

Chessmaster Ireland wrote:A few weeks ago I contacted David Levy for Horvath's email address, telling him that I intended to ask Horvath about his involvement with dedicated chess computers. David replied that he would forward my email to Horvath, but advised me that he was very seriously ill and doubted that I would get a reply, which I haven't.
Bryan
Yes we have been reading about it. I hope he recovers well.
Nick

Post by spacious_mind »

Steve B wrote:
spacious_mind wrote:You know there is someone up there that wants us to continue debating for ever. I just completed Excalibur Igor at Level 53 and it finished with a Final Score of ELO 2054. Guess who else happened to have that exact same final score already out of 85 currently tested programs? You guessed it. Horvath with CXG Legend & Concerto
Actually i recall that happening a few times to me last year when i put all of those computers through your games 1 and 2
some scores were the same or very very close but the moves were different in arriving at the final score

anyway the Legend and Igor have different books and hardware specs but i decided to check the two computers just to see if they played and evaluated some positions the same at fixed time 60 seconds
i used fixed time because as you know Igor will ponder
hey...now that i think about it ..that makes 3 Nelson computers that ponder
(OK..Just Kidding)

they played different moves and returned different evals

i know i know...Legend can be set to play at billions and billions of different playing variations with the different style combinations and piece weightings etc. etc.

Carl Sagan Sends his regards
Steve
Yes, I really enjoy these tests, because all decent moves are rewarded, and it is surprising even to the creator :) how, over the 5 games, the computers' final scores end up finding their appropriate position in the table. Even when a computer gets a massive 2600 score in game 1, by the end of game 5 most of them are back in line relative to where they lie amongst their peers.

I want to finish testing about 100, then I will work on the next set of 5 tests. The next set will cover games between 1800 and 1850, followed by 1850 to 1900. I am excited about the 1800-1900 era, as this is the era of gambit players and swashbuckling games.

Best regards
Nick

Post by spacious_mind »

Just added Gavon Diablo V. 0.5.1, written by Marcus Prewarski, and Excalibur Ivan. Diablo is now the best Gavon engine that I have tested so far, with a score of ELO 2564. Excalibur Ivan scored ELO 1960. Ivan messed up in test game 3 with a very low score of 1605; otherwise Ivan might have been up there next to Igor.

Image

Best regards
Nick

Post by spacious_mind »

Here are a few more Test Results:

1) Novag Milenio finished with ELO 1785. This computer seems to be the same as or similar to the Novag Agate Plus, but without the LCD display.

Image

2) Franz Morsch's CXG Dominator V. 2.05 finished with a score of ELO 2057, just ahead of CXG Legend/Concerto.

Image

Dominator moves too fast, in my opinion. It is not often that it calculated for the full 30 seconds. Level 4 was used.

3) Gavon RedQueen v. 1.1.4, written by Ben-Hur Carlos Vieira Langoni Junior. This program scored ELO 2458 and is now placed 6th in the table.

Image

A total of 90 Programs have now been tested.

Best regards
Nick