Chess, fundamentally, falls into a large class of problems called tree searches, in which a number of interconnected structures are explored by proceeding from a starting point to those that are connected directly to it, and those connected directly to the first set, and so forth. There are lots of different ways to do this, depending mostly on what order you do the visiting in.
Examples of tree searches, besides finding good chess moves, include figuring out the best way to route packets in a network, solving a puzzle where odd-shaped blocks have to be arranged to form a cube, finding the shortest path through a maze and following reflections and refractions to find what color a particular pixel in an image will be (ray-tracing). There are many others.
Each point in a tree search is called a node. In chess, the nodes are chess positions (including where the pieces are, which side is to move and a few other bits such as whether each side's king has moved). At any point in a tree search you are either at a leaf node, meaning you have no further choices to make -- in chess, you have arrived at a win, loss or draw, or more often simply decided not to explore any deeper -- or not, in which case you have one or more child nodes to choose from. In chess, the children of a position are all the positions that result from legal moves in the current position.
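The leaf/child structure can be sketched in a few lines of code. This is a toy illustration, not a chess engine: interior nodes are just lists of children, and leaves are outcome strings, standing in for real positions.

```python
# A minimal, self-contained sketch of the node/leaf/child structure
# described above. Leaves are outcome strings ("no further choices to
# make"); interior nodes are lists of child nodes, one per "legal move".

def leaves(node):
    """Collect every leaf outcome reachable from this node."""
    if isinstance(node, str):          # leaf node: search stops here
        return [node]
    result = []
    for child in node:                 # child nodes: one per choice
        result.extend(leaves(child))
    return result

# A tree two choices deep: each junction offers two options.
tree = [["white wins", "draw"], ["draw", "black wins"]]
print(leaves(tree))  # -> ['white wins', 'draw', 'draw', 'black wins']
```

Visiting the children in a different order, or stopping early at some of them, is what distinguishes the various tree-search strategies mentioned above.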
Since there is no element of chance in chess, there is one and only one game tree that starts at the initial position (the root node) and includes every possible position resulting from every series of legal moves. This means that in theory it's possible to know whether any position is a win for white, a win for black or a draw, assuming each side plays perfectly. That includes the initial position, so in theory there is an optimal strategy for each player and all perfectly-played games of chess have the same result.
The problem is that the game tree for chess is incomprehensibly huge. There are way, way more legal chess positions than fundamental particles in the known universe. No computer will ever be able to examine more than a minuscule fraction of them. That doesn't necessarily mean that chess will never be "solved", in the sense of proving what the optimal strategies are for each player and what the result will be with perfect play. In theory there could be a way to rule out all but a tiny fraction of possible positions and exhaustively search what's left. In practice, we're nowhere near being able to do that.
Instead, we're left with approximate approaches. Either exhaustively search a tiny, tiny fraction of the possibilities, which is what AB engines do, or find some sort of approximate measure of what constitutes a "good" position and "just play good moves", using a partial search of the possible continuations to double-check that there aren't any obvious blunders. This is what humans and NN engines do.
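The simplest example of an "approximate measure of a good position" is counting material with the conventional piece values. The board encoding below (a string of piece letters, uppercase for white, lowercase for black) is a simplification for illustration, not any real engine's representation; real evaluation functions weigh many more factors.

```python
# A toy evaluation function: material count in pawn units.
# Positive favors white, negative favors black.
# Board encoding (a plain string of piece letters) is invented for
# this example, not a real engine's board representation.

PIECE_VALUES = {"p": 1, "n": 3, "b": 3, "r": 5, "q": 9, "k": 0}

def material_score(board):
    score = 0
    for piece in board:
        value = PIECE_VALUES.get(piece.lower(), 0)
        score += value if piece.isupper() else -value
    return score

# White has an extra rook; everything else is balanced.
print(material_score("KQRRkqr"))  # -> 5
```

As the following sections argue, a measure like this is a useful clue, not a guarantee: positions exist where the side behind in material is winning.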
One thing that makes chess particularly interesting as a search problem is that it's not totally random, but it's not totally orderly either.
Imagine a search tree as a series of branching passageways. At the end of every passageway, after however many branches, is a card reading "black wins", "white wins" or "draw". If all you know is that the cards are there then you might as well pick randomly at each junction. On the other hand, if you have a map of the entire complex of passageways and which card is at the end of each of them, you can find your way perfectly. Depending on where the cards are, you might or might not be able to win against a perfect player, but if your opponent makes a mistake you'll know exactly how to take advantage.
There are also databases of positions ("tablebases") with known outcomes, found by exhaustively searching backwards from all possible checkmate positions. Currently every endgame with seven or fewer pieces (including the kings) is known to be either a win for white, a win for black or a draw. If a position shows up by working backward from one of the possible checkmates, it's a win for one side or the other; if it doesn't, it's a draw (the process of working backwards from known positions is itself a tree search).
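The idea of propagating outcomes back from known terminal positions can be shown on a much smaller game than chess. The sketch below solves a simple subtraction game (take 1 or 2 counters; whoever cannot move loses) by labeling each position starting from the one known terminal position, which is the same principle tablebase generators apply to chess endgames, just at an incomparably smaller scale.

```python
# Solving a toy game by propagating outcomes from the terminal position,
# in the spirit of tablebase generation. The game: players alternate
# taking 1 or 2 counters; the player who cannot move loses. Not chess,
# but the same back-propagation principle.

def solve(n_counters):
    """Label every position 'win' or 'loss' for the player to move."""
    outcome = {0: "loss"}              # terminal: no counters, no move
    for n in range(1, n_counters + 1):
        moves = [n - take for take in (1, 2) if take <= n]
        # A position is a win if some move reaches a lost position.
        if any(outcome[m] == "loss" for m in moves):
            outcome[n] = "win"
        else:
            outcome[n] = "loss"
    return outcome

print(solve(6))  # multiples of 3 are losses for the side to move
```

For this game every position gets a label in one linear pass; the chess version has to crawl backward through billions of positions per endgame class, which is why seven pieces is the current frontier.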
There are a few kinds of positions that are well-known to be immediate wins for one side or the other, regardless of what other pieces are on the board. The classic example is a back-rank mate, where one side's king is blocked between the edge of the board and its own pawns and the other side has major pieces able to give check. It's simple to calculate whether the defending side can capture all the pieces that can give check or can safely interpose (block). If not, it's game over. My understanding is that chess engines don't bother to special-case these, since they're typically looking several moves ahead anyway.
And then there's a lot of gray area. If one side gains a material advantage, it will generally win, eventually, but this is far from an ironclad rule. There are plenty of cases where one side will give up material (sacrifice) in order to gain an advantage. This ranges from spectacular combinations where one side gives up a queen, or more, in order to get a quick checkmate, to "positional sacrifices" where one side gives up something small, generally a pawn, in order to get an initiative. Whether a positional sacrifice will be worth it is usually not clear-cut, though in some well-known cases (for example, the Queen's Gambit), it's generally agreed that the other side is better off not taking the material and should concentrate on other factors instead.
There are also temporary sacrifices where one side gives up material in order to quickly win back more. If you're playing against a strong player and it looks like they've left a piece hanging, be very careful before taking it.
In short, which side has more material is a clue to which side will win, and it's often a good idea to try to win material, but it's not a guarantee. This goes equally well for other generally accepted measures of how well one is doing. Maybe it's worth it to isolate or double one of your pawns in order to gain an initiative, but often it's not. Games between human masters often turn on whose estimations of which small trade-offs are correct.
From a computing point of view, since it's impossible to follow all the passageways, we have to rely on the markings on the wall, that is, the characteristics of particular positions. An AB engine will look exhaustively at all the passages that it explores (subject to the "alpha/beta pruning" that gives them their name, which skips moves that can't possibly lead to better results than are already known). But in most positions it will have to give up its search before it finds a definitive result. In that case it has to rely on its evaluation function. Basically it's saying "I've looked ten branches ahead and if I make this move the worst that can happen is I'll get to a position worth X".
That's much better than picking randomly, but it only works well if you explore quite deeply. A good human player can easily beat a computer that's only searching three moves (six plies) ahead, because the human can find something that looks good in the short term but turns bad a few moves later. Once an AB engine is looking dozens of moves ahead, though, far fewer short-term traps stay hidden beyond its search horizon.
This is a lot like the problem of local minima in hill-climbing algorithms. If you're trying to get to the highest point in a landscape, but it's so dark or foggy you can only see a short distance around you, your best bet could well be to go uphill until you can't any more, even though that fails if you end up atop a small hill that's not actually the high point. The further you can see, the better chance you have of finding your way to the true high point, even if you sometimes have to go downhill for a while to get on the right path.
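The local-maximum problem is easy to demonstrate on a one-dimensional "landscape" (the heights below are made up for illustration). A greedy climber that can only see its immediate neighbors stops at whichever peak it happens to reach first:

```python
# Greedy hill climbing on a 1-D landscape with two peaks, illustrating
# the local-maximum problem: limited "visibility" (one step each way)
# can strand the climber on a small hill. Heights are invented data.

def hill_climb(heights, start):
    """Step to a higher neighbor until no neighbor is higher."""
    pos = start
    while True:
        neighbors = [p for p in (pos - 1, pos + 1) if 0 <= p < len(heights)]
        best = max(neighbors, key=lambda p: heights[p])
        if heights[best] <= heights[pos]:
            return pos          # a peak -- but maybe not the highest one
        pos = best

#           small hill          valley      true peak
heights = [0, 2, 4, 3, 1, 0, 1, 3, 6, 9, 7]
print(hill_climb(heights, 1))   # -> 2  (stuck on the small hill)
print(hill_climb(heights, 6))   # -> 9  (reaches the true peak)
```

Seeing further, like searching deeper in chess, shrinks the set of starting points from which the climber gets trapped.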
Humans and neural networks, however, are doing something slightly different. They can't look very far, but they're able to read the landscape better. It's kind of like being on a mountainside in the fog and being able to look at the vegetation, or smell the wind, or examine the types of rock in the area or whatever, and be able to say that the peak is more likely this way than that way.
This wouldn't work if there weren't some sort of correlation between the features of a position and how likely it is to ultimately lead to a win. It's not a property of tree-searching algorithms. It's a property of chess as a tree-searching problem. It's probably a property of any chess-like game that humans find interesting, because that's probably what we find interesting about chess: there's some order to it, in that some positions are clearly better than others, but there's some mystery as well, since we often can't tell for sure which option is better.