Hacker News

>It should increase the branching factor considerably

Is branching factor actually that relevant? Just from writing some naive AIs for fun, I got the impression that the only thing that really matters is how well you can evaluate a position without branching further. Whether you have 40 or 100 (or 1000) moves available, you usually want to look at only "the best" 10-15 (ideally even fewer), and how reliably you can determine "the best" seems like a far larger factor than how many branches there are in theory.
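The "only look at the best 10-15" idea is essentially move ordering plus culling. A minimal sketch, where `score` is a made-up stand-in for a cheap static evaluation function:

```python
# Hypothetical sketch: cull to the k best moves by a cheap static score
# before searching any deeper. `score` is a stand-in for a real evaluator.
def top_k_moves(moves, score, k=10):
    """Keep only the k best-scoring moves; the search's effective
    branching factor becomes min(k, len(moves))."""
    return sorted(moves, key=score, reverse=True)[:k]

# Toy illustration: 100 legal moves, but only 10 branches get expanded.
moves = list(range(100))
kept = top_k_moves(moves, score=lambda m: -abs(m - 42), k=10)
print(len(kept))   # 10
print(42 in kept)  # True
```

The search then recurses only into those k children, so the tree grows as k^depth rather than 100^depth.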



If you've done game AI then you're the expert, not me. I threw it in as it seemed relevant.

I dunno. Granted, evaluation is key, but so is branching, surely, since branching is what generates new positions; and evaluating those positions gets more complex as there are more factors to consider.

How does one 'look at "the best" 10-15 [positions]' without culling all the others first, however cheaply? I thought evaluation really was just tree searching (hence branching factor is crucial) with various weights (obviously this is traditional chess evaluation, not fancy NN stuff). I honestly am clueless.


> I dunno. Granted, evaluation is key, but so is branching, surely, since branching is what generates new positions; and evaluating those positions gets more complex as there are more factors to consider.

I think this is where the slight issue is. The branching factor of your game is related to but not the same as the branching factor of your search.

> I thought evaluation really was just tree searching (hence branching factor is crucial)

The most basic way would be to play every possible game through to completion, fully exploring the tree. If you have no way of estimating the value of a position, you have to try it out and see what happens, and the branching factor of your search is the same as the branching factor of the game. A small increase in possible moves then makes an enormous difference: going from 5 to 10 moves in a 10-move game is a roughly 1000x jump (2^10 = 1024).

However, imagine the complete opposite: you have a perfect way of estimating your chance of winning for any particular move. In that case, you'd just check all the moves you can make right now and not look any deeper into the tree. Going from 5 to 10 is then merely a doubling of positions to check, no matter how long the game is.
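The two extremes can be checked with quick arithmetic, using the numbers from the text (a 10-move game, branching factor 5 vs 10):

```python
# No evaluation function: the whole game tree must be played out.
full_5 = 5 ** 10    # 10-move game, 5 moves per turn
full_10 = 10 ** 10  # same game, 10 moves per turn
print(full_10 // full_5)  # 1024 -- the ~1000x jump

# Perfect evaluation function: only the immediate moves are checked.
print(10 / 5)  # 2.0 -- merely a doubling of work
```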

Because of the exponential growth here, the cost of evaluating the positions you don't explore further is almost negligible. (Similarly, it's easier to just calculate the size of the lowest expanded level of the tree, as that level dominates the total.)
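The "lowest level dominates" point is just a geometric series; a quick check with made-up numbers (branching factor 50, depth 4):

```python
# Nodes per level of a uniform tree form a geometric series; the deepest
# level accounts for roughly (b - 1) / b of all nodes.
b, d = 50, 4
total = sum(b ** i for i in range(d + 1))
print(b ** d / total)  # ~0.98: the last level is ~98% of the whole tree
```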

So, the better you get at evaluating your position without searching, the more deeply you can explore, because you're not branching off so much.


You do have to cull the others, but the culling isn't what is costly. What is costly is the exponential nature of searching the tree: if you want to look 3 of your own moves ahead (6 plies, counting the opponent's replies) with 50 moves available at every turn, you have to generate 50^6 ≈ 1.6 × 10^10 positions, which is still pretty fast on a modern CPU, but for 4 moves it's 50^8 ≈ 3.9 × 10^13, which is definitely not fast anymore. So at some point you have to sacrifice breadth for depth drastically, and as soon as you do that, pretty much the only thing that matters is how well you can evaluate positions as they are, without searching further down the tree.

Even if you didn't cull any branches, at some point you have to stop searching and fundamentally decide what the best move is. If that evaluation isn't good, all your searching didn't accomplish anything.
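A depth-limited negamax sketch makes that concrete: once `depth` hits zero, the static evaluation is all that's left, so its quality decides everything. The tree and the leaf scores below are invented purely for illustration:

```python
# Toy game tree and made-up static scores (from the side to move).
TREE = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
STATIC = {"root": 0, "a": 3, "b": 1, "a1": -5, "a2": 2, "b1": -1, "b2": -2}

def negamax(node, depth):
    """Depth-limited negamax: search until depth runs out, then fall
    back on the static evaluation of the position."""
    children = TREE.get(node, [])
    if depth == 0 or not children:
        return STATIC[node]
    return max(-negamax(c, depth - 1) for c in children)

# A deeper search can overturn what the shallow static scores suggested,
# but at the horizon it is always the static evaluation doing the work.
print(negamax("root", 1))  # -1
print(negamax("root", 2))  # -2
```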

This is, from my understanding, also the reason why traditional AI approaches struggled so much with Go. Not because Go has so many possible moves, but because given a certain game state it is so difficult to evaluate which player is ahead.



