How to solve the "Mastermind" guessing game? (2009) (stackoverflow.com)
55 points by jokoon on Jan 13, 2024 | 30 comments


This game is part of Simon Tatham's Puzzle Collection, as "Guess" (https://www.chiark.greenend.org.uk/~sgtatham/puzzles/js/gues...)

I've seen a lot of algorithms for how to play this (Donald Knuth has one that always wins within 5 moves), but none that are really usable by humans. The method I use, which seems to always win, is:

Guess 1111

Depending on how many 1's were right, guess 1222, 1122, 1112, or 2222

Continue in this manner until the solution is found.

I just played this game ID: https://www.chiark.greenend.org.uk/~sgtatham/puzzles/js/gues...

where it worked perfectly:

RRRR -> (1, 0) One red, somewhere.

RYYY -> (0, 1) No yellows, red is not first

GRGG -> (0, 2) One green in the 2nd position, red is 3 or 4.

BGRB -> (1, 1) No blues, red must be last.

OGOR -> (4, 0) Success.

I've been meaning to simulate this and see if there are any games where this method would not work. Sometimes it takes all the moves, but I've never lost.
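If it helps, here is a rough harness for that kind of simulation (a sketch of mine, not the parent's method: it assumes the puzzle's usual 6 colours, 4 pegs and 10 guesses, and plugs in a simple "guess the first still-consistent code" strategy as a placeholder, since "continue in this manner" leaves some cases to spell out):

  from itertools import product

  COLOURS, PEGS, MAX_MOVES = range(6), 4, 10

  def feedback(guess, secret):
      # Standard Mastermind scoring: black = right colour, right place;
      # white = right colour, wrong place.
      black = sum(g == s for g, s in zip(guess, secret))
      matches = sum(min(guess.count(c), secret.count(c)) for c in COLOURS)
      return black, matches - black

  def first_consistent(history, all_codes):
      # Placeholder strategy: guess the first code consistent with all feedback so far.
      return next(c for c in all_codes
                  if all(feedback(g, c) == fb for g, fb in history))

  def losing_secrets(strategy):
      all_codes = list(product(COLOURS, repeat=PEGS))
      failures = []
      for secret in all_codes:
          history = []
          for _ in range(MAX_MOVES):
              guess = strategy(history, all_codes)
              fb = feedback(guess, secret)
              if fb == (PEGS, 0):
                  break
              history.append((guess, fb))
          else:
              failures.append(secret)
      return failures

  print(losing_secrets(first_consistent))  # [] means the strategy never loses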


I created a solver using a SAT/SMT solver that guesses a solution that is consistent with all the previous guesses and their corresponding feedback. It always returns a solution in a small number of guesses. I don’t think it is the optimal solving algorithm but it was a fun exercise!
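For anyone curious what that looks like in practice, a minimal sketch of the idea using z3's Python bindings (my reconstruction, not the parent's code; 6 colours, 4 pegs and the variable names are my assumptions):

  from z3 import Ints, If, Sum, Solver, sat

  N_PEGS, N_COLOURS = 4, 6

  def consistent_code(history):
      # history: list of (guess, black, white), guesses as tuples of ints 0..5.
      pegs = Ints(" ".join(f"p{i}" for i in range(N_PEGS)))
      s = Solver()
      for p in pegs:
          s.add(0 <= p, p < N_COLOURS)
      for guess, black, white in history:
          # Black pegs: exact colour-and-position matches.
          s.add(Sum([If(pegs[i] == guess[i], 1, 0) for i in range(N_PEGS)]) == black)
          # Black + white pegs: colour matches regardless of position,
          # i.e. sum over colours of min(count in secret, count in guess).
          total = []
          for c in range(N_COLOURS):
              in_secret = Sum([If(p == c, 1, 0) for p in pegs])
              in_guess = guess.count(c)
              total.append(If(in_secret < in_guess, in_secret, in_guess))
          s.add(Sum(total) == black + white)
      if s.check() == sat:
          m = s.model()
          return tuple(m[p].as_long() for p in pegs)
      return None  # the feedback so far is contradictory

  # e.g. after guessing (0, 0, 1, 1) and getting 1 black and 1 white peg:
  print(consistent_code([((0, 0, 1, 1), 1, 1)]))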


On Android the free app "Simon Tatham's Puzzles" has an implementation of Mastermind and this is the same algorithm I use to solve it.

It's not foolproof (it can get tricky if you're unlucky finding the location of pegs) but it's a very easy algorithm to remember and implement.


Also for Apple devices.


Knuth's algorithm was in a short paper reprinted in one of his collections -- I forget which, but I think there was a "Fun and Games" one?


It is in the book "Selected Papers on Fun and Games", but it is also available as a separate article, "The Computer as Master Mind".

I was certain he wrote the article about MOO and/or bulls and cows, but it seems I remembered wrong.


“The Computer as Master Mind” PDF: https://www.cs.uni.edu/~wallingf/teaching/cs3530/resources/k...

It minimizes the worst-case number of moves, not the expected number. I think Knuth's algorithm can be beaten in that respect.
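For reference, the core of that worst-case criterion looks roughly like this (my sketch, simplified: the paper also fixes the 1122-style opening guess and adds tie-breaking rules):

  from itertools import product
  from collections import Counter

  COLOURS, PEGS = range(6), 4
  ALL_CODES = list(product(COLOURS, repeat=PEGS))

  def feedback(guess, secret):
      black = sum(g == s for g, s in zip(guess, secret))
      matches = sum(min(guess.count(c), secret.count(c)) for c in COLOURS)
      return black, matches - black

  def worst_case_guess(candidates):
      # candidates: codes still consistent with all feedback so far.
      def largest_bucket(guess):
          # Partition the candidates by the feedback this guess would get;
          # the biggest bucket is the worst case we could be left with.
          return max(Counter(feedback(guess, s) for s in candidates).values())
      # Knuth searches over all codes, not only the remaining candidates.
      return min(ALL_CODES, key=largest_bucket)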


ICYMI, 3blue1brown (Grant Sanderson) has a video applying information theory to solving Wordle, which is conceptually similar to mastermind -- https://www.youtube.com/watch?v=v68zYyaEmEA


There was also "Word Mastermind", which is almost exactly Wordle.


You don't need "information theory": you can just make a text file with each option on its own line, then use grep on the command line for each constraint, with pipes in between.


the point of 3b1b's video wasn't "how can I figure out what words are still an option", but "what is the optimal word to guess at each step to minimize overall attempts needed to solve". Picking a "good" word will give you more information about the actual solution than picking a bad word, so you will get there faster.
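Transplanted to Mastermind, that ranking looks something like this (my sketch, not from the video): score each guess by the entropy of the feedback distribution it induces over the remaining candidates, i.e. the expected information gained, and play the highest-scoring one.

  import math
  from collections import Counter

  COLOURS = range(6)

  def feedback(guess, secret):
      black = sum(g == s for g, s in zip(guess, secret))
      matches = sum(min(guess.count(c), secret.count(c)) for c in COLOURS)
      return black, matches - black

  def expected_information(guess, candidates):
      # Entropy (in bits) of the feedback, treating every remaining
      # candidate as equally likely to be the secret.
      counts = Counter(feedback(guess, s) for s in candidates).values()
      n = len(candidates)
      return -sum((k / n) * math.log2(k / n) for k in counts)

  # best_guess = max(all_codes, key=lambda g: expected_information(g, candidates))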


I remember writing a Mastermind solver in OCaml, about a year after getting started with programming. The solution space is fairly small, so I would start with a random guess and then, of all solutions that are still legal, pick the one that would, at worst, reduce the solution space the most (essentially a one-move-ahead minimax).


This was one of the games from the 1976 book BASIC Computer Games that some of us worked on porting for the Coding Horror "Basic Computer Games" repo. I recall there were a number of errors in the original, and thus a few in the ports, the Python one among them. I ended up writing about the deduction logic for solving Mastermind because it seemed non-intuitive.

https://github.com/coding-horror/basic-computer-games/blob/m...


I think people are way overthinking this. Guessing at random from the solution space converges surprisingly quickly.


It's a no-hidden-state turn game, so minimax. But the state space is small, so render a table of all correct moves.


You must be thinking of a different game. With Mastermind, the chosen 4 colours are hidden and the other player tries to guess the colours and positions.


Hmm. Minimax provides an optimal strategy for Mastermind. But minimax doesn't work for hidden-state games. Therefore, I can only assume Mastermind doesn't really satisfy the hidden-state property, because the opponent cannot change the state between your turns.


Why are you assuming anything? Just learn how the game works. This game can be solved on the command line with grep. There is no opponent. What are you talking about here?


About two months ago I learned how Mastermind worked, put a good hour into reading about it and the solutions, and played a few games too. Having spent a very long time implementing algorithms for chess and tic-tac-toe a number of years ago, I immediately understood it. My friend also gave presentations on some solutions in Rust, which I attended twice. I admit I am responding with old knowledge, but I believe it is still correct. I don't find the game or its solution that interesting. However, the concept of hidden-state games that have no possibility of the state changing between turns did seem interesting to me. As somebody well versed in reinforcement learning, it surprised me that it's a distinction I never came across previously.

Usually algorithms for games like this are divided into non-hidden-state games (minimax-solvable if the branching factor is small) and hidden-state games (a whole other thing). In Mastermind's case the "opponent's move" (a relevant framing if you know how minimax works) is just presented to you after your turn. The "opponent" always plays optimally in that the response is fixed. So it's not really hidden state, despite the player not knowing the initial conditions. I knew that there were optimal tabular and minimax solutions to Mastermind, but those techniques only work if the game is truly a no-hidden-state game. However, the person above posted that Mastermind actually is a hidden-state game, because the player does not know the hidden code a priori. It occurred to me that this must be irrelevant for some reason, and so I tried to deduce why.

I hope I have explained my motivation, and why I responded about that particular aspect of the game in the way that I did. My apologies for coming off the way I did.


You don't need to be well versed in reinforcement learning, "a priori" knowledge, opponents' moves, "optimal tabular and minimax solutions", branching factors, or anything else.

Start with a text file of every combination, one per line.

RRRRR
RRRRG
RRRGG

Then use cat to output it to grep and use grep for all the constraints.

One for colors that aren't there, one for colors you know are in the right place, then one grep for every color you know is there somewhere, and one for each color you know isn't in a certain spot.

cat combinations.txt | grep -v [colors that aren't there] | grep '..R.G' | grep B | grep -v '.Y...'

That's it. All you need is a command line. Generate guesses from the filtered list then add to the filters.
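For what it's worth, the same pipeline translates almost line for line into Python (my sketch; the 4-peg length, the RGBYOP colour letters and the example filters are assumptions, not the parent's):

  import re
  from itertools import product

  COLOURS = "RGBYOP"
  codes = ["".join(c) for c in product(COLOURS, repeat=4)]     # combinations.txt

  codes = [c for c in codes if not re.search("[OP]", c)]       # grep -v: colours that aren't there
  codes = [c for c in codes if re.fullmatch("..R.", c)]        # grep ..R. : colour known to be in place
  codes = [c for c in codes if "B" in c]                       # grep B    : colour known to be present
  codes = [c for c in codes if not re.fullmatch(".Y..", c)]    # grep -v .Y.. : colour excluded from a spot

  print(codes)  # remaining candidates; guess one, then add more filters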


What you are saying may be true, and helpful, but just isn't related. My initial comment was about an optimal play algorithm. So that's what I was discussing.


That doesn't even make sense.

Are any solutions more likely than any others?

How are you getting an "optimal play algorithm" when the entire game is about using what you know to filter the solutions that are still possible?


You're framing it as an information-extraction game, but it is also frameable as a zero-sum minimax game. And it's an easy and common solution to games like this. Read up.

https://en.wikipedia.org/wiki/Mastermind_(board_game)#Algori... https://en.wikipedia.org/wiki/Minimax#In_zero-sum_games

One of the early solutions to Mastermind was by Donald Knuth. Paper: https://www.cs.uni.edu/~wallingf/teaching/cs3530/resources/k... An implementation: https://github.com/johnathanlouie/mastermind

Further development: https://arxiv.org/abs/1908.06183


It just says the "minimax" solution for the person who chooses the pattern is to pick randomly from patterns with more than one color.


Well, the game is trivial for the person picking the pattern, so I don't expect optimal play for them to be interesting.


If the game is trivial for the person picking, what is the strategy for the player after the filtering I described?


So they are doing a large problem space investigation with Python?

Are there libraries that allow for full multicore utilization in Python for this kind of numeric analysis? Am I wrong that Python isn't a great language for a computational experiment like this?


There's nothing large about that space. With 6 colors and 4 pegs you get 6^4 = 1296 possibilities. It would have been peanuts even for the first 8-bit home computers.


That's kind of strange. I was only peripherally reading the discussion and saw they were doing random/entropic strategies, which IMO you do when the problem space is so large that you can't wrap around it using normal exploration techniques.


The "Super Master Mind" version goes up to 9 colors and 5 holes. It claims that expands things to 59,049 possibilities.




