*Wed 11 December 2013*

I've made a hangman solver, currently called Hungman. The user interface is bad, but most of the idea is there.

The bulk of it is the same as any hangman solver: take the known letters and match them against the dictionary. This part is uninteresting.
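The dictionary-matching step might look something like this sketch (the function and variable names are illustrative, not taken from Hungman itself). A blank position can't hide a letter that has already been revealed or rejected, since hangman reveals every occurrence of a correct guess:

```python
import re

def matching_words(pattern, wrong, dictionary):
    """Return dictionary words fitting a hangman state.

    pattern: known letters with '.' for blanks, e.g. "h.ngm.n"
    wrong:   letters guessed but not in the word
    """
    regex = re.compile(pattern)
    known = set(pattern) - {"."}
    # blanks can't contain a wrong guess or an already-revealed letter
    bad = set(wrong) | known
    return [w for w in dictionary
            if len(w) == len(pattern)
            and regex.fullmatch(w)
            and all(w[i] not in bad
                    for i, c in enumerate(pattern) if c == ".")]
```

For example, with the pattern `h.ngm.n` and a wrong guess of `e`, the word `hengman` would be rejected even though it fits the regex, because `e` is known not to be in the word.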

The interesting part is the statistical solver. Above the dictionary-based output is a comma-separated list of letters, ordered by how likely each letter is to occur in the word. The solver builds an order-1 Markov model of the dictionary (which will, eventually, be a model of some text corpus, so that word frequencies are taken into account). Given some known letters, this model tells us the probability of each other letter appearing next to them. The maths in the solver is currently suspect, but I'm working on that, and it works well enough as it is. As far as I'm aware, this is the only hangman solver available that does anything more than grepping a dictionary.
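An order-1 Markov model of this kind can be sketched as follows (a minimal illustration, not Hungman's actual code): count letter bigrams over the dictionary, then rank the letters most likely to follow a given known letter.

```python
from collections import Counter, defaultdict

def train_bigram_model(words):
    """Order-1 Markov model: counts for P(next letter | current letter).

    '^' marks the start of a word, so first letters are modelled too.
    """
    counts = defaultdict(Counter)
    for w in words:
        prev = "^"
        for c in w:
            counts[prev][c] += 1
            prev = c
    return counts

def likely_next(model, letter):
    """Letters ranked by probability of following `letter`."""
    total = sum(model[letter].values())
    return [(c, n / total) for c, n in model[letter].most_common()]
```

To rank candidate letters for a blank, one would combine the conditional probabilities from the letters adjacent to it; getting that combination right is exactly the part of the maths described as suspect above.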

I plan to make this the Best Hangman Solver In The World. All that is left is a decent user interface and non-questionable maths. Exciting times ahead.

**Update:** this is now mathematically grounded and uses an order-2 Markov model.

James Stanley - james@incoherency.co.uk | ricochet:it2j3z6t6ksumpzd | jesblogfnk2boep4.onion | /ipns/jes.xxx/ | [rss]