Microsoft’s new Surface, Pinterest can give you the recipe for a meal based on a picture, and why Europe could replace Silicon Valley.
Multiple versions (ogg, video etc.) from Archive.org.
Please SUBSCRIBE HERE.
Follow us on Soundcloud.
A special thanks to all our supporters. Without you, none of this would be possible.
If you are willing to support the show, you can give as little as 5 cents a day on Patreon. Thank you!
Big thanks to Dan Lueders for the headlines music and Martin Bell for the opening theme!
Big thanks to Mustafa A. from thepolarcat.com for the logo!
Thanks to our mods, Kylde, Jack_Shid, tgstellar, KAPT_Kipper, and scottierowland on the subreddit
Show Notes
Hi Tom,
About chess engines:
You suggested that chess engines work exclusively by way of brute force. In fact, over time the increase in chess engine strength has been achieved largely by reducing the dependence on brute force.
The first chess machine to beat the human world champion, IBM’s Deep Blue, used dedicated hardware for a record number of position evaluations per second.
Later chess engines ran on standard hardware, and over time their strength has far surpassed Deep Blue's.
The most important factor in this development was the ability to avoid wasted effort. This is referred to as 'pruning the search tree'. More intense pruning frees up time to probe more moves ahead. The delicate balance, of course, is to prune aggressively without throwing out moves that are actually good but in a very non-obvious way, such as sacrifices for a strong initiative.
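To make the pruning idea concrete, here is a minimal Python sketch of alpha-beta pruning, the classic pruning rule, run on a toy game tree. The tree and its leaf scores are invented for illustration; a real engine applies the same rule to actual positions.

```python
def alphabeta(node, alpha, beta, maximizing):
    """Minimax with alpha-beta pruning over a toy game tree: interior
    nodes are lists of child nodes, leaves are numeric evaluations.
    (A real engine would also cap the search depth and call its
    evaluation function at the cap; this toy tree is small enough
    to search fully.)"""
    if not isinstance(node, list):
        return node                     # leaf: its evaluation score
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:           # the opponent would never allow
                break                   # this line: prune the rest
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

# Toy tree: our move (max), then the opponent's reply (min), and so on.
tree = [[3, 5], [2, [9, 7]], [1, 4]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # prints 3
```

The `break` statements are the pruning: as soon as a branch is provably no better than one already found, the rest of that branch is never examined.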
Dynamic search depth:
If a line of play lends itself especially well to pruning, then that branch of the search tree will be searched more deeply than the others.
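A hedged sketch of what such a dynamic depth rule could look like; the thresholds and the notion of a 'forcing' move here are invented for illustration, not taken from any real engine:

```python
def child_depth(depth, move_index, is_forcing):
    """Toy depth policy: search forcing lines deeper and late, quiet
    moves shallower.  The numbers are illustrative, not from any engine."""
    if is_forcing:                     # e.g. checks and captures: extend,
        return depth                   # net effect: one ply deeper than usual
    if move_index >= 4 and depth > 2:  # a late, unpromising move: reduce
        return depth - 2
    return depth - 1                   # the normal one-ply step

print(child_depth(6, 0, True), child_depth(6, 5, False))  # 6 vs. 4
```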
Given the speed of the hardware, a chess engine can evaluate some number of positions per second. Aspects of the position (material, mobility, king safety, and so on) are each scored, and those scores are reduced to a single number; a toy sketch below shows the idea.
The position evaluations are by now so good that chess engines lose little strength when they have to play faster. That is, as the software gets better, the chess engine has less and less need for extensive search depth.
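Here is that toy sketch of a hand-written evaluation function. The aspect names, scores and weights are all invented; real evaluation functions weigh dozens of such terms.

```python
# Each aspect is scored from White's point of view, then a weighted sum
# folds the scores into one number.  All names and values are invented.
WEIGHTS = {"material": 1.0, "mobility": 0.1, "king_safety": 0.5}

def evaluate(aspects):
    return sum(WEIGHTS[name] * score for name, score in aspects.items())

# Example: a pawn down, but more mobile and with a safer king.
print(evaluate({"material": -1.0, "mobility": 6.0, "king_safety": 1.2}))
```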
The nature of the game of Go is that evaluating Go positions is more akin to evaluating pictures, such as is done by Google's deep learning technology that blurs people's faces in Google Street View images.
(Cue funny false positive pics: https://www.theguardian.com/technology/2016/sep/16/beefed-up-google-street-view-privacy-blurs-cows-face )
In the case of chess engines the evaluation algorithms are explicitly programmed. Then again, in chess engine development some of the tweaking is done with a process of random variation and selection. In evaluating a chess position you have to decide how much weight to assign to this or that aspect. Chess engine developers pit multiple versions of their engines against each other, each with tweaked weights in the evaluation function. Tweaks that turn out to be favorable are kept; tweaks that turn out to be unfavorable are reverted.
This random-variation-and-selection process is in effect a form of machine learning, in the sense that it is a learning-by-reinforcement-only process.
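A toy version of that tweak-and-test loop, in Python. Since engine-versus-engine matches can't be run here, a noisy comparison against hidden 'true' weights stands in for the match results; everything else is the same keep-the-favorable-tweak logic.

```python
import random

# Hidden "true" weights play the role of match outcomes; all values invented.
TRUE = {"material": 1.0, "mobility": 0.15, "king_safety": 0.4}

def match_score(weights):
    """Stand-in for an engine-vs-engine match: weights closer to TRUE
    score higher, with noise, just as real match results are noisy."""
    error = sum((weights[k] - TRUE[k]) ** 2 for k in TRUE)
    return -error + random.gauss(0, 0.01)

weights = {"material": 1.0, "mobility": 0.0, "king_safety": 0.0}
best = match_score(weights)
for _ in range(2000):
    candidate = {k: v + random.gauss(0, 0.02) for k, v in weights.items()}
    score = match_score(candidate)
    if score > best:                      # keep favorable tweaks;
        weights, best = candidate, score  # unfavorable ones are not adopted
print(weights)  # drifts toward the hidden TRUE weights
```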
About AlphaGo:
Some things I learned after a superficial Google search:
The way AlphaGo operates is a hybrid of explicit programming and deep learning, not pure deep learning.
If you had a position evaluation that was perfect, then you would only have to look one half-move ahead: your own next move. A move that improves your position's value (or at least does not decrease it) is the move to play.
But you don't have a perfect position evaluation, so you try to compensate by looking a number of moves ahead. If there is a line of play that leads to an increase in position value, even against the opponent's best moves, then that is the line to play.
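In code, the one-half-move-ahead idea would be as simple as an argmax. The moves, positions and 'perfect' scores below are of course invented:

```python
# With a perfect evaluation, choosing a move is just an argmax over the
# positions reachable in one half move.  All names and scores are made up.
perfect_score = {"pos_a": 0.2, "pos_b": -0.5, "pos_c": 0.9}
moves = {"Nf3": "pos_a", "e4": "pos_b", "Qh5": "pos_c"}

best = max(moves, key=lambda m: perfect_score[moves[m]])
print(best)  # the move reaching the best-scored position
```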
The problem, of course, is that the number of positions to evaluate increases exponentially with search depth.
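Some quick arithmetic shows how fast this blows up, using the commonly cited average branching factors of roughly 35 legal moves per position in chess and roughly 250 in Go:

```python
for branching, game in ((35, "chess"), (250, "Go")):
    for depth in (2, 4, 6):
        print(f"{game}: {depth} half-moves ahead -> "
              f"about {branching ** depth:,} positions")
```

At six half-moves that is already about 1.8 billion chess positions, and about 244 trillion Go positions.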
One technique for reducing the number of positions to evaluate is called Monte Carlo tree search (MCTS). MCTS is used by a number of game engines, most famously Go engines, including AlphaGo.
https://en.wikipedia.org/wiki/Monte_Carlo_tree_search
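For flavour, here is a bare-bones MCTS in Python, demonstrated on the toy game Nim (take 1 to 3 stones; whoever takes the last stone wins) rather than on Go. Real engines wrap the same select/expand/simulate/backpropagate loop around a vastly richer game, and AlphaGo replaces the random choices below with its learned networks.

```python
import math
import random

MOVES = (1, 2, 3)  # legal takes in this toy Nim

class Node:
    def __init__(self, stones, parent=None):
        self.stones, self.parent = stones, parent
        self.children = {}  # move -> Node
        self.wins = 0.0     # from the viewpoint of the player who just moved
        self.visits = 0

def untried(node):
    return [m for m in MOVES if m <= node.stones and m not in node.children]

def select(node):
    # 1. Selection: descend through fully expanded nodes by the UCB1 rule,
    # which balances win rate against curiosity about rarely tried moves.
    while node.stones > 0 and not untried(node):
        node = max(node.children.values(),
                   key=lambda c: c.wins / c.visits
                   + math.sqrt(2 * math.log(node.visits) / c.visits))
    return node

def rollout(stones):
    # 3. Simulation: random play; True if the player to move from here wins.
    mover_wins = True
    while True:
        stones -= random.choice([m for m in MOVES if m <= stones])
        if stones == 0:
            return mover_wins
        mover_wins = not mover_wins

def mcts(root_stones, iterations=3000):
    root = Node(root_stones)
    for _ in range(iterations):
        node = select(root)
        if node.stones > 0:  # 2. Expansion: add one untried move
            move = random.choice(untried(node))
            node.children[move] = Node(node.stones - move, parent=node)
            node = node.children[move]
        # The player who just moved wins if they emptied the pile, or if
        # the opponent loses the random playout from here.
        won = True if node.stones == 0 else not rollout(node.stones)
        while node is not None:  # 4. Backpropagation, flipping sides
            node.visits += 1
            node.wins += won
            node, won = node.parent, not won
    return max(root.children, key=lambda m: root.children[m].visits)

# From 7 stones, taking 3 leaves 4, a known losing count for the opponent.
print(mcts(7))
```

Even in the toy, the key property is visible: instead of expanding every branch to a fixed depth, the search keeps sampling random playouts and lets the statistics steer it toward the promising branches.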
In the case of AlphaGo the deep learning side comes into play in the situations where a judgement call is required: how to prune the search tree, and how to evaluate a position. (These judgement calls are handled by AlphaGo's 'policy network' and 'value network', respectively.)
In presentations for a general audience Demis Hassabis shows examples of simple Atari games that Google's deep learning machines learn _from scratch_. The rules of the game are not explicitly programmed. The deep learning machine tries things at random, and has to learn from just the results.
In the case of AlphaGo the learning is not from scratch at all. AlphaGo uses existing tree search technology, with the judgement calls shifted to deep learning.
Clearly it would be an immensely greater challenge to have a deep learning machine learn to play strong Go without any explicit programming (such as tree searching algorithms).