I first came across dicewalking a few years ago in a book called ‘Experimental Travel’. The basic principle is to walk around, deciding your direction by the roll of a die. At each crossroads, you assign an equal probability to each of your available routes, and let chance decide where to take you. So, for example, you get to an intersection with three possible routes: left, straight on and right. You roll a die: a 1 or 2 means you go left, a 3 or 4 means straight on, and a 5 or 6 means a right turn. The same roll maps to different directions depending on the number of options, so that the probabilities always remain equal. The only rule is that the die must be followed exactly, even if it means going down the same road again.
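In code, the die-to-route mapping looks something like this (a minimal Python sketch; `pick_route` is just an illustrative helper, and it re-rolls any faces that don't map evenly when the number of routes doesn't divide six):

```python
import random

def pick_route(n_routes, rng=random):
    """Pick one of n_routes with equal probability using a six-sided die.

    Each route gets an equal share of the six faces. When n_routes does
    not divide 6 (e.g. a 4-way or 5-way junction), faces that don't map
    to any route are simply re-rolled, keeping the probabilities equal.
    """
    faces_per_route = 6 // n_routes
    while True:
        roll = rng.randint(1, 6)  # roll the die: 1..6
        if roll <= faces_per_route * n_routes:
            return (roll - 1) // faces_per_route  # 0-based route index

# At a junction with 3 options (left, straight, right):
# rolls 1-2 -> route 0 (left), 3-4 -> route 1 (straight), 5-6 -> route 2 (right)
```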

Having done this several times, I've found it a nice way of exploring a new city. Since your path is decided by chance, you are freer to observe and absorb your surroundings, and you see places you would never have seen otherwise.

Now, being a physicist, I decided to do some analysis of the dicewalk, to see what kind of interesting things I could learn from it. Hopefully the analysis will give some insight into how physicists tackle problems like this, and along the way I'll explain a few interesting concepts that come up.

**Mathematical Background:**

The dicewalk is a type of finite random walk: in particular, a two-dimensional non-Markovian random walk. Let me explain these terms:

‘Two-dimensional’ and ‘random walk’ are self-explanatory: we’re walking on a surface, and the direction is decided by a die.

Non-Markovian: this needs a little more introduction. A Markov process is a type of random process that is often called ‘memoryless’. In the context of a random walk, this means your next location depends only on where you are currently; none of the previous steps matter to the probabilities of the following dynamics. More formally, it is a system that depends only on P(x, t) and P(y, t+1 | x, t): you can predict the dynamics given the probability of being at a position x (just a general label for a coordinate) at time t, and the probability of going to location y, given you were at x in the previous step.

Here this is NOT a Markov process, since the next location depends on both the position we are at currently and the position before that. This is simply the condition that we don’t immediately turn back on ourselves. It means we have to characterise the system by P(y, t+1 | x, t; x', t−1): the probability of moving to y, given the current position x and the previous position x'.
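The memory rule is easy to express in code. Here is a minimal Python sketch of a single step on a square grid (the names are just for illustration): the candidate moves exclude the reversal of the previous move, so the next position genuinely depends on both the current and the previous position.

```python
import random

# The four grid moves: right, left, up, down.
MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def next_step(prev_move, rng=random):
    """Choose the next move uniformly from the non-reversing options."""
    if prev_move is None:                      # first step: all four allowed
        options = MOVES
    else:
        back = (-prev_move[0], -prev_move[1])  # the move that would undo us
        options = [m for m in MOVES if m != back]
    return rng.choice(options)
```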

Now, why is this important? Markov processes are well understood because they’re (mostly) easy to solve, and are generally analytically tractable. This means we can simply solve the equations analytically and derive our probability distributions, and all the other quantities we want, with no problems. Unfortunately, most problems don’t have this property, which means we have to use numerical methods or approximations to get our insights (you’ll see what I mean by this).

**Where will you end up?**

I confess I didn’t really have much interest in attempting to solve this analytically, since it’s quite likely I wouldn’t get anywhere (it is possible to simplify the problem to one dimension, which would be the place to start). There has been a lot of rigorous mathematics developed for random walks (e.g. for things like protein folding), but I haven’t found any research into this particular type of random walk with memory.

Instead, I use a numerical method: I wrote a simulation that runs millions of dicewalks and collects their data. With this I can play around with the parameters and build an intuition for how the dicewalk works.
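In outline, the simulation works something like this (a minimal Python sketch rather than my full code; the function names are just for illustration). We run many walks with the no-reversal rule and histogram the endpoints to estimate the probability of finishing on each square:

```python
import random
from collections import Counter

MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # right, left, up, down

def dicewalk(n_steps, rng):
    """Run one dicewalk of n_steps on a grid and return the endpoint."""
    x = y = 0
    prev = None
    for _ in range(n_steps):
        # Exclude the move that would immediately reverse the last one.
        options = MOVES if prev is None else \
            [m for m in MOVES if m != (-prev[0], -prev[1])]
        dx, dy = rng.choice(options)
        x, y, prev = x + dx, y + dy, (dx, dy)
    return x, y

def endpoint_distribution(n_steps, n_walks, seed=0):
    """Estimate the probability of finishing on each square."""
    rng = random.Random(seed)
    counts = Counter(dicewalk(n_steps, rng) for _ in range(n_walks))
    return {pos: c / n_walks for pos, c in counts.items()}
```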

First off is just a dicewalk on a grid (e.g. most American cities). Here are the results of the simulations, with increasing numbers of dice rolls, ranging from 30 to 100. The colour represents the probability, as shown on the right, and the green square is the starting square:

For simplicity I have given only the figure for even-numbered moves here (all the figures can be found in the appendix). The even and odd cases are shown separately because they end up on different squares, giving this chequerboard appearance. This is because each move changes x + y by exactly one, so after an even total number of moves the sum x + y is even, and after an odd total it is odd: even-length walks can only finish on ‘even’ squares, and odd-length walks only on ‘odd’ squares.
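The parity argument is easy to verify directly, and it holds even for a plain random walk without the dicewalk's memory rule (a quick standalone check in Python):

```python
import random

def random_endpoint(n_moves, rng):
    """Endpoint of a plain (memoryless) random walk of n_moves on a grid."""
    x = y = 0
    for _ in range(n_moves):
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = x + dx, y + dy
    return x, y

rng = random.Random(1)
for n in (30, 31, 100, 101):
    for _ in range(200):
        x, y = random_endpoint(n, rng)
        # Each move changes x + y by exactly +/-1, so the parity of the
        # endpoint sum always matches the parity of the number of moves.
        assert (x + y) % 2 == n % 2
```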

We see the probability distribution moving outwards in a diffusive process. Along any given row, the distribution is almost a Gaussian (normal) distribution. Had the walk dynamics been Markovian, we would see an exact Gaussian, though the deviation here is very small (the higher moments are very small). The distribution is also markedly wider than for a simple random walk without the dicewalk rules: since we’ve removed the possibility of turning back, we’ve forced ourselves to go further out, which is nice for exploration.

So, the figure shows us that you’re very likely to end up near where you started. This makes sense, as there are more possible trajectories that end up near your initial location than ones that end up further out. This demonstrates the concept of entropy: each endpoint has an associated entropy, which is just the number of possible dicewalk configurations that lead to that endpoint. The die selects one of these configurations at random, and more of them finish close to the centre than far away. So, be reassured that on your dicewalks, entropy wants you to end up closer to home.
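For short walks you can even check this entropy argument by brute force, exhaustively enumerating every possible dicewalk of a fixed length and counting how many distinct walks reach each square (a quick Python sketch, separate from the main simulation):

```python
from collections import Counter

MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # right, left, up, down

def count_endpoints(n_steps):
    """Count, for every square, the number of dicewalks ending there."""
    counts = Counter()

    def extend(x, y, prev, steps_left):
        if steps_left == 0:
            counts[(x, y)] += 1
            return
        # No-reversal rule: exclude the move that undoes the last one.
        options = MOVES if prev is None else \
            [m for m in MOVES if m != (-prev[0], -prev[1])]
        for dx, dy in options:
            extend(x + dx, y + dy, (dx, dy), steps_left - 1)

    extend(0, 0, None, n_steps)
    return counts

counts = count_endpoints(8)  # 4 * 3**7 = 8748 walks in total
# Exactly one walk reaches (8, 0): stepping right every single time.
# Far more configurations finish back near the origin.
```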

If we look at the average distance from your starting point:
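This quantity is straightforward to estimate from the same kind of simulation (a minimal sketch with the no-reversal rule; the sample sizes here are just for illustration):

```python
import random
from math import hypot

MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # right, left, up, down

def dicewalk_endpoint(n_steps, rng):
    """Endpoint of one dicewalk (no immediate reversal) on a grid."""
    x = y = 0
    prev = None
    for _ in range(n_steps):
        options = MOVES if prev is None else \
            [m for m in MOVES if m != (-prev[0], -prev[1])]
        dx, dy = rng.choice(options)
        x, y, prev = x + dx, y + dy, (dx, dy)
    return x, y

def mean_distance(n_steps, n_walks=20000, seed=0):
    """Average straight-line distance from the start after n_steps rolls."""
    rng = random.Random(seed)
    total = sum(hypot(*dicewalk_endpoint(n_steps, rng))
                for _ in range(n_walks))
    return total / n_walks
```

For a diffusive process, the mean distance should grow roughly like the square root of the number of steps.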

**Interestingness?**

So, let’s look at how we can make sure our dicewalk is interesting. Is there a particular length of walk that is somehow more ‘interesting’ than others?

I’ll get on it.