Coarse coding is just like tile coding, but uses a sparser set of features to encode the state space. Imagine dropping a bunch of circles on your 2D continuous state space. Take any state S, which is a position in this space, and mark all the circles it belongs to. Prepare a bit vector with a one for those circles and a zero for the rest, and that’s your sparse coding representation of the state. Looking at a 2D space helps us visualize the basic idea, but it also extends to higher dimensions, where circles become spheres and hyperspheres.

There are some neat properties of coarse coding. Using smaller circles results in less generalization across the space: the learning algorithm has to work a bit longer, but you get greater effective resolution. Larger circles lead to more generalization and, in general, a smoother value function. You can use fewer large circles to cover the space, thus reducing the size of your representation, but you would lose some resolution. It’s not just the size of these circles that we can vary; we can also change their shape, like making them taller or wider, to get more resolution along one dimension than the other. In fact, this same technique generalizes to pretty much any shape.

In coarse coding, just like in tile coding, our resulting state representation is a binary vector. Think of each tile or circle as a feature: one if it is active, zero if it is not. A natural extension to this idea is to use the distance from the center of each circle as a measure of how active that feature is. This measure, or response, can be made to fall off smoothly using a Gaussian or bell-shaped curve centered on the circle, which is known as a radial basis function. Of course, the resulting feature values will no longer be discrete, so we’ll end up with yet another continuous state vector. But what is cool is that the number of features can be drastically reduced this way. We’ll look at RBFs more closely later when we discuss function approximation.
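The two encodings described above can be sketched in a few lines of NumPy. This is a minimal illustration, not a library API: the circle centers, radius, and Gaussian width below are hypothetical values chosen just to make the example concrete.

```python
import numpy as np

# Hypothetical coarse coding of a 2D continuous state space in [0, 1]^2:
# drop n_circles circles of a fixed radius at random positions.
rng = np.random.default_rng(0)
n_circles = 8
centers = rng.uniform(0.0, 1.0, size=(n_circles, 2))  # circle centers
radius = 0.3

def coarse_code(state):
    """Binary feature vector: 1 for each circle the state falls inside."""
    dists = np.linalg.norm(centers - state, axis=1)
    return (dists <= radius).astype(float)

def rbf_code(state, sigma=0.2):
    """RBF extension: each feature's response falls off smoothly
    with distance from the circle's center (Gaussian bump)."""
    dists = np.linalg.norm(centers - state, axis=1)
    return np.exp(-(dists ** 2) / (2 * sigma ** 2))

state = np.array([0.5, 0.5])
print(coarse_code(state))  # sparse 0/1 vector over the 8 circles
print(rbf_code(state))     # smooth activations in (0, 1]
```

Shrinking `radius` makes fewer circles overlap each state (less generalization, finer resolution), while growing it has the opposite effect; replacing the shared scalar radius with per-dimension widths would stretch the circles into ellipses, giving more resolution along one dimension than the other.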