## 9 – Non-Linear Function Approximation

Non-linear function approximation is what we've been building up to in this lesson. Recall from our previous discussion how we can capture non-linear relationships between the input state and output value by using arbitrary kernels, such as radial basis functions, as our feature transformation. In this model, our output value is still linear with respect to the …
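As a minimal sketch of how the model stops being linear, consider passing the weighted feature sum through a non-linear activation (a sigmoid here, purely for illustration; `v_hat`, `sgd_update`, and the training loop are hypothetical names, not from the lesson):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def v_hat(x, w):
    # Non-linear value estimate: activation applied to the linear combination w . x
    return sigmoid(np.dot(w, x))

def sgd_update(x, w, target, alpha=0.5):
    # Chain rule: gradient of sigmoid(w . x) w.r.t. w is sigmoid'(w . x) * x
    v = sigmoid(np.dot(w, x))
    grad = v * (1.0 - v) * x
    return w + alpha * (target - v) * grad

# Toy fit: drive the estimate for one feature vector toward a target value
w = np.zeros(3)
x = np.array([0.5, -0.2, 1.0])
for _ in range(500):
    w = sgd_update(x, w, target=0.8)
```

The only change from the linear case is the activation and the extra chain-rule factor in the gradient; the update rule otherwise has the same shape.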

## 8 – Kernel Functions

A simple extension to linear function approximation can help us capture non-linear relationships. At the heart of this approach is our feature transformation. Remember how we defined it in a generic sense: something that takes a state or a state-action pair and produces a feature vector. Each element of this vector can be produced …
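A radial basis function is one common choice for producing each element of that feature vector. A minimal sketch (the function name and center placement are illustrative, not from the lesson):

```python
import numpy as np

def rbf_features(s, centers, sigma=0.5):
    """Gaussian RBF features: element i measures how close state s is to center c_i."""
    s = np.asarray(s, dtype=float)
    # exp(-||s - c_i||^2 / (2 * sigma^2)) for every center c_i
    return np.exp(-np.sum((centers - s) ** 2, axis=1) / (2 * sigma ** 2))

# Three fixed centers in a 2D continuous state space
centers = np.array([[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]])
x = rbf_features([0.5, 0.5], centers)
```

Each feature peaks at 1.0 when the state sits exactly on a center and falls off smoothly with distance, which is what lets a linear combination of these features represent a non-linear value function.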

## 7 – Linear Function Approximation

Let’s take a closer look at linear function approximation and how to estimate the parameter vector w. As you’ve seen already, a linear function is a simple sum over all the features multiplied by their corresponding weights. Let’s assume you have initialized these weights randomly and computed the value of a state, v̂(s, w). …
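The resulting gradient-descent update is especially simple in the linear case, because the gradient of v̂(s, w) with respect to w is just the feature vector itself. A sketch under those assumptions (names are illustrative):

```python
import numpy as np

def v_hat(x, w):
    # Linear value estimate: sum of features times their weights
    return np.dot(w, x)

def sgd_update(w, x, target, alpha=0.1):
    # w <- w + alpha * (target - v_hat(s, w)) * x, since grad_w v_hat = x
    return w + alpha * (target - np.dot(w, x)) * x

# Toy fit: repeatedly nudge w so the estimate matches a target value
w = np.zeros(2)
x = np.array([1.0, 0.5])
for _ in range(100):
    w = sgd_update(w, x, target=2.0)
```

Each update moves w a small step in the direction that reduces the squared error between the target and the current estimate.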

## 6 – Function Approximation

So far, we’ve looked at ways to discretize continuous state spaces. This enables us to use existing reinforcement learning algorithms with little or no modification. But there are some limitations. When the underlying space is complicated, the number of discrete states needed can become very large. Thus, we lose the advantage of discretization. Moreover, if …

## 5 – Coarse Coding

Coarse coding is just like tile coding, but uses a sparser set of features to encode the state space. Imagine dropping a bunch of circles on your 2D continuous state space. Take any state s, which is a position in this space, and mark all the circles it belongs to. Prepare a bit vector …
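The circle-membership step can be sketched in a few lines (a toy illustration with hypothetical names, not the lesson's implementation):

```python
import numpy as np

def coarse_code(s, centers, radius):
    """Bit vector over circular receptive fields: bit i is 1 if s lies inside circle i."""
    s = np.asarray(s, dtype=float)
    dists = np.linalg.norm(centers - s, axis=1)
    return (dists <= radius).astype(int)

# Three overlapping circles dropped on a 2D state space
centers = np.array([[0.2, 0.2], [0.5, 0.5], [0.8, 0.8]])
code = coarse_code([0.45, 0.5], centers, radius=0.3)
```

Nearby states fall inside many of the same circles and thus share bits, which is what gives coarse coding its generalization across neighboring states.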

## 4 – Tile Coding

If you have prior knowledge about the state space, you can manually design an appropriate discretization scheme. Like in our gear-switching example, we knew the relationship between fuel consumption and speed. But to function in arbitrary environments, we need a more generic method. One elegant approach for this is tile coding. Here, …
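The core idea can be sketched as several overlapping grids ("tilings"), each offset slightly, with each tiling contributing one active tile per state (a toy sketch with illustrative names and parameters):

```python
import numpy as np

def active_tiles(s, low, high, bins=4, tilings=3):
    """Return one (row, col) tile index per tiling for continuous state s."""
    s, low, high = map(np.asarray, (s, low, high))
    width = (high - low) / bins
    tiles = []
    for t in range(tilings):
        offset = width * t / tilings  # shift each successive tiling slightly
        idx = np.floor((s - low + offset) / width).astype(int)
        tiles.append(tuple(np.clip(idx, 0, bins - 1)))
    return tiles

tiles = active_tiles([0.3, 0.7], low=[0.0, 0.0], high=[1.0, 1.0])
```

Because the tilings are offset from one another, two nearby states activate overlapping but not identical sets of tiles, giving a finer effective resolution than any single grid.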

## 3 – Discretization

As the name suggests, discretization is basically converting a continuous space into a discrete one. Remember our continuous vacuum cleaner world? All we’re saying is: let’s bring back a grid structure with discrete positions identified. Note that we’re not really forcing our agent to be exactly at the center of these positions. Since the underlying …
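Mapping a continuous position onto such a grid takes only a couple of lines. A minimal sketch (function name and grid bounds are illustrative):

```python
import numpy as np

def discretize(s, low, high, bins):
    """Map a continuous position s to the index of the grid cell containing it."""
    s, low, high = map(np.asarray, (s, low, high))
    ratios = (s - low) / (high - low)
    idx = np.floor(ratios * bins).astype(int)
    return tuple(np.clip(idx, 0, bins - 1))

# Any position inside a cell maps to the same discrete state
cell = discretize([0.34, 0.78], low=[0.0, 0.0], high=[1.0, 1.0], bins=10)
```

Every continuous position inside a cell collapses to the same discrete index, which is exactly what lets table-based RL algorithms run unchanged on top of the grid.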

## 2 – Discrete vs. Continuous Spaces

Let us first take a look at what we mean by discrete and continuous spaces. Recall the definition of a Markov Decision Process, where we assume that the environment state at any time is drawn from a set of possible states. When the set is finite, we call it a discrete state space. Similarly, …

## 10 – Summary

In summary, here’s what you learned in this lesson. Traditional reinforcement learning techniques use a finite MDP to model an environment, which limits us to environments with discrete state and action spaces. In order to extend our learning algorithms to continuous spaces, we can do one of two things: discretize the state space, or directly …

## 1 – Introduction

Welcome to Deep Reinforcement Learning. Interest in the field of reinforcement learning seems to have almost exploded with success stories like AlphaGo and platforms like OpenAI. Research in this area has been moving at a steady pace since the 1980s, but it has really taken off with recent advances in deep learning. As we progress …