# 8 – Kernel Functions

A simple extension to linear function approximation can help us capture non-linear relationships. At the heart of this approach is our feature transformation. Remember how we defined it in a generic sense? Something that takes a state or a state-action pair and produces a feature vector? Each element of this vector can be produced by a separate function, which can be non-linear. For example, let's assume our state S is a single real number. Then we can define, say, x1(S) equals S, x2(S) equals S squared, x3(S) equals S cubed, et cetera. These are called Kernel Functions or Basis Functions. They transform the input state into a different space. But note that since our value function is still defined as a linear combination of these features, we can still use linear function approximation. What this allows the value function to do is represent non-linear relationships between the input state and the output value.

Radial Basis Functions are a very common form of Kernels used for this purpose. You might've heard of them. Essentially, think of the current state S as a location in the continuous state space, here depicted as a rectangular plane. Each Basis Function is shown as a blob. The closer the state is to the center of the blob, the higher the response returned by the function. And the farther you go, the more the response falls off gradually with the radius. Hence the name Radial Basis Function. Mathematically, this can be achieved by associating a Gaussian Kernel with each Basis Function, with its mean serving as the center of the blob and its standard deviation determining how sharply or smoothly the response falls off. So, for any given state, we can reduce the state representation to a vector of responses from these Radial Basis Functions. From that point onwards, we can use our same function approximation method.
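The two feature transformations described above can be sketched in a few lines of numpy. This is a minimal illustration, not code from the lecture: the function names (`poly_features`, `rbf_features`, `value`), the choice of three RBF centers, and the shared `sigma` are all assumptions made for the example. The key point it demonstrates is that the value estimate stays linear in the weights even though the features are non-linear in the state.

```python
import numpy as np

# Polynomial basis for a scalar state: x1(s) = s, x2(s) = s^2, x3(s) = s^3.
def poly_features(s, degree=3):
    return np.array([s ** k for k in range(1, degree + 1)])

# Gaussian radial basis functions: each center is the "blob" from the
# lecture; the response is 1 at the center and falls off with the
# squared distance, at a rate set by the standard deviation sigma.
def rbf_features(s, centers, sigma=1.0):
    s = np.atleast_1d(np.asarray(s, dtype=float))
    sq_dist = np.sum((centers - s) ** 2, axis=1)  # distance to each center
    return np.exp(-sq_dist / (2.0 * sigma ** 2))

# The value function is still a linear combination of the features,
# so ordinary linear function approximation applies: V(s) = w . x(s).
def value(s, w, centers, sigma=1.0):
    return w @ rbf_features(s, centers, sigma)

# Hypothetical 2-D state space with three RBF centers.
centers = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
x = rbf_features([0.0, 0.0], centers, sigma=1.0)
# x[0] is 1 (the state sits on the first center); the other
# responses shrink as the distance to their centers grows.
```

Changing `sigma` trades off smoothness: a small value gives sharp, localized blobs, while a large value lets each basis function respond to states far from its center.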