Hey, everyone. With this, we’ve reached the end of the exciting module on multi-agent RL. We began by introducing ourselves to the multi-agent systems present all around us, and reasoned about why multi-agent systems are an important piece of the puzzle in solving AI, which is why we decided to pursue this complex topic. We then studied the Markov games framework, which generalizes MDPs to the multi-agent case. We talked about using single-agent RL algorithms as-is in the multi-agent setting, and saw that this leads either to non-stationarity (when each agent learns independently) or to a large joint action space (when one learner controls all agents). We also saw the interesting kinds of environments present in the multi-agent case, namely cooperative, competitive, and mixed. Towards the end, we implemented the multi-agent DDPG (MADDPG) algorithm, a centralized-training, decentralized-execution algorithm that can be used in any of the above environments.

That was all from my side. I hope it was a great learning experience for you. Thanks a lot for being with me through this journey. Bye-bye.
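As a quick refresher on that last point, the centralized-training / decentralized-execution split at the heart of MADDPG can be sketched roughly as below. This is a minimal illustrative sketch, not the course implementation: the linear actors and critic, and all names (`act`, `q_value`, dimensions), are assumptions made up for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
N_AGENTS, OBS_DIM, ACT_DIM = 2, 4, 2  # toy sizes, chosen arbitrarily

# Decentralized actors: each maps only its OWN observation to an action.
actors = [rng.normal(size=(ACT_DIM, OBS_DIM)) * 0.1 for _ in range(N_AGENTS)]

# Centralized critic: during training it sees ALL observations and ALL actions.
joint_dim = N_AGENTS * (OBS_DIM + ACT_DIM)
critic_w = rng.normal(size=joint_dim) * 0.1

def act(i, obs_i):
    """Execution: agent i uses only its local observation."""
    return np.tanh(actors[i] @ obs_i)

def q_value(all_obs, all_acts):
    """Training: the critic scores the joint observation-action vector."""
    joint = np.concatenate([*all_obs, *all_acts])
    return float(critic_w @ joint)

# At execution time, each agent acts on local information alone...
obs = [rng.normal(size=OBS_DIM) for _ in range(N_AGENTS)]
acts = [act(i, obs[i]) for i in range(N_AGENTS)]
# ...while at training time the centralized critic evaluates the joint outcome.
q = q_value(obs, acts)
```

The key design point is that the extra (joint) information is only needed while learning; once trained, each actor can be deployed on its own, which is what makes the approach usable in cooperative, competitive, and mixed environments alike.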