AI class unit 9

These are my notes for unit 9 of the AI class.

== Planning under Uncertainty ==

== Introduction ==

{{#ev:youtubehd|DgH6NaJHfVQ}}

So today is an exciting day. We'll talk about planning under uncertainty, and it really puts together some of the material we've talked about in past classes. We talked about planning, but not under uncertainty, and you've had many, many classes on uncertainty, and now it gets to the point where we can make decisions under uncertainty.

This is really important for my own research field, robotics, where the world is full of uncertainty, and the type of techniques I'll tell you about today will really make it possible to drive robots in the actual physical world and find good plans for these robots to execute.

== Planning Under Uncertainty MDP ==

{{#ev:youtubehd|9D35JSWSJAg}}

Planning under uncertainty. In this class so far we talked a good deal about planning. We talked about uncertainty and probabilities, and we also talked about learning, but all three items were discussed separately. We never brought planning and uncertainty together, uncertainty and learning, or planning and learning. So the class today will fuse planning and uncertainty using techniques known as Markov Decision Processes, or MDPs, and Partially Observable Markov Decision Processes, or POMDPs. We also have a class coming up on reinforcement learning which combines all three of these aspects.
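
As a quick preview of what an MDP looks like in practice, here is a minimal value-iteration sketch on a toy two-state problem. The toy problem itself (the states, transition probabilities, rewards, and discount factor) is my own made-up example, not something from the lecture; the update rule V(s) = max_a Σ_s' P(s'|s,a)·(R + γV(s')) is the standard Bellman backup that MDP planning is built on.

<syntaxhighlight lang="python">
# Minimal value-iteration sketch for a tiny MDP (toy example, not from the lecture).
# States: 'a', 'b', 'goal'. Actions: 'go', 'stay'. Transitions are stochastic,
# which is exactly the "planning under uncertainty" part: one action can lead to
# several possible next states, each with a probability.

GAMMA = 0.9  # discount factor (assumed value, chosen for illustration)

# transitions[state][action] -> list of (probability, next_state, reward)
transitions = {
    'a': {
        'go':   [(0.8, 'b', 0.0), (0.2, 'a', 0.0)],
        'stay': [(1.0, 'a', 0.0)],
    },
    'b': {
        'go':   [(0.9, 'goal', 10.0), (0.1, 'a', 0.0)],
        'stay': [(1.0, 'b', 0.0)],
    },
    'goal': {},  # terminal state: no actions, value stays 0
}

def value_iteration(transitions, gamma=GAMMA, tol=1e-6):
    """Iterate the Bellman backup V(s) = max_a sum_s' P(s'|s,a)*(R + gamma*V(s'))
    until the largest change in any state's value falls below tol."""
    V = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s, actions in transitions.items():
            if not actions:  # skip terminal states
                continue
            best = max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for outcomes in actions.values()
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

print(value_iteration(transitions))  # roughly V['a'] = 8.58, V['b'] = 9.77
</syntaxhighlight>

The returned values tell the planner how good each state is once uncertainty is averaged in; picking the action that achieves the max at each state then gives the plan (the policy), which is the theme this unit develops.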