[[File:Untitled.PNG]]
These are my notes from [http://robots.stanford.edu/ Sebastian Thrun] and [http://norvig.com/ Peter Norvig]'s [http://ai-class.org/ AI class], a course on Artificial Intelligence.


Course undertaken October to December 2011. Here goes nothing. :P


= Welcome =

Hi there. My on-going involvement in this class is sketchy at best. I have a whole heap of other things that I'm meant to be doing, and having done half the class I'm not sure that it's the right thing for me. Anyway, I've really enjoyed doing what I've done so far and if there's something here that's useful to you that's great. But I'm not sure that I'm going to be finishing my work here. Sorry.

= Documentation =

* [[AI class prerequisites]] for prerequisites
* [[AI class readings]] for the readings
* [[AI class overview]] for an overview
* [[AI class journal]] for my journal
* [[AI class errata]] for my notes on mistakes I've found in the coursework

= Week 1 =

== Welcome to AI ==

=== Introduction ===

{{#ev:youtubehd|BnIJ7Ba5Sr4}}

The lecturers will deliver a good introduction to AI. It's going to be a lot of work, but very rewarding.

There are new videos every week, with quizzes to test your knowledge built into the videos.

For the advanced version of the course there are homework assignments and exams. These will be graded to determine whether you can master AI the same way any good Stanford student could.
 
If you finish the course the lecturers will sign a letter of accomplishment and let you know what your rank in the class was.
 
=== Course Overview ===
 
{{#ev:youtubehd|Q7_GQq7cDyM}}
 
Purpose of the class:
 
* To teach the basics of artificial intelligence (AI)
* To excite you
 
Structure:
 
* Videos -> Quizzes -> Answer Videos
* Assignments (like quizzes but without the answers) and exams
 
=== Intelligent Agents ===
 
{{#ev:youtubehd|cx3lV07w-XE}}
 
The agent has sensors that measure the environment, and actuators that can affect the environment.
 
The function that maps sensors to actuators is called the Control Policy of the agent.
 
This is the Perception-Action cycle:
 
[[File:AI class 2011-10-15-191700.PNG]]
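
As a rough illustration, the perception-action cycle can be sketched as a loop in code. This is my own minimal sketch, not anything from the course; the <code>Environment</code> class and the trivial policy are hypothetical stand-ins.

<syntaxhighlight lang="python">
# A minimal sketch of the perception-action cycle. The Environment class
# and its methods are hypothetical stand-ins, not from the course.

class Environment:
    def __init__(self):
        self.position = 0

    def read_sensors(self):
        """Sensors measure the environment."""
        return {"position": self.position}

    def apply(self, action):
        """Actuators affect the environment."""
        self.position += 1 if action == "forward" else 0

def control_policy(percept):
    """The control policy maps sensor readings to actions."""
    return "forward" if percept["position"] < 10 else "stop"

env = Environment()
for _ in range(20):                      # the perception-action cycle
    percept = env.read_sensors()         # sense
    action = control_policy(percept)     # decide
    env.apply(action)                    # act
</syntaxhighlight>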
 
=== Applications of AI ===
 
{{#ev:youtubehd|N6JW8TQzbX8}}
 
[[File:AI class 2011-10-15-193200.PNG]]
 
[[File:AI class 2011-10-15-192600.PNG]]
 
[[File:AI class 2011-10-15-192800.PNG]]
 
[[File:AI class 2011-10-15-201000.PNG]]
 
[[File:AI class 2011-10-15-201200.PNG]]
 
[[File:AI class 2011-10-15-201500.PNG]]
 
=== Terminology ===
 
{{#ev:youtubehd|5lcLmhsmBnQ}}
 
[[File:AI class 2011-10-15-203000.PNG]]
 
[[File:AI class 2011-10-15-202600.PNG]]
 
=== Checkers ===
 
{{#ev:youtubehd|qVppDRbx2kM}}
 
[[File:AI class 2011-10-15-212700.PNG]]
 
=== Poker Question ===
 
{{#ev:youtubehd|M_AdFAazf4k}}
 
{{#ev:youtubehd|DjILhASM3A8}}
 
=== Robot Car Question ===
 
{{#ev:youtubehd|vz-ERydsKLU}}
 
{{#ev:youtubehd|nOWCfVG0xNQ}}
 
[[File:AI class 2011-10-18-093400.PNG]]
 
=== AI and Uncertainty ===
 
{{#ev:youtubehd|ytw6_8a5Wls}}
 
[[File:AI class 2011-10-17-001100.PNG]]
 
=== Machine Translation ===
 
{{#ev:youtubehd|sPSN0aI0PgE}}
 
[[File:AI class 2011-10-18-094200.PNG]]
 
[[File:AI class 2011-10-18-094400.PNG]]
 
=== Chinese Translation ===
 
{{#ev:youtubehd|RWhwKudtixY}}
 
{{#ev:youtubehd|vvyaXxjsxBU}}
 
{{#ev:youtubehd|lFJey0tOvBg}}
 
=== Summary ===
 
{{#ev:youtubehd|mXM38kjzK-M}}
 
[[File:AI class 2011-10-19-023500.PNG]]
 
== Problem Solving ==
 
=== Introduction ===
 
{{#ev:youtubehd|ZQmJuHtpGfs}}
 
In this unit we talk about problem solving: the theory and technology of building agents that can plan ahead to solve problems. In particular we're talking about problem solving where the complexity of the problem comes from the idea that there are many states, as in this problem here:
 
[[File:AI class 2011-10-19-024500.PNG]]
 
It's a navigation problem where there are many choices to start with, and the complexity comes from picking the right choice now, the right choice at the next intersection, and the one after that: stringing together a sequence of actions.
 
This is in contrast to the type of complexity shown in this picture:
 
[[File:AI class 2011-10-19-024501.PNG]]
 
Here the complexity comes from partial observability: we can't see through the fog where the possible paths are, we can't see the results of our actions, and even the actions themselves may not be known. This type of complexity will be covered in another unit.
 
Here's an example of a problem:
 
[[File:AI class 2011-10-19-025600.PNG]]
 
This is a route finding problem, where we are given a start city, Arad, and a destination city, Bucharest (the capital of Romania), and the problem is to find a route from Arad to Bucharest. The actions that the agent can execute are driving from one city to the next along one of the roads shown on the map. The question is: is there a solution that the agent can come up with, given the knowledge shown here, to the problem of driving from Arad to Bucharest?
 
=== What is a Problem? ===
 
{{#ev:youtubehd|SIHc9LgMeaU}}
 
The answer is no. There is no solution that the agent could come up with, because Bucharest does not appear on the map. The agent doesn't know of any actions that can arrive there, so let's give the agent a better chance:
 
[[File:AI class 2011-10-19-032700.PNG]]
 
Now we've given the agent the full map of Romania. The start is in Arad; and the destination, or goal, is in Bucharest. The agent is given the problem of coming up with a sequence of actions that will arrive at the destination. Now is it possible for the agent to solve this problem?
 
The answer is yes, there are many routes, or steps, or sequences of actions, that will arrive at the destination, here is one of them:
 
[[File:AI class 2011-10-19-033200.PNG]]
 
Now let's formally define what a problem looks like. A problem can be broken down into a number of components.
 
[[File:AI class 2011-10-19-035100.PNG]]
 
First, the '''initial state''' that the agent starts out with. In our route finding problem the initial state was the agent being in the city of Arad.
 
Next, a function '''Actions''' that takes a state as input and returns a set of possible actions that the agent can execute when the agent is in this state. In some problems agents will have the same actions available in all states and in other problems they'll have different actions dependent on the state. In the route finding problem the actions are dependent on the state, when we're in one city we can take the routes to the neighbouring cities but we can't go to any other cities.
 
Next, we have a function called '''Result''' which takes as input a state and an action and delivers as its output a new state. So for example, if the agent is in the city of Arad (that would be the state) and takes the action of driving along route E671 toward Timisoara then the result of applying that action in that state would be the new state where the agent is in the city of Timisoara.
 
Next we need a function called '''GoalTest''' which takes a state and returns a boolean value (true or false) telling us if this state is a goal or not. In a route finding problem the only goal would be being in the destination city, the city of Bucharest, and all the other states would return false for the GoalTest.
 
Finally we need one more thing, which is a '''PathCost''' function, which takes a path, a sequence of state-action transitions, and returns a number which is the cost of that path. Now for most of the problems that we deal with we'll make the PathCost function be additive, so that the cost of a path is just the sum of the costs of the individual steps. So we'll implement this PathCost function in terms of a '''StepCost''' function. The StepCost function takes a state, an action, and the resulting state from that action, and returns a number n which is the cost of that action. In the route finding example the cost might be the number of miles travelled, or maybe the number of minutes it takes to get to that destination.
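
As a sketch, these components map naturally onto a class. The <code>RouteProblem</code> name and the small excerpt of the road map below are mine, not the course's; the step costs are taken from the map in this section.

<syntaxhighlight lang="python">
# A sketch of the problem components above for the Romania route-finding
# example. RouteProblem and the road-map excerpt are illustrative only.

ROADS = {  # step costs in kilometres, a small excerpt of the map
    'Arad':      {'Zerind': 75, 'Sibiu': 140, 'Timisoara': 118},
    'Zerind':    {'Arad': 75, 'Oradea': 71},
    'Oradea':    {'Zerind': 71, 'Sibiu': 151},
    'Sibiu':     {'Arad': 140, 'Oradea': 151},
    'Timisoara': {'Arad': 118},
}

class RouteProblem:
    def __init__(self, initial, goal):
        self.initial = initial            # the initial state
        self.goal = goal

    def actions(self, state):
        """Actions available in a state: drive to any neighbouring city."""
        return list(ROADS[state])

    def result(self, state, action):
        """The new state after driving to a neighbouring city."""
        return action

    def goal_test(self, state):
        """True if this state is a goal."""
        return state == self.goal

    def step_cost(self, state, action, result):
        return ROADS[state][action]

    def path_cost(self, path):
        """Additive: the cost of a path is the sum of its step costs."""
        return sum(self.step_cost(s, a, r) for s, a, r in path)
</syntaxhighlight>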
 
=== Example: Route Finding ===
 
{{#ev:youtubehd|bEi73QXP7PA}}
 
Now let's see how the definition of a problem maps on to the route finding domain. First the initial state we're given, let's say we start off in Arad; and the goal test, let's say that the state of being in Bucharest is the only state that counts as a goal and all other states are not goals.
 
[[File:AI class 2011-10-19-041600.PNG]]
 
Now the set of all the states here is known as the '''state space''', and we navigate the state space by applying actions. The actions are specific to each city, so when we're in Arad there are three possible actions (following each of the connecting roads).
 
As we follow roads we build paths, or sequences of actions. So just being in Arad is the path of length zero and now we could start exploring the space and add in various paths of length one, length two, length three, etc.
 
Now at every point we want to separate the states out into three parts. First, the ends of the paths, the farthest paths that have been explored, which we call the '''Frontier'''. Then to the left of that in this diagram we have the '''Explored''' part of the state space, and off to the right we have the '''Unexplored''' part.
 
[[File:AI class 2011-10-19-042300.PNG]]
 
One more thing. In this diagram we've labelled the step cost of each action along the route. So the step cost of going between Neamt and Iasi would be 87, corresponding to a distance of 87 kilometers. So the cost of the path going from Arad to Oradea would be 71 + 75.
 
=== Tree Search ===
 
{{#ev:youtubehd|c0PfWsqtfdo}}
 
Now let's define a function for solving problems. It's called '''Tree Search''' because it superimposes a search tree over the state space. Here's how it works:
 
[[File:AI class 2011-10-19-044100.PNG]]
 
It starts off by initialising the frontier to be the path consisting of only the initial state. Then it goes into a loop in which it first checks to see if we still have anything left in the frontier; if not we fail, there can be no solution. If we do have something then we make a choice -- and Tree Search is really a family of functions, not a single algorithm, depending on how we make that choice, and we'll see some of the options later -- we go ahead and make a choice of one of the paths from the frontier, and remove that path from the frontier. We find the state which is at the end of the path, and if that state's a goal then we're done, we've found a path to the goal. Otherwise we do what's called expanding that path: we look at all the actions from that state, and we add to the path each action and the result of that action, so we get a new path that has the old path, the action, and the result of that action; and we stick all of those paths back on to the frontier.
 
Now Tree Search represents a whole family of algorithms. Where you get the family resemblance is that they're all looking at the frontier, popping items off, and checking whether they pass the goal test; where you get the difference is in the choice of how you're going to expand the next item on the frontier. Which path do we look at first? We'll go through different sets of algorithms that make different choices for which path to look at first.
 
[[File:AI class 2011-10-19-045900.PNG]]
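
Here's a minimal sketch of that family in code, assuming the hypothetical problem interface from the sketch earlier (<code>actions</code>, <code>result</code>, <code>goal_test</code>); the <code>choose</code> parameter is the point of variation that distinguishes the algorithms.

<syntaxhighlight lang="python">
# A sketch of the Tree Search family. `choose` picks which frontier path
# to expand next; different choices give different search algorithms.

def tree_search(problem, choose):
    frontier = [[problem.initial]]            # one path: just the initial state
    while frontier:                           # empty frontier -> failure
        path = choose(frontier)               # pick a path (the family parameter)
        frontier.remove(path)
        state = path[-1]                      # the state at the end of the path
        if problem.goal_test(state):
            return path                       # found a path to the goal
        for action in problem.actions(state): # expand the path
            frontier.append(path + [problem.result(state, action)])
    return None
</syntaxhighlight>

Breadth-first behaviour, for example, corresponds to <code>choose = lambda frontier: min(frontier, key=len)</code>, which always picks one of the shortest paths.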
 
The first algorithm to consider is called '''Breadth-First Search'''. It could equally be called "Shortest First Search", because what it does is always take off the frontier one of the shortest paths that hasn't been considered yet.
 
So how does it work? Well we start off with the path of length zero, starting in the start state:
 
[[File:AI class 2011-10-19-053000.PNG]]
 
And that's the only path in the frontier, so it's the shortest one, so we pick it and then we expand it. We add in all the paths that result from applying all the possible actions. So now we've removed the first path from the frontier, but we've added in three new paths:
 
[[File:AI class 2011-10-19-053600.PNG]]
 
Now we're in a position where we have three paths on the frontier, and we have to pick the shortest one. Now in this case all three paths have the same length, length 1. So we break the tie at random -- or using some other technique -- and let's suppose that in this case we choose the path from Arad to Sibiu. So once the path from Arad to Sibiu is taken off the frontier which paths are going to be added to the frontier?
 
{{#ev:youtubehd|GKKQyJLee84}}
 
The answer is that in Sibiu the action function gives us four actions, corresponding to travelling along each of the four roads. So we have to add in paths for each of those actions:
 
[[File:AI class 2011-10-19-054300.PNG]]
 
The paths are:
 
* Arad to Sibiu to Oradea
* Arad to Sibiu to Fagaras
* Arad to Sibiu to Rimnicu Vilcea
* Arad to Sibiu to Arad
 
Now it may seem silly and redundant to have a path that starts in Arad, goes to Sibiu, and then returns to Arad... how can that help us get to our destination in Bucharest? But if we're dealing with a tree search we can see why it's natural to have this type of formulation, and why the tree search doesn't even notice that it has backtracked. What the tree search does is superimpose on top of the state space a tree of searches, and the tree looks like this:
 
[[File:AI class 2011-10-19-055000.PNG]]
 
We start off in state A, from which there were three actions giving us paths to Z, S and T. And from S there were four actions which gave us paths to O, F, R, A. Notice that we returned to A in the state space, but in the tree it's just another item in the tree.
 
Here's another representation of the search space:
 
[[File:AI class 2011-10-19-060700.PNG]]
 
What's happening is that as we start to explore the state space we keep track of the frontier, which is the set of states at the ends of the paths that we haven't explored yet; behind that frontier is the set of explored states, and ahead of the frontier are the unexplored states.
 
Now the reason we keep track of the explored states is so that we can recognise duplicates when we expand. Say when we expanded from L and pointed back to state T: if we hadn't kept track of that, we would have to add in a new state for T below L:
 
[[File:AI class 2011-10-19-061400.PNG]]
 
But because we've already seen T, we know that it's a regressive step into the already explored states, and since we've kept track of that we don't need to add it again.
 
=== Graph Search ===
 
{{#ev:youtubehd|mtbfvJuOV_U}}
 
Now let's see how to modify the Tree Search function to make it a '''Graph Search''' function, to avoid those repeated paths.
 
[[File:AI class 2011-10-19-061900.PNG]]
 
What we do is start off by initialising a set, called the Explored Set, of states that we've already explored. Then when we consider a new path we add the new state to the set of already explored states. And when we're expanding a path and adding new states to the end of it, we don't add a new state if we've already seen it in either the frontier or the explored set.
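
In code, that's the same loop as the tree search sketch above with two changes; again this is my own sketch over the same assumed problem interface.

<syntaxhighlight lang="python">
# The Graph Search variant: an explored set, plus a check before adding
# new states to the frontier.

def graph_search(problem, choose):
    frontier = [[problem.initial]]
    explored = set()                          # states we've already expanded
    while frontier:
        path = choose(frontier)
        frontier.remove(path)
        state = path[-1]
        explored.add(state)
        if problem.goal_test(state):
            return path
        for action in problem.actions(state):
            new_state = problem.result(state, action)
            on_frontier = any(p[-1] == new_state for p in frontier)
            if new_state not in explored and not on_frontier:
                frontier.append(path + [new_state])
    return None
</syntaxhighlight>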
 
Now back to Breadth-First Search, and let's assume that we're using Graph Search, so that we've eliminated the duplicate paths. The path that goes from Arad to Sibiu back to Arad is removed. We're left with five possible paths. Given these five paths, show which ones are candidates to be expanded next by the Breadth-First Search algorithm:
 
[[File:AI class 2011-10-19-062500.PNG]]
 
=== Breadth First Search ===
 
{{#ev:youtubehd|HfYXaw56-0w}}
 
Breadth First Search always considers the shortest paths first, and in this case there are two paths of length one, Arad to Zerind and Arad to Timisoara, so those are the two paths that would be considered. Now let's suppose that the tie is broken in some way and we choose the path from Arad to Zerind. We want to expand that node (Zerind), so we remove it from the frontier and put it in the explored list, and now we ask: what paths are we going to add?
 
{{#ev:youtubehd|CUfmOLQi3RM}}
 
In this case there is nothing to add, because of the two neighbours one is in the explored list and the other is in the frontier, so if we're using Graph Search then we won't add either of those.
 
{{#ev:youtubehd|I3lrnzdgwmI}}
 
So we move on and look for another shortest path. There's one path left of length one, so we look at that path, expand it, add in the path to Lugoj, and put Timisoara on the explored list. Now we've got three paths of length two, and we choose one of them. Let's say we choose Fagaras: which states do we add to the path, and does the algorithm terminate now because we've reached the goal, or do we continue?
 
[[File:AI class 2011-10-19-065500.PNG]]
 
{{#ev:youtubehd|cr1Ck1Fr60M}}
 
The answer is that we add one more path, the path to Bucharest. We don't add the path going back from Fagaras because it's in the explored list. But we don't terminate yet. True, we have added a path that ends in Bucharest, but the goal test isn't applied when we add a path to the frontier; rather, it's applied when we remove a path from the frontier, and we haven't done that yet.
 
{{#ev:youtubehd|mueRduwpg-U}}
 
Now why doesn't the general Tree Search or Graph Search algorithm stop when it adds a goal node to the frontier? The reason is that it might not be the best path to the goal. Here we found a path of length two, and we added a path of length three that reached the goal. In general a Graph Search or a Tree Search doesn't know that there might be some other path that we could expand that might have a distance of, say, 2.5. But there's an optimisation that can be made: if we know we're doing Breadth First Search and we know there's no possibility of a path of length 2.5, then we can change the algorithm so that it checks states as soon as they're added to the frontier rather than waiting until they're expanded. In that case we can write a specific Breadth First Search routine that terminates early and gives us a result as soon as we add a goal state to the frontier.
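
A sketch of that breadth-first specialisation, again over the assumed problem interface: since no shorter path can turn up later, the goal test is applied as soon as a state is added to the frontier, not when it's popped off.

<syntaxhighlight lang="python">
# Breadth-first search with the early-exit optimisation described above.

from collections import deque

def breadth_first_search(problem):
    if problem.goal_test(problem.initial):
        return [problem.initial]
    frontier = deque([[problem.initial]])     # FIFO queue: shortest paths first
    reached = {problem.initial}               # states on frontier or explored
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        for action in problem.actions(state):
            new_state = problem.result(state, action)
            if new_state not in reached:
                if problem.goal_test(new_state):
                    return path + [new_state] # terminate early: goal added
                reached.add(new_state)
                frontier.append(path + [new_state])
    return None
</syntaxhighlight>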
 
[[File:AI class 2011-10-19-070900.PNG]]
 
Breadth First Search will find this path that ends up in Bucharest, and if we're looking for the shortest path in terms of number of steps, Breadth First Search is guaranteed to find it. But if we're looking for the shortest path in terms of total cost, adding up the step costs, then it turns out there is another path that is shorter. So let's look at how we could find that path.
 
=== Uniform Cost Search ===
 
{{#ev:youtubehd|Qrig0mznzG4}}
 
An algorithm that has traditionally been called '''Uniform Cost Search''', but could be called "Cheapest First Search", is guaranteed to find the path with the cheapest total cost. Let's see how it works.
 
We start out as before in the start state:
 
[[File:AI class 2011-10-19-071400.PNG]]
 
And we pop that empty path off, move it from the frontier to explored:
 
[[File:AI class 2011-10-19-071600.PNG]]
 
Then add in the paths out of that state:
 
[[File:AI class 2011-10-19-071900.PNG]]
 
As before there will be three of those paths. And now, which path are we going to pick next to expand, according to the rules of cheapest first?
 
{{#ev:youtubehd|7MbW6kZ_vb8}}
 
Cheapest-first says we should pick the path with the lowest total cost. That would be Arad to Zerind, which has a cost of 75, compared to 118 and 140 for the other paths. So we get to Zerind, take that path off the frontier and put it on the explored list, and add in its neighbours -- not going back to Arad, but adding in Zerind to Oradea. Summing up the total cost of that path: 75 + 71 = 146 for Arad to Zerind to Oradea.
 
[[File:AI class 2011-10-19-103500.PNG]]
 
Now the question is which path gets expanded next?
 
{{#ev:youtubehd|9vNvrRP0ymw}}
 
Of the three paths on the frontier we have paths with cost 146, 140 and 118. 118 is the cheapest, so Arad to Timisoara gets expanded. We take it off the frontier, move it to explored, and add in the successors, which in this case is only one (Lugoj), and that has a path total of 229.
 
[[File:AI class 2011-10-19-104000.PNG]]
 
Which path do we expand next? Well, we've got 146, 140 and 229. 140 is the lowest, so we take Sibiu off the frontier, add it to explored, and add in the path to Rimnicu Vilcea for a path cost of 220, and the path to Fagaras with a total cost of 239.
 
[[File:AI class 2011-10-19-104400.PNG]]
 
So the question is which path do we expand next?
 
{{#ev:youtubehd|LVCMMPXaQlE}}


The answer is Oradea, with a path cost of 146. Take it off the frontier and put it on explored, but there's nothing to add because both of its neighbours have already been explored.

[[File:AI class 2011-10-19-104800.PNG]]

Which path do we look at next?

{{#ev:youtubehd|G-H1AnA8uBI}}

The answer is Rimnicu Vilcea: 220 is less than 229 and 239. Take it off the frontier, put it on explored, and add two more paths to the frontier: Pitesti (317) and Craiova (366). Now notice that we're closing in on Bucharest: we've got two neighbours almost there, but neither of them has had its turn yet. Instead the cheapest path is Arad to Timisoara to Lugoj at 229, so take that off the frontier and add it to the explored list. Add 70 to the path cost so far, giving 299 to Mehadia.

[[File:AI class 2011-10-19-105500.PNG]]

Now the cheapest node is Fagaras at 239, so we expand it and get a path to Bucharest for 460.

[[File:AI class 2011-10-19-105700.PNG]]

Now the question is: are we done? Can we terminate the algorithm?

{{#ev:youtubehd|NxCUVltVoZ8}}

And the answer is no, we are not done yet. We've put Bucharest, the goal state, onto the frontier, but we haven't popped it off the frontier yet; the reason is that we've got to look around and see whether there is a better path that can reach Bucharest. (There was a mistake in the video for the rest of the path: Pitesti should already have been off the frontier.)

Eventually we pop off all of the paths, the 418 path is popped off the frontier, and the cheapest path to Bucharest is known.
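
To tie the walkthrough together, here's a minimal sketch of cheapest-first search in code, assuming the hypothetical problem interface from the earlier sketches; it's not the course's own implementation.

<syntaxhighlight lang="python">
# A sketch of Uniform Cost (cheapest-first) Search, keeping the frontier
# in a priority queue ordered by total path cost. As discussed above, the
# goal test is applied when a path is popped off the frontier, not when
# it is added.

import heapq
from itertools import count

def uniform_cost_search(problem):
    tie = count()                             # breaks ties between equal costs
    frontier = [(0, next(tie), [problem.initial])]
    explored = set()
    while frontier:
        cost, _, path = heapq.heappop(frontier)
        state = path[-1]
        if problem.goal_test(state):
            return path, cost                 # cheapest path to the goal
        if state in explored:
            continue                          # a cheaper path got here first
        explored.add(state)
        for action in problem.actions(state):
            new_state = problem.result(state, action)
            if new_state not in explored:
                step = problem.step_cost(state, action, new_state)
                heapq.heappush(frontier, (cost + step, next(tie),
                                          path + [new_state]))
    return None
</syntaxhighlight>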
= Resources =


* [https://www.ai-class.com/home/ Course web-site]
* [https://www.ai-class.com/sforum/ Course forum]
** [http://www.reddit.com/r/aiclass Reddit forum]
** [http://www.aiqus.com/ Aiqus forum]
*** [http://www.aiqus.com/users/1817/peter-norvig Peter Norvig on Aiqus]
* [https://www.ai-class.com/registration/progress Course progress]
* [https://www.ai-class.com/overview Course information]
** [https://www.ai-class.com/faq Course FAQ]
** [https://www.ai-class.com/resources Course related material]
** [https://www.ai-class.com/schedule Course schedule]
* [http://mike.thedt.net/ytsubs/ytsubs.php YouTube closed-captions ripper]


= Lecture notes =


* Week 1
** [[AI class unit 1|Welcome to AI]]
** [[AI class unit 2|Problem Solving]]
** [[AI class homework 1|Homework 1]]
* Week 2
** [[AI class unit 3|Probability in AI]]
** [[AI class unit 4|Probabilistic Inference]]
** [[AI class homework 2|Homework 2]]
* Week 3
** [[AI class unit 5|Machine Learning]]
** [[AI class unit 6|Unsupervised Learning]]
** [[AI class homework 3|Homework 3]]
* Week 4
** [[AI class unit 7|Representation with Logic]]
** [[AI class unit 8|Planning]]
** [[AI class homework 4|Homework 4]]
* Week 5
** [[AI class unit 9|Planning under Uncertainty]]
** [[AI class unit 10|Reinforcement Learning]]
** [[AI class homework 5|Homework 5]]


= Office hours =


* [http://www.youtube.com/watch?v=pIfrmdM0Ht0 Office hours 1]
* [http://www.youtube.com/watch?v=ajMKay1LMRw Office hours 2]
* [http://www.youtube.com/watch?v=iL0fFmNWb0A Office Hours for Week 5]
* [http://www.youtube.com/watch?v=7cPOkxHtxJ8 Office Hours for Week 6]
* Office hours for Week 7: [http://www.youtube.com/watch?v=XF_ACsJiz64 part 1] and [http://www.youtube.com/watch?v=Y6LF-_-pMgI part 2]


= Discussion =


* [http://www.aiqus.com/questions/5378/practicing-probability-calculations Practising probability calculations]
* [http://www.aiqus.com/questions/4152/thinking-about-giving-up-the-course Norvig defines conditional probability in terms of unconditional probability]
* [http://www.aiqus.com/questions/4183/probability-cheatsheet-for-ai-class Probability cheatsheet for AI]


= Other people's notes =


* [https://docs.google.com/viewer?a=v&pid=explorer&chrome=true&srcid=0B0JaMwvGlHuEYTE1MWI5NzgtZTViMC00YmNlLWI5ZjktZjg0YWM1MTcxYzAy&hl=en_US Conditional Independence Write-Up]
** [https://docs.google.com/viewer?a=v&pid=explorer&chrome=true&srcid=0B0JaMwvGlHuEOWM3MTliYTgtODZjNi00NGY3LTkzNTAtYWYyMDUzM2JlMTE1&hl=en_US Independence does not imply conditional independence...]
* [https://www.evernote.com/shard/s1/sh/333fb301-8d12-4927-87ec-c81cdd42b7d2/240bbab26642f5d49d32c38053e905f6 Unit 5 on evernote]
* [http://wiki.geekdownunder.net/index.php?title=Main_Page GeekDownUnder]
** [http://www.khanacademy.org Khan Academy]: free online video lectures on a variety of topics
** [http://openclassroom.stanford.edu/MainFolder/VideoPage.php?course=MachineLearning&video=01.5-Introduction-InstallingOctave&speed=100 Installing Octave]: a video tutorial on installing Octave
** [http://www.gnu.org/software/octave/doc/interpreter/ Octave Documentation]: documentation of Octave
** [http://guioctave.com/ Gui Octave]: a graphical user interface for Octave
** [http://thedatahub.org/ The Data Hub]: a registry of open knowledge datasets and projects
* [http://larvecode.tumblr.com/tagged/ai-class LarveCode]


= Practice questions =


* [http://www.maths.uq.edu.au/courses/MATH3104/Lectures/goodhill/bayes_solutions.pdf MATH 3104: Practice with Bayes theorem]
* [http://www.intmath.com/counting-probability/10-bayes-theorem.php 10. Bayes' Theorem]
