AI class overview

This is an overview in point form of the content in the AI class.

= Welcome to AI =
== Introduction ==
* Lots of work to do
* Will be ranked
== Course Overview ==
Purpose of the class:
* To teach the basics of artificial intelligence
* To excite you
Structure:
* Videos -> Quizzes -> Answer Videos
* Assignments (like quizzes but without the answers) and exams
== Intelligent Agents ==
* Agent has sensors to sense its environment
* Agent has actuators to affect its environment
* Agent interacts with its environment in a Perception-Action cycle
* Agent's Control Policy maps sensor readings to actuator commands
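A minimal sketch of the perception-action cycle in Python; the names run_agent, sense, policy, and act are illustrative, not from the class:

 def run_agent(sense, policy, act, steps=100):
     """Perception-action cycle: repeatedly map sensor readings to actions."""
     for _ in range(steps):
         percept = sense()          # sensors observe the environment
         action = policy(percept)   # the control policy maps percept -> action
         act(action)                # actuators affect the environment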
== Applications of AI ==
* Finance
* Robotics
* Games
* Medicine
* Web
* Other fields
== Terminology ==
* Fully Observable vs Partially Observable
** Fully Observable environment: what the agent can sense at any point in time is completely sufficient to make the optimal decision, i.e. the sensors can see the entire state
** Partially Observable environment: the agent needs memory to make the best possible decision, because its sensors can't see the entire state
** When we look at Hidden Markov Models we will learn more about structuring memory when state is partially observable
* Deterministic vs Stochastic
** Deterministic environment: the agent's actions uniquely determine the outcome; there is no randomness
** Stochastic environment: the outcome of actions involves a certain level of randomness
* Discrete vs Continuous
** Discrete environment: finitely many states and actions
** Continuous environment: infinitely many states and actions
* Benign vs Adversarial
** Benign environments: not really out there to get you
** Adversarial environments: harder to find good actions when your opponent is counteracting what you're trying to achieve
== [[AI_class#Checkers|Checkers]] ==
* Fully observable
* Deterministic
* Discrete
* Adversarial
== [[AI_class#Poker_Question|Poker]] ==
* Partially observable
* Stochastic
* Discrete
* Adversarial
== [[AI_class#Robot_Car_Question|Robot Car]] ==
* Partially observable
* Stochastic
* Continuous
* Benign
== [[AI_class#AI_and_Uncertainty|AI and Uncertainty]] ==
* AI is uncertainty management: what to do when you don't know what to do?
* Reasons for uncertainty:
** Sensor limits
** Stochastic environment
** Laziness
** Ignorance
** Adversaries
== [[AI_class#Machine_Translation|Machine Translation]] ==
Google's translation system works probabilistically and the probabilities are inferred from known translations.
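A toy illustration of that idea, with a made-up table of aligned examples (real systems work over phrases and vastly more data): estimate P(target | source) from counts over known translations and pick the most probable candidate.

 from collections import Counter
 # Made-up aligned examples of (source word, observed translation).
 observed = [("maison", "house"), ("maison", "house"),
             ("maison", "home"), ("chat", "cat")]
 counts = Counter(observed)                    # joint counts
 totals = Counter(src for src, _ in observed)  # per-source totals
 def p_translation(src, tgt):
     """P(tgt | src), inferred from counts over known translations."""
     return counts[(src, tgt)] / totals[src]
 def translate(src):
     """Pick the most probable translation of a single word."""
     candidates = {tgt for s, tgt in counts if s == src}
     return max(candidates, key=lambda tgt: p_translation(src, tgt))
 print(translate("maison"))  # -> 'house' (P = 2/3)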
== [[AI_class#Summary|Summary]] ==
* Key applications of AI
* Intelligent agent
* 4 key attributes of environments
* Sources of uncertainty
* Rationality
= [[AI_class#Problem_Solving|Problem Solving]] =
== [[AI_class#Introduction_2|Introduction]] ==
* There are different types of problems, with different approaches for each
** Partially observable environments need special treatment, which we will cover later
== [[AI_class#What_is_a_Problem.3F|What is a Problem?]] ==
* Definition of a problem
** Initial state
** Actions(S) -> {a<sub>1</sub>,a<sub>2</sub>,a<sub>3</sub>,...}
** Result(S,a) -> S'
** GoalTest(S) -> T|F
** PathCost(S -a-> S' -a'-> ...) -> <em>n</em>
*** StepCost(S,a,S') -> <em>n</em>
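A concrete sketch of this interface in Python; the route-finding map format {state: {neighbor: step_cost}} is an assumption for illustration, not the class's code:

 class RouteProblem:
     """Problem definition: states are map locations, an action is 'go to neighbor'."""
     def __init__(self, graph, initial, goal):
         self.graph = graph            # {state: {neighbor: step_cost}}
         self.initial = initial
         self.goal = goal
     def actions(self, s):             # Actions(S) -> {a1, a2, a3, ...}
         return list(self.graph[s])
     def result(self, s, a):           # Result(S, a) -> S'
         return a                      # here an action names the state it leads to
     def goal_test(self, s):           # GoalTest(S) -> T|F
         return s == self.goal
     def step_cost(self, s, a, s2):    # StepCost(S, a, S') -> n
         return self.graph[s][s2]
     def path_cost(self, path):        # PathCost: sum of the step costs
         return sum(self.graph[s][t] for s, t in zip(path, path[1:]))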
== [[AI_class#Example:_Route_Finding|Example: Route Finding]] ==
* Categories of the state space:
** Explored
** Frontier
** Unexplored
== [[AI_class#Tree_Search|Tree Search]] ==
function TreeSearch( problem ):
  frontier = {[initial]}
  loop:
    if frontier is empty: return FAIL
    path = remove_choice( frontier )
    S = path.end
    if S is a goal: return path
    for a in Actions(S):
      add [ path + a -> Result(S,a) ] to frontier
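A runnable Python rendering of this pseudocode, assuming the RouteProblem-style interface sketched above; remove_choice is made concrete as FIFO removal, which is only one possible choice:

 from collections import deque
 def tree_search(problem):
     """Tree search: never remembers visited states, so it can loop on cyclic maps."""
     frontier = deque([[problem.initial]])   # the frontier holds whole paths
     while frontier:                         # empty frontier -> FAIL
         path = frontier.popleft()           # remove_choice: FIFO here
         s = path[-1]                        # S = path.end
         if problem.goal_test(s):
             return path
         for a in problem.actions(s):
             frontier.append(path + [problem.result(s, a)])
     return None                             # FAIL

For example, tree_search(RouteProblem({'A': {'B': 1}, 'B': {}}, 'A', 'B')) returns ['A', 'B'].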
== [[AI_class#Graph_Search|Graph Search]] ==
function GraphSearch( problem ):
  frontier = {[initial]}
  explored = {}
  loop:
    if frontier is empty: return FAIL
    path = remove_choice( frontier )
    S = path.end
    add S to explored
    if S is a goal: return path
    for a in Actions(S):
      add [ path + a -> Result(S,a) ] to frontier
      unless Result(S,a) in frontier or explored
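The same sketch extended with the explored set, again under the assumed interface:

 from collections import deque
 def graph_search(problem):
     """Graph search: the explored set keeps tree search from revisiting states."""
     frontier = deque([[problem.initial]])
     explored = set()
     while frontier:
         path = frontier.popleft()
         s = path[-1]
         explored.add(s)                     # add S to explored
         if problem.goal_test(s):
             return path
         for a in problem.actions(s):
             s2 = problem.result(s, a)
             # unless Result(S, a) in frontier or explored
             if s2 not in explored and all(p[-1] != s2 for p in frontier):
                 frontier.append(path + [s2])
     return None                             # FAIL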
== [[AI_class#Breadth_First_Search|Breadth First Search]] ==
* always considers the shortest paths first
* the goal test is not applied when we add a path to the frontier
** rather, it's applied when we remove a path from the frontier
** this is because, when a path is first added, it might not yet be the best possible path to that state
** you can code up a custom BFS that applies the goal test on insertion into the frontier and terminates early (see the sketch below)
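A sketch of that custom variant: because every BFS step costs the same, the first path to reach the goal is already a shortest one, so the goal can safely be tested on insertion (this shortcut is not safe for cheapest-first search):

 from collections import deque
 def bfs_early_goal(problem):
     """Breadth-first search with the goal test applied on insertion."""
     if problem.goal_test(problem.initial):
         return [problem.initial]
     frontier = deque([[problem.initial]])
     explored = {problem.initial}        # also covers states already on the frontier
     while frontier:
         path = frontier.popleft()
         s = path[-1]
         for a in problem.actions(s):
             s2 = problem.result(s, a)
             if s2 in explored:
                 continue
             if problem.goal_test(s2):   # goal test on insertion ...
                 return path + [s2]      # ... terminate early
             explored.add(s2)
             frontier.append(path + [s2])
     return None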
== [[AI_class#Uniform_Cost_Search|Uniform Cost Search]] ==
* also known as Cheapest First Search
* is guaranteed to find the path with lowest total cost
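A sketch of cheapest-first search with a priority queue, again assuming the interface above; the tie counter only stops heapq from comparing paths when costs are equal:

 import heapq
 from itertools import count
 def uniform_cost_search(problem):
     """Always remove the cheapest path; testing the goal on removal guarantees optimality."""
     tie = count()
     frontier = [(0, next(tie), [problem.initial])]
     explored = set()
     while frontier:
         cost, _, path = heapq.heappop(frontier)  # remove_choice: cheapest first
         s = path[-1]
         if problem.goal_test(s):
             return path                          # lowest total cost to the goal
         if s in explored:
             continue                             # a cheaper path to S was already expanded
         explored.add(s)
         for a in problem.actions(s):
             s2 = problem.result(s, a)
             heapq.heappush(frontier,
                            (cost + problem.step_cost(s, a, s2), next(tie), path + [s2]))
     return None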
== [[AI_class#Search_Comparison|Search Comparison]] ==
{|
! Algorithm
! Optimal?
! Complete?
! Space
|-
| Breadth First Search
| y
| y
| 2<sup>n</sup>
|-
| Cheapest First Search (Uniform Cost)
| y
| y
| ~2<sup>n</sup>
|-
| Depth First Search
| n
| n
| n
|}
