
KIK614303

Artificial Intelligence

Even Semester 2016/2017

What is AI?

The science of making machines that:

  • Think like people
  • Act like people
  • Think rationally
  • Act rationally

Purposes of AI

  • To build models of (or replicate) human cognition
    • – Psychology, neuroscience, cognitive science
  • To build useful intelligent artifacts
    • – Engineering
  • To create and understand intelligence as a general property of systems
    • – Rationality

  Rationality

  • Maximally achieving pre-defined goals
  • Goals are expressed in terms of the utility of outcomes
  • Being rational means maximizing your expected utility
  • Computational rationality
History of AI

1940-50: Early days
  1943: McCulloch & Pitts: Boolean circuit model of the brain
  1950: Turing's “Computing Machinery and Intelligence”

1950-70: Excitement: Look, Ma, no hands!
  1950s: Early AI programs, including Samuel's checkers program, Newell & Simon's Logic Theorist, Gelernter's Geometry Engine
  1956: Dartmouth meeting: “Artificial Intelligence” adopted
  1965: Robinson's complete algorithm for logical reasoning
  1966-69: Failure of naïve MT and learning methods

1970-90: Knowledge-based approaches
  1969-79: Early development of knowledge-based systems
  1980-88: Expert systems industry booms
  1988-93: Expert systems industry busts: “AI Winter”

1990-: Statistical approaches
  General increase in technical depth
  Resurgence of probability, focus on uncertainty
  Agents and learning systems… “AI Spring”?

2000-: Where are we now?

  Intelligent Agent

  • An agent is an entity that perceives and acts.
  • A rational agent selects actions that maximize its (expected) utility.
  • Characteristics of the percepts, environment, and action space dictate techniques for selecting rational actions.

  [Figure: agent-environment loop: percepts flow from the environment to the agent through sensors; actions flow back through actuators]

  Intelligent Agents

  • What is an agent?
  • What makes an agent rational?

Key points:

  • Performance measure
  • Actions
  • Percept sequence
  • Built-in knowledge
Agents and Environments

  • An agent is anything that can perceive its environment through sensors and act upon that environment through actuators.

  [Figure: agent and environment connected by percepts (via sensors) and actions (via actuators)]

  • Human agent: eyes, ears, and other organs for sensors; hands, legs, mouth, and other body parts for actuators
  • Robotic agent: camera and microphone for sensors; various motors for actuators

  Environments

  • To design an agent we must specify its task environment.
  • PEAS description of the task environment:
    • Performance
    • Environment
    • Actuators
    • Sensors
Environment Types

  • Fully observable (vs. partially observable): An agent's sensors give it access to the complete state of the environment at each point in time.

  • Deterministic (vs. stochastic): The next state of the environment is completely determined by the current state and the action executed by the agent. (If the environment is deterministic except for the actions of other agents, then the environment is strategic.)

  • Episodic (vs. sequential): The agent's experience is divided into atomic "episodes" (each episode consists of the agent perceiving and then performing a single action), and the choice of action in each episode depends only on the episode itself.

  • Static (vs. dynamic): The environment is unchanged while an agent is deliberating. (The environment is semidynamic if the environment itself does not change with the passage of time but the agent's performance score does.)

  • Discrete (vs. continuous): A limited number of distinct, clearly defined percepts and actions.

  • Single agent (vs. multiagent): An agent operating by itself in an environment.

  Reflex Agents

  • Select action on the basis of only the current percept.
    • – E.g. the vacuum-agent
  • Large reduction in possible percept/action situations (next page).
  • Implemented through condition-action rules
    • – If dirty then suck
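The vacuum-agent and its condition-action rules fit in a few lines of code. The two-location world (squares "A" and "B") and the action names below are illustrative assumptions, not part of the slides:

```python
# A simple reflex agent for the vacuum world: the chosen action depends
# only on the current percept (location, status), never on percept history.
# Locations "A"/"B" and action names are assumptions for illustration.

def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":      # condition-action rule: if dirty then suck
        return "Suck"
    elif location == "A":      # otherwise move toward the other square
        return "Right"
    else:
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # -> Suck
print(reflex_vacuum_agent(("B", "Clean")))  # -> Left
```

Because the rules mention only the current percept, the agent needs no memory, which is exactly the "large reduction in percept/action situations" the slide refers to.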
Goal-based Agents

  • The agent needs a goal to know which situations are desirable.
    • – Things become difficult when long sequences of actions are required to find the goal.
  • Typically investigated in search and planning research.
  • Major difference: the future is taken into account.
Learning Agents

  • Learning element: introduces improvements in the performance element.
    • – The critic provides feedback on the agent's performance based on a fixed performance standard.
  • Performance element: selects actions based on percepts.
    • – Corresponds to the previous agent programs.
  • Problem generator: suggests actions that will lead to new and informative experiences.
    • – Exploration vs. exploitation

Problem Solving

  • It is possible to convert difficult goals into one or more easier-to-achieve subgoals.
  • Using the problem-reduction method, you generally recognize goals and convert them into appropriate subgoals.
  • When so used, problem reduction is often called, equivalently, goal reduction.

  Problem Reduction Method

  Examples: planning motion sequences, moving blocks

  Goal Tree

  [Figure: flowchart of the problem-reduction method for symbolic integration: apply all safe transforms, look in the table of known integrals, check "Done?"; if not, find and apply a heuristic transform and repeat]

  [Example: heuristic transforms rewrite an integrand f(sin x, cos x, tan x, cot x, sec x, csc x) into an intermediate form g(tan x, csc x) and then g(sin x, cos x), using standard trigonometric identities]

Depth-First Search

  Strategy: expand the deepest node first.

  Implementation: the frontier is a LIFO stack.

  [Figure: example graph with start S and goal G, and the search tree DFS builds by always expanding the deepest frontier node first]

Search Algorithm Properties

  • Complete: Guaranteed to find a solution if one exists?
  • Optimal: Guaranteed to find the least cost path?
  • Time complexity?
  • Space complexity?

  • Cartoon of search tree:
    • b is the branching factor
    • m is the maximum depth
    • solutions at various depths

  [Figure: search-tree cartoon: 1 node at the root, b nodes at depth 1, b^2 nodes at depth 2, …, b^m nodes at depth m, over m tiers]

  • Number of nodes in the entire tree? 1 + b + b^2 + … + b^m = O(b^m)

Depth-First Search (DFS) Properties

  • What nodes does DFS expand?
    • Some left prefix of the tree.
    • Could process the whole tree!
    • If m is finite, takes time O(b^m).
  • How much space does the fringe take?
    • Only has siblings on the path to the root, so O(bm).
  • Is it complete?
    • m could be infinite, so only if we prevent cycles (more later).
  • Is it optimal?
    • No, it finds the “leftmost” solution, regardless of depth or cost.
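The strategy above (LIFO stack frontier, with an explored set to prevent cycles) can be sketched as follows; the example graph is a small hypothetical fragment echoing the node names in the figure:

```python
# Depth-first graph search: the frontier is a LIFO stack of partial
# paths, so the deepest (most recently generated) node is expanded first.
# An explored set prevents cycles. The example graph is hypothetical.

def depth_first_search(graph, start, goal):
    frontier = [[start]]            # stack of partial paths
    explored = set()
    while frontier:
        path = frontier.pop()       # LIFO: take the most recent path
        node = path[-1]
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)
        for child in graph.get(node, []):
            frontier.append(path + [child])
    return None                     # no solution

graph = {"S": ["d", "e", "p"], "e": ["h", "r"], "r": ["f"], "f": ["G"]}
print(depth_first_search(graph, "S", "G"))  # -> ['S', 'e', 'r', 'f', 'G']
```

Note the returned solution need not be shortest or cheapest: DFS commits to whichever branch it happens to descend first.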

  Breadth-First Search

  Strategy: expand the shallowest node first.

  Implementation: the frontier is a FIFO queue.

  [Figure: the same example graph; BFS expands the search tree tier by tier outward from S]

Breadth-First Search (BFS) Properties

  • What nodes does BFS expand?
    • Processes all nodes above the shallowest solution.
    • Let the depth of the shallowest solution be s.
    • Search takes time O(b^s).
  • How much space does the fringe take?
    • Has roughly the last tier, so O(b^s).
  • Is it complete?
    • s must be finite if a solution exists, so yes!
  • Is it optimal?
    • Only if costs are all 1 (more on costs later).
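BFS differs from DFS only in the frontier data structure: a FIFO queue instead of a LIFO stack. A sketch, again with a hypothetical example graph:

```python
from collections import deque

# Breadth-first graph search: the frontier is a FIFO queue of partial
# paths, so the shallowest (oldest) node is expanded first.
# The example graph is hypothetical.

def breadth_first_search(graph, start, goal):
    frontier = deque([[start]])     # FIFO queue of partial paths
    explored = set()
    while frontier:
        path = frontier.popleft()   # FIFO: take the oldest, shallowest path
        node = path[-1]
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)
        for child in graph.get(node, []):
            frontier.append(path + [child])
    return None

graph = {"S": ["d", "e", "p"], "d": ["b", "c", "e"],
         "e": ["h", "r"], "p": ["q"], "r": ["f"], "f": ["G"]}
print(breadth_first_search(graph, "S", "G"))  # -> ['S', 'e', 'r', 'f', 'G']
```

Here BFS returns a path with the fewest edges, which is the cheapest path only when every step costs the same.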
Uniform Cost Search

  Strategy: expand the cheapest node first.

  Implementation: the fringe is a priority queue (priority: cumulative cost).

  [Figure: the example graph with edge costs; UCS expands nodes in order of increasing cumulative cost, sweeping outward in cost contours]

Uniform Cost Search (UCS) Properties

  • What nodes does UCS expand?
    • Processes all nodes with cost less than the cheapest solution!
    • If that solution costs C* and arcs cost at least ε, then the “effective depth” is roughly C*/ε.
    • Takes time O(b^(C*/ε)) (exponential in effective depth).
  • How much space does the fringe take?
    • Has roughly the last tier, so O(b^(C*/ε)).
  • Is it complete?
    • Assuming the best solution has a finite cost and the minimum arc cost is positive, yes!
  • Is it optimal?
    • Yes: nodes leave the fringe in order of increasing path cost.
  Route Finding Problem

  [Figure: Romania road map. States are cities; actions are driving between adjacent cities; a solution is a path from the start city to the goal city]

  Flowchart of search algorithms

  1. Initialize the queue with the initial state.
  2. Is the queue empty? If yes, return fail.
  3. If no, remove the first node from the queue.
  4. Is this node a goal? If yes, return the node.
  5. If no, expand the node, add its successors to the queue, and go to step 2.

Searching with a Search Tree

  • Search:
    • Expand out potential plans (tree nodes).
    • Maintain a frontier of partial plans under consideration.
    • Try to expand as few tree nodes as possible.
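The flowchart of search algorithms and the frontier idea combine into one generic loop whose behavior is fixed entirely by which end of the frontier we remove from. A sketch (tree search, so no explored set; the names and the small graph are illustrative):

```python
from collections import deque

# Generic tree search: initialize the frontier with the initial state,
# repeatedly remove a node, test it against the goal, otherwise expand it
# and enqueue its successors. Popping from the right end gives DFS
# (stack behavior); popping from the left gives BFS (queue behavior).

def generic_search(graph, start, goal, pop_side):
    frontier = deque([[start]])            # frontier of partial plans
    while frontier:                        # empty frontier -> failure
        path = frontier.pop() if pop_side == "right" else frontier.popleft()
        node = path[-1]
        if node == goal:                   # goal test on removal
            return path
        for child in graph.get(node, []):  # expand: enqueue successors
            frontier.append(path + [child])
    return None

graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"]}
print(generic_search(graph, "S", "G", "left"))   # BFS -> ['S', 'A', 'G']
print(generic_search(graph, "S", "G", "right"))  # DFS -> ['S', 'B', 'G']
```

On graphs with cycles this loop can revisit states forever; graph search adds an explored set to prevent that.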
Heuristic Search

Def.: A search heuristic h(n) is an estimate of the cost of the optimal (cheapest) path from node n to a goal node.

  [Figure: three nodes n1, n2, n3, each labelled with its estimate h(n1), h(n2), h(n3)]

A* Search

  • Idea: avoid expanding paths that are already expensive.
  • The evaluation function f(n) is the estimated total cost of the path through node n to the goal:
    • f(n) = g(n) + h(n)
    • g(n): cost so far to reach n (path cost)
    • h(n): estimated cost from n to goal (heuristic)
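The evaluation function f(n) = g(n) + h(n) drops straight into code. A sketch run on a small fragment of the Romania route-finding map used in these slides; the edge costs and straight-line-distance heuristic values are the standard textbook numbers, reproduced only for illustration:

```python
import heapq

# A* search: the frontier is a priority queue ordered by
# f(n) = g(n) + h(n), combining path cost so far with the heuristic
# estimate of the remaining cost to the goal.

def a_star(graph, h, start, goal):
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    explored = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in explored:
            continue
        explored.add(node)
        for child, step in graph.get(node, []):
            heapq.heappush(frontier,
                           (g + step + h[child], g + step, child, path + [child]))
    return None

# Fragment of the Romania map (edge costs) and straight-line distances
# to Bucharest, taken from the standard textbook example.
graph = {"Arad": [("Sibiu", 140), ("Timisoara", 118), ("Zerind", 75)],
         "Sibiu": [("Fagaras", 99), ("Rimnicu Vilcea", 80)],
         "Fagaras": [("Bucharest", 211)],
         "Rimnicu Vilcea": [("Pitesti", 97)],
         "Pitesti": [("Bucharest", 101)]}
h = {"Arad": 366, "Sibiu": 253, "Timisoara": 329, "Zerind": 374,
     "Fagaras": 176, "Rimnicu Vilcea": 193, "Pitesti": 100, "Bucharest": 0}
print(a_star(graph, h, "Arad", "Bucharest"))
# -> (418, ['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'])
```

The heuristic steers the search: the Fagaras branch (g = 239 to Fagaras, f = 450 at Bucharest) is set aside in favor of the Rimnicu Vilcea route, whose total cost 418 is optimal.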
Properties of A*

  [Figure: search contours of Uniform-Cost (spreading evenly around the start) vs. A* (contours stretched toward the goal)]

  • Complete?
    • Yes, unless there are infinitely many nodes with f(n) ≤ C*.
  • Optimal?
    • Yes, if h is admissible (never overestimates the true cost to the goal).
  • Time?
    • Number of nodes for which f(n) ≤ C* (exponential).
  • Space?
    • Exponential: keeps all generated nodes in memory.

Example: distance to Bucharest

  Heuristic: straight-line distance h(x)

  [Table and map: Romania road map with edge costs, and straight-line distances h(x) to Bucharest for each city, e.g. Arad 366, Sibiu 253, Timisoara 329, Zerind 374]

Route Finding

A* search example

  Start: Arad    Goal: Bucharest

  [Figures: successive A* expansion steps from Arad to Bucharest]

  Remarks: Problem Solving by Search

  • Toy problems
  • Real-world problems