Week 0 Search

Cards (17)

  • agent
    entity that perceives its environment and acts upon it
  • state
    a configuration of the agent and its environment
  • actions
    choices an agent can make in a state. ACTIONS(s) returns the set of actions that can be executed in state s.
  • transition model
    RESULT(s, a) = the state that results from performing action a in state s
  • state space
    set of all possible states
  • goal test
    way of determining whether a given state is a goal state
  • path cost
    numerical value assigned to each path
  • optimal solution
    solution with smallest path cost
  • SEARCHING METHOD: start with a frontier containing the initial state and an empty explored set. Repeatedly remove a node from the frontier; if it contains the goal state, return the solution; otherwise add the node to the explored set and expand it, adding any resulting nodes that are not already in the frontier or explored set. If the frontier empties before a goal is found, there is no solution.
  • stack
    last-in, first-out (LIFO) data type
  • queue
    first-in, first-out (FIFO) data type
  • Depth-First Search: uses a stack as the frontier, always expanding the deepest node and exploring one branch to its full depth. If the goal state is not found, it backtracks and explores the next branch.
  • Breadth-First Search: uses a queue as the frontier, always expanding the shallowest node. Guaranteed to find a shortest solution when all steps cost the same, but typically uses more memory than DFS.
  • Greedy Best-First Search: expands the node estimated to be closest to the goal, using a heuristic function h(n); not guaranteed to find an optimal solution.
  • A* Search ("a star search"): like greedy best-first search but also considers the cost already paid. Expands the node with the smallest g(n) + h(n), where g(n) is the cost to reach the node and h(n) is the estimated cost from the node to the goal. Optimal when h(n) is admissible (never overestimates) and consistent.
  • MINIMAX: adversarial search in which numerical values are assigned to terminal states; the MAX player chooses actions that maximise the score while the MIN player chooses actions that minimise it. (E.g. with a win for MAX scoring 1 and a win for MIN scoring -1, MAX picks actions that push the value up while MIN picks actions that push it down.)
  • Alpha-Beta Pruning: optimisation of MINIMAX that skips branches which cannot affect the final decision, because the opposing player would never allow them. (E.g. on the MAX player's move, once one value in a branch is already lower than the best value MAX has found in another branch, the MIN player will hold that branch to at most that value, so the branch's remaining values need not be considered.)
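
The SEARCHING METHOD, stack/queue, DFS, and BFS cards can be sketched as one Python loop: swapping the frontier between a queue and a stack turns the same code into BFS or DFS. The graph and function names here are illustrative, not from the course.

```python
from collections import deque

def search(start, goal, neighbors, use_queue=True):
    """Generic frontier search: a queue (FIFO) gives BFS, a stack (LIFO) gives DFS.

    `neighbors` maps a state to the states reachable in one action.
    Returns a path from start to goal, or None if no solution exists.
    """
    frontier = deque([(start, [start])])
    explored = set()
    while frontier:
        # popleft = remove shallowest node (BFS); pop = remove deepest node (DFS)
        state, path = frontier.popleft() if use_queue else frontier.pop()
        if state == goal:          # goal test
            return path
        if state in explored:
            continue
        explored.add(state)
        for nxt in neighbors.get(state, []):   # expand the node
            if nxt not in explored:
                frontier.append((nxt, path + [nxt]))
    return None                    # frontier emptied: no solution

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(search("A", "D", graph, use_queue=True))   # BFS: ['A', 'B', 'D']
print(search("A", "D", graph, use_queue=False))  # DFS: ['A', 'C', 'D']
```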
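
The A* card as a sketch: a priority queue ordered by g(n) + h(n). The toy grid, the unit step cost, and the Manhattan-distance heuristic are made-up examples for illustration.

```python
import heapq

def a_star(start, goal, neighbors, cost, h):
    """A*: always expand the frontier node with the smallest g(n) + h(n).

    g(n) = path cost from start to n; h(n) = heuristic estimate from n to goal.
    """
    frontier = [(h(start), 0, start, [start])]   # (g+h, g, state, path)
    explored = set()
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        if state in explored:
            continue
        explored.add(state)
        for nxt in neighbors(state):
            if nxt not in explored:
                g2 = g + cost(state, nxt)
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")

# Toy grid: move right or up from (0, 0) to (2, 2); Manhattan-distance heuristic.
goal = (2, 2)
nbrs = lambda s: [t for t in [(s[0] + 1, s[1]), (s[0], s[1] + 1)]
                  if t[0] <= 2 and t[1] <= 2]
path, g = a_star((0, 0), goal, nbrs,
                 lambda s, t: 1,                                    # unit step cost
                 lambda s: abs(goal[0] - s[0]) + abs(goal[1] - s[1]))
print(g)   # total path cost: 4
```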
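
The MINIMAX and Alpha-Beta Pruning cards combined, on a toy game tree; the tree and its leaf values are invented for the example.

```python
import math

def minimax(state, maximizing, alpha=-math.inf, beta=math.inf):
    """Minimax with alpha-beta pruning on a toy game tree.

    Leaves are numbers (terminal scores); internal nodes are lists of children.
    MAX picks the highest-valued child, MIN the lowest; the alpha/beta bounds
    let us skip branches that cannot change the result.
    """
    if not isinstance(state, list):       # terminal state: return its score
        return state
    if maximizing:
        value = -math.inf
        for child in state:
            value = max(value, minimax(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:             # MIN would never allow this branch
                break
        return value
    else:
        value = math.inf
        for child in state:
            value = min(value, minimax(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:             # MAX would never enter this branch
                break
        return value

# MAX to move at the root; MIN chooses within each sub-branch.
tree = [[3, 5], [2, 9]]      # value = max(min(3, 5), min(2, 9)) = 3
print(minimax(tree, True))   # 3 -- the leaf 9 is pruned, never evaluated
```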