CS Robots

Cards (73)

  • GPS degraded environment
    Environment where GPS exists but the quality, accuracy, or acquisition time is worse than it would normally be
  • GPS degraded environments
    • Inside a building with lots of floors and no windows
    • Thick jungle with a triple canopy
  • GPS denied environment
    Environment where there is no GPS signal at all
  • GPS denied environments
    • Underground bunker
  • Rescue robots need to be able to navigate in an environment where things are constantly changing and they don't have prior knowledge of the structure
  • Rescue robots need to be able to detect any survivors, even if they are deformed or behind other objects, in a dark environment
  • Rescue robots need to be able to communicate with a base station or command station outside the building
  • Computer vision techniques

    Algorithms that allow computers to analyze, interpret and make sense of digital images or videos
  • Computer vision techniques
    • Used in autonomous cars to analyze video and take action accordingly
  • VSLAM
    Simultaneous localization and mapping with a camera - a method for estimating the position of an object and simultaneously constructing a 3D map of an unknown environment
  • VSLAM allows a robot to map the environment around it and keep track of its own position
  • Odometry sensor
    Provides information about the movement and position of robots by measuring the rotation of wheels or treads
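  • A minimal wheel-odometry sketch in Python, assuming a differential-drive robot; the encoder resolution, wheel radius, and wheel base below are illustrative values, not from the notes:

```python
import math

# Assumed encoder/wheel parameters for a differential-drive robot (illustrative).
TICKS_PER_REV = 1024   # encoder ticks per full wheel revolution
WHEEL_RADIUS = 0.05    # metres
WHEEL_BASE = 0.30      # distance between the left and right wheels, metres

def update_pose(x, y, theta, left_ticks, right_ticks):
    """Advance the robot pose (x, y, heading) from one pair of encoder readings."""
    # Convert encoder ticks to the distance each wheel travelled.
    left_dist = 2 * math.pi * WHEEL_RADIUS * left_ticks / TICKS_PER_REV
    right_dist = 2 * math.pi * WHEEL_RADIUS * right_ticks / TICKS_PER_REV

    # Average forward motion and change in heading for a differential drive.
    forward = (left_dist + right_dist) / 2.0
    delta_theta = (right_dist - left_dist) / WHEEL_BASE

    # Integrate the motion into the pose, using the mid-step heading.
    x += forward * math.cos(theta + delta_theta / 2.0)
    y += forward * math.sin(theta + delta_theta / 2.0)
    return x, y, theta + delta_theta

# Example: both wheels turn the same amount, so the robot drives straight ahead.
print(update_pose(0.0, 0.0, 0.0, left_ticks=512, right_ticks=512))
```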
  • A robot with a single camera is proposed for the rescue scenario
  • LIDAR
    Light detection and ranging - uses lasers to create a 3D model of the environment by measuring the time for light to be reflected back
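  • A tiny sketch of the LIDAR ranging idea (the pulse time below is an illustrative value): distance is half the round-trip time of the laser pulse multiplied by the speed of light.

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def distance_from_time_of_flight(round_trip_seconds):
    # The pulse travels to the surface and back, so halve the round-trip path.
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse returning after ~66.7 nanoseconds corresponds to a surface ~10 m away.
print(distance_from_time_of_flight(66.7e-9))
```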
  • Inertial Measurement Unit (IMU)
    Measures acceleration, rotation and magnetic field to pinpoint the position of an object in 3D space
  • Dead reckoning
    Calculating the current position based on the previous position and the movement from that position
  • Dead reckoning has a drift problem where cumulative errors build up over time
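  • A small simulation of the drift problem; the noise levels are assumed, purely to illustrate how per-step errors accumulate:

```python
import math
import random

random.seed(0)

# The robot really moves 1 m per step in a straight line; its sensors report
# each step and heading with a little noise, and the errors accumulate.
x, y = 0.0, 0.0              # dead-reckoned position
true_x, true_y = 0.0, 0.0    # actual position
heading = 0.0
step = 1.0

for _ in range(100):
    true_x += step * math.cos(heading)
    true_y += step * math.sin(heading)
    noisy_step = step + random.gauss(0, 0.02)        # assumed distance noise
    noisy_heading = heading + random.gauss(0, 0.01)  # assumed heading noise
    x += noisy_step * math.cos(noisy_heading)
    y += noisy_step * math.sin(noisy_heading)

drift = math.hypot(x - true_x, y - true_y)
print(f"accumulated drift after 100 steps: {drift:.2f} m")
```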
  • Steps for a robot with a camera to determine its location and map the environment
    1. Capture images and sensor data
    2. Extract visual features from the images
    3. Match the visual features to a map or database to determine location
    4. Update the map with new information
  • Four steps taken by a robot with a camera every time it moves (sketched in code after this list)
    1. Capture images and/or sensor data
    2. Identify and isolate key points and regions of interest
    3. Engage in Data Association
    4. Update its pose and map
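  • A rough Python sketch of those four per-frame steps using OpenCV's ORB features; the camera index, the match limit, and the pose/map update stub are assumptions, and a real VSLAM front end would also reject outliers and triangulate new map points.

```python
import cv2

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

cap = cv2.VideoCapture(0)                      # 1. capture images
prev_keypoints, prev_descriptors = None, None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # 2. identify key points (corners, edges) and compute their descriptors
    keypoints, descriptors = orb.detectAndCompute(gray, None)

    # 3. data association: match features against the previous frame
    if prev_descriptors is not None and descriptors is not None:
        matches = matcher.match(prev_descriptors, descriptors)
        matches = sorted(matches, key=lambda m: m.distance)[:200]
        # 4. update pose and map from the matched points (left as a stub here)
        # estimate_pose_and_extend_map(matches, prev_keypoints, keypoints)

    prev_keypoints, prev_descriptors = keypoints, descriptors
```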
  • Key points
    Points in an image that help the camera localize itself in the area, build the 3D map, and navigate
  • Examples of key points
    • Corners
    • Edges
    • Landmarks
  • Data Association
    Looking back at previous frames to check whether a particular feature has been seen before
  • Pose
    The position and orientation of the camera
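  • One common way to write a pose down is a 4x4 transform combining a 3x3 rotation (orientation) and a 3-vector translation (position); the numbers below are illustrative.

```python
import numpy as np

theta = np.deg2rad(30)                        # camera yawed 30 degrees (assumed)
rotation = np.array([[np.cos(theta), -np.sin(theta), 0],
                     [np.sin(theta),  np.cos(theta), 0],
                     [0,              0,             1]])
position = np.array([2.0, 1.0, 0.5])          # metres in the map frame (assumed)

pose = np.eye(4)
pose[:3, :3] = rotation      # orientation
pose[:3, 3] = position       # position

# Transform a point 1 m in front of the camera into the map frame.
point_in_camera = np.array([0.0, 0.0, 1.0, 1.0])
print(pose @ point_in_camera)
```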
  • This is an iterative process: every time the camera captures a frame, these four steps take place, until the entire building has been mapped or the rescue team is satisfied with the map that has been built
  • Front end
    Where tracking and data association is done, at the robot
  • Back end
    Where pose estimation and optimization is done, at a base station or server
  • The VSLAM process described above roughly corresponds to the first two modules of the VSLAM algorithm
  • VSLAM algorithm modules
    1. Real-time camera and pose tracking
    2. Local mapping
    3. Loop closure
    4. Relocalization
    5. Global map optimization
  • Tracking
    Tracking visual features across frames for purposes of localization
  • Local mapping
    Using the VSLAM process to build a 3D map
  • Loop closure
    Detecting when the camera revisits a location it's seen before and ensuring consistency in the 3D map
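  • A toy loop-closure check (the thresholds and the keyframe store are assumptions): compare the current frame's ORB descriptors with each stored keyframe and flag a revisit when enough descriptors match closely.

```python
import cv2

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def detect_loop_closure(current_descriptors, keyframe_descriptor_list,
                        min_matches=80, max_distance=40):
    """Return the index of a previously seen keyframe, or None if no revisit."""
    for idx, keyframe_descriptors in enumerate(keyframe_descriptor_list):
        matches = matcher.match(current_descriptors, keyframe_descriptors)
        good = [m for m in matches if m.distance < max_distance]
        if len(good) >= min_matches:
            return idx
    return None
```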
  • Relocalization
    Re-establishing a camera's position and orientation in a known environment
  • Global map optimization
    Analyzing the entirety of the map and frames collected to improve the accuracy and detail of the map
  • Global map optimization processes
    1. Bundle adjustment
    2. Keyframe selection
  • Bundle adjustment
    An algorithm that takes camera position, camera calibration, 3D points, and 2D frames to minimize reprojection error and improve the 3D map
  • Bundle adjustment calculates a reprojection error for each 3D point, generates a cost function, and uses an optimization algorithm to adjust the 3D points to minimize the reprojection error
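  • A stripped-down bundle-adjustment sketch: two camera views are held fixed and only the 3D points are adjusted to minimize the reprojection error with scipy's least_squares; the calibration matrix, camera poses, and point values are made-up illustrative numbers.

```python
import numpy as np
from scipy.optimize import least_squares

K = np.array([[500.0, 0.0, 320.0],       # assumed pinhole calibration matrix
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

# World-to-camera translations for two fixed views; the second camera sits
# 0.5 m to the right, so points shift 0.5 m left in its frame.
world_to_cam = [np.array([0.0, 0.0, 0.0]), np.array([-0.5, 0.0, 0.0])]

def project(points_3d, translation):
    """Pinhole projection into one camera (rotation assumed to be identity)."""
    cam_points = points_3d + translation
    uvw = (K @ cam_points.T).T
    return uvw[:, :2] / uvw[:, 2:3]

true_points = np.array([[0.1, 0.2, 2.0], [-0.3, 0.1, 3.0], [0.5, -0.2, 2.5]])
observations = [project(true_points, t) for t in world_to_cam]  # 2D detections
initial_guess = true_points + 0.05                              # noisy starting map

def reprojection_residuals(flat_points):
    # Reprojection error: where the current 3D estimate lands in each image
    # minus where the point was actually observed.
    points = flat_points.reshape(-1, 3)
    residuals = [project(points, t) - obs for t, obs in zip(world_to_cam, observations)]
    return np.concatenate(residuals).ravel()

result = least_squares(reprojection_residuals, initial_guess.ravel())
print("refined 3D points:\n", result.x.reshape(-1, 3))
```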
  • SLAM (Simultaneous Localization and Mapping)
    1. Takes frames as input
    2. Outputs estimated reprojection error
    3. Uses optimization algorithm to adjust 3D points to minimize reprojection error
    4. Results in more accurate 3D map
  • Keyframe selection
    1. Selects a subset of frames that best represents a 3D scene or visual feature
    2. Criteria: quality, scene coverage, motion diversity, temporal spacing
    3. Reduces computational load by using fewer frames to generate 3D model
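  • A simple keyframe-selection sketch using two of the criteria above, temporal spacing and feature quality; the thresholds and the frame representation are assumptions.

```python
def select_keyframes(frames, min_gap=15, min_features=150):
    """frames: list of (frame_index, number_of_tracked_features) tuples."""
    keyframes = []
    last_index = -min_gap
    for index, num_features in frames:
        # Keep a frame only if it is well separated from the last keyframe
        # and carries enough tracked features to be useful.
        if index - last_index >= min_gap and num_features >= min_features:
            keyframes.append(index)
            last_index = index
    return keyframes

# Example: 100 frames at 30 fps with plenty of features -> roughly 2 keyframes/second.
print(select_keyframes([(i, 200) for i in range(100)]))
```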
  • Occlusions
    Situation where the object being assessed is partially or completely obstructed by another object
  • Pose estimation
    Estimating the position and orientation of an object or human relative to the camera
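  • A pose-estimation sketch with OpenCV's solvePnP: given known 3D points on an object and where they appear in the image, recover the object's position and orientation relative to the camera; the calibration matrix, object points, and ground-truth pose below are illustrative.

```python
import numpy as np
import cv2

K = np.array([[500.0, 0.0, 320.0],    # assumed camera calibration matrix
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)             # assume no lens distortion

# Six known 3D points on the object (e.g. box corners), in metres.
object_points = np.array([[0, 0, 0], [0.2, 0, 0], [0.2, 0.1, 0],
                          [0, 0.1, 0], [0, 0, 0.1], [0.2, 0, 0.1]], dtype=np.float64)

# Simulate where those points would appear for a known ground-truth pose ...
true_rvec = np.array([0.1, -0.2, 0.05])
true_tvec = np.array([0.3, -0.1, 1.5])
image_points, _ = cv2.projectPoints(object_points, true_rvec, true_tvec, K, dist_coeffs)

# ... then recover that pose from the 2D-3D correspondences.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist_coeffs)
print("rotation vector:", rvec.ravel())
print("translation vector:", tvec.ravel())
```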