paper 3

Cards (74)

  • Odometry sensor
    Device that calculates the distance a vehicle has traveled by tracking the rotation of the moving parts (e.g. wheels) that produce its motion
  • Sensor fusion

    Situation in which data from multiple sensors, including cameras, is combined to provide greater situational awareness than any of the individual sensors could offer on their own
  • It is important to verify a robot's location, as obtained by dead reckoning, through the use of other methods such as GPS or external sensors
  • Bundle adjustment
    A computer vision technique used to refine a 3D reconstruction by minimizing the reprojection error between observed image points and the projections of the estimated 3D points
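The objective that bundle adjustment minimizes can be written in its standard textbook form (not quoted from the paper; here \(x_{ij}\) is the observed image point of 3D point \(X_j\) in camera \(i\), and \(\pi\) projects a point through camera parameters \(C_i\)):

```latex
\min_{\{C_i\},\,\{X_j\}} \; \sum_{i,j} \left\lVert x_{ij} - \pi(C_i, X_j) \right\rVert^2
```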
  • Computer vision
    Algorithms and techniques that allow computers to analyze, interpret, and make sense of digital images or videos
  • Dead reckoning

    Method of estimating current position by advancing a previously known position using measured or estimated speed, heading, and elapsed time
  • Dead reckoning data
    Estimation of current position made using dead reckoning
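The dead-reckoning update can be sketched in a few lines of Python (a minimal illustration that assumes perfect heading and distance measurements; `dead_reckon` and its inputs are hypothetical names, not from the paper):

```python
import math

def dead_reckon(start, steps):
    """Advance an (x, y) position from odometry steps.

    Each step is (heading_radians, distance); the position is advanced
    along the heading by the traveled distance.
    """
    x, y = start
    for heading, distance in steps:
        x += distance * math.cos(heading)
        y += distance * math.sin(heading)
    return x, y

# Drive 3 m east, then 4 m north: expect roughly (3, 4).
print(dead_reckon((0.0, 0.0), [(0.0, 3.0), (math.pi / 2, 4.0)]))
```

In practice each step's heading and distance carry measurement error, which is why the deck's later "robot drift" card matters: those errors accumulate without external correction.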
  • Edge computing
    Architecture in which data processing takes place on local devices (e.g. robots) rather than relying on a centralized server
  • Global map optimization

    The process of enforcing visual consistency across a whole map, using bundle adjustment and related processes at a larger scale
  • Global Positioning System (GPS) signal

    Radio frequency signal transmitted by satellites in the Global Positioning System (GPS); used to determine the receiver's location, velocity, and time
  • GPS-degraded environment

    Situation where GPS signals are weakened: still available, but with degraded quality, accuracy, and acquisition time
  • GPS-denied environment
    No GPS signal available
  • Human pose estimation (HPE)

    Used to identify key points on animate objects
  • Rigid pose estimation (RPE)

    Used to identify key points on inanimate objects
  • Inertial measurement unit (IMU)

    Device that combines three sensors (accelerometer, gyroscope, and magnetometer) to measure acceleration, rotation, and magnetic field orientation
  • Keyframe selection

    The process of identifying and selecting significant frames from a video sequence to build a sparse map of the environment and estimate the camera's trajectory accurately
  • Key points/pairs

    Distinctive and identifiable locations or regions within an image or a video frame
  • Light detection and ranging (LiDAR)

    Remote sensing technology that uses laser light to measure distances
  • Occlusion is a situation in which one object blocks another object that is the subject of analysis
  • One feature of vSLAM that makes it of particular value for use with rescue robots is its ability to conduct mapping in real-time
  • The bottom-up approach is generally considered better than the top-down approach at analyzing scenes with multiple people
  • Integrating an edge device with a rescue robot could be challenging due to the need to harden the edge device to survive the same environment as the robot, its additional power requirements, and the need for software that enables communication with and utilization of the edge device
  • vSLAM process

    1. Analyze frame for key features
    2. Match features in current frame to previous frames
    3. Update pose and 3D map based on visual data
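The three steps above can be sketched as a toy loop (an illustrative simplification, not a real vSLAM implementation: features are matched by a shared id instead of visual descriptors, and the pose update is a plain 2D translation; all names are hypothetical):

```python
def match_features(prev, curr):
    """Match key points between frames by shared feature id (a stand-in
    for real descriptor matching)."""
    return [(prev[k], curr[k]) for k in prev if k in curr]

def track(frames):
    """Toy vSLAM-style loop: match features to the previous frame,
    estimate camera translation as the mean feature displacement,
    then update the pose and the sparse map."""
    pose = (0.0, 0.0)        # camera position in world coordinates
    world_map = {}           # feature id -> estimated world position
    prev = None
    for frame in frames:     # frame: {feature_id: (x, y) in image}
        if prev is not None:
            pairs = match_features(prev, frame)
            # In this toy model, features shift opposite to camera motion.
            dx = sum(p[0] - c[0] for p, c in pairs) / len(pairs)
            dy = sum(p[1] - c[1] for p, c in pairs) / len(pairs)
            pose = (pose[0] + dx, pose[1] + dy)
        for fid, (x, y) in frame.items():
            world_map.setdefault(fid, (pose[0] + x, pose[1] + y))
        prev = frame
    return pose, world_map

frames = [
    {"a": (1.0, 0.0), "b": (0.0, 1.0)},
    {"a": (0.0, 0.0), "b": (-1.0, 1.0)},  # camera moved +1 in x
]
pose, world_map = track(frames)
print(pose)  # → (1.0, 0.0)
```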
  • Any robot seeking to identify and rescue a child should be accompanied by human rescuers to reduce the emotional impact on the child
  • Robots should employ multiple sensors and sensor fusion to provide detailed atmospheric data and 3D mapping to aid human rescuers
  • Robots should use an edge architecture and peer-to-peer communication to isolate sensitive data and avoid centralized servers
  • Robots should use human pose estimation models trained to recognize child forms
  • light detection and ranging (LiDAR)

    Sensing system used in vSLAM that utilizes lasers to create detailed 3D representations of the surroundings and measure distances
  • object occlusion
    Situation in which objects or parts of objects in a scene are partially or completely obstructed from view by other object(s)
  • odometry sensor
    Provides information about movement and position of robots by measuring rotation of wheels and/or other moving parts
  • optimization
    The process of refining the estimated camera poses and 3D point positions to improve the accuracy and consistency of the 3D reconstruction and camera position
  • relocalization
    The process of reestablishing a camera's position and orientation in a known environment
  • robot drift
    Situation where cumulative error in dead reckoning leads to significant deviation of calculated position from true position
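A small sketch of how drift accumulates (hypothetical numbers: an uncorrected 0.5-degree heading bias per 1 m step is tiny locally, yet after 100 steps the estimated position is tens of meters from the true one):

```python
import math

def positions(heading_bias, steps=100, step_len=1.0):
    """Integrate straight-line motion with a per-step heading error."""
    x = y = heading = 0.0
    for _ in range(steps):
        heading += heading_bias      # small uncorrected bias each step
        x += step_len * math.cos(heading)
        y += step_len * math.sin(heading)
    return x, y

true_x, true_y = positions(0.0)              # ideal: 100 m straight ahead
est_x, est_y = positions(math.radians(0.5))  # 0.5° bias per step
drift = math.hypot(est_x - true_x, est_y - true_y)
print(round(drift, 1))                       # far larger than any single step's error
```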
  • Simultaneous Localization and Mapping (SLAM)

    Method for estimating sensor/object position (localization) and digitally constructing an unknown environment (mapping)
  • sensor fusion model

    Paradigm in which information from multiple sensors is combined to get a better understanding of the surrounding environment
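One standard way to combine such readings is inverse-variance weighting (a common fusion rule, offered as an illustration rather than the specific model discussed in the paper; the sensor values are made up):

```python
def fuse(estimates):
    """Fuse (value, variance) estimates from independent sensors by
    inverse-variance weighting; more certain sensors count for more."""
    weights = [1.0 / var for _, var in estimates]
    value = sum(w * v for (v, _), w in zip(estimates, weights)) / sum(weights)
    variance = 1.0 / sum(weights)
    return value, variance

# Range to an obstacle: LiDAR (precise) and camera depth (noisy).
value, variance = fuse([(10.0, 0.04), (10.6, 0.36)])
print(round(value, 2), round(variance, 3))  # → 10.06 0.036
```

Note that the fused estimate lands near the precise LiDAR reading and its variance is smaller than either sensor's alone, which is the point of fusing.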
  • Visual Simultaneous Localization and Mapping (vSLAM)

    1. Initialization
    2. Tracking
    3. Local mapping
    4. Loop closure
    5. Relocalization
  • BotRobot (Hypothetical Company used in Case Study)
  • BotRobot makes underperforming rescue robots
  • 4 challenges for BotRobot

    • Navigating inside buildings in GPS-degraded and GPS-denied environments
    • Navigation in an unknown, constantly changing environment
    • Detecting survivors (in darkness, deformed, behind debris, etc.)
    • Communication (from within building to command station)
  • Proposed new rescue robot

    • Computer vision techniques
    • vSLAM (Visual Simultaneous Localization and Mapping) and Pose Estimation
    • Odometry Sensor