Snake Robot Simulation

Manual robot control and computer vision for human detection.

This project focuses on simulating a snake-like robot with manual control capabilities and integrating computer vision for identifying human casualties.

My Contribution

I was the project lead and developed the computer vision algorithms for image processing to identify human casualties. I also integrated the SolidWorks models into Simulink, working with Sanuda to implement motion control. Finally, we prepared a detailed report on the project with results and analysis, which remains open for further improvement.

Project Objectives

The primary objectives of this project are:

  • To simulate the agility and flexibility of natural snakes through manual control of the robot’s joint angles.
  • To enhance situational awareness by integrating image classification for real-time identification of human casualties.

Manual Motion Control

The snake robot uses manual motion control to mimic the agile movements of a natural snake, enabling it to navigate through various terrains and confined spaces.

  • User-Controlled Joint Angles: The robot’s joints can be adjusted manually, providing flexibility and control over its movement.
  • Agility Simulation: The robot mimics the serpentine motion of a snake, enabling it to move smoothly and efficiently.
(Figure: Manual Control Interface)
(Figure: Robot Locomotion Snapshot)
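
The serpentine motion described above can be sketched as a serpenoid gait, in which each joint follows a sine wave phase-shifted from its neighbour so that a travelling wave propagates along the body. This is a minimal illustration, not the project's actual controller; the amplitude, frequency, and phase-lag values below are assumptions chosen for demonstration.

```python
import math

def serpenoid_joint_angles(n_joints, t, amplitude=0.5, omega=2.0,
                           phase_lag=0.8, bias=0.0):
    """Joint angles (rad) for a serpenoid gait at time t.

    Each joint is a sine wave offset by a fixed phase lag from its
    neighbour, producing the travelling wave of serpentine motion.
    Parameter values here are illustrative assumptions only.
    """
    return [amplitude * math.sin(omega * t + i * phase_lag) + bias
            for i in range(n_joints)]

# Example: 8 joints sampled at t = 0; adjacent joints differ only in phase.
angles = serpenoid_joint_angles(8, 0.0)
```

A manual control interface would expose the same parameters (amplitude, frequency, phase lag) to the operator instead of hard-coding them.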

Casualty Identification

The robot’s functionality is enhanced by integrating a computer vision system for image processing using the Viola-Jones Algorithm.

  • Image Classification: The algorithm processes real-time images captured by the robot to identify human casualties.
  • Viola-Jones Algorithm: This algorithm is renowned for its rapid and accurate object detection capabilities, making it suitable for real-time applications.
(Figure: Usage of the Viola-Jones Algorithm with real-time face tracking.)
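
The speed of Viola-Jones comes from the integral image (summed-area table), which lets any rectangular Haar-like feature be evaluated in constant time. The sketch below shows that core trick only; it is not the project's detection code, and a full detector would additionally use a trained cascade of boosted classifiers (e.g. the pre-trained Haar cascades shipped with OpenCV).

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of the w*h rectangle with top-left corner (x, y), in O(1)."""
    a = ii[y - 1][x - 1] if x > 0 and y > 0 else 0
    b = ii[y - 1][x + w - 1] if y > 0 else 0
    c = ii[y + h - 1][x - 1] if x > 0 else 0
    d = ii[y + h - 1][x + w - 1]
    return d - b - c + a

def two_rect_feature(ii, x, y, w, h):
    """Haar-like edge feature: left half minus right half of a w*h window."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)
```

Because every feature evaluation is four array lookups, thousands of features can be tested per candidate window, which is what makes the algorithm fast enough for real-time use.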

We evaluated detection performance using three tracking methods:

  1. Face Track Image Capture: detecting faces in individually captured still images.
  2. Face Track Video with Delay: processing video frames with an added delay to simulate real-time processing lag.
  3. Face Track Real-time without Delay: processing the continuous video stream without any delay for immediate identification.
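
The delayed and undelayed modes above can be compared with a small evaluation harness that runs a detector over a frame sequence and measures per-frame latency. This is an illustrative sketch, not the project's evaluation code; `detect` is a hypothetical stand-in for whatever detector is under test.

```python
import time

def evaluate_mode(frames, detect, delay_s=0.0):
    """Run detect() over frames, optionally sleeping delay_s between
    frames to simulate capture/processing lag; return detections and
    the mean per-frame detection latency in seconds."""
    latencies, results = [], []
    for frame in frames:
        t0 = time.perf_counter()
        results.append(detect(frame))
        latencies.append(time.perf_counter() - t0)
        if delay_s:
            time.sleep(delay_s)
    return results, sum(latencies) / len(latencies)

# Hypothetical stand-in detector, for illustration only.
dummy_detect = lambda frame: []
_, mean_latency = evaluate_mode(range(5), dummy_detect, delay_s=0.01)
```

Running the same harness with `delay_s=0` reproduces the real-time mode, so the two configurations can be compared on identical inputs.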

Results

The project successfully demonstrates the feasibility of using a snake-like robot for search and rescue operations. The manual control system allows precise movement, while the computer vision integration enhances situational awareness. Key results include:

  • Effective Manual Control: The robot’s movements closely mimic those of a natural snake, providing high maneuverability.
  • Accurate Casualty Detection: The Viola-Jones Algorithm identifies human casualties in real time with high accuracy.

Below are some images showcasing the results:

(Figure: Angle Variation vs Time)
(Figure: Angle Derivative Variation vs Time)

Future Work

While the current implementation focuses on manual control and basic image classification, future enhancements could include:

  • Automated Kinematic Algorithms: Implementing advanced kinematic algorithms to automate the robot’s movements.
  • Enhanced Image Processing: Integrating more sophisticated image processing techniques to improve detection accuracy.
  • Additional Sensors: Adding sensors to enhance the robot’s navigation and detection capabilities.

Conclusion

The Snake Robot Simulation project is a significant advancement in robotic systems for search and rescue operations, combining manual control with computer vision for efficient navigation and casualty identification.

For more details, you can explore the GitHub repository and the project documentation.