The Autonomous Disaster Victim Search Robot Using the Waypoint Method

In a disaster, the deployment of robots is immensely helpful to the search and rescue (SAR) team in the process of searching for victims, especially when the deployed robots are autonomous, making the search both effective and efficient. In this paper, we propose an autonomous robot for searching for disaster victims that embeds an automatic waypoint method by exploiting sensors, GPS, and communication devices. The results show that this robot is able to search for victims autonomously by detecting their slight movements and body temperature, then sending the coordinates to the SAR team for the rescue operation. The robot is also able to make the return trip to the initial post autonomously for charging when its power is almost depleted.


INTRODUCTION
As a critical action, a search and rescue (SAR) operation aims to save the lives of disaster victims. The timing of the rescue is extremely critical to whether the victims live or die. However, the ruins or voids present in a post-disaster area may delay the operation by blocking the SAR team's way while searching for and rescuing victims. This can delay, or even cause the failure of, efforts to save the lives of survivors. The stability of buildings and ruins in a post-disaster area can be retained by deploying small tools. These tools are light and compact, which minimises the risk to victims' lives during the search, as the weight of the ruins above the victims can be kept low. Relatively small robots can therefore be operated in the search and rescue operation, to speed it up as well as to increase the chance of survival.
A number of rescue robots have been designed to help SAR teams operate in post-disaster areas. As an example, a robot with continuous tracks, or tank robot, had been implemented to rescue disaster victims in areas of ruins on land. It could maintain its stability on rubble better than four-wheeled robots. However, that robot was not autonomous, as an operator had to control it with a joystick. It also could not detect disaster survivors by itself, as the search was monitored through a camera at the SAR team's post.
An improved robot was implemented to inform the SAR team of the victims' positions and of the safe routes to take during the evacuation process. However, this robot was also not autonomous, as a joystick was used to control its movement.
Several implemented robots, both general mobile robots and dedicated rescue robots, have embedded the waypoint method to make their navigation autonomous. The waypoints were used to direct the robots from one point to another, or to decide the shortest path to follow. However, all the waypoints were input manually by human operators, so the robots could not operate when no waypoint input was given. Therefore, although the robots' movement is autonomous in the sense that no joystick is involved, the overall systems are still not autonomous because commands from a human operator are still required.
In this paper, we propose the implementation of an autonomous robot using the waypoint method. By exploiting the robot's ability to detect the potential presence of victims, their positions are used as the waypoints. Therefore, instead of receiving manual commands from a human operator, the robot can navigate autonomously, as the victims' positions serve as the automatic input to its waypoint method. This work is based on a related final project funded by a PKM-KC Grant from Belmawa Ristekdikti 2019. However, we have improved the robot's choice of wheels, motors, and sensors.

METHODS
The automatic waypoint method has been designed and implemented using the sensors on the robot. When the robot is turned on at the SAR team's post, its start position is detected by GPS and recorded. The robot then navigates randomly, searching for victims. The navigation is supported by infrared sensors to avoid obstacles taller than the robot. When the robot detects a slight movement behind the rubble, or a body temperature that indicates a victim, it sends its current position, as longitude and latitude coordinates, to the SAR team's post wirelessly using a LoRa (Long Range) device. The robot then continues searching, sending its most recent position each time it detects a victim. When its power is almost depleted, the robot returns to the SAR team's post (the return trip) based on the recorded start position.

Robot's hardware
The robot uses the following devices, as seen in Figure 1:
a. GPS Ublox Neo-7M to read the robot's position coordinates.
b. Infrared sensors E18-D80NK to detect obstacles in front of the robot.
c. LoRa transmitter to transmit the coordinate information to the SAR team's post.
d. Microwave sensor RCWL-0516 to detect the slight movement of a victim.
e. Arduino as the controller of the robot.
f. Thermal sensor D6T to detect the victim's body temperature.
g. Compass HMC5883L for the robot's heading.
h. Motor driver VNH2SP30 to drive the right and left DC motors.
i. DC motors, right and left, for the robot's movement.
j. LoRa receiver, attached to the PC/laptop at the SAR team's post, to receive the coordinate information sent by the transmitter.
k. PC/laptop at the SAR team's post, to display the coordinates of the victims' positions sent by the robot.
As shown in Figure 1, the GPS Ublox Neo-7M reads the latitude and longitude coordinates that are used to determine the waypoints of the robot's navigation. The coordinate information is then sent wirelessly by the LoRa transmitter to the LoRa receiver attached to the PC/laptop at the SAR team's post. A coordinate is first recorded when the microwave sensor RCWL-0516 informs the Arduino of a slight movement behind the rubble, indicating a victim. The thermal sensor D6T supports the detection by reading body temperature. For navigation, three infrared sensors E18-D80NK are placed at the front, right, and left of the robot to detect obstacles. The compass HMC5883L reads the robot's heading to ease autonomous movement. The motor driver VNH2SP30 drives the two DC motors placed on the right and the left.
The robot's movement uses a differential-drive mechanism to control the two wheels, each driven by a DC motor. With this mechanism, the robot moves forward at a constant velocity, with both motors rotating at the same constant speed. When it turns right, the right motor is off and the left motor drives forward, and vice versa.
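The turning behaviour above follows standard differential-drive kinematics. As a minimal sketch (the 0.3 m wheelbase is a hypothetical value, not a measured parameter of this robot):

```python
def body_velocity(v_left, v_right, wheelbase):
    """Differential-drive kinematics: forward speed (m/s) and yaw
    rate (rad/s) of the robot body from the two wheel speeds."""
    v = (v_left + v_right) / 2.0            # speed of the body centre
    omega = (v_right - v_left) / wheelbase  # positive = turning left
    return v, omega

# Moving forward: both wheels at 0.5 m/s -> 0.5 m/s, no rotation.
print(body_velocity(0.5, 0.5, 0.3))  # (0.5, 0.0)

# Turning right: right motor off, left motor at 0.5 m/s.
print(body_velocity(0.5, 0.0, 0.3)[0])  # 0.25, as computed in the Results section
```

The same function also gives the yaw rate, which is useful when tuning how sharply the robot turns away from an obstacle.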

Robot's software
The waypoint method is designed to control the motion of the autonomous robot in reaching a target position. The navigation system has been designed so that the robot identifies its position and direction in the Earth coordinate system, as well as correcting its direction (bearing correction). Figure 2 shows the flowchart used for the robot system proposed in this paper.
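The bearing correction needs the target bearing from the robot's GPS position to a waypoint, which is then compared against the compass heading. The paper does not give its exact formula; the sketch below uses the standard initial great-circle bearing calculation as an illustration:

```python
import math

def bearing_to(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing (degrees clockwise from north)
    from the robot's position (lat1, lon1) to a waypoint (lat2, lon2).
    The robot would steer until its compass heading matches this."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360.0

# A waypoint due east of the robot lies at a bearing of 90 degrees.
print(round(bearing_to(0.0, 0.0, 0.0, 0.001)))  # 90
```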
At the beginning, the GPS reads the robot's position in latitude and longitude. This coordinate is used as the waypoint for the robot to navigate back to the start position. The robot then walks randomly for 5 s while avoiding obstacles. The 5 s interval gives the microwave sensor time to detect a slight motion from a victim, e.g. a movement of the hand. This sensor can detect motion within 7 m when unobstructed. The thermal sensor D6T, which can detect body temperature within 8 m, supports the microwave sensor's detection. In the testing, we limited the number of victim testees to three by incrementing the counter variable m = n + 1 on each detection.
When the robot has successfully detected a victim testee, the GPS reads the current coordinate and this information is sent to the PC/laptop at the SAR team's post through LoRa. LoRa is capable of transmitting and receiving data over 10 km. When the robot has detected three victim testees, indicated by m = 3, it goes back to the start position (the return trip) using the coordinate read at the beginning. The robot then verifies that it has reached the exact start position by comparing it with the current GPS reading.
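The flowchart can be summarised as the control-loop sketch below. All the helper methods on `robot` (`read_gps`, `wander`, `detect_motion`, and so on) are hypothetical stand-ins for the actual firmware, and the victim limit of three follows the test setup described above:

```python
def search_mission(robot, victim_limit=3):
    """Mission loop per the flowchart: record the start waypoint,
    wander while scanning, report each detected victim over LoRa,
    then make the return trip to the start position."""
    start = robot.read_gps()           # start position = return waypoint
    found = 0                          # the counter m in the flowchart
    while found < victim_limit and not robot.battery_low():
        robot.wander(seconds=5)        # random walk with IR obstacle avoidance
        if robot.detect_motion() and robot.confirm_temperature():
            robot.send_lora(robot.read_gps())  # report victim coordinates
            found += 1
    robot.navigate_to(start)           # autonomous return trip
    return found
```

The `battery_low()` check in the loop condition also covers the case, described in the Methods overview, where the robot returns early because its power is almost depleted.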

RESULTS
Before implementing the whole robot system, we calculated the torque of the robot, because it is related to the power required to drive the system. When moving forward, both motors rotate at the same velocity; we set them to rotate the wheels at 0.5 m/s. When the robot turns right, the right wheel's velocity is vr = 0 m/s and the left wheel's is vl = 0.5 m/s; when it turns left, vr = 0.5 m/s and vl = 0 m/s. Therefore, the robot's velocity when turning right or left is:

1. turning right: v = (vl + vr)/2 = (0.5 + 0)/2 = 0.25 m/s
2. turning left: v = (vr + vl)/2 = (0.5 + 0)/2 = 0.25 m/s

Assuming a constant robot velocity of 0.5 m/s to determine the required DC motors, we used these data:
1. robot's mass m = 7 kg
2. supply voltage = 12 V
3. maximum current = 2 A, based on assumption
4. acceleration a = 0.5 m/s², based on the change of velocity from 0 m/s to 0.5 m/s

The force to push the robot is:

F = m × a    (1)

where m is the robot's mass (kg) and a is its acceleration (m/s²). This gives F = 7 × 0.5 = 3.5 N.
To drive the robot over a certain distance in a certain direction, the robot requires a torque obtained from:

T = F × l    (2)

where F is the force to push the robot (N) and l is the distance (m). With l = 0.5 m, the torque required to drive the robot is T = 1.75 Nm. Because the robot has two motors, this torque is divided between them, 0.875 Nm each.
To determine the motors' rotation per minute (rpm), we have:

P = T × ω    (3)
ω = 2πn/60 (rad/s)    (4)

where P is the power (W), ω is the angular velocity (rad/s), T is the torque (Nm), and n is the rotation per minute (rpm). Substituting Equation (4) into Equation (3) gives n = 60P/(2πT), so the rpm for each DC motor of the robot is

n = (60 × 12 × 2) / (2π × 0.875) ≈ 261 rpm
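These figures can be checked with a few lines of arithmetic. The 0.5 m lever arm is inferred from the stated force and torque, and the current is the assumed maximum; the exact result is ≈ 261.9 rpm, which the paper rounds to 261:

```python
import math

m, a = 7.0, 0.5         # robot mass (kg), acceleration (m/s^2)
F = m * a               # force to push the robot: 3.5 N
T = F * 0.5             # torque with a 0.5 m lever arm: 1.75 Nm
T_per_motor = T / 2.0   # shared between the two motors: 0.875 Nm
P = 12.0 * 2.0          # supply voltage x assumed max current = 24 W
n = 60.0 * P / (2.0 * math.pi * T_per_motor)  # from P = T*w, w = 2*pi*n/60

print(F, T_per_motor, round(n))  # 3.5 0.875 262
```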
We have implemented the autonomous robot system as shown in Figure 3.
Before testing the functionality of the whole robot, we tested the sub-functions and their related devices separately, in the Electronics Laboratory, the Center of Robotics and Autonomous Systems Room, and the basement, all at Institut Teknologi Nasional, Bandung, Indonesia. The first sub-function tested was obstacle avoidance, which relies on the readings of the infrared sensors. We set the distance between the robot and the object to 30 cm and checked whether the robot stops moving when it detects the object at a certain distance. Figure 4 shows that, on average over ten tests, the robot stops within 3 cm when detecting an object at 30 cm. The obstacle-avoidance function then makes the robot turn right or left, depending on the readings of all sensors. The next step was to see whether the microwave and thermal sensors are able to detect victim testees at a certain distance. Table 1 shows the results of the test.
The testing was conducted by locating the victim testee behind the rubble simulation (tables, chairs, wood blocks, and iron poles) at 0.5 m increments, following the dimensions of the rubble. For the thermal sensor, we added a test without the rubble in place. Figure 5 and Figure 6 show how the test was conducted.

Figure 5. The Rubble Simulation
The microwave sensor successfully detected the victim testee placed behind the rubble as long as the distance was not over 3 m. The thermal sensor was not able to detect the victim testee behind the rubble, even at an extremely close distance. However, when we ran the test without the rubble, the thermal sensor was able to detect the victim testee up to 6 m away. Therefore, we decided to combine both sensors to detect victims by their slight motion and body temperature: first, the microwave sensor detects the slight motion of the victim; then the robot moves closer to the suspected victim and checks the temperature to confirm whether it is a victim or not.

The next test examined the compass, to verify that its measurements are precise. We compared the compass readings with a protractor. Figure 7 shows that the measurements have an average error of 0.4° across all angles, which is still acceptable for the robot's navigation.

The complete test was then conducted to confirm the functionality of the robot as planned in the flowchart. This test was conducted on the Institut Teknologi Nasional campus, Bandung, Indonesia. The robot navigated in search of the victim testees. Table 2 shows the detection of the first victim testee in ten tests, whose position is shown. The average differences in latitude and longitude fall within a radius of 4.08 m, as shown in Figure 8. After sending the position of the first victim testee, the robot continued navigating to search for the second. Table 3 shows the results of ten tests; the average coordinate falls within a radius of 5.07 m, as shown in Figure 9.
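The two-stage detection strategy, microwave motion sensing first and thermal confirmation second, can be sketched as a simple predicate. The range limits follow the sensor tests reported above; the function itself is an illustrative stand-in, not the firmware logic:

```python
MICROWAVE_RANGE_M = 3.0  # reliable through rubble, per the sensor test
THERMAL_RANGE_M = 6.0    # line-of-sight only (no rubble in between)

def victim_detected(motion_distance_m, thermal_distance_m, line_of_sight):
    """Stage 1: the microwave sensor must sense slight motion within
    range (it works through rubble). Stage 2: after approaching, the
    thermal sensor confirms body temperature, but only with an
    unobstructed view of the victim."""
    if motion_distance_m is None or motion_distance_m > MICROWAVE_RANGE_M:
        return False  # no slight motion sensed, keep wandering
    return line_of_sight and thermal_distance_m <= THERMAL_RANGE_M
```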
Similar to the previous step, after sending the second victim testee's position to the post, the robot continued navigating to find the third. Table 4 shows the ten tests, whose average falls within a radius of 4.37 m, as shown in Figure 10. Because we limited the test to searching for only three victim testees, right after the third one's position was sent, the robot navigated back to the start position, as shown in Table 5. The average coordinate reading falls within a radius of 4.51 m, as shown in Figure 11.
The average coordinate reading for all cases is between 4.08 m and 5.07 m. This is better than the average accuracy of the GPS device as specified by its datasheet, 7-10 m. However, the result is still far from what we expected: in a real case, the radius around the victim should ideally be within 1-2 m, so that the rescue operation is not widened, which would bring more risk to the survivor under the rubble.
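For reference, a latitude/longitude difference can be converted to a metre-scale radius with a small-distance (equirectangular) approximation. The offset in the example below is illustrative, not one of the measured values:

```python
import math

EARTH_R = 6371000.0  # mean Earth radius in metres

def offset_metres(dlat_deg, dlon_deg, lat_deg):
    """Convert small latitude/longitude differences (degrees) to a
    ground-distance radius in metres at the given latitude."""
    dlat = math.radians(dlat_deg) * EARTH_R
    dlon = math.radians(dlon_deg) * EARTH_R * math.cos(math.radians(lat_deg))
    return math.hypot(dlat, dlon)

# Example: a latitude offset of 0.000037 deg near Bandung (~6.9 deg S)
# corresponds to roughly 4.1 m on the ground.
print(round(offset_metres(3.7e-5, 0.0, -6.9), 2))  # 4.11
```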
To see the robot's performance in the real world, we conducted a test in the Yogyakarta region, where the terrain contours are varied: mountains, sand, and a number of beaches. The results show that the robot was able to search for the victim testees on the uneven terrain and could go back to the start position. The robot was also able to go back to the start position when the battery was running out. Figure 12 shows the test on sandy terrain in Yogyakarta. In the related final project, the robot could not navigate on sand and sandy beaches because of its wheel characteristics, a continuous track on four wheels. The robot version proposed in this paper successfully navigated on sand and sandy beaches, as well as climbing a sand hill.

CONCLUSIONS
Based on the results, we have implemented an autonomous robot that is able to search for the presence of suspected victims. The robot's sub-functions have been tested and all work well. The robot is also able to navigate on even and uneven terrain, as required for search and rescue operations. However, several things must be improved for the robot to perform well in a real post-disaster search and rescue operation. In the near future, victim detection must be improved so that the detection range can be widened, especially for detecting body temperature. The accuracy of the GPS reading should be improved by locating a portable GPS base station at the SAR team's post, so that the robot's GPS devices take their readings from the station instead of directly from the satellites. Deploying more robots in a post-disaster area is also required to enhance the effectiveness and efficiency of the SAR operation: the more robots there are, the more survivors can be saved. In implementing such a multi-robot system, the communication between robots, and between the robots and the station, needs further investigation.