Essay: Mobile robot

Published: 18 February 2017; last modified: 18 September 2024.

Pictures of possible defects are taken together with the associated coordinates and are then processed into a report. More advanced techniques are needed to measure the severity, type and area of a defect. Using more advanced techniques takes longer, because the results are much more thorough. Hard-to-reach places that need inspection because a defect such as rust is visible to the naked eye are an opportunity for the robot. Currently, such places are reached with scaffolding or a cherry picker. When a defect is found on the inside of the tank, the tank needs to be drained and cleaned to visually confirm it. If the defect is in a hard-to-reach place, a scaffold needs to be set up inside the tank.
Furthermore, when a robot is used for inspections, it can usually perform only one specific inspection service; in most cases the robot is used for wall-thickness tests. In the PETROBOT project, however, the robots between them cover four different jobs: one robot has cleaning capabilities, while another is submersible and can perform floor scans. To compete with these systems and improve the inspection service time, a modular robot is a possible concept. This means the robot can be equipped with different non-destructive inspection devices without changing its base model, so multiple jobs can be performed with a single robot.
2.4 Conclusion of the performed research
Petrochemical storage tanks are currently inspected manually in most cases; only places that are hard to reach are inspected with a robot. In Europe, all inspections of petrochemical storage tanks, both manual and robotic, have to comply with the EEMUA 159 standard. In the Netherlands the PGS guide must also be followed. For petrochemical storage tank inspections, a niche lies in a robot that can perform faster and more detailed external inspections of the walls. Furthermore, a robot that can replace the manual visual confirmation of a defect on the inside of the storage tank is a potential niche: it saves the time of setting up a scaffold inside the tank. Also, when the inspection data are detailed enough that the severity and type of a defect are known, an informed decision can be made on whether an internal inspection is necessary.
The robots that are currently used record the inspection with a video camera. The footage is viewed after the inspection, which is highly inefficient. It takes a lot of time to look back at the footage and detect possible defects. A system that can detect, locate and mark potential defects can possibly solve the aforementioned problems. When the robot does an inspection on the inside of the tank, it is useful to know where the robot is located. The location of the robot and found defects can be displayed on a GUI with a drawn map of the tank.
3. Requirements imposed on the robot and localisation prototype
The results of the research into standards, competitors and potential clients are used to formulate the requirements for the robot and the localisation prototype. The requirements imposed on the robot are divided into internal and external requirements. Internal requirements are set within the company by the designers, production department, CEO and/or marketing department. External requirements are commonly set by the legislator, the social environment and/or client(s). Permanent requirements always have to be met; this paragraph sums up the permanent requirements for the robot. Variable requirements are conditions that have to be met to a greater or lesser extent.
 
4. Methods for the localisation of a mobile robot
Using the requirements that have been set, this chapter presents and discusses multiple techniques and methods for localising a mobile robot in an unknown environment. The pros and cons of the localisation techniques and/or methods are weighed against each other. Finally, several methods worth testing are chosen and examined further for localising the robot on the interior and exterior of a petrochemical storage tank. The robot's orientation and the rotations and translations it makes are key to locating it.
4.1 Odometry
Odometry is a technique that uses data from one or more motion sensors to estimate the position of a mobile robot over time. The technique can be used on robots equipped with wheels or legs. The motion sensors can also be placed around or on an object in the environment.
4.1.1 Odometry with rotary encoders mounted on the motors of a robot
A rotary encoder mounted on a motor is a sensor commonly applied for odometry. With a rotary encoder the number of motor rotations can be counted. When the number of rotations and the circumference of the wheels are known, the travelled distance can be calculated, and combined with the travelling time this also gives the speed of the robot. However, it is not as simple as this example suggests, because the surface over which the robot travels is usually uneven: the robot has unequal surface contact, and variable friction causes slip. Furthermore, when odometry data is used to localise the robot, velocity measurements are integrated to give a position estimate; the measurement errors are integrated along with them, so the estimated position drifts. When using a rotary encoder, the limited sampling rate and resolution also need to be kept in mind. Appendix 4 shows an example of odometry applied to a mobile robot.
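As an illustration of the tick-to-distance calculation described above, a minimal odometry step for a differential-drive robot could look like the following sketch. The parameter values (ticks per revolution, wheel diameter, wheel base) are illustrative assumptions, not specifications of the robot discussed here.

```python
import math

def odometry_update(x, y, theta, ticks_left, ticks_right,
                    ticks_per_rev=360, wheel_diameter=0.1, wheel_base=0.3):
    """One odometry step for a differential-drive robot.

    Converts encoder ticks on each wheel into travelled distance
    (ticks / ticks-per-revolution * wheel circumference) and updates
    the pose estimate (x, y in metres, theta in radians).
    """
    circumference = math.pi * wheel_diameter
    d_left = ticks_left / ticks_per_rev * circumference
    d_right = ticks_right / ticks_per_rev * circumference
    d_centre = (d_left + d_right) / 2.0          # forward motion of the body
    d_theta = (d_right - d_left) / wheel_base    # rotation of the body
    x += d_centre * math.cos(theta + d_theta / 2.0)
    y += d_centre * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta
```

Driving straight for one full wheel revolution on each side advances the pose by exactly one wheel circumference; any miscounted ticks caused by slip feed directly into the estimate, which is the error source described above.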
Odometry data can be made more accurate by applying a Kalman filter to suppress the errors in the measurement. After applying the filter the data is more accurate than without it, but a significant error remains. Sensor fusion is a method of obtaining more accurate data from multiple sensors: it combines data derived from several sensors so that the output has less uncertainty than the data from any individual sensor.
Odometry makes it possible to calculate the rotations and translations made by the mobile robot. A fixed starting point is needed to get the calculations right. This fixed position can, for example, be at the bottom of the tank with the robot's back touching the ground surface. That point is the origin; from there the robot drives over the surface of the tank and measurements are accumulated. Localising a robot with odometry is furthermore a cheap and fairly easy way to estimate its location and position: a consumer motor rotary encoder ranges from about $80 to $500 (USD).
4.2 Dead reckoning
Dead reckoning is a technique similar to odometry, but it can be applied without motion sensors. With this technique a mobile robot's current position is calculated from data about its previous position, using the heading and distance travelled. The heading, speed and time difference must be determined very precisely, and the confounding factors acting on the robot also need to be taken into account to get an accurate position measurement. Even when dead reckoning is applied properly, the method is still subject to significant errors. The current position of the robot is an estimate relative to the previously calculated position, and because every estimate is relative to the preceding one, the errors accumulate.
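The cumulative nature of these errors can be demonstrated in a few lines; the step size and the one-degree heading bias below are illustrative assumptions.

```python
import math

def dead_reckon(x, y, heading_rad, distance):
    """One dead-reckoning step: new position from the previous estimate,
    the heading and the distance travelled."""
    return (x + distance * math.cos(heading_rad),
            y + distance * math.sin(heading_rad))

# A small, constant heading bias (1 degree) makes the position error
# grow with every step: dead-reckoning errors are cumulative.
bias = math.radians(1.0)
true_pos, est_pos = (0.0, 0.0), (0.0, 0.0)
errors = []
for step in range(100):
    true_pos = dead_reckon(*true_pos, 0.0, 0.1)   # true heading: 0 rad
    est_pos = dead_reckon(*est_pos, bias, 0.1)    # biased heading estimate
    errors.append(math.dist(true_pos, est_pos))
```

After 100 steps the position error is ten times larger than after 10 steps: the error grows linearly with travelled distance, with nothing in the method itself to bound it.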
A user can do dead-reckoning calculations by hand, but there are systems that automate them using heading, distance and rotation sensors. Such systems are covered by the term inertial navigation systems (INS). This paragraph focuses on the following inertial navigation systems:
• Inertial Measurement Unit (IMU);
• Attitude and Heading Reference System (AHRS);
• Fibre-optic gyroscope.
4.2.1 Inertial Measurement Unit (IMU)
An inertial measurement unit (IMU) is a small electronic sensor that measures and outputs rotations about three axes. The rotations are yaw, pitch and roll, and they are measured with a combination of accelerometers, gyroscopes and magnetometers, which can sense the rotations thanks to microelectromechanical systems (MEMS) technology. Via dead-reckoning software, a computer is able to estimate and track the position of a mobile robot without external references. The advantages and disadvantages of using an inertial measurement unit are explained in appendix 7.
Because an IMU is built from microelectromechanical systems, it consumes little power and has low fabrication costs. IMUs are mass-produced and range from $20 to $170 (USD). Because of their modular design, IMUs can be used in most systems; they are cross-platform, can be programmed in C++ and can be used in combination with Arduinos.
The integration drift from which IMUs suffer has to be taken into account, and some IMUs need a stabilising platform for more accurate data. Also, when an IMU is used on the interior or exterior of a stainless steel storage tank, distortion of the earth's magnetic field by nearby (non-)ferrous metals is a source of problems. Local distortions of the sensor's field can be compensated for through calibration, but the calibration is only valid while the distorting metal remains in the same position relative to the sensor (for example, when the sensor is mounted on a metal plate). When nearby metal moves relative to the sensor, such as when a robot approaches a metal tank wall, the calibration no longer applies.
4.2.2 Attitude and Heading Reference system (AHRS)
An attitude and heading reference system (AHRS) is similar to an IMU, but in addition to the three raw rotations it outputs a computed attitude. The crucial difference between an AHRS and an IMU is that an AHRS contains, besides an integrated IMU with three magnetometers instead of one, an on-board processing system that calculates the orientation in space. An advantage of this is that localisation is more accurate and less sensitive to errors.
In comparison to an IMU, an AHRS is more expensive and is not mass-produced. A small American company sells AHRS units for $350 (USD).
An AHRS also becomes less accurate when used in ferrous environments. Most AHRS units have built-in real-time heading calibration routines. These routines attempt to compensate for changes in the relative position of ferrous metal structures, though they work best when the device is moved slowly relative to the metal tank.
4.2.3 Fibre-optic gyroscope (FOG)
Like an IMU, a fibre-optic gyroscope (FOG) can sense changes in orientation, but far more accurately, because it is insensitive to acceleration, vibration and shock. An explanation of how a FOG works can be found in appendix 5. The advantages of a FOG are its low drift, low maintenance and long lifetime. A FOG can be used in combination with interferometers and accelerometers; with this combination, six degrees of freedom can be measured. The accelerometers have to be very accurate, because these types of sensors usually suffer from high drift. Furthermore, a FOG uses a coil made of fibre, which makes the gyroscope rather large and unwieldy, although the mechanical coil can be replaced with a fibre-optic MEMS one. Finally, FOGs are not widely commercially available; currently this type of orientation device is mainly used by the military.
4.3 Visual odometry
Visual odometry is a specific type of odometry: the process of determining the position and/or orientation of a mobile robot by analysing individual images or video footage. Visual odometry can also be used for the navigation of mobile robots through environments.
4.3.1 Optical flow
Optical flow is a form of visual odometry: it is the apparent motion field in a visual scene, caused by the relative motion between the observer and the scene. More information about optical flow can be found in appendix 8. In this situation a camera and the OpenCV library are used to detect optical flow. With optical flow, the velocity of a moving object, or of the observer relative to the environment, can be calculated.
When using a camera, the extraction of visual features for positioning can be very difficult due to an excess or lack of visual information, for example in very bright or dark environments. When inspecting the inside of the tank, a light source is needed to extract features for localising the robot.
Furthermore, the camera needs to be calibrated beforehand to correct for lens distortion and obtain accurate measurements. Optical flow is not affected by, for example, wheel slip or the temperature of the environment. It is also a cheap solution, because the costs depend only on the capabilities and specifications of the camera.
4.3.2 Computer vision angle measurement
For simple localisation only two measurements need to be made: a height measurement and an angular measurement. These two coordinates are part of the cylindrical coordinate system. The diameter of the storage tank can be found with a laser rangefinder or from the tank's datasheet; dividing it by two gives the radius, the third coordinate of the cylindrical system.
By applying multiple computer vision algorithms the angle of the robot on the wall relative to the camera can be calculated.
4.4 Various localisation methods
This paragraph focuses on three other methods that have been looked into to localise the robot on the in- and exterior of a petrochemical storage tank.
4.4.1 Ultra-wide band technology
Ultra-wide band, commonly abbreviated as UWB, is a wireless radio communication technology. UWB transmits digital information as a train of very short (sometimes picosecond-scale) pulses spread over a wide bandwidth. The radio wave sent out into the environment is picked up by receivers; the time until the signal is picked up is the time of flight. Since radio waves travel at the speed of light, multiplying the time of flight by the speed of light gives the distance to a given receiver.
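The distance arithmetic itself is simple and can be sketched as follows. The two-way variant reflects the common two-way-ranging scheme in which the responder's known reply delay is subtracted; it is a generic sketch, not a specific UWB vendor API.

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def one_way_distance(time_of_flight_s):
    """Distance from a one-way time of flight of a radio pulse."""
    return time_of_flight_s * SPEED_OF_LIGHT

def two_way_distance(round_trip_s, reply_delay_s):
    """Two-way ranging: subtract the responder's known reply delay,
    then halve the remaining flight time."""
    return (round_trip_s - reply_delay_s) / 2.0 * SPEED_OF_LIGHT
```

A pulse received 10 ns after transmission corresponds to roughly 3 m, and a timing error of only 1 ns already shifts the estimate by about 30 cm, which is why the very short pulses of UWB matter for accurate localisation.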
Because radio waves travel so fast, precise timing of the received pulse is crucial, and with ultra-wide band this timing can be done very accurately. How ultra-wide band works and how accurately a mobile robot can be located in an indoor environment is explained in appendix 6.
This technique works well indoors because an indoor environment is more controlled than an outdoor one. In the case of a petrochemical storage tank, multiple receivers need to be placed around the tank when the robot inspects the outside, and inside the tank when the robot inspects the interior.
4.4.2 Laser rangefinder
A laser rangefinder is a specific type of rangefinder that uses a laser beam to estimate the distance to an object. The rangefinder sends out a laser pulse and measures the time the reflected pulse takes to return. The rangefinder's precision depends on the so-called rise or fall time of the emitted pulse and on the processing speed of the receiver: the rise time is the time an electric signal needs to change from a low value to a high value, and the fall time is the reverse. Laser rangefinders also have a limited range due to the divergence of the laser pulse over longer distances.
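The same time-of-flight arithmetic, now over a round trip, shows why the pulse timing matters so much; this is a generic sketch, with illustrative values.

```python
C = 299_792_458.0  # speed of light, metres per second

def range_from_round_trip(t_round_trip_s):
    """Distance to the target: the pulse travels there and back,
    so the round-trip time is halved."""
    return C * t_round_trip_s / 2.0

def timing_resolution_needed(range_resolution_m):
    """Round-trip timing resolution required for a given range resolution."""
    return 2.0 * range_resolution_m / C
```

Resolving one centimetre in range requires distinguishing round-trip differences of roughly 67 picoseconds, which is why the rise and fall times of the pulse and the processing speed of the receiver dominate the achievable precision.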
4.4.3 Barometric sensor
A barometric (altitude or pressure) sensor measures the atmospheric pressure of an environment; as altitude rises, the pressure decreases. A few barometric sensors have an integrated high-resolution module with an altitude resolution at sea level of about thirteen centimetres. Such a sensor functions between pressures of 13 mbar and 1200 mbar. It furthermore consumes little power, 0.6 μA in use and ≤ 0.1 μA in standby, and can be used with any microcontroller equipped with an I²C-bus interface.
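Altitude is typically derived from pressure with the international barometric formula; the sketch below assumes the standard-atmosphere sea-level pressure of 1013.25 hPa, which in practice has to be calibrated on site.

```python
def pressure_to_altitude(pressure_hpa, sea_level_hpa=1013.25):
    """Altitude in metres from the international barometric formula."""
    return 44330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** (1.0 / 5.255))
```

Near sea level the pressure changes by roughly 0.12 hPa per metre, so the thirteen-centimetre resolution quoted above corresponds to resolving pressure differences of only about 0.016 hPa, which illustrates why temperature-induced pressure changes disturb the measurement so easily.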
The disadvantage of such a sensor is that temperature changes also change the measured pressure, which affects the obtained data.
4.4.4 Simultaneous Localisation and Mapping
This sub-paragraph explains the SLAM technique, because it can be useful, when investigated further, for fulfilling the wish of an autonomous robot. Robots that use SLAM can map an unknown environment and simultaneously locate themselves on the map they build. The robot uses that map to navigate through the environment and uses specific landmarks to locate itself in it. The maps are usually planar, like a floor plan, and sometimes a 3D map is made; the type of map depends on the situation the robot is in.
Different sensors are used to map, navigate and locate the robot in the environment; common sensors are laser rangefinders, cameras and LIDAR laser scanners. SLAM could be used in the future for the robots of Invert Robotics, because it enables robotic autonomy. It is furthermore a very accurate technique, with up to millimetre precision, and the robot learns from previous maps when it drives through an environment multiple times. However, integrating SLAM into a robot requires a study of the subject to fully understand and implement it.
4.5 Coordinate system for localising the robot
There are many coordinate systems in which the position of the robot can be expressed. The most logical one is the cylindrical coordinate system, because the robot operates in cylindrical storage tanks. Three coordinates are needed to specify the position of the robot inside the storage tank: the radial coordinate, which is the distance from the central axis of the tank to the wall; the angular coordinate (azimuth), which is measured from a fixed reference to the location of the robot; and the height or altitude coordinate. Cylindrical coordinates can easily be converted to Cartesian coordinates. The altitude coordinate is the same in both systems; only the radial and angular coordinates need a conversion. The x coordinate in the Cartesian system is the radial coordinate multiplied by the cosine of the angular coordinate, and the y coordinate is the radial coordinate multiplied by the sine of the angular coordinate.
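The conversion described above can be written down directly (with the azimuth in radians):

```python
import math

def cylindrical_to_cartesian(r, phi, z):
    """Convert (radius, azimuth in radians, height) to Cartesian (x, y, z):
    x = r*cos(phi), y = r*sin(phi), z unchanged."""
    return r * math.cos(phi), r * math.sin(phi), z
```

For example, a robot at radius 2 m and azimuth 90 degrees sits at (0, 2) in the Cartesian plane, at the same height in both systems.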
4.6 Conclusion of the methods for localizing a mobile robot
The Simultaneous Localisation and Mapping technique is one of the best methods for localising the robot in the future, but due to limited time and the breadth of the method it has not been investigated further. In comparison to the other methods, SLAM is the most accurate and reliable and enables robotic autonomy; the fact that it also allows the robot to learn from previous maps makes it the most effective.
The coordinate system that will be used is the cylindrical coordinate system, because the robot operates in cylindrical environments and this system visualises the position of the robot best. Optical flow will be tested to research whether it can be part of the localisation system; this method can be tested fairly easily because Invert Robotics already owns cameras and software that can be programmed to detect optical flow. The laser rangefinder, which is also directly available, will be tested to analyse whether it can be used to calculate the travelled distance in height relative to the ground. Ultra-wide-band technology is a very accurate way of locating and a proven method in indoor environments; it is unknown how it performs in outdoor and ferrous/non-ferrous environments, so it will be tested as described in chapter 6. Finally, a camera that calculates the angle (φ) of the robot on the wall relative to the camera is used in the localisation system, chosen because of the direct availability of a camera and the simplicity of the setup.
5. Design of the localisation prototype
This chapter presents the specified localisation methods that have been chosen in chapter four. Optical flow, the laser rangefinder and ultra-wideband will be discussed. The pros and cons of the methods will also be looked further into.
5.1 Localisation method one: motion detection with optical flow
Optical flow can be detected by any sort of camera, so for the prototype a Logitech C920 at a 640×480 resolution is used. This webcam is used because it was directly available and had been used for earlier vision tests. The camera films the surface on which the robot is driving. The current robot of Invert Robotics uses a downward-facing camera to enable the operator to navigate near weld seams; the same principle is used here, except that the camera only detects optical flow and measures the horizontal and vertical distance travelled by the robot, which is displayed on the graphical user interface. The summarised test results are described in chapter 6; a more detailed test description can be found in appendix 9.
Optical flow alone only detects vertical and horizontal translations; rotations can be derived with additional mathematics and vision algorithms. The processing speed of the complete algorithm has to keep up with the maximum speed of the robot, otherwise drift in the measurements will occur.
Most of the time the inspected surface has almost no features, so a robust algorithm that detects corners, edges and/or colours is needed. Lucas-Kanade optical flow combined with the Shi-Tomasi corner detection method is such a robust algorithm. The parameters of the Shi-Tomasi function can be adjusted so that it detects corners on almost featureless surfaces. The processing speed of the total algorithm will drop, because the function has to go through multiple layers of the frame to search for corners, and the search window needs to be large for the output to be accurate.
5.2 Localisation method two: laser rangefinder
The SF02/F laser rangefinder made by the South African company Lightware is used because of its direct availability. The rangefinder has a range of forty metres in both indoor and outdoor environments and a resolution of one centimetre. It is equipped with analog, digital and serial outputs and interfaces; the serial output can be used to connect the rangefinder to a current Invert Robotics robot to test it thoroughly. The laser rangefinder is placed inside an aluminium housing connected to a slip ring, and wires run through the slip ring from the rangefinder to the serial input of the robot to enable communication. The whole prototype is mounted on a cover of one of the Invert Robotics robots. The laser rangefinder measures the height of the robot relative to the ground surface and therefore always points downwards: the rangefinder, as it were, "dangles" from the cover of the robot, and gravity keeps it pointing down. When the robot makes a rotation, the slip ring rotates too and keeps the rangefinder pointing downwards. This situation is sketched in figure 5-1.
The risk of this solution is that when there is not enough weight on the slip ring and the robot makes a turn, the slip ring will not rotate because the friction is too strong. The laser rangefinder will then not point straight down but at an angle, and because of that angle the obtained data will be off.
5.3 Localisation method three: ultra-wideband
To localise the robot and the found defects with ultra-wideband, the EVK1000 from the Irish company DecaWave is used. The EVK1000 is an evaluation kit that includes two boards and antennas for real-time localisation applications. The boards feature a DW1000 ultra-wideband transceiver and LCD displays that show the distance between the two boards. The DW1000 transceiver can localise objects in real time to a precision of ten centimetres, whether stationary or moving at up to eighteen kilometres per hour.
By default, one board is programmed as a tag and the other as an anchor. The tag is mounted on the robot and initiates communication with the anchor; the anchor stays stationary in the environment and receives signals from, or transmits signals to, the tag. The reflectivity of the storage tank's material has to be taken into account while the ultra-wideband system is being tested, because reflections can cause noise in the UWB signals. When the UWB technology is used outside the storage tank, the non-line-of-sight (NLOS) behaviour of the system has to be taken into account.
5.4 Localisation method four: Visual odometry angle measurement
To measure the angle of the robot on the storage tank wall relative to the camera, computer vision algorithms from the OpenCV library are applied to the footage. Optical flow, Shi-Tomasi, AGAST, FAST, blob detection, background subtraction and colour detection are among the possible algorithms for detecting and following the robot. Once the robot has been detected and can be followed reliably, the angular coordinate in the cylindrical coordinate system can be calculated by subtracting the x and y coordinates of the tank centre from those of the detected robot, taking the inverse tangent of the result, and converting it from radians to degrees to obtain the angular coordinate of the robot in the tank.
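The angle computation described here can be sketched directly; the pixel coordinates of the robot and of the tank centre are assumed inputs from the detection step.

```python
import math

def robot_azimuth_deg(robot_px, centre_px):
    """Azimuth of the robot in the tank, in degrees, from overhead footage.

    Subtracts the tank-centre pixel coordinates from the robot's detected
    pixel coordinates and converts the inverse tangent of the difference
    from radians to degrees, normalised to [0, 360).
    """
    dx = robot_px[0] - centre_px[0]
    dy = robot_px[1] - centre_px[1]
    return math.degrees(math.atan2(dy, dx)) % 360.0
```

A robot detected 100 px to the right of the centre lies at azimuth 0 degrees; one detected 100 px below it (image y grows downward) lies at 90 degrees, so the reference direction and sign convention must be fixed consistently with the drawn map on the GUI.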
The camera will be placed inside the storage tank at its centre. The camera lens is a 180° fisheye lens so that the whole interior of the storage tank can be seen. An IP camera is used to capture the footage because it is an easy and modular solution for data transfer and video capture. To calculate the angular coordinate of the robot very accurately, the camera has to be placed in the exact centre of the tank; this is not entirely possible because the tank floor has a slight slope, so the centre of the tank is estimated.
Furthermore, the vision algorithm needs to cope with stationary and changing light reflections that are caused by incoming light from the manhole or other holes in the storage tank.
