Cognitive Simultaneous Localization and Mapping

Choy, Cheong Weng (2020) Cognitive Simultaneous Localization and Mapping. Final Year Project (Bachelor), Tunku Abdul Rahman University College.

Abstract

Simultaneous Localization and Mapping (SLAM) is a well-known research area, and a great deal of work has been done over the years to improve on the standard, classical SLAM solutions proposed in its early stages. With Artificial Intelligence (AI) now in high demand, this project works toward building an AI-based autonomous robot. A robot that implements a classical SLAM algorithm can already perform mapping and navigation on its own, but the map it builds contains only geometric information, without any cognitive understanding of the room. The information obtained from laser or sonar sensors is very limited, so the robot cannot interpret the real environment the way a human does. Through vision, however, a mobile robot can gather additional information, much as a human recognises the objects in the surroundings and interprets the environment for path planning during navigation. Cognitive SLAM is therefore proposed in this research to address this limitation, since a classical SLAM robot relies only on sonar or laser sensors to detect obstacles and lacks the intelligence to differentiate an unknown environment. In the proposed method, a camera captures images and an object recognition algorithm processes them to differentiate the unknown environment, so that the robot can ultimately recognise places it has visited before. This is the kind of intelligence demanded of future autonomous robots: the ability to make decisions on their own and to recognise unknown environments. In this project, a TurtleBot 3 (Burger model) is used as the hardware platform, and a Raspberry Pi 3 B+ with a Pi Camera captures the images. On the software side, the captured images are processed by the YOLO V3 object recognition algorithm and then passed through a cognitive algorithm that allows the robot to understand the environment cognitively. ROS is used to control the navigation and mapping of the TurtleBot. The TurtleBot runs several simulations so that the accuracy of the system can be improved and fine-tuned.
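
The abstract describes an image-based pipeline: frames captured by the Pi Camera are passed through YOLO V3 object recognition, and the detected objects feed a cognitive step that decides whether the robot has seen a place before. Below is a minimal sketch of that detection-and-matching idea only, using OpenCV's DNN module with a pretrained YOLO V3 network. The file names (yolov3.cfg, yolov3.weights, coco.names), the thresholds, and the object-set "place signature" are illustrative assumptions, not the project's actual implementation.

    # Minimal sketch (not the project's code): run YOLO V3 on a captured frame with
    # OpenCV's DNN module and turn the detected object labels into a crude "place
    # signature" that a cognitive layer could compare against previously seen rooms.
    # yolov3.cfg / yolov3.weights / coco.names and all thresholds are assumptions.
    import cv2
    import numpy as np

    net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
    with open("coco.names") as f:
        class_names = [line.strip() for line in f]

    def detect_objects(frame, conf_threshold=0.5):
        """Return the set of class labels YOLO V3 detects in one camera frame."""
        blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
        net.setInput(blob)
        outputs = net.forward(net.getUnconnectedOutLayersNames())
        labels = set()
        for output in outputs:
            for detection in output:
                scores = detection[5:]            # class scores follow box + objectness
                class_id = int(np.argmax(scores))
                if scores[class_id] > conf_threshold:
                    labels.add(class_names[class_id])
        return labels

    def match_place(labels, known_places, min_overlap=0.6):
        """Simple 'have I been here before?' check: compare the detected object set
        against object sets stored for previously visited places (an assumption)."""
        for name, stored in known_places.items():
            overlap = len(labels & stored) / max(len(stored), 1)
            if overlap >= min_overlap:
                return name
        return None

    # Example: one frame from a camera (device index 0 assumed) checked against memory.
    known_places = {"kitchen": {"refrigerator", "sink", "microwave"}}
    ok, frame = cv2.VideoCapture(0).read()
    if ok:
        seen = detect_objects(frame)
        print(match_place(seen, known_places) or "unknown place", seen)

In the real system the frames would presumably arrive on a ROS image topic published by the Raspberry Pi camera node, and the place decision would be fed back to the ROS navigation and mapping stack; that integration is not shown here.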

Item Type: Final Year Project
Subjects: Technology > Mechanical engineering and machinery
Technology > Electrical engineering. Electronics engineering
Faculties: Faculty of Engineering and Technology > Bachelor of Mechatronics Engineering with Honours
Depositing User: Library Staff
Date Deposited: 24 Apr 2020 16:03
Last Modified: 19 Oct 2020 09:13
URI: https://eprints.tarc.edu.my/id/eprint/14283