Kinect Based Control of a Mobile Robot

Gürkan Küçükyıldız, Suat Karakaya

In this study, Kinect based control of a mobile robot system was examined. A mobile robot platform was developed for this purpose, and the developed algorithms were tested on this platform in real time. The mobile robot was actuated by DC motors. Frames captured from the Kinect sensor, which was mounted on the front of the mobile robot, were processed in the Visual Studio C# environment by the developed image processing algorithm. The distance between the Kinect sensor and the detected skeleton was obtained by this algorithm, and the result was sent to the developed control card via a serial port. The control card drove the actuators with a PD speed control algorithm. As a result, it was observed that the developed system operates successfully and follows the detected skeleton in real time.
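
As a rough illustration of the distance-keeping PD loop described above, the sketch below computes a forward speed command from the measured skeleton distance. It is a Python approximation, not the original C# / control-card implementation; the gains, target distance and frame period are assumed values.

```python
TARGET_DISTANCE_M = 1.5   # assumed following distance
KP, KD = 0.8, 0.1         # illustrative PD gains

def pd_speed(distance_m, prev_error, dt, kp=KP, kd=KD):
    """PD speed command from the error between measured and target skeleton distance."""
    error = distance_m - TARGET_DISTANCE_M
    derivative = (error - prev_error) / dt if dt > 0 else 0.0
    return kp * error + kd * derivative, error

# simulated usage: the robot closes in on a person initially 3 m away
prev_error, dt = 0.0, 0.033           # ~30 Hz Kinect frame period
for distance in (3.0, 2.6, 2.2, 1.9, 1.6, 1.5):
    speed, prev_error = pd_speed(distance, prev_error, dt)
    print(f"distance={distance:.1f} m -> speed command {speed:+.2f}")
```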

Kinect Based Robot Arm Control

Gürkan Küçükyıldız, Suat Karakaya

In this work, instantaneous control of a robot arm with the Kinect was studied. A Kinect sensor and a computer were used in the system developed for this purpose. In addition, a three-axis robot arm was built during the study, and the experiments were performed on it in real time. The motion of the three-axis robot is provided by RC servo motors controlled by an Arduino Uno R3 board. The image obtained from the Kinect camera is skeletonized by an image processing program developed in the Processing 2.0b9 environment in order to find the joints. Vectors are drawn along the human limbs whose angles are to be measured, and the angles between the limbs are computed from these vectors through trigonometric operations. The obtained angle values are sent to the Arduino Uno R3 board via serial communication, and the servo motors that move the robot are then rotated according to these angle values. It was observed in the experiments that the developed system was successful and that the robot arm could imitate the arm movements instantaneously.
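
The limb-angle step (vectors between skeleton joints, angles via trigonometry, angles mapped to servo commands) can be sketched as follows. The joint coordinates and the 0–180° clamping are illustrative assumptions; the real system sends the result to the Arduino over serial rather than printing it.

```python
import math

def joint_angle(a, b, c):
    """Angle (degrees) at joint b formed by the limb vectors b->a and b->c."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    cos_t = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_t))

# example skeleton joints in metres (illustrative values): shoulder, elbow, wrist
shoulder, elbow, wrist = (0.0, 0.4, 2.0), (0.25, 0.15, 2.0), (0.45, 0.35, 2.0)
angle = joint_angle(shoulder, elbow, wrist)
servo = int(max(0, min(180, angle)))   # clamp to the 0-180 range an RC servo expects
print(f"elbow angle {angle:.1f} deg -> servo command {servo}")
```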

Development of a LIDAR System Using a Camera and a Laser

Gürkan Küçükyıldız, Suat Karakaya

In this study, instantaneous measurement of the distances of objects in the environment was studied by using a camera and a laser. In the developed system, a mirror was used so that the camera and the laser could remain fixed while their viewing angle changed. The mirror is integrated into the system at a 45° angle to the camera's line of sight. A geared DC motor is used to rotate the mirror. In this way, the system can acquire data at a desired speed and resolution over a 270-degree area. The code for the system is written in Python, and a development board based on the Atmel ATmega328P processor is used to control the DC motor. It is seen that the developed system scans an area of 360° with a resolution of 3.30° within 1.8 seconds.
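
As a small illustration of how one revolution of such a scan can be turned into a point cloud, the sketch below converts per-step range readings into Cartesian points using the reported 3.30° step. The range values are synthetic, and the function is an assumed host-side post-processing step, not code from the paper.

```python
import math

STEP_DEG = 3.30   # angular resolution reported for the scanner

def scan_to_points(ranges_m, step_deg=STEP_DEG, start_deg=0.0):
    """Convert one revolution of range readings into (x, y) points in metres."""
    points = []
    for i, r in enumerate(ranges_m):
        theta = math.radians(start_deg + i * step_deg)
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# illustrative scan: a wall roughly 2 m away in every direction
fake_scan = [2.0] * int(360 / STEP_DEG)
pts = scan_to_points(fake_scan)
print(len(pts), "points, first:", pts[0])
```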

Image Processing Based Indoor Localization System

Gürkan Küçükyıldız, Suat Karakaya

In this study, a low-cost, image processing based indoor localization system was developed. The image processing algorithm was developed in the C++ programming language with the OpenCV image processing library. Frames were captured by a USB camera set up to operate at the 850 nm wavelength in order to eliminate environmental disturbances. A narrow band-pass filter was integrated into the camera so that only retro-reflective labels are detected. The retro-reflective labels were placed on the ceiling of the indoor area on a pre-determined, equally spaced grid. The approximate location of the mobile robot was obtained from the label identity, and the exact location was obtained from the detected label's position in the image coordinate system. The developed system was tested on a mobile robot platform, and it was observed that the system operates successfully in real time.
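
A rough Python/OpenCV sketch of the label-detection step is given below (the original was written in C++). The threshold value, ceiling height and focal length are assumptions; the sketch keeps only the bright-blob segmentation and the pixel-to-metre projection onto the ceiling plane.

```python
import cv2
import numpy as np

CEIL_HEIGHT_M = 3.0   # assumed height of the ceiling above the camera
FOCAL_PX = 600.0      # assumed focal length in pixels

def locate_labels(ir_frame, min_area=20):
    """Return centroids (px) of bright retro-reflective blobs in an IR frame."""
    _, mask = cv2.threshold(ir_frame, 200, 255, cv2.THRESH_BINARY)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    return [tuple(centroids[i]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]

def pixel_offset_to_metres(cx, cy, frame_shape):
    """Offset of a label from the image centre, projected onto the ceiling plane."""
    h, w = frame_shape[:2]
    dx_px, dy_px = cx - w / 2.0, cy - h / 2.0
    return (dx_px * CEIL_HEIGHT_M / FOCAL_PX, dy_px * CEIL_HEIGHT_M / FOCAL_PX)

frame = np.zeros((480, 640), np.uint8)
cv2.circle(frame, (400, 300), 6, 255, -1)          # synthetic reflective label
for cx, cy in locate_labels(frame):
    print("label offset (m):", pixel_offset_to_metres(cx, cy, frame.shape))
```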

Development and Optimization of a DSP Based Real Time Lane Detection Algorithm on a Mobile Robot Platform

Gürkan Küçükyıldız

In this study, the development and optimization of a Hough transform based real time lane detection algorithm was explored. The main goal of the system was to find lane markings in captured video frames by using the Hough transform. The image processing code was developed in the VisualDSP++ 5.0 environment and run on the Blackfin BF561 processor of the ADSP-BF561 EZ-KIT Lite evaluation board. The code was optimized into a form that is satisfactory for real time applications. A mobile robot platform was developed during the study, and the image processing algorithm was tested on this platform. The experimental results obtained before and after the optimization of the code were compared.
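
The lane-detection pipeline is easier to illustrate off the DSP; the sketch below is a desktop Python/OpenCV approximation of the steps (edge detection, road-region masking, probabilistic Hough line extraction), not the optimized BF561 code, and all thresholds are assumed values.

```python
import cv2
import numpy as np

def detect_lane_lines(frame_bgr):
    """Return candidate lane segments from a road image via Canny + Hough."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    h, w = edges.shape
    edges[: h // 2, :] = 0                         # keep only the lower half (road region)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 40,
                            minLineLength=40, maxLineGap=20)
    return [] if lines is None else [tuple(l[0]) for l in lines]

frame = np.zeros((240, 320, 3), np.uint8)
cv2.line(frame, (60, 230), (150, 130), (255, 255, 255), 3)   # synthetic lane mark
print(detect_lane_lines(frame))
```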

Design and Navigation of a Robotic Wheelchair

Gürkan Küçükyıldız, Suat Karakaya

In this study, the design and navigation of a robotic wheelchair for disabled or elderly people was explored. The developed system consists of a wheelchair, a high-power motor controller card, a Kinect camera, an RGB camera, an EMG sensor, an EEG sensor, and a computer. The Kinect camera was installed on the system in order to provide safe navigation: depth frames captured by the Kinect were processed with the developed image processing algorithm to detect obstacles around the wheelchair. The RGB camera was mounted on the system in order to detect head movements of the user. Head movement has the highest priority for controlling the system; if any head movement is detected, all other sensors are disabled. The EMG sensor was selected as the second control input. A consumer grade EMG sensor (Thalmic Labs) was used to obtain eight-channel EMG data in real time. Four different hand movements (fist, release, left and right) were defined to control the system using EMG. The EMG data was classified with different classification algorithms (ANN, SVM and random forest), and the most voted class was selected as the result. EMG based control can be activated or disabled by the user making a fist or release gesture for three seconds. EEG based control has the lowest priority for controlling the robotic wheelchair. A wireless 14-channel EEG sensor (Emotiv EPOC) was used to collect real time EEG data. Three different cognitive tasks (solving mathematical problems, relaxing, and a social task) were defined to control the system using EEG; if the system detects neither a head movement nor an EMG signal, EEG based control is activated, and the user issues a command by accomplishing the corresponding cognitive task. During the experiments, all users could easily control the robotic wheelchair with head movements and EMG gestures. The success of EEG based control varied with user experience: experienced and inexperienced users obtained different results.
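
The EMG voting stage can be sketched roughly as follows. The features are synthetic stand-ins for windowed eight-channel EMG features, and off-the-shelf scikit-learn models substitute for the original ANN/SVM/random forest implementations; only the majority-vote idea is taken from the abstract.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

GESTURES = ["fist", "release", "left", "right"]

# synthetic stand-in for windowed 8-channel EMG features (e.g. RMS per channel)
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 8)) + 2 * np.repeat(np.arange(4), 100)[:, None]
y = np.repeat(GESTURES, 100)

models = [MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
          SVC(),
          RandomForestClassifier(random_state=0)]
for m in models:
    m.fit(X[::2], y[::2])                 # train each classifier on half the windows

def vote(sample):
    """Most-voted gesture among the three classifiers, as in the abstract."""
    preds = [m.predict(sample.reshape(1, -1))[0] for m in models]
    return max(set(preds), key=preds.count)

print(vote(X[1]))   # a window drawn from the "fist" block
```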

Image Processing Based Package Volume Detection with Kinect

Gürkan Küçükyıldız, Suat Karakaya

In this study, an image processing based package volume detection scheme that utilizes the Kinect depth sensor was developed in the Matlab environment. A background subtraction method was used to obtain, from the Kinect depth image, the foreground image that contains the package to be measured. Connected components labeling was used to segment the foreground image; out of the resulting components, the one with the maximum pixel area overlapping the measuring plate was assumed to be the package of interest. The package orientation angle and center point were then determined. A Hough transform was applied to the package image to obtain the lines that pass through the package edges, and the package corners were obtained by finding the four intersection points of the detected lines. Real world coordinates of the package corners were calculated using the Kinect's intrinsic matrix. The package width and length were determined by finding the distances between the corners in the real world coordinate system. Finally, the package height was determined by differencing the plate depth and the average depth value of the points on the package surface. It was observed that the algorithm performed successfully and that the measurement error was within 1 cm in the presence of various disturbance effects.
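
The corner back-projection and sizing step can be sketched as below, a Python approximation of the MATLAB pipeline; the intrinsic matrix, corner pixels and depth values are illustrative, not measured ones.

```python
import numpy as np

# assumed Kinect-like intrinsics (fx, fy, cx, cy); the real values come from calibration
K = np.array([[570.0, 0.0, 320.0],
              [0.0, 570.0, 240.0],
              [0.0,   0.0,   1.0]])

def back_project(u, v, depth_mm):
    """Pixel (u, v) with depth in mm -> 3-D point (mm) in the camera frame."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    z = float(depth_mm)
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

# illustrative corner pixels of the detected package top face and the measured depths
corners_px = [(250, 180), (390, 185), (385, 300), (245, 295)]
plate_depth_mm, top_depth_mm = 1200.0, 1050.0
corners_3d = [back_project(u, v, top_depth_mm) for u, v in corners_px]

width  = np.linalg.norm(corners_3d[0] - corners_3d[1])
length = np.linalg.norm(corners_3d[1] - corners_3d[2])
height = plate_depth_mm - top_depth_mm
print(f"{width:.0f} x {length:.0f} x {height:.0f} mm")
```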

A Hybrid Indoor Localization System Based on Infra-red Imaging and Odometry

Gürkan Küçükyıldız, Suat Karakaya

In this study, a real-time indoor localization system was developed by using a camera and passive landmarks. A narrow band-pass infra-red (IR) filter was inserted behind the camera lens to capture IR images. The passive landmarks were placed on the ceiling at pre-determined locations and consist of IR retro-reflective tags that carry binary coded unique IDs. An IR projector emits IR rays at the tags on the ceiling; the tags reflect the rays back to the camera sensor, creating a digital image. An image processing algorithm was developed to detect and decode the landmarks in the captured images. The proposed algorithm successfully estimates the position and the orientation angle of the robot from its relative position and orientation with respect to the detected tags. To further improve the accuracy of the estimates, an extended Kalman filter (EKF) was incorporated into the measurement algorithm. The proposed method initially estimates the position of the mobile robot based on odometry and the kinematic model; the EKF is then used to update the estimates given the measurements obtained from the image processing system. Real time experiments were performed to test the performance of the system. The results show that the proposed indoor localization system can effectively estimate the position with an error of less than 5 cm.
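
A minimal sketch of the odometry-prediction / camera-correction loop is given below, with a full-pose measurement model and assumed noise covariances; the paper's actual motion and measurement models may differ.

```python
import numpy as np

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

class PoseEKF:
    """EKF over (x, y, theta): odometry prediction + camera pose correction."""
    def __init__(self):
        self.x = np.zeros(3)
        self.P = np.eye(3) * 0.1
        self.Q = np.diag([0.01, 0.01, 0.005])   # assumed process noise
        self.R = np.diag([0.02, 0.02, 0.01])    # assumed measurement noise

    def predict(self, v, w, dt):
        th = self.x[2]
        self.x += np.array([v * dt * np.cos(th), v * dt * np.sin(th), w * dt])
        self.x[2] = wrap(self.x[2])
        F = np.array([[1, 0, -v * dt * np.sin(th)],
                      [0, 1,  v * dt * np.cos(th)],
                      [0, 0, 1]])
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z):
        y = z - self.x                 # H is identity: the camera measures the full pose
        y[2] = wrap(y[2])
        K = self.P @ np.linalg.inv(self.P + self.R)
        self.x += K @ y
        self.x[2] = wrap(self.x[2])
        self.P = (np.eye(3) - K) @ self.P

ekf = PoseEKF()
ekf.predict(v=0.3, w=0.1, dt=0.1)                   # odometry step
ekf.update(np.array([0.031, 0.001, 0.012]))         # landmark-based pose measurement
print(ekf.x)
```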

Development of a Human Tracking Indoor Mobile Robot Platform

Gürkan Küçükyıldız, Suat Karakaya

In this paper, a differential drive mobile robot platform was developed in order to perform indoor mobile robot research. The mobile robot can be localized and remotely controlled; the remote control consists of a pair of 2.4 GHz transceivers. The localization system was built using infrared reflectors, infrared LEDs and a camera, and runs in real time on an industrial computer placed on the mobile robot. The localization data of the mobile robot is transmitted by a UDP communication program, so the localization information can be received by any computer or other UDP-capable device. In addition, a LIDAR (Light Detection and Ranging, or Laser Imaging Detection and Ranging) sensor and a Kinect three-dimensional depth sensor were mounted on the mobile robot platform. The LIDAR was used for obstacle and heading direction detection, and the Kinect for acquiring depth data of the close environment. In this study, a mobile robot platform with the specifications mentioned above was developed, and a human tracking application was realized in real time in the MATLAB and C# environments.
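
One way the UDP localization broadcast could look is sketched below; the address, port and JSON packet format are assumptions rather than the protocol used on the platform.

```python
import json
import socket
import time

# stand-in target; the real system would send to the listening computer
# or to the LAN broadcast address instead of localhost
UDP_TARGET = ("127.0.0.1", 5005)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_pose(x_m, y_m, theta_rad):
    """Send the current robot pose as a small JSON datagram."""
    packet = json.dumps({"x": x_m, "y": y_m, "theta": theta_rad, "t": time.time()})
    sock.sendto(packet.encode(), UDP_TARGET)

send_pose(1.25, 3.40, 0.52)
```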
