Kinect Based Control of a Mobile Robot

Gürkan Küçükyıldız, Suat Karakaya

In this study, Kinect based control of a mobile robot system was examined. A mobile robot platform was developed for this purpose, and the developed algorithms were tested on this platform in real time. The mobile robot was actuated by DC motors. Frames captured from the Kinect sensor, which was placed at the front of the mobile robot, were processed in the Visual Studio C# environment by the developed image processing algorithm. The distance between the Kinect sensor and the detected skeleton was obtained by this algorithm, and the results were sent to the developed control card via the serial port. The control card drove the actuators with a PD speed control algorithm. As a result, it was observed that the developed system operates successfully and follows the detected skeleton.
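The distance-to-speed step can be sketched as a discrete PD controller. This is an illustrative sketch, not the authors' code: the target distance, gains, and control period are assumed values.

```python
# Sketch of a PD speed controller that turns the Kinect-to-skeleton
# distance into a motor speed command. All constants are assumptions
# for illustration, not the values used in the study.

TARGET_DISTANCE_M = 1.5   # desired gap to the tracked skeleton (assumed)
KP, KD = 0.8, 0.2         # proportional and derivative gains (assumed)
DT = 0.033                # control period, ~30 Hz Kinect frame rate

def pd_speed(distance_m, prev_error, kp=KP, kd=KD, dt=DT):
    """Return (speed_command, error) from the measured distance."""
    error = distance_m - TARGET_DISTANCE_M       # positive: robot too far
    derivative = (error - prev_error) / dt       # rate of change of error
    return kp * error + kd * derivative, error
```

The previous error is carried between frames so the derivative term damps the approach as the robot closes in on the person.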

Kinect Based Robot Arm Control

Gürkan Küçükyıldız, Suat Karakaya

In this work, instantaneous control of a robot arm with the Kinect was investigated. The system developed for this purpose used a Kinect sensor and a computer. In addition, a three-axis robot was developed during the study, and the experiments were performed on this robot in real time. The movement of the three-axis robot is provided by RC servo motors controlled by an Arduino Uno R3 card. The image obtained from the Kinect camera is skeletonized by the image processing program developed in the Processing 2.0b9 environment in order to find the joints. Vectors are drawn along the human limbs whose angles are to be measured, and the angles between the limbs are computed from these vectors through trigonometric operations. The obtained angle values are sent to the Arduino Uno R3 board via serial communication, and the servo motors that move the robot are then rotated according to these angle values. The experiments showed that the developed system was successful and that the robot arm could imitate the operator's arm movements instantaneously.
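The angle-between-limbs step described above can be sketched with the dot product. This is a hypothetical minimal implementation (2-D joint coordinates are assumed; the function name is illustrative):

```python
# Sketch of computing the angle at a joint between two limb vectors,
# as the abstract describes. Points are (x, y) joint coordinates from
# the skeletonized image; the names are assumptions for illustration.
import math

def limb_angle_deg(joint, end_a, end_b):
    """Angle in degrees at `joint` between vectors joint->end_a and joint->end_b."""
    ax, ay = end_a[0] - joint[0], end_a[1] - joint[1]
    bx, by = end_b[0] - joint[0], end_b[1] - joint[1]
    dot = ax * bx + ay * by
    norm = math.hypot(ax, ay) * math.hypot(bx, by)
    # Clamp to [-1, 1] to guard against floating-point drift before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
```

An angle like this, computed per joint each frame, is what would be sent over the serial link as a servo target.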

Design and Navigation of a Robotic Wheel Chair

Gürkan Küçükyıldız, Suat Karakaya

In this study, the design and navigation of a robotic wheelchair for disabled or elderly people were explored. The developed system consists of a wheelchair, a high-power motor controller card, a Kinect camera, an RGB camera, an EMG sensor, an EEG sensor, and a computer. The Kinect camera was installed on the system in order to provide safe navigation: depth frames captured by the Kinect were processed with the developed image processing algorithm to detect obstacles around the wheelchair. The RGB camera was mounted on the system in order to detect the head movements of the user. Head movement has the highest priority for controlling the system; if any head movement is detected, all other sensors are disabled. The EMG sensor was selected as the second controller of the system. A consumer-grade EMG sensor (Thalmic Labs) was used to obtain eight channels of EMG data in real time. Four different hand movements (fist, release, left, and right) were defined to control the system using EMG. The EMG data was classified by different classification algorithms (ANN, SVM, and random forest), and the most voted class was selected as the result. EMG based control can be activated or disabled by the user making a fist or release gesture for three seconds. EEG based control has the lowest priority for controlling the robotic wheelchair. A wireless 14-channel EEG sensor (Emotiv EPOC) was used to collect real-time EEG data. Three different cognitive tasks (solving mathematical problems, relaxing, and a social task) were defined to control the system using EEG. EEG based control is activated only if the system detects neither a head movement nor an EMG signal, and to issue a command the user should accomplish the corresponding cognitive task. During the experiments, all users could easily control the robotic wheelchair by head movements and EMG gestures. The success of EEG based control varied with user experience: experienced and inexperienced users obtained different results.
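The "most voted class" fusion of the three classifiers can be sketched as a simple majority vote. This is an assumed implementation for illustration; the label names and function are hypothetical, not taken from the paper:

```python
# Sketch of majority-vote fusion over the per-classifier predictions
# (ANN, SVM, random forest), each predicting one of the four hand
# movements. Gesture labels below are illustrative.
from collections import Counter

GESTURES = ("fist", "release", "left", "right")

def most_voted(predictions):
    """Return the class label predicted by the most classifiers.

    predictions: list of labels, one per classifier, e.g. 3 entries.
    """
    return Counter(predictions).most_common(1)[0][0]
```

With three classifiers and four classes, at least two classifiers must agree for a clear majority; `Counter.most_common` breaks remaining ties by first occurrence.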

Image Processing Based Package Volume Detection with Kinect

Gürkan Küçükyıldız, Suat Karakaya

In this study, an image processing based package volume detection scheme that utilizes the Kinect depth sensor was developed in the Matlab environment. The background subtraction method was used to obtain, from the Kinect depth image, the foreground image that contains the package to be measured. Connected components labeling was used to segment the foreground image; of the resulting components, the one with the maximum pixel area overlapping the measuring plate was assumed to be the package of interest. The package orientation angle and center point were then determined. The Hough transform was applied to the package image to obtain the lines that pass through the package edges, and the package corners were obtained as the four intersection points of the detected lines. Real-world coordinates of the package corners were calculated using the Kinect's intrinsic matrix, and the package width and length were determined from the distances between the corners in the real-world coordinate system. Finally, the package height was determined as the difference between the plate depth and the average depth of the points on the package surface. It was observed that the algorithm performed successfully and that the measurement error was within 1 cm in the presence of various disturbance effects.
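The back-projection step, mapping a corner pixel plus its depth to metric camera coordinates with the intrinsic matrix, can be sketched with the pinhole model. The focal lengths and principal point below are commonly cited approximate values for the Kinect depth camera, not the authors' calibration:

```python
# Sketch of back-projecting a depth pixel (u, v, depth) to real-world
# camera coordinates (X, Y, Z) via the pinhole model. The intrinsic
# parameters are assumed typical Kinect values, not calibrated ones.

FX, FY = 585.6, 585.6   # focal lengths in pixels (assumed)
CX, CY = 316.0, 247.6   # principal point in pixels (assumed)

def pixel_to_world(u, v, depth_m):
    """Back-project one depth pixel to metric camera coordinates."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return x, y, depth_m

def edge_length(p, q):
    """Euclidean distance between two back-projected corners."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
```

Applying `pixel_to_world` to the four corner pixels and `edge_length` to adjacent corners yields the width and length; multiplying by the plate-to-surface depth difference gives the volume.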

Development of a Human Tracking Indoor Mobile Robot Platform

Gürkan Küçükyıldız, Suat Karakaya

In this paper, a differential drive mobile robot platform was developed in order to perform indoor mobile robot research. The mobile robot was localized and remote controlled; the remote control consists of a pair of 2.4 GHz transceivers. The localization system was developed using infrared reflectors, infrared LEDs, and a camera system, and runs in real time on an industrial computer placed on the mobile robot. The localization data of the mobile robot is transmitted by a UDP communication program, so the transmitted localization information can be received by any computer or any other UDP device. In addition, a LIDAR (Light Detection and Ranging, or Laser Imaging Detection and Ranging) sensor and a Kinect three-dimensional depth sensor were mounted on the mobile robot platform. The LIDAR was used for obstacle and heading direction detection, and the Kinect for acquiring depth data of the close environment. On this platform, a human tracking application was realized in real time in the MATLAB and C# environments.
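For a differential drive platform like this one, the drive command reduces to two wheel speeds. The following is a minimal sketch of standard differential-drive kinematics under an assumed wheel base, not code from the paper:

```python
# Sketch of differential-drive kinematics: convert a desired body
# linear velocity v (m/s) and turn rate w (rad/s) into left/right
# wheel speeds. The wheel base is an assumed value.

WHEEL_BASE_M = 0.40  # distance between the two drive wheels (assumed)

def wheel_speeds(v_mps, w_radps, wheel_base=WHEEL_BASE_M):
    """Return (v_left, v_right) wheel speeds in m/s."""
    v_left = v_mps - w_radps * wheel_base / 2.0
    v_right = v_mps + w_radps * wheel_base / 2.0
    return v_left, v_right
```

Equal speeds give straight-line motion; opposite-sign speeds spin the robot in place, which is how such a platform keeps a tracked person centered in its field of view.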

Copyright © 2017

Romeda Bilgi Teknolojileri Ltd. Şti.