Kinect-Based Control of a Mobile Robot

Gürkan Küçükyıldız, Suat Karakaya

In this study, Kinect-based control of a mobile robot system was examined. A mobile robot platform was developed for this purpose, and the developed algorithms were tested on this platform in real time. The mobile robot was actuated by DC motors. Frames captured from the Kinect sensor, which was placed in front of the mobile robot, were processed in the Visual Studio C# environment by the developed image processing algorithm. The distance between the Kinect sensor and the detected skeleton was obtained by this algorithm, and the results were sent to the developed control card via serial port. The control card drove the actuators with a PD speed control algorithm. As a result, it was observed that the developed system operates successfully and follows the detected skeleton.
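The distance-keeping loop described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the gains and the 1.5 m target distance are illustrative assumptions.

```python
# Hypothetical PD loop: drive forward/backward to hold a target distance
# to the tracked skeleton. Gains and target are placeholder values.
KP, KD = 0.8, 0.2
TARGET_DISTANCE_M = 1.5

def pd_speed_command(distance_m, prev_error, dt):
    """Return a motor speed command and the current distance error."""
    error = distance_m - TARGET_DISTANCE_M      # positive -> person too far, move forward
    derivative = (error - prev_error) / dt      # damps the approach
    command = KP * error + KD * derivative
    return command, error
```

In the real system this command would be sent to the control card over the serial port each frame.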

Kinect Based Robot Arm Control

Gürkan Küçükyıldız, Suat Karakaya

In this work, real-time control of a robot arm with Kinect was studied. A Kinect sensor and a computer were used in the system developed for this purpose. In addition, a three-axis robot was developed during the study, and the experiments were performed on this robot in real time. The movement of the three-axis robot is provided by RC servo motors controlled by an Arduino Uno R3 card. The image obtained from the Kinect camera is skeletonized by the image processing program developed in the Processing 2.0b9 environment to find the joints. Vectors are drawn along the human limbs, and the angles between the limbs are computed from these vectors through trigonometric operations. The obtained angle values are sent to the Arduino Uno R3 board via serial communication, and the servo motors that move the robot are rotated according to these angle values. It was observed in the experiments that the developed system was successful and that the robot arm could imitate the operator's arm movements instantaneously.
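The trigonometric step above, computing an angle between two limb vectors, can be sketched with the standard dot-product formula (a generic illustration, not the Processing code from the study):

```python
import math

def joint_angle_deg(v1, v2):
    """Angle between two limb vectors (e.g. upper arm and forearm), in degrees."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    # Clamp to [-1, 1] to guard against floating-point drift before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))
```

The resulting degree value is what would be sent over serial to position the corresponding servo.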

Development of a LIDAR System Using Camera and Laser

Gürkan Küçükyıldız, Suat Karakaya

In this study, instantaneous distances of objects in the environment were measured using a camera and a laser. In the developed system, a mirror was used so that the camera and the laser remain fixed while the viewing angle of both is changed. The mirror is integrated into the system at a 45° angle to the focal line of the camera. A geared DC motor rotates the mirror, so the system can acquire data at the desired speed and resolution over a 270° area. The code for the system was written in the Python environment, and a development card based on the Atmel ATmega328P processor was used to control the DC motor. The developed system was observed to scan an area of 360° with a resolution of 3.30° within 1.8 seconds.
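The camera-plus-laser ranging principle above is typically triangulation: the laser dot's pixel offset from the optical centre gives the range via similar triangles. A minimal sketch, with focal length and baseline as illustrative assumptions rather than the study's calibration:

```python
# Hypothetical camera-laser triangulation: range = f * b / disparity.
FOCAL_LENGTH_PX = 700.0   # assumed camera focal length, pixels
BASELINE_M = 0.05         # assumed laser-to-camera baseline, metres

def range_from_pixel_offset(offset_px):
    """Distance (metres) to the surface hit by the laser beam."""
    if offset_px <= 0:
        raise ValueError("laser dot must be offset from the optical centre")
    return FOCAL_LENGTH_PX * BASELINE_M / offset_px
```

Repeating this measurement while the mirror rotates yields one range reading per mirror step, building up the scan.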

Image Processing Based Indoor Localization System

Gürkan Küçükyıldız, Suat Karakaya

In this study, an image processing based, low-cost indoor localization system was developed. The image processing algorithm was developed in the C++ programming language with the OpenCV image processing library. Frames were captured by a USB camera designed to operate at an 850 nm wavelength to eliminate environmental disturbances. A narrow band-pass filter was fitted to the camera so that only retro-reflective labels are detected. The retro-reflective labels were placed on the ceiling of the indoor area on a pre-determined, equally spaced grid. The approximate location of the mobile robot was obtained from the label identity, and the exact location was obtained from the detected label's position in the image coordinate system. The developed system was tested on a mobile robot platform, and it was observed that the system operates successfully in real time.
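The two-stage coarse/fine localization above can be sketched as follows. The grid spacing, pixel scale, and image centre are illustrative assumptions, not the paper's calibration:

```python
# Hypothetical two-stage localisation: the label ID gives the grid cell
# (coarse), and the label's pixel offset refines the position (fine).
GRID_SPACING_M = 1.0        # assumed distance between ceiling labels
M_PER_PIXEL = 0.002         # assumed metres per image pixel at ceiling height
IMAGE_CENTRE = (320, 240)   # assumed optical centre of the upward camera

def robot_position(label_row, label_col, label_px):
    """Coarse position from the label's grid cell, refined by its pixel offset."""
    coarse_x = label_col * GRID_SPACING_M
    coarse_y = label_row * GRID_SPACING_M
    # The label's offset from the image centre mirrors the robot's offset
    # from the point directly under the label.
    dx = (label_px[0] - IMAGE_CENTRE[0]) * M_PER_PIXEL
    dy = (label_px[1] - IMAGE_CENTRE[1]) * M_PER_PIXEL
    return coarse_x - dx, coarse_y - dy
```

A label seen exactly at the image centre would place the robot directly beneath that grid point.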

DSP Based Real Time Lane Detection Algorithm

Development And Optimization Of DSP Based Real Time Lane Detection Algorithm On A Mobile Robot Platform

Gürkan Küçükyıldız

In this study, the development and optimization of a Hough transform based real-time lane detection algorithm were explored. The main goal of the system was finding lane marks in captured video frames using the Hough transform. The image processing code was developed in the VisualDSP++ 5.0 environment and run on the BF561 processor embedded in the ADSP-BF561 EZ-KIT Lite evaluation board. The code was optimized into a form satisfactory for real-time applications. A mobile robot platform was developed during the study, and the image processing algorithm was tested on this platform. The experimental results obtained before and after the optimization of the code were compared.
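The Hough voting step at the core of the algorithm can be sketched in a few lines. This is a generic pure-Python illustration of the technique, not the optimized DSP code; the accumulator resolutions are placeholders:

```python
import math

# Each edge pixel votes for every (rho, theta) line passing through it;
# peaks in the accumulator correspond to lane-mark candidates.
def hough_votes(edge_pixels, theta_steps=180, rho_max=400):
    acc = {}
    for x, y in edge_pixels:
        for t in range(theta_steps):
            theta = math.pi * t / theta_steps
            rho = int(round(x * math.cos(theta) + y * math.sin(theta)))
            if -rho_max <= rho <= rho_max:
                acc[(rho, t)] = acc.get((rho, t), 0) + 1
    return acc
```

The optimization work in the study targets exactly this kind of inner loop, since the naive form is far too slow for per-frame execution on an embedded processor.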

Real Time Position and Speed Control on DC Motor

Real Time DSPIC Based Position and Speed Control of DC Motor with PID and Fuzzy Logic

Gürkan Küçükyıldız, Gökhan Taşçı

In this study, real-time position and speed control of a brushed, series-excited DC motor was realized using PID and fuzzy logic control methods. By applying different reference values to the system input, the aim is to keep the position and speed variables at the desired reference value. The necessary code for the system was developed in the C18 environment and run in real time on a dsPIC33FJ128MC804 processor. The experimental data obtained with the PID and fuzzy logic methods are compared in the results section.
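The PID law referred to above has the standard discrete form sketched below. The gains are illustrative placeholders, not the tuned values from the study:

```python
# Discrete-time PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, reference, measurement):
        error = reference - measurement
        self.integral += error * self.dt                  # accumulate I term
        derivative = (error - self.prev_error) / self.dt  # finite-difference D term
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

On the dsPIC this update would run once per sampling period, with the measurement taken from the encoder or tachometer and the output driving the motor's PWM duty cycle.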

Encoder-Based Localization, Obstacle Detection on a Mobile Robot Platform

Gürkan Küçükyıldız, Suat Karakaya

In this study, a mobile robot that is sensitive to its environment was developed and tested under different obstacle conditions. The mobile robot senses obstacles via a laser range finder (LIDAR) sensor mounted on its body. The robot has two front wheels, each coupled to a separate DC motor, and a single caster as the rear wheel. The real-time location of the mobile robot was computed from the encoders coupled to the front wheels. This location information was plotted on a user interface developed in the Visual C# 2010 environment. Obstacle and heading-direction detection was developed in the Visual Basic 6.0 environment.
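The encoder-based localization above is standard differential-drive odometry. A minimal sketch, with wheel radius, track width, and encoder resolution as illustrative assumptions:

```python
import math

# Assumed robot geometry and encoder resolution (placeholders).
WHEEL_RADIUS_M = 0.05
TRACK_WIDTH_M = 0.30
TICKS_PER_REV = 1000

def update_pose(x, y, heading, left_ticks, right_ticks):
    """Integrate one encoder sample into the robot pose (x, y, heading)."""
    dl = 2 * math.pi * WHEEL_RADIUS_M * left_ticks / TICKS_PER_REV
    dr = 2 * math.pi * WHEEL_RADIUS_M * right_ticks / TICKS_PER_REV
    dc = (dl + dr) / 2                      # displacement of the robot centre
    dtheta = (dr - dl) / TRACK_WIDTH_M      # change in heading
    # Integrate along the mean heading over the interval.
    x += dc * math.cos(heading + dtheta / 2)
    y += dc * math.sin(heading + dtheta / 2)
    return x, y, heading + dtheta
```

Accumulating this update at each sampling instant yields the trajectory that the user interface plots.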

Design and Navigation of a Robotic Wheel Chair

Gürkan Küçükyıldız, Suat Karakaya

In this study, the design and navigation of a robotic wheelchair for disabled or elderly people were explored. The developed system consists of a wheelchair, a high-power motor controller card, a Kinect camera, an RGB camera, an EMG sensor, an EEG sensor, and a computer. The Kinect camera was installed to provide safe navigation: depth frames captured by the Kinect were processed with the developed image processing algorithm to detect obstacles around the wheelchair. The RGB camera was mounted to detect the user's head movements, which have the highest priority for controlling the system; if any head movement is detected, all other sensors are disabled. The EMG sensor was selected as the second controller of the system. A consumer-grade EMG sensor (Thalmic Labs) was used to obtain eight channels of EMG data in real time. Four hand movements were defined to control the system with EMG: fist, release, left, and right. The EMG data was classified with different classification algorithms (ANN, SVM, and random forest), and the most voted class was selected as the result. EMG-based control can be enabled or disabled by the user holding a fist or a release gesture for three seconds. EEG-based control has the lowest priority for controlling the robotic wheelchair. A wireless 14-channel EEG sensor (Emotiv EPOC) was used to collect real-time EEG data, and three cognitive tasks were defined to control the system: solving mathematical problems, relaxing, and a social task. If the system detects neither a head movement nor an EMG signal, EEG-based control is activated; to issue a command, the user must accomplish the corresponding cognitive task. During the experiments, all users could easily control the robotic wheelchair with head movements and EMG gestures. The success of EEG-based control varied with user experience: experienced and inexperienced users obtained different results.
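The "most voted class" fusion step, where the ANN, SVM, and random forest predictions are combined, amounts to simple majority voting. A minimal sketch (the gesture labels are the four movements named above; everything else is generic):

```python
from collections import Counter

def majority_vote(predictions):
    """Pick the most common label among the classifiers' outputs.

    predictions: one label per classifier, e.g. ['fist', 'fist', 'left'].
    """
    return Counter(predictions).most_common(1)[0][0]
```

With three classifiers, the result is unambiguous whenever at least two of them agree; in a full tie each classifier proposes a different label, and some tie-break rule (here, first-seen order) must decide.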

Image Processing Based Package Volume Detection with Kinect

Gürkan Küçükyıldız, Suat Karakaya

In this study, an image processing based package volume detection scheme that utilizes the Kinect depth sensor was developed in the Matlab environment. A background subtraction method was used to obtain the foreground image containing the package to be measured from the Kinect depth image. Connected components labeling was used to segment the foreground image. Out of the components determined by connected components labeling, the one with the maximum pixel area overlapping the measuring plate was assumed to be the package of interest. The package orientation angle and center point were then determined. The Hough transform was applied to the package image to obtain the lines that pass through the package edges. The package corners were obtained by finding the four intersection points of the detected lines. Real-world coordinates of the package corners were calculated using the Kinect's intrinsic matrix. Package width and length were determined by finding the distance between the corners in the real-world coordinate system. Finally, the package height was determined by differencing the plate depth and the average depth value of points on the package surface. It was observed that the algorithm performed successfully and the measurement error was within 1 cm in the presence of various disturbance effects.
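The back-projection step, mapping a detected corner pixel plus its depth into real-world coordinates via the intrinsic matrix, follows the pinhole model. The intrinsic values below are typical published Kinect v1 calibration figures used only as placeholders:

```python
# Assumed Kinect-like intrinsics (placeholders, not the study's calibration).
FX, FY = 594.2, 591.0     # focal lengths, pixels
CX, CY = 339.3, 242.7     # principal point, pixels

def pixel_to_world(u, v, depth_m):
    """Map image pixel (u, v) at the given depth to (X, Y, Z) in metres."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return x, y, depth_m

def edge_length_m(p, q):
    """Euclidean distance between two back-projected corners."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
```

Applying `edge_length_m` to adjacent back-projected corners gives the package width and length, as in the abstract.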

Brain Computer Interface with Low Cost Commercial EEG Device

Gürkan Küçükyıldız, Suat Karakaya

In this study, a brain computer interface (BCI) system was explored. Instead of high-cost EEG devices, a low-cost commercial EEG device (EMOTIV) was used. Raw EEG data was obtained using the research edition SDK of EMOTIV. The EMOTIV EEG device has 14 channels (10-20 placement) for EEG and two channels (x- and y-axis gyro: GYROX, GYROY) for head movements. Head movements and eye blinks can affect the EEG data and are usually referred to as artifacts. In this study, the raw EEG data was pre-processed using the x- and y-axis gyro data and the two front EEG channels, namely AF4 and F8, in order to determine whether the data is artifact-free or not. EEG data was collected from subjects who were asked to accomplish two cognitive tasks: pushing a cube and relaxing. Subjects performed each task for a duration of five seconds over 20 trials. The acquired EEG data was divided into 0.25-second epochs, and epochs determined to have artifacts were discarded. Power spectral density (PSD) and time-domain based features were extracted from the artifact-free epochs. The features were then used to train a Support Vector Machine (SVM) to determine the corresponding task. The performance of the SVM classifier was compared to that of an Artificial Neural Network (ANN) based one. Experimental results show the efficacy of the SVM-based scheme.
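The epoching and feature-extraction stage above can be sketched as follows. This is a simplified stand-in: the 128 Hz rate is the EMOTIV device's nominal sampling rate, and the two time-domain features shown (mean absolute value, variance) only illustrate the kind of features used alongside PSD in the study:

```python
SAMPLE_RATE_HZ = 128
EPOCH_S = 0.25
EPOCH_LEN = int(SAMPLE_RATE_HZ * EPOCH_S)   # 32 samples per epoch

def epochs(signal):
    """Split one channel into consecutive 0.25 s epochs (drop any remainder)."""
    return [signal[i:i + EPOCH_LEN]
            for i in range(0, len(signal) - EPOCH_LEN + 1, EPOCH_LEN)]

def features(epoch):
    """Simple time-domain features: mean absolute value and variance."""
    mav = sum(abs(s) for s in epoch) / len(epoch)
    mean = sum(epoch) / len(epoch)
    var = sum((s - mean) ** 2 for s in epoch) / len(epoch)
    return mav, var
```

In the full pipeline, artifact-flagged epochs would be dropped before feature extraction, and the resulting feature vectors fed to the SVM or ANN classifier.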

Copyright © 2017

Romeda Bilgi Teknolojileri Ltd. Şti.