- 1 INTRODUCTION
- 2 LITERATURE REVIEW
- 3 METHODOLOGY
- 4 Implementation and Components
- 5 EXPERIMENTAL SETUP
- 6 RESULTS AND ANALYSIS
- 7 FUTURE ENHANCEMENTS
- 8 CONCLUSION AND LIMITATIONS
The advent of new high-speed technology and growing computing capacity have opened realistic opportunities for new robot controls and for realizing new methods of control theory. Robotics is one field within artificial intelligence, and image processing has made a huge contribution to automation. Industries typically have a high need for robots that work quickly and efficiently with little human intervention, yet a highly efficient robot that works with its own intelligence may still be required. This intelligence can be provided by robotic arms guided by image processing.
The intelligent arm holds a Pi camera for image acquisition and object detection. Captured frames are sent to the Raspberry Pi, which performs image processing functions such as preprocessing (image rotation, resizing, grayscale conversion) to manipulate the image and feature extraction to pull the necessary information from it. Using the Open Source Computer Vision (OpenCV) library with Python, the object in the image is identified and localized; the arm, which is built from servo motors, is then commanded to pick up the required object and place it at the desired nearby location. The extracted features (parameters in compliance with the classifier) are sent to the classifier to recognize the object, and once this is finalized, the output is the object's type along with its coordinates, ready for the robotic arm to execute the pick-and-place task.
Further, for remote-control applications, the arm is mounted on a remote-controlled vehicle operated over radio frequency, where the speed of the vehicle is controlled with the help of an Arduino UNO.
The biggest motivation is the need for high-tech helping-hand robots in large-scale industrial fields, in the context of today's competitive market, for efficient work with low human effort in less time.
- Detect and locate objects, and sort them according to object class.
- Pick up a desired object (grasping) and place it at a desired location.
- Provide a locomotory robot with an arm, using a remote-controlled carrier.
The low efficiency and difficulty of performing industrial tasks manually, along with the large amount of time they consume, clearly demand a new approach. Human vision is not perfect for doing such tasks efficiently, and humans tire. In today's scenario there is demand for robots with high accuracy, high output, and minimal error; precise or repetitive work is better done by robots. For a robot, the sensor or camera serves as its common sense: image processing detects and identifies an object and its characteristics, which lets the robotic arm perform a task such as sorting, while manual control over an RF-controlled carrier allows remote application of the arm.
In manufacturing industries there arises a need to sort objects. The objects may be of similar or different types. The system should be able to detect the objects and then differentiate them from each other based on their properties. Objects may have different shapes or different colours, or they may be of the same shape and colour but different texture. Thus, different objects and different conditions require different types of processing. Our aim is to classify objects using different image processing algorithms on parameters such as colour, shape and texture. The input to the system is a video, which is converted into frames and then processed to detect colour, shape or texture.
To reduce human effort in mechanical manoeuvring, different types of robotic arms are being developed. These arms are costly and complex owing to their design and fabrication process. Most robotic arms are designed to handle repetitive jobs. In the design of a robotic arm, several parameters must be taken care of: the mechanical structure needs enough strength, optimum weight, load-bearing capacity, speed of movement and suitable kinematics. In the electronic design, the specifications of the motors, drives, sensors and control elements are to be considered. On the software side, reconfigurability, the user interface, implementation and compatibility are to be considered. Some representative records of robotic arm work are given as follows:
1. Object detection and tracking using image processing (2014):
Abstract: This paper mainly focuses on the basis for implementing object detection and tracking based on colour and shape.
Proposed approach: The paper implements object detection and tracking based on colour.
2. Real Time Object Detection & Tracking System (locally and remotely) with Rotating function (2013):
Abstract: This paper presents an implementation of real-time detection and tracking of an unknown object.
Proposed approach: Detection of a moving object is necessary for any surveillance system.
3. Mobile Robot for Object Detection Using Image Processing (2012):
Abstract: This paper describes a robotic application that tracks a moving object by utilizing a mobile robot with sensors and image processing.
Proposed approach: In the majority of surveillance and video-tracking systems, the sensors are stationary.
4. Color Image Processing and Object Tracking System (1996):
Abstract: This report describes a personal-computer-based system for automatic and semi-automatic tracking of objects on film or video tape.
Proposed approach: The tracking system achieves automation by integrating the discrete components into a cohesive system.
5. Practical Applications of Robotic Hand using Image Processing (2015):
Abstract: This paper presents various applications of a robotic hand that uses image processing.
Proposed approach: The robot and robotic arm provide the main function and are useful to human workers in industry.
Methodology is the analysis of the tasks to be done in order to obtain the desired output. An appropriate methodology largely determines whether a project succeeds. Here, for this system, a number of methodologies were considered and the most efficient ones were used; this does not mean that one particular method is used, but that the most appropriate ones are used in combination according to the system. The model used here is an iterative model: in the beginning a small subset of the software requirements is developed, and then, using the concept of redesign and redevelopment, further versions are enhanced. This process continues until the desired system is developed and produces the results stated in the system requirements. The methodology, once decided, is changed during the project if circumstances arise where the design shows flaws; thus, appropriate methodologies are implemented based on the situation. Hence, in our scenario, the methodology comprises four different steps:
1: Creation of the bot: In this phase the parts of the robot are assembled and interfaced. It contains two sections, connection and interfacing.
(a) Connections: In the connection part we design the wiring of the robot, i.e. where each piece of hardware should be placed or connected.
(b) Interfacing: In this section we create the interface between the hardware and the software.
2: Programming for object detection: In this phase we write the program for object detection. The program is developed in Python using the OpenCV library.
3: Implementation of image processing on the bot: This phase comes after the programming phase. The object-detection program is installed on the Raspberry Pi board. Implementing image processing has two subsections:
(a) Object detection: The object can be given to the bot in two ways, either directly via programming or through the camera.
(b) Feature extraction: After detecting the object, the important task is feature extraction. For extracting features, the given image is converted from BGR to grayscale, which helps to detect the size and shape of the object.
4: Testing: In this phase we test the object-detection program.
This section gives a detailed review of the design on which the developed system is implemented. It includes:
Block Diagram for Object Detection and Automatic Robotic Arm
The block diagram shows the Raspberry Pi powered by a regulated DC supply. A Pi camera interfaced with the Raspberry Pi captures the image of the object, and image processing is done by a program written in Python with the OpenCV library. OpenCV library functions provide the details required for detecting and extracting the object's shape, position and appearance. The given image is converted from BGR to grayscale for the feature extraction process. The Python code inspects per-pixel details of the object, along with RGB values, to identify it, and then instructs the robotic arm, which is constructed from multiple servos controlled through a separate library, to pick and place the pre-specified object. Servo motors can produce back electromotive force (back EMF) with a high voltage spike that can damage the controller; to protect it, either servo motor drivers are used or a diode is added to block the back EMF. For this purpose we use a driver module.
MANUAL REMOTE CONTROLLED CARRIER
The block diagram of the robot is shown in Fig. 1. It has two major sections: (a) the transmitter and (b) the receiver and motor driver with an Arduino Uno board. The transmitter circuit is built around encoder IC HT12E (IC1), a 433 MHz RF transmitter module (TX1) and a few discrete components. The receiver and motor driver circuit is built around the Arduino UNO board (BOARD1), decoder IC HT12D (IC2), a 433 MHz RF receiver module (RX1), motor driver IC L293D (IC3), regulator IC 7805 (IC4) and a few discrete components.
Algorithm and Process
1. Train the required object using the cascade trainer.
2. Take an input image from the Pi camera (5 MP).
3. Import subprocess modules and initialize the GPIO pins.
4. Perform image preprocessing using preprocessing algorithms written in Python with the OpenCV library.
5. Apply the feature extraction algorithm (blob analysis) and detect objects according to the classification specification.
6. Test and generate output.
7. The Raspberry Pi controls the movement of the servos according to the programmed algorithm.
8. The robotic arm is targeted and adjusted to the centroid position of the object to pick it up and sort it.
Object Training: Training the object is the most challenging part. A number of positive and negative samples must be collected and given as input for processing into a cascade classifier; the resulting file is an XML file. Negative images should not contain even a portion of a positive image. A negative image can be any image that is not the positive image, but in practice negative images should be relevant to the positive ones. For example, using sky images as negatives is a poor choice for training a good car classifier, even though it does not badly affect overall accuracy. The Common, Cascade and Boost tabs can be used to set numerous parameters for customizing the classifier training. Cascade Trainer GUI sets the most optimized and recommended values for these parameters by default, yet some parameters still need to be adjusted for each training run. A detailed description of all these parameters is beyond the scope of this report and requires deep knowledge of cascade classification techniques. The pre-calculation buffer size can be set to help with the speed of the training process; assign as much memory as you can, but be careful not to assign too much or too little.
Video Processing: A video containing objects of different shapes and colours is captured, with the camera placed so that the top view of the objects is recorded. This video is then converted into a series of images. For this we use the OpenCV library and the Python language, where different modules and functions are used for feature extraction and colour conversion. The images also contain noise, which must be removed, and because the number of images is quite large, redundant images need to be removed as well. Two types of redundancy can exist. There may be images in which the object is only partially visible or touches the boundary of the image; such images must be removed because they cannot be used for sorting, and they are detected by scanning the boundaries while extracting the features of the images. The original image has already been trained with the Cascade Trainer tool. The original image is then compared with the cleared-border image: if both images are the same, the original image need not be deleted; if they differ, the original image is deleted.
Images can be compared by calculating the extent and determining the colour of the objects contained in the image. Extent is the ratio of the area of an object to the area of its bounding box. These images are given as input to the shape, colour and texture algorithms.
The Raspberry Pi comes with GPIO pins that can be used for input and output operations. The servo motors can easily be handled using these pins by activating PWM, i.e. pulse width modulation. The duty cycle set in the function decides the rotation of the servos. When the video and image processing are done, the resulting output decides the activation of these motors as programmed.
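A minimal sketch of driving a servo from a GPIO pin with PWM (the pin number and the 2.5-12.5 % duty-cycle range are common hobby-servo assumptions, not values documented in this report; RPi.GPIO is only importable on the Pi itself):

```python
try:
    import RPi.GPIO as GPIO  # only present on the Raspberry Pi
except ImportError:
    GPIO = None

SERVO_PIN = 18  # assumed BCM pin number

def angle_to_duty(angle):
    """Map 0-180 degrees onto a 2.5-12.5 % duty cycle at 50 Hz (typical hobby servo)."""
    return 2.5 + (angle / 180.0) * 10.0

def setup_servo():
    """Configure the pin and start a 50 Hz PWM carrier at the mid position."""
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(SERVO_PIN, GPIO.OUT)
    pwm = GPIO.PWM(SERVO_PIN, 50)  # hobby servos expect a 50 Hz pulse train
    pwm.start(angle_to_duty(90))
    return pwm
```

A call such as `pwm.ChangeDutyCycle(angle_to_duty(140))` would then rotate the servo toward a sorting position, matching how the duty cycle set in the function decides the rotation.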
a) Shape Detection Algorithm: For shape detection of objects, the algorithm uses the bounding-box method. The shapes defined for classification are rectangle, square, circle, hexagon and triangle. When an image is read and captured from the video, it is in RGB format, which is the true-colour format for an image. The captured RGB image is three-dimensional, and each pixel is represented by an element of a matrix whose size corresponds to the size of the image.

The next step is converting the RGB image to a black-and-white image, which is done in two stages. The RGB image is first converted to a two-dimensional grayscale image. The grayscale image is simply a matrix holding the luminance (Y) values of the image. We obtain the luminance values by combining the RGB values using the NTSC standard equation, which weights the primary colours (red, green and blue) by the sensitivity of our eyes to them: Y = 0.3 ∗ R + 0.59 ∗ G + 0.11 ∗ B. The luminance image is then converted to a black-and-white (binary) image by a process called thresholding; a colour conversion function converts the grayscale image to a binary one. The image is now a two-dimensional array with binary elements.

Boundaries of the objects are recognized by first setting a single pixel on the object-background interface as a starting point, then moving in a clockwise or counter-clockwise direction searching for other object pixels. Once the object boundaries have been recognized, the area of the object can easily be calculated by summing the number of pixels within the boundary extent; the image is also filtered to remove small, isolated noise pixels. The bounding box of an object is an imaginary rectangle that completely encloses the object, with its sides always parallel to the axes. The figure below illustrates the concept of a bounding box.
It is worth noting that the dimensions of the bounding box change with the angle of inclination of the object. However, to make shape recognition independent of rotation, the dimensions of the bounding box must be held constant, because the area of the bounding box is an important parameter used to classify the shape of the object. The next step is finding the ratio of the area of an object to the area of its bounding box:

Extent = area of the object / area of the bounding box

For circles this value is around 0.7853 (π/4), irrespective of the radius. The corresponding value for rectangles and squares is approximately 1.0000, provided the sides are parallel to the axes and overlap the bounding box. The objects are first made independent of rotation, so the value of Extent remains constant. Thus, an object whose Extent is approximately 0.7853 is a circle, while an object whose Extent is approximately 1.0000 may be a square or a rectangle; the decision between the two is made from the dimensions of the object. An object with equal sides is obviously a square; an object with two unequal pairs of sides is obviously a rectangle. The extents for triangles and hexagons can be calculated similarly.
b) Colour Detection Algorithm: For colour detection of objects, the algorithm uses RGB colour space. The colours defined for classification are Red, Green, Blue and Black.
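A mean-colour rule over the object's pixels is one simple way to realize this (the channel ordering is BGR, as OpenCV stores it; the exact cut-offs are assumptions, not values from the build):

```python
import numpy as np

def classify_colour(bgr_pixels):
    """Classify an object region as red, green, blue or black by its mean colour."""
    b, g, r = np.mean(bgr_pixels.reshape(-1, 3), axis=0)
    if r < 60 and g < 60 and b < 60:   # all channels dark -> black
        return "black"
    if r > g and r > b:
        return "red"
    if g > r and g > b:
        return "green"
    return "blue"
```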
Implementation and Components
The Raspberry Pi is used as the processing hardware, running the Raspbian OS. The whole object-detection code is written in Python, and image processing is handled by the OpenCV libraries. Using the Raspberry Pi, the image processing drives the robotic arm control for picking and placing the desired object.
The hardware comprises the robot arm, the RPi camera, the Raspberry Pi, relays, and a power supply.
Raspberry Pi (Model B+):
A Raspberry Pi (Model B+) is used in our model. The Raspberry Pi is a small PC board running Linux or other small operating systems. It was developed by the Raspberry Pi Foundation in the UK for use in computer science education. The Raspberry Pi needs an external Secure Digital (SD) card to store its operating system and all user data. Hence the Raspberry Pi can be used as a really powerful microcontroller that can accomplish almost any function, and it can also act as a normal computer with a keyboard, mouse and monitor connected.
Raspberry pi camera:
The RPi camera board plugs directly into the CSI connector on the Raspberry Pi. It is able to deliver a crystal-clear 5 MP still image or 1080p HD video recording at 30 fps. The latest v1.3 board features a 5 MP (2592 x 1944 pixels) OmniVision 5647 sensor in a fixed-focus module. The module attaches to the Raspberry Pi by way of a 15-pin ribbon cable to the dedicated Camera Serial Interface (CSI), which was designed especially for interfacing with cameras. The CSI bus is capable of extremely high data rates, and it exclusively carries pixel data to the BCM2835 processor.
Servo Motors and Arm
A servomotor is a rotary actuator or linear actuator that allows for precise control of angular or linear position, velocity and acceleration. It consists of a suitable motor coupled to a sensor for position feedback. It also requires a relatively sophisticated controller, often a dedicated module designed specifically for use with servomotors.
Servomotors are not a specific class of motor, although the term servomotor is often used to refer to a motor suitable for use in a closed-loop control system. Servomotors are used in applications such as robotics, CNC machinery and automated manufacturing.
Remote Controlled Carrier Components
1. Arduino UNO board
2. Holtek encoder-decoder pair (HT12E/HT12D)
3. RF Modules
4. DC Motors
5. Push buttons (Switch)
The Raspbian OS, based on Linux, is used on the Raspberry Pi. Since 2015 it has been officially provided by the Raspberry Pi Foundation as the primary operating system for the family of Raspberry Pi single-board computers. Raspbian uses PIXEL (Pi Improved X-Window Environment, Lightweight) as its main desktop environment as of the latest update. It is composed of a modified LXDE desktop environment and the Openbox stacking window manager with a new theme and a few other changes.
Python programming language
Python is an interpreted high-level programming language for general-purpose programming. Python has a design philosophy that emphasizes code readability, notably using significant whitespace. It provides constructs that enable clear programming on both small and large scales. Python features a dynamic type system and automatic memory management.
Python interpreters are available for many operating systems. CPython, the reference implementation of Python, is open source software and has a community-based development model, as do nearly all of its variant implementations. CPython is managed by the non-profit Python Software Foundation.
It allows programmers to express concepts in fewer lines of code than is possible in languages such as C++ or Java.
OpenCV(Open Source Computer Vision) is a library of programming functions mainly aimed at real-time computer vision. Originally developed by Intel, it was later supported by Willow Garage then Itseez (which was later acquired by Intel). The library is cross-platform and free for use under the open-source BSD license.
OpenCV’s application areas include:
- Ego motion estimation
- Facial recognition system
- Gesture recognition
- Human–computer interaction (HCI)
- Mobile robotics
- Motion understanding
- Object identification
- Segmentation and recognition
- Stereopsis (stereo vision): depth perception from two cameras
- Structure from motion (SFM)
- Motion tracking
- Augmented reality
To support some of the above areas, OpenCV includes a statistical machine learning library that contains:
- Decision tree learning
- Gradient boosting trees
- Expectation-maximization algorithm
- k-nearest neighbor algorithm
- Naive Bayes classifier
- Artificial neural networks
- Random forest
- Support vector machine (SVM)
- Deep neural networks (DNN)
4. Cascade Trainer tool
Cascade Trainer GUI is a program that can be used to train, test and improve cascade classifier models. It uses a graphical interface to set the parameters and make it easy to use OpenCV tools for training and testing classifiers.
Hardware Configuration used for Testing
Computer Model: DELL 5110
Physical Memory (RAM): 4.00 GB DDR2
Processor: Intel(R) Core(TM) i5-2450M CPU, 2.5 GHz
System Type: 64-bit Operating System, x64-based processor
Cache Size: 4096 KB
OS: Windows 10 Enterprise
The Raspberry Pi is a series of small single-board computers. In this project it is used as the brain for controlling the devices. Object detection is done by the Pi camera, which is connected to the Raspberry Pi, and the result is displayed on the laptop monitor.
RESULTS AND ANALYSIS
The figures above show the results of image processing on two different objects detected in real time by the Pi camera. Detection is done at the pixel level by extracting the features of each pixel as far as possible. However, there was a slight problem because of improper lighting.
Table 1: Final output
| SN | Object  | Detection | Accuracy (%) | Servo1 (110°) | Servo2 (0°) | Result              |
|----|---------|-----------|--------------|---------------|-------------|---------------------|
| 1  | Arduino | Yes       | ~75          | 140           | 180         | Sorted successfully |
| 2  | Foam    | Yes       | ~70          | 140           | 90          | Sorted successfully |
| 3  | Soap    | Yes       | ~50          | 140           | 150         | Sorted successfully |
7. FUTURE ENHANCEMENTS
With the completion of our project, there are certain enhancements that can be made. They are as follows:
- Along with industrial use, it can be used for surveillance and could be useful in cases where humans cannot reach, for example in mines, where it could detect a required object.
- It can be made fully automatic by adding artificial intelligence.
- The arm can be extended with a higher degree of freedom so that the limitations on its movement are removed.
CONCLUSION AND LIMITATIONS
With all the accumulated effort invested in this project, there are reasons to believe that by the end of this semester the project will find itself in much better shape and considerably closer to actual acceptance than it was. We summarize the progress with respect to the main objectives of the project, namely accuracy and speed.
- Accuracy: This was the main obstacle for the project. We constantly used and tested many different algorithms for similarity comparison. Errors in object detection hurt accuracy where objects were not properly trained; however, we succeeded in object detection and sorting.
- Speed: Speed was also a challenging factor for this project. The requirement for shorter processing time made it difficult to balance accuracy and speed. We used the Raspberry Pi as the main tool for image processing with the help of the Pi camera, but due to resolution issues and low FPS there were problems when detecting images repeatedly.
The object sorting algorithms stated above detect objects and classify them on different parameters. The automatic object sorting system is developed with a view to decreasing human effort and widening the use of such systems in manufacturing and packaging industries, where there is a need to sort objects and then perform operations on them. The system also proves cost-efficient, since it eliminates the manpower required to manage the object queue and to sort the objects.
The performance shown by the Pi camera (5 MP) was not very satisfactory. The small frame also hindered detecting the whole body of an object. The results might be better with a web camera offering higher resolution and better image capture; however, that would not fit our whole setup.
Workload on the Raspberry Pi
A single Raspberry Pi processor handles both image detection and servo arm control at the same time, so video processing slows down while the arm is working.
Detection of similar objects
As we have not used any high-level artificial intelligence, our system has problems distinguishing similar objects.
NOTE: THIS PROJECT WAS PERFORMED BY :
RAJA RAM SUKAJU