This machine implements sorting of objects on the basis of color and size. The
arm of the machine drops each object into a different place or box. Color
detection is done by a Pi camera using machine vision techniques. Image
processing has led to advancements in applications of robotics and embedded
systems. Sorting of objects is usually done by humans, which takes a lot of time
and effort. Using computer vision techniques (image processing), a conveyor belt
system is developed using stepper motors, servo motors, and mechanical structures,
which can identify and sort various objects. Many techniques detect the color of
objects on the basis of their RGB values. This reduces human effort and time
consumed, and also improves the time to market the products. It is a simple
concept to implement that sorts effectively, saving manual time and work. Here, we
sort objects based on color and size. For color detection, video of the
object is captured by the Pi camera. The video stream is converted
into image format, and the RGB values are converted into HSV values: the hue
describes the color, the saturation represents the amount of gray in the color,
and the value gives the brightness or intensity of the color. The range of each
color is defined and a corresponding mask is created. Finally, a contour is
created to display the detected color. For size detection, we use the Canny edge
detection technique, which finds the size of the object by computing the area of
its contour.





There is a great history of robotics, so vast that it cannot be contained in any
single document. Every day around the world, many robotic projects are carried out
to fulfill some kind of human need. A robot is an electro-mechanical
machine that is guided by a computer program or electronic circuitry. When we
look into the history of robotics, we find different stages of development of
different kinds of robots. But before heading towards such things, it is better to
know about robots. Materials handling involves the movement, storage, control, and
protection of materials during their manufacturing, distribution, consumption, and
disposal. There are different material handling systems and equipment in industrial
plants that use conveyor systems. A conveyor moves objects from the source to the
terminal instead of moving objects with people, owing to its continuous operating
speed and the consistency of objects in movement. Material handling systems range
from simple pallet racks and shelving projects to complex overhead conveyor systems,
automated storage, and retrieval systems. Material handling also consists of sorting
and picking. In recent times, various sorting systems have been developed. The
applications of sorting vary from agricultural products and consumer manufactured
products to books, etc. Constantine and Michael in 2002 reported that every sorting
methodology can be classified based on the specification of two issues: the form of
the criteria aggregation model developed for sorting purposes, and the
methodology employed to define the parameters of the sorting model. A few
studies were also based on automatic sorting, manual sorting, and online sorting
methods. For example, some researchers proposed a sorting system that can organize
different materials automatically without human aid, using a double-acting
pneumatic cylinder to push each material into its corresponding box along
the conveyor belt. Other methods include electrophoresis, morphological
transformation for labeling of materials, magnetophoresis, and fluorescence-activated
image segmentation. For many years, most machine vision systems
operated only in black-and-white. Even today, the majority of machine vision
applications are still monochrome. Yet there is a growing number of applications
where color imaging is both required and able to provide significant advantages. The
use of color in machine vision applications has increased significantly in the past
ten years, accompanied by steady improvements in both camera
technology and the algorithms that support color machine vision applications. As a
result, many more machine vision system designers find themselves facing
new challenges as they embark on building systems where color is a critical factor.
Determining real-time and highly accurate characteristics of small objects in
fast-flowing streams would open new directions for industrial sorting processes. In
the past, industries used to hire people to sort industrial equipment manually,
which was time-consuming and costly because of the large number of workers required.


The proposed system performs automatic object sorting and implements a material
handling technique for selected items using a robotic system. It synchronizes the
movement of the robotic arm to sort the objects. It aims to classify the colored
objects coming along the motor-driven conveyor belt based on their color, i.e. based
on RGB values. A large percentage of the visible spectrum can be created using these
colors, but the disadvantage of this approach is its limited range of sensitivity
and its dependence on environmental conditions. The visible spectrum is the portion
of the electromagnetic spectrum that is visible to the human eye. Electromagnetic
radiation in this range of wavelengths is called visible light or simply light. A
typical human eye will respond to wavelengths from about 380 to 740 nanometers.
In terms of frequency, this corresponds to a band in the vicinity of 430–770 THz.
The proposed system uses a Raspberry Pi 4, a camera, and servo motors. Color
sorters are mainly used in agricultural machinery such as rice sorters and bean
sorters, in plastic granule sorting, for colored nuts and bolts, etc. They reduce
human effort, labor, and cost, and increase efficiency, since mechanized sorting is
much faster than manual sorting. When an object comes onto the conveyor belt, the
robotic arm picks, rotates, and places the object in its respective place according
to color and size. The Pi camera is used to detect the color and size of the
object, and the arm places the object in its respective position.


Manual sorting of differently colored objects has been a tedious task in
agriculture, shopping malls, industries, and so on. It consumes a lot of human
effort and is not reliable. This automatic sorting machine sorts objects based on
their color, greatly reducing human effort. It also makes sorting fast and free of
the errors that can occur during manual sorting, and it provides substantial
automation in agriculture, shopping malls, and industries.


To detect the color and size of the object using the Pi camera.

To place the object in its respective position using the robotic arm.


This system can be implemented in industry, agriculture, shopping malls,
and so on. The applications of the proposed system are:
In the field of agriculture, to sort ripe and immature fruits, vegetables, etc.
In the field of industry, to sort differently colored packages.
In shopping malls, where clothes of different colors have to be separated.
In the field of agriculture, to sort different grains.


Color is the most important feature for the classification and sorting of objects
based on their color, because of the ever-growing need to supply high-quality
products within a short time. Providing quality goods is a vital aspect of
modern life.
“Conveyor line object sorting by image processing” [1] aims to sort objects that
arrive on the conveyor in random sequence into their correct positions based on
color. The image captured by the camera can be processed with different image
processing techniques to determine the color of the object and sort it on the basis
of the color recognized.
“Object sorting robot based on the shape” [2]: object sorting is one of the most
important tasks in industries. A robotic arm implemented with an ARM7 can sort
objects based on their shape using image processing techniques. This
system uses a camera and a Raspberry Pi as the controller. The camera captures the
image of the object and processes it to determine the shape of the object.
“Object sorting by robotic arm using image processing” [3]: the computer vision
is carried out with the assistance of OpenCV, and the robotic arm is driven by a
microcontroller. Different algorithms built into the microcontroller enable the
robotic arm to sort out objects with faults such as missing drill holes, improper
shape, or other defects.
“Object sorting in manufacturing industries using image processing” [4]: in
industries, objects may differ in color, size, and texture. The main objective of
this project is to sort the different objects based on color, size, and texture.
The system uses a camera, and the input to the microcontroller is video. The video
is processed using video processing techniques that convert the video into frames
and process them to extract the information.
“Object sorting using image processing” [5]: automation has led to the growth
of industries in recent years, and image processing has led to advancements in
applications of robotic and embedded systems. Using computer vision techniques, a
conveyor belt system is developed using stepper and servo motors and mechanical
structures which can sort the objects.
“Color and shape-based object sorting” [6]: this project presents a mechatronic
sorting system based on an object's color and shape. Previous systems used
inductive and capacitive sensors to detect the objects. This system uses a webcam
to take a picture of the object, and different image processing algorithms are
used to detect the shape and size of the objects.
“Intelligent segregation system” [7]: this system proposes an automatic object
sorting system using an IR sensor, a camera, and a flipper system. The flipper
system is used on the conveyor belt, the camera takes pictures of the objects, and
image processing algorithms identify the color and shape of the objects. The main
aim of this system is to reduce human effort and make the process automatic and
smart.
“Object sorting system using robotic arm” [8]: this system sorts objects on the
basis of color and size using a pick-and-place robotic system. A camera is
interfaced with an Arduino microcontroller, and the size and color of the objects
are determined using image processing. The proposed system is automatic and needs
no manual sorting mechanism, which makes it effective and smart.
“Object sorting robot using image processing” [9]: this system uses a camera to
find the color of the object and whether or not it has sharp edges. Motors are used
for robot operation and an Arduino is used as the microcontroller. This system
proposes a smart and fully automated way to sort the objects.



The entire system contains a Raspberry Pi 4, a camera module, servo motors, an
ultrasonic sensor, a stepper motor, and a conveyor belt. The Raspberry Pi 4 acts as
the controller of the whole system; its inputs are the camera and the ultrasonic
sensor. Based on the attributes of the image, the controller drives the sorting
mechanism.


In the working process, the DC motor receives power from the power supply and
starts to rotate, moving the conveyor belt and the object on it in the forward
direction. The forward movement of the belt brings the object near the camera;
when the object is sensed by the ultrasonic sensor, the conveyor belt stops for
some time so that the colored object can be identified with the help of the
camera. The image and size of the colored object are fed to the control unit
(Raspberry Pi) for further operation, and the controller then sends the
information to the arm, which picks the object and places it in a prescribed area
based on its color and size. After placing the object, the arm comes back to its
initial position and waits for the next object to arrive, and the controller
restarts the conveyor belt to bring the next object under the camera, and so on.
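The pick-and-place cycle described above can be sketched in Python. The bin names and the area threshold below are purely illustrative assumptions; the real controller would drive the belt, camera, and arm hardware at each step.

```python
# Hypothetical sketch of the controller's sorting decision: given the
# color and contour area reported by the vision stage, pick a drop-off
# bin. Threshold and bin labels are illustrative, not from the build.

def choose_bin(color, area):
    """Map a detected (color, contour area) pair to a bin label."""
    size = "large" if area >= 5000 else "small"
    return f"{color}-{size}"

def sorting_cycle(detections):
    """Simulate one conveyor pass: each detection stops the belt and the
    arm places the object, then the belt restarts for the next object."""
    placements = []
    for color, area in detections:
        placements.append(choose_bin(color, area))  # arm pick-and-place
    return placements
```

For example, `sorting_cycle([("red", 6000), ("green", 1200)])` yields `["red-large", "green-small"]`.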


The implementation of detecting multiple colors in real time using the Python
programming language is given below.


Step 1: Capture video through the Pi camera.
Step 2: Read the video stream into image format.
Step 3: Convert the image from the RGB color space (represented as three matrices
of red, green, and blue with integer values from 0-255) to the HSV (hue-
saturation-value) color space.
Hue: Describes the color.
Saturation: Represents the amount of gray in that color.
Value: Denotes the brightness or intensity of the color.
Step 4: Define the range of each color and create the corresponding mask.
Step 5: Apply a morphological transform to remove noise from the image.
Step 6: Perform a bitwise AND between the image frame and the mask to extract the
specific color.
Step 7: Create the contour for each color to display the detected color.
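As a rough, self-contained sketch of Steps 3, 4, and 6, the snippet below converts a single RGB pixel to OpenCV-style HSV using the standard library's `colorsys` and checks it against per-color hue ranges. The hue ranges and the saturation/value cutoffs are illustrative assumptions; in the real system, OpenCV's `cv2.cvtColor` and `cv2.inRange` apply the same idea to every pixel to build a binary mask.

```python
import colorsys

def rgb_to_hsv_opencv(r, g, b):
    """Convert 8-bit RGB to OpenCV-style HSV (H: 0-179, S/V: 0-255)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return int(h * 179), int(s * 255), int(v * 255)

# Assumed (lower, upper) hue ranges for a few colors -- illustrative only.
HUE_RANGES = {"red": (0, 10), "green": (50, 70), "blue": (110, 130)}

def classify(r, g, b, min_sat=100, min_val=100):
    """Name the color of a pixel, or 'none' if it is too grey or dark."""
    h, s, v = rgb_to_hsv_opencv(r, g, b)
    if s < min_sat or v < min_val:
        return "none"  # low saturation/value: no confident color
    for name, (lo, hi) in HUE_RANGES.items():
        if lo <= h <= hi:
            return name
    return "none"
```

A mask is then just this test applied per pixel: 255 where `classify` matches the target color, 0 elsewhere.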




The size detection algorithm for our project is given below:
Step 1: Read an image from the video.
Step 2: Convert the image into a grayscale image.
Step 3: Blur the image to remove noise.
Step 4: Perform Canny edge detection.
Step 5: Apply a morphological transform: dilation.
Step 6: Perform erosion.
Step 7: Find the contours.
Step 8: Find the size by computing the area of each contour.
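Step 8 boils down to a polygon-area computation. The sketch below uses the shoelace formula on a contour given as a list of vertices; in the real pipeline, OpenCV's `cv2.contourArea` plays this role.

```python
# Area of a contour via the shoelace formula: sum the signed cross
# products of consecutive vertices and halve the absolute value.

def contour_area(points):
    """Area of a simple polygon given as [(x, y), ...] vertices."""
    n = len(points)
    area = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]  # wrap around to close the contour
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0
```

A 10 × 10 square contour gives an area of 100, which can then be compared against a size threshold to label the object small or large.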
Canny Edge Detection
The basic steps involved in this algorithm are:
Step 1: Noise reduction using a Gaussian filter: A Gaussian filter is used to
remove noise from the image, because noise can otherwise be mistaken for edges by
the edge detector owing to sudden intensity changes. The elements of the Gaussian
kernel sum to 1; a kernel of size 5 × 5 with sigma = 1.4 is used, which blurs the
image and removes the noise from it. The equation for the Gaussian filter kernel
is:

G(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))
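The kernel described above can be generated directly from this equation. The sketch below builds the 5 × 5, sigma = 1.4 kernel and normalizes it so that the weights sum to 1.

```python
import math

def gaussian_kernel(size=5, sigma=1.4):
    """Build a normalized size x size Gaussian kernel."""
    k = size // 2
    # Evaluate G(x, y) on the integer grid centred at (0, 0).
    kernel = [[math.exp(-(x * x + y * y) / (2 * sigma * sigma))
               / (2 * math.pi * sigma * sigma)
               for x in range(-k, k + 1)]
              for y in range(-k, k + 1)]
    # Normalize so the weights sum to exactly 1.
    total = sum(sum(row) for row in kernel)
    return [[v / total for v in row] for row in kernel]
```

The largest weight sits at the centre cell, so the blur preserves each pixel's own value most strongly.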
Step 2: Gradient calculation along the horizontal and vertical axes: Once the
image is smoothed, the derivatives Ix and Iy are calculated with respect to the x
and y axes. This can be implemented by convolving the Sobel-Feldman kernels with
the image:

Kx = [ −1  0  1 ]        Ky = [ −1  −2  −1 ]
     [ −2  0  2 ]             [  0   0   0 ]
     [ −1  0  1 ]             [  1   2   1 ]

The gradient approximations at pixel (x, y), given a 3×3 portion of the source
image, are calculated as follows:
Ix = x-direction kernel * (3×3 portion of the image with (x, y) as the center cell)
Iy = y-direction kernel * (3×3 portion of the image with (x, y) as the center cell)

  • The above is not normal matrix multiplication; * denotes the convolution
    operation.

From Ix and Iy, the gradient magnitude and angle at each pixel are:

|G| = √(Ix² + Iy²)
θ(x, y) = tan⁻¹(Iy / Ix)

Below is an example 5×5 Gaussian kernel that can be used:

G = (1/159) · [ 2  4  5  4  2 ]
              [ 4  9 12  9  4 ]
              [ 5 12 15 12  5 ]
              [ 4  9 12  9  4 ]
              [ 2  4  5  4  2 ]
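The gradient step can be illustrated on a single 3 × 3 patch. The sketch below applies the Sobel kernels as a direct element-wise sum (cross-correlation, which for these kernels differs from true convolution only in sign convention) and returns the magnitude and angle.

```python
import math

# Sobel kernels for the x and y derivatives.
KX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
KY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def gradient_at(patch):
    """Return (Ix, Iy, magnitude, angle) for a 3x3 intensity patch."""
    ix = sum(KX[r][c] * patch[r][c] for r in range(3) for c in range(3))
    iy = sum(KY[r][c] * patch[r][c] for r in range(3) for c in range(3))
    mag = math.hypot(ix, iy)          # |G| = sqrt(Ix^2 + Iy^2)
    ang = math.atan2(iy, ix)          # gradient direction
    return ix, iy, mag, ang
```

On a patch containing a vertical edge (dark left, bright right), Ix is large and Iy is zero, so the gradient points horizontally, across the edge.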
Step 3: Non-maximum suppression of false edges: This step thins the edges by
removing pixels that are not local maxima of the gradient. For each pixel, find
the two neighbors in the positive and negative gradient directions. If the
magnitude of the current pixel is greater than the magnitudes of these neighbors,
it is kept unchanged; otherwise, the magnitude of the current pixel is set to zero.
Step 4: Double thresholding for segregating strong and weak edges: The gradient
magnitudes are compared with two specified threshold values, the first lower than
the second. Gradients smaller than the low threshold are suppressed; gradients
higher than the high threshold are marked as strong, and the corresponding pixels
are included in the final edge map. All remaining gradients are marked as weak.
Step 5: Edge tracking by hysteresis: Since a weak edge pixel caused by a true edge
will be connected to a strong edge pixel, a weak gradient is marked as an edge and
included in the final edge map if and only if it belongs to the same connected
component as a strong gradient.
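Steps 4 and 5 together can be sketched as a double threshold followed by a flood fill outward from the strong pixels. The thresholds and the tiny magnitude grids used below are illustrative.

```python
from collections import deque

def hysteresis(mag, low, high):
    """Return the set of (row, col) edge pixels: pixels >= high are
    strong; pixels in [low, high) are kept only if 8-connected to a
    strong pixel through other kept pixels."""
    rows, cols = len(mag), len(mag[0])
    strong = [(r, c) for r in range(rows) for c in range(cols)
              if mag[r][c] >= high]
    edges = set(strong)
    queue = deque(strong)
    while queue:                      # BFS from every strong pixel
        r, c = queue.popleft()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and (nr, nc) not in edges
                        and mag[nr][nc] >= low):
                    edges.add((nr, nc))
                    queue.append((nr, nc))
    return edges
```

A weak pixel touching a strong one survives; an isolated weak pixel is discarded, which is exactly the connected-component rule of Step 5.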



Raspberry pi 4

The Raspberry Pi is a low-cost, credit-card-sized computer that plugs into a
computer monitor or TV and uses a standard keyboard and mouse. It is a capable
little device that enables people of all ages to explore computing and to learn
how to program in languages like Scratch and Python. It is capable of doing
everything you would expect a desktop computer to do, from browsing the internet
and playing high-definition video to making spreadsheets, word processing, and
playing games. The Raspberry Pi is a series of small single-board computers
developed in the United Kingdom by the Raspberry Pi Foundation to promote the
teaching of basic computer science in schools and in developing countries. The
original model became far more popular than anticipated, selling outside its
target market for uses such as robotics. It does not include peripherals (such as
keyboards and mice) or cases, though some accessories have been included in
several official and unofficial bundles.

The Raspberry Pi 4 Model B was launched in June 2019. It uses a 1.5 GHz 64-bit
quad-core Arm Cortex-A72 CPU; has three RAM options (2 GB, 4 GB, 8 GB); and offers
gigabit Ethernet, integrated 802.11ac wireless LAN, and Bluetooth 5.0. The
Raspberry Pi 4 also has upgraded USB capacity: along with two USB 2.0 ports, it
has two USB 3.0 ports, which can transfer data up to ten times faster.

Servo motor

A servo motor is a rotary or linear actuator that allows precise control of
angular or linear position, velocity, and acceleration. It consists of a suitable
motor coupled to a sensor for position feedback. It also requires a relatively
sophisticated controller, often a dedicated module designed specifically for use
with servo motors. Servo motors are not a specific class of motor, although the
term is often used to refer to a motor suitable for use in a closed-loop control
system. Hobby servos deliver high torque in small, lightweight packages; due to
these features they are used in many applications such as toy cars, RC helicopters
and planes, robotics, and machinery.

Ultrasonic Sensor

An ultrasonic sensor is an instrument that measures the distance to an object
using ultrasonic sound waves. An ultrasonic sensor uses a transducer to send
and receive ultrasonic pulses that relay back information about an object’s
proximity. High-frequency sound waves reflect from boundaries to produce
distinct echo patterns. Ultrasonic sensors work by sending out a sound wave at
a frequency above the range of human hearing. The transducer of the sensor
acts as a microphone to receive and send the ultrasonic sound. Our ultrasonic
sensors, like many others, use a single transducer to send a pulse and to receive
the echo. The sensor determines the distance to a target by measuring the time
lapse between the sending and receiving of the ultrasonic pulse. The distance
can be calculated with the following formula:
Distance L = 1/2 × T × C

where L is the distance, T is the time between emission and reception, and C is
the speed of sound. (The value is multiplied by 1/2 because T is the round-trip
time.)
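Applying this formula in code is straightforward. The sketch below assumes a speed of sound of roughly 343 m/s, its approximate value in air at 20 °C.

```python
# Convert a round-trip echo time into a distance: L = (1/2) * T * C.
SPEED_OF_SOUND = 343.0  # m/s, approximate value in air at 20 degrees C

def echo_to_distance(t_seconds):
    """Distance to the target in metres from a round-trip echo time."""
    return 0.5 * t_seconds * SPEED_OF_SOUND
```

For example, a 10 ms round trip corresponds to a target about 1.715 m away.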


Pi Camera

The Pi camera module is a portable, lightweight camera that supports the Raspberry
Pi. It communicates with the Pi using the MIPI camera serial interface protocol.
It is normally used in image processing, machine learning, or surveillance
projects. It is commonly used on surveillance drones since the camera's weight is
very low. Apart from these modules, the Pi can also use normal USB webcams of the
kind used with computers. MIPI modules are ideal for multi-camera applications,
including mobile and distributed applications like autonomous driving, UAVs, smart
cities, medical technology, and laboratory automation.

DC Motor

A DC motor is any of a class of rotary electrical machines that converts direct-
current electrical energy into mechanical energy. The most common types rely on
the forces produced by magnetic fields. Nearly all types of DC motors have some
internal mechanism, either electromechanical or electronic, to periodically change
the direction of current in part of the motor. A DC motor consists of a stator, an
armature, a rotor, and a commutator with brushes. Opposite polarity between the
two magnetic fields inside the motor causes it to turn. DC motors are the simplest
type of motor and are used in household appliances, such as electric razors, and
in electric windows in cars.

Stepper Motor

A stepper motor is an electromechanical device that converts electrical power into
mechanical power. It is a brushless, synchronous electric motor that can divide
a full rotation into a large number of steps. The motor's position can be
controlled accurately without any feedback mechanism, as long as the motor is
carefully sized to the application.
The stepper motor's working principle is electromagnetism. It includes a rotor
made with a permanent magnet, while the stator carries electromagnets. When the
supply is provided to the stator windings, a magnetic field develops within the
stator, and the rotor starts to move with the rotating magnetic field of the
stator. This is the fundamental working principle of the motor.


Motor Driver

Motor drivers act as an interface between the motors and the control circuits.
Motors require a high amount of current, whereas the controller circuit works on
low-current signals. The function of a motor driver is therefore to take a
low-current control signal and turn it into a higher-current signal that can drive
a motor.


Python Compiler

The Python compiler package is a tool for analyzing Python source code and
generating Python bytecode. The compiler contains libraries to generate an abstract
syntax tree from Python source code and to generate Python bytecode from the tree.
The compiler package is a Python source to bytecode translator written in Python.
It uses the built-in parser and standard parser module to generate a concrete syntax
tree. This tree is used to generate an abstract syntax tree (AST) and then Python
bytecode. The full functionality of the package duplicates the built-in compiler
provided with the Python interpreter. It is intended to match its behavior almost
exactly. Why implement another compiler that does the same thing? The package
is useful for a variety of purposes. It can be modified more easily than the built-in
compiler. The AST it generates is useful for analyzing Python source code.


OpenCV (Open Source Computer Vision Library) is an open source computer
vision and machine learning software library. OpenCV was built to provide a
common infrastructure for computer vision applications and to accelerate the use of
machine perception in commercial products. Being a BSD-licensed product,
OpenCV makes it easy for businesses to utilize and modify the code.
The library has more than 2500 optimized algorithms, which includes a
comprehensive set of both classic and state-of-the-art computer vision and machine
learning algorithms. These algorithms can be used to detect and recognize faces,
identify objects, classify human actions in videos, track camera movements, track
moving objects, extract 3D models of objects, produce 3D point clouds from stereo
cameras, stitch images together to produce a high resolution image of an entire
scene, find similar images from an image database, remove red eyes from images
taken using flash, follow eye movements, recognize scenery and establish markers
to overlay it with augmented reality. OpenCV has a large area of application,
including: 2D and 3D feature toolkits, egomotion estimation, facial recognition
systems, gesture recognition, human-computer interaction (HCI), mobile robotics,

motion understanding, and object detection. It also includes a statistical
machine learning library containing boosting, decision tree learning, gradient
boosting trees, the expectation-maximization algorithm, etc.


Tinkercad is a free, online 3D modeling program that runs in a web browser, known
for its simplicity and ease of use. Since it became available in 2011, it has
become a popular platform for creating models for 3D printing as well as an
entry-level introduction to constructive solid geometry in schools.



Raspberry pi with camera

The camera is directly connected by inserting the cable into camera slot of raspberry
pi. The cable is situated between the USB and micro-HDMI ports.

Raspberry pi with Ultrasonic Sensor

The ultrasonic sensor can measure distances up to 4 to 5 meters using ultrasound.
There are four pins on the ultrasonic module, connected to the Raspberry Pi as
follows:
VCC to pin 2
GND to pin 6
TRIG to pin 12
ECHO to pin 18 via a voltage divider (330 ohm and 470 ohm resistors)

Raspberry pi with servo motor

The servo motor has three connection pins: Vcc, ground, and signal. It requires a
5 V, 1 A supply, which is provided from a suitable power source. On the Raspberry
Pi board, GPIO pin 22 is selected as the signal pin. The GND pin of the power
supply is connected to the ground pin of the servo and also to pin 6 on the
Raspberry Pi.
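On the software side, the servo's angle is usually set through PWM on the signal pin. The sketch below converts an angle into a duty-cycle percentage for a 50 Hz signal, assuming a 0.5-2.5 ms pulse range; these figures are a common hobby-servo convention, not taken from a specific datasheet, so check the servo's documentation before use.

```python
# Convert a servo angle (0-180 degrees) to a PWM duty cycle percentage.
# Assumes the common hobby-servo convention: 50 Hz (20 ms period) with
# pulse widths from about 0.5 ms (0 deg) to 2.5 ms (180 deg).

def angle_to_duty(angle, min_ms=0.5, max_ms=2.5, period_ms=20.0):
    """Duty cycle (%) that positions the servo at the given angle."""
    pulse = min_ms + (angle / 180.0) * (max_ms - min_ms)
    return 100.0 * pulse / period_ms
```

With a GPIO library, the returned value would be passed to the PWM channel's duty-cycle setter; 90 degrees maps to a 1.5 ms pulse, i.e. a 7.5 % duty cycle.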

Raspberry pi with stepper motor

The inputs of the motor driver module, i.e. IN1, IN2, IN3, and IN4, are connected
to physical pins 11, 12, 13, and 15 (GPIO17, GPIO18, GPIO27, and GPIO22) of the
Raspberry Pi. One set of the motor coils is connected to OUT1 and OUT2 of the
motor driver, and the other set is connected to OUT3 and OUT4. A 5 V external
power supply is given to the motor driver module, and the ground terminals of the
L298N motor driver module and the Raspberry Pi are made common.
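With this wiring, the motor is stepped by applying a repeating pattern of logic levels to IN1-IN4. The full-step sequence below is one common convention for a bipolar stepper driven through an L298N; the exact order depends on how the coils are wired, so treat it as an illustrative assumption.

```python
# One common full-step sequence for a bipolar stepper: each tuple is the
# (IN1, IN2, IN3, IN4) logic levels for one step. Stepping through the
# list in order turns the motor one way; reversing the order reverses it.
FULL_STEP = [
    (1, 0, 1, 0),
    (0, 1, 1, 0),
    (0, 1, 0, 1),
    (1, 0, 0, 1),
]

def step_pattern(step_index):
    """Logic levels to write to IN1-IN4 for the given step number."""
    return FULL_STEP[step_index % 4]
```

A GPIO loop would write `step_pattern(i)` to the four pins with a short delay between steps; the pattern repeats every four steps.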

Raspberry pi with Dc motor

The design of the circuit for controlling a DC Motor with Raspberry Pi is very
simple. First, connect the pins 8 and 16 (VCC2 and VCC1) of L293D to external
5V supply (assuming you are using a 5V Motor). There are four ground pins on
L293D. Connect pin 4 to the GND of supply. Also, connect the ground pin of
L293D to GND pin of the Raspberry Pi.
Finally, we have the enable and control input pins. Connect the pin 1 of L293D
(1,2EN) to GPIO25 (Physical Pin 22) of Raspberry Pi. Then connect control input
pins 2 and 7 (1A and 2A) to GPIO24 (Physical Pin 18) and GPIO23 (Physical Pin
16) respectively.


Some of the results of the project are as follows:

The RGB values obtained by placing the camera at a distance of 9.5 cm from the
belt surface are as follows:

The graph of the different colors versus their respective RGB values is obtained
as follows:

RGB value response for a red object
RGB value response for a green object

The original image captured by the Pi camera, the image after grayscale
conversion, and the image after the morphological transform are as follows:

For a red object

For a green object

The process for size detection is described in the figure below:

The results obtained for 30 observations of three types of objects are as follows:


No machine can be perfect, and this applies to our project as well. Even after
continuous hard work throughout the whole project period, the project still has
some limitations:

  1. It cannot sort colored objects of different shapes.
  2. The range of RGB values may vary according to the ambient light.
  3. The system cannot sort objects having multiple colors.



Objects can be sorted based on their color and size. This system greatly reduces
human effort and saves a lot of the time that would otherwise be spent on manual
sorting. Using this system provides substantial automation in agriculture,
industries, malls, and so on. Besides this, the system has some limitations: it
may not work well if the environmental conditions vary, since the effect of
ambient light can greatly affect the system's performance. Therefore, the system
must be placed somewhere the environment remains constant or varies very little.


This project can be made useful in various fields when the system is enhanced as
per the requirements of the relevant field. The further enhancements which can be
carried out include:

  1. A counter can be introduced into the project to count the number of objects
     of each color.
  2. A rubber gripper can be used; it increases the surface friction, which helps
     to avoid slipping of the conveyor belt.
  3. Using gears instead of a direct connection between the roller and the motor
     shaft will be more effective.
  4. Objects can also be sorted based on their shape.



Rajesh Raskoti

Subindra Chaudhary

Saroj Raut

Anjal Basnet
