Posts Tagged ‘Opencv’

It all started with my regular B.Tech. classes at Delhi Technological University. My work and interests keep me far from having a big social life. It turns out that even after two years, I don’t know everyone in my class. It’s not that I am oblivious; I know a few friends really well, while many others I recognize only by their face.

One day while travelling, a boy came up to me and started a conversation. He knew me well enough! A face recognition algorithm in the back of my brain told me that he was someone from my class. But my neurons could not retrieve any other information about him (not even his name). I thought he might feel bad if I asked his name, so for the entire conversation I had to pretend that I knew him.

After the conversation, I started thinking about a scene from “Mission Impossible: Ghost Protocol”, where an agent’s face recognition system triggers a false positive alarm. Fascinated by this, I thought of working on a miniature face recognition system of my own, which I could later include in something big (shhh!!….concealed…).

But why face recognition after all? Why do researchers spend their time on this?

To answer this question, let us look at how we identify someone. We see, observe, and interact with various people, near and far, in our day-to-day life. Our brain is smart enough to recognize each person differently. We identify each other by name and/or appearance. We even recognize some people by their voice. Of all these, we certainly use name and appearance the most to identify a person. And of all the features that make up appearance, the face has the most impact. That is, of all appearance features, we most commonly use the face to identify a person.

I represent the above pictorially below:


Now, to decide between name and face, let me ask you a simple question. My neighbour’s name is Sunny. Given that you now know his name, can you identify him if he comes up to you?
Certainly not!
But if I show you that my neighbour looks like the picture below, maybe you will be able to identify him the next time you see him.

MY NEIGHBOR: SUNNY

We interact with the world around us through our senses. Our eyes are our windows into the world. The amount humans learn through their eyes should not be underestimated under any circumstances. Concretely, studies have shown that even three-day-old babies are able to distinguish between known faces.

Getting a bit technical now! We are far from understanding how our brain actually decodes faces. But many attempts have been made to build face recognition systems that simulate something similar to our brain’s working. OpenCV has a few modules that can be used directly for this purpose.
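For the impatient, here is a minimal sketch of the FaceRecognizer API from OpenCV’s contrib module (the one the tutorials linked below walk through). The toy training data here just stands in for real face crops loaded from disk:

#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/contrib/contrib.hpp>   // FaceRecognizer lives in the contrib module (OpenCV 2.4.x)

using namespace cv;

int main()
{
    std::vector<Mat> images;   // grayscale face crops, all the same size
    std::vector<int> labels;   // one integer label per image (one label per person)

    // Toy stand-ins for real training faces; real code would load crops from disk
    images.push_back(Mat(100, 100, CV_8UC1, Scalar(80)));  labels.push_back(0);
    images.push_back(Mat(100, 100, CV_8UC1, Scalar(200))); labels.push_back(1);

    // LBPH is one of the three recognizers offered (Eigenfaces, Fisherfaces, LBPH)
    Ptr<FaceRecognizer> model = createLBPHFaceRecognizer();
    model->train(images, labels);

    Mat testFace(100, 100, CV_8UC1, Scalar(90));   // stand-in for a new face crop

    int predictedLabel = -1;
    double confidence = 0.0;
    model->predict(testFace, predictedLabel, confidence);   // also returns the prediction confidence

    return 0;
}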

A few useful links:

1) http://www.cognotics.com/opencv/servo_2007_series/part_2/index.html

2) http://docs.opencv.org/trunk/modules/contrib/doc/facerec/tutorial/facerec_video_recognition.html

3) http://docs.opencv.org/trunk/modules/contrib/doc/facerec/facerec_tutorial.html

A few output videos:

Face Detection:

Face Recognition

Face Recognition with prediction confidence

Now, I found this post worth writing for my dear juniors, and naturally for myself.

OpenCV uses CUDA to run code on the GPU. The CUDA library, however, is only available for NVIDIA graphics cards, and there is a wide range of NVIDIA graphics cards that support CUDA.

https://developer.nvidia.com/cuda-gpus lists the various CUDA-enabled GPUs. Each CUDA-enabled GPU processes at a different compute capability.
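If OpenCV itself was built with the gpu module (WITH_CUDA=ON), it can report what it sees; a minimal sketch:

#include <iostream>
#include <string>
#include <opencv2/gpu/gpu.hpp>   // OpenCV 2.x GPU (CUDA) module

int main()
{
    int n = cv::gpu::getCudaEnabledDeviceCount();   // 0 if OpenCV was built without CUDA or no device is present
    std::cout << "CUDA-enabled devices: " << n << std::endl;

    for (int i = 0; i < n; ++i)
    {
        cv::gpu::DeviceInfo info(i);
        std::cout << info.name() << "  compute capability "
                  << info.majorVersion() << "." << info.minorVersion()
                  << (info.isCompatible() ? " (usable by this OpenCV build)" : " (not usable)")
                  << std::endl;
    }
    return 0;
}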

The first category of NVIDIA GPU is “Tesla”. The Tesla series represents the best of all NVIDIA GPUs. But with high performance comes a large bill: each Tesla GPU costs a few thousand dollars. Hence, I had to stop looking further into Tesla GPUs, for obvious reasons.

Next comes the “NVS” series. This series is designed to support multiple monitors (say 4). It is certainly not meant for heavy computation; the NVIDIA page (https://developer.nvidia.com/cuda-gpus) lists a very low compute capability for these cards. It would be a dream come true for me to have 4 screens on a non-parallel system, but it would not address my main concern. Just for the record, this card can do cool stuff, as shown in the image:

The budget is also a concern; it is the major reason why we cannot afford the Tesla series.

Now, the GeForce and Quadro products are of interest to us. The GeForce series represents GPUs designed more for gaming purposes, whereas the Quadro series is designed for more professional use, like CAD on a workstation.

www.nvidia.com/object/quadro_geforce.html explains the advantages of Quadro over GeForce. However, along with the list of advantages, Quadro comes at a greater cost too: the price of an entry-level Quadro GPU is comparable to a mid-range GeForce GPU. Thus, on the basis of cost, a GeForce card may perform better than a Quadro.

The GeForce GTX 650 is the first model with compute capability 3.0. In the Quadro series, the K600 is the first model with compute capability 3.0. The following table compares them:

                           GeForce GTX 650    Quadro K600
Compute capability         3.0                3.0
CUDA cores                  384                192
Base clock (MHz)            1058               —
Memory bandwidth (GB/s)     80                 29
Memory                      1 GB GDDR5         1 GB DDR3
OpenGL support              Yes (4.3)          Yes (4.3)
CUDA support                Yes                Yes
Price (INR)                 7,855              15,000

Sources for the above info:

1) http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-650/specifications

2) http://www.nvidia.com/object/quadro-desktop-gpus-specs.html

Someone explained it very well: “If you think of your video card like a freeway, then CUDA cores would be analogous to the number of lanes in the road, clock speed would be the speed limit, and memory interface would be the number of lanes for exit/entry ramps. More lanes means more cars can be moving on the freeway at any given time, the higher the speed limit the faster any given car is moving, and then if there’s say 2 exit lanes instead of 1, you can have more cars getting off the freeway, same as you can have more cars coming onto the freeway if there are say 2 entry lanes instead of 1.“

Thus, for a given budget, there will always be a GTX card with better hardware specs than the Quadro. The major difference lies in the fact that GTX cards are mainly designed for gaming; however, they can still meet our requirements.

In a blog post at http://www.timzaman.com/?p=2256, the author compares various GPUs on a few OpenCV functions. He concludes: “In terms of value for money, the GTX 670 (€400) with 2Gb of RAM is very nice. There is absolutely no reason to buy the GTX 680 since it costs €100 more. Then again, the GTX 570 costs €300, which is nice, but only has 1,25Gb RAM, which can be dangerous when working with large images (nasty errors).
It is clear that GPU computation is BLOODY fast. But i HAVE to note, that only a SINGLE core of the CPU’s were used for the normal CPU functions. These algo’s have not really been optimized for multithreaded if I’m not mistaken. On the other hand, speed increases of >20x is too much for any intel CPU to catch up with. GPU Computing is a must if fast image processing is important.”
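To get a feel for how such speed-ups are obtained in code, here is a minimal sketch (not taken from that post) of running a single OpenCV operation on the GPU with the 2.x gpu module: upload the data to a GpuMat, call the gpu:: counterpart of the usual function, and download the result. It assumes a CUDA device and a CUDA-enabled OpenCV build:

#include <opencv2/opencv.hpp>
#include <opencv2/gpu/gpu.hpp>

using namespace cv;

int main()
{
    if (gpu::getCudaEnabledDeviceCount() == 0)
        return 0;                                  // no CUDA device or non-CUDA build

    Mat src(480, 640, CV_8UC1);
    randu(src, Scalar::all(0), Scalar::all(255));  // synthetic grayscale frame just for the demo

    gpu::GpuMat d_src, d_dst;
    d_src.upload(src);                                       // host -> device copy
    gpu::threshold(d_src, d_dst, 128, 255, THRESH_BINARY);   // runs on the GPU

    Mat dst;
    d_dst.download(dst);                                     // device -> host copy
    imwrite("output.png", dst);
    return 0;
}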

And here is the difference!

Posted: December 30, 2012 in Work
Tags:

So finally, an over-and-out call to Python this time. Everything was going fine, but then suddenly a loop that should take no more than a few seconds (2-5 seconds) was taking around 5 minutes. Even after heavily optimizing it, the time could not be brought down to a few seconds as I mentioned. So after hours of struggling and debugging, the Machine Vision department decided to stick to C/C++.

So yesterday, I started once again from scratch, this time in C++. Surely the day was not good (rather the night, I must say). I started in the evening. I wrote code to find the distance contrast within the image using a histogram over the HSV range, and I got some strange outputs. And again the debugging process started. Even after rechecking everything and handling the data types properly, I got nothing. So after hours of searching and debugging, I gave a Mayday Mayday call to my senior. It must have been around 12:00 am when I asked for his suggestions. I must say I have a very nice senior (Mr. Harsh Agrawal); he helps me every time I give out a Mayday call. So it started around 12:00 am. We both went through the code again and again. Then, following my senior’s advice, I started comparing all the values in all the matrices. I printed almost all the matrices and compared them, and rechecked all the calculations. This continued till 4 am, when I figured out that I had used a wrong flag in one of the normalization functions. I was happy that the problem was ultimately resolved, but a silly mistake and a few wrong (manual) comparisons wasted 6-7 hours. So finally I am able to calculate the distance contrast perfectly, and I am really thankful to Harsh Agrawal for his support.
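Since the whole night boiled down to one flag, here is a hedged sketch (not my actual code) of the kind of calcHist()/normalize() pair where the norm_type flag matters; mixing up NORM_MINMAX and NORM_L1/NORM_L2 silently changes every value that gets compared later:

#include <opencv2/opencv.hpp>

using namespace cv;

int main()
{
    Mat bgr(240, 320, CV_8UC3);
    randu(bgr, Scalar::all(0), Scalar::all(255));   // stand-in for a real frame
    Mat hsv;
    cvtColor(bgr, hsv, CV_BGR2HSV);

    // Histogram over the Hue channel only (30 bins, Hue range 0..180 in OpenCV)
    int   channels[] = { 0 };
    int   histSize[] = { 30 };
    float hueRange[] = { 0, 180 };
    const float* ranges[] = { hueRange };

    Mat hist;
    calcHist(&hsv, 1, channels, Mat(), hist, 1, histSize, ranges);

    // The flag here is exactly the sort of thing that bit me:
    // NORM_MINMAX scales values into [alpha, beta], NORM_L1/NORM_L2 scale by the vector norm.
    normalize(hist, hist, 0, 1, NORM_MINMAX);
    return 0;
}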

The day was going even better today, and then I started using the cvBlob library. I have been coding in OpenCV C++, so I read images in the Mat format. But the cvBlob library is written in C and uses the IplImage data structure. It is easy to convert an IplImage data type to Mat, but I needed to do just the opposite.

I was actually using the cvLabel function. For it, I needed an image converted into the IplImage format. I tried the following:

IplImage result_image = result;  // Mat -> IplImage: builds an IplImage header over the same data (no copy)

which indeed worked perfectly for me.
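For my juniors who hit the same wall, a minimal sketch of going both ways between the two formats; note that neither direction copies the pixel data, only a new header is created:

#include <opencv2/opencv.hpp>

using namespace cv;

int main()
{
    Mat result(240, 320, CV_8UC3, Scalar(0, 0, 255));   // stand-in for the Mat image I actually had

    // Mat -> IplImage: the conversion operator builds an IplImage header over the same data
    IplImage result_image = result;
    // result_image can now be passed by address to C-style functions, e.g. someCFunction(&result_image)

    // IplImage -> Mat: the Mat constructor wraps the IplImage data (copyData = false by default)
    Mat back(&result_image);
    return 0;
}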

Though I had to spend some of my time debugging today, I gave no Mayday calls and was able to solve all the issues within the time constraints.

Nothing much to write today. The team is planning a lot for the future. Lots of planning means lots of changes. It even means a change in my plan.

Till now I have been successful in finding blobs.


Histogram exam and my paining arm

Posted: December 25, 2012 in Work
Tags: , ,

I was thinking of finding a histogram over the HSV ranges of an image, and doing it in Python. Yes, I have done this many times using OpenCV functions, but I am trying to see whether I can replace a standard OpenCV histogram with numpy.
Normally, the CreateHist() function requires the bin sizes, the dimensions, and the ranges to create a histogram object. The same can be done by using numpy to create a multidimensional array of the same dimensions. After a lot of brainstorming, I needed just 4 lines of code to find the histogram. This proves my point that I can code in any popular computer language, provided I am given the INTERNET.

So this is how I did it. I do not want to display the histogram, so I focused on building a numpy array to store the histogram and eliminating the overhead of displaying it. I wanted to find the histogram over just the Hue and Saturation ranges of the HSV form. Since OpenCV allows us to specify the channel, I did not have to split the image into its three channels. I first calculated a histogram for the Hue range and normalized it. Then I followed the same procedure for the Saturation range: first calculate the histogram, then normalize it. Merging both of them into different dimensions of a single array gave me the required result.

The outcome seems simple, but I would just say “mission accomplished!”

I thought the above would work, but it did not. So I struggled more to optimize my code rather than to make it work properly. But after lots of struggling, I got an error, and debugging failed too. Python is a good wrapper language, but I don’t know why I was not able to debug it properly. Maybe because of the runtime nature of OpenCV.

But after lots of searching I found the solution. And I also found that Python is smarter, and has a beauty of its own. OpenCV needs to document its Python implementation properly. So, finally: mission accomplished!

The aim is to perform some of the initial steps involved in mean shift segmentation of an image. To recognize the objects in the image, I first want to remove the texture from the image so that the segmentation is effective. After performing mean shift filtering we get a filtered, “posterized” image with the colour gradients and fine-grain texture flattened.

In OpenCV, mean shift filtering can be applied to an image using the function PyrMeanShiftFiltering(). Apart from the source and destination images, it takes the radius of the spatial window and the radius of the colour window as its parameters. There are a few more parameters with defaults that are implementation dependent.
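A minimal sketch of the call (the sp and sr values here are only illustrative, not the ones I finally used, and the input is synthetic):

#include <opencv2/opencv.hpp>

using namespace cv;

int main()
{
    Mat src(240, 320, CV_8UC3);                     // must be an 8-bit, 3-channel image
    randu(src, Scalar::all(0), Scalar::all(255));   // stand-in for a real photograph
    Mat filtered;

    double sp = 20;   // radius of the spatial window
    double sr = 40;   // radius of the colour window
    pyrMeanShiftFiltering(src, filtered, sp, sr);   // an optional maxLevel parameter controls the pyramid depth

    imwrite("posterized.png", filtered);
    return 0;
}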

At every pixel (X,Y) of the input image (or of the down-sized input image), the function cv::pyrMeanShiftFiltering in OpenCV executes meanshift iterations, that is, the pixel (X,Y) neighborhood in the joint space-color hyperspace is considered:

(x,y): X − sp ≤ x ≤ X + sp, Y − sp ≤ y ≤ Y + sp, ||(R,G,B) − (r,g,b)|| ≤ sr

where (R,G,B) and (r,g,b) are the vectors of color components at (X,Y) and (x,y), respectively (though, the algorithm does not depend on the color space used, so any 3-component color space can be used instead). Over the neighborhood the average spatial value (X',Y') and average color vector (R',G',B') are found and they act as the neighborhood center on the next iteration:

(X,Y)~(X',Y'), (R,G,B)~(R',G',B').

After the iterations are over, the color components of the initial pixel (that is, the pixel from which the iterations started) are set to the final value (the average color at the last iteration).

Now, the point is that I have to perform this function on an image that has been obtained by calling pyrDown() three times. But at the same time, I do not want to lose any information when I pyrUp() this image after performing the mean shift filtering. By performing the filtering I get the image below.

But cv::pyrMeanShiftFiltering also has a parameter maxLevel. When maxLevel > 0, a Gaussian pyramid of maxLevel+1 levels is built, and the above procedure is run on the smallest layer first. After that, the results are propagated to the larger layer and the iterations are run again, but only on those pixels where the layer colors differ by more than sr from the lower-resolution layer of the pyramid. That makes the boundaries of color regions sharper.

Python bit Eclipse

Posted: December 22, 2012 in Kreeps
Tags: , , , ,

This post is dedicated to my dear juniors.
Today I linked Eclipse on Linux with PyDev (the Python development plugin). A program in Python can be written even in a notepad, but investing time in setting up a proper IDE is fruitful.

Follow these steps to set up Python in your Eclipse IDE.

The PyDev Ubuntu setup requires some configuration in Eclipse. Installing PyDev on Ubuntu is similar to the installation on Windows or Macintosh.

1. Run Eclipse and then go to Help | Software Updates | Find and Install…

2. In the Feature Updates window, select Search for new features to install and click Next.

3. In the Install window, select New Remote Site.

4. In the New Update Site window, specify the following:

Name: PyDev

URL: http://update-production-pydev.s3.amazonaws.com/pydev/updates/site.xml
The old URL was http://pydev.org/updates

5. In the Search Results window, select both PyDev and PyDev Extensions and click Next.

6. In the Feature License window, agree to the license agreement and click Next.

7. Configure the Python interpreter by going to Window | Preferences | Pydev | Interpreter – Python.

8. In the Python interpreters section, click New.

9. Locate your Python interpreter (e.g., /usr/bin/python).

10. When prompted, select your System PYTHONPATH. You can just leave the default checkboxes selected and click OK.

Now, I assume the reader already has Python OpenCV installed.
So follow these two steps now:
Add /usr/local/lib/python2.7/dist-packages to the System Libs.
Add cv to the Forced Builtins.
All set now!

Happy image processing!

I found Python today!

Posted: December 21, 2012 in Kreeps
Tags: ,

I have been coding in C and C++ for all my OpenCV-related projects till now. But when it comes to the Machine Vision project in team UAS-DTU (code name: Shaurya), we need to consider every possible parameter, even if that means shifting our code to Python if it can make it more efficient. So the Machine Vision team has started to take Python into consideration. Python is a general-purpose, interpreted, high-level programming language whose design philosophy emphasizes code readability. OpenCV’s Python interface is an implementation using SWIG, a software development tool that connects programs written in C and C++ with a variety of high-level programming languages. OpenCV in Python seems to make algorithm implementation faster. But I don’t think it will affect the overall processing time much, since everything is ultimately processed in C/C++ and Python is just a wrapper. Still, it is fair to give Python a chance, since if nothing else it could reduce coding/implementation time, and it may improve efficiency. Obviously I googled to find whether OpenCV’s Python implementation is good enough to port our code to it, but I could not find any major advantage. So now I have decided to get hands-on with it myself.

ASCII ART!!! (-_-) [-_-]

Posted: July 21, 2012 in Kreeps
Tags: , , ,

A picture is a poem without words.
– Horace

A picture is worth a thousand words.
-Napoleon Bonaparte

If a picture (image) can be worth so many words, why can’t it be made of words (mere ASCII characters)? Thus, to add a little fun to my work, I decided to write a program to turn any image into ASCII art.

Technically, making an image out of standard symbols and characters is ASCII art.

Example:

          * 
         / 
       HH 
     SSSSSS
    SSSSSSSS
    S )))) S 
   SS -  - SS
  SSS o  o SSS
 SSSS  6   SSSS
  SSS  __  SSS
   SSS    SSS
      W   W
     WW  WWW
   WWWW  WWWW 
   WWWWWWWWWW
        XXXXXXXXX
      XXXXXXXXXXXXX
    XXXXXXXXXXXXXXXX 
   (__________)XXXXXX 
    ( ___  ___ XXXXXX
       o/   o   XXXXX 
    (  /        XXXXX
      /___)     XXXXX
   (             XXXX
  (     ____    ) XXX
   (               XX
    (          )    X
     (       )      *
       (    )      ***
                    *
From the 1962 book, “Art Typing”

Developing real ASCII art by typing in each character is a real talent. But for a coder, it takes merely a few lines of code to convert any image into an ASCII file.

This is how I did it using the OpenCV library:

  • Open the image and perform a little Gaussian smoothing

Mat image = imread("7.jpg");               // needs <opencv2/opencv.hpp> and <fstream>
GaussianBlur(image, image, Size(5,5), 0);

  • Convert the image into a grayscale image

Mat grayImage;                             // cvtColor() will allocate this as CV_8UC1
cvtColor(image, grayImage, CV_BGR2GRAY);

  • For each pixel of the image, depending on the pixel value, write an ASCII character into a text file

ofstream f("file.txt");
for (int i = 0; i < grayImage.rows; ++i)
{
    for (int j = 0; j < grayImage.cols; ++j)
    {
        if (grayImage.at<uchar>(i,j) > 230)
            f << "`";
        else if (grayImage.at<uchar>(i,j) > 200)
            f << "'";
        else if (grayImage.at<uchar>(i,j) > 160)
            f << ";";
        else if (grayImage.at<uchar>(i,j) > 120)
            f << "O";
        else if (grayImage.at<uchar>(i,j) > 80)
            f << "8";
        else if (grayImage.at<uchar>(i,j) > 40)
            f << "N";
        else
            f << "M";
    }
    f << "\n";
}

Here, different characters are chosen to represent different brightness levels in the final output, depending on how much white space they leave around them.

The quality can be improved further by adding more conditional constraints on the pixel value, or by using edge detection.

I am writing after so long.

But still, I am back. The past few days were learning days for me. I learned lots of things.

Once I found people talking about line followers. A line follower is simply a bot that follows a line. It is very easy to make one using IR sensors, but being an image processing learner, I decided to make one using image processing.

My aim is not only to make a line follower but to make it more robust (using image processing).

Till now, writing the code was easy, but I will surely have to append some changes to make it robust.

The simplest line follower (using image processing) can be made by doing the following, as I did first:

My first step was to reduce brightness variation, which I did through intensity normalization. Generating histograms over some sample images would explain why I did so.

Secondly, I set up a control box which helps me robustly select a few parameters for edge detection and image morphology.

Then I perform edge detection (Canny). On doing this, I get an image containing the path boundaries in white and the rest in black.

Then I compute the Centre of Gravity of the white pixels in the image.

Now this COG helps me to orient my bot properly.
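A hedged sketch of these steps (the frame here is synthetic and the Canny thresholds are placeholders; the real code takes its parameters from the control box):

#include <opencv2/opencv.hpp>

using namespace cv;

int main()
{
    Mat frame(240, 320, CV_8UC3, Scalar(255, 255, 255));                 // synthetic stand-in for a camera frame
    line(frame, Point(160, 0), Point(160, 239), Scalar(0, 0, 0), 20);    // a dark "path" down the middle
    Mat gray, edges;

    cvtColor(frame, gray, CV_BGR2GRAY);
    normalize(gray, gray, 0, 255, NORM_MINMAX);    // step 1: reduce brightness variation

    Canny(gray, edges, 50, 150);                   // step 3: edge detection (thresholds from the control box)

    // step 4: centre of gravity of the white (edge) pixels
    Moments m = moments(edges, true);              // treat the edge image as binary
    if (m.m00 > 0)
    {
        Point cog(cvRound(m.m10 / m.m00), cvRound(m.m01 / m.m00));
        // cog tells the bot which way to steer relative to the image centre
    }
    return 0;
}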

Till here, everything is easy. For this I even used OpenCV 2 (the C++ API).

The code, though simple, works even for a distorted path.

The PandaBoard obviously has an edge over the BeagleBoard. So building a small OpenCV project on the PandaBoard became my first face-off with it.

Having a little experience with the BeagleBoard, I saved some time on things like installing headless Ubuntu on the PandaBoard. But still, Google proved to be my friend this time as well.

I am feeling much more excited after working on the PandaBoard, so it would be an insult to the board if I don’t upload its image here. So here I go…..


🙂

So now let me note down the steps I followed (omitting the googling part).

(These steps are the same as you may find from other resources.)

To install headless Ubuntu on PandaBoard:

1) First of all, you need to download the Ubuntu ARM 11.04 image, available here. This is a headless Ubuntu; I call it headless simply because it has no GUI. Since I won’t attach a monitor or any other screen to my PandaBoard, I don’t need GUI support. You can also download this image using the terminal:

vipul@kreezire:$ wget http://cdimage.ubuntu.com/releases/11.04/release/ubuntu-11.04-preinstalled-headless-armel+omap4.img.gz

2) Now get an SD card of at least 4GB (preferably 8GB) and insert it into a card reader connected to your computer.

3) Check that your card is not mounted. If it is mounted, unmount it as follows:

vipul@kreezire:$ umount /dev/sdX

where X is the letter of your drive.

4) Now write the image to the SD card:

vipul@kreezire:$ sudo sh -c 'zcat ubuntu-11.04-preinstalled-headless-armel+omap4.img.gz > /dev/sdX'

Now it will take some time, so wait… and then, whoa, it’s done.

So this is all the stuff I did to install Ubuntu on the PandaBoard.

Now comes the turn to install OpenCV 2.3.1.

This is similar to what I would have done on my PC.

1) Make a folder “opencv” and then download the source code into it:

mkdir opencv
cd opencv/
wget http://sourceforge.net/projects/opencvlibrary/files/opencv-unix/2.3.1/OpenCV-2.3.1a.tar.bz2

2) Unpack

tar -xjf ./OpenCV-2.3.1a.tar.bz2

3) Configure and build OpenCV:

mkdir release
cd release
cmake -DCMAKE_BUILD_TYPE=RELEASE -DWITH_QT=ON -DWITH_FFMPEG=OFF -DWITH_GSTREAMER=OFF -DWITH_PYTHON=OFF -DWITH_GTK=OFF ../OpenCV-2.3.1   # point cmake at the extracted source directory, not the tarball
make
sudo make install

Yo Done…!!!!

Now you can build sample programs for OpenCV (but keep in mind that this Ubuntu is headless).