
Pushing the Phone Gap

Posted: January 29, 2015 in Kreeps

The days when developers had to build a separate app for each target platform are nearing their end, thanks to technologies like PhoneGap, which lets us code an app just once and deploy it across various platforms. This post is just a log of the information I came across while working with push notifications for a PhoneGap app. PhoneGap saves a lot of effort when your aim is to develop for several application environments (iOS, Android, Windows…): you don’t have to code separately for each target platform. Though the PhoneGap docs show a lengthy procedure for setting up your development environment, I found the following video on YouTube more useful and less painstaking:

https://www.youtube.com/watch?v=0gGBhaVG9CI

App development is done in HTML. If the UI is not a major concern, jQuery Mobile is a nice JavaScript library that will help you with it. Since an app made using PhoneGap runs in a browser, there are obviously some limitations; I am not going to detail them here.

PhoneGap’s latest PushPlugin helps us add push notification capabilities to our app. The following GitHub link explains how to use push notifications:

https://github.com/phonegap-build/PushPlugin

The PushNotification.js file given in this repository has some missing semicolons, and copying it from the page also mangles the quote characters. You can paste the following corrected code instead.



var PushNotification = function() {
};

// Call this to register for push notifications. Content of [options] depends on whether we are working with APNS (iOS) or GCM (Android)
PushNotification.prototype.register = function(successCallback, errorCallback, options) {
    if (errorCallback == null) { errorCallback = function() {}; }

    if (typeof errorCallback != "function") {
        console.log("PushNotification.register failure: failure parameter not a function");
        return;
    }

    if (typeof successCallback != "function") {
        console.log("PushNotification.register failure: success callback parameter must be a function");
        return;
    }

    cordova.exec(successCallback, errorCallback, "PushPlugin", "register", [options]);
};

// Call this to unregister for push notifications
PushNotification.prototype.unregister = function(successCallback, errorCallback) {
    if (errorCallback == null) { errorCallback = function() {}; }

    if (typeof errorCallback != "function") {
        console.log("PushNotification.unregister failure: failure parameter not a function");
        return;
    }

    if (typeof successCallback != "function") {
        console.log("PushNotification.unregister failure: success callback parameter must be a function");
        return;
    }

    cordova.exec(successCallback, errorCallback, "PushPlugin", "unregister", []);
};

// Call this to set the application icon badge
PushNotification.prototype.setApplicationIconBadgeNumber = function(successCallback, errorCallback, badge) {
    if (errorCallback == null) { errorCallback = function() {}; }

    if (typeof errorCallback != "function") {
        console.log("PushNotification.setApplicationIconBadgeNumber failure: failure parameter not a function");
        return;
    }

    if (typeof successCallback != "function") {
        console.log("PushNotification.setApplicationIconBadgeNumber failure: success callback parameter must be a function");
        return;
    }

    cordova.exec(successCallback, errorCallback, "PushPlugin", "setApplicationIconBadgeNumber", [{badge: badge}]);
};

//-------------------------------------------------------------------

if (!window.plugins) {
    window.plugins = {};
}
if (!window.plugins.pushNotification) {
    window.plugins.pushNotification = new PushNotification();
}

// Guard against a ReferenceError in the browser, where module is undefined
if (typeof module != "undefined" && module.exports) {
    module.exports = PushNotification;
}


For setting up GCM, follow this: http://apigee.com/docs/app-services/content/registering-notification-service

For using PHP to send push notifications: http://distriqt.com/post/1273

The following link helps in adding and testing push notifications in the app: http://devgirl.org/2013/07/17/tutorial-implement-push-notifications-in-your-phonegap-application/

It all started during a tête-à-tête with a few elites. They remarked that we could not build a video stabilization solution, because one of the biggest organizations in the country took months, even years, for it. I found that peculiar. Not because they doubted us, but because it took us only a few hours that night to produce the following outputs:

Now, I found this post worth writing for my dear juniors, and naturally for myself.

OpenCV uses CUDA to run code on the GPU. The CUDA library, however, is only available for NVIDIA graphics cards, and a wide range of NVIDIA cards support CUDA.

https://developer.nvidia.com/cuda-gpus lists the various CUDA-enabled GPUs. Each CUDA-enabled GPU processes at a different compute capability.

Now, the first category of NVIDIA GPUs is “Tesla”. The Tesla series represents the best of all NVIDIA GPUs, but with high performance comes a large bill: each Tesla GPU costs a few thousand dollars. Hence, I had to stop researching Tesla GPUs, for obvious reasons.

Next comes the “NVS” series. This series is designed to support multiple monitors (say, four), and is surely not meant for heavy computation; the NVIDIA page (https://developer.nvidia.com/cuda-gpus) lists a very low compute capability for it. It would be a dream come true for me to have four screens on a non-parallel system, but that would not resolve my priority concerns. Just for the record, this card can do cool stuff, as shown in the image:

Budget is also a concern; it is the major reason why we cannot afford the Tesla series.

Now, the GeForce and Quadro products are of interest to us. The GeForce series represents GPUs designed mainly for gaming, whereas the Quadro series is designed for more professional use, like CAD on a workstation.

www.nvidia.com/object/quadro_geforce.html explains the advantages of Quadro over GeForce. However, along with that list of advantages, Quadro comes at a greater cost: the price of an entry-level Quadro GPU is comparable to a mid-range GeForce GPU. Thus, on a cost basis, a GeForce card may perform better than a Quadro.

The GeForce GTX 650 is the first GeForce model with compute capability 3.0; in the Quadro series, the K600 is the first model with compute capability 3.0. The following table compares them:

                          GeForce GTX 650    Quadro K600
Compute capability        3.0                3.0
CUDA cores                384                192
Base clock (MHz)          1058
Memory bandwidth (GB/s)   80                 29
Memory                    1 GB GDDR5         1 GB DDR3
OpenGL support            Yes (4.3)          Yes (4.3)
CUDA support              Yes                Yes
Price (INR)               7,855              15,000

Sources for the above info:
1) http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-650/specifications
2) http://www.nvidia.com/object/quadro-desktop-gpus-specs.html

Someone explained it very well: “If you think of your video card like a freeway, then CUDA cores would be analogous to the number of lanes in the road, clock speed would be the speed limit, and memory interface would be the number of lanes for exit/entry ramps. More lanes means more cars can be moving on the freeway at any given time, the higher the speed limit the faster any given car is moving, and then if there’s say 2 exit lanes instead of 1, you can have more cars getting off the freeway, same as you can have more cars coming onto the freeway if there are say 2 entry lanes instead of 1.”

Thus, for a given price range, there will always be a GTX card with better hardware specs than the Quadro. The major difference lies in the fact that GTX cards are mainly designed for gaming; even so, they can still meet our requirements.

In a blog post at http://www.timzaman.com/?p=2256, the author compares various GPUs on a few OpenCV functions. He concludes: “In terms of value for money, the GTX 670 (€400) with 2Gb of RAM is very nice. There is absolutely no reason to buy the GTX 680 since it costs €100 more. Then again, the GTX 570 costs €300, which is nice, but only has 1,25Gb RAM, which can be dangerous when working with large images (nasty errors). It is clear that GPU computation is BLOODY fast. But I have to note that only a SINGLE core of the CPU was used for the normal CPU functions. These algos have not really been optimized for multithreading, if I’m not mistaken. On the other hand, a speed increase of >20x is too much for any Intel CPU to catch up with. GPU computing is a must if fast image processing is important.”

After the real party of this summer (SUAS ’13), I needed some refreshment; perhaps an after-party. The after-party began when I returned home in July 2013, with me setting up a new Qt project on my machine. What is that project about? Let that remain classified for now. I decided to include all the third-party libraries my project needs within the project itself. This means I did not install the third-party libraries in their default locations but in my project folders. So now, whenever I move my project to another machine, or to any freshly set up machine, I do not need to install any of the third-party libraries it depends on. Obviously this is not great work on my part, but it is essential for good, complete software.

To check whether my software was really “independent”, I quickly installed VirtualBox and made an Ubuntu 12.04 virtual machine.

Next I updated it and installed Qt5. When I tried to run one of the sample Qt projects, I got an error, something like “-lGL not found”.

It clearly showed that libGL was missing. To fix it, I found that the following installs libGL in the VirtualBox machine:

 sudo apt-get install mesa-common-dev

However, for one of my friends with the same issue, this also worked:

sudo apt-get install libgl1-mesa-dev

Both install the dev files for libGL, and with that the Qt project is all set to run. Except once, when I still found the same error on a newly set up virtual machine. This time the error occurred due to a broken reference from libGL.so to libGL.so.1.

How did I discover the broken link? I first checked whether my system had a libGL.so file at all. It did, right in its place. Next I checked whether it points to the correct file, using the following command:

 ls -l /usr/lib/x86_64-linux-gnu/libGL.so

And I found that it pointed to a file which did not even exist. Hence, the problem was a broken/incorrect link.

So I needed to create a new link, so that the libGL.so file points to the correct libGL.so.1 file.

I manually searched for libGL.so.1 and found that the file exists in /usr/lib directory.

To recreate the link, I first had to delete the old one:

sudo rm /usr/lib/x86_64-linux-gnu/libGL.so

This removes the old libGL.so link. Now it’s time to create a new one:

sudo ln -s /usr/lib/libGL.so.1 /usr/lib/x86_64-linux-gnu/libGL.so

This resolved my issue.
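The same broken-link diagnosis can be scripted. Below is a minimal Python sketch (the file names mimic the libGL case, but everything is created in a temporary sandbox directory, so the paths are purely illustrative) showing how a symlink can exist yet point to a missing target, which is exactly the state `ls -l` revealed:

```python
import os
import tempfile

# Set up a sandbox that mimics the broken state: a symlink whose target is gone.
sandbox = tempfile.mkdtemp()
target = os.path.join(sandbox, "libGL.so.1")   # stand-in for the real library
link = os.path.join(sandbox, "libGL.so")       # stand-in for the linker name

open(target, "w").close()          # create the "library"
os.symlink(target, link)           # libGL.so -> libGL.so.1
os.remove(target)                  # now the link dangles

# islink() sees the link itself; exists() follows it, so a dangling link
# reports "is a link" but "does not exist" -- the broken-reference case.
print(os.path.islink(link))        # True
print(os.path.exists(link))        # False

# The fix mirrors the shell commands above: remove the stale link, relink.
open(target, "w").close()
os.remove(link)
os.symlink(target, link)
print(os.path.exists(link))        # True
```

The key point is that `os.path.exists()` (like the compiler's `-lGL` lookup) follows the link, so it fails even though the link file is present.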

Python bit Eclipse

Posted: December 22, 2012 in Kreeps

This post is dedicated to my dear juniors.
Today I linked Eclipse on Linux with PyDev (the Python development plugin). A Python program can be written even in Notepad, but investing time in setting up a proper IDE is fruitful.

Follow these steps to set up Python in your Eclipse IDE.

The PyDev Ubuntu setup requires some configuration in Eclipse. Installing PyDev on Ubuntu is similar to the installation on Windows or Macintosh.

1. Run Eclipse and then go to Help | Software Updates | Find and Install…

2. In the Feature Updates window, select Search for new features to install and click Next.

3. In the Install window, select New Remote Site.

4. In the New Update Site window, specify the following:

Name: PyDev

URL: http://update-production-pydev.s3.amazonaws.com/pydev/updates/site.xml
The old URL was http://pydev.org/updates

5. In the Search Results window, select the PyDev extensions and click Next.

6. In the Feature License window, agree to the license agreement and click Next.

7. Configure the Python interpreter by going to Window | Preferences | Pydev | Interpreter – Python.

8. In the Python interpreters section, click New.

9. Locate your Python interpreter (e.g., /usr/bin/python).

10. When prompted, select your System PYTHONPATH. You can just leave the default checkboxes selected and click OK.

Now I assume the reader already has Python OpenCV installed. So follow these two steps:

11. Add /usr/local/lib/python2.7/dist-packages to the System Libs.

12. Add cv to the Forced Builtins.

All set now!
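To confirm that PyDev picked up the interpreter and path you configured, a tiny sanity script run from a new PyDev project can print what the IDE is actually using (the final `import cv` line is commented out, since it assumes the OpenCV bindings are already installed):

```python
import sys

# Which interpreter PyDev resolved -- should match the one chosen in step 9.
print(sys.executable)

# Whether the dist-packages directory added to System Libs is on the path.
print(any("dist-packages" in p for p in sys.path))

# If OpenCV's Python bindings are installed, this import should succeed:
# import cv
```

If the interpreter path or the `dist-packages` entry looks wrong here, revisit the Interpreter – Python preferences page before blaming OpenCV.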

Happy Image processing !

I found Python today!

Posted: December 21, 2012 in Kreeps

I have been coding in C and C++ for all my OpenCV-related projects until now. But when it comes to the machine vision project in team UAS-DTU (code name: Shaurya), we need to consider every possible parameter, even if it means shifting our code to Python when that makes it more efficient. So the machine vision team has started taking Python into consideration. Python is a general-purpose, interpreted, high-level programming language whose design philosophy emphasizes code readability. OpenCV’s Python bindings are implemented using SWIG, a software development tool that connects programs written in C and C++ with a variety of high-level programming languages. OpenCV in Python seems to make algorithm implementation faster, but I don’t think it will affect the overall processing time much, since everything is ultimately processed in C/C++ and Python is just a wrapper. Still, it is fair to give Python a chance: at the very least it could reduce coding and implementation time, and it may improve efficiency. Obviously, I googled to find out whether OpenCV’s Python implementation is good enough to port our code to, but I could not find any major advantage. So now I have decided to get hands-on with it myself.

Nowadays, everyone with a digital camera considers himself a professional. No doubt I also have a love for photography, so I thought of putting something about the sensors used in cameras on the blog. Most digital cameras (rather, almost all) use sensors from the CMOS family or the CCD family.
Both technologies keep improving. A CCD (charge-coupled device) may be better than a CMOS (complementary metal-oxide-semiconductor) sensor in one aspect, while CMOS might be better in another; each has unique strengths and weaknesses.
Both sensors convert light into electric charge. In a CCD, every pixel’s charge is transferred through a very limited number of output nodes (often just one) to be converted to voltage, buffered, and sent off-chip as an analog signal. All of the pixels can be devoted to light capture, and the output’s uniformity is high. In a CMOS sensor, each pixel has its own charge-to-voltage conversion, and the sensor often also includes amplifiers, noise-correction, and digitization circuits, so that the chip outputs digital bits. With each pixel doing its own conversion, uniformity is lower.

But the reason I am concerned about the sensor is a little application-dependent. Let’s understand it. Suppose I have a camera on a plane regularly taking images of a propeller. The two kinds of sensors produce different types of distortions or artifacts when they capture a rotating propeller.
In one form, the image contains curly, rippling lines, while in the other it shows a sort of phantom propeller that keeps moving across consecutive frames. The curvy image is generally produced by CMOS sensors, while the other distortion is produced by CCD sensors.


These artifacts are mainly due to the different electronic shutter methods used: a CMOS sensor uses a rolling shutter, while a CCD sensor uses a uniform (global) shutter. To get a better image, the frame rate could be increased; at a particular fps, the sensor produces images showing a stable propeller even though it is rotating. So now I know what I am going to lose (or gain) if I use a CMOS-based camera with a low frame rate.
So this was propeller distortion, a major concern when talking about aerial image acquisition.

Certainly, a Natural User Interface remains incomplete if a smart system cannot recognize common human gestures. I wondered whether my computer could tell me what action I am performing (walking, running, etc.), so I planned to study activity recognition. According to one source on the internet, “Activity recognition aims to recognize the actions and goals of one or more agents from a series of observations of the agents’ actions and the environmental conditions.”

Like almost all other image processing tasks that involve recognizing something in an image, activity recognition follows three steps:

1) Detection/Segmentation: the process through which we separate the required portion (or what I call the Object of Concern) from the image. In my case, the OoC is the human body. I separated (segmented) it from the image simply using background and foreground subtraction.

2) Classification: this process involves analyzing the segmented frames using certain parameters or matching them against templates.

3) Reasoning: this part mainly deals with reasoning engines, which encode the activity semantics based on the lower-level action primitives.
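The segmentation step can be sketched with plain frame differencing. This is a minimal numpy illustration (the frames here are small synthetic arrays, not real camera input, and the threshold of 30 is an arbitrary choice) of how a moving region pops out when a background frame is subtracted:

```python
import numpy as np

def segment_foreground(background, frame, threshold=30):
    """Return a binary mask marking pixels that changed versus the background."""
    # Widen to int16 first so the subtraction cannot wrap around in uint8.
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

# Synthetic 8x8 grayscale "scene": uniform background, then a frame
# where a 3x3 bright blob (our stand-in for the moving body) appears.
background = np.full((8, 8), 50, dtype=np.uint8)
frame = background.copy()
frame[2:5, 2:5] = 200

mask = segment_foreground(background, frame)
print(mask.sum())   # 9 -- exactly the blob's pixels were segmented
```

Real footage would of course need a maintained background model rather than a single reference frame, but the core subtract-and-threshold idea is the same.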

Hands on Web Development

Posted: October 23, 2012 in Kreeps

Someone once told me that no matter what technical work I do, website development will always earn bread for me. Since then, almost every piece of technical work of mine has proved him wrong. I do not have harsh feelings towards web development; I simply find what I do more interesting. Since childhood I had always thought of developing a website, but never got the opportunity — until my team UAS-DTU needed a new website and the responsibility fell on me. So http://www.uasdtu.com became the first website I developed. Of course, I do give credit to a few of my seniors. But after building the website, I felt that most of the time I only had to play with indexing or padding on the page to align content properly. Maybe that is the reason I don’t develop websites frequently. Then again, this time I did not get my hands on a dynamic website, which might change my perspective on web development.

Having said that, I must mention that web development is really good and can really earn bread (even faster), provided you have an interest in it.

ASCII ART!!! (-_-) [-_-]

Posted: July 21, 2012 in Kreeps

A picture is a poem without words.
– Horace

A picture is worth a thousand words.
-Napoleon Bonaparte

If a picture (image) can be worth so many words, why can’t it be made of words (mere ASCII characters)? Thus, to add a little fun to my work, I decided to write a program to turn any image into ASCII art.

Technically, making an image from standard symbols and characters is ASCII art.

example :

          * 
         / 
       HH 
     SSSSSS
    SSSSSSSS
    S )))) S 
   SS -  - SS
  SSS o  o SSS
 SSSS  6   SSSS
  SSS  __  SSS
   SSS    SSS
      W   W
     WW  WWW
   WWWW  WWWW 
   WWWWWWWWWW
        XXXXXXXXX
      XXXXXXXXXXXXX
    XXXXXXXXXXXXXXXX 
   (__________)XXXXXX 
    ( ___  ___ XXXXXX
       o/   o   XXXXX 
    (  /        XXXXX
      /___)     XXXXX
   (             XXXX
  (     ____    ) XXX
   (               XX
    (          )    X
     (       )      *
       (    )      ***
                    *
From the 1962 book “Art Typing”

Developing real ASCII art by typing in each character is truly a talent. But for a coder, it takes merely a few lines of code to convert any image into an ASCII file.

This is how I did it using the OpenCV library:

  • Open the image and perform a little Gaussian smoothing

Mat image = imread("7.jpg");
GaussianBlur(image, image, Size(5,5), 0);

  • Convert the image into a grayscale image

Mat grayImage = Mat(image.size(), CV_8UC1); // 8-bit single channel, since we read pixels with at<uchar> below
cvtColor(image, grayImage, CV_BGR2GRAY);

  • For each pixel of the image, depending on the pixel value, write an ASCII character into a text file

ofstream f("file.txt");
for(int i = 0; i < grayImage.rows; ++i)
{
    for(int j = 0; j < grayImage.cols; ++j)
    {
        if(grayImage.at<uchar>(i,j) > 230)
            f << "`";
        else if(grayImage.at<uchar>(i,j) > 200)
            f << "'";
        else if(grayImage.at<uchar>(i,j) > 160)
            f << ";";
        else if(grayImage.at<uchar>(i,j) > 120)
            f << "O";
        else if(grayImage.at<uchar>(i,j) > 80)
            f << "8";
        else if(grayImage.at<uchar>(i,j) > 40)
            f << "N";
        else
            f << "M";
    }
    f << "\n";
}

Here, the different characters are chosen to represent different brightness levels in the final output, depending on the white space they leave around them.

The quality can be improved further with more conditional constraints on the pixel value, or with edge detection.
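The same brightness-to-character mapping is easy to express in Python too. This is a small numpy sketch (the seven characters and thresholds are the ones used in the C++ code above; the input is a synthetic gradient rather than a real image, so no OpenCV is needed):

```python
import numpy as np

# Darker pixels get "denser" glyphs, matching the C++ thresholds above.
CHARS = ["M", "N", "8", "O", ";", "'", "`"]
THRESHOLDS = [40, 80, 120, 160, 200, 230]

def pixel_to_char(value):
    """Map a grayscale value (0-255) to an ASCII character."""
    for i, t in enumerate(THRESHOLDS):
        if value <= t:
            return CHARS[i]
    return CHARS[-1]   # brighter than every threshold

def to_ascii(gray):
    """Convert a 2-D grayscale array into lines of ASCII characters."""
    return ["".join(pixel_to_char(v) for v in row) for row in gray]

# A horizontal dark-to-bright gradient stands in for a real image.
gradient = np.tile(np.linspace(0, 255, 8, dtype=np.uint8), (3, 1))
for line in to_ascii(gradient):
    print(line)
```

Each printed line runs from the densest glyph to the lightest, mirroring how the gradient brightens from left to right.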