Pushing the Phone Gap

Posted: January 29, 2015 in Kreeps

The days when developers had to build a separate app for each target platform are nearing their end, thanks to technologies like PhoneGap: you code your app just once and deploy it across various platforms. This post is just a log of the information I came across while working with push notifications for a PhoneGap app. PhoneGap saves a lot of effort when your aim is to develop an app for various environments (iOS, Android, Windows…), since you don’t have to code separately for each target platform. Though the PhoneGap docs show a lengthy procedure for setting up your development environment, I found the following video on YouTube more useful and less painstaking:


App development is done in HTML. If the UI is not a major concern, jQuery Mobile is a nice JavaScript library that will help you with it. Since an app made using PhoneGap runs in a browser, there are obviously some limitations; I am not going to detail them here.

PhoneGap’s latest PushPlugin helps us add push notification capabilities to our app. The following GitHub links explain how to use push notifications:


The PushNotification.js file given in this repository has some missing semicolons. You can paste the following corrected code instead.

var PushNotification = function() {
};

// Call this to register for push notifications. Content of [options] depends on whether we are working with APNS (iOS) or GCM (Android)
PushNotification.prototype.register = function(successCallback, errorCallback, options) {
    if (errorCallback == null) { errorCallback = function() {}; }

    if (typeof errorCallback != "function") {
        console.log("PushNotification.register failure: failure parameter not a function");
        return;
    }

    if (typeof successCallback != "function") {
        console.log("PushNotification.register failure: success callback parameter must be a function");
        return;
    }

    cordova.exec(successCallback, errorCallback, "PushPlugin", "register", [options]);
};

// Call this to unregister for push notifications
PushNotification.prototype.unregister = function(successCallback, errorCallback) {
    if (errorCallback == null) { errorCallback = function() {}; }

    if (typeof errorCallback != "function") {
        console.log("PushNotification.unregister failure: failure parameter not a function");
        return;
    }

    if (typeof successCallback != "function") {
        console.log("PushNotification.unregister failure: success callback parameter must be a function");
        return;
    }

    cordova.exec(successCallback, errorCallback, "PushPlugin", "unregister", []);
};

// Call this to set the application icon badge
PushNotification.prototype.setApplicationIconBadgeNumber = function(successCallback, errorCallback, badge) {
    if (errorCallback == null) { errorCallback = function() {}; }

    if (typeof errorCallback != "function") {
        console.log("PushNotification.setApplicationIconBadgeNumber failure: failure parameter not a function");
        return;
    }

    if (typeof successCallback != "function") {
        console.log("PushNotification.setApplicationIconBadgeNumber failure: success callback parameter must be a function");
        return;
    }

    cordova.exec(successCallback, errorCallback, "PushPlugin", "setApplicationIconBadgeNumber", [{badge: badge}]);
};

if (!window.plugins) {
    window.plugins = {};
}
if (!window.plugins.pushNotification) {
    window.plugins.pushNotification = new PushNotification();
}

if (typeof module != "undefined" && module.exports) {
    module.exports = PushNotification;
}

For setting up GCM follow this: http://apigee.com/docs/app-services/content/registering-notification-service

For using php to send push notifications: http://distriqt.com/post/1273

The following link helps with adding and testing push notifications in an app: http://devgirl.org/2013/07/17/tutorial-implement-push-notifications-in-your-phonegap-application/


It was fun (and challenging too!) working with the following image for character detection:


Trying the algo on a random pic containing Indian script (Hindi, precisely) resulted in:


(Not Bad!)

It all started during a tête-à-tête with a few elites. They remarked that we could not build a video stabilization solution, because one of the biggest organizations in the country took months, if not years, on it. I found that peculiar. Not because they doubted us, but because we took only a few hours that night to produce the following outputs:

It all started with my regular B.Tech. classes at Delhi Technological University. My work and interests keep me far from having a big social life. It turns out that even after two years, I don’t know everyone in my class. It’s not about being oblivious: I know a few friends well, while many I recognize only by their face.

One day while travelling, a boy came up to me and started a conversation. He knew me well enough! A face recognition algorithm at the back of my brain told me that he was someone from my class, but my neurons could not retrieve any other information about him (not even his name). I thought he might feel bad if I asked his name, so for the entire conversation I had to pretend I knew him.

After the conversation, I started thinking about a scene from “Mission Impossible: Ghost Protocol”, where an agent’s face recognition system triggers a false positive alarm. Fascinated by this, I thought of working on a miniature face recognition system of my own, which I could later include in something big (shhh!!… concealed…).

But why face recognition after all…? Why do researchers spend their time on this..?

To answer this question, let us look at how we identify someone. We see, observe, and interact with various people, near and far, in our day-to-day life, and our brain is smart enough to recognize each person differently. We identify each other by name and/or appearance; we even recognize some people by their voice. Of all these ways, we use name and appearance the most, and of all appearance features, the face has the most impact.

I represent the above pictorially below:


Now, to decide between name and face, let me ask you a simple question. My neighbor’s name is Sunny. Knowing his name, can you identify him if he comes up to you?
Certainly not!
But if I tell you that my neighbor looks like the picture below, maybe you will be able to identify him the next time you see him.


We interact with the world around us through our senses, and our eyes are our windows into the world. The amount humans learn through their eyes should not be underestimated under any circumstances. Concretely, studies have shown that even three-day-old babies are able to distinguish between known faces.

Getting a bit technical now! We are far from understanding how our brain actually decodes faces, but various attempts have been made to simulate something similar to its workings. OpenCV has a few modules that can be used directly for this purpose.

A few useful links:

1) http://www.cognotics.com/opencv/servo_2007_series/part_2/index.html

2) http://docs.opencv.org/trunk/modules/contrib/doc/facerec/tutorial/facerec_video_recognition.html

3) http://docs.opencv.org/trunk/modules/contrib/doc/facerec/facerec_tutorial.html
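For anyone curious what such a module does under the hood, here is a rough, self-contained Python/NumPy sketch of an eigenfaces-style recognizer (PCA projection plus nearest-neighbour matching). It is a simplified stand-in for OpenCV’s FaceRecognizer classes, not their actual implementation; the class and parameter names here are my own.

```python
import numpy as np

class TinyEigenfaces:
    """Toy eigenfaces recognizer: PCA + nearest neighbour in face space."""

    def __init__(self, n_components=8):
        self.n_components = n_components

    def train(self, images, labels):
        # images: list of equally sized 2-D grayscale arrays
        X = np.array([img.ravel() for img in images], dtype=np.float64)
        self.mean = X.mean(axis=0)
        Xc = X - self.mean
        # Principal components via SVD of the centred training data
        _, _, vt = np.linalg.svd(Xc, full_matrices=False)
        self.components = vt[:self.n_components]
        # Project every training face into the low-dimensional face space
        self.projections = Xc @ self.components.T
        self.labels = list(labels)

    def predict(self, image):
        q = (image.ravel().astype(np.float64) - self.mean) @ self.components.T
        dists = np.linalg.norm(self.projections - q, axis=1)
        i = int(np.argmin(dists))
        # Best matching label, plus the distance as a crude confidence value
        return self.labels[i], dists[i]
```

A real pipeline would first detect and align the face (e.g. with a Haar cascade) before passing the crop to the recognizer.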

A few output videos:

Face Detection:

Face Recognition

Face Recognition with prediction confidence

Now, this is a post I found worth writing for my dear juniors, and naturally for myself.

OpenCV uses CUDA to run code on the GPU. The CUDA library is, however, only available for Nvidia graphics cards, and there is a wide range of Nvidia cards that support it.

https://developer.nvidia.com/cuda-gpus lists the various CUDA-enabled GPUs. Each CUDA-enabled GPU processes at a different compute capability.

Now, the first category of Nvidia GPUs is “Tesla”. The Tesla series represents the best of all Nvidia GPUs, but with high performance comes a large bill: each Tesla GPU costs a few thousand dollars. Hence, I had to stop researching Tesla GPUs, for obvious reasons.

Next comes the “NVS” series, which is designed to support multiple monitors (say, four). This series is surely not for heavy computation; the Nvidia page (https://developer.nvidia.com/cuda-gpus) lists a very low compute capability for it. It would be a dream come true for me to have four screens on a non-parallel system, but that would not resolve my priority concerns. Just for the record, this card can do cool stuff, as shown in the image:

The budget is also a concern; it is the major reason why we cannot afford the Tesla series.

Now, the GeForce and Quadro products are of interest to us. The GeForce series represents GPUs designed mainly for gaming, whereas the Quadro series is designed for more professional uses, like CAD on a workstation.

www.nvidia.com/object/quadro_geforce.html explains the advantages of Quadro over GeForce. However, along with its list of advantages, Quadro comes at a greater cost: the price of an entry-level Quadro GPU is comparable to a mid-range GeForce GPU. Thus, on a cost basis, GeForce may perform better than Quadro.

The GeForce GTX 650 is the first GeForce model with compute capability 3.0; in the Quadro series, the K600 is the first. The following table compares them:

                          GeForce GTX 650    Quadro K600
Compute Capability        3.0                3.0
CUDA cores                384                192
Base Clock (MHz)          1058
Memory Bandwidth (GB/s)   80                 29
Memory                    1 GB GDDR5         1 GB DDR3
OpenGL support            Yes (4.3)          Yes (4.3)
CUDA Support              Yes                Yes
Price (INR)               7,855              15,000

Sources for the above info:

1) http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-650/specifications

2) http://www.nvidia.com/object/quadro-desktop-gpus-specs.html

Someone explained it very well: “If you think of your video card like a freeway, then CUDA cores would be analogous to the number of lanes in the road, clock speed would be the speed limit, and memory interface would be the number of lanes for exit/entry ramps. More lanes means more cars can be moving on the freeway at any given time, the higher the speed limit the faster any given car is moving, and then if there’s say 2 exit lanes instead of 1, you can have more cars getting off the freeway, same as you can have more cars coming onto the freeway if there are say 2 entry lanes instead of 1.”

Thus, for a given price range, there will always be a GTX card with better hardware specs than the Quadro. The major difference lies in the fact that GTX cards are mainly designed for gaming; even so, they can still meet our requirements.

In a blog post at http://www.timzaman.com/?p=2256, the author compares various GPUs on a few OpenCV functions. He concludes: “In terms of value for money, the GTX 670 (€400) with 2Gb of RAM is very nice. There is absolutely no reason to buy the GTX 680 since it costs €100 more. Then again, the GTX 570 costs €300, which is nice, but only has 1,25Gb RAM, which can be dangerous when working with large images (nasty errors).
It is clear that GPU computation is BLOODY fast. But I HAVE to note that only a SINGLE core of the CPUs was used for the normal CPU functions. These algos have not really been optimized for multithreading, if I’m not mistaken. On the other hand, speed increases of >20x are too much for any Intel CPU to catch up with. GPU computing is a must if fast image processing is important.”

This happened a few weeks ago, but I am putting it down here so that I can read it in the future and smile.

I received an email from someone (let’s say Mr. ABC, working at Amazon). The mail was as given below (recreated!):

I am contacting you to check if you would be interested in working with Amazon Hyderabad as a Software Development Engineer. Kindly respond to XYZ@amazon.com to take this forward.

To which I replied that I would be interested in an internship, and forwarded my CV.

Now, a few FAQs:

1) How did they reach me? “Frankly speaking, I don’t know. Someone said that HRs usually scroll LinkedIn and contact people with good profiles. This could be the reason, but I am not sure about it. Both HRs, ABC and XYZ, are not connected to me in any way, not even on LinkedIn. I regularly maintain my LinkedIn profile, which you can check at http://in.linkedin.com/in/kreezire.”

2) Why did they contact me? “Call it exaggerated, but after seeing my work experience they thought I had already graduated. That means they may approach me again next year. Ahem ahem… even I can refer a few people now.”

Moving further: weeks after my response, they finally called me and scheduled my written test (I thought it was for an INTERNSHIP). It was a coding test: three questions in about 1.5 hours, hosted on InterviewStreet, with no objective-type questions. Two of them were solved using DP, while the third was relatively easy. After the written test, I had a series of telephonic interviews (all technical). The interviewer even asked me to code on Collabedit, a site that allows collaborative editing, so any change I made in my doc appeared on his computer too.

Now, about the questions: Amazon usually repeats its questions every year, so it is always good to check a few blogs for previous years’ questions (which I did only for my last interview). My written test was completely different! Their favorite interview topics are binary trees and binary search trees; my first interview was only about BTs and BSTs. I gave four different algorithms to check whether a BT is a BST.
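One of the classic checks passes value bounds down the tree; here is a Python sketch of that approach (my own reconstruction, not necessarily one of the four I gave), using the convention that equality goes with the left child:

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def is_bst(node, lo=float("-inf"), hi=float("inf")):
    """Check whether a binary tree is a BST in O(n) time.

    Every node's value must lie in the interval (lo, hi]; duplicates
    are allowed only in the left subtree (equality with the left child).
    """
    if node is None:
        return True
    if not (lo < node.val <= hi):
        return False
    # Left subtree may contain values up to and including node.val;
    # right subtree must be strictly greater.
    return is_bst(node.left, lo, node.val) and is_bst(node.right, node.val, hi)
```

A common wrong answer only compares each node with its immediate children, which misses violations deeper in a subtree; the bounds version catches those.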

The person who takes the technical interview is generally not from the HR department (obvious!). He might belong to the department for which you are about to be selected. Apparently, I cleared all the rounds.

Quick tips:

1) Study data structures. At the very least, you should know all the definitions. I did not know that in a BST, equality always goes with the left child; my interviewer told me this.

2) The interviewer gives you time to think, but do not ever take too much time to answer.

After that! I guess most of my friends want to know what happened in the end. For them: don’t worry, I am still unemployed. Now, the craziest thing, which no one knows, is that on the day of my last telephonic conversation I received another “similar looking” mail from Amazon, asking if I would like to work with them. The only difference: this time it was from Amazon BANGALORE!

After the real party (SUAS 13) this summer, I needed some refreshment. Perhaps an after-party. The after-party began when I returned home in July 2013, with me setting up a new Qt project on my machine. What is that project about? Let that remain classified for now. I decided to include all the third-party libraries my project needs within the project itself. That is, I did not install the third-party libraries in their default locations, but in my project folders. So now, whenever I move my project to another machine, or to any freshly set up machine, I do not need to install any of the third-party libraries it requires. Obviously this is not great work, but it is essential for good, complete software.

To check if my software was really “independent”, I quickly installed VirtualBox and made a virtual machine running Ubuntu 12.04.

Next, I updated it and installed Qt5. When I tried to run one of the sample Qt projects, I got an error, something like “-lGL not found”.

It clearly showed that libGL was missing. To fix it, I found that the following installs libGL in the VirtualBox machine:

 sudo apt-get install mesa-common-dev

However, for one of my friends with the same issue, this also worked:

sudo apt-get install libgl1-mesa-dev

Both install the dev files for libGL, and with that, everything was set to run the Qt project. Except once, when I still found the same error on a newly set up virtual machine. This time the error occurred due to a broken reference from libGL.so to libGL.so.1.

How did I discover the broken link? I first checked whether my system had a libGL.so file at all. It did, and right in its place. Next, I checked whether it pointed to the correct file, using the following command:

 ls -l /usr/lib/x86_64-linux-gnu/libGL.so

And I found that it pointed to a file which did not even exist. Hence, the problem was a broken/incorrect link.
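The same dangling-link check can be scripted; here is a small Python sketch of it (the libGL path is just the one from my machine, adjust as needed):

```python
import os

def is_dangling(path):
    """True if `path` is a symlink whose target no longer exists."""
    # os.path.islink() looks at the link itself; os.path.exists()
    # follows the link, so it returns False when the target is gone.
    return os.path.islink(path) and not os.path.exists(path)

# e.g. is_dangling("/usr/lib/x86_64-linux-gnu/libGL.so")
```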

So I needed to create a new link, so that the libGL.so file points to the correct libGL.so.1 file.

I manually searched for libGL.so.1 and found that the file exists in the /usr/lib directory.

To recreate the link, I first had to delete the old one:

sudo rm /usr/lib/x86_64-linux-gnu/libGL.so

This removes the old libGL.so file. Now it is time to create a new link:

sudo ln -s /usr/lib/libGL.so.1 /usr/lib/x86_64-linux-gnu/libGL.so

This resolved my issue.

Panda Goes Wild – Reminder Notes

Posted: April 23, 2013 in Work

Configuring the PandaBoard is not that difficult. But once things stop working, it sometimes becomes difficult to answer “why”.

The past few days with the Panda were not so good. This is how it went. I had two PandaBoards. I picked one of them and tried to work with it, but I guess the Panda was in a bad mood. So was the other Panda. After hours of googling at night (00:00 till 5 am), nothing worked. Eventually, I figured out that there was some problem with the memory card reader’s pins, and luckily one of the boards started working after a little hardware correction.

During some other test flight, one of my “dearest” (angrily) friends spilled fuel on the PandaBoard. Luckily the Panda was not powered, so nothing went wrong that day. BUT, just before the next test flight (3:00 am), the memory card inside broke. It smelled like fuel; the reason was swelling of the memory card due to the fuel. So I had to reconfigure a new memory card with all the libraries before 6 am.

So I thought to put down some reminder notes for juniors (like what my seniors did for me):

And the installation begins.

>> Install OpenSSH server

Install the following (sudo apt-get):




g++ (not required though)


> Rsync without password:
1) generate keys :
$ ssh-keygen
Enter passphrase (empty for no passphrase):
Enter same passphrase again:

ssh-copy-id -i ~/.ssh/id_rsa.pub panda@
(and bang)

I have been searching for some good papers which I could implement to improve my current implementation. After lots of googling and reading a few papers, my eyes were half red. It was around 3 a.m. when I came across Ioannis Katramados and Toby Breckon’s paper, “Real-time Visual Saliency by Division of Gaussians”. This was not exactly what I was looking for, but it somehow seemed to satisfy my need, so I thought to implement it. Since then, I have been discussing various things with both I. Katramados and T. Breckon. They both are really helpful, and so is their paper.

Coming back to the implementation: the paper is written perfectly, and it seems easy to implement using OpenCV (SEEMS!).

This is what I am doing:
1) I converted the image to grayscale (32F).
2) According to step one in the paper: “The Gaussian pyramid U comprises of n levels, starting with an image U1 as the base with resolution w × h. Higher pyramid levels are derived via downsampling using a 5 × 5 Gaussian filter. The top pyramid level has a resolution of (w/2^(n−1)) × (h/2^(n−1)). Let us call this image Un.” This means I simply have to perform the pyrDown() operation in OpenCV, which I did 8 times.
3) According to step two of the paper: “Un is used as the top level Dn of a second Gaussian pyramid D in order to derive its base D1. In this case, lower pyramid levels are derived via upsampling using a 5 × 5 Gaussian filter.” I simply performed pyrUp() 8 times.
4) Then comes the pixel-by-pixel division of values, as per the paper.
5) I normalized the result matrix to 0–255.
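The steps above can be sketched in Python with NumPy. Note this is only a rough approximation of what I did: a 2×2 mean-pool and pixel duplication stand in for OpenCV’s 5×5 Gaussian pyrDown()/pyrUp(), and the +1 offset to avoid division by zero is my own choice, so do not treat it as a faithful port of the paper.

```python
import numpy as np

def pyr_down(img):
    # 2x2 mean-pool: a crude stand-in for a 5x5 Gaussian + downsample
    h, w = img.shape
    return img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def pyr_up(img):
    # Pixel duplication: a crude stand-in for Gaussian upsampling
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def divog_saliency(gray, levels=8):
    u = gray.astype(np.float32) + 1.0        # offset to avoid division by zero
    top = u
    for _ in range(levels):                  # pyramid U: downsample to the top level Un
        top = pyr_down(top)
    d = top
    for _ in range(levels):                  # pyramid D: upsample Un back to the base D1
        d = pyr_up(d)
    ratio = d[:u.shape[0], :u.shape[1]] / u  # pixel-by-pixel division
    out = ratio - ratio.min()                # normalize the result to 0-255
    if out.max() > 0:
        out = out / out.max()
    return (out * 255).astype(np.uint8)
```

With levels=8 the input needs to be at least 256 pixels on each side, matching the 8 pyrDown() calls described above.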
I. Katramados has been really generous; both authors of the paper have been really helpful. I guess a few changes to the implementation might result in better output.
3:41 PM: So now it is time to follow a few more pieces of advice from Ioannis Katramados.

Finding the Hul(k)l Within

Posted: February 10, 2013 in Work

Struggle is something that should never stop. Even if I wanted it to stop, time constraints would not allow it. The implementation up till now is good, but it is taking more time than calculated (3-4 seconds). Hence, the struggle for a better implementation continues. Today I have been working on finding the convex hull around the object in the image. The idea does not seem so good to me, but trying it is the only option, so I am being a little optimistic for now. As time passes, my assumptions are turning out right: mapping the convex hull around the target back to the original-size image was not really good, so after a few trials I dropped the idea. The current implementation is good enough, so tweaking it could help us. Using the concept of an ROI really helped. Now I am trying what I call another way of finding the saliency map. This should reduce my run time (SHOULD).