Tuesday, August 07, 2007

Why am I here?

What is the purpose of my life?

For a while I have been bugged by this question: “What is the purpose of my life?” Everyone, at some point, must have asked themselves questions along those lines: “Why am I here?”, “What is the meaning of my life?”, “Why doesn’t someone come and tell me what my purpose is?”

For about a week I had been thinking a lot about this, and one fine Sunday morning I woke up and saw myself in the mirror. I looked the same as I had a week back; in fact, much duller. I looked around, and my room was pretty much the same as it was the previous Saturday, only shabbier. Then I looked at the calendar, and something immediately struck me. I hadn’t gained anything, but I had certainly lost something: one complete week of my life. That’s when I realized how stupid I was and what a silly question I was asking myself. Instead of thinking so much about “What is the purpose of my life?”, I should have asked a much simpler question: “What do I do this week to make my life a little better?” I could have gone and seen the beautiful waterfalls. I could have improved my game skills. I could have exercised and gotten into shape. I could have spent a whole day standing on the road saying ‘hi’ to strangers. I could have called up my family and spoken to them for hours. Even better, I could have worked at a petrol station and earned some money. Instead, I spent hours sitting in a cozy dark room, listening to sad music. What a complete loser I was. The calendar hanging in my room really did me a great deal of good: it showed me what I had lost.

Grapes hanging high are sour; that adage certainly applies here. Let us think for a while: does anyone really know why he or she is here? I doubt it. No one knows the answer to the question “What is the purpose of one’s life?” In fact, people keep blaming God for not showing them the purpose of their lives. What if tomorrow God appeared before you and said, “My son, your purpose in life is to become the President of America”? What good does that direction do you? Does it help you? You might even laugh and tell God, “Take it easy, Lord!”

Some people put so much time and energy into finding answers that everything else seems completely worthless until they find the solution to this supposedly ‘holy question’. The Vemana shatakam may have been born out of it, but not everyone is a Vemana, and maybe I don’t need the Vemana shatakam at all.

Mahatma Gandhi did not know, when he was born, that he would break the shackles of slavery, bless a whole nation with freedom, and be hailed as the ‘Father of the Nation’ by a billion people. Nor did he lock himself up in a room, constantly thinking and analyzing to answer the question “What is my purpose in this world?” He just rose to the occasion and dedicated himself fully, to the extent possible. He went by a simple principle: “How can I see happiness in the faces of the people around me?” It so happened that it was not 10 people around him, but a whole nation.

So, can we live by one simple principle? Every single day, ask ourselves this simple question: “How can I make myself happy, and, within my capacity and in the hour of need, can I rise to the occasion and do what is needed to see happiness in the faces of the people around me?”

The hard part comes here: what makes people happy? A beggar feels happy when a person gives him money. A saint is happy when God appears before him. What can we do about it? OK, here comes ‘capacity’. Can you give the beggar a rupee and still feel at the end of the day that you have not lost a fortune? Can God (if he existed) appear before the saint for a minute without regretting at the end of the day that, in that minute, he could have saved an entire town from a flood or some natural calamity? Then do it.

Can you visit your ailing grandmother and make her happy? Can you wish a kid a happy birthday? Can you visit an old and lonely couple and give them a day of happiness? Can you speak with your parents and make them happy? Can you joke about yourself to bring laughs to a boring get-together? Can you smile at a stranger and see his lips blossom?

Just keep in mind that you are a small link in this enormous network of social living. Nevertheless, you are a link: your mere existence is crucial to keeping the network intact, and any further effort only makes it better.

Who do I blame today?

We need someone to blame: mom, dad, siblings, friends, teachers, colleagues, the boss, or God.

Everyone is intelligent enough to understand their responsibilities. But they refuse to accept them, because then there would be no one but themselves to blame for the consequences.

Just do this simple task for 5 days –


Each day, write on a piece of paper how you spent the last 30 minutes of your day. At the end of the day, look at the piece of paper. You will learn a lot from it. You will see the gaps and where to improve.

Wednesday, September 27, 2006

LSI + Contextual Network Search

http://www.knowledgesearch.org/

M. Ceglowski, A. Coburn, and J. Cuadrado. Semantic search of unstructured data using contextual network graphs. Preliminary white paper, National Institute for Technology and Liberal Education, Middlebury College, Middlebury, Vermont, 05753 USA, 2003. [url]

————————————

This paper describes the Contextual Network Graph, a technique intended to address some of the pitfalls of latent semantic indexing, such as the poor scalability of the singular value decomposition algorithm.

The authors offer an alternative interpretation of the term-document matrix (TDM), which is essentially a lookup table of term frequency data for the entire document collection. In LSI this is interpreted as a high-dimensional vector space. Alternatively, this can be seen as a bipartite graph of term and document nodes where each non-zero value in the TDM corresponds to an edge connecting a term node to a document node. In this way, every term is connected to all of the documents in which the term appears, and every document has a link to each term contained in the document. The authors call this a contextual network graph.

This construct corresponds to the intuition that documents sharing many rare terms are likely to be semantically related.

Their idea is to use this representation to search a document collection by energizing a query node and allowing the energy to propagate to other nodes along the edges of the graph, based on a set of simple rules.

Their experiments show results comparable to those of an LSI search engine. In a 1981 dissertation at the University of Illinois, Scott Preece describes an almost identical technique under the name spreading activation search.
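The energy-propagation idea can be sketched roughly as follows. This is a toy Python sketch, not the authors' implementation: the tiny term-document matrix, the edge weighting, the decay factor, and the cutoff threshold are all illustrative assumptions.

```python
# Sketch: spreading activation over a bipartite term-document graph.
# Every non-zero TDM entry becomes an edge between a term node and a
# document node; a query energizes term nodes, and energy propagates.
from collections import defaultdict

# Toy term-document matrix: doc -> {term: frequency} (assumed data).
tdm = {
    "d1": {"semantic": 2, "graph": 1},
    "d2": {"graph": 3, "energy": 1},
    "d3": {"latent": 2, "semantic": 1},
}

edges = defaultdict(list)  # node -> [(neighbor, weight)]
for doc, terms in tdm.items():
    total = sum(terms.values())
    for term, freq in terms.items():
        w = freq / total  # normalized frequency as edge weight (assumption)
        edges[("doc", doc)].append((("term", term), w))
        edges[("term", term)].append((("doc", doc), w))

def spread(query_terms, decay=0.7, threshold=0.01):
    """Energize query term nodes; propagate energy until it fades out."""
    energy = defaultdict(float)
    frontier = [(("term", t), 1.0) for t in query_terms if ("term", t) in edges]
    while frontier:
        node, e = frontier.pop()
        energy[node] += e
        out = e * decay                # energy decays at every hop
        if out < threshold:
            continue
        total_w = sum(w for _, w in edges[node])
        for nbr, w in edges[node]:
            frontier.append((nbr, out * w / total_w))
    # Rank document nodes by accumulated energy.
    docs = [(n[1], v) for n, v in energy.items() if n[0] == "doc"]
    return sorted(docs, key=lambda x: -x[1])

ranking = spread(["semantic"])  # d1 mentions "semantic" most heavily
```

Because the graph is built directly from the TDM, there is no SVD step; a query costs only a local graph traversal, which is the scalability argument in a nutshell.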

Tuesday, December 13, 2005

Learning structural metadata of books

  • Introduction
Structural metadata can be an important component of the metadata of a book in a digital library.
But adding the structural tags manually is time-consuming. Is there a way of doing it automatically? Especially when we have large annotated data (by annotated I mean example data containing structural metadata), can we somehow learn from it and use it to assign the corresponding structural metadata to a new page?


  • Some questions to think about:
  1. Is the problem doable?
  2. How easy or hard is it to do?
  3. How should we do it?
  4. If it is doable, what kind of assumptions should we be making?
  5. What kind of results should we be expecting?
  6. What is the related work?
  7. Are there machine learning approaches? What other approaches exist?
  8. What are their results and observations?
  9. Should I use the images or the textual content of the book? What are the advantages and disadvantages of each?
  • A Rudimentary Approach:
As a first step, we assume that the structural metadata indicates the type of page: whether it is the first page of the book, an index page, the preface, the cover page, a normal page, etc.
Can I then view this problem of assigning structural metadata as a classification problem? The formulation is as follows:
Given large annotated data containing the structural information, I should be able to successfully learn from it and use it to assign structural information to any given page with some accuracy.

Convinced to approach the problem as a classification problem, the question still remains whether the image or the textual content should be used. It is not yet clear.
Whatever the case may be, the next important phase in approaching the problem is extracting appropriate features (this has to be done depending on what we want to use, i.e., image or text).
What machine learning techniques should we use? The same old famous neural networks with n hidden layers?
Still to think..................
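To make the classification formulation concrete, here is a minimal sketch using textual content with a tiny multinomial naive Bayes classifier (stdlib only). The page types, training texts, and word features are toy assumptions, not real annotated data, and naive Bayes is just one simple choice of learner.

```python
# Sketch: page-type classification from page text via naive Bayes.
import math
from collections import Counter, defaultdict

# Assumed toy training data: (structural label, page text).
train = [
    ("cover",  "the great book title author publisher"),
    ("index",  "index apple 12 banana 34 cherry 56"),
    ("index",  "index term 3 term 9 term 21"),
    ("normal", "once upon a time the story begins here"),
    ("normal", "the chapter continues with more prose text"),
]

word_counts = defaultdict(Counter)  # label -> word frequencies
class_counts = Counter()            # label -> number of training pages
for label, text in train:
    class_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def classify(text):
    """Pick the label maximizing log P(label) + sum of log P(word|label)."""
    best, best_score = None, float("-inf")
    for label in class_counts:
        score = math.log(class_counts[label] / len(train))
        total = sum(word_counts[label].values())
        for w in text.split():
            # Laplace smoothing so unseen words don't zero out a class.
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

label = classify("index entry 44 apple 12")  # looks like an index page
```

With real data, the interesting work is in the features (word statistics, layout, digit density, page position in the book), not in the particular classifier.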

Personalized Search - Overview of Approaches - (Trying to complete a survey)

I am beginning to write a survey of approaches to personalized search. In this post, I present a categorization of approaches to personalized search. It is as follows ....

____________________________________________
Categorization of Personalized Search Approaches
____________________________________________

First of all, search is not a solved problem. Moreover, with the tremendous growth in the information available on the web, personalized search is increasingly becoming an active research area.
There is a variety of approaches, and a growing literature, on personalized search. One categorization of the approaches is:

1) Link-based approaches using the graph structure of the web, primarily extending PageRank (what Google uses) and Hubs and Authorities
2) Domain-specific personalization based on ontologies, etc.
3) Content-based approaches (based on the vector model in information retrieval)
4) Machine-learning-based approaches
5) Approaches based on linear algebra
6) Recommendation-based personalized search (using collaborative filtering and content-based filtering)
7) Approaches based on the long-term and short-term history of the user, from web logs, etc.

All the existing approaches to personalized search in the literature can more or less be placed in this categorization, and each approach can fall into one or more categories.
For example, a machine-learning-based approach may also use the content of the page, and so on.

This categorization can be better visualized in terms of sets. Each of the 7 categories can be represented as a set; certain sets contain certain other sets, and there are small and big overlaps accordingly. The approaches belonging to each category are the elements of the respective sets.

Link Analysis

Hi,

I am posting some information I know about link analysis.

Link analysis, as far as I know, is making use of the hyperlinks on the web for various applications. The applications include finding authoritative, important, or significant pages on the web [1], computing web page rankings for searches (Google) [2], finding web user communities [3][4], finding similar pages [5], web page clustering, web site classification, recommendation systems, etc.

You know Google's PageRank algorithm, which uses the back links and out links of a given page to calculate the popularity of the page. The basic idea in the PageRank algorithm is calculating the popularity of a web page based on the back links it has. It is believed that a good or popular page has links from other good or popular pages. For example, compare my home page and Yahoo: my home page has few back links, whereas Yahoo has many, so Yahoo is a more popular page than my home page.

Basically, they look at how a page is connected in the web: to what pages it gives links (out links), and from what pages it has links (back links). For example, clustering of web pages can be done by observing the links of a page. It is believed that similar pages will have similar out links and back links, so by looking at the out links and back links we can see how two pages are related, and so on.

In this process, the content of the page is usually not used, except for the anchor text. (When we give a hyperlink in running text, we tend to put a small description of the hyperlink; that is called the anchor text.) The content is not much used because most of the research in this area is done by database people, and for some reason they tend not to use the content of pages.

References
[1] http://iiit.net/~pkreddy/wdm03/wdm/auth.pdf
[2] http://iiit.net/~pkreddy/wdm03/wdm/page98pagerank.pdf
[3] http://iiit.net/~pkreddy/wdm03/Trawling.htm
[4] http://iiit.net/~pkreddy/wdm03/wdm/identification_of_web.pdf
[5] http://iiit.net/~pkreddy/wdm03/wdm/FRP.pdf

Monday, December 12, 2005

design pattern conformance

Developers realize design patterns in various forms. Though an architect might continue his analysis assuming the implementation follows design pattern "X", in reality the implementation may not reflect the same.

If we could dynamically discover the design pattern from the running system and match it against a standard template, we could probably infer that..

Look in terms of this paper
http://www.cs.brown.edu/research/vis/docs/pdf/Heuzeroth-2003-ADP.pdf

Maybe probabilistic state machines could help?

Store the design pattern as a state machine, and compare it against an inferred state machine?
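The "store the pattern as a state machine" idea could look roughly like this. The Observer-style protocol below (attach before notify; no activity once all observers are detached) is an invented example of a pattern template, and a real tool would infer the trace from a running system rather than take it as a list.

```python
# Sketch: check an observed call trace against a design-pattern
# protocol stored as a small deterministic state machine.

# Transitions: (state, event) -> next state. An assumed Observer-like
# protocol: you must attach an observer before notifying.
observer_dfa = {
    ("empty", "attach"): "observed",
    ("observed", "attach"): "observed",
    ("observed", "notify"): "observed",
    ("observed", "detach"): "empty",
}

def conforms(trace, dfa, start="empty"):
    """Return True iff every event in the trace has a valid transition."""
    state = start
    for event in trace:
        if (state, event) not in dfa:
            return False        # the implementation deviates from the pattern
        state = dfa[(state, event)]
    return True

ok = conforms(["attach", "notify", "notify", "detach"], observer_dfa)
bad = conforms(["notify", "attach"], observer_dfa)  # notify before any attach
```

A probabilistic variant would attach transition probabilities and score the likelihood of a trace instead of giving a yes/no answer, which fits noisy runtime data better.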

Saturday, December 10, 2005

Inferring constraints of usage

Most proper usages of frameworks require and enforce a certain sequence or order in which you perform your activities. A simple example could be: call routine A before you call routine B. Subscribe before you publish; open before you close.

Is there research on mining and automatically inferring such rules for using a framework?

Please update me if you know of any. If there isn't any, throw some light on how one knows the right order in which to perform those calls.

"Mining Specifications" - Look at this paper and George's thesis at CMU ISRI.
www.cs.berkeley.edu/~bodik/research/popl02a.pdf

Saturday, November 19, 2005

MSR 2006 potential projects

1) Find line-to-line correspondence across versions. Get a mapping of each line to its author.

Run FindBugs and update the Bugzilla reports with the corresponding author information.
http://findbugs.sourceforge.net/

2) JDepend gives you the complexity of the code and the quality of the design.
http://www.clarkware.com/software/JDepend.html

Find check-in relationships and patterns by mining CVS. Then draw conclusions about coupling and cohesion. Could there be a complementary analysis that helps evaluate the code better?

3) BIRT is a cool project that helps you access, format, and create reports from Bugzilla.
http://dev.mysql.com/tech-resources/articles/using-birt/

Integration of Bugzilla with other analysis and reporting techniques, and grouping and observing change patterns, can be done.

Tuesday, October 18, 2005

Invariant detection in CVS Code Repositories


Some lines are not altered by developers over a period of time for ‘n’ number of checkins. These lines could contain extractable patterns or programmatical equations which could be extracted and called “invariants”. If these are suddenly changed, then may be we need to alert the user.