Monday, 27 May 2013

P vs NP


INTRODUCTION

P vs NP is a highly theoretical topic with vast practical implications.
Let's first see, informally, what the terms mean:
  • P class problems: problems that can be solved in polynomial time; that is, their upper bound is O(n^k), where k is some constant. It doesn't matter how large k is: even if k is 1000, an O(n^1000) problem is still in P.
  • NP class problems: any problem whose solution can be "verified" in polynomial time is an NP problem, even if no polynomial-time algorithm is known for solving it. What the heck does that mean? It simply means that solving the problem may take, say, 2^n or n! time; however, once we are handed a candidate solution, we can check whether it is correct in polynomial time.
    For example, take the Hamiltonian cycle problem. Here we need to find a path in a graph that visits every vertex exactly once, except the source vertex, which is also the destination. To date there is no known polynomial-time algorithm for it. However, once a path is produced, we can easily check in polynomial time whether it really satisfies the problem's criteria.
    Note 1: When we say "not solvable in polynomial time", we mean that no such algorithm has yet been discovered. It is always possible that in the future someone discovers (or at least proves that there exists) a polynomial-time algorithm.
    Note 2: Since every problem in P can be verified in polynomial time (do we even need to verify? we can just solve it :P ), we can conclude that P is a subset of NP.
  • NPC class problems: informally, these are the problems that are "hardest" in NP, or at least no easier than any other problem in NP. We will get to the technical definition later, but it simply means that if any NPC problem can be solved in polynomial time, then every problem in NP has a polynomial-time solution.
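To make the "verified in polynomial time" idea concrete, here is a minimal sketch of a certificate checker for the Hamiltonian tour example above. The function name and the adjacency-matrix encoding are my own illustrative choices; the point is only that the check itself runs in polynomial time:

```cpp
#include <cassert>
#include <set>
#include <vector>

// Hypothetical helper: checks whether `path` is a valid Hamiltonian
// cycle of the graph given by adjacency matrix `adj`. `path` lists the
// vertices in visiting order; the cycle closes back to path[0].
// This is the polynomial-time *verification* step; the hard part is
// finding such a path in the first place.
bool verify_hamiltonian_cycle(const std::vector<std::vector<bool>>& adj,
                              const std::vector<int>& path) {
    int n = adj.size();
    if ((int)path.size() != n) return false;        // must touch every vertex
    std::set<int> seen(path.begin(), path.end());
    if ((int)seen.size() != n) return false;        // ...exactly once
    for (int i = 0; i < n; ++i) {
        int u = path[i], v = path[(i + 1) % n];     // wrap around to close the cycle
        if (!adj[u][v]) return false;               // consecutive vertices must be adjacent
    }
    return true;
}
```

Whatever exponential search produced the path, checking it is just a couple of loops over n vertices.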

Motivation

Before proceeding further, one may ask: why the heck do we need to study this at all? What does it matter whether a problem is n^1000 or 2^n? Both are practically impossible to run for any sufficiently large n, so why treat them as separate classes?

So, here are some reasons why we should study P vs NP:
  1. If a polynomial-time algorithm is discovered for a problem, there is always a chance we can reduce its complexity further while staying polynomial. O(n^1000) might, with some improvements, become O(n^100) or even smaller.
  2. On the other hand, if a problem is shown to be NP-complete, it is usually better to attack it with approximation algorithms, or with algorithms for specific instances of the problem, rather than hunting for a fast general solution.
  3. Polynomial problems also possess closure properties. In simple terms: you can add, multiply, or compose polynomial-time algorithms and still obtain a polynomial-time solution. We cannot make such a claim about NP problems.
  4. Another property of polynomial problems is that they tend to remain polynomial across different models of computation, such as the abstract Turing machine or the random-access machine.
  5. Although it is widely believed that no polynomial-time solution exists for any NPC problem, finding one would immediately yield polynomial-time solutions for EVERY NP problem. On the other hand, no one has been able to prove whether such a solution exists (or not!). This is the most famous open question in computer science. (In case anyone is interested, there is a $1,000,000 prize for anyone who can prove whether or not a polynomial-time solution can exist for an NPC problem.)

Now let's move to the technical bits:

DECISION vs OPTIMIZATION

Throughout the study of algorithms, we come across several optimization problems, where the goal is to find an optimal solution: the shortest distance between two nodes in a graph, the longest common subsequence of two strings, and so on.
In the world of NP, however, we deal with decision problems. The goal is not to find an optimal solution, but to decide whether "such a solution exists."

First, let us see how to convert any optimization problem into a decision problem. For the shortest-distance problem:
Optimization: In a graph G(V, E), find the shortest path between nodes S and D.
Decision: In a graph G(V, E), does there exist a path between S and D of weight at most W?
Easy!
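To make the conversion concrete, here is a minimal sketch of the decision version wrapped around an ordinary shortest-path routine. The function names and graph encoding are my own assumptions, and I use plain Dijkstra for the "optimization" part:

```cpp
#include <cassert>
#include <climits>
#include <functional>
#include <queue>
#include <utility>
#include <vector>

using Graph = std::vector<std::vector<std::pair<int,int>>>; // adj[u] = {(v, weight), ...}

// Plain Dijkstra: length of the shortest S->D path, or INT_MAX if none.
int shortest_path(const Graph& g, int s, int d) {
    std::vector<int> dist(g.size(), INT_MAX);
    std::priority_queue<std::pair<int,int>,
                        std::vector<std::pair<int,int>>,
                        std::greater<>> pq;      // min-heap of (dist, vertex)
    dist[s] = 0;
    pq.push({0, s});
    while (!pq.empty()) {
        auto [du, u] = pq.top(); pq.pop();
        if (du > dist[u]) continue;              // stale queue entry
        for (auto [v, w] : g[u])
            if (du + w < dist[v]) {
                dist[v] = du + w;
                pq.push({dist[v], v});
            }
    }
    return dist[d];
}

// Decision version: "is there an S->D path of weight at most W?"
bool path_at_most(const Graph& g, int s, int d, int W) {
    return shortest_path(g, s, d) <= W;
}
```

Note the direction of the argument: solving the optimization problem instantly answers every decision question, which is exactly why the decision version can only be easier.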

Now, why do we use decision problems rather than optimization problems? Because they are easier to answer! It is always easier to answer "yes" or "no" than to produce an optimal solution.
Also, remember that we are now trying to prove "lower bounds" for a problem rather than "upper bounds": if we can prove that the decision problem is hard, we can conclude that the optimization problem is at least as hard.
So, in short, our job gets easier. If the decision problem is hard, it automatically follows that the corresponding optimization problem is hard. Conversely, if the optimization problem is easy, the decision problem is easier still.

REDUCTIONS

This is fairly easy. Say we have two decision problems, A and B. We have already solved A, and now we would like to solve B.

Now, suppose there exists a BLACK BOX. What it does is take an instance of problem B (which we would like to solve) and *magically* convert it into an instance of problem A (which we have solved).
This black box has the following properties:
  • The conversion runs in polynomial time.
  • The conversion is "truth-preserving", i.e. the answer to the converted instance ("yes" or "no", keep in mind these are decision problems) is the answer to the original instance.
So this is how we show most problems are NPC: we take one problem we have already proved to be NPC, build such a conversion from it to the new problem, and conclude that the new problem is at least as hard.
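As a concrete (and classic) example, here is a sketch of the reduction between Independent Set and Vertex Cover: a graph G on n vertices has an independent set of size at least k exactly when it has a vertex cover of size at most n - k. The struct names and the brute-force "solved problem" below are my own illustrative choices:

```cpp
#include <cassert>
#include <utility>
#include <vector>

// A decision-problem instance: a graph (edge list on n vertices) and a bound k.
struct Instance {
    int n;
    std::vector<std::pair<int,int>> edges;
    int k;
};

// The "black box" conversion: G has an independent set of size >= k
// iff G has a vertex cover of size <= n - k (the complement of an
// independent set covers every edge). The conversion just rewrites
// the bound, so it is certainly polynomial.
Instance independent_set_to_vertex_cover(const Instance& is) {
    return {is.n, is.edges, is.n - is.k};
}

// Brute-force "solved problem A": does G have a vertex cover of size <= k?
// (Exponential, but fine for demonstrating that the answers agree.)
bool has_vertex_cover(const Instance& in) {
    for (int mask = 0; mask < (1 << in.n); ++mask) {
        int bits = 0;
        for (int m = mask; m; m >>= 1) bits += m & 1;  // popcount
        if (bits > in.k) continue;
        bool covers_all = true;
        for (auto [u, v] : in.edges)
            if (!((mask >> u) & 1) && !((mask >> v) & 1)) { covers_all = false; break; }
        if (covers_all) return true;
    }
    return false;
}

// Decision procedure for B, built from the black box plus the solver for A.
bool has_independent_set(const Instance& is) {
    return has_vertex_cover(independent_set_to_vertex_cover(is));
}
```

The "yes"/"no" answer survives the conversion untouched, which is the second black-box property in action.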

*****

This was, in short, a highly informal, very basic introduction to the P vs NP problem.
Further introductory studies might include:
  1. Deterministic vs Non-Deterministic models
  2. Examples of NPC problems: Clique, Vertex Cover, 3-CNF, Hamiltonian Path, Subset-Sum, etc.
Source: CLRS

Wednesday, 30 January 2013

Calculating Emotions



As we move into the age of robotics, and a world that is increasingly digital, it may be of interest to be able to calculate the intensity of an emotion. This is a challenging task: we have not yet developed an algorithm that can reliably predict what type of emotion a given set of circumstances will generate, let alone how strong that emotion will be.
One particular difficulty is that no human responds to a stimulus with a single emotion. A sentient response always consists of a group of emotions, with different intensities associated with them.

In this post, I will try to work out the relation between a set of circumstances and a given emotion.

Before moving forward, let me review the key terms:

  • Emotion: a state of feeling, or a state of mind, derived from present circumstances and resulting in behavioral changes. Examples: anger, love, etc.
  • Intensity (of emotion): how strongly a feeling is present. E.g. anger at high intensity becomes rage, while high sexual desire can be termed lust.
  • Stimulus: in simple words, the external circumstances to which a response must be given.

Our next job is to investigate the factors on which the intensity of an emotion depends:

Other Emotions
Any emotion depends on other emotions. The level of anger clearly depends on the level of disgust felt; the intensity of jealousy depends on both the intensity of love and the intensity of hatred. (Your jealousy depends on how much you love your partner and how much you hate whoever your partner is having an affair with. Of course, jealousy also depends on other factors such as pride, trust, etc.)

Thus, we get :
  1. Intensity of emotion depends on intensities of other emotions.
A basic equation is then :
IE = IE1 +  IE2 + ...

Trivially, the next point is that any contributing intensity may add to or diminish the intensity of the given emotion.
  2. An intensity may be positive, negative, or zero.
  • Emotions contributing positively are constructive, those with negative intensity are destructive, and those with zero (or near-zero) values are indifferent to that emotion.
Now,

IE = ∑ IEi

The range of i depends on us and on our database. In my opinion, there are three ways to apply this equation: first, for every emotion, map out certain constructive and destructive emotions and iterate over them; second, iterate over every emotion in the database; third, restrict ourselves to a set of only a few basic emotions and calculate with respect to those.
[A supplement, if needed, can be written describing advantages/disadvantages of each method]

  • Basic emotions: it is argued that any emotion can be constructed from only a few basic ones. While there is no universal agreement on which emotions are basic (or whether basic emotions exist at all), one set, proposed by Plutchik, is { joy, sadness; acceptance, disgust; anger, fear; surprise, anticipation }.
However,  we aren't quite done yet.

Each emotion affects other emotions to a different degree. Surely the impact of surprise is less than the impact of care when we are talking about love. A stimulus may generate a lot of surprise in you, but that high value of surprise should not swing the intensity of love much.
This means each emotion is perceived by every other emotion differently. Therefore this perception, or closeness, of one emotion to another must be factored into our equation.

The most logical way to do that is to multiply each intensity by its degree of closeness to the respective emotion.

IE = ∑ IEi*DEi

There are various ways the degree can be implemented. One way is to assign an absolute degree of 1: if a degree is one, that emotion is certain to impact the emotion under consideration. Further, the degree could be applied linearly or exponentially, as indicated by further research. (Linearly, an emotion with degree 0.6 has twice the impact of one with degree 0.3 on the given emotion; in an exponential implementation this need not be the case.)
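The equation IE = ∑ IEi*DEi above translates directly into code. A minimal sketch; the type and field names are purely my own illustrative choices:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// One contributing emotion: its felt intensity and its "degree of
// closeness" to the emotion we are computing. This is just the
// equation IE = sum(IEi * DEi) in code.
struct Contribution {
    double intensity;  // IEi: may be negative (destructive) or zero (indifferent)
    double degree;     // DEi: 0..1, where 1 means certain to impact
};

double emotion_intensity(const std::vector<Contribution>& inputs) {
    double total = 0.0;
    for (const auto& c : inputs)
        total += c.intensity * c.degree;   // weighted, signed sum
    return total;
}
```

With this shape, switching between the three iteration strategies above only changes which Contribution list you build, not the calculation itself.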

****************

This completes today's entry. In coming parts, we will look at other factors affecting an emotion, such as the type of person responding to the stimulus (under the same circumstances, two persons react differently), the type of relationship you share with the stimulus entities (the same person may react differently to different people), and more.

Wednesday, 22 August 2012

Aces in My Hand

INTRO

This blog is intended to be more of a write-up of my experiences while solving some famous problems. It will introduce new concepts I learned, as well as design methodologies.

In an effort to be simple and yet not leave anything out, the technical/mathematical parts will be clearly demarcated, so that anyone new to a concept can follow along. You can skip the demarcated section(s) whenever you wish.

I will repeat: this blog will try to introduce both new concepts (to some of you) and how to use them to solve problems.

In a nutshell, if you are enthusiastic about coding, or just stuck with coding and figuring out why in the world you would need it, this blog is for you. So don't hesitate to give me feedback. To the experienced coders out there, I need your support most of all. As these are my experiences, any input (mistakes, comments, ways to improve, things I overlooked, or - rarely - compliments) will be cherished.

*****************************************************************************


Life is like a game of cards. The hand that is dealt you is determinism; the way you play it is free will.
Jawaharlal Nehru

In this section, I will write about my experience designing a poker program; to be exact, Texas Hold 'em. For people unfamiliar with the rules: I will describe them as we go along, but in this section we deal with a short problem involving the basic data structure. How do you design a deck of cards? (If you are really into programming and like to think things through, try to visualize for half a minute how you would design the deck, then read on and compare.)

A caveat on methodology (again, those not interested in things like top-down vs bottom-up can skip to the next section):

Some of my experience tells me that instead of starting from the base, or bottom, I should start from the top, with function prototypes, and work down. While I am inclined to do that, I think the question of top-down vs bottom-up belongs to the algorithm stage; before that, I should get my data structures right. As it is said: "Smart data structures with dumb code are better than smart code with dumb data structures."

So, let's review what a deck of cards looks like:

  1. There are a total of 52 cards.
  2. The cards are divided into four suits: hearts, diamonds, clubs, and spades. In Texas Hold 'em, all suits are treated as equal.
  3. Each suit contains thirteen cards: 2,3,4,5,6,7,8,9,T,J,Q,K,A, where T=10, J=Jack(11), Q=Queen(12), K=King(13), A=Ace.
  4. An Ace can be treated as either 14 or 1, depending on the combination.

So, off the top of my head, the first thing is to treat the deck of cards as a two-dimensional array:

int deck_cards[13][4];
//suits can be treated as 0, 1, 2, 3: say clubs, diamonds, hearts and spades resp.
//Treat Ace as 0, 2 as 1, ... K as 12.

Now, this representation already gives a headache in terms of numbers. For example, the 7 of clubs becomes [6][0], and the Jack of hearts [10][2]. While we could get used to it with practice, a better solution might be to extend this to a 14x4 array.

int cards[14][4];
//Treat Ace-as-14 as index 0, Ace-as-1 as index 1, 2 as 2, ..., King as 13.

With this one memory trade-off, we achieve two objectives:
first, we have two values for the Ace; second, the index of the array represents the face value of the card (for 2 through King).

With this treatment, let's perform simple operations on the cards.
For example, say we have a hand of cards:
hand = { [6][0], [7][1], [3][1], [6][3], [12][2] }
We have to find whether there is a pair, i.e. whether any two cards in the hand have the same face value.

While writing this code is no superhuman task, we end up with two nested loops and a lot of unnecessary handling of the second (suit) dimension, which plays no role here. We would like a structure that lets us compare two cards as:

if(card1.value == card2.value)

But this is not possible while we deal with two dimensions. This gives rise to the idea of breaking the cards out of the two-dimensional array into simple one-dimensional data roofed under a single entity:

And that is the idea of structures, or classes. Here it is (in my opinion) better to go with a class, since all functions associated with a deck of cards can then be collected under a single roof (generating a new deck, shuffling, dealing out n cards, etc.).

So, a pretty basic definition of class card would look like:

class card
{
        public:
        char suit;      //Easier to read than an int :P
        int value;
};

Now we have to initialize the deck. It isn't hard, but it's not very pretty with the basic definition you (and I) started out with:

        char suit[] = {'c', 'd', 'h', 's'};
        int value[] = {2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14};
        
        card new_deck[52];
        int k =0;

        for(int j=0; j<4; ++j)
                for(int i=0; i<13; ++i)
                {
                        new_deck[k].value = value[i];
                        new_deck[k].suit = suit[j];
                        ++k;
                }

It could be easily checked that this definition works by simply printing all the cards:

for(k = 0; k < 52; ++k)
        cout << new_deck[k].value << '\t' << new_deck[k].suit << endl;

So you might think we are ready with our deck of cards, and it is time to see how it responds to some basic tests. Some tests might be:

  • Getting n random cards, which (obviously) must be distinct.
  • Given n random cards, checking whether there is a pair among them.
  • Shuffling the deck.
  • Given n cards, arranging them (i) suit-wise, (ii) value-wise.
Being lazy, I will start with the simplest test: given some cards, can I check whether they contain a pair? (You might, of course, try the tests above, and more; if you do, be sure to leave your code, or at least your method, in the comments or mail me.)
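Of those tests, shuffling at least is nearly free in modern C++. A sketch, assuming the same card class as above (repeated here so the snippet stands alone), using std::shuffle from <algorithm>:

```cpp
#include <algorithm>
#include <cassert>
#include <random>
#include <vector>

class card
{
public:
    char suit;
    int value;
};

// Build the 52-card deck, then shuffle it with the standard library.
// std::shuffle plus a seeded engine is the modern replacement for
// old rand()-based shuffling tricks.
std::vector<card> shuffled_deck(unsigned seed) {
    const char suits[] = {'c', 'd', 'h', 's'};
    std::vector<card> deck;
    for (char s : suits)
        for (int v = 2; v <= 14; ++v)      // 2..14, Ace high as 14
            deck.push_back({s, v});
    std::mt19937 rng(seed);                // reproducible for a given seed
    std::shuffle(deck.begin(), deck.end(), rng);
    return deck;
}
```

Same seed, same shuffle, which is handy when you want to replay a hand while debugging.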

Now, C/C++ is fairly low-level (well, maybe not as low-level as Assembly), so it will not give you simple built-in functions for this.

In other languages, such as Ruby, these are extremely easy operations:

array.shuffle    # returns a shuffled copy
array.uniq       # returns a copy with repeated values removed

So the pair check in Ruby would be:

array == array.uniq    # false means there was a duplicate, i.e. a pair

Simple, isn't it?

But let's see if we can write this when the language doesn't support it.

So, let's look at a concept that was new to me: the vector. We don't really need it, but it is better than an array (it is a kind of dynamic array) and, in this context, it simplifies things a little.

So, if you are interested, read this section:
Vectors are like dynamic arrays: they are stored in contiguous memory locations, which makes retrieving an element easy. Moreover, their storage is handled automatically, so they can grow or shrink with little effort on the programmer's part.

Their syntax is:
vector<data-type> var_name;
For example:
vector<int> cards;   //Create Integer vector

Or in our case
vector<card> deck;  //Card was name of our class, in case you forgot :P

Now, why should I use vectors, apart from the fact that they are easier to work with than arrays?

In our problem this advantage doesn't even count, since we are dealing with fixed amounts of memory; but vectors in C++ come with lots of built-in functions you can exploit, making them better than plain arrays.
Some of these functions are:
  1. Iterator functions (begin, end, rbegin, rend): return an iterator (think of the loop variable, like "i" in a for loop) to the start or end of the vector.
  2. size: returns the number of elements.
  3. empty: returns whether the vector is empty.
  4. front, back: access the first/last element.
  5. push_back, pop_back: add/remove an element at the end.
Our little initialization code changes to the following with a vector (if you are skipping vectors, keep skipping; we will come to the test cases soon):

int main() {
    vector<card> deck;
    char suit[] = {'h', 'd', 'c', 's'};
    int value[] = {2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14};
    for (int j = 0; j < 4; j++)          // four suits
        for (int i = 0; i < 13; i++)     // thirteen values
            deck.push_back(card{suit[j], value[i]});   // members in order: suit, then value
    return 0;
}
Coming to our test of how to compare two cards: I will give a crude version here, since our original aim was just to build the deck. If I write further on this topic, I am pretty sure I will have a better way by then. (I may move to maps instead of vectors, but that is a future discussion.)

Checking whether two cards in a given hand have the same value:
(for our ease, let's assume that either exactly two cards are equal or all are distinct)


        bool flag = false;
        vector<card>::iterator i;
        vector<card>::iterator j;
        for(i = hand.begin(); i != hand.end(); ++i)
        {
                for(j = i + 1; j != hand.end(); ++j)
                        if(j->value == i->value)
                        {
                                cout << j->value;
                                flag = true;
                                break;
                        }

                if(flag == true)
                        break;
        }

A few things here may need explanation, though everything is figure-out-able. Iterators are simply loop counters, with the property that they can be assigned values like begin(), end(), etc.

hand is a vector of card from which we want to compare values.

Of course, the algorithm above takes O(n^2) time and hence is inefficient. As I said, better algorithms later.

Now, the methods I discussed above are by no stretch the only ways to describe a deck of cards. As I fleetingly mentioned, we could use maps, and those familiar with enumerations will make a case for them too.

So, in the end I will just leave you with one very clever method (by benjismith of Stack Overflow), which makes tasks like counting pairs very easy, can crunch huge numbers, and could basically be used in AI software.
Plus, this is one place where C/C++ really shines:

It makes use of a BIT-WISE representation.


Each card is represented in 2 bytes, or 16 bits. The thirteen higher-order bits represent the number (it is really clever not to encode 2 as 10, 3 as 11, etc., but to use thirteen one-hot bits for the thirteen numbers, so that at any time exactly one face bit is set).
The two lowest bits represent the suit: 00, 01, 10, 11.
The remaining bit, bit[2], denotes whether an Ace is used with value 1 or 14: 0 represents 14, and 1 represents 1.
Here are a few examples:
0000000000001001  <----  Two of hearts
0100000000000011  <----  King of spades
1000000000000010  <----  Ace of diamonds
^^^^^^^^^^^^^            ("face-value" bits)
             ^           ("low-ace" flag)
              ^^         ("suit" bits)




Now, of course, it is really easy to find pairs in this representation.

Take the first thirteen bits of two cards and apply a bit-wise AND.
If the result is greater than zero, the two cards have the same face value, i.e. we have a pair; if the result is zero, they don't match.
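A sketch of how this might look in code. The exact bit order is my own reconstruction of the layout described above, so take the constants as illustrative:

```cpp
#include <cassert>
#include <cstdint>

// One card in 16 bits, following the layout described above:
// bits 15..3 = one-hot face value (bit 3 = "2" ... bit 15 = "Ace"),
// bit 2 = low-ace flag (0 here), bits 1..0 = suit
// (00 = clubs, 01 = hearts, 10 = diamonds, 11 = spades).
uint16_t make_card(int face /*2..14*/, int suit /*0..3*/) {
    return (uint16_t)((1u << (face + 1)) | suit);  // face 2 -> bit 3, face 14 -> bit 15
}

// Two cards form a pair iff their one-hot face bits overlap:
// mask off the flag/suit bits and AND the rest.
bool is_pair(uint16_t a, uint16_t b) {
    const uint16_t FACE_MASK = 0xFFF8;  // the top thirteen bits
    return (a & b & FACE_MASK) != 0;
}
```

With this encoding, make_card(2, 1) comes out as 0000000000001001, matching the "two of hearts" example above.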

And since bit-wise operations are amazingly fast, this representation can perform millions of operations per second, making it ideal for an AI player.


So I finish my analysis of the different ways a deck of cards can be built, with a little thought about the operations we might perform on it.
I would be really interested in hearing whether you implemented a different kind of structure, and how it went. And again, if you find any mistake (or any improvement), please tell me.

************************************************************************

More Mind-Fu
The famous Two Envelopes Paradox.

If you haven't heard of it, it may come as a surprise:

You are given two indistinguishable envelopes, one containing x money and the other 2x, i.e. double the first. You are allowed to choose one envelope and claim the money, so you pick one at random.
You are now given the choice to either keep your envelope or switch.
Your line of reasoning might be:

1. Let my envelope contain A money; then the other envelope contains either 2A or A/2.
2. It is equally probable that it contains double the money or half.
3. So the expected money in the other envelope = 0.5*2A + 0.5*0.5A = 1.25A.
4. Since the other envelope contains more money in expectation (1.25A > A), we switch.
5. But then we can apply the same logic to the other envelope, causing us to switch back.
6. By this reasoning, there will be infinitely many switches.

The problem is to find the flaw in this reasoning.
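(If you want to convince yourself there must be a flaw, without spoiling where it is: simulate it. The amounts and trial count below are arbitrary choices of mine.)

```cpp
#include <cassert>
#include <cmath>
#include <random>

// Monte Carlo: the envelopes hold x and 2x. "Keep" takes the first
// pick; "swap" always switches. If step 4 were right, switching would
// average 1.25 times more, but both strategies average the same 1.5x.
struct Averages { double keep, swap; };

Averages simulate(double x, int trials, unsigned seed) {
    std::mt19937 rng(seed);
    std::bernoulli_distribution coin(0.5);
    double keep_total = 0, swap_total = 0;
    for (int t = 0; t < trials; ++t) {
        double first = coin(rng) ? x : 2 * x;   // random envelope
        double other = (first == x) ? 2 * x : x;
        keep_total += first;
        swap_total += other;
    }
    return {keep_total / trials, swap_total / trials};
}
```

Both averages come out around 1.5x, so something in steps 1 to 4 cannot be right. Finding which step is the puzzle.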