To sit or 2-SAT

Yeah yeah yeah – lame title. But I just can’t help it😉

Here we are going to discuss satisfiability (or SAT), and especially 2-SAT based problems. SAT problems ask whether boolean formulae of the following kind are satisfiable.

\neg(x \vee \neg z \vee (\neg w \vee x)) \vee (x \wedge \neg y)

However, since the above is too general to represent with a data structure, we typically reduce such formulae into Conjunctive Normal Form (CNF), as shown –

(\ldots \vee \ldots \vee \ldots \vee \ldots)\wedge \ldots \wedge(\ldots \vee \ldots \vee \ldots \vee \ldots)

Read here if you want to know how to convert any formula to CNF. Each sub-formula joined by \wedge is called a clause, and each variable or its negation is a literal. We know that satisfiability of a CNF whose clauses have more than 2 literals (3-SAT, 4-SAT, etc.) is NP-complete. We will only explore the satisfiability of 2-CNF. An example 2-CNF is (\neg x \vee y) \wedge ( y \vee z) \wedge (x \vee \neg z) \wedge (z \vee \neg y)

Solving 2-SAT

2-SAT problems are decidable in polynomial time. The problem can be modelled as a graph problem: we create two vertices for each variable, one representing x and the other \neg x .

Now we add an edge from x to y iff we have a clause of the form (\neg x \vee y)

Hence every 2-SAT clause contributes two edges. Since we require a clause of the form (\neg x \vee y) to create an edge, we rewrite (x \vee y) as (\neg (\neg x) \vee y) and draw one edge from \neg x to y and another from \neg y to x .

Why?

The edges represent an ‘if-then’ relationship. A clause like (x \vee y) can be re-written as –

if NOT x then y
if NOT y then x

This is precisely what we represent when we add an edge. We add an edge for each of the IF conditions.

Taking (\neg x \vee y) \wedge ( y \vee z) \wedge (x \vee \neg z) \wedge (z \vee \neg y) as an example, here is how the edges can be drawn out.


To figure out if the given 2-CNF is satisfiable, we just need to check, for each variable, whether there is a path from \neg x to x and also from x back to \neg x (similarly for y and z). You can do this via a DFS or a BFS. A cleaner way is to check strongly connected components: if x and \neg x (similarly for y and z) lie in the same strongly connected component, the formula isn't satisfiable.

One other way (and quicker to code) is to use Floyd-Warshall to compute the transitive closure. It is quick to write and easy to remember.
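Here is a minimal C++ sketch of the Floyd-Warshall route (my own illustration, not tied to any particular problem): each variable v gets two vertices, 2v for v and 2v+1 for NOT v, so XOR-ing a vertex id with 1 negates the literal.

#include <cstdio>

const int MAXV = 2 * 26;        // room for up to 26 variables (arbitrary cap)
bool reach[MAXV][MAXV];         // adjacency, then reachability, matrix
int n;                          // number of variables

// clause (a OR b), a and b being vertex ids: edges NOT a -> b and NOT b -> a
void add_clause(int a, int b) {
    reach[a ^ 1][b] = true;
    reach[b ^ 1][a] = true;
}

bool satisfiable() {
    int V = 2 * n;
    // Floyd-Warshall reduced to transitive closure
    for (int k = 0; k < V; k++)
        for (int i = 0; i < V; i++)
            for (int j = 0; j < V; j++)
                if (reach[i][k] && reach[k][j]) reach[i][j] = true;
    // unsatisfiable iff some x and NOT x can reach each other
    for (int v = 0; v < n; v++)
        if (reach[2*v][2*v+1] && reach[2*v+1][2*v]) return false;
    return true;
}

int main() {
    n = 3;                      // x=0, y=1, z=2, using the example 2-CNF above
    add_clause(0*2+1, 1*2);     // (NOT x OR y)
    add_clause(1*2,   2*2);     // (y OR z)
    add_clause(0*2,   2*2+1);   // (x OR NOT z)
    add_clause(2*2,   1*2+1);   // (z OR NOT y)
    printf("%s\n", satisfiable() ? "satisfiable" : "unsatisfiable");
    return 0;                   // prints "satisfiable" (x=y=z=true works)
}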

Here is a problem for practice: SRM 464 – Div 1 – 550.

The trick is to build a 2-CNF where we add a clause every time we cannot pick a pair of vertices (A, B) together. The clause takes the form NOT (A AND B) – or, in CNF form, (NOT A OR NOT B).

I hope you found this useful!


Greatest Hits – Side A

I didn’t realize that almost a year has gone by without my updating this blog. With a lot of free time on my hands now, it’s time to conjure up another post! I have wanted to write this one for some time, but I just hadn’t found the strength to punch the keys (and trust me, it takes quite a lot of effort).

Like I mentioned, I have a lot of free time now, and I decided to go back to doing something I always loved – solving problems on Topcoder. So I started the Arena, logged in (I was surprised I could still remember my password) and went straight to SRM 411 Div 2. It was a problem set I had already solved, and I wanted my first practice contest in almost 10 months to be easy. Well – it wasn’t! I thought I had coded the 250 right, but the tests revealed that I had missed the easiest edge cases. It took me a lot of thinking and a little searching to finally figure out how to do the 600, and the 900 was a bit easy on the mind but impossible to code! Compared against my performance the last time around, the gap was like that between me and Petr on a live SRM!

I am not going to share anything related to that experience though. Just to encourage myself, I am going to look into the eternal abyss (actually my memory – things fall in rather quickly but never quite make it back when I need them!) and share some problems (in a never-ending saga) that I was able to solve during a live contest but which many of my more established peers could not (or so I would like to think)!

TCO 2012 Round 2A – 300:

[Problem Statement] [My Submission]

Don’t be fooled just because it’s a 300-point problem. This problem saw over 800 submissions but only 38% passed! Also, this is Round 2, so there are hardly any rookies left, and I remember a large number of red coders failing the system test. Maybe because it was a 300-point problem, people took it lightly. But I knew I could only solve this one, since the 450 and 1000 were a bit on the harder side for me!

The only reason I was able to solve this problem was that I had just learnt how to solve problems involving bipartite matching (just the easy ones, though). And when you have learnt a new technique, suddenly every problem fits the bill and you can see a bipartite graph everywhere. Fortunately, it was true for this problem.

The first thing to notice was that if the graph was not bipartite, no solution was possible, as the system would be inconsistent. Also, if no matching was possible on this graph for some vertex, then again the system would be inconsistent (notice that if one switch does not have an associated lamp, then either one switch is connected to two lamps or there is a lamp without a switch). However, this turned out to be the easy part!

I tried a lot of different simple techniques to figure out the number of experiments I would need, but everything had one flaw or another. It was in the dying minutes of the contest that I jumped up with a ‘Eureka’! What I realized was: say there is a set of switches A which can be mapped to a set of lamps B (note: n(A) always equals n(B)). Then the number of experiments needed is ceil(log2(n(A))).

Why? Let’s say the system had 4 switches and lamps. If I switch on 2 and switch off 2, then in one experiment I would have narrowed 2 switches down to their respective lamps. Now I have two groups of 2 each. Notice that they are independent of each other (experimenting on one does not affect the result of the other – which means that in one experiment I can set the states of switches in every group at once and learn something about all of them!). If I repeat this once more, then in 2 experiments I have the answer for these 4 switches and lamps. The independence of experiments means that if you identify all such groups and take the largest number of experiments needed by any one group, that is your final answer.
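Here is a hedged sketch of just that last step. The group sizes come out of the bipartite matching (not shown), so the sizes below are made up for illustration:

#include <cstdio>
#include <algorithm>

// experiments needed for one independent group: smallest e with 2^e >= size
int experiments_needed(int groupSize) {
    int e = 0;
    while ((1 << e) < groupSize) e++;
    return e;
}

int main() {
    int sizes[] = {4, 3, 1};    // hypothetical group sizes from the matching
    int ans = 0;
    for (int s : sizes) ans = std::max(ans, experiments_needed(s));
    printf("%d\n", ans);        // 2 - groups are probed in parallel, so the max wins
    return 0;
}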

Google Code Jam 2011 Round 2 C. Expensive Dinner

[Problem Statement] [My Solution]

This problem is a source of both happiness and sorrow. Happiness, because I figured out how to solve it; sorrow, because I got lazy implementing it and the hard cases broke my code. If I hadn’t been lazy, I would have qualified for Round 3, which would have been a moment of great pride. I did end up in the top 1000, for which they sent me a nice GCJ t-shirt – but alas, the wrong size!

When is the waiter called? Every time the LCM of the first K numbers is not equal to the LCM of the first K-1 numbers. This is the most important observation.
The problem statement asks us to find two permutations A and B of the first N numbers, such that A results in the smallest number of waiter calls and B in the largest. Just by looking at the test cases you can figure out that the largest count comes from keeping the numbers in sorted order. But what is this value? It turns out to be one call for the number 1 plus, for each prime p ≤ N, the largest exponent k such that p^k ≤ N.

How? Consider the first 10 numbers. When 1 comes in, he will call. So will 2, and so will 3; at this point the LCM is 6. But when 4 comes in, he needs to multiply the LCM by 2, so even he has to call. So will 5; but when 6 comes in, he does not have to, since 2 and 3 have already taken care of his needs! 7 will again call, and so will 8 (since we need another factor of 2) and 9 (a factor of 3 this time). But when 10 comes in, he will not need to. So the total number of calls here is 1 (for the number 1) + 3 (for 2^3) + 2 (for 3^2) + 1 (for 5^1) + 1 (for 7^1) = 8.

What is the smallest such count? Simple: if the largest power of a prime comes first, it ensures that no other power of that prime ever needs to call. In essence, one call for 1 plus one per prime less than or equal to N. But instead of computing these separately, you can get both in one loop, by reducing each prime’s contribution from the explanation above down to 1.

Just to point out where I blundered: when calculating the largest power, I used the log function. While mathematically there is nothing wrong with that, in practice log carries a floating-point error which gets greatly magnified for larger numbers. Not that I did not know this when I was implementing it – I just got bloody lazy!
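For the record, here is a sketch of the max-calls count done the non-lazy way: the largest exponent is found by repeated integer multiplication instead of a floating-point log. (A plain sieve, so this only covers the small input; the large one needs a cleverer prime walk.)

#include <cstdio>
#include <vector>

long long maxCalls(long long N) {
    if (N < 1) return 0;
    std::vector<bool> composite(N + 1, false);
    long long calls = 1;                       // the number 1 always calls
    for (long long p = 2; p <= N; p++) {
        if (composite[p]) continue;            // p is prime
        for (long long q = p * p; q <= N; q += p) composite[q] = true;
        long long pk = p, k = 0;
        while (pk <= N) { k++; pk *= p; }      // largest k with p^k <= N
        calls += k;
    }
    return calls;
}

int main() {
    printf("%lld\n", maxCalls(10));            // 8, matching the walk-through above
    return 0;
}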

Any hoot, we all learn from mistakes and so did I!
Do try solving these problems on your own and let me know how that went!

I almost forgot – to my ardent fans – A Very Happy New Year! That’s it for now!


Mystic Regex

You are a dev and have been so for more than a year now (or maybe more, or less). Your editor of choice has always been emacs/vim. Your mode of operation starts with a sip of darker-than-coal coffee and ends with making your keyboard your pillow. You are an expert at typing and programming, but the one weapon that has been missing from your arsenal is writing/understanding something like this –

$out =~ s/(^[a-zA-Z0-9]+)\.([a-z]+)/<a href="\1.\2">\1<\/a>/g;

What does it do? I hope you will be able to tell me after reading through this.
This post is to add that weapon to your arsenal – an intercontinental ballistic missile of programming, or what others call – regex!

The Basics

Literals

This is just plain text. If I need to match cat in Bell the cat, I would just use cat as a regex!

Regex Special Characters

The following characters – []{}().+*\|^$ – are special in regex. If you need to use them as literals, you must escape them by preceding them with \, e.g. \{. Now, what do these do –

[] – Character class: [abcd] matches anything that is one of a, b, c or d; [^abcd] matches anything that is none of a, b, c or d.
. – Dot: matches any single character except \n.
* – Star: matches zero or more repetitions of the preceding token. So .*cat will match pussycat and also cat.
+ – Plus: matches one or more repetitions of the preceding token. So .+cat will match pussycat but not cat.
| – Alternation: works like an ‘or’ in a regex. Say you want to match dog in My dogs name is Tiger but also cat in My cats name is puff. These are almost identical strings, so your regex would be My (cats|dogs) name is .* (the parentheses keep the alternation from swallowing the rest of the pattern).
{} – Limited repetition: use this when you know how many times a pattern should repeat, or its minimum and maximum. For example, [0-9]{2,5} matches any 2- to 5-digit number (with leading zeros). If you want only 2-digit numbers – [0-9]{2}; if you want at least 2 digits – [0-9]{2,} (note the comma).
$ – End-of-line anchor: if your regex ends with this character, you are saying ‘the pattern must occur at the end of the line’. In I am What I am, searching with am matches both, but am$ matches only the last one.
^ – Start-of-line anchor: if your regex starts with this character, you are saying ‘the pattern must occur at the start of the line’. In I am What I am, searching with I matches both, but ^I matches only the first one.
^$ – Together: using both in the same regex means the line must contain exactly the pattern. For example, to pull a 10-character alphanumeric key sitting alone on a line of a large file – ^[0-9A-Za-z]{10}$

The Advanced

Now that you have a basic grasp of regex writing, it is time to learn some more advanced stuff.

Grouping & Back Referencing

If you are looking to group a pattern so that another operation can be applied to it, you use () – e.g. (ash)+ will match ash and ashash but not ashas. But that is a very primitive usage. The more powerful usage is backreferencing. When you put a pattern inside (), you tell the regex engine to store the match internally so that you can access it later. To use a matched pattern as a pattern again, use \ followed by a number giving the backreference’s sequence: \1 means whatever was matched by the first set of parentheses. For example, to write a regex which matches HTML opening and closing tag pairs, you can use

<([A-Z][A-Z0-9]*)\b[^>]*>(.*?)</\1>

A language like Perl also lets you use the backreferences after a match. In the above example, $1 gives you the tag name and $2 the content inside it (since the second set of () captures the HTML inside the tag).
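If Perl is not your language, C++11’s <regex> understands the same ECMAScript-style syntax – backreferences, \b and lazy quantifiers included. A small sketch (note the doubled backslashes inside a C++ string literal):

#include <cstdio>
#include <regex>
#include <string>

int main() {
    // the tag-matching regex from above
    std::regex tag("<([A-Z][A-Z0-9]*)\\b[^>]*>(.*?)</\\1>");
    std::string input = "This is my <TAG>first</TAG> test.";
    std::smatch m;
    if (std::regex_search(input, m, tag))
        std::printf("tag: %s, inside: %s\n",
                    m[1].str().c_str(),   // the backreferenced tag name
                    m[2].str().c_str());  // the content between the tags
    return 0;                             // prints: tag: TAG, inside: first
}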

Optional Items and Regex Greediness

Suppose you want to use a regex to match an HTML tag, assuming your input is a well-formed HTML file.

You would think that <.+> will solve it easily. But be surprised when you test it on a string like This is my <TAG>first</TAG> test. You might expect the regex to match <TAG> and, when continuing after that match, </TAG>.

But it does not. The regex matches <TAG>first</TAG> – not what we wanted. The reason is that the plus is greedy: it causes the regex engine to repeat the preceding token as often as possible. Only if that causes the entire regex to fail will the engine backtrack – go back to the plus, make it give up its last iteration, and proceed with the remainder of the regex. To avoid such pitfalls, append ? to the quantifier to make it lazy. You can see this in the tag regex above, where (.*?) stops at the first closing tag.

? can also mark an item as optional. If you want to match February but also Feb, you can use Feb(ruary)?

Performance of Regex

In one word – better. The regex engine will perform better than anything you or I can write to match a pattern, unless you write your own regex engine – and even then, the standard regex will probably beat you to it! Also, the simpler your regex, the faster it runs (obviously); using backreferences will slow it down. A very simple example is grep: the utility only allows simple regex features and tends to be faster than egrep, which allows much more advanced stuff – but at a price!

Now, after going through all this, I hope you can answer what the first regex I introduced you to does! It’s in Perl, and s/<PATTERN>/<REPLACE>/g replaces <PATTERN> with <REPLACE> globally.
I hope you are now able to add regex to your programming arsenal and that this post has helped you understand it. For more info, you can always google😉


Linear Recurrences

How often has it happened to you in a programming contest (or elsewhere) that you thought it was impossible to solve a problem faster than O(N), and yet the imposed limits suggest it has to be done faster? Well, if not all, at least a majority of such problems have a solution based on the idea of linear recurrences. In this blog post, I intend to help you out on this!

In this post, we are going to follow a solve-and-learn strategy: you will be given a question, and I will show you how to apply the concepts to it.

TYPE 1 :: The Simplest :

If a post mentions recurrences, then it has to mention Fibonacci (Gosh, if only I had a penny for every mention of Fibo in tutorials. )

The recurrence is of type : F(n) = F(n-1) + F(n-2).

I am pretty sure you know how to code the linear version, which runs in O(N) – but can you do it in O(log N)? If you put google to good use, you will come up with a solution which says there is a matrix M which, when raised to the power N, will give you the N-th Fibonacci number. And since you can always exponentiate in O(log N) time, you have your answer. But to those who wondered whether this matrix is magical – read on!

Firstly, the answer – no, it’s not magical. How? Let’s do a little algebra (yumm… my favourite!)
F(n)=F(n-1)+F(n-2)\\ \\ F(n+1) =F(n)+F(n-1)\\ \\ F(n+2)=F(n+1)+F(n)

Obviously enough, the value of the N-th term depends on the two previous terms (or states). This implies that all values depend on just the first two states of the sequence, as you can see here –

\begin{pmatrix}F(n+2)\\ F(n+1)\end{pmatrix}=\begin{pmatrix}1&1\\ 1&0\\ \end{pmatrix}\times\begin{pmatrix}F(n+1)\\ F(n)\end{pmatrix}\\ \\ and\\ \\ \begin{pmatrix}F(n+1)\\ F(n)\end{pmatrix}=\begin{pmatrix}1&1\\ 1&0\\ \end{pmatrix} \times \begin{pmatrix}F(n)\\ F(n-1)\end{pmatrix} \\ \\ Hence \\ \\ \begin{pmatrix}F(n+2)\\ F(n+1)\end{pmatrix}=\begin{pmatrix}1&1\\ 1&0\\ \end{pmatrix} ^2 \times \begin{pmatrix}F(n)\\ F(n-1)\end{pmatrix} \\ \\ \begin{pmatrix}F(n+2)\\ F(n+1)\end{pmatrix}=\begin{pmatrix}1&1\\ 1&0\\ \end{pmatrix}^3 \times \begin{pmatrix}F(n-1)\\ F(n-2)\end{pmatrix}

Hence, in general, we may write:
\begin{pmatrix}F(n)\\ F(n-1)\end{pmatrix}=\begin{pmatrix}1&1\\ 1&0\\ \end{pmatrix}^{n-1} \times \begin{pmatrix}1\\ 0\end{pmatrix}

I hope that has helped you understand how to frame such equations and solve them with a matrix.
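To make all this concrete, here is a minimal C++ sketch of the whole pipeline: fast exponentiation of the 2x2 matrix above. I am computing modulo 10^9+7 (a typical contest modulus and my own addition – drop it if you want exact small values):

#include <cstdio>
typedef long long ll;
const ll MOD = 1000000007;

// r = a * b for 2x2 matrices (mod MOD); the temporary makes r=a or r=b safe
void mul(ll r[2][2], const ll a[2][2], const ll b[2][2]) {
    ll t[2][2] = {{0,0},{0,0}};
    for (int i = 0; i < 2; i++)
        for (int k = 0; k < 2; k++)
            for (int j = 0; j < 2; j++)
                t[i][j] = (t[i][j] + a[i][k] * b[k][j]) % MOD;
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 2; j++) r[i][j] = t[i][j];
}

// (F(n), F(n-1))^T = M^(n-1) * (1, 0)^T, so F(n) is the top-left entry
// of M^(n-1); exponentiate by squaring in O(log n) multiplications
ll fib(ll n) {
    if (n == 0) return 0;
    ll res[2][2] = {{1,0},{0,1}};   // identity
    ll base[2][2] = {{1,1},{1,0}};  // the matrix M from above
    for (ll e = n - 1; e > 0; e >>= 1) {
        if (e & 1) mul(res, res, base);
        mul(base, base, base);
    }
    return res[0][0];
}

int main() {
    for (int i = 1; i <= 10; i++) printf("%lld ", fib(i));
    // prints: 1 1 2 3 5 8 13 21 34 55
    return 0;
}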

TYPE 2 : Simplest ++

Now that we have a basic understanding, try the following recurrence:

F(n) = F(n-1) + F(n-2) + F(n-3).

It is the same as the previous recurrence but with an additional state. I won’t go on explaining the hows (again!) – I am just going to share the solution.
\begin{pmatrix}F(n)\\ F(n-1)\\ F(n-2) \end{pmatrix}=\begin{pmatrix}1&1&1\\ 1&0&0\\ 0&1&0 \end{pmatrix}^{n-2} \times \begin{pmatrix}2\\ 1\\ 1\end{pmatrix}

TYPE 3: Simplest << 1

Consider the following scenario ::

G(n) = a . G(n-1) + b . G(n-2) + c . H(n)\\ \\ and \\ \\ H(n)= d . H(n-1) + e . H(n-2)

This one is a lot trickier. The first thing to notice is that we will need 4 states in the matrix to fully define the next state. The reason for using 4 and not 3 is that H(n) depends on 2 of its own previous states, and thus we need 2 states (and not just 1) to represent it.

If you carefully write down the LHS and RHS matrices, then we can frame the solution as …

\begin{pmatrix}G(n)\\ G(n-1)\\ H(n+1)\\ H(n) \end{pmatrix}=\begin{pmatrix}a&b&c&0\\ 1&0&0&0\\ 0&0&d&e\\ 0&0&1&0 \end{pmatrix}^{n-1} \times \begin{pmatrix}G(1)\\ G(0)\\ H(2)\\ H(1)\end{pmatrix}

TYPE 4 : Ohhh !

The final hurdle can come in the form of a constant. If we add a constant C to the above recurrence, we get –

G(n) = a . G(n-1) + b . G(n-2) + c . H(n) + C\\ \\ and \\ \\ H(n)= d . H(n-1) + e . H(n-2)

But to tell you the truth, it’s not that difficult if your concepts are clear. There is simply one additional state to hold the information about C. The solution will look like –

\begin{pmatrix}G(n)\\ G(n-1)\\ H(n+1)\\ H(n)\\ C \end{pmatrix}=\begin{pmatrix}a&b&c&0&1\\ 1&0&0&0&0\\ 0&0&d&e&0\\ 0&0&1&0&0\\ 0&0&0&0&1 \end{pmatrix}^{n-1} \times \begin{pmatrix}G(1)\\ G(0)\\ H(2)\\ H(1)\\ C\end{pmatrix}

I hope this post lived up to your expectations and that it was worth the wait😛. Please feel free to post comments/corrections/improvements to make it really useful.


Return to Roots: Tree 101

What is a Tree :

A tree is a hierarchical arrangement of nodes. From the literal meaning of tree we know that it has a root, branches, fruits and leaves. Well, in algorithms too, we have a root – the origin of the tree. We have branches, which connect to smaller trees, and we have leaves, which have no outgoing branches. And as far as the fruits are concerned – depending on the complexity of the operations that can be performed, we may label the fruits as sweet or sour!

The simplest tree would be one where each node branches to exactly one other node – in other words, a singly linked list. If every node branches to its child and also back to its parent, we have a doubly linked list. But in this post we are not going to discuss these.

The next level of trees is where a single node may branch out to a maximum of two other nodes. Such a tree is called a binary tree. Binary trees are some of the most widely used data structures in computing, and we are going to discuss them in a series of posts. So let’s begin.

One of the most important things to do is: create a tree.
So what do we need to create one? We will need to represent the nodes and the links between them. Since each node connects to a maximum of two other nodes, we will have two branches; we shall call these branches left and right. Each node will also store some data in it. Our tree will be used to store just integers.

We will use the following structure to create it. FYI, everything here is in C++ and not C.

struct NODE {
    int data;
    NODE *left;
    NODE *right;
};

Now, whenever we need to insert a node, we need to make sure that there is a fixed position at which the node will be inserted, given its value (the data in the node). Let us follow a simple strategy.
We will insert a node to the left of a parent node if its value is less than the value of the parent, and otherwise to the right. Binary trees which use such a strategy are called Binary Search Trees.

The obvious advantage of such a strategy is that we can search for elements in the tree in O(h) time, where h is the height of the tree. Do note that, in general, h does not equal log N. If we could actually keep the height at log N, we would call such trees Balanced Binary Search Trees.

Alright then, let’s get our hands dirty with code that will create the tree for us. The function insert takes as input the root of the tree and the value to be inserted, and returns the node which contains the data.

NODE * insert(NODE *root, int data) {
    if(root==NULL) {
        // empty tree: the new node becomes the root
        root=new NODE;
        root->left=root->right=NULL;
        root->data=data;
        return root;
    }
    else {
        // walk down to the leaf under which the new node belongs
        while(root!=NULL) {
            if(root->data>data) {
                if(root->left!=NULL) root=root->left;
                else break;
            }
            else {
                if(root->right!=NULL) root=root->right;
                else break;
            }
        }
        NODE *new_node=new NODE;
        new_node->data=data;
        new_node->left=new_node->right=NULL;
        // attach on the side dictated by the BST property
        if(root->data > data) {
            root->left=new_node;
        }
        else root->right=new_node;
        return new_node;
    }
}

Another very useful and important property of this strategy is that the INORDER traversal is sorted!

Let’s back up a bit. What are traversals? Think of visiting many homes using the roads which connect them – only here the homes are the NODEs and the roads are the links between them.

There are many traversals, but the three used most often are PreOrder, InOrder and PostOrder.

In PreOrder, you print the current node and then visit its left and then its right child, recursively.
In InOrder, you first visit the left child; once you have returned, you print the current value and then visit the right child.
In PostOrder, you visit both your children and then print the current value.

Here is the code snippet for the InOrder traversal (recursive version).

void inorder(NODE *root) {
    if(root!=NULL) {
        inorder(root->left);
        printf("%d ",root->data);
        inorder(root->right);
    }
}
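For completeness, the other two traversals differ only in where the print statement sits:

void preorder(NODE *root) {
    if(root!=NULL) {
        printf("%d ",root->data);   // print before visiting the children
        preorder(root->left);
        preorder(root->right);
    }
}

void postorder(NODE *root) {
    if(root!=NULL) {
        postorder(root->left);
        postorder(root->right);
        printf("%d ",root->data);   // print after visiting the children
    }
}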

You could write an iterative version, where you simulate the operations of the system stack using your own stack. The obvious advantage is that you would save space (your stack pushes only node pointers, rather than everything the system pushes for a function call).

However, there exists a really beautiful iterative version which does not use a stack at all. It only assumes that two pointers can be checked for equality. It is based on threaded trees and was first described in 1979 by Morris – hence the name!

How does it work?

The only reason we need a stack is so that we can RETURN from child nodes to parent nodes. And this return is really needed from only one node. Consider a 5-node tree.

                                      20
                                    /     \
                                   /       \
                                 10        30
                                /   \     
                               /     \
                             5       15

Now our stack would work like this.

1. Push 20.
2. Push 10.
3. Push 5.
4. Pop 5 and print 5.
5. Pop 10 and print 10.
6. Push 15.
7. Pop 15 and print 15.
8. Pop 20 and print 20.
9. Push 30.
10. Pop 30 and print 30.

If I write a non-recursive, non-stack version, my greatest headache would be getting to 20 from 15 (steps 7–8). So we need to link 15 and 20, so that we can go to 20 without problems. But that would mean we are modifying the tree. Well, we can do it in two steps: first we create the link, and once we have come back to 20 through it, we destroy the link.

                                        20
                                      / | \
                                     /  |  \
                                  10    |   30
                                  /   \ |   
                                 /     \|
                               5       15

And thus we have the following –

1. SET current to root.
2. If current is not NULL, do –
2.a. If current has no left child, print current, set current to its right child and REPEAT 2.
2.b. Else, go to the rightmost child of current’s left child.
2.b.a. If its right pointer is NULL, link it to current, set current to current’s left child and REPEAT 2.
2.b.b. Else, set that right pointer back to NULL, print current, set current to current’s right child and REPEAT 2.

As pseudocode, we may write it as –

Morris-InOrder ( root )
current = root
while current != NULL do
	if LEFT(current) == NULL then
	   print current
	   current=RIGHT(current)
	else do
	   // set pre to left child of current
	   pre=LEFT(current)
	   // find rightmost child of the left child of current
	   while (RIGHT(pre) != NULL  and RIGHT(pre) != current) do
	       pre=RIGHT(pre)
	    //if this is null, link it to current and set current to current's left
	    if RIGHT(pre) == NULL then
	       RIGHT(pre)=current
	       current=LEFT(current)
	    // else unlink it, print current and set current to current's right child
	    else do
	       RIGHT(pre)=NULL
	       print current
	       current=RIGHT(current)

Looks nice, eh? Let’s just write the code.

void MorrisInorder(NODE *root) {
    NODE* current,*pre;
    current=root;
    while(current!=NULL) {
        if(current->left==NULL) {
            printf("%d ",current->data);
            current=current->right;
        }
        else {
            pre=current->left;
            while(pre->right != NULL && pre->right !=current) 
                pre=pre->right;
            if(pre->right==NULL) {
                pre->right=current;
                current=current->left;
            }
            else {
                pre->right=NULL;
                printf("%d ",current->data);
                current=current->right;
            }
        }
    }
}

Now, let’s talk about the fruits!

Insert happens in O(h) time. Each of the traversals (recursive, and iterative with a stack) runs in O(N) time and O(N) space (system stack or your own).

Morris InOrder runs in O(N) time – each edge is walked only a constant number of times, though with a larger constant than the recursive version – and O(1) space. One could say it is somewhat slower in practice, which is true, but the fact that it uses no additional space can be a huge boost in situations where you are low on system memory!

The entire code is available on: PASTEBIN
I hope you gathered all that info well! I will post a Tree 102, in which I shall discuss the delete operation and talk more about balanced trees!


Thou art Debugger

I have been a big fan of Visual Studio. It is an amazing IDE: it can be used to develop anything from a CLI to a GUI and from mobile apps to web apps. But I have never really tried all that, and that isn’t the reason I liked it so much. For starters, it can be very difficult to discover a bug in your program. You safely assume you have written what you wanted to write, but in reality that happens very rarely; often we miss some small thing here and there, and that creates havoc. The one thing that caught my eye as a young programmer was the debugging features of VS. Gosh, it’s amazing.
In this article, I will explain how debuggers debug!

Firstly, we will need to learn a bit about CPU registers. We will concentrate on the x86 architecture. If you have ever written code in x86 assembly, then you will have heard about them, but let me just walk you through the functions of these registers.

CPU registers are just memory that the CPU can use to store data – but they are the fastest memory the CPU can access. Ideally you would like to keep everything in them; unfortunately they are damn expensive, so a trade-off is made between price and performance. Though there are around 32 registers in all, the most commonly used ones for executing instructions (9, to be precise) are –

EAX (accumulator): used during add/sub/multiplication. Mult/div cannot be done anywhere except in EAX. Also used to store return values.
EDX (data register): used in conjunction with EAX. It is like a side-kick.
ECX (count register): used in looping. There is an interesting thing about it, however – it always counts downwards, never upwards. For example:

int a=100;
int b=0;
while(b<a)b++;

Here ECX will begin from 100, not 0, and move to 99, 98 …!
ESI (source index): holds the location of the input stream for data operations (READING).
EDI (destination index): points to the location where the result of a data operation is stored (WRITING).
ESP: stack pointer.
EBP: base pointer.
EBX: not designed for anything specific; can be used for extra storage.
EIP: the current instruction being executed.

Now that we understand this, let’s see how a debugger works!
Depending on the type of breakpoint, one of the following happens:

Soft Breakpoint :
Let us assume we need to execute this instruction :
mov %eax,%ebx

And this is at location 0x44332211, and the 2-byte opcode for this is 0x8BC3. Hence what we will see is:
0x44332211 0x8BC3 mov %eax,%ebx

Now, when we create a soft breakpoint at this instruction, what the debugger does is take the first byte (0x8B in this case) and replace it with 0xCC.
So now the opcode actually looks like 0xCCC3. 0xCC is the opcode for the INT 3 interrupt, which is used to halt execution. The CPU happily executes everything till this one, suddenly sees the 0xCC and knows it has to stop (it may not really like it, but – rules are rules😉 ). It then raises the INT 3 interrupt, which the debugger traps. The debugger checks the EIP register and sees whether this instruction is actually in its list of breakpoints (just in case the program itself has an INT 3 inside it). If it is present, it replaces the first byte with the correct value (0x8B in this case), and program execution can continue.

There are two kinds of soft breakpoints: one-shot, where the breakpoint fires only once and the debugger then removes it from its list; and persistent, where it keeps recurring – the debugger replaces the first byte with the correct one, but when execution resumes it patches 0xCC back in and keeps the entry in its list.

Soft breakpoints have one caveat, though: patching the byte changes the program’s CRC (cyclic redundancy check) checksum. A CRC is a function which tells whether data has changed – it is like a hash and can be applied to memory, files, etc. If the computed checksum differs from the known value, the CRC fails. This can be used to detect soft breakpoints – malware, for instance, may kill itself when its CRC fails. To get around this, we use hardware breakpoints!

Hardware Breakpoints –
Though hardware breakpoints cannot be applied to everything, they are still very useful. Recall that I said there are around 32 registers; I introduced 9 in the previous section, so let me add another 8 to that list. These eight – DR0 to DR7 – are debug registers. DR0–DR3 store the addresses of the breakpoints, so we can have only 4 hardware breakpoints (ouch!). DR4 and DR5 are reserved. DR6 is the status register, which tells which debugging event fired once a breakpoint is hit, and DR7 is the ON/OFF switch for hardware breakpoints; it also stores the different conditions, like –
1. Break when an instruction is executed at a particular address.
2. Break when data is written to an address.
3. Break on reads or writes to an address but not execution.

As you can guess, you can watch at most 4 addresses (each covering at most 4 bytes) with hardware breakpoints, but they are nonetheless very useful tools for reverse engineers. For creating hardware breakpoints, you can use the hbreak command inside gdb.

Memory Breakpoints –
These aren’t really breakpoints. They are more like permissions set on a section of memory or an entire page, something similar to file permissions. Once a permission is set, if any instruction tries to do something outside of these permissions on that memory, a break occurs. The permissions available are – Page Execution (enables exec but throws an access violation on read/write), Page Read (allows only reads), Page Write (only writes), and Guard Page (a one-time exception, after which the page returns to its original status). As with files, we can use combinations of these to set permissions. In gdb, to break on a write use watch; rwatch will break on a read and awatch on a read/write.
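Putting the last two sections together in gdb – hbreak burns one of the four DR0–DR3 slots, while the three watch variants correspond to the write/read/access conditions above (counter is a hypothetical variable in the program being debugged):

(gdb) hbreak main
(gdb) watch counter
(gdb) rwatch counter
(gdb) awatch counter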

I hope you now have a better understanding of how debuggers work. You may feel this is unnecessary info, but trust me – knowing how things work helps you use them better.

I would like to thank the source of all knowledge – the World Wide Web. Please feel free to send me corrections/suggestions/criticism.
Adios!


Quick-Short

Hi,


The cheapest, fastest and most reliable components of a computer system are those that aren’t there.
                                                                                   – Gordon Bell

I am back with a new post, and this time it’s about one of my favourite algorithms (yes, as a geek I am allowed to have fav algos😛 ) – Quick Sort! What’s so special about it? It is amazingly simple and very clearly complex. Quick sort can be implemented as horribly as follows –


void quicksort(int *x,int l,int u)
{
	// assumes a swap(a,b) helper - e.g. std::swap from <algorithm>
	int i,j,t;
	if(l>=u)return;
	t=x[l];
	i=l;
	j=u+1;
	for(;;)
	{
		do i++; while(i<=u && x[i]<t);
		do j--;while(x[j]>t );
		if(i>j)break;
		swap(x[i],x[j]);
	}
	swap(x[l],x[j]);
	quicksort(x,l,j-1);
	quicksort(x,j+1,u);
}

or as simple as


void quicksort(int *x,int l,int u)
{	
	int i,j,t;
	if(l>=u)return;
	t=x[l];
	i=l;
	for(j=l+1;j<=u;j++)
	{
		if(x[j]<x[l])
			swap(x[++i],x[j]); 
	}
	swap(x[l],x[i]);
	quicksort(x,l,i-1);
	quicksort(x,i+1,u);	
}

But in this post, we are not going to see its looks but rather we are going to explore its performance (real beauty).
Anyone who has attended a class on algorithms and data structures, or had the pleasure of learning them on their own (like me), knows that quicksort runs in O(n log n) expected time. It’s a known fact that for any given quicksort (standard implementation) there exists a case which ensures it runs in O(n^2) time – even for the purely randomized version. (If you don’t know about it, feel free to comment at the bottom and I will let the secret out😛 )

But what if I wanted to find the exact average? I know there exists a mathematical derivation using expectation which shows that it is n log n, but what if I wanted the exact number of comparisons made on average? We are going to make an attempt at that.

Before we do that, we should take a minute to observe that there are two quantities by which quicksort’s performance can be measured: the number of swaps made and the number of comparisons. We must select the one with the greater impact on the running time. In this post I am using comparisons over swaps. Why? Simply because the impact of a comparison is greater than the impact of a swap. How to prove it? Simple – write a piece of CODE! (I will post the code a little later.)

We will just add a new counter before the comparison inside the loop, and when the sort exits we will have the exact count of comparisons made.


void quicksort(int *x,int l,int u)
{	
	int i,j,t;
	if(l>=u)return;
	t=x[l];
	i=l;
	for(j=l+1;j<=u;j++)
	{
		cmp++; // cmp is a global counter of comparisons
		if(x[j]<x[l])
			swap(x[++i],x[j]); 
	}
	swap(x[l],x[i]);
	quicksort(x,l,i-1);
	quicksort(x,i+1,u);	
}

A very basic optimization is to add the whole count outside the loop, as shown – the loop always performs exactly u-l comparisons.


void quicksort(int *x,int l,int u)
{	
	int i,j,t;
	if(l>=u)return;
	t=x[l];
	i=l;
	cmp+=u-l; // all u-l comparisons of this partition, counted at once
	for(j=l+1;j<=u;j++)
	{
		if(x[j]<x[l])
			swap(x[++i],x[j]); 
	}
	swap(x[l],x[i]);
	quicksort(x,l,i-1);
	quicksort(x,i+1,u);	
}

It is still slow, and I want to speed it up. Is there any way I can get rid of that for loop? Actually, YES – I can remove it entirely. I know you are throwing away your thinking hat, shouting “WHAT WILL YOU SORT? AND IF YOU AREN’T SORTING ANYTHING, WHAT’S THE POINT?” Well, you are right: I am not interested in sorting. I am only interested in estimating the time it takes on average, and for that I don’t need to sort any array – I just need to simulate the sort, and to simulate it quickly I remove the for loop and everything associated with it.
Our simulator now looks like this. I have also removed the two bounds and replaced them with the length of the range I want to partition.


int quicksort_count(int L)
{
	if(L<=1)return 0;
	int m=1+(rand()%L); // the pivot's final rank, uniform in 1..L
	return L-1 + quicksort_count(m-1) + quicksort_count(L-m);
}

But if we want the true average, we need to do this for every possible m that may be chosen. Hence we modify our code to –

double quicksort_avg(int L)
{
	if(L<=0)return 0;
	double sum=0.0;
	for(int m=1;m<=L;m++)
		sum+=L-1 + quicksort_avg(m-1) + quicksort_avg(L-m);
	return sum/L;
}

We can improve its runtime using dynamic programming. We could take the top-down approach, storing previously computed values in an array and looking them up, or go bottom-up and compute the values in increasing order.


double quicksort_avg(int L)
{
	double dp[L+5]; // 5 is just taken for safety!
	dp[0]=0;
	for(int n=1;n<=L;n++)
	{
		double sum=0.0;
		for(int m=1;m<=n;m++)
			sum+=n-1 + dp[m-1] + dp[n-m];
		dp[n]=sum/n;
	}
	return dp[L];
}

I am still not happy. It uses O(N^2) time, which I obviously do not like. It may seem impossible to reduce, but in reality that is not the case. For example, if n=5, then the look-ups would be:

0 and 5-1
1 and 5-2
2 and 5-3
3 and 5-4
4 and 5-5.

As you can see, I am looking up the same elements twice!
So I can drop one of the two look-ups and multiply by 2 instead. Also, the n-1 is added n times inside the loop and then divided by n, so it contributes exactly n-1 once, outside the loop. With those changes, the routine runs in O(N) time.

double quicksort_avg(int L)
{
    double dp[L+5],sum=0;
    dp[0]=0;
    for(int n=1;n<=L;n++)
    {
        sum+=2*dp[n-1];
        dp[n]=n-1  + sum/n;   
    }
    return dp[L];
}

Even now, I am not happy (it’s impossible to make me happy, right?). We can actually improve on the O(N) space, since we only ever look at the previous state. Our final piece looks like this:

double quicksort_avg(int L)
{
    double dp,sum=0;
    dp=0;
    for(int n=1;n<=L;n++)
    {
        sum+=2*dp;
        dp=n-1 + sum/n;   
    }
    return dp;
}

Beautiful, isn’t it? In one for loop, using 2 variables, I can find the average number of comparisons made by quicksort on inputs of a given length.
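As a sanity check (my own addition), the classical analysis of this very recurrence gives the closed form 2(n+1)H_n - 4n, where H_n is the n-th harmonic number – and the one-loop simulator agrees with it:

#include <cstdio>

int main()
{
	const int L=1000;
	double dp=0,sum=0,H=0;
	for(int n=1;n<=L;n++)
	{
		sum+=2*dp;
		dp=n-1 + sum/n;
		H+=1.0/n;          // harmonic number H_n
	}
	printf("simulated   : %f\n", dp);
	printf("closed form : %f\n", 2.0*(L+1)*H - 4.0*L);  // identical output
	return 0;
}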

What I (or rather Jon Bentley) am trying to show is that sometimes – in fact, quite often – we can add functionality by actually removing code! Though this was a pretty small (and beautiful) example, it explains the idea rather well.

Do watch the video.

The original video: Three Beautiful QuickSorts

