Category Archives: Programming

Linear Recurrences

How often has it happened to you in a programming contest (or elsewhere) that you thought a problem was impossible to solve faster than O(N), and yet the imposed limits suggested it had to be done faster? Well, if not all, at least a majority of such problems have a solution based on the idea of linear recurrences. In this blog post, I intend to help you out with exactly that!

In this post, we are going to follow a solve-and-learn strategy: you will be given a question, and I will show you how to apply the concepts to it.

TYPE 1 :: The Simplest :

If a post mentions recurrences, then it has to mention Fibonacci. (Gosh, if only I had a penny for every mention of Fibonacci in tutorials.)

The recurrence is of type : F(n) = F(n-1) + F(n-2).

I am pretty sure you know how to code the linear version of it, which runs in O(N), but can you do it in O(log N)? If you put Google to good use, you will come up with a solution that says there is a matrix M which, when raised to the power N, gives you the N-th Fibonacci number. And since you can always exponentiate in O(log N) time, you have your answer. But to those who wondered whether this matrix is magical: read on!

Firstly, the answer: no, it's not magical. How? Let's do a little algebra (yumm… my favourite!).
F(n) = F(n-1) + F(n-2)
F(n+1) = F(n) + F(n-1)
F(n+2) = F(n+1) + F(n)

Obviously enough, the value of the N-th term depends on the two previous terms (or states). This implies that all values depend on just the first two states of the sequence, as you can see here:

\begin{pmatrix}F(n+2)\\ F(n+1)\end{pmatrix}=\begin{pmatrix}1&1\\ 1&0\\ \end{pmatrix}\times\begin{pmatrix}F(n+1)\\ F(n)\end{pmatrix}

and

\begin{pmatrix}F(n+1)\\ F(n)\end{pmatrix}=\begin{pmatrix}1&1\\ 1&0\\ \end{pmatrix} \times \begin{pmatrix}F(n)\\ F(n-1)\end{pmatrix}

Hence

\begin{pmatrix}F(n+2)\\ F(n+1)\end{pmatrix}=\begin{pmatrix}1&1\\ 1&0\\ \end{pmatrix} ^2 \times \begin{pmatrix}F(n)\\ F(n-1)\end{pmatrix}

\begin{pmatrix}F(n+2)\\ F(n+1)\end{pmatrix}=\begin{pmatrix}1&1\\ 1&0\\ \end{pmatrix}^3 \times \begin{pmatrix}F(n-1)\\ F(n-2)\end{pmatrix}

Hence, in general, we may write:
\begin{pmatrix}F(n)\\ F(n-1)\end{pmatrix}=\begin{pmatrix}1&1\\ 1&0\\ \end{pmatrix}^{n-1} \times \begin{pmatrix}1\\ 0\end{pmatrix}

I hope that has helped you understand how to frame such equations and solve them with a matrix.

TYPE 2 : Simplest ++

Now that we have a basic understanding, try the following recurrence:

F(n) = F(n-1) + F(n-2) + F(n-3).

It is the same as the previous recurrence but with one additional state. I won't go on explaining the hows (again!). I am just going to share the solution.
\begin{pmatrix}F(n)\\ F(n-1)\\ F(n-2) \end{pmatrix}=\begin{pmatrix}1&1&1\\ 1&0&0\\ 0&1&0 \end{pmatrix}^{n-2} \times \begin{pmatrix}2\\ 1\\ 1\end{pmatrix}

TYPE 3: Simplest << 1

Consider the following scenario ::

G(n) = a·G(n-1) + b·G(n-2) + c·H(n)

and

H(n) = d·H(n-1) + e·H(n-2)

This one is a lot trickier. The first thing to notice is that we will need 4 states in the matrix to fully define the next state. The reason for using 4 and not 3 is that H(n) depends on 2 previous states, and thus we need 2 states (and not just 1) to represent it.

If you carefully write down the LHS vector and the RHS vector, you can frame the solution as . . .

\begin{pmatrix}G(n)\\ G(n-1)\\ H(n+1)\\ H(n) \end{pmatrix}=\begin{pmatrix}a&b&c&0\\ 1&0&0&0\\ 0&0&d&e\\ 0&0&1&0 \end{pmatrix}^{n-1} \times \begin{pmatrix}G(1)\\ G(0)\\ H(2)\\ H(1)\end{pmatrix}

TYPE 4 : Ohhh !

The final hurdle can come in the form of a constant. If we add a constant C to the above recurrence, we get:

G(n) = a·G(n-1) + b·G(n-2) + c·H(n) + C

and

H(n) = d·H(n-1) + e·H(n-2)

But to tell you the truth, it's not that difficult if your concepts are clear. There is now one additional state to hold the information about C. The solution will look like:

\begin{pmatrix}G(n)\\ G(n-1)\\ H(n+1)\\ H(n)\\ C \end{pmatrix}=\begin{pmatrix}a&b&c&0&1\\ 1&0&0&0&0\\ 0&0&d&e&0\\ 0&0&1&0&0\\ 0&0&0&0&1 \end{pmatrix}^{n-1} \times \begin{pmatrix}G(1)\\ G(0)\\ H(2)\\ H(1)\\ C\end{pmatrix}

I hope this post lived up to your expectations and was worth the wait :P. Please feel free to post comments/corrections/improvements to make it really useful.


Return to Roots: Tree 101

What is a Tree :

A tree is a hierarchical arrangement of nodes. From the literal meaning of the word, we know that a tree has a root, branches, fruits and leaves. Well, in algorithms too, we have a root, which is the origin of the tree; we have branches, which connect to smaller trees; and we have leaves, which have no outgoing branches. And as far as the fruits are concerned: depending on the complexity of the operations that can be performed, we may label the fruits as sweet or sour!

The simplest tree would be one where each node branches to exactly one other node, or in other words, a singly linked list. If every node links to its child and also to its parent, we have a doubly linked list. But in this post, we are not going to discuss these.

The next level of trees is one where a single node may branch out to a maximum of two other nodes. Such a tree is called a binary tree. Binary trees are some of the most widely used data structures in computing, and we are going to discuss them in a series of posts. So let's begin.

One of the most important things to do is: create a tree.
So what do we need to create one? We will need to represent the nodes and the links between them. And since each node connects to a maximum of two other nodes, we will have two branches. We shall call these branches left and right. Each node will also store some data; our tree will be used to store just integers.

We will use the following structure to create it. FYI, everything here is in C++ and not C.

struct NODE {
    int data;
    NODE *left;
    NODE *right;
};

Now, whenever we need to insert a node, we need to make sure that there is a fixed position at which the node will be inserted, given its value (the data in the node). Let us follow a simple strategy:
We will insert a node to the left of a 'parent' node if its value is less than the value of the parent, and otherwise to the right. Binary trees which use such a strategy are called Binary Search Trees.

The obvious advantage of such a strategy is that we can search for an element in the tree in O(h) time, where h is the height of the tree. Do note that, in general, h does not equal log N. If we could actually have a tree whose height is indeed log N, we would call such a tree a Balanced Binary Search Tree.

Alright then, let's get our hands dirty with code that will create the tree for us. The function insert takes as input the root of the tree and the value to be inserted, and returns the node which contains the data.

NODE * insert(NODE *root, int data) {
    if(root==NULL) {
        // empty tree: the new node becomes the root
        root=new NODE;
        root->left=root->right=NULL;
        root->data=data;
        return root;
    }
    else {
        // walk down to the node that will become the parent
        while(root!=NULL) {
            if(root->data>data) {
                if(root->left!=NULL) root=root->left;
                else break;
            }
            else {
                if(root->right!=NULL) root=root->right;
                else break;
            }
        }
        NODE *new_node=new NODE;
        new_node->data=data;
        new_node->left=new_node->right=NULL;
        // attach on the side dictated by the comparison
        if(root->data > data) {
            root->left=new_node;
        }
        else root->right=new_node;
        return new_node;
    }
}

Another very useful and important property of this strategy is that the INORDER traversal of the tree is sorted!

Let's back up a bit. What are traversals? A traversal is like visiting many homes using the roads which connect them; only here, the homes are the NODEs and the roads are the links between the nodes.

There are many traversals, but the three used most often are PreOrder, InOrder and PostOrder.

In PreOrder, you print the current node and then visit its left and right children, recursively.
In InOrder, you first visit the left child; once you have returned, you print the current value and then visit the right child.
In PostOrder, you visit both children and then print the current value.

Here is the code snippet for the InOrder traversal (recursive version).

void inorder(NODE *root) {
    if(root!=NULL) {
        inorder(root->left);
        printf("%d ",root->data);
        inorder(root->right);
    }
}

You could write an iterative version, where you simulate the operations of the system stack using your own stack. The obvious advantage is that you save space, since you push only node pointers instead of the full frames the system would push for each function call.

However, there exists a really beautiful iterative version which does not use a stack at all. It only assumes that two pointers can be checked for equality. It is based on threaded trees and was first described in 1979 by Morris, hence the name!

How does it work?

The only reason we need a stack is so that we can do the "RETURN" from child nodes to parent nodes. And really, this return is needed from only one node here. Consider a 5-node tree.

                                      20
                                    /     \
                                   /       \
                                 10        30
                                /   \     
                               /     \
                             5       15

Now our stack would work like this.

1. Push 20.
2. Push 10.
3. Push 5.
4. Pop 5 and print 5.
5. Pop 10 and print 10.
6. Push 15.
7. Pop 15 and print 15.
8. Pop 20 and print 20.
9. Push 30.
10. Pop 30 and print 30.

If I write a non-recursive and non-stack version, my greatest headache would be to go from 15 to 20 (statements 7-8). So we need to link 15 and 20 so that we can reach 20 without problems. But that would mean that we are modifying the tree. Well, we can do it in two steps: first we create the link, and then, once we have returned to 20 through it, we destroy that link.

                                        20
                                      / | \
                                     /  |  \
                                   10   |   30
                                  /   \ |
                                 /     \|
                               5       15

And thus we have the following –

1. SET current to root.
2. If current is not NULL, do –
2.a. If current has no left child, print current, set current to its right child and REPEAT 2.
2.b. Else, go to the rightmost node of current's left subtree.
2.b.a. If its right pointer is NULL, link it to current, set current to current's left child and REPEAT 2.
2.b.b. Else, set that right pointer back to NULL, print current, set current to current's right child and REPEAT 2.

As a pseudocode we may write it as –

Morris-InOrder ( root )
current = root
while current != NULL do
	if LEFT(current) == NULL then
	   print current
	   current=RIGHT(current)
	else do
	   // set pre to left child of current
	   pre=LEFT(current)
	   // find rightmost child of the left child of current
	   while (RIGHT(pre) != NULL  and RIGHT(pre) != current) do
	       pre=RIGHT(pre)
	    // if this is null, link it to current and set current to current's left
	    if RIGHT(pre) == NULL then
	       RIGHT(pre)=current
	       current=LEFT(current)
	    // else unlink it, print current and set right child of current as current
	    else do
	       RIGHT(pre)=NULL
	       print current
	       current=RIGHT(current)

Looks nice, huh? Let's just write the code.

void MorrisInorder(NODE *root) {
    NODE* current,*pre;
    current=root;
    while(current!=NULL) {
        if(current->left==NULL) {
            printf("%d ",current->data);
            current=current->right;
        }
        else {
            pre=current->left;
            while(pre->right != NULL && pre->right !=current) 
                pre=pre->right;
            if(pre->right==NULL) {
                pre->right=current;
                current=current->left;
            }
            else {
                pre->right=NULL;
                printf("%d ",current->data);
                current=current->right;
            }
        }
    }
}

Now, lets talk about the fruits!

Insertion happens in O(h) time. Each of the traversals (the recursive version and the iterative version using a stack) runs in O(N) time and uses O(h) space (system stack or our own stack), which is O(N) in the worst case.

Morris inorder runs in O(N) time and O(1) space: each edge is traversed at most a constant number of times, so the predecessor-finding loops amortize out. It does a bit more work per node than the stack version, but the fact that it does not use additional space can be a huge boost in situations where you are low on system memory!

The entire code is available on :PASTEBIN
I hope you gathered all that info well! I will post a Tree 102, in which I shall discuss the delete operation and talk more about balanced trees!


Thou art Debugger

I have been a big fan of Visual Studio. It is an amazing IDE. It can be used to develop anything from a CLI to a GUI and from mobile apps to web apps. But I have never really tried all that, and that isn't the reason why I like it so much. As a beginner, it can be very difficult to discover a bug in your program. You safely assume you have written what you wanted to write, but in reality that happens very rarely. Often we miss some little thing here and there, and that creates havoc. The one thing that caught my eye as a young programmer was the debugging features of VS. Gosh, it's amazing.
In this article, I will explain how debuggers debug!

Firstly, we will need to learn a bit about CPU registers. We will concentrate on the x86 architecture. If you have ever written code in x86 assembly, you will have heard of them. But let me just walk you through the functions of these registers.

CPU registers are just memory that the CPU can use to store data, but they are the fastest memory accessible to the CPU. Ideally you would like to keep everything in them. Unfortunately, they are damn expensive, and so a trade-off is made between price and performance. Though there are 32 registers, the ones most commonly used for executing instructions (9, to be precise) are –

EAX : accumulator : used to store data during addition/subtraction/multiplication; multiplication/division cannot be done anywhere except in EAX. Also used to store return values.
EDX : data register : used in conjunction with EAX. It is like a side-kick.
ECX : count register : used in looping. However, there is an interesting thing about it: it always counts downwards, never upwards. For example:
int a=100;
int b=0;
while(b<a)b++;

Here, ECX will begin from 100 and not 0, and move to 99, 98, …!
ESI : source index : holds the location of the input stream for data operations (READING).
EDI : destination index : points to the location where the result of a data operation is stored (WRITING).
ESP : stack pointer.
EBP : base pointer.
EBX : not designed for anything specific; can be used for extra storage.
EIP : holds the current instruction being executed.

Now that we understand this, let's see how a debugger works!
Depending on the type of breakpoint, one of the following happens :

Soft Breakpoint :
Let us assume we need to execute this instruction :
mov %eax,%ebx

And this is at location 0x44332211, and the 2-byte opcode for it is 0x8BC3. Hence what we will see is:
0x44332211 0x8BC3 mov %eax,%ebx

Now, when we create a soft breakpoint at this instruction, the debugger takes the first byte of the opcode (0x8B in this case) and replaces it with 0xCC.
So now the opcode actually reads 0xCCC3. 0xCC is the opcode for the INT 3 interrupt, which is used to halt execution. When the CPU, happily executing everything up to this point, suddenly sees the 0xCC, it knows it has to stop (it may not really like it, but rules are rules 😉 ). It raises the INT 3 interrupt, which the debugger traps. The debugger then checks the EIP register and sees whether this instruction is actually in its list of breakpoints (just in case the program itself has an INT 3 inside it). If it is present, the debugger replaces the first byte with the correct value (0x8B in this case), and program execution can continue.

There are two kinds of soft breakpoints: one-shot, where the breakpoint occurs only once and after that the debugger removes it from its list; and persistent, where it keeps recurring. For a persistent breakpoint, the debugger replaces the first byte with the correct byte, but when execution resumes, it once again replaces it with 0xCC instead of removing it from its list.

Soft breakpoints have one caveat though. Setting one changes the program's CRC (cyclic redundancy check) checksum. A CRC is a function which tells whether data has changed or not; it is like a hash function and can be applied to memory, files, etc. The computed value is compared against a known value, and if they differ, the check fails. This can be used to detect soft breakpoints: malware, for example, may kill itself if its CRC check fails. To get around this, we use hardware breakpoints!

Hardware Breakpoints –
Though hardware breakpoints cannot be applied to everything, they are still very useful. Recall that I said there are 32 registers. I introduced 9 in the previous section; let me add another 8 to that list. These eight, DR0 to DR7, are the debug registers. DR0 to DR3 store the addresses of the breakpoints, and thus we can have only 4 hardware breakpoints (ouch!). DR4 and DR5 are reserved. DR6 is the status register, which tells the type of debugging event once one is hit, and DR7 is the ON/OFF switch for hardware breakpoints; it also stores the different break conditions, such as –
1. Break when an instruction is executed at a particular address.
2. Break when data is written to an address.
3. Break on reads or writes to an address but not execution.

As you can guess, you can only watch 4 addresses at a time with hardware breakpoints, but they are nonetheless very useful tools for reverse engineers. For creating hardware breakpoints, you can use the hbreak command inside gdb.

Memory breakpoints.
These aren't really breakpoints. It is more like setting permissions on a section of memory or on an entire page, similar to file permissions. When we set a particular permission, if any instruction tries to do something outside those permissions on that memory, a break occurs. The permissions available are: Page Execute (allows execution but throws an access violation on read/write), Page Read (allows only reads), Page Write (allows only writes), and Guard Page (a one-time exception, after which the page returns to its original status). As with files, we can combine these to set permissions. In gdb, use watch to break on write, rwatch to break on read, and awatch to break on read/write.

I hope you now have a better understanding of how debuggers work. You may feel this is unnecessary info, but trust me, knowing how things work underneath helps you use them better.

I would like to thank the source of all knowledge: the World Wide Web. Please feel free to send me corrections/suggestions/criticism.
Adios!