Here we are going to discuss Satisfiability (or SAT) and especially 2SAT-based problems. SAT problems ask whether a given boolean formula has an assignment of truth values that makes it true.
However, since a general formula is too unwieldy to represent with a data structure, we typically reduce such formulae into Conjunctive Normal Form (CNF) – an AND of clauses, where each clause is an OR of literals.
Read here if you want to know how to convert any formula to CNF. Each subformula joined by AND is called a clause, and each variable (or its negation) is a literal. Satisfiability of a CNF whose clauses have 3 or more literals is NP-complete – these are the 3SAT, 4SAT, etc. problems. We will only explore the satisfiability of 2CNF, where every clause has exactly two literals, for example (x OR y) AND (NOT x OR z).
2SAT problems are polynomial-time decidable. The problem can be modelled as a graph problem: we create two vertices for each variable x. One vertex represents x and the other NOT x.
Now we add an edge from NOT a to b iff we have a clause of the form (a OR b).
Hence for every 2SAT clause (a OR b), we will have two edges: one from NOT a to b and another from NOT b to a. Since we require a clause of the form (a OR b) to create an edge, an implication a => b is first rewritten as (NOT a OR b), and then we draw the two edges as above.
The edges represent an 'If-Then' relationship. A clause like (x OR y) can be rewritten as –
if NOT x then y; if NOT y then x
This is precisely what we represent when we add an edge – we add one edge for each of the IF conditions.
Taking the clauses above as an example, here is how the edges can be drawn out.
To figure out if the given 2CNF is satisfiable, we just need to check, for each variable, whether there is a path from x to NOT x and also from NOT x to x (similarly for y and z). You can do this via a DFS or a BFS. A faster way is to check for Strongly Connected Components: if x and NOT x (similarly for y and z) lie in the same component, the formula isn't satisfiable.
One other way (and quicker to code) is to use Floyd-Warshall to compute reachability between every pair of vertices. It is quick to write and easy to remember.
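A minimal sketch of this approach: the literal encoding below (variable i becomes vertices 2i and 2i+1) and the function name are my own choices, not from the post, and for clarity this uses the Floyd-Warshall-style transitive closure rather than SCCs.

```cpp
#include <cassert>
#include <vector>
using namespace std;

// Literal encoding (an assumption of this sketch): variable i has two
// vertices, 2*i for "x_i" and 2*i+1 for "NOT x_i"; (lit ^ 1) negates.
// A clause (a OR b) adds the edges  NOT a -> b  and  NOT b -> a.
bool sat2(int nVars, const vector<pair<int,int>>& clauses) {
    int n = 2 * nVars;
    vector<vector<bool>> reach(n, vector<bool>(n, false));
    for (const auto& c : clauses) {
        reach[c.first ^ 1][c.second] = true;   // NOT a -> b
        reach[c.second ^ 1][c.first] = true;   // NOT b -> a
    }
    // Floyd-Warshall style transitive closure over the implication graph.
    for (int k = 0; k < n; k++)
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (reach[i][k] && reach[k][j]) reach[i][j] = true;
    // Unsatisfiable iff some x and NOT x can reach each other.
    for (int i = 0; i < nVars; i++)
        if (reach[2*i][2*i+1] && reach[2*i+1][2*i]) return false;
    return true;
}
```

For example, over variables x (vertices 0/1) and y (vertices 2/3), the single clause (x OR y) is satisfiable, while all four clauses over x and y together are not.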
Here is a problem for practice: SRM 464 – Div 1 – 550.
The trick is to create a 2CNF where we add a clause every time we cannot take a pair of vertices (A, B) together. The clause will take the form NOT (A AND B), or in CNF form, (NOT A OR NOT B).
I hope you found this useful!
Like I mentioned, I have a lot of free time now and I decided to go back to doing something I always loved – solving problems on Topcoder. So I started the Arena, logged in (I was surprised I could still remember my password) and went straight to SRM 411 Div 2. It was a problem set I had already solved, and I wanted my first practice contest in almost 10 months to be easy. Well – it wasn't! I thought I had coded the 250 right, but the tests revealed that I had missed the easiest edge cases. It took me a lot of thinking and a little searching to finally figure out how to do the 600, and the 900 was a bit easy on the mind but impossible to code! Compared against my performance the last time around, the gap was like that between me and Petr on a live SRM!
I am not going to share anything related to this experience though. Just to encourage myself, I am going to look into the eternal abyss (actually my memory – things fall in rather quickly but never quite make it back when I need them!) and share some problems (in a never-ending saga) that I was able to solve during a live contest but which many of my more established peers could not (or so I would like to think)!
[Problem Statement] [My Submission]
Don’t be fooled just because it’s a 300-point problem. This problem saw over 800 submissions but only 38% passed! Also, this is Round 2, so there are hardly any rookies left, and I remember a large number of red coders failing the system test. Maybe because it was a 300-point problem, people took it lightly. But I knew I could only solve this one, since the 450 and 1000 were a bit on the harder side for me!
The only reason I was able to solve this problem was that I had just learnt how to solve problems involving bipartite matching (just the easy ones though). And when you have learnt a new technique, suddenly every problem fits the bill and you can see a bipartite graph in every problem. Fortunately, it was true for this problem.
First thing to notice was that if it was not a bipartite graph, there was no solution possible as the system would be inconsistent. Also, if there was no matching possible on this graph for any vertex, then again the system would be inconsistent (Notice that if one switch does not have an associated lamp, then either one switch is connected to two lamps or there is a lamp without a switch). However, this turned out to be the easy part!
I tried a lot of different simple techniques to figure out the number of experiments I would need, but everything had one flaw or another. It was in the dying minutes of the contest that I jumped up with a 'Eureka'! What I realized was: let's say there is a set of switches A which can be mapped to a set of lamps B (note: n(A) always equals n(B)). Then the number of experiments needed is ceil(log2(n(A))).
Why? Let's say the system had 4 switches and lamps. If I switch on 2 and switch off 2, in one experiment I would have identified which 2 switches map to which 2 lamps. Now I have two groups of 2 each. Notice that groups are independent of each other (experimenting on one does not affect the result of another – which means that in one experiment I can set the states of all switches and narrow down every group at once!). If I repeat this again, then in 2 experiments I would have the answer for these 4 switches and lamps. The fact that experiments are independent means that if you identify all such groups and figure out the largest number of experiments needed to solve any one group, that is your final answer.
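The counting step can be sketched like this. Note that experimentsNeeded is a hypothetical helper of mine, and the group sizes are assumed to come from the bipartite matching described above:

```cpp
#include <cassert>
#include <vector>
using namespace std;

// Given the sizes of the independent switch/lamp groups, the answer is
// the largest ceil(log2(size)) over all groups, since every group can be
// probed in parallel during each experiment.
int experimentsNeeded(const vector<int>& groupSizes) {
    int best = 0;
    for (int s : groupSizes) {
        int e = 0;
        while ((1 << e) < s) e++;   // smallest e with 2^e >= s
        if (e > best) best = e;
    }
    return best;
}
```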
[Problem Statement] [My Solution]
This problem is a source of both happiness and sorrow. Happiness – because I figured out how to solve it; sorrow because I got lazy in implementing it and the hard cases broke this code. If I hadn’t been lazy, I would have qualified for Round 3 which would have been a moment of great pride. I did end up in the top 1000 for which they sent me a nice GCJ tshirt but alas – the wrong size!
When is the waiter called? Every time the LCM of the first K numbers is not equal to the LCM of the first K-1 numbers. This is the most important observation.
The problem statement asks us to find two permutations A and B of the first N numbers such that A results in the smallest number of waiter calls and B in the largest. Just by looking at the test cases you can figure out that the largest count comes from keeping the numbers in sorted order. But what is this value? It turns out it is 1 (for the number 1) plus, for each prime p <= N, the exponent of the largest power of p that is less than or equal to N.
How? Consider the first 10 numbers. When 1 comes in, he will call. So will 2 and so will 3. At this point the LCM is 6. But when 4 comes in, he will need to multiply the LCM by 2, so even he will have to call. So will 5, but when 6 comes in, he does not have to, since 2 and 3 have already taken care of his needs! 7 will again call and so will 8 (since we need another factor of 2 to satisfy the condition) and 9 (a factor of 3 this time). But when 10 comes in, he will not need to. So the total calls here is 1 (for 1) + 3 (for 2^3) + 2 (for 3^2) + 1 (for 5^1) + 1 (for 7^1) = 8.
What is the smallest such count? Simple: if the largest power of each prime comes first, it ensures that none of that prime's other multiples need to call. In essence, it is the number of primes less than or equal to N. But instead of computing these separately, you can do both in one loop by reducing the value you find in the above explanation by 1 for each prime.
Just to point out where I blundered: when calculating the largest power I used the log function. While mathematically there is nothing wrong with that, in practice log carries a floating-point error which is greatly magnified for larger numbers. Not that I did not know this when I was implementing it – I just got bloody lazy!
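A minimal sketch of the safe, integer-only way to count the maximum number of calls (the sieve and the helper name are my own; the exponent is found by repeated multiplication instead of floating-point log):

```cpp
#include <cassert>
#include <vector>
using namespace std;

// maxCalls(N) = 1 (for the number 1) plus, for each prime p <= N,
// the exponent of the largest power p^k <= N.
int maxCalls(int N) {
    vector<bool> composite(N + 1, false);
    int calls = (N >= 1) ? 1 : 0;              // the number 1 always calls
    for (int p = 2; p <= N; p++) {
        if (composite[p]) continue;            // simple sieve
        for (int q = 2 * p; q <= N; q += p) composite[q] = true;
        long long pw = p;
        while (pw * p <= N) { pw *= p; calls++; }  // exponents beyond 1
        calls++;                               // the prime p itself
    }
    return calls;
}
```

For N = 10 this reproduces the worked example above: 1 + 3 + 2 + 1 + 1 = 8.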
Any hoot, we all learn from mistakes and so did I!
Do try solving these problems on your own and let me know how that went!
I almost forgot – to my ardent fans – A Very Happy New Year! That’s it for now!
$out =~ s/(^[a-zA-Z0-9]+)\.([a-z]+)/<a href="\1.\2">\1<\/a>/g;
What does it do? I hope you will be able to tell me after reading through this.
This post is to add to your arsenal – an Intercontinental ballistic missile of programming or what others call – Regex!
This is just plain text. If I need to match cat in Bell the cat, I would just use cat as a regex!
The following characters – []{}()|.+*\^$ – are special to regex. If you need to use them as literals you need to escape them by preceding them with \, for eg – \{. Now what do these do –
Regex Character : What it is : Examples

[] : Character class : [abcd] – matches any one of a, b, c or d. [^abcd] – matches anything which is none of a, b, c or d.

. : Dot : Matches any single character except \n.

* : Star : Matches zero or more repetitions of the preceding token. So, if you use .*cat, it will match pussycat and also cat.

+ : Plus : Matches one or more repetitions of the preceding token. So, if you use .+cat, it will match pussycat but not cat.

| : Alternation : This works like an 'or' in a regex. Say you want to match dog in the string My dogs name is Tiger, but also match cat in My cats name is puff. These are almost identical strings, so your regex would be My (cat|dog)s name is .*

{} : Limited Repetition : Let's say you want to control how many times a pattern is matched – or even better, you know the minimum and the maximum. In such a case you would use {}. For eg – [0-9]{2,5} matches any 2, 3, 4 or 5 digit number (leading zeros allowed). If you want only 2 digit numbers – [0-9]{2}; if you want at least 2 digits – [0-9]{2,} (note the comma).

$ : End-of-line anchor : If your regex ends with this character, you are saying 'the pattern must occur at the end of the line'. For eg, in I am What I am, searching for am matches both occurrences, but am$ matches only the last one.

^ : Start-of-line anchor : If your regex starts with this character, you are saying 'the pattern must occur at the start of the line'. For eg, in I am What I am, searching for I matches both occurrences, but ^I matches only the first one.

^$ : Caret and Dollar together : You can use ^ and $ in the same regex, in which case the line must contain exactly the pattern. For eg, if your input is a large file with text on every line and you are trying to pull out a key of length 10 which can contain letters and digits, you would say – ^[0-9A-Za-z]{10}$
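As a quick illustration, the key-matching regex from the last entry can be tried out in code. The post's examples are Perl-flavoured; this is just a sketch using C++'s std::regex, and the helper name is mine:

```cpp
#include <cassert>
#include <regex>
#include <string>

// Match a 10-character key of letters and digits, as described above.
// regex_match already anchors at both ends, but ^ and $ are kept for clarity.
bool isKey(const std::string& s) {
    static const std::regex key("^[0-9A-Za-z]{10}$");
    return std::regex_match(s, key);
}
```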
Now that you have a basic grasp of regex writing, it is time to learn some more advanced stuff.
You use () when you are looking to group a pattern so that another operation can be applied to it – like (ash)+, which will match ash and ashash but not ashas. But that would be a very primitive usage. The more powerful usage is backreferencing. When you put a pattern into (), you tell the regex engine to store the match internally so that you can access it later. To use a matched pattern as a pattern again, you use \ followed by a number, which is the sequence number of the backreference. If you say \1, it means the pattern matched by the first set of parentheses. For eg, if you want to write a regex which will match paired HTML opening and closing tags, you can use
<([A-Z][A-Z0-9]*)\b[^>]*>(.*?)<\/\1>
A language like Perl allows you to retrieve backreferences. In the above example, to get the tag you would use $1, and to see inside this tag you would use $2 (since the second set of () contains the HTML inside the tag).
Suppose you want to use a regex to match an HTML tag, assuming your input is a well formed HTML file.
You would think that <.+> will do the job easily. But be surprised when you test it on a string like This is my <TAG>first</TAG> test. You might expect the regex to match <TAG> and, when continuing after that match, </TAG>.
But it does not. The regex will match <TAG>first</TAG> – not what we wanted. The reason is that the plus is greedy. That is, the plus causes the regex engine to repeat the preceding token as often as possible. Only if that causes the entire regex to fail will the regex engine backtrack: it will go back to the plus, make it give up the last iteration, and proceed with the remainder of the regex. To avoid such pitfalls, make the quantifier lazy by appending a ? (as in .+? or .*?). You can see this in the above regex.
The ? can also be used to mark something as optional. If you want to match February but also Feb, you can use Feb(ruary)?
In one word – better. The regex engine will perform better than anything you or I can write to match a pattern, unless you write your own regex engine – and even then, the standard one will likely beat you to it! Also, the simpler your regex, the faster it runs (obviously). Using backreferencing will slow down your regex. A very simple example is grep: this utility only allows simple regex features and tends to be faster than egrep, which allows much more advanced stuff but at a price!
Now, after going through all this, I hope you can answer what the first regex I introduced you to does! It's in Perl, and s/<PATTERN>/<REPLACE>/g replaces <PATTERN> with <REPLACE>, globally.
I hope you are now able to add regex to your programming arsenal and that this has helped you understand it. For more info, you can always Google it!
In this post, we are going to do a – Solve and Learn strategy ; You will be given a question and I will show you how to apply the concepts on them.
If a post mentions recurrences, then it has to mention Fibonacci (Gosh, if only I had a penny for every mention of Fibo in tutorials. )
The recurrence is of the type: F(n) = F(n-1) + F(n-2).
I am pretty sure you know how to code the linear version of it, which runs in O(N), but can you do it in O(log N)? If you put Google to good use, you will come up with a solution which says there is a matrix M which, when raised to the power N, will give you the Nth Fibonacci number. And since you can always exponentiate in O(log N) time, you have your answer. But for those who wondered whether this matrix is magical – read on!
Firstly, the answer is no; it's not magical. How? Let's do a little algebra (yumm… my favourite!)
Obviously enough, the value of the Nth term depends on the two previous terms (or states). This implies that all values depend on just the first two states in the sequence. As you can see here –

| F(n)   |   =   | 1  1 | | F(n-1) |
| F(n-1) |       | 1  0 | | F(n-2) |

Hence, in general, we may write:

| F(n)   |   =   | 1  1 |^(n-2)  | F(2) |
| F(n-1) |       | 1  0 |        | F(1) |
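Here is a minimal sketch of the idea in C++ – repeated squaring of the 2x2 matrix [[1,1],[1,0]] computes F(n) in O(log n) multiplications (the struct and function names are my own):

```cpp
#include <cassert>

struct Mat { long long a, b, c, d; };  // [[a, b], [c, d]]

Mat mul(Mat x, Mat y) {
    return { x.a*y.a + x.b*y.c, x.a*y.b + x.b*y.d,
             x.c*y.a + x.d*y.c, x.c*y.b + x.d*y.d };
}

long long fib(int n) {
    if (n <= 2) return n > 0 ? 1 : 0;   // F(1) = F(2) = 1
    Mat result = {1, 0, 0, 1};          // identity matrix
    Mat base = {1, 1, 1, 0};
    int e = n - 2;                      // we need M^(n-2)
    while (e > 0) {                     // fast exponentiation
        if (e & 1) result = mul(result, base);
        base = mul(base, base);
        e >>= 1;
    }
    // [F(n), F(n-1)]^T = M^(n-2) [F(2), F(1)]^T = M^(n-2) [1, 1]^T
    return result.a + result.b;
}
```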
I hope that has helped you in understanding how to frame such equations and solving it with a matrix.
Now that we have a basic understanding, try the following recurrence:
F(n) = F(n-1) + F(n-2) + F(n-3).
It is the same as the previous recurrence but with an additional state. I won't go on explaining the hows (again!). I am just going to share the solution – the transformation matrix here is

| F(n)   |   =   | 1  1  1 | | F(n-1) |
| F(n-1) |       | 1  0  0 | | F(n-2) |
| F(n-2) |       | 0  1  0 | | F(n-3) |
Consider the following scenario ::
This one is a lot trickier. The first thing to notice is that we will need 4 states in the matrix to fully define the next state. The reason for using 4 and not 3 is that H(n) depends on 2 previous states, and thus we need 2 entries (and not just 1) to represent it.
If you carefully write down the LHS matrix and the RHS matrix, then we can frame the solution as . . .
The final hurdle can come in the name of a constant. If we add a constant C to the above recurrence we get –
But to tell you the truth, it's not that difficult if your concepts are clear. There is now one additional state to hold the information about C. The solution will look like –
I hope this post lived up to your expectations and I hope it was worth the wait :P. Please feel free to post comments/corrections/improvements to this post to make it really useful.
A tree is a hierarchical arrangement of nodes. From the literal meaning of tree we know that it has a root, branches, fruits and leaves. Well, in algorithms too, we have a root – the origin of the tree. We have branches which connect to smaller trees, and we have leaves, which do not have outgoing branches. And as far as the fruits are concerned – depending on the complexity of the operations that can be performed, we may label the fruits as sweet or sour!
The simplest tree would be one where each node branches to exactly one other node – in other words, a singly linked list. If every node branches to its child and also to its parent, we have a doubly linked list. But in this post, we are not going to discuss these.
The next level of trees is where a single node may branch out to a maximum of two other nodes. Such a tree is called a binary tree. Binary trees are some of the most widely used data structures in computing and we are going to discuss them in a series of posts. So let's begin.
One of the most important things to do is : Create a tree.
So what do we need to create one? We will need to represent the nodes and the links between nodes. Since each node connects to a maximum of two other nodes, we will have two branches; we shall call these branches left and right. Each node will also store some data. Our tree will be used to store just integers.
We will use the following structure to create it. FYI, everything here is in C++ and not C.
struct NODE
{
    int data;
    NODE *left;
    NODE *right;
};
Now whenever we need to insert a node, we need to make sure that there is a fixed position at which the node will be inserted, given its value (the data in the node). Let us follow a simple strategy.
We will insert a node to the left of a 'parent node' if its value is less than the value of the parent, otherwise to the right. Binary trees which use such a strategy are called Binary Search Trees.
The obvious advantage of such a strategy is that we can search for elements in the tree in O(h) time, where h is the height of the tree. Do note that, in general, h does not equal log N. If we could actually have a tree whose height is indeed log N, we would call such a tree a Balanced Binary Search Tree.
Alright then, let's get our hands dirty with code that will create the tree for us. The function insert takes as input the root of the tree and the value to be inserted, and returns the node which contains the data.
NODE *insert(NODE *root, int data)
{
    if (root == NULL)
    {
        root = new NODE;
        root->left = root->right = NULL;
        root->data = data;
        return root;
    }
    else
    {
        while (root != NULL)
        {
            if (root->data > data)
            {
                if (root->left != NULL) root = root->left;
                else break;
            }
            else
            {
                if (root->right != NULL) root = root->right;
                else break;
            }
        }
        NODE *new_node = new NODE;
        new_node->data = data;
        new_node->left = new_node->right = NULL;
        if (root->data > data)
            root->left = new_node;
        else
            root->right = new_node;
        return new_node;
    }
}
Another very useful and important property of the above strategy is that the INORDER traversal is sorted!
Let's back up a bit. What are traversals? A traversal is like visiting many homes using the roads which connect them – only here, the homes are the NODEs and the roads are the links between each node.
There are many traversals, but the three used very often are – PreOrder, InOrder and PostOrder.
In PreOrder, you print the current node and then visit its left and then its right children, recursively.
In InOrder, you first visit the left child; once you have returned, you print the current value and then visit the right child.
In PostOrder, you visit both your children and then print the current value.
Here is the code snippet for the InOrder traversal (recursive version).
void inorder(NODE *root)
{
    if (root != NULL)
    {
        inorder(root->left);
        printf("%d ", root->data);
        inorder(root->right);
    }
}
You could write an iterative version, where you simulate the operations of the system stack using your own stack. The obvious advantage is that you would be saving space (you would push only node pointers, rather than the full frames the system pushes for each function call).
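The stack-based iterative version mentioned above can be sketched like this (a sketch of mine, using a Node struct that mirrors the post's NODE and collecting values instead of printing them):

```cpp
#include <stack>
#include <vector>

struct Node { int data; Node *left, *right; };

std::vector<int> inorderIterative(Node *root) {
    std::vector<int> out;
    std::stack<Node*> st;
    Node *cur = root;
    while (cur != nullptr || !st.empty()) {
        // go as far left as possible, saving the path on the stack
        while (cur != nullptr) { st.push(cur); cur = cur->left; }
        cur = st.top(); st.pop();   // "return" to the saved node
        out.push_back(cur->data);
        cur = cur->right;
    }
    return out;
}
```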
However, there exists a really beautiful iterative version which does not use a stack at all. All it assumes is that two pointers can be checked for equality. It is based on threaded trees and was first described in 1979 by Morris – hence the name!
How does it work?
The only reason we need a stack is so that we can do the "RETURN" from child nodes to parent nodes. This return is really needed only from one node. Consider a 5-node tree.
        20
       /  \
     10    30
    /  \
   5    15
Now our stack would work like this.
1. Push 20.
2. Push 10.
3. Push 5.
4. Pop 5 and print 5.
5. Pop 10 and print 10.
6. Push 15.
7. Pop 15 and print 15.
8. Pop 20 and print 20.
9. Push 30.
10. Pop 30 and print 30.
If I write a non-recursive and non-stack version, my greatest headache would be going from 15 back to 20 (steps 7-8). So we need to link 15 and 20, so that we can go to 20 without problems. But that would mean that we are modifying the tree. Well, we can do it in two steps: first we link the two, and in the next step, once we have printed 20, we destroy that link.
        20 <────────┐
       /  \         │
     10    30       │  (temporary thread from 15 back to 20)
    /  \            │
   5    15 ─────────┘
And thus we have the following –
1. SET current as root.
2. if current is not null do –
2.a. if current has no left child, print current, set current to its right child and REPEAT 2.
2.b. else go to the rightmost child of current's left child.
2.b.a. If its right child is NULL, then link it to current, set current to its left child and REPEAT 2.
2.b.b. else set that right child back to NULL, print current, set current to its right child and REPEAT 2.
As a pseudocode we may write it as –
MorrisInOrder(root)
    current = root
    while current != NULL do
        if LEFT(current) == NULL then
            print current
            current = RIGHT(current)
        else do
            // set pre to the left child of current
            pre = LEFT(current)
            // find the rightmost child of the left child of current
            while RIGHT(pre) != NULL and RIGHT(pre) != current do
                pre = RIGHT(pre)
            // if this is NULL, link it to current and descend left
            if RIGHT(pre) == NULL then
                RIGHT(pre) = current
                current = LEFT(current)
            // else unlink it, print current and move to the right child
            else do
                RIGHT(pre) = NULL
                print current
                current = RIGHT(current)
Looks nice, eh? Let's just write the code.
void MorrisInorder(NODE *root)
{
    NODE *current, *pre;
    current = root;
    while (current != NULL)
    {
        if (current->left == NULL)
        {
            printf("%d ", current->data);
            current = current->right;
        }
        else
        {
            pre = current->left;
            while (pre->right != NULL && pre->right != current)
                pre = pre->right;
            if (pre->right == NULL)
            {
                pre->right = current;
                current = current->left;
            }
            else
            {
                pre->right = NULL;
                printf("%d ", current->data);
                current = current->right;
            }
        }
    }
}
Now, lets talk about the fruits!
Insertion happens in O(h) time. Each of the traversals (recursive, and iterative using a stack) runs in O(N) time and O(N) space in the worst case (system stack or explicit stack).
Morris inorder runs in O(N) time (each edge is traversed at most a constant number of times, so the predecessor-finding loops only add a constant factor) and O(1) space. It is somewhat slower in practice because of that constant factor, but the fact that it uses no additional space can be a huge boost in situations where you are low on system memory!
The entire code is available on :PASTEBIN
I hope you gathered all that info well! I will post a Tree 102, in which I shall discuss the delete operation and talk more about balanced trees!
Firstly, we will need to learn a bit about CPU registers. We will concentrate on the x86 architecture. If you have ever written code in x86 Assembly, then you would have heard about them. But let me just walk you through the functions of these registers.
CPU registers are just memory that the CPU can use to store data – but they are the fastest memory the CPU can access. Ideally you would like to keep everything in registers. Unfortunately, they are damn expensive, and so a trade-off is made between price and performance. Though there are around 32 registers, the most commonly used ones (9 to be precise) for the purpose of executing instructions are –
EAX : Accumulator : used for storage and during add/sub/multiply. Mul/Div cannot be done anywhere except in EAX. Also used to store return values.
EDX : Data register : used in conjunction with EAX. It is like a sidekick.
ECX : Count register : used in looping. However, there is an interesting thing about it – it always counts downwards, never upwards.
for ex:
int a=100;
int b=0;
while(b<a)b++;
Then ECX will begin from 100, not 0, and count down 99, 98, … !
ESI : Source index : holds the location of the input stream for data operations (READING).
EDI : Destination index : points to the location where the result of a data operation is stored (WRITING).
ESP : Stack pointer
EBP: Base pointer
EBX: not designed for anything specific . can be used for extra storage.
EIP : Instruction pointer : holds the address of the current instruction being executed.
Now that we understand this, let's see how a debugger works!
Depending on the type of breakpoint, one of the following happens :
Soft Breakpoint :
Let us assume we need to execute this instruction :
mov %eax,%ebx
And say this is at location 0x44332211, and the 2-byte opcode for it is 0x8BC3. Hence what we will see is:
0x44332211 0x8BC3 mov %eax,%ebx
Now when we create a soft breakpoint at this instruction what the debugger does is – it takes the first byte (8B in this case) and replaces it with 0xCC.
So now the opcode actually looks like 0xCCC3. 0xCC is the opcode for the INT 3 interrupt, which is used to halt execution. When the CPU, happily executing everything up to this point, suddenly sees the 0xCC, it knows it has to stop (it may not really like it, but – rules are rules). It raises the INT 3 interrupt, which the debugger traps. The debugger then checks the EIP register and sees whether this instruction is actually in its list of breakpoints (just in case the program itself has an INT 3 inside it). If it is present, it replaces the first byte with the correct value (0x8B in this case) and program execution can continue.
There are two kinds of soft breakpoints. One-shot – where the breakpoint occurs only once, after which the debugger removes the instruction from its list; and persistent – where it keeps recurring. Here the debugger replaces the first byte with the correct byte, but when execution resumes, it once again replaces it with 0xCC and does not remove it from its list.
Soft breakpoints have one caveat though. When we make a soft breakpoint, it changes the program's CRC (cyclic redundancy check) checksum. A CRC is a function which tells whether data has changed or not – it is like a hash function and can be applied to memory, files, etc. The computed value is compared against a known value, and if they differ, the CRC check fails. This can be used to detect soft breakpoints: malware, for example, may kill itself if its CRC check fails. To get around this we use hardware breakpoints!
Hardware Breakpoints –
Though hardware breakpoints cannot be applied to everything, they are still very useful. Recall that I said there are around 32 registers; I introduced 9 in the previous section, so let me add another 8 to that list. These eight – DR0 to DR7 – are the debug registers. DR0 – DR3 store the addresses of the breakpoints, and thus we can have only 4 hardware breakpoints (ouch!). DR4 and DR5 are reserved. DR6 is the status register, which indicates the type of debugging event once a breakpoint is hit, and DR7 is the ON/OFF switch for hardware breakpoints; it also stores the different break conditions, like –
1. Break when an instruction is executed at a particular address.
2. Break when data is written to an address.
3. Break on reads or writes to an address but not execution.
As you can guess, you can only watch 4 addresses at a time with hardware breakpoints, but they are nonetheless very useful tools for reverse engineers. For creating hardware breakpoints, you can use the hbreak command inside gdb.
Memory breakpoints.
These aren't really breakpoints. Setting one is more like setting permissions on a section of memory, or on an entire page – something similar to file permissions. When we set a particular permission and an instruction tries to do something outside of those permissions on that memory, a break occurs. The permissions available are – Page Execute (enables execution but throws an access violation on read/write), Page Read (allows only reads), Page Write (allows only writes), and Guard Page (a one-time exception, after which the page returns to its original status). As with files, we can combine these. In gdb, to break on write use watch; rwatch will break on read and awatch will break on read/write.
I hope you now have a better understanding of how debuggers work. You may feel this is unnecessary info, but trust me – knowing how things work underneath makes you better at using them.
I would like to thank the source of all knowledge – the World Wide Web. Please feel free to send me corrections/suggestions/criticism.
Adios!
The cheapest, fastest and most reliable components of a computer system are those that aren’t there.
Gordon Bell
I am back with a new post. And this time it's about one of my favourite algorithms (yes, as a geek I am allowed to have fav algos) – Quick Sort! What's so special about it? It is amazingly simple and yet deceptively complex. Quick sort can be implemented as horribly as follows –
void quicksort(int *x, int l, int u)
{
    int i, j, t;
    if (l >= u) return;
    t = x[l];
    i = l;
    j = u + 1;
    for (;;)
    {
        do i++; while (i <= u && x[i] < t);
        do j--; while (x[j] > t);
        if (i > j) break;
        swap(x[i], x[j]);
    }
    swap(x[l], x[j]);
    quicksort(x, l, j - 1);
    quicksort(x, j + 1, u);
}
or as simple as
void quicksort(int *x, int l, int u)
{
    int i, j, t;
    if (l >= u) return;
    t = x[l];
    i = l;
    for (j = l + 1; j <= u; j++)
    {
        if (x[j] < x[l])
            swap(x[++i], x[j]);
    }
    swap(x[l], x[i]);
    quicksort(x, l, i - 1);
    quicksort(x, i + 1, u);
}
But in this post, we are not going to see its looks but rather we are going to explore its performance (real beauty).
Anyone who has attended a class on 'Algorithms and Data Structures', or had the pleasure of learning it on their own (like me), knows that quicksort runs in O(n log n) expected time. It's a known fact that for any given quicksort (standard implementation) there exists an input which will force it to run in O(n^2) time – even for the purely randomized version. (If you don't know about this, feel free to comment at the bottom and I will let the secret out.)
But what if I wanted to find the average cost myself? I know there exists a mathematical derivation using expectation which shows that it is O(n log n), but what if I wanted to find the exact number of comparisons made on average? We are going to make an attempt at that.
Before we do that, we should take a minute to observe that there are two variables by which quicksort's performance can be measured: one is the number of SWAPS made and the second is the number of COMPARISONS. We must select the variable which has the greater impact on the running time. In this post I am using comparisons over swaps. Why? Simply because the impact of the comparisons is greater than the impact of the swaps. How to prove it? Simple – write a piece of CODE! (I will post the code a little later.)
We will just add a new counter before the comparison inside the loop and when the sort exits, we will have the exact count of the comparisons made.
void quicksort(int *x, int l, int u)
{
    int i, j, t;
    if (l >= u) return;
    t = x[l];
    i = l;
    for (j = l + 1; j <= u; j++)
    {
        cmp++;
        if (x[j] < x[l])
            swap(x[++i], x[j]);
    }
    swap(x[l], x[i]);
    quicksort(x, l, i - 1);
    quicksort(x, i + 1, u);
}
A very basic optimization would be to add it outside the loop as shown.
void quicksort(int *x, int l, int u)
{
    int i, j, t;
    if (l >= u) return;
    t = x[l];
    i = l;
    cmp += u - l;
    for (j = l + 1; j <= u; j++)
    {
        if (x[j] < x[l])
            swap(x[++i], x[j]);
    }
    swap(x[l], x[i]);
    quicksort(x, l, i - 1);
    quicksort(x, i + 1, u);
}
It is still slow and I want to speed it up. Is there any way I can get rid of that for loop? Actually, YES – I can remove it entirely. I know you are throwing away your thinking hat saying "WHAT WILL YOU SORT? AND IF YOU AREN'T SORTING ANYTHING, WHAT'S THE POINT?" Well, you are right: I am not interested in sorting. I am only interested in estimating the number of comparisons made on average. To do this, I don't need to sort any array; I just need to simulate the sort, and to simulate it quickly, I will remove the for loop and everything associated with it.
Now our simulator code looks like this. I have also removed the two bounds and replaced them with the length of the range I want to partition.
int quicksort_count(int L)
{
    int m;
    if (L <= 1) return 0;
    m = 1 + rand() % L;  /* the rank the pivot ends up at, 1..L */
    return L - 1 + quicksort_count(m - 1) + quicksort_count(L - m);
}
But if we want to find the true average, we need to do this for every possible m that may be chosen. Hence we can modify our code to:
double quicksort_avg(int L)
{
    if (L <= 0) return 0;
    double sum = 0.0;
    for (int m = 1; m <= L; m++)
        sum += L - 1 + quicksort_avg(m - 1) + quicksort_avg(L - m);
    return sum / L;
}
We can improve its runtime by using dynamic programming. We could use the top-down approach, where we store the values that were previously computed in an array and look them up, or we could go bottom-up and compute the values in increasing order.
double quicksort_avg(int L)
{
    double dp[L + 5]; // 5 is just taken for safety!
    dp[0] = 0;
    for (int n = 1; n <= L; n++) {
        double sum = 0.0;
        for (int m = 1; m <= n; m++)
            sum += n - 1 + dp[m - 1] + dp[n - m];
        dp[n] = sum / n;
    }
    return dp[L];
}
I am still not happy. It is using O(N^2) time, which I obviously do not like. It may seem impossible to reduce, but in reality that is not the case. For example, if n=5, then the lookups would be:
0 and 5-1
1 and 5-2
2 and 5-3
3 and 5-4
4 and 5-5
As you can see, I am looking up the same elements twice!
So, I could remove the two lookups and instead multiply a single lookup by 2. Also, the n-1 is added every time through the loop (n times, to be accurate) and then divided by n, leaving just n-1 outside the sum. With those changes, I can convert it to O(N) time.
double quicksort_avg(int L)
{
    double dp[L + 5], sum = 0;
    dp[0] = 0;
    for (int n = 1; n <= L; n++) {
        sum += 2 * dp[n - 1];
        dp[n] = n - 1 + sum / n;
    }
    return dp[L];
}
Even now, I am not happy. (It’s impossible to make me happy, right?) We can actually improve on the O(N) space, since we are only looking at the previous state. Now, our final piece looks like:
double quicksort_avg(int L)
{
    double dp = 0, sum = 0;
    for (int n = 1; n <= L; n++) {
        sum += 2 * dp;
        dp = n - 1 + sum / n;
    }
    return dp;
}
Beautiful, isn’t it? In one for loop, using 2 variables, I can actually find out the average comparisons made by quicksort for a given length of numbers.
What I (rather, Jon Bentley) am trying to show is that sometimes – and almost always – we can add functionality by actually removing code! Though this was a pretty small (and beautiful) example, it explains the idea pretty well.
Do watch the video.
The original video: Three Beautiful QuickSorts
Here is the Wiki link for it, so that you can see the basics of the language.
In a C version of it, you will need 3 variables – one to store the previous value, one for the current value, and one temporary. One more to make sure you only print 5 numbers and not any more, making our count 4!
Lets look at the code now :
#include<stdio.h>
int main()
{
    int prev = 1, curr = 1, temp, count;
    for (count = 0; count < 5; count++) {
        printf("%d ", curr);
        temp = curr;
        curr = prev + curr;
        prev = temp;
    }
}
That was quick.
But even a simple program like this could make you think very hard in Brainfuck. It’s fun to know that you have to think hard to write simpler programs. It makes my blood rush with just the thought of it.
So here goes !
We only have 8 instructions, out of which we will use 7 (since we are not reading anything from stdin).
You have to think through absolutely every single thing before you so much as put a ‘.’ (dot) in the code editor (quite literally).
We need to print SPACE, so we need 32 (the ASCII code of SPACE) in memory somewhere!
We need to print numbers (for simplicity’s sake, one-digit numbers), so we need to make sure we have their ASCII values in memory (the ASCII code of 0 is 48, 1 is 49, and so on).
If you look at the C code, we will need to keep count of the number of items we have remaining to print. Then we need two locations to store the previous and current values of the sequence. A very important thing to remember is that all memory is initialized to 0 at the start, and we are relying on this for it to work. Also, the looping statements compare the memory location currently being pointed at against 0: if it is 0, the loop breaks; otherwise it loops.
Writing a simple pseudocode (in terms of only Brainf*k) :
Initialize a location A to 32
Initialize a location B to 48
Initialize a location C to 5
Initialize locations D and E to 1
Set the current location to C
Start a loop
    Add B to E and F
    Print E
    Print A (the space)
    Subtract B from E
    Add D to E and F
    Copy E to D
    Copy F to E
    Reduce C by 1 and set it as the current location
End loop
Basically, it was very easy to write this pseudocode. The issue with Brainf*k is that it does not have a COPY instruction for us to use, so we need a looping statement to copy data. But the problem is that to ensure the loop executes the correct number of times, we have to destroy the original value. To get around this, we always update two locations (one where we need the value copied, and the other a temporary!) and then we copy back from the temporary, which also destroys it!
Also, A, B, C, D, E and F are all contiguous in memory, so I am using a[0], a[1] … a[6] to refer to them in the code!
Here is the final version.
++++++++[>++++>++++++<<]>>>++++++>+>+<<[<[>>>+>+<<<<]>>>.<<<<.>>>>>[<<<<+>>>>]<[>+>+<<]<[>>>+<<<]>>[<<+>>]>[<<+>>]<<<<]
To make sense of it, I have divided it into parts. Some parts have not been commented and are left as an exercise for you to figure out (I know, I am mean).
++++++++[>++++>++++++<<] //32 48
>>>++++++ //6
>+>+<< //Set first two values to 1 and reduce the number of terms to be printed to 5.
[
<[>>>+>+<<<<]
>>>. //print the number
<<<<. //print space
>>>>>[<<<<+>>>>]
<[>+>+<<]
<[>>>+<<<]
>>[<<+>>] //copy from a[6] to a[4]
>[<<+>>] //copy from a[7] to a[5]
<<<< //decrement the value
]
All right. Pretty cool, huh! Well, as a real mind-boggler you can try making it print N numbers (don’t worry about printing it right, just print out the actual value encoded as ASCII).
You could even use it to encrypt messages in a really weird way. Oh, almost forgot – to run your Brainf*k programs, use the following website. They have an online interpreter and a debugger of sorts too: www.brainfk.tk
I hope you liked your Christmas Present ;). Merry Christmas and a Happy New Year… Ho oh ho
If you have a null reference – every bachelor in the world would seem married, polyandrously!
Edsger Dijkstra
What is a null reference? A null reference is a pointer that points to NULL (in C/C++). For example,
#include<stdio.h>
#include<stdlib.h>
int main()
{
    char *cp = NULL;
    cp = (char *)malloc(sizeof(char) * 12);
    cp = "HELLO THERE";
    printf("%s\n", cp);
}
Here, at the line char *cp = NULL; you have a null reference. Now you would wonder why that is a bad thing. In the context of this program it isn’t all that bad, since it executes smoothly. But if you begin working with complex source code, then you will have to be careful that you do not use/pass a NULL REFERENCE in/to a function, as this can lead to potential issues – sometimes almost impossible to detect!
A HISTORY LESSON :
Hoare’s first job was in the ’60s, as a programmer for a British manufacturer, Elliott. After about 9 months, he was asked to design a new programming language (imagine this as your assignment at your first job!). While working on that, he found a book – Report on the International Algorithmic Language ALGOL 60 (around 23 pages). He mostly modelled the new language on ALGOL 60, at first leaving out complicated parts like if statements, but later added them in.
In those days, code was written in machine language, and it was easier to debug in machine language (I never thought I would ever hear that!) than in high level languages, because you knew the exact values in the 4096 bytes of memory. The principle involved was that “one should be able to tell the error by looking at the high level code alone”. This was a real problem, and people were a little nervous about high level languages. But as complexity went up, they were eventually accepted.
Now, let’s examine another piece of commonly used code –
#include<stdio.h>
#define N 100
int main()
{
    int arr[N];
    int s = 0;
    int k;
    /*** SOME CODE TO DO SOME OPERATIONS ON ARRAYS ***/
    /*** SOME CODE THAT MODIFIES k ***/
    while (s < N) {
        printf("%d\n", arr[s]);
        s++;
    }
    if (k >= 0 && k < N) printf("Kth value is %d\n", arr[k]);
}
If we look at the check near the end of the code – if(k>=0 && k<N) – we are making sure that the subscript is within bounds. Thus we see that two checks (upper bound and lower bound) are required to make sure this condition is satisfied. But this added to the running time (in those days the processors were really slow, and things like ifs could take up a large amount of time!) and also to the size of the code.
A question may come up – is this a good thing to do? The answer is YES. Languages like Java already implement it!
Hoare later went on to become part of the team designing the successor of ALGOL 60. He suggested that an object/record could be referenced by a pointer. But unchecked pointers can cause absolute havoc, because if you used a float as a pointer you could end up overwriting your own code! Thus, he took it for granted that the programmer must declare what type of data the pointer points to, so that it could be checked at compile time. He asked the customers whether they would like the type checking removed once the code was tested. They said no – the reason being that there are more chances of errors and issues popping up in a live environment than in a test environment.
At that point, many people also used FORTRAN instead of ALGOL, and Elliott wanted to sell to them as well. So they wrote a program which converted FORTRAN to ALGOL. It was a disaster – not because it compromised on speed, but because after converting, it would come up with an entire essay of subscript errors.
Hoare invented the NULL REFERENCE, but his friend Edsger Dijkstra thought that it was a bad idea – if you have a null reference, every bachelor in the world would seem married, polyandrously – since every bachelor has the same wife, NULL, but is still a bachelor!
Hoare came up with a solution based on discrimination of objects belonging to a disjoint union class – a union between two sets with nothing in common! For example, if we have a class Vehicle and subclasses Car and Bus, both have an attribute Capacity, but for a bus it refers to the passenger capacity whereas for a car it refers to the boot capacity. Thus it would discriminate based on bus or car and give you the corresponding capacity (by making a discriminating class). Thus a pointer could point to a vehicle or to NULL.
Also, having NULL made initialization easy. If you didn’t do it this way, you could end up doing it in a really complex manner which could require a sublanguage of its own. For example, it is easy to design a data structure without NULL, but almost impossible to design a circular structure without it! Hoare did not want to deal with that, and hence NULL became a possible value for every pointer.
Thus, in a way, it also influenced the way C was written. One of the greatest issues was the gets() function in C, which allowed buffer overflows. If it had not been for that, the world would have been free of malware!
If you asked me whether it was a good choice or a bad choice to have the NULL reference, I would say it was a good choice. Consider the amount of time I have saved thanks to the NULL pointer, and the data structures also owe some of their stability to it. However, one must be very careful around it, because it is very difficult to see that a pointer points to NULL, and it can be the cause of many a sleepless night!
Oh yes, I’ve had nightmares about binary search. The worst kind. Now, you would tell me – what kind of geek are you?! I’ll answer – an honest geek. Many people feel that binary search is one of the easiest algorithms, but in fact it is much more subtle than some of the other algorithms.
To give you a brief background, I can write everything from a quicksort to Dijkstra’s SSSP in over 4 programming languages without so much as blinking, but when it comes to binary search my heart begins to pound and my head begins to twirl. Why? Because every line of that algorithm is just so simple that we (or maybe just me!) don’t pay close attention to it. You may not agree, and I won’t blame you. Most programmers have used binary search to do just that – search. Not that I use it for something else. But the way you can apply binary search to an array of problems is amazing. And it’s only when you begin to apply it to other areas that you realize you never really understood binary search!
Let me first show you the actual code :
int binary_search(int *arr, int sizeOfArray, int search)
{
    int lo = 0;
    int hi = sizeOfArray - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (arr[mid] == search)
            return mid;      // ELEMENT FOUND
        if (arr[mid] < search)
            lo = mid + 1;
        else
            hi = mid - 1;
    }
    return -1;               // ELEMENT NOT FOUND
}
The algorithm is only ~10 lines, but each line is very important. Let’s start with the loop condition, lo<=hi. It is important to realize that if the element exists in the array, this condition can be anything that ensures continuous looping, because on finding the element we will break. But if the item is not present in the array, then this condition is very important.
1. We are very easily tempted to write the condition lo<hi instead of lo<=hi, and I will show you why we shouldn’t give in to that temptation.
Consider the case: 1, 3, 4, 5, 10, 12, 14
and we are looking for 10. The series of updates to lo,hi and mid are as follows.
lo=0,hi=6,mid=3;
lo=4,hi=6,mid=5;
lo=4,hi=4,mid=4;
Now for the last loop, if we had not used the ‘=’ in the condition, it would have broken out of the loop without even finding the element.
2. Okay. Does the next line look like magic to you? It is nothing but a better way of writing (lo+hi)/2. Now, why do I need that? To save me from overflow. You see, (lo+hi) could easily overflow the integer range and result in some nonsensical data. Of course, it won’t happen in most cases, as you cannot actually have an array the size of a 32-bit number, but it is still better to avoid it!
3. In most cases, you would write lo=mid – after all, that is what the leading text suggests – but then why have I used mid+1?
Consider the case : 0 2 5
and search for 3.
lo=0,hi=2,mid=1;
lo=1,hi=2,mid=1;
lo=1,hi=2,mid=1;
and it goes on forever !
For the same reason we use hi=mid-1.
Now to prove why this is NOT wrong: if the condition arr[mid]<search is satisfied, the element we are looking for (if present at all) must lie at index mid+1 or beyond. We have already checked the midth element, so we can safely move the lower bound to the element just after it. Thus it does not hamper the algorithm in any way!
A very common problem employing binary search is: given a monotonically increasing/decreasing function F(x), find a value of x such that F(x) = VALUE, where the domain of x is the real numbers. To do this, we need to remember that the reals are dense, so it is generally not possible to get an F(x) which is EXACTLY equal to VALUE. Instead, we define a new function SATISFY(x, VALUE) which returns true if, for a given x, F(x) is within a satisfactory range of VALUE. Also, for this case, the condition in the while loop is just the precision required on the return value.
Thus,
double EPS = precision_required;
double lo = lower_bound, hi = upper_bound;
while (fabs(hi - lo) > EPS) {
    double mid = lo + (hi - lo) / 2.0;
    if (SATISFY(mid, VALUE) == true)
        hi = mid;
    else
        lo = mid;
}
// lo is the answer !!
Lastly, here is one usage of binary search (give it a shot):
Given a sorted array of N numbers and a search term S, find the largest index i such that the number at index i is the largest number smaller than S.
Eg: case:
1 2 2 2 3 4 5 7 7 8 8 9
and S = 8. Then the answer is 8, as the 8th number (0-indexed) = 7 is the largest of all numbers smaller than S, and 8 is the largest index which contains 7!
I would like to thank lovro for the tutorial on binary search; it has helped me out a lot.