Given two sorted positive integer arrays A[n] and B[n] (w.l.o.g., say both are sorted in decreasing order), we define the set S = {(a, b) | a ∈ A and b ∈ B}. Obviously there are n^2 elements in S. The value of such a pair is defined as Val(a, b) = a + b. Now we want to get the n pairs from S with the largest values. The tricky part is that we need an O(n) algorithm.
Logic:
Take the two arrays and a max-heap...
Push the first pair sum, A[0] + B[0], into the heap...
Now remove the maximum from the heap and output it...
If the removed element was A[i] + B[j], then push A[i+1] + B[j] and A[i] + B[j+1]...
Do this until you have output n elements...
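A minimal C++ sketch of this logic (the function name largestPairSums and the "seen" set, added so the same (i, j) pair is not pushed twice, are illustrative additions, not part of the original answer):

#include <cstdio>
#include <queue>
#include <set>
#include <tuple>
#include <utility>
#include <vector>

// Print the n largest pair sums of two decreasingly sorted arrays A and B
// by expanding candidates from a max-heap, as described above.
void largestPairSums(const std::vector<int>& A, const std::vector<int>& B, int n)
{
    std::priority_queue<std::tuple<int, int, int>> heap;   // entries are (sum, i, j)
    std::set<std::pair<int, int>> seen;                     // index pairs already pushed

    heap.push(std::make_tuple(A[0] + B[0], 0, 0));
    seen.insert(std::make_pair(0, 0));

    for (int k = 0; k < n && !heap.empty(); ++k) {
        int sum, i, j;
        std::tie(sum, i, j) = heap.top();
        heap.pop();
        std::printf("A[%d] + B[%d] = %d\n", i, j, sum);

        if (i + 1 < (int)A.size() && seen.insert(std::make_pair(i + 1, j)).second)
            heap.push(std::make_tuple(A[i + 1] + B[j], i + 1, j));
        if (j + 1 < (int)B.size() && seen.insert(std::make_pair(i, j + 1)).second)
            heap.push(std::make_tuple(A[i] + B[j + 1], i, j + 1));
    }
}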
Answer Contributed by f2001840
Wednesday, June 28, 2006
Array[0]
There is an array of size n storing numbers between 0 and n-3, and 2 of the numbers are repeated. Find both of them.
Example:
I/P: 0 1 2 1 2 3
O/P: 1 2
This solution works in general for detecting any repeated values in the array.
Solution:
for (i = 0; i < n; i++) {
    /* move arr[i] toward its home slot; stop once that slot holds the value */
    while (arr[i] != i && arr[arr[i]] != arr[i]) {
        int t = arr[i];
        arr[i] = arr[t];
        arr[t] = t;
    }
}
for (i = 0; i < n; i++)
    if (arr[i] != i)   /* an extra copy that could not reach its home slot */
        printf("found %d\n", arr[i]);
Question And Answer Contributed by f2001389
Xor linked list
How do you maintain a doubly-linked list using only a single link field per node?
Xor linked lists are a curious use of the bitwise exclusive disjunction (XOR) operation to decrease storage requirements for doubly-linked lists. An ordinary doubly-linked list stores addresses of the previous and next list items in each list node, requiring two address fields:
... A-1 A B C C+1 ...
<- prev <- prev <- prev <-
-> next -> next -> next ->
An Xor linked list compresses the same information into one address field by storing the bitwise XOR of the address for previous and the address for next in one field:
... A-1 A B C C+1 ...
-> A-1 XOR B <-> A XOR C <-> B XOR C+1 <-
When you traverse the list from left to right: supposing you are at B, you can take the address of the previous item, A, and XOR it with the value in the XOR field. You will then have the address for C and you can continue traversing the list. The same pattern applies in the other direction.
To start traversing the list in either direction from some point, you need the address of two consecutive items, not just one. If the addresses of the two consecutive items are reversed, you will end up traversing the list in the opposite direction.
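A minimal C++ sketch of such a node and the forward traversal described above (the struct and function names are illustrative):

#include <cstdint>
#include <cstdio>

// Each node stores a single link field: the XOR of the addresses of its
// previous and next nodes (0 stands in for a null pointer at either end).
struct Node {
    int value;
    std::uintptr_t link;
};

// Traverse forward starting at 'curr', given the node just before it
// (pass nullptr for 'prev' when starting from the head).
void traverse(Node* prev, Node* curr)
{
    while (curr != nullptr) {
        std::printf("%d\n", curr->value);
        Node* next = reinterpret_cast<Node*>(
            curr->link ^ reinterpret_cast<std::uintptr_t>(prev));
        prev = curr;
        curr = next;
    }
}

Calling traverse(nullptr, tail) walks the same list in the opposite direction, which matches the note above about only needing the addresses of two consecutive items to pick a direction.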
This particular trick is generally discouraged for several reasons:
general-purpose debugging tools cannot follow the XOR chain, making debugging more difficult;
the price for the decrease in memory usage is an increase in code complexity, making maintenance more expensive; and
conservative garbage collection schemes do not work with data structures that do not contain literal pointers.
Also, modern computer systems usually have cheap and plentiful memory, so storage overhead is not normally an issue outside specialised embedded systems. Where it is still desirable to reduce the overhead of a linked list, unrolling provides a more practical approach (as well as other advantages, such as increasing cache performance and speeding random accesses).
Tuesday, June 27, 2006
Infinite Bit Stream Problems
In such problems we cannot store the entire input. For questions that require computation over an (effectively infinite) bit stream, we use an FSM (finite state machine). Let us discuss some such problems.
1. Find the remainder when the given bit stream, read as a binary number, is divided by 2.
Input : 010
Output: 0
Input : 0101
Output : 1
Solution:
Design an FSM whose states are the possible remainders. For division by 2 there are only two possible remainders, 0 and 1.
Start in state 0 and stay there as long as you read 0s; on reading a 1, move to state 1. From state 1, move back to state 0 on reading a 0 and stay in state 1 on reading a 1. The state you end in gives the remainder.
Similarly, we can apply the same logic to find the remainder when the number is divided by 3 (the remainder can be 0, 1, or 2, so the FSM has three states).
In General:
If you are finding the remainder of a bit stream when divided by N and you are in state A, then
CASE 1: Input is 0
nextState = (A * 2 + 0) % N
CASE 2: Input is 1
nextState = (A * 2 + 1) % N
When you reach the end of the input, your state is the remainder.
Code
/*************************************************************
arr contains the input. arr will contain 0/1....
**************************************************************/
unsigned int findRemainder(int arr[], int len, int divisor)
{
    int i = 0;
    int currState = 0;
    while (i < len)
    {
        currState = (currState * 2 + arr[i]) % divisor;
        i++;
    }
    return currState;
}
/************************************************************************
-numberSystem if 2 then input string contains 0/1...
-numberSystem if 10 then input string contains 0/1/2/3/4/5/6/7/8/9
-arr contains the input. arr will contain [0,9]
*************************************************************************/
unsigned int findRemainder(int arr[], unsigned int numberSystem, int len, int divisor)
{
    int i = 0;
    int currState = 0;
    while (i < len)
    {
        currState = (currState * numberSystem + arr[i]) % divisor;
        i++;
    }
    return currState;
}
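For example, a small driver (assuming the two snippets above are compiled together as C++, so that both overloads of findRemainder can coexist) would behave like this:

#include <cstdio>

int main()
{
    int bits1[]  = {0, 1, 0};        // the stream 010, i.e. decimal 2
    int bits2[]  = {0, 1, 0, 1};     // the stream 0101, i.e. decimal 5
    int digits[] = {1, 2, 3, 4};     // the decimal digit stream 1234

    std::printf("%u\n", findRemainder(bits1, 3, 2));         // prints 0
    std::printf("%u\n", findRemainder(bits2, 4, 2));         // prints 1
    std::printf("%u\n", findRemainder(digits, 10, 4, 7));    // 1234 % 7 = 2
    return 0;
}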
Bubble Sort/Insertion Sort
Methods of Sorting
1. Bubble Sort
Compare adjacent elements and swap them if they are out of order, so that a[i+1] = max(a[i], a[i+1]). Each pass therefore moves the largest remaining element to the end of the array.
Sample Code:
for (int i = 0; i < data.Length; i++)
    for (int j = 0; j < data.Length - 1; j++)
        if (data[j] > data[j + 1])
        {
            int tmp = data[j];
            data[j] = data[j + 1];
            data[j + 1] = tmp;
        }
2. Insertion Sort
In insertion sort we insert elements one at a time into the already sorted prefix of the array. Each insertion shifts the larger elements one position to the right to make room for the new element.
for (int i = 0; i < data.Length; i++) {
    int j = i;
    while (j > 0 && data[i] < data[j - 1])
        j--;
    int tmp = data[i];
    for (int k = i; k > j; k--)
        data[k] = data[k - 1];
    data[j] = tmp;
}
About Sorting:
http://www.topcoder.com/tc?module=Static&d1=tutorials&d2=sorting
Dynamic Programming
Dynamic Programming:
DP is an algorithmic technique which is usually based on a recurrence and one (or more) starting states. A sub-solution of the problem is constructed from previously found ones. DP solutions typically have polynomial complexity, which gives a much faster running time than techniques like backtracking or brute force.
Refer to these links for more info:
http://www.topcoder.com/tc?module=Static&d1=tutorials&d2=dynProg
http://en.wikipedia.org/wiki/Dynamic_programming
Example Problem:
Given a list of N coin values (V1, V2, ..., VN) and a total sum S, find the minimum number of coins whose sum is S (we can use as many coins of one type as we want), or report that it is not possible to select coins that sum to S.
For a better understanding let's take this example:
Given coins with values 1, 3, and 5.
And the sum S is set to be 11.
First of all we mark that for state 0 (sum 0) we have found a solution with a minimum of 0 coins. We then go to sum 1. First, we mark that we haven't yet found a solution for this one (a value of infinity would be fine). Then we see that only coin 1 is less than or equal to the current sum. Analyzing it, we see that for sum 1 - V1 = 0 we have a solution with 0 coins. Because we add one coin to this solution, we have a solution with 1 coin for sum 1. It is the only solution found so far for this sum, so we save it.

Then we proceed to the next state, sum 2. Again the only coin which is less than or equal to this sum is the first coin, with value 1. The optimal solution found for sum (2 - 1) = 1 uses 1 coin. That solution plus the first coin makes a sum of 2 with the help of only 2 coins. This is the best and only solution for sum 2.

Now we proceed to sum 3. We now have 2 coins to analyze, the first and the second, with values 1 and 3. Consider the first one. There exists a solution for sum 2 (3 - 1), and therefore we can construct a solution for sum 3 by adding the first coin to it. Because the best solution for sum 2 has 2 coins, this new solution for sum 3 has 3 coins. Now take the second coin, with value 3. The sum to which this coin needs to be added to make 3 is 0. We know that sum 0 is made up of 0 coins, so we can make a sum of 3 with only one coin, 3. This is better than the previously found solution for sum 3, which used 3 coins, so we update it and mark it as having only 1 coin. We do the same for sum 4 and get a solution of 2 coins, 1 + 3. And so on.
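A short C++ sketch of this DP (the function and variable names are illustrative):

#include <cstdio>
#include <limits>
#include <vector>

// Minimum number of coins summing to S, or -1 if S cannot be formed.
int minCoins(const std::vector<int>& values, int S)
{
    const int INF = std::numeric_limits<int>::max();
    std::vector<int> best(S + 1, INF);    // best[s] = fewest coins that sum to s
    best[0] = 0;                          // sum 0 needs 0 coins

    for (int s = 1; s <= S; ++s)
        for (int v : values)
            if (v <= s && best[s - v] != INF && best[s - v] + 1 < best[s])
                best[s] = best[s - v] + 1;

    return best[S] == INF ? -1 : best[S];
}

int main()
{
    std::vector<int> coins = {1, 3, 5};
    std::printf("%d\n", minCoins(coins, 11));   // prints 3 (for example, 5 + 5 + 1)
}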