Indian National Olympiad in Informatics

Online Programming Contest, 4-5 September 2004


Advanced Division

Solution to Problem 1: The Siruseri Sports Stadium

Let us examine this problem from the Manager's point of view. When the
Manager wakes up in the morning, he faces one of the following three
possibilities:

1) There are no requests to use the stadium today.

2) There is an ongoing event that will continue to use the
   stadium today.

3) There are no ongoing events and there are one or more requests
   to use the stadium today.

Cases 1 and 2 are easy. He has no decision to make. In the case
of 1 the stadium has a rest day. In the case of 2, the ongoing
event continues to use the stadium.

Now we consider 3: the naive idea is to choose, from among the
set of requests, the one that requires the stadium for the least
number of days.

But that does not work, and here is a counterexample.  Suppose the
requests are

            Event    Start   Duration
             1         1        10
             2         1         8
             3         2         1
             4         3         1
             5         4         3

The naive algorithm would schedule event 2 on days 1 through 8,
and thus only one event would be scheduled. On the other hand, if
he decided to let the stadium lie idle on the first day, he could
follow that by scheduling events 3, 4 and 5.

So what do we learn from this?  It is not enough to consider only
the events that need to use the stadium today (otherwise we would
have to schedule either event 1 or event 2 on day 1, and neither
choice is optimal!).

Yet, this naive solution has half the idea required to solve this
problem.  Why did we decide to choose the shortest event?
Because it would END FIRST and therefore knock out fewer events
than the other candidates.

We just have to apply this idea to all the events that begin
today or later (rather than only to the events that begin
today).  The manager looks at all the events that
begin today or later and determines the ending day for each of
them. Then he picks the one that ends earliest. If that event
begins today, he schedules it, otherwise he lets the stadium idle
today. That is all.

Why does this work? This follows from the following properties:

(In what follows, by a "valid schedule" we mean a sequence of
events E_1, E_2, ... E_k such that the starting date of E_(i+1) is
later than the ending date of E_i.  By an "optimal schedule" we mean
a valid schedule that has the maximum number of events. Our task is
to compute the length of any optimal schedule.)

1) There is always an optimal schedule in which the first event is
   the one with the earliest ending date.

Proof: Pick any optimal schedule. If the first event scheduled is
   not the one with the earliest ending date, replace it with the
   one with the earliest ending date. Since the replacement ends
   no later than the event it replaced, the rest of the schedule
   remains valid, so the result is a valid schedule of the same
   length.

2) Among all valid schedules beginning with events E_1, E_2,
   ... E_k, an optimal one can be obtained as follows: pick any
   optimal schedule among all the events that begin after E_k
   ends and concatenate it to E_1, E_2, ... E_k.

Proof: Clearly, the events beyond E_k in any valid schedule that
   begins with E_1, E_2, ..., E_k must all begin after E_k
   ends. Conversely, any valid schedule among events that begin
   after E_k ends can be concatenated to E_1, E_2, ... E_k to get
   a valid schedule.


This suggests the following algorithm:

We shall use two variables Count and Blocked. Count keeps track
of the number of events scheduled so far and Blocked keeps track
of the date up to which the stadium will be used by the last of
the events scheduled already.

-----------------------------------------------------------------
Algorithm 1:
----------------------------------------------------------------

You are given (S_1,D_1), (S_2,D_2) ... (S_N,D_N).

1) Compute the ending date E_i = S_i + D_i - 1 for each i.  (An
   event that starts on day S_i and lasts D_i days uses the
   stadium for the last time on day S_i + D_i - 1.)

2) Initialise Count to 0 and Blocked to 0. 

3) Repeat until no event can be scheduled:
    scan the list of events from left to right and find, from
    among all events j with S_j > Blocked, the one with the
    earliest ending date. If no such event exists, stop.
    Otherwise, let its ending date be e; increment Count and set
    Blocked = e.

4) Print Count

------------------------------------------------------------------

The above algorithm carries out roughly N + N*N operations. Statement
1 takes N operations and statement 3 takes N*N operations, as
each iteration of the loop examines each event once and there
are at most N iterations.

Since N is much smaller than N*N, we may take this as roughly N*N
operations.
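
For concreteness, here is a C sketch of Algorithm 1 (our
illustration, not part of the original solution).  The input
format is an assumption: N on the first line, followed by N pairs
S_i D_i; MAXN is an assumed upper bound on N.

  #include <stdio.h>

  #define MAXN 100000

  int S[MAXN], D[MAXN], E[MAXN];

  int main(){
    int N, i, count = 0, blocked = 0;

    scanf("%d", &N);
    for (i = 0; i < N; i++){
      scanf("%d%d", &S[i], &D[i]);
      E[i] = S[i] + D[i] - 1;   /* last day event i uses the stadium */
    }

    while (1){
      int best = -1;
      /* Among events starting after blocked, find the one ending earliest */
      for (i = 0; i < N; i++)
        if (S[i] > blocked && (best == -1 || E[i] < E[best]))
          best = i;
      if (best == -1) break;    /* no schedulable event remains */
      count++;                  /* schedule it ... */
      blocked = E[best];        /* ... and block the stadium till it ends */
    }

    printf("%d\n", count);
    return 0;
  }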

Suppose the events were given to you in a special order, that is,
sorted in ascending order of ending time (i.e., if i ≤ j then
E_i ≤ E_j). Then we can do significantly better.

Start by adding the first event to the schedule.  Then scan
right till you find an event whose starting time is later than
the ending time of the first event, and add that to the schedule.
Scan right from there on till you find a third event whose
starting time is later than the ending time of the second event
in the schedule, and so on.

In general, if e was the last event added to the schedule, you
can find the next event to schedule by scanning right from the
current event and picking the first one whose starting time is
later than the ending time of e.

This algorithm is described below:

----------------------------------------------------------------
Algorithm 2:
----------------------------------------------------------------

You are given (S_1,D_1), (S_2,D_2) ... (S_N,D_N).

1) Compute the ending date E_i = S_i + D_i - 1 for each i, as before.

2) Sort the events according to their ending time. Let the sorted
   sequence be (s_1,d_1,e_1), (s_2,d_2,e_2) ... (s_N,d_N,e_N)
 
3) Initialise Count to 1 and Blocked to e_1.

   Scan right; whenever you encounter an (s_i,d_i,e_i) with s_i
   > Blocked, increment Count by 1 and set Blocked to e_i
   (indicating that we pick the ith event in the sorted array
   and schedule it as the next event).

4) The value of Count at the end of the scan is the desired answer.

------------------------------------------------------------------

The number of operations required by this algorithm is roughly N
+ Sort(N) + N operations. Statements 1 and 3 contribute N
operations each, and statement 2 contributes as many operations
as are required to sort an N element array. The direct sorting
algorithms take roughly N*N operations, and if they are used the
algorithm is no better than Algorithm 1. However, there are
sorting algorithms that take roughly N * log(N) operations (where
log(N) is the logarithm of N in base 2; in other words, log(N) is
the least k such that 2^k > N).  Since log(N) is much smaller
than N, when such sorting algorithms are used, Algorithm 2 does
significantly better than Algorithm 1.
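
Here is a C sketch of Algorithm 2 (again our illustration; the
input format and the bound MAXN are assumptions).  It uses the C
library's qsort() for step 2.  Note that it starts with Count = 0
and Blocked = 0 and lets the scan pick up the first event, which
is equivalent to step 3 as long as all starting days are at
least 1.

  #include <stdio.h>
  #include <stdlib.h>

  #define MAXN 100000

  /* One event: starting day and (pre-computed) last day of use */
  struct event { int start, end; };

  struct event ev[MAXN];

  /* Comparator for qsort: ascending order of ending day */
  int by_end(const void *p, const void *q){
    const struct event *a = p, *b = q;
    return a->end - b->end;
  }

  int main(){
    int N, i, d, count = 0, blocked = 0;

    scanf("%d", &N);
    for (i = 0; i < N; i++){
      scanf("%d%d", &ev[i].start, &d);
      ev[i].end = ev[i].start + d - 1;
    }

    qsort(ev, N, sizeof(struct event), by_end);  /* the N log N sort */

    for (i = 0; i < N; i++){     /* the single left-to-right scan */
      if (ev[i].start > blocked){
        count++;
        blocked = ev[i].end;
      }
    }

    printf("%d\n", count);
    return 0;
  }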

Both Algorithm 1 and Algorithm 2 with simple sorting algorithms
would get at least 50% of the marks in this problem (perhaps more
like 70%). However, to get 100% you need to use a fast sorting
algorithm in combination with Algorithm 2.

We follow this with a description (drawn from the lecture notes
of the INOI Training Camp held in 2004) of some sorting
algorithms. In particular, we recommend that you understand and
learn to program the "Quicksort" algorithm.

     
======================================================================
Sorting

We want to sort an array with N elements.

Naive algorithms take N^2 operations to sort the array.  Here is
one such algorithm:

for (i = 1; i <= N; i++)
        for (j = i+1; j <= N; j++)
                if (A[i] > A[j])
                        swap(A[i], A[j]);   /* exchange A[i] and A[j] */


Better sorting algorithms work in time proportional to N log N.

Is this improvement worth it?

Consider the problem of searching for a value in an array of size
10000.  We can either sort the array and then use binary search
(that takes time log N) or do an exhaustive search.

Exhaustive search takes time N, in general, since we may have to
scan the entire array.  For our example, we can thus search for a
single value in 10000 steps.

Binary search takes time log N (all logs are to the base 2).
Recall that 2^10 = 1024, so log 1000 is approx 10.  Since 10000 =
1000 * 10, log 10000 = log 1000 + log 10 = 10 + 4 = 14.
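
For reference, here is what binary search looks like in C (a
sketch we have added; it is not part of the original notes).  The
array must already be sorted in ascending order.

  /* Return an index i with A[i] == x, or -1 if x is absent */
  int binsearch(int *A, int n, int x){
    int lo = 0, hi = n - 1;
    while (lo <= hi){
      int mid = lo + (hi - lo)/2;
      if (A[mid] == x)
        return mid;
      else if (A[mid] < x)
        lo = mid + 1;            /* x can only be in the upper half */
      else
        hi = mid - 1;            /* x can only be in the lower half */
    }
    return -1;
  }

Each iteration halves the interval under consideration, which is
where the log N bound comes from.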

If we are searching for a single value, we can use exhaustive
search and do the job in 10000 steps.  If we
sort the array using an N^2 sorting algorithm and then use binary
search, we spend 10000*10000 = 10^8 steps sorting it, followed by
14 steps to search.

If we repeat this search 10000 times, we would spend 10000*10000
operations doing exhaustive search and 10000*10000 + 14*10000 =
10014*10000 operations doing sorting plus binary search.  For any
number of searches above roughly 10014, sorting starts to pay
off: if we check when 10000*M = 10000*10000 + 14*M, we get M a
little above 10014.

On the other hand, suppose we use an N log N sorting algorithm.
Then, the sort phase takes time 14*10000.  So, to see when
sorting+binary search beats exhaustive search, we need to check
when 10000*M = 14*10000 + 14*M, which is approximately when M =
14.  Thus, if we do even 15 searches, it helps to sort.

What if the array grows from 10000 to 100000?  Since log 100000
is about 17, the equations we have are

  N^2 sorting:     100000*M = 100000*100000 + 17*M
  N log N sorting: 100000*M = 100000*17 + 17*M

Thus, we need about 100017 exhaustive searches to justify an N^2
sorting algorithm, but only about 17 searches to justify an
N log N sorting algorithm.

More importantly, if we use an N log N sorting algorithm, even a
number of searches much smaller than the size of the array is
enough for sorting to significantly speed up the search process.

The reason why bubblesort and its relatives are inefficient is
that they duplicate comparisons.  One way to avoid this is to
break up the array into non-overlapping segments, sort these
separately and then combine the sorted portions efficiently.

For instance, suppose we have a box with slips of paper with
numbers between 100 and 300 to be sorted.  We could first
separate the slips into two bunches: those with values below 200
and those with values above 200.  We can sort these bunches
separately.  Since all the values in the second bunch dominate
those in the first bunch, we can combine them easily into a
single sorted bunch.

----------------------------------------------------------------------
Merge sort

In general, there may not be a simple way to determine how to
split the array into two halves based on the values.  Instead, we
can just split the array arbitrarily at the midpoint.  We sort
each half separately.  We then have to combine these two halves,
but we are not guaranteed that one half is uniformly below the
second half.

Let N = 2M be the size of the array.  Suppose, after sorting the
two parts, we have the array in the form

       i_1 i_2 ... i_M j_1 j_2 ... j_M

where i_1 i_2 ... i_M and j_1 j_2 ... j_M are each sorted half
arrays.

We can then proceed as follows.  Examine i_1 and j_1 and extract
the minimum value into the first position of a new array.
Suppose i_1 is selected.  In the next step, compare i_2 and j_1
and extract the minimum value.  Proceed in this fashion till we
exhaust both halves.
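
In C, the whole procedure might be sketched as follows (our
illustration; tmp is scratch space, supplied by the caller, with
room for at least r-l elements).  The initial call for an array
of size n is mergesort(A, 0, n, tmp).

  /* Sort A[l..r-1], using tmp as scratch space for merging */
  void mergesort(int *A, int l, int r, int *tmp){
    int m, i, j, k;

    if (r - l <= 1) return;      /* 0 or 1 elements: already sorted */

    m = (l + r)/2;               /* split arbitrarily at the midpoint */
    mergesort(A, l, m, tmp);
    mergesort(A, m, r, tmp);

    /* Merge the sorted halves A[l..m-1] and A[m..r-1] into tmp */
    i = l; j = m; k = 0;
    while (i < m && j < r)
      tmp[k++] = (A[i] <= A[j]) ? A[i++] : A[j++];
    while (i < m) tmp[k++] = A[i++];
    while (j < r) tmp[k++] = A[j++];

    /* Copy the merged result back into A */
    for (i = 0; i < k; i++)
      A[l + i] = tmp[i];
  }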

In general, this merge operation takes time N.  Let T(N) denote
the time taken to sort an array of N elements.  This approach
requires us to first sort two arrays of N/2 elements and then
combine them using N steps.  So

   T(N) = 2*T(N/2) + N

We recursively solve the two halves using the same procedure.
Thus, the same equation applies to T(N/2), so we can unravel this
equation as

     T(N) = 2*(2*T(N/4) + N/2) +  N = 4*T(N/4) + 2N
          = 4*(2*T(N/8) + N/4) + 2N = 8*T(N/8) + 3N = ...

After expanding this equation K times, we get

     T(N) = 2^K * T(N/2^K) + KN

Eventually, when K = log_2 N, we get

     T(N) = 2^(log_2 N) * T(1) + (log_2 N)*N

T(1), the time to sort a 1 element array, is 1 so we can simplify the
equation above to get

     T(N) = N + N*(log_2 N)

We ignore the smaller term and get T(N) to be proportional to
N*(log_2 N), which is considerably better than N^2.  (In computer
science, 2 is the default base for logarithms, so we shall
henceforth write log N to mean log_2 N).

The problem with this algorithm is that we need an extra array of
size N to merge the two halves.  Can we avoid this?

----------------------------------------------------------------------

Quicksort

Quicksort provides a method to do this that works well on the
average.

1. Call A[0], the first element of the given array A, the PIVOT
   element or the SPLITTER.

2. We regroup A[1] to A[N-1] into two parts.  The first part
   consists of elements less than the splitter A[0] while the
   second part consists of elements greater than A[0].  (If there
   are duplicate entries in the array, we may have elements equal
   to A[0].  We can place these, for instance, in the first
   partition.)

3. After rearranging the elements, we move the splitter element in
   between the two partitions and recursively sort the lower and upper
   parts of the array (leaving out the splitter).

We will shortly describe how to do this rearrangement
efficiently.  Before that, we observe that this procedure will
not necessarily give us subproblems of size N/2 as we had assumed
in the earlier analysis.  The sizes of the two partitions depend
on the value of A[0].  In the worst case, A[0] may be the
smallest or largest value in A, resulting in one of the two
partitions being empty.  However, such extreme cases are rare and
it can be proved mathematically that on an average Quicksort runs
in time proportional to N * log N.  More importantly, Quicksort
works very well in practice and it is usually the algorithm used
to implement built-in sorting routines.

To achieve the rearrangement, we scan the array from A[1] to
A[n-1].  At an intermediate point in the scan, we are examining
A[i].  At this point, we should have partitioned A[1] to A[i-1]
into two groups, those smaller than A[0] and those bigger than
A[0].  We thus have two indices into the array, b and i.  The
index b specifies the boundary between the lower and upper
partitions within A[1] to A[i-1].  We can fix b to be either the
last position in the lower partition or the first position in the
upper partition.  Suppose we choose the latter.

We then get an invariant as follows:

After each iteration of the loop where we scan through A,
   i marks the first element of the part of the array that is yet to
     be scanned
   b marks the beginning of the upper partition within A[1] to A[i-1]

Thus, at an intermediate point, the array looks like the
following.

    0 1          b         i         n-1
    -------------------------------------
   | | ≤ A[0]  | > A[0]  | Yet to scan  |
    -------------------------------------

When we scan A[i], we check whether it is smaller than A[0].  If
so, it should move to the smaller partition, so we swap it with
the entry at A[b] and increment both b and i.  If A[i] is not
smaller than A[0], it is already in the correct partition so we
leave b unchanged and just increment i.

     i = 1; b = 1;
     while (i < n){
       if (A[i] <= A[0]){
          swap(A[b],A[i]);
          b++;
       }
       i++;
     }

Of course, at the end we should also insert the splitter between the
two partitions, so we can add, after the loop:

    swap(A[0],A[b-1]);  /* Insert splitter before upper partition */

We then recursively sort A[0]..A[b-2] and A[b]..A[n-1].  As we
mentioned earlier, in the worst case, one of these two intervals
will have length n-1 while the other is empty, but on an average
the array will split into approximately equal parts.

In general, quicksort will be a recursive function which will be
passed the array and the left and right endpoints of the interval
to be sorted.  We use the convention that the left endpoint is
the first index in the interval while the right endpoint is one
position beyond the last index in the interval.  Thus, the
initial call would be to quicksort(A,0,n), where n is the size of
A.  In general, the function is

   /* swap(x,y) exchanges the values of x and y; here we assume it
      is a macro such as: */

   #define swap(x,y) do { int t = (x); (x) = (y); (y) = t; } while (0)

   int quicksort(int *A, int l, int r){
     int i, b;

     if (r - l <= 1){
       return 0;     /* Trivial array, do nothing */
     }

     /* Rearrange with respect to splitter, A[l] */

     b = l+1;
     for (i = l+1; i < r; i++){
       if (A[i] <= A[l]){
          swap(A[b],A[i]);
          b++;
       }
     }
     swap(A[l],A[b-1]);  /* Insert splitter before upper partition */

     /* Recursive calls */

     quicksort(A,l,b-1);
     quicksort(A,b,r);
     return 0;
   }
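
A minimal driver for this function might look as follows (a
sketch with made-up data):

   #include <stdio.h>

   int main(){
     int A[8] = {5, 3, 8, 1, 9, 2, 7, 4};
     int i, n = 8;

     quicksort(A, 0, n);        /* sort the whole of A[0..n-1] */

     for (i = 0; i < n; i++)
       printf("%d ", A[i]);
     printf("\n");
     return 0;
   }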

----------------------------------------------------------------------

Quicksort in real life

If we choose the pivot element to be the first (or last) element
in the array, the worst case input for quicksort is an array that
is already sorted.  We argued that this will rarely happen and
that in the average case, quicksort works in time n log n.

In certain practical situations, the worst case can and does
happen with unexpected frequency.  Consider an application like a
spreadsheet where you can display and manipulate a table of data.
One option typically provided to the user is to select a column
and sort the entire table based on the values in that column.  It
is often the case that the user first sorts the column in the
wrong order (ascending or descending), realizes the mistake and
then repeats the sort in the opposite direction.  If the column
sorting algorithm is quicksort, the second sort provides a worst
case input!

One solution is to randomize the choice of pivot.  Under Linux,
you can generate random numbers as follows:

  #include <stdlib.h>

  int seed;
  ...
  srandom(seed);  /* Initialize the random number generator */

  ...

  j = random();   /* Returns a random integer */

If we initialize srandom() with the same value, the subsequent
sequence of random numbers will be the same.  In IOI, if you use
randomization in your program, it is required that your random
number generator always yield the same sequence for any input.
Thus, you should use a fixed value as a seed, not something
"random" like the current system time.

To generate a random number in the range 0..k-1, we can write

  j = random()%k;

For quicksort, we need to choose a random pivot element between
positions l and r-1.  We can do this by writing:

  pivot = l + random()%(r-l);
  swap(A[l],A[pivot]);

The second statement moves the pivot to the beginning of the
array.  We can then use the earlier implementation of quicksort
that assumes that the pivot is at the beginning of the array.
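
Putting the pieces together, a randomized version of the earlier
quicksort might be sketched as follows (our illustration; the
swap macro defined earlier is assumed to be in scope).

   #include <stdlib.h>

   int quicksort(int *A, int l, int r){
     int i, b, pivot;

     if (r - l <= 1){
       return 0;                /* Trivial array, do nothing */
     }

     /* Choose a random pivot in l..r-1 and move it to the front */
     pivot = l + random()%(r - l);
     swap(A[l], A[pivot]);

     /* Partition with respect to the splitter A[l], as before */
     b = l+1;
     for (i = l+1; i < r; i++){
       if (A[i] <= A[l]){
          swap(A[b], A[i]);
          b++;
       }
     }
     swap(A[l], A[b-1]);        /* Insert splitter before upper partition */

     quicksort(A, l, b-1);
     quicksort(A, b, r);
     return 0;
   }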

----------------------------------------------------------------------


