FST&TCS'97

Pre-Conference Workshop on Randomised Algorithms

IIT, Kharagpur, India, December 16--17, 1997

A Report by

Devdatt P. Dubhashi

Dept of Computer Sc and Engg, IIT Delhi
E-mail: dubhashi@henna.iitd.ernet.in

The organisers had given each participant an elegant and colourful bag, containing, among other things, an advance copy of the proceedings of the workshop. When I agreed --- all too readily, in retrospect --- to write this report for the Editor, I was banking on these proceedings and, in a carefree mood, took no notes myself. Then the inevitable happened: somewhere on the way from Kharagpur to Delhi (surviving ice-clad Montreal in between!) the bag was lost. I had no material to look at, a fast-fading memory, and the Editor breathing down my neck. It was the worst of times, and yet also the best of times.

Indulge me for a while as I try to explain with an analogy. Imagine returning from a long voyage to a faraway exotic land full of beautiful and wondrous sights. As time goes by and you relate this story from your armchair, what is it that will linger on beyond the ravages of time and age? Not the many little details from here and there, but the overall impression and only the most striking vistas. Those are the ones of lasting value, and their survival is proof of it. So it is with this report. Of course, the cynical reader will point out the other obvious advantage of such an armchair travelogue: one goes scot-free for countless sins of commission and omission!

Randomised algorithms have always been characterised by the following dichotomy: on the one hand, the algorithms are natural and simple to describe and understand --- sometimes almost childishly so. On the other hand, their analysis often requires deep mathematical techniques that are, invariably, also strikingly beautiful and elegant. We saw many examples of this at the workshop from six Masters of the craft.

Umesh Vazirani was a special double bonus as he was not originally scheduled (as far as I know). His first talk was on the so-called ``Go with the Winners'' class of algorithms. Consider a task such as searching a large space, typically on a tree (Deep Blue looking for its next move?). The traditional way is to use backtracking which can be very slow in the worst case. Imagine instead sending a posse of processes simultaneously in random directions. After a while, their progress is monitored with some progress measure. Processes that are doing badly are terminated, or rather redirected to ``Go with the Winners''. This is a simple and intuitive idea, but its analysis is not. Vazirani (joint work with David Aldous) shows that an exponential improvement can be obtained over backtracking. The analysis suggests connections with simulated annealing and the so-called genetic algorithms as well.
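To convey the flavour, here is a toy sketch of my own (not from the talk, with the tree model and the progress measure reduced to their crudest form): a posse of particles descends a random tree in which each node has a child with some fixed probability, and at every level the particles that get stuck are replaced by clones of the survivors. A lone random walker dies out exponentially fast; the posse almost always gets through.

import random

def go_with_the_winners(branch_prob, depth, population=64):
    """Toy 'Go with the Winners' search for a deep node in a random tree.

    Each node independently has a child with probability branch_prob.
    At every level, stuck particles are replaced by clones of the
    survivors, so the population only dies if everybody gets stuck at once.
    """
    alive = population
    for _ in range(depth):
        # Each live particle survives this level with probability branch_prob.
        survivors = sum(random.random() < branch_prob for _ in range(alive))
        if survivors == 0:
            return False          # the whole posse died out
        alive = population        # clone the survivors back to full strength
    return True

trials = 1000
lone = sum(all(random.random() < 0.9 for _ in range(50)) for _ in range(trials))
posse = sum(go_with_the_winners(0.9, 50) for _ in range(trials))
print(f"lone walker reached depth 50 in {lone}/{trials} trials; "
      f"the posse in {posse}/{trials}")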

Umesh's second talk was on a topic very dear to his heart at the moment: Quantum computation. This is a topic that has been attracting great attention lately, for reasons I will indicate shortly. But first, this:

Commercial Break : FSTTCS'98 Special Theme Session
Quantum Computation.
(Sponsors: Messrs Arvind and Jam, IMSc.)

Following a suggestion by the celebrated physicist Richard Feynman, people have begun investigating an alternative model of computation to see if Quantum mechanics has a bearing on computation as well. The additional power of quantum computing stems from the ability of quantum systems to be in a superposition of a large number of basis states. For example, if one is required to search a large space, a normal deterministic algorithm would have no recourse but to run through it sequentially. On the other hand, a quantum algorithm could be put, in one go, into a superposition over the whole space. The probability of being in the desired state is then boosted by unitary transformations (the only ones permitted by Quantum mechanics) before an observation finally produces the desired result with high probability. In this way, significant speedup is possible. This is the basic idea behind a family of search algorithms discovered by Lov Grover. A somewhat more complicated scheme was discovered by Peter Shor to factor large integers. This was a sensational development, because the factoring problem is generally believed to be intractable for classical algorithms, and many cryptographic security schemes are based on this assumption. Should a real and viable quantum computer be built, Shor's algorithm would consign these schemes to the dustbin.

The germs of Shor's algorithm actually lie in an earlier paper due to D. Simon on a problem related to identifying cosets in groups. The description in Simon's paper is somewhat obscure; Umesh gave a delightfully simple exposition with a few lines written on the board. As happens quite often, it requires somebody other than the original author to explain an idea well. By now, the basis of the algorithm --- involving Fourier analysis on general groups --- is well understood. A good place to start learning about these developments is in fact a set of notes developed by Umesh for a course at Berkeley and accessible at his web page.
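To make the ``boosting by unitary transformations'' in Grover's search a little more concrete, here is a small classical simulation of the iteration that I find instructive (my own sketch, not from the talk; a classical machine must of course manipulate all N amplitudes explicitly, which is exactly what a quantum computer avoids):

import math

def grover_success_probability(n, target):
    """Simulate Grover's search over a space of size n, classically.

    The quantum state is an n-vector of amplitudes.  Each iteration flips
    the sign of the target amplitude (the oracle) and then reflects all
    amplitudes about their mean (the diffusion step); after roughly
    (pi/4) * sqrt(n) iterations the target carries almost all probability,
    which is where the quadratic speedup over exhaustive search comes from.
    """
    amps = [1.0 / math.sqrt(n)] * n              # uniform superposition
    iterations = int(round(math.pi / 4 * math.sqrt(n)))
    for _ in range(iterations):
        amps[target] = -amps[target]             # oracle: mark the target
        mean = sum(amps) / n
        amps = [2 * mean - a for a in amps]      # inversion about the mean
    return amps[target] ** 2                     # probability of observing the target

print(grover_success_probability(1024, target=7))   # close to 1.0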

Aravind Srinivasan spoke on algorithms for routing and scheduling. My friend Alessandro Panconesi says that to watch Aravind speak is to see the Shostakovich 5th Symphony in action --- it has the same driving, irresistible force. And so it was this time as well. In routing problems, one has to send packets along paths in a network; in scheduling, jobs have to be assigned to processors. In both cases, the problem is one of contention --- for paths to travel along in routing and for processors in scheduling. Finding the optimal solution is hard in both cases. However, the simplest of randomised strategies leads to a good solution: simply assign random delays and the problems of contention are magically resolved. Once again, we see simple ideas whose analysis calls for powerful tools: the omnipresent Chernoff-Hoeffding bounds and the Lovász Local Lemma. Aravind described a number of approximation algorithms for these problems. The write-up in the proceedings already gives a lot of information, but the survey on these topics he is currently writing is sure to be the authoritative account.
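The simplest caricature of the random-delay trick (my own toy illustration, nothing like the real routing analyses, which control congestion edge by edge along whole paths): suppose a thousand packets all want the same link for one time step. With no delays they all collide; with independent random delays the worst time slot is only lightly loaded, the classical balls-into-bins behaviour that the Chernoff-Hoeffding bounds capture.

import random
from collections import Counter

def max_contention(num_packets, delay_range):
    """Give every packet an independent random delay in {0, ..., delay_range - 1}
    and return the largest number of packets landing in the same time slot."""
    slots = Counter(random.randrange(delay_range) for _ in range(num_packets))
    return max(slots.values())

n = 1000
print("no delays:     worst slot holds", n, "packets")
print("random delays: worst slot holds", max_contention(n, n), "packets")
# with delay_range = n the worst slot holds only about log n / log log n packets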

S. Muthukrishnan (turbo version featuring a dazzling new hairdo, yank accent and yank humour) spoke on randomisation in ``stringology''. The basic idea once again is simple: the Karp-Rabin fingerprints that are taught in many undergraduate classes nowadays. Muthukrishnan showed how adding more beef in small pieces on top of this can successively lead to solutions to a wide class of string matching problems. I'm told that there was sufficient interest from students for him to schedule another talk on the following day.
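For the reader who has not met them, here is the textbook form of the fingerprinting idea (a standard sketch, not code from the talk): hash every length-m window of the text with a rolling polynomial hash modulo a prime, compare against the pattern's hash, and verify the rare candidate matches directly. In the properly randomised version the prime (or the base) is chosen at random, so that no fixed input can systematically produce collisions; a fixed large prime is used below only for brevity.

def karp_rabin(text, pattern, base=256, prime=1_000_000_007):
    """Return the positions where pattern occurs in text (Karp-Rabin fingerprints)."""
    n, m = len(text), len(pattern)
    if m == 0 or m > n:
        return []
    high = pow(base, m - 1, prime)              # weight of the leading character
    p_hash = t_hash = 0
    for i in range(m):                          # hashes of the pattern and the first window
        p_hash = (p_hash * base + ord(pattern[i])) % prime
        t_hash = (t_hash * base + ord(text[i])) % prime
    matches = []
    for i in range(n - m + 1):
        # Candidate match: verify directly to rule out a hash collision.
        if p_hash == t_hash and text[i:i + m] == pattern:
            matches.append(i)
        if i < n - m:                           # roll the hash one character to the right
            t_hash = ((t_hash - ord(text[i]) * high) * base + ord(text[i + m])) % prime
    return matches

print(karp_rabin("abracadabra", "abra"))        # [0, 7]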

Sandeep Sen began the second day by talking about ``Randomisation in Computational Geometry''. He described two broad frameworks for the design and analysis of algorithms: ``Divide-and-Conquer'' and incremental algorithms. A good way to explain both is through an eternal favourite: Sorting. Indeed, Ketan Mulmuley in his book Computational Geometry: An Introduction through Randomised Algorithms has the slogan (in my paraphrase) that Computational Geometry is simply Quicksort in higher dimensions! In the Divide-and-Conquer version, we split the data into two parts via a splitting element and recurse on the subparts. In the incremental version, we add the elements one at a time into the sorted order. Both are doomed to be $\Omega(n^2)$ in the worst case. This is where randomisation comes in: in the Divide-and-Conquer version, choose the splitting element at random. In the incremental version, add the elements in a random order. This simple artifice leads to an optimal $O(n \log n)$ performance with high probability. Sandeep described a number of problems in Computational Geometry that can be phrased in these two paradigms. An elegant device that is particularly effective with randomised incremental algorithms is ``backwards analysis'': pretend that the algorithm is running backwards from the end-result to the input! Symmetries and linearity of expectation work wonders! A good place to read about these topics is Mulmuley's book. The write-up in the proceedings, at the author's own admission, is somewhat orthogonal to the workshop talk.
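Both paradigms really are easiest to see on sorting itself. Here is the Divide-and-Conquer half in a few lines (a throwaway sketch of mine, sacrificing in-place efficiency for clarity): choose the splitting element uniformly at random and recurse, which defeats every fixed worst-case input and gives $O(n \log n)$ comparisons with high probability.

import random

def randomised_quicksort(items):
    """Quicksort with a random splitting element."""
    if len(items) <= 1:
        return list(items)
    pivot = random.choice(items)                 # the random splitter
    smaller = [x for x in items if x < pivot]
    equal   = [x for x in items if x == pivot]
    larger  = [x for x in items if x > pivot]
    return randomised_quicksort(smaller) + equal + randomised_quicksort(larger)

print(randomised_quicksort([5, 3, 8, 1, 9, 2, 7]))   # [1, 2, 3, 5, 7, 8, 9]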

What about deterministic algorithms? Curiously, many of the best ones known to date were obtained from the randomised versions by derandomising. For some others, there are by now completely different deterministic algorithms of matching performance, but even here, the first deterministic algorithms were obtained by derandomising. In the latter case, randomisation played the pioneering, exploratory role, as Sandeep likes to put it. It was very appropriate, therefore, that Sandeep's talk was immediately followed by Edgar Ramos talking about ``Derandomisation in Computational Geometry''. Edgar gave a very meticulous survey covering the whole paraphernalia of the derandomisation technology, from small-bias spaces to $\epsilon$-nets to ``automata fooling''. In fact, so voluminous was his slide collection (only a fraction of which he could cover in the allotted time) that he himself got lost towards the end! Devotees of the method owe a special debt to Edgar for the detailed submission to the proceedings, where one can find a nice overview and extensive references.

Vijaya Ramachandran talked about QSM, an allegedly new model of parallel computation that is also claimed to be more realistic. She described many of the standard PRAM algorithms and lower bounds for problems on this model, a theme she carried on to her plenary session in the main FSTTCS '97 conference that began on the following day.

Madhu Sudan spoke on ``Randomness, Algebra and Computation'', leading us gently on a fascinating voyage from Hilbert irreducibility to the Berlekamp factoring algorithm for polynomials and culminating in the dramatic recent results on the complexity of approximations. Randomness in conjunction with algebraic computation has become an indispensable part of the Computer Scientist's toolkit. Computational problems that have seemingly no connection to algebra have found surprising solutions through this potent combination. No example could be more dramatic than the recent resolution of the complexity of approximating combinatorial problems --- something that had remained open almost from the inception of our field. Madhu Sudan described a simpler application to testing equivalence of read-once branching programs. Beginning students would have had a treat, but perhaps the appetite of the connoisseurs was left somewhat unsatiated.
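The branching program application is simple enough to sketch (this is my own reconstruction of the standard idea, not necessarily the form Madhu presented; the node names and field size below are hypothetical). Arithmetise each program into the polynomial that, at a node testing $x$, takes $(1-x)$ times the 0-child plus $x$ times the 1-child, and evaluate both polynomials at a random point of a large finite field. Because a read-once program's polynomial is the unique multilinear extension of the Boolean function it computes, equivalent programs always agree, while inequivalent ones are caught with high probability by the Schwartz-Zippel bound.

import random

PRIME = 2_147_483_647   # field size; one trial errs with probability at most #variables / PRIME

def robp_value(program, root, assignment):
    """Value of the branching program's polynomial at a point of the field.

    program maps node -> (variable, child_if_0, child_if_1); the sinks are the
    strings "ACCEPT" (value 1) and "REJECT" (value 0).  At a node testing x we
    take (1 - x) * value(child_0) + x * value(child_1), modulo PRIME.
    """
    def value(node):
        if node == "ACCEPT":
            return 1
        if node == "REJECT":
            return 0
        var, low, high = program[node]
        x = assignment[var]
        return ((1 - x) * value(low) + x * value(high)) % PRIME
    return value(root)

def probably_equivalent(prog1, root1, prog2, root2, variables, trials=10):
    """Randomised equivalence test for read-once branching programs."""
    for _ in range(trials):
        point = {v: random.randrange(PRIME) for v in variables}
        if robp_value(prog1, root1, point) != robp_value(prog2, root2, point):
            return False          # a witness of inequivalence was found
    return True                   # equivalent, with high probability

# Two read-once programs for x1 AND x2, built in different variable orders.
p1 = {"a": ("x1", "REJECT", "b"), "b": ("x2", "REJECT", "ACCEPT")}
p2 = {"c": ("x2", "REJECT", "d"), "d": ("x1", "REJECT", "ACCEPT")}
print(probably_equivalent(p1, "a", p2, "c", ["x1", "x2"]))   # True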

Kharagpur was sunny and warm, and the atmosphere at the workshop was relaxed and friendly. For the latter (if not the former) we have to gratefully thank the organisers from the Department of Computer Science and Engineering at I.I.T. Kharagpur. Their warmth and cheer in looking after us were boundless --- starting from the ``longest railway platform in the world'' to the hot water specially brought for us at the somewhat chilly morning hour and the trendily equipped lecture hall in the new Management School. Last but not least, I must mention the sumptuous lunches and dinners, featuring a new Bengali sweet each time. That settles it: in an indifferent voting season, the undersigned's vote goes unreservedly to Kharagpur for staging many more such workshops and conferences in the future.