After the talk, we huddled together in groups trying to understand
what had just hit us. Somebody announced
After tendering the obligatory apologies, I nevertheless insist that this accurately captures the spirit of the workshop on ``Derandomization'' organised by Jaikumar Radhakrishnan at TIFR Mumbai between February 25 and March 6, 1999. While the rest of the country was indulging in the annual orgy of colours, we were indulging in an orgy of our own flavour, blending the colours of Mathematics and Computer Science.
Readers who have followed previous reports by the author will be only too aware of his free--wheeling armchair travel guide style and know what to expect below. In my defence, I would only add that Jaikumar is planning to create a web page for the workshop, where all the material, diligently scribed in full detail by hapless student ``volunteers'', will be placed; a hard--copy of the proceedings will also be generated from this.
One of the difficulties I faced in writing this report (other than the
customary difficulties with deadlines and failing memory) was the
format: should I present it in temporal order with tutorials
supposedly coming first and advanced material later? But often later
material was tutorial and the early material was advanced! Or should
one separate the material out by topic? But then, is pseudorandomness an
algorithmic topic or a complexity--theoretic one? My eye was caught by a
(now famous) tutorial by Vinay from a past National Seminar where he
experimented with a different, non--linear style of presentation. It
struck me that if I were giving this report on the board, I would have
liked to adopt a similar non--linear style -- it would reflect much
better the interconnections between seemingly different topics.
Unfortunately, my skills with
K.V. Subrahmanyam then gave a series of talks on the constructions of small sample spaces that satisfy these reduced independence requirements: ``KV speaks k--wise on the Kolaba Koast'', as Ravi quipped. K.V. gave a fairly comprehensive account, starting with the constructions that convert linear independence (in the linear algebra sense) of the columns of generator matrices of codes into stochastic independence, and going on to fairly sophisticated (yet easily described) constructions based on Weil estimates of character sums. I wonder if there is one place where one can find a nice account of all these constructions; if not, the scribes assigned to this would do a very worthy service to us all, for these are fundamental tools used repeatedly in the derandomization arena.
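For the reader who wants something concrete, here is a toy sketch of the most classical of these constructions (my own gloss, certainly not K.V.'s code): to get $k$--wise independent variables over the integers modulo a prime $p$, evaluate a random polynomial of degree less than $k$ at all points, so that the sample space shrinks from $p^p$ to $p^k$ points.

\begin{verbatim}
import random

def kwise_sample(p, k):
    """Sample p random variables over Z_p that are k-wise independent.

    Classical construction: pick a uniformly random polynomial of
    degree < k over Z_p (p prime) and output its evaluations at the
    points 0, 1, ..., p-1.  Any k of the evaluations are uniform and
    mutually independent (a Vandermonde argument), yet the sample
    space has only p**k points instead of p**p.
    """
    coeffs = [random.randrange(p) for _ in range(k)]
    return [sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p
            for x in range(p)]

# Example: 7 pairwise (2-wise) independent variables over Z_7,
# drawn from a sample space of only 7**2 = 49 points.
print(kwise_sample(7, 2))
\end{verbatim}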
Three such examples, where the method has been successfully applied, were presented by the three speakers I mention next.
Venkatesh Raman gave a talk in two parts with a timer
alarm at the halfway point for people who had fallen asleep. The first
part dealt with hashing in the Fredman--Koml\"os--Szemeredi two--level
strategy. The second dealt with the method of
Peter Bro Miltersen gave a wonderful panoramic exposition of hashing, starting with the Carter--Wegman definition of universal hashing (which he corrected from the common misconception), moving on to the Fredman--Koml\"os--Szemeredi two--level scheme, and finally to methods that use ``real'' computer operations (like & in C for masking) and error--correcting codes.
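To fix ideas, here is the textbook Carter--Wegman family in a few lines (a sketch of my own, with an arbitrarily chosen prime, not anything from Peter Bro's talk). For any two distinct keys the collision probability is at most $1/m$, which is precisely the property the two--level scheme exploits at both levels.

\begin{verbatim}
import random

P = (1 << 31) - 1   # a Mersenne prime, larger than any key we hash

def random_universal_hash(m):
    """Draw h(x) = ((a*x + b) mod P) mod m from the Carter-Wegman family.

    For any fixed pair of distinct keys x != y (both < P), the
    probability over the choice of a and b that h(x) == h(y) is at
    most 1/m -- the defining property of a universal family.  The
    FKS scheme hashes n keys into n buckets with such a function,
    then rehashes each bucket of size s into a table of size s*s,
    resampling until a collision-free function is found (each trial
    succeeds with probability > 1/2).
    """
    a = random.randrange(1, P)
    b = random.randrange(P)
    return lambda x: ((a * x + b) % P) % m

h = random_universal_hash(100)
print(h(42), h(43))
\end{verbatim}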
V. Kamakoti talked about derandomization in
Computational Geometry, a field in which the technique has achieved
some of its most dramatic successes. Kamakoti talked mainly about the
technology of the so--called $\epsilon$--nets. One wishes there had
also been coverage of Ketan Mulmuley's paper ``Pseudorandomness in
Computational Geometry'', where a beautiful general framework of
``configuration spaces'' is set up in which one can uniformly
derandomize a whole host of applications. A good account is in
Mulmuley's book.
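For completeness, let me at least record the basic definition (in my own words, not from the talk). Given a range space $(X, R)$ and a finite $A \subseteq X$, a subset $N \subseteq A$ is an $\epsilon$--net if it intersects every range that captures at least an $\epsilon$ fraction of $A$:
\[
r \in R, \quad |r \cap A| \geq \epsilon |A| \;\; \Longrightarrow \;\; N \cap r \neq \emptyset.
\]
The theorem of Haussler and Welzl says that when the range space has VC dimension $d$, a random sample of about $(d/\epsilon)\log(1/\epsilon)$ points is an $\epsilon$--net with high probability; derandomization in geometry is largely the business of constructing such nets deterministically.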
Ramesh Hariharan gave a fine account of the second
basic method of derandomization, the so--called
Alessandro Panconesi gave two talks on the ``extraordinarily successful careers'' of trivial (randomized) algorithms. The first was on edge--colouring algorithms (where he tried to sneak in his latest agenda of ``experimental analysis of algorithms''). The second was a rather more interesting talk on an application to proving a lower bound on the performance of approximation algorithms for MAX-k-CUT. He showed a very neat probabilistic argument in the course of constructing an approximation--preserving reduction from MAXCUT; it was rather curious that one needs a probabilistic argument and a derandomization (albeit trivial) in order to get a lower bound!
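For the record, the trivial algorithm in question is the one every textbook gives (my sketch, not Alessandro's slides): colour each vertex uniformly at random with one of $k$ colours; an edge survives uncut only if both endpoints agree, so the expected fraction of cut edges is $1 - 1/k$.

\begin{verbatim}
import random

def trivial_max_k_cut(vertices, edges, k):
    """The 'trivially successful' randomized MAX-k-CUT algorithm.

    Assign each vertex one of k colours uniformly at random.  Each
    edge is cut (endpoints coloured differently) with probability
    1 - 1/k, so by linearity of expectation the expected number of
    cut edges is (1 - 1/k)*|edges| -- already a (1 - 1/k)-approximation,
    since no cut can exceed |edges|.
    """
    colour = {v: random.randrange(k) for v in vertices}
    cut = sum(1 for (u, v) in edges if colour[u] != colour[v])
    return colour, cut

# Example: a 4-cycle with k = 3 colours.
print(trivial_max_k_cut([0, 1, 2, 3],
                        [(0, 1), (1, 2), (2, 3), (3, 0)], 3))
\end{verbatim}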
C. Pandurangan talked about a skip--list based solution for priority queues that achieves performance comparable to a considerably more complicated deterministic solution due to Gerth Brodal. Once again, the power of randomization is illustrated by the fact that the solution was the work of undergraduates at IIT Madras!
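For readers who have not met skip lists, here is a bare--bones sketch of the idea (a toy of my own, certainly not the students' solution): keep the keys in a sorted skip list, so the minimum always sits at the front, and both insert and delete--min take $O(\log n)$ expected time from nothing more than coin flips.

\begin{verbatim}
import random

class Node:
    def __init__(self, key, level):
        self.key = key
        self.next = [None] * level   # one forward pointer per level

class SkipListPQ:
    """Toy skip-list priority queue: expected O(log n) per operation."""
    MAX_LEVEL = 32

    def __init__(self):
        self.head = Node(None, self.MAX_LEVEL)   # sentinel node

    def insert(self, key):
        level = 1                    # coin flips pick the tower height
        while level < self.MAX_LEVEL and random.random() < 0.5:
            level += 1
        node = Node(key, level)
        cur = self.head
        for i in reversed(range(self.MAX_LEVEL)):
            while cur.next[i] is not None and cur.next[i].key < key:
                cur = cur.next[i]
            if i < level:            # splice in at this level
                node.next[i] = cur.next[i]
                cur.next[i] = node

    def delete_min(self):
        node = self.head.next[0]     # minimum = first node at bottom
        if node is None:
            return None
        for i in range(len(node.next)):
            self.head.next[i] = node.next[i]
        return node.key

pq = SkipListPQ()
for x in [5, 1, 4, 2, 3]:
    pq.insert(x)
print([pq.delete_min() for _ in range(5)])   # [1, 2, 3, 4, 5]
\end{verbatim}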
The second talk was on Ramsey graphs. One of the classical class--room
illustrations of the probabilistic method is to show that the edges of
a large complete graph can be two--coloured so that no clique of size
roughly $2\log n$ is monochromatic. How does one derandomize this? Sundar gave a delightful
exposition of the so--called
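The calculation behind the classroom illustration fits on one line. Colour each edge of the complete graph on $n$ vertices red or blue independently; the expected number of monochromatic $k$--cliques is
\[
\binom{n}{k} \, 2^{1 - \binom{k}{2}},
\]
which is less than $1$ as soon as $n < 2^{k/2}$ (for $k \geq 3$), so some colouring has no monochromatic $k$--clique at all. The whole point, of course, is how to lay hands on such a colouring deterministically.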
One of the alluring results in complexity theory and one that was used in the passage above is Yao's XOR Lemma: to amplify the hardness of a function, take repeated XORs. Although simple to state and intuitive, the proof is not easy. Jaikumar Radhakrishnan gave a gem of a talk in which he gave a completely understandable proof of the XOR Lemma. One of the revealing features of the proof is how much difference it makes to adopt good notation, even if it is only using $+1$ and $-1$ in place of $0$ and $1$!
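To see why, write $F(x) = (-1)^{f(x)}$, so that XOR becomes multiplication and, for independent inputs, the correlation of a parity is simply the product of the individual correlations:
\[
(-1)^{f_1(x_1) \oplus f_2(x_2)} = F_1(x_1)\,F_2(x_2),
\qquad
E\bigl[F_1(x_1)\,F_2(x_2)\bigr] = E[F_1]\;E[F_2].
\]
The XOR Lemma asserts that against size--bounded circuits the advantage still decays almost multiplicatively, and it is in replacing ``simply the product'' by ``almost the product'' that all the work lies.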
It was rather a pity therefore, that after finally having understood
the proof of the XOR Lemma and its use in complexity theory, Jaikumar
returned in a later talk to utter the following:
Friends, Romans and countrymen (although the Romans had departed by then),
Lend me your ears
I come to bury the XOR Lemma, not to praise it
The evil that men do lives after them
The good is oft interred with their bones
So be it with the XOR Lemma.
Extractors are very basic concepts in derandomization. As Amnon Ta-Shma said, an extractor is something that takes the energy out of a source in which it is sitting, like squeezing an orange. Amnon gave a very nice discussion of the what, why and how of extractors, which made the literature suddenly more accessible. Nisan's survey on extractors, which was circulated, is also a great help. Luca Trevisan realised that pseudorandom generators that had been around for a while could, combined with error--correcting codes, actually be regarded as extractors, and in fact gave better constructions than previously known. The consequences for derandomization in complexity are, of course, momentous. Ran Raz also gave a talk describing the Trevisan extractor.
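For readers who have not seen the definition, the orange being squeezed is min--entropy (this is the standard definition, not anything specific to Amnon's talk): a function $Ext : \{0,1\}^n \times \{0,1\}^d \rightarrow \{0,1\}^m$ is a $(k,\epsilon)$--extractor if for every source $X$ on $\{0,1\}^n$ with min--entropy at least $k$, the output is close to uniform,
\[
\Delta\bigl(Ext(X, U_d),\, U_m\bigr) \leq \epsilon,
\]
where $U_d$ is a truly random seed and $\Delta$ denotes statistical distance. The game is to make the seed length $d$ as small, and the output length $m$ as large, as possible.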
A different approach to derandomizing BPP is due to Andreev, Clementi
and Rolim. It is based on the concept of hitting sets.
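In its simplest form (my paraphrase of the standard notion), a hitting set is a small, efficiently constructible set $H_n \subseteq \{0,1\}^n$ that no dense, efficiently recognisable set can avoid:
\[
\Pr_{x \in \{0,1\}^n}\bigl[C(x) = 1\bigr] \geq 1/2
\;\; \Longrightarrow \;\;
\exists\, w \in H_n : C(w) = 1,
\]
for every circuit $C$ of size $n$. Such a set obviously derandomizes one--sided error; the surprise in Andreev--Clementi--Rolim is that it suffices for two--sided error BPP as well.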
Meena Mahajan gave a couple of talks on derandomization of space--bounded complexity classes. In contrast to the earlier theme, the results here are unconditional. In the first talk, she covered the seminal result of Nisan about derandomizing randomized logspace computation, a paper that came soon after his influential discovery of the pseudorandom generator. Meena's talk was a good advertisement for the virtues of using probabilistic concepts like expectation and variance over direct counting or averaging. In the second talk, she covered the significant improvement of Saks and Zhou where a combination of applications of the Nisan pseudorandom generator and perturbations is used to reduce the space needed by the deterministic simulation dramatically.
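The headline numbers, for those keeping score (these are the well--known bounds, not details from Meena's talks): Nisan's generator stretches a seed of length $O(\log^2 n)$ far enough to fool logspace machines, giving a deterministic simulation in space $O(\log^2 n)$, and the Saks--Zhou perturbation argument brings this down to
\[
BPL \subseteq DSPACE(\log^{3/2} n).
\]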
Muli Safra gave two talks, one about the original seminal PCP characterization of NP, one of the most important results in Theoretical Computer Science, and its consequences for the complexity of approximation. The second talk was about a different, even more bizarre characterization of NP (involving such beasts as linear integer combinations of functionals) which gave the best known lower bounds on the complexity of lattice problems -- the closest vector problem and the shortest vector problem. He gave his patented laptop presentation -- by his own admission, he has started doing this after several disasters with blackboard presentations.
Ramesh Hariharan gave a very nice talk about work towards proving lower bounds on a set cover problem with restrictions on the set intersections, a problem motivated by geometric applications. He showed how the construction of hitting sets for combinatorial rectangles (different from the hitting sets for BPP above) was a key component in the solution.
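A combinatorial rectangle, for the record, is just a product set in a grid, and the hitting sets in question must intersect every rectangle that is not too small (my paraphrase of the standard definition):
\[
R = R_1 \times \cdots \times R_d \subseteq [m]^d,
\qquad
\frac{|R|}{m^d} \geq \epsilon
\;\; \Longrightarrow \;\;
H \cap R \neq \emptyset,
\]
the aim being, roughly, an explicit $H$ of size polynomial in $m$, $d$ and $1/\epsilon$.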
Somewhat removed from the main theme, Ran Raz gave a talk which was misleadingly titled ``Quantum Communication Complexity''. In fact, the talk was about the usual probabilistic communication complexity, only applied to a problem of interest in the Quantum model. Ran just about managed to give the definition of the probabilistic communication complexity model and the statement of the result, but ran out of time before presenting a single line of the proof.
Next we have Vinay's Cricket Scale for rating talks, which I will illustrate by an application to his talk. Copybook defence, bat and pad close together, straight bat and head right on top of the ball at the start of the innings, followed by extravagant cross--batted heaves over mid-wicket in the middle of the innings but surviving eventually to reach a century. Most talks in the workshop reached the magic figure as well, and some, like Jaikumar's XOR lemma talk and Peter Bro's Hitting sets talk reached the pinnacle of achievement, the unbeaten chanceless triple hundred!
In sharp contrast was the self--effacing attitude of the Indian participants, and this despite the overwhelming impression I got of how many capable and quite brilliant people we have in the country today. But for this workshop, which gathered these people together in one place, one would never have realised this. It showed very dramatically how isolated the research community in India unfortunately is. Recently people have made commendable attempts to change this, for instance through the annual FST&TCS conference, the National Seminar, the Banasthali gathering etc. I for one strongly feel that we should seize this opportunity and carry the momentum generated by this workshop into a regular series of such events. The agenda of such a series would be somewhat different from that of the other gatherings (such as the National Seminar): it should provide a mechanism for Indian research in Theoretical Computer Science to operate at the highest level on the most challenging problems of the day. Incidentally, the inspiration for this came from the public lecture by Armand Borel at the Bhabha Auditorium, in which he told us how the young Einstein together with his friends resolved to ``study the most important books of the day''. If Indian research were seen making visible contributions at the most prestigious international forums, there would be correspondingly less condescending talk directed at us.
By the way, as I sign off, I just get this very uncomfortable feeling that I missed out something important. Well, if I can't remember it, it couldn't have been anything of consequence or very memorable, could it ...?
Dodo Piadico
Contact address: dubhashi@cse.iitd.ernet.in