
Algebra

Saugata Basu gave a series of three talks on algebraic curves and codes. Most of the time was devoted to an exposition of what the Riemann-Roch Theorem means; towards the end, its use in constructing Goppa codes was described. The amount of complex analysis covered was probably enough for half an advanced-level course, and so not all of it was digestible, but one could appreciate the complexity involved.
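For reference, the statement itself is compact (given here in one standard form, not necessarily the exact version used in the talks): for a divisor $D$ on a smooth projective curve of genus $g$, $\ell(D) - \ell(K - D) = \deg(D) - g + 1$, where $\ell(D)$ is the dimension of the space of functions with poles bounded by $D$ and $K$ is a canonical divisor. Unpacking what each of these terms means is, of course, where the half-course of complex analysis comes in.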

K V Subrahmanyam, in a series of four talks, described the recent approach of Ketan Mulmuley, Milind Sohoni et al., who use the framework of algebraic geometry to pose, and hopefully answer, central questions from complexity theory. Again, the material was far too esoteric for me to digest, but at least we got an idea (however faint) of how a complexity question could be phrased as a problem in standard algebraic geometry. One then entertains the hope that the powerful machinery of algebraic geometry might finally be able to resolve some of our long-standing conundrums. It therefore seems likely that one may soon need to understand much of this better.

Vinay gave a talk on the Singular Value Decomposition (SVD) and its applications in Computer Science. Starting from the simplest case -- diagonal matrices, ``homework exercise'' -- to square matrices, he proceeded in a model of systematic exposition to reveal how any matrix $A$ has a decomposition of the form $A = U \Sigma V^T$ for orthogonal matrices $U$ and $V$ and a diagonal matrix $\Sigma$. The diagonal entries $\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_r > \sigma_{r+1} = \cdots = \sigma_n = 0$ -- the so-called singular values -- are of the utmost importance in revealing the properties of the matrix. For example, $r$ is the rank, $\sigma_1$ is the 2-norm, and $\sqrt{\sum_i \sigma_i^2}$ is the Frobenius norm of the matrix.

The fundamental Hoffman-Wielandt inequality states that $\vert\vert A-B\vert\vert^2_F \geq \sum_i (\alpha_i - \beta_i)^2$, where $\alpha_i, \beta_i$ are the singular values of $A$ and $B$ respectively, in sorted order. Since a rank-$k$ matrix $B$ has $\beta_{k+1} = \cdots = \beta_n = 0$, this immediately implies that the best rank-$k$ approximation to a matrix $A$ is obtained by taking the singular value decomposition and discarding all but the first $k$ singular values. This is a fundamental technique employed in latent semantic indexing, used for classifying documents in information retrieval. The technique has, unsurprisingly, migrated to the Internet, and Vinay later gave us a live demonstration of Manjara, a meta search engine that classifies search results based on the SVD. (Check it out at http://cluster.cs.yale.edu. If you forget the URL, no problem; google returns this site on top if you query Manjara. If you even forget the name Manjara, still no problem; visit Vinay's homepage. Vinay who??)

Another cute application of the SVD was in circuit complexity, to prove a well-known bound due to Krapchenko. Actually one can prove a slightly stronger bound: the smallest formula size for a function $f$ is at least $\sigma^2_1(Q)$, where the matrix $Q$ is formed as follows: the rows are indexed by inputs on which the function takes the value $1$, the columns by inputs on which it takes the value $0$, and the $(i,j)$ entry is $1$ if the corresponding inputs differ in exactly one bit (and $0$ otherwise).

In a second talk Vinay spoke about Szemerédi's regularity lemma, which has played an increasingly important role in recent times in graph theory and related Computer Science applications. However, the statement of the lemma itself is so involved -- hardly has one $\epsilon$ been dealt with than a new $\delta$ (depending on the just-mentioned $\epsilon$) pops up -- that time ran out with just about the statement of the lemma completed. Perhaps just the topic for a sequel.
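For the record, one standard formulation (several equivalent variants exist, and this may not be the exact version stated in the talk): for disjoint vertex sets $X$ and $Y$ in a graph, write $d(X,Y) = e(X,Y)/(\vert X\vert \vert Y\vert)$ for the edge density between them; the pair $(X,Y)$ is $\epsilon$-regular if $\vert d(X',Y') - d(X,Y)\vert \leq \epsilon$ for all $X' \subseteq X$ and $Y' \subseteq Y$ with $\vert X'\vert \geq \epsilon \vert X\vert$ and $\vert Y'\vert \geq \epsilon \vert Y\vert$. The lemma then asserts that for every $\epsilon > 0$ and $m \geq 1$ there is an $M = M(\epsilon, m)$ such that the vertices of every graph can be partitioned into $k$ parts of almost equal size, with $m \leq k \leq M$, so that all but at most $\epsilon k^2$ of the pairs of parts are $\epsilon$-regular. Note how the quantifiers pile up -- hence the difficulty of even stating it in one breath.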


Meena Mahajan 2002-02-13