Lectures

_______________

ICPCU Series

ICPC University hosts several lecture series designed as part of the journey to becoming a better problem solver.

Click the buttons below to view lectures by Alumni and CLI.

19 June, 2022

Aggregation with Shuffle Differential Privacy

By Pasin Manurangsi, Google Research

Differential privacy (DP) is a formal notion of privacy for algorithms. Traditionally, the study of DP has focused on two main models: the central model and the local model. The former requires a trusted curator but provides good utility guarantees, whereas the latter does not require any trusted curator but incurs larger errors. In recent years, the shuffle model of DP has emerged as an intermediate option: it requires less trust than the central model but yields better accuracy than the local model. In this talk, I will give an overview of the three models of DP and demonstrate the power of the shuffle model through simple tasks such as counting and building a histogram.
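As a concrete baseline for the comparison above, here is a minimal Python sketch of counting in the *local* model via randomized response, the weakest-trust option that the shuffle model improves on. The epsilon value and dataset are illustrative, not from the talk.

```python
import math
import random

def randomized_response(bit: int, epsilon: float) -> int:
    """Local-model DP: report the true bit with probability
    e^eps / (e^eps + 1), otherwise report the flipped bit."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p_truth else 1 - bit

def estimate_count(reports, epsilon: float) -> float:
    """Debias the noisy sum, using E[report] = p*bit + (1-p)*(1-bit)."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    n = len(reports)
    return (sum(reports) - n * (1.0 - p)) / (2.0 * p - 1.0)

random.seed(0)
true_bits = [1] * 600 + [0] * 400           # true count = 600
reports = [randomized_response(b, 1.0) for b in true_bits]
print(round(estimate_count(reports, 1.0)))  # close to 600, but noisy
```

The estimator is unbiased, but its error grows with the number of users; the central model would add only constant-magnitude noise, and the shuffle model sits in between.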

 Watch 

7 August, 2021

Approximating Edit Distance in Near-Linear Time

By Alex Andoni, Associate Professor at Columbia University

Edit distance is a classic measure of similarity between strings, with applications ranging from computational biology to coding. Computing edit distance is also a classic dynamic programming problem, with a quadratic-time solution often taught in “Intro to Algorithms” classes. Improving this runtime has been a decades-old challenge, now believed to be likely impossible based on tools from the modern area of fine-grained complexity. We will briefly survey recent research that led to an algorithm *approximating* the edit distance in near-linear time, up to a constant factor. This research direction was reenergized by the breakthrough paper of [Chakraborty, Das, Goldenberg, Koucky, Saks; FOCS’18], which gave the first constant-factor approximation algorithm with a (strongly) sub-quadratic running time. (Based on joint work with Negev Shekel Nosatzki, FOCS’20.)
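The quadratic dynamic program mentioned above can be written in a few lines of Python. This is the standard textbook formulation, not the near-linear-time approximation algorithm from the talk:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic O(len(a) * len(b)) dynamic program for edit distance."""
    m, n = len(a), len(b)
    # prev[j] = distance between a[:i-1] and b[:j]; roll one row at a time.
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(
                prev[j] + 1,                           # delete a[i-1]
                cur[j - 1] + 1,                        # insert b[j-1]
                prev[j - 1] + (a[i - 1] != b[j - 1]),  # substitute or match
            )
        prev = cur
    return prev[n]

print(edit_distance("kitten", "sitting"))  # 3
```

Rolling a single row keeps the space linear, but the time stays quadratic; breaking that barrier, even approximately, is exactly what the surveyed research addresses.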

 Watch 

28 March, 2021

Markov Chain Monte Carlo Method

By Shayan Oveis Gharan, Associate Professor at University of Washington

Most interesting probability distributions that appear in science and mathematics have exponentially large support, and we are unable to study them precisely. Researchers typically employ a method known as the Markov chain Monte Carlo technique, in which they design a Markov chain whose stationary distribution equals the target distribution. They then simulate this Markov chain, hoping that after running it long enough it generates random samples from the target distribution. These samples can then be used to estimate several quantities of interest, such as marginals or moments of the underlying distribution. The most fundamental question is how long one should run the chain to obtain an approximate/exact sample. Over the last few decades, researchers have developed several techniques to answer this question. In this talk, I will present a new tool based on the theory of high-dimensional expanders for studying the mixing time of Markov chains. I will discuss applications to several problems, such as sampling random spanning trees and independent sets of a graph.
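A minimal illustration of the method, using the simplest of the applications mentioned: single-site Glauber dynamics whose stationary distribution is uniform over the independent sets of a graph. The 4-cycle below is chosen only because its 7 independent sets are easy to enumerate by hand; the number of steps is an illustrative guess, not a proven mixing time.

```python
import random

def glauber_step(in_set, adj, rng):
    """One step of single-site Glauber dynamics for the hardcore model at
    fugacity 1: stationary distribution is uniform over independent sets."""
    v = rng.choice(list(adj))
    if rng.random() < 0.5:
        if not any(in_set[u] for u in adj[v]):  # keep the set independent
            in_set[v] = True
    else:
        in_set[v] = False

def sample_independent_set(adj, steps, rng):
    in_set = {v: False for v in adj}
    for _ in range(steps):
        glauber_step(in_set, adj, rng)
    return frozenset(v for v in adj if in_set[v])

# The 4-cycle has exactly 7 independent sets: {}, 4 singletons, {0,2}, {1,3}.
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
rng = random.Random(1)
counts = {}
for _ in range(20000):
    s = sample_independent_set(cycle4, steps=50, rng=rng)
    counts[s] = counts.get(s, 0) + 1
print(len(counts))  # 7
```

The talk's question is precisely how large `steps` must be before the samples are close to uniform; the high-dimensional-expander machinery gives such bounds for chains like this one.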

 Watch 

23 January, 2021

Predecessor Search and Fractional Cascading

By Omri Weinstein, Columbia University, Hebrew University

Predecessor search is arguably the most basic data structure problem in information retrieval, and it governs the efficiency of databases, internet routers, and file systems. The classic solution to this “1D nearest-neighbor” problem is O(log n) search via binary search trees. In the first part of this talk we will see that it is possible to obtain an exponential improvement, supporting (dynamic) search in O(log log n) time on the word-RAM, and that this is optimal. We will then consider the “iterated search” problem, where the goal is to build an efficient data structure that supports fast predecessor search in k unrelated databases in parallel. We will describe a classic technique called “fractional cascading”, which yields a static data structure with linear space and O(k + log log n) search time, instead of the naive O(k log log n). Finally, we will discuss the (im)possibility of fully dynamic fractional cascading.
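The classic O(log n) baseline, and the naive iterated search that simply repeats a predecessor query in each of the k databases, can be sketched as follows. This is only the starting point; the talk's O(log log n) word-RAM structure and fractional cascading are far more involved.

```python
import bisect

def predecessor(sorted_keys, q):
    """Largest key <= q, or None if none exists: the O(log n) binary-search
    baseline that the talk's word-RAM structure improves exponentially."""
    i = bisect.bisect_right(sorted_keys, q)
    return sorted_keys[i - 1] if i > 0 else None

def iterated_search(databases, q):
    """Naive iterated search: one independent query per database."""
    return [predecessor(db, q) for db in databases]

keys = [2, 3, 5, 7, 11, 13]
print(predecessor(keys, 10))                  # 7
print(iterated_search([keys, [1, 4, 9]], 6))  # [5, 4]
```

Fractional cascading avoids paying the full search cost in every database by planting sampled "bridge" elements from each list into the next, so only the first search is from scratch.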

 Watch 

11 June, 2021

Composable Core-set as a Data Summarization Technique

By Sepideh Mahabadi, Toyota Technological Institute at Chicago

When working with large datasets, classical algorithms are often inefficient, e.g., they are too slow or require too much space. One of the prominent approaches for solving problems over large datasets is to compute a small “core-set”: a subset of the data that is sufficient for approximating the solution of a given task. A “composable core-set” is a core-set with the composability property: the union of multiple core-sets should form a good summary for the union of the original datasets. This composability property enables efficient solutions in a wide variety of massive data processing applications, including distributed computation (e.g. Map-Reduce model), streaming algorithms, and similarity search. In this talk, I will overview these notions and show efficient construction of composable core-sets for basic optimization problems. In particular, we consider the “diversity maximization” problem, where the goal is to maintain the “diversity” in a dataset as we produce a summary of it.
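As an illustration of both notions, here is a sketch that uses the standard Gonzalez farthest-point greedy as a core-set for diversity maximization, and composes per-part core-sets by re-running the same greedy on their union. The point sets are made up, and the talk's constructions come with approximation guarantees this sketch does not claim.

```python
def farthest_point_coreset(points, k):
    """Gonzalez-style greedy: repeatedly add the point farthest (in squared
    Euclidean distance) from the points picked so far."""
    core = [points[0]]
    while len(core) < min(k, len(points)):
        core.append(max(points, key=lambda p: min(
            (p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in core)))
    return core

def composed_summary(parts, k):
    """Composability: summarize each part separately, then summarize the
    union of the per-part core-sets."""
    merged = [p for part in parts for p in farthest_point_coreset(part, k)]
    return farthest_point_coreset(merged, k)

part1 = [(0, 0), (1, 0), (0, 1), (10, 10)]
part2 = [(20, 0), (19, 1), (5, 5)]
summary = composed_summary([part1, part2], 3)
print(summary)
```

Because each part can be summarized independently, this pattern maps directly onto distributed settings such as Map-Reduce: mappers emit core-sets, and a single reducer composes them.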

 Watch 

31 July, 2020

Succinct Data Structures

By Huacheng Yu, Associate Research Scholar at Princeton University

Huacheng currently serves as an Associate Research Scholar at Princeton University and was an ICPC 2010 World Finalist, representing Tsinghua University. In his lecture, Huacheng will give an introduction to succinct data structures. He will explain what a succinct data structure is and present two such structures in detail: one for the rank problem and one for storing base-B vectors. Finally, he will list a few more results in this area and state an open question.
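As a taste of the rank problem, here is a simplified Python sketch: a directory of per-block prefix counts layered over the raw bit vector. A genuinely succinct structure answers rank in constant time using only o(n) extra bits via a second level of sub-blocks; this one-level version only conveys the idea.

```python
class RankBitVector:
    """rank(i) = number of 1s among the first i bits, answered from a
    per-block prefix-count directory plus a scan inside one block."""
    BLOCK = 64

    def __init__(self, bits):
        self.bits = bits
        self.block_rank = [0]  # block_rank[b] = ones in bits[: b * BLOCK]
        for start in range(0, len(bits), self.BLOCK):
            self.block_rank.append(
                self.block_rank[-1] + sum(bits[start:start + self.BLOCK]))

    def rank(self, i):
        b, off = divmod(i, self.BLOCK)
        return self.block_rank[b] + sum(
            self.bits[b * self.BLOCK : b * self.BLOCK + off])

bits = [1, 0, 1, 1, 0] * 40  # 200 bits containing 120 ones
rv = RankBitVector(bits)
print(rv.rank(200))  # 120
print(rv.rank(3))    # 2
```

The in-block scan here costs O(BLOCK) time; the real structure replaces it with a lookup table over short sub-blocks, which is where the succinctness analysis happens.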

 Watch 

30 August, 2020

ICPCU Alumni Lecture: Recent Progress on Global Minimum Cut

By Pawel Gawrychowski, Assistant Professor at University of Wroclaw

In the global minimum cut problem, we seek a partition of the nodes of an undirected weighted graph into two nonempty parts that minimizes the total weight of all edges crossing the partition. In 1996, Karger showed how to solve this problem in O(m log^3 n) time w.h.p. I will discuss his framework and the recent progress on this question, including improving the running time to O(m log^2 n).
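Karger's framework builds on his elegant random-contraction algorithm, which a short Python sketch can convey; the O(m log^3 n) result amplifies and derandomizes this basic step in sophisticated ways. The unweighted example graph below is illustrative.

```python
import random

def karger_cut(edges, n, rng):
    """One run of Karger's random contraction: repeatedly merge the
    endpoints of a random edge until two super-nodes remain; the edges
    still crossing form a cut, minimum with probability >= 2/(n(n-1))."""
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    components = n
    while components > 2:
        u, v = edges[rng.randrange(len(edges))]
        ru, rv = find(u), find(v)
        if ru != rv:             # contract only edges between components
            parent[ru] = rv
            components -= 1
    return sum(1 for u, v in edges if find(u) != find(v))

# Two triangles joined by one bridge edge: the global minimum cut is 1.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
rng = random.Random(0)
best = min(karger_cut(edges, 6, rng) for _ in range(50))
print(best)  # 1
```

Repeating the contraction O(n^2 log n) times boosts the success probability to high probability, which is the starting point for the faster recursive variants the talk discusses.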

 Watch 

3 October, 2020

ICPCU Alumni Lecture: Solving the “Expected Damage” Problem in Kotlin

By Mikhail Dvorkin, Kotlin enthusiast, ICPC 2007 Gold Medalist

A walkthrough of solving the “Expected Damage” problem in Kotlin, using probabilities and modular arithmetic. https://codeforces.com/problemset/pro…
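The problem statement is not reproduced here, but the modular-arithmetic technique it refers to is standard in competitive programming: a rational probability or expectation p/q is reported as p times the modular inverse of q. A sketch (shown in Python rather than Kotlin; the modulus and die example are illustrative, not from the problem):

```python
MOD = 998244353  # a prime modulus commonly used by online judges

def inv(a: int) -> int:
    """Modular inverse via Fermat's little theorem (valid since MOD is prime)."""
    return pow(a, MOD - 2, MOD)

# A rational value p/q is reported as p * q^{-1} mod MOD.
# Illustration: the expected roll of a fair six-sided die is 21/6 = 7/2.
expectation = 21 * inv(6) % MOD
print(expectation)  # 499122180, i.e. (MOD + 7) // 2
assert 2 * expectation % MOD == 7
```

This lets exact expected values be computed and verified without floating-point error, which is why many judges ask for answers in this form.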

 Watch 

3 October, 2020

ICPCU: A Tale of Turing Machines, Quantum-Entangled Particles, and Operator Algebras

By Henry Yuen, University of Toronto

In a recent result known as “MIP* = RE,” ideas from three disparate fields of study — computational complexity theory, quantum information, and operator algebras — have come together to simultaneously resolve long-standing open problems in each field, including a 44-year old mystery in mathematics known as Connes’ Embedding Problem. In this talk, I will describe the evolution and convergence of ideas behind MIP* = RE: it starts with three landmark discoveries from the 1930s (Turing’s notion of a universal computing machine, the phenomenon of quantum entanglement, and von Neumann’s theory of operators), and ends with some of the most cutting-edge developments from theoretical computer science and quantum computing.

 Watch 

7 November, 2020

Lower Bounds for Multiplication and Integer Sorting via Network Coding

By Kasper Green Larsen, Associate Professor at Aarhus University, part time Full Professor at Copenhagen University

Multiplication is one of the most fundamental computation problems, yet its true complexity remains elusive. For multiplication of two n-bit numbers, recent breakthrough work by Harvey and van der Hoeven (2019) gives a boolean circuit of size O(n log n). Is this optimal? In this talk, we answer the question affirmatively, conditioned on a widely believed conjecture in the seemingly completely unrelated area of network coding. The area of network coding studies the amount of bits that can be transmitted between different senders and receivers in a communication network with bandwidth constraints on the edges.
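For context on the algorithmic side, here is a Python sketch of Karatsuba's method, the first sub-quadratic multiplication algorithm and an early step on the road to the O(n log n) circuit mentioned above. The base-case threshold is arbitrary.

```python
def karatsuba(x: int, y: int) -> int:
    """Karatsuba multiplication: three recursive multiplies instead of
    four, giving O(n^1.585) bit operations for n-bit inputs."""
    if x < 1024 or y < 1024:
        return x * y  # small enough: multiply directly
    n = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> n, x & ((1 << n) - 1)  # split x = xh * 2^n + xl
    yh, yl = y >> n, y & ((1 << n) - 1)
    a = karatsuba(xh, yh)
    b = karatsuba(xl, yl)
    c = karatsuba(xh + xl, yh + yl) - a - b  # middle term from one product
    return (a << (2 * n)) + (c << n) + b

assert karatsuba(123456789, 987654321) == 123456789 * 987654321
```

The conjectured network-coding lower bound in the talk would show that the O(n log n) endpoint of this line of improvements cannot be beaten by any circuit.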

 Watch 

28 November, 2020

Approximation Algorithms for the (Asymmetric) Traveling Salesman Problem

By Jakub Tarnawski, Senior Researcher at Microsoft Research

The famous Traveling Salesman Problem (TSP) asks: given a graph with weights on edges, what is the shortest tour that visits all vertices? As TSP is NP-hard, the theoretical study of algorithms for TSP has focused on approximation algorithms: ones that are provably efficient and return solutions provably competitive with the optimum. In this talk I will give a short introduction to approximation algorithms for TSP. In particular, I will discuss the first constant-factor approximation algorithm for the asymmetric version of TSP (ATSP), where the graph is directed, from 2017 joint work with Ola Svensson and László Végh. This will serve as a nice example of how linear programming is a very useful technique in the design of algorithms for discrete problems.
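As a first taste of approximation algorithms for TSP, here is a Python sketch of the classic double-tree 2-approximation for the symmetric metric case: a textbook warm-up, not the LP-based ATSP algorithm from the talk. The point set is illustrative.

```python
import math

def double_tree_tour(dist):
    """Double-tree 2-approximation for metric TSP: build a minimum
    spanning tree with Prim's algorithm, then shortcut a preorder walk
    of the tree into a tour (triangle inequality keeps shortcuts cheap)."""
    n = len(dist)
    in_tree = [False] * n
    parent = [0] * n
    best = [math.inf] * n
    best[0] = 0
    children = [[] for _ in range(n)]
    for _ in range(n):
        u = min((v for v in range(n) if not in_tree[v]), key=lambda v: best[v])
        in_tree[u] = True
        if u != 0:
            children[parent[u]].append(u)
        for v in range(n):
            if not in_tree[v] and dist[u][v] < best[v]:
                best[v], parent[v] = dist[u][v], u
    tour, stack = [], [0]
    while stack:              # preorder walk of the MST
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    return tour

# Four corners of a unit square: the optimal tour has length 4.
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
tour = double_tree_tour(dist)
length = sum(dist[tour[i]][tour[(i + 1) % 4]] for i in range(4))
print(tour, round(length, 3))
```

Since the MST weight lower-bounds the optimal tour and the walk traverses each tree edge twice, the shortcut tour is at most twice optimal; the ATSP result achieves a constant factor in the much harder directed setting.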

 Watch