ICPC University hosts several lecture series designed as part of the journey to becoming a better problem solver.
In his lecture, Huacheng will give an introduction to succinct data structures. He will explain what a succinct data structure is and present two such data structures in detail: one for the rank problem and one for storing base-B vectors. Finally, he will list a few more results in this area and state an open question.
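The talk's actual constructions are not described above, but the rank problem itself is easy to state: given a bit vector B, rank(i) is the number of 1s among the first i bits. As a minimal non-succinct illustration (a real succinct structure answers rank in O(1) time using only o(n) bits beyond the vector itself), one can precompute prefix counts:

```python
# Illustration of the rank problem on a bit vector, NOT the succinct
# construction from the talk: rank(i) = number of 1s in bits[0..i).
# Precomputing prefix counts costs O(n) extra words; succinct structures
# achieve the same queries with only o(n) extra bits.

class RankVector:
    def __init__(self, bits):
        self.bits = bits
        # prefix[i] = number of 1s among bits[0..i)
        self.prefix = [0] * (len(bits) + 1)
        for i, b in enumerate(bits):
            self.prefix[i + 1] = self.prefix[i] + b

    def rank(self, i):
        """Count of 1s in bits[0..i)."""
        return self.prefix[i]

v = RankVector([1, 0, 1, 1, 0, 1])
print(v.rank(4))  # 1s among the first four bits 1,0,1,1 -> 3
```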
In a recent result known as MIP* = RE, ideas from three disparate fields of study — computational complexity theory, quantum information, and operator algebras — have come together to simultaneously resolve long-standing open problems in each field, including a 44-year-old mystery in mathematics known as Connes’ Embedding Problem. In this talk, Henry will describe the evolution and convergence of ideas behind MIP* = RE: it starts with three landmark discoveries from the 1930s (Turing’s notion of a universal computing machine, the phenomenon of quantum entanglement, and von Neumann’s theory of operators), and ends with some of the most cutting-edge developments from theoretical computer science and quantum computing.
Mikhail will speak about solving the “Expected Damage” problem in Kotlin with probabilities and modular arithmetic.
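The problem statement itself is not given above, but the title hints at a standard contest technique: an expected value that is a rational number p/q is reported as p·q⁻¹ modulo a prime. A minimal sketch of that technique (in Python rather than Kotlin, and using the common contest prime 998244353 as an assumption, since the talk's exact setup is not specified):

```python
# Sketch of expectations under modular arithmetic, a standard contest
# technique: a rational expected value p/q is represented as
# p * q^{-1} mod a prime. The prime below is an assumption, not from the talk.

MOD = 998_244_353

def inv(x: int) -> int:
    """Modular inverse via Fermat's little theorem (valid since MOD is prime)."""
    return pow(x, MOD - 2, MOD)

def expected_mod(outcomes):
    """Expected value of events given as (prob numerator, prob denominator, value)."""
    total = 0
    for num, den, value in outcomes:
        total = (total + num * inv(den) % MOD * value) % MOD
    return total

# A fair coin dealing 3 or 5 damage: E = (3 + 5) / 2 = 4
print(expected_mod([(1, 2, 3), (1, 2, 5)]))  # -> 4
```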
Multiplication and integer sorting are among the most fundamental computational problems, yet their true complexities remain elusive. For multiplying two n-bit numbers, breakthrough work by Harvey and van der Hoeven (2019) gives a boolean circuit of size O(n log n). Is this optimal? For sorting integers, Radix-Sort is a decades-old algorithm that beats comparison-based sorting algorithms when the number of bits in a key is small; in particular, sorting n integers of O(log n) bits each can be done in linear time. However, many practical applications require sorting huge volumes of integers. When the integers to be sorted take up more memory than the available RAM, most of them must reside on slow secondary storage, and the bottleneck is no longer the number of CPU instructions but the number of disk accesses performed by the sorting algorithm. In such external-memory settings, no integer sorting algorithms faster than the comparison-based ones are known. Are comparison-based algorithms optimal for sorting integers in external memory? In this talk, we answer both questions affirmatively, conditioned on a widely believed conjecture in the seemingly completely unrelated area of network coding. The area of network coding studies how many bits can be transmitted between different senders and receivers in a communication network with bandwidth constraints on the edges.
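The linear-time claim for small keys can be made concrete with least-significant-digit Radix-Sort: using base n, keys of O(log n) bits need only O(1) stable counting passes, for O(n) total work. A short sketch (this is the RAM-model algorithm for non-negative integers; it says nothing about the external-memory question the talk addresses):

```python
# LSD Radix-Sort for non-negative integers. With base = n, a key of
# O(log n) bits has O(1) digits, so the O(n)-time passes below run a
# constant number of times, giving linear total time in the RAM model.

def radix_sort(a):
    n = len(a)
    if n <= 1:
        return a[:]
    base = max(2, n)      # base n => O(1) passes for O(log n)-bit keys
    out = a[:]
    shift = 1
    while shift <= max(out):
        # One stable pass, bucketing by the current base-`base` digit.
        buckets = [[] for _ in range(base)]
        for x in out:
            buckets[(x // shift) % base].append(x)
        out = [x for b in buckets for x in b]
        shift *= base
    return out

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# -> [2, 24, 45, 66, 75, 90, 170, 802]
```

Stability of each pass is what makes the digit-by-digit approach correct: after processing digit k, the array is sorted on the low k+1 digits.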