# Research

My recent research has two main thrusts: graph algorithms and algorithms under uncertainty. A few representative recent projects in these areas are listed below. For a more comprehensive listing, please see my publications.

## Graph Algorithms

In graph algorithms, my research focuses on the study of cuts, flows, and connectivity in graphs. Fundamentally, graphs are mathematical abstractions of networked entities such as individuals, computers, and autonomous systems. A central goal in graph algorithms research is therefore to understand the robustness of the connections between these entities and their tolerance to failures. In this context, my work has touched on a variety of fundamental questions, including minimum cuts, cut sparsification, network reliability, and maximum flows.
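
To ground the terminology, here is a minimal sketch of the classic randomized contraction algorithm for global minimum cut — textbook background for this line of work, not the method of any particular paper below. The function name and interface are illustrative.

```python
import random

def karger_min_cut(edges, n, trials=None):
    """Estimate the global min cut of an undirected multigraph.

    edges: list of (u, v) pairs over vertices labeled 0..n-1.
    Each trial contracts randomly chosen edges until two supernodes
    remain; a single trial preserves some min cut with probability
    at least 2 / (n * (n - 1)), so repeating boosts the success rate.
    """
    if trials is None:
        trials = n * n  # enough repetitions for a good success probability
    best = len(edges)
    for _ in range(trials):
        # Union-find over vertices; contracting an edge merges components.
        parent = list(range(n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path halving
                x = parent[x]
            return x

        remaining = n
        pool = edges[:]
        random.shuffle(pool)  # random contraction order
        for u, v in pool:
            if remaining == 2:
                break
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv  # contract the edge
                remaining -= 1
        # Cut value = number of original edges crossing the two supernodes.
        cut = sum(1 for u, v in edges if find(u) != find(v))
        best = min(best, cut)
    return best
```

For example, on a 4-cycle the algorithm returns 2, the size of the minimum cut.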

While graph connectivity has been a focal point of algorithms research for many decades, our understanding of these problems varies substantially along the following axes:

- **Directed vs. undirected graphs:** For many connectivity problems, most of the progress has been on undirected graphs, and our understanding of the more general directed setting lags significantly behind.

- **Vertex vs. edge connectivity:** In most cases, we have much better algorithms for edge connectivity than for vertex connectivity, although the latter is just as important for studying the inherent robustness of network connections.

- **Graphs vs. hypergraphs:** Hypergraphs generalize graphs to capture richer connectivity structures and are useful in many applications. Nevertheless, our understanding of connectivity problems in hypergraphs generally lags behind that for graphs.

Much of my recent work in graph algorithms has focused on trying to bridge these gaps, or at least push the boundaries in the less-understood parts of the graph connectivity landscape. For example, see these recent papers (the complete list is here):

- Ruoxu Cen, Yu Cheng, Debmalya Panigrahi, Kevin Sun. Sparsification of Directed Graphs via Cut Balance. **ICALP** 2021.

- Jason Li and Debmalya Panigrahi. Approximate Gomory-Hu Tree is Faster than *n-1* Max-flows. **STOC** 2021.

- Jason Li, Danupon Nanongkai, Debmalya Panigrahi, Thatchaphol Saranurak, and Sorrachai Yingchareonthawornchai. Vertex Connectivity in Poly-logarithmic Max-flows. **STOC** 2021.

- Jason Li, Debmalya Panigrahi. Deterministic Min-cut in Poly-logarithmic Max-flows. [pdf] **FOCS** 2020.

- Kyle Fox, Debmalya Panigrahi, Fred Zhang. Minimum Cut and Minimum k-Cut in Hypergraphs via Branching Contractions. [pdf] **SODA** 2019.

- Wai-Shing Fung, Ramesh Hariharan, Nicholas J. A. Harvey, Debmalya Panigrahi. A General Framework for Graph Sparsification. **SIAM Journal on Computing** 2019 (preliminary version in **STOC** 2011).

- Mohsen Ghaffari, David R. Karger, Debmalya Panigrahi. Random Contractions and Sampling for Hypergraph and Hedge Connectivity. [pdf] **SODA** 2017.

## Algorithms under Uncertainty

While the traditional model of algorithm design assumes that the entire input is known to the algorithm upfront, this is often not the case in practice, where algorithms need to solve optimization problems in dynamic environments. The classic framework for modeling this uncertainty is that of online algorithms, where one assumes no knowledge of the future and optimizes for a worst-case input. Other popular frameworks include stochastic optimization, robust algorithms, and dynamic algorithms.

### Algorithms and Machine Learning

I am particularly interested in the role of machine learning in mitigating the effects of uncertainty on algorithmic performance. In recent years, tremendous advances in ML have given us the ability to predict an uncertain future. But these predictions come with caveats: they are almost never precisely correct, and can occasionally be entirely wrong. This motivates the following research themes:

- **Algorithms with ML predictions:** Can we design algorithms that take advantage of ML predictions while remaining robust to the possibility that the predictions are incorrect?

- **ML predictions for algorithms:** Conversely, can we redesign learning algorithms to make good predictions for optimization problems, i.e., predictions that improve algorithmic performance rather than minimize standard notions of statistical learning error?
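
As a concrete illustration of the first theme, below is a minimal sketch of a learning-augmented rent-or-buy (ski rental) strategy. It follows the standard robustness-consistency template from the learning-augmented algorithms literature rather than the algorithm of any specific paper listed here; the trust parameter `lam` and function names are illustrative.

```python
import math

def buy_day(b, predicted_days, lam=0.5):
    """Deterministic learning-augmented rent-or-buy.

    b: cost of buying (renting costs 1 per day).
    predicted_days: an ML prediction of the total number of days.
    lam in (0, 1] trades off trust in the prediction:
      cost <= (1 + lam) * OPT when the prediction is accurate,
      cost <= (1 + 1/lam) * OPT no matter how wrong it is.
    Returns the day on which to buy (rent on all earlier days).
    """
    if predicted_days >= b:
        return math.ceil(lam * b)   # prediction says buy: commit early
    return math.ceil(b / lam)       # prediction says rent: commit late

def total_cost(b, actual_days, day_bought):
    """Cost incurred: rent until day_bought, then buy if still active."""
    if actual_days < day_bought:
        return actual_days          # never reached the buying day
    return (day_bought - 1) + b
```

For instance, with buy cost 10 and `lam = 0.5`, a high prediction triggers a purchase on day 5: if the season really is long, the cost is 14 against an optimum of 10 (within the 1.5 consistency factor), while even a completely wrong prediction keeps the cost within a factor of 3 of optimal.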

Much of my recent work in algorithms under uncertainty has focused on trying to address these questions. For example, see these recent papers (the complete list is here):

- Keerti Anand, Rong Ge, Debmalya Panigrahi. Customizing ML Predictions for Online Algorithms. **ICML** 2020.

- Zhihao Jiang, Debmalya Panigrahi, Kevin Sun. Online Algorithms for Weighted Paging with Predictions. [pdf] **ICALP** 2020.

- Sreenivas Gollapudi, Debmalya Panigrahi. Online Algorithms for Rent-Or-Buy with Expert Advice. [pdf] **ICML** 2019.

### Online and Robust Algorithms

I am also generally interested in the design of algorithms that are robust to uncertainty in the input, particularly online algorithms (where the input requirements arrive over time), robust algorithms (where the input is provided upfront but is imprecise or inaccurate), and dynamic algorithms (where the input requirements can change arbitrarily over time). I have explored this area along the following research themes:

- **Online algorithms with delays and deadlines:** Traditionally, an online algorithm must satisfy each requirement immediately on arrival. In practice, however, many online and dynamic systems only need to satisfy each requirement within a stipulated time in the future, or are subject to a penalty that grows the longer a request is left unserved. Can we design online algorithms for this scenario?

- **Online and dynamic algorithms with recourse:** Online algorithms often suffer from pessimistic lower bounds because all algorithmic decisions are irrevocable. If one instead allows a bounded recourse budget for reversing previous decisions, can we approach offline performance guarantees? This is closely related to the study of dynamic algorithms, where the recourse budget is the time spent updating the solution in response to an online input.

- **Robust algorithms for input ranges:** Can we design algorithms for problems where the input parameters are not precise but are instead specified as ranges of possible values? For instance, in routing on a network, the actual delay on an edge may not be known in advance, but a range of possible delays can be derived from historical performance.
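
To make the input-range setting concrete, here is a minimal baseline sketch: plan a route against the worst case by running Dijkstra's algorithm on the upper endpoints of the delay ranges. This conservative baseline is only a starting point (the research above targets stronger, regret-style guarantees); the function name and interface are illustrative.

```python
import heapq

def robust_shortest_path(n, edges, src, dst):
    """Shortest path when each edge delay is only known as a range.

    edges: list of (u, v, lo, hi) with the true delay in [lo, hi].
    Runs Dijkstra on the upper bounds, so the returned delay bound
    holds for every possible realization of the edge delays.
    Returns (worst_case_delay, path).
    """
    adj = [[] for _ in range(n)]
    for u, v, lo, hi in edges:
        adj[u].append((v, hi))
        adj[v].append((u, hi))  # undirected network
    dist = [float("inf")] * n
    prev = [None] * n
    dist[src] = 0
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # stale queue entry
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(pq, (d + w, v))
    # Reconstruct the path from the predecessor pointers.
    path, node = [], dst
    while node is not None:
        path.append(node)
        node = prev[node]
    return dist[dst], path[::-1]
```

A natural refinement, studied in robust optimization, is to minimize the maximum *regret* against the best route in hindsight rather than the absolute worst-case delay.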

Some of my recent work along these lines of research appears below (the complete list is here):

- Arun Ganesh, Bruce Maggs, and Debmalya Panigrahi. Universal Algorithms for Clustering Problems. **ICALP** 2021.

- Arun Ganesh, Bruce Maggs, Debmalya Panigrahi. Robust Algorithms for TSP and Steiner Tree. [pdf] **ICALP** 2020.

- Anupam Gupta, Amit Kumar, Debmalya Panigrahi. Caching with Time Windows. [pdf] **STOC** 2020.

- Amir Abboud, Raghavendra Addanki, Fabrizio Grandoni, Debmalya Panigrahi, Barna Saha. Dynamic Set Cover: Improved Algorithms and Lower Bounds. [pdf] **STOC** 2019.

- Yossi Azar, Arun Ganesh, Rong Ge, Debmalya Panigrahi. Online Service with Delay. [pdf] **STOC** 2017.

- Anupam Gupta, Ravishankar Krishnaswamy, Amit Kumar, Debmalya Panigrahi. Online and Dynamic Algorithms for Set Cover. [pdf] **STOC** 2017.