(As is the convention in theoretical computer science, unless stated otherwise the author names are listed alphabetically)

Thesis / Survey / Book Articles

Abstract: The bin packing problem has been a cornerstone of approximation algorithms and has been extensively studied since the early seventies. In the
classical bin packing problem, we are given a list of real numbers in the range (0, 1],
and the goal is to place them in a minimum number of bins so that no bin holds numbers summing to more than 1. In this thesis we study approximation algorithms for
three generalizations of bin packing: geometric bin packing, vector bin packing and
weighted bipartite edge coloring.

Arindam Khan, PhD Thesis, Advisor: Prasad Tetali.

Thesis committee members: Nikhil Bansal (reader), Santosh Vempala, Dana Randall, Santanu Dey, Sebastian Pokutta.

Georgia Institute of Technology, Atlanta, USA, 2015.
(pdf)

Abstract: The bin packing problem is a well-studied problem in combinatorial optimization. In the classical bin
packing problem, we are given a list of real numbers in (0, 1] and the goal is to place them in a minimum
number of bins so that no bin holds numbers summing to more than 1. The problem is extremely important
in practice and finds numerous applications in scheduling, routing and resource allocation problems.
Theoretically, the problem has rich connections with discrepancy theory, iterative methods, and entropy
rounding, and has led to the development of several algorithmic techniques. In this survey we consider
approximation and online algorithms for several classical generalizations of the bin packing problem, such
as geometric bin packing, vector bin packing and various other related problems. There is also a vast
literature on mathematical models and exact algorithms for bin packing. However, this survey does not
address such exact algorithms.
In two-dimensional geometric bin packing, we are given a collection of rectangular items to be
packed into a minimum number of unit size square bins. This variant has a lot of applications in cutting
stock, vehicle loading, pallet packing, memory allocation and several other logistics and robotics related
problems. In d-dimensional vector bin packing, each item is a d-dimensional vector that needs to be
packed into unit vector bins. This problem is of great significance in resource constrained scheduling and
in recent virtual machine placement in cloud computing. We also consider several other generalizations
of bin packing such as geometric knapsack, strip packing and other related problems such as vector
scheduling, vector covering, etc. We survey algorithms for these problems in the offline and online settings,
and also mention results for several important special cases. We briefly mention related techniques used
in the design and analysis of these algorithms. We conclude with a list of open problems.
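For illustration only (not part of the survey), the classical one-dimensional problem can be sketched with the First Fit heuristic, which places each number into the first bin that still has room; the example items below are arbitrary:

```python
def first_fit(items):
    """First Fit for 1D bin packing: place each item (a number in (0, 1])
    into the first bin whose load stays at most 1; open a new bin otherwise."""
    bins = []  # each entry is the current load (sum of items) of one bin
    for x in items:
        for i, load in enumerate(bins):
            if load + x <= 1.0:
                bins[i] = load + x
                break
        else:
            bins.append(x)  # no existing bin fits: open a new one
    return bins

# Example: six items packed greedily into unit bins.
loads = first_fit([0.5, 0.7, 0.5, 0.3, 0.4, 0.6])
```

First Fit is a classical heuristic with a well-known asymptotic approximation ratio of 1.7; it is shown here only to make the problem statement concrete.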

Henrik I. Christensen, Arindam Khan, Sebastian Pokutta, Prasad Tetali.

A well-studied variant of Two-dimensional Bin Packing (2BP) is Guillotine Two-dimensional Bin Packing (G2BP), where all rectangles
must be packed in such a way that every rectangle in the packing can be obtained by recursively
applying a sequence of end-to-end axis-parallel cuts, also called guillotine cuts.
We also give an APTAS for 2BP for skewed instances, although general 2BP does not admit an APTAS.
We also show that the worst-case ratio between the optimal G2BP and 2BP solutions is 4/3.
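As a toy illustration of the guillotine constraint (not the paper's algorithm), one can test whether a small packing is guillotine separable by exhaustively trying end-to-end cuts and recursing on both sides; the coordinates below are made up:

```python
def guillotine_separable(rects):
    """Return True iff the axis-aligned rectangles (x1, y1, x2, y2) can be
    separated by a recursive sequence of end-to-end (guillotine) cuts.
    Exponential-time backtracking; meant only for tiny examples."""
    if len(rects) <= 1:
        return True
    xs = {r[0] for r in rects} | {r[2] for r in rects}
    ys = {r[1] for r in rects} | {r[3] for r in rects}
    lo_x, hi_x = min(r[0] for r in rects), max(r[2] for r in rects)
    lo_y, hi_y = min(r[1] for r in rects), max(r[3] for r in rects)
    for c in xs:  # try every vertical cut that crosses no rectangle
        if lo_x < c < hi_x and all(r[2] <= c or r[0] >= c for r in rects):
            left = [r for r in rects if r[2] <= c]
            right = [r for r in rects if r[0] >= c]
            if guillotine_separable(left) and guillotine_separable(right):
                return True
    for c in ys:  # try every horizontal cut likewise
        if lo_y < c < hi_y and all(r[3] <= c or r[1] >= c for r in rects):
            bottom = [r for r in rects if r[3] <= c]
            top = [r for r in rects if r[1] >= c]
            if guillotine_separable(bottom) and guillotine_separable(top):
                return True
    return False
```

The classical counterexample is a "pinwheel" of four rectangles, which no single end-to-end cut can separate.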

Arindam Khan, and Eklavya Sharma.

International Conference on Approximation Algorithms for Combinatorial Optimization Problems (APPROX), 2021.
(pdf, video)

We study the Nonpreemptive Peak Demand Minimization (NPDM) problem, where we are given a set
of jobs, specified by their processing times and energy requirements. The goal is to schedule all jobs
within a fixed time period such that the peak load (the maximum total energy requirement at any
time) is minimized. This problem has recently received significant attention due to its relevance in
smart grids. Theoretically, the problem is related to the classical strip packing problem (SP). In
SP, a given set of axis-aligned rectangles must be packed into a fixed-width strip, such that the
height of the strip is minimized. NPDM can be modeled as strip packing with slicing and stacking
constraints: each rectangle may be cut vertically into multiple slices and the slices may be packed
into the strip as individual pieces. The stacking constraint forbids solutions where two slices of the
same rectangle are intersected by the same vertical line. Nonpreemption enforces the slices to be
placed in contiguous horizontal locations (but may be placed at different vertical locations).
We obtain a (5/3 + ε)-approximation algorithm for the problem. We also provide an asymptotic
efficient polynomial-time approximation scheme (AEPTAS) which generates a schedule for almost
all jobs with energy consumption (1 + ε)OPT. The remaining jobs fit into a thin container of height
1. The previous best known result for NPDM was a 2.7-approximation based on FFDH [40]. One of our key ideas is
providing several new lower bounds on the optimal solution of a geometric packing, which could be
useful in other related problems. These lower bounds help us obtain approximate solutions based
on Steinberg's algorithm in many cases. In addition, we show how to split schedules generated by
the AEPTAS into a few segments and to rearrange the corresponding jobs to insert the thin container
mentioned above.
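As a small illustration of the objective (not an algorithm from the paper, and assuming jobs are given as (start, length, energy) triples), the peak load of a fixed schedule can be computed with a sweep over start and end events:

```python
def peak_demand(schedule):
    """Peak load of a schedule: each job (start, length, energy) is active
    on the half-open interval [start, start + length). Returns the maximum
    total energy demanded at any instant."""
    events = []
    for start, length, energy in schedule:
        events.append((start, energy))            # job switches on
        events.append((start + length, -energy))  # job switches off
    # Sorting tuples processes off-events before on-events at equal times,
    # matching the half-open activity intervals.
    events.sort()
    load = peak = 0
    for _, delta in events:
        load += delta
        peak = max(peak, load)
    return peak
```

For example, jobs (0, 2, 3), (1, 2, 2), (3, 1, 4) overlap on [1, 2) with total demand 5, which is the peak.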

Max Deppert, Klaus Jansen, Arindam Khan, Malin Rau, and Malte Tutas.

International Conference on Approximation Algorithms for Combinatorial Optimization Problems (APPROX), 2021.
(pdf, video)

In the two-dimensional geometric knapsack problem, we are given a set of n axis-aligned rectangular
items and an axis-aligned square-shaped knapsack. Each item has integral width, integral height
and an associated integral profit. The goal is to find a (non-overlapping axis-aligned) packing
of a maximum profit subset of rectangles into the knapsack. A well-studied and frequently used
constraint in practice is to allow only packings that are guillotine separable, i.e., every rectangle in
the packing can be obtained by recursively applying a sequence of edge-to-edge axis-parallel cuts
that do not intersect any item of the solution. In this paper we study approximation algorithms
for the geometric knapsack problem under guillotine cut constraints. We present polynomial time
(1 + ε)-approximation algorithms for the cases with and without allowing rotations by 90 degrees,
assuming that all input numeric data are polynomially bounded in n. In comparison, the best-known
approximation factor for this setting is 3 + ε [Jansen-Zhang, SODA 2004], even in the cardinality
case where all items have the same profit.
Our main technical contribution is a structural lemma which shows that any guillotine packing
can be converted into another structured guillotine packing with almost the same profit. In this
packing, each item is completely contained in one of a constant number of boxes and L-shaped
regions, inside which the items are placed by a simple greedy routine. In particular, we provide a
clean sufficient condition under which such a packing obeys the guillotine cut constraints, which might be
useful in other settings where these constraints are imposed.

Arindam Khan, Arnab Maiti, Amatya Sharma, Andreas Wiese.

International Symposium on Computational Geometry (SoCG) 2021.
(pdf, video)

In the 2-Dimensional Knapsack problem (2DK) we are given a square knapsack and a collection
of n rectangular items with integer sizes and profits. Our goal is to find the most profitable subset of
items that can be packed non-overlappingly into the knapsack. The currently best known polynomial-time approximation factor for 2DK is 17/9 + ε < 1.89, and there is a (3/2 + ε)-approximation
algorithm if we are allowed to rotate items by 90 degrees [Gálvez et al., FOCS 2017]. In this paper,
we give (4/3 + ε)-approximation algorithms in polynomial time for both cases, assuming that all
input data are integers polynomially bounded in n.
Gálvez et al.’s algorithm for 2DK partitions the knapsack into a constant number of rectangular
regions plus one L-shaped region and packs items into those in a structured way. We generalize
this approach by allowing up to a constant number of more general regions that can have the shape
of an L, a U, a Z, a spiral, and more, and therefore obtain an improved approximation ratio. In
particular, we present an algorithm that computes the essentially optimal structured packing into
these regions.

Waldo Galvez, Fabrizio Grandoni, Arindam Khan, Diego Ramirez-Romero, Andreas Wiese.

International Symposium on Computational Geometry (SoCG) 2021.
(pdf, video)

We study the knapsack problem with group fairness constraints. The input of the problem consists of
a knapsack of bounded capacity and a set of items, each item belongs to a particular category and has an
associated weight and value. The goal of this problem is to select a subset of items such that all categories
are fairly represented, the total weight of the selected items does not exceed the capacity of the knapsack,
and the total value is maximized. We study fairness parameters such as bounds on the total value
of items from each category, the total weight of items from each category, and the total number of items
from each category, and we give approximation algorithms for these problems. These fairness notions
can also be extended to the min-knapsack problem. Fair knapsack problems encompass various
important problems, such as participatory budgeting, fair budget allocation, and advertising.
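For intuition only, a brute-force sketch of knapsack with per-category count bounds (the paper gives approximation algorithms, not this exhaustive search; the item data and bounds below are invented):

```python
from itertools import combinations

def fair_knapsack(items, capacity, category_bounds):
    """Exhaustive search for fair knapsack with count-based fairness:
    items are (category, weight, value) triples; category_bounds maps a
    category to (min_count, max_count) on the selected items from it."""
    best = (0, ())
    n = len(items)
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            chosen = [items[i] for i in subset]
            if sum(w for _, w, _ in chosen) > capacity:
                continue  # violates the knapsack capacity
            counts = {}
            for c, _, _ in chosen:
                counts[c] = counts.get(c, 0) + 1
            if all(lo <= counts.get(c, 0) <= hi
                   for c, (lo, hi) in category_bounds.items()):
                value = sum(v for _, _, v in chosen)
                best = max(best, (value, subset))
    return best
```

The same scaffold adapts to weight- or value-based fairness by replacing the count check; of course, only small instances are tractable this way.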

Deval Patel, Arindam Khan, Anand Louis.

International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2021
(pdf)

Best Fit is a well-known online algorithm for the bin packing problem, where a collection of one-dimensional items has to be packed into a minimum number of unit-sized bins. In a seminal work, Kenyon [SODA 1996] introduced the (asymptotic) random order ratio as an alternative performance measure for online algorithms. Here, an adversary specifies the items, but the order of arrival is drawn uniformly at random. Kenyon's result establishes lower and upper bounds of 1.08 and 1.5, respectively, for the random order ratio of Best Fit. Although this analysis model has become increasingly popular in the field of online algorithms, no progress had been made for the Best Fit algorithm since Kenyon's result.
We study the random order ratio of Best Fit and tighten the long-standing gap by establishing an improved lower bound of 1.10. For the case where all items are larger than 1/3, we show that the random order ratio converges quickly to 1.25. It is the existence of such large items that crucially determines the performance of Best Fit in the general case. Moreover, this case is closely related to the classical maximum-cardinality matching problem in the fully online model. As a side product, we show that Best Fit satisfies a monotonicity property on such instances, unlike in the general case.
In addition, we initiate the study of the absolute random order ratio for this problem. In contrast to asymptotic ratios, absolute ratios must hold even for instances that can be packed into a small number of bins. We show that the absolute random order ratio of Best Fit is at least 1.3. For the case where all items are larger than 1/3, we derive upper and lower bounds of 21/16 and 1.2, respectively.

Susanne Albers, Arindam Khan, Leon Ladewig.

45th International Symposium on Mathematical Foundations of Computer Science (MFCS), 2020
(pdf, video)

Recipient of the best paper award at MFCS 2020.

Guillotine separability of rectangles has recently gained prominence in combinatorial optimization,
computational geometry, and combinatorics. Consider a large stock unit (say, of glass or wood) from which
we need to cut out a set of required rectangles. Many cutting technologies allow only
end-to-end cuts called guillotine cuts. Guillotine cuts occur in stages. Each stage consists of either
only vertical cuts or only horizontal cuts. In k-stage packing, the number of cuts to obtain each
rectangle from the initial packing is at most k (plus an additional trimming step to separate the
rectangle itself from a waste area). Pach and Tardos studied the following question: Given a set
of n axis-parallel rectangles (in the weighted case, each rectangle has an associated weight), cut out
as many rectangles (resp. weight) as possible using a sequence of guillotine cuts. They provide a
guillotine cutting sequence that recovers 1/(2 log n)-fraction of rectangles (resp. weights). Abed et
al. claimed that a guillotine cutting sequence can recover a constant fraction for axis-parallel
squares. They also conjectured that for any set of rectangles, there exists a sequence of axis-parallel
guillotine cuts that recovers a constant fraction of rectangles. This conjecture, if true, would
yield a combinatorial O(1)-approximation for Maximum Independent Set of Rectangles (MISR),
a long-standing open problem. We show that the conjecture is not true if we only allow o(log log n)
stages (resp. o(log n / log log n) stages for the weighted case). On the positive side, we show a simple
O(n log n)-time 2-stage cut sequence that recovers 1/(1 + log n)-fraction of rectangles. We improve
the extraction of squares by showing that 1/40-fraction (resp. 1/160 in the weighted case) of squares
can be recovered using guillotine cuts. We also show that an O(1)-fraction of rectangles, even in the weighted
case, can be recovered for many special cases of rectangles, e.g., fat (bounded width/height), δ-large
(large in one of the dimensions), etc. We show that this implies an O(1)-factor approximation for
Maximum Weighted Independent Set of Rectangles, the weighted version of MISR, for these classes
of rectangles.

Arindam Khan, Madhusudhan Reddy Pittu.

International Conference on Approximation Algorithms for Combinatorial Optimization Problems (APPROX), 2020
(pdf, video)

In the Strip Packing problem, we are given a vertical half-strip [0, W] × [0, +∞) and a collection of
open rectangles of width at most W. Our goal is to find an axis-aligned (non-overlapping) packing of
such rectangles into the strip such that the maximum height OPT spanned by the packing is as small
as possible. Strip Packing generalizes classical well-studied problems such as Makespan Minimization
on identical machines (when rectangle widths are identical) and Bin Packing (when rectangle heights
are identical). It has applications in manufacturing, scheduling and energy consumption in smart
grids among others. It is NP-hard to approximate this problem within a factor (3/2 − ε) for any
constant ε > 0 by a simple reduction from the Partition problem. The current best approximation
factor for Strip Packing is (5/3+ε) by Harren et al. [Computational Geometry ’14], and it is achieved
with a fairly complex algorithm and analysis.
It seems plausible that Strip Packing admits a (3/2 + ε)-approximation. We make progress in
that direction by achieving such tight approximation guarantees for a special family of instances,
which we call skewed instances. As standard in the area, for a given constant parameter δ > 0, we
call large the rectangles with width at least δW and height at least δOPT, and skewed the remaining
rectangles. If all the rectangles in the input are large, then one can easily compute the optimal
packing in polynomial time (since the input can contain only a constant number of rectangles). We
consider the complementary case where all the rectangles are skewed. This second case retains a
large part of the complexity of the original problem; in particular, it is NP-hard to approximate
within a factor (3/2 − ε) and we provide an (almost) tight (3/2 + ε)-approximation algorithm.
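For background, simple shelf heuristics such as Next Fit Decreasing Height (NFDH) already give constant-factor guarantees for Strip Packing; below is a minimal sketch (not the (3/2 + ε)-algorithm of this paper; the instance in the usage note is arbitrary):

```python
def nfdh(rects, W):
    """Next Fit Decreasing Height: sort rectangles (width, height) by
    decreasing height and fill shelves left to right; when a rectangle no
    longer fits, close the shelf and open a new one on top of it.
    Returns the total strip height used. Assumes every width is <= W."""
    shelf_width = 0.0   # horizontal space used on the current shelf
    shelf_height = 0.0  # height of the current shelf (its tallest rectangle)
    height = 0.0        # total height of closed shelves
    for w, h in sorted(rects, key=lambda r: -r[1]):
        if shelf_width + w > W:  # rectangle does not fit: open a new shelf
            height += shelf_height
            shelf_width, shelf_height = 0.0, 0.0
        if shelf_height == 0.0:
            shelf_height = h  # first (tallest) rectangle defines the shelf
        shelf_width += w
    return height + shelf_height
```

For instance, four rectangles of width 0.5 and heights 2, 2, 1, 1 in a strip of width 1 use two shelves of heights 2 and 1, for total height 3.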

Waldo Galvez, Fabrizio Grandoni, Afrouz Jabal Ameli, Klaus Jansen, Arindam Khan, Malin Rau.

International Conference on Approximation Algorithms for Combinatorial Optimization Problems (APPROX), 2020
(pdf, video)

The knapsack problem is one of the classical problems in combinatorial optimization: Given a set
of items, each specified by its size and profit, the goal is to find a maximum profit packing into a
knapsack of bounded capacity. In the online setting, items are revealed one by one and the decision,
if the current item is packed or discarded forever, must be done immediately and irrevocably upon
arrival. We study the online variant in the random order model where the input sequence is a
uniform random permutation of the item set.
We develop a randomized (1/6.65)-competitive algorithm for this problem, outperforming the
current best algorithm of competitive ratio 1/8.06 [Kesselheim et al. SIAM J. Comp. 47(5)]. Our
algorithm is based on two new insights: We introduce a novel algorithmic approach that employs
two given algorithms, optimized for restricted item classes, sequentially on the input sequence. In
addition, we study and exploit the relationship of the knapsack problem to the 2-secretary problem.
The generalized assignment problem (GAP) includes, besides the knapsack problem, several
important problems related to scheduling and matching. We show that in the same online setting,
applying the proposed sequential approach yields a (1/6.99)-competitive randomized algorithm for
GAP. Again, our proposed algorithm outperforms the current best result of competitive ratio 1/8.06
[Kesselheim et al. SIAM J. Comp. 47(5)].
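As a toy illustration of the random order model (explicitly not the paper's algorithm), a common sample-then-threshold heuristic observes a prefix of the random permutation without packing, then packs later items that beat a learned profit-density threshold; the sampling fraction and instance below are arbitrary:

```python
import random

def sample_then_pack(items, capacity, sample_frac=0.5, seed=0):
    """Toy random-order knapsack heuristic: items are (size, profit) pairs
    with size > 0. Observe a random prefix, record the best profit density
    seen, then greedily pack later items whose density meets it."""
    rng = random.Random(seed)
    order = items[:]
    rng.shuffle(order)  # the random order model: uniform random permutation
    cut = int(len(order) * sample_frac)
    sample, rest = order[:cut], order[cut:]
    threshold = max((p / s for s, p in sample), default=0.0)
    load = profit = 0.0
    for s, p in rest:
        if p / s >= threshold and load + s <= capacity:
            load += s
            profit += p
    return profit
```

This simple scheme conveys why random arrival order helps (the prefix acts as a free sample of the instance), although the competitive ratios in the paper require a far more careful sequential combination of algorithms.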

Susanne Albers, Arindam Khan, Leon Ladewig.

International Conference on Approximation Algorithms for Combinatorial Optimization Problems (APPROX), 2019
(pdf)

Also selected for presentation at Highlights of Algorithms (HALG), 2020. Journal version published in Algorithmica, 2021.

We study the two-dimensional geometric knapsack problem (2DK), a geometric variant of the classical knapsack problem. In this problem we are given a set of axis-aligned rectangular items, each one with an associated profit, and an axis-aligned square knapsack. The goal is to find a (non-overlapping) packing of a maximum profit subset of items inside the knapsack without rotating items. This is a very well-studied optimization problem and finds applications in scheduling, memory allocation, advertisement placement, etc. The best-known polynomial-time approximation factor for this problem (even just in the cardinality case) is 2 + ε [Jansen and Zhang, SODA 2004].
After more than a decade, in this paper we break the 2-approximation barrier, achieving a polynomial-time 17/9 + ε < 1.89 approximation, which improves to 558/325 + ε < 1.72 in the cardinality case.
We also consider the variant of the problem with rotations (2DKR), where the items can be rotated by 90 degrees. In this case too, the best-known polynomial-time approximation factor (even for the cardinality case) is 2 + ε [Jansen and Zhang, SODA 2004]. Exploiting part of the machinery developed for 2DK plus a few additional ideas, we obtain a polynomial-time (3/2 + ε)-approximation for 2DKR, which improves to 4/3 + ε in the cardinality case.

Waldo Galvez, Fabrizio Grandoni, Sandy Heydrich, Salvatore Ingala, Arindam Khan, Andreas Wiese.

58th Annual IEEE Symposium on Foundations of Computer Science (FOCS), 2017.
(pdf)

My FOCS talk: (Slides, Video).
A longer 64-page version: (Arxiv)

Gives improved approximation for a classical problem after thirteen years.

Also selected for presentation at Highlights of Algorithms (HALG), 2018.

We study the strip packing problem, a classical packing problem which generalizes both bin packing and makespan minimization. Here we are given a set of axis-parallel rectangles in the two-dimensional plane and the goal is to pack them in a vertical strip of fixed width such that the height of the obtained packing is minimized. The packing must be non-overlapping and the rectangles cannot be rotated. A reduction from the partition problem shows that no approximation better than 3/2 is possible for strip packing in polynomial time (assuming P != NP). Nadiradze and Wiese [SODA16] overcame this barrier by presenting a (7/5 + epsilon)-approximation algorithm in pseudo-polynomial time (PPT). As the problem is strongly NP-hard, it does not admit an exact PPT algorithm (though a PPT approximation scheme might exist).
In this paper we make further progress on the PPT approximability of strip packing by presenting a (4/3 + epsilon)-approximation algorithm. Our result is based on a non-trivial repacking of some rectangles in the "empty space" left by the construction of Nadiradze and Wiese, and in some sense pushes their approach to its limit. Our PPT algorithm can be adapted to the case where we are allowed to rotate the rectangles by 90 degrees, achieving the same approximation factor and breaking the polynomial-time approximation barrier of 3/2 for the case with rotations as well.

Foundations of Software Technology and Theoretical Computer Science (FSTTCS), 2016.
(pdf)

My FSTTCS talk: (Slides).
A longer 20-page version: (Arxiv)

We study the $d$-dimensional vector bin packing problem, a well-studied generalization of bin packing arising in resource
allocation and scheduling problems.
Here we are given a set of $d$-dimensional vectors $v_1, \ldots, v_n$ in $[0,1]^d$,
and the goal is to pack them into the least number of bins so that for each bin $B$, the sum of the vectors in it is at most $1$ in every dimension, i.e., $||\sum_{v_i \in B} v_i ||_{\infty} \le 1$.
For the $2$-dimensional case we give an asymptotic approximation guarantee
of $1+\ln(1.5)+\epsilon \approx (1.405+\epsilon)$, improving upon the previous bound of $1+\ln 2+\epsilon \approx (1.693+\epsilon)$. We also give an almost tight $(1.5+\epsilon)$
absolute approximation guarantee,
improving upon the previous bound of 2 [Kellerer et al., 2003].
For the $d$-dimensional case, we get a $1.5+\ln(\frac{d}{2})+o_d(1)+\epsilon$ guarantee, improving upon the previous $(1+\ln d+\epsilon)$ guarantee [Bansal et al., 2009] based on the
Round & Approx (R&A) framework.
Here $(1+\ln d)$ was a natural barrier, as rounding-based algorithms cannot achieve better than a $d$-approximation.
We get around this by exploiting
various structural properties of
(near-)optimal packings, using multi-objective multi-budget matching based techniques, and
expanding the R&A framework to go beyond rounding-based algorithms.
Along the way we also prove several results that could be of independent interest.
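For concreteness, the feasibility condition $||\sum_{v_i \in B} v_i ||_{\infty} \le 1$ can be checked greedily; below is a First Fit sketch for vectors (illustrative only; its guarantee is far weaker than the bounds discussed above):

```python
def vector_first_fit(vectors):
    """First Fit for d-dimensional vector bin packing: a bin can accept
    vector v only if the coordinate-wise load stays at most 1 in every
    dimension. Returns the number of bins used."""
    bins = []  # each bin stores its coordinate-wise load as a list
    for v in vectors:
        for load in bins:
            if all(l + x <= 1.0 for l, x in zip(load, v)):
                for j, x in enumerate(v):
                    load[j] += x
                break
        else:
            bins.append(list(v))  # no bin is feasible: open a new one
    return len(bins)
```

Note that, unlike in 1D, a vector may be blocked by a single congested dimension even when the bin is nearly empty elsewhere, which is the source of the problem's difficulty.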

Nikhil Bansal, Marek Elias, Arindam Khan.

ACM-SIAM Symposium on Discrete Algorithms (SODA), 2016.
(pdf)

Gives improved approximation for a classical problem after a decade.

We consider the weighted bipartite edge coloring problem, which is a generalization of bin packing and edge coloring and is inspired by the study of Clos networks. In the problem, given an edge-weighted
bipartite graph, the goal is to color all edges with the minimum number of colors such that, for each vertex and each color, the total weight of the incident edges of that color is at most one. Chung and Ross conjectured that there is a proper weighted coloring with 2n-1 colors, where n denotes the maximum over all vertices of the number of unit-sized bins needed to pack the weights of edges incident to the vertex.
We give a constructive proper weighted coloring using 20n/9 colors, improving upon the previous bound of 9n/4 colors by Feige et al.
We further show that 11n/5 colors are sufficient when all edge weights are greater than 1/4.

Arindam Khan, Mohit Singh

Foundations of Software Technology and Theoretical Computer Science (FSTTCS), 2015.
(pdf)

We study the two-dimensional bin packing problem with and without rotations. Here we are given a set of two-dimensional rectangular items I and the goal is to pack these into a minimum number of unit square bins. We consider the orthogonal packing case, where the edges of the items must be aligned parallel to the edges of the bin. Our main result is a 1.405-approximation for two-dimensional bin packing with and without rotation, which improves upon a recent 1.5-approximation due to Jansen and Praedel. We also show that a wide class of rounding-based algorithms cannot improve upon the factor of 1.5.

Nikhil Bansal, Arindam Khan.

ACM-SIAM Symposium on Discrete Algorithms (SODA), 2014.
(pdf)

Gives a unified framework for a class of bin packing algorithms called O(1) rounding-based algorithms.
For a more detailed version of the result in this paper, please see chapter III of my PhD thesis.

Abstract: Social networks serve as important platforms for users to express, exchange and form opinions on various topics. Several
opinion dynamics models have been proposed to characterize
how a user iteratively updates her expressed opinion based
on her innate opinion and the opinion of her neighbors. The
extent to which a user is influenced by her neighboring
opinions, as opposed to her own innate opinion, is governed
by her conformity parameter. Characterizing this degree of conformity for users of a social network
is critical for several applications such as debiasing online
surveys and finding social influencers. In this paper, we
address the problem of estimating these conformity values
address the problem of estimating these conformity values
for users, using only the expressed opinions and the social
graph. We pose this problem in a constrained optimization
framework and design efficient algorithms, which we validate
on both synthetic and real-world Twitter data. Using these
estimated conformity values, we then address the problem
of identifying the smallest subset of users in a social graph
that, when seeded initially with some non-neutral opinions,
can accurately explain the current opinion values of users
in the entire social graph. We call this problem seed recovery. Using ideas from compressed sensing, we analyze and
design algorithms for both conformity estimation and seed
recovery, and validate them on real and synthetic data.

Abhimanyu Das, Sreenivas Gollapudi, Arindam Khan and Renato Paes Leme.

ACM Conference on Online Social Networks (COSN), 2014.
(pdf)

Abstract: The problem of ordering a set of entities which contain
inherent ties among them arises in many applications.
The notion of "bucket order" has emerged as a popular
mechanism of ranking in such settings. A bucket
order is an ordered partition of the set of entities into
"buckets". There is a total order on the buckets, but
the entities within a bucket are treated as tied.
In this paper, we focus on discovering bucket order
from data captured in the form of user preferences. We
consider two settings: one in which the discrepancies in
the input preferences are "local" (when collected from
experts) and the other in which discrepancies could
be arbitrary (when collected from a large population).
We present a formal model to capture the setting of
local discrepancies and consider the following question:
"how many experts need to be queried to discover
the underlying bucket order on n entities?" We
prove an upper bound of O(log n^(1/2)). In the case of
arbitrary discrepancies, we model it as the bucket
order problem of discovering a bucket order that
best fits the data (captured as pairwise preference
statistics). We present a new approach which exploits
a connection between the discovery of buckets and the
correlation clustering problem. We present empirical
evaluation of our algorithms on real and artificially
generated datasets.

Sreyash Kenkre, Arindam Khan and Vinayaka Pandit.

SIAM International Conference on Data Mining (SDM), 2011.
(pdf)

Published Journal Papers

We study the two-dimensional geometric knapsack problem (2DK), a geometric variant of the classical knapsack problem. In this problem we are given a set of axis-aligned rectangular items, each one with an associated profit, and an axis-aligned square knapsack. The goal is to find a (non-overlapping) packing of a maximum profit subset of items inside the knapsack without rotating items. This is a very well-studied optimization problem and finds applications in scheduling, memory allocation, advertisement placement, etc. The best-known polynomial-time approximation factor for this problem (even just in the cardinality case) is 2 + ε [Jansen and Zhang, SODA 2004].
After more than a decade, in this paper we break the 2-approximation barrier, achieving a polynomial-time 17/9 + ε < 1.89 approximation, which improves to 558/325 + ε < 1.72 in the cardinality case.
We also consider the variant of the problem with rotations (2DKR), where the items can be rotated by 90 degrees. In this case too, the best-known polynomial-time approximation factor (even for the cardinality case) is 2 + ε [Jansen and Zhang, SODA 2004]. Exploiting part of the machinery developed for 2DK plus a few additional ideas, we obtain a polynomial-time (3/2 + ε)-approximation for 2DKR, which improves to 4/3 + ε in the cardinality case.

Waldo Galvez, Fabrizio Grandoni, Sandy Heydrich, Salvatore Ingala, Arindam Khan, Andreas Wiese.

To appear in ACM Transactions on Algorithms (TALG), 2021.
(pdf)

To be updated.

Susanne Albers, Arindam Khan & Leon Ladewig.

Algorithmica, 2021. (Journal version, Arxiv version).
An extended abstract of this paper was published in MFCS'20.

The knapsack problem is one of the classical problems in combinatorial optimization: Given a set of items, each specified by its size and profit, the goal is to find a maximum profit packing into a knapsack of bounded capacity. In the online setting, items are revealed one by one and the decision, if the current item is packed or discarded forever, must be done immediately and irrevocably upon arrival. We study the online variant in the random order model where the input sequence is a uniform random permutation of the item set. We develop a randomized (1/6.65)-competitive algorithm for this problem, outperforming the current best algorithm of competitive ratio 1/8.06 (Kesselheim et al. in SIAM J Comput 47(5):1939–1964, 2018). Our algorithm is based on two new insights: We introduce a novel algorithmic approach that employs two given algorithms, optimized for restricted item classes, sequentially on the input sequence. In addition, we study and exploit the relationship of the knapsack problem to the 2-secretary problem. The generalized assignment problem (GAP) includes, besides the knapsack problem, several important problems related to scheduling and matching. We show that in the same online setting, applying the proposed sequential approach yields a (1/6.99)-competitive randomized algorithm for GAP. Again, our proposed algorithm outperforms the current best result of competitive ratio 1/8.06 (Kesselheim et al. in SIAM J Comput 47(5):1939–1964, 2018).

Susanne Albers, Arindam Khan & Leon Ladewig.

Algorithmica, 2021. (Journal version, Arxiv version).
An extended abstract of this paper was published in APPROX'19.

On the Matching Augmentation Problem.
The high-level goal of survivable network design is to design (cheap) networks that provide connectivity between pre-specified pairs of nodes despite a few node or edge failures. One of the most basic such problems is the Forest Augmentation Problem (FAP): we are given an undirected graph with $0$-$1$ edge costs. Our goal is to compute a min-cost subgraph which is $2$-edge connected. Intuitively, we are given some existing network (i.e., the $0$-edges), and we need to augment it (with $1$-edges) to make it resilient to one edge fault.
Note that each 2-edge connected component of $0$-edges can be contracted into a single node, hence we can assume w.l.o.g. that $0$-edges induce a forest: this motivates the name of the problem.
FAP is APX-hard [Kortsarz et al.,'04], and the best known approximation factor (that holds also for much more general problems) is $2$ [Jain,'01]. The most relevant special case of FAP where a better approximation is known is when the forest consists of a single spanning tree: the Tree Augmentation Problem (TAP) [Nagamochi,'03; Even et al.,'06; Kortsarz and Nutov,'15].
In this paper we study another relevant special case of FAP, the Matching Augmentation Problem (MAP), where $0$-edges form a matching. This captures an extremal situation where the total number of leaves is relatively small compared to the number of trees (in the opposite situation, one can easily achieve an approximation factor better than $2$ by a reduction to TAP).
For this problem we present an improved $7/4$ approximation algorithm. Our main building block is the construction of a min-cost subgraph where each node has degree at least $2$. This is later augmented to the desired solution via a sequence of non-trivial steps.
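The main building block mentioned above, a min-cost subgraph in which every node has degree at least 2, can be brute-forced on a toy MAP instance to make the object concrete. The instance and function below are hypothetical illustrations; the paper constructs this structure efficiently rather than by enumeration.

```python
from itertools import combinations

def min_cost_deg2_subgraph(n, edges):
    """Brute-force, on a toy instance, the building block from the
    abstract: among all edge subsets in which every node has degree
    at least 2, return one of minimum total cost.
    edges: list of (u, v, cost). Exponential time -- illustration only."""
    best, best_cost = None, float("inf")
    for r in range(len(edges) + 1):
        for sub in combinations(edges, r):
            deg = [0] * n
            for u, v, _ in sub:
                deg[u] += 1
                deg[v] += 1
            if all(d >= 2 for d in deg):
                cost = sum(c for _, _, c in sub)
                if cost < best_cost:
                    best, best_cost = sub, cost
    return best, best_cost

# Toy MAP instance on 4 nodes: 0-edges form the matching {0-1, 2-3};
# 1-edges are candidates for augmentation.
edges = [(0, 1, 0), (2, 3, 0), (1, 2, 1), (3, 0, 1), (0, 2, 1)]
```

On this instance the cheapest degree-≥2 subgraph is the 4-cycle using both matching edges plus two 1-edges, for a total cost of 2.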

Joseph Cheriyan, Jack Dippel, Fabrizio Grandoni, Arindam Khan, Vishnu V. Narayan.

This makes the first progress towards the Forest Augmentation Problem when the forest contains more than one tree.
The other case, the Tree Augmentation Problem (TAP), i.e., when the forest is a spanning tree, is well-studied.

Given a capacitated undirected graph G=(V,E) with a set of terminals K \subset V, a mimicking network is a smaller graph H=(V_H,E_H) which contains the set of terminals K such that, for every bipartition [U, K\U] of the terminals, the cost of the minimum cut separating U from K\U in G is exactly equal to the cost of the minimum cut separating U from K\U in H.
In this work, we improve both the previously known upper bound of 2^(2^k) and the lower bound of (k+1) for mimicking networks, reducing the doubly-exponential gap between them to a single-exponential gap as follows:
Given a graph G, we exhibit a construction of a mimicking network with at most the k-th Hosten-Morris number of vertices (independent of the size of V).
For the lower bound, we show that there exist graphs with k terminals that have no mimicking network with fewer than 2^(k-1) edges.
We also give improved constructions for trees, outerplanar graphs, and bounded-treewidth graphs.
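The defining property of a mimicking network, that every terminal bipartition has exactly the same min-cut cost in G and in H, can be checked directly on a toy example. The sketch below brute-forces min cuts over vertex bipartitions (feasible only at toy sizes) and verifies that a 3-terminal star admits a mimicking network on the terminals alone; the instance is an illustration, not a construction from the paper.

```python
from itertools import combinations

def min_terminal_cut(nodes, edges, side, terminals):
    """Min cut separating the terminals in `side` from the remaining
    terminals, by brute force over all vertex bipartitions that place
    `side` on one shore. edges: list of (u, v, capacity). Toy sizes only."""
    rest = [v for v in nodes if v not in terminals]
    best = float("inf")
    for r in range(len(rest) + 1):
        for extra in combinations(rest, r):
            A = set(side) | set(extra)
            cut = sum(c for u, v, c in edges if (u in A) != (v in A))
            best = min(best, cut)
    return best

# G: star with terminals a, b, c attached to an internal node x.
G_nodes = ["a", "b", "c", "x"]
G_edges = [("a", "x", 1), ("b", "x", 2), ("c", "x", 3)]
terminals = ["a", "b", "c"]
# Candidate mimicking network H on the terminals only: it preserves
# every terminal min cut of G despite having no internal nodes.
H_nodes = ["a", "b", "c"]
H_edges = [("a", "c", 1), ("b", "c", 2)]
```

Checking all nonempty proper terminal subsets confirms that H mimics G: for example, the cut separating {a} costs 1 in both graphs, and the cut separating {a, b} from {c} costs 3 in both.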

Arindam Khan, Prasad Raghavendra.

Information Processing Letters (IPL), Volume 114, Issue 7, July 2014, Pages 365-371.
(journal version)
(pdf)

A preliminary version in Arxiv.
(Independently and in parallel, Krauthgamer and Rika obtained similar bounds in SODA 2013.)

In this paper we study the diffuse reflection diameter and diffuse reflection radius problems for convex-quadrilateralizable polygons. In the usual model of diffuse reflection, a light ray incident at a point on the reflecting surface is reflected from that point in all possible inward directions. A ray reflected from a polygonal edge may graze that reflecting edge, but an incident ray cannot graze the reflecting edge. The diffuse reflection diameter of a simple polygon P is the minimum number of diffuse reflections that may be needed in the worst case to illuminate any target point t from any point light source s inside P. We show that the diameter is upper bounded by (3n-10)/4 in our usual model of diffuse reflection for convex-quadrilateralizable polygons. To the best of our knowledge, this is the first upper bound on diffuse reflection diameter within a fraction of n for such a class of polygons. We also show that the diffuse reflection radius of a convex-quadrilateralizable simple polygon with n vertices is at most (3n-10)/8 under the usual model of diffuse reflection. In other words, there exists a point inside such a polygon from which (3n-10)/8 usual diffuse reflections are always sufficient to illuminate the entire polygon. In order to establish these bounds for the usual model, we first show that the diameter and radius are (n-4)/2 and (n-4)/4 respectively, for the same class of polygons under a relaxed model of diffuse reflections; in the relaxed model an incident ray is permitted to graze a reflecting edge before turning and reflecting off the same edge at any interior point on that edge. We also show that the worst-case diameter and radius lower bounds of (n-4)/2 and (n-4)/4, respectively, are sometimes attained in the usual model, as well as in the relaxed model of diffuse reflection.

Arindam Khan, Sudebkumar P. Pal, Mridul Aanjaneya, Arijit Bishnu, Subhas C. Nandy.

Attribute Based Messaging (ABM) enables messages to be addressed using attributes of recipients
rather than an explicit list of recipients. Such messaging offers benefits of efficiency, exclusiveness,
and intensionality, but faces challenges in access control and confidentiality. In this paper we
explore an approach to intra-enterprise ABM based on providing access control and confidentiality
using information from the same attribute database exploited by the addressing scheme. We show
how to address three key challenges. First, we demonstrate a manageable access control system
based on attributes. Second, we demonstrate use of Attribute Based Encryption to provide end-to-end
confidentiality. Third, we show that such a system can be efficient enough to support ABM
for mid-size enterprises. Our implementation can dispatch confidential ABM messages approved
by XACML policy review for an enterprise of at least 60,000 users with only seconds of latency.

Rakesh Bobba, Omid Fatemieh, Fariba Khan, Arindam Khan, Carl A. Gunter, Himanshu Khurana and Manoj Prabhakaran.

ACM Transactions on Information and System Security (TISSEC), Volume 13 Issue 4, December 2010.
(pdf)

Manuscripts/Submissions

Coming soon.

Sreenivas Gollapudi, Arindam Khan, Janardhan Kulkarni, Kunal Talwar, Sadra Yazdanbod.

Coming soon.

Arindam Khan, Prasad Tetali.

Other Papers

Abstract: In this paper we characterize malware distribution sites using web-graph properties.

DongWon Seo, Arindam Khan and Heejo Lee.

Korean Information Processing Society (KIPS) 30th Fall Conference, Volume 15, No. 2, pages 1425-1428, Nov. 14. 2008.

Abstract: In this thesis, we consider angular visibility and diffuse reflection and study art-gallery
and fortress problems. ....

Arindam Khan, Master's Thesis, Advisor: Sudebkumar Pal.

Dept. of CSE, IIT Kharagpur, India, 2009.

Nominated for the Innovative Students Project Award of the Indian National Academy of Engineering (INAE), 2009.

Abstract: Coverage is an important issue in Wireless Sensor Networks (WSNs). It gives a measure
of the quality of surveillance a WSN provides over the field it is designed to monitor.
Different measures of coverage capture different aspects of surveillance. In this thesis, we
study geometric characterisations of, and algorithms for, coverage issues in Wireless
Sensor Networks.
“Being with and against the Thief”: the name itself says that we study
coverage from the perspective of both an intruder and a defender. The first two sections
deal with intrusion algorithms and the last two with defending algorithms, arguments,
and characterisations.
In Chapter 3, we present algorithms to find the right-angled one turn longest path among
circular sensors between two points, from one point and for the general case.
In Chapter 4, we present algorithms to find the longest path among polygonal obstacles
between two points, from one point and for the general case.
Chapter 5 deals with packing arguments for strip sensors.
Chapter 6 consists of a few bounds on angular visibility.

Arindam Khan, Bachelor's Thesis, Advisor: Arijit Bishnu.

Dept. of CSE, IIT Kharagpur, India, 2008.

Nominated for the Best B.Tech Project Award, IIT Kharagpur, 2008.