Henrik Lievonen

I am a doctoral researcher in the Distributed Algorithms group at Aalto University. I started in early 2022, after finishing my Master's thesis. I work on understanding connections between distributed algorithms and other fields of mathematics and computer science.

In addition to research, I am the main developer of the automated grading system used in the highly popular Programming Parallel Computers course. I also work as a teaching assistant for the course. I have contributed to the competitive programming grading system CSES.

Conference Articles

2024 · Local problems in trees across a wide range of distributed models (OPODIS 2024)

We classify Locally Checkable Labeling problems in a wide variety of distributed models of computation. In particular, we classify many problems in rooted and unrooted regular trees.

The randomized online-LOCAL model captures a number of models of computing; it is at least as strong as all of these models:

  • the classical LOCAL model of distributed graph algorithms,
  • the quantum version of the LOCAL model,
  • finitely dependent distributions [e.g. Holroyd 2016],
  • any model that does not violate physical causality [Gavoille, Kosowski, Markiewicz, DISC 2009],
  • the SLOCAL model [Ghaffari, Kuhn, Maus, STOC 2017], and
  • the dynamic-LOCAL and online-LOCAL models [Akbari et al., ICALP 2023].

In general, the online-LOCAL model can be much stronger than the LOCAL model. For example, there are locally checkable labeling problems (LCLs) that can be solved with logarithmic locality in the online-LOCAL model but that require polynomial locality in the LOCAL model.

However, in this work we show that in trees, many classes of LCL problems have the same locality in deterministic LOCAL and randomized online-LOCAL (and, as a corollary, across all the above-mentioned models). In particular, these classes of problems do not admit any distributed quantum advantage.

We present a near-complete classification for the case of rooted regular trees. We also fully classify the super-logarithmic region in unrooted regular trees. Finally, we show that in general trees (rooted or unrooted, possibly irregular, possibly with input labels) problems that are global in deterministic LOCAL remain global also in the randomized online-LOCAL model.

2024 · No distributed quantum advantage for approximate graph coloring (STOC 2024)

We give an almost complete characterization of the hardness of $c$-coloring $\chi$-chromatic graphs with distributed algorithms, for a wide range of models of distributed computing. In particular, we show that these problems do not admit any distributed quantum advantage.

To do that:

  1. We give a new distributed algorithm that finds a $c$-coloring in $\chi$-chromatic graphs in $\tilde{\mathcal{O}}(n^{\frac{1}{\alpha}})$ rounds, with $\alpha = \bigl\lceil\frac{c-1}{\chi - 1}\bigr\rceil$ (see the worked example after this list).
  2. We prove that any distributed algorithm for this problem requires $\Omega(n^{\frac{1}{\alpha}})$ rounds.
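
As a small worked instance of the formula above (the numbers are chosen purely for illustration), consider $5$-coloring a $3$-chromatic graph, i.e. $c = 5$ and $\chi = 3$:

    \[ \alpha = \left\lceil \tfrac{c-1}{\chi-1} \right\rceil = \left\lceil \tfrac{4}{2} \right\rceil = 2, \]

so the upper bound becomes $\tilde{\mathcal{O}}(n^{1/2})$ rounds and the lower bound $\Omega(n^{1/2})$ rounds; the two match up to lower-order factors.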

Our upper bound holds in the classical, deterministic LOCAL model, while the near-matching lower bound holds in the non-signaling model. This model, introduced by Arfaoui and Fraigniaud in 2014, captures all models of distributed graph algorithms that obey physical causality; this includes not only classical deterministic LOCAL and randomized LOCAL but also quantum-LOCAL, even with a pre-shared quantum state.

We also show that similar arguments can be used to prove that, e.g., 3-coloring 2-dimensional grids or $c$-coloring trees remain hard problems even for the non-signaling model, and in particular do not admit any quantum advantage. Our lower-bound arguments are purely graph-theoretic at heart; no background on quantum information theory is needed to establish the proofs.

2024 · Distributed Binary Labeling Problems in High-Degree Graphs (SIROCCO 2024)

In this work we study the complexity of binary labeling problems as a function of all three parameters: $n$, $d$, and $\delta$. To this end, we introduce the family of structurally simple problems, which includes, among others, all binary labeling problems in which cardinality constraints can be represented with a context-free grammar. We classify the possible complexities of structurally simple problems. As our main result, we show that if the complexity of a problem falls in the broad class of $\Theta_{d,\delta}(\log n)$, then the complexity for each $d$ and $\delta$ is always either $\Theta(\log_d n)$, $\Theta(\log_\delta n)$, or $\Theta(\log n)$.
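
To see why distinguishing $\Theta(\log_d n)$ from $\Theta(\log n)$ matters in high-degree graphs, recall the standard identity (a small illustrative calculation, not taken from the paper): $\log_d n = \frac{\log n}{\log d}$, so if, for example, $d = n^{1/10}$, then $\log_d n = 10$ is a constant while $\log n$ keeps growing; complexities of the form $\Theta(\log_d n)$ therefore genuinely benefit from high-degree nodes.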

Balliu et al. (DISC 2020) classified the hardness of solving binary labeling problems with distributed graph algorithms; in these problems the task is to select a subset of edges in a 2-colored tree in which white nodes of degree $d$ and black nodes of degree $\delta$ have constraints on the number of selected incident edges. They showed that the deterministic round complexity of any such problem is $O_{d,\delta}(1)$, $\Theta_{d,\delta}(\log n)$, or $\Theta_{d,\delta}(n)$, or the problem is unsolvable. However, their classification only addresses complexity as a function of $n$; here $O_{d,\delta}$ hides constants that may depend on the parameters $d$ and $\delta$.
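
To make the constraint structure concrete, here is a minimal checker sketch (my illustration, not code from the paper): each white node of degree $d$ and each black node of degree $\delta$ simply counts its selected incident edges and compares the count against an allowed set. The allowed sets WHITE_OK and BLACK_OK below are hypothetical placeholders.

    # Minimal sketch of verifying a binary labeling on a 2-colored tree.
    # WHITE_OK and BLACK_OK are hypothetical allowed counts of selected
    # incident edges for white and black nodes, respectively.
    WHITE_OK = {1}       # e.g. every white node has exactly one selected edge
    BLACK_OK = {0, 1}    # e.g. every black node has at most one selected edge

    def is_valid(tree_edges, colors, selected):
        """tree_edges: list of (u, v) pairs; colors: node -> 'white'/'black';
        selected: set of chosen edges, stored as (u, v) pairs."""
        count = {v: 0 for v in colors}
        for (u, v) in tree_edges:
            if (u, v) in selected or (v, u) in selected:
                count[u] += 1
                count[v] += 1
        return all(count[v] in (WHITE_OK if colors[v] == "white" else BLACK_OK)
                   for v in colors)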

To prove our upper bounds, we introduce a new, more aggressive version of the rake-and-compress technique that benefits from high-degree nodes.

2023 · Brief Announcement: Distributed derandomization revisited (DISC 2023)

We give a new, simple proof for the classic distributed derandomization result by Chang, Kopelowitz, and Pettie, and improve the result by reducing the needed assumptions on the type of randomness used.

One of the cornerstones of distributed complexity theory is the derandomization result by Chang, Kopelowitz, and Pettie [FOCS 2016]: any randomized LOCAL algorithm that solves a locally checkable labeling problem (LCL) can be derandomized with at most exponential overhead. The original proof assumes that the number of random bits is bounded by some function of the input size. We give a new, simple proof that does not make any such assumptions: it holds even if the randomized algorithm uses infinitely many random bits. While at it, we also broaden the scope of the result so that it is directly applicable far beyond LCL problems.

2023 · Locality in online, dynamic, sequential, and distributed graph algorithms (ICALP 2023)

In this work, we give a unifying view of locality in four settings: distributed algorithms, sequential greedy algorithms, dynamic algorithms, and online algorithms. We introduce a new model of computing, called the online-LOCAL model: the adversary presents the nodes of the input graph one by one, in the same way as in classical online algorithms, but for each node we get to see its radius-$T$ neighborhood before choosing the output. Instead of looking ahead in time, we have the power of looking around in space.
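
A minimal sketch of the online-LOCAL interaction, assuming the adversary reveals nodes one by one and the algorithm may inspect the radius-$T$ ball around each revealed node (which may contain nodes that have not been revealed yet) before committing to that node's output; the greedy coloring rule is only an illustrative placeholder, not one of the algorithms from the paper.

    from collections import deque

    def radius_ball(adj, v, T):
        """Return all nodes within distance T of v in the graph adj (dict: node -> neighbors)."""
        seen, frontier = {v}, deque([(v, 0)])
        while frontier:
            u, d = frontier.popleft()
            if d == T:
                continue
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    frontier.append((w, d + 1))
        return seen

    def online_local_run(adj, arrival_order, T):
        """Toy online-LOCAL execution: when node v arrives, the algorithm sees
        the radius-T ball around v and must fix v's output immediately."""
        output = {}
        for v in arrival_order:
            ball = radius_ball(adj, v, T)
            used = {output[u] for u in ball if u in output}
            color = 0
            while color in used:
                color += 1
            output[v] = color  # committed; the decision cannot be changed later
        return output

The key point is that the ball handed to the algorithm may include nodes the adversary has not yet asked about, which is exactly the "looking around in space" power described above.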

We compare the online-LOCAL model with three other models: the LOCAL model of distributed computing, where each node produces its output based on its radius-$T$ neighborhood; the SLOCAL model, which is the sequential counterpart of LOCAL; and the dynamic-LOCAL model, where changes in the dynamic input graph only influence the radius-$T$ neighborhood of the point of change.

The SLOCAL and dynamic-LOCAL models are sandwiched between the LOCAL and online-LOCAL models. In general, all four models are distinct, but we study in particular locally checkable labeling problems (LCLs), a family of graph problems extensively studied in the context of distributed graph algorithms. We prove that for LCL problems in paths, cycles, and rooted trees, all four models are roughly equivalent: the locality of any LCL problem falls in the same broad class ($O(\log^* n)$, $\Theta(\log n)$, or $n^{\Theta(1)}$) in all four models. In particular, this result enables one to generalize prior lower-bound results from the LOCAL model to all four models, and it also allows one to simulate e.g. dynamic-LOCAL algorithms efficiently in the LOCAL model.

We also show that this equivalence does not hold in two-dimensional grids or general bipartite graphs. We provide an online-LOCAL algorithm with locality $O(\log n)$ for the 3-coloring problem in bipartite graphs; this is a problem with locality $\Omega(n^{1/2})$ in the LOCAL model and $\Omega(n^{1/10})$ in the SLOCAL model.

2023 · Sinkless Orientation Made Simple (SOSA 2023)

We provide new, simple proofs showing that the sinkless orientation problem separates the deterministic LOCAL and SLOCAL models.

The sinkless orientation problem plays a key role in understanding the foundations of distributed computing. The problem can be used to separate two fundamental models of distributed graph algorithms, LOCAL and SLOCAL: the locality of sinkless orientation is $\Omega(\log n)$ in the deterministic LOCAL model and $O(\log \log n)$ in the deterministic SLOCAL model. Both of these results are known from prior work, but here we give new, simple, self-contained proofs for them.

Manuscripts

2024 · Distributed Quantum Advantage for Local Problems

We present the first local problem that shows a super-constant separation between the classical randomized LOCAL model of distributed computing and its quantum counterpart. By prior work, such a separation was known only for an artificial graph problem with an inherently global definition [Le Gall et al. 2019].

We present a problem that we call iterated GHZ, which is defined using only local constraints. Formally, it is a family of locally checkable labeling problems [Naor and Stockmeyer 1995]; in particular, solutions can be verified with a constant-round distributed algorithm.

We show that in graphs of maximum degree $\Delta$, any classical (deterministic or randomized) LOCAL model algorithm will require $\Omega(\Delta)$ rounds to solve the iterated GHZ problem, while the problem can be solved in $1$ round in quantum-LOCAL.

We use the round elimination technique to prove that the iterated GHZ problem requires $\Omega(\Delta)$ rounds for classical algorithms. This is the first work showing that round elimination is indeed able to separate the two models, and it also demonstrates that round elimination cannot be used to prove lower bounds for quantum-LOCAL. To apply round elimination, we introduce a new technique that allows us to discover appropriate problem relaxations in a mechanical way; it turns out that this new technique extends beyond the scope of the iterated GHZ problem and can be used to e.g. reproduce prior results on maximal matchings [FOCS 2019, PODC 2020] in a systematic manner.

2024 · Online Locality Meets Distributed Quantum Computing

We extend the theory of locally checkable labeling problems (LCLs) from the classical LOCAL model to a number of other models that have been studied recently, including the quantum-LOCAL model, finitely-dependent processes, the non-signaling model, the dynamic-LOCAL model, and the online-LOCAL model [e.g. STOC 2024, ICALP 2023].

First, we demonstrate the advantage that finitely-dependent processes have over the classical LOCAL model. We show that all LCL problems solvable with locality $O(\log^* n)$ in the LOCAL model admit a finitely-dependent distribution (with constant locality). In particular, this gives a finitely-dependent coloring for regular trees, answering an open question by Holroyd [2023]. This also introduces a new formal barrier for understanding the distributed quantum advantage: it is not possible to exclude quantum advantage for any LCL in the $\Theta(\log^* n)$ complexity class by using non-signaling arguments.

Second, we put limits on the capabilities of all of these models. To this end, we introduce a model called randomized online-LOCAL, which is strong enough to simulate e.g. SLOCAL and dynamic-LOCAL, and we show that it is also strong enough to simulate any non-signaling distribution and hence any quantum-LOCAL algorithm. We prove the following result for rooted trees: if we can solve an LCL problem with locality $o(\log \log n)$ in the randomized online-LOCAL model, we can solve it with locality $O(\log^* n)$ in the classical deterministic LOCAL model.

Put together, these results show that in trees the set of LCLs that can be solved with locality $O(\log^* n)$ is the same across all these models: locality $O(\log^* n)$ in quantum-LOCAL, the non-signaling model, dynamic-LOCAL, or online-LOCAL is not stronger than locality $O(\log^* n)$ in the classical deterministic LOCAL model.

Theses

2022 · Master's Thesis: Locally Checkable Labeling Problems in Rooted Trees in the Online-LOCAL Model of Computation

I introduce the online-LOCAL model, which combines online graph algorithms with the LOCAL model of distributed computation. I show that the online-LOCAL model is no more powerful than the LOCAL model when considering Locally Checkable Labeling problems in rooted trees.

There are many ways to classify algorithms. Online algorithms, for example, are algorithms that have to handle their input one element at a time. Offline algorithms, on the other hand, have access to the whole input. In the case of online graph algorithms, the structure of the underlying graph is fixed, and the graph is revealed to the algorithm one node at a time. When a node is revealed, the algorithm has to decide its output for that node, and it cannot change its decision later.
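
As a small illustration of this setting (my example, not part of the thesis), consider an online greedy coloring algorithm: nodes arrive one at a time together with their edges to previously revealed nodes, and each decision is final.

    def online_greedy_coloring(arrivals):
        """arrivals: iterable of (node, neighbors_revealed_so_far) pairs.
        When a node arrives, the algorithm sees only its edges to nodes that
        were revealed earlier, picks a color, and can never revise it."""
        color = {}
        for node, earlier_neighbors in arrivals:
            used = {color[u] for u in earlier_neighbors}
            c = 0
            while c in used:
                c += 1
            color[node] = c  # irrevocable decision
        return color

    # Example: the path a-b-c revealed in the order a, c, b.
    print(online_greedy_coloring([("a", []), ("c", []), ("b", ["a", "c"])]))
    # -> {'a': 0, 'c': 0, 'b': 1}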

Another way to classify algorithms is to divide them into centralized and distributed algorithms. In the case of graph algorithms, a centralized algorithm is a completely separate entity from the graph. When the nodes (or edges) of the graph are active parties in the execution of the algorithm, the algorithm is called a distributed algorithm. One commonly used model of distributed computation is the LOCAL model. In the LOCAL model, all nodes compute their own part of the result in parallel. The nodes only see their own local neighborhood and must base their decisions on this local view alone.

In this thesis, I introduce the online-LOCAL model, which combines the power of online graph algorithms and LOCAL algorithms. Like online graph algorithms, online-LOCAL algorithms need to react to nodes being revealed one at a time. Unlike online graph algorithms, online-LOCAL algorithms also get to see the local neighborhood around the nodes before needing to make their decisions.

The online-LOCAL model is a very strong model of computation. In general, there are problems that are trivial in the online-LOCAL model but difficult to solve with online graph algorithms and LOCAL algorithms. In this thesis, I restrict my attention to the class of problems known as locally checkable labeling problems. These are a broad class of problems for which a solution is valid if it looks valid in all local neighborhoods. In particular, I show that for locally checkable labeling problems in rooted regular trees, the online-LOCAL model is approximately as powerful as the LOCAL model.
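
The phrase "looks valid in all local neighborhoods" can be made precise with a small sketch (my illustration, not the formal definition used in the thesis): a labeling is accepted if and only if every node's constant-radius view passes one fixed local check.

    def locally_checkable_valid(adj, labels, local_check):
        """A labeling is valid iff every node's radius-1 view (its own label
        together with its neighbors' labels) passes the same local check."""
        return all(local_check(labels[v], [labels[u] for u in adj[v]])
                   for v in adj)

    # Example local check: proper coloring (no neighbor shares my label).
    proper_coloring = lambda mine, neighbor_labels: mine not in neighbor_labels

    adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}  # the path a-b-c
    print(locally_checkable_valid(adj, {"a": 0, "b": 1, "c": 0}, proper_coloring))  # True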

The thesis served as inspiration for a related conference article.

2020 · Bachelor's Thesis: Temperature Measurement Using a Transmon Device

I have developed a method for measuring temperatures in the range of tens to hundreds of millikelvins. The method uses a quantum mechanical transmon device as the thermometer. It is based on the fact that at thermodynamic equilibrium the state of a quantum mechanical system is distributed according to the Boltzmann distribution. By fitting the Boltzmann distribution to the measured state distribution, one can determine the temperature of the system.
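
The fitting step can be summarized with a short, standard derivation (the symbols below are mine, not notation from the thesis). At thermal equilibrium, the occupation probability of the energy eigenstate $|i\rangle$ with energy $E_i$ is

    \[ p_i = \frac{e^{-E_i / k_B T}}{\sum_j e^{-E_j / k_B T}}, \]

so the ratio of two measured populations fixes the temperature directly:

    \[ \frac{p_1}{p_0} = e^{-(E_1 - E_0)/k_B T} \quad\Longrightarrow\quad T = \frac{E_1 - E_0}{k_B \ln(p_0 / p_1)}. \]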

The state distribution of the system is measured by applying a measurement pulse and measuring the reflected pulse. The measurement pulse collapses the quantum mechanical state to one of the eigenstates of the Hamiltonian of the system. The energy level of the eigenstate determines the properties of the reflected measurement pulse.

In practice, the measured pulse is very weak and noisy. To improve the signal-to-noise ratio, the measurement is repeated multiple times and the response signals are averaged. Each state of the system produces a different reflected pulse, and averaging multiple measurements loses information about the distribution of those pulses and, by proxy, about the states. To recover the information about the state distribution, the system is prepared in different states before the measurement. Because we know how the preparation affects the state, we can solve for the state distribution from this set of measurements. We can then use the Boltzmann distribution to solve for the temperature of the sample.
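
A minimal numerical sketch of this recovery step, assuming we know (i) the average response each pure state of the qutrit would produce and (ii) how each preparation permutes the populations; all variable names, example numbers, and the least-squares formulation are illustrative, not code or data from the thesis.

    import numpy as np

    # Hypothetical calibration: average reflected-signal value produced by the
    # pure states |0>, |1>, |2> of the transmon qutrit.
    r = np.array([0.10, 0.55, 0.90])

    # Each preparation permutes the thermal populations p = (p0, p1, p2) in a
    # known way, e.g. a pi pulse on a transition swaps two populations.
    perms = [np.eye(3),
             np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1.0]]),  # swap |0> <-> |1>
             np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0.0]])]  # swap |0> <-> |2>

    # The averaged outcome after preparation k is r . (P_k p). Stacking these
    # rows, plus the normalization sum(p) = 1, gives a solvable linear system.
    p_true = np.array([0.85, 0.12, 0.03])  # toy thermal populations
    s = np.array([r @ (P @ p_true) for P in perms])  # simulated averages

    A = np.vstack([[r @ P for P in perms], np.ones(3)])
    b = np.append(s, 1.0)
    p_est, *_ = np.linalg.lstsq(A, b, rcond=None)

    # Temperature from the Boltzmann ratio of the two lowest levels, assuming
    # an illustrative 0 -> 1 transition frequency of 5 GHz.
    h, kB, f01 = 6.626e-34, 1.381e-23, 5e9
    T = h * f01 / (kB * np.log(p_est[0] / p_est[1]))
    print(p_est, T)  # recovers the toy populations and a temperature of roughly 0.12 K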

The method has been developed using a transmon qutrit, but it also works for other kinds of quantum mechanical systems. The method produces plausible results for the temperature, but not all sources of error could be determined. More measurements are needed to unravel these error sources.