Henrik Lievonen

I am a doctoral researcher in the Distributed Algorithms group at Aalto University. I started in early 2022, after finishing my Master's thesis. I work on understanding connections between distributed algorithms and other fields of mathematics and computer science.

In addition to research, I am the main developer of the automated grading system used in the highly popular Programming Parallel Computers course, where I also work as a teaching assistant. I have also contributed to the competitive programming grading system CSES.

Conference Articles

2023 · Brief Announcement: Distributed derandomization revisited (DISC 2023)

We give a new, simple proof for the classic distributed derandomization result by Chang, Kopelowitz, and Pettie, and improve the result by reducing the needed assumptions on the type of randomness used.

One of the cornerstones of distributed complexity theory is the derandomization result by Chang, Kopelowitz, and Pettie [FOCS 2016]: any randomized LOCAL algorithm that solves a locally checkable labeling problem (LCL) can be derandomized with at most exponential overhead. The original proof assumes that the number of random bits is bounded by some function of the input size. We give a new, simple proof that does not make any such assumption: it holds even if the randomized algorithm uses infinitely many random bits. While at it, we also broaden the scope of the result so that it is directly applicable far beyond LCL problems.

2023 · Locality in online, dynamic, sequential, and distributed graph algorithms (ICALP 2023)

In this work, we give a unifying view of locality in four settings: distributed algorithms, sequential greedy algorithms, dynamic algorithms, and online algorithms. We introduce a new model of computing, called the online-LOCAL model: the adversary presents the nodes of the input graph one by one, in the same way as in classical online algorithms, but for each node we get to see its radius-T neighborhood before choosing the output. Instead of looking ahead in time, we have the power of looking around in space.

We compare the online-LOCAL model with three other models: the LOCAL model of distributed computing, in which each node produces its output based on its radius-T neighborhood; the SLOCAL model, which is the sequential counterpart of LOCAL; and the dynamic-LOCAL model, in which changes in the dynamic input graph only influence the radius-T neighborhood of the point of change.

The SLOCAL and dynamic-LOCAL models are sandwiched between the LOCAL and online-LOCAL models. In general, all four models are distinct, but we study in particular locally checkable labeling problems (LCLs), a family of graph problems extensively studied in the context of distributed graph algorithms. We prove that for LCL problems in paths, cycles, and rooted trees, all four models are roughly equivalent: the locality of any LCL problem falls in the same broad class (O(log* n), Θ(log n), or n^{Θ(1)}) in all four models. In particular, this result enables one to generalize prior lower-bound results from the LOCAL model to all four models, and it also allows one to simulate e.g. dynamic-LOCAL algorithms efficiently in the LOCAL model.

We also show that this equivalence does not hold in two-dimensional grids or general bipartite graphs. We provide an online-LOCAL algorithm with locality O(log n) for the 3-coloring problem in bipartite graphs; this is a problem with locality Ω(n^{1/2}) in the LOCAL model and Ω(n^{1/10}) in the SLOCAL model.
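As a concrete (if centralized) reading of the model described above, the sketch below simulates an online-LOCAL algorithm: the adversary reveals nodes one by one, and for each node the algorithm inspects its radius-T neighborhood and all previously committed outputs before committing an output of its own. The graph encoding and helper names are illustrative assumptions, not from the paper.

```python
from collections import deque

def radius_t_view(adj, v, t):
    """BFS: all nodes within distance t of v -- the neighborhood an
    online-LOCAL algorithm may inspect before labeling v."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        if dist[u] < t:
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
    return set(dist)

def online_local(adj, order, t, choose):
    """The adversary reveals nodes in `order`; for each node the
    algorithm sees its radius-t view and all previously committed
    labels, then commits to an output it can never revise."""
    labels = {}
    for v in order:
        labels[v] = choose(v, radius_t_view(adj, v, t), dict(labels))
    return labels
```

For instance, plugging in a first-fit rule that avoids the colors of already-labeled neighbors in the view yields a (not color-efficient) proper coloring in any revelation order.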

2023 · Sinkless Orientation Made Simple (SOSA 2023)

We provide new, simple proofs showing that the sinkless orientation problem separates the deterministic LOCAL and SLOCAL models.

The sinkless orientation problem plays a key role in understanding the foundations of distributed computing. The problem can be used to separate two fundamental models of distributed graph algorithms, LOCAL and SLOCAL: the locality of sinkless orientation is Ω(log n) in the deterministic LOCAL model and O(log log n) in the deterministic SLOCAL model. Both of these results are known from prior work, but here we give new simple, self-contained proofs for them.
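For readers unfamiliar with the problem: a sinkless orientation orients every edge of a graph so that no node is a sink, i.e. every node has at least one outgoing edge. A solution can be certified by naming one outgoing edge per node (the remaining edges can be oriented arbitrarily). The checker below is my own illustration, not code from the paper.

```python
def is_sinkless_orientation(adj, out_edge):
    """Certify a sinkless orientation: `out_edge` maps each node to the
    neighbor its chosen outgoing edge points to.  Valid iff every node
    names a genuine neighbor, so no node is left as a sink."""
    return all(v in out_edge and out_edge[v] in adj[v] for v in adj)
```

On a cycle, for example, orienting all edges consistently in one direction gives every node an outgoing edge.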


2023 · Distributed Binary Labeling Problems in High-Degree Graphs

In this work we study the complexity of binary labeling problems as a function of all three parameters: the number of nodes n and the degrees d and δ of the white and black nodes. To this end, we introduce the family of structurally simple problems, which includes, among others, all binary labeling problems in which cardinality constraints can be represented with a context-free grammar. We classify the possible complexities of structurally simple problems. As our main result, we show that if the complexity of a problem falls in the broad class Θ_{d,δ}(log n), then the complexity for each d and δ is always either Θ(log_d n), Θ(log_δ n), or Θ(log n).

Balliu et al. (DISC 2020) classified the hardness of solving binary labeling problems with distributed graph algorithms; in these problems the task is to select a subset of edges in a 2-colored tree in which white nodes of degree d and black nodes of degree δ have constraints on the number of selected incident edges. They showed that the deterministic round complexity of any such problem is O_{d,δ}(1), Θ_{d,δ}(log n), or Θ_{d,δ}(n), or the problem is unsolvable. However, their classification only addresses complexity as a function of n; here O_{d,δ} hides constants that may depend on the parameters d and δ.

To prove our upper bounds, we introduce a new, more aggressive version of the rake-and-compress technique that benefits from high-degree nodes.
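For context, the classic rake-and-compress decomposition (this is the textbook variant, not the paper's aggressive high-degree version) can be sketched as follows: each phase deletes every node of current degree at most 2, which simultaneously rakes leaves and compresses path nodes. Since in any forest a constant fraction of the nodes has degree at most 2, the number of phases is O(log n).

```python
def rake_and_compress_phases(adj):
    """Classic rake-and-compress on a forest: repeatedly delete all
    nodes of current degree <= 2 (leaves are 'raked', path nodes are
    'compressed') until no node remains.  Returns the phase count,
    which is O(log n) because each phase removes a constant fraction
    of the surviving nodes."""
    alive = {v: set(ns) for v, ns in adj.items()}
    phases = 0
    while alive:
        phases += 1
        low = {v for v, ns in alive.items() if len(ns) <= 2}
        for v in low:
            del alive[v]
        for ns in alive.values():
            ns.difference_update(low)
    return phases
```

A path disappears in a single phase, while a complete binary tree is peeled level by level, matching the logarithmic bound.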

2023 · No distributed quantum advantage for approximate graph coloring

We give an almost complete characterization of the hardness of c-coloring χ-chromatic graphs with distributed algorithms, for a wide range of models of distributed computing. In particular, we show that these problems do not admit any distributed quantum advantage.

To do that:

  1. We give a new distributed algorithm that finds a c-coloring in χ-chromatic graphs in Õ(n^{1/α}) rounds, with α = ⌈(c-1)/(χ-1)⌉.
  2. We prove that any distributed algorithm for this problem requires Ω(n^{1/α}) rounds.
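The tradeoff between the palette size c, the chromatic number χ, and the round complexity can be made concrete with a small helper (the function name is mine, the formula is from the result above):

```python
import math

def alpha(c, chi):
    """Exponent alpha = ceil((c-1)/(chi-1)): per the result above,
    c-coloring a chi-chromatic graph takes Theta~(n^(1/alpha)) rounds,
    so larger alpha (fewer spare colors) means a harder problem."""
    return math.ceil((c - 1) / (chi - 1))
```

For example, c = χ gives α = ⌈(χ-1)/(χ-1)⌉ = 1, i.e. near-linear complexity, while 3-coloring a bipartite graph (c = 3, χ = 2) gives α = 2, matching the n^{1/2} regime.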

Our upper bound holds in the classical, deterministic LOCAL model, while the near-matching lower bound holds in the non-signaling model. This model, introduced by Arfaoui and Fraigniaud in 2014, captures all models of distributed graph algorithms that obey physical causality; this includes not only classical deterministic LOCAL and randomized LOCAL but also quantum-LOCAL, even with a pre-shared quantum state.

We also show that similar arguments can be used to prove that, e.g., 3-coloring 2-dimensional grids or c-coloring trees remains hard even in the non-signaling model, and in particular does not admit any quantum advantage. Our lower-bound arguments are purely graph-theoretic at heart; no background on quantum information theory is needed to establish the proofs.


2022 · Master's Thesis: Locally Checkable Labeling Problems in Rooted Trees in the Online-LOCAL Model of Computation

I introduce the online-LOCAL model, which combines online graph algorithms with the LOCAL model of distributed computation. I show that the online-LOCAL model is no more powerful than the LOCAL model when considering locally checkable labeling problems in rooted trees.

There are many ways to classify algorithms. Online algorithms, for example, have to handle the input one element at a time, whereas offline algorithms have access to the whole input. In the case of online graph algorithms, the structure of the underlying graph is fixed, but the graph is revealed to the algorithm one node at a time. When a node is revealed, the algorithm has to decide its output for that node, and it cannot change its decision later.

Another way to classify algorithms is to divide them into centralized and distributed algorithms. In the case of graph algorithms, a centralized algorithm is a completely separate entity from the graph. When the nodes (or edges) of the graph are active parties in the execution of the algorithm, the algorithm is called a distributed algorithm. One commonly used model of distributed computation is the LOCAL model, in which all nodes compute their own part of the result in parallel. Each node only sees its own local neighborhood and must base its decision on this local view alone.

In this thesis, I introduce the online-LOCAL model, which combines the power of online graph algorithms and LOCAL algorithms. Like online graph algorithms, online-LOCAL algorithms need to react to nodes being revealed one at a time. Unlike online graph algorithms, online-LOCAL algorithms also get to see the local neighborhood around a node before needing to make their decision.

The online-LOCAL model is a very strong model of computation. In general, there are problems that are trivial in the online-LOCAL model but difficult to solve with online graph algorithms and LOCAL algorithms. In this thesis, I restrict my attention to the class of problems known as locally checkable labeling problems. These are a broad class of problems for which a solution is valid if it looks valid in all local neighborhoods. In particular, I show that for locally checkable labeling problems in rooted regular trees, the online-LOCAL model is approximately as powerful as the LOCAL model.

The thesis served as inspiration for the related conference article (ICALP 2023).

2020 · Bachelor's Thesis: Temperature Measurement Using a Transmon Device

I developed a method for measuring temperatures in the range of tens to hundreds of millikelvins. The method uses a quantum mechanical transmon device as the thermometer. It is based on the fact that at thermodynamic equilibrium the state of a quantum mechanical system is distributed according to the Boltzmann distribution. By fitting the Boltzmann distribution to the measured state distribution, one can determine the temperature of the system.

The state distribution of the system is measured by applying a measurement pulse and measuring the reflected pulse. The measurement pulse collapses the quantum mechanical state to one of the eigenstates of the Hamiltonian of the system, and the energy level of the eigenstate determines the properties of the reflected measurement pulse.

In practice, the measured pulse is very weak and noisy. To improve the signal-to-noise ratio, the measurement is repeated multiple times and the response signals are averaged. Each state of the system produces a different reflected pulse, and averaging multiple measurements loses the information about the distribution of those pulses and, by proxy, of the states. To recover the state distribution, the system is prepared in different states before the measurement. Because we know how the preparation affects the state, we can solve for the state distribution from this set of measurements, and then use the Boltzmann distribution to solve for the temperature of the sample.
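The final fitting step can be sketched numerically. The sketch below assumes a simple least-squares grid search (the thesis does not specify the fitting procedure), and the qutrit transition frequencies in the example are illustrative values, not measured ones:

```python
import math

H_PLANCK = 6.62607015e-34   # Planck constant, J s
K_B = 1.380649e-23          # Boltzmann constant, J / K

def boltzmann(energies, temperature):
    """Thermal occupation probabilities of the given energy levels
    (in joules) at the given temperature (in kelvins)."""
    weights = [math.exp(-e / (K_B * temperature)) for e in energies]
    z = sum(weights)  # partition function
    return [w / z for w in weights]

def fit_temperature(energies, measured, temperature_grid):
    """Return the grid temperature whose Boltzmann distribution is the
    least-squares best match to the measured state distribution."""
    def err(t):
        return sum((m - p) ** 2
                   for m, p in zip(measured, boltzmann(energies, t)))
    return min(temperature_grid, key=err)

# Hypothetical transmon qutrit: a 5.0 GHz qubit transition and a second
# transition at 4.7 GHz, giving levels at 0, h*5.0 GHz, and h*9.7 GHz.
levels = [0.0, H_PLANCK * 5.0e9, H_PLANCK * 9.7e9]
```

With these level spacings, a ground-state population around 90% corresponds to roughly 100 mK, i.e. within the tens-to-hundreds-of-millikelvins range the method targets.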

The method was developed using a transmon qutrit, but it also works for other kinds of quantum mechanical systems. The method produces plausible temperature estimates, but not all sources of error could be determined; more measurements are needed to unravel these error sources.