\section{Introduction}
\label{sec:intro}
Globular clusters (GCs) exhibit star-to-star variations in their light-element content (e.g., \citealt{carretta09}). In fact, while some GC stars have the same light-element abundances as the field at the same metallicity (first population -- FP), others show enhanced N and Na along with depleted C and O abundances (second population -- SP). Such anomalies are also readily observable in color-magnitude diagrams (CMDs) involving specific near-UV filters sensitive to OH, CN and CH molecular bands (e.g., \citealt{sbordone11,piotto15}).
The manifestation of such light-element inhomogeneities is referred to as multiple populations (MPs).
A number of scenarios have been proposed over the years to explain the formation of MPs (e.g., \citealt{decressin07,dercole08,denissenkov14,gieles18}); however, their origin is still strongly debated (see \citealt{bastian18,gratton19} for recent reviews).
The MP phenomenon appears to be ubiquitous. In fact, not only do all massive and old Galactic GCs host MPs (e.g., \citealt{piotto15,milone17}), but MPs are also observed in old stellar clusters in the Large and Small Magellanic Clouds (LMC and SMC) \citep{mucciarelli09,dalessandro16}, in GCs in dwarf galaxies such as Fornax \citep{larsen14} and in the M31 GC system \citep{schiavon13}, and there are strong indications (though based on integrated quantities) that they are a common property of stellar clusters in massive elliptical galaxies \citep[e.g.][]{chung11}. Conversely, several works based on photometric and spectroscopic analyses of red giant branch (RGB) stars (e.g., \citealt{mucciarelli08,mucciarelli14,martocchia18}) suggest that massive clusters younger than $\sim2$ Gyr do not show any inhomogeneity in their light-element content. In fact, NGC~1978 in the LMC ($t\sim2$ Gyr) is the youngest cluster found so far to host sub-populations with light-element chemical variations (\citealt{martocchia18,saracino20muse}).
It is worth stressing that, while young ($<2$ Gyr) clusters do show features in their optical CMDs (e.g., extended main-sequence turn-offs, dual main sequences) that are not consistent with the classical notion of a simple stellar population, these features are not due to abundance variations \citep{mucciarelli08,mucciarelli09} but are likely due to stellar rotation \citep[][]{bastian09,kamann20,kamann21}. Hence, while they may in principle be related to MPs, the underlying cause is different.
The lack of MPs in young ($<2$ Gyr) clusters is completely unexpected and inconsistent with the predictions of all theories of MP formation.
We note also that an age of $2$~Gyr corresponds to a formation redshift of $z=0.17$, well past the peak epoch of GC formation (e.g. \citealt{brodie06}).
One possible explanation for the lack of MPs in young ($<2$ Gyr) clusters is that old GCs were simply much more massive at birth than those systems that do not show abundance spreads.
Such large masses may allow GCs to retain stellar ejecta of stars within them and also to accrete pristine gas from their surroundings.
Indeed cluster mass is found to play a significant role in shaping the properties of MPs in GCs \citep{carretta10,milone17}.
One alternative explanation is that light-element variations do exist also within young clusters,
but they are difficult/impossible to observe along the RGB, where they have been typically searched for.
In fact, \citet{salaris20} have recently shown that the mixing associated with the first dredge-up can have a differential impact on the surface chemical abundances of FP and SP RGB stars, smoothing out their initial N abundance differences with an efficiency that increases with decreasing age.
To finally establish the presence of MPs in young massive clusters, it is therefore
key to search for MPs along their MS.
To this aim, we have started a comprehensive study
of the young ($\sim1.5$~Gyr, \citealt{mucciarelli07,zhang18}) cluster NGC~1783 in the LMC.
This system represents an optimal choice in this context as
it is quite massive ($\sim2\times10^5
M_{\odot}$; \citealt{song21}), it is located in a region of the LMC characterized by low extinction ($A_V<0.1$ mag) and by a negligible field contamination. In addition, previous photometric and spectroscopic studies of RGB stars
\citep{mucciarelli07,cabreraziri2016young,zhang18,martocchia18,martocchia21} suggest that this cluster does not host MPs.
Here we present the results of the first detailed screening of the cluster main sequence (MS) obtained through deep
Hubble Space Telescope (HST) optical and UV MP sensitive photometry.
The Letter is structured as follows. In Section 2 the adopted data-set and data-reduction procedures are described. Section 3 reports on the MP analysis in the CMD and a comparison with theoretical models. Finally we discuss the main results in Section 4.
\section{Observations and data analysis}
\subsection{Data-set and data reduction}
\label{sec:datared}
This work is based on observations obtained with the UVIS channel of the Wide Field Camera 3 (WFC3) and the Advanced Camera for Surveys (ACS) aboard HST.
The main data-set is composed of proprietary WFC3 data obtained under GO 16255 (PI: Dalessandro) and consists of 8 images acquired with the F343N filter ($6\times3086$ s and $2\times3095$ s) and 6 images acquired with the F438W filter ($6\times938$ s).
These data were then combined with archival images obtained under GO 10595 (PI: Goudfrooij) and GO 12557 (PI: Girardi). These complementary data-sets consist of 3 ACS images acquired with each of the following filters, F435W ($2\times340$ s and $1\times90$ s), F555W ($2\times340$ s and $1\times40$ s) and F814W ($2\times340$ s and $1\times8$ s), and 3 WFC3 images acquired with the F336W filter ($2\times1190$ s and $1\times1200$ s).
The photometric analysis of the entire data-set was performed by using \texttt{DAOPHOT IV}
\citep{stetson87} and following the approach adopted in previous works \citep[see][]{dalessandro18a,dalessandro18b,cadelano19,cadelano20psr}.
Briefly, tens of bright and isolated stars were selected in each frame to model the point spread function (PSF), which was then applied to all sources detected in each image above $3\sigma$, where $\sigma$ is the standard deviation of the background counts. We then created a master list composed of stars detected in at least half of the deep F343N and F438W images. At the corresponding positions of the stars in this final master list, a fit was forced with \texttt{DAOPHOT/ALLFRAME} \citep{stetson94} in each frame of the two data-sets. For each star thus recovered, the multiple magnitude estimates obtained in each chip were homogenised by using \texttt{DAOMATCH} and \texttt{DAOMASTER}, and their weighted mean and standard deviation were finally adopted as the star's magnitude and photometric error.
The final catalog includes all the sources detected in at least two filters.
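As a purely illustrative sketch (not the actual pipeline), the weighted-mean combination of the repeated measurements described above can be written as follows; the input values are hypothetical:
\begin{verbatim}
# Minimal sketch of the weighted-mean combination of the repeated
# magnitude measurements of one star; inputs are hypothetical.
import numpy as np

def combine_magnitudes(mags, errs):
    # Weighted mean with weights 1/sigma^2, plus the weighted
    # standard deviation adopted as the photometric error.
    mags, errs = np.asarray(mags, float), np.asarray(errs, float)
    w = 1.0 / errs**2
    mean = np.sum(w * mags) / np.sum(w)
    std = np.sqrt(np.sum(w * (mags - mean)**2) / np.sum(w))
    return mean, std

mag, err = combine_magnitudes([22.41, 22.39, 22.44], [0.02, 0.03, 0.02])
\end{verbatim}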
Instrumental magnitudes were calibrated
by using the equations and zero points quoted in the dedicated instrument webpage\footnote{\url{https://www.stsci.edu/hst/instrumentation/wfc3/data-analysis/photometric-calibration}}.
Magnitudes were then corrected for the effect of differential reddening following the approach described in \citet[see also \citealt{dalessandro18b} and the Appendix for further details]{cadelano20a}.
Instrumental positions were corrected for filter-dependent geometric distortions using the prescriptions by \citet{anderson06,bellini09,bellini11} and then converted into the absolute coordinate systems by using the stars in common with \citet{saracino20chromo} as a secondary astrometric reference frame.
The left panels of Figure~\ref{fig:cmd} show the ($m_{F438W},m_{F438W}-m_{F814W}$) and
the ($m_{F438W},m_{F343N}-m_{F438W}$) differential reddening corrected CMDs as an example.
\subsection{Proper motion analysis}
We took advantage of the large temporal baseline of $\sim15$ yr spanned by the observations, which were obtained over 5 different epochs (i.e., 2006, 2011, 2016, 2019 and 2021), to perform a relative proper motion (PM) analysis and clean the cluster CMD from field interlopers.
To derive the cluster's relative PMs, we followed the approach described in \citet[see also \citealt{dalessandro18a,cadelano17,massari21}]{dalessandro13}.
The procedure consists of measuring the instrumental position displacements of the stars detected in all the available epochs, once a common distortion-free reference frame is defined.
As a first step, we obtained a precise measurement of the mean stellar positions in each epoch by averaging their instrumental coordinates measured in each frame of each filter. A $3\sigma$-clipping rejection was applied to maximize the accuracy of the final measurements.
We then used a six-parameter linear transformation to shift the average positions of all the stars to a master-list reference frame, which is composed of a sample of likely cluster member stars selected according to their position in the optical CMDs of the 2006 ACS observations. For each star, the master-frame transformed positions as a function of epoch are fit with a least-squares straight line, the slope of which represents the star's PM. The fitting procedure is iterated after data rejection and $\sigma$-clipping. After deriving the first-pass PM estimates, we repeated the entire procedure, refining the reference master-list by selecting likely member stars according to their first-pass PMs.
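For illustration only, a minimal sketch of the sigma-clipped straight-line fit underlying the PM measurement (one coordinate; the epoch and position arrays are assumed inputs, not the actual data):
\begin{verbatim}
# Sketch of the iterative sigma-clipped linear fit whose slope gives
# the relative PM in one coordinate; inputs are hypothetical.
import numpy as np

def fit_pm(epochs, pos, n_iter=3, clip=3.0):
    t, x = np.asarray(epochs, float), np.asarray(pos, float)
    keep = np.ones(t.size, dtype=bool)
    for _ in range(n_iter):
        coeffs, cov = np.polyfit(t[keep], x[keep], 1, cov=True)
        resid = x - np.polyval(coeffs, t)
        keep = np.abs(resid) < clip * np.std(resid[keep])
    return coeffs[0], np.sqrt(cov[0, 0])  # slope (PM) and formal error
\end{verbatim}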
To obtain a catalog composed of high-probability cluster members, we first applied quite strict astrometric quality selection criteria. Specifically, following the prescriptions by \citet{libralato19}, we selected {\it i)} stars for which the reduced $\chi^2$ of the PM fit is smaller than 2 in both components, {\it ii)} stars having a PM fit based on at least three different epochs, and {\it iii)} stars having a PM error smaller than $3\sigma$ (where $\sigma$ is the local standard deviation of the PM errors calculated over 0.5 mag wide F438W magnitude bins). Then, to select bona-fide cluster members, we analysed the vector-point diagrams (VPDs) in different magnitude bins in the range $19<m_{F438W}<26$. In each VPD, we performed a Gaussian fit to both PM components. Stars having a PM smaller than $1\sigma$, where $\sigma$ is the best-fit Gaussian width, are marked as bona-fide cluster members and are shown in the VPDs on the right panels of Figure~\ref{fig:cmd}.
\begin{figure}[h]
\centering
\includegraphics[scale=0.4]{cmd.pdf}
\includegraphics[scale=0.4]{cmdpm.pdf}
\caption{{\it Left-hand panels:} CMDs of NGC~1783 in two different combinations of filters, as obtained from the entire catalog of stars. Magnitudes are corrected for differential reddening. {\it Right-hand panels:} same CMDs as in the left-hand panels but only for PM-selected stars. The right-hand panels also show the VPDs in different magnitude ranges: gray points represent all the stars with a PM measurement, the red circles enclose stars selected as bona-fide cluster members, highlighted with black points.}
\label{fig:cmd}
\end{figure}
\section{Results}
\begin{figure}[h]
\centering
\includegraphics[scale=0.6]{iso_Un.pdf}
\caption{Isochrones of a $\sim1.5$ Gyr stellar population with $[Fe/H]=-0.35$ and different N enrichments in different filter combinations. The red, blue and green curves represent models with a solar-scaled composition, a $[N/Fe]=0.3$ mixture and a $[N/Fe]=0.7$ mixture, respectively. The corresponding stellar masses are reported on the right-hand axis of the right-hand panel.}
\label{fig:iso}
\end{figure}
To explore the presence of MPs along the cluster MS, we mainly exploited the F343N magnitudes, which have been shown to be quite effective in separating MPs \citep[e.g.][]{martocchia18,cabrera-ziri20}. In Figure~\ref{fig:iso}
we show the expected behavior of three stellar models in three example color combinations. The reference model is a BaSTI-IAC isochrone \citep{hidalgo18} of appropriate age for NGC~1783 ($t\sim1.5$ Gyr), metallicity $[Fe/H]=-0.35$ \citep{mucciarelli08}, distance $(m-M)_0=18.45$, extinction $E(B-V)=0.02$ and scaled-solar chemical mixture. Such an isochrone is representative of the FP chemical composition.
The other two models were obtained by using a coeval isochrone calculated for the same metallicity, but with two different choices for the metal distribution, in which the elements C, N and O follow the observed MP (anti-)correlations. Specifically, the mildly-enhanced model was obtained assuming [C/Fe]$=-0.2$, [N/Fe]$=+0.3$ and [O/Fe]$=-0.1$, while the highly-enhanced model was created assuming [C/Fe]$=-0.2$, [N/Fe]$=+0.7$ and [O/Fe]$=-0.5$. In both models the total C+N+O abundance is constant. The calculation of the model atmospheres and fluxes was performed as described in \citet{hidalgo18}. Figure~\ref{fig:iso} shows that the ($m_{F343N}-m_{F438W}$) color is the most efficient combination to separate MS stars with different N abundances, in particular those with a mild N enhancement. It is also evident that the effect of N enhancement on the ($m_{F343N}-m_{F438W}$) color becomes particularly significant for magnitudes $m_{F438W}>23$. The effect decreases but is still appreciable when the F343N filter is combined with other optical filters (see an example in the middle panel of Figure~\ref{fig:iso}). On the contrary, as expected, the three models do not show any significant difference in the case of optical filter combinations (see the example of the ($m_{F438W}-m_{F814W},m_{F438W}$) CMD in the right-hand panel of Figure~\ref{fig:iso}).
The MP analysis was performed on stars with high photometric quality. First, we removed from the catalog stars having large photometric errors, $\chi^2$ and sharpness values. In particular, for each filter we divided the observed magnitude range into 0.5 mag wide bins and removed those stars deviating by more than $1\sigma$ from the local median value in at least one of the above quantities.
Then we removed photometric binaries from the sample. To do this, we selected in the optical ($m_{F438W}-m_{F814W},m_{F438W}$) diagram MS stars in the magnitude range $21.5<m_{F438W}<26$. We then divided the sequence into 0.5 mag wide bins, in which we evaluated the median and standard deviation of the color, and removed all the $1.5\sigma$ outliers. This allowed us to remove a large fraction of photometric binaries with relatively high mass ratios.
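A minimal sketch of this binned $\sigma$-clipping selection (the array names are hypothetical):
\begin{verbatim}
# Sketch of the binned 1.5-sigma color clipping used to reject likely
# high mass-ratio binaries; 'mag' and 'color' are hypothetical arrays.
import numpy as np

def clip_outliers(mag, color, lo=21.5, hi=26.0, step=0.5, nsig=1.5):
    mag, color = np.asarray(mag), np.asarray(color)
    keep = np.zeros(mag.size, dtype=bool)
    for b in np.arange(lo, hi, step):
        sel = (mag >= b) & (mag < b + step)
        if sel.sum() < 5:          # skip nearly empty bins
            continue
        med, std = np.median(color[sel]), np.std(color[sel])
        keep |= sel & (np.abs(color - med) < nsig * std)
    return keep
\end{verbatim}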
The resulting sample is shown with black dots in Figure~\ref{fig:bimod}.
A first inspection of the CMDs in Figure~\ref{fig:cmd} confirms the presence of an extended turn-off, as commonly observed in young stellar systems and typically interpreted as due to stellar rotation.
Such an effect progressively fades with increasing magnitude, and the MS reaches a minimum broadening at $m_{F438W}=22.4$ (corresponding to a mass of $\sim1.2\,M_{\odot}$).
Interestingly, for magnitudes $m_{F438W}>22.4$ the MS width in the ($m_{F343N}-m_{F438W},m_{F438W}$) diagram abruptly starts to grow again (Figure~\ref{fig:bimod}). We note that this effect is not observed in optical CMDs, but only when the F343N band is adopted, and therefore it is plausible to exclude that it is due only to photometric errors. In addition, we can also exclude that this effect is due to residual contamination by low mass-ratio unresolved binaries: given the almost vertical shape of the MS in the considered magnitude range in the ($m_{F343N}-m_{F438W},m_{F438W}$) CMD, they are not expected to contribute significantly to the MS color distribution, while their effect would be more easily detectable in the optical CMDs. This points to a possible connection with the presence of MPs.
\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{cmd_bimod.pdf}
\caption{{\it Left-hand panels}: CMD of NGC~1783 in the ($m_{F343N}-m_{F438W},m_{F438W}$) filter combination for observed (left panel) and artificial (right panel) stars. Black points are stars that satisfy the photometric and binary selection criteria explained in the text, while gray points are the stars that did not survive the selection. The blue and red curves are the fiducial lines adopted to verticalize the color distribution. {\it Right-hand panels:} the top panel displays the verticalized color distribution of MS stars (observed stars on the left, artificial stars on the right), while the bottom panel shows the corresponding histograms in the magnitude range $m_{F438W}=23.75-24.5$. In the case of the observed stars, the two dashed curves are the two best-fit Gaussians, while the solid black curve is their sum. In the case of the artificial stars, the best-fit single Gaussian is shown together with the ratio between the standard deviations of the observed ($\sigma_{OBS}$) and artificial ($\sigma_{ART}$) verticalized MSs.}
\label{fig:bimod}
\end{figure}
To assess quantitatively whether the observed MS broadening for $m_{F438W}>22.4$ in the ($m_{F343N}-m_{F438W}$) combination can be explained in terms of photometric errors, we compared the observations with artificial stars.
We performed a large number of artificial star experiments following the prescriptions in \citet[see also \citealt{dalessandro15}]{cadelano20b}. We created a list of artificial stars with F438W input magnitudes extracted from a luminosity function modelled to reproduce the observed one and extrapolated beyond the limiting magnitude. Then, to each of these artificial stars, we assigned magnitudes in all the other available filters by interpolating along appropriate mean ridge lines. These artificial stars were added to the real images by using the \texttt{DAOPHOT/ADDSTAR} software and by adopting a regular grid composed of $15\times15$ pixel cells (corresponding approximately to ten times the typical FWHM of the point spread function), in which only one artificial star per run is allowed to fall. The photometric reduction process and the PSF models used for the artificial star experiments are the same as described in Section~\ref{sec:datared}. This process was iterated multiple times; in the end, about 80000 artificial stars were simulated over the entire field of view covered by the adopted data-set. The same photometric quality selection criteria used for real stars were applied to the artificial stars.
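As an illustration of the grid strategy (the image dimensions below are hypothetical), one artificial star per cell and per run can be placed as follows:
\begin{verbatim}
# Sketch of the artificial-star placement: one star per 15x15 pixel
# cell per run, so artificial stars cannot crowd each other.
import numpy as np

rng = np.random.default_rng(1)

def artificial_positions(nx, ny, cell=15):
    xs, ys = [], []
    for cx in range(0, nx - cell, cell):
        for cy in range(0, ny - cell, cell):
            xs.append(cx + rng.uniform(0, cell))  # x inside the cell
            ys.append(cy + rng.uniform(0, cell))  # y inside the cell
    return np.array(xs), np.array(ys)
\end{verbatim}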
We then compared the observed MS width with that derived from artificial star CMDs.
To do this, we verticalized the distribution of MS stars with respect to two fiducial lines \citep[see][for a similar implementation of the technique]{dalessandro18b} in the magnitude range $22.5<m_{F438W}<24.5$.
We estimated the width of the verticalized color distributions by fitting them
with a single Gaussian function.
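A minimal sketch of the verticalization step, assuming the two fiducial lines are available as functions returning the fiducial color at a given magnitude:
\begin{verbatim}
# Sketch: rescale each star's color so that the blue and red fiducial
# lines map to 0 and 1, then fit a single Gaussian to the result.
from scipy.stats import norm

def verticalize(mag, color, blue_fid, red_fid):
    b, r = blue_fid(mag), red_fid(mag)  # fiducial colors at each mag
    return (color - b) / (r - b)

# delta = verticalize(mag, color, blue_fid, red_fid)
# mu, sigma = norm.fit(delta)  # sigma is the width compared in the text
\end{verbatim}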
Results are shown in the right panels of Figure~\ref{fig:bimod}.
Interestingly, we find that the observed MS is $\sim50\%$ broader than the artificial one. A similar difference is measured when other combinations of the F343N filter with optical filters, such as the F814W filter (see top panels of Figure~\ref{fig:rect}), are considered.
On the contrary, in all the color combinations including only optical filters, such as the ($m_{F438W}-m_{F814W},m_{F438W}$) CMD in the bottom panels of Figure~\ref{fig:rect}, the observed verticalized distributions have widths only $15-20\%$ larger than the artificial ones. It is important to stress that such an effect is commonly observed in this kind of comparison (see for example \citealt{dalessandro11,milone12}) and therefore cannot be considered as evidence of a significant difference.
{\it This quantitative analysis suggests that the significant broadening along the MS of NGC~1783, observed only when N-abundance sensitive UV filter combinations are considered, can represent the first detection of MPs in a massive stellar cluster younger than 2 Gyr}.
Such observational evidence is further supported by the fact that the verticalized ($m_{F343N}-m_{F438W}$) color distribution (Figure~\ref{fig:bimod}) shows hints of bimodality. Indeed, in the observed star histogram in Figure~\ref{fig:bimod} we can distinguish two distinct peaks with $\Delta_{F343N,F438W}\sim 1$ mag, which can be nicely fit by two Gaussian functions\footnote{We used the Gaussian Mixture Model statistics (\url{https://scikit-learn.org/stable/index.html}) to perform the two-component fit.}, whose widths are compatible with those expected from photometric errors (i.e., with the widths derived from artificial stars). We can exclude that such a bimodal distribution is due to low mass-ratio unresolved binaries, as they are expected to uniformly populate the MS color distribution in the considered range of magnitudes.
Based on the expected distribution of MPs in this CMD (see Figure~\ref{fig:iso}), the red Gaussian peak corresponds to FP stars and includes $\sim60\%$ of the sample, while the bluer one corresponds to SP stars and includes the remaining $\sim40\%$ of objects. We note that the presence of MPs along the MS becomes apparent in the magnitude range populated by stars with mass $M\leq1\,M_{\odot}$ (Figure~\ref{fig:iso}). Finally, it is worth stressing that FP and SP stars are nicely separated in all color combinations including the F343N band, while they become indistinguishable when optical filter combinations are considered (Figure~\ref{fig:bimod2}).
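For reference, a minimal sketch of the two-component Gaussian Mixture fit mentioned in the footnote above, using the scikit-learn implementation cited there; the input file with the verticalized colors is hypothetical:
\begin{verbatim}
# Sketch of the two-component GMM fit (scikit-learn, as cited in the
# footnote); the verticalized color array is a hypothetical input.
import numpy as np
from sklearn.mixture import GaussianMixture

delta = np.loadtxt("verticalized_colors.txt")  # hypothetical file
gmm = GaussianMixture(n_components=2, random_state=0)
gmm.fit(delta.reshape(-1, 1))
peaks = gmm.means_.ravel()                  # the two peak positions
widths = np.sqrt(gmm.covariances_.ravel())  # their Gaussian widths
fractions = gmm.weights_                    # FP/SP number fractions
\end{verbatim}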
\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{cmd_uni_sim.pdf}
\includegraphics[scale=0.5]{cmd_bi_sim.pdf}
\caption{{\it Top panels:} same as in Figure~\ref{fig:bimod} but in the case of the ($m_{F343N}-m_{F814W},m_{F438W}$) filter combination. {\it Bottom panels:} same as in the top panels but in the purely optical ($m_{F438W}-m_{F814W},m_{F438W}$) filter combination.}
\label{fig:rect}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{cmd_bimod2.pdf}
\caption{{\it Left-hand panel:} CMD of NGC~1783 in the filter combination $(m_{F343N}-m_{F814W},m_{F438W})$. Black and gray points are the cluster members that did and did not survive the photometric selection criteria, respectively. Red and blue dots are FP and SP stars selected on the basis of the bimodality presented in Figure~\ref{fig:bimod}. {\it Right-hand panel:} same as in the left-hand panel but in the purely optical filter combination $(m_{F438W}-m_{F814W},m_{F438W})$.}
\label{fig:bimod2}
\end{figure}
\subsection{Comparison with theoretical models}
\label{sec:models}
To tentatively quantify the degree of N enrichment between FP and SP stars, we compared the observations with a set of synthetic CMDs mimicking a population composed of a mixture of stars having standard solar-scaled chemical composition and stars having a N enriched composition.
To do so, we generated three different synthetic CMDs by populating the three isochrones shown in Figure~\ref{fig:iso}.
We divided the magnitude range $21.5<m_{F438W}<26$ into regular bins of 0.5 mag width. In each bin and for each isochrone we simulated 50 artificial stars by randomly extracting them from a uniform distribution in magnitude and from a normal distribution centered on the isochrone color and with a standard deviation equal to that measured from the artificial stars in the same magnitude bin.
Here we assumed a flat luminosity function for the synthetic population and equally populated FP and SP. We note that results are basically unchanged if slightly different luminosity functions and population ratios are assumed.
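A minimal sketch of this synthetic-CMD construction, assuming the isochrone color and the artificial-star scatter are available as functions of magnitude:
\begin{verbatim}
# Sketch of the synthetic CMD: 50 stars per 0.5 mag bin, drawn flat
# in magnitude and Gaussian in color around the isochrone, with the
# scatter measured from the artificial stars in the same bin.
import numpy as np

rng = np.random.default_rng(0)

def synth_cmd(iso_color, sigma_art, lo=21.5, hi=26.0, step=0.5, n=50):
    mags, cols = [], []
    for b in np.arange(lo, hi, step):
        m = rng.uniform(b, b + step, n)
        mags.append(m)
        cols.append(rng.normal(iso_color(m), sigma_art(m)))
    return np.concatenate(mags), np.concatenate(cols)
\end{verbatim}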
Results are shown in Figure~\ref{fig:synth}.
The synthetic CMDs obtained by including stars with $[N/Fe]=0.7$ (Figure~\ref{fig:synth}) show either a clear split MS or a significantly larger ($>50\%$) broadening with respect to the observed CMD.
On the contrary, we find that the synthetic CMD populated by a mixture of solar-scaled and $[N/Fe]=0.3$ stars is able to nicely reproduce the observations. In fact, the resulting MS width differs by only $\sim10\%$ from the observed one, and the histogram of the verticalized distribution nicely matches the observed color distribution (Figure~\ref{fig:bimod} and Figure~\ref{fig:synth}, panel a). This suggests that NGC~1783 hosts a second population of stars moderately enriched in N ($\Delta[N/Fe]\sim0.3$).
\begin{figure}[h]
\centering
\includegraphics[scale=0.55]{cmd_synth_hist.pdf}
\caption{{\it Panel a):} observed CMD in the ($m_{F343N}-m_{F438W},m_{F438W}$) filter combination. {\it Panel b):} synthetic CMD obtained with a mixture of $[N/Fe]=0$ and $[N/Fe]=0.3$ stars. {\it Panel c):} same as in panel b) but for a mixture of $[N/Fe]=0$ and $[N/Fe]=0.7$ stars. {\it Panel d):} same as in panel b) but for a combination of the three available mixtures. {\it Bottom panels:} histograms of the verticalized distributions of MS stars in the corresponding top panels. The blue, red and black curves are the two-Gaussian fit presented in Figure~\ref{fig:bimod}.}
\label{fig:synth}
\end{figure}
\section{Discussion}
The observational results presented in this Letter show that we have detected for the first time the presence of MPs differing in their light-element abundances in a stellar cluster younger than $\sim2$ Gyr. These findings represent a potential major breakthrough in the field, as they would suggest, at odds with what has been found in the literature so far, that the MP phenomenon is common to all massive clusters, irrespective of their age. Hence, if GC formation is not a phenomenon specific to stellar systems formed at high redshift, then GCs of any age can be used as a proxy to study galaxy assembly processes \citep{horta21,kruijssen15}.
Recently, \citet{cabrera-ziri20} and \citet{li20} carried out a similar study, looking for MPs on the MS of the massive $\sim1.5$ Gyr old cluster NGC~419 in the SMC (see also the case of NGC~1846, \citealt{li21}). The authors did not find any significant evidence of MPs in this cluster; however, their results might have been hampered by the quality of the available photometry (mainly due to the larger distance of the system), and the cluster therefore deserves a follow-up analysis.
By comparing the observed CMDs with artificial stars and by following broadly the same approach used for the MS analysis, we confirm, based on more accurate photometry, previous findings about the lack of MPs along the cluster RGB.
However, we estimate that the apparent disagreement between the results obtained for the RGB and MS is compatible with the expected mixing effects linked to the first dredge-up.
In fact, the results presented in \citet{salaris20} show that the mixing associated with the first dredge-up can reduce the initial N differences among different sub-populations by a factor of about 2-3 at an age of $\sim1.5$ Gyr. Therefore, in the specific case of NGC~1783, an initial spread of $\Delta [N/Fe] \sim 0.3$ dex, as constrained from the MS (Section 3.1), would be erased completely on the RGB, thus mimicking a homogeneous stellar population.
The combination of these results therefore suggests that to study MPs in very young systems it is necessary to focus on their MS, thus largely changing the observing strategies adopted so far. This calls for a dedicated study that would reappraise our understanding of the MP phenomenon over cosmic time.
It is interesting to note in this respect that, while this work shows that there is no sharp age limit for the onset of MPs, age can indeed play a role in shaping light-element chemical abundance variations.
If we compare the initial N spread constrained from the MS of NGC~1783 ($\Delta[N/Fe]\sim 0.3$ dex) with what is found photometrically from the RGBs of intermediate-age and old clusters, after accounting for the effects of the first dredge-up, we find indications of a possible correlation between cluster age and initial N spread, with older clusters requiring an initial internal N variation of $\sim1$ dex and the young ones a spread smaller by a factor of $\sim5$ (see Figure~6 in \citealt{salaris20} and references therein).
While it is necessary to investigate the significance of this trend further, one possibility is that it might be related to the initial cluster mass. In fact, while all clusters analyzed so far have comparable present-day masses ($M>10^5 M_{\odot}$), older clusters could have been much more massive at birth than the younger ones.
Larger masses may allow GCs to retain stellar ejecta more efficiently, and also to accrete pristine gas from their surroundings. However, the notion that GCs lose a significant fraction of their initial mass or are able to accrete/retain significant amounts of gas is still strongly debated (e.g., \citealt{larsen14,bastian15,cabrera15,dalessandro19}).
The results presented here also open the possibility of tightly constraining MP formation processes. For example, young star clusters can be used to detect the presence of age spreads (not possible in the case of old clusters), which is one of the major discriminators among MP formation models \citep[e.g.][]{martocchia18,martocchia19,saracino20muse}.
A detailed characterization of the MP properties in NGC~1783 requires a dedicated spectroscopic follow-up.
Given the distance of the system and faint magnitudes of the target stars, the use of integral field spectrographs and the application of the approach successfully adopted by \citet{latour19} and \citet{saracino20muse} appears to be a promising route.
\begin{acknowledgments}
MC and ED acknowledge financial support from the project Light-on-Dark granted by MIUR through PRIN2017-2017K7REXT. MS acknowledges support from the STFC Consolidated Grant ST/V00087X/1.
\end{acknowledgments}
\vspace{5mm}
\facilities{HST(ACS,WFC3)}
\software{DAOPHOT IV \citep{stetson87,stetson94}
}
% Row metadata for the article above:
% arXiv:2112.06964, https://arxiv.org/abs/2112.06964
% timestamp: 2021-12-15T02:00:52, yymm: 2112, language: en
\section{Introduction}
\label{sec:intro}
Multi-agent systems hold great promise for science exploration in extreme
environments. Correspondingly, there has been a proliferation of national
programs aimed at expanding multi-agent networked systems for caves
\cite{chung2019darpa}, oceans \cite{waterston2019ocean} and low earth
orbit \cite{kramer2008overview,pekkanen2019governing}. These environments can
be considered the ``extreme edge'', far from the robust computation and
omnipresent communication networks of connected cities.
In planetary exploration there is an emerging push toward more extreme environments,
and therefore toward multi-agent systems, because single flagship robots are limited to less-hostile
operating areas. Therefore, access to Recurring Slope Lineae or planetary caves
\ifextendedv
\cite{boston-frederick-et-al-2003,leveille2010lava,mcewen2014recurring}
\else
\cite{leveille2010lava,mcewen2014recurring}
\fi
may be possible with
multiple small, potentially expendable rovers. Not surprisingly, we see potential
systems being demonstrated in the Mars helicopter \cite{balaram2018mars}, and the
``PUFFER'' rover (Pop-Up Flat-Folding Explorer Robots)
\ifextendedv
\cite{karras2017pop,davydychev2019design}.
\else
\cite{davydychev2019design}.
\fi
There is also evidence that
next-generation spaceflight computing employed on these systems will be more
like our current mobile devices \revtwo{
\cite{balaram2018mars,doyle2013hpc,powell2011enabling,mounce2016hpc,schmidt2017spacecubex}.} Finally, it is likely that, in the future, multiple collaborating robots, astronauts, and base stations will \emph{themselves} be a complex and time varying processing and communication network \revtwo{(see e.g. \cite{turchi2021system})}.
\begin{figure}[htb]
\centering{
\includegraphics[width=\linewidth]{problem_2}
{\jvspace{-2em}
\caption{
Illustrative MOSAIC scenario.
A set of processing tasks (shown on the left as a dependency graph) must
be mapped to multiple assets with heterogeneous computing, communication,
and energy capacities. Each asset is also available over a fixed
time window due to terrain effects or orbital parameters.
The goal is to compute all the required tasks as quickly as
possible.
\label{fig:rovers}
}
}
}
\end{figure}
The key metric of these system concepts is throughput of observations and data.
A compelling paradigm to increase the throughput of heterogeneous multi-robot
systems is \emph{computational load-sharing}: by allowing robotic agents to
offload computational tasks to each other or to a ``computational server''
(e.g., an overhead orbiter, a flagship rover, or a stationary lander),
computational load-sharing can give access to advanced analysis capabilities
to small, low-power rovers with limited on-board computing capabilities, or allow agents to do more \revtwo{memory- or CPU-intensive} work by leveraging nearby idle nodes.
Previous work \cite{hook2018ICAPSws,hook2019ICAPS} has shown that computation
sharing in robotic systems with heterogeneous computing capabilities (e.g., Mars
exploration scenarios) can lead to significant increases in system-level
performance and science returns.
\revtwo{These self-reliant, edge robotic systems share commonalities that motivate our
study. The first is an emphasis on energy conservation due to their remote,
self-sustaining design. The second is the possible use of heterogeneous
systems, in which some nodes contain more resources (power, computing,
communication, mobility, sensing, etc) than others. The final factor is
intermittent and periodic loss of connectivity between nodes. While the
benefit of edge computation supporting mobile phone networks continues to be well
investigated (see the highly influential \cite{cuervo2010maui}), the intermittent loss of
communications and the time-varying positions of the agents make it more challenging to employ
these concepts directly. This is true because it is challenging to route
through a changing network, but also because the source and destination change over time
as agents collaborate and assist each other (Figure~\ref{fig:rovers}). The resulting solution must tolerate partitions of the
network or long delays before data can be sent between nodes or back to a data
center.}
In this paper, we formalize the \textit{communication-aware
computation task scheduling problem} and present an Integer Linear Program that
optimizes the allocation of computation and communication tasks to heterogeneous agents,
accounting for the computational capabilities and
time-varying communication links. Because data and computation are
shared among many devices, we dub the resulting local computation-sharing
network a MOSAIC (Multi-robot On-site Shared Analytics Information and Computing) network.
We model and test with networks that use Delay- and Disruption-Tolerant
Networking (DTN) which provides transparent store-and-forward, multi-hop data
routing between arbitrary endpoints and negotiates intermittent interruptions
and delays in connectivity \ifextendedv
\cite{wyatt2017,cerf2007delay,burleigh2003delay}
\else
\cite{wyatt2017,burleigh2003delay}
\fi.
\revtwo{Unlike mobile phone networks which respond to consumer demand, in cooperative multi-agent networks,
agents can explicitly share their goals and constraints with each other. }
Thus, we consider
the robots' intended actions as part of the scheduling problem so that the
robots can schedule data-intensive tasks when assistance is available.
Our evaluation scenarios are biased towards multi-rover systems for Mars or the
Moon. However, the results generalize to arbitrary time-varying
communication graphs, such as vehicles on known street routes or constellations
in orbit. In a planetary exploration scenario, we show that distributed
computation can increase the amount of science performed \emph{threefold}
compared to an analogous system with no computational load-sharing. We show
that the solution includes intuitive results such as designated relay nodes and
``assembly line'' behaviors.
\subsection{Related Work} \label{sec:related_work}
The core computational problem addressed in this work is communication-aware task scheduling.
Task scheduling is known to be NP-complete
\ifextendedv
\cite{garey1979computers,ullman1975np};
\else
\cite{garey1979computers};
\fi
furthermore, while polynomial-time approximation schemes for the problem exist, to the best
of the authors' knowledge no such schemes are known for the task scheduling
problem when computing nodes have \emph{heterogeneous computational
capabilities}, i.e. the same task requires different computation runtimes on
different nodes
\ifextendedv
\cite{Graham1979Optimization,Kwok1999Static}.
\else
\cite{Graham1979Optimization}.
\fi
A large number of heuristic algorithms have been proposed to solve
the task scheduling problem. Heuristics may be classified as \emph{list
scheduling} heuristics (e.g., \cite{Sih1993DynamicLevelScheduling}), which rely
on greedily allocating tasks according to a heuristic priority task assignment;
\emph{clustering} heuristics (e.g., \cite{Yang1994clustering}), which identify
groups of tasks that should be scheduled on the same computing node; and
\emph{task duplication} heuristics, which duplicate some tasks to reduce
communication overhead (e.g., \cite{ahmad1994duplication}). In addition, a
number of \emph{guided random search} algorithms are available, including genetic algorithms
\cite{yu2006scheduling} and ant colony optimization algorithms
\cite{Chen09Scheduling}. See the survey in
\cite{Kwok1999Static} and introduction in \cite{Topcuoglu2002HEFT} for a
thorough review.
In particular, the heterogeneous earliest-finish-time (HEFT) heuristic
algorithm \cite{Topcuoglu2002HEFT} provides excellent performance for
heterogeneous task scheduling problems, and a number of variations of HEFT have
been proposed \cite{Tang2009HEFT,Canon2010RobHEFT,Tang2011SHEFT}. However, the
HEFT algorithm and its derivatives generally assume that computation nodes are
able to perform all-to-all communication and that the availability of
communication links does not change with time; they also do not capture access
contention or bandwidth constraints on communication links, and do not
accommodate \emph{optional} tasks which are not required to be scheduled but
result in a reward when added to the schedule.
Heuristic approaches are also used in model-based schedulers/temporal-planners that rely on activity-centric representations such as timeline-based modeling languages \cite{chien-et-al-SpaceOps-2012} and the Planning Domain Definition Language (PDDL) \cite{pddl21,pddl22,pddl30}. Research on PDDL temporal planners for example has focused on domain-independent heuristics and has deployed planning systems to several robotics applications, especially those that require both planning and scheduling capabilities \cite{cashmoreetal2014,colesetal2019}.
One of the main state-of-the-art temporal planners is OPTIC \cite{benton2012temporal}; OPTIC not only reasons about actions' preconditions and effects to determine the set of actions required to achieve a given goal state, but also considers an action's temporal and resource constraints as well as soft state constraints (preferences) and continuous objective functions. Due to its generality in the input representation, we compare the performance of our approach with the OPTIC planner in Section \ref{sec:experiment:benchmark}.
Several heuristics are also available for the \emph{online} scheduling problem,
where computational tasks appear according to a stochastic process, and are
not revealed to the scheduler in advance
\cite{tassiulas1992stability,dai2005maxpressure,TerekhovEtal2014JAIR}; recent work extends such
schedulers to accommodate communication latency constraints
\cite{Yang2018Scheduling}. However, online approaches generally perform poorly
compared to offline algorithms when the list of tasks to be executed is known
in advance or in batch, a typical scenario for robotic exploration missions;
in addition, even state-of-the-art online algorithms assume that
\textit{all-to-all communication between the computation nodes is available}. In
contrast, the approach proposed in this paper
does adapt to realistic time-varying communication constraints, explicitly represents
multi-hop communications between nodes, and accommodates optional
tasks, while offering sufficiently fast computation times to make the approach amenable
for field use, as we show.
\revtwo{
The problem of resource-aware scheduling in space applications has seen a significant amount of interest in the adaptive space systems community. However, existing solutions tend to focus on reconfigurability \emph{within} an individual vehicle (see e.g. \cite{fayyaz2012adaptive}); solutions applicable to multi-agent systems generally assume that all-to-all communication is available \cite{liao2019caching}.
}
\subsection{Contribution}
Our contribution is fourfold.
First, we design a task scheduling and task allocation algorithm based on integer programming that accounts for time-varying, bandwidth-constrained, multi-hop communication links and optional tasks, and that returns high-quality solutions quickly. We also provide a distributed implementation of the algorithm based on a shared-world, consensus-backed model.
Second, we validate the performance of the algorithm with extensive benchmarking on several hardware architectures, including embedded architectures such as PPC 750 and Qualcomm Flight, and with human-in-the-loop field tests.
Third, we explore and highlight emergent load-sharing behaviors produced by the scheduling algorithm, and we quantitatively show that sharing of computational tasks can result in significant increases in science throughput for a notional multi-robot mission.
Finally, we provide an open-source implementation of the core results for the community's use.
Collectively, the results in this paper show that sharing computational
tasks among heterogeneous agents greatly enhances
heterogeneous multi-agent architectures, resulting in higher utilization of
computational resources, lower energy use, and increased scientific throughput for a given
hardware architecture.
A preliminary version of this paper was presented at the 2019 International Conference on Automated Planning and Scheduling (ICAPS) \cite{hook2019ICAPS}.
In this extended version, we
(i) provide an in-depth discussion of the ILP problem and several additional extensions (including additional cost functions and first-order modeling of network interference),
(ii) rigorously show that a flooding-based algorithm can be used to provide a distributed
implementation of the scheduling algorithm for systems with moderate numbers of agents,
(iii) report extensive benchmarking results showing that the ILP can be solved
effectively on embedded hardware architectures suitable for robotic systems,
and (iv) present an extended discussion of experimental results.
\subsection{Organization}
The rest of this paper is organized as follows.
In Section \ref{sec:modelling}, we rigorously describe the multi-robot, communication-aware computation task scheduling problem solved in the paper.
In Section \ref{sec:scheduler}, we provide a detailed description of the proposed scheduling algorithm.
In Section \ref{sec:experiment}, we present experimental results from a field test performed at the Jet Propulsion Laboratory (JPL) and highlight a number of interesting emergent organization behaviors. We also report benchmarks showing that the proposed scheduling algorithm performs well on several embedded hardware architectures.
Finally, in Section \ref{sec:disc}, we draw conclusions and lay out directions for future work.
\section{Problem Description}\label{sec:modelling}
\begin{figure}
\centering
\includegraphics[width=.95\columnwidth,trim={0.2cm 0 0 0}, clip]{scenario_task_network}
\caption{Notional software network for the PUFFER rovers.}
\label{fig:scenario_task_network}
\end{figure}
We now describe the communication-aware computation task scheduling problem for heterogeneous multi-robot systems.
\paragraph{Tasks and Software Network}
\revtwo{ We wish to schedule a set \ensuremath{\mathbb{T}}~ of tasks, with $|\ensuremath{\mathbb{T}}|=M$ (that is, the number of tasks in \ensuremath{\mathbb{T}}~ is $M$).}
Computational tasks of interest can include, e.g. localizing a robot, computing a motion plan for a robot,
classifying and labeling the content of an image \cite{ono2016data,Higa2019VeeGer}, or estimating the spatial distribution of a phenomenon
based on point measurements from multiple assets \cite{tokekar2016sampletspn,kriging}.
Tasks may be \emph{required} or \emph{optional}. Required tasks,
denoted as $\ensuremath{\mathbb{R}}\subseteq\ensuremath{\mathbb{T}}$, must be included in the schedule.
\revtwo{Optional tasks $\ensuremath{\mathbb{T}} \setminus \ensuremath{\mathbb{R}}$ are each assigned a \emph{reward} score, denoted as $r(T)$ for each optional task $T\in\ensuremath{\mathbb{T}}\setminus\ensuremath{\mathbb{R}}$, which captures the value of including the task in a schedule.}
The output of each task is a \emph{data product}. The data product of task $T$ is denoted as \data{T}, and its size (in bits) is known a priori and denoted as \size{T}.
Tasks are connected by dependency relations encoded in a \emph{software network} $SN$.
\revtwo{Let $P_T\subset\ensuremath{\mathbb{T}}$ be a set of predecessor tasks for task $T\in\ensuremath{\mathbb{T}}$. If task $\hat T \in P_T$ (that is, $\hat T$ is a \emph{predecessor} of task $T$), } task $T$ can only be executed by a robot if the robot has data product $\data{\hat T}$.
If $\hat T$ is scheduled to be executed on the same robot as $T$, $\data{\hat T}$ is assumed to be available to $T$ as soon as the computation of $\hat T$ is concluded.
If $\hat T$ and $T$ are scheduled on different robots, $\data{\hat T}$ must be transmitted from the robot executing $\hat T$ to the robot executing $T$ before execution of $T$ can commence.
An example of $SN$ used in our experiments is shown in Figure~\ref{fig:scenario_task_network}.
To ensure a solution exists, we require two assumptions.
\begin{assumption}[Feasibility]\label{assumption:isolated_feasibility}
There exists a schedule where all required tasks are scheduled.
\end{assumption}
\begin{assumption}[No circular dependencies]
The software network $SN$ does not have cycles.
\end{assumption}
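For concreteness, a minimal sketch (with hypothetical task names) of a software network encoded as a predecessor map, together with a check of the no-cycles assumption via Kahn's algorithm:
\begin{verbatim}
# Sketch: a software network SN as a map from each task to its
# predecessor set P_T, plus an acyclicity check (Kahn's algorithm).
# Task names are hypothetical.
from collections import deque

predecessors = {
    "localize": [],
    "plan_path": ["localize"],
    "take_image": [],
    "classify_image": ["take_image"],  # optional task with reward r(T)
}

def is_acyclic(pred):
    indeg = {t: len(p) for t, p in pred.items()}
    succ = {t: [] for t in pred}
    for t, ps in pred.items():
        for p in ps:
            succ[p].append(t)
    queue = deque(t for t, d in indeg.items() if d == 0)
    seen = 0
    while queue:
        t = queue.popleft()
        seen += 1
        for s in succ[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                queue.append(s)
    return seen == len(pred)  # all tasks ordered => no cycles

assert is_acyclic(predecessors)
\end{verbatim}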
\paragraph{Agents}
Agents in the network represent computing units.
Let there be $N\in\mathbb{Z}^{+}$ agents in the network.
The agents are denoted by
$\agent{1},\thinspace \agent{2},\thinspace\ldots,\thinspace \agent{N}$.
Each agent has known on-board processing and storage capabilities.
The time and energy cost required to perform a task $T$ on agent $A_i$ are assumed to be known and denoted respectively as $\ctime{i}{T}$ and $\cenergy{i}{T}$.
Depending on the application, time and energy cost can capture the worst-case, expected, or bounded computation
time and energy; they are all considered to be deterministic.
\begin{figure}[t]
\begin{centering}
\includegraphics[width=.6\columnwidth]{contact_graph}
\par\end{centering}
\caption{
Contact graph for $3$ agents showing connectivity time windows and bandwidths available. \label{fig:Contact-graph}
}
\end{figure}
\paragraph{Contact Graph} \label{sec:cg}
Agents can communicate according to a prescribed time-varying \emph{contact graph} $CG$ which denotes the availability and bandwidth of communication links between the robots.
$CG$ is a graph with time-varying edges.
Nodes $\mathcal{V}$ in $CG$ correspond to agents.
For each time instant $k$, directed edges $\mathcal{E}_{k}$ model the availability of communication links; that is, $(i, j) \in \mathcal{E}_{k}$ if node $i$ can communicate to node $j$ at time $k$.
Each edge has a (time-varying) data rate ranging from $0$ (not connected) to $\infty$
(communicating to self), denoted by $\rate{i}{j}{k}$ for the rate from
\agent{i} to \agent{j} at time $k$.
An example timeline representation for $3$
agents with available bandwidths can be seen in
Figure~\ref{fig:Contact-graph}.
A key feature of DTN-based networking is Contact Graph Routing (CGR) \cite{wyatt2017,Araniti2015cg}.
CGR takes into account predictable link schedules and
bandwidth limits to automate data delivery and optimize the use of network
resources.
Accordingly, by incorporating DTN's store-and-forward mechanism
into the scheduling problem, it is possible to use mobile agents as
\emph{robotic routers} to ferry data packets between agents that are not directly connected.
Communicating the data product \data{T} from \agent{i} to \agent{j}
at time $k$ requires time
\rev{
\[
\ctime{ij}{T} = \min_{\tau} \left(\tau \text{ such that } \int_{\kappa=k}^{k+\tau} \rate{i}{j}{\kappa} d \kappa \geq \size{T} \right),
\]
}
\rev{that is, $\ctime{ij}{T}$ is the shortest time required to transmit a total of $\size{T}$ bits at an instantaneous data rate $\rate{i}{j}{\cdot}$ starting at time $k$.}
\rev{If the data rate $\rate{i}{j}{\cdot}$ is constant through the communication window and sufficiently long for the transmission to occur,
the expression can be simplified to
$\ctime{ij}{T} = \size{T}/\rate{i}{j}{k}$.}
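As a small numerical illustration of this definition (the rate profile below is hypothetical), the communication time can be evaluated by accumulating the time-varying rate until $\size{T}$ bits have been sent:
\begin{verbatim}
# Sketch: c_ij(T) as the smallest tau such that the rate integrated
# from step k covers size(T); piecewise-constant rates, in bits/step.
def comm_time(rates, k, size):
    sent, tau = 0.0, 0
    while sent < size:
        if k + tau >= len(rates):
            return None  # contact plan ends before transfer completes
        sent += rates[k + tau]
        tau += 1
    return tau

# Constant-rate special case: reduces to ceil(size / rate).
assert comm_time([10, 10, 10, 10], k=0, size=25) == 3
\end{verbatim}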
\revtwo{ We model agents as single-threaded computers.
If a robot actually has multiple processors, even of different types, these can be accommodated by modeling each processor as a computing agent, and connecting physically co-located processors with infinite-bandwidth, zero latency communication links. }
\begin{assumption}[Computational resource availability]
\label{assumption:single_task}
Agents can only perform a single task at any given time, including transmitting or receiving data products.
\end{assumption}
\begin{assumption}[Communication self-loops]
Agents take $0$ time to communicate data products to themselves.
\end{assumption}
\paragraph{Schedule}
A schedule is (a) a mapping of tasks to agents and start-times, denoted as
$\ensuremath{\mathbb{S}}:T\rightarrow (\agent{i}, k)$ where $i\in[1,\ldots,N]$ and $k\geq0$,
and (b) a list of inter-agent communications
$(\agent{i}, \agent{j}, \data{T}, k)$ denoting the transmission of $\data{T}$ from $\agent{i}$ to $\agent{j}$ from time $k$ to time \rev{$k+\ctime{ij}{T} : \left(\int_{k}^{k+\ctime{ij}{T}} \rate{i}{j}{\kappa} d \kappa = \size{T}\right)$.}
\paragraph{Optimization Objectives}
We consider several optimization objectives (formalized in the following section), including:
\begin{itemize}
\item \emph{Optional tasks}: maximize the sum of the rewards $r(T)$ for optional tasks $T$ that are included in the schedule;
\item \emph{Makespan}: minimize the maximum completion time of all scheduled tasks;
\item \emph{Energy cost}: minimize the sum of the energy costs $\cenergy{i}{T}$ for tasks included in the schedule.
\end{itemize}
\paragraph{Scheduling Problem}
We are now in a position to state the communication-aware computation task scheduling problem for heterogeneous multi-robot systems.
\begin{problem}[Communication-Aware Computation Task Scheduling Problem for Heterogeneous Multi-Robot Systems]
\label{prob:schedule}
Given a set of tasks modeled as a software network $SN$, a list of
computational agents \agent{i}, $i\in[1\ldots N]$, a contact graph $CG$,
and a maximum schedule length $C^\star$, find a schedule that satisfies:
\begin{enumerate}
\item The maximum overall computation time is no more than $C^\star$;
\item All required tasks $T\in \ensuremath{\mathbb{R}}$ are scheduled;
\item A task $T$ is only scheduled on agent $\agent{i}$ at time $k$ if the agent has received all the data product $\data{\hat T}$ for predecessor tasks $\hat T \in P_T$;
\item Every agent performs at most one task (including transmitting and receiving data products) at any time;
\item The selected optimization objective is maximized.
\end{enumerate}
\label{prob:sched}
\end{problem}
\paragraph{Notes on Problem Assumptions}
The assumption that a feasible schedule including all required tasks exists (Assumption \ref{assumption:isolated_feasibility}) is appropriate for multi-robot systems where each required task ``belongs'' to a specific robot (i.e., the task is performed with inputs collected by the robot, and the output of the task is to be consumed by the same robot). Examples of such tasks include localization, mapping, and path planning. In such a setting, it is reasonable to assume that each robot should be able to perform all of its own required tasks with no assistance from other computation nodes; on the other hand, cooperation between robots can decrease the makespan, reduce energy use, and enable the completion of optional tasks.
The contact graph is assumed to be known in advance. This assumption is reasonable in many space applications, specifically in surface-to-orbit communications, orbit-to-orbit communications, and surface-to-surface communication in unobstructed environments, where the capacity of the communication channel can be predicted to a high degree of accuracy. In obstructed environments where communication models are highly uncertain (e.g., subsurface voids such as caves, mines, tunnels) a conservative estimate of the channel capacity could be used. Extending Problem \ref{prob:sched} to explicitly capture uncertainty in the communication graph is an interesting direction for future research.
Finally, Problem \ref{prob:sched} also assumes that the communication graph is not part of the optimization process. The problem of optimizing the contact graph by prescribing the agents' motion is beyond the scope of this paper \revtwo{(for a good example of the vast literature see \cite{yan2013co,ghaffarkhah2011communication})}; note the tools described in this paper can be used as an optimization subroutine to numerically assess the effect of proposed changes in the contact graph on the performance of the multi-robot system.
\section{Scheduling Algorithm}
\label{sec:scheduler}
\subsection{ILP formulation}
\newcommand{\ensuremath{C^\star_d}}{\ensuremath{C^\star_d}}
\newcommand{\ensuremath{SN}}{\ensuremath{SN}}
We formulate Problem~\ref{prob:sched} as an integer linear program (ILP).
We consider a discrete-time approximation of the problem with a time horizon of $\ensuremath{C^\star_d}$ time steps, each of duration $C^\star/\ensuremath{C^\star_d}$, corresponding to the maximum schedule length $C^\star$.
\revtwo{As is common in ILP formulations, the number of time steps can be set to any value that balances runtime vs granularity.}
The optimization variables are:
\begin{itemize}
\item $X$, a set of Boolean variables of size $N\cdot M \cdot \ensuremath{C^\star_d}$. $X(i,T,k)$ is true if and only if agent $A_i$ starts computing task $T$ at time $k$.
\item $D$, a set of Boolean variables of size $N \cdot M \cdot \ensuremath{C^\star_d}$. $D(i,T,k)$ is true if and only if agent $A_i$ has stored the data products $d(T)$ of task $T$ at time $k$.
\item $C$, a set of Boolean variables of size $N^2 \cdot M \cdot \ensuremath{C^\star_d}$. $C(i,j,T,k)$ is true if and only if agent $A_i$ communicates part or all of data products $\data{T}$ to agent $A_j$ at time $k$.
\end{itemize}
The optimization objective $R$ can be expressed as follows:
\begin{itemize}
\item Maximize the sum of the rewards for completed optional tasks:
\begin{subequations}
\begin{equation}
R_r = \sum_{i=1}^N \sum_{T\in\ensuremath{\mathbb{T}}\setminus \ensuremath{\mathbb{R}}} \sum_{k=1}^{\ensuremath{C^\star_d}-\ctime{i}{T}} r(T) X(i,T,k)
\end{equation}
\item Minimize the makespan of the problem:
\begin{equation}
R_M = -\max_{i\in[1, N]} \max_{T\in \ensuremath{\mathbb{T}}} \max_{k\in[1, \ensuremath{C^\star_d}]} \left( k + \ctime{i}{T} \right)X(i,T,k)
\end{equation}
\item Minimize the energy cost of the problem:
\begin{equation}
R_e = - \sum_{i=1}^N \sum_{T\in\ensuremath{\mathbb{T}}} \sum_{k=1}^{\ensuremath{C^\star_d}} \cenergy{i}{T} X(i,T,k)
\end{equation}
\label{eq:MILP:costs}
\end{subequations}
\end{itemize}
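Note that $R_M$ is the negative of a maximum of terms that are each linear in $X$, so it is not directly a linear objective. It admits a standard linearization; as a sketch (one common approach, not necessarily the exact reformulation used in our implementation), introduce a continuous auxiliary variable $z$ and write
\begin{equation*}
\text{maximize } -z \quad \text{subject to} \quad z \geq \left( k + \ctime{i}{T} \right) X(i,T,k) \quad \forall i\in[1,N],\; T\in\ensuremath{\mathbb{T}},\; k\in[1,\ensuremath{C^\star_d}],
\end{equation*}
so that, at the optimum, $z$ equals the makespan of the schedule.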
We are now in a position to formally state the ILP formulation of Problem \ref{prob:schedule}:
{
\ifrelaxedv \else
\small
\fi
\begingroup
\allowdisplaybreaks
\begin{subequations}\label{eq:MILP}
\begin{flalign}
& \underset{X, D, C}{\text{maximize }} R \\
& \text{subject to}\nonumber \\
& \sum_{i=1}^N \sum_{k=1}^{\ensuremath{C^\star_d}-\ctime{i}{T}} X(i,T,k) = 1 \quad \forall T\in \ensuremath{\mathbb{R}} \label{eq:MILP:requiredtasks}\\
& \sum_{i=1}^N \sum_{k=1}^{\ensuremath{C^\star_d}-\ctime{i}{T}} X(i,T,k) \leq 1 \quad \forall T\in\ensuremath{\mathbb{T}}\setminus \ensuremath{\mathbb{R}} \label{eq:MILP:optionaltasks}\\
& X(i,T,k) \leq D(i,L,k) \label{eq:MILP:prereqs} \\
&\quad \forall i\in[1,\ndots,N], T\in[1,\ndots,M], L\in P_T, k\in[1,\ndots,\ensuremath{C^\star_d}] \nonumber \\
& \sum_{T=1}^M \left[ \sum_{j=1}^N\left( C(i,j,T,k) + C(j,i,T,k) \right) + \sum_{\mathclap{\hat k = \max(1,k-\ctime{i}{T})}}^k X(i,T,\hat k) \right] \nonumber \\
& \quad \leq 1 \quad \quad \forall i\in[1,\ndots,N], k\in[1,\ndots,\ensuremath{C^\star_d}] \label{eq:MILP:computation}\\
& D(i,T,k+1)\!-\!D(i,T,k) \nonumber \\
& \quad \leq \sum_{\tau=1}^k \sum_{j=1}^N \frac{r_{ji}(\tau)}{\size{T}} C(j,i,T,\tau) + \sum_{\tau=1}^{\mathclap{k-\ctime{i}{T}}} X(i,T,\tau) \nonumber \\
& \quad \forall i\in[1,\ndots,N], T\in[1,\ndots,M], k\in[1,\ndots,\ensuremath{C^\star_d}-1] \label{eq:MILP:learning}\\
& C(i,j,T,k) \leq D(i,T,k) \nonumber \\
& \quad \forall i,j \in[1,\ndots,N], T\in[1,\ndots,M], k\in[1,\ndots,\ensuremath{C^\star_d}] \label{eq:MILP:knowledge}\\
& D(i,T,1)=0 \quad \forall i \in[1,\ndots,N], T\in[1,\ndots,M] \label{eq:MILP:initialinfo}
\end{flalign}
\end{subequations}
\endgroup
}
Equation \eqref{eq:MILP:requiredtasks} ensures that all required tasks are
performed and \eqref{eq:MILP:optionaltasks} that optional tasks are performed
at most once.
Equation \eqref{eq:MILP:prereqs} requires that agents only
start a task if they have access to the data products of all its predecessor
tasks.
Equation \eqref{eq:MILP:computation} captures the agents' limited computation
resources by enforcing Assumption \ref{assumption:single_task}.
Equation \eqref{eq:MILP:learning} ensures that agents learn the content of a
task's data products only if they (i) receive such information from other
agents (possibly over multiple time steps, each carrying a fraction $r_{ij}(k)/\size{T}$ of the data product) or (ii) complete the task themselves.
Equation \eqref{eq:MILP:knowledge} ensures that agents only communicate a data
product if they have stored the data product themselves. Finally, Equation
\eqref{eq:MILP:initialinfo} models the fact that data products are initially
unknown to all agents.
The ILP has $N^2M\ensuremath{C^\star_d}+2NM\ensuremath{C^\star_d}$ Boolean variables and $M(N(3\ensuremath{C^\star_d}-1)+N)+N\ensuremath{C^\star_d}$
constraints; instances with dozens of agents and tasks and horizons of 50--100 time steps can be readily solved by
state-of-the-art ILP solvers, as shown in Section \ref{sec:experiment}.
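To make the formulation concrete, the sketch below builds the Boolean variables $X$ and $D$ and the task-placement constraints \eqref{eq:MILP:requiredtasks}--\eqref{eq:MILP:prereqs} using the open-source PuLP modeling library in Python. This is an illustrative sketch only: the toy instance (the values of \texttt{N}, \texttt{M}, \texttt{K}, \texttt{required}, \texttt{preds}, \texttt{ctime}, and \texttt{rewards}) is hypothetical, and the released implementation targets the CPLEX, SCIP, and GLPK solvers rather than PuLP.
\begin{verbatim}
# Minimal PuLP sketch of the core ILP: Boolean start/knowledge
# variables and the task-placement constraints. The toy instance
# below is hypothetical.
import pulp

N, M, K = 3, 4, 20            # agents, tasks, discrete time steps
required = {0, 1}             # required task indices
preds = {1: [0], 2: [1]}      # P_T: predecessors of each task
ctime = {(i, T): 2 for i in range(N) for T in range(M)}
rewards = {2: 10, 3: 5}       # rewards of optional tasks

prob = pulp.LpProblem("mosaic_ilp", pulp.LpMaximize)
X = pulp.LpVariable.dicts("X", (range(N), range(M), range(K)),
                          cat="Binary")
D = pulp.LpVariable.dicts("D", (range(N), range(M), range(K)),
                          cat="Binary")

# Objective: reward of completed optional tasks
prob += pulp.lpSum(rewards[T] * X[i][T][k]
                   for T in rewards for i in range(N)
                   for k in range(K - ctime[i, T]))
# Every required task is scheduled exactly once
for T in required:
    prob += pulp.lpSum(X[i][T][k] for i in range(N)
                       for k in range(K - ctime[i, T])) == 1
# Optional tasks are scheduled at most once
for T in set(range(M)) - required:
    prob += pulp.lpSum(X[i][T][k] for i in range(N)
                       for k in range(K - ctime[i, T])) <= 1
# A task starts only if all predecessor data products are stored
for i in range(N):
    for T, Ls in preds.items():
        for L in Ls:
            for k in range(K):
                prob += X[i][T][k] <= D[i][L][k]
\end{verbatim}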
\revtwo{
\subsection{Modeling Extension: Capturing Network Interference}
}
The ILP formulation can be extended to capture network interference as follows.
In \eqref{eq:MILP}, link bandwidths $r_{ij}$ are assumed to be fixed and independent of each other: that is, the communication bandwidth $r_{ij}$ on a link is assumed to be achievable regardless of communication activity on other links. This may not hold for systems with robots in close proximity that share the same wireless channel.
In such a setting, interference introduces a coupling between the achievable bandwidths on different links, and the overall amount of data that can be exchanged by interfering links is limited by the \emph{channel capacity} of the shared physical medium.
The formulation in \eqref{eq:MILP} can be extended to capture a first-order approximation of this effect, letting individual link bit rates be decision variables subject to constraints on the overall channel capacity.
Effectively, agents are allowed to use less than the full capacity of individual links to ensure that their transmissions do not cause interference on other links sharing the same wireless channel.
To accommodate this, we define an additional set of real-valued decision variables $R$ of size $N^2 \cdot M \cdot \ensuremath{C^\star_d}$, where $R(i,j,T,k)$ denotes the number of bits of the data product of task $T$ transmitted from agent $A_i$ to agent $A_j$ in time interval $k$.
Under this model,
the interfering links' channel capacity $r(I, k)$ (that is, the overall number of bits that links in $I$ can simultaneously transmit) is known, for each discrete time interval $k$ and each subset $I \in \mathbb{I} \subset 2^{N^2}$ of links that is subject to mutual interference.
In order to avoid introducing an exponential number of constraints, it is desirable to consider a modest number of sets of interfering links \revtwo{based on the agents' geographical proximity}. For instance, if all robots are operating in close proximity and can interfere with each other, the overall bandwidth of \emph{all} links should be constrained to be smaller than the capacity of the shared channel, \revtwo{resulting in the addition of a single interference constraint}.
Equation \eqref{eq:MILP:learning} is replaced by the following equations:
{
\ifrelaxedv \else
\small
\fi
\begin{subequations}
\begin{flalign}
& R(i,j,T,k) \leq r_{ij}(k) C(i,j,T,k) \nonumber \\
&\quad \forall i,j \in[1,\ndots,N], T\in[1,\ndots,M], k\in[1,\ndots,\ensuremath{C^\star_d}] \label{eq:MILP:boolean_bandwidth} \\
& D(i,T,k+1)\!-\!D(i,T,k) \nonumber \\
& \quad \leq \sum_{\tau=1}^k \sum_{j=1}^N \frac{1}{\size{T}} R(j,i,T,\tau) + \sum_{\tau=1}^{\mathclap{k-\ctime{i}{T}}} X(i,T,\tau) \nonumber \\
& \quad \forall i\in[1,\ndots,N], T\in[1,\ndots,M], k\in[1,\ndots,\ensuremath{C^\star_d}-1] \label{eq:MILP:learning_interference} \\
& \sum_{(i,j) \in I} \sum_{T\in[1,\ndots,M]} \!\!\!R(i,j,T,k) \leq r(I,k) \quad \forall I\in \mathbb{I}, k\in[1,\ndots,\ensuremath{C^\star_d}] \label{eq:MILP:channel_capacity}
\end{flalign}
\end{subequations}
}
Equation \eqref{eq:MILP:boolean_bandwidth} ensures that the effective bit rate on a link is nonzero only if a communication occurs on the link; Equation \eqref{eq:MILP:learning_interference} models the process by which robots learn data products through communication, closely following Equation \eqref{eq:MILP:learning}; and Equation \eqref{eq:MILP:channel_capacity} ensures that the sum of all effective bit rates on interfering links does not exceed the channel capacity.
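Continuing the illustrative PuLP sketch above (and reusing its \texttt{pulp}, \texttt{prob}, \texttt{N}, \texttt{M}, and \texttt{K}), the interference extension adds real-valued bit-count variables and one capacity constraint per interfering set. Here we assume the simplest case of a single interference set containing all links, with a hypothetical capacity value.
\begin{verbatim}
# R_bits[i][j][T][k]: bits of d(T) sent from agent i to agent j
# at time step k (real-valued, nonnegative).
R_bits = pulp.LpVariable.dicts(
    "Rbits", (range(N), range(N), range(M), range(K)), lowBound=0)
cap = 11e6   # assumed shared-channel capacity r(I, k) per time step
links = [(i, j) for i in range(N) for j in range(N) if i != j]
for k in range(K):   # single interference set: all links share a channel
    prob += pulp.lpSum(R_bits[i][j][T][k] for (i, j) in links
                       for T in range(M)) <= cap
\end{verbatim}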
\subsection{Distributed, real-time implementation \label{sec:ilp:distributed}}
In order to provide a \emph{distributed}, \emph{real-time} implementation of the scheduler
presented above suitable for field use, we leverage a shared-world approach using a ``broadcast, plan, and execute'' cycle (shown in Figure
\ref{fig:broadcast-plan-execute}).
Agents are assumed to have access to a common clock and have pre-existing knowledge of the duration of the broadcast, plan, and execute phases of the cycle.
The agents also know what programs or processes may be included in the software network \ensuremath{SN} (even if not all agents can execute all processes). However, they do not know a priori the parameters of the optimization problem, namely, the execution times, energy costs, and rewards of individual tasks; this information is exchanged during the broadcast phase.
\begin{figure}[h]
\centering
{
\jvspace{-1em}
\includegraphics[width=\columnwidth]{MOSAIC_broadcast_plan_execute_2}
}
\jvspace{-1em}
\caption{Distributed implementation of the ILP relies on a broadcast-plan-execute cycle. First, agents exchange information about their own state through
a message-passing algorithm and achieve a consensus on the system state. Next, all agents solve Problem \eqref{eq:MILP} with the system state
as input and with a deterministic stopping criterion. Finally, all agents execute the tasks assigned to them by the solution to Problem \eqref{eq:MILP}.}
\label{fig:broadcast-plan-execute}
\jvspace{-.5em}
\end{figure}
\paragraph{Broadcast} At an agreed-upon time, agents start the ``broadcast'' phase; during this phase, agents
exchange their state
with all other agents through a flooding message-passing algorithm \cite[Ch. 4]{lynch1996distributed}, and achieve a consensus on the overall system state.
The duration of the broadcast phase is selected to ensure that consensus can be achieved for any possible network topology. As discussed in \ifextendedv
Appendix \ref{apx:flooding_clustering}
\else
the Extended Version \cite{Hook2021EV}
\fi
, if the communication network is strongly connected, systems with 10-50 agents can achieve consensus in under a second under conservative assumptions on the size of agents' states and available link bandwidths.
The state of each agent includes
(i) the estimated present and future bandwidths $r_{ij}$ between each agent and their neighbors,
(ii) the time and energy costs $\{\ctime{i}{T}\}_{i\in [1, N], T \in \ensuremath{\mathbb{T}}}$, $\{\cenergy{i}{T}\}_{i\in [1, N], T \in \ensuremath{\mathbb{T}}}$ required by the agent to perform each possible task, and
(iii) the rewards $\{r(T)\}_{T\in\ensuremath{\mathbb{T}}\setminus \ensuremath{\mathbb{R}}}$ for performing optional tasks.
This approach is responsive to time-varying task rewards and agent capabilities.
However, the choice of a single broadcast epoch per cycle does introduce some delay in responsiveness, since changes in agent capabilities and rewards are only incorporated if they occur before the start of that cycle's broadcast phase.
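To illustrate the flooding step, the following minimal Python sketch simulates synchronous flooding over an undirected contact graph, assuming each agent's state fits in a single message and links stay up for the duration of the broadcast phase; after a number of rounds equal to the network diameter, all agents in a connected network hold the same system state. All names are hypothetical.
\begin{verbatim}
def flood(neighbors, own_state, rounds):
    # neighbors: {agent: [neighbor agents]}; own_state: {agent: state}
    known = {a: {a: own_state[a]} for a in neighbors}
    for _ in range(rounds):
        snapshot = {a: dict(v) for a, v in known.items()}
        for a, nbrs in neighbors.items():
            for b in nbrs:
                known[a].update(snapshot[b])
    return known   # known[a] is agent a's view of the system state

# Example: a three-agent line network converges in two rounds.
views = flood({0: [1], 1: [0, 2], 2: [1]},
              {0: "s0", 1: "s1", 2: "s2"}, rounds=2)
\end{verbatim}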
\paragraph{Plan}
Once the broadcast phase is over, agents switch to the ``plan'' phase.
In this phase, each agent solves Problem~\eqref{eq:MILP}
with the network topology, task set, and vehicle states computed in the broadcast phase as inputs.
Problem~\ref{prob:sched}
is in general NP-hard, and a solver may fail to find an optimal solution within the allocated time.
To ensure that a feasible final solution is found, we provide the solver with a trivial initial solution (which exists, according to Assumption \ref{assumption:isolated_feasibility}).
To ensure that all agents agree on the same solution, we use a \emph{deterministic} MILP solver (i.e., a solver that explores the decision tree according to a deterministic policy), and we employ a deterministic stopping criterion (i.e., the solver terminates after a prescribed, deterministic number of branch-and-bound steps, selected to ensure termination within the duration of the ``plan'' phase).
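As an illustration of a deterministic stopping criterion, CPLEX exposes a deterministic work limit measured in ``ticks'' that terminates the search after a reproducible amount of work, regardless of wall-clock variation. The sketch below shows one way to configure it from Python; the file name and tick budget are hypothetical, and in practice the budget would be calibrated to fit within the plan phase.
\begin{verbatim}
import cplex

c = cplex.Cplex("mosaic_instance.lp")  # instance exported beforehand
c.parameters.dettimelimit.set(50000.0) # deterministic "ticks" budget
c.parameters.threads.set(1)            # single thread aids determinism
c.solve()
print(c.solution.get_status_string())
\end{verbatim}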
\paragraph{Execute} Once the plan phase is over, agents switch to the execution phase; here, each agent reads the output of Problem \eqref{eq:MILP} and executes the tasks that
are assigned to itself according to the timing prescribed by the schedule.
This approach provides a \emph{distributed} and \emph{anytime} implementation of
Problem~\ref{prob:sched}, which we implement and test in the next section.
\subsection{Remarks}
\rev{
In the problem formulation, communication tasks do consume computational resources on both the transmitter and the receiver (Assumption \ref{assumption:single_task}).
This is in line with the current mode of operations of space missions, where communication is not concurrent with other activities due to computational, power, and reliability considerations. As a result, the rovers' activities should be \emph{synchronized}: in the absence of synchronization, a rover's transmission could interrupt computational activities on the receiver, or be lost if the receiver is unavailable.
In light of this, the selection of a ``broadcast-plan-execute'' distributed implementation, which relies on the availability of synchronization between the agents, is preferable for its simplicity and ease of verification.
In cases where agents can concurrently communicate and perform computational tasks, a more versatile \emph{asynchronous} distributed implementation could also be used. We propose such an asynchronous load-sharing execution mechanism in \cite{RossiVaqueroEtAl2020}; the proposed architecture is agnostic to the task-allocation mechanism used, and is therefore compatible with plans provided by the ILP.
}
The broadcast-plan-execute cycle relies on synchronization of the agents' clocks and on accurate knowledge of the duration of tasks to be executed. Deviations from predicted execution times can result both in tasks not being completed in the ``execute'' phase, and in missed communication windows (if, e.g., a task is not completed by the time its data products should be transmitted to another agent). The cyclic nature of the broadcast-plan-execute cycle allows ``missed'' tasks to be rescheduled at a later time step; nevertheless, the design of \emph{robust} scheduling algorithms that can accommodate uncertainty in synchronization and in task execution times is an interesting direction for future research.
While the flooding-based synchronization mechanism itself is quite robust (as discussed in the previous subsection), the overall scheduling approach is \emph{not} robust to failures of the broadcasting synchronization mechanism.
The integration of more robust coordination mechanisms (e.g., challenge-response to verify that agents have achieved a consensus, and watchdogs triggering
the execution of an agreed-upon contingency plan) are interesting directions for future research.
Finally, the complexity of the ILP scales exponentially with the number of agents;
accordingly, in principle, it may be infeasible to obtain a high-quality solution to Problem \eqref{eq:MILP} at a sufficient cadence for control of a multi-robot system, especially on embedded platforms.
However, in Section~\ref{sec:experiment:benchmark}, we show that state-of-the-art ILP solvers can provide high-quality (if not optimal) solutions within tens of seconds, even on highly constrained platforms.
\begin{figure*}[htb]
\begin{center}
\includegraphics[height=4.5cm]{mars_yard_scenario}
\includegraphics[height=4.5cm]{image16.jpg}
\vspace{.5cm}
\includegraphics[height=4.4cm]{Timeline.png}
\end{center}
\caption{Illustrative scenario in the Mars Yard at JPL (top left), pictures of the hardware nodes (top right), and one scheduled timeline (bottom).
The timeline represents the operational cycle and the task allocation. Communication links can be disabled to test system adaptation and relocation of tasks. The RViz view provides vehicle positioning and network topology information.}
\label{fig:mars_yard_scenario}
\label{fig:viz_tools}
\jvspace{-1.25em}
\end{figure*}
\section{Experiments} \label{sec:experiment}
\begin{figure}[thb]
\centering
\includegraphics[width=.8\columnwidth]{Relay_assembly_line}
\caption{Relay and assembly line emerging behaviors (Yellow and green annotations were added manually to the RViz output from the field demonstration).}
\label{fig:relay_assembly_line_behavior}
\end{figure}
\begin{figure*}[thb]
\centering
\includegraphics[width=.75\textwidth]{assembly_line_scenario}
\caption{Illustrative example of the assembly line case.}
\label{fig:assembly_line_scenario}
\end{figure*}
\begin{figure*}[htb]
\centering
\includegraphics[width=\textwidth,trim={2cm 0 0.25cm 13cm},clip]{data_mule_timeline}
\caption{Example simulation of the data mule scenario. Left: Three Puffers have a weak link to the base station, but the middle robot will move closer and so both robots transmit their data to it rather than directly to the base station. Right: Later, the red robot transmits all data to the base station. See supplementary videos and Section~\ref{sec:reproducing_results}.}
\label{fig:data_mule_scenario}
\jvspace{-1.25em}
\end{figure*}
\rev{In this section, we explore the performance of the proposed approach on a
variety of realistic problems. First, we present field tests of a
\emph{distributed} implementation on a set of mobile, wirelessly-connected,
Raspberry Pis. Second, we assess the computational complexity and performance
of the approach through rigorous benchmarks on a variety of computational
architectures.}
\subsection{Distributed hybrid implementation}
\rev{The goals of these experiments were to implement and test the
\emph{distributed} version of Problem \eqref{eq:MILP}, and, specifically, the
broadcast-plan-execute architecture in Section \ref{sec:ilp:distributed}, in
realistic and challenging environments. We sought to check for five important
characteristics of a field-deployed system.
\begin{itemize}
\item \revtwo{\emph{Robustness}: Does the proposed approach run for extended periods of time? Potential failure modes include crashes, violated assumptions, and the ILP failing to find a feasible solution in time.}
\item \emph{Computational cost}: Does the implementation scale well and run quickly on
realistic computing architectures?
\item \emph{Networking}: Are communication tasks scheduled reasonably, despite the
additional complexity of scheduling computation? Since our ILP contains
data routing as a sub-problem, we expect the solutions to contain reasonable
routing behaviors.
\item \emph{Load Balancing}: Does the solution exhibit load balancing behaviors when nodes
with uneven computational load have good communication between them?
\item \emph{Science Optimization}: Does the system achieve an increase in
throughput of science data compared to a naive approach?
\end{itemize}}
We used a notional multi-robot scenario where multiple small
rovers perform both ``housekeeping'' tasks (e.g., sensing, path planning) and
science tasks (e.g., microscope measurements) and are aided by a
computationally capable base station.
This is illustrated in Figure \ref{fig:mars_yard_scenario}.
\rev{The concept of operations is loosely based on JPL's PUFFER robots
\cite{karras2017pop}}.
\rev{The software network used is shown in Figure~\ref{fig:scenario_task_network}}.
Tasks are arranged in two sets. ``Housekeeping'' tasks (Figure
\ref{fig:scenario_task_network}, top)
are based on the Mars Perseverance rover's
autonomy architecture, and their execution time is based on actual benchmarks on
Perseverance's on-board RAD750 \cite{riebertalk}.
Housekeeping tasks include (i) capturing an image of the terrain,
(ii) self-localization based on that image, (iii) planning a path through the
environment, and (iv) dispatching the drive command. While image capture and
drive command have to be executed on board, localization
and path planning tasks can be delegated to another robot in the network.
To model optional, autonomous science activities, we also added the ``science tasks'' shown in
the bottom of Figure~\ref{fig:scenario_task_network}. Specifically, PUFFERs
can (i) collect a sample from the environment, (ii) analyze it, and (iii) send the
analysis data to the base station for storage and eventual uplink.
The sample analysis task can be assigned to another node.
Only agents inside pre-designated ``science zones'' can perform sample collection;
storage must be performed by the base station.
Each science task has a reward associated with it; the reward for sample collection is set to 5, the reward for
data analysis is 10, and the reward for storing data is 20.
\rev{Note that no actual sampling and analysis tasks were executed; rather,
task execution was simulated by allocating time in the schedule computed by each node.}
This set of ``science tasks'' represents the scenario where PUFFERs explore a distributed
but spatially-correlated phenomenon, such as water moisture levels, by
performing kriging \cite{kriging}, a process routinely used for spatial
estimation in farming on Earth~\cite{tokekar2016sampletspn}.
The base station's computational power
is an order of magnitude larger than an individual robot's, and it is equipped with the same
communication equipment as the other nodes in the network.
The base station is not assigned any required tasks; its key role is to serve as a supporting node for
sharing the computational load of the network.
\rev{In the field experiments, the PUFFERs were represented by Raspberry Pis (model 3) with a GPS receiver, and
the base station was a desktop computer at a fixed location. The Pis were
moved about an outdoor experimental area with two marked ``science zones'' by human experimenters.
We had limited control over the positions of the nodes during the experiment and demonstration, due to the high portability of the Raspberry Pis, the participation of enthusiastic observers from JPL, and direction from observing sponsors.
Accordingly, this experiment was an ideal test of the \emph{reliability} and \emph{robustness} of the overall architecture;
separate software benchmarks (reported in the next section) are better suited to assess
the computational cost and performance of the approach.}
To control the communications network, all platforms were connected through a
WiFi router; bandwidths between nodes were computed in simulation based on (i)
inter-agent distance, (ii) the presence of no-communication zones (shown in red
in Figure \ref{fig:viz_tools}), and (iii) direct human intervention
\rev{(i.e., the experimenters' ability to disable selected communication links on demand)}. Data
rates between agents were based on inter-agent range according to a piecewise
constant function: available bandwidth ranged from 1 Mbps for ranges between
15-200 meters to 11 Mbps for ranges between 0-5 meters, or zero (unavailable)
if the line-of-sight between two vehicles crossed obstructions (shown as red zones in Figure \ref{fig:viz_tools}).
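The range-dependent data-rate model lends itself to a simple lookup. The sketch below reproduces the piecewise-constant function described above; note that only the 0--5 m and 15--200 m plateaus are specified in the text, so the value used for the intermediate 5--15 m band is our assumption.
\begin{verbatim}
def bandwidth_mbps(range_m, obstructed):
    # Piecewise-constant range-to-bandwidth model from the field tests.
    if obstructed or range_m > 200:
        return 0.0           # no line of sight, or out of range
    if range_m <= 5:
        return 11.0
    if range_m <= 15:
        return 5.5           # hypothetical intermediate plateau
    return 1.0
\end{verbatim}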
The logical layer on all agents was implemented using the Robot Operating
System (ROS). For each vehicle in the network, we developed a set of simulated
components, including a communication system to handle the broadcasting phase
and transfer of data, a science component to simulate science tasks, a
navigation/localization system to keep track of the vehicle's pose, and a
controller to plan and dispatch tasks according to the scheduler.
We also developed tools to visualize 1) the execution timeline (and task
allocation), 2) the network topology and bandwidth, and 3) the vehicles'
positions in the environment. The tools also enable manipulation of the
simulation by adding additional no-communication zones and removing
communication links. The software interface is shown in
Figure~\ref{fig:viz_tools}.
The agents were assumed to have access to a common clock, and we considered an
operational cycle 45 seconds long, where 5 seconds were dedicated to the
``broadcast'' phase, 10 seconds to the ``plan'' phase, and 30 seconds to execution
(in line with the expected duration of the tasks in the software network in
Figure~\ref{fig:scenario_task_network}). \revtwo{In practice, we would expect a much longer execution cycle, and perhaps shorter broadcast / plan phase, but these values allowed increased introspection and iteration during field tests. }
The ILOG CPLEX solver was used to solve the ILP. Since CPLEX does not support
an ARM architecture, we deployed the scheduler in an x86 server (AWS m5.xlarge)
that was queried at each plan phase by each vehicle. Vehicles called the
scheduler independently; the solution received by each vehicle was guaranteed
to be consistent with the other vehicles' through use of a deterministic solver
with a deterministic stopping criterion (as discussed in
Section~\ref{sec:ilp:distributed}). The deterministic number of solver steps
corresponded to approximately 10 seconds of execution on the Pi. To ensure
anytime availability of a feasible solution, the solver was seeded with an
initial solution where agents did not share any computational tasks and
executed no optional tasks, which is guaranteed to be feasible.
\rev{Representative portions of the field test are shown in a video in the
Supplementary Material\footnote{Available at
\url{https://youtu.be/zTQ7Y4-ax2A}}.
We provide a software release for full exploration of the
results (Section~\ref{sec:reproducing_results}).}
Experiments were run for four hours, and demonstrated all of the following
characteristics.
\rev{\paragraph{Robustness}
During the 4-hour long demonstration, nodes were added and removed from
the network (by activating and deactivating the corresponding Raspberry Pis),
and active nodes were moved around by observers,
including in and out of science zones. We verified that the proposed
approach is able to \emph{consistently} provide good solutions to problems
with 3 to 15 nodes within the 10-second planning window, and that the
broadcast-plan-execute architecture can be used to provide a distributed
implementation of Problem \eqref{eq:MILP} that is robust to unforeseen,
human-driven changes in the network topology and in the tasks to be scheduled.
\paragraph{Networking and Data Relay}
One of the most intuitive and clearly beneficial emergent behaviors we observed was relaying. Relay nodes, informally speaking, did nothing more than relay communications between other nodes while tending to their own housekeeping tasks.
This behavior was induced reliably through the use of the no-communication zone to block direct communication with the base station. Traffic was instead reliably routed through nodes located between the base station and the sender, as shown in Figure~\ref{fig:relay_assembly_line_behavior}.
\paragraph{Load Balancing and Science Clusters}
The choice of the software network places additional load on nodes that are in
``science zones'', by adding (optional) science tasks to their list of tasks.
In a recurring behavior, ``science clusters'' formed whenever one vehicle was inside
a science region, and other vehicles were nearby but outside.
For example, in Figure \ref{fig:relay_assembly_line_behavior}, the nodes in both science zones off-loaded their localization and path planning tasks to other nearby agents, so as to perform multiple sample collection and analysis tasks.
\paragraph{Science Optimization}
Due to the timing chosen for the software network, the proposed approach could yield at
most a threefold increase in the number of optional science tasks performed for each
node in a science zone. That is, the sum of the computation times of all
relocatable housekeeping tasks was twice the cost of a science task: therefore, by doing
\emph{only} science and offloading all relocatable housekeeping tasks, an agent could gather
three times more science than would have been possible with no load sharing.
The additional analysis and storage tasks placed additional load on nodes
\emph{outside} of the science zone, if appropriately tasked. This maximum was achieved by some nodes that had sufficiently many nearby nodes and sufficient throughput to the base station.
Again in Figure \ref{fig:relay_assembly_line_behavior}, the left science node
was able to schedule three sample-gather tasks, by offloading tasks to nearby
agents. We explore the likelihood of this occurring in random networks in Section~\ref{sec:experiment:benchmark}.
\paragraph{Science Optimization with Assembly Lines}
The combination of relaying and load balancing produces an interesting result
that was unintended but obvious in hindsight. When the system did reach the
maximum observed science throughput, the relay nodes also served as
computational aids for the science tasks, analyzing the data en route to the
base station, akin to an ``assembly line''. We illustrate an occurrence of such
a case in Figure \ref{fig:assembly_line_scenario}.} The node labelled PUFFER 1
is in the left-most science zone and offloads localization (cyan) to nearby
PUFFER 6, as in the ``science cluster'' scenario. PUFFER 1 also schedules
three samples (red). Two sample data products are then transferred to PUFFER
2, which acts as a relay to the base station. PUFFER 2 analyzes one sample and
forwards the resulting analysis result and one sample data product to PUFFER 3;
PUFFER 3 analyzes the sample and transfers two analyzed data products to the
base station for storage. The third sample data product is not analyzed or
stored due to the short time horizon; nevertheless, it is collected to receive
the corresponding reward. As mentioned, a threefold increase per node in a
science zone is the maximum possible, given the ratio between the time cost of
a science task and that of the relocatable housekeeping tasks.
\rev{
The ``assembly line'' result is quite interesting and may offer unexplored
efficiency gains for edge-computing networks such as terrestrial 5G networks.
\paragraph{Store and Forward, and Data Muling}
Because the planner has knowledge of the future state of the communications
network, it should be possible to plan for future connectivity and
store-and-forward packets to a node in preparation for a link coming online. If
the link comes online because the storing node \emph{moves}, this is sometimes
called ``data muling'' \cite{bhadauria2011robotic}.
We did not observe this in field testing because we could not predict the
future state of the communications network, due to human manipulation.
However, the data muling behavior was readily observed and reproduced in
simulation, as shown in Figure \ref{fig:data_mule_scenario} and in the video
in the Supplementary Material.}
\subsection{Software benchmarks}
\label{sec:experiment:benchmark}
Next, we show through numerical
results that the proposed ILP can be solved efficiently on a variety of
hardware platforms, including embedded platforms suitable for robotics
applications, \rev{and we explore the benefits of the approach compared to a ``selfish'' scenario where agents cannot share computational tasks.}
To this end, we test the performance of a \emph{centralized} version of Problem \eqref{eq:MILP} on several hardware architectures for twenty randomly-generated network topologies
(shown in Figure \ref{fig:experiments:instances})
and several cost functions. \rev{In each scenario, a subset of the agents was randomly placed in ``science zones'';
agents in science zones were able to collect one sample, which could optionally be analyzed and stored.}
\begin{figure}
\includegraphics[width=\columnwidth]{two_sim_scenarios}
\caption{Two example scenarios from the numerical experiments. The base station is yellow. Nodes able to perform science tasks are red; nodes unable to perform science tasks are shown in black. Edge width indicates bandwidth.}
\label{fig:experiments:instances}
\end{figure}
For each instance, the number of agents (proportional to the number of tasks to schedule) was varied from 2 to 13 to assess the scalability of the proposed approach. Optimization objectives included (i) maximization of the reward from optional tasks, (ii) minimization of energy expenditure, and (iii) a linear combination of the two.
The problem was solved on several computing platforms, specifically:
\begin{itemize}
\item a modern Intel Xeon workstation equipped with a 10-core E5-2687W processor;
\item an embedded Qualcomm Flight platform equipped with an APQ8096 SoC;
\item a PowerBook G3 computer equipped with a single-core PowerPC 750 clocked at 500 MHz, the same CPU (albeit without radiation-tolerance adjustments) as the RAD750 used on the Curiosity and Mars 2020 rovers \cite{Bajracharya2008autonomy}.
\end{itemize}
The ILP was solved with the SCIP solver \cite{GleixnerEtal2018OO}. For each problem, we computed both the time required for the solver to find and certify an optimal solution, and the quality of the best solution obtained after 60 seconds of execution. We also compared the performance of the proposed scheduler with the state-of-the-art OPTIC PDDL scheduler \cite{benton2012temporal}. Results are shown in Figure \ref{fig:experiments:benchmark}.
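For reference, imposing the 60-second anytime budget with SCIP's Python interface (PySCIPOpt) can be done as in the sketch below; the instance file name is hypothetical, and the actual benchmark harness is part of the released code.
\begin{verbatim}
from pyscipopt import Model

m = Model()
m.readProblem("mosaic_instance.lp")  # instance exported beforehand
m.setParam("limits/time", 60)        # anytime budget in seconds
m.optimize()
print(m.getStatus(), m.getObjVal() if m.getNSols() > 0 else None)
\end{verbatim}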
\begin{figure*}[t]
\centering
\includegraphics[width=.3\textwidth]{benchmark_time_TV_violin.pdf}
\includegraphics[width=.3\textwidth]{benchmark_sq_TV_box_rev}
\includegraphics[width=.3\textwidth]{benchmark_sq_TV_box_OPTIC_rev}
\jvspace{-.5em}
\caption{Numerical results on several hardware platforms, and comparison with the OPTIC PDDL scheduler. Left: time required to solve Problem \eqref{eq:MILP} to optimality. Middle: suboptimality (as a fraction of the optimal solution) after 60s of execution for Problem \eqref{eq:MILP}. Right: the same metric for the OPTIC PDDL solver \cite{benton2012temporal}.}
\label{fig:experiments:benchmark}
\jvspace{-1em}
\end{figure*}
On the Xeon architecture, the median solution time for problems with up to 6 agents is under 10s, and the median solution time for problems with up to 9 agents is under 100s (Figure \ref{fig:experiments:benchmark}, left). The proposed anytime implementation is consistently able to find an optimal solution for problems with up to 11 agents in under 60s (Figure \ref{fig:experiments:benchmark}, middle).
On the embedded Qualcomm SoC, the median solution time for problems with up to 4 agents is under 10s, and the anytime implementation finds the optimal solution to problems with up to 8 agents in under 60s.
Finally, even the highly limited PPC 750 processor is able to find an optimal solution to problems with up to 5 agents in under 60s, a remarkable achievement for a 20-year-old processor.
The ILP scheduler offers superior performance compared to the anytime implementation of the OPTIC scheduler (Figure \ref{fig:experiments:benchmark}, right). In particular, solving Problem \eqref{eq:MILP} results in higher-quality solutions for a given problem size and execution time, and OPTIC is unable to return solutions for problems with more than 7 agents even on the Xeon architecture.
\rev{We also assessed the potential benefits of the proposed approach on a more complex version of the problem, where each agent in a ``science zone'' was able to collect up to \emph{three} samples. We solved the same set of problems shown in Figure \ref{fig:experiments:instances} with up to nine agents; for each instance, we compared the solution to the ILP with a ``selfish'' allocation where agents were not allowed to share computational tasks (except for the storage task, which was constrained to be executed on the base station).
We evaluated the solution quality both after 60s of execution, and after 3600s of execution (a time sufficient to achieve and prove optimality for the vast majority of the scenarios considered).
\begin{figure}[t]
\centering
\includegraphics[width=\ifextendedv0.7\else0.48\fi\columnwidth]{solution_improvement_tasks_bars_v2.pdf}
\includegraphics[width=\ifextendedv0.7\else0.48\fi\columnwidth]{solution_improvement_energy_bars_v2.pdf}
\caption{Proposed approach vs ``selfish'' approach (no sharing of CPU time). Left: tasks performed. Right: average energy per task.}
\label{fig:experiments:performance}
\end{figure}
Figure \ref{fig:experiments:performance} shows the overall number of tasks performed and the average energy usage per task across all problem instances. After 1h of execution, the proposed approach yields a 37.3\% increase in the number of samples collected and analyzed, and a 30.4\% increase in the number of samples stored, compared to the selfish approach; the approach also results in a 41.4\% reduction in average energy use for the sample analysis task, which more than outweighs the small increase in energy use for communications. Remarkably, a similar trend is observed even when the solver is stopped after 60s: here, the proposed approach results in a 30.6\% increase in the number of samples collected and analyzed, a 19.7\% increase in the number of samples stored, and a 44\% decrease in the average energy use for sample analysis compared to the selfish approach.
Collectively, these results show that the proposed approach holds promise to yield significant increases in scientific returns and decreased energy usage; can be implemented on embedded robotic architectures with modest computational performance; and performs well in highly dynamic environments, making it well-suited for field robotics multi-agent applications.
}
\paragraph{Reproducing Our Results}
\label{sec:reproducing_results}
We have released implementations of Problem \eqref{eq:MILP} using
the CPLEX, SCIP, and GLPK MILP solvers under a permissive open-source license.
The implementations are available online at \url{github.com/nasa/mosaic}.
Provided scenario files allow reproduction of all of the
emergent behaviors discussed.
\section{Conclusion}\label{sec:disc}
In this paper, we described the communication-aware computation task scheduling problem for heterogeneous multi-robot systems and the Multi-robot On-site Shared Analytics Information and Computing (MOSAIC) architecture. We proposed an ILP formulation that allows optimal scheduling of computational tasks in heterogeneous multi-robot systems with time-varying communication links.
We showed that the ILP formulation is amenable to a distributed implementation; can be solved efficiently on embedded computing architectures; and can result in a threefold increase in science returns compared to systems with no computational load-sharing.
A number of directions for future research are of interest.
First, we plan to explore pathways to infusion of the MOSAIC architecture in future multi-robot planetary exploration missions.
Proposed Mars Sample Return mission concepts plan to re-visit the same area with multiple launches to fetch, retrieve, and eventually launch soil samples for return to Earth \cite{mattingly2011msr}.
This offers an especially attractive avenue for deployment of MOSAIC, where each deployed asset could act as an ``infrastructure upgrade'', providing communication, computation, and data analysis services for all subsequent assets. Agents participating in the MOSAIC could include Cubesats similar to MarCO
\ifextendedv
\cite{hodges2016marco,schoolcraft2016marco};
\else
\cite{schoolcraft2016marco};
\fi
assets embedded in the ``sky crane'' lander and dropped during the ``flyaway'' phase
\ifextendedv
\cite{korzun2010skycrane,sell2013powered};
\else
\cite{korzun2010skycrane};
\fi
tethered balloons \cite{kerzhanovich2004balloon}; and aerostationary orbiters providing constant assistance to half the Mars surface
\ifextendedv
\cite{breidenthal2016design,breidenthal2018space}.
\else
\cite{breidenthal2018space}.
\fi
The algorithms proposed in this paper can be used during the system design phase to optimize the hardware of the
distributed missions by simulating the scheduling problem in the loop with an iterative hardware trade space explorer such as \cite{herzig2017tradespace}.
Second, we will design software libraries and middlewares that enable integration
of the proposed scheduler with existing autonomy software, autonomously and
transparently distributing computational tasks according to the optimal schedule.
A preliminary effort in this direction can be found in
\cite{RossiVaqueroEtAl2020}.
Finally, it is of interest to extend the proposed scheduling approach to handle uncertainty in the contact graph and in the execution times of individual tasks.
One promising research avenue is to incorporate stochastic optimization tools as well as probabilistic planning and scheduling
approaches \cite{santana-vaquero-et-al-2016} into the computation-sharing problem; such approaches hold promise for providing guarantees that MOSAIC can operate within given bounds on the uncertainty of the problem inputs.
\jvspace{-.5em}
\section*{Acknowledgements}
This work was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. \\
\jvspace{-1em}
{\small
\bibliographystyle{IEEEtran}
\section{Appendix}
\subsection{Dataset and Capture system}
We use a multi-camera system with around 100 synchronized color cameras that produces $2048\times1334$ resolution images at 30 Hz. The cameras are focused at the center of the capture system and distributed spherically at a distance of one meter to provide as many viewpoints as possible. Camera intrinsics and extrinsics are calibrated in an offline process. We captured three sequences with different hair styles and hair motions. In the first sequence, one actor with a short, high ponytail nods and rotates her head. In the second sequence, one actor with long, curly hair worn loose leans her head in four directions (left, right, up, and down) and rotates it. In the third sequence, one actor with a long, high ponytail nods and rotates her head.
\subsection{Baselines}
We compare against several volume-based or implicit-function-based baseline methods~\cite{steve_mvp, tretschk2021nrnerf, li2021nsff} for spatio-temporal modeling.
\noindent\textbf{MVP\cite{steve_mvp}} presents an efficient 4D representation for dynamic scenes with humans that is capable of animation and novel view synthesis. It combines an explicitly tracked head mesh with volumetric primitives to model human appearance and geometry with better completeness. The volumetric primitives can be aligned onto an unwrapped 2D UV-map of the tracked head mesh and regressed by a 2D convolutional neural network that leverages spatially shared computation. Similar to Neural Volumes~\cite{steve_nvs}, a differentiable volumetric ray marching algorithm renders 2D RGB images from MVP in real time. We use $N_{p}=4096$ volumetric primitives with a voxel resolution of $8\times8\times8$ on each sequence, with a ray marching step size of around $dt=1\,mm$. We use a global latent size of $256$.
\noindent\textbf{Non-rigid NeRF\cite{tretschk2021nrnerf}} presents an implicit-function-based representation for dynamic scene reconstruction and novel view synthesis based on NeRF~\cite{mildenhall2020nerf}. It utilizes a hierarchical model that disentangles a dynamic scene into a canonical-frame NeRF and a corresponding deformation field parameterized by another MLP. In our experiments, we use 128 sampling points for both coarse- and fine-level sampling. We use the original implementation from the authors, available \href{https://github.com/facebookresearch/nonrigid_nerf}{here}. We train a separate model for each sequence; each model is trained for at least 300k iterations, until convergence.
\noindent\textbf{NSFF\cite{li2021nsff}} is another implicit-function-based representation for dynamic scenes that is also based on NeRF~\cite{mildenhall2020nerf}. It learns a per-frame NeRF that is additionally conditioned on the time index. It uses optical flow as additional supervision and learns a 3D scene flow in parallel with the per-frame NeRF to enforce temporal consistency. NSFF is able to perform both spatial and temporal interpolation on a given video sequence. We use 256 sampling points in our experiments, using~\cite{kroeger2016disof} as a substitute for generating optical flow. We use the original implementation from the authors, available \href{https://github.com/zl548/Neural-Scene-Flow-Fields}{here}. We train a separate model for each sequence; each model is trained for at least 300k iterations, until convergence.
\subsection{Training Details}
For both tracking optimization and HVH training, we use Adam~\cite{kingma2014adam}. For hair tracking, we use a learning rate of $1$ and set the weighting coefficients of the losses to $\omega_{hdir}=3$, $\omega_{hpos}=1$, $\omega_{len}=3$, $\omega_{tang}=3$, and $\omega_{cur}=10^4$. For each time step, we run 100 optimization iterations to solve for the hair strands at the next frame. For HVH, we set the weighting parameters of the objectives to $\lambda_{flow}=1$, $\lambda_{geo}=0.1$, $\lambda_{vol}=0.01$, $\lambda_{cub}=0.01$, and $\lambda_{KL}=0.001$. All models are trained for approximately 100--150k iterations. We use a latent code size of 256, a per-strand hair code size of 256, a raymarching step size of around $dt=1\,mm$, and around $N_{p}=5500$ volumetric primitives with a voxel resolution of $8\times8\times8$ for each sequence, depending on the number of guide hairs. For each sequence, we have roughly 30 guide strands, and we sample 50 points on each strand.
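For concreteness, the overall HVH training objective can be assembled as a weighted sum of the individual terms with the coefficients above; the Python sketch below is purely illustrative, with the loss terms themselves as placeholders computed elsewhere in the training loop.
\begin{verbatim}
# Weighted HVH training objective with the coefficients above.
weights = dict(flow=1.0, geo=0.1, vol=0.01, cub=0.01, KL=0.001)

def total_loss(losses):
    # losses: {"flow": tensor, "geo": tensor, ...}, computed elsewhere
    return sum(weights[name] * value for name, value in losses.items())
\end{verbatim}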
\subsection{Novel View Synthesis}
We show a larger version of the comparison figure between different methods in Figure~\ref{fig:nvs_compare_apdx}. For completeness, we also include visualizations from a per-frame NeRF model, which takes a per-frame temporal code as input, like non-rigid NeRF~\cite{tretschk2021nrnerf}.
\begin{figure*}[htb]
\setlength\tabcolsep{0pt}
\renewcommand{\arraystretch}{0}
\centering
\begin{tabular}{cccccc}
\textbf{\small Perframe NeRF} &
\textbf{\small NSFF~\cite{li2021nsff}} &
\textbf{\small Non-rigid NeRF~\cite{tretschk2021nrnerf}} &
\textbf{\small MVP~\cite{steve_mvp}} &
\textbf{\small Ours} &
\textbf{\small Ground Truth} \\
\adjincludegraphics[width=0.162\textwidth, trim={0 0 0 0}, clip]{figs/nvs_compare/seq01_57/000057_pfnerf_comp_307_578_464_735_nw.png} &
\adjincludegraphics[width=0.162\textwidth, trim={0 0 0 0}, clip]{figs/nvs_compare/seq01_57/000057_nsff_comp_307_578_464_735_nw.png} &
\adjincludegraphics[width=0.162\textwidth, trim={0 0 0 0}, clip]{figs/nvs_compare/seq01_57/000057_nrnerf_comp_307_578_464_735_nw.png} &
\adjincludegraphics[width=0.162\textwidth, trim={0 0 0 0}, clip]{figs/nvs_compare/seq01_57/000057_mvp_comp_307_578_464_735_nw.png} &
\adjincludegraphics[width=0.162\textwidth, trim={0 0 0 0}, clip]{figs/nvs_compare/seq01_57/000057_ours_comp_307_578_464_735_nw.png} &
\adjincludegraphics[width=0.162\textwidth, trim={0 0 0 0}, clip]{figs/nvs_compare/seq01_57/000057_gt_comp_307_578_464_735_nw.png} \\
\adjincludegraphics[width=0.162\textwidth, trim={0 0 0 0}, clip]{figs/nvs_compare/seq02_182/000182_pfnerf_comp_302_491_545_724_nw.png} &
\adjincludegraphics[width=0.162\textwidth, trim={0 0 0 0}, clip]{figs/nvs_compare/seq02_182/000182_nsff_comp_302_491_545_724_nw.png} &
\adjincludegraphics[width=0.162\textwidth, trim={0 0 0 0}, clip]{figs/nvs_compare/seq02_182/000182_nrnerf2_comp_302_491_545_724_nw.png} &
\adjincludegraphics[width=0.162\textwidth, trim={0 0 0 0}, clip]{figs/nvs_compare/seq02_182/000182_mvp_comp_302_491_545_724_nw.png} &
\adjincludegraphics[width=0.162\textwidth, trim={0 0 0 0}, clip]{figs/nvs_compare/seq02_182/000182_ours_comp_302_491_545_724_nw.png} &
\adjincludegraphics[width=0.162\textwidth, trim={0 0 0 0}, clip]{figs/nvs_compare/seq02_182/000182_gt_comp_302_491_545_724_nw.png} \\
\adjincludegraphics[width=0.162\textwidth, trim={0 0 0 0}, clip]{figs/nvs_compare/seq03_220/000220_pfnerf_comp_182_593_443_880_nw.png} &
\adjincludegraphics[width=0.162\textwidth, trim={0 0 0 0}, clip]{figs/nvs_compare/seq03_220/000220_nsff_comp_182_593_443_880_nw.png} &
\adjincludegraphics[width=0.162\textwidth, trim={0 0 0 0}, clip]{figs/nvs_compare/seq03_220/000220_nrnerf_comp_182_593_443_880_nw.png} &
\adjincludegraphics[width=0.162\textwidth, trim={0 0 0 0}, clip]{figs/nvs_compare/seq03_220/000220_mvp_comp_182_593_443_880_nw.png} &
\adjincludegraphics[width=0.162\textwidth, trim={0 0 0 0}, clip]{figs/nvs_compare/seq03_220/000220_ours_comp_182_593_443_880_nw.png} &
\adjincludegraphics[width=0.162\textwidth, trim={0 0 0 0}, clip]{figs/nvs_compare/seq03_220/000220_gt_comp_182_593_443_880_nw.png}
\end{tabular}
\caption{\label{fig:nvs_compare_apdx}\textbf{Comparison on novel view synthesis between different methods.}}
\end{figure*}
\subsection{Ablation Studies}
\noindent\textbf{Temporal Consistency.} We show a bigger version of rendering results on unseen sequence in Figure~\ref{fig:nvs_abl_apx}.
\begin{figure*}[htb]
\setlength\tabcolsep{0pt}
\renewcommand{\arraystretch}{0}
\centering
\begin{tabular}{ccccc}
\textbf{\small MVP} &
\textbf{\small MVP w/ flow} &
\textbf{\small Ours w/o flow} &
\textbf{\small Ours} &
\textbf{\small Ground Truth} \\
\adjincludegraphics[width=0.2\textwidth, trim={0 0 0 0}, clip]{figs/nvs_abl/seq01_test/000032_mvp_comp_165_512_389_753_nw.png} &
\adjincludegraphics[width=0.2\textwidth, trim={0 0 0 0}, clip]{figs/nvs_abl/seq01_test/000032_mvp_w_comp_165_512_389_753_nw.png} &
\adjincludegraphics[width=0.2\textwidth, trim={0 0 0 0}, clip]{figs/nvs_abl/seq01_test/000032_ours_wo_comp_165_512_389_753_nw.png} &
\adjincludegraphics[width=0.2\textwidth, trim={0 0 0 0}, clip]{figs/nvs_abl/seq01_test/000032_ours_comp_165_512_389_753_nw.png} &
\adjincludegraphics[width=0.2\textwidth, trim={0 0 0 0}, clip]{figs/nvs_abl/seq01_test/000032_gt_comp_165_512_389_753_nw.png} \\
\adjincludegraphics[width=0.2\textwidth, trim={0 0 0 0}, clip]{figs/nvs_abl/seq02_test/000086_mvp_comp_247_580_471_821_nw.png} &
\adjincludegraphics[width=0.2\textwidth, trim={0 0 0 0}, clip]{figs/nvs_abl/seq02_test/000086_mvp_w_comp_247_580_471_821_nw.png} &
\adjincludegraphics[width=0.2\textwidth, trim={0 0 0 0}, clip]{figs/nvs_abl/seq02_test/000086_ours_wo_comp_247_580_471_821_nw.png} &
\adjincludegraphics[width=0.2\textwidth, trim={0 0 0 0}, clip]{figs/nvs_abl/seq02_test/000086_ours_comp_247_580_471_821_nw.png} &
\adjincludegraphics[width=0.2\textwidth, trim={0 0 0 0}, clip]{figs/nvs_abl/seq02_test/000086_gt_comp_247_580_471_821_nw.png} \\
\adjincludegraphics[width=0.2\textwidth, trim={0 0 0 0}, clip]{figs/nvs_abl/seq03_test/000030_mvp_comp_178_636_402_877_nw.png} &
\adjincludegraphics[width=0.2\textwidth, trim={0 0 0 0}, clip]{figs/nvs_abl/seq03_test/000030_mvp_w_comp_178_636_402_877_nw.png} &
\adjincludegraphics[width=0.2\textwidth, trim={0 0 0 0}, clip]{figs/nvs_abl/seq03_test/000030_ours_wo_comp_178_636_402_877_nw.png} &
\adjincludegraphics[width=0.2\textwidth, trim={0 0 0 0}, clip]{figs/nvs_abl/seq03_test/000030_ours_comp_178_636_402_877_nw.png} &
\adjincludegraphics[width=0.2\textwidth, trim={0 0 0 0}, clip]{figs/nvs_abl/seq03_test/000030_gt_comp_178_636_402_877_nw.png}
\end{tabular}
\caption{\label{fig:nvs_abl_apx}\textbf{Ablation of temporal consistency.} We compare MVP~\cite{steve_mvp} and ours with different variations.}
\end{figure*}
\noindent\textbf{Hair Decoder structure.} As part of the hair decoder ablation, we compare our method with a naive decoder that uses the same volume decoder as MVP~\cite{steve_mvp} for hair volumes. There are two major differences: 1) the naive decoder does not take the per-strand hair features as input; 2) the design of the naive decoder does not take into account the hair-specific structure: it regresses the same slab as for the tracked head mesh, and we take the first $N_{hair}$ volumes as the output. In this way, the naive decoder discards all intrinsic geometric structural information while performing convolutions in each layer. We show the hair volume layouts in Figure~\ref{fig:apx_dec_layout}. In the naive design, the hair strands are randomly squeezed into a square UV-map, which can break the inner connections of each hair. In our design, we arrange the hair strands according to their directions, which preserves the hair-specific geometric structure. We compare the different decoder designs on Seq01. As shown in Table~\ref{tab:abl_dec}, our hair-structure-aware decoder produces a smaller image reconstruction error and better SSIM, a result of the inductive bias of the designed hair decoder.
\begin{table}[h!]
\centering
\begin{tabular}{r|ccc|}
\multicolumn{1}{r|}{decoder} & \multicolumn{1}{c|}{MSE} & \multicolumn{1}{c|}{SSIM} & PSNR \\ \hline
naive & 45.68/75.15 & 0.9549/0.9220 & 31.83/29.54 \\
early fus. & 43.75/71.08 & 0.9533/0.9259 & 31.97/29.82 \\
late fus. & 41.89/65.96 & 0.9543/0.9280 & 32.17/30.09
\end{tabular}
\caption{\label{tab:abl_dec}\textbf{Decoder structure.} We compare different designs of the hair decoder. We report all metrics on both training and testing sequences, separated by a `/'; the left-hand values are the results of novel view synthesis on the training sequence.}
\end{table}
\begin{figure}[h!]
\centering
\begin{tabular}{cc}
\textbf{\small Naive Decoder} &
\textbf{\small Our Decoder} \\
\adjincludegraphics[width=0.24\textwidth, trim={0 0 0 0}, clip]{figs/apx_dec_layout/naive_dec.png} &
\adjincludegraphics[width=0.24\textwidth, trim={0 0 0 0}, clip]{figs/apx_dec_layout/ours_dec.png}
\end{tabular}
\caption{\label{fig:apx_dec_layout}\textbf{Hair volumes layout.} We show the hair volume layout of both naive decoder and ours.}
\end{figure}
We additionally compare two designs of the hair decoder that perform late and early fusion of the per-strand hair features and the global latent feature. We show the two designs in Figure~\ref{fig:hair_dec_apd}. Table~\ref{tab:abl_dec} shows that the late fusion model performs better than the early fusion model. This could be because the late fusion model transforms the 1D global latent code into a spatially varying feature tensor, which is a more expressive form of feature representation.
\begin{figure}[htb]
\centering
\adjincludegraphics[width=0.45\textwidth, trim={0 0 0 {0.02\height}}, clip]{figs/hair_decoder.pdf} \\
\adjincludegraphics[width=0.45\textwidth, trim={0 0 0 {0.02\height}}, clip]{figs/dec_early.pdf}
\caption{\label{fig:hair_dec_apd}\textbf{Architecture of the hair decoder.} We show late fusion on the top and early fusion on the bottom. The late fusion model first deconvolves the 1D global latent code into a 2D feature map and then concatenates it with the per-strand hair features; a 2D CNN is used afterwards to generate the hair volumes. The early fusion model first repeats the 1D global latent vector spatially and then concatenates the repeated feature map with the per-strand hair features; the concatenated features are then fed into a deeper 2D CNN to generate the hair volumes.}
\label{fig:hair_dec}
\end{figure}
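A schematic PyTorch sketch of the late fusion strategy is given below; all layer sizes and the $8\times8$ feature-map resolution are our assumptions for illustration, not the actual architecture.
\begin{verbatim}
import torch
import torch.nn as nn

class LateFusionDecoder(nn.Module):
    # Deconvolve the 1D global code to a 2D map, then concatenate
    # per-strand hair features and decode. Sizes are hypothetical.
    def __init__(self, z_dim=256, strand_dim=256, out_ch=8):
        super().__init__()
        self.up = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 128, 4),            # 1x1 -> 4x4
            nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(128, 64, 4, stride=2,
                               padding=1),                # 4x4 -> 8x8
        )
        self.head = nn.Conv2d(64 + strand_dim, out_ch, 3, padding=1)

    def forward(self, z, strand_feat):
        # z: (B, z_dim); strand_feat: (B, strand_dim, 8, 8)
        g = self.up(z[:, :, None, None])
        return self.head(torch.cat([g, strand_feat], dim=1))
\end{verbatim}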
\subsection{Visualization of Flow}
Please see Figure~\ref{fig:iflow_vis} for a visualization of the rendered flow from our representation. Compared to the optical flow from~\cite{kroeger2016disof}, our rendered 2D flow has less noise in the background. This is because we only define our 3D scene flow on the volumetric primitives rather than over the whole space. With the help of coarse-level geometry such as the hair strands and the tracked head mesh, the scene flow over most of the empty space is naturally zero, which helps eliminate noise from the background optical flow to a certain degree.
\noindent\textbf{Run Time Analysis.} We report the time to render one image at resolution $1024\times667$ for each method. MVP~\cite{steve_mvp} takes 0.223s; ours takes 0.254s; NSFF~\cite{li2021nsff} takes 28.68s; NRNeRF~\cite{tretschk2021nrnerf} takes 41.29s. All tests are conducted on a single Nvidia Tesla V100 GPU.
\begin{figure}[h!]
\setlength\tabcolsep{0pt}
\renewcommand{\arraystretch}{0}
\centering
\begin{tabular}{ccc}
\textbf{\small Flow from ours} &
\textbf{\small Flow from ~\cite{kroeger2016disof}} &
\textbf{\small Ground truth} \\
\adjincludegraphics[width=0.16\textwidth]{figs/iflow_vis/000030_seq01_iflow.png} &
\adjincludegraphics[width=0.16\textwidth]{figs/iflow_vis/000030_seq01_flow.png} &
\adjincludegraphics[width=0.16\textwidth]{figs/iflow_vis/000030_seq01_gt.png} \\
\end{tabular}
\caption{\label{fig:iflow_vis}\textbf{Visualization of flow.} We show the rendered 3D scene flow into 2D flow in the first column and the openCV optical flow~\cite{kroeger2016disof} in the second column. The last column shows the ground truth image as reference.}
\end{figure}
\subsection{Hair Tracking Analysis}
In Figure~\ref{fig:track_plot}, we plot different hair properties over time. We report four different metrics describing how well the tracked hairs fit the per-frame reconstruction and how well they preserve their length and curvature. In the first two rows, we report the MSE between the tracked hair and the tracked hair at the first frame in terms of curvature and length. In the last two rows, we report the cosine distance between the direction of each node on the tracked guide hair and the directions of its neighbors from the reconstruction, and the Chamfer distance between the tracked guide hair nodes and the reconstruction. As we can see, length and curvature are largely preserved across frames, and the affinity between the per-frame reconstruction and the tracked guide hair is relatively high.
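For reference, the length MSE and the (one-sided) Chamfer distance used in these plots can be computed as in the following NumPy sketch; array shapes and helper names are our own illustrative assumptions.
\begin{verbatim}
import numpy as np

def seg_lengths(S):
    # S: (num_strands, nodes, 3) -> per-segment lengths
    return np.linalg.norm(S[:, 1:] - S[:, :-1], axis=-1)

def length_mse(S_t, S_0):
    return np.mean((seg_lengths(S_t) - seg_lengths(S_0)) ** 2)

def chamfer(a, b):
    # one-sided Chamfer from tracked nodes a: (N,3) to recon b: (M,3)
    d = np.linalg.norm(a[:, None] - b[None], axis=-1)
    return d.min(axis=1).mean()

S0 = np.random.rand(4, 16, 3)
St = S0 + 0.01 * np.random.randn(*S0.shape)
recon = np.random.rand(100, 3)
print(length_mse(St, S0), chamfer(St.reshape(-1, 3), recon))
\end{verbatim}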
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\textwidth]{figs/dis_plot.png}
\caption{\label{fig:track_plot}\textbf{Plot of tracked hair properties vs. time.} As we can see, hair properties like length and curvature change little across time and the hair Chamfer distance stays relatively small.}
\end{figure}
\section{Video Results}
Please see all video results on the appended HTML page: \texttt{./video\_navigation/index.html}
\section{Conclusion}
In this paper, we present a hybrid neural volumetric representation for dynamic hair performance capture. Our representation leverages the efficiency of the guide hair representation used in hair simulation by attaching volumetric primitives to the guide hairs, as well as the high DoF of volumetric representations. The hybrid volumetric representation can be regressed from a global temporal latent code and per-strand hair features with hair structure awareness, and it is optimized using multiview RGB images and optical flow. We show that our method generates sharper and higher quality results on hair and achieves better generalization with temporal consistency.
\section{Discussion}
In this paper, we present a hybrid neural volumetric representation for dynamic hair performance capture. Our representation leverages the efficiency of the guide hair representation used in hair simulation by attaching volumetric primitives to the guide hairs, as well as the high DoF of volumetric representations.
With both hair tracking and 3D scene flow refinement, our model enjoys better temporal consistency.
We empirically show that our method generates sharper and higher quality results on hair and achieves better generalization.
Our model also supports multiple applications like drivable animation and hair editing.
\section{Experiments}
\subsection{Dataset}
For each video recorded with our multi-camera system, we hold out approximately a quarter of the time frames as the test sequence and use the rest as the training sequence. The test sequence is used for the drivable animation experiments. This results in roughly 300 training frames and 100 testing frames per sequence. Additionally, on the training sequence, we hold out 7 cameras that are distributed around the rear and side views of the head. The captured images are downsampled to $1024\times667$ resolution for training and testing. We train our model exclusively on the training portion of each sequence with $m=93$ training views.
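A sketch of this split is shown below; the held-out camera ids are hypothetical placeholders.
\begin{verbatim}
import numpy as np

num_frames, num_cams = 400, 100
frames = np.arange(num_frames)
train_frames = frames[: int(0.75 * num_frames)]  # ~300 frames
test_frames = frames[int(0.75 * num_frames):]    # ~100 frames

held_out = {3, 17, 28, 41, 55, 72, 90}  # hypothetical rear/side views
train_cams = [c for c in range(num_cams) if c not in held_out]
assert len(train_cams) == 93             # m = 93 training views
\end{verbatim}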
\iffalse
\subsection{Baselines}
We compare against several volume-based or implicit function based baseline methods~\cite{steve_mvp, tretschk2021nrnerf, li2021nsff} for spatio-temporal modeling. Please see more details about the setting of each methods in the supplementary.
\fi
\subsection{Novel View Synthesis}
\label{sec:nvs}
\begin{table*}[h!tb]
\centering
\begin{tabular}{cc}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{r|ccc|ccc|ccc|}
\multicolumn{1}{l|}{\multirow{2}{*}{}} & \multicolumn{3}{c|}{Seq01} & \multicolumn{3}{c|}{Seq02} & \multicolumn{3}{c|}{Seq03} \\ \cline{2-10}
\multicolumn{1}{l|}{} & \multicolumn{1}{c|}{MSE} & \multicolumn{1}{c|}{SSIM} & \multicolumn{1}{c|}{PSNR} & \multicolumn{1}{c|}{MSE} & \multicolumn{1}{c|}{SSIM} & \multicolumn{1}{c|}{PSNR} & \multicolumn{1}{c|}{MSE} & \multicolumn{1}{c|}{SSIM} & \multicolumn{1}{c|}{PSNR} \\ \hline
PFNeRF & 51.25 & 0.9269 & 31.16 & 103.41 & 0.8659 & 28.15 & 76.59 & 0.9000 & 29.50 \\
NSFF & 50.13 & 0.9346 & 31.21 & 90.06 & 0.8885 & 28.75 & 83.18 & 0.8936 & 29.10 \\
NRNeRF & 56.78 & 0.9231 & 30.78 & 132.16 & 0.8549 & 27.13 & 79.83 & 0.8987 & 29.33 \\ \hline
MVP & 47.54 & 0.9476 & 31.60 & 77.23 & 0.9088 & 29.62 & 73.78 & 0.9224 & 29.66 \\
Ours & \textbf{41.89} & \textbf{0.9543} & \textbf{32.17} & \textbf{59.84} & \textbf{0.9275} & \textbf{30.69} & \textbf{71.58} & \textbf{0.9314} & \textbf{29.81}
\end{tabular}
} &
\resizebox{\columnwidth}{!}{%
\begin{tabular}{r|ccc|ccc|ccc|}
\multicolumn{1}{l|}{\multirow{2}{*}{}} & \multicolumn{3}{c|}{Seq01} & \multicolumn{3}{c|}{Seq02} & \multicolumn{3}{c|}{Seq03} \\ \cline{2-10}
\multicolumn{1}{l|}{} & \multicolumn{1}{c|}{MSE} & \multicolumn{1}{c|}{SSIM} & PSNR & \multicolumn{1}{c|}{MSE} & \multicolumn{1}{c|}{SSIM} & PSNR & \multicolumn{1}{c|}{MSE} & \multicolumn{1}{c|}{SSIM} & PSNR \\ \hline
MVP & 47.54 & 0.9476 & 31.60 & 77.23 & 0.9088 & 29.62 & 73.78 & 0.9224 & 29.66 \\
MVP w/ $\mathcal{L}_{flow}$ & 46.49 & 0.9473 & 31.69 & 71.07 & 0.9107 & 29.93 & 75.13 & 0.9240 & 29.58 \\
Ours w/o $\mathcal{L}_{flow}$ & 43.82 & 0.9508 & 31.99 & 65.98 & 0.9186 & 30.27 & 69.97 & 0.9359 & 29.93 \\
Ours & \textbf{41.89} & \textbf{0.9543} & \textbf{32.17} & \textbf{59.84} & \textbf{0.9275} & \textbf{30.69} & \textbf{71.58} & \textbf{0.9314} & \textbf{29.81} \\ \hline
MVP & 75.68 & 0.9200 & 29.49 & 85.10 & 0.9039 & 29.62 & 83.76 & 0.9086 & 29.16 \\
MVP w/ $\mathcal{L}_{flow}$ & 67.86 & 0.9276 & 30.00 & 83.11 & 0.9037 & 29.93 & 80.96 & 0.9086 & 29.16 \\
Ours w/o $\mathcal{L}_{flow}$ & 71.90 & 0.9223 & 29.74 & 72.74 & 0.9137 & 30.27 & 78.34 & 0.9198 & 29.44 \\
Ours & \textbf{65.96} & \textbf{0.9280} & \textbf{30.09} & \textbf{67.75} & \textbf{0.9208} & \textbf{30.69} & \textbf{75.66} & \textbf{0.9222} & \textbf{29.57}
\end{tabular}
}
\end{tabular}
\caption{\label{tab:nvs_train}\textbf{Novel view synthesis.} On the left, we compare our method with NeRF-based methods, namely NSFF~\cite{li2021nsff}, NRNeRF~\cite{tretschk2021nrnerf} and a per-frame NeRF (PFNeRF) baseline, as well as the volumetric method MVP~\cite{steve_mvp}. On the right, we further compare our method and its variants with MVP on novel views of both seen (top) and unseen (bottom) sequences.
}
\end{table*}
We show both qualitative and quantitative comparisons with other methods~\cite{steve_mvp, tretschk2021nrnerf, li2021nsff} on the novel view synthesis task. On the left of Tab.~\ref{tab:nvs_train}, we show the mean squared error (MSE), SSIM and PSNR between predicted and ground truth images from the novel views of the training sequences. Qualitative results are shown in Fig.~\ref{fig:nvs_compare}. Our method has smaller image prediction errors and generates sharper results, especially in the hair regions.
\begin{figure}[tb]
\setlength\tabcolsep{0pt}
\renewcommand{\arraystretch}{0}
\centering
\begin{tabular}{ccccc}
\textbf{\scriptsize NSFF~\cite{li2021nsff}} &
\textbf{\scriptsize NRNeRF~\cite{tretschk2021nrnerf}} &
\textbf{\scriptsize MVP~\cite{steve_mvp}} &
\textbf{\scriptsize Ours} &
\textbf{\scriptsize Ground Truth} \\
\adjincludegraphics[width=0.1\textwidth, trim={0 {0.12\height} 0 0}, clip]{figs/nvs_compare/seq01_57/000057_nsff_comp_307_578_464_735_nw.png} &
\adjincludegraphics[width=0.1\textwidth, trim={0 {0.12\height} 0 0}, clip]{figs/nvs_compare/seq01_57/000057_nrnerf_comp_307_578_464_735_nw.png} &
\adjincludegraphics[width=0.1\textwidth, trim={0 {0.12\height} 0 0}, clip]{figs/nvs_compare/seq01_57/000057_mvp_comp_307_578_464_735_nw.png} &
\adjincludegraphics[width=0.1\textwidth, trim={0 {0.12\height} 0 0}, clip]{figs/nvs_compare/seq01_57/000057_ours_comp_307_578_464_735_nw.png} &
\adjincludegraphics[width=0.1\textwidth, trim={0 {0.12\height} 0 0}, clip]{figs/nvs_compare/seq01_57/000057_gt_comp_307_578_464_735_nw.png} \\
\adjincludegraphics[width=0.1\textwidth, trim={0 {0.12\height} 0 0}, clip]{figs/nvs_compare/seq02_182/000182_nsff_comp_302_491_545_724_nw.png} &
\adjincludegraphics[width=0.1\textwidth, trim={0 {0.12\height} 0 0}, clip]{figs/nvs_compare/seq02_182/000182_nrnerf2_comp_302_491_545_724_nw.png} &
\adjincludegraphics[width=0.1\textwidth, trim={0 {0.12\height} 0 0}, clip]{figs/nvs_compare/seq02_182/000182_mvp_comp_302_491_545_724_nw.png} &
\adjincludegraphics[width=0.1\textwidth, trim={0 {0.12\height} 0 0}, clip]{figs/nvs_compare/seq02_182/000182_ours_comp_302_491_545_724_nw.png} &
\adjincludegraphics[width=0.1\textwidth, trim={0 {0.12\height} 0 0}, clip]{figs/nvs_compare/seq02_182/000182_gt_comp_302_491_545_724_nw.png} \\
\adjincludegraphics[width=0.1\textwidth, trim={0 {0.12\height} 0 0}, clip]{figs/nvs_compare/seq03_220/000220_nsff_comp_182_593_443_880_nw.png} &
\adjincludegraphics[width=0.1\textwidth, trim={0 {0.12\height} 0 0}, clip]{figs/nvs_compare/seq03_220/000220_nrnerf_comp_182_593_443_880_nw.png} &
\adjincludegraphics[width=0.1\textwidth, trim={0 {0.12\height} 0 0}, clip]{figs/nvs_compare/seq03_220/000220_mvp_comp_182_593_443_880_nw.png} &
\adjincludegraphics[width=0.1\textwidth, trim={0 {0.12\height} 0 0}, clip]{figs/nvs_compare/seq03_220/000220_ours_comp_182_593_443_880_nw.png} &
\adjincludegraphics[width=0.1\textwidth, trim={0 {0.12\height} 0 0}, clip]{figs/nvs_compare/seq03_220/000220_gt_comp_182_593_443_880_nw.png}
\end{tabular}
\caption{\label{fig:nvs_compare}\textbf{Comparison of novel view synthesis between different methods.} Please see the supplementary material for a larger version of this figure.}
\end{figure}
\subsection{Ablation Studies}
\noindent\textbf{Temporal consistency.} To test the effects of temporal consistency and the tracked guide hair, we also conduct novel view synthesis on the test portion of our captured sequences. Note that our model is not trained on any part of the test sequence data. On the right of Tab.~\ref{tab:nvs_train}, we report MSE, SSIM and PSNR on novel views of both seen and unseen sequences. As we can see, tracking the coarse level guide hair strands improves rendering quality even without flow supervision. With flow supervision, the results improve further. This improvement comes from the tracking information helping the volumetric primitives localize the hair region more consistently. While the improvement on seen motions is relatively small, both our model and MVP are notably improved on unseen sequences with novel hair motion when flow supervision is added. Rendering results on unseen sequences are shown in Fig.~\ref{fig:nvs_abl_mini}. In Fig.~\ref{fig:flow_vol_abl}, we visualize the volumetric primitives of the hair for our model with and without flow supervision. Including flow supervision produces notably better disentanglement between the hair and the shoulder.
\begin{figure}[tb]
\setlength\tabcolsep{0pt}
\renewcommand{\arraystretch}{0}
\centering
\begin{tabular}{ccccc}
\textbf{\scriptsize MVP} &
\textbf{\scriptsize MVP w/ flow} &
\textbf{\scriptsize Ours w/o flow} &
\textbf{\scriptsize Ours} &
\textbf{\scriptsize Ground Truth} \\
\adjincludegraphics[width=0.095\textwidth]{figs/nvs_abl/seq01_test_mini/000032_mvp_bbox_165_512_389_753.png} &
\adjincludegraphics[width=0.095\textwidth]{figs/nvs_abl/seq01_test_mini/000032_mvp_w_bbox_165_512_389_753.png} &
\adjincludegraphics[width=0.095\textwidth]{figs/nvs_abl/seq01_test_mini/000032_ours_wo_bbox_165_512_389_753.png} &
\adjincludegraphics[width=0.095\textwidth]{figs/nvs_abl/seq01_test_mini/000032_ours_bbox_165_512_389_753.png} &
\adjincludegraphics[width=0.095\textwidth]{figs/nvs_abl/seq01_test_mini/000032_gt_bbox_165_512_389_753.png} \\
\adjincludegraphics[width=0.095\textwidth]{figs/nvs_abl/seq02_test_mini/000086_mvp_bbox_247_580_471_821.png} &
\adjincludegraphics[width=0.095\textwidth]{figs/nvs_abl/seq02_test_mini/000086_mvp_w_bbox_247_580_471_821.png} &
\adjincludegraphics[width=0.095\textwidth]{figs/nvs_abl/seq02_test_mini/000086_ours_wo_bbox_247_580_471_821.png} &
\adjincludegraphics[width=0.095\textwidth]{figs/nvs_abl/seq02_test_mini/000086_ours_bbox_247_580_471_821.png} &
\adjincludegraphics[width=0.095\textwidth]{figs/nvs_abl/seq02_test_mini/000086_gt_bbox_247_580_471_821.png} \\
\adjincludegraphics[width=0.095\textwidth]{figs/nvs_abl/seq03_test_mini/000030_mvp_bbox_178_636_402_877.png} &
\adjincludegraphics[width=0.095\textwidth]{figs/nvs_abl/seq03_test_mini/000030_mvp_w_bbox_178_636_402_877.png} &
\adjincludegraphics[width=0.095\textwidth]{figs/nvs_abl/seq03_test_mini/000030_ours_wo_bbox_178_636_402_877.png} &
\adjincludegraphics[width=0.095\textwidth]{figs/nvs_abl/seq03_test_mini/000030_ours_bbox_178_636_402_877.png} &
\adjincludegraphics[width=0.095\textwidth]{figs/nvs_abl/seq03_test_mini/000030_gt_bbox_178_636_402_877.png}
\end{tabular}
\vspace*{0.2cm}
\caption{\label{fig:nvs_abl_mini}\textbf{Ablation of temporal consistency.} We compare our method and MVP w/ and w/o flow supervision. With flow supervision, better temporal consistency and generalization to unseen sequences can be observed. Please see the supplementary material for a larger version of this figure.}
\end{figure}
\begin{figure}[tb]
\setlength\tabcolsep{0pt}
\renewcommand{\arraystretch}{0}
\centering
\begin{tabular}{ccc}
\textbf{\small w/o flow sup.} &
\textbf{\small w/ flow sup.} &
\textbf{\small Ground truth} \\
\adjincludegraphics[width=0.15\textwidth, trim={{0.15\width} {0.12\height} 0 {0.1\height}}, clip]{figs/flow_vol_abl/000000_seq02_noflow.png} &
\adjincludegraphics[width=0.15\textwidth, trim={{0.15\width} {0.12\height} 0 {0.1\height}}, clip]{figs/flow_vol_abl/000000_seq02_flow.png} &
\adjincludegraphics[width=0.15\textwidth, trim={{0.15\width} {0.12\height} 0 {0.1\height}}, clip]{figs/flow_vol_abl/000000_seq02_gt.png}
\end{tabular}
\caption{\label{fig:flow_vol_abl}\textbf{Ablation on flow supervision.} We further compare the volumetric primitives of the models trained w/ and w/o flow supervision. The model with additional flow supervision yields a consistent and plausible hair shape and better hair--shoulder disentanglement.}
\end{figure}
\noindent\textbf{Hair tracking analysis.} We first study the impact of the different objectives $\mathcal{L}_{len}+\mathcal{L}_{tang}$ and $\mathcal{L}_{cur}$ on hair tracking.
As shown in Fig.~\ref{fig:track_loss_abl}, when both $\mathcal{L}_{cur}$ and $\mathcal{L}_{len}+\mathcal{L}_{tang}$ are applied, the tracking results are smoother and free of kinks.
We observe that, when using $\mathcal{L}_{len}+\mathcal{L}_{tang}$ as the only regularization term, the length of each hair strand segment is already preserved, but kinks can appear because the correct strand curvature is not enforced.
$\mathcal{L}_{cur}$ by itself does not help and even exaggerates the error when the hair strand length is incorrect, but yields smooth results when combined with $\mathcal{L}_{len}+\mathcal{L}_{tang}$. This is because the curvature computation is agnostic to the absolute length of the hair and only controls the relative length ratio.
\begin{figure}[tb]
\setlength\tabcolsep{0pt}
\renewcommand{\arraystretch}{0}
\centering
\begin{tabular}{cccc}
\textbf{\scriptsize \begin{tabular}[c]{@{}c@{}}w/o $\mathcal{L}_{cur}$\\ $\mathcal{L}_{len}+\mathcal{L}_{tang}$\end{tabular}} &
\textbf{\scriptsize w/o $\mathcal{L}_{len}+\mathcal{L}_{tang}$} &
\textbf{\scriptsize w/o $\mathcal{L}_{cur}$} &
\textbf{\scriptsize \begin{tabular}[c]{@{}c@{}}w/ $\mathcal{L}_{cur}$\\ $\mathcal{L}_{len}+\mathcal{L}_{tang}$\end{tabular}} \\
\adjincludegraphics[width=0.12\textwidth, valign=m, trim={{0.1\width} {0.2\height} {0.1\width} {0.05\height}}, clip]{figs/track_loss_abl/wo_len_cur.png} &
\adjincludegraphics[width=0.12\textwidth, valign=m, trim={{0.1\width} {0.2\height} {0.1\width} {0.05\height}}, clip]{figs/track_loss_abl/wo_len.png} &
\adjincludegraphics[width=0.12\textwidth, valign=m, trim={{0.1\width} {0.2\height} {0.1\width} {0.05\height}}, clip]{figs/track_loss_abl/wo_cur.png} &
\adjincludegraphics[width=0.12\textwidth, valign=m, trim={{0.1\width} {0.2\height} {0.1\width} {0.05\height}}, clip]{figs/track_loss_abl/full.png}
\end{tabular}
\caption{\label{fig:track_loss_abl}\textbf{Effects of $\mathcal{L}_{len}+\mathcal{L}_{tang}$ and $\mathcal{L}_{cur}$.} We show how the shape and curvature of tracked hair strands are preserved with both $\mathcal{L}_{len}+\mathcal{L}_{tang}$ and $\mathcal{L}_{cur}$.
}
\end{figure}
We show the impact of different initializations for hair tracking in Fig.~\ref{fig:track_abl_ims}. When no momentum information from previous frames is used, some strands drift noticeably, while the drifting is less severe when we take advantage of the motion information from previous frames.
\begin{figure}[tb]
\setlength\tabcolsep{0pt}
\renewcommand{\arraystretch}{0}
\centering
\begin{tabular}{cccccc}
&
\textbf{\scriptsize frame 118} &
\textbf{\scriptsize frame 119} &
\textbf{\scriptsize frame 120} &
\textbf{\scriptsize frame 121} &
\textbf{\scriptsize frame 122} \\
\parbox{0.04\textwidth}{no \\ mm.} &
\adjincludegraphics[width=0.088\textwidth, valign=m]{figs/track_abl_ims_v2/000118__bbox_346_984_815_1470.png} &
\adjincludegraphics[width=0.088\textwidth, valign=m]{figs/track_abl_ims_v2/000119__bbox_346_984_815_1470.png} &
\adjincludegraphics[width=0.088\textwidth, valign=m]{figs/track_abl_ims_v2/000120__bbox_346_984_815_1470.png} &
\adjincludegraphics[width=0.088\textwidth, valign=m]{figs/track_abl_ims_v2/000121__bbox_346_984_815_1470.png} &
\adjincludegraphics[width=0.088\textwidth, valign=m]{figs/track_abl_ims_v2/000122__bbox_346_984_815_1470.png} \\
\parbox{0.04\textwidth}{1st ord. \\ mm.} &
\adjincludegraphics[width=0.088\textwidth, valign=m]{figs/track_abl_ims_v2/000118__im_bbox_346_984_815_1470.png} &
\adjincludegraphics[width=0.088\textwidth, valign=m]{figs/track_abl_ims_v2/000119__im_bbox_346_984_815_1470.png} &
\adjincludegraphics[width=0.088\textwidth, valign=m]{figs/track_abl_ims_v2/000120__im_bbox_346_984_815_1470.png} &
\adjincludegraphics[width=0.088\textwidth, valign=m]{figs/track_abl_ims_v2/000121__im_bbox_346_984_815_1470.png} &
\adjincludegraphics[width=0.088\textwidth, valign=m]{figs/track_abl_ims_v2/000122__im_bbox_346_984_815_1470.png} \\
\parbox{0.04\textwidth}{2nd ord. \\ mm.} &
\adjincludegraphics[width=0.088\textwidth, valign=m]{figs/track_abl_ims_v2/000118__ims_bbox_346_984_815_1470.png} &
\adjincludegraphics[width=0.088\textwidth, valign=m]{figs/track_abl_ims_v2/000119__ims_bbox_346_984_815_1470.png} &
\adjincludegraphics[width=0.088\textwidth, valign=m]{figs/track_abl_ims_v2/000120__ims_bbox_346_984_815_1470.png} &
\adjincludegraphics[width=0.088\textwidth, valign=m]{figs/track_abl_ims_v2/000121__ims_bbox_346_984_815_1470.png} &
\adjincludegraphics[width=0.088\textwidth, valign=m]{figs/track_abl_ims_v2/000122__ims_bbox_346_984_815_1470.png} \\
\end{tabular}
\caption{\label{fig:track_abl_ims}\textbf{Ablation of different initializations in hair tracking.} We show tracking results of our method with different initializations. From top to bottom: no momentum information, first order and second order momentum information for tracking initialization. Please note the brown and orange strands. The hairs are better tracked when we utilize the dynamic information from previous frames. Best viewed in color.}
\end{figure}
\section{Applications and Limitations}
One major application enabled by our neural volumetric scene representation is novel view synthesis, as shown in Sec.~\ref{sec:nvs}. Our neural volumetric representation is also animatable with sparse driving signals like guide hair strands. Since we explicitly model hair in the form of guide strands, our method allows modifying the guide hairs directly.
In Fig.~\ref{fig:edit_hair}, we show four snapshots of different configurations of hair positions. Please see more results and details in the supplementary material.
\begin{figure}[tb]
\setlength\tabcolsep{0pt}
\renewcommand{\arraystretch}{0}
\centering
\begin{tabular}{cccc}
\adjincludegraphics[width=0.12\textwidth, valign=m, trim={{0.4\width} {0.2\height} {0.2\width} 0}, clip]{figs/edit_hair/000000_seq02_2.png} &
\adjincludegraphics[width=0.12\textwidth, valign=m, trim={{0.4\width} {0.2\height} {0.2\width} 0}, clip]{figs/edit_hair/000002_seq02_2.png} &
\adjincludegraphics[width=0.12\textwidth, valign=m, trim={{0.3\width} {0.2\height} {0.3\width} 0}, clip]{figs/edit_hair/000000_seq01.png} &
\adjincludegraphics[width=0.12\textwidth, valign=m, trim={{0.3\width} {0.2\height} {0.3\width} 0}, clip]{figs/edit_hair/000030_seq01.png}
\end{tabular}
\caption{\label{fig:edit_hair}\textbf{Hair position editing.} We create a new animation by directly editing the guide hair strands. As we can see, the hair volumes are driven by the lifted guide hairs to create a new hair motion. Please see the supplementary material for video results.}
\end{figure}
There are several limitations of our work which we plan to address in the future:
1) Our method requires help from an artist to prepare the guide hairs for the first frame, and some flyaway hairs might be excluded.
2) We currently do not consider physics based interactions between hair and other objects like the shoulder or the chair.
3) Although we achieve a certain level of disentanglement between hair and other objects without any human labeling, it is still not perfect.
We only showed results on blonde hair, which is easily distinguished from a dark background; other hair colors and styles might be more challenging for our method.
Future directions like incorporating a physics aware module or leveraging additional semantic supervision for disentanglement could be interesting.
\section{Introduction}
Although notable progress has been made towards the realism of human avatars, cephalic hair is still one of the hardest parts of the human body to capture and render: a head of hair usually consists of more than a hundred thousand strands with complex physical interactions among them and complex interactions with light, all of which are extraordinarily hard to model. At the same time, hair is an important part of our appearance and identity: hair styles can convey everything from religious beliefs to mood or activity. Hence, hair is critically important to make virtual avatars believable and universally usable.
Previous work on mesh based representations~\cite{steve_meshvae, tewari2017mofa, tran2018nonlinear3dmm, li2017flame, xiang2020monoclothcap, saito2021scanimate, bagautdinov2021driving} has shown promising results on modeling the face and skin. However, these methods struggle with hair because meshes are not well suited for representing hair geometry. Recent volumetric representations~\cite{steve_nvs, mildenhall2020nerf} have a high DoF, which allows modeling changing geometric structure. They have achieved impressive results in 3D scene acquisition and rendering from multi-view photometric information. Compared to other geometric representations like multi-plane images~\cite{szeliski1998stereo, zhou2018mpi, mildenhall2019localmpi, attal2020matryodshka, broxton2020immersive} or point-based representations~\cite{aliev2020npr, wiles2020synsin, meshry2019neural, Lassner_pulsar, ruckert2021adop}, volumetric representations support a larger range of camera motion for view extrapolation and, unlike point-based representations, do not suffer from holes when rendering dynamic geometry. Furthermore, they can be learned from multi-view RGB data using differentiable volumetric ray marching, without additional MVS methods.
However, one major flaw of volumetric representations is their cubic memory complexity.
This problem is particularly significant for hair, where high resolution is a requirement.
NeRF~\cite{mildenhall2020nerf} circumvents the $O(n^3)$ memory complexity problem by parameterizing a volumetric radiance field using an MLP. Given the implicit form, the MLP-based implicit function is not limited by spatial resolution.
A hierarchical structure with coarse and fine level radiance functions is used, and importance resampling based on the coarse level radiance field boosts the sample resolution. Although promising empirical results have been shown, they come at the expense of high rendering time, and the quality is still limited by the coarse level sampling resolution. Another limitation of NeRFs is that they were initially designed for static scenes. Recent work~\cite{tretschk2021nrnerf, park2021nerfies, li2021nsff, li2021neural, xian2021space, pumarola2021dnerf, Wang_2021_nvnerf, park2021hypernerf, yuan2021star} extends the original NeRF concept to modeling dynamic scenes. However, these methods are still limited to relatively small motions, do not support drivable animation, or are not efficient for rendering.
We present a hybrid representation: by using many volumetric primitives, we focus the resolution of the model onto the relevant regions of 3D space. For each of the volumes, we construct a neural representation that captures the local appearance of the hair in great detail, similar to~\cite{liu2020nsvf, Wang_2021_nvnerf, kilonerf, steve_mvp}. However, without explicitly modeling the dynamics and structure of hair, it would be hard for the model to learn these properties solely through the indirect supervision of multi-view appearance.
Given that the model learns to position primitives in an unsupervised manner, the model is also prone to overfitting as a result of not incorporating any temporal consistency during training. We address the problem of spatio-temporal modeling of dynamic upper head and hair by explicitly modeling hair dynamics at the coarse level and by enforcing temporal consistency of the model by multi-view optical flow at the fine level.
Procedurally, we first perform hair strand tracking at a coarse level by lifting multi-view optical flow to a 3D scene flow. To constrain the hair geometry and reduce the impact of the noise in multi-view optical flow, we also make sure the tracked hair strands preserve geometric properties like shape, length and curvature across time. As a second step, we attach volumes to hair strands to model the dynamic scene which can be optimized using differentiable volumetric raymarching. The volumes that are attached to the hair strands are regressed using a decoder that takes per-hair-strand features and a global latent code as input and is aware of the hair specific structure. Additionally, we further enforce fine 3D flow consistency by rendering the 3D scene flow of our model into 2D and compare it with the corresponding ground truth optical flow. This step is essential for making the model generalize better to unseen motions.
To summarize, the contributions of this work are
\begin{itemize}
\item A hybrid neural volumetric representation that binds volumes to guide hair strands for hair performance capture.
\item A hair tracking algorithm that utilizes multiview optical flow and per-frame hair strand reconstruction while preserving specific geometric properties like hair strand length and curvature.
\item A volumetric ray marching algorithm on 3D scene flow which enables optimization of the position and orientation of each volumetric primitive through multiview 2D optical flow.
\item A hair specific volumetric decoder that regresses hair volumes with awareness of the hair structure.
\end{itemize}
\section{Method}
\begin{figure*}[tb]
\centering
\includegraphics[width=1.0\textwidth]{figs/teaser_v1_trim.pdf}
\caption{\label{fig:teaser}\textbf{Pipeline.} Our method consists of two stages: in the first stage, we perform guide hair tracking with multiview optical flow as well as per-frame hair reconstruction. In the second stage, we further amplify the sparse guide hair strands by attaching volumetric neural rendering primitives and optimizing them by using the multiview RGB and optical flow data.
}
\end{figure*}
In this section, we introduce our hybrid neural volumetric representation for hair performance capture.
Our representation combines both the drivability of guide hair strands and the completeness of volumetric primitives. Additionally, the guide hair strands serve as an efficient coarse level geometry for the volumetric primitives to attach to, avoiding unnecessary computational expense on empty space. As a result of guide hair strand tracking as well as dense 3D scene flow refinement, our model is temporally consistent and generalizes better to unseen motions.
As illustrated in Fig.~\ref{fig:teaser}, the whole pipeline contains two major steps which we will explain separately.
In the first step, we perform strand-level tracking that leverages multi-view optical flow information and propagates information about a subset of tracked hair strands into future frames. To save computation time, we track only guide hairs instead of tracking all hair strands. This is a widely used technique in hair animation and simulation~\cite{iben2013artistic, petrovic2005volumetric, chai2016adaptive}, which leads to a significant boost in run time performance.
However, tracking the guide hairs alone is not enough to model the hair motion and appearance or to animate all the hairs, due to the sparseness of the guide hairs. To circumvent this, we combine them with a volumetric representation by attaching volumetric primitives to the nodes of the guide hairs. This hybrid representation localizes the hair explicitly and fully covers all hairs, combining the benefits of both representations. Another advantage is that the introduction of volumes allows optimizing hair shape and appearance with dense multi-view photometric information via differentiable volumetric ray marching.
In the second step, we use the attached volumetric primitives to model the hairs that are surrounding the guide hair strands to achieve dense hair appearance, shape and motion acquisition.
A hair specific volume decoder is designed to regress those volumes, conditioned on both a global latent vector and per-strand hair feature vectors, with awareness of the hair structure.
Additionally, we develop a volumetric raymarching algorithm for 3D scene flow that facilitates the learning from multi-view 2D optical flow. We show in the experiments that the introduction of additional optical flow supervision yields better temporal consistency and generalization of the model.
\subsection{Guide Hair Tracking}
We frame the guide hair tracking process as an optimization problem.
Given the guide hair strands and the multi-view optical flow at the current frame $t$, we unproject and fuse the optical flow from the different camera poses into a 3D flow and use it to infer the position of the guide hairs at the next frame $t+1$.
\noindent\textbf{Data Setup and Notation.} In our setting, we perform hair tracking using multi-view video data. We use a multi-camera system with around 100 synchronized color cameras that produces $2048\times1334$ resolution images at 30 Hz. The cameras are focused at the center of the capture system and distributed spherically at a distance of one meter to provide as many viewpoints as possible. Camera intrinsics and extrinsics are calibrated in an offline process. We generate multi-view optical flow between adjacent frames for each camera, using the OpenCV~\cite{bradski2000opencv} implementation of~\cite{kroeger2016disof}. We acquire per-frame hair geometry by running~\cite{nam2019lmvs}.
We parameterize guide hairs as connected point clouds. Given a specific hair strand $\mathbf S^t$ at time frame $t$, we denote the Euclidean coordinate of the $n$th node on hair strand $\mathbf S^t$ as $\mathbf S^t_n$. Similarly, the future position of $\mathbf{S}^t_n$ at time frame $t+1$ is $\mathbf{S}^{t+1}_n$. Next, we introduce the notation for multi-view camera related information. We denote by $\Pi_i(\cdot)$ the camera transformation of camera $i$, which projects a 3D point into 2D image coordinates. We denote by $\mathbf I_{of,i}$ and $\mathbf I_{d,i}$ the 2D optical flow and depth maps of camera $i$, respectively. We denote by $\mathbf H^{t}_n$ the reconstructed hair point cloud with directions from \cite{nam2019lmvs, sun2021hairinverse}. Unless otherwise stated, all bold lower case symbols denote vectors.
\noindent\textbf{Tracking Objectives.} Given camera $i$, we can project a 3D point into 2D to retrieve its 2D image index. The camera projection is defined as
\begin{align*}
\mathbf{\hat{p}}^t_{s, i} = \begin{bmatrix} \mathbf{p}^t_{s,i} \\ \mathbf{1} \end{bmatrix} = \Pi_i(\mathbf{S}^t_n),
\end{align*}
\noindent where $\mathbf{\hat{p}}^t_{s, i}$ is the homogeneous coordinate of $\mathbf{p}^t_{s,i}$. Given the camera projection formulation, we formulate the first data-term objective based on optical flow as follows:
\begin{align*}
\mathcal{L}_{of} &= \sum_{n,i} \mathbf{\omega}_{n,i} ||\mathbf{S}^{t+1}_n - \mathbf{Z}_i(\mathbf{S}^{t+1}_n)\Pi_i^{-1}(\mathbf{p}^t_{s,i}+\delta_{\mathbf{p}}) ||^2_2, \\
\mathbf\omega_{n, i} &= \exp(-\sigma||\mathbf{Z}_i(\mathbf{S}^{t}_n) - \mathbf{I}_{d,i}(\mathbf{p}^t_{s,i})||^2_2), \\
\delta_{\mathbf{p}} &= \mathbf{I}_{of, i}(\mathbf{p}^t_{s,i}),
\end{align*}
\noindent where $\mathbf{Z}_i(\cdot)$ denotes the function that returns the depth of a point under camera $i$, and $\omega_{n,i}$ serves as a weighting factor for view selection, where a smaller value means a larger mismatch between projected depth and observed depth under the $i$th camera pose. We use $\sigma=0.01$.
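To illustrate this data term, a single-camera NumPy sketch of $\mathcal{L}_{of}$ is given below; variable shapes and helper names are illustrative assumptions, and the sum over cameras is omitted.
\begin{verbatim}
import numpy as np

def flow_data_term(S_next, S_cur, K, R, t, depth, flow, sigma=0.01):
    # Single-camera sketch of L_of; K, R, t map world to camera.
    cam = (R @ S_cur.T).T + t
    z = cam[:, 2]
    proj = (K @ cam.T).T
    uv = proj[:, :2] / proj[:, 2:3]
    px = np.round(uv).astype(int)
    u = px[:, 0].clip(0, flow.shape[1] - 1)
    v = px[:, 1].clip(0, flow.shape[0] - 1)
    w = np.exp(-sigma * (z - depth[v, u]) ** 2)  # view-selection weight
    uv_tgt = uv + flow[v, u]                     # flow-advected pixel
    z_next = ((R @ S_next.T).T + t)[:, 2]        # next-frame node depth
    pix = np.concatenate([uv_tgt, np.ones((len(uv), 1))], axis=1)
    cam_tgt = (np.linalg.inv(K) @ pix.T).T * z_next[:, None]
    world_tgt = (cam_tgt - t) @ R                # R^T x  ==  x @ R
    return np.sum(w * np.sum((S_next - world_tgt) ** 2, axis=1))

K = np.array([[1000., 0., 512.], [0., 1000., 334.], [0., 0., 1.]])
S = np.random.rand(8, 3) * 0.1
print(flow_data_term(S, S, K, np.eye(3), np.array([0., 0., 2.]),
                     np.full((667, 1024), 2.0),
                     np.zeros((667, 1024, 2))))
\end{verbatim}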
In parallel with the optical flow data term, we add another data term to facilitate geometry preserving tracking, which penalizes the Chamfer distance between the tracked guide hair strands and the per-frame hair reconstruction from \cite{nam2019lmvs}. This loss is designed to make sure that the guide hair geometry does not deviate too much from the true hair geometry. Unlike the conventional Chamfer loss, we also penalize the cosine distance between the direction of $\mathbf S^t_n$ and the directions of its closest $k=10$ neighbors $\mathcal{H}(\mathbf S^{t+1}_n) \subsetneq \{\mathbf H^{t+1}_n\}$; the losses are defined as:
\begin{align*}
\mathcal{L}_{hdir} &= \sum_{n, \mathbf h\in \mathcal{H}(\mathbf S^{t+1}_n)} \omega_{n,\mathbf h}^d (1-|\cos(\mathbf{dir}(\mathbf S^{t+1}_n), \mathbf{dir}(\mathbf h))|), \\
\mathcal{L}_{hpos} &= \sum_{n, \mathbf h\in \mathcal{H}(\mathbf S^{t+1}_n)} \omega_{n,\mathbf h}^r ||\mathbf S^{t+1}_n-\mathbf h||^2_2,
\end{align*}
where $\omega_{n,\mathbf h}^d=\exp(-\sigma||\mathbf S^{t+1}_n-\mathbf h||^2_2)$ is a spatial weighting, $\cos(\cdot, \cdot)$ is the cosine similarity between two vectors and
$\mathbf{dir}(\mathbf S^{t+1}_n)=\mathbf S^{t+1}_{n+1}-\mathbf S^{t+1}_n$ is a first order approximation of the hair direction at $\mathbf S^{t+1}_n$. $\omega_{n,\mathbf{h}}^r=\cos(\mathbf{dir}(\mathbf S^{t+1}_n), \mathbf{dir}(\mathbf{h}))$ is a weighting factor that describes the direction similarity between $\mathbf S^{t+1}_n$ and $\mathbf h$. With $\mathcal{L}_{hdir}$, we can groom the guide hairs $\mathbf S^{t+1}_n$ to have directions similar to their closest $k=10$ neighbors in $\mathcal{H}(\mathbf S^{t+1}_n)$, resulting in a more consistent guide hair direction distribution. Meanwhile, $\mathcal{L}_{hpos}$ ensures that the tracked guide hairs do not deviate too much from the reconstructed hair shapes.
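A NumPy sketch of $\mathcal{L}_{hdir}$ and $\mathcal{L}_{hpos}$ for a single strand follows; for simplicity, this sketch uses the unsigned cosine for both the weighting and the direction penalty, and plain loops instead of a vectorized nearest-neighbor query.
\begin{verbatim}
import numpy as np

def strand_dirs(S):
    return S[1:] - S[:-1]        # dir(S_n) = S_{n+1} - S_n

def hdir_hpos(S_next, H_pts, H_dirs, k=10, sigma=0.01):
    d = strand_dirs(S_next)      # (N-1, 3)
    P = S_next[:-1]              # nodes that have a direction
    dist = np.linalg.norm(P[:, None] - H_pts[None], axis=-1)
    nn = np.argsort(dist, axis=1)[:, :k]
    L_hdir = L_hpos = 0.0
    for n in range(len(P)):
        for j in nn[n]:
            diff2 = np.sum((P[n] - H_pts[j]) ** 2)
            c = abs(d[n] @ H_dirs[j]) / (
                np.linalg.norm(d[n]) * np.linalg.norm(H_dirs[j]) + 1e-8)
            L_hdir += np.exp(-sigma * diff2) * (1.0 - c)
            L_hpos += c * diff2
    return L_hdir, L_hpos

S = np.cumsum(0.05 * np.random.rand(12, 3), axis=0)
H = S[:-1] + 0.01 * np.random.randn(11, 3)
print(hdir_hpos(S, H, strand_dirs(S)))
\end{verbatim}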
However, with just the data-term loss, the tracked guide hairs might overfit to noise in the data terms. To prevent this, we further introduce several model-term objectives for hair shape regularization.
\begin{align*}
\mathcal{L}_{len} =& \sum_n (||\mathbf{dir}(\mathbf S^{t+1}_n)||_2-||\mathbf{dir}(\mathbf S^{0}_n)||_2)^2, \\
\mathcal{L}_{tang} =& \sum_n ((\mathbf S^{t+1}_{n+1} - \mathbf S^{t+1}_{n} - \mathbf S^{t}_{n+1} + \mathbf S^{t}_{n}) \cdot \mathbf{dir}(S^{t}_n))^2 + \\
&((\mathbf S^{t}_{n+1} - \mathbf S^{t}_{n} - \mathbf S^{t+1}_{n+1} + \mathbf S^{t+1}_{n}) \cdot \mathbf{dir}(S^{t+1}_n))^2, \\
\mathcal{L}_{cur} =& \sum_{n} (\mathbf{cur}(\mathbf S^{t+1}_{n}) - \mathbf{cur}(\mathbf S^{0}_{n}))^2,
\end{align*}
\noindent where $\mathbf{cur}(\mathbf S^{t}_{n})$ is a numerical approximation of curvature at point $\mathbf S^{t}_{n}$ and is defined as:
\begin{align*}
\sqrt{\frac{24(||\mathbf{dir}(\mathbf S^{t}_{n})||_2 + ||\mathbf{dir}(\mathbf S^{t}_{n+1})||_2 - ||\mathbf S^{t}_{n}-\mathbf S^{t}_{n+2}||_2)}{||\mathbf S^{t}_{n}-\mathbf S^{t}_{n+2}||_2^3}}.
\end{align*}
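These regularizers translate directly into code; a NumPy sketch for one strand follows, where the clipping that guards the curvature estimate against numerical noise is our own safeguard, not part of the formulation.
\begin{verbatim}
import numpy as np

def seg_len(S):
    return np.linalg.norm(S[1:] - S[:-1], axis=-1)

def curvature(S):
    # three-point curvature estimate at interior nodes
    l1 = np.linalg.norm(S[1:-1] - S[:-2], axis=-1)
    l2 = np.linalg.norm(S[2:] - S[1:-1], axis=-1)
    c = np.linalg.norm(S[2:] - S[:-2], axis=-1)
    return np.sqrt(np.clip(24.0 * (l1 + l2 - c), 0.0, None) / c ** 3)

def reg_losses(S_next, S_cur, S_0):
    d_nxt, d_cur = S_next[1:] - S_next[:-1], S_cur[1:] - S_cur[:-1]
    L_len = np.sum((seg_len(S_next) - seg_len(S_0)) ** 2)
    L_tang = (np.sum(np.einsum('nd,nd->n', d_nxt - d_cur, d_cur) ** 2)
              + np.sum(np.einsum('nd,nd->n', d_cur - d_nxt, d_nxt) ** 2))
    L_cur = np.sum((curvature(S_next) - curvature(S_0)) ** 2)
    return L_len, L_tang, L_cur

S0 = np.cumsum(0.1 * np.ones((10, 3)), axis=0)
St = S0 + 1e-3 * np.random.randn(10, 3)
print(reg_losses(St, S0, S0))
\end{verbatim}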
We optimize all loss terms together to solve $\{ \mathbf S^{t+1}_{n} \}$ given $\{ \mathbf S^{t}_{n} \}$ with:
\begin{align*}
\mathcal{L}_{hair} =& \mathcal{L}_{of} + \omega_{hdir}\mathcal{L}_{hdir} + \omega_{hpos}\mathcal{L}_{hpos} \\
&+ \omega_{len}\mathcal{L}_{len} + \omega_{tang}\mathcal{L}_{tang} + \omega_{cur}\mathcal{L}_{cur}.
\end{align*}
By utilizing momentum information across the temporal axis, we can provide a better initialization of $\mathbf S^{t+1}_n$ given its trajectory, and initialize $\mathbf S^{t+1}_n$ as
\begin{align*}
\mathbf S^{t+1}_n = 3\mathbf S^{t}_n - 3\mathbf S^{t-1}_n + \mathbf S^{t-2}_n.
\end{align*}
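A sketch of this initialization, covering the no-momentum, first order and second order variants compared in our ablation:
\begin{verbatim}
import numpy as np

def init_next(traj, order=2):
    # traj: past frames [..., S_{t-2}, S_{t-1}, S_t], each (N, 3)
    if order == 0 or len(traj) < 2:
        return traj[-1].copy()                     # no momentum
    if order == 1 or len(traj) < 3:
        return 2 * traj[-1] - traj[-2]             # constant velocity
    return 3 * traj[-1] - 3 * traj[-2] + traj[-3]  # constant accel.

traj = [k * np.ones((5, 3)) for k in range(3)]     # positions 0, 1, 2
print(init_next(traj)[0])                          # -> [3. 3. 3.]
\end{verbatim}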
\subsection{HVH}
\noindent\textbf{Background.} Similar to MVP, we define volumetric primitives $\mathcal{V}_{n}=\{ \mathbf t_n, \mathbf R_n, \mathbf s_n, \mathbf V_n \}$, each modeling a local volume of 3D space, where $\mathbf R_n\in SO(3), \mathbf t_n\in \mathbb{R}^3$ describe the volume-to-world transformation, $\mathbf s_n\in \mathbb{R}^3$ are the per-axis scale factors and $\mathbf V_n=[\mathbf V_c, \mathbf V_\alpha]\in \mathbb{R}^{4\times M\times M\times M}$ is a volumetric grid that stores three channel color and opacity information. The volumes are placed on a UV map that is unwrapped from a tracked head mesh and are regressed by a 2D CNN. Using an optimized BVH implementation, we can efficiently determine which volumes each ray intersects. For each ray $\mathbf{r}_p(t) = \mathbf o_p + t\mathbf d_p$, we denote $(t_{min}, t_{max})$ as the start and end point for ray integration. Then, the differentiable aggregation of those volumetric primitives is defined as:
\begin{align*}
\mathcal{I}_p &= \int_{t_{min}}^{t_{max}} \mathbf{V}_c(\mathbf{r}_p(t))\frac{dT(t)}{dt}dt, \\
T(t) &= min(\int_{t_{min}}^{t} \mathbf{V}_\alpha(\mathbf{r}_p(t))dt , 1).
\end{align*}
\noindent We composite the rendered image as $\mathcal{\tilde{I}}_p=\mathcal{I}_p + (1-\mathcal{A}_p)I_{p,bg}$ where $\mathcal{A}_p=T(t_{max})$ and $I_{p,bg}$ is the background image.
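A discretized, single-ray PyTorch sketch of this aggregation follows; the sample count and step size are illustrative.
\begin{verbatim}
import torch

def march_ray(colors, alphas, dt, bg):
    # colors: (T, 3) RGB samples; alphas: (T,) opacity along one ray
    T_acc = torch.clamp(torch.cumsum(alphas * dt, 0), max=1.0)  # T(t)
    dT = torch.diff(T_acc, prepend=torch.zeros(1))              # dT/dt dt
    rgb = (colors * dT[:, None]).sum(0)
    A = T_acc[-1]                                               # alpha
    return rgb + (1.0 - A) * bg                                 # composite

print(march_ray(torch.rand(64, 3), 0.1 * torch.rand(64), 0.05,
                torch.ones(3)))
\end{verbatim}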
\noindent\textbf{Encoder.} The encoder uses the driving signal of a specific point in time and outputs a global latent code $\bm{\mathit{z}}\in \mathbb{R}^{256}$.
We use the tracked guide hairs $\{ \mathbf{S}_{n}^t \}$ and tracked head mesh vertices $\{ \bm{v}^t_m \}$ to define the driving signal. Symmetrically, we learn a decoder in parallel with the encoder in an auto-encoding fashion that regresses the tracked guide hairs $\{ \mathbf{S}_{n}^t \}$ and head mesh vertices $\{ \bm{v}^t_m \}$ from the global latent code $\bm{\mathit{z}}$. The architecture of the encoder is an MLP that regresses the parameters of a normal distribution $\mathcal{N}(\bm{\mu}, \bm{\sigma}), \bm{\mu},\bm{\sigma}\in\mathbb{R}^{256}$. We use the reparameterization trick from~\cite{kingma2013auto} to sample $\bm{\mathit{z}}$ from $\mathcal{N}(\bm{\mu}, \bm{\sigma})$ in a differentiable way.
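A sketch of such an encoder with the reparameterization trick follows; parameterizing $\bm{\sigma}$ through the log-variance and the hidden width of 512 are our own illustrative choices.
\begin{verbatim}
import torch
import torch.nn as nn

class DrivingEncoder(nn.Module):
    # MLP encoder sketch; sigma is parameterized via log-variance
    # (an assumption for this sketch).
    def __init__(self, in_dim, z_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, 512),
                                 nn.LeakyReLU(0.2),
                                 nn.Linear(512, 2 * z_dim))
    def forward(self, x):
        mu, logvar = self.mlp(x).chunk(2, dim=-1)
        eps = torch.randn_like(mu)
        z = mu + torch.exp(0.5 * logvar) * eps  # differentiable sample
        return z, mu, logvar

enc = DrivingEncoder(in_dim=300)
z, mu, logvar = enc(torch.randn(2, 300))
print(z.shape)  # torch.Size([2, 256])
\end{verbatim}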
\noindent\textbf{Hair Volume Decoder.}
Besides the volumes attached to the tracked mesh vertices $\{ \bm{v}^t_m \}$, we define additional hair volumes $\mathcal{V}_{n}^{hair}$ that are associated with the guide hair nodes $\mathbf{S}^t_{n}$. The position $\mathbf t_{n}=\mathbf{\hat{t}}_n + \delta_{\mathbf t_{n}}$, orientation $\mathbf R_{n}=\delta_{\mathbf R_{n}}\cdot\mathbf{\hat{R}}_n$, and scale $\mathbf s_{n}=\mathbf{\hat{s}}_n + \delta_{\mathbf s_{n}}$ of each hair volume are determined by a base hair transformation $(\mathbf{\hat{t}}_n, \mathbf{\hat{R}}_n, \mathbf{\hat{s}}_n)$ and a regressed relative transformation $(\delta_{\mathbf t_{n}}, \delta_{\mathbf R_{n}}, \delta_{\mathbf s_{n}})$. The base translation $\mathbf{\hat{t}}_n$ of each hair node is simply its position $\mathbf S_{n}^t$. The base rotation $\mathbf{\hat{R}}_n$ is derived from the hair tangential direction and the hair-head relative position. We denote by $\tau_n$ the hair tangential direction at position $\mathbf S_{n}^t$ and by $\nu'_n$ the direction pointing from $\mathbf S_{n}^t$ toward the tracked head center. The base rotation is then $\mathbf{\hat{R}}_n=[\tau_n^T; \rho_n^T; \nu_n^T]$, where $\rho_n=\tau_n\times\nu'_n$ and $\nu_n=\rho_n\times\tau_n$.
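The base rotation amounts to building an orthonormal frame from the tangent and the head-center direction via two cross products; a NumPy sketch follows (the explicit normalization of $\rho_n$ is our addition, to guarantee orthonormality when the inputs are not perpendicular).
\begin{verbatim}
import numpy as np

def base_rotation(tau, to_center):
    # tau:       unit tangent tau_n of the strand at node S_n^t
    # to_center: unit direction nu'_n from S_n^t toward the head center
    rho = np.cross(tau, to_center)           # rho_n = tau_n x nu'_n
    rho /= np.linalg.norm(rho)               # normalization (assumed)
    nu = np.cross(rho, tau)                  # nu_n = rho_n x tau_n
    return np.stack([tau, rho, nu], axis=0)  # rows of base rotation
\end{verbatim}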
The geometry of hair cannot simply be described by a surface. We therefore design a 2D CNN that convolves along the hair growing direction and across neighboring strands separately. Specifically, in each layer of the 2D CNN, we separate a $k\times k$ filter into a $k\times 1$ and a $1\times k$ filter and apply them along the two orthogonal directions respectively, similar to~\cite{NEURIPS2019_bbf94b34}; a sketch of one such layer follows below. To learn a more consistent hair shape and appearance model, we optimize per-strand hair features $\{ f_n^t \}$ that are shared across all time frames, in addition to the temporally varying global latent code $\bm{\mathit{z}}$. For each node $\mathbf{S}_n^t$ on a hair strand $\mathbf{S}^t$, we assign a unique feature vector $f^t_n$. The shared per-strand hair features and the temporally varying latent code $\bm{\mathit{z}}$ are fused to serve as the input to the hair volume decoder, as shown in Fig.~\ref{fig:hair_dec}.
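One such separable layer might look as follows in PyTorch; the channel counts, activation, and placement of the nonlinearity between the two convolutions are illustrative assumptions.
\begin{verbatim}
import torch.nn as nn

class SeparableStrandConv(nn.Module):
    # Splits a k x k convolution into a k x 1 convolution along the
    # hair-growing axis and a 1 x k convolution across strands.
    def __init__(self, c_in, c_out, k=3):
        super().__init__()
        self.along = nn.Conv2d(c_in, c_out, (k, 1), padding=(k // 2, 0))
        self.across = nn.Conv2d(c_out, c_out, (1, k), padding=(0, k // 2))
        self.act = nn.LeakyReLU(0.2)

    def forward(self, x):
        # x: (B, C, nodes_per_strand, num_strands)
        return self.act(self.across(self.act(self.along(x))))
\end{verbatim}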
\begin{figure}[htb]
\centering
\adjincludegraphics[width=0.45\textwidth, trim={0 0 0 {0.02\height}}, clip]{figs/hair_decoder.pdf}
\caption{\label{fig:hair_dec}\textbf{Architecture of the hair decoder.} The hair decoder takes both the global latent code $z$ and the per-strand hair features $\{ f_n^t\}$ as inputs. $z$ is first deconvolved into a 2D feature tensor, which is then padded and concatenated with $\{ f_n^t\}$. In the following layers, 2D convolutions are applied along the hair growing direction and across strands separately.}
\end{figure}
\noindent\textbf{Differentiable Volumetric Raymarching of 3D Scene Flow.}
Learning a volumetric scene representation from multi-view photometric information is sufficient for high-fidelity rendering and novel view synthesis. However, it is challenging for the model to reason about motion given such limited supervision, and the results exhibit poor temporal consistency, especially on unseen sequences. To better enforce temporal consistency, we develop a differentiable volumetric ray marching algorithm for 3D scene flow which enables training with multi-view 2D optical flow.
Given the transformation $(\mathbf t_n, \mathbf R_n, \mathbf s_n)$ of each primitive, the world coordinates of the nodes of a volumetric grid at frame $u$ are $\mathbf{V}_{xyz}^{u}=\mathbf{s}_n\mathbf{R}_n\mathbf{V}_{tpl}+\mathbf{t}_n$, where $\mathbf{V}_{tpl}$ are the coordinates of a 3D mesh grid ranging over $[-1, 1]$. The 3D scene flow from frame $u$ to $u+\epsilon$ can then be expressed per volumetric primitive as $\{ \mathbf{\delta V}_{xyz}^{u,u+\epsilon}=\mathbf{V}_{xyz}^{u+\epsilon} - \mathbf{V}_{xyz}^u \}$ and rendered into a 2D flow as:
\begin{align*}
\mathcal{I}_{p, flow}^{u, u+\epsilon} &= \int_{t_{min}}^{t_{max}} \mathbf{\delta V}_{xyz}^{u,u+\epsilon}(\mathbf{r}_p(t))\frac{dT(t)}{dt}dt, \\
T(t) &= \min\left(\int_{t_{min}}^{t} \mathbf{V}_\alpha^{u}(\mathbf{r}_p(t))dt , 1\right).
\end{align*}
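Rendering the scene flow reuses the accumulation weights of the color raymarcher, only with flow vectors in place of colors and frame-$u$ opacities; a minimal sketch under the same assumptions as the earlier raymarching sketch:
\begin{verbatim}
import torch

def render_scene_flow(flow, alphas_u, dt):
    # flow:     (S, 3) scene-flow samples dV_xyz^{u,u+eps}(r_p(t))
    # alphas_u: (S,)  opacity samples of frame u along the same ray
    T = torch.clamp(torch.cumsum(alphas_u * dt, dim=0), max=1.0)
    dT = T - torch.cat([T.new_zeros(1), T[:-1]])
    return (dT[:, None] * flow).sum(dim=0)  # flow rendered at pixel p
\end{verbatim}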
\noindent\textbf{Training Objectives.} We train our model in an end-to-end manner with the following loss:
\begin{align*}
\mathcal{L} =& \mathcal{L}_{pho} + \lambda_{flow}\mathcal{L}_{flow} + \lambda_{geo}\mathcal{L}_{geo} \\
&+\lambda_{vol}\mathcal{L}_{vol} + \lambda_{cub}\mathcal{L}_{cub} + \lambda_{KL}\mathcal{L}_{KL}.
\end{align*}
\noindent The first term $\mathcal{L}_{pho}$ is the photometric loss that penalizes the difference between the rendered image $\mathcal{\tilde{I}}_p$ and the ground-truth image $I_{p,gt}$ over all sampled pixels $p\in\mathcal{P}$,
\begin{align*}
\mathcal{L}_{pho} = \sum_{p\in\mathcal{P}} ||I_{p,gt} - \mathcal{\tilde{I}}_p||^2_2.
\end{align*}
\noindent The second term $\mathcal{L}_{flow}$ enforces temporal consistency of the volumetric primitives between frame $u$ and its adjacent frame $u+\epsilon$ by minimizing the difference between the rendered 2D flow and the ground-truth optical flow $I_{p,of}^{u, u+\epsilon}$,
\begin{align*}
\mathcal{L}_{flow} = \sum_{p\in\mathcal{P}} \mathcal{A}_p||I_{p,of}^{u, u+\epsilon} - \mathcal{I}_{p, flow}^{u, u+\epsilon}||^2_2,
\end{align*}
\noindent where $\epsilon\in\{-1, 1\}$. Note that we use $\mathcal{A}_p$ to mask out the background and that we do not backpropagate the errors from $\mathcal{L}_{flow}$ to $\mathcal{A}_p$, so that background noise in the optical flow does not corrupt the learned opacities.
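A minimal sketch of this masked flow loss, with the stop-gradient on $\mathcal{A}_p$ made explicit via detach (tensor shapes are illustrative):
\begin{verbatim}
import torch

def flow_loss(flow_pred, flow_gt, alpha):
    # flow_pred, flow_gt: (P, 2) rendered and ground-truth flow per pixel
    # alpha:              (P,)  accumulated opacities A_p
    w = alpha.detach()  # no gradient from this loss into A_p
    return (w * (flow_gt - flow_pred).pow(2).sum(dim=-1)).sum()
\end{verbatim}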
To encourage the hair and head primitives to move with the tracked head mesh and guide hair strands, $\mathcal{L}_{geo}$ measures the difference between the mesh/strand vertices and their corresponding regressed values,
\begin{align*}
\mathcal{L}_{geo} = \sum_{n}||\bm{S}^t_n - \bm{S}^t_{n,gt}||^2_2 + \sum_m||\bm{v}^t_m - \bm{v}^t_{m, gt}||^2_2,
\end{align*}
\noindent where $\bm{S}_n^t$ and $\bm{v}_m^t$ are the coordinates of the $n$-th node of the tracked guide hair and the $m$-th vertex of the tracked head mesh at frame $t$, and the subscript $gt$ denotes the corresponding ground-truth value.
We also add several regularization terms that constrain the layout of the volumetric primitives:
\begin{align*}
\mathcal{L}_{vol} &= \sum_{i=1,\cdots,N_p} \prod_{j\in\{x,y,z\}} s_i^j, \\
\mathcal{L}_{cub} &= \sum_{i=1,\cdots,N_p} ||max(s_i^x, s_i^y, s_i^z) - min(s_i^x, s_i^y, s_i^z)||,
\end{align*}
\noindent where $N_p$ is the total number of volumetric primitives and $s_i^x, s_i^y, s_i^z$ are the three entries of each primitive's scale $\bm{s}_i$. These two regularization terms prevent each primitive from growing too large while keeping its per-axis scales similar, so that primitives remain approximately cubic. The last term is the Kullback-Leibler divergence $\mathcal{L}_{KL}$, which smooths the learned distribution of the latent code $\bm{z}$ and pushes it toward the standard normal distribution $\mathcal{N}(0, 1)$. A sketch of these regularizers follows.
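The regularizers translate directly into a few tensor reductions; a minimal sketch, assuming the standard closed-form diagonal-Gaussian KL and a log-variance parameterization of the encoder output:
\begin{verbatim}
import torch

def regularizers(scales, mu, logvar):
    # scales: (N_p, 3) per-axis scales s_i of all primitives
    L_vol = scales.prod(dim=1).sum()                  # total volume
    L_cub = (scales.max(dim=1).values -
             scales.min(dim=1).values).abs().sum()    # aspect penalty
    # KL( N(mu, sigma^2) || N(0, 1) ) for a diagonal Gaussian
    L_kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum()
    return L_vol, L_cub, L_kl
\end{verbatim}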
\section{Related Work}
In this section, we first discuss the most closely related classical hair dynamics and shape modeling methods.
We then discuss learning-based approaches that use either volumetric or non-volumetric scene representations for spatio-temporal modeling.
\noindent \textbf{Image-based Hair Geometry Acquisition} is challenging due to the complicated hair geometry, the massive number of strands, severe self-occlusion and collisions, and view-dependent appearance. Paris \textit{et al.}~\cite{paris2008hair_photobooth, paris2004capture} and Wei \textit{et al.}~\cite{wei2005modeling} reconstruct 3D hair geometry from 2D/3D orientation fields using multi-view images. Luo \textit{et al.}~\cite{luo2012multi, luo2013wide} further improve the 3D reconstruction by refining the point cloud from traditional MVS with structure-aware aggregation and strand-based refinement. Luo \textit{et al.}~\cite{luo2013structure} and Hu \textit{et al.}~\cite{hu2014robust} progressively fit hair-specific structures like ribbons and wisps to the point cloud. Recently, Nam \textit{et al.}~\cite{nam2019lmvs} substituted the plane assumption of conventional MVS with a line-based structure to reconstruct 3D line clouds. Sun \textit{et al.}~\cite{sun2021hairinverse} use OLAT images for more efficient line-based MVS reconstruction and develop an inverse rendering pipeline that reasons about hair-specific reflectance. However, none of these methods explicitly models temporal consistency for a time-series capture.
\noindent\textbf{Dynamic Hair Capture.}
Compared to the vast body of work on hair geometry acquisition, work on capturing hair dynamics~\cite{hu2017simulation, zhang2012simulation, xu2014dynamic, yang2019dynamic} is much scarcer. Zhang \textit{et al.}~\cite{zhang2012simulation} use hair simulation to enforce better temporal consistency of per-frame hair reconstructions.
Hu \textit{et al.}~\cite{hu2017simulation} solve for the physics parameters of a hair dynamics model by running parallel simulations under different parameters and adopting the one that best matches the visual observation.
Xu \textit{et al.}~\cite{xu2014dynamic} perform visual tracking by aligning per-frame reconstructions of hair strands with motion paths of hair strands on a horizontal slice of a video volume.
Yang \textit{et al.}~\cite{yang2019dynamic} develop a deep learning framework for hair tracking using indirect supervision from 2D hair segmentation and a digital 3D hair dataset.
However, those methods mainly focus on geometry modeling and are either not photometrically accurate or do not support drivable animation.
\noindent\textbf{Non-Volumetric Representations}
are widely studied in the spatio-temporal modeling literature.
Mesh-based representations~\cite{steve_meshvae, tewari2017mofa, tran2018nonlinear3dmm, li2017flame, xiang2020monoclothcap, bagautdinov2021driving} are a perfect fit for modeling surfaces and are highly efficient to render.
However, they have limitations when modeling complex geometry like hair.
Multi-plane images~\cite{szeliski1998stereo, zhou2018mpi, mildenhall2019localmpi, attal2020matryodshka, broxton2020immersive} are good at modeling continuous shapes similar to volumetric representations, but are limited to a constrained set of viewing angles.
Point cloud representations~\cite{aliev2020npr, wiles2020synsin, meshry2019neural, Lassner_pulsar, ruckert2021adop} can model various geometries with high fidelity.
When used for appearance modeling, however, point-based representations may suffer from their innate sparseness, which can result in holes.
Thus, image-level rendering techniques~\cite{ronneberger2015unet} often accompany such representations for completeness.
\noindent\textbf{Volumetric Representations}
are highly flexible and thus can model many different objects.
Their dense grid-like structure makes them geometrically complete by design.
Many previous works have demonstrated the strength of such representations in geometry modeling~\cite{choy20163dr2n2, wu20163dvolvae, kar2017lsm, tulsiani2017multi, zhu2017rethinking, NEURIPS2018_ziyan, tung2019learning}.
Some recent works~\cite{sitzmann2019deepvoxels, steve_nvs, peng2021neuralbody} have also shown their effectiveness in modeling appearance.
DeepVoxels~\cite{sitzmann2019deepvoxels} learns a 3D grid of features as the scene representation.
Neural Volumes~\cite{steve_nvs} learns a grid of discrete color and density values via volumetric raymarching.
Neural Body~\cite{peng2021neuralbody} combines SMPL~\cite{loper2015smpl} with Neural Volumes~\cite{steve_nvs} for body modeling.
Nevertheless, the rendering quality, efficiency, and memory footprint of these volumetric representations are still limited by the voxel resolution.
To overcome this major drawback of volumetric methods, MVP~\cite{steve_mvp} proposes a hybrid representation for efficient and high-fidelity rendering.
It attaches a set of local volumetric primitives to a tracked head mesh and employs a tailored volumetric raymarching algorithm for fast rendering via a BVH~\cite{karras2013fastbvh}.
The tracked mesh provides a good initialization for the positions and rotations of the primitives that are jointly learned.
Still, finding the globally optimal positions and rotations purely based on a photometric reconstruction loss is highly challenging due to many local minima in the energy formulation.
\noindent\textbf{Coordinate-based Representations}
have been a major focus of the recent 3D learning literature due to their low memory footprint and ability to dynamically allocate model capacity to the relevant regions of 3D space.
Many works have demonstrated their ability to reconstruct high fidelity geometry~\cite{park2019deepsdf, saito2019pifu, chabra2020deep_local_shape, saito2020pifuhd, jiang2020local, genova2020local, mescheder2019occupancy, peng2020con} or to generate photo-realistic rendering results~\cite{sitzmann2019srn, niemeyer2020dvr, yariv2020idr, mildenhall2020nerf, liu2020nsvf, yu2021plenoctrees}.
NeRF~\cite{mildenhall2020nerf} learns a volumetric radiance field of a static scene from multi-view photometric supervision using a differentiable raymarcher, but suffers from long rendering times. Several works~\cite{liu2020nsvf, yu2021plenoctrees, kilonerf, lindell2021autoint} have improved the rendering efficiency of NeRF on static scenes.
Among these approaches, the most related to ours are spatio-temporal modeling techniques~\cite{tretschk2021nrnerf, park2021nerfies, li2021nsff, li2021neural, xian2021space, pumarola2021dnerf, Wang_2021_nvnerf, park2021hypernerf}.
Non-rigid NeRF~\cite{tretschk2021nrnerf}, D-NeRF~\cite{pumarola2021dnerf} and Nerfies~\cite{park2021nerfies} introduce dynamic modeling frameworks with a canonical radiance field and per-frame warp fields.
Some works~\cite{li2021neural, xian2021space, li2021nsff, Wang_2021_nvnerf, yuan2021star, wang2021ntf} model a 3D video by additionally conditioning the radiance field on temporally varying latent codes or an additional time index.
Xian \textit{et al.}~\cite{xian2021space} further leverage depth as an extra source of supervision.
STaR~\cite{yuan2021star} models scenes that consist of a background and one dynamic rigid object.
NSFF~\cite{li2021nsff} also combines a static and dynamic NeRF pipeline and uses optical flow to constrain the 3D scene flow derived from the NeRF model of adjacent time frames.
Wang \textit{et al.}~\cite{Wang_2021_nvnerf} introduce a grid of local animation codes for better generalization and improved rendering efficiency.
However, these methods are still limited by either sampling resolution or their ability to model complex motions, and they do not generalize well to unseen motions.
"\\section{Introduction}\n\n\\par Type Ia supernovae are useful cosmological tools as standardizable(...TRUNCATED)
| {"timestamp":"2021-12-15T02:00:43","yymm":"2112","arxiv_id":"2112.06951","language":"en","url":"http(...TRUNCATED)
|
"\\section{Introduction}\n\nSince the pioneer work of Alt-Caffarelli \\cite{AC81}, Dirichlet pr(...TRUNCATED)
| {"timestamp":"2022-01-05T02:09:53","yymm":"2112","arxiv_id":"2112.06962","language":"en","url":"http(...TRUNCATED)
|
"\\section{Introduction}\n\n\nQuantum localization is a fundamental phenomenon first introduced in t(...TRUNCATED)
| {"timestamp":"2022-08-17T02:09:31","yymm":"2112","arxiv_id":"2112.06890","language":"en","url":"http(...TRUNCATED)
|
"\\section{Medial anatomy of TPN morphologies} \nTo understand the thermodynamic costs of packing in(...TRUNCATED)
| {"timestamp":"2021-12-15T02:01:17","yymm":"2112","arxiv_id":"2112.06977","language":"en","url":"http(...TRUNCATED)
|
"\n\\section{Abstract}\n\nLogarithmic conformation reformulations for viscoelastic constitutive laws(...TRUNCATED)
| {"timestamp":"2021-12-14T02:43:07","yymm":"2112","arxiv_id":"2112.06829","language":"en","url":"http(...TRUNCATED)
|
"\n\n\n\n\n\\subsection{Reanalysis of pre-O3b events}\n\n\\begin{figure}\n\t\\begin{center}\n\t\\inc(...TRUNCATED)
| {"timestamp":"2021-12-14T02:44:12","yymm":"2112","arxiv_id":"2112.06861","language":"en","url":"http(...TRUNCATED)
|
"\\section*{The metabolic law behind turbulence}\n\\begin{figure*\n\\centering\n\\begin{overpic}[abs(...TRUNCATED)
| {"timestamp":"2021-12-16T02:11:21","yymm":"2112","arxiv_id":"2112.06923","language":"en","url":"http(...TRUNCATED)
|
End of preview.
No dataset card yet
- Downloads last month
- 6