Conwy Castle (formerly Conway Castle; Welsh: Castell Conwy) is a fortified castle in the Welsh town of Conwy (county of Conwy, north Wales), built between 1283 and 1287 (or 1289) to a design by the military architect James of St. George (1230-1309) at the behest of Edward I of England (1239-1307).
It is the first of the four fortresses built in north Wales by Edward I of England that make up the so-called "Iron Ring", as well as one of the best-preserved fortresses in north Wales and one of the most imposing medieval fortresses in Europe; together with the other "Iron Ring" castles (Caernarfon Castle, Beaumaris Castle and Harlech Castle), it has been listed as a UNESCO World Heritage Site since 1986.
The castle is currently in the care of Cadw.
Location
The castle stands on Rose Hill Street, along the estuary of the River Conwy, near the town's bridges (one of which is the work of Thomas Telford).
Characteristics
The castle measures 1,273 metres in length and comprises eight main towers (out of a total of 21 supporting the walls), each 70 feet high.
History
Construction of the castle was conceived by Edward I of England between January and May 1283, after his troops, during the Second Welsh Campaign (intended to curb the region's independence movement led by Llywelyn ap Gruffydd), had occupied Snowdonia and the valley of the River Conwy (Gwynedd).
Construction was entrusted to James of St. George, one of the greatest military architects of the age, and to the engineer Richard of Chester.
The building work, completed in 1287 (or 1289), employed 1,500 men in the summer of 1285 alone.
In the autumn of 1294, Conwy Castle served the English as a base of operations against the Welsh revolt led by Madoc ap Llywelyn.
Over time, and owing to its location, the castle risked falling into ruin.
In 1346, a first restoration of the castle was undertaken at the behest of Edward the Black Prince.
In 1401, the castle was seized by followers of Owain Glyndŵr (c. 1359-1416).
In 1624, the castle passed to the Viscount of Conwy, who purchased the building for 100 pounds.
Conwy Castle lost its strategic role in the mid-seventeenth century, during the Civil War, when it was occupied for three months in 1646 by Parliamentarian troops under General Oliver Cromwell.
Image gallery
Notes
See also
Castles and Town Walls of King Edward in Gwynedd
Edward I of England
Linear castle
Other projects
External links
https://web.archive.org/web/20110405093020/http://www.cadw.wales.gov.uk/upload/resourcepool/Conwy_Castle_Reconsv24910.pdf
Conwy County Borough
Conwy
World Heritage Sites in the United Kingdom
Customize our Car Cleaning Postcard Template and more!
Advertise an upcoming deal on detailing or other services with a car cleaning postcard made with our easy-to-customize templates and online editor. Modify your template's color theme, add pictures, and insert relevant text to catch your customers' eyes so they'll know where to go when their cars get dirty. Print from any location, or let us print your postcards for you so you can mail them out ASAP to your targeted demographic.
Philip Hughes:
* Hughes, Philip (b. 1964), Northern Irish footballer.
* Hughes, Philip (b. 1981), Irish footballer.
- New Features:
- Slots evaluation for ExpressoScripts
- Changes:
- Fixes:
- Known issues (Should be fixed in 1.6):
- The Expresso `Print("A simple non-evaluated string...");` method won't work when used inside a branch where nodes also use GetObj/SetObj/GetProperty/SetProperty
- Documentation (web)
## Unreal Importer 1.5 Changelog
- New Features:
- Added a message box that checks for an "ArticyRuntime" reference in the dependency modules of the Unreal Build Tool and asks the user to add it if it's not present.
- Added a checkbox inside the Articy Importer plugin preferences to skip the automatic ArticyRuntime verification (in case the developer has a custom build pipeline).
- String representation: new node "Get Object from String Representation" that takes a string parameter (StringID CloneID) to get an object by its internal Articy string representation.
- New expresso methods (IncrementProp, DecrementProp, IsPropInRange, IsInRange) to reflect Articy:Draft Expresso new methods.
- Changes:
- Removed "BranchLimit" from FlowPlayer
- Fixes:
- Fixed generated files for PS5 and Android
- Fixed Articy icon disappearing when using small editor icons in UE4
- Fixed CloneID that was always "1"
- Documentation (web)
- Updated "Getting the object" section to add String representation example.
- Removed step by step import process section
- Changed "Adjust build configuration" section of the documentation to reflect automatic ArticyRuntime reference new functionality.
## Unreal Importer 1.4 Changelog
- Unreal Engine 5 support
- Breaking Changes:
- A change in the code generator will break existing projects. Search "Error C2451" in [the readme](README.md) for the quick fix to get you compiling again.
- New Features:
- Rich text support using Unreal's Rich Text Block component, including hyperlinks ([#64](https://github.com/ArticySoftware/ArticyImporterForUnreal/pull/64)).
- Support for multiple, independent global variable sets ([#66](https://github.com/ArticySoftware/ArticyImporterForUnreal/pull/66)).
- Changes:
- Import with enabled live coding is now allowed in UE5.
- Moved generated method `U<ProjectName>ExpressoScripts::GetUserMethodsProviderObject()` to `UArticyExpressoScripts::GetUserMethodsProviderObject()`.
- Fixes:
- Fixed compilation issues with UE4.22
- Fixed compilation issues with Linux cross-compilation toolchain v19
- Fixed packaging compiler warning concerning `FADIHierarchy::RootObject` ([#60](https://github.com/ArticySoftware/ArticyImporterForUnreal/pull/60)).
- Fixed unsupported pragma compile error when building for PlayStation 4/5 ([#59](https://github.com/ArticySoftware/ArticyImporterForUnreal/pull/59) and [#67](https://github.com/ArticySoftware/ArticyImporterForUnreal/pull/67)).
- Fixed issue with Unicode characters in generated scripts.
- Fixed crash when changing levels while using custom script methods ([#63](https://github.com/ArticySoftware/ArticyImporterForUnreal/pull/63)).
- Fixed issues with `GetWorld()` in `UArticyDatabase` and `UArticyBaseObject` ([#68](https://github.com/ArticySoftware/ArticyImporterForUnreal/issues/68)).
- Documentation
- Added documentation for custom script methods and shadowing to the Readme ([#61](https://github.com/ArticySoftware/ArticyImporterForUnreal/pull/61)).
## Unreal Importer 1.3.1 Changelog
- General:
- New warning dialogue if you try to run the importer and a hotload is required while Live Coding is enabled ([#53](https://github.com/ArticySoftware/ArticyImporterForUnreal/pull/53)).
- New Features:
- Support for the new `Matrix` property for Location objects like Zones, Images, Text, etc. ([#56](https://github.com/ArticySoftware/ArticyImporterForUnreal/pull/56))
- Allow `setProp` Expresso function to be used to set Reference Slots using `getObj` ([#51](https://github.com/ArticySoftware/ArticyImporterForUnreal/pull/51)).
- Expose `GetInputPins` to Blueprint. Note that this involves a breaking change for anyone previously using `GetInputPins()` in C++. Please change all usages to `GetInputPinsPtr()`. ([#58](https://github.com/ArticySoftware/ArticyImporterForUnreal/pull/58))
- Fixes:
- Show an error message instead of crashing when failing to regenerate assets ([#48](https://github.com/ArticySoftware/ArticyImporterForUnreal/pull/48)).
- PausedOn nodes are no longer executed twice (once on arrival, once on leaving). This only has an effect if you're pausing on Instruction nodes ([#52](https://github.com/ArticySoftware/ArticyImporterForUnreal/pull/52)).
- No longer generate uncompilable code while using Blueprint Nativize ([#55](https://github.com/ArticySoftware/ArticyImporterForUnreal/pull/55)).
- Added proper dependencies so `ArticyEditorModule` can be included in another module without errors ([#57](https://github.com/ArticySoftware/ArticyImporterForUnreal/pull/57)).
## Unreal Importer 1.3.0 Changelog
- Unreal Engine 5 Early Access 2 Support ([#41](https://github.com/ArticySoftware/ArticyImporterForUnreal/pull/41)).
- Fixes:
- Fixed issue with plugin marking database and package assets for delete. Fixed in [#42](https://github.com/ArticySoftware/ArticyImporterForUnreal/pull/42). Resolves [#39](https://github.com/ArticySoftware/ArticyImporterForUnreal/issues/39).
- Support for Unreal 4.27
## Unreal Importer 1.2.0 Changelog
- Unreal Engine 4.26 Support
- Unreal Engine 4.20 and 4.21 no longer supported
- New Features:
- Custom Articy functions in expresso scripts now work properly with "self" and "GetObj" as parameters. The functions will use ArticyPrimitive as parameter. Keep in mind that "self" will give you a pin if used from inside a pin's expression. Cast to ArticyFlowPin and then call 'Get Owner' to access the node the pin is called on.
- Articy directory location and import asset location can now be changed! The previous hierarchy needs to be maintained. Either move your existing assets to the new location and change the "Articy Directory" in the plugin settings to the parent folder of the import asset (previously would be the Content folder), or make sure to delete all pre-existing articy assets (.articyue4 file, import asset, generated assets, assets such as images) and do a fresh import from articy to the new location.
- General:
- Added: Category "Articy Methods Provider" for articy custom functions
- Added: Support for ArticyRef/Id widget blueprint pins for Articy Function Library functions (Get Object on an ArticyRef for example)
- Fixed: Plugin Settings for package loading now refreshes upon asset regeneration rather than import.
- Fixed: Articy Object tooltips now display the Articy Id even without generated articy assets (before, you didn't know if the Id was set or not if the object didn't exist)
- Breaking Change: EArticyPausableType enum spelling for Dialog (-> Dialogue) and DialogFragment (-> DialogueFragment).
- C++:
- Added: SetPauseOn function for ArticyFlowPlayer that can take a bit-masked value to support multiple types at once
- Added: Automatic cleanup of your Articy Id Widget customization factories. While you can keep a reference to your factories yourself, you don't need to. The Articy Customization Manager will automatically clean up all factories that are registered at the point of shutdown.
- Fixed: Templated GetObjectOfClass function now contains the objects with the specified clone id, if available, rather than the base object
- Breaking Change: Articy Database now returns ArticyObjects rather than ArticyPrimitives (which were cast to ArticyObject in Blueprints automatically).
## Unreal Importer 1.1.0 Changelog
- New Features:
- Articy Global Variables Debugger added to the articy toolbar
- ArticyIdProperty Widget Customization system. Lets you add widgets from C++ to any SArticyIdProperty widget (ArticyId and ArticyRef structs primarily) without modifying plugin code
- Custom widgets for ArticyRef and ArticyId Blueprint pins
- New ArticyRef widget supports Clone settings
- ArticyIds use the previous ArticyRef widget
- New C++ meta specifiers for ArticyRef and ArticyId types:
- ArticyExactClass (locks the class filter if set to true)
- ArticyNoWidget (only for ArticyId, removes the customized widget)
- General:
- Added: Copy & Paste support for ArticyRef & ArticyId. ArticyRef copies can get pasted into ArticyIds and vice versa.
- Added: Global Variables asset uses the same view as the new GV debugger. This fixes categorization issues and allows for search by namespace and variable name.
- Added: Option in the plugin settings to sort children upon import. Default off as it degrades import performance.
- Changed: Revamp of articy asset picker: now includes the class filter button and an 'Exact Class' filter checkbox
- Changed: Articy asset picker now will always have its initial class restriction set to the highest possible in the hierarchy. Meaning: Blueprint created ArticyIds and ArticyRefs will display ArticyObject when opening the asset picker, C++-created ArticyIds and ArticyRefs with an "ArticyClassRestriction=..." meta specifier will have that class as the starting point.
- Changed: The class filter in the Articy asset picker now uses a list rather than a tree structure
- Changed: Articy Button on ArticyID/ArticyRef widgets now uses the current tab for ArticyNode elements (dialogues etc.) and opens up a new tab for entities instead. No more new windows!
- Fix: Articy Import Data now constructs its hierarchy objects properly
- Fix: Crash when selecting two actors of the same type with the same ArticyRef variables
- Blueprints:
- Added: ArticyRef is now hashable and can be used in sets and maps as keys. You can add duplicates at the moment, which will get removed upon Blueprint compilation, rather than the default behavior of not letting you add duplicates in the first place. This lets you easily tweak the data structures. This might change in the future. See below in the C++ section for a more detailed explanation.
- Added: MatchesRaw and MatchesEffective comparison functions for comparison of ArticyRefs. See below in the C++ section for a more detailed explanation.
- C++:
- Added: Static UArticyImportData::GetImportData() function
- Added: OnAssetsGenerated delegate in FArticyEditorModule, called whenever assets are generated. The previous "OnImportFinished" would not get called when only asset regeneration happened.
- Added: Static GetPackagesSlow() function in FArticyEditorModule
- Added: GetExpression function for ArticyScriptFragments, returning a script as a const FString reference
- Added: Different ToString functions for FArticyId and FArticyRef types
- Added: Made FArticyRef hashable (a combination of the underlying ID + effective CloneId is used). Since hash containers in UE4 make use of the == operator, the effective CloneID is compared rather than the actual CloneID (bReferenceBaseObject = true implies effective CloneId = 0, but the actual CloneId value can be different)
- Added: New comparison functions for FArticyRef: MatchesRaw and MatchesEffective.
- Added: FArticyId InitFromString function. Relies on the string contents to include a "Low=XXX" and "High=YYY" section.
- Changed: UArticyObject::FindAsset() now is an editor-only function
- Changed: UArticyObject::FindAsset() uses caching to avoid module and asset registry lookup. This improves performance significantly and ensures functionality for large articy projects inside UE4.
## Unreal Importer 1.0.2 Changelog
- New Features:
- Articy Flow Debugger added
- The flow debugger is an actor found in the plugin content folder (not the generated ArticyContent folder!), which can be placed in the world.
Upon setting the 'Start On' articy reference to a flow object of your choice and hitting Play, a simple UI will popup to display your dialogue and dialogue branches.
Depending on the 'Ignore invalid branches' bool, branches with unfulfilled conditions will either not appear or they will show up in red.
This is a means to test your imported dialogue easily without needing to setup a UI on your own.
- General:
- Changed: Articy Flow Player's 'Start On' attribute now can only select objects in the ArticyNode hierarchy (flow objects effectively, rather than entities)
- Changed: Removal of several monolithic headers (Engine.h and SlateBasics.h) and many include changes across the board
- Fix: ExpressoScripts that compare integers with floats now behave correctly. This is valid for all comparison operators (<, >, <=, >=, ==, !=)
- Fix: Compilation errors for Mac, Linux, and iOS.
## Unreal Importer 1.0.1 Changelog
- New Features:
- ArticyRef metadata attribute "ArticyClassRestriction" added in C++
This metadata attribute sets the class filter restriction to the chosen class permanently and cannot be changed without changing the metadata.
This allows programmers to set the allowed class hierarchy for a Blueprint-exposed ArticyRef structure.
Example here:
```
UPROPERTY(EditAnywhere, BlueprintReadOnly, Category = "Setup", meta=(ArticyClassRestriction="ArticyNode"))
FArticyRef StartOn;
```
- Blueprint:
- "Should pause on" function of the Articy Flow Player exposed to Blueprints.
This function allows you to test whether the flow player pauses on an articy node or not.
- "Get Type" function of the Articy Node classes exposed to Blueprints.
This function allows you to get the type of a generic ArticyNode (Flow Fragment, Dialog Fragment etc.) and can be used in a Switch node.
- C++:
- Added export macros to the generated UCLASSES and USTRUCTS.
- General:
- Fixes in the backend to compile as an engine plugin
## Unreal Importer 1.0.0 Changelog
- Disclaimer: Please perform a 'full reimport' after upgrading to this version of the importer by opening up the new Articy Importer window in the level toolbar and clicking 'Force complete reimport'
In case error messages pop up, please close Unreal, recompile the project inside Visual Studio and start up the engine again.
- Articy Importer window added
- This window hosts the main controls of the importer. The button to open the window can be found in the level toolbar. The window will be expanded in the future with more options and functionality. As a consequence, the import options inside the plugin settings and the import data assets have been removed. Currently it enables the user to perform three import actions:
- Force complete reimport
- Reimport changes
- Regenerate assets
- Import Cache & Restoration added
- The importer will now cache the last valid import state and will try to restore that state when a new import fails to compile.
- Blueprint:
- Changed: ImportedPackages map of the Import Data Asset is no longer blueprint readable
- C++:
- Changed: ArticyImporter module renamed to ArticyEditor
- Changed: The Articy Asset Picker is now exported to other modules, meaning that it can be accessed for custom purposes without modifying plugin code
- General:
- Stability improved
- Added: Editor resources to better represent articy:draft related functionality
- Changed: PIE import queue now uses 'Reimport changes' instead of 'Complete reimport'
- Changed: Folder structure of the plugin. Code depending on paths, such as includes, may need to adapt to the new structure.
- Fix: Importing after closing the plugin settings no longer crashes the engine
## Unreal Importer 0.0.5 Changelog
- Disclaimer: Please perform a 'full reimport' after upgrading to this version of the importer by going into the plugin settings and clicking 'Force complete reimport'
In case error messages pop up, please close Unreal, recompile the project inside Visual Studio and start up the engine again.
This is due to Unreal's Hot Reload not handling changes in header files well. Due to class hierarchies changing this can lead to temporary error messages.
- Articy Asset Picker added for ArticyRef variables
- The new asset picker enables an easy lookup and selection of imported articy objects
- Tooltips provide more information on the various objects
- Double clicking the image of an ArticyRef variable opens up the selected asset inside Unreal
- The articy button next to the image opens up the selected object in articy:draft
- When expanding the ArticyRef variable, a class filter can be set to allow the asset picker to only show select objects (dialogue fragments, entities etc.)
- A search filter lets you browse through the available objects. You can filter by class name, ID, text, speakers and various other attributes
- One Unreal asset per package
- An exported package in articy:draft now generates one asset inside Unreal with all specified data inside.
This allows for much faster reimports as fewer assets have to be handled when reimporting
- Import optimization:
- When hitting the Import button, a compilation process now only happens when script fragments, template definitions or global variables changed.
If none of these changed, the import process finishes almost immediately
- Blueprint:
- Added: GetOutputPins function for all classes that implement the OutputPinsProvider interface (e.g. all flow objects)
- Added: ArticyPin: GetOwner function is now BlueprintCallable
- Added: GlobalVariables: GetVariablesOfType function added in which you can specify what type of variables you want (all, Ints, Strings or Bools)
- Added: ArticyAsset: LoadAsset function
- Changed: Target & TargetPin of ArticyJump are now BlueprintReadOnly instead of BlueprintReadWrite
- C++:
- Added: UArticyJump: GetTargetID and GetTargetPinID functions (same access rules as in Blueprints)
- Added: UArticyObject: GetArticyObjectChildrenIDs function (returns all children IDs that represent ArticyObjects)
- Added: UArticyObject: GetParentID and GetChildrenIDs() functions
- Changed: FindAsset function moved from UArticyPrimitive to UArticyObject
- Changed: Renamed GetOutputPins function to GetOutputPinsPtr
- Changed: All headers were moved to the Public folder, all cpp files were moved to the Private folder
- General:
- UE 4.19 is no longer supported starting with this release. Please use the dedicated UE 4.19 plugin instead
- Added: If Perforce is used for source control, the generated code is now automatically checked out
- Added: Console Command: "ArticyImporter.Reimport" works as if hitting the import button when prompted (adaptive reimport)
- Changed: ArticyRef helper functions now take UArticyObjects as parameters instead of UArticyPrimitives
- Changed: Various generated classes now inherit from more fitting classes rather than from ArticyObject ({YourProject}Entity now inherits from ArticyEntity instead of ArticyObject)
- Changed: Generated articy assets in Unreal will now save automatically after being generated
- Fixed: Removing a GlobalVariable set and reimporting no longer results in uncompilable code
- Fixed: GlobalVariables: SetByNamespaceAndVariable function now works as intended (parameters were used in the wrong order internally)
- Fixed: Setting a non-existing global variable inside an existing variable set no longer leads to a crash
\chapter*{Abstract}
\addcontentsline{toc}{chapter}{Abstract}
\adjustmtc
\markright{\MakeUppercase{Abstract}}
This thesis tackles the subject of spatio-temporal forecasting with deep learning, which is the task of forecasting complex phenomena represented by time series or videos, involving both complex temporal dynamics and strong spatial correlations. This is of crucial importance for many industrial applications, such as climate, healthcare or finance. The motivating application at Électricité de France (EDF) is short-term solar energy forecasting with fisheye images. Despite the great successes of deep learning in computer vision and natural language processing, pure data-driven methods still struggle in the task of physical process extrapolation, especially in data-scarce contexts and for non-stationary time series that can present sharp variations. We explore two main research directions for improving deep forecasting methods by injecting external physical knowledge. The first direction concerns the role of the training loss function. Instead of using the largely dominant mean squared error (MSE), we show that differentiable shape and temporal criteria, typically used as evaluation metrics in applications, can be leveraged to improve the performances of existing models. We address both the deterministic context with the proposed DILATE loss function and the probabilistic context, for which we aim at describing the predictive distribution with a small set of diverse and accurate scenarios, with our proposed STRIPE model. Our second direction is to augment incomplete physical models with deep data-driven networks for accurate forecasting. For video prediction, we introduce the PhyDNet model that disentangles PDE (partial differential equations) dynamics from residual information necessary for prediction, such as texture or details.
We further propose a learning framework (APHYNITY) that ensures a principled and unique linear decomposition between physical and data-driven components under mild assumptions, leading to better forecasting performances and parameter identification. We validate our contributions on many synthetic and real-world datasets, and on the solar energy dataset at EDF.
Keywords: deep learning, machine learning, spatio-temporal forecasting, solar energy forecasting.
\end{vcenterpage}
\clearpage{\pagestyle{empty}\cleardoublepage}
\section{Introduction\label{sec:intro}}
\lettrine[lines=3]{M}odelling and forecasting complex dynamical systems is a major challenge in domains such as environment and climate~\cite{rolnick2019tackling}, health science~\cite{choi2016retain}, and in many industrial applications~\cite{toubeau2018deep}. As explained in Chapter \ref{chap:intro}, Model-Based (MB) approaches typically rely on partial or ordinary differential equations (PDE/ODE) and stem from a deep understanding of the underlying physical phenomena. Machine learning (ML) and deep learning methods are more prior-agnostic yet have become state-of-the-art for several spatio-temporal prediction tasks. However, pure ML methods are still limited for modelling complex physical dynamics, and cannot properly generalize to new conditions unlike MB approaches.
Combining the MB and ML paradigms is an emerging trend to develop the interplay between the two paradigms. For example, \cite{brunton2016discovering,long2018pde} learn the explicit form of PDEs directly from data, \cite{raissi2017physics,sirignano2018dgm} use NNs as implicit methods for solving PDEs, \cite{seo2020} learn spatial differences with a graph network, \cite{ummenhofer2020} introduce continuous convolutions for fluid simulations, \cite{de2017deep} learn the velocity field of an advection-diffusion system,
\cite{greydanus2019hamiltonian,chen2019symplectic} enforce conservation laws in the network architecture or in the loss function.
The large majority of aforementioned ML/MB hybrid approaches assume that the physical model adequately describes the observed dynamics. This assumption is, however, commonly violated in practice. This may be due to various factors, e.g.~ idealized assumptions and difficulty to explain processes from first principles \cite{gentine}, computational constraints preventing a fine-grained modelling of the system \cite{epnet}, unknown external factors, forces and sources which are present \cite{Large2004}.
In this Chapter, we aim at leveraging prior dynamical ODE/PDE knowledge in situations where this physical model is \textit{incomplete}, i.e.~ unable to represent the whole complexity of observed data. To handle this case, we introduce a principled learning framework to Augment incomplete PHYsical models for ideNtIfying and forecasTing complex dYnamics~(APHYNITY). The rationale of APHYNITY, illustrated in Figure~\ref{fig:comparison_data_phys_coop} on the pendulum problem, is to \textit{augment} the physical model when---and only when---it falls short.
Designing a general method for combining ML and MB approaches is still a widely open problem, and a clear problem formulation for the latter is lacking \cite{Reichstein2019}. Our contributions towards these goals are the following:
\begin{itemize}
\item We introduce a simple yet principled framework for combining both approaches. We decompose the data into a physical and a data-driven term such that the data-driven component only models information that cannot be captured by the physical model. We provide existence and uniqueness guarantees~(Section~\ref{subsec:decomp}) for the decomposition given mild conditions, and show that this formulation ensures interpretability and benefits generalization.
\item We propose a trajectory-based training formulation~(Section~\ref{subsec:learning}) along with an adaptive optimization scheme~(Section~\ref{subsec:optim}) enabling end-to-end learning for both physical and deep learning components. This allows APHYNITY to \textit{automatically} adjust the complexity of the neural network to different approximation levels of the physical model, paving the way to flexible learned hybrid models.
\item We demonstrate the generality of the approach on three use cases (reaction-diffusion, wave equations and the pendulum) representative of different PDE families (parabolic, hyperbolic), having a wide spectrum of application domains, e.g.~ acoustics, electromagnetism, chemistry, biology, physics~(Section~\ref{sec:expes}). We show that APHYNITY is able to achieve performances close to complete physical models by augmenting incomplete ones, both in terms of forecasting accuracy and physical parameter identification. Moreover, APHYNITY can also be successfully extended to the non-stationary dynamics context (Section \ref{sec:aph-nonstat}).
\end{itemize}
\begin{figure}
\centering
\vspace{-0.6cm}
\begin{tabular}{ccc}
\hspace{-0.5cm} \includegraphics[height=4.5cm]{images/aphynity_data-driven.png} & \hspace{-0.5cm} \includegraphics[height=4.5cm]{images/aphynity_physical.png} &
\hspace{-0.5cm}
\includegraphics[height=4.5cm]{images/aphynity_P3.png}\\
(a) Data-driven Neural ODE & (b) Simple physical model & (c) Our APHYNITY framework
\end{tabular}
\caption[APHYNITY motivation.]{Predicted dynamics for the damped pendulum vs. ground truth (GT) trajectories $\nicefrac{\diff^2 \theta}{\diff t^2} + \omega_0^2 \sin \theta + \alpha \nicefrac{\diff \theta}{\diff t} = 0$. We show that in (a) the data-driven approach~\cite{chen2018neural} fails to properly learn the dynamics due to the lack of training data, while in (b) an ideal pendulum cannot take friction into account. The proposed APHYNITY shown in (c) augments the over-simplified physical model in (b) with a data-driven component. APHYNITY improves both forecasting (MSE) and parameter identification (Error $T_0$) compared to (b).}
\label{fig:comparison_data_phys_coop}
\end{figure}
\section{Related work}
\label{sec:related-work}
\paragraph{Correction in data assimilation} As discussed in Chapter \ref{chap:related_work}, data assimilation techniques such as the Kalman filter \cite{kalman1960new,becker2019recurrent} assume that the prediction errors correspond to noise. These errors are modelled probabilistically as random variables, and an optimal correction step is derived after each prediction step. In this sequential two-step scheme, also arising commonly in robotics and optimal control \cite{chen2004disturbance,li2014disturbance}, there is no cooperation between prediction and correction. The originality of APHYNITY is to leverage model-based prior knowledge by augmenting it with neurally parameterized dynamics; the residual does not correspond to noise but to an unknown or unmodelled part of the dynamical model. APHYNITY also ensures an optimal cooperation between the prior model and the augmentation.
\paragraph{Augmented physical models} Combining physical models with machine learning (\textit{gray-box or \textit{hybrid}} modelling) was first explored from the 1990's: \cite{psichogios1992hybrid,thompson1994modeling,rico1994continuous} use neural networks to predict the unknown parameters of physical models. The challenge of proper MB/ML cooperation was already raised as a limitation of gray-box approaches but not addressed. Moreover these methods were evaluated on specific applications with a residual targeted to the form of the equation.
In the last few years, there has been renewed interest in deep hybrid models bridging data assimilation techniques and machine learning to identify complex PDE parameters using a cautiously constrained forward model \cite{long2018pde,de2017deep}.
Recently, some approaches have specifically targeted the ML/MB cooperation in the case of incomplete physical models. HybridNet~\cite{long2018hybridnet} and PhICNet~\cite{saha2020phicnet} both use data-driven networks to learn additive perturbations or source terms to a given PDE. The former considers the favorable context where the perturbations can be accessed, and the latter the special case of additive noise on the input. \cite{wang2019integrating,neural20} propose several empirical fusion strategies with deep neural networks but lack theoretical grounding. Crucially, none of the aforementioned approaches addresses the issues of uniqueness of the decomposition or of proper cooperation for correct parameter identification. Besides, we found experimentally that this vanilla cooperation is inferior to the APHYNITY learning scheme in terms of forecasting and parameter identification performance~(see experiments in Section~\ref{sec:results}).
\section{The APHYNITY Model}
\label{sec:model}
In the following, we study dynamics driven by an equation of the form:
\begin{equation}
\label{eq:ode}
\frac{\diff X_t}{\diff t} = F(X_t)
\end{equation}
defined over a finite time interval $[0,T]$, where the state $X$ is either vector-valued, i.e.~ we have $X_t\in\mathbb{R}^d$ for every $t$ (pendulum equations in Section \ref{sec:expes}), or $X_t$ is a $d$-dimensional vector field over a spatial domain $\Omega\subset\mathbb{R}^k$, with $k\in\{2,3\}$, i.e.~ $X_t(x)\in\mathbb{R}^d$ for every $(t,x)\in[0,T]\times\Omega$ (reaction-diffusion and wave equations in Section \ref{sec:expes}).
We suppose that we have access to a set of observed trajectories ${\mathcal{D}} = \{X_\cdot:[0,T]\rightarrow{\mathcal{A}} \ |\ \forall t\in[0,T], \nicefrac{\diff X_t}{\diff t} = F(X_t)\}$, where ${\mathcal{A}}$ is the set of $X$ values (either $\mathbb{R}^d$ or vector field). In our case, the unknown $F$ has ${\mathcal{A}}$ as domain and we only assume that $F\in{\mathcal{F}}$, with $({\mathcal{F}}, \|\cdot\|)$ a normed vector space.
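As a concrete illustration of such a set of observed trajectories, a sketch of simulating damped-pendulum data (the first system studied in Section \ref{sec:expes}, here with the true parameters $T_0=6$, $\alpha=0.2$ used in the experiments) could look as follows; the integrator, time horizon and sampling grid are illustrative choices, not the exact experimental setup.

```python
# Sketch: generating observed trajectories X_t obeying dX/dt = F(X) for the
# damped pendulum with state X = (theta, dtheta/dt). Hypothetical horizon/grid.
import numpy as np
from scipy.integrate import solve_ivp

T0, alpha = 6.0, 0.2           # true parameters (as in the experiments)
omega0 = 2 * np.pi / T0

def F(t, X):
    """True dynamics (unknown to the learner in practice)."""
    theta, dtheta = X
    return [dtheta, -omega0**2 * np.sin(theta) - alpha * dtheta]

def sample_trajectory(X0, T=20.0, dt=0.5):
    t_eval = np.arange(0.0, T + 1e-9, dt)
    sol = solve_ivp(F, (0.0, T), X0, t_eval=t_eval, rtol=1e-8)
    return sol.y.T             # shape (T/dt + 1, 2): one row per time-step

traj = sample_trajectory([np.pi / 3, 0.0])
```

Each such `traj` array is one discretized element of ${\mathcal{D}}$; the learner only sees these samples, never $F$ itself.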
The overall APHYNITY approach is illustrated in Figure \ref{fig:fig_aphynity}.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{images/fig_aphynity.png}
\caption[Principle of the APHYNITY framework.]{The APHYNITY model for learning complex dynamical systems augments an approximate physical model $F_p$ by a deep data-driven model $F_a$. We propose a decomposition fulfilling uniqueness guarantees (Section~\ref{subsec:decomp}). We introduce a trajectory-based formulation for learning the joint ODE $\frac{\diff X_t}{\diff t}=(F_p+F_a)(X_t)$, which leads to different and experimentally better identification results than the physical model $F_p$ (Section~\ref{subsec:learning}). APHYNITY is learned end-to-end with an adaptive optimization algorithm (Section \ref{subsec:optim}) ensuring a meaningful cooperation between physics and augmentation.}
\label{fig:fig_aphynity}
\end{figure}
\subsection{Decomposing dynamics into physical and augmented terms\label{subsec:decomp}}
As introduced in Section \ref{sec:intro}, we consider the common situation where incomplete information is available on the dynamics, under the form of a family of ODEs or PDEs characterized by their temporal evolution $F_p\in{\mathcal{F}}_p\subset{\mathcal{F}}$. The APHYNITY framework leverages the knowledge of ${\mathcal{F}}_p$ while mitigating the approximations induced by this simplified model through the combination of physical and data-driven components. ${\mathcal{F}}$ being a vector space, we can write:
\[
F = F_p + F_a,
\]
where $F_p\in{\mathcal{F}}_p$ encodes the incomplete physical knowledge and $F_a\in{\mathcal{F}}$ is the data-driven augmentation term complementing $F_p$. The incomplete physical prior is supposed to belong to a known family, but the physical parameters~(e.g.~ propagation speed for the wave equation) are unknown and need to be estimated from data. Both $F_p$ and $F_a$ parameters are estimated by fitting the trajectories from ${\mathcal{D}}$.
The decomposition $F = F_p + F_a$ is in general not unique. For example, all the dynamics could be captured by the $F_a$ component. This decomposition is thus ill-defined, which hampers the interpretability and the extrapolation abilities of the model. In other words, one wants the estimated parameters of $F_p$ to be as close as possible to the true parameter values of the physical model and $F_a$ to play only a complementary role w.r.t $F_p$, so \textit{as to model only the information that cannot be captured by the physical prior}. For example, when $F\in{\mathcal{F}}_p$, the data can be fully described by the physical model, and in this case it is sensible to desire $F_a$ to be nullified; this is of central importance in a setting where one wishes to identify physical quantities, and for the model to generalize and extrapolate to new conditions. In a more general setting where the physical model is incomplete, the action of $F_a$ on the dynamics, as measured through its norm, should be as small as possible.
This general idea is embedded in the following optimization problem:
\begin{equation}
\label{eq:aphynity-opt}
\underset{F_p\in{\mathcal{F}}_p, F_a\in{\mathcal{F}}}{\min} ~~~\left\Vert F_a \right\Vert ~~~
\mathrm{subject~to} ~~~~ \forall X\in{\mathcal{D}}, \forall t, \frac{\diff X_t}{\diff t} =(F_p+F_a)(X_t).
\end{equation}
The originality of APHYNITY is to leverage model-based prior knowledge by augmenting it with neurally parameterized dynamics. It does so while ensuring optimal cooperation between the prior model and the augmentation.
A first key question is whether the minimum in Eq \ref{eq:aphynity-opt} is indeed well-defined, in other words whether there exists indeed a decomposition with a minimal norm $F_a$. The answer actually depends on the geometry of ${\mathcal{F}}_p$, and is formulated in the following proposition proven in Appendix~\ref{app:proof}:
\begin{prop}[Existence of a minimizing pair]\label{prop:exist_unique}
If ${\mathcal{F}}_p$ is a proximinal set\footnote{\label{fn:proximal-chebyshev}A proximinal set is one from which every point of the space has at least one nearest point. A Chebyshev set is one from which every point of the space has a unique nearest point. More details in Appendix~\ref{app:chebyshev}.}, there exists a decomposition minimizing Eq \ref{eq:aphynity-opt}.
\end{prop}
Proximinality is a mild condition which, as shown through the proof of the proposition, cannot be weakened. It is verified by any boundedly compact set; in particular, it holds for closed subsets of finite-dimensional spaces. However, if only existence is guaranteed, forecasts can still be expected to be accurate, but non-uniqueness of the decomposition would hamper the interpretability of $F_p$: the identified physical parameters would not be uniquely determined.
It is then natural to ask under which conditions solving problem Eq \ref{eq:aphynity-opt} leads to a unique decomposition into a physical and a data-driven component. The following result provides guarantees on the existence and uniqueness of the decomposition under mild conditions. The proof is given in Appendix~\ref{app:proof}:
\begin{prop}[Uniqueness of the minimizing pair]\label{prop:unique}
If ${\mathcal{F}}_p$ is a Chebyshev set\footnotemark[1], Eq \ref{eq:aphynity-opt} admits a unique minimizer. The $F_p$ in this minimizer pair is the metric projection of the unknown $F$ onto ${\mathcal{F}}_p$.
\end{prop}
The Chebyshev condition is strictly stronger than proximinality but is still quite mild and necessary. Indeed, many sets of interest in practice are Chebyshev, including all closed convex sets in strictly convex normed spaces; moreover, if ${\mathcal{F}} = L^2$, ${\mathcal{F}}_p$ can be any closed convex set, including all finite-dimensional subspaces. In particular, all the examples considered in the experiments are Chebyshev sets.
Propositions \ref{prop:exist_unique} and \ref{prop:unique} provide, under mild conditions, the theoretical guarantees for the APHYNITY formulation to infer the correct MB/ML decomposition, thus enabling both recovering the proper physical parameters and accurate forecasting.
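As a toy numerical illustration of the metric projection in Proposition \ref{prop:unique} (our own example, not taken from the experiments): in $L^2$, projecting the damped-pendulum field $F(u,v)=(v, -\omega_0^2\sin u - \lambda v)$ onto the frictionless family $\{(u,v)\mapsto(v, -w\sin u)\}$ reduces to a one-dimensional least-squares fit of $w$, which recovers $w\approx\omega_0^2$ when $u$ and $v$ are sampled independently.

```python
# Metric projection of the full pendulum field onto the frictionless family,
# approximating the L2 norm by Monte-Carlo sampling of states (u, v).
import numpy as np

rng = np.random.default_rng(0)
omega0_sq, lam = (2 * np.pi / 6) ** 2, 0.2      # hypothetical true parameters

u = rng.uniform(-np.pi / 2, np.pi / 2, 10_000)
v = rng.uniform(-1.0, 1.0, 10_000)

# Only the second component differs between F and the family:
# F_2(u,v) = -omega0^2 sin u - lam v,   F_p^w_2(u,v) = -w sin u.
# argmin_w sum (F_2 - F_p^w_2)^2 has the closed form below.
target = -omega0_sq * np.sin(u) - lam * v
w_star = -(target * np.sin(u)).sum() / (np.sin(u) ** 2).sum()
```

Here the friction term $-\lambda v$ is (nearly) orthogonal to the family, so the projection leaves the physical parameter essentially unbiased; the data-driven residual would then only have to account for friction.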
\subsection{Solving APHYNITY with deep neural networks\label{subsec:learning}}
In the following, both terms of the decomposition are parametrized and are denoted as $F_p^{\theta_p}$ and $F_a^{\theta_a}$. Solving APHYNITY then consists in estimating the parameters $\theta_p$ and $\theta_a$. $\theta_p$ are the physical parameters and are typically low-dimensional, e.g.~ 2 or 3 in our experiments for the considered physical models. For $F_a$, we need sufficiently expressive models able to optimize over all ${\mathcal{F}}$: we thus use deep neural networks, which have shown promising performances for the approximation of differential equations~\cite{raissi2017physics,ayed2019learning}.
When learning the parameters of $F_p^{\theta_p}$ and $F_a^{\theta_a}$, we have access to a finite dataset of trajectories discretized with a given temporal resolution $\Delta t$: ${\mathcal{D}}_{\text{train}} = \{(X^{(i)}_{k\Delta t})_{0\leq k\leq \left \lfloor{\nicefrac{T}{\Delta t}}\right \rfloor} \}_{1\leq i\leq N}$. Solving Eq \ref{eq:aphynity-opt} requires estimating the state derivative $\nicefrac{\diff X_t}{\diff t}$ appearing in the constraint term. One solution is to approximate this derivative using e.g.~ finite differences as in \cite{brunton2016discovering,greydanus2019hamiltonian,cranmer2020lagrangian}. This numerical scheme requires high space and time resolutions in the observation space in order to get reliable gradient estimates. Furthermore it is often unstable, leading to explosive numerical errors as discussed in Appendix~\ref{app:der_superv}. We propose instead to solve Eq \ref{eq:aphynity-opt} using an integral trajectory-based approach: we compute the predicted trajectory $\widetilde{X}^{(i)}_{k\Delta t}$ from an initial state $X^{(i)}_0$ using the current $F_p^{\theta_p} + F_a^{\theta_a}$ dynamics, then enforce the constraint $\widetilde{X}^{(i)}_{k\Delta t}=X^{(i)}_{k\Delta t}$. This leads to our final objective function on $(\theta_p, \theta_a)$:
\begin{equation}
\label{eq:opt_final}
\underset{\theta_p, \theta_a}{\min} ~~~\left\Vert F_a^{\theta_a} \right\Vert ~~~
\mathrm{subject~to} ~~~~ \forall i, \forall k, \widetilde{X}^{(i)}_{k\Delta t} = X^{(i)}_{k\Delta t},
\end{equation}
where $\widetilde{X}^{(i)}_{k\Delta t}$ is the approximate solution of the integral $X_0^{(i)} + \int_{0}^{k \Delta t} (F_p^{\theta_p} + F_a^{\theta_a})(X_s)\, \diff s$ obtained by a differentiable ODE solver.
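A minimal sketch of this trajectory-based formulation, assuming a fixed-step 4th-order Runge--Kutta integrator (the solver family used in the experiments); the helper names `rollout` and `traj_constraint` are our own, not from the original implementation.

```python
import torch

def rk4_step(f, x, dt):
    # One 4th-order Runge-Kutta step; PyTorch autograd differentiates through it.
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def rollout(f, x0, n_steps, dt):
    # Predicted trajectory X~ from X_0 under the current dynamics f = F_p + F_a.
    xs = [x0]
    for _ in range(n_steps):
        xs.append(rk4_step(f, xs[-1], dt))
    return torch.stack(xs)          # shape (n_steps + 1, *x0.shape)

def traj_constraint(f, traj, dt):
    # Discrepancy between the rollout from traj[0] and the observed states,
    # i.e. the constraint of the final objective turned into a penalty term.
    pred = rollout(f, traj[0], traj.shape[0] - 1, dt)
    return (pred - traj).norm(dim=-1).mean()
```

Because the whole rollout is differentiable, gradients with respect to both $\theta_p$ and $\theta_a$ flow through every predicted time-step, which is what couples the two components during training.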
In our setting, where we consider situations for which $F^{\theta_p}_p$ only partially describes the physical phenomenon, this coupled ML/MB formulation leads to different parameter estimates than using the MB formulation alone, as analyzed more thoroughly in Appendix~\ref{app:alt_methods}.
Interestingly, our experiments show that using this formulation also leads to a better identification of the physical parameters $\theta_p$ than fitting the simplified physical model $F^{\theta_p}_p$ alone~(Section~\ref{sec:expes}). With only incomplete knowledge of the physics, the $\theta_p$ estimator would otherwise be biased by the additional dynamics that need to be fitted to the data.
Appendix~\ref{app:ablation} also confirms that the integral formulation gives better forecasting results and a more stable behavior than supervising over finite difference approximations of the derivatives.
\subsection{Adaptively constrained optimization\label{subsec:optim}}
The formulation in Eq \ref{eq:opt_final} involves constraints which are difficult to enforce exactly in practice.
We consider a variant of the method of multipliers~\cite{constrained_optim} which uses a sequence of Lagrangian relaxations $\mathcal{L}_{\lambda_j}(\theta_p, \theta_a)$:
\begin{equation}
\label{eq:opt_final_relaxed}
\mathcal{L}_{\lambda_j}(\theta_p, \theta_a) = \|F_a^{\theta_a}\| + \lambda_j \cdot \mathcal{L}_{traj}(\theta_p, \theta_a),
\end{equation}
where $\mathcal{L}_{traj}(\theta_p, \theta_a) = \sum_{i=1}^N\sum_{h=1}^{T/\Delta t} \|X^{(i)}_{h\Delta t} - \widetilde{X}^{(i)}_{h\Delta t} \|$.
\begin{algorithm}[H]
\SetAlgoLined
Initialization: $\lambda_0\geq0, \tau_1 >0, \tau_2>0$\;
\For{epoch = $1:N_{epochs}$} {
\For{iter in $1:N_{iter}$}{
\For{batch in $1:B$}{
$\theta_{j+1} = \theta_j - \tau_1 \nabla \left[ \lambda_j\mathcal{L}_{traj}(\theta_j) + \left\Vert F_a \right\Vert \right] \;$
}
}
$\lambda_{j+1} = \lambda_j +$ $\tau_2\mathcal{L}_{traj}(\theta_{j+1}) \;$
}
\caption{\label{alg:optim} APHYNITY}
\end{algorithm}
This method needs an increasing sequence $(\lambda_j)_j$ such that the successive minima of $\mathcal{L}_{\lambda_j}$ converge to a solution~(at least a local one) of the constrained problem in Eq \ref{eq:opt_final}. We select $(\lambda_j)_j$ with an iterative strategy: starting from a value $\lambda_0$, we alternate between minimizing $\mathcal{L}_{\lambda_j}$ by gradient descent\footnote{Convergence to a local minimum is not necessary; a few gradient steps are often sufficient for a successful optimization.} and updating $\lambda_j$ with:
$\lambda_{j+1} = \lambda_j + \tau_2\mathcal{L}_{traj}(\theta_{j+1})$, where $\tau_2$ is a chosen hyper-parameter and $\theta = (\theta_p, \theta_a)$. This procedure is summarized in Algorithm~\ref{alg:optim}. This adaptive iterative procedure allows us to obtain stable and robust results, in a reproducible fashion, as shown in the experiments.
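A self-contained toy sketch of Algorithm \ref{alg:optim}: the true dynamics, model sizes, hyper-parameters ($\lambda_0$, $\tau_1$, $\tau_2$, loop counts) and the forward-Euler rollout are all illustrative simplifications (the experiments use RK4), chosen only to show the adaptive $\lambda$ update.

```python
import torch
from torch import nn

# Toy setup: true dynamics dx/dt = -x; F_p has a learnable coefficient a,
# F_a is a small MLP that should end up close to zero.
torch.manual_seed(0)
dt, n_steps = 0.1, 10

class Fp(nn.Module):
    def __init__(self):
        super().__init__()
        self.a = nn.Parameter(torch.tensor(0.1))   # mis-specified initial value
    def forward(self, x):
        return -self.a * x

fp = Fp()
fa = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))

def rollout(x0):                                   # forward Euler, for brevity
    xs = [x0]
    for _ in range(n_steps):
        x = xs[-1]
        xs.append(x + dt * (fp(x) + fa(x)))
    return torch.stack(xs)

# Observed trajectories generated with the (here known) true dynamics a = 1.
x0 = torch.tensor([[0.5], [1.0], [1.5]])
with torch.no_grad():
    data = [x0]
    for _ in range(n_steps):
        data.append(data[-1] + dt * (-data[-1]))
    data = torch.stack(data)

def traj_loss():                                   # L_traj
    return (rollout(x0) - data).abs().mean()

def fa_norm():                                     # crude estimate of ||F_a||^2
    return fa(x0).pow(2).mean()

lam, tau2 = 1.0, 10.0                              # lambda_0, tau_2
opt = torch.optim.Adam(list(fp.parameters()) + list(fa.parameters()), lr=0.05)
init = float(traj_loss())
for epoch in range(20):                            # outer lambda updates
    for _ in range(10):                            # inner gradient steps (tau_1 via Adam)
        opt.zero_grad()
        loss = fa_norm() + lam * traj_loss()       # Lagrangian relaxation
        loss.backward()
        opt.step()
    lam = lam + tau2 * float(traj_loss())          # lambda_{j+1} update
```

As $\lambda_j$ grows, the trajectory constraint dominates the relaxed objective, while the $\|F_a\|$ term keeps steering the fit toward the physical component.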
\section{Experimental validation\label{sec:expes}}
We validate our approach on three classes of challenging physical dynamics: the damped pendulum, reaction-diffusion, and wave propagation, representative of various application domains such as chemistry, biology or ecology (for reaction-diffusion) \cite{cantrell2004spatial,chung2007bifurcation,volpert2009reaction} and earth physics, acoustics, electromagnetism or even neurobiology (for wave equations) \cite{Slater1937, NUNEZ1974}.
The last two dynamics are described by PDEs and thus, in practice, must be learned from very high-dimensional vectors obtained by discretizing the original compact domain. This makes learning much more difficult than in the one-dimensional pendulum case. For each problem, we investigate the cooperation between physical models of increasing complexity encoding incomplete knowledge of the dynamics~(denoted \textit{Incomplete physics} in the following) and data-driven models. We show the relevance of APHYNITY~(denoted \textit{APHYNITY models}) both in terms of forecasting accuracy and physical parameter identification.
\subsection{Experimental setting\label{subsec:exp_sett}}
We describe the three families of equations studied in the experiments. In all experiments, ${\mathcal{F}}=\mathcal{L}^2({\mathcal{A}})$ where ${\mathcal{A}}$ is the set of all admissible states for each problem, and the $\mathcal{L}^2$ norm is computed on ${\mathcal{D}}_{train}$ by: $\|F\|^2 \approx \sum_{i,k}\|F(X^{(i)}_{k\Delta t})\|^2$. All considered sets of physical functionals ${\mathcal{F}}_p$ are closed and convex in ${\mathcal{F}}$ and thus are Chebyshev. In order to enable the evaluation on both prediction and parameter identification, all our experiments are conducted on simulated datasets with known model parameters. Each dataset has been simulated using an appropriate high-precision integration scheme for the corresponding equation. All solver-based models take the first state $X_0$ as input and predict the remaining time-steps by integrating $F$ with the same generic differentiable ODE solver~(4th-order Runge--Kutta)\footnote{This integration scheme thus differs from the one used for data generation, the rationale for this choice being that, when training a model, one does not know exactly how the data was generated.}. Implementation details and architectures are given in Appendix~\ref{app:implementation}.
\paragraph{Damped pendulum:}
The evolution of a damped pendulum is governed by the ODE $\frac{\diff^2\theta}{\diff t^2} + \omega_0^2 \sin \theta +\lambda \frac{\diff\theta}{\diff t} = 0$, where $\theta(t)$ is the angle, $\omega_0 = \frac{2 \pi}{T_0}$ is the proper pulsation~($T_0$ being the period) and $\lambda$ is the damping coefficient. With the state $X = (\theta, \frac{\diff\theta}{\diff t})$, the ODE can be written as in Eq \ref{eq:ode} with
$ F : X \mapsto ( \frac{\diff\theta}{\diff t} , - \omega_0^2 \sin \theta - \lambda \frac{\diff\theta}{\diff t})$.
\noindent We consider the following physical models of increasing complexity:
\begin{itemize}[leftmargin=*,align=left]
\itemsep0em
\item \textit{Hamiltonian models~}\cite{greydanus2019hamiltonian,toth2019hamiltonian}, an energy conservative approximation of the system, with ${\mathcal{F}}_p = \{F^{\mathcal{H}}_p: \linebreak (u,v) \mapsto (\partial_y \mathcal{H}(u,v), -\partial_x \mathcal{H}(u,v))\ |\ \mathcal{H}\in H^1(\mathbb{R}^2)\}$ where $H^1(\mathbb{R}^2)$ is the first order Sobolev space.
\item \textit{Param ODE ($\omega_0$)}, the pendulum without friction, with ${\mathcal{F}}_p = \{F^{\omega^2_0}_p:(u,v) \mapsto (v,- \omega_0^2 \sin u)\ |\ \linebreak \omega^2_0\geq\omega_{\min}^2\}$.
\item \textit{Param ODE ($\omega_0, \lambda$)}, the full pendulum equation (but with unknown parameters), with ${\mathcal{F}}_p=\{F^{\omega^2_0,\lambda}_p:(u,v)\mapsto(v,-\omega_0^2 \sin u - \lambda v)\ |\ \omega^2_0\geq\omega_{\min}^2, \lambda\geq\lambda_{\min}>0\}$.
\end{itemize}
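The parametric families above can be implemented, for instance, as a single module covering both \textit{Param ODE ($\omega_0$)} and \textit{Param ODE ($\omega_0, \lambda$)}; the softplus reparameterization used here to enforce $\omega^2_0\geq\omega_{\min}^2$ and $\lambda\geq\lambda_{\min}$ is one possible choice, not necessarily the one used in the experiments.

```python
import torch
from torch import nn

class ParamODEPendulum(nn.Module):
    """F_p: (u, v) -> (v, -omega0^2 sin u - lam v).
    With learn_damping=False this is the frictionless family Param ODE (omega0)."""
    def __init__(self, learn_damping=True, w_min=1e-3, lam_min=1e-3):
        super().__init__()
        self._w = nn.Parameter(torch.tensor(0.0))     # unconstrained parameters
        self._lam = nn.Parameter(torch.tensor(0.0))
        self.learn_damping = learn_damping
        self.w_min, self.lam_min = w_min, lam_min

    @property
    def omega0_sq(self):     # softplus keeps omega0^2 >= w_min > 0
        return self.w_min + nn.functional.softplus(self._w)

    @property
    def lam(self):           # keeps lambda >= lam_min > 0
        return self.lam_min + nn.functional.softplus(self._lam)

    def forward(self, X):
        u, v = X[..., 0], X[..., 1]
        dv = -self.omega0_sq * torch.sin(u)
        if self.learn_damping:
            dv = dv - self.lam * v
        return torch.stack([v, dv], dim=-1)
```

The data-driven augmentation $F_a$ would then be any network with the same $(u,v)\mapsto\mathbb{R}^2$ signature, and the joint dynamics is simply the sum of the two modules' outputs.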
\paragraph{Reaction-diffusion equations:}
We consider a 2D FitzHugh-Nagumo type model~\cite{klaasen1984fitzhugh}. The system is driven by the PDE
\(
\frac{\partial u}{\partial t} = a\Delta u + R_u(u,v; k),
\frac{\partial v}{\partial t} = b\Delta v + R_v(u,v)\)
where $a$ and $b$ are respectively the diffusion coefficients of $u$ and $v$, $\Delta$ is the Laplace operator. The local reaction terms are $R_u(u,v; k) = u - u^3 - k - v, R_v(u,v) = u - v$. The state is $X=(u,v)$ and is defined over a compact rectangular domain $\Omega$ with periodic boundary conditions.
\noindent The considered physical models are:
\begin{itemize}[leftmargin=*,align=left]
\itemsep0em
\item \textit{Param PDE ($a,b$)} with unknown ($a,b$) diffusion terms and without reaction terms:
${\mathcal{F}}_p = \{F_p^{a,b}:(u,v) \mapsto (a\Delta u, b\Delta v)\ | \ a\geq a_{\min} >0, b\geq b_{\min}>0\}$;
\item \textit{Param PDE ($a,b,k$)} the full PDE with unknown parameters:
${\mathcal{F}}_p = \{F_p^{a,b,k} : (u,v)\mapsto \linebreak (a\Delta u + R_u(u,v;k), b\Delta v + R_v(u,v))\ |\ a\geq a_{\min} > 0, b\geq b_{\min} > 0, k\geq k_{\min} > 0\}$.
\end{itemize}
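For illustration, the right-hand side of the full system (\textit{Param PDE ($a,b,k$)}) can be sketched with a 5-point Laplacian under the periodic boundary conditions stated above; the grid spacing and field shapes below are illustrative assumptions.

```python
import numpy as np

def laplacian_periodic(Z, dx=1.0):
    """5-point Laplacian; np.roll wraps around, giving periodic boundaries."""
    return (np.roll(Z, 1, axis=0) + np.roll(Z, -1, axis=0) +
            np.roll(Z, 1, axis=1) + np.roll(Z, -1, axis=1) - 4 * Z) / dx**2

def fhn_rhs(u, v, a=1e-3, b=5e-3, k=5e-3):
    """Full PDE, Param PDE (a,b,k); dropping R_u, R_v gives Param PDE (a,b)."""
    Ru = u - u**3 - k - v        # local reaction terms
    Rv = u - v
    return a * laplacian_periodic(u) + Ru, b * laplacian_periodic(v) + Rv
```

The incomplete model \textit{Param PDE ($a,b$)} keeps only the diffusion terms, which is exactly what makes the data-driven $F_a$ responsible for the reaction dynamics in the APHYNITY decomposition.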
\paragraph{Damped wave equations:}
We investigate the following 2-dimensional damped-wave PDE:
\(
\frac{\partial^2 w}{\partial t^2} - c^2\Delta w + k \frac{\partial w}{\partial t}=0
\)
where $k$ is the \textit{damping coefficient}. The state is $X=(w, \frac{\partial w}{\partial t}
)$ and, as for reaction-diffusion, we consider a compact spatial domain $\Omega$ with Neumann homogeneous boundary conditions. Note that this damping differs from the pendulum case, as its effect is global.
\noindent The considered physical models are:
\begin{itemize}[leftmargin=*,align=left]
\itemsep0em
\item \textit{Param PDE ($c$)}, without damping term and ${\mathcal{F}}_p = \{F^c_p: (u, v) \mapsto \linebreak (v, c^2 \Delta u) \ |\ c\geq c_{\min} > 0\}$;
\item \textit{Param PDE ($c,k$)} with ${\mathcal{F}}_p = \{F^{c,k}_p:(u, v) \mapsto (v, c^2 \Delta u - kv)\ | \linebreak \ c\geq c_{\min} > 0, k\geq k_{\min} > 0\}$.
\end{itemize}
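Similarly, the damped-wave right-hand side in first-order form, with the homogeneous Neumann boundary conditions stated above, can be sketched via edge padding (one possible discretization, assumed here for illustration):

```python
import numpy as np

def laplacian_neumann(Z, dx=1.0):
    """5-point Laplacian; edge padding replicates boundary values, enforcing a
    zero normal derivative (homogeneous Neumann boundary condition)."""
    P = np.pad(Z, 1, mode="edge")
    return (P[:-2, 1:-1] + P[2:, 1:-1] + P[1:-1, :-2] + P[1:-1, 2:] - 4 * Z) / dx**2

def wave_rhs(w, wt, c=330.0, k=50.0):
    """Param PDE (c,k) with state X = (w, dw/dt); set k=0 for Param PDE (c)."""
    return wt, c**2 * laplacian_neumann(w) - k * wt
```

Note how the global damping term $-k\,\partial w/\partial t$ acts on the whole field at once, unlike the pointwise friction of the pendulum.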
\paragraph{Baselines}
As purely data-driven baselines, we use Neural ODE~\cite{chen2018neural} for the three problems and PredRNN++~(\cite{wang2018predrnn++}, for reaction-diffusion only), which are competitive models for datasets generated by differential equations and for spatio-temporal data. As ML/MB methods, in the ablation studies (see Appendix \ref{app:ablation}), we compare for all problems to the vanilla ML/MB cooperation scheme found in \cite{wang2019integrating,neural20}. We also show results for \textit{True PDE/ODE}, which corresponds to the equation used for data simulation~(which does not lead to zero error, due to the difference between the simulation and training integration schemes). For the pendulum, we compare to Hamiltonian neural networks \cite{greydanus2019hamiltonian,toth2019hamiltonian} and to the deep Galerkin method (DGM) \cite{sirignano2018dgm}. See additional details in Appendix~\ref{app:implementation}.
\subsection{Results}
\label{sec:results}
\begin{table}[t!]
\centering
\vspace{-0.6cm}
\footnotesize
\setlength{\tabcolsep}{8.5pt}
\caption[Forecasting and identification results with APHYNITY.]{Forecasting and identification results on the (a) damped pendulum, (b) reaction-diffusion and (c) wave equation datasets. We set for (a) $T_0=6$, $\alpha=0.2$, for (b) $a=1\times 10^{-3}, b=5\times 10^{-3}, k=5\times 10^{-3}$, and for (c) $c=330$, $k=50$ as true parameters. $\log$ MSEs are computed respectively over 40, 25 and 25 predicted time-steps. \%Err param. averages the results when several physical parameters are present. For each level of incorporated physical knowledge, equivalent best results according to a Student t-test are shown in bold. n/a corresponds to non-applicable cases.
\label{tab:pendulum}}
\begin{tabular}{cclccc}
\toprule
Dataset & & Method & $\log$ MSE & \%Err param. & $\|F_a\|^2$ \\
\midrule
\multirowcell{9}{\parbox{1cm}{\centering \tiny (a) \\Damped pendulum}} & \centering\tiny Data-driven & Neural ODE \cite{chen2018neural}
& -2.84$\pm$0.70 & n/a & n/a \\ \cmidrule{2-6}
&\multirowcell{5}{\parbox[c]{1cm}{\centering \tiny Incomplete physics}} & Hamiltonian \cite{toth2019hamiltonian}
& -0.35$\pm$0.10 & n/a & n/a \\
&& APHYNITY Hamiltonian & \textbf{-3.97$\pm$1.20} & n/a & 623 \\ \cdashline{3-6}\noalign{\vskip 0.2ex}
&& Param ODE ($\omega_0$) & -0.14$\pm$0.10 & 13.2 & n/a \\
&& Deep Galerkin Method ($\omega_0$) \cite{sirignano2018dgm} & -3.10$\pm$0.40 & 22.1 & n/a \\
&& APHYNITY Param ODE ($\omega_0$) & \textbf{-7.86$\pm$0.60} & \textbf{4.0} &132 \\ \cmidrule{2-6}
&\multirowcell{5}{\parbox{0.8cm}{\centering\tiny Complete physics}} & Param ODE ($\omega_0, \alpha$) & \textbf{-8.28$\pm$0.40} & \textbf{0.45} & n/a \\
&& Deep Galerkin Method ($\omega_0,\alpha$) \cite{sirignano2018dgm} & -3.14$\pm$0.40 & 7.1 & n/a \\
&& APHYNITY Param ODE ($\omega_0, \alpha$) & \textbf{-8.31$\pm$0.30} & \textbf{0.39} & 8.5 \\ \cdashline{3-6}\noalign{\vskip 0.2ex}
&& True ODE & \textbf{-8.58$\pm$0.20} & n/a & n/a \\
&& APHYNITY True ODE & \textbf{-8.44$\pm$0.20} & n/a & 2.3 \\ \midrule
\multirowcell{8}{\parbox{0.7cm}{\centering\tiny (b) Reaction-diffusion}} & \multirowcell{2}{\parbox{0.9cm}{\centering\tiny Data-driven}} &
Neural ODE \cite{chen2018neural}
& -3.76$\pm$0.02 & n/a & n/a \\
&& PredRNN++ \cite{wang2018predrnn++}
& -4.60$\pm$0.01 & n/a & n/a\\\cmidrule{2-6}
& \multirowcell{2}{\parbox{0.9cm}{\centering\tiny Incomplete physics}} &Param PDE ($a,b$) & -1.26$\pm$0.02 & 67.6 & n/a\\
&& APHYNITY Param PDE ($a,b$) & \textbf{-5.10$\pm$0.21} & \textbf{2.3} & 67 \\\cmidrule{2-6}
&\multirowcell{4}{\parbox{0.9cm}{\centering\tiny Complete physics}}& Param PDE ($a,b,k$) & \textbf{-9.34$\pm$0.20} & 0.17 & n/a\\
&& APHYNITY Param PDE ($a,b,k$) & \textbf{-9.35$\pm$0.02} & \textbf{0.096} & $1.5\times 10^{-6}$\\
\cdashline{3-6}\noalign{\vskip 0.2ex}
&& True PDE & -8.81$\pm$0.05 & n/a & n/a\\
&& APHYNITY True PDE & \textbf{-9.17$\pm$0.02} & n/a & $1.4\times 10^{-7}$\\ \midrule
\multirowcell{8}{\parbox{0.8cm}{\centering\tiny (c)\\ Wave equation}} &\centering\tiny Data-driven & Neural ODE \cite{chen2018neural}
& -2.51$\pm$0.29 & n/a & n/a \\ \cmidrule{2-6}
&\multirowcell{2}{\parbox{0.9cm}{\centering\tiny Incomplete physics}} & Param PDE ($c$) & 0.51$\pm$0.07 & 10.4 & n/a \\
&& APHYNITY Param PDE ($c$) & \textbf{-4.64$\pm$0.25} & \textbf{0.31} & 71. \\ \cmidrule{2-6}
&\multirowcell{4}{\parbox{0.8cm}{\centering\tiny Complete physics}} & Param PDE $(c,k)$ & -4.68$\pm$0.55 & 1.38 & n/a \\
&& APHYNITY Param PDE $(c, k)$ & \textbf{-6.09$\pm$0.28} & \textbf{0.70} & 4.54 \\ \cdashline{3-6}\noalign{\vskip 0.2ex}
&& True PDE & -4.66$\pm$0.30 & n/a & n/a \\
&& APHYNITY True PDE & \textbf{-5.24$\pm$0.45} & n/a & 0.14 \\
\bottomrule
\end{tabular}
\label{tab:results-for-all}
\vspace{-0.2cm}
\end{table}
We analyze and discuss below the results obtained for the three kinds of dynamics. We successively examine different evaluation and quality criteria. The conclusions are consistent across the three problems, which allows us to highlight clear trends for all of them.
\paragraph{Forecasting accuracy:}
The data-driven models do not perform well compared to \textit{True PDE/ODE} (all values are test errors expressed as $\log$ MSE): -4.6 for PredRNN++ vs. -9.17 for reaction-diffusion, -2.51 vs. -5.24 for wave equation, and -2.84 vs. -8.44 for the pendulum in Table~\ref{tab:results-for-all}. The Deep Galerkin method for the pendulum in complete physics \textit{DGM ($\omega_0,\alpha$)}, being constrained by the equation, outperforms Neural ODE but is far inferior to APHYNITY models. In the incomplete physics case, \textit{DGM ($\omega_0$)} fails to compensate for the missing information. The \textit{incomplete physical models}, \textit{Param PDE ($a,b$)} for the reaction-diffusion, \textit{Param PDE ($c$)} for the wave equation, and \textit{Param ODE ($\omega_0$)} and \textit{Hamiltonian models} for the damped pendulum, perform even worse than the purely data-driven ones, as can be expected since they ignore important dynamical components, e.g.~ friction in the pendulum case. Using APHYNITY with these imperfect physical models greatly improves forecasting accuracy in all cases, significantly outperforming purely data-driven models, and reaching results often close to the accuracy of the true ODE, when APHYNITY and the true ODE models are integrated with the same numerical scheme~(which is different from the one used for data generation, hence the non-null errors even for the true equations), e.g.~ -6.09 vs. -5.24 for the wave equation in Table~\ref{tab:results-for-all}. This clearly highlights the capacity of our approach to augment incomplete physical models with a learned data-driven component.
\paragraph{Physical parameter estimation:}
Confirming the phenomenon mentioned in the introduction and detailed in Appendix~\ref{app:alt_methods}, incomplete physical models can lead to bad estimates of the relevant physical parameters: errors of up to 67.6\% and 10.4\% respectively for the reaction-diffusion and wave-equation parameters, and of more than 13\% for the pendulum parameters in Table~\ref{tab:results-for-all}. APHYNITY significantly improves physical parameter identification: 2.3\% error for the reaction-diffusion, 0.3\% for the wave equation, and 4\% for the pendulum. This validates the fact that augmenting a simple physical model to compensate for its approximations is not only beneficial for prediction, but also helps to limit identification errors when dynamical models do not fit the data well. This is crucial for the interpretability and explainability of the estimates.
\paragraph{Ablation study:}
We conduct ablation studies to validate the importance of the APHYNITY augmentation compared to a naive strategy consisting in learning $F=F_p+F_a$ without controlling the quality of the decomposition, as done in \cite{wang2019integrating,neural20}. The results reported in Appendix \ref{app:ablation} show a consistent gain of APHYNITY for the three use cases and for all physical models: for instance for \textit{Param PDE ($a,b$)} in reaction-diffusion, both forecasting performance ($\log \text{MSE}=$ -5.10 vs. -4.56) and parameter identification (error $=$ 2.33\% vs. 6.39\%) improve. Other ablation results provided in Appendix \ref{app:ablation} show the relevance of the trajectory-based approach described in Section~\ref{subsec:learning}~(vs. supervising over finite difference approximations of the derivative $F$).
\paragraph{Flexibility:}
When applied to complete physical models, APHYNITY does not degrade accuracy, contrary to a vanilla cooperation scheme (see ablations in Appendix \ref{app:ablation}). This is due to the least action principle of our approach: when the physical knowledge is sufficient for properly predicting the observed dynamics, the model learns to ignore the data-driven augmentation. This is shown by the norm of the trained neural net component $F_a$, which is reported in Table~\ref{tab:results-for-all} last column: as expected, $\|F_a\|^2$ diminishes as the complexity of the corresponding physical model increases, and, relative to incomplete models, the norm becomes very small for complete physical models~(for example in the pendulum experiments, we have $\Vert F_a \Vert^2 = 8.5$ for the APHYNITY model to be compared with 132 and 623 for the incomplete models). Thus, we see that the norm of $F_a$ is a good indication of how imperfect the physical models ${\mathcal{F}}_p$ are. It highlights the flexibility of APHYNITY to successfully adapt to very different levels of prior knowledge. Note also that APHYNITY sometimes slightly improves over the true ODE, as it compensates for the error introduced by different numerical integration methods for data simulation and training (see Appendix~\ref{app:implementation}).
\begin{figure}[t!]
\centering
\subfloat[Param PDE ($a$, $b$), diffusion-only
]{
\includegraphics[width=0.32\linewidth]{images/aphynity_reacdiff_physical.png}}
\hfill
\subfloat[APHYNITY Param PDE ($a$, $b$)
]{
\includegraphics[width=0.32\linewidth]{images/aphynity_reacdiff_affinity.png}}\hfill
\subfloat[Ground truth simulation
]{
\includegraphics[width=0.32\linewidth]{images/aphynity_reacdiff_ground-truth.png}}
\caption[Qualitative results on the reaction-diffusion equations.]{Comparison of predictions of two components $u$ (top) and $v$ (bottom) of the reaction-diffusion system. Note that $t=4$ is largely beyond the dataset horizon ($t=2.5$).
\label{fig:reaction-diffusion-demo}
}
\end{figure}
\begin{figure}[t!]
\centering
\subfloat[Neural ODE
]{
\includegraphics[width=0.32\linewidth]{images/aphynity_wave_neural_ode.png}}
\hfill
\subfloat[APHYNITY Param PDE ($c$)
]{
\includegraphics[width=0.32\linewidth]{images/aphynity_wave_affinity.png}}\hfill
\subfloat[Ground truth simulation
]{
\includegraphics[width=0.32\linewidth]{images/aphynity_wave_truth.png}}
\caption[Qualitative results on the wave equations.]{Comparison between the prediction of APHYNITY when $c$ is estimated and Neural ODE for the damped wave equation. Note that $t+32$, the last column for (a, b, c), is already beyond the training time horizon ($t+25$), showing the consistency of the APHYNITY method.\label{fig:wave-damped-demo}
}
\end{figure}
\paragraph{Qualitative visualizations:}
Results in Figure~\ref{fig:reaction-diffusion-demo} for reaction-diffusion show that the incomplete diffusion parametric PDE in Figure~\ref{fig:reaction-diffusion-demo}(a) is unable to properly match the ground truth simulations: the behavior of the two components in Figure~\ref{fig:reaction-diffusion-demo}(a) reduces to simple independent diffusions due to the lack of interaction terms between $u$ and $v$. With APHYNITY, in Figure~\ref{fig:reaction-diffusion-demo}(b), the correlation between the two components appears together with the formation of Turing patterns, which is very similar to the ground truth. This confirms that $F_a$ learns the reaction terms and improves prediction quality. In Figure~\ref{fig:wave-damped-demo}, we see for the wave equation that the data-driven Neural ODE model fails to approximate the dynamics as the forecast horizon increases: it misses crucial details of the second component $\nicefrac{\diff w}{\diff t}$, which makes the forecast diverge from the ground truth. APHYNITY incorporates a Laplacian term as well as the data-driven $F_a$, thus capturing the damping phenomenon and maintaining physically sound results for long-term forecasts, unlike Neural ODE.
\paragraph{Additional illustrations:}
We give further visual illustrations to demonstrate how the estimation of parameters in incomplete physical models is improved with APHYNITY. For the reaction-diffusion equation, we show that the incomplete parametric PDE underestimates both diffusion coefficients. The difference is visually recognizable between the poorly estimated diffusion (\Figref{fig:comp-diffusion}(a)) and the true one (\Figref{fig:comp-diffusion}(c)) while APHYNITY gives a fairly good estimation of those diffusion parameters as shown in \Figref{fig:comp-diffusion}(b).
\begin{figure}[h]
\centering
\subfloat[$a=0.33\times 10^{-3}, b=0.94\times 10^{-3} $, diffusion estimated with Param PDE $(a,b)$]{\includegraphics[width=0.5\textwidth]{images/aphynity_reacdiff_physical.png}}\hfill
\subfloat[$a=0.97\times 10^{-3}, b=4.75\times 10^{-3}$, diffusion estimated with APHYNITY Param PDE $(a,b)$]{\includegraphics[width=0.5\textwidth]{images/aphynity_reacdiff_aphynity_phys_est.png}} \hfill
\subfloat[$a=1.0\times 10^{-3}, b=5.0\times 10^{-3}$, true diffusion]{\includegraphics[width=0.5\textwidth]{images/aphynity_reacdiff_aphynity_phys_est.png}}
\caption[Qualitative analysis on the reaction-diffusion equations.]{Diffusion predictions using coefficient learned with (a) incomplete physical model Param PDE $(a,b)$ and (b) APHYNITY-augmented Param PDE$(a,b)$, compared with the (c) true diffusion\label{fig:comp-diffusion}}
\end{figure}
\subsection{Extension to non-stationary dynamics}
\label{sec:aph-nonstat}
We evaluate here the applicability of APHYNITY in a more challenging setting where physical parameters of the equations vary in each sequence. For the damped pendulum equations, instead of fixed parameters ($T_0=6, \alpha=0.2$) and varying initial conditions (Section \ref{sec:results}), we vary both the parameters ($T_0, \alpha$) and the initial conditions between trajectories.
We simulate 500/50/50 trajectories for the train/valid/test sets. For each trajectory, the period $T_0$ (resp.\ the damping coefficient $\alpha$) is sampled uniformly in the range $[3,10]$ (resp.\ $[0,0.5]$).
We train models that take the first 20 steps as input and predict the next 20 steps. To account for the varying ODE parameters between sequences, we use an encoder that estimates the parameters based on the first 20 timesteps. In practice, we use a recurrent encoder composed of 1 layer of 128 GRU units. The output of the encoder is fed as additional input to the data-driven augmentation models and to an MLP with final softplus activations to estimate the physical parameters when necessary ($\omega_0 \in \mathbb{R}_+$ for Param ODE ($\omega_0$), $(\omega_0,\alpha) \in \mathbb{R}_+^2$ for Param ODE ($\omega_0,\alpha$)).
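To make the setup concrete, the following is a minimal PyTorch sketch of such a recurrent encoder (the module name, the width of the parameter head, and the context-passing convention are ours; only the 1-layer 128-unit GRU and the softplus output follow the text):

```python
import torch
import torch.nn as nn

class ParamEncoder(nn.Module):
    """Recurrent encoder: a 1-layer GRU with 128 units reads the first 20
    observed steps; its last hidden state is (i) fed as additional input to
    the data-driven augmentation and (ii) mapped by an MLP with a final
    softplus to positive physical parameters, e.g. (omega_0, alpha)."""

    def __init__(self, state_dim=2, hidden=128, n_params=2):
        super().__init__()
        self.gru = nn.GRU(state_dim, hidden, num_layers=1, batch_first=True)
        self.param_head = nn.Sequential(
            nn.Linear(hidden, 64), nn.ReLU(),        # head width is our choice
            nn.Linear(64, n_params), nn.Softplus(),  # enforces parameters in R_+
        )

    def forward(self, x):                # x: (batch, 20, state_dim)
        _, h = self.gru(x)               # h: (num_layers, batch, hidden)
        ctx = h[-1]                      # context vector for F_a
        params = self.param_head(ctx)    # positive physical parameters
        return ctx, params

ctx, params = ParamEncoder()(torch.randn(4, 20, 2))
```

The context vector would be concatenated to the input of the data-driven augmentation $F_a$, while the softplus head guarantees the positivity constraints on the estimated physical parameters.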
In this varying ODE context, we also compare to the state-of-the-art univariate time series forecasting method N-Beats \cite{oreshkin2019n}.
Results shown in Table \ref{tab:pendulum-encoder} are consistent with those presented in Section \ref{sec:results}. The purely data-driven models Neural ODE \cite{chen2018neural} and N-Beats \cite{oreshkin2019n} fail to properly extrapolate the pendulum dynamics. Incomplete physical models (Hamiltonian and Param ODE ($\omega_0$)) are even worse, since they do not account for friction. Augmenting them with APHYNITY significantly and consistently improves forecasting results and parameter identification.
We provide similar experiments for the reaction-diffusion and wave equations in Appendix \ref{app:additional}.
\begin{table}[H]
\centering
\setlength{\tabcolsep}{6.8pt}
\caption[APHYNITY results on the damped pendulum with varying parameters.]{Forecasting and identification results on the damped pendulum dataset with different parameters for each sequence. log MSEs are computed over 20 predicted time-steps. For each level of incorporated physical knowledge, equivalent best results according to a Student t-test are shown in bold. n/a corresponds to non-applicable cases. \label{tab:pendulum-encoder}}
\begin{adjustbox}{max width=\linewidth}
\begin{tabular}{clcccc}
\toprule
& Method & $\log$ MSE & \%Error $T_0$ & \%Error $\alpha$ & $\|F_a\|^2$ \\
\midrule
\multirowcell{2}{\parbox{0.9cm}{\centering\tiny data-driven}} & Neural ODE \cite{chen2018neural} & -4.35$\pm$0.9 & n/a & n/a & n/a \\
& N-Beats \cite{oreshkin2019n} & -4.57$\pm$0.5 & n/a & n/a & n/a \\
\midrule
\multirowcell{4}{\parbox{0.9cm}{\centering\tiny Incomplete physics}} & Hamiltonian \cite{greydanus2019hamiltonian} & -1.31$\pm$0.4 & n/a & n/a & n/a \\
& APHYNITY Hamiltonian & \textbf{-4.72$\pm$0.4} & n/a & n/a & 5.6$\pm$0.6 \\
\cdashline{2-6}\noalign{\vskip 0.2ex}
& Param ODE ($\omega_0$) & -2.66$\pm$0.9 & 21.5$\pm$19 & n/a & n/a \\
& APHYNITY Param ODE ($\omega_0$) & \textbf{-5.94$\pm$0.7} & \textbf{5.0$\pm$1.8} & n/a & 0.49$\pm$0.1 \\
\midrule
\multirowcell{4}{\parbox{0.8cm}{\centering\tiny Complete physics}} & Param ODE ($\omega_0, \alpha$) & \textbf{-5.71$\pm$0.4} & 4.08$\pm$0.8 & 152$\pm$129 & n/a \\
& APHYNITY Param ODE ($\omega_0, \alpha$) & \textbf{-6.22$\pm$0.7} & \textbf{3.26$\pm$0.6} & \textbf{62$\pm$27} & (5.39$\pm$0.1)e-10 \\
\cdashline{2-6}\noalign{\vskip 0.2ex}
&True ODE & \textbf{-8.58$\pm$0.1} & n/a &n/a & n/a \\
& APHYNITY True ODE & \textbf{-8.58$\pm$0.1} & n/a & n/a & (2.15$\pm$1.6)e-4 \\
\bottomrule
\end{tabular}
\end{adjustbox}
\end{table}
\section{Conclusion}
\label{discussion}
In this chapter, we have introduced the APHYNITY framework, which can efficiently augment approximate physical models with deep data-driven networks, performing similarly to models whose dynamics are fully known. We have exhibited the superiority of APHYNITY over data-driven, incomplete-physics, and state-of-the-art approaches combining ML and MB methods, both in terms of forecasting and of parameter identification, on three classes of physical systems. Besides, APHYNITY is flexible enough to adapt to different approximation levels of prior physical knowledge.
\clearpage{\pagestyle{empty}\cleardoublepage}
\section{\label{app:chebyshev}Reminder on proximinal and Chebyshev sets}
We begin by giving a definition of proximinal and Chebyshev sets, taken from~\cite{chebyshev}:
\begin{definition}
A \textit{proximinal set} of a normed space $(E,\|\cdot\|)$ is a subset $\mathcal{C}\subset E$ such that every $x\in E$ admits at least a nearest point in $\mathcal{C}$.
\end{definition}
\begin{definition}
A \textit{Chebyshev set} of a normed space $(E,\|\cdot\|)$ is a subset $\mathcal{C}\subset E$ such that every $x\in E$ admits a unique nearest point in $\mathcal{C}$.
\end{definition}
Proximinality reduces to a compactness condition in finite-dimensional spaces; in general it is a weaker property: boundedly compact sets, for example, verify it.
In Euclidean spaces, the Chebyshev sets are exactly the closed convex subsets. Whether all Chebyshev sets of an infinite-dimensional Hilbert space are closed and convex is still an open question. In general, there exist examples of non-convex Chebyshev sets, a famous one being presented in~\cite{chebyshev_counter} for a non-complete inner-product space.
Given the importance of this topic in approximation theory, finding necessary conditions for a set to be Chebyshev and studying the properties of those sets have been the subject of many efforts. Some of those properties are summarized below:
\begin{itemize}
\item The metric projection on a boundedly compact Chebyshev set is continuous.
\item If the norm is strictly convex, every proximinal convex set, and in particular any finite-dimensional subspace, is Chebyshev.
\item In a Hilbert space, every closed convex set is Chebyshev.
\end{itemize}
\section{\label{app:proof}Proof of Propositions~\ref{prop:exist_unique} and \ref{prop:unique}}
We prove the following result, which implies both propositions:
\begin{prop}
The optimization problem:
\begin{equation}
\label{eq:opt_sup}
\underset{F_p\in{\mathcal{F}}_p, F_a\in{\mathcal{F}}}{\min} ~~~\left\Vert F_a \right\Vert ~~~
\mathrm{subject~to} ~~~~ \forall X\in{\mathcal{D}}, \forall t, \frac{\diff X_t}{\diff t} =(F_p+F_a)(X_t)
\end{equation}
is equivalent to a metric projection onto ${\mathcal{F}}_p$.
If ${\mathcal{F}}_p$ is proximinal, Eq \ref{eq:opt_sup} admits a minimizing pair.
If ${\mathcal{F}}_p$ is Chebyshev, Eq \ref{eq:opt_sup} admits a unique minimizing pair, whose $F_p$ component is the metric projection.
\end{prop}
\begin{proof}
The idea is to reconstruct the full functional from the trajectories of ${\mathcal{D}}$. By definition, ${\mathcal{A}}$ is the set of points reached by trajectories in ${\mathcal{D}}$ so that:
\[
{\mathcal{A}} = \{x\in\mathbb{R}^d\ |\ \exists X_\cdot\in{\mathcal{D}}, \exists t,\ X_t = x\}.
\]
Then let us define a function $F^{\mathcal{D}}$ in the following way: For $a\in {\mathcal{A}}$, we can find $X_\cdot\in{\mathcal{D}}$ and $t_0$ such that $X_{t_0} = a$. Differentiating $X$ at $t_0$, which is possible by definition of ${\mathcal{D}}$, we take:
\[
F^{\mathcal{D}}(a) = \left.\frac{\diff X_t}{\diff t}\right|_{t=t_0}.
\]
For any $(F_p,F_a)$ satisfying the constraint in Eq \ref{eq:opt_sup}, we then have $(F_p+F_a)(a) = \left.\nicefrac{\diff X_t}{\diff t}\right|_{t=t_0} = F^{\mathcal{D}}(a)$ for all $a\in{\mathcal{A}}$. Conversely, any pair $(F_p, F_a)\in{\mathcal{F}}_p\times{\mathcal{F}}$ such that $F_p+F_a = F^{\mathcal{D}}$ verifies the constraint.
Thus we have the equivalence between Eq \ref{eq:opt_sup} and the metric projection formulated as:
\begin{mini}
{F_p\in{\mathcal{F}}_p}{\left\Vert F^{\mathcal{D}} - F_p \right\Vert.}
{}{}
\end{mini}
If ${\mathcal{F}}_p$ is proximinal, the projection problem admits a solution which we denote $F^\star_p$. Taking $F^\star_a = F^{\mathcal{D}} - F^\star_p$, we have that $F^\star_p+F^\star_a = F^{\mathcal{D}}$ so that $(F^\star_p, F^\star_a)$ verifies the constraint of Eq \ref{eq:opt_sup}. Moreover, if there is $(F_p,F_a)$ satisfying the constraint of Eq \ref{eq:opt_sup}, we have that $F_p + F_a = F^{\mathcal{D}}$ by what was shown above and $\|F_a\| = \|F^{\mathcal{D}}-F_p\|\geq\|F^{\mathcal{D}}-F^\star_p\|$ by definition of $F^\star_p$. This shows that $(F^\star_p,F^\star_a)$ is minimal.
Moreover, if ${\mathcal{F}}_p$ is a Chebyshev set, by uniqueness of the projection, if $F_p\not=F^\star_p$ then $\|F_a\|>\|F^\star_a\|$. Thus the minimal pair is unique.
\end{proof}
\section{\label{app:alt_methods}Parameter estimation in incomplete physical models}
Classically, when a set ${\mathcal{F}}_p\subset{\mathcal{F}}$ summarizing the most important properties of a system is available, it provides a \textit{simplified model} of the true dynamics, and the standard approach is then to fit the trajectories as well as possible with this model, solving:
\begin{mini}
{F_p\in{\mathcal{F}}_p}{\mathbb{E}_{X\sim{\mathcal{D}}} L(\widetilde{X}^{X_0},X)}
{}{}
\addConstraint{\forall g\in{\mathcal{I}},\ \widetilde{X}_0^g = g\text{ and }\forall t,\ \frac{\diff \widetilde{X}_t^g}{\diff t} = F_p(\widetilde{X}_t^g).}{}
\label{eq:opt_pure_phy}
\end{mini}
where $L$ is a discrepancy measure between trajectories. Recall that $\widetilde{X}^{X_0}$ is the trajectory computed by an ODE solver from the initial condition $X_0$. In other words, we try to find a function $F_p$ whose trajectories are as close as possible to the ones from the dataset. While estimating the function becomes easier, a residual part is then left unexplained, and this can be a non-negligible issue in at least two ways:
\begin{itemize}
\item When $F\not\in{\mathcal{F}}_p$, the loss is strictly positive at the minimum. This means that reducing the space of functions $\mathcal{F}_p$ makes us lose in terms of accuracy.\footnote{This is true in theory, although not necessarily in practice when $F$ overfits a small dataset.}
\item The obtained function $F_p$ might not even be the most meaningful function from ${\mathcal{F}}_p$, as it would try to capture phenomena which are not explainable with functions in ${\mathcal{F}}_p$, thus giving the wrong bias to the calculated function. For example, if one considers a damped periodic trajectory where only the period can be learned in ${\mathcal{F}}_p$ but not the damping, the estimated period will account for the damping and will thus be biased.
\end{itemize}
This is confirmed in Section \ref{sec:expes}: the incomplete physical models augmented with APHYNITY get different and experimentally better physical identification results than the physical models alone.
Let us compare our approach with this one on the linearized damped pendulum to show how estimates of physical parameters can differ. The equation is the following:
\[
\frac{\diff^2\theta}{\diff t^2} + \omega_0^2\theta + \alpha \frac{\diff \theta}{\diff t} = 0.
\]
We take the same notations as above and parametrize the simplified physical models as:
\[
F^{a}_p:X\mapsto (\frac{\diff \theta}{\diff t}, -a\theta),
\]
where $a>0$ corresponds to $\omega_0^2$. The corresponding solution for an initial state $X_0 = (\theta_0, 0)$, which we denote $X^{a}$, can then be written explicitly as:
\[
\theta^{a}_t = \theta_0\cos(\sqrt{a}\, t).
\]
\]
Let us consider damped pendulum solutions $X$ written as:
\[
\theta_t = \theta_0e^{-t}\cos{t},
\]
which corresponds to:
\[
F : X\mapsto (\frac{\diff \theta}{\diff t}, -2(\theta+\frac{\diff \theta}{\diff t})).
\]
It is then easy to see that the estimate of $a$ with the physical model alone is obtained by minimizing:
\[
\int_0^T\left|e^{-t}\cos t - \cos(\sqrt{a}\, t)\right|^2\diff t.
\]
This expression depends on $T$; thus, depending on the chosen time interval and on the way the integral is discretized, the minimization will almost always give a biased estimate. In other words, the estimated value of $a$ will not give us the desired solution $t\mapsto \cos t$.
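This bias can be checked numerically. The following sketch (an illustration under our own discretization choices, not taken from the original experiments) grid-searches the discretized misfit for two horizons $T$ and shows that the estimate of $a$ shifts with the horizon:

```python
import numpy as np

def a_hat(T, n_t=2000, n_a=800):
    """Grid-search minimizer of the discretized misfit between the damped
    solution e^{-t} cos(t) and the undamped model cos(sqrt(a) t) on [0, T]."""
    t = np.linspace(0.0, T, n_t)
    target = np.exp(-t) * np.cos(t)
    a_grid = np.linspace(0.1, 4.0, n_a)
    # Mean squared misfit on the uniform time grid, one row per candidate a.
    costs = np.mean((target - np.cos(np.sqrt(a_grid)[:, None] * t)) ** 2, axis=1)
    return a_grid[np.argmin(costs)]

est_short, est_long = a_hat(T=1.0), a_hat(T=20.0)  # the estimate shifts with T
```

Neither estimate recovers the pulsation of the desired solution $t\mapsto\cos t$, and the two horizons disagree, illustrating the dependence on $T$.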
On the other hand, for a given $a$, in the APHYNITY framework, the residual must be equal to:
\[
F^a_r : X\mapsto \left(0,\ (a-2)\theta - 2\frac{\diff \theta}{\diff t}\right)
\]
in order to satisfy the fitting constraint. Here $a$ corresponds to $1+\omega_0^2$, not to $\omega_0^2$ as in the simplified case. Minimizing its norm, we obtain $a=2$, which gives us the desired solution:
\[
\theta_t = \theta_0e^{-t}\cos{t},
\]
with the right period.
\section{\label{app:der_superv}Discussion on supervision over derivatives}
In order to find the appropriate decomposition $(F_p,F_a)$, we use a trajectory-based error by solving:
\begin{mini}
{F_p\in{\mathcal{F}}_p, F_a\in{\mathcal{F}}}{\left\Vert F_a \right\Vert}
{}{}
\addConstraint{\forall g\in{\mathcal{I}},\ \widetilde{X}_0^g = g\text{ and }\forall t,\ \frac{\diff \widetilde{X}_t^g}{\diff t} = (F_p+F_a)(\widetilde{X}_t^g)}{}
\addConstraint{\forall X\in{\mathcal{D}},\ L(X,\widetilde{X}^{X_0}) = 0.}{}
\label{notre_pbm_int}
\end{mini}
In the continuous setting where the data is available at all times $t$, this problem is in fact equivalent to the following one:
\begin{mini}
{F_p\in{\mathcal{F}}_p}{\mathbb{E}_{X\sim{\mathcal{D}}} \int \left\Vert \frac{\diff X_t}{\diff t} - F_p(X_t) \right\Vert.}
{}{}
\label{der_pbm}
\end{mini}
where the supervision is done directly over derivatives, obtained through finite-difference schemes. This echoes the proof in Section~\ref{app:proof} of the Appendix where $F$ can be reconstructed from the continuous data.
However, in practice, data is only available at discrete times with a certain time resolution. While Eq \ref{der_pbm} is indeed equivalent to Eq \ref{notre_pbm_int} in the continuous setting, in the practical discrete one the way error propagates is no longer the same: for Eq \ref{notre_pbm_int} it is controlled over integrated trajectories, while for Eq \ref{der_pbm} the supervision is over the approximate derivatives of the trajectories from the dataset. We argue that the trajectory-based approach is more flexible and more robust, for the following reasons:
\begin{itemize}
\item In Eq \ref{notre_pbm_int}, if $F_a$ is appropriately parameterized, it is possible to perfectly fit the data trajectories at the sampled points.
\item The use of finite differences schemes to estimate $F$ as is done in Eq \ref{der_pbm} necessarily induces a non-zero discretization error.
\item This discretization error is explosive in terms of divergence from the true trajectories.
\end{itemize}
This last point is quite important, especially when time sampling is sparse~(even though we do observe this adverse effect empirically in our experiments with relatively finely time-sampled trajectories). The following gives a heuristic argument as to why this is the case. Let $\widetilde{F} = F + \epsilon$ be the function estimated from the sampled points, with an error $\epsilon$ such that $\|\epsilon\|_\infty\leq\alpha$. Denoting $\widetilde{X}$ the corresponding trajectory generated by $\widetilde{F}$, we then have, for all $X\in{\mathcal{D}}$:
\[
\forall t,\ \frac{\diff (X-\widetilde{X})_t}{\diff t} = F(X_t) - F(\widetilde{X}_t) - \epsilon(\widetilde{X}_t).
\]
Integrating over $[0,T]$ and using the triangle inequality as well as the mean value inequality, supposing that $F$ has uniformly bounded spatial derivatives:
\[
\forall t\in[0,T],\ \|(X-\widetilde{X})_t\| \leq \|\nabla F\|_\infty\int_0^t \|X_s-\widetilde{X}_s\| + \alpha t,
\]
which, using a variant of the Grönwall lemma, gives us the inequality:
\[
\forall t\in[0,T],\ \|X_t-\widetilde{X}_t\| \leq \frac{\alpha}{\|\nabla F\|_\infty}(\exp(\|\nabla F\|_\infty t) -1).
\]
When $\alpha$ tends to $0$, we recover the true trajectories $X$. However, as $\alpha$ is bounded away from $0$ by the available temporal resolution, this inequality gives a rough estimate of the way $\widetilde{X}$ diverges from them, and it can be an equality in many cases. This exponential behaviour explains our choice of a trajectory-based optimization.
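The scalar case $F(x)=x$ turns this bound into an equality and is easy to check numerically. The sketch below (our own illustration, not from the original experiments) integrates the true field and a field perturbed by a constant derivative-estimation error $\alpha$, and compares their divergence with the Grönwall estimate:

```python
import numpy as np

def rk4(f, x0, dt, n):
    """Fixed-step RK4 integration of a scalar ODE dx/dt = f(x)."""
    xs = [float(x0)]
    for _ in range(n):
        x = xs[-1]
        k1 = f(x); k2 = f(x + dt / 2 * k1)
        k3 = f(x + dt / 2 * k2); k4 = f(x + dt * k3)
        xs.append(x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(xs)

L, alpha = 1.0, 0.05                     # Lipschitz constant of F, error bound
F_true = lambda x: L * x                 # true vector field
F_est = lambda x: L * x + alpha          # worst-case estimated field

dt, n = 0.01, 1000                       # integrate on [0, 10]
t = dt * np.arange(n + 1)
err = np.abs(rk4(F_true, 1.0, dt, n) - rk4(F_est, 1.0, dt, n))
bound = alpha / L * (np.exp(L * t) - 1)  # Gronwall estimate; an equality here
```

Even with a derivative error of only $\alpha=0.05$, the trajectory divergence at $t=10$ exceeds a thousand, matching the exponential Grönwall envelope.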
\section{Implementation details\label{app:implementation}}
We describe here the three use cases studied in the paper for validating APHYNITY. All experiments are implemented with PyTorch and the differentiable ODE solvers with the adjoint method implemented in \texttt{torchdiffeq}.\footnote{\url{https://github.com/rtqichen/torchdiffeq}}
\subsection{Damped pendulum}
We consider the non-linear damped pendulum problem, governed by the ODE \[\frac{\diff ^2 \theta}{\diff t^2} + \omega_0^2 \sin \theta + \alpha \frac{\diff \theta}{\diff t} = 0, \] where $\theta(t)$ is the angle, $\omega_0 = \frac{2 \pi}{T_0}$ is the proper pulsation~($T_0$ being the period) and $\alpha$ is the damping coefficient. With the state $X = (\theta, \frac{\diff\theta}{\diff t})$, the ODE can be written as $\frac{\diff X_t}{\diff t} = F(X_t)$ with
$ F : X \mapsto ( \frac{\diff\theta}{\diff t} , - \omega_0^2 \sin \theta - \alpha \frac{\diff\theta}{\diff t})$.
\paragraph{Dataset} For each train / validation / test split, we simulate a dataset with 25 trajectories of 40 timesteps (time interval $[0,20]$, timestep $\delta t=0.5$) with fixed ODE coefficients $(T_0 = 12, \alpha=0.2)$ and varying initial conditions. The simulation integrator is the Dormand--Prince Runge--Kutta method of order (4)5 (DOPRI5, \cite{dormand1980family}). We also add a small amount of white Gaussian noise ($\sigma=0.01$) to the state. Note that our pendulum dataset is much more challenging than the ideal frictionless pendulum considered in \cite{greydanus2019hamiltonian}.
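For reference, a simplified generator for such trajectories can be sketched as follows (the helper names are ours, and we substitute fixed-step RK4 sub-stepping for the adaptive DOPRI5 integrator actually used):

```python
import numpy as np

def pendulum_field(x, omega0_sq, alpha):
    """F(X) for X = (theta, dtheta/dt) of the damped non-linear pendulum."""
    theta, dtheta = x
    return np.array([dtheta, -omega0_sq * np.sin(theta) - alpha * dtheta])

def simulate(x0, T0=12.0, alpha=0.2, dt=0.5, n_steps=40, substeps=50,
             sigma=0.01, seed=0):
    """Generate one noisy trajectory of 40 states observed every dt = 0.5."""
    rng = np.random.default_rng(seed)
    omega0_sq = (2 * np.pi / T0) ** 2
    h = dt / substeps                     # fine RK4 step between observations
    x, traj = np.asarray(x0, float), []
    for _ in range(n_steps):
        for _ in range(substeps):
            k1 = pendulum_field(x, omega0_sq, alpha)
            k2 = pendulum_field(x + h / 2 * k1, omega0_sq, alpha)
            k3 = pendulum_field(x + h / 2 * k2, omega0_sq, alpha)
            k4 = pendulum_field(x + h * k3, omega0_sq, alpha)
            x = x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(x + sigma * rng.standard_normal(2))  # observation noise
    return np.array(traj)                 # shape (40, 2)

traj = simulate([np.pi / 3, 0.0])
```

The friction term visibly shrinks the oscillation amplitude over the 40 observed steps, which is what makes this dataset harder than the frictionless case.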
\paragraph{Neural network architectures} We detail in Table \ref{tab:pendulum-nn-archis} the neural architectures used for the damped pendulum experiments. All data-driven augmentations approximating the mapping $X_t \mapsto F(X_t)$ are implemented by multi-layer perceptrons (MLPs) with 3 layers of 200 neurons and ReLU activation functions (except at the last layer: linear activation). The Hamiltonian \cite{greydanus2019hamiltonian,toth2019hamiltonian} is implemented by an MLP that takes the state $X_t$ and outputs a scalar estimation of the Hamiltonian $\mathcal{H}$ of the system; the derivative is then computed by an in-graph gradient of $\mathcal{H}$ with respect to the input: $F(X_t) = \left( \frac{\partial \mathcal{H}}{\partial (\diff\theta/\diff t)}, - \frac{\partial \mathcal{H}}{\partial \theta} \right)$.
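The in-graph gradient trick can be sketched in PyTorch as follows (layer sizes follow Table \ref{tab:pendulum-nn-archis}; treating the second state component as a momentum $p$ is a simplification of ours):

```python
import torch
import torch.nn as nn

class HamiltonianNet(nn.Module):
    """MLP estimating a scalar Hamiltonian H(X); the vector field is then
    recovered through an in-graph gradient, F = (dH/dp, -dH/dtheta)."""

    def __init__(self, hidden=200):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):                  # x: (batch, 2) = (theta, p)
        x = x.detach().requires_grad_(True)  # leaf input for this sketch
        H = self.net(x).sum()
        dH, = torch.autograd.grad(H, x, create_graph=True)
        # Hamilton's equations: dtheta/dt = dH/dp, dp/dt = -dH/dtheta.
        return torch.stack([dH[:, 1], -dH[:, 0]], dim=-1)

field = HamiltonianNet()(torch.randn(8, 2))
```

Because the gradient is taken with `create_graph=True`, the resulting vector field remains differentiable and can be trained end-to-end through the ODE solver.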
\begin{table}[H]
\caption[Neural network architectures for the damped pendulum.]{Neural network architectures for the damped pendulum experiments. n/a corresponds to non-applicable cases.}
\centering
\begin{adjustbox}{max width=\linewidth}
\begin{tabular}{lcc}
\toprule
Method & Physical model & Data-driven model \\
\midrule
Neural ODE & n/a & MLP(in=2, units=200, layers=3, out=2) \\
\midrule
Hamiltonian & MLP(in=2, units=200, layers=3, out=1) & n/a \\
APHYNITY Hamiltonian & MLP(in=2, units=200, layers=3, out=1) & MLP(in=2, units=200, layers=3, out=2) \\
\midrule
Param ODE ($\omega_0$) & 1 trainable parameter $\omega_0$ & n/a \\
APHYNITY Param ODE ($\omega_0$) & 1 trainable parameter $\omega_0$ & MLP(in=2, units=200, layers=3, out=2) \\
\midrule
Param ODE ($\omega_0,\alpha$) & 2 trainable parameters $\omega_0, \alpha$ & n/a \\
APHYNITY Param ODE ($\omega_0,\alpha$) & 2 trainable parameters $\omega_0, \alpha$ & MLP(in=2, units=200, layers=3, out=2) \\
\bottomrule
\end{tabular}
\end{adjustbox}
\label{tab:pendulum-nn-archis}
\end{table}
\paragraph{Optimization hyperparameters} The hyperparameters of the APHYNITY optimization algorithm ($Niter,\lambda_0,\tau_1,\tau_2$) were cross-validated on the validation set and are shown in Table \ref{tab:pendulum-hyperparameters}. All models were trained with a maximum number of 5000 steps with early stopping.
\begin{table}[H]
\caption{Hyperparameters of the damped pendulum experiments.}
\centering
\begin{adjustbox}{max width=\linewidth}
\begin{tabular}{ccccc}
\toprule
Method & Niter & $\lambda_0$ & $\tau_1$ & $\tau_2$ \\
\midrule
APHYNITY Hamiltonian & 5 & 1 & 1 & 0.1 \\
APHYNITY Param ODE ($\omega_0$) & 5 & 1 & 1 & 10 \\
APHYNITY Param ODE ($\omega_0,\alpha$) & 5 & 1000 & 1 & 100 \\
\bottomrule
\end{tabular}
\end{adjustbox}
\label{tab:pendulum-hyperparameters}
\end{table}
\subsection{Reaction-diffusion equations}
The system is driven by a FitzHugh-Nagumo type PDE~\cite{klaasen1984fitzhugh}
\begin{align*}
\frac{\partial u}{\partial t} &= a\Delta u + R_u(u,v; k) \\
\frac{\partial v}{\partial t} &= b\Delta v + R_v(u,v),
\end{align*}
where $a$ and $b$ are respectively the diffusion coefficients of $u$ and $v$, and $\Delta$ is the Laplace operator. The local reaction terms are $R_u(u,v; k) = u - u^3 - k - v$ and $R_v(u,v) = u - v$.
The state $X=(u,v)$ is defined over a compact rectangular domain $\Omega = [-1,1]^2$ with periodic boundary conditions. $\Omega$ is spatially discretized with a $32\times 32$ 2D uniform square mesh grid. The periodic boundary condition is implemented with circular padding around the borders. $\Delta$ is systematically estimated with a $3\times 3$ discrete Laplace operator.
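The periodic $3\times 3$ discrete Laplacian can be written compactly with circular shifts, which play the role of the circular padding mentioned above (a NumPy sketch; the grid step $\delta x = 2/32$ follows from the domain definition):

```python
import numpy as np

def periodic_laplacian(u, dx=2.0 / 32):
    """5-point (3x3 stencil) discrete Laplacian with periodic boundary
    conditions; np.roll plays the role of circular padding at the borders."""
    return (np.roll(u, 1, axis=0) + np.roll(u, -1, axis=0)
            + np.roll(u, 1, axis=1) + np.roll(u, -1, axis=1) - 4 * u) / dx ** 2

u = np.random.default_rng(0).random((32, 32))
lap = periodic_laplacian(u)
```

With periodic boundaries the discrete Laplacian of any field sums to zero and vanishes on constant fields, two quick sanity checks for the implementation.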
\paragraph{Dataset} Starting from a randomly sampled initial state $X_\text{init} \in [0,1]^{2\times 32\times 32}$, we generate states by integrating the true PDE with coefficients fixed within each dataset ($a=1 \times 10^{-3}, b=5 \times 10^{-3}, k=5 \times 10^{-3}$). We first simulate high time-resolution ($\delta t_\text{sim} = 0.001$) sequences with an explicit finite-difference method. We then extract states every $\delta t_\text{data} = 0.1$ to construct our low time-resolution datasets.
We set the time of the random initial state to $t=-0.5$ and the time horizon to $t=2.5$. 1920 sequences are generated, with 1600 for training/validation and 320 for test. We take the state at $t=0$ as $X_0$ and predict the sequence until the horizon (equivalent to 25 time steps) in all reaction-diffusion experiments. Note that the sub-sequences with $t<0$ are reserved for the extensive experiments in Appendix~\ref{app:additional_reac_diff}.
\paragraph{Neural network architectures}
Our $F_a$ here is a 3-layer convolutional network (ConvNet). The two input channels are $(u, v)$ and the two output ones are $(\frac{\partial u}{\partial t}, \frac{\partial v}{\partial t})$. The purely data-driven Neural ODE uses such a ConvNet as its $F$. The detailed architecture is provided in Table~\ref{tab:reaction-diffusion-arch}. The estimated physical parameters $\theta_p$ in $F_p$ are simply a trainable vector $(a, b) \in \mathbb{R}_+^2$ or $(a, b, k) \in \mathbb{R}_+^3$.
\begin{table}[H]
\caption[Model architecture for the reaction-diffusion and wave equations.]{ConvNet architecture in reaction-diffusion and wave equation experiments, used as data-driven derivative operator in APHYNITY and Neural ODE \cite{chen2018neural}.}
\label{tab:reaction-diffusion-arch}
\centering
\begin{tabular}{ll}
\toprule
Module & Specification \\
\midrule
2D Conv. & $3\times 3$ kernel, 2 input channels, 16 output channels, 1 pixel zero padding \\
2D Batch Norm. & No average tracking \\
ReLU activation & --- \\
2D Conv. & $3\times 3$ kernel, 16 input channels, 16 output channels, 1 pixel zero padding \\
2D Batch Norm. & No average tracking \\
ReLU activation & --- \\
2D Conv. & $3\times 3$ kernel, 16 input channels, 2 output channels, 1 pixel zero padding \\
\bottomrule
\end{tabular}
\end{table}
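The architecture of Table~\ref{tab:reaction-diffusion-arch} translates directly into PyTorch (the factory-function name is ours; \texttt{track\_running\_stats=False} implements the ``no average tracking'' of the batch-norm layers):

```python
import torch
import torch.nn as nn

def make_conv_net(in_ch=2, hidden=16, out_ch=2):
    """ConvNet used as data-driven derivative operator: 3x3 convolutions with
    1-pixel zero padding, batch norm without running-average tracking."""
    return nn.Sequential(
        nn.Conv2d(in_ch, hidden, kernel_size=3, padding=1),
        nn.BatchNorm2d(hidden, track_running_stats=False),
        nn.ReLU(),
        nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
        nn.BatchNorm2d(hidden, track_running_stats=False),
        nn.ReLU(),
        nn.Conv2d(hidden, out_ch, kernel_size=3, padding=1),
    )

f_a = make_conv_net()
out = f_a(torch.randn(4, 2, 32, 32))   # (u, v) in, (du/dt, dv/dt) out
```

The 1-pixel zero padding keeps the spatial resolution unchanged, so the network maps a state field to a derivative field of the same shape.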
\paragraph{Optimization hyperparameters}
We choose to apply the same hyperparameters for all the reaction-diffusion experiments: $Niter = 1, \lambda_0 = 1, \tau_1 = 1\times 10^{-3}, \tau_2 = 1\times 10^{3}$.
\subsection{Wave equations} \label{sup:wave-details}
The damped wave equation is defined by
\[
\frac{\partial^2 w}{\partial t^2} - c^2 \Delta w + k \frac{\partial w}{\partial t} = 0,
\]
where $c$ is the wave speed and $k$ is the damping coefficient. The state is $X=(w, \frac{\partial w}{\partial t})$.
We consider a compact spatial domain $\Omega$ represented as a $64\times64$ grid and discretize the Laplace operator similarly: $\Delta$ is implemented with a $5\times 5$ discrete Laplace operator for the simulation, whereas a $3\times 3$ operator is used in the experiments. Null Neumann boundary conditions are imposed for generation.
\paragraph{Dataset} $\delta t$ was set to $0.001$ to respect the Courant--Friedrichs--Lewy (CFL) condition and provide stable integration. The simulation was integrated using a fourth-order Runge--Kutta scheme on the finite-difference discretization for 300 steps from an initial Gaussian state, i.e.\ for each sequence at $t=0$ we have:
\begin{equation}
w(x,y, t=0) = C \exp\left(-\frac{(x-x_0)^2 + (y-y_0)^2}{\sigma^2}\right).
\end{equation}
The amplitude $C$ is fixed to $1$, and $(x_0, y_0)=(32,32)$ so that the Gaussian curve is centered for all sequences, while $\sigma$ is uniformly sampled in $[10, 100]$ and differs for each sequence.
The same $\delta t$ was used for train and test, and all initial conditions are Gaussian with varying widths. 250 sequences are generated: 200 are used for training, while 50 are reserved as a test set.
In the main paper setting, $c=330$ and $k=50$.
As in the reaction-diffusion case, the algorithm takes as input a state $X_{t_0}=(w, \frac{\partial w}{\partial t})(t_0)$ and predicts all states from $t_0+\delta t$ up to $t_0 + 25 \delta t$.
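The initial states can be generated as follows (a NumPy sketch; taking a zero initial velocity $\frac{\partial w}{\partial t}(t=0)=0$ is an assumption of ours, as the text only specifies the Gaussian shape of $w$):

```python
import numpy as np

def gaussian_initial_state(sigma, n=64, x0=32.0, y0=32.0, C=1.0):
    """Initial wave state X_0 = (w, dw/dt) on an n x n grid: a centred
    Gaussian bump of amplitude C and width sigma; zero initial velocity
    is an assumption of this sketch."""
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    w0 = C * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / sigma ** 2)
    return np.stack([w0, np.zeros_like(w0)])   # shape (2, n, n)

sigma = np.random.default_rng(0).uniform(10, 100)  # one draw per sequence
X0 = gaussian_initial_state(sigma)
```

Each sequence thus shares the same centred unit-amplitude bump and differs only through the sampled width $\sigma$.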
\paragraph{Neural network architectures}
The neural network for $F_a$ is a 3-layer convolutional neural network with the same architecture as in Table~\ref{tab:reaction-diffusion-arch}. For $F_p$, the parameter(s) to be estimated are either a scalar $c\in\mathbb{R}_+$ or a vector $(c,k) \in \mathbb{R}_+^2$. Similarly, the Neural ODE networks are built as presented in Table~\ref{tab:reaction-diffusion-arch}.
\paragraph{Optimization hyperparameters}
We use the same hyperparameters for the experiments:\\ ${Niter = 3, \lambda_0 = 1, \tau_1 = 1\times 10^{-4}, \tau_2 = 1\times 10^{2}}$.
\section{Ablation study\label{app:ablation}}
We conduct ablation studies to show the effectiveness of APHYNITY's adaptive optimization and trajectory-based learning scheme.
\subsection{Ablation to vanilla ML/MB cooperation}
In Table~\ref{tab:ablation-nds}, we consider the ablation case with the vanilla augmentation scheme found in \cite{leguen20phydnet,wang2019integrating,neural20}, which does not offer any proper decomposition guarantee. We observe that the APHYNITY cooperation scheme outperforms this vanilla scheme in all cases, both in terms of forecasting performance (e.g.~log MSE $=-0.35$ vs.\ $-3.97$ for the Hamiltonian in the pendulum case) and parameter identification (e.g.~\%Err Param $=8.4$ vs.\ $2.3$ for Param PDE ($a,b$) on reaction-diffusion). It confirms the crucial benefits of APHYNITY's principled decomposition scheme.
\subsection{Detailed ablation study}
We also conduct two other ablations in Table \ref{tab:ablation-others}:
\begin{itemize}
\item \textit{derivative supervision}: in which $F_p+F_a$ is trained with supervision over approximated derivatives on ground truth trajectory, as performed in \cite{greydanus2019hamiltonian,cranmer2020lagrangian}. More precisely, APHYNITY's $\mathcal{L}_\text{traj}$ is here replaced with $\mathcal{L}_\text{deriv} = \|\frac{\diff X_t}{\diff t} - F(X_t)\|$ as in \eqref{der_pbm}, where $\frac{\diff X_t}{\diff t}$ is approximated by finite differences on $X_t$.
\item \textit{non-adaptive optim.}: in which we train APHYNITY by minimizing $\|F_a\|$ without the adaptive optimization of $\lambda$ shown in Algorithm~\ref{alg:optim}. This case is equivalent to $\lambda = 1, \tau_2=0$.
\end{itemize}
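To make the first variant concrete, the derivative-supervision loss can be sketched as follows (our own scalar toy example): even when the true $F$ is plugged in, finite differences of a sampled trajectory leave a non-zero residual, in line with the discussion of Appendix~\ref{app:der_superv}.

```python
import numpy as np

def l_deriv(traj, F, dt):
    """Derivative-supervision loss: finite-difference estimates of dX/dt
    compared with F evaluated at the sampled states."""
    dX = (traj[1:] - traj[:-1]) / dt              # forward differences
    FX = np.array([F(x) for x in traj[:-1]])
    return float(np.mean((dX - FX) ** 2))

dt = 0.1
t = dt * np.arange(50)
traj = np.exp(t)                           # exact solution of dx/dt = x
residual = l_deriv(traj, lambda x: x, dt)  # > 0 even with the true F
```

The residual shrinks with the sampling step but never vanishes at a fixed temporal resolution, which is precisely the discretization error that the trajectory-based loss avoids.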
\begin{table}[t]
\centering
\caption[Ablation study comparing APHYNITY to the vanilla ML/MB augmentation scheme.]{Ablation study comparing APHYNITY to the vanilla augmentation scheme \cite{wang2019integrating,neural20} for the reaction-diffusion equation, wave equation and damped pendulum.
\label{tab:ablation-nds}}
\begin{adjustbox}{max width=\linewidth}
\begin{tabular}{clccc}
\toprule
Dataset & Method & $\log$ MSE &
\%Err Param. &
$\|F_a\|^2$
\\ \midrule
\multirowcell{8}{\parbox{1cm}{\centering\tiny Damped pendulum}}
& Hamiltonian with vanilla aug. & -0.35$\pm$0.1 & n/a & 837$\pm$117 \\
& APHYNITY Hamiltonian & \textbf{-3.97$\pm$1.2} & n/a & 623$\pm$68 \\ \cmidrule{2-5}
& Param ODE ($\omega_0$) with vanilla aug. & -7.02$\pm$1.7 & 4.5 & 148$\pm$49 \\
& APHYNITY Param ODE ($\omega_0$) & \textbf{-7.86$\pm$0.6} & \textbf{4.0} & 132$\pm$11 \\ \cmidrule{2-5}
& Param ODE ($\omega_0, \alpha$) with vanilla aug.& -7.60$\pm$0.6 & 4.65 & 35.5$\pm$6.2 \\
& APHYNITY Param ODE ($\omega_0, \alpha$) & \textbf{-8.31$\pm$0.3} & \textbf{0.39} & 8.5$\pm$2.0 \\
\cmidrule{2-5}
& Augmented True ODE with vanilla aug. & \textbf{-8.40$\pm$0.2} & n/a & 3.4$\pm$0.8 \\
& APHYNITY True ODE & \textbf{-8.44$\pm$0.2} & n/a & 2.3$\pm$0.4 \\
\midrule
\multirowcell{6}{\parbox{0.7cm}{\centering\tiny Reaction-diffusion}}
& Param. PDE ($a,b$) with vanilla aug. & -4.56$\pm$0.52
& 8.4 & (7.5$\pm$1.4)e1\\
& APHYNITY Param. PDE ($a,b$) & \textbf{-5.10$\pm$0.21} & \textbf{2.3} & (6.7$\pm$0.4)e1 \\
\cmidrule{2-5}
& Param. PDE ($a,b,k$) with vanilla aug. & -8.04$\pm$0.03
& 25.4 & (1.5$\pm$0.2)e-2\\
& APHYNITY Param. PDE ($a,b,k$) & \textbf{-9.35$\pm$0.02}
& \textbf{0.096} & (1.5$\pm$0.4)e-6 \\
\cmidrule{2-5}
& True PDE with vanilla aug. & -8.12$\pm$0.05
& n/a & (6.1$\pm$2.3)e-4\\
& APHYNITY True PDE & \textbf{-9.17$\pm$0.02}
& n/a & (1.4$\pm$0.8)e-7\\
\midrule
\multirowcell{4}{\parbox{1cm}{\centering\tiny Wave equation}}
& Param PDE ($c$) with vanilla aug. & -3.90 $\pm$ 0.27 & 0.51 & 88.66 \\
& APHYNITY Param PDE ($c$) & \textbf{-4.64$\pm$0.25}& \textbf{0.31} & 71.0 \\
\cmidrule{2-5}
& Param PDE ($c, k$) with vanilla aug. & -5.96 $\pm$ 0.10 & 0.71 & 25.1 \\
& APHYNITY Param PDE ($c, k$) & \textbf{-6.09$\pm$0.28} & \textbf{0.70} & 4.54 \\
\bottomrule
\end{tabular}
\end{adjustbox}
\end{table}
\begin{table}[h!]
\centering
\caption[Detailed ablation study for APHYNITY.]{Detailed ablation study on supervision and optimization for the reaction-diffusion equation, wave equation and damped pendulum.
\label{tab:ablation-others}}
\begin{adjustbox}{max width=\linewidth}
\begin{tabular}{clccc}
\toprule
Dataset & Method & $\log$ MSE &
\%Err Param. &
$\|F_a\|^2$
\\ \midrule
\multirowcell{12}{\parbox{1cm}{\centering\tiny Damped pendulum}} & Augmented Hamiltonian derivative supervision & -0.83$\pm$0.3 & n/a & 642$\pm$121 \\
& Augmented Hamiltonian non-adaptive optim. & -0.49$\pm$0.58 & n/a & 165$\pm$30 \\
& APHYNITY Hamiltonian & \textbf{-3.97$\pm$1.2} & n/a & 623$\pm$68 \\ \cmidrule{2-5}
& Augmented Param ODE ($\omega_0$) derivative supervision & -1.02$\pm$0.04 & 5.8 & 136$\pm$13 \\
& Augmented Param ODE ($\omega_0$) non-adaptive optim. & -4.30$\pm$1.3 & 4.4 & 90.4$\pm$27 \\
& APHYNITY Param ODE ($\omega_0$) & \textbf{-7.86$\pm$0.6} & \textbf{4.0} & 132$\pm$11 \\ \cmidrule{2-5}
& Augmented Param ODE ($\omega_0, \alpha$) derivative supervision & -2.61$\pm$0.2 & 5.0 & 3.2$\pm$1.7 \\
& Augmented Param ODE ($\omega_0, \alpha$) non-adaptive optim. & -7.69$\pm$1.3 & 1.65 & 4.8$\pm$7.7 \\
& APHYNITY Param ODE ($\omega_0, \alpha$) & \textbf{-8.31$\pm$0.3} & \textbf{0.39} & 8.5$\pm$2.0 \\
\cmidrule{2-5}
& Augmented True ODE derivative supervision & -2.14$\pm$0.3 & n/a & 4.1$\pm$0.6 \\
& Augmented True ODE non-adaptive optim. & \textbf{-8.34$\pm$0.4} & n/a & 1.4$\pm$0.3 \\
& APHYNITY True ODE & \textbf{-8.44$\pm$0.2} & n/a & 2.3$\pm$0.4 \\
\midrule
\multirowcell{9}{\parbox{0.7cm}{\centering\tiny Reaction-diffusion}} & Augmented Param. PDE ($a,b$) derivative supervision & -4.42$\pm$0.25
& 12.6 & (6.8$\pm$0.6)e1\\
& Augmented Param. PDE ($a,b$) non-adaptive optim. & -4.55$\pm$0.11
& 7.5 & (7.6$\pm$1.0)e1\\
& APHYNITY Param. PDE ($a,b$) & \textbf{-5.10$\pm$0.21} & \textbf{2.3} & (6.7$\pm$0.4)e1 \\
\cmidrule{2-5}
& Augmented Param. PDE ($a,b,k$) derivative supervision & -4.90$\pm$0.06
& 11.7 & (1.9$\pm$0.3)e-1\\
& Augmented Param. PDE ($a,b,k$) non-adaptive optim. & -9.10$\pm$0.02
& 0.21 & (5.5$\pm$2.9)e-7\\
& APHYNITY Param. PDE ($a,b,k$) & \textbf{-9.35$\pm$0.02}
& \textbf{0.096} & (1.5$\pm$0.4)e-6 \\
\cmidrule{2-5}
& Augmented True PDE derivative supervision & -6.03$\pm$0.01
& n/a & (3.1$\pm$0.8)e-3\\
& Augmented True PDE non-adaptive optim. & -9.01$\pm$0.01
& n/a & (1.5$\pm$0.8)e-6\\
& APHYNITY True PDE & \textbf{-9.17$\pm$0.02}
& n/a & (1.4$\pm$0.8)e-7\\
\midrule
\multirowcell{8}{\parbox{1cm}{\centering\tiny Wave equation}} & Augmented Param PDE ($c$) derivative supervision & -1.16$\pm$0.48 & 12.1 & 0.00024 \\
& Augmented Param PDE ($c$) non-adaptive optim. &-2.57$\pm$0.21 & 3.1 & 43.6 \\
& APHYNITY Param PDE ($c$) & \textbf{-4.64$\pm$0.25}& \textbf{0.31} & 71.0 \\
\cmidrule{2-5}
& Augmented Param PDE ($c, k$) derivative supervision & -4.19$\pm$0.36 & 7.2 & 0.00012 \\
& Augmented Param PDE ($c, k$) non-adaptive optim. & -4.93$\pm$0.51 & 1.32 & 0.054 \\
& APHYNITY Param PDE ($c, k$) & \textbf{-6.09$\pm$0.28} & \textbf{0.70} & 4.54 \\
\cmidrule{2-5}
& Augmented True PDE derivative supervision & -4.42 $\pm$ 0.33 & n/a & 6.02e-5 \\
& Augmented True PDE non-adaptive optim. & -4.97$\pm$0.49 & n/a & 0.23 \\
& APHYNITY True PDE & \textbf{-5.24$\pm$0.45} & n/a & 0.14 \\
\bottomrule
\end{tabular}
\end{adjustbox}
\end{table}
We highlight the importance of using a principled adaptive optimization algorithm (the APHYNITY algorithm described in the paper) compared to a non-adaptive optimization: for example, in the reaction-diffusion case, log MSE = -4.55 vs. -5.10 for Param PDE $(a,b)$. Finally, when the supervision occurs on the derivative, both forecasting and parameter identification results are systematically worse than with APHYNITY's trajectory-based approach: for example, log MSE = -1.16 vs. -4.64 for Param PDE $(c)$ in the wave equation. This confirms the good properties of the APHYNITY training scheme.
\pagebreak
\section{Additional experiments\label{app:additional}}
\subsection{Reaction-diffusion systems with varying diffusion parameters \label{app:additional_reac_diff}}
We conduct an extensive evaluation on a setting with varying diffusion parameters for reaction-diffusion equations.
The only varying parameters are diffusion coefficients, i.e.~ individual $a$ and $b$ for each sequence. We randomly sample $a\in [1\times 10^{-3},2\times 10^{-3}]$ and $b \in [3\times 10^{-3},7\times 10^{-3}]$. $k$ is still fixed to $5\times 10^{-3}$ across the dataset.
In order to estimate $a$ and $b$ for each sequence, we use here a ConvNet encoder $E$ to estimate parameters from 5 reserved frames ($t<0$). The architecture of the encoder $E$ is similar to the one in Table~\ref{tab:reaction-diffusion-arch} except that $E$ takes 5 frames (10 channels) as input and $E$ outputs a vector of estimated $(\tilde a,\tilde b)$ after applying a sigmoid activation scaled by $1\times 10^{-2}$ (to avoid possible divergence). For the baseline Neural ODE, we concatenate $a$ and $b$ to each sequence as two channels.
In Table~\ref{tab:reaction-diffusion-supplement}, we observe that combining data-driven and physical components outperforms the purely data-driven one. When applying APHYNITY to Param PDE ($a,b$), the prediction precision is significantly improved ($\log$ MSE: -1.32 vs. -4.32), with the estimation errors of $a$ and $b$ reduced from 55.6\% and 54.1\% to 11.8\% and 18.7\%, respectively. For complete physics cases, the parameter estimation is also improved for Param PDE ($a,b,k$): the error on $b$ is reduced by over 60\% (3.10 vs. 1.23), and the errors on $a$ and $k$ by 10\% to 20\% (resp. 1.55/0.59 vs. 1.29/0.39).
These extensive results support the same conclusion as the main article: APHYNITY improves both prediction precision and parameter estimation. The same decreasing tendency of $\|F_a\|$ is also confirmed.
\begin{table}[h]
\setlength{\tabcolsep}{4pt}
\centering
\caption[APHYNITY results on the reaction-diffusion equations with varying parameters.]{Results of the dataset of reaction-diffusion with varying $(a,b)$. $k=5\times 10^{-3}$ is shared across the dataset. \label{tab:reaction-diffusion-supplement}}
\begin{tabular}{clccccc}
\toprule
& Method & $\log$ MSE
& \%Err $a$ & \%Err $b$ & \%Err $k$ & $\|F_a\|^2$ \\
\midrule
\parbox{0.9cm}{\centering\tiny Data-driven} & Neural ODE \cite{chen2018neural} & -3.61$\pm$0.07
& n/a & n/a & n/a & n/a\\
\midrule
\multirowcell{2}{\parbox{0.9cm}{\centering\tiny Incomplete physics}} & Param PDE ($a,b$) & -1.32$\pm$0.02
& 55.6 & 54.1 & n/a & n/a\\
& APHYNITY Param PDE ($a,b$) & \textbf{-4.32$\pm$0.32}
& \textbf{11.8} & \textbf{18.7} & n/a & (4.3$\pm$0.6)e1\\
\midrule
\multirowcell{4}{\parbox{0.9cm}{\centering\tiny Complete physics}} & Param PDE ($a,b,k$) & \textbf{-5.54$\pm$0.38}
& 1.55 & 3.10 & 0.59 & n/a\\
& APHYNITY Param PDE ($a,b,k$) & \textbf{-5.72$\pm$0.25}
& \textbf{1.29} & \textbf{1.23} & \textbf{0.39} & (5.9$\pm$4.3)e-1 \\
\cdashline{2-7}\noalign{\vskip 0.2ex}
& True PDE & \textbf{-8.86$\pm$0.02}
& n/a & n/a & n/a & n/a\\
& APHYNITY True PDE & \textbf{-8.82$\pm$0.15}
& n/a & n/a & n/a & (1.8$\pm$0.6)e-5\\
\bottomrule
\end{tabular}
\end{table}
\subsection{Additional results for the wave equation\label{app:additional_wave}}
We conduct an experiment where each sequence is generated with a different wave celerity. This dataset is challenging because both $c$ and the initial conditions vary across the sequences. For each simulated sequence, an initial condition is sampled as described previously, along with a wave celerity $c$ sampled uniformly in $[300, 400]$. Finally, the initial state is integrated with the same Runge-Kutta scheme. 200 such sequences are generated for training, while 50 are kept for testing.
For this experiment, we also use a ConvNet encoder to estimate the wave speed $c$ from 5 consecutive reserved states $(w, \frac{\partial w}{\partial t})$. The architecture of the encoder $E$ is the same as in Table~\ref{tab:reaction-diffusion-arch} but with 10 input channels.
Here also, $k$ is fixed for all sequences and $k=50$. The hyper-parameters used in these experiments are the same as those described in Section~\ref{sup:wave-details}.
\begin{comment}
Fig \ref{fig:wave-damped-demo_supp} is consistent with the one provided in the main paper showing the consistency of APHYNITY in the case of incomplete physics compared to a fully data-driven approach (Neural-ODE)
\begin{figure}[H]
\vspace{-0.4cm}
\centering
\subfloat[Neural ODE
]{
\includegraphics[width=0.32\textwidth]{figs/wave_damped_2/neural_ode.png}}
\hfill
\subfloat[APHYNITY Param PDE ($c$)
]{
\includegraphics[width=0.32\textwidth]{figs/wave_damped_2/aphyinity.png}}\hfill
\subfloat[Ground truth simulation
]{
\includegraphics[width=0.32\textwidth]{figs/wave_damped_2/truth.png}}
\caption{Comparison between the prediction of APHYNITY when $c$ is estimated and Neural ODE for the damped wave equation. Note that $t+32$ is already largely beyond the dataset horizon, showing the consistency of APHYNITY method.\label{fig:wave-damped-demo_supp}
}
\end{figure}
\end{comment}
The results when multiple wave speeds $c$ are present in the dataset are consistent with those obtained when only one is considered. Indeed, while prediction performance is slightly hindered, the parameter estimation remains consistent for both $c$ and $k$. This extension attests to the robustness and adaptability of our method in more complex settings. Finally, the purely data-driven Neural ODE fails to cope with the increased difficulty.
\begin{table}[H]
\centering
\setlength{\tabcolsep}{8pt}
\caption[APHYNITY results on the wave equations with varying parameters.]{Results for the damped wave equation when considering multiple $c$ sampled uniformly in $[300, 400]$ in the dataset, $k$ is shared across all sequences and $k=50$.}
\begin{tabular}{clcccc}
\toprule
&Method & $\log$ MSE &
\%Error $c$ & \%Error $k$ & $\|F_a\|^2$
\\ \midrule
\parbox{0.9cm}{\centering\tiny Data-driven} & Neural ODE \cite{chen2018neural} & 0.056$\pm$0.34& n/a & n/a & n/a \\
\midrule
\multirowcell{2}{\parbox{0.9cm}{\centering\tiny Incomplete physics}} &Param PDE ($c$) &-1.32$\pm$0.27 & 23.9 & n/a & n/a \\
&APHYNITY Param PDE ($c$) &\textbf{-4.51$\pm$0.38}& 3.2 & n/a & 171 \\
\midrule
\multirowcell{4}{\parbox{0.9cm}{\centering\tiny Complete physics}} & Param PDE ($c, k$) & -4.25$\pm$0.28 & 3.54 & 1.43& n/a\\
& APHYNITY Param PDE ($c, k$) &\textbf{-4.84$\pm$0.57} & 2.41 & 0.064 & 3.64 \\
\cdashline{2-6}\noalign{\vskip 0.2ex}
& True PDE ($c, k$) &\textbf{-4.51$\pm$0.29} & n/a & n/a & n/a \\
& APHYNITY True PDE ($c, k$) & \textbf{-4.49$\pm$0.22} & n/a &n/a& 0.0005 \\
\bottomrule
\end{tabular}
\label{tab:additional wave}
\end{table}
\clearpage{\pagestyle{empty}\cleardoublepage}
\section{Proof that the temporal kernel is PSD}
\label{app:proof-ktime}
The DTW score between two time series $\mathbf{y} \in \mathbb{R}^{d \times n}$ and $\mathbf{z} \in \mathbb{R}^{d \times m}$ can be written $S(\pi) = \sum_{i=1}^{|\pi|} \mathbf{\Delta}(\mathbf{y}_{\pi_1(i)}, \mathbf{z}_{\pi_2(i)})$ where $\pi=(\pi_1,\pi_2)$ is a valid alignment between both series. Equivalently, we can write the DTW score $S(\pi) = S(\mathbf{A}) = \left\langle \mathbf{A}, \mathbf{\Delta(\mathbf{y},\mathbf{z})} \right\rangle$, where $\mathbf{A} \in \left \{ 0,1 \right \} ^{n \times m}$ is the warping path in matrix form ($\mathbf{A}_{ij}=1$ if $\mathbf{y}_i$ is associated to $\mathbf{z}_j$ and 0 otherwise).\\
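To make the alignment-path formulation concrete, here is a minimal NumPy sketch (illustrative, not the thesis code) of the hard DTW score computed by dynamic programming; it assumes squared-Euclidean pairwise costs, and the cumulative cost $R[n,m]$ equals $\min_{\mathbf{A}} \left\langle \mathbf{A}, \mathbf{\Delta}(\mathbf{y},\mathbf{z}) \right\rangle$ over valid alignment matrices.

```python
import numpy as np

def dtw(y, z):
    """Hard DTW between series y (n, d) and z (m, d):
    min over alignment paths A of <A, Delta(y, z)>."""
    n, m = len(y), len(z)
    # pairwise cost matrix Delta_{ij} = ||y_i - z_j||_2^2
    delta = ((y[:, None, :] - z[None, :, :]) ** 2).sum(-1)
    R = np.full((n + 1, m + 1), np.inf)
    R[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # each path step moves right, down, or diagonally
            R[i, j] = delta[i - 1, j - 1] + min(R[i - 1, j], R[i, j - 1], R[i - 1, j - 1])
    return R[n, m]
```

The quadratic-time recursion is the classical one; only the cost choice and function name are assumptions here.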
Let $w: \mathcal{A}_{n,m} \longrightarrow \mathbb{R}_+^*$ be a strictly positive weighting function on alignment paths and let's consider the following kernel:
\begin{align}
\mathcal{K}_w(\mathbf{y},\mathbf{z}) &= \sum_{\mathbf{A} \in \mathcal{A}_{n,m}} w(\mathbf{A}) ~~ e^{ - \frac{S(\mathbf{A})}{\gamma} } \\
&= \sum_{\mathbf{A} \in \mathcal{A}_{n,m}} w(\mathbf{A}) ~~ e^{ - \frac{\left\langle \mathbf{A} , \mathbf{\Delta(\mathbf{y},\mathbf{z})} \right\rangle}{\gamma} } \\
&= \sum_{\pi \in \mathcal{A}_{n,m}} w(\pi) ~~ e^{ - \frac{ \sum_{j=1}^{|\pi|} \mathbf{\Delta} \left( \mathbf{y}_{\pi_1(j)} , \mathbf{z}_{\pi_2(j)} \right) }{\gamma} } \\
&= \sum_{\pi \in \mathcal{A}_{n,m}} w(\pi) \prod_{j=1}^{|\pi|} e^{ - \frac{ \mathbf{\Delta} \left( \mathbf{y}_{\pi_1(j)} , \mathbf{z}_{\pi_2(j)} \right) }{\gamma} } \\
&= \sum_{\pi \in \mathcal{A}_{n,m}} w(\pi) \prod_{j=1}^{|\pi|} k(\mathbf{y}_{\pi_1(j)} , \mathbf{z}_{\pi_2(j)}) ,
\label{eq:kernel_def}
\end{align}
where we denote $k=e^{-\frac{\mathbf{\Delta}}{\gamma}}$. We prove the following result: \\
\begin{prop}
If $k$ is a PSD kernel such that $\frac{k}{1+k}$ is also PSD, the kernel $\mathcal{K}_w$ defined in Eq. \ref{eq:kernel_def} is also PSD.
\end{prop}
\begin{proof}
The proof is adapted from \cite{cuturi2007kernel}. First, for any time series $\mathbf{y}= (\mathbf{y}_1,\dots,\mathbf{y}_n) \in \mathbb{R}^{d \times n}$ of length $n$ and for any sequence $a \in \mathbb{N}^n$, we introduce the notation:
\begin{equation}
\mathbf{y}_a = (\underset{a_1 \text{~times}}{\underbrace{\mathbf{y}_1,\dots,\mathbf{y}_1}}, \dots, \underset{a_n \text{~times}}{\underbrace{\mathbf{y}_n,\dots,\mathbf{y}_n}}).
\end{equation}
Let $\chi$ be any PSD kernel defined on $\mathbb{R}^d$ satisfying $|\chi| < 1$; we introduce the kernel $\kappa$ defined as:
\begin{equation}
\kappa(\mathbf{y},\mathbf{z}) =
\begin{cases}
\prod_{i=1}^{|\mathbf{y}|} \chi(\mathbf{y}_i, \mathbf{z}_i) \text{~~if~~} |\mathbf{y}| = |\mathbf{z}| \\
0 \text{~~~otherwise.}
\end{cases}
\end{equation}
Then, given a strictly positive weighting function $w(a,b) > 0$, the following kernel $\mathcal{K}_w$ defined in Eq. \ref{eq:Kw} is PSD by construction:
\begin{equation}
\mathcal{K}_w(\mathbf{y},\mathbf{z}) = \sum_{a \in \mathbb{N}^n} \sum_{b \in \mathbb{N}^m} w(a,b) ~ \kappa(\mathbf{y}_a, \mathbf{z}_b).
\label{eq:Kw}
\end{equation}
where we recall that $n=|\mathbf{y}|$ and $m=|\mathbf{z}|$. We denote $\epsilon_a = (\underset{a_1 \text{~times}}{\underbrace{1,\dots,1}}, \dots, \underset{a_p \text{~times}}{\underbrace{p,\dots,p}})$ for any $a\in \mathbb{N}^p$. We also write for any sequences $u$ and $v$ of common length $p$: $u \otimes v = ((u_1,v_1),\dots,(u_p,v_p))$. With these notations, we can rewrite $\mathcal{K}_w$ as:
\begin{equation}
\mathcal{K}_w(\mathbf{y},\mathbf{z}) = \sum_{ \overset{a \in \mathbb{N}^n, b \in \mathbb{N}^m}{\Vert a \Vert = \Vert b \Vert} } w(a,b) \prod_{i=1}^{\Vert a \Vert} \chi((\mathbf{y},\mathbf{z})_{\epsilon_a \otimes \epsilon_b(i)}).
\label{eq:Kw_ab}
\end{equation}
Notice now that for each couple $(a,b)$ there exist a unique alignment path $\pi$ and an integral vector $v$ verifying $\pi_v = \epsilon_a \otimes \epsilon_b$. Conversely, for each couple $(\pi,v)$ there exists a unique pair $(a,b)$ verifying $\pi_v = \epsilon_a \otimes \epsilon_b$. Therefore the kernel $\mathcal{K}_w$ in Eq. \ref{eq:Kw_ab} can be written equivalently with a parameterization on $(\pi,v)$ for $w$:
\begin{equation}
\mathcal{K}_w(\mathbf{y},\mathbf{z}) = \sum_{\pi \in \mathcal{A}_{n,m}} \sum_{v \in \mathbb{N}^{|\pi|}} w(\pi,v) \prod_{j=1}^{|\pi|} \chi((\mathbf{y},\mathbf{z})_{\pi_v(j)}),
\label{eq:Kw_piv}
\end{equation}
where $\chi_{\pi(j)}$ is a shortcut for $\chi(\mathbf{y}_{\pi_1(j)}, \mathbf{z}_{\pi_2(j)})$.\\
Now we assume that the weighting function $w$ depends only on $\pi$: $w(\pi,v)=w(\pi)$. Then we have:
\begin{align*}
\mathcal{K}_w(\mathbf{y},\mathbf{z}) &= \sum_{\pi \in \mathcal{A}_{n,m}} w(\pi) \sum_{v \in \mathbb{N}^{|\pi|}} \prod_{j=1}^{|\pi|} \chi^{v_j}_{\pi(j)} \\
&= \sum_{\pi \in \mathcal{A}_{n,m}} w(\pi) \prod_{j=1}^{|\pi|} \left( \chi_{\pi(j)} + \chi_{\pi(j)}^2 + \chi_{\pi(j)}^3 + \dots \right)\\
&= \sum_{\pi \in \mathcal{A}_{n,m}} w(\pi) \prod_{j=1}^{|\pi|} \frac{\chi_{\pi(j)}}{1-\chi_{\pi(j)}}.
\end{align*}
By setting now $\chi = \frac{k}{1+k}$ which is PSD by hypothesis and verifies $| \chi | <1$ (recall that $k=e^{- \frac{\mathbf{\Delta}}{\gamma}} $), we get:
\begin{align*}
\mathcal{K}_w(\mathbf{y},\mathbf{z}) &= \sum_{\pi \in \mathcal{A}_{n,m}} w(\pi) \prod_{j=1}^{|\pi|} k_{\pi(j)} \\
&= \sum_{\pi \in \mathcal{A}_{n,m}} w(\pi) \prod_{j=1}^{|\pi|} k(\mathbf{y}_{\pi_1(j)} , \mathbf{z}_{\pi_2(j)}) ,\\
\end{align*}
which corresponds exactly to the kernel $\mathcal{K}_w$ defined in Eq. \ref{eq:kernel_def}. This proves that $\mathcal{K}_w$ in Eq. \ref{eq:kernel_def} is a well defined PSD kernel. \\
With the particular choice $w(\mathbf{A}) = \left\langle \mathbf{A},\mathbf{\Omega_{sim}} \right\rangle$, we recover:
\begin{align*}
\mathcal{K}_w(\mathbf{y},\mathbf{z}) &= \sum_{\mathbf{A} \in \mathcal{A}} \left\langle \mathbf{A},\mathbf{\Omega_{sim}} \right\rangle ~~ e^{ - \frac{\left\langle \mathbf{A} , \mathbf{\Delta(\mathbf{y},\mathbf{z})} \right\rangle}{\gamma} } \\
&= Z \times \text{TDI}^{\mathbf{\Delta,\Omega_{sim}}}_{\gamma}(\mathbf{y},\mathbf{z}) \\
&= e^{- \text{DTW}^{\mathbf{\Delta}}_{\gamma}(\mathbf{y},\mathbf{z}) / \gamma}
\times \text{TDI}^{\mathbf{\Delta, {\Omega_{sim}}}}_{\gamma} (\mathbf{y},\mathbf{z}) \\
&= \mathcal{K}_{time}(\mathbf{y},\mathbf{z}),
\end{align*}
which finally proves that $\mathcal{K}_{time}$ defined in paper Eq. 9 is a valid PSD kernel.
\end{proof}{}
The particular choice $k(u,v)= \dfrac{\frac{1}{2} e^{-\Vert u-v \Vert^2_2}} {1-\frac{1}{2} e^{- \Vert u-v \Vert^2_2}}$ fulfills the requirements of Proposition 1: $k$ is indeed PSD as the limit of the convergent series of PSD kernels $\sum_{i=1}^{\infty} g^i = \frac{g}{1-g} = k$, where $g$ is the halved Gaussian PSD kernel $g(u,v)= \frac{1}{2} e^{- \Vert u-v \Vert ^2_2}$ (sums, products and pointwise limits of PSD kernels are PSD). For this choice of $k$, the corresponding pairwise cost matrix reads (it is the half-Gaussian cost defined in Section \ref{sec:shape-kernel}):
\begin{equation}
\mathbf{\Delta}(\mathbf{y}_i,\mathbf{z}_j) = \gamma \left[\Vert \mathbf{y}_i-\mathbf{z}_j\Vert^2_2 + \log \left( 2 - e^{- \Vert \mathbf{y}_i-\mathbf{z}_j \Vert ^2_2} \right) \right] .
\end{equation}
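As a sanity check, a NumPy sketch (illustrative, not the thesis code) of the pairwise cost induced by this half-Gaussian kernel via $\mathbf{\Delta} = -\gamma \log k$; the relation $e^{-\mathbf{\Delta}/\gamma} = k$ and the vanishing diagonal can be verified numerically.

```python
import numpy as np

def half_gaussian_cost(y, z, gamma=1.0):
    """Pairwise cost Delta(y_i, z_j) = gamma * (d2 + log(2 - exp(-d2)))
    with d2 = ||y_i - z_j||_2^2, i.e. Delta = -gamma * log k for the
    half-Gaussian kernel k = (0.5 * exp(-d2)) / (1 - 0.5 * exp(-d2))."""
    d2 = ((y[:, None, :] - z[None, :, :]) ** 2).sum(-1)
    return gamma * (d2 + np.log(2.0 - np.exp(-d2)))
```

Since $2 - e^{-d_2} \geq 1$ for $d_2 \geq 0$, the cost is non-negative and equals zero exactly when $\mathbf{y}_i = \mathbf{z}_j$.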
\clearpage{\pagestyle{empty}\cleardoublepage}
\section{External shape and temporal metrics}
\label{app:dilate_metrics}
We detail here the two external metrics used in our experiments to evaluate the shape and temporal errors.
\paragraph*{Ramp score:} The notion of \textit{ramping event} is a major issue for intermittent renewable energy production, which needs to be anticipated for electricity grid management. To assess the performance of trained forecasting models in the presence of ramps, the Ramp Score was proposed in \cite{vallance2017towards}. This score is based on a piecewise linear approximation of both input and target time series, obtained with the Swinging Door algorithm \cite{bristol1990swinging,florita2013identifying}. The Ramp Score described in \cite{vallance2017towards} is computed as the integral of the unsigned difference between the derivatives of both linearly approximated series. To assess only the shape error component, we apply in our experiments the ramp score on the target and prediction series after alignment by the optimal DTW path.
\paragraph*{Hausdorff distance:} Given a set of change points $\mathcal{T}^*$ in the target signal and change points $\hat{\mathcal{T}}$ in the predicted signal, the Hausdorff distance is defined as:
\begin{equation}
\text{Hausdorff}(\mathcal{T}^*,\hat{\mathcal{T}}) := \max ( \underset{\hat{t} \in \mathcal{ \hat{T} }}{\max} \underset{t^* \in \mathcal{ T^* }}{\min} |\hat{t}-t^* | , \underset{t^* \in \mathcal{ T^* }}{\max} \underset{\hat{t} \in \mathcal{ \hat{T} }}{\min} |\hat{t}-t^* | ).
\end{equation}{}
It corresponds to the greatest temporal distance between a change point and its prediction.
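The Hausdorff distance above can be sketched directly in Python (the function name is illustrative); each term takes the worst-case distance from one set of change points to its nearest counterpart in the other.

```python
def hausdorff(T_true, T_pred):
    """Greatest temporal distance between a change point and its
    closest counterpart in the other set (symmetric max of the two
    directed distances)."""
    directed = lambda A, B: max(min(abs(a - b) for b in B) for a in A)
    return max(directed(T_true, T_pred), directed(T_pred, T_true))
```

For instance, true change points at $\{10, 50\}$ and predictions at $\{12, 49\}$ give a Hausdorff distance of 2.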
We now explain how the change points are computed for each dataset: for Synthetic, we know exactly by construction the positions of the change points in the target signals. For the predictions, we look for a single change point corresponding to the location of the predicted step function. We use the exact segmentation method by dynamic programming described in \cite{truong2018review} with the Python toolbox \url{http://ctruong.perso.math.cnrs.fr/ruptures-docs/build/html/index.html#} .\\
For the ECG5000 and Traffic datasets, which present sharp peaks, this change point detection algorithm is not suited (detected change points are often located at the inflexion points of peaks rather than at the exact peak location). We thus use a simple peak detection algorithm based on first-order finite differences. We tune the detection threshold and the minimum distance between detections experimentally for each dataset.
\section{Comparison to DILATE divergence variant \label{app:dilate-div}}
Blondel \textit{et al.~} \cite{blondel2020differentiable} point out two limitations for using $\text{DTW}^{\mathbf{\Delta}}_{\gamma}$ as a loss function: first, it can take negative values and second, $\text{DTW}^{\mathbf{\Delta}}_{\gamma}(\mathbf{y},\mathbf{z})$ does not reach its minimum when $\mathbf{y} = \mathbf{z}$. To address these issues, they propose a proper divergence defined as follows \cite{blondel2020differentiable}:
\begin{equation}
\text{DTW-div}^{\mathbf{\Delta}}_{\gamma}(\mathbf{y}, \mathbf{z}) = \text{DTW}^{\mathbf{\Delta}}_{\gamma}(\mathbf{y}, \mathbf{z}) \\ - \frac{1}{2} (\text{DTW}^{\mathbf{\Delta}}_{\gamma}(\mathbf{y}, \mathbf{y}) + \text{DTW}^{\mathbf{\Delta}}_{\gamma}(\mathbf{z}, \mathbf{z})).
\end{equation}
This divergence is non-negative and satisfies $ \text{DTW-div}^{\mathbf{\Delta}}_{\gamma}(\mathbf{y}, \mathbf{y}) = 0$. However, it is still not a distance function since the triangle inequality is not verified (as for the true DTW).\\
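For illustration, a NumPy sketch (not the authors' implementation) of the smoothed DTW and the resulting divergence: the dynamic program is the hard one above with $\min$ replaced by a soft-min of temperature $\gamma$, and the divergence subtracts the two self-comparison terms.

```python
import numpy as np

def soft_dtw(y, z, gamma=1.0):
    """Smoothed DTW: DP recursion with min replaced by
    soft-min_gamma(a) = -gamma * log sum_i exp(-a_i / gamma)."""
    delta = ((y[:, None, :] - z[None, :, :]) ** 2).sum(-1)  # squared-Euclidean costs
    n, m = delta.shape
    R = np.full((n + 1, m + 1), np.inf)
    R[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            prev = np.array([R[i - 1, j], R[i, j - 1], R[i - 1, j - 1]])
            lo = prev.min()  # shift for numerical stability
            R[i, j] = delta[i - 1, j - 1] + lo - gamma * np.log(np.exp(-(prev - lo) / gamma).sum())
    return R[n, m]

def dtw_div(y, z, gamma=1.0):
    """DTW divergence of Blondel et al.: non-negative, zero when y == z."""
    return soft_dtw(y, z, gamma) - 0.5 * (soft_dtw(y, y, gamma) + soft_dtw(z, z, gamma))
```

By symmetry of the correction term, `dtw_div(y, y)` is exactly zero, which is precisely the property the plain smoothed DTW lacks.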
These limitations also hold for DILATE. Consequently, we use the same normalization trick to define a proper DILATE divergence. Forecasting results in Table \ref{tab:dilate-div} show that DILATE-div is equivalent to DILATE with the Seq2Seq and N-Beats \cite{oreshkin2019n} models, and inferior to DILATE with the Informer model \cite{zhou2020informer}. This confirms the good behaviour of the DILATE loss, which does not require this renormalization.
\begin{table}[H]
\caption{Comparison between DILATE and DILATE-div on the synthetic-det dataset.}
\centering
\begin{tabular}{ccc}
\toprule
Model & MSE & DILATE \\
\midrule
Seq2Seq DILATE & \textbf{13.1 $\pm$ 1.8} & \textbf{33.7 $\pm$ 3.1} \\
Seq2Seq DILATE-div & \textbf{13.6 $\pm$ 0.9} & \textbf{33.6 $\pm$ 2.1} \\
\midrule
N-Beats \cite{oreshkin2019n} DILATE & \textbf{13.3 $\pm$ 0.7} & \textbf{37.9 $\pm$ 1.6} \\
N-Beats \cite{oreshkin2019n} DILATE-div & \textbf{13.8 $\pm$ 0.9} & \textbf{38.5 $\pm$ 1.4} \\
\midrule
Informer \cite{zhou2020informer} DILATE & \textbf{11.8 $\pm$ 0.7} & \textbf{30.1 $\pm$ 1.3} \\
Informer \cite{zhou2020informer} DILATE-div & 12.9 $\pm$ 0.1 & 31.8 $\pm$ 6.5 \\
\bottomrule
\end{tabular}
\label{tab:dilate-div}
\end{table}
\section{DILATE additional visualizations}
\label{app:dilate_visus}
We provide additional qualitative predictions with DILATE for the \texttt{Synthetic-det} in Figure \ref{fig:synth_sup}, for \texttt{ECG5000} in Figure \ref{fig:ecg_sup} and for \texttt{Traffic} in Figure \ref{fig:traffic_sup}.
\begin{figure*}
\begin{center}
\includegraphics[width=13cm]{images/synthetic_sup.png}
\end{center}
\caption{Qualitative predictions for the \texttt{Synthetic-det} dataset.}
\label{fig:synth_sup}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=13cm]{images/ecg_sup.png}
\end{center}
\caption{Qualitative predictions for the \texttt{ECG5000} dataset.}
\label{fig:ecg_sup}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=13cm]{images/traffic_sup.png}
\end{center}
\caption{Qualitative predictions for the \texttt{Traffic} dataset.}
\label{fig:traffic_sup}
\end{figure*}
\clearpage{\pagestyle{empty}\cleardoublepage}
\section{PhyDNet model}
\subsection{Discrete PhyCell derivation}
\label{app:phycell-deriv}
\noindent PhyCell dynamics is governed by the PDE:
\begin{align*}
\dfrac{\partial \mathbf{h}}{\partial t}(t,\mathbf{x}) &= \Phi(\mathbf{h}) + \mathcal{C}(\mathbf{h},\mathbf{u}) \\
&= \Phi(\mathbf{h}(t,\mathbf{x})) + \mathbf{K}(t,\mathbf{x}) \odot (\mathbf{E}(\mathbf{u}(t,\mathbf{x})) - (\mathbf{h}(t,\mathbf{x}) + \Phi(\mathbf{h}(t,\mathbf{x})))).
\end{align*}
By the Euler discretization $\frac{\partial \mathbf{h}}{\partial t} \approx \delta \mathbf{h}_t = \mathbf{h}_{t+1} - \mathbf{h}_{t}$, we get:
\begin{align*}
\mathbf{h}_{t+1} - \mathbf{h}_t &= \Phi(\mathbf{h}_t) + \mathbf{K}_t \odot (\mathbf{E}(\mathbf{u}_t) -(\mathbf{h}_t+\Phi(\mathbf{h}_t))) \\
\mathbf{h}_{t+1} &= \mathbf{h}_t + \Phi(\mathbf{h}_t) + \mathbf{K}_t \odot (\mathbf{E}(\mathbf{u}_t) -(\mathbf{h}_t+\Phi(\mathbf{h}_t))) \\
\mathbf{h}_{t+1} &= (1-\mathbf{K}_t) \odot (\mathbf{h}_t+\Phi(\mathbf{h}_t)) + \mathbf{K}_t \odot \mathbf{E}(\mathbf{u}_t).
\end{align*}
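The discrete PhyCell update can be sketched in a few lines of NumPy; here `phi`, `E` and the Kalman-like gain `K` are placeholders for the learned physical predictor, encoder and gating maps.

```python
import numpy as np

def phycell_step(h, u, phi, E, K):
    """One discrete PhyCell update.
    Prediction:  h_tilde = h + phi(h)                 (physical prior)
    Correction:  h_next  = h_tilde + K * (E(u) - h_tilde)
    which is algebraically (1 - K) * (h + phi(h)) + K * E(u)."""
    h_tilde = h + phi(h)
    return h_tilde + K * (E(u) - h_tilde)
```

With $\mathbf{K}_t = 1$ the update discards the physical prediction in favour of the observation encoding, and with $\mathbf{K}_t = 0$ it runs the physical prior alone.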
\subsection{Moment matrix}
\label{app:moment-matrix}
For a filter $\mathbf{w}$ of size $k \times k$, the moment matrix $\mathbf{M(w)}$ is a matrix of size $k \times k$ defined as:
\begin{equation*}
\mathbf{M}(\mathbf{w})_{i,j} = \frac{1}{i! j!} \sum_{u=-\frac{k-1}{2}}^{\frac{k-1}{2}} \sum_{v=-\frac{k-1}{2}}^{\frac{k-1}{2}} u^i v^j \mathbf{w}[u,v],
\end{equation*}{}
for $i,j=0,...,k-1$.
For any function $h:\mathbb{R}^2 \longrightarrow \mathbb{R}$, we consider the convolution of $h$ with the filter $\mathbf{w}$. Taylor's expansion gives:
\begin{flalign*}
\sum_{u=-\frac{k-1}{2}}^{\frac{k-1}{2}} \sum_{v=-\frac{k-1}{2}}^{\frac{k-1}{2}} &\mathbf{w}[u,v] \, h(x + \delta x \cdot u, y + \delta y \cdot v) \\
&= \sum_{u=-\frac{k-1}{2}}^{\frac{k-1}{2}} \sum_{v=-\frac{k-1}{2}}^{\frac{k-1}{2}} \mathbf{w}[u,v] \sum_{i,j=0}^{k-1} \frac{\partial^{i+j} h}{\partial x^i \partial y^j}(x,y) \frac{u^i v^j}{i! j!} \delta x^i \delta y^j + o(|\delta x|^{k-1} + |\delta y|^{k-1}) \\
&= \sum_{i,j=0}^{k-1} \mathbf{M}(\mathbf{w})_{i,j} \, \delta x^i \delta y^j \frac{\partial^{i+j} h}{\partial x^i \partial y^j}(x,y) + o(|\delta x|^{k-1} + |\delta y|^{k-1}).
\end{flalign*}{}
This equation shows that we can control the differential order approximated by the filter $\mathbf{w}$ by imposing constraints on its moment matrix $\mathbf{M(w)}$.
For example, in order to approximate the differential operator $\frac{\partial^{a+b}}{\partial x^{a} \partial y^{b}} (.)$, it suffices to impose $\mathbf{M(w)}_{a,b} = 1$ and $\mathbf{M(w)}_{i,j} = 0$ for $(i,j) \neq (a,b)$. By denoting $\mathbf{\Delta}^k_{i,j}$ the Kronecker matrix of size $k \times k$, which equals 1 at position $(i,j)$ and 0 elsewhere, we thus enforce the moment matrix $\mathbf{M(w)}$ to match the target $\mathbf{\Delta}^k_{a,b}$ in the Frobenius norm. This justifies the choice of our moment loss for enforcing each filter $\mathbf{w}^k_{p,i,j}$ to approximate the corresponding derivative $\frac{\partial^{i+j}}{\partial x^{i} \partial y^{j}} (.)$:
\begin{equation*}
\mathcal{L}_{\text{moment}} = \sum\limits_{i \leq k} \sum\limits_{j \leq k} ||\mathbf{M}(\mathbf{w}^k_{p,i,j}) - \mathbf{\Delta}^k_{i,j} ||_F.
\label{eq:lmoment}
\end{equation*}
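A NumPy sketch of the moment matrix and the resulting loss term for a single filter (function names are illustrative; rows of `w` are assumed indexed by the $u$ offset and columns by $v$). The central finite-difference filter, for instance, has moment matrix exactly $\mathbf{\Delta}^3_{0,1}$ and thus zero moment loss for the target $(a,b)=(0,1)$.

```python
import math
import numpy as np

def moment_matrix(w):
    """M(w)_{i,j} = (1/(i! j!)) * sum_{u,v} u^i v^j w[u,v]
    for an odd-size k x k filter, offsets -(k-1)/2 .. (k-1)/2."""
    k = w.shape[0]
    offs = np.arange(k) - (k - 1) // 2
    M = np.empty((k, k))
    for i in range(k):
        for j in range(k):
            M[i, j] = (np.outer(offs ** i, offs ** j) * w).sum() / (math.factorial(i) * math.factorial(j))
    return M

def moment_loss(w, a, b):
    """Frobenius distance between M(w) and the Kronecker target Delta^k_{a,b}."""
    target = np.zeros_like(w)
    target[a, b] = 1.0
    return np.linalg.norm(moment_matrix(w) - target)
```

Minimizing this loss over all filters of the convolutional predictor constrains each filter to approximate one specific spatial derivative.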
\subsection{Prediction mode training}
We show in Section \ref{sec:pdernn} that the decomposition $\bm{\mathcal{M}}_r(\mathbf{h},\mathbf{u}) = \Phi(\mathbf{h})+ \mathcal{C}(\mathbf{h},\mathbf{u})$ still holds for standard Seq2Seq models (RNN, GRU, LSTM). As mentioned in Chapter \ref{chap:phydnet}, the resulting predictor $\Phi$ is, however, naive and useless for multi-step prediction, i.e.~ $\Phi(\mathbf{h})=-\mathbf{h}$ and $\mathbf{\tilde{h}}_{t+1}=0$.
In multi-step prediction, the option followed by standard Seq2Seq models is to recursively feed predictions back as inputs for the next time steps. Scheduled Sampling \cite{bengio2015scheduled} is a solution to mitigate error accumulation and the train/test discrepancy, which we use in our ConvLSTM branch. This is, however, inferior to the results obtained with our PhyCell trained in the "prediction-only" mode, as shown in Section \ref{sec:expe_prediction}.
\subsubsection{PDE formulation for standard RNNs}
\label{sec:pdernn}
\paragraph{Vanilla RNN}
The equations for the vanilla RNN are:
\begin{equation*}
\mathbf{h}_t = \tanh(\mathbf{W}_h \mathbf{h}_{t-1} + \mathbf{W}_u \mathbf{u}_t + \mathbf{b} ),
\end{equation*}
with weight matrices $\mathbf{W}_h$, $\mathbf{W}_u$ and bias $\mathbf{b}$. By approximating $\frac{\partial \mathbf{h}}{\partial t}= \delta \mathbf{h}_t = \mathbf{h}_t - \mathbf{h}_{t-1}$, we get the PDE:
\begin{align*}
\dfrac{\partial \mathbf{h}}{\partial t}(t,\mathbf{x}) &= \bm{\mathcal{M}}(\mathbf{h},\mathbf{u}) \\ &=
\tanh(\mathbf{W}_h \mathbf{h}(t) + \mathbf{W}_u \mathbf{u}(t) + \mathbf{b} ) - \mathbf{h}(t).
\end{align*}
A linear decoupling of this PDE is
\begin{equation*}
\frac{\partial \mathbf{h}}{\partial t}(t,\mathbf{x}) = \Phi(\mathbf{h}) + \mathcal{C}(\mathbf{h},\mathbf{u}),
\end{equation*}
with $\Phi(\mathbf{h}) = -\mathbf{h}(t)$ and $\mathcal{C}(\mathbf{h},\mathbf{u}) = \tanh(\mathbf{W}_h \mathbf{h}(t) + \mathbf{W}_u \mathbf{u}(t) + \mathbf{b} ) $ which gives in discrete time the prediction-correction scheme:
\begin{empheq}[left=\empheqlbrace]{alignat=2}
& \tilde{\mathbf{h}}_{t+1}= 0 \label{eq:prediction}\\
& \mathbf{h}_{t+1} = \tilde{\mathbf{h}}_{t+1} + \tanh \left(\mathbf{W}_h \mathbf{h}_{t} + \mathbf{W}_u \mathbf{u}_t + \mathbf{b} \right). \label{eq:correction}
\end{empheq}
We see that the prior predictor $\Phi$ brings no information and that the correction step drives the whole dynamics.
\paragraph{Gated Recurrent Unit (GRU)}
The equations of the Gated Recurrent Unit \cite{cho2014learning} are:
\begin{align*}
\mathbf{r}_t &= \sigma(\mathbf{W}_{rh} \mathbf{h}_{t-1} + \mathbf{W}_{ru} \mathbf{u}_t + \mathbf{b}_r) \\
\mathbf{z}_t &= \sigma(\mathbf{W}_{zh} \mathbf{h}_{t-1} + \mathbf{W}_{zu} \mathbf{u}_t + \mathbf{b}_z) \\
\mathbf{g}_t &= \tanh(\mathbf{W}_{gh} (\mathbf{r}_t \odot \mathbf{h}_{t-1}) + \mathbf{W}_{gu} \mathbf{u}_t + \mathbf{b}_g) \\
\mathbf{h}_t &= \mathbf{z}_t \odot \mathbf{h}_{t-1} + (1-\mathbf{z}_t) \odot \mathbf{g}_t,
\end{align*}
where $\mathbf{r}_t$ is the reset gate, $\mathbf{z}_t$ is the update gate and $\mathbf{g}_t$ is the update vector. By approximating $\frac{\partial \mathbf{h}}{\partial t}= \delta \mathbf{h}_t = \mathbf{h}_t - \mathbf{h}_{t-1}$, we get the PDE:
\begin{align*}
\dfrac{\partial \mathbf{h}} {\partial t}(t,\mathbf{x}) &= \bm{\mathcal{M}}(\mathbf{h},\mathbf{u}) \\
&= \mathbf{z}(t) \odot \mathbf{h}(t) + (1-\mathbf{z}(t)) \odot \mathbf{g}(t) - \mathbf{h}(t).
\end{align*}{}
A linear decoupling of this PDE is
\begin{equation*}
\frac{\partial \mathbf{h}}{\partial t}(t,\mathbf{x}) = \Phi(\mathbf{h}) + \mathcal{C}(\mathbf{h},\mathbf{u}) ,
\end{equation*}
with $\Phi(\mathbf{h}) = -\mathbf{h}(t)$ and $\mathcal{C}(\mathbf{h},\mathbf{u}) = \mathbf{z}(t) \odot \mathbf{h}(t) + (1-\mathbf{z}(t)) \odot \mathbf{g}(t)$ which gives in discrete time the prediction-correction scheme:
\begin{empheq}[left=\empheqlbrace]{alignat=2}
& \tilde{\mathbf{h}}_{t+1}= 0 \\
& \mathbf{h}_{t+1} = \tilde{\mathbf{h}}_{t+1} + \mathbf{z}_t \odot \mathbf{h}_{t} + (1-\mathbf{z}_t) \odot \mathbf{g}_t .
\end{empheq}
We again see that the prior predictor $\Phi$ brings no information and that the correction step drives the whole dynamics.
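For concreteness, the GRU update above can be sketched in NumPy (the weight names in the dictionary `p` are illustrative, not a library API); the new state is a convex combination of the previous state and the candidate state.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(h_prev, u, p):
    """One GRU update with weight matrices W_* and biases b_* stored in p."""
    r = sigmoid(p["Wrh"] @ h_prev + p["Wru"] @ u + p["br"])        # reset gate
    z = sigmoid(p["Wzh"] @ h_prev + p["Wzu"] @ u + p["bz"])        # update gate
    g = np.tanh(p["Wgh"] @ (r * h_prev) + p["Wgu"] @ u + p["bg"])  # candidate state
    return z * h_prev + (1.0 - z) * g  # i.e. delta(h) = (1 - z) * (g - h_prev)
```

Written as $\delta \mathbf{h}_t = (1-\mathbf{z}_t) \odot (\mathbf{g}_t - \mathbf{h}_{t-1})$, this makes the PDE form used in the decoupling above explicit.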
\paragraph{Long Short-Term Memory (LSTM)}
We give the formulation for the standard LSTM \cite{Hochreiter:1997:LSM:1246443.1246450} (the ConvLSTM \cite{xingjian2015convolutional} can be immediately deduced by replacing matrix products by convolutions):
\begin{align*}
\mathbf{i}_t &= \sigma (\mathbf{W}_{ih} \mathbf{h}_{t-1} + \mathbf{W}_{iu} \mathbf{u}_t + \mathbf{b}_i) \\
\mathbf{f}_t &= \sigma (\mathbf{W}_{fh} \mathbf{h}_{t-1} + \mathbf{W}_{fu} \mathbf{u}_t + \mathbf{b}_f) \\
\mathbf{g}_t &= \tanh (\mathbf{W}_{gh} \mathbf{h}_{t-1} + \mathbf{W}_{gu} \mathbf{u}_t + \mathbf{b}_g) \\
\mathbf{c}_t &= \mathbf{f}_t \odot \mathbf{c}_{t-1} + \mathbf{i}_t \odot \mathbf{g}_t \\
\mathbf{o}_t &= \sigma (\mathbf{W}_{oh} \mathbf{h}_{t-1} + \mathbf{W}_{ou} \mathbf{u}_t + \mathbf{b}_o) \\
\mathbf{h}_t &= \mathbf{o}_t \odot \tanh(\mathbf{c}_t).
\end{align*}
where $\mathbf{i}_t$ is the input gate, $\mathbf{f}_t$ the forget gate, $\mathbf{g}_t$ the input-modulation gate, $\mathbf{o}_t$ the output gate, $\mathbf{c}_t$ the cell state and $\mathbf{h}_t$ the latent state. We define the LSTM augmented latent state as:
\begin{equation*}
\Bar{\mathbf{h}} = \begin{pmatrix}
\mathbf{h} \\ \mathbf{c}
\end{pmatrix}.
\end{equation*}
The augmented state $\mathbf{\Bar{h}}$ thus verifies the PDE:
\begin{equation*}
\dfrac{\partial \Bar{\mathbf{h}}}{\partial t} = \begin{pmatrix}
\dfrac{\partial \mathbf{h}}{\partial t} \\ \dfrac{\partial \mathbf{c}}{\partial t} \end{pmatrix} = \begin{pmatrix}
\mathbf{o}(t) \odot \tanh(\mathbf{c}(t)) - \mathbf{h}(t) \\ \mathbf{f}(t) \odot \mathbf{c}(t) + \mathbf{i}(t) \odot \mathbf{g}(t) - \mathbf{c}(t)
\end{pmatrix}.
\end{equation*}
A linear decoupling of this PDE is
\begin{equation*}
\frac{\partial \mathbf{\Bar{h}}}{\partial t}(t,\mathbf{x}) = \Phi(\mathbf{\Bar{h}}) + \mathcal{C}(\mathbf{\Bar{h}},\mathbf{u}) ,
\end{equation*}
with $\Phi(\mathbf{\Bar{h}}) = -\mathbf{\Bar{h}}(t)$ and
\begin{equation*}
\mathcal{C}(\mathbf{\Bar{h}},\mathbf{u}) = \begin{pmatrix}
\mathbf{o}(t) \odot \tanh(\mathbf{c}(t)) \\ \mathbf{f}(t) \odot \mathbf{c}(t) + \mathbf{i}(t) \odot \mathbf{g}(t)
\end{pmatrix},
\end{equation*}
which gives in discrete time the prediction-correction scheme:
\begin{empheq}[left=\empheqlbrace]{alignat=2}
& \tilde{\Bar{\mathbf{h}}}_{t+1} \!= \Bar{\mathbf{h}}_{t} + \Phi(\Bar{\mathbf{h}}_{t}) = 0 & \!\!\!\quad \text{\small{\textbf{Prediction}\!}} \label{eq:prediction-lstm}\\
& \Bar{\mathbf{h}}_{t+1} \!= \tilde{\Bar{\mathbf{h}}}_{t+1} + \mathcal{C}(\Bar{\mathbf{h}}_{t},\mathbf{u}_t). & \!\!\! \quad \text{\small{\textbf{Correction}\!}} \label{eq:correction-lstm}
\end{empheq}
We again see that the prior predictor $\Phi$ brings no information and that the correction step drives the whole dynamics.
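As a concrete check of the gate equations above, a minimal pure-Python LSTM step can be sketched as follows (scalar state and hypothetical toy weights; a sketch only, not the ConvLSTM used in PhyDNet):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(h_prev, c_prev, u, W, b):
    """One scalar LSTM step following the gate equations above.
    W and b hold hypothetical toy weights: gate g reads
    W[g][0]*h_prev + W[g][1]*u + b[g]."""
    pre = {g: W[g][0] * h_prev + W[g][1] * u + b[g] for g in "ifgo"}
    i = sigmoid(pre["i"])      # input gate
    f = sigmoid(pre["f"])      # forget gate
    g = math.tanh(pre["g"])    # input-modulation gate
    o = sigmoid(pre["o"])      # output gate
    c = f * c_prev + i * g     # cell state update
    h = o * math.tanh(c)       # latent state update
    return h, c
```

With all weights and biases at zero, every sigmoid gate equals 0.5 and the modulation gate vanishes, so the cell state is simply halved at each step.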
\section{Experiments}
\subsection{Model architectures and training}
\label{app:phydnet-impl}
\paragraph{Model architectures}
We give here the architecture of the encoder and decoder for all datasets. They share common building blocks, composed of convolutions, GroupNorm normalization layers \cite{wu2018group} and LeakyReLU non-linearities. For each of the following architectures, we use skip connections from the encoder to the decoder, as classically done, e.g.~ in \cite{denton2017unsupervised}. We define:
\begin{itemize}
\item conv-block(input, output, stride) = \{Conv2D + GroupNorm + LeakyReLU(0.2)\}
\item upconv-block(input, output, stride) = \{TransposedConv2D + GroupNorm + LeakyReLU(0.2)\}
\item upconv(input, output, stride) = TransposedConv2D(input, output, stride)
\end{itemize}
\textbf{Moving MNIST:}
\begin{table}[H]
\centering
\begin{tabular}{c|c}
\toprule
Encoder & Decoder \\ \hline
conv-block(1,8,1) & upconv-block(128,64,1) \\
conv-block(8,16,1) & upconv-block(128,32,2) \\
conv-block(16,32,2) & upconv-block(64,32,1) \\
conv-block(32,32,1) & upconv-block(64,16,2) \\
conv-block(32,64,2) & upconv-block(32,8,1) \\
conv-block(64,64,1) & upconv(16,1,1) \\
\bottomrule
\end{tabular}
\end{table}
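The channel/stride specifications above can be sanity-checked with a small shape tracer. The 3$\times$3 kernels with padding 1 and the 64$\times$64 input frames are assumptions for illustration; the tables only specify channels and strides:

```python
def conv_out(n, stride, kernel=3, padding=1):
    # Spatial size after one Conv2D; kernel 3 and padding 1 are assumptions,
    # the tables above only specify channels and strides.
    return (n + 2 * padding - kernel) // stride + 1

def encoder_trace(size, blocks):
    """Trace (out_channels, spatial_size) through conv-blocks given as
    (in_ch, out_ch, stride) tuples, as in the table above."""
    trace = []
    for _, cout, s in blocks:
        size = conv_out(size, s)
        trace.append((cout, size))
    return trace

# Moving MNIST encoder, assuming 64x64 input frames
mm_encoder = [(1, 8, 1), (8, 16, 1), (16, 32, 2),
              (32, 32, 1), (32, 64, 2), (64, 64, 1)]
print(encoder_trace(64, mm_encoder))
# -> [(8, 64), (16, 64), (32, 32), (32, 32), (64, 16), (64, 16)]
```

The two stride-2 blocks halve the spatial resolution twice, which matches the two stride-2 upconv-blocks of the decoder.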
\textbf{Traffic:}
\begin{table}[H]
\centering
\begin{tabular}{c|c}
\toprule
Encoder & Decoder \\ \hline
conv-block(2,32,1) & upconv-block(256,64,1) \\
conv-block(32,64,2) & upconv-block(128,32,2) \\
conv-block(64,128,1) & upconv(64,2,1) \\
\bottomrule
\end{tabular}
\end{table}
\textbf{SST:}
\begin{table}[H]
\centering
\begin{tabular}{c|c}
\toprule
Encoder & Decoder \\ \hline
conv-block(1,32,1) & upconv-block(256,64,1) \\
conv-block(32,64,2) & upconv-block(128,32,2) \\
conv-block(64,128,1) & upconv(64,1,1) \\
\bottomrule
\end{tabular}
\end{table}
\break
\textbf{Human 3.6:}
\begin{table}[H]
\centering
\begin{tabular}{c|c}
\toprule
Encoder & Decoder \\ \hline
conv-block(3,16,1) & upconv-block(256,128,1) \\
conv-block(16,32,1) & upconv-block(256,64,2) \\
conv-block(32,64,2) & upconv-block(128,64,1) \\
conv-block(64,64,1) & upconv-block(128,32,2) \\
conv-block(64,128,2) & upconv-block(64,16,1) \\
conv-block(128,128,1) & upconv(32,3,1) \\
\bottomrule
\end{tabular}
\end{table}
\paragraph{Influence of $\lambda$} We show in Figure \ref{fig:lambda} the influence of the parameter $\lambda$ balancing $\mathcal{L}_{\text{image}}$ and $\mathcal{L}_{\text{moment}}$ when training PhyDNet on the Moving MNIST dataset. When $\lambda$ decreases towards 0, the MSE tends towards the unconstrained case at 29. The MSE reaches a minimum around $\lambda = 1$. When $\lambda$ further increases, the physical regularization is too strong and the MSE rises above 30. In the paper, we fix $\lambda = 1$ for all datasets.
\begin{figure}[H]
\centering
\includegraphics[width=7cm]{images/lambda.png}
\caption{Influence of hyperparameter $\lambda$ when training PhyDNet on the Moving MNIST dataset.}
\label{fig:lambda}
\end{figure}
\subsection{State-of-the-art comparison}
\label{app:compa-villegas}
We show here that PhyDNet results on Human 3.6 are equivalent to those of a recent baseline that explicitly uses additional human pose annotations \cite{villegas2017learning}. In the supplementary material of their paper \cite{villegas2017learning}, the authors evaluate their model with Peak Signal-to-Noise Ratio (PSNR) curves with respect to the forecasting horizon, for all deciles of motion in Human 3.6 videos. For prediction horizons up to $H=4$, their method obtains a PSNR always below 21, except for the $1^{st}$ decile (with the least human motion) where it is around 22. In comparison, PhyDNet attains a per-frame MSE of 369, corresponding to a PSNR of 21.2. This shows that PhyDNet performs similarly to \cite{villegas2017learning} for the prediction horizon considered, without requiring additional human pose annotations.
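The MSE-to-PSNR conversion used in this comparison can be sketched as follows; the 8-bit pixel range below is an assumption for illustration, and the exact value depends on the normalization used:

```python
import math

def psnr_from_mse(mse, max_val=255.0):
    # Peak Signal-to-Noise Ratio from a per-frame MSE, assuming pixel
    # values in [0, max_val] (the normalization is an assumption here).
    return 10.0 * math.log10(max_val ** 2 / mse)
```

Lower MSE maps to higher PSNR; an MSE equal to the squared peak value gives a PSNR of 0 dB.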
\subsection{Ablation study}
We give in Figure \ref{fig:ablation} additional visualisations completing the ablation study. We qualitatively analyze partial predictions of PhyDNet for the physical branch $\hat{\mathbf{u}}^{\mathbf{p}}_{t+1} = \mathbf{D}(\mathbf{h}^{\mathbf{p}}_{t+1})$ and the residual branch $\hat{\mathbf{u}}^{\mathbf{r}}_{t+1} = \mathbf{D}(\mathbf{h}^{\mathbf{r}}_{t+1})$. For Moving MNIST (a) and Human 3.6 (d), $\mathbf{h^p}$ captures coarse localisations of objects, while $\mathbf{h^r}$ captures fine-grained details that are not useful for the physical model. For Traffic BJ, $\mathbf{h^p}$ captures the main patterns of the road network, while $\mathbf{h^r}$ models the remaining details. Finally, for SST, the visual difference between $\mathbf{h^p}$ and $\mathbf{h^r}$ is subtler, but the cooperation between both branches is crucial, as shown by the quantitative results.
\begin{table*}
\caption[PhyDNet detailed ablation study.]{A detailed ablation study shows the impact of the physical regularization $\mathcal{L}_{\text{moment}}$ on the performances of PhyCell and PhyDNet for all datasets.}
\begin{adjustbox}{max width=\linewidth}
\begin{tabular}{l|lll|lll|lll|lll}
\toprule
\multicolumn{1}{c}{Method} & \multicolumn{3}{|c|}{\textbf{Moving MNIST}} & \multicolumn{3}{|c|}{\textbf{Traffic BJ}} & \multicolumn{3}{|c|}{\textbf{Sea Surface Temperature}} & \multicolumn{3}{|c}{\textbf{Human 3.6}} \\
\midrule
~ & MSE & MAE & SSIM & MSE $\times$ 100 & MAE & SSIM & MSE $\times$ 10 & MAE & SSIM & MSE /10 & MAE /100 & SSIM \\ \midrule
ConvLSTM & 103.3 & 182.9 & 0.707 & $48.5^*$ & $17.7^*$ & $0.978^*$ & $45.6^*$ & $63.1^*$ & $0.949^*$ & $50.4^*$ & $18.9^*$ & $0.776^*$ \\
PhyCell & 50.8 & 129.3 & 0.870 & 48.9 & 17.9 & 0.978 & 38.2 & 60.2 & 0.969 & 42.5 & 18.3 & 0.891 \\
PhyCell without $\mathcal{L}_{\text{moment}}$ & 43.4 & 112.8 & 0.895 & 43.6 & 16.89 & 0.980 & 35.4 & 56.0 & 0.970 & 39.6 & 17.4 & 0.894 \\
PhyDNet & \textbf{24.4} & \textbf{70.3} & \textbf{0.947} & \textbf{41.9} & \textbf{16.2} & \textbf{0.982} & \textbf{31.9} & 53.3 & \textbf{0.972} & 36.9 & 16.2 & 0.901 \\
PhyDNet without $\mathcal{L}_{\text{moment}}$ & 29.0 & 81.2 & 0.934 & 43.9 & 16.6 & 0.981 & 32.3 & \textbf{53.1} & 0.971 & \textbf{36.7} & \textbf{15.9} & \textbf{0.904} \\
\bottomrule
\end{tabular}
\end{adjustbox}
\label{tab:app-ablation}
\end{table*}
\subsection{Influence of physical regularization}
\label{app:phydnet-influence}
We provide the detailed ablation study for all datasets in Table \ref{tab:app-ablation}, which complements Table \ref{tab:ablation}. When we disable $\mathcal{L}_{\text{moment}}$ for training PhyCell, performances improve for all datasets (improvement of 7 MSE points for Moving MNIST, 5 points for Traffic BJ, 3 points for SST and Human 3.6). This again shows that physical constraints alone are too restrictive for learning dynamics in a general context, where other factors are required for prediction. When we further include PhyCell in our two-branch disentangling architecture PhyDNet, there is another huge performance gain compared to PhyCell (improvement of 25 MSE points on Moving MNIST, 7 points for Traffic and SST, 5 points for Human 3.6). We also remark that when we disable $\mathcal{L}_{\text{moment}}$ for training PhyDNet, we get worse performances (drop of 5 MSE points for Moving MNIST and 2 points for Traffic) or equivalent performances (difference below 0.5 MSE point for SST and Human 3.6). This again confirms the relevance of physical constraints.
\subsection{Additional visualisations}
\label{app:phydnet-visu}
We give further qualitative predictions of PhyDNet on Traffic BJ (Figure \ref{fig:taxi}), with a comparison to Memory in Memory (MIM) \cite{wang2019memory}, the state of the art for this dataset. We see that PhyDNet leads to sharper results and a lower absolute error. Interestingly, PhyDNet absolute errors are approximately spatially uniform, whereas MIM errors tend to be higher at a few key locations of the Beijing road network.
We also provide additional prediction visualisations for Sea Surface Temperature (Figure \ref{fig:sst}) and Human 3.6 (Figure \ref{fig:human}) which confirm the good behaviour of PhyDNet.
We add a detailed qualitative comparison to DDPAE in Figure \ref{fig:compa-ddpae}. DDPAE is a specific disentangling method for Moving MNIST that extracts the positions of the two digits and tracks them with a predictive recurrent neural network. In this example, DDPAE fails to disentangle the two digits (components 1 and 2) in Figure \ref{fig:compa-ddpae} when they overlap in the input sequence, resulting in blurry predictions. In contrast, PhyDNet successfully learns a latent space in which the two digits are disentangled, resulting in far better predictions in terms of sharpness and position of the digits.
\begin{figure*}
\centering
\includegraphics[width=17cm]{images/visu_taxi.png}
\caption[PhyDNet additional qualitative results for Traffic BJ.]{Additional qualitative results for Traffic BJ and comparison to Memory In Memory \cite{wang2019memory}. We see that PhyDNet absolute errors are smaller than MIM errors, and independent of the spatial structure of the road network.}
\label{fig:taxi}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=17cm]{images/sst_sup.png}
\caption{PhyDNet additional qualitative results for Sea Surface Temperature.}
\label{fig:sst}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=13cm]{images/human_sup1.png}
\caption{PhyDNet additional qualitative results for Human 3.6.}
\label{fig:human}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=17cm]{images/mm_visu1.png}
\caption{Detailed qualitative comparison to DDPAE \cite{hsieh2018learning} on Moving MNIST dataset.}
\label{fig:compa-ddpae}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{images/global_visus4.png}
\caption{PhyDNet additional ablation visualisations for all datasets.}
\label{fig:ablation}
\end{figure*}
\clearpage{\pagestyle{empty}\cleardoublepage}
\section{STRIPE implementation details}
\label{app:stripe}
\textbf{Neural network architectures:} STRIPE++ is composed of a sequence-to-sequence predictive model. The encoder is a recurrent neural network (RNN) with 1 layer of 128 Gated Recurrent Units (GRU) \cite{cho2014learning}, producing a latent state $h$ of dimension 128. We fixed by cross-validation the dimension of each diversifying variable $z_s$ or $z_t$ to $k=8$. The decoder is another RNN with $128+8+8=144$ GRU units, followed by fully connected layers responsible for producing the future trajectory.\\
The posterior network has a similar architecture to the encoder: it is an RNN with 1 layer of 128 GRU units that takes as input the full series $(\mathbf{x}_{1:T},\mathbf{y}^*_{T+1:T+H})$, followed by two multi-layer perceptrons (MLP) dedicated to outputting the parameters $(\mu_s^*,\sigma_s^*)$ and $(\mu_t^*,\sigma_t^*)$ of the Gaussian distribution from which to sample the posterior diversifying variables $z_s^*$ and $z_t^*$.\\
The STRIPE$^{++}_{\text{shape}}$ and STRIPE$^{++}_{\text{time}}$ proposal mechanisms build on top of the encoder (which produces $h$) with an MLP with 3 layers of 512 neurons (with Batch Normalization and LeakyReLU activations) and a final linear layer to produce $N=10$ latent codes of dimension $k=8$ (corresponding to the proposals for $z_s$ or $z_t$).\\
\paragraph{STRIPE hyperparameters:} We cross-validated the relevant hyperparameters of STRIPE:
\begin{itemize}
\setlength{\itemsep}{5pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item $k$: dimension of the diversifying latent variables $z$. This dimension should be chosen relative to the hidden size of the RNN encoders and decoders (128 in our experiments). We fixed $k=8$ in all cases.
\item $N$: the number of future trajectories to sample. We fixed $N=10$. We performed a sensitivity analysis of this parameter in Figure 8 of the paper.
\item $\mu = 20$: quality constraint hyperparameter in the DPP kernels.
\end{itemize}
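As a toy illustration of why the determinant in a DPP rewards diverse sets (this is not STRIPE's actual kernel, which combines a quality term weighted by $\mu$ with shape/time diversity terms):

```python
def gram_det(v1, v2):
    """Determinant of the 2x2 Gram matrix of two embeddings: it vanishes
    for colinear vectors and grows as they become more dissimilar."""
    dot = lambda x, y: sum(a * b for a, b in zip(x, y))
    return dot(v1, v1) * dot(v2, v2) - dot(v1, v2) * dot(v2, v1)

print(gram_det([1.0, 0.0], [1.0, 0.0]))  # identical trajectories -> 0.0
print(gram_det([1.0, 0.0], [0.0, 1.0]))  # orthogonal trajectories -> 1.0
```

A DPP assigns to a set of trajectories a probability proportional to such a determinant, so duplicated scenarios receive zero mass.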
\section{STRIPE additional visualizations}
We provide additional visualizations for the Traffic and Electricity datasets that confirm that STRIPE predictions are both diverse and sharp.
\subsection{Electricity}
\begin{tabular}{cc}
\includegraphics[width=8cm]{images/elec_STRIPE_50.png} &
\includegraphics[width=8cm]{images/elec_STRIPE_52.png} \\
\includegraphics[width=8cm]{images/elec_STRIPE_55.png}&
\includegraphics[width=8cm]{images/elec_STRIPE_103.png}\\
\includegraphics[width=8cm]{images/elec_STRIPE_105.png}& \includegraphics[width=8cm]{images/elec_STRIPE_209.png}
\end{tabular}
\subsection{Traffic}
\begin{tabular}{cc}
\includegraphics[width=8cm]{images/traffic_4.png} &
\includegraphics[width=8cm]{images/traffic_113.png} \\ \includegraphics[width=8cm]{images/traffic_129.png} &
\includegraphics[width=8cm]{images/traffic_175.png}\\
\includegraphics[width=8cm]{images/traffic_186.png} &
\includegraphics[width=8cm]{images/traffic_211.png} \\
\end{tabular}
\clearpage{\pagestyle{empty}\cleardoublepage}
\section{Summary of contributions}
\lettrine[lines=3]{F}rom a general perspective, we have explored in this thesis how to incorporate prior knowledge into machine learning for improving spatio-temporal forecasting models. More specifically, we have studied two important scientific challenges.
\subsection{Multistep forecasting of non-stationary dynamics}
In many real-world applications, time series present non-stationary dynamics with possible sharp variations, e.g.~ traffic flows, financial stocks, or solar irradiance time series. Current state-of-the-art deep learning methods for multistep deterministic and probabilistic forecasting struggle to properly predict these abrupt events: their predictions often smooth out the sharp variations and/or present a temporal misalignment. One of the reasons is that most works focus on neural network architecture design and overlook the choice of the training loss function. The dominantly used loss function is the mean squared error (MSE), which is unable to take into account global information about the multistep dynamics.
In this thesis, we have shown that it is possible to design dedicated multistep loss functions that impose a desired behaviour on the output. For time series, we focus on shape and temporal criteria that are commonly used as assessment metrics in applications. In Chapter \ref{chap:criteria}, we have drawn a panorama of shape and temporal criteria based on smooth approximations of Dynamic Time Warping (DTW) and the Time Distortion Index (TDI). We have expressed them both as dissimilarities (loss functions) and similarities (positive semi-definite kernels). We have insisted on their differentiability, which is an important requirement for training models with gradient-based optimization, and proposed optimized implementations of these losses for efficient back-propagation training.
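For reference, the classical (non-differentiable) DTW that these smooth relaxations approximate fits in a few lines; the smooth versions replace the hard min below by a soft-min:

```python
def dtw(a, b):
    """Classical dynamic time warping between two sequences,
    with squared pointwise cost."""
    INF = float("inf")
    n, m = len(a), len(b)
    # D[i][j]: cost of the best alignment of a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

The hard min makes the loss piecewise constant in the alignment, which is precisely what the smooth approximations of Chapter \ref{chap:criteria} address.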
We have then applied the proposed shape and time differentiable criteria to two spatio-temporal forecasting contexts. In Chapter \ref{chap:dilate}, we have introduced a differentiable loss function (DILATE), that combines a shape term and a temporal term, for training any deep forecasting model to produce multistep deterministic forecasts. We have shown that training with DILATE produces sharper predictions with a better temporal localization than training with the standard MSE, while maintaining the performances with MSE evaluation.
In Chapter \ref{chap:stripe}, we have proposed the STRIPE model for probabilistic forecasting. In order to produce a limited set of possible scenarios that reflect the shape and temporal variability of ground truth trajectories, the STRIPE model is equipped with a diversification mechanism that structures the output diversity. This is done with a diversity loss relying on determinantal point processes (DPP), using the shape and temporal criteria introduced in Chapter \ref{chap:criteria}. STRIPE leads to more diverse forecasts according to shape and temporal criteria without sacrificing quality. We have also revealed the crucial importance of decoupling the criteria used for quality and diversity.
\subsection{Exploiting incomplete prior physical knowledge in machine learning models}
The extrapolation task underlying spatio-temporal forecasting is quite different from, and much more challenging for pure data-driven methods than, the perception tasks at the origin of the impressive successes of deep learning. For example, forecasting complex natural dynamics such as climate remains out of the scope of pure machine learning. An appealing solution is to incorporate external physical knowledge, an old research problem that is still open today. In this thesis, we have particularly focused on exploiting \textit{incomplete} physical knowledge, in contrast to mainstream methods that suppose full prior knowledge. The incomplete case can stem from the complexity of the phenomenon, which remains elusive to a complete description by physical laws, e.g.~ modelling all the complex interacting phenomena that drive the evolution of the atmosphere, or from a non-observable prior context, i.e.~ when the dynamical model does not apply directly in the input space.
In Chapter \ref{chap:phydnet}, we have tackled the problem of generic video prediction. It is an example of a non-observable prior context: although there often exists some physical dynamical prior, for example on the motion of clouds in fisheye images, physical laws do not directly apply at the pixel level. The dynamical model is meaningful in a space where the clouds have previously been identified and segmented. We have introduced the PhyDNet prediction model, which automatically learns a latent space in which we suppose that a class of linear partial differential equations applies. PhyDNet is a two-branch architecture: the first branch captures the physical dynamics; since this prior knowledge is often insufficient to fully describe the content of videos, a second branch models the complementary information necessary for accurate prediction (e.g.~ texture, details, \textit{etc}). We have highlighted the ability of PhyDNet to properly disentangle the physical dynamics from these unknown factors.
In Chapter \ref{chap:aphynity}, we have further delved into the question of augmenting incomplete physical models with deep data-driven counterparts. This is an area that has been explored by very few works up to now, and mostly empirically. We have proposed the APHYNITY framework, which consists in decomposing the dynamics into two components: a physical component accounting for the dynamics for which we have some prior knowledge, and a data-driven component accounting for the insufficiencies of the physical model. APHYNITY is a principled learning framework minimizing the norm of the data-driven augmentation, which theoretically guarantees a unique decomposition under mild assumptions. APHYNITY is able to seamlessly adapt to different approximation levels of prior physical knowledge, covering the whole range of Machine Learning / Model-Based methods presented in Chapter \ref{chap:intro}. We have exhibited the superiority of APHYNITY over data-driven, incomplete physics, and state-of-the-art approaches combining ML and MB methods, both in terms of forecasting and parameter identification, on three different classes of physical systems.
\subsection{Solar irradiance forecasting with fisheye images}
Finally, we have proposed solutions to the industrial solar irradiance forecasting problem with fisheye images raised at EDF. In Chapter \ref{chap:overview_fisheye}, we have presented the challenges of the problem and proposed a first deep learning model for estimating and forecasting solar irradiance. We have also discussed the limitations of standard deep learning forecasting approaches in this context, that have motivated the contributions of this thesis.
In Chapter \ref{chap:phydnet_fisheye}, we have applied the methodological contributions exposed in parts \ref{part:part1} and \ref{part:part2} of this thesis. We have improved and adapted our PhyDNet model for physically-constrained fisheye image prediction. The PhyDNet model greatly improves the performances compared to competitive pure data-driven baselines, confirming the benefits of physical knowledge integration. Furthermore, we have applied the DILATE loss function and the APHYNITY framework, leading to another (relatively small) performance gain.
\section{Perspectives}
We present here a non-exhaustive list of possible future research directions for different time horizons.
\subsection{Directions for improving solar irradiance forecasting}
\paragraph{Application of DILATE and APHYNITY}
As discussed in Chapter \ref{chap:phydnet_fisheye}, the main performance improvements compared to pure deep learning methods stem from the application of our physically-constrained PhyDNet architecture. The application of the DILATE loss and the APHYNITY framework further improve the performances, but less significantly.
Concerning the DILATE loss function, we have applied the loss in our experiments to future trajectories of 5 timesteps, which is rather short compared to the experiments in Chapter \ref{chap:dilate} (the shortest trajectories have 20 timesteps, for the \texttt{Synthetic} dataset). For such short trajectories, sharp variations are harder to visualize and the use of dynamic time warping (DTW) is less relevant. To fully exploit the capacity of the DILATE loss, an interesting perspective is to increase the length of future trajectories, by reducing the processing interval between images or by extending the forecasting horizon.
Regarding APHYNITY, we use in the PhyDNet model a very general physical prior model: a class of linear PDEs. This is a weaker prior than those used in Chapter \ref{chap:aphynity}. Moreover, due to the non-observability of the prior, the physical model is applied in a learned latent space which is not explicitly controlled, contrary to the fully-visible setting in Chapter \ref{chap:aphynity}. This may explain why the Machine Learning / Model Based decomposition is more challenging to optimize. An interesting future direction would be to exploit more specific physical laws modelling the cloud motion and/or a more precise description of the input space where the physical laws apply.
\paragraph{Probabilistic forecasting}
In this thesis, we have forecasted solar irradiance in a deterministic manner with the PhyDNet model. An interesting future work is to extend our contributions on probabilistic forecasting to this problem. An adaptation of the STRIPE model would provide to the decision makers a small set of possible scenarios about the cloud motion (for example if the clouds will occlude the sun or not, and at what temporal horizon).
\paragraph{Handling the rotational distortion of fisheye images}
Fisheye images present a rotational symmetry along the vertical axis: clouds in linear translation are observed as a curved motion in fisheye images. To handle this distortion induced by the fisheye lens, some forecasting methods preprocess fisheye images by projecting them onto a plane where a translational cloud motion is linear. In this thesis, we have instead directly processed raw fisheye images with the general convolutional layers commonly used in computer vision, which encode translation equivariance. Future works include applying the plane projection or a polar transformation \cite{paletta2021spin} as preprocessing, or evaluating dedicated neural network layers that handle rotation equivariance, such as spherical CNNs \cite{cohen2016group,cohen2018spherical}.
\subsection{Applications of deep augmented physical models}
\subsubsection*{Non-stationary dynamics forecasting}
In this thesis, our contributions towards non-stationary dynamics forecasting concern rethinking the training process by including shape and temporal criteria, and are thus agnostic to the forecasting architectures. An interesting future perspective would be to also incorporate prior knowledge in the model architectures, as studied in part \ref{part:part2} of this thesis. For time series, leveraging trend, seasonality and extrinsic prior knowledge (such as special events) \cite{laptev2017time} could help to better model the non-stationary abrupt changes and measure their impact on diversity and model confidence \cite{gal2016dropout,corbiere2019addressing}. The combination of a traditional forecasting model with interpretable and controlled factors (e.g.~ an ARIMA model) and a data-driven augmentation network would be a possible application case for APHYNITY.
\subsubsection*{Optical flow}
Optical flow estimation is a long-standing problem in computer vision, consisting in estimating the motion field between two frames. It is a core building block for many applications, such as image compression or object tracking. For example, optical flow is used to understand the cloud motion in traditional forecasting methods with fisheye images.
Traditional methods for optical flow, e.g.~ the Lucas-Kanade \cite{lucas1981iterative}
and the Horn-Schunck \cite{horn1981determining} models, are based on the brightness constancy assumption $I_1(\mathbf{x}) = I_2(\mathbf{x}+w)$, which states that pixel intensity is preserved after advection by the flow field $w$. Linearising this equation leads to the celebrated optical flow PDE:
\begin{equation}
\frac{\partial I}{\partial t} (t,\mathbf{x}) = - w(t,\mathbf{x}) \cdot \nabla I (t,\mathbf{x}).
\label{eq:flot}
\end{equation}
The PDE in Eq \ref{eq:flot} is a simplified physical model, since the brightness constancy assumption is violated in several conditions, e.g.~ in the presence of occluded objects, local or global illumination changes, or specular reflections.
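A toy finite-difference check of the linearized constraint: for a linear intensity ramp translated by one pixel, the brightness-constancy residual vanishes exactly (forward differences, hypothetical toy data; an illustration, not a flow solver):

```python
def flow_residual(I1, I2, w):
    """Residual I_t + w . grad(I) of the optical flow PDE at interior
    pixels of two images (lists of lists), using forward differences."""
    u, v = w
    res = []
    for y in range(len(I1) - 1):
        for x in range(len(I1[0]) - 1):
            Ix = I1[y][x + 1] - I1[y][x]   # horizontal gradient
            Iy = I1[y + 1][x] - I1[y][x]   # vertical gradient
            It = I2[y][x] - I1[y][x]       # temporal derivative
            res.append(It + u * Ix + v * Iy)
    return res

# A horizontal ramp shifted right by one pixel: flow (1, 0) satisfies the PDE
I1 = [[float(x) for x in range(4)] for _ in range(4)]
I2 = [[float(x - 1) for x in range(4)] for _ in range(4)]
print(max(abs(r) for r in flow_residual(I1, I2, (1.0, 0.0))))  # -> 0.0
```

On real images the residual is nonzero wherever the brightness constancy assumption breaks, which is exactly the failure mode discussed above.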
Other traditional methods exploit different prior physical models for optical flow in specific contexts, e.g.~ the PDE continuity equation for fluid flows \cite{corpetti2002dense}.
More recently, deep learning approaches have proposed learning optical flow in an end-to-end fashion and have become state-of-the-art \cite{flownet,sun2018pwc,raft,stone2021smurf}. Two classes of methods exist: supervised and unsupervised ones. In the supervised context \cite{flownet,sun2018pwc,raft}, deep learning methods do not exploit the brightness constancy hypothesis anymore, or indirectly (through the computation of a cost volume). Instead, they rely on large synthetic datasets of annotated image pairs, making their generalization to real-world datasets not obvious.
On the other hand, unsupervised deep learning approaches \cite{jason2016back,liu2020learning,stone2021smurf} are closer in spirit to traditional approaches. Without ground truth labels for optical flow, they rely on a photometric reconstruction loss. The reason deep unsupervised methods outperform traditional methods is that they fully exploit the correlations in the training dataset, instead of independently optimizing a flow field for each image pair. Typical photometric losses include the L1 loss, which directly assumes intensity constancy, or more robust losses such as the Charbonnier loss, the structural similarity (SSIM) \cite{jonschkowski2020matters} or the census loss \cite{meister2018unflow}, which is robust to global illumination changes. Although adequate losses may address some limitations of the brightness constancy assumption, they do not overcome all failure cases. Therefore the photometric constancy assumption also represents a simplified physical model.
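For instance, the Charbonnier penalty is a smooth approximation of the absolute error, quadratic near zero and linear for large residuals (the $\epsilon$ value below is illustrative):

```python
import math

def charbonnier(x, eps=1e-3):
    # Robust penalty sqrt(x^2 + eps^2): behaves like x^2/(2*eps) near 0
    # and like |x| for large |x|, down-weighting photometric outliers
    return math.sqrt(x * x + eps * eps)
```

Compared to a squared loss, large photometric errors (e.g.\ at occlusions) contribute linearly rather than quadratically, which is what makes the penalty robust.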
In this context, an appealing research perspective is to explicitly exploit the simplified optical flow PDE in Eq \ref{eq:flot} in a deep augmented model. This is a favorable case for the application of our APHYNITY framework. This ML/MB integration could regularize and boost the performances of deep supervised estimation models, in particular for generalizing to new datasets. It could also be applied in a semi-supervised context, where the learned data-driven augmentation could complement the simplified photometric constancy for non-annotated images.
\subsubsection*{Model-Based Reinforcement Learning}
Reinforcement Learning (RL) \cite{sutton2018reinforcement} is a branch of machine learning that studies how autonomous agents make decisions in the environment in order to maximize their cumulative reward. Combined with deep learning, RL has encountered impressive successes for example by reaching super-human performance at the game of Go \cite{silver2017mastering}.
There are two main modelling approaches in RL: \textit{model-based} and \textit{model-free}. In the model-based approach, the agent uses an internal predictive model of the world to simulate the consequences of its actions, and choose the best action accordingly. In contrast, in the model-free approach, the control policy is learned directly from experienced trajectories, without any dynamical model.
\begin{figure}[H]
\centering
\includegraphics[width=16cm]{images/MBRL.png}
\caption[Principle of Model-Based Reinforcement Learning.]{Principle of Model-Based Reinforcement Learning.}
\label{fig:MBRL}
\end{figure}
The principle of Model-Based Reinforcement Learning (MBRL) is illustrated in Figure \ref{fig:MBRL}. It consists in planning through a dynamical model $f(\mathbf{s}_t,\mathbf{a}_t)$, where $\mathbf{s}_t$ is the current state and $\mathbf{a}_t$ the chosen action. The actions are selected to minimize the future (discounted) cumulative cost:
\begin{equation}
\label{eq:opt}
\underset{a_{t_0},...,a_{\infty}}{\min} ~~~ \sum_{t=t_0}^{\infty} \gamma^{t-t_0} c(\hat{\mathbf{s}}_t, \mathbf{a}_t) ~~~
\mathrm{subject~to} ~~~~ \forall t \geq t_0, \frac{\diff \mathbf{s}_t}{\diff t} =f(\mathbf{s}_t,\mathbf{a}_t),
\end{equation}
where $c$ is a cost function and $\gamma <1$ a discount factor.
The dynamical model $f$ can be a simple linear (or locally linear) model, a physical model, or a pure data-driven model parameterized by a deep neural network\footnote{Please note that in the RL community, the term \textit{model-based} denotes the presence of a dynamical model $f$, that can either be a pure data-driven model (denoted as \textit{Machine Learning} in this thesis) or a model with a physical prior (denoted as \textit{Model-Based} in this thesis).}. In all cases, the model $f$ is often too simplified to perfectly extrapolate the future trajectories.
A common solution for nonetheless exploiting the incomplete model is to consider short-term rollouts and perform Model Predictive Control (MPC) \cite{nagabandi2018neural,janner2019trust}, which consists in replanning frequently to mitigate the error propagation in the forecasted trajectories.
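A minimal sketch of MPC by random shooting illustrates this replanning loop (toy 1-D dynamics and cost; all names and parameters are illustrative, not a production planner):

```python
import random

def rollout_cost(s0, actions, f, c, gamma=0.99):
    """Discounted cumulative cost of an action sequence under a
    discrete-time dynamics model f(s, a) and cost c(s, a)."""
    s, total, disc = s0, 0.0, 1.0
    for a in actions:
        total += disc * c(s, a)
        s = f(s, a)
        disc *= gamma
    return total

def mpc_first_action(s0, f, c, horizon=5, n_candidates=64, gamma=0.99, seed=0):
    """Random-shooting planner: sample action sequences, keep the cheapest,
    and return only its first action (MPC replans at every step)."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(n_candidates):
        plan = [rng.uniform(-1.0, 1.0) for _ in range(horizon)]
        cost = rollout_cost(s0, plan, f, c, gamma)
        if cost < best_cost:
            best, best_cost = plan, cost
    return best[0]
```

With the toy integrator $f(s,a) = s + a$ and quadratic cost $c(s,a) = s^2$, cheap plans are those that drive the state toward zero; executing only the first action and replanning limits the propagation of model error, as discussed above.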
An interesting future direction would be to explore deep augmented models in this MBRL case. A simplified prior dynamical model of the system could be augmented with a data-driven counterpart and learned jointly with the APHYNITY framework. This cooperation could improve the accuracy of the predictive model, enabling more trustworthy long-term rollouts and less frequent replanning.
Another appealing direction concerns improving the exploration process in Reinforcement Learning with a diversity-promoting mechanism \cite{pathak2017curiosity,eysenbach2018diversity,leurent2020robust}; this mechanism could be implemented with determinantal point processes with adequate kernels to represent structured diversity.
\subsection{Long-term perspectives}
The field of spatio-temporal forecasting is still a very active area of research in the AI community, and has not yet reached the degree of maturity of deep learning in computer vision or language. Forecasting complex dynamics remains highly challenging for pure machine learning, due to the current relative scarcity of data for learning complex natural phenomena such as climate. The quantity of training data will likely continue to grow in future years, yet it is not clear at which point it will become sufficient. Relying on this growing data accumulation, one possible way is to explore ever larger models to overcome the underfitting phenomenon, although this path faces many computational challenges.
The other way, which was explored in this thesis, is to incorporate external knowledge to regularize machine learning models, in the form of loss functions, model architectures or training strategies. We hope that the contributions of this thesis will open the way towards hybrid and more flexible Machine Learning/Model-Based models for tackling complex real-world applications, e.g.~ in climate science, robotics or reinforcement learning. In particular, the augmentation strategy explored in this thesis, a linear combination, is rather specific. For many incomplete models, there exist higher-order interactions between the simplified model and the residual information. Exploring more general augmentation schemes, linked with the growing field of neural architecture search \cite{elsken2019neural}, is an appealing direction for future years.
\clearpage{\pagestyle{empty}\cleardoublepage}
\section{Introduction}
\begin{figure}[H]
\begin{center}
\begin{tabular}{ccc}
\hspace{-1.0cm} \includegraphics[width=5.8cm]{{images/dilate_fig11.png}} &
\hspace{-0.4cm} \includegraphics[width=5.8cm]{{images/dilate_fig12.png}} &
\hspace{-0.4cm} \includegraphics[width=5.8cm]{{images/dilate_fig13.png}} \\
\hspace{-1.0cm} (a) Non informative prediction & \hspace{-0.4cm}(b) Correct shape, time delay & \hspace{-0.4cm} (c) Correct time, inaccurate shape
\end{tabular}
\end{center}
\caption[Limitations of the MSE in deterministic forecasting.]{\textbf{Limitation of the Euclidean (MSE) loss}: when predicting a sudden change (target blue step function), the 3 predictions (a), (b) and (c) have similar MSE but very different forecasting skills. In contrast, the DILATE loss proposed in this work, which disentangles shape and temporal decay terms, supports predictions (b) and (c) over prediction (a) that does not capture the sharp change of regime.}
\label{fig:intro_dilate}
\end{figure}
\lettrine[lines=3]{A}s discussed in the previous Chapter, the Mean Squared Error (MSE) is inadequate in the context of non-stationary time series with sudden variations, as illustrated in Figure \ref{fig:intro_dilate}. Here, the target ground truth prediction is a step function (in blue), and we present three predictions, shown in Figure \ref{fig:intro_dilate} (a), (b), and (c), which have a similar MSE loss compared to the target, but very different forecasting skills. Prediction (a) is not adequate for regulation purposes since it does not capture the sharp drop to come. Predictions (b) and (c) much better reflect the change of regime since the sharp drop is indeed anticipated, although with a slight delay (b) or with a slightly inaccurate amplitude (c).
This Chapter introduces DILATE (DIstortion Loss including shApe and TimE), a new objective function for training deep neural networks in the context of multi-step and non-stationary time series forecasting. DILATE explicitly disentangles the penalization into two terms, related respectively to the shape error and to the temporal localization error of change detection. The behaviour of DILATE is shown in Figure \ref{fig:intro_dilate}: whereas the values of our proposed shape and temporal losses are large in Figure \ref{fig:intro_dilate} (a), the shape (resp. temporal) term is small in Figure \ref{fig:intro_dilate} (b) (resp. Figure \ref{fig:intro_dilate} (c)). DILATE combines shape and temporal terms, and is consequently able to output a much smaller DILATE loss for predictions (b) and (c) than for (a), as expected.
We first present the DILATE loss in section \ref{sec:training_with_dilate}. We also introduce a variant of DILATE, which provides a smooth generalization of temporally-constrained Dynamic Time Warping (DTW) metrics~\cite{sakoe1990dynamic,jeong2011weighted}. Experiments carried out on several synthetic and real non-stationary datasets reveal that
models trained with DILATE significantly outperform models trained with the MSE loss function when evaluated with shape and temporal distortion metrics, while DILATE maintains very good performance when evaluated with MSE. Finally, we show that DILATE can be used with various network architectures and can outperform, on shape and time metrics, state-of-the-art models specifically designed for multi-step and non-stationary forecasting.
\section{Training Deep Neural Networks with DILATE}
\label{sec:training_with_dilate}
Given an input sequence $\mathbf{x}_{1:T}=(\mathbf{x}_1,\dots,\mathbf{x}_T) \in \mathbb{R}^{p \times T}$, the deterministic multi-step time series forecasting problem consists of predicting an $H$-step future trajectory $ \hat{\mathbf{y}} = (\hat{\mathbf{y}}_{T+1},\dots, \hat{\mathbf{y}}_{T+H} ) \in \mathbb{R}^{d \times H}$. As an alternative to the MSE, we introduce here the DIstortion Loss with shApe and TimE (DILATE) for training any deterministic deep multi-step forecasting model. Crucially, the DILATE loss needs to be differentiable in order to train models with gradient-based optimization.
\begin{figure*}
\centering
\includegraphics[width=17cm]{images/DILATE.png}
\caption[Overview of the DILATE loss.]{\textbf{Overview of the DILATE loss:} $\mathcal{L}_{\text{DILATE}}$ for training deterministic deep time series forecasting models is composed of two terms: $\mathcal{L}_{shape}$ based on the soft DTW and $\mathcal{L}_{time}$ that penalizes the temporal distortions visible on the soft optimal path. The overall loss $\mathcal{L}_{\text{DILATE}}$ is differentiable, and we provide an efficient implementation of its forward and backward passes.}
\label{fig:dilate}
\end{figure*}
The DILATE objective function, which compares the prediction $\hat{\mathbf{y}} = (\hat{\mathbf{y}}_{T+1},\dots, \hat{\mathbf{y}}_{T+H} )$ with the actual ground truth future trajectory $\mathbf{y}^* = (\mathbf{y}^*_{T+1},\dots,\mathbf{y}^*_{T+H})$, is composed of two terms balanced by the hyperparameter $\alpha \in [0,1]$:
\begin{align}
\label{eq:dilate}
\mathcal{L}_{\text{DILATE}}(\hat{\mathbf{y}}, \mathbf{y}^*) &= \alpha~\mathcal{L}_{shape}(\hat{\mathbf{y}}, \mathbf{y}^*) + (1-\alpha)~ \mathcal{L}_{time}(\hat{\mathbf{y}}, \mathbf{y}^*)\\
&= \alpha ~\text{DTW}^{\mathbf{\Delta}}_{\gamma}(\hat{\mathbf{y}}, \mathbf{y}^*) + (1-\alpha)~ \text{TDI}^{\mathbf{\Delta},\mathbf{\Omega_{dissim}}}_{\gamma}(\hat{\mathbf{y}}, \mathbf{y}^*).
\end{align}
The computational graph of the DILATE loss is illustrated in Figure \ref{fig:dilate}. We use for the shape term $\mathcal{L}_{shape}$ the smooth shape dissimilarity $\text{DTW}^{\mathbf{\Delta}}_{\gamma}$ defined in Eq \ref{eq:dtwgamma} and for the temporal term $\mathcal{L}_{time}$ the time dissimilarity $\text{TDI}^{\mathbf{\Delta},\mathbf{\Omega_{dissim}}}_{\gamma}$ defined in Eq \ref{eq:temporal}.
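For concreteness, the forward computation of $\mathcal{L}_{\text{DILATE}}$ can be sketched in NumPy for 1-D series: the soft-DTW dynamic program yields $\mathcal{L}_{shape}$, its backward recursion yields the soft (expected) alignment matrix, and $\mathcal{L}_{time}$ penalizes the deviation of this alignment from the diagonal. This is a simplified sketch, not the optimized differentiable implementation; the quadratic penalty $\Omega(i,j)=(i-j)^2/H^2$ is one possible choice for $\mathbf{\Omega_{dissim}}$:

```python
import numpy as np

def softmin(vals, gamma):
    # Smooth minimum: -gamma * log(sum_k exp(-vals[k] / gamma)), stabilized.
    m = vals.min()
    return m - gamma * np.log(np.sum(np.exp(-(vals - m) / gamma)))

def dilate_loss(y_pred, y_true, alpha=0.5, gamma=0.01):
    H = len(y_pred)
    delta = (y_pred[:, None] - y_true[None, :]) ** 2      # pairwise cost matrix
    # Forward dynamic program of soft-DTW (shape term).
    R = np.full((H + 2, H + 2), np.inf)
    R[0, 0] = 0.0
    for i in range(1, H + 1):
        for j in range(1, H + 1):
            R[i, j] = delta[i - 1, j - 1] + softmin(
                np.array([R[i - 1, j], R[i, j - 1], R[i - 1, j - 1]]), gamma)
    shape_loss = R[H, H]
    # Backward recursion: E[i, j] is the soft (expected) alignment matrix.
    D = np.zeros((H + 2, H + 2))
    D[1:H + 1, 1:H + 1] = delta
    R[1:H + 1, H + 1] = -np.inf
    R[H + 1, 1:H + 1] = -np.inf
    R[H + 1, H + 1] = R[H, H]
    E = np.zeros((H + 2, H + 2))
    E[H + 1, H + 1] = 1.0
    for i in range(H, 0, -1):
        for j in range(H, 0, -1):
            a = np.exp((R[i + 1, j] - R[i, j] - D[i + 1, j]) / gamma)
            b = np.exp((R[i, j + 1] - R[i, j] - D[i, j + 1]) / gamma)
            c = np.exp((R[i + 1, j + 1] - R[i, j] - D[i + 1, j + 1]) / gamma)
            E[i, j] = a * E[i + 1, j] + b * E[i, j + 1] + c * E[i + 1, j + 1]
    # Time term: deviation of the soft alignment from the diagonal.
    idx = np.arange(1, H + 1)
    omega = (idx[:, None] - idx[None, :]) ** 2 / H ** 2
    time_loss = np.sum(E[1:H + 1, 1:H + 1] * omega)
    return alpha * shape_loss + (1 - alpha) * time_loss, shape_loss, time_loss
```

With small $\gamma$, the soft alignment concentrates on the optimal DTW path, recovering hard shape and time terms; the efficient implementation of the forward and backward passes mentioned above is the one to use in practice.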
\paragraph{Tangled DILATE variant} A variant of our approach to combine shape and temporal penalization would be to incorporate a temporal term inside our smooth $\mathcal{L}_{shape}$ function in Eq \ref{eq:dtwgamma}, leading to a \textit{tangled} version $\mathcal{L}_{\text{DILATE}^t}$:
\begin{equation}
\mathcal{L}_{\text{DILATE}^t}(\hat{\mathbf{y}}_i, \mathbf{y}^*_{i}) := - \gamma \log \left ( \sum_{\mathbf{A} \in \mathcal{A}_{\tau,\tau}} \exp\left ( - \textstyle \frac{ \left \langle \mathbf{A} , \alpha \mathbf{\Delta}(\hat{\mathbf{y}}_i, \mathbf{y}^*_{i}) + (1-\alpha) \mathbf{\Omega} \right \rangle}{\gamma} \right ) \right ).
\label{eq:smoothwdtw}
\end{equation}
We can notice that Eq \ref{eq:smoothwdtw} reduces to minimizing $\left \langle \mathbf{A} , \alpha \mathbf{\Delta}(\hat{\mathbf{y}}_i, \mathbf{y}^*_{i}) + (1-\alpha) \mathbf{\Omega} \right \rangle$ when $\gamma \to 0^+$. In this case, $\mathcal{L}_{\text{DILATE}^t}$ can recover DTW variants studied in the literature to bias the computation based on penalizing sequence misalignment, by designing specific $\mathbf{\Omega}$ matrices:
\begin{center}
\begin{adjustbox}{max width=\linewidth}
\begin{tabular}{c|c}
\begin{tabular}{c}
Sakoe-Chiba DTW \\
band constraint \cite{sakoe1990dynamic}
\end{tabular}
& $\Omega(i,j) =$
$\begin{cases}
+ \infty \text{~if~} |i-j|>T \\
0 \text{~~ otherwise}
\end{cases}$
\\ \hline
Weighted DTW \cite{jeong2011weighted} & $\Omega(i,j) = f(|i-j|)$ \text{~ for $f$ increasing function}
\end{tabular}
\end{adjustbox}
\end{center}
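For illustration, these $\mathbf{\Omega}$ matrices can be constructed directly; in the small NumPy sketch below, the band width and the weighting function are illustrative choices:

```python
import numpy as np

def omega_sakoe_chiba(H, band):
    # Sakoe-Chiba band: infinite penalty outside a fixed band around the diagonal.
    i = np.arange(1, H + 1)[:, None]
    j = np.arange(1, H + 1)[None, :]
    return np.where(np.abs(i - j) > band, np.inf, 0.0)

def omega_weighted(H, f=lambda d: d ** 2):
    # Weighted DTW: penalty given by an increasing function of |i - j|.
    i = np.arange(1, H + 1)[:, None]
    j = np.arange(1, H + 1)[None, :]
    return f(np.abs(i - j)).astype(float)
```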
$\mathcal{L}_{\text{DILATE}^t}$ in Eq \ref{eq:smoothwdtw} makes it possible to train deep neural networks with a smooth loss combining shape and temporal criteria. However, $\mathcal{L}_{\text{DILATE}^t}$ has limited capacity to disentangle the shape and temporal errors, since the optimal path is computed from both shape and temporal terms, i.e.~ there is no guarantee to recover the true optimal alignment path because of the temporal penalization inside the cost matrix. In contrast, our $\mathcal{L}_{\text{DILATE}}$ loss in Eq \ref{eq:dilate} separates the loss into shape and temporal components, the temporal penalization being applied to the optimal unconstrained DTW path.
\paragraph{Discussion on most related works}
We review here the most related works that attempt to train deep forecasting models with alternatives to the MSE. For exploiting the shape of future trajectories, recent works have explored smooth approximations of Dynamic Time Warping (DTW) \cite{cuturi2017soft, mensch2018differentiable,abid2018learning,vayer2020time,blondel2020differentiable}. Cuturi and Blondel have proposed the soft-DTW \cite{cuturi2017soft}, a differentiable loss function that can be computed by dynamic programming with a quadratic complexity. They have shown convincing experiments on time series classification, clustering under the DTW geometry, and early experiments on time series forecasting. The soft-DTW was further normalized to ensure a non-negative divergence \cite{blondel2020differentiable}. However, since DTW is by design invariant to elastic distortions, it completely ignores the temporal localization of the changes. A differentiable timing error loss function based on DTW on the event (binary) space was proposed in \cite{rivest2019new}; however, it is only applicable to predicting binary time series. Some works explored the use of adversarial losses for time series \cite{yoon2019time,wu2020adversarial}, which can be seen as an implicit way of enforcing semantic criteria learned from data. However, adversarial losses give a weaker and non-interpretable control on shape and time criteria, and bring additional training difficulties.
\section{Experiments}
In this section, we evaluate the relevance of DILATE, both quantitatively and qualitatively, compared to generic as well as recent state-of-the-art models trained with the MSE. We also provide an in-depth analysis of the DILATE loss properties.
\subsection{Datasets}
\label{sec:datasets}
We carry out experiments on 5 synthetic and real-world datasets from various domains to illustrate the broad applicability of our methods. For each dataset, the task is to predict the $H$-step-ahead future trajectory given a $T$-step context window:
\begin{itemize}
\item \texttt{Synthetic-det} ($T=20, H=20$): deterministic dataset consisting of predicting sudden changes (step functions) based on an input signal composed of two peaks. This controlled setup was designed to measure precisely the shape and time errors of predictions. We generate 500 time series for train, 500 for validation and 500 for test, with 40 time steps each: the first 20 are the inputs, the last 20 are the targets to forecast. In each series, the input range is composed of 2 peaks of random temporal positions $i_1$ and $i_2$ and random amplitudes $j_1$ and $j_2$ between 0 and 1, and the target range is composed of a step of amplitude $j_2-j_1$ and stochastic position $i_2 + (i_2-i_1)+ randint(-3,3)$. All time series are corrupted by an additive Gaussian white noise of variance 0.01.
\item \texttt{ECG5000} ($T=84, H=56$): this dataset comes from the UCR Time Series Classification Archive \cite{chen2015ucr}, and is composed of 5000 electrocardiograms (ECG) (500 for training, 4500 for testing) of length 140. We take the first 84 time steps (60 \%) as input and predict the last 56 steps (40 \%) of each time series (same setup as in \cite{cuturi2017soft}).
\item \texttt{Traffic} ($T=168, H=24$): this dataset is composed of road occupancy rates (between 0 and 1) from the California Department of Transportation (48 months over 2015-2016), measured every hour. We work on the first univariate series of length 17544 (with the same 60/20/20 train/valid/test split as in \cite{lai2018modeling}), and we train models to predict the 24 future points given the past 168 points (past week).
\item \texttt{Electricity} ($T=168, H=24$): this dataset consists of hourly electricity consumption measurements (kWh) from 370 customers.
\item \texttt{ETTh1} \cite{zhou2020informer} ($T=96, H=96$): dataset of hourly Electricity Transformer Temperature measurements, an important indicator for electricity grids. This dataset enables assessing the generalization of our approach to much longer-term predictions.
\end{itemize}
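As an illustration of the \texttt{Synthetic-det} protocol, a possible generation script is sketched below; details not fully specified above (exact peak positions, clipping of the step position to the target window) are our assumptions:

```python
import numpy as np

def make_synthetic_series(n_series=500, length=40, noise_std=0.1, seed=0):
    # Sketch of the Synthetic-det protocol: an input with two random peaks,
    # and a target step whose position and amplitude depend on the peaks.
    rng = np.random.default_rng(seed)
    data = np.zeros((n_series, length))
    for k in range(n_series):
        # Two distinct peak positions in the input window (assumed range).
        i1, i2 = sorted(rng.choice(np.arange(1, 19), size=2, replace=False))
        j1, j2 = rng.uniform(0, 1, size=2)
        data[k, i1], data[k, i2] = j1, j2
        # Step position i2 + (i2 - i1) + randint(-3, 3), clipped to the target
        # window (clipping is our assumption).
        step_pos = int(np.clip(i2 + (i2 - i1) + rng.integers(-3, 4), 20, 39))
        data[k, step_pos:] = j2 - j1
    data += rng.normal(0.0, noise_std, size=data.shape)  # variance 0.01
    return data[:, :20], data[:, 20:]   # (inputs, targets)
```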
\subsection{Implementation details}
\paragraph*{Metrics}
To evaluate the benefits of our proposed DILATE training loss, we compare it against the widely used Euclidean (MSE) loss, and the soft-DTW introduced in~\cite{cuturi2017soft,mensch2018differentiable}. We use the following multi-step prediction metrics: MSE, DTW (shape), TDI (temporal). To consolidate the evaluation, we also consider two additional (non differentiable) metrics for assessing shape and time. For shape, we compute the ramp score \cite{vallance2017towards}. For time, we compute the Hausdorff distance between a set of detected change points in the target signal $\mathcal{T}^*$ and in the predicted signal
$\hat{\mathcal{T}}$:
\begin{equation}
\text{Hausdorff}(\mathcal{T}^*,\hat{\mathcal{T}}) := \max ( \underset{\hat{t} \in \mathcal{ \hat{T} }}{\max} \underset{t^* \in \mathcal{ T^* }}{\min} |\hat{t}-t^* | , \underset{t^* \in \mathcal{ T^* }}{\max} \underset{\hat{t} \in \mathcal{ \hat{T} }}{\min} |\hat{t}-t^* | ),
\end{equation}
which corresponds to the largest possible distance between a change point and its prediction. Additional details about these external metrics are given in Appendix \ref{app:dilate_metrics}.
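This distance can be computed directly from the two sets of change points; a minimal sketch mirroring the equation above:

```python
def hausdorff(cp_true, cp_pred):
    # Largest distance between a change point and its nearest counterpart,
    # taken in both directions (predicted-to-true and true-to-predicted).
    d_forward = max(min(abs(t - s) for s in cp_true) for t in cp_pred)
    d_backward = max(min(abs(t - s) for t in cp_pred) for s in cp_true)
    return max(d_forward, d_backward)
```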
\paragraph*{Neural network architectures:} For the generic neural network architectures, we use a fully connected network (1 layer of 128 neurons), which does not make any assumption on data structure, and a more specialized Seq2Seq model \cite{sutskever2014sequence} with Gated Recurrent Units (GRU) \cite{cho2014learning} with 1 layer of 128 units.
Each model is trained with PyTorch for a maximum of 1000 epochs with early stopping, using the ADAM optimizer. The smoothing parameter $\gamma$ of DTW and TDI is set to $10^{-2}$.
\paragraph*{DILATE hyperparameters:} the hyperparameter $\alpha$ balancing $\mathcal{L}_{shape}$ and $\mathcal{L}_{time}$ is determined on a validation set so as to obtain DTW shape performance comparable to that of the $\text{DTW}_{\gamma}^{\mathbf{\Delta}}$-trained model: $\alpha=0.5$ for Synthetic and ECG5000, and 0.8 for Traffic, Electricity and ETTh1. The DTW smoothing parameter $\gamma$ is fixed to $10^{-2}$, as further discussed in section \ref{sec:dilate-analysis}.
Our code implementing DILATE is available online at: \url{https://github.com/vincent-leguen/DILATE}.
\begin{table*}
\caption[DILATE forecasting results on generic MLP and RNN architectures.]{\textbf{DILATE forecasting results on generic MLP and RNN architectures}, averaged over 10 runs (mean $\pm$ standard deviation). Metrics are scaled for readability. For each experiment, best method(s) (Student t-test) in bold.}
\centering
\begin{adjustbox}{max width=\linewidth}
\begin{tabular}{llccc|ccc}
\toprule
\multicolumn{2}{c}{} & \multicolumn{3}{c|}{\textbf{Fully connected network (MLP)}} & \multicolumn{3}{c}{\textbf{Recurrent neural network (Seq2Seq)}} \\
\hline
Dataset & \diagbox{Eval}{Train} & MSE & $\text{DTW}_{\gamma}^{\mathbf{\Delta}}$~\cite{cuturi2017soft} & DILATE (ours) & MSE & $\text{DTW}_{\gamma}^{\mathbf{\Delta}}$~\cite{cuturi2017soft} & DILATE (ours) \\
\hline
~ & MSE (x1000) & ~ \textbf{16.5 $\pm$ 1.4} & ~ 48.2 $\pm$ 4.0 & ~ \textbf{16.7$\pm$ 1.8} & ~ \textbf{11.0 $\pm$ 1.7} & ~ 23.1 $\pm$ 4.5 & ~ \textbf{12.1 $\pm$ 1.3} \\
\texttt{Synthetic} & DTW (x10) & ~ 38.6 $\pm$ 1.28 & ~ \textbf{27.3 $\pm$ 1.37} & ~ 32.1 $\pm$ 5.33 & ~ \textbf{24.6 $\pm$ 1.20} & ~ \textbf{22.7 $\pm$ 3.55} & ~ \textbf{23.1 $\pm$ 2.44} \\
~ & TDI (x10) & ~ 15.3 $\pm$ 1.39 & ~ 26.9 $\pm$ 4.16 & ~ \textbf{13.8 $\pm$ 0.71} & ~ 17.2 $\pm$ 1.22 & ~ 20.0 $\pm$ 3.72 & ~ \textbf{14.8 $\pm$ 1.29} \\
~ & Ramp (x10) & 5.21 $\pm$ 0.10 & \textbf{2.04 $\pm$ 0.23} & 3.41 $\pm$ 0.29 & 5.80 $\pm$ 0.10 & \textbf{4.27 $\pm$ 0.8} & 4.99 $\pm$ 0.46 \\
~ & Hausdorff (x1) & 4.04 $\pm$ 0.28 & 4.71 $\pm$ 0.50 & \textbf{3.71 $\pm$ 0.12} & 2.87 $\pm$ 0.13 & 3.45 $\pm$ 0.32 & \textbf{2.70 $\pm$ 0.17} \\
\midrule
~ & MSE (x100) & ~ \textbf{31.5 $\pm$ 1.39} & ~ 70.9 $\pm$ 37.2 & ~ 37.2 $\pm$ 3.59 & ~ \textbf{21.2 $\pm$ 2.24} & ~ 75.1 $\pm$ 6.30 & ~ 30.3 $\pm$ 4.10 \\
\texttt{ECG} & DTW (x10) & ~ 19.5 $\pm$ 0.16 & ~ 18.4 $\pm$ 0.75 & ~ \textbf{17.7 $\pm$ 0.43} & ~ 17.8 $\pm$ 1.62 & ~ 17.1 $\pm$ 0.65 & ~ \textbf{16.1 $\pm$ 0.16} \\
~ & TDI (x10) & ~ \textbf{7.58 $\pm$ 0.19} & ~ 17.9 $\pm$ 0.7 & ~ \textbf{7.21 $\pm$ 0.89} & ~ 8.27 $\pm$ 1.03 & ~ 27.2 $\pm$ 11.1 & ~ \textbf{6.59 $\pm$ 0.79} \\
~ & Ramp (x1) & \textbf{4.9 $\pm$ 0.1} & 5.1 $\pm$ 0.3 & \textbf{5.0 $\pm$ 0.1} & \textbf{4.84 $\pm$ 0.24} & ~ \textbf{4.79 $\pm$ 0.37} & ~ \textbf{4.80 $\pm$ 0.25} \\
~ & Hausdorff (x1) & \textbf{4.1 $\pm$ 0.1} & 6.3 $\pm$ 0.6 & 4.7 $\pm$ 0.3 & \textbf{4.32 $\pm$ 0.51} & ~ 6.16 $\pm$ 0.85 & ~ \textbf{4.23 $\pm$ 0.41} \\
\midrule
~ & MSE (x1000) & ~ \textbf{6.58 $\pm$ 0.11} & ~ 25.2 $\pm$ 2.3 & ~ 19.3 $\pm$ 0.80 & ~ \textbf{8.90 $\pm$ 1.1} & ~ 22.2 $\pm$ 2.6 & ~ \textbf{10.0 $\pm$ 2.6} \\
\texttt{Traffic} & DTW (x100) & ~ 25.2 $\pm$ 0.17 & ~ \textbf{23.4 $\pm$ 5.40} & ~ \textbf{23.1 $\pm$ 0.41} & ~ 24.6 $\pm$ 1.85 & ~ \textbf{22.6 $\pm$ 1.34} & ~ \textbf{23.0 $\pm$ 1.62} \\
~ & TDI (x100) & ~ 24.8 $\pm$ 1.1 & ~ 27.4 $\pm$ 5.01 & ~ \textbf{16.7 $\pm$ 0.51} & ~ \textbf{15.4 $\pm$ 2.25} & ~ 22.3 $\pm$ 3.66 & ~ \textbf{14.4$\pm$ 1.58} \\
~ & Ramp (x10) & 6.18 $\pm$ 0.1 & \textbf{5.59 $\pm$ 0.1} & \textbf{5.6 $\pm$ 0.1} & 6.29 $\pm$ 0.32 & ~ \textbf{5.78 $\pm$ 0.41} & ~ \textbf{5.93 $\pm$ 0.24} \\
~ & Hausdorff (x1) & \textbf{1.99 $\pm$ 0.2} & \textbf{1.91 $\pm$ 0.3} & \textbf{1.94 $\pm$ 0.2} & \textbf{2.16 $\pm$ 0.38} & ~ \textbf{2.29 $\pm$ 0.33} & ~ \textbf{2.13 $\pm$ 0.51} \\
\bottomrule
\end{tabular}
\end{adjustbox}
\label{results1}
\end{table*}
\subsection{DILATE performances on generic architectures}
To demonstrate the broad applicability of our approach, we first perform multi-step forecasting with two generic neural network architectures: a fully connected network (1 layer of 128 neurons), which does not make any assumption on data structure, and a more specialized Seq2Seq model with 1 layer of 128 Gated Recurrent Units (GRU). We perform a Student t-test with significance level 0.05 to highlight the best method(s) in each experiment (averaged over 10 runs). Overall results are presented in Table \ref{results1}.
\paragraph*{Comparison to MSE training loss:} DILATE outperforms MSE when evaluated on shape (DTW) in all experiments, with significant differences on 5/6 experiments. When evaluated on time (TDI), DILATE also performs better in all experiments (significant differences on 3/6 tests). Finally, DILATE is equivalent to MSE when evaluated on MSE on 3/6 experiments.
\paragraph*{Comparison to $\text{DTW}_{\gamma}^{\mathbf{\Delta}}$ training loss:} When evaluated on shape (DTW), DILATE performs similarly to $\text{DTW}_{\gamma}^{\mathbf{\Delta}}$ (2 significant improvements, 1 significant drop and 3 equivalent performances). For time (TDI) and MSE evaluations, DILATE is significantly better than $\text{DTW}_{\gamma}^{\mathbf{\Delta}}$ in all experiments, as expected.
We can notice that the ramp score (resp. the Hausdorff distance) shows the same trends as the shape metric DTW (resp. the time metric TDI). This reinforces our conclusions and shows that DILATE indeed improves shape and temporal accuracy beyond the metrics being optimized.
We display a few qualitative examples for the Synthetic, ECG5000 and Traffic datasets in Figure \ref{fig:dilate_visu} (other examples are provided in Appendix \ref{app:dilate_visus}). We see that MSE training leads to predictions that are non-sharp, making them inadequate in the presence of drops or sharp spikes. $\text{DTW}_{\gamma}^{\mathbf{\Delta}}$ leads to very sharp predictions in shape, but with a possibly large temporal misalignment. In contrast, our DILATE loss predicts series that have both a correct shape and a precise temporal localization.
\begin{figure}[t]
\centering
\includegraphics[width=16cm]{images/res_dilate.png}
\caption[Qualitative prediction results with the DILATE loss.]{\textbf{Qualitative prediction results with the DILATE loss.} For each dataset, the MSE training loss leads to non-sharp predictions, whereas the soft-DTW loss can predict sharp variations but has no control over their temporal localization. In contrast, the DILATE loss produces sharp predictions with accurate temporal localization.}
\label{fig:dilate_visu}
\end{figure}
\subsection{DILATE performances with state-of-the-art models}
Beyond generic forecasting architectures, we show that DILATE can also improve the performance of state-of-the-art deep architectures. We experiment here with two recent and competitive models: N-Beats \cite{oreshkin2019n} and Informer \cite{zhou2020informer}. Results in Table \ref{tab:dilate_sota} are consistent with those in Table \ref{results1}: models trained with DILATE improve over MSE in shape (in DTW and ramp score for 6/6 experiments) and time (in TDI for 5/6 and Hausdorff for 4/6 experiments) and are equivalent to MSE when evaluated in MSE (equivalent or better for 3/6 experiments). We provide qualitative predictions of N-Beats on \texttt{Electricity} in Figure \ref{fig:dilate_elec} and \texttt{ETTh1} in Figure \ref{fig:dilate_etth1}. This again confirms that training with DILATE leads to much sharper predictions with a better temporal localization than training with the MSE.
\begin{table*}
\caption[DILATE forecasting results on state-of-the-art architectures.]{\textbf{DILATE forecasting results on state-of-the-art architectures N-Beats \cite{oreshkin2019n} and Informer \cite{zhou2020informer}}. Evaluation metrics are scaled for readability. Results are averaged over 10 runs, best(s) method(s) in bold (Student t-test).}
\centering
\begin{adjustbox}{max width=\linewidth}
\begin{tabular}{cccccccc}
\toprule
Dataset & Model & MSE & DTW & Ramp & TDI & Hausdorff & DILATE \\
\midrule
\texttt{Synthetic} & N-Beats \cite{oreshkin2019n} MSE & \textbf{13.6 $\pm$ 0.5} & 24.9 $\pm$ 0.6 & 5.9 $\pm$ 0.1 & \textbf{13.8 $\pm$ 1.1} & \textbf{2.8 $\pm$ 0.1} & \textbf{19.3 $\pm$ 0.5} \\
& N-Beats \cite{oreshkin2019n} DILATE & \textbf{13.3 $\pm$ 0.7} & \textbf{23.4 $\pm$ 0.8} & \textbf{4.8 $\pm$ 0.4} & \textbf{14.4 $\pm$ 1.3} & \textbf{2.7 $\pm$ 0.5} & \textbf{18.9 $\pm$ 0.8} \\
\cdashline{2-8}
& Informer \cite{zhou2020informer} MSE & \textbf{10.4 $\pm$ 0.3} & 20.1 $\pm$ 1.1 & 4.3 $\pm$ 0.3 & 13.1 $\pm$ 0.9 & \textbf{2.5 $\pm$ 0.1} & 16.6 $\pm$ 0.8 \\
& Informer \cite{zhou2020informer} DILATE & 11.8 $\pm$ 0.7 & \textbf{18.5 $\pm$ 1.2} & \textbf{2.4 $\pm$ 0.3} & \textbf{11.6 $\pm$ 0.9} & \textbf{2.4 $\pm$ 0.9} & \textbf{15.1 $\pm$ 0.7} \\
\midrule
\texttt{Electricity} & N-Beats \cite{oreshkin2019n} MSE & \textbf{24.8 $\pm$ 0.4} & \textbf{15.6 $\pm$ 0.2} & \textbf{13.3 $\pm$ 0.3} & 4.6 $\pm$ 0.1 & \textbf{2.6 $\pm$ 0.3} & \textbf{13.4 $\pm$ 0.2} \\
& N-Beats \cite{oreshkin2019n} DILATE & 25.8 $\pm$ 0.9 & \textbf{15.5 $\pm$ 0.2} & \textbf{13.3 $\pm$ 0.3} & \textbf{4.4 $\pm$ 0.2} & 3.1 $\pm$ 0.5 & \textbf{13.2 $\pm$ 0.2} \\
\cdashline{2-8}
& Informer \cite{zhou2020informer} MSE & \textbf{38.1 $\pm$ 2.1} & 18.9 $\pm$ 0.6 & 13.2 $\pm$ 0.2 & 6.5 $\pm$ 0.3 & 2.1 $\pm$ 0.2 & 16.4 $\pm$ 0.5 \\
& Informer \cite{zhou2020informer} DILATE & \textbf{37.8 $\pm$ 0.8} & \textbf{18.5 $\pm$ 0.3} & \textbf{12.9 $\pm$ 0.2} & \textbf{5.7 $\pm$ 0.2} & \textbf{1.9 $\pm$ 0.1} & \textbf{15.9 $\pm$ 0.3} \\
\midrule
\texttt{ETTH1} & N-Beats \cite{oreshkin2019n} MSE & 32.5 $\pm$ 1.4 & 3.9 $\pm$ 0.2 & 13.3 $\pm$ 2.0 & 21.6 $\pm$ 4.3 & \textbf{5.7 $\pm$ 0.7} & 7.4 $\pm$ 1.0 \\
& N-Beats \cite{oreshkin2019n} DILATE & \textbf{26.0 $\pm$ 2.8} & \textbf{2.9 $\pm$ 0.1} & \textbf{4.6 $\pm$ 0.6} & \textbf{11.4 $\pm$ 1.7} & \textbf{6.4 $\pm$ 1.0} & \textbf{4.6 $\pm$ 0.4} \\
\cdashline{2-8}
& Informer \cite{zhou2020informer} MSE & \textbf{28.2 $\pm$ 2.6} & 4.3 $\pm$ 0.3 & 5.8 $\pm$ 0.1 & 21.6 $\pm$ 3.3 & \textbf{6.6 $\pm$ 1.9} & 7.8 $\pm$ 0.9 \\
& Informer \cite{zhou2020informer} DILATE & 32.5 $\pm$ 3.8 & \textbf{3.2 $\pm$ 0.3} & \textbf{4.5 $\pm$ 0.3} & \textbf{19.1 $\pm$ 1.9} & \textbf{6.4 $\pm$ 1.0} & \textbf{6.4 $\pm$ 0.6} \\
\bottomrule
\end{tabular}
\end{adjustbox}
\label{tab:dilate_sota}
\end{table*}
\begin{figure}
\begin{tabular}{cc}
\textbf{N-Beats \cite{oreshkin2019n} MSE} & \textbf{N-Beats \cite{oreshkin2019n} DILATE} \\
\includegraphics[width=8.5cm]{images/elec_nbeats_mse.png} & \hspace{-0.5cm} \includegraphics[width=8.5cm]{images/elec_nbeats_dilate.png} \\
\end{tabular}\
\hspace{-0.7cm}
\caption[DILATE forecasting results on state-of-the-art architectures.]{Qualitative forecasting results comparing the N-Beats model \cite{oreshkin2019n} trained with MSE and the DILATE loss on the \texttt{Electricity} dataset.}
\label{fig:dilate_elec}
\end{figure}
\begin{figure}
\begin{tabular}{cc}
\textbf{N-Beats \cite{oreshkin2019n} MSE} & \textbf{N-Beats \cite{oreshkin2019n} DILATE} \\
\hspace{-0.5cm}
\includegraphics[width=8.6cm]{images/nbeats_etth1_mse.png} & \hspace{-0.5cm}
\includegraphics[width=8.6cm]{images/nbeats_etth1_dilate.png}
\end{tabular}
\caption[DILATE forecasting results on state-of-the-art architectures.]{Qualitative forecasting results comparing the N-Beats model \cite{oreshkin2019n} trained with MSE and the DILATE loss on the \texttt{ETTH1} dataset.}
\label{fig:dilate_etth1}
\end{figure}
\subsection{DILATE loss analysis \label{sec:dilate-analysis}}
\paragraph*{Influence of $\alpha$} We analyze in Figure \ref{fig:dilate_analysis} (a) the influence of the tradeoff parameter $\alpha$ when training a Seq2Seq model on the \texttt{Synthetic-det} dataset. When $\alpha=1$, $\mathcal{L}_{\text{DILATE}}$ reduces to $\text{DTW}_{\gamma}^{\mathbf{\Delta}}$, with an accurate shape but a large temporal error. When $\alpha \rightarrow 0$, we only minimize $\mathcal{L}_{time}$ without any shape constraint. Both MSE and shape errors explode in this case, illustrating the fact that $\mathcal{L}_{time}$ is only meaningful in conjunction with $\mathcal{L}_{shape}$. Both the MSE and DILATE error curves present a U-shape; $\alpha=0.5$ thus seems an acceptable tradeoff for the \texttt{Synthetic-det} dataset.
\paragraph*{Influence of $\gamma$} We analyze the influence of the $\text{DTW}_{\gamma}^{\mathbf{\Delta}}$ smoothing parameter $\gamma$ in Figure \ref{fig:dilate_analysis}. We show in Figure \ref{fig:dilate_analysis} (c) the assignment probabilities of the $\text{DTW}_{\gamma}^{\mathbf{\Delta}}$ path between the two test time series from Figure \ref{fig:dtw}, the true DTW path being depicted in red. When $\gamma$ increases, the $\text{DTW}_{\gamma}^{\mathbf{\Delta}}$ path is more uncertain and becomes multimodal. When $\gamma \rightarrow 0$, the soft DTW converges toward the true DTW. However, we see in Figure \ref{fig:dilate_analysis} (b) that for small $\gamma$ values, optimizing $\text{DTW}_{\gamma}^{\mathbf{\Delta}}$ becomes more difficult, resulting in higher test error and higher variance (on \texttt{Synthetic-det}). We fixed $\gamma=10^{-2}$ in all our experiments, which yields a good tradeoff between an accurate soft optimal path and a low test error.
\begin{figure}[H]
\centering
\begin{tabular}{cc}
\includegraphics[height=6.1cm]{images/influ_alpha.png} &
\includegraphics[height=6.1cm]{images/influ_gamma.png} \\
(a) Influence of $\alpha$ & (b) Influence of $\gamma$
\end{tabular}
\begin{tabular}{c}
\includegraphics[width=17cm]{images/etude_gamma_full.png} \\
(c) Influence of $\gamma$ on the soft-DTW optimal path (true path in red)
\end{tabular}
\caption[DILATE loss analysis.]{\textbf{DILATE loss analysis.} The shaded areas represent $\pm $ std computed over 10 runs.}
\label{fig:dilate_analysis}
\end{figure}
\section{Conclusion}
In this Chapter, we have introduced DILATE, a new differentiable loss function for training deep multi-step time series forecasting models. DILATE combines two terms for precise shape and temporal localization of non-stationary signals with sudden changes. We showed that DILATE is comparable to the standard MSE loss when evaluated on MSE, and far better when evaluated on several shape and timing metrics. DILATE compares favourably on shape and timing to state-of-the-art forecasting algorithms trained with the MSE.
\clearpage{\pagestyle{empty}\cleardoublepage}
\section{Spatio-temporal forecasting}
\begin{figure}[H]
\centering
\includegraphics[width=17cm]{images/forecasting.png}
\caption[Spatio-temporal forecasting applications.]{Spatio-temporal forecasting applications include time series forecasting, physical systems extrapolation, forecasting phenomena with visual data, generic video prediction, \textit{etc}.}
\label{fig:forecasting}
\end{figure}
\subsection{General context: perception vs extrapolation}
\lettrine[lines=3]{I}n this thesis, we tackle the problem of spatio-temporal forecasting, which is the task of forecasting complex phenomena represented by time series or videos, involving both complex temporal dynamics and strong spatial correlations. Advances in this field could lead to immediate and possibly large impacts on society. A wide range of sensitive applications heavily rely on accurate forecasts of uncertain events with potentially sharp variations for making decisions (see Figure \ref{fig:forecasting}). In weather and climate science, better anticipating floods, hurricanes, earthquakes or other extreme events could help take emergency measures on time and save lives. In medicine, predicting the evolution of a disease is a particularly timely topic. In retail and business, accurately predicting the demand for a product is fundamental for stock management and profit maximization. For industrial applications, failure prediction is an important issue for maintenance.
We address spatio-temporal forecasting from a machine learning point of view, i.e.~ by leveraging training data for solving the task. Machine Learning (ML) is a subfield of Artificial Intelligence (AI) that is appealing for solving complex problems. Bolstered by the recent advances in computer hardware and the exponential growth of available data, ML has witnessed a renewed interest in the last decade from both academic and industrial actors. At the ImageNet competition in 2012, which consists of classifying images between 1000 categories, the deep neural network of Krizhevsky et al. \cite{krizhevsky2012imagenet} for the first time outperformed traditional methods by a large margin. Given enough training data, Deep Learning (DL) can automatically learn meaningful representations useful for downstream tasks, replacing the manual feature extraction necessary in traditional ML algorithms. Since then, Deep Learning has shown impressive results in many practical applications (see Figure \ref{fig:ai-success}), such as object detection \cite{carion2020end}, image segmentation \cite{minaee2021image}, natural language understanding \cite{devlin2018bert}, or human speech recognition \cite{amodei2016deep}. Combined with reinforcement learning, DL has led to super-human performance on many board games, e.g.~ at the game of Go with alphaGo \cite{silver2017mastering}.
\begin{figure}[H]
\centering
\includegraphics[width=17cm]{images/ai_success.png}
\caption[Successes of Artificial Intelligence and Deep Learning.]{The main successful applications of Artificial Intelligence and Deep Learning involve perception-related tasks, such as computer vision, speech, language, and reinforcement learning for games.}
\label{fig:ai-success}
\end{figure}
However, the successes of AI in these tasks are essentially linked to perception and are not directly transferable to spatio-temporal forecasting. Modelling and extrapolating complex physical dynamics, such as those arising in climate sciences, still seems beyond the scope of pure ML methods. The extrapolation task we address is quite different by nature from perception: the future is inherently stochastic and multimodal, i.e.~ multiple outcomes may follow from the same context.
Moreover, the volume of data available for learning complex dynamical systems such as the climate is still insufficient by several orders of magnitude \cite{schultz2021can}. Many extreme events appear only rarely in datasets and are thus highly challenging to learn from data.
\subsection{Incorporating prior knowledge in machine learning models}
To overcome these issues, injecting prior physical knowledge about the system is key for accurate extrapolation. This is a long-standing question in machine learning that remains widely open.
We illustrate in Figure \ref{fig:physics_data} the main classes of methods for spatio-temporal forecasting.
On the right side of Figure \ref{fig:physics_data}, the traditional Model-Based (MB) approaches require a
deep mathematical or physical understanding of the underlying phenomena. For time series, classical state space models (SSMs) \cite{hyndman2008forecasting,box2015time} explicitly exploit trend and seasonality patterns. For physical processes, physicists attempt to model the dynamics with first principles, conservation laws or other empirical behaviours. This physical knowledge can often be formulated through ordinary or partial differential equations (ODEs/PDEs) with known coefficients. With data available for the initial and boundary conditions, forecasting is performed with numerical simulation solvers. This is the classical setting in many engineering fields, such as mechanics (where systems are described by Newtonian mechanics) or computational fluid dynamics (with the Navier-Stokes equations), and the numerical solvers are theoretically well grounded.
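As a toy illustration of this model-based setting (a hedged sketch of ours, not an example from the thesis), the following snippet forecasts a frictionless pendulum purely from its known equation of motion and an initial condition, using a semi-implicit Euler scheme:

```python
import numpy as np

# Model-based forecasting sketch: integrate a known ODE from an initial
# condition. Toy system: frictionless pendulum, theta'' = -(g/L) sin(theta).
def simulate_pendulum(theta0, omega0, g=9.81, L=1.0, dt=1e-3, steps=1000):
    theta, omega = theta0, omega0
    traj = [theta]
    for _ in range(steps):
        omega += -(g / L) * np.sin(theta) * dt  # update angular velocity
        theta += omega * dt                     # semi-implicit (symplectic) Euler step
        traj.append(theta)
    return np.array(traj)

# Forecast 1 s ahead from the initial condition (0.1 rad, at rest)
traj = simulate_pendulum(theta0=0.1, omega0=0.0)
```

No data is needed beyond the initial condition: the physics fully determines the trajectory, which is precisely what breaks down when the physical model is incomplete.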
However, this class of methods is limited in the case of \textit{incomplete} physical models. Models can be considered incomplete in two situations. In the first case, the complexity of the phenomenon prevents the derivation of an exhaustive analytical description of the system. For example, when modelling climate change, many complex interactions governing the state of the atmosphere are not modelled. The complete set of input variables of the system may also be unknown, e.g.~ when forecasting financial markets or human interactions. In the second case, certain approximations are made to render the complete equations numerically tractable. For example, the Schrödinger equation that governs the wave function of a quantum-mechanical system is not exactly solvable in many non-trivial situations. Solutions are typically computed by approximate numerical schemes and with several simplifying assumptions, e.g.~ the Born-Oppenheimer approximation. For computational reasons, the equations may also be solved on rather coarse meshes, which can prevent certain phenomena from being captured, e.g.~ turbulent behaviour in computational fluid dynamics.
On the other side of the spectrum, Machine Learning (ML) represents a more prior-agnostic approach. Given a large amount of training data, deep learning has achieved impressive successes in automatically learning complex relationships without any prior knowledge, and has become state-of-the-art for many forecasting tasks, such as generic video prediction \cite{wu2021motionrnn}. However, as discussed above, deep learning is still limited for modelling the highly complex dynamics of natural phenomena such as the climate: although more and more data is collected about the atmosphere with in-situ or remote sensing, it remains largely insufficient for matching the complexity of the task. Moreover, deep neural networks lack the physical plausibility required in several domains and cannot properly extrapolate to new conditions.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{images/fig_intro_generale2.png}
\caption[Data \textit{vs} prior knowledge contexts.]{\textbf{Data \textit{vs.} prior knowledge contexts.} On the left, Machine Learning (ML) and particularly Deep Learning can extrapolate dynamical systems with no prior information after training on a large dataset. On the right, traditional Model-Based (MB) approaches assume a full physical knowledge of the system and predict the future with numerical simulation from a set of initial and boundary conditions. In-between, with some data and a possibly incomplete knowledge, the ML/MB coupling is a very active and promising research direction that we explore in this thesis.}
\label{fig:physics_data}
\end{figure}
In-between, there exists a category of hybrid methods that combine MB approaches and data. Historically, data assimilation techniques \cite{corpetti2009pressure,bocquet2019data} leverage data to correct the predictions of physical models in the presence of noisy observations. This includes the popular Kalman filter \cite{kalman1960new}, particle filters \cite{perez2004data} or 4D-Var \cite{courtier1994strategy}, which have achieved great successes in many smoothing, filtering and forecasting applications, for example for tracking objects in videos \cite{perez2002color}. Data assimilation still constitutes the state-of-the-art paradigm for weather forecasting.
Revisiting the ML/MB cooperation with modern deep learning is an emerging research topic attracting great interest in many communities, as attested by the soaring number of publications and workshops at top ML conferences\footnote{For example, the two workshops "Machine learning and the physical sciences" and "Tackling climate change with machine learning" at NeurIPS 2019 gathered together more than 200 papers, and even more at NeurIPS 2020.}. Physics can be leveraged in the training process of ML models, either as soft constraints in the loss function \cite{raissi2017physics,sirignano2018dgm} or as hard constraints in the neural network topology \cite{daw2020physics,mohan2020embedding}. From the ML point of view, these physical constraints lead to more interpretable models, compliant with physical laws, that remain robust to noisy data. This typically results in increased data efficiency and better extrapolation performance beyond the training domain. Another particularly appealing direction concerns identifying and discovering physical systems: data-driven models can learn the unknown coefficients or parts of parameterized PDEs \cite{rudy2017data,long2018pde}, and discover new physical connections from data \cite{cranmer2020discovering}.
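To make the soft-constraint idea concrete, here is a minimal hedged sketch (ours, not code from the cited works): the data-fit loss is augmented with the finite-difference residual of an assumed PDE, here the 1D advection equation $u_t + c\,u_x = 0$:

```python
import numpy as np

# Hedged sketch: physics as a soft penalty added to the data-fit loss,
# in the spirit of physics-informed training. Assumed PDE: 1D advection
# u_t + c u_x = 0, penalized via the finite-difference residual of u[t, x].

def physics_penalty(u, dt, dx, c=1.0):
    u_t = (u[1:, :-1] - u[:-1, :-1]) / dt   # forward difference in time
    u_x = (u[:-1, 1:] - u[:-1, :-1]) / dx   # forward difference in space
    return np.mean((u_t + c * u_x) ** 2)    # mean squared PDE residual

def total_loss(u_pred, u_obs, dt, dx, lam=0.1):
    return np.mean((u_pred - u_obs) ** 2) + lam * physics_penalty(u_pred, dt, dx)

# Sanity check: a field transported at speed c=1 has a near-zero residual
x = np.linspace(0, 2 * np.pi, 64)
t = np.linspace(0, 1, 20)
u_exact = np.sin(x[None, :] - t[:, None])   # exact solution of the advection PDE
residual = physics_penalty(u_exact, dt=t[1] - t[0], dx=x[1] - x[0])
```

A deep model trained with `total_loss` is pulled towards fields that both fit the observations and satisfy the assumed physics, which is the soft-constraint mechanism referenced above.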
In this thesis, we explore this category of hybrid methods, and our contributions are targeted towards the following question:
\begin{center}
\textit{How to properly exploit prior physical knowledge to improve Machine Learning forecasting models?}
\end{center}
We focus on two particular directions: injecting prior knowledge in the training objective (part \ref{part:part1}) and designing augmented MB/ML neural architectures in the case of incomplete physical models (part \ref{part:part2}).
\subsection{Industrial application at EDF: solar energy forecasting with fisheye images}
At Electricité de France (EDF), the industrial use-case motivating this thesis is solar irradiance forecasting. With the increasing share of intermittent renewable energy sources such as solar or wind, accurately forecasting the electricity production and its possibly sharp variations is of great importance, since the consumption-production balance must be satisfied at every timestep. The possible data sources for this task are illustrated in Figure \ref{fig:types-observations}.
Numerical weather forecasts are commonly used for predicting solar energy at long-term horizons up to a few days, with a typical temporal scale of 1 hour and a spatial scale of approximately 10 km. For shorter-term horizons, satellite images offer forecasts up to a few hours ahead, at a 15-minute temporal granularity and a 1 to 5 km spatial scale. However, the spatial and temporal granularities of these two techniques are too coarse to precisely forecast the photovoltaic (PV) energy production of a given plant at very short horizons (< 20 min).
To this end, images of the sky from ground-based fisheye cameras have been increasingly investigated in recent years \cite{gauchet2012surface,chu2013hybrid,chu2016sun,marquez2013intra,schmidt2016evaluating}.
Coupled with ground-truth solar irradiance measurements from pyranometers, fisheye images offer a hemispheric view of the sky, making it possible to anticipate the evolution of the cloud cover responsible for the variations of electric production. A database of several million annotated fisheye images has been collected by EDF R\&D. Estimating the irradiance corresponding to a given fisheye image is a perception task well suited to deep learning. We confirmed at the beginning of this thesis \cite{leguen-gretsi} that deep learning indeed provides a large improvement over traditional machine learning methods for this estimation task.
On the contrary, predicting future fisheye images in order to anticipate the PV production is a much more challenging extrapolation task: clouds are deformable objects with complex stochastic behaviour (they can appear or evaporate), several layers with different speeds and directions may be present simultaneously, and the fisheye camera distortion exacerbates the difficulty. In this context, even recent state-of-the-art deep learning algorithms struggle to properly extrapolate the cloud motion. We describe this use-case in more detail in Chapter \ref{chap:overview_fisheye}.
\begin{figure}
\centering
\includegraphics[width=16cm]{images/types_observations2.png}
\caption{The different data sources for forecasting solar energy.}
\label{fig:types-observations}
\end{figure}
\section{Scientific challenges}
We present here the main scientific challenges, highlighted by our industrial application, that we address in this thesis.
\subsection{Multistep forecasting of non-stationary dynamics}
We address the problem of forecasting complex dynamical systems with non-stationary dynamics, i.e.~ with possible sharp variations. We are interested in describing the distribution of possible futures with a small set of predicted trajectories. In this context, pure data-driven methods are still limited. Paletta \textit{et al.~} \cite{paletta2021benchmarking} compared the performance of mainstream convolutional and recurrent neural networks for solar irradiance forecasting at a 10-minute horizon. They show (see Figure \ref{fig:paletta}) that Deep Learning (DL) predictions struggle to match the ground truth (black curve). Two main drawbacks can be observed: (1) DL predictions smooth out the shape of the sharp drop of solar irradiance in B, and (2) the predictions are late, for example, they do not anticipate the drop in B\footnote{Predictions are temporally aligned with the smart persistence, which corresponds to copying the current value for the future time horizon.}.
\begin{figure}[H]
\centering
\includegraphics[width=17cm]{images/paletta.png}
\caption[Limitations of standard deep learning model for solar irradiance forecasting.]{Limitations of standard Deep Learning models for 10-min ahead solar irradiance forecasting with fisheye images. Prior-agnostic Deep Learning models trained with the mean squared error do not capture the correct shape of the ground truth nor its exact temporal localization (they are temporally aligned with the smart persistence). Figure taken from Paletta \textit{et al.~} \cite{paletta2021benchmarking}.}
\label{fig:paletta}
\end{figure}
This solar energy forecasting problem illustrates a non-stationary forecasting context, with possible abrupt variations that need to be anticipated on time. This also occurs in many other important applications, e.g.~ predicting future traffic flows, stock markets, \textit{etc}. Traditional time series forecasting methods, often relying on stationarity assumptions, are not adapted to this context, and pure data-driven models struggle as well. One of the reasons is the mismatch between the evaluation metrics typically used to assess predictions in practice (which take into account shape and temporal errors) and the training loss predominantly used for deep models (the mean squared error).
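This mismatch can be illustrated with a hedged toy example (ours, not from the cited benchmark): for a signal with a sharp drop, the MSE prefers a blurred prediction over a sharp one that is only a few steps late, even though the latter arguably better preserves the event:

```python
import numpy as np

t = np.arange(100)
truth = np.where(t < 50, 1.0, 0.2)                # ground truth: sharp drop at t=50
late = np.where(t < 55, 1.0, 0.2)                 # correct shape, but 5 steps late
blur = np.clip(1.0 - (t - 40) * 0.04, 0.2, 1.0)   # smooth ramp blurring the drop

def mse(pred):
    return np.mean((pred - truth) ** 2)

# The blurred prediction gets a LOWER (better) MSE than the sharp-but-late one,
# which is one reason why MSE-trained models tend to produce smoothed forecasts.
print(mse(late), mse(blur))
```

Shape-and-time criteria such as those developed in Part \ref{part:part1} are designed to penalize these two error types separately instead of conflating them.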
The main scientific challenges raised by this use-case are the following:
\begin{itemize}
\item How to design differentiable metrics for assessing the correctness of shape and the temporal localization of future trajectories?
\item How to efficiently describe the uncertainty by providing decision makers with a small set of possible scenarios reflecting the shape and temporal diversity of future trajectories? In particular, how to structure the diversity of future trajectories according to shape and temporal criteria?
\end{itemize}
\subsection{Exploiting incomplete prior physical knowledge in machine learning models}
The majority of existing works combining machine learning and physics assume a \textit{complete} physical knowledge of the system in the training process \cite{de2017deep,raissi2017physics}. In contrast to this mainstream direction, we investigate in this thesis how to leverage \textit{incomplete} physical models, i.e.~ models that are insufficient for fully describing the dynamics. We have seen that physical models are coarse representations of reality in many situations, e.g.~ in physics, climate, robotics, finance, \textit{etc}.
In the solar energy forecasting example, the dynamics of clouds can be described from fluid mechanics principles. However, an exhaustive physical description is mainly out of reach, since the dynamics of the atmosphere is governed by many complex and interacting physical phenomena (e.g.~ cloud formation, evaporation, turbulence). Moreover, even a complete physical model becomes insufficient in case of missing input information, i.e.~ when the true state of the system (appearing in the dynamical equations) is not fully observed. In our case, we do not have a full observation of the state of the atmosphere above the PV station: we only have access to fisheye images, we do not use information about the wind speed or the altitude of clouds, and we cannot determine whether several cloud layers mask one another.
Another exacerbating difficulty is the \textit{non-observability of the prior dynamical model}, i.e.~ when the physical model does not apply directly in the input space. For example, common laws of motion for tracking clouds in fisheye images, e.g.~ a simple advection model, assume that the clouds have been correctly identified and segmented, and that a linear motion of clouds corresponds to a linear translation in the image, which is not the case because of the circular distortion of the fisheye lens.
So far, exploiting incomplete physical models has been explored by very few works \cite{long2018hybridnet,saha2020phicnet,neural20}. This problem poses many technical challenges from several points of view:
\begin{itemize}
\item Neural network architecture: how to design deep architectures with hard or soft physical constraints?
\item Training: how to efficiently train these models? From a theoretical point of view, can we provide guarantees on the quality of the ML/MB decomposition (existence, uniqueness)?
\end{itemize}
\section{Contributions and outline}
In this thesis, we address the two aforementioned scientific challenges for spatio-temporal forecasting. For multistep and non-stationary time series forecasting in deterministic and probabilistic contexts, we propose to incorporate differentiable shape and temporal features in the training scheme of deep forecasting models (part \ref{part:part1} of the thesis). For exploiting physical knowledge in deep architectures in incomplete-knowledge settings, we introduce a disentangling architecture and explore the theoretical properties of the resulting ML/MB decomposition (thesis part \ref{part:part2}). Finally, we apply our proposed ideas to the solar irradiance forecasting problem (thesis part \ref{part:part3}).
\subsection*{Part \ref{part:part1}: Differentiable shape and time criteria for deterministic and probabilistic forecasting}
\label{sec:first_deadlock}
In the non-stationary contexts occurring in many industrial applications, current deep learning forecasting methods are often inadequate to properly predict sharp variations. The literature mainly focuses on new neural network architectures to improve forecasts. In contrast, the choice of the training loss function is rarely questioned. The large majority of methods are trained with the proxy Mean Squared Error (MSE) or variants of it, which lead to non-sharp predictions. Besides, current state-of-the-art probabilistic forecasting methods are also ill-adapted to representing the shape and temporal variability of future scenarios. In this part, we propose to design training objectives that account for the shape and temporal localization of predictions.
Our contributions to tackle the first scientific challenge are the following:
\begin{itemize}
\item For training deep forecasting models, we introduce in Chapter \ref{chap:criteria} differentiable shape and temporal criteria inspired by evaluation metrics commonly used in applications. We propose a unifying view of these criteria both in terms of dissimilarities (loss functions) and similarities (positive semi-definite kernels). We emphasize their efficient computation and differentiability, which allows their use in deep learning pipelines.
\item For deterministic forecasting, we introduce in Chapter \ref{chap:dilate} the DILATE training loss function, which combines a shape and a temporal dissimilarity to accurately predict sharp events with precise temporal localization. We show that training with the DILATE loss instead of the MSE leads to better results at test time on several non-stationary benchmarks, for both generic and state-of-the-art architectures.
\item For probabilistic forecasting, we present in Chapter \ref{chap:stripe} the STRIPE model that provides a set of diverse and accurate possible future trajectories. The diversity is structured with shape and temporal positive semi-definite kernels embedded in a determinantal point process (DPP) mechanism. We show that our method leads to predictions with a better quality/diversity tradeoff than competing diversifying mechanisms.
\end{itemize}
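A hedged toy illustration of the determinantal mechanism behind STRIPE (our own example, with a generic RBF kernel standing in for the shape and temporal kernels): a determinantal point process scores a set of trajectories by the determinant of their Gram matrix, which is larger for diverse sets:

```python
import numpy as np

# Toy DPP diversity score: det(K) of the kernel Gram matrix of a set of
# trajectories. An RBF kernel is used here as a stand-in for the shape and
# temporal kernels; diverse sets get a higher determinant than similar ones.
def rbf_gram(trajs, gamma=1.0):
    d2 = np.sum((trajs[:, None, :] - trajs[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * d2)

diverse = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])   # spread-out "trajectories"
similar = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1]])   # nearly identical ones

det_diverse = np.linalg.det(rbf_gram(diverse))
det_similar = np.linalg.det(rbf_gram(similar))
```

Maximizing such a determinant during training therefore encourages the sampled future scenarios to differ from one another under the chosen kernels.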
\subsection*{Part \ref{part:part2}: Physically-informed forecasting with incomplete knowledge}
\label{sec:second_deadlock}
To advance towards the exploitation of incomplete physical knowledge in deep forecasting models, we first introduce in this part a new ML/MB deep architecture dedicated to video prediction, for which physical laws are often not directly applicable at the pixel level. We then delve deeper into the ML/MB decomposition and propose a new learning framework with uniqueness guarantees.
Our contributions to tackle the second scientific challenge are the following:
\begin{itemize}
\item In Chapter \ref{chap:phydnet}, we propose a new deep architecture called PhyDNet, dedicated to video prediction in non-observable prior contexts. PhyDNet learns physical dynamics parameterized by a general class of PDEs. Since physical laws may not directly apply at the pixel level in videos, we complement the physical model with a data-driven model in charge of learning the residual information necessary for accurate prediction, such as appearance, texture and details. We show that PhyDNet reaches very good performance on several video prediction benchmarks, from strong (linear translation for the Moving MNIST dataset) to weak prior physical knowledge (modelling general human motion for the Human3.6M dataset).
\item In Chapter \ref{chap:aphynity}, we concentrate on the ML/MB decomposition problem and the optimal cooperation between physical and data-driven models. We introduce a principled learning framework, called APHYNITY, for forecasting complex physical systems with incomplete knowledge. Inspired by the least-action principle, APHYNITY minimizes the norm of the data-driven complement under the constraint of perfect prediction by the augmented model, which leads to a unique decomposition under mild assumptions (Chebyshev set). We show on several challenging physical dynamics that APHYNITY ensures better forecasting and parameter identification performance than MB or ML models alone, and than competing ML/MB hybrid methods.
\end{itemize}
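Schematically, the APHYNITY decomposition amounts to a constrained optimization problem: among all decompositions of the dynamics into a physical part $F_p$ (in a physically-motivated hypothesis space $\mathcal{F}_p$) and a data-driven complement $F_a$, select the one with minimal-norm complement:
\begin{equation*}
\min_{F_p \in \mathcal{F}_p,\; F_a \in \mathcal{F}} \lVert F_a \rVert
\quad \text{subject to} \quad
\forall X \in \mathcal{D},\; \forall t,\; \frac{\mathrm{d}X_t}{\mathrm{d}t} = (F_p + F_a)(X_t).
\end{equation*}
The minimal-norm criterion is what singles out a unique decomposition among the infinitely many pairs $(F_p, F_a)$ that fit the observed dynamics.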
\subsection*{Part \ref{part:part3}: Application to solar irradiance forecasting}
\label{sec:deadlock_application}
Finally, we apply the methodological contributions of this thesis to the solar irradiance forecasting problem at EDF.
\begin{itemize}
\item In Chapter \ref{chap:overview_fisheye}, we present the industrial solar irradiance forecasting problem in more detail and review the existing literature for solving it. We also propose a first deep learning model for estimating and forecasting solar irradiance.
\item In Chapter \ref{chap:phydnet_fisheye}, we apply the methodological contributions of this thesis to this problem. We propose an adaptation of the introduced PhyDNet architecture to perform physically-constrained prediction. We also evaluate the DILATE loss and the APHYNITY framework on this problem and discuss future improvement directions.
\end{itemize}
Before delving into the core of the thesis, we present in Chapter \ref{chap:related_work} an overview of the basics of machine learning and the related works on spatio-temporal forecasting and physically-constrained machine learning. Finally, in Chapter \ref{chap:conclusion}, we summarize our work and propose appealing perspectives for future work.\\
\newpage
This thesis is based on the following list of publications:
\begin{tabular}{p{14cm}|c}
\toprule
Publication & Chapter \\
\midrule
Vincent Le Guen and Nicolas Thome. "Deep Time Series Forecasting with Shape and Temporal Criteria". IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022. & \ref{chap:criteria} \\
\midrule
Vincent Le Guen and Nicolas Thome. "Shape and Time Distortion Loss for Training Deep Time Series Forecasting Models". In Advances in Neural Information Processing Systems (NeurIPS 2019). & \ref{chap:dilate} \\
\midrule
Vincent Le Guen and Nicolas Thome. "Probabilistic Time Series Forecasting with Shape and Temporal Diversity". In Advances in Neural Information Processing Systems (NeurIPS 2020). & \ref{chap:stripe} \\
\midrule
Vincent Le Guen and Nicolas Thome. "Disentangling Physical Dynamics from Unknown Factors for Unsupervised Video Prediction". In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2020). & \ref{chap:phydnet} \\
\midrule
Yuan Yin$^*$, Vincent Le Guen$^*$, Jeremie Dona$^*$, Ibrahim Ayed$^*$, Emmanuel de Bézenac$^*$, Nicolas Thome and Patrick Gallinari. "Augmenting Physical Models with Deep Networks for Complex Dynamics Forecasting", In International Conference on Learning Representations (ICLR 2021, oral presentation), Journal of Statistical Mechanics: Theory and Experiments (JSTAT 2021). & \ref{chap:aphynity} \\
\midrule
Vincent Le Guen and Nicolas Thome. "Prévision de l'irradiance solaire par réseaux de neurones profonds à l'aide de caméras au sol". In: GRETSI 2019. & \ref{chap:overview_fisheye} \\
\midrule
Vincent Le Guen and Nicolas Thome. "A Deep Physical Model for Solar Irradiance Forecasting With Fisheye Images". In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops 2020 (OmniCV 2020 workshop). & \ref{chap:phydnet_fisheye}\\
\bottomrule
\end{tabular}
\clearpage{\pagestyle{empty}\cleardoublepage}
\mainmatter
\chapter{Introduction}
\label{chap:intro}
\input{./introduction.tex}
\mbox{}
\thispagestyle{empty}
\chapter{State-of-the-art on spatio-temporal forecasting}
\label{chap:related_work}
\input{./related_work.tex}
\partabstract{
\vspace{1cm}
\begin{center}
\textsc{Abstract}\\
\end{center}
\vspace{1cm}
In this part, we tackle the multistep deep time series forecasting problem, in the challenging context of non-stationary series that can present sharp variations. In deep learning, the mainstream research direction concerns developing new neural forecasting architectures. In contrast, the choice of the training loss function is rarely questioned: the surrogate mean squared error (MSE) is used in the vast majority of cases. We propose here to leverage shape and temporal criteria in the training objective. We introduce differentiable similarities and dissimilarities for characterizing shape accuracy and temporal localization error (Chapter \ref{chap:criteria}). We leverage these criteria by introducing two approaches dedicated to deterministic and probabilistic forecasting: the DILATE loss function for deterministic forecasting, which ensures sharp predictions with accurate temporal localization (Chapter \ref{chap:dilate}), and the STRIPE model for probabilistic forecasting with shape and temporal diversity (Chapter \ref{chap:stripe}). We validate our claims with extensive experiments on synthetic and real-world datasets.
}
\part{Differentiable shape and time criteria for deterministic and probabilistic forecasting}
\label{part:part1}
\chapter{Differentiable shape and temporal criteria}
\label{chap:criteria}
\input{./shape_time_criteria.tex}
\mbox{}
\thispagestyle{empty}
\chapter{Distortion loss with shape and time}
\label{chap:dilate}
\input{./dilate.tex}
\chapter{Probabilistic forecasting with shape and temporal diversity}
\label{chap:stripe}
\input{./stripe.tex}
\partabstract{
\vspace{1cm}
\begin{center}
\textsc{Abstract}\\
\end{center}
\vspace{1cm}
In this part, we are interested in designing Machine Learning (ML) / Model-Based (MB) augmented models by leveraging incomplete physical knowledge formalized through ODEs/PDEs. Since physical laws are often neither directly applicable at the pixel level nor sufficient for predicting the whole content of future images in generic videos, we propose to learn a latent space in which we suppose that the physical dynamics apply. We introduce the PhyDNet model (Chapter \ref{chap:phydnet}), a two-branch recurrent neural network: one branch is responsible for modelling the physical dynamics, while the other captures the complementary information required for accurate prediction. We show that PhyDNet reaches state-of-the-art performance on several video prediction benchmarks. Going further, we concentrate on the ML/MB decomposition problem discussed in Chapter \ref{chap:intro}, which is ill-posed and admits infinitely many solutions. We introduce a principled learning framework, called APHYNITY (Chapter \ref{chap:aphynity}). Inspired by the least-action principle, APHYNITY minimizes the norm of the data-driven complement under the constraint of perfect prediction by the augmented model. We provide a theoretical analysis of the decomposition and show that we can ensure existence and uniqueness guarantees under mild conditions. We show on several challenging physical dynamics that APHYNITY ensures better forecasting and parameter identification performance than MB or ML models alone, and than competing MB/ML hybrid methods.
}
\part{Physics-informed forecasting with incomplete knowledge}
\label{part:part2}
\chapter{Disentangling physical from residual dynamics for video prediction}
\label{chap:phydnet}
\input{./phydnet.tex}
\mbox{}
\thispagestyle{empty}
\chapter{Augmenting incomplete physical models for complex dynamics forecasting}
\label{chap:aphynity}
\input{./aphynity.tex}
\mbox{}
\thispagestyle{empty}
\partabstract{
\vspace{1cm}
\begin{center}
\textsc{Abstract}\\
\end{center}
\vspace{1cm}
In this final part, we tackle the industrial solar energy forecasting problem with fisheye images that we briefly discussed in Chapter \ref{chap:intro}. We first present the use-case in detail, and review the existing traditional methods and the early deep learning approaches (Chapter \ref{chap:overview_fisheye}). We also propose a first data-driven deep learning model for solar irradiance estimation and prediction and discuss its limitations. In Chapter \ref{chap:phydnet_fisheye}, we investigate the model-based / machine learning cooperation studied in this thesis for improving the model. We propose a new physically-constrained architecture adapted from our PhyDNet video prediction model (Chapter \ref{chap:phydnet}). We also evaluate the use of our DILATE loss (Chapter \ref{chap:dilate}) for enforcing predictions with accurate shape and temporal localization, and of our APHYNITY framework (Chapter \ref{chap:aphynity}) for optimal ML/MB decomposition.
}
\part{Application to solar irradiance forecasting}
\label{part:part3}
\chapter{Overview of solar irradiance forecasting}
\label{chap:overview_fisheye}
\input{./overview_fisheye.tex}
\mbox{}
\thispagestyle{empty}
\chapter{Deep learning for solar irradiance forecasting}
\label{chap:phydnet_fisheye}
\input{./phydnet_fisheye.tex}
\chapter{Conclusion and perspectives}
\label{chap:conclusion}
\input{./conclusion.tex}
\bibliographystyle{plain}
\section{Introduction}
\begin{figure}[b!]
\centering
\includegraphics[width=12cm]{images/ghi.png}
\caption[The different components of solar irradiance.]{The different components of solar irradiance. Figure taken from \cite{jimenez2016wrf}.}
\label{fig:solar-irradiance}
\end{figure}
\lettrine[lines=3]{T}o tackle climate change and limit global warming, the major world economies agreed in 2015 at the Paris climate conference (COP21) on a binding plan to reduce greenhouse gas emissions. In the energy sector, this reinforced massive investments in renewable energy generation such as solar or wind. However, a limitation of solar and wind energies is their intermittent and non-controllable nature, in contrast to conventional fossil fuel or nuclear energy. This causes major challenges for their integration at scale into the existing electricity grid, since electricity production and consumption must be balanced at all times. Therefore, accurately forecasting the intermittent energy production at various time horizons (from seconds to a few days) becomes a crucial aspect of the energy transition. Many applications could benefit from improved solar energy forecasts, such as the development of smart grids, hybrid solar/conventional power systems, or energy trading.
\subsection{The solar irradiance components}
In this thesis, we are interested in forecasting the solar irradiance, which corresponds to the incoming power of electromagnetic radiation received from the sun (expressed in $W/m^2$). The Global Horizontal Irradiance (GHI) can be decomposed into the Direct Normal Irradiance (DNI), received directly from the sun on a plane perpendicular to its rays, and the Diffuse Horizontal Irradiance (DHI), resulting from the diffusion by clouds and atmospheric aerosols or from reflection by the ground (see Figure \ref{fig:solar-irradiance}):
\begin{equation}
\text{GHI} = \text{DHI} + \sin h \times \text{DNI}
\end{equation}
where $h$ is the solar elevation angle.
The GHI is the main quantity of interest in this thesis, since it is directly linked to the electric power production expressed in Watts, given the technology and orientation of the photovoltaic panels and the ambient temperature. In practice, before applying any statistical method, the solar irradiance is often normalized by a clear-sky model corresponding to the theoretical irradiance received in cloudless conditions. This normalization compensates for the inherent seasonality of the solar irradiance. In this thesis, we use the ESRA (European Solar Radiation Atlas) clear-sky model \cite{rigollier2000clear} and we denote by KGHI the GHI normalized by its clear-sky value.
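As a hedged numerical sanity check (our own sketch; `clear_sky_ghi` below is a hypothetical stand-in for the ESRA model output, which we do not reproduce here), the GHI decomposition and the clear-sky normalization read as:

```python
import numpy as np

def ghi_from_components(dhi, dni, elevation_rad):
    """GHI = DHI + sin(h) * DNI, with h the solar elevation angle."""
    return dhi + np.sin(elevation_rad) * dni

def kghi(ghi, clear_sky_ghi, eps=1e-6):
    """Clear-sky index: measured GHI normalized by the theoretical cloudless GHI."""
    return ghi / np.maximum(clear_sky_ghi, eps)

# Example: DHI = 100 W/m^2, DNI = 800 W/m^2, sun at 30 degrees elevation
ghi = ghi_from_components(dhi=100.0, dni=800.0, elevation_rad=np.deg2rad(30))
# sin(30 deg) = 0.5, hence GHI = 100 + 0.5 * 800 = 500 W/m^2
```

The `eps` guard simply avoids dividing by a near-zero clear-sky value around sunrise and sunset.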
\subsection{The different data sources for solar irradiance forecasting}
For solar energy, the main source of variability comes from the occlusion of the sun by clouds. We presented in Figure \ref{fig:types-observations} the main classes of methods for forecasting solar irradiance. Although statistical time series forecasting can be directly applied to the 1D solar irradiance series, this strategy is blind to the motion of clouds and thus cannot properly anticipate variations. To capture the spatio-temporal dynamics of clouds, current methods rely on weather forecasts or sky image analysis. Numerical weather prediction models solve the equations of physics to forecast the dynamics of the atmosphere; they have a spatial resolution of around 1 km and a temporal resolution of 1 to 2 h for the AROME model of Meteo France. For shorter forecasting horizons, satellite images can be exploited to provide irradiance forecasts up to a few hours ahead, with a 15 min temporal granularity and a 1 km spatial scale.
For very short-term horizons (< 20 min) at the scale of a PV plant, fisheye cameras pointed towards the sky (see Figure \ref{fig:fisheye-camera}) have become popular in recent years \cite{gauchet2012surface,chu2013hybrid,chu2016sun,marquez2013intra,schmidt2016evaluating,kuhn2018validation}. They offer a hemispheric view of the sky that makes it possible to assess the evolution of the cloud cover.
\begin{figure}[h]
\centering
\includegraphics[width=16cm]{images/fisheye_context.png}
\caption{Fisheye camera and fisheye image for short-term solar irradiance forecasting.}
\label{fig:fisheye-camera}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=10cm]{images/weather_station.png}
\caption{EDF scientific test site on La Réunion Island, composed of a fisheye camera, a pyranometer and a weather station mounted above a PV power plant.}
\label{fig:meteo_campaign}
\end{figure}
\subsection{Meteorological campaign at EDF R\&D with fisheye images}
EDF Research and Development (R\&D) has been running a meteorological campaign on La Réunion Island since 2010, with fisheye cameras (Axis PTZ 212) and pyranometers (SPN1) measuring the ground truth solar irradiance (see Figure \ref{fig:fisheye-camera} and Figure \ref{fig:meteo_campaign}). A database of more than 7 million images (one every 10 s), with the corresponding irradiance measurements, has been collected. The objective is to forecast solar irradiance with fisheye cameras only, which are much cheaper than pyranometers and provide additional spatial information compared to irradiance time series.
\section{Related work}
In this Section, we review the main existing methods for short-term solar irradiance forecasting.
\paragraph{Persistence and statistical models}
For very short-term forecasting, a first natural baseline is the persistence, which assumes that the current irradiance level (normalized by the clear sky) will persist. Persistence is often a competitive baseline, with optimal performance in clear-sky conditions. By definition, however, persistence cannot anticipate variations.
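As an illustration, a minimal sketch of the (smart) persistence baseline, assuming clear-sky irradiance values are available from a model such as ESRA (function and variable names are ours):

```python
def smart_persistence(ghi_now, clear_now, clear_future, eps=1e-6):
    """Forecast by persisting the clear-sky index KGHI.

    Deterministic diurnal variations are anticipated through the
    clear-sky model; cloud-induced variations are not.
    """
    kghi_now = ghi_now / max(clear_now, eps)
    return kghi_now * clear_future

# Current KGHI is 640/800 = 0.8; it is assumed to persist at the horizon.
forecast = smart_persistence(ghi_now=640.0, clear_now=800.0, clear_future=750.0)
```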
Other statistical models \cite{diagne2013review,wolff2016statistical,mellit2008artificial} use local information (e.g.~ past irradiance values, PV production, temperature, weather forecasts) to capture statistical patterns and predict future values with regression or clustering algorithms. However, these methods do not observe the cloud motion and thus fail to anticipate variations due to sun occlusions.
\paragraph{Ground-based images}
For assessing the cloud coverage and anticipating short-term variations due to sun occlusions, researchers have investigated sky imagery with ground-based cameras since the early 2010s. Earlier works used specific scientific instruments, such as the Total Sky Imager in \cite{chow2011intra,marquez2013intra} (a spherical mirror with a camera pointing downwards) or sun trackers. Since then, low-cost webcams have met with great success, leading to soaring interest from the community \cite{gauchet2012surface,chu2013hybrid,chu2016sun,schmidt2016evaluating}.
Although many hardware and algorithmic variants exist (e.g.~ additional sensors, multiple cameras for stereo estimation), all these methods mainly follow a similar traditional image processing pipeline:
\begin{enumerate}
\item Camera calibration for determining the distortion parameters of the fisheye objective;
\item Fisheye image acquisition at fixed intervals (e.g.~ every 10 s or 1 min), sometimes with several exposures and High Dynamic Range (HDR) processing;
\item Image segmentation with thresholds, either handcrafted or adaptive, based on color ratios or other photometric properties; the segmentation can be used to derive a binary cloud map or image features;
\item Cloud motion estimation with optical flow;
\item Cloud motion propagation into the future to generate a predicted irradiance map.
\end{enumerate}
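Steps 3 to 5 can be illustrated with a deliberately simplified sketch: a fixed red/blue threshold stands in for the adaptive segmentation schemes, and a single global motion vector stands in for a dense optical flow field:

```python
import numpy as np

def segment_clouds(img, threshold=0.75):
    """Step 3: binary cloud map from the red/blue ratio.

    Clouds scatter all wavelengths and look whitish (R/B close to 1),
    while the clear sky is strongly blue (R/B small). `img` is an
    (H, W, 3) RGB array with values in [0, 1]."""
    r, b = img[..., 0], img[..., 2]
    return r / np.maximum(b, 1e-6) > threshold

def propagate_cloud_map(cloud_map, motion, steps=1):
    """Steps 4-5: advect the cloud map by a global (dy, dx) motion
    vector, a crude stand-in for optical-flow-based propagation."""
    dy, dx = motion
    return np.roll(cloud_map, shift=(steps * dy, steps * dx), axis=(0, 1))

# Toy image: blue sky with a small whitish cloud patch.
img = np.zeros((8, 8, 3))
img[..., 2] = 1.0                  # blue everywhere
img[2:4, 2:4] = [0.9, 0.9, 0.95]   # nearly gray cloud pixels
clouds = segment_clouds(img)
future = propagate_cloud_map(clouds, motion=(0, 2))  # cloud drifts right
```

The predicted cloud map is then confronted with the sun position in the image to derive the forecast irradiance, which is precisely where such handcrafted pipelines are most fragile.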
However sophisticated the processing pipeline may be, the core challenge of the problem remains: clouds follow a complex stochastic motion with abrupt variations that is hard to extrapolate. All these methods also rely on manual engineering that remains valid only in a limited range of conditions.
\paragraph{Deep learning for solar irradiance forecasting}
In recent years, deep learning has become an appealing alternative that replaces the whole conventional pipeline with a model learned end-to-end from raw fisheye images \cite{pothineni2018kloudnet,zhang2018deep,spiess2019learning,sun2019short,nie2020pv,paletta2020temporally,zhen2021ultra}. However, as Paletta \textit{et al.~} \cite{paletta2021benchmarking} have highlighted, standard deep learning methods still struggle to properly understand the cloud motion and do not anticipate sharp variations.
\section{Proposed models for solar irradiance estimation and forecasting}
In this Section, we introduce two deep learning models: one for solar irradiance estimation, the second for forecasting. We define estimation as the prediction of the irradiance $r_T$ associated with the image $I_T$. Forecasting corresponds to predicting the future irradiance $r_{T+H}$ (or the complete future trajectory $r_{T+1},\cdots,r_{T+H}$) given a sequence of past images $(I_1, \cdots, I_T)$.
\subsection{Solar irradiance estimation}
For solar irradiance estimation, we use a convolutional neural network that takes as input a fisheye image (without preprocessing) and outputs the estimated solar irradiance for that image. We first propose a handcrafted convolutional architecture (shown in Figure \ref{fig:convnet-small}) working on RGB images resized to $80\times 80$ pixels. This model has approximately 470,000 parameters.
\begin{figure}
\centering
\includegraphics[width=10cm]{images/convnet1.png}
\caption{Small convolutional network used for solar irradiance estimation.}
\label{fig:convnet-small}
\end{figure}
We also propose a much larger model relying on the DenseNet architecture \cite{huang2017densely}, which has reached state-of-the-art performances on the ImageNet image classification task. The model works with higher-resolution images, resized to $224 \times 224$ pixels. To adapt the model to this regression task, we replace the final classification layers with fully-connected layers outputting a single irradiance value. The overall model has approximately 18 million parameters.
\subsection{Solar irradiance forecasting}
To forecast solar irradiance, we propose a neural network architecture relying on the ConvLSTM model \cite{xingjian2015convolutional}, which is a strong baseline for deep video prediction. Depicted in Figure \ref{fig:convlstm-fisheye}, our architecture is composed of a ConvLSTM encoder that reads a sequence of $T$ past fisheye images $(I_{1},\cdots,I_{T-1}, I_T)$ and encodes them into a context vector. The network has two output branches: one for predicting the future solar irradiance $\hat{r}_{T+H}$ at a given horizon $H$, and the other for predicting the future fisheye image $\hat{I}_{T+H}$.
We empirically verified that this multi-task objective improves performance compared to forecasting the irradiance only, thanks to the richer supervision signal and the cooperation between both tasks.
Our forecasting model is composed of 4 stacked ConvLSTM layers acting on input images resized to $80\times 80$ pixels.
\begin{figure}[H]
\centering
\includegraphics[width=14cm]{images/convlstm_fisheye1.png}
\caption{Proposed architecture for solar irradiance forecasting based on the ConvLSTM model \cite{xingjian2015convolutional}.}
\label{fig:convlstm-fisheye}
\end{figure}
\section{Experimental results}
\subsection{Fisheye image dataset}
We conduct experiments on the fisheye image dataset collected by EDF on La Réunion Island. For the estimation task, we use a training set composed of 4,190,064 images from the years 2012 to 2015, and a test set of 1,265,717 images from the year 2016. Images are processed only above a solar elevation of 10°, and all irradiance measurements are normalized by the ESRA clear-sky model \cite{rigollier2000clear}. We use images resized to $80 \times 80$ pixels for the ConvNet model and $224 \times 224$ for the DenseNet model.
For the forecasting task, the training set is composed of 180,000 sequences of 10 images spaced by 1 min (with the associated ground truth solar irradiance measurements) from the years 2014 to 2016, and the test set of 20,000 sequences from the year 2013 on the same site. We use images resized to $80 \times 80$ pixels. The first 5 images serve as input, and we predict the 5 following images and solar irradiance values.
\subsection{Solar irradiance estimation results}
We present in Table \ref{tab:fisheye-estimation} the estimation results for the KGHI. We have trained two DenseNet models: one that only predicts the KGHI and another that jointly predicts the KGHI and KDHI. We compare our proposed deep models with the baseline previously developed at EDF R\&D \cite{gauchet2012surface}. This traditional method segments the fisheye images with thresholds on the R-B difference and the luminance, defines 5 features based on the segmentation ratios, and applies a Nadaraya-Watson kernel regression \cite{nadaraya1964estimating} to estimate the irradiance.
We evaluate the performances with the normalized Mean Absolute Error (nMAE) and normalized Root Mean Squared Error (nRMSE). Normalization is performed by dividing by the mean KGHI value over the training set.
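For reference, the two metrics can be computed as follows (a small illustrative implementation; the function name is ours):

```python
import numpy as np

def nmae_nrmse(y_true, y_pred, train_mean):
    """Normalized MAE and RMSE: errors are divided by the mean KGHI
    over the training set (`train_mean`) and returned as fractions."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    err = y_pred - y_true
    nmae = np.mean(np.abs(err)) / train_mean
    nrmse = np.sqrt(np.mean(err ** 2)) / train_mean
    return nmae, nrmse

nmae, nrmse = nmae_nrmse([0.9, 0.5, 0.7], [0.8, 0.6, 0.7], train_mean=0.7)
# nRMSE >= nMAE always holds, with a larger gap when errors are bursty.
```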
\begin{table}[H]
\centering
\caption{KGHI estimation results on the test set.}
\label{tab:fisheye-estimation}
\begin{tabular}{ccc}
\toprule
model & nMAE & nRMSE \\ \midrule
Baseline & 14.9 \% & 21.6 \% \\
ConvNet KGHI & 6.59 \% & 10.3 \% \\
DenseNet KGHI & 2.91 \% & 5.27 \% \\
DenseNet KGHI + KDHI & \textbf{2.90 \%} & \textbf{4.83 \%} \\
\bottomrule
\end{tabular}
\end{table}
Results show that the ConvNet model (depicted in Figure \ref{fig:convnet-small}) yields a large performance improvement over the baseline (from 21.6 \% to 10.3 \% in nRMSE). Going deeper with the DenseNet model brings a further large improvement (5.27 \%). This confirms the ability of deep learning to automatically learn a representation space approximating a complex mapping from a large dataset of annotated images. Finally, we observe that the DenseNet model that jointly estimates the KGHI and KDHI gives the best performances (4.83 \%), indicating that exploiting the correlations between both irradiance components helps in better estimating the KGHI. Intuitively, for two images with similar GHI but different cloud conditions, the difference in diffuse irradiance (DHI) should help to learn more specific cloud features that generalize better to different test images.
We display in Figure \ref{fig:fisheye-estimation} a few qualitative estimation examples. We can see for several sky conditions that the DenseNet estimations are very close to the measurements, both in GHI and DHI. Interestingly, the gap with the baseline is much larger when the diffuse irradiance (DHI) is high, e.g.~ for images (c) and (e). This can be explained by the difficulty of segmenting clouds with different levels of gray using handcrafted thresholds; the deep learning approach learns better features for representing the shades of clouds, supervised by the GHI and DHI values.
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{images/fisheye_estimation_results.png}
\caption{Qualitative fisheye estimation results of the GHI and DHI.}
\label{fig:fisheye-estimation}
\end{figure}
\subsection{Solar irradiance forecasting results}
We then evaluate the forecasting performances of our method on the fisheye image dataset. We compare our ConvLSTM architecture with the optical flow baseline previously developed at EDF R\&D \cite{gauchet2012surface} (sketched in Figure \ref{fig:mldl_diff}), and with the (smart) persistence, which copies the current value (normalized by the clear sky) as the forecast for the future timestep.
Global results presented in Table \ref{tab:irradiance-forecasting} show that our proposed deep forecasting model outperforms both the optical flow baseline and the persistence. However, the performance gap with the traditional method is narrower than for estimation, revealing the difficulty of the forecasting task.
To further analyse the differences, we display in Figure \ref{fig:fisheye_conv_forecasting} the model predictions on a particular day of the test set. We can see that the ConvLSTM predictions are much closer to the KGHI ground truth than the optical flow baseline and than the persistence ConvNet (which corresponds to applying the estimation ConvNet).
Interestingly, the optical flow baseline has a worse RMSE than the persistence. However, the optical flow method shows a better ability to anticipate sharp variations (e.g.~ around timestep 200), and is therefore better suited to the industrial application. This confirms that the MSE and its variants are not well adapted for training and evaluating models in this non-stationary context with abrupt changes, which has motivated the contributions of this thesis. In the following Chapter, we will train and evaluate models with our proposed shape and temporal criteria to improve models in this context.
\begin{table}[H]
\centering
\caption{Forecasting performances of the KGHI (normalized Global Horizontal Irradiance) at a 5min horizon.}
\begin{tabular}{c|c}
\toprule
Method & normalized RMSE \\
\midrule
Optical flow baseline & 32.9 \% \\
Persistence & 28.5 \% \\
ConvLSTM (ours) & \textbf{26.6 \%} \\
\bottomrule
\end{tabular}
\label{tab:irradiance-forecasting}
\end{table}
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{images/fisheye_convlstm_forecasting.png}
\caption{Qualitative KGHI forecasting results at 5min on a particular day.}
\label{fig:fisheye_conv_forecasting}
\end{figure}
\section{Conclusion}
In this Chapter, we have presented the solar irradiance forecasting problem with fisheye images at EDF, and reviewed the existing methods (traditional and deep). We have proposed a first set of deep models for estimating and forecasting the solar irradiance, which reach state-of-the-art results compared to traditional methods. However, for the forecasting task, there remains room for improvement, in particular for modelling the sharp variations and the complex nonlinear cloud dynamics. These limitations will be addressed in the next Chapter.
\clearpage{\pagestyle{empty}\cleardoublepage}
\section{Introduction}
\label{sec:intro}
\lettrine[lines=3]{V}ideo forecasting consists in predicting the future content of a video conditioned on previous frames. This is of crucial importance in various contexts, such as weather forecasting \cite{xingjian2015convolutional}, autonomous driving \cite{kwon2019predicting}, reinforcement learning \cite{oh2015action}, robotics \cite{finn2016unsupervised}, or action recognition \cite{liu2017video}.
In this work, we focus on unsupervised video prediction, where the absence of semantic labels to drive predictions exacerbates the challenges of the task.
In this context, a key problem is to design video prediction methods able to represent the complex dynamics underlying raw data.
State-of-the-art methods for training such complex dynamical models currently rely on deep learning, with specific architectural choices based on 2D/3D convolutional~\cite{mathieu2015deep,vondrick2016generating} or recurrent neural networks~\cite{wang2017predrnn,wang2018predrnn++,wang2019memory}. To improve predictions, recent methods use adversarial training \cite{mathieu2015deep,vondrick2016generating,kwon2019predicting}, stochastic models \cite{castrejon2019improved,minderer2019unsupervised,franceschi2020stochastic}, constraint predictions by using geometric knowledge \cite{finn2016unsupervised,jia2016dynamic,xue2016visual} or by disentangling factors of variation \cite{villegas2017decomposing,tulyakov2018mocogan,denton2017unsupervised,hsieh2018learning}.
Another appealing way to model the video dynamics is to exploit prior physical knowledge, e.g.~ formalized by partial differential equations (PDEs) \cite{de2017deep,seo2019differentiable}. Recently, interesting connections between residual networks and PDEs have been drawn \cite{weinan2017proposal,lu2018beyond,chen2018neural}, enabling the design of physically-constrained machine learning frameworks~\cite{raissi2018deep,de2017deep,seo2019differentiable,rudy2017data}.
These approaches are very successful for modelling physical systems when the underlying dynamics is well described by the physical equations in the input space~\cite{raissi2018deep,rudy2017data,long2018pde}. However, this assumption is rarely fulfilled in the pixel space when predicting generalist videos.
In this work, we introduce PhyDNet, a deep model dedicated to performing accurate future frame prediction from generalist videos. In such a context, physical laws do not apply in the input pixel space; the goal of PhyDNet is to learn a semantic latent space $\bm{\mathcal{H}}$ in which they do, and in which they are disentangled from the other factors of variation required to perform future prediction. Prediction results of PhyDNet trained on Moving MNIST~\cite{srivastava2015unsupervised} are shown in Figure \ref{fig:fig1}. The left branch represents the physical dynamics in $\bm{\mathcal{H}}$; when decoded into the image space, we can see that the corresponding features encode approximate segmentation masks predicting digit positions in subsequent frames.
On the other hand, the right branch extracts residual information required for prediction, here the precise appearance of the two digits. Combining both representations eventually makes accurate prediction successful.
Our contributions to the unsupervised video prediction problem with PhyDNet can be summarized as follows:
\begin{itemize}
\item We introduce a global sequence to sequence two-branch deep model (section~\ref{sec:3.1}) dedicated to jointly learn the latent space $\bm{\mathcal{H}}$ and to disentangle physical dynamics from residual information, the latter being modeled by a data-driven (ConvLSTM~\cite{xingjian2015convolutional}) method. \vspace{-0.05cm}
\item Physical dynamics is modelled by a new recurrent physical cell, PhyCell (section~\ref{section:phycell}), discretizing a broad class of PDEs in $\bm{\mathcal{H}}$. PhyCell is based on a prediction-correction paradigm inspired from the data assimilation community \cite{asch2016data},~enabling robust training with missing data and for long-term forecasting. \vspace{-0.05cm}
\item Experiments (section~\ref{section4}) reveal that PhyDNet outperforms state-of-the-art methods on four generalist datasets: this is, as far as we know, the first physically-constrained model able to show such capabilities. We highlight the importance of both disentanglement and physical prediction for optimal performances.
\end{itemize}
\begin{figure}
\centering
\includegraphics[width=11cm]{images/phydnet_fig1.png}
\caption[Overview of the PhyDNet model.]{PhyDNet is a deep model mapping an input video into a latent space $\bm{\mathcal{H}}$, from which future frame prediction can be accurately performed. PhyDNet learns $\bm{\mathcal{H}}$ in an unsupervised manner, such that physical dynamics and unknown factors necessary for prediction, e.g.~ appearance, details, texture, are disentangled. \vspace{-0.1cm}}
\label{fig:fig1}
\end{figure}
\section{Related work}
\label{sec:sota}
We review here related multi-step video prediction approaches dedicated to long-term forecasting. We also focus on unsupervised training, i.e.~ only using input video data and without manual supervision based on semantic labels.
Deep neural networks have recently achieved state-of-the-art performances for data-driven video prediction. Seminal works include the application of sequence-to-sequence LSTMs or convolutional variants~\cite{srivastava2015unsupervised,xingjian2015convolutional}, adopted in many studies \cite{finn2016unsupervised,lu2017flexible,xu2018structure}. Further works explore different architectural designs based on Recurrent Neural Networks (RNNs) \cite{wang2017predrnn,wang2018predrnn++,oliu2018folded,wang2019memory,wang2018eidetic} and 2D/3D ConvNets \cite{mathieu2015deep,vondrick2016generating,reda2018sdc,byeon2018contextvp}. Dedicated loss functions \cite{cuturi2017soft,leguen19} and Generative Adversarial Networks (GANs) have been investigated for sharper predictions \cite{mathieu2015deep,vondrick2016generating,kwon2019predicting}. However, the problem of conditioning GANs with prior information, such as physical models, remains an open question.
To constrain the challenging generation of high-dimensional images at the pixel level, several methods rather use domain-specific knowledge, such as predicting geometric transformations between frames \cite{finn2016unsupervised,jia2016dynamic,xue2016visual}, estimating the optical flow \cite{patraucean2015spatio,luo2017unsupervised,liu2017video,liang2017dual,li2018flow} or exploiting the semantics of the scene \cite{bei2021learning}. This is very effective for short-term prediction, but performance degrades quickly as the video content evolves, since more complex models and a memory of the dynamics are then required.
Another line of work consists in disentangling independent factors of variation in order to apply the prediction model on lower-dimensional representations. A few approaches explicitly model interactions between objects inferred from an observed scene \cite{eslami2016attend,kosiorek2018sequential,ye2019compositional}. Relational reasoning, often implemented with graphs \cite{battaglia2016interaction,kipf2018neural,sanchez2018graph,palm2018recurrent,van2018relational}, can account for basic physical laws, e.g.~ drift, gravity, spring \cite{watters2017visual,wu2017learning,mrowca2018flexible}. However, these methods are object-centric, are only evaluated in controlled settings, and are not suited to general real-world video forecasting.
Other disentangling approaches factorize the video into independent components \cite{villegas2017decomposing,tulyakov2018mocogan,denton2017unsupervised,hsieh2018learning,gao2019disentangling}. Several disentanglement criteria are used, such as content/motion \cite{villegas2017decomposing,lee2021video} or deterministic/stochastic \cite{denton2017unsupervised}. In specific contexts, the prediction space can be structured using additional information, e.g.~ with human pose \cite{villegas2017learning,walker2017pose} or key points \cite{minderer2019unsupervised}, which imposes a severe overhead on the annotation budget. In this work, we share with these works the motivation to use disentangled representations, but we disentangle incomplete physical dynamics from residual information required for prediction.
\paragraph{Deep Kalman filters}
To handle unobserved phenomena, state space models, in particular the Kalman filter \cite{kalman1960new}, have recently been integrated with deep learning, by modelling dynamics in a learned latent space \cite{Krishnan2015DeepKF,watter2015embed,haarnoja2016backprop,fraccaro2017disentangled,becker2019recurrent}. The Kalman variational autoencoder \cite{fraccaro2017disentangled} separates state estimation in videos from the dynamics with a linear Gaussian state space model. The Recurrent Kalman Network \cite{becker2019recurrent} uses a factorized high-dimensional latent space in which the linear Kalman updates are simplified and do not require computationally heavy covariance matrix inversions. These methods, inspired by the data assimilation community \cite{asch2016data,bocquet2019data}, have advantages in missing data or long-term forecasting contexts thanks to their mechanisms decoupling latent dynamics and input assimilation. However, they assume simple (linear) latent dynamics and do not include any physical prior.
\begin{figure}[t]
\centering
\begin{tabular}{cc}
\hspace{-2cm}
\includegraphics[width=6cm]{images/phydnet_fig2a.png} & \includegraphics[width=11cm]{images/phydnet_fig2b.png} \\
\hspace{-0.5cm}\textbf{(a) PhyDNet disentangling cell} & \textbf{(b) Global Seq2Seq architecture} \vspace{0.2cm}
\end{tabular}
\caption[Proposed PhyDNet deep model for video forecasting.]{\textbf{Proposed PhyDNet deep model for video forecasting.} (a) The core of PhyDNet is a recurrent block projecting input images $\mathbf{u_t}$ into a latent space $\bm{\mathcal{H}}$, where two recurrent neural networks disentangle physical dynamics (PhyCell, section \ref{section:phycell}) from residual information (ConvLSTM). Learned physical $\mathbf{h}^{\mathbf{p}}_{t+1}$ and residual $\mathbf{h}^{\mathbf{r}}_{t+1}$ representations are summed before decoding to predict the future image $\hat{\mathbf{u}}_{t+1}$. (b) Unfolded in time, PhyDNet forms a sequence to sequence (seq2seq) architecture suited for multi-step video prediction. Dotted arrows mean that predictions are reinjected as next input only for the ConvLSTM branch, and not for PhyCell, as explained in section \ref{sec:training}.}
\label{fig:fig2}
\end{figure}
\section{PhyDNet model for video forecasting}
\label{section3}
We introduce PhyDNet, a model dedicated to video prediction, which leverages physical knowledge on dynamics and disentangles it from other unknown factors of variation necessary for accurate forecasting. To achieve this goal, we introduce a disentangling architecture (section~\ref{sec:3.1}) and a new physically-constrained recurrent cell (section~\ref{section:phycell}).
\paragraph*{Problem statement:} As discussed in the introduction, physical laws do not apply at the pixel level for general video prediction tasks. However, we assume that there exists a conceptual latent space $\bm{\mathcal{H}}$ in which physical dynamics and residual factors are linearly disentangled. Formally, let us denote by $\mathbf{u}= \mathbf{u}(t,\mathbf{x})$ the frame of a video sequence at time $t$, for spatial coordinates $\mathbf{x}=(x,y)$. $\mathbf{h}(t,\mathbf{x}) \in \bm{\mathcal{H}}$ is the latent representation of the video up to time $t$, which decomposes as $\mathbf{h}=\mathbf{h^p}+\mathbf{h^r}$, where $\mathbf{h^p}$ (resp. $\mathbf{h^r}$) represents the physical (resp. residual) component of the disentanglement. The video evolution in the latent space $\bm{\mathcal{H}}$ is thus governed by the following partial differential equation (PDE):
\begin{equation}
\!\!\!\dfrac{\partial \mathbf{h}(t,\mathbf{x})}{\partial t} \! = \!\frac{\partial \mathbf{h^p}}{\partial t} \!+\! \frac{\partial \mathbf{\mathbf{h^r}}}{\partial t} \!:=\! \bm{\mathcal{M}}_{p}(\mathbf{h^p},\mathbf{u}) + \bm{\mathcal{M}}_{r}(\mathbf{\mathbf{h^r}},\mathbf{u}). \!\!\!
\label{eq:eq1}
\end{equation}
$\bm{\mathcal{M}}_p(\mathbf{h^p},\mathbf{u})$ and $\bm{\mathcal{M}}_r(\mathbf{h^r},\mathbf{u})$ represent the physical and residual dynamics in the latent space $\bm{\mathcal{H}}$.
\subsection{PhyDNet disentangling architecture}
\label{sec:3.1}
The main goal of PhyDNet is to learn the mapping from input sequences to a latent space which approximates the disentangling properties formalized in Eq \ref{eq:eq1}.
To reach this objective, we introduce a recurrent block, shown in Figure \ref{fig:fig2} (a). A video frame $\mathbf{u}_t$ at time $t$ is mapped by a deep convolutional encoder $\mathbf{E}$ into a latent space representing the targeted space $\bm{\mathcal{H}}$. $\mathbf{E}(\mathbf{u}_t)$ is then used as input for two parallel recurrent neural networks, incorporating this spatial representation into a dynamical model.
The left branch in Figure \ref{fig:fig2} (a) models the latent representation $\mathbf{h^p}$ fulfilling the physical part of the PDE in Eq (\ref{eq:eq1}), i.e.~ $\frac{\partial \mathbf{h^p}(t,\mathbf{x})}{\partial t} = \bm{\mathcal{M}}_{p}(\mathbf{h^p},\mathbf{u})$. This PDE is modeled by our recurrent physical cell described in section \ref{section:phycell}, PhyCell, which leads to the computation of $\mathbf{h}^{\mathbf{p}}_{t+1}$ from $\mathbf{E}(\mathbf{u}_t)$ and $\mathbf{h}_t^{\mathbf{p}}$. From the machine learning perspective, PhyCell leverages physical constraints to limit the number of model parameters, regularizes training and improves generalization.
The right branch in Figure \ref{fig:fig2} (a) models the latent representation $\mathbf{h^r}$ fulfilling the residual part of the PDE in Eq \ref{eq:eq1}, i.e.~ $\frac{\partial \mathbf{h^r}(t,\mathbf{x})}{\partial t} = \bm{\mathcal{M}}_{r}(\mathbf{h^r},\mathbf{u})$. Inspired by wavelet decomposition \cite{mallat1999wavelet} and recent semi-supervised works \cite{robert2018hybridnet}, this part of the PDE corresponds to unknown phenomena, which do not correspond to any prior model, and is therefore entirely learned from data. We use a generic recurrent neural network for this task, e.g.~ ConvLSTM \cite{xingjian2015convolutional} for videos, which computes $\mathbf{h}_{t+1}^{\mathbf{r}}$ from $\mathbf{E}(\mathbf{u}_t)$ and $\mathbf{h}_{t}^{\mathbf{r}}$.
$\mathbf{h}_{t+1}=\mathbf{h}_{t+1}^{\mathbf{p}} +\mathbf{h}_{t+1}^{\mathbf{r}}$ is the combined representation processed by a deep decoder $\mathbf{D}$ to forecast the image $\mathbf{\hat{u}}_{t+1}$.
Figure~\ref{fig:fig2} (b) shows the ``unfolded'' PhyDNet. An input video $\mathbf{u}_{1:T} = (\mathbf{u}_1,...,\mathbf{u}_T) \in \mathbb{R}^{T\times n \times m \times c}$, with spatial size $n \times m$ and $c$ channels, is projected into $\bm{\mathcal{H}}$ by the encoder $\mathbf{E}$ and processed by the recurrent block unfolded in time. This forms a sequence-to-sequence architecture~\cite{sutskever2014sequence} suited for multi-step prediction, outputting $H$ future frame predictions $\mathbf{\hat{u}}_{T+1:T+H}$. Encoder, decoder and recurrent block parameters are all trained end-to-end, meaning that PhyDNet itself learns, without supervision, the latent space $\bm{\mathcal{H}}$ in which physical and residual factors are disentangled.
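The unfolded computation can be sketched schematically as follows; the random linear maps are toy stand-ins for the convolutional encoder $\mathbf{E}$, decoder $\mathbf{D}$ and the two recurrent branches, not the actual PhyDNet layers:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # toy latent dimension standing in for the feature maps of H

E = rng.normal(scale=0.1, size=(d, d))    # encoder stand-in
D = rng.normal(scale=0.1, size=(d, d))    # decoder stand-in
A_p = rng.normal(scale=0.1, size=(d, d))  # PhyCell branch stand-in
A_r = rng.normal(scale=0.1, size=(d, d))  # ConvLSTM branch stand-in

def phydnet_step(u_t, h_p, h_r):
    """One recurrent block: update both branches from the encoded
    frame, sum the two representations, decode the next frame."""
    e = E @ u_t
    h_p = np.tanh(A_p @ h_p + e)   # physical branch
    h_r = np.tanh(A_r @ h_r + e)   # residual branch
    return D @ (h_p + h_r), h_p, h_r

T, H = 5, 3
frames = [rng.normal(size=d) for _ in range(T)]
h_p = h_r = np.zeros(d)
for u_t in frames:                 # assimilate the T observed frames
    u_hat, h_p, h_r = phydnet_step(u_t, h_p, h_r)
preds = []
for _ in range(H):                 # closed-loop multi-step prediction
    # (in the actual model, predictions are reinjected only into the
    # residual branch, not into PhyCell)
    u_hat, h_p, h_r = phydnet_step(u_hat, h_p, h_r)
    preds.append(u_hat)
```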
\subsection{PhyCell: a deep recurrent physical model}
\label{section:phycell}
PhyCell is a new physical cell, whose dynamics is governed by the PDE response function $\bm{\mathcal{M}}_p(\mathbf{h^p},\mathbf{u})$\footnote{In the sequel, we drop the index $\mathbf{p}$ in $\mathbf{h^p}$ for the sake of simplicity}:
\begin{equation}
\bm{\mathcal{M}}_p(\mathbf{h},\mathbf{u}) := \Phi(\mathbf{h})+ \mathcal{C}(\mathbf{h},\mathbf{u}) ,
\label{eq:Mp}
\end{equation}
where $\Phi(\mathbf{h})$ is a physical predictor modelling only the latent dynamics and $\mathcal{C}(\mathbf{h},\mathbf{u})$ is a correction term modelling the interactions between the latent state and the input data.
\paragraph*{Physical predictor:} $\Phi(\mathbf{h})$ in Eq~(\ref{eq:Mp}) is modeled as follows:
\begin{equation}
\Phi(\mathbf{h}(t,\mathbf{x})) = \sum_{i,j: i+j \leq q} c_{i,j} \dfrac{\partial^{i+j} \mathbf{h}}{\partial x^i \partial y^j}(t,\mathbf{x}).
\label{eq:phi}
\end{equation}
$\Phi(\mathbf{h}(t,\mathbf{x}))$ in Eq \ref{eq:phi} combines the spatial derivatives with coefficients $c_{i,j}$, up to a given differential order $q$. This generic class of linear PDEs subsumes a wide range of classical physical models, e.g.~ the heat equation, the wave equation, or the advection-diffusion equation.
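For instance, keeping only the second-order terms $c_{2,0}=c_{0,2}=\alpha$ yields the heat equation $\partial \mathbf{h}/\partial t = \alpha \Delta \mathbf{h}$; the finite-difference sketch below (toy values, periodic boundaries) illustrates how such derivative terms are computed on a grid:

```python
import numpy as np

def laplacian(h):
    """5-point finite-difference Laplacian d2/dx2 + d2/dy2, i.e. the
    q = 2 predictor with c_{2,0} = c_{0,2} = 1 (periodic boundaries)."""
    return (np.roll(h, 1, 0) + np.roll(h, -1, 0)
            + np.roll(h, 1, 1) + np.roll(h, -1, 1) - 4.0 * h)

def heat_step(h, alpha=0.1):
    """One explicit Euler step of dh/dt = alpha * Laplacian(h)."""
    return h + alpha * laplacian(h)

# A hot spot diffuses: the peak decreases, its neighbors warm up.
h = np.zeros((9, 9))
h[4, 4] = 1.0
h1 = heat_step(h)
```

In PhyCell, such stencils are not fixed in advance but learned as convolution kernels constrained to approximate the desired differential operators.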
\paragraph*{Correction:} $\mathcal{C}(\mathbf{h},\mathbf{u})$ in Eq \ref{eq:Mp} takes the following form:
\begin{equation}
\!\!\!\!\!\!\mathcal{C}(\mathbf{h},\mathbf{u}) \!:=\! \mathbf{K}(t,\mathbf{x})\odot \left[\mathbf{E} (\mathbf{u}(t,\mathbf{x})) \!-\! \left(\mathbf{h}(t,\mathbf{x}) \!+\! \Phi(\mathbf{h}(t,\mathbf{x}))\right)\right].
\label{eq:corrcont}
\end{equation}
Eq \ref{eq:corrcont} computes the difference between the latent state after physical motion $\mathbf{h}(t,\mathbf{x}) + \Phi(\mathbf{h}(t,\mathbf{x}))$ and the embedded new observed input $\mathbf{E}(\mathbf{u}(t,\mathbf{x}))$. $\mathbf{K}(t,\mathbf{x})$ is a gating factor and $\odot$ denotes the Hadamard product.
\subsubsection{Discrete PhyCell}
\label{sec:discretephicell}
\begin{figure}
\centering
\includegraphics[width=12cm]{images/phydnet_fig3_1.png}
\caption[Description of the PhyCell predictor.]{PhyCell recurrent cell implements a two-step scheme: physical prediction with convolutions approximating and combining spatial derivatives (Eq \ref{eq:prediction} and Eq \ref{eq:phi}), and input assimilation as a correction of the latent physical dynamics driven by observed data (Eq \ref{eq:correction}). During training, the filter moment loss in red (Eq \ref{eq:lmoment}) constrains the convolutional filters to approximate the desired differential operators.}
\label{fig:phicell}
\end{figure}
We discretize the continuous time PDE in Eq \ref{eq:Mp} with the standard forward Euler numerical scheme \cite{lu2018beyond}, leading to the discrete time PhyCell (derivation in Appendix \ref{app:phycell-deriv}):
\begin{equation}
\mathbf{h}_{t+1} = (1-\mathbf{K}_t) \odot \left(\mathbf{h}_t + \Phi(\mathbf{h}_t) \right) + \mathbf{K}_t \odot \mathbf{E}(\mathbf{u}_t).
\label{eq:physical_cell}
\end{equation}
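The derivation is short enough to sketch here (assuming a unit time step $\Delta t = 1$ absorbed into $\Phi$ and $\mathbf{K}_t$): applying forward Euler to $\frac{\partial \mathbf{h}}{\partial t} = \bm{\mathcal{M}}_p(\mathbf{h},\mathbf{u})$ and expanding $\bm{\mathcal{M}}_p$ with Eq (\ref{eq:Mp}) and Eq (\ref{eq:corrcont}) gives
\begin{align*}
\mathbf{h}_{t+1} &= \mathbf{h}_t + \Phi(\mathbf{h}_t) + \mathbf{K}_t \odot \left[ \mathbf{E}(\mathbf{u}_t) - \left( \mathbf{h}_t + \Phi(\mathbf{h}_t) \right) \right] \\
&= (1-\mathbf{K}_t) \odot \left( \mathbf{h}_t + \Phi(\mathbf{h}_t) \right) + \mathbf{K}_t \odot \mathbf{E}(\mathbf{u}_t).
\end{align*}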
Depicted in Figure \ref{fig:phicell}, PhyCell is an atomic recurrent cell for building physically-constrained RNNs. In our experiments, we use one layer of PhyCell, but one can also easily stack several PhyCell layers to build more complex models, as done for stacked RNNs \cite{wang2017predrnn,wang2018predrnn++,wang2019memory}. To gain insight into PhyCell in Eq~(\ref{eq:physical_cell}), we write the equivalent two-step form:
\begin{empheq}[left=\empheqlbrace]{alignat=2}
& \tilde{\mathbf{h}}_{t+1} \!= \mathbf{h}_{t} + \Phi(\mathbf{h}_{t}) & \!\!\!\quad \text{\small{\textbf{Prediction}\!}} \label{eq:prediction}\\
& \mathbf{h}_{t+1} \!= \tilde{\mathbf{h}}_{t+1} + \mathbf{K}_t \odot \left( \mathbf{E}(\mathbf{u}_t) - \tilde{\mathbf{h}}_{t+1} \right). & \!\!\! \quad \text{\small{\textbf{Correction}\!}} \label{eq:correction}
\end{empheq}
The prediction step in Eq \ref{eq:prediction} is a physically-constrained motion in the latent space, computing the intermediate representation $\tilde{\mathbf{h}}_{t+1}$. Eq \ref{eq:correction} is a correction step incorporating input data. This prediction-correction formulation is reminiscent of how numerical models and observed data are combined in the data assimilation community \cite{asch2016data,bocquet2019data}, e.g.~ with the Kalman filter \cite{kalman1960new}. We show in section \ref{sec:training} that this decoupling between prediction and correction can be leveraged to robustly train our model in long-term forecasting and missing data contexts. $\mathbf{K}_t$ can be interpreted as the Kalman gain controlling the trade-off between both steps.
\subsubsection{PhyCell implementation}
We now specify how the physical predictor $\Phi$ in Eq \ref{eq:prediction} and the correction Kalman gain $\mathbf{K}_t$ in Eq \ref{eq:correction} are implemented.
\paragraph*{Physical predictor:} We implement $\Phi$ using a convolutional neural network (left gray box in Figure \ref{fig:phicell}), based on the connection between convolutions and differentiations \cite{dong2017image,long2018pde}.
This offers the possibility to learn a class of filters approximating each partial derivative in Eq \ref{eq:phi}, which are constrained by a kernel moment loss, as detailed in section \ref{sec:training}. As noted by~\cite{long2018pde}, the flexibility added by this constrained learning strategy gives better results for solving PDEs than handcrafted derivative filters.
Finally, we use $1 \times 1$ convolutions to linearly combine these derivatives with $c_{i,j}$ coefficients in Eq \ref{eq:phi}.
\paragraph*{Kalman gain:}
We approximate $\mathbf{K}_t$ in Eq \ref{eq:correction} by a gate with learned convolution kernels $\mathbf{W}_h$, $\mathbf{W}_u$ and bias $\mathbf{b}_k$:
\begin{equation}
\mathbf{K}_t = \tanh \left( \mathbf{W}_{h} * \tilde{\mathbf{h}}_{t+1} + \mathbf{W}_{u} * \mathbf{E}(\mathbf{u}_t) + \mathbf{b}_k \right).
\label{eq:kalman_gain}
\end{equation}
Note that if $\mathbf{K}_t = \mathbf{0}$, the input is not taken into account and the dynamics follows the physical predictor; if $\mathbf{K}_t = \mathbf{1}$, the latent dynamics is reset and driven only by the input. This is similar to the gating mechanisms of LSTMs or GRUs.
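To make the two-step scheme concrete, the following is a minimal NumPy sketch of one PhyCell update. It is illustrative only, not the actual implementation: the names are ours, $\Phi$ uses fixed central finite differences with two advection coefficients instead of the learned constrained filters, and the gain is a constant scalar rather than the convolutional gate of Eq (\ref{eq:kalman_gain}).

```python
import numpy as np

def spatial_derivatives(h):
    """Central finite-difference approximations of dh/dx and dh/dy
    (periodic boundaries via np.roll), standing in for the learned
    derivative filters of PhyCell."""
    dh_dx = (np.roll(h, -1, axis=0) - np.roll(h, 1, axis=0)) / 2.0
    dh_dy = (np.roll(h, -1, axis=1) - np.roll(h, 1, axis=1)) / 2.0
    return dh_dx, dh_dy

def phycell_step(h, e_u, c10=-0.5, c01=-0.5, K=0.5):
    """One PhyCell update (prediction then correction).

    h    : latent state, shape (H, W)
    e_u  : embedded input E(u_t), same shape (None = missing observation)
    c10, c01 : PDE coefficients combining the derivatives (advection here)
    K    : scalar stand-in for the Kalman-like gain
    """
    dh_dx, dh_dy = spatial_derivatives(h)
    phi = c10 * dh_dx + c01 * dh_dy          # physical predictor Phi(h)
    h_tilde = h + phi                        # Prediction step
    if e_u is None:                          # prediction-only mode: K_t = 0
        return h_tilde
    return h_tilde + K * (e_u - h_tilde)     # Correction step

h = np.zeros((8, 8)); h[2, 2] = 1.0
h_next = phycell_step(h, e_u=h)              # observed input equals state
```

With $K=0$ the update reduces to the pure physical prediction, and with $K=1$ the state is fully reset to the embedded input, matching the two limit behaviours discussed above.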
\paragraph*{Discussion:} With specific choices of the predictor $\Phi$,
gain $\mathbf{K}_t$ and encoder $\mathbf{E}$, PhyCell recovers recent models from the literature:
\begin{table}[H]
\centering
\begin{adjustbox}{max width=\columnwidth}
\begin{tabular}{c|ccc}
model & $\Phi$ & $\mathbf{K}_t$ & $\mathbf{E}$ \\ \hline
PDE-Net \cite{long2019pde} & Eq \ref{eq:prediction} & $\mathbf{0}$ & $\mathbf{Id}$ \\ \hline
Advection-diffusion & advection-diffusion & $\mathbf{0}$ & $\mathbf{Id}$ \\
flow~\cite{de2017deep} & predictor & & \\ \hline
Recurrent Kalman Filter \cite{becker2019recurrent} & locally linear, no & approximate & deep encoder \\
~ & physical constraint & Kalman gain & \\ \hline
PhyDNet (ours) & Eq \ref{eq:prediction} & Eq \ref{eq:kalman_gain} & deep encoder
\end{tabular}
\vspace{-0.2cm}
\label{tab:my_label}
\end{adjustbox}
\end{table}
PDE-Net~\cite{long2018pde} directly works on raw pixel data (identity encoder $\mathbf{E}$) and assumes Markovian dynamics (no correction, $\mathbf{K}_t\!\!\!=\!\!\!\mathbf{0}$): the model solves the autonomous PDE $\frac{\partial \mathbf{u}}{\partial t}=\Phi(\mathbf{u})$ given in Eq \ref{eq:prediction}, but in pixel space. This prevents the model from handling time-varying PDEs such as those tackled in this work, e.g.~ varying advection terms.
The flow model in \cite{de2017deep} uses the closed-form solution of the advection-diffusion equation as predictor; it is however limited to this single PDE, whereas PhyDNet models a much broader class of PDEs. The Recurrent Kalman Filter (RKF) \cite{becker2019recurrent} also proposes a prediction-correction scheme in a deep latent space, but their approach does not include any prior physical information, and the prediction step is locally linear, whereas we use deep models. An approximate form of the covariance matrix is used for estimating $\mathbf{K}_t$ in \cite{becker2019recurrent}, which we find experimentally inferior to our gating mechanism in Eq \ref{eq:kalman_gain}.
\subsection{Training}
\label{sec:training}
Given a training set of $N$ videos $\bm{\mathcal{D}} = \left \{ \mathbf{u}^{(i)} \right \} _{i=\{1:N \}}$ and PhyDNet parameters $\mathbf{w}= (\mathbf{w_p},\mathbf{w_r},\mathbf{w_s})$, where $\mathbf{w_p}$ (resp. $\mathbf{w_r}$) are parameters of the PhyCell (resp. residual) branch, and $\mathbf{w_s}$ are encoder and decoder shared parameters, we minimize the following objective function:
\begin{equation}
\mathcal{L}(\bm{\mathcal{D}},\mathbf{w}) = \mathcal{L}_{\text{image}}(\bm{\mathcal{D}},\mathbf{w}) + \lambda \mathcal{L}_{\text{moment}}(\mathbf{w_p}).
\end{equation}
We use the $L^2$ loss for the image reconstruction loss $\mathcal{L}_{\text{image}}$, as commonly done in the literature \cite{wang2017predrnn,wang2018predrnn++,oliu2018folded,wang2018eidetic,wang2019memory}.
$\mathcal{L}_{\text{moment}}(\mathbf{w_p})$ imposes physical constraints on the $k^2$ learned filters $ \left\{ \mathbf{w}^k_{p,i,j}\right\}_{i,j \leq k}$, such that each $\mathbf{w}^k_{p,i,j}$ of size $k \times k$ approximates $\frac{\partial^{i+j}}{\partial x^i \partial y^j}$. This is achieved by using a loss based on the moment matrix $\mathbf{M}(\mathbf{w}^k_{p,i,j})$~\cite{long2019pde}, representing the order of the filter differentiation~\cite{dong2017image}. $\mathbf{M}(\mathbf{w}^k_{p,i,j})$ is compared to a target moment matrix $\mathbf{\Delta}^k_{i,j}$ (see $\mathbf{M}$ and $\mathbf{\Delta}$ computations in Appendix \ref{app:moment-matrix}), leading to:
\begin{equation}
\mathcal{L}_{\text{moment}} = \sum\limits_{i \leq k} \sum\limits_{j \leq k} ||\mathbf{M}(\mathbf{w}^k_{p,i,j}) - \mathbf{\Delta}^k_{i,j} ||_F .
\label{eq:lmoment}
\end{equation}
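The moment matrix can be sketched in a few lines of NumPy, following the definition used by PDE-Net \cite{long2019pde} (entry $(i,j)$ is the sum of $u^i v^j$ over centred filter coordinates, normalized by $i!\,j!$); the code below is our illustrative reconstruction, not the thesis implementation.

```python
import numpy as np
from math import factorial

def moment_matrix(w):
    """Moment matrix M(w) of a k x k filter w: entry (i, j) measures how
    much w acts like the differential operator d^{i+j} / (dx^i dy^j)
    (coordinates are centred offsets, PDE-Net convention)."""
    k = w.shape[0]
    offsets = np.arange(k) - (k - 1) // 2
    M = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            M[i, j] = (offsets[:, None] ** i * offsets[None, :] ** j * w).sum() \
                      / (factorial(i) * factorial(j))
    return M

def moment_loss(w, delta):
    """Frobenius distance between M(w) and the target moment matrix
    (one term of the moment loss, for a single filter)."""
    return np.linalg.norm(moment_matrix(w) - delta)

# Central-difference filter for d/dx (x = first axis, rows are x = -1, 0, +1):
w_dx = np.zeros((3, 3))
w_dx[0, 1], w_dx[2, 1] = -0.5, 0.5

delta = np.zeros((3, 3)); delta[1, 0] = 1.0   # target: pure d/dx
```

For this filter the moment matrix equals the target exactly, so the loss vanishes; a randomly initialized filter would instead be pulled toward the desired differential operator during training.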
\paragraph*{Prediction mode:} An appealing feature of PhyCell is that we can use and train the model in a "prediction-only" mode by setting $\mathbf{K}_t = \mathbf{0}$ in Eq \ref{eq:correction}, i.e.~ by only relying on the physical predictor $\Phi$ in Eq \ref{eq:prediction}. It is worth mentioning that the "prediction-only" mode is not applicable to standard Seq2Seq RNNs: although the decomposition in Eq \ref{eq:Mp} still holds, i.e.~ $\bm{\mathcal{M}}_r(\mathbf{h},\mathbf{u}) = \Phi(\mathbf{h})+ \mathcal{C}(\mathbf{h},\mathbf{u})$, the resulting predictor is naive and useless for multi-step prediction ($\mathbf{\tilde{h}}_{t+1}=\mathbf{0}$, see Appendix \ref{sec:pdernn}).
Therefore, standard RNNs are not equipped to deal with unreliable input data $\mathbf{u}_t$. We show in section~\ref{sec:expe_prediction} that the gain of PhyDNet over those models increases in two important contexts with unreliable inputs: multi-step prediction and dealing with missing data.
\section{Experiments}
\label{section4}
\subsection{Experimental setup}
We evaluate PhyDNet on four datasets from various origins.
\paragraph{Moving MNIST} is a standard benchmark in video prediction \cite{srivastava2015unsupervised} consisting of two random MNIST digits bouncing on the walls of a $64 \times 64$ grid. We predict 10 future frames given 10 input frames. Training sequences are generated on the fly and the test set of 10,000 sequences is provided by \cite{srivastava2015unsupervised}.
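To make the on-the-fly generation concrete, here is a minimal NumPy sketch of the bouncing motion model (a point reflecting off the grid borders at constant speed). Names and the velocity range are ours; the actual generator additionally pastes $28 \times 28$ MNIST digit patches at these positions.

```python
import numpy as np

def bouncing_trajectory(steps, size=64, patch=28, seed=0):
    """Top-left corner positions of a patch x patch digit moving with
    constant velocity and bouncing off the walls of a size x size grid
    (minimal re-implementation of the Moving MNIST motion model)."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0, size - patch, 2)    # initial position (x, y)
    vel = rng.uniform(-3, 3, 2)              # constant velocity
    out = []
    for _ in range(steps):
        pos = pos + vel
        for d in range(2):                   # reflect at the borders
            if pos[d] < 0:
                pos[d], vel[d] = -pos[d], -vel[d]
            elif pos[d] > size - patch:
                pos[d], vel[d] = 2 * (size - patch) - pos[d], -vel[d]
        out.append(pos.copy())
    return np.array(out)

traj = bouncing_trajectory(20)               # 20 positions, all inside the grid
```

The piecewise-constant velocity with reflections is exactly the advective behaviour that the physical-filter analysis of section \ref{sec:expe_prediction} later recovers.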
\paragraph{Traffic BJ} consists of traffic flow data collected by taxicabs in Beijing \cite{zhang2017deep}. Each $32 \times 32$ image is a 2-channel heat map of entering and leaving traffic flows. Video prediction on such complex real-world data requires modelling transport phenomena and traffic diffusion. Following the setting of \cite{zhang2017deep,wang2019memory,wang2018eidetic}, we predict 4 future frames given 4 input frames.
\paragraph{SST} consists of daily Sea Surface Temperature (SST) data generated by the sophisticated simulation engine NEMO (Nucleus for European Modelling of the Ocean), as in \cite{de2017deep}. SST evolution is governed by the physical laws of fluid dynamics. We predict 4 frames of size $64 \times 64$ given 4 input frames.
\paragraph{Human 3.6} contains 3.6 million images of human actions \cite{ionescu2013human3}, with complex 3D articulated motions. Following the setting of \cite{wang2019memory}, we use only the "walking" scenario with subjects S1, S5, S6, S7, S8 for training, and S9, S11 for testing. We predict 4 future images of size $128 \times 128 \times 3$ given 4 input images. \vspace{0.1cm}\\
\paragraph*{Network architectures and training:}
PhyDNet shares a common backbone architecture for all datasets, where the physical branch contains 49 PhyCell filters (with kernels of size $7 \times 7$) and the residual branch is a 3-layer ConvLSTM with 128 filters in each layer. We set the trade-off parameter between $\mathcal{L}_{\text{image}}$ and $\mathcal{L}_{\text{moment}}$ to $\lambda=1$. Detailed architectures and the impact of $\lambda$ are given in Appendix \ref{app:phydnet-impl}. Our code is available at \url{https://github.com/vincent-leguen/PhyDNet}.
\paragraph*{Evaluation metrics:} We follow the evaluation metrics commonly used in state-of-the-art video prediction methods: the Mean Squared Error (MSE), the Mean Absolute Error (MAE) and the Structural Similarity (SSIM) \cite{wang2004image}, which measures the perceived image quality with respect to a reference. Metrics are averaged over the frames of the output sequence. Lower MSE, MAE and higher SSIM indicate better performance.
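The per-frame averaging of MSE and MAE can be sketched as below (a minimal NumPy version with hypothetical names; SSIM is omitted since it is usually computed with a library such as scikit-image, and exact normalisation conventions vary across papers, hence the per-dataset scaling factors in the result tables).

```python
import numpy as np

def frame_metrics(pred, target):
    """MSE and MAE between predicted and target sequences of shape
    (T, H, W) or (T, H, W, C): per-pixel errors averaged within each
    frame, then averaged over the T output frames."""
    diff = pred.astype(np.float64) - target.astype(np.float64)
    per_frame_mse = (diff ** 2).reshape(diff.shape[0], -1).mean(axis=1)
    per_frame_mae = np.abs(diff).reshape(diff.shape[0], -1).mean(axis=1)
    return per_frame_mse.mean(), per_frame_mae.mean()

pred = np.zeros((10, 64, 64))
target = np.ones((10, 64, 64))
mse, mae = frame_metrics(pred, target)   # both equal 1.0 for this toy pair
```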
\subsection{State of the art comparison}
We evaluate PhyDNet against strong recent baselines, including very competitive data-driven RNN architectures: ConvLSTM \cite{xingjian2015convolutional}, PredRNN \cite{wang2017predrnn}, Causal LSTM \cite{wang2018predrnn++} and Memory in Memory (MIM) \cite{wang2019memory}. We also compare to methods dedicated to specific datasets: DDPAE \cite{hsieh2018learning}, a disentangling method that is state-of-the-art on Moving MNIST; and the physically-constrained advection-diffusion flow model \cite{de2017deep}, which is state-of-the-art for the SST dataset.
\begin{table}[b!]
\caption[Quantitative forecasting results of the PhyDNet model.]{Quantitative forecasting results of PhyDNet compared to baselines using various datasets. Numbers are copied from original or citing papers. * corresponds to results obtained by running online code from the authors. The first five baselines are general deep models applicable to all datasets, whereas DDPAE \cite{hsieh2018learning} (resp. advection-diffusion flow \cite{de2017deep}) are specific state-of-the-art models for Moving MNIST (resp. SST). Metrics are scaled to be in a similar range across datasets to ease comparison.}
\begin{adjustbox}{max width=\linewidth}
\begin{tabular}{l|lll|lll|lll|lll}
\toprule
\multicolumn{1}{c}{} & \multicolumn{3}{|c}{\textbf{Moving MNIST}} & \multicolumn{3}{|c}{\textbf{Traffic BJ}} & \multicolumn{3}{|c}{\textbf{Sea Surface Temperature}} & \multicolumn{3}{|c}{\textbf{Human 3.6}} \\
\midrule
Method & MSE & MAE & SSIM & MSE $\times 100$ & MAE & SSIM & MSE $\times 10$ & MAE & SSIM & MSE / 10 & MAE $/ 100$ & SSIM \\
\midrule
ConvLSTM \cite{xingjian2015convolutional} & 103.3 & 182.9 & 0.707 & $48.5^*$ & $17.7^*$ & $0.978^*$ & $45.6^*$ & $63.1^*$ & $0.949^*$ & $50.4^*$ & $18.9^*$ & $0.776^*$ \\
PredRNN \cite{wang2017predrnn} & 56.8 & 126.1 & 0.867 & 46.4 & $17.1^*$ & $0.971^*$ & 41.9 & 62.1 & 0.955 & 48.4 & 18.9 & 0.781 \\
Causal LSTM \cite{wang2018predrnn++} & 46.5 & 106.8 & 0.898 & 44.8 & $16.9^*$ & $0.977^*$ & $39.1^*$ & $62.3^*$ & $0.929^*$ & 45.8 & 17.2 & 0.851 \\
MIM \cite{wang2019memory} & 44.2 & 101.1 & 0.910 & 42.9 & $16.6^*$ & $0.971^*$ & $42.1^*$ & $60.8^*$ & $0.955^*$ & 42.9 & 17.8 & 0.790 \\
E3D-LSTM \cite{wang2018eidetic} & 41.3 & 86.4 & 0.920 & $43.2^*$ & $16.9^*$ & $0.979^*$ & $34.7^*$ & $59.1^*$ & $0.969^*$ & 46.4 & 16.6 & 0.869 \\ \hline
Advection-diffusion \cite{de2017deep} & - & - & -& - & - &- & $34.1^*$ & $54.1^*$ & $0.966^*$ & - & - &- \\
DDPAE \cite{hsieh2018learning} & 38.9 & $90.7^*$ & $0.922^*$ & - &- &- &- &- &- &- &- &- \\
\midrule
\textbf{PhyDNet} & \textbf{24.4} & \textbf{70.3} & \textbf{0.947} & \textbf{41.9} & \textbf{16.2} & \textbf{0.982} & \textbf{31.9} & \textbf{53.3} & \textbf{0.972} & \textbf{36.9} & \textbf{16.2} & \textbf{0.901} \\
\bottomrule
\end{tabular}
\end{adjustbox}
\label{tab:res1}
\end{table}
Overall results presented in Table \ref{tab:res1} reveal that PhyDNet significantly outperforms all baselines on all four datasets. The performance gain is large with respect to state-of-the-art general RNN models, with a gain of 17 MSE points for Moving MNIST, 6 MSE points for Human 3.6, 3 MSE points for SST and 1 MSE point for Traffic BJ. In addition, PhyDNet also outperforms specialized models: it gains 14 MSE points compared to the disentangling DDPAE model \cite{hsieh2018learning} specialized for Moving MNIST, and 2 MSE points compared to the advection-diffusion model \cite{de2017deep} dedicated to SST data. PhyDNet also presents large and consistent gains in SSIM, indicating that image quality is greatly improved by the physical regularization. Note that for Human 3.6, a few approaches use specific strategies dedicated to human motion with additional supervision, e.g.~ human pose in \cite{villegas2017learning}. We perform similarly to \cite{villegas2017learning} using only unsupervised training, as shown in Appendix \ref{app:compa-villegas}. This is, to the best of our knowledge, the first time that physically-constrained deep models reach state-of-the-art performance on generalist video prediction datasets.
In Figure \ref{fig:visus}, we provide qualitative prediction results for all datasets, showing that PhyDNet properly forecasts future images for the considered horizons: digits are sharply and accurately predicted for Moving MNIST in (a), the absolute traffic flow error is low and approximately spatially independent in (b), the evolving physical SST phenomena are well anticipated in (c) and the future positions of the person are accurately predicted in (d). We add in Figure \ref{fig:visus}(a) a qualitative comparison to DDPAE \cite{hsieh2018learning}, which fails to predict the future frames properly. Since the two digits overlap in the input sequence, DDPAE is unable to disentangle them. In contrast, PhyDNet successfully learns the physical dynamics of the two digits in a disentangled latent space, leading to a correct prediction. In Appendix \ref{app:phydnet-visu}, we detail this comparison to DDPAE and provide additional visualizations for all datasets.
\begin{figure*}[ht]
\centering
\includegraphics[width=\linewidth]{images/phydnet_global_visus.png}
\caption[Qualitative prediction results of PhyDNet.]{Qualitative results of the predicted frames by PhyDNet for all datasets. First line is the input sequence, second line the target and third line PhyDNet prediction. For Moving MNIST, we add a fourth line with the comparison to DDPAE \cite{hsieh2018learning} and for Traffic BJ the difference $|\text{Prediction-Target}|$ for better visualization.}
\label{fig:visus}
\end{figure*}
\subsection{Ablation Study}
\begin{table}
\caption[Ablation study of the PhyDNet model.]{An ablation study shows the consistent performance gain on all datasets of our physically-constrained PhyCell over the general-purpose ConvLSTM, and the additional gain brought by the disentangling architecture PhyDNet. * corresponds to results obtained by running online code from the authors.}
\begin{adjustbox}{max width=\linewidth}
\begin{tabular}{l|lll|lll|lll|lll}
\toprule
\multicolumn{1}{c}{} & \multicolumn{3}{|c|}{\textbf{Moving MNIST}} & \multicolumn{3}{|c|}{\textbf{Traffic BJ}} & \multicolumn{3}{|c|}{\textbf{Sea Surface Temperature}} & \multicolumn{3}{|c}{\textbf{Human 3.6}} \\
\midrule
Method & MSE & MAE & SSIM & MSE $\times$ 100 & MAE & SSIM & MSE $\times$ 10 & MAE & SSIM & MSE $/$ 10 & MAE $/$ 100 & SSIM \\
\midrule
ConvLSTM & 103.3 & 182.9 & 0.707 & $48.5^*$ & $17.7^*$ & $0.978^*$ & $45.6^*$ & $63.1^*$ & $0.949^*$ & $50.4^*$ & $18.9^*$ & $0.776^*$ \\
PhyCell & 50.8 & 129.3 & 0.870 & 48.9 & 17.9 & 0.978 & 38.2 & 60.2 & 0.969 & 42.5 & 18.3 & 0.891 \\
PhyDNet & \textbf{24.4} & \textbf{70.3} & \textbf{0.947} & \textbf{41.9} & \textbf{16.2} & \textbf{0.982} & \textbf{31.9} & \textbf{53.3} & \textbf{0.972} & \textbf{36.9} & \textbf{16.2} & \textbf{0.901} \\
\bottomrule
\end{tabular}
\end{adjustbox}
\label{tab:ablation}
\end{table}
We perform here an ablation study to analyse the respective contributions of physical modelling and disentanglement. Results are presented in Table \ref{tab:ablation} for all datasets. We see that a 1-layer PhyCell model (only the left branch of PhyDNet in Figure \ref{fig:fig2}(b)) outperforms a 3-layer ConvLSTM (50 MSE points gained for Moving MNIST, 8 MSE points for Human 3.6, 7 MSE points for SST and equivalent results for Traffic BJ), while PhyCell has much fewer parameters (270,000 \textit{vs.} 3 million parameters). This confirms that PhyCell is a very effective recurrent cell that successfully incorporates physical priors into deep models. When we further add our disentangling strategy with the two-branch architecture (PhyDNet), we obtain a further performance gain on all datasets (25 MSE points for Moving MNIST, 7 points for Traffic and SST, and 5 points for Human 3.6), which proves that physical modelling is not sufficient by itself for general video prediction and that learning the unknown factors is necessary.
To complement the discussion of Table \ref{tab:ablation}, we give in Table \ref{tab:nb-parameters} the approximate number of parameters of the trained models:
\begin{table}[H]
\centering
\caption{Number of parameters of models trained on Moving MNIST.}
\begin{tabular}{c|c}
\toprule
method & number of parameters \\
\midrule
ConvLSTM & $3 \times 10^6$ \\
PhyCell & $370 \times 10^3$ \\
PhyDNet & $3 \times 10^6$ \\
\bottomrule
\end{tabular}
\label{tab:nb-parameters}
\end{table}
We see that a 1-layer PhyCell with 49 filters has far fewer parameters than a 3-layer ConvLSTM (with 128 filters in each layer) and obtains far better results (gain of 50 MSE points). PhyDNet, with approximately the same number of parameters as ConvLSTM (3 million), again improves the performance by 25 MSE points, reaching a state-of-the-art MSE score of 24.4.
We qualitatively analyze in Figure~\ref{fig:ablation} partial predictions of PhyDNet for the physical branch $\hat{\mathbf{u}}^{\mathbf{p}}_{t+1} = \mathbf{D}(\mathbf{h}^{\mathbf{p}}_{t+1})$ and residual branch $\hat{\mathbf{u}}^{\mathbf{r}}_{t+1} = \mathbf{D}(\mathbf{h}^{\mathbf{r}}_{t+1})$. As noted in Figure \ref{fig:fig1} for Moving MNIST, $\mathbf{h^p}$ captures coarse localisations of objects, while $\mathbf{h^r}$ captures fine-grained details that are not useful for the physical model. Additional visualizations for the other datasets are provided in Appendix \ref{app:phydnet-visu}.
\begin{figure}
\centering
\includegraphics[width=11cm]{images/phydnet_ablation_mm1.png}
\caption[Qualitative ablation results on Moving MNIST.]{Qualitative ablation results on Moving MNIST: partial predictions show that PhyCell captures coarse localisation of digits, whereas the ConvLSTM branch models the fine shape details of digits. Every two frames are displayed.}
\label{fig:ablation}
\end{figure}
\paragraph{Influence of physical regularization}
We conduct in Table \ref{tab:ablation2} a finer ablation on Moving MNIST to study the impact of the physical regularization $\mathcal{L}_{\text{moment}}$ on the performance of PhyCell and PhyDNet. When we disable $\mathcal{L}_{\text{moment}}$ for training PhyCell, performance improves by 7 MSE points. This underlines that physical laws alone are too restrictive for learning dynamics in a general context, and that complementary factors should be accounted for.
On the other hand, when we disable $\mathcal{L}_{\text{moment}}$ for training our disentangled architecture PhyDNet, performance decreases by 5 MSE points ($29.0$ \textit{vs} $24.4$) compared to the physically-constrained version. This proves that physical constraints are relevant, but should be incorporated carefully in order to make both branches cooperate. This makes it possible to leverage the physical prior while keeping the remaining information necessary for pixel-level prediction. The same conclusions can be drawn for the other datasets; see Appendix \ref{app:phydnet-influence}.
\begin{table}[H]
\centering
\caption{Influence of physical regularization for Moving MNIST.}
\begin{adjustbox}{max width=\columnwidth}
\begin{tabular}{l|lll}
\toprule
Method & MSE & MAE & SSIM \\
\midrule
PhyCell & 50.8 & 129.3 & 0.870 \\
PhyCell without $\mathcal{L}_{\text{moment}}$ & 43.4 & 112.8 & 0.895 \\
PhyDNet & \textbf{24.4} & \textbf{70.3} & \textbf{0.947} \\
PhyDNet without $\mathcal{L}_{\text{moment}}$ & 29.0 & 81.2 & 0.934 \\
\bottomrule
\end{tabular}
\end{adjustbox}
\label{tab:ablation2}
\end{table}
\subsection{PhyCell analysis}
\label{sec:expe_prediction}
\paragraph*{Physical filter analysis}
With the same general backbone architecture, PhyDNet can express different PDE dynamics associated with the underlying phenomena by learning specific $c_{i,j}$ coefficients combining the partial derivatives in Eq (\ref{eq:phi}). In Figure \ref{fig:cij}, we display the mean amplitude of the learned coefficients $c_{i,j}$ with respect to the order of differentiation. For Moving MNIST, the $0^{th}$ and $1^{st}$ orders are largely dominant, indicating a purely advective behaviour consistent with the piecewise-constant translation dynamics of the dataset. For Traffic BJ and SST, there is also a global decrease in amplitude with respect to order; we nonetheless notice a few higher-order terms that appear useful for prediction.
For Human 3.6, where the nature of the prior motion is less obvious, the coefficients are more spread across derivative orders.
\begin{figure}[H]
\centering
\begin{tabular}{cccc}
\hspace{-0.5cm} \includegraphics[height=3.3cm]{images/phydnet_cij_mm.png} & \hspace{-0.5cm}
\includegraphics[height=3.3cm]{images/phydnet_cij_traffic.png} &
\includegraphics[height=3.3cm]{images/phydnet_cij_sst.png} &
\hspace{-0.5cm} \includegraphics[height=3.3cm]{images/phydnet_cij_human.png} \\
Moving MNIST & Traffic BJ & SST & Human 3.6
\end{tabular}
\caption{Mean amplitude of the combining coefficients $c_{i,j}$ with respect to the order of the differential operators approximated.}
\label{fig:cij}
\end{figure}
\paragraph*{Dealing with unreliable inputs}
\label{sec:lt-forecasting}
We explore here the robustness of PhyDNet when dealing with unreliable inputs, which can arise in two contexts: long-term forecasting and missing data. As explained in section~\ref{sec:training}, PhyDNet can be used in prediction-only mode in these contexts, limiting the use of unreliable inputs, whereas general RNNs cannot. To validate the relevance of the prediction mode, we compare PhyDNet to DDPAE \cite{hsieh2018learning}, which is based on a standard RNN (LSTM) as predictor module.
Figure \ref{fig:long-term} presents the results evaluated in MSE and SSIM obtained by PhyDNet and DDPAE on Moving MNIST.
For long-term forecasting, we evaluate the performance of both methods far beyond the prediction range seen during training (up to 80 frames), as shown in Figure \ref{fig:long-term}(a). We can see that the performance drop (MSE increase rate) is approximately linear for PhyDNet, whereas it is much more pronounced for DDPAE. For example, the MSE of PhyDNet for 80-step prediction is similar to that of DDPAE for 20-step prediction. This confirms that PhyDNet limits error accumulation during forecasting by using a powerful dynamical model.
Finally, we evaluate the robustness of PhyDNet and DDPAE to missing data, by varying the ratio of missing frames (from 10 to 50\%) in input sequences during training and testing.
A missing input image is replaced with a default (all-zero) image. In this case, PhyCell
relies only on its latent dynamics by setting $\mathbf{K}_t=\mathbf{0}$, whereas DDPAE takes the null image as input. Figure \ref{fig:long-term}(b) shows that the performance gap between PhyDNet and DDPAE increases with the percentage of missing data.
\begin{figure}[H]
\centering
\begin{tabular}{cc}
\hspace{-0.8cm} \includegraphics[width=6cm]{images/phydnet_lt_mse.png} & \hspace{-0.5cm} \includegraphics[width=6cm]{images/phydnet_missing_mse.png} \\
\hspace{-0.8cm} \includegraphics[width=6cm]{images/lt_ssim.png} & \hspace{-0.5cm} \includegraphics[width=6cm]{images/missing_ssim.png} \\
(a) Long-term forecasting & (b) Missing data
\end{tabular} \\
\caption{Comparison between PhyDNet and DDPAE \cite{hsieh2018learning} when dealing with unreliable inputs (MSE, top row; SSIM, bottom row), for long-term forecasting (a) and in presence of missing data (b).}
\label{fig:long-term}
\end{figure}
\section{Conclusion}
We have proposed PhyDNet, a new model for disentangling prior dynamical knowledge from the other factors of variation required for video prediction. PhyDNet makes it possible to apply PDE-constrained prediction beyond fully observed physical phenomena in pixel space, and to outperform the state of the art on four generalist datasets. Our recurrent physical cell for modelling PDE dynamics generalizes recent models and offers the appealing property of decoupling prediction from correction.
\clearpage{\pagestyle{empty}\cleardoublepage}
\section{Introduction}
\lettrine[lines=3]{A}s discussed in the previous Chapter, forecasting solar irradiance with fisheye images remains a very difficult task for pure deep learning methods, because of the complex non-stationary motion of clouds. In this Chapter, we adapt the methodological contributions of this thesis, namely the DILATE loss function (Chapter \ref{chap:dilate}), the PhyDNet video prediction model (Chapter \ref{chap:phydnet}) and the APHYNITY framework (Chapter \ref{chap:aphynity}), for solving this problem.
\section{Proposed forecasting models}
Given a dataset of fisheye images $\mathbf{u}_{1:T} = (\mathbf{u}_1,...,\mathbf{u}_T)$ and associated solar irradiance measurements $r_t$, our goal is to forecast the future irradiance $r_{T+H}$ for a given horizon $H$.
First, we briefly review the PhyDNet model (Section \ref{sec:reviewphydnet}) and propose an improvement to the architecture for better disentangling the physical and residual components (Section \ref{sec:phydnet-improvement}). Then, we propose two implementations of the PhyDNet model for solar irradiance forecasting (Section \ref{sec:phydnet-solar}). The PhyDNet-monostep model is a direct adaptation of the architecture introduced in the previous Chapter, where the ConvLSTM is replaced by PhyDNet; we call this model PhyDNet-monostep since it directly predicts the future irradiance at the desired horizon $r_{T+H}$. We also propose the PhyDNet-multistep model, which forecasts the entire trajectory up to the desired horizon $(r_{T+1}, \cdots, r_{T+H})$. This multistep extension allows us to exploit the whole intermediate trajectory during learning, for example by using the DILATE loss, which compares multistep time series.
\subsection{Review of the PhyDNet model}
\label{sec:reviewphydnet}
As described in Chapter \ref{chap:phydnet}, PhyDNet \cite{leguen20phydnet} is a deep architecture that leverages partial differential equations (PDEs) for video prediction. Since physics alone is not sufficient for accurate predictions at the pixel level, PhyDNet aims at learning a latent space $\bm{\mathcal{H}}$ that linearly disentangles the physical dynamics from residual factors (such as texture and fine details). The latent state $\mathbf{h}$ is decomposed into physical and residual components $\mathbf{h} = \mathbf{h^p} + \mathbf{h^r}$, and follows the dynamics:
\begin{equation}
\!\!\!\dfrac{\partial \mathbf{h}(t,\mathbf{x})}{\partial t} \! = \!\frac{\partial \mathbf{h^p}}{\partial t} \!+\! \frac{\partial \mathbf{\mathbf{h^r}}}{\partial t} \!:=\! \bm{\mathcal{M}}_{p}(\mathbf{h^p},\mathbf{E(u)}) + \bm{\mathcal{M}}_{r}(\mathbf{\mathbf{h^r}},\mathbf{E(u)}). \!\!\!
\label{eq:eq1}
\end{equation}
The physical model $\bm{\mathcal{M}}_p$ is composed of a PDE in latent space $\Phi_p(\mathbf{h^p})$ and a correction term $\mathcal{C}_p(\mathbf{h^p},\mathbf{E(u)})$ with input data (embedded by encoder $\mathbf{E}$): $\bm{\mathcal{M}}_p(\mathbf{h^p},\mathbf{E(u)}) = \Phi_p(\mathbf{h^p})+ \mathcal{C}_p(\mathbf{h^p},\mathbf{E(u)})$. The physical predictor $\Phi_p$ encodes a general class of linear PDEs up to a differential order $q$:
\begin{equation}
\Phi_p(\mathbf{h^p}(t,\mathbf{x})) = \sum_{i,j: i+j \leq q} c_{i,j} \dfrac{\partial^{i+j} \mathbf{h^p}}{\partial x^i \partial y^j}(t,\mathbf{x}).
\label{eq:phi_solar}
\end{equation}
Partial derivatives are computed by constrained convolutions as in PDE-Net \cite{long2018pde} and combined by learned coefficients $c_{i,j}$. Discretizing the PDE $\frac{\partial \mathbf{h^p}}{\partial t}(t,\mathbf{x})= \bm{\mathcal{M}}_p(\mathbf{h^p},\mathbf{E(u)})$ with the forward Euler numerical scheme leads to a recurrent neural network cell (PhyCell). PhyCell performs a physical prediction step in latent space (Eq \ref{eq:prediction_phycell}) followed by a correction with the embedded input data $\mathbf{E}(\mathbf{u}_t)$ (Eq \ref{eq:correction_phycell}), with a trade-off controlled by the learned Kalman gain $\mathbf{K}_t$:
\begin{empheq}[]{alignat=2}
& \tilde{\mathbf{h}}^\mathbf{p}_{t+1} \!= \mathbf{h}^{\mathbf{p}}_{t} + \Phi_p(\mathbf{h}^{\mathbf{p}}_{t}) & \!\!\!\quad \text{\small{\textbf{Prediction}\!}} \label{eq:prediction_phycell}\\
& \mathbf{h}^{\mathbf{p}}_{t+1} \!= \tilde{\mathbf{h}}^{\mathbf{p}}_{t+1} + \mathbf{K}_t \odot \left( \mathbf{E}(\mathbf{u}_t) - \tilde{\mathbf{h}}^{\mathbf{p}}_{t+1} \right). & \!\!\! \quad \text{\small{\textbf{Correction}\!}} \label{eq:correction_phycell}
\end{empheq}
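The PhyCell recurrence above can be sketched in a few lines of NumPy. The operator $\Phi_p$ is replaced here by a toy first-order finite-difference combination with hypothetical coefficients, a simplified stand-in for the constrained convolutions of the actual implementation:

```python
import numpy as np

def phi_p(h, c1=0.1, c2=0.1):
    """Toy linear PDE operator: weighted first-order finite differences
    along x and y (illustrative stand-in for the constrained convolutions)."""
    dx = np.roll(h, -1, axis=0) - h          # forward difference in x
    dy = np.roll(h, -1, axis=1) - h          # forward difference in y
    return c1 * dx + c2 * dy

def phycell_step(h_p, e_u, K):
    """One PhyCell update: Euler prediction, then Kalman-style correction."""
    h_tilde = h_p + phi_p(h_p)               # prediction step
    return h_tilde + K * (e_u - h_tilde)     # correction step

h = np.ones((4, 4))                          # toy latent state h^p_t
e = np.zeros((4, 4))                         # toy embedded observation E(u_t)
out = phycell_step(h, e, K=np.full((4, 4), 0.5))
```

With $\mathbf{K}_t = 1$ the cell copies the embedded input (pure assimilation), while with $\mathbf{K}_t = 0$ it follows the latent dynamics only.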
The residual model $\bm{\mathcal{M}}_r(\mathbf{h^r},\mathbf{E(u)})$ captures the unknown factors related to unmodelled physics (e.g.~appearance and texture) and is fully learned from data (implemented as a general ConvLSTM \cite{xingjian2015convolutional}).
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{images/phydnet_monostep.png}
\caption[PhyDNet-monostep architecture for solar irradiance forecasting.]{\textbf{PhyDNet-monostep architecture for solar irradiance forecasting.} Input images are embedded by an encoder $\mathbf{E}$ in a common latent space, followed by specific encoders $\mathbf{E_p}$ and $\mathbf{E_r}$ for extracting physical and residual features. The PhyDNet recurrent model is unfolded in time and computes a context vector $c=\mathbf{D_p}(\mathbf{h}^\mathbf{p}_T) +\mathbf{D_r}(\mathbf{h}^\mathbf{r}_T)$, which is used for predicting the future irradiance $\hat{r}_{T+H}$ and image $\hat{\mathbf{u}}_{T+H}$.}
\label{fig:phydnet-mono}
\end{figure*}
\subsection{PhyDNet model with separate encoders and decoders}
\label{sec:phydnet-improvement}
One limitation of the PhyDNet model is that images $\mathbf{u}_t$ are embedded by the encoder $\mathbf{E}$ in a common latent space for correcting the dynamics of both the physical model $\mathcal{C}_p(\mathbf{h^p},\mathbf{E(u)})$ and the residual model $\mathcal{C}_r(\mathbf{h^r},\mathbf{E(u)})$. This limits the disentangling ability of PhyDNet, since $\mathbf{E}(\mathbf{u}_t)$ contains both physical and residual features. We thus propose to learn separate latent spaces for the two branches, via additional specific encoders $(\mathbf{E_p},\mathbf{E_r})$ and decoders $(\mathbf{D_p},\mathbf{D_r})$, leading to the following dynamical model:
\begin{equation}
\!\!\!\dfrac{\partial \mathbf{h}(t,\mathbf{x})}{\partial t} \! =\! \bm{\mathcal{M}}_{p}(\mathbf{h^p},\mathbf{E_p \circ E(u)}) + \bm{\mathcal{M}}_{r}(\mathbf{\mathbf{h^r}},\mathbf{E_r \circ E(u)}). \!\!\!
\label{eq:eq1-dual}
\end{equation}
$\mathbf{E_p}$ aims at learning a specific image embedding for controlling the physical dynamics in latent space with correction features uniquely related to physics (and similarly for $\mathbf{E_r}$).
\noindent In the following, we denote this model as PhyDNet-dual.
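The dual-encoding scheme can be illustrated with linear stand-ins for the shared encoder $\mathbf{E}$ and the branch-specific heads $\mathbf{E_p}$ and $\mathbf{E_r}$ (the dimensions and weight matrices below are purely illustrative, not those of the real convolutional model):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_lat = 16, 8

# Hypothetical linear stand-ins for the shared encoder E and the
# branch-specific heads E_p (physical) and E_r (residual).
W_E  = rng.normal(size=(d_lat, d_in))
W_Ep = rng.normal(size=(d_lat, d_lat))
W_Er = rng.normal(size=(d_lat, d_lat))

def embed(u):
    z = W_E @ u                  # shared embedding E(u)
    return W_Ep @ z, W_Er @ z    # branch features E_p(E(u)) and E_r(E(u))

u = rng.normal(size=d_in)
e_p, e_r = embed(u)              # each branch gets its own correction features
```

Each branch thus receives a different view of the same input, which is what enables the specialized corrections $\mathcal{C}_p$ and $\mathcal{C}_r$.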
\begin{figure}
\centering
\includegraphics[width=16cm]{images/phydnet_multistep.png}
\caption[PhyDNet-multistep architecture for solar irradiance forecasting.]{\textbf{PhyDNet-multistep architecture for solar irradiance forecasting.} This is a sequence-to-sequence architecture built on the PhyDNet recurrent neural network. Contrary to PhyDNet-monostep, this model predicts the future solar irradiance and image at each time step of the prediction range.}
\label{fig:phydnet-multi}
\end{figure}
\subsection{PhyDNet for solar irradiance forecasting}
\label{sec:phydnet-solar}
We first propose the PhyDNet-monostep architecture, which is a direct adaptation of the forecasting model described in Chapter \ref{chap:overview_fisheye}. As depicted in Figure \ref{fig:phydnet-mono}, we replace the ConvLSTM encoding the input sequence $\mathbf{u}_{1:T}$ by the PhyDNet-dual encoder, which extracts physically-constrained features. The final physical and residual latent states are decoded by their respective specific decoders $\mathbf{D_p}$ and $\mathbf{D_r}$ and then summed to get a context vector $c=\mathbf{D_p}(\mathbf{h}^\mathbf{p}_T) +\mathbf{D_r}(\mathbf{h}^\mathbf{r}_T)$. A multi-layer perceptron (MLP) then uses the context $c$ to forecast the future irradiance $\hat{r}_{T+H}$, while the global decoder $\mathbf{D}$ simultaneously forecasts the future image $\mathbf{D}(c) = \hat{\mathbf{u}}_{T+H}$.
We also propose PhyDNet-multistep, shown in Figure \ref{fig:phydnet-multi}. Instead of directly forecasting the future values from the last step of the input range, PhyDNet-multistep relies on a PhyDNet-dual recurrent decoder, which provides future image and irradiance predictions at each time step of the prediction range $(T+1,\cdots, T+H)$. This multistep strategy makes it possible to supervise the model on a whole predicted trajectory: in the experiments, we evaluate the application of the DILATE training loss function instead of the MSE.
\section{Experimental results}
We conduct experiments on the same fisheye dataset as in the previous Chapter. The training dataset for solar irradiance forecasting is composed of 180,000 sequences of 10 images sampled at 1-minute intervals (with the associated ground truth solar irradiance measurements) from the years 2014 to 2016 at La Reunion Island; the evaluation dataset is composed of 20,000 sequences from the year 2013 on the same site. We keep 5 images for the input range and predict the 5 following images and solar irradiances. Images are resized to $80 \times 80$ pixels.
\subsection{Irradiance forecasting with PhyDNet}
We forecast solar irradiance at a 5min horizon, given a 5min past context. We compare quantitatively the proposed PhyDNet models against recent competitive video prediction baselines: ConvLSTM \cite{xingjian2015convolutional} (which corresponds to the model presented in Chapter \ref{chap:overview_fisheye}) and PredRNN \cite{wang2017predrnn}. Each baseline is adapted in the same way for solar irradiance forecasting, in the monostep or multistep settings.
We report in Table \ref{tab:irradiance} the normalized RMSE\footnote{nRMSE = Root Mean Squared Error normalized by the mean value of the quantity on the train set, expressed as a percentage.} for the predicted irradiance (KGHI) $\hat{r}_{T+\text{5min}}$.
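For reference, the nRMSE of the footnote corresponds to the following computation:

```python
import numpy as np

def nrmse(y_pred, y_true, y_train_mean):
    """RMSE normalized by the mean value of the quantity on the training
    set, expressed as a percentage (the nRMSE reported in the tables)."""
    err = np.asarray(y_pred) - np.asarray(y_true)
    rmse = np.sqrt(np.mean(err ** 2))
    return 100.0 * rmse / y_train_mean
```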
\begin{table}[H]
\centering
\caption{Solar irradiance (KGHI) forecasting at a 5min horizon.}
\begin{tabular}{c|c}
\toprule
& irradiance nRMSE \\
\midrule
PhyDNet-monostep irradiance only & 27.8 \% \\
ConvLSTM-monostep \cite{xingjian2015convolutional} & 26.6 \% \\
PredRNN-monostep \cite{wang2017predrnn} & 25.1 \% \\
PhyDNet-monostep \cite{leguen20phydnet} & 24.4 \% \\
PhyDNet-dual-monostep & 23.5 \% \\
PhyDNet-dual-multistep & \textbf{21.5 \%} \\
\bottomrule
\end{tabular}
\label{tab:irradiance}
\end{table}
The first line in Table \ref{tab:irradiance} corresponds to a PhyDNet-monostep that only predicts the future irradiance $\hat{r}_{T+\text{5min}}$ and not the future image. It gives the worst performance among the compared models, indicating that the joint image-irradiance multitask setting provides a better supervision for training the forecasting model. All the other models in Table \ref{tab:irradiance} jointly predict future images and irradiances.
We observe that, in the monostep setting, the PhyDNet recurrent neural network gives better results (24.4\%) than the ConvLSTM (26.6\%) and PredRNN (25.1\%), showing that integrating physical dynamics greatly helps in modelling the cloud motion. With the separate encoders and decoders, PhyDNet-dual-monostep further improves performance (23.5\%). Finally, the multistep strategy brings another large improvement (PhyDNet-dual-multistep, 21.5\%): the supervision coming from a complete trajectory of future images and irradiances significantly benefits the training process.
We provide in Figure \ref{fig:fisheye-qualitative} a qualitative illustration of the 5min GHI predictions of PhyDNet-dual-multistep on a particular day. Our model closely follows the ground truth measurements and successfully anticipates the sharp irradiance fluctuations, despite the fast alternation of clouds and sun.
\begin{figure}
\centering
\includegraphics[width=15cm]{images/fisheye_fig1.png}
\caption[Short-term forecasting with fisheye images.]{5min ahead solar irradiance forecasts from fisheye images. Our proposed deep model leveraging physical prior knowledge accurately predicts the sharp intra-day solar irradiance fluctuations.}
\label{fig:fisheye-qualitative}
\end{figure}
\subsection{Applications of DILATE and APHYNITY}
We evaluate here the application of the DILATE loss function (Chapter \ref{chap:dilate}) and APHYNITY framework (Chapter \ref{chap:aphynity}) introduced in this thesis.
We use the DILATE loss at training time instead of the MSE for the predicted irradiance time series (5 predicted points in the future). We experimentally fixed the hyperparameter $\alpha$ balancing the shape and temporal terms to 0.95, which yields the best results.
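As a reminder, the shape term of DILATE relies on the soft-DTW recursion, which can be sketched in plain NumPy with a quadratic cost (a minimal illustrative version; the actual training uses the differentiable implementation of Chapter \ref{chap:dilate}):

```python
import numpy as np

def soft_min(values, gamma):
    """Smooth minimum: softening of min() with temperature gamma."""
    v = np.asarray(values)
    m = v.min()
    return m - gamma * np.log(np.sum(np.exp(-(v - m) / gamma)))

def soft_dtw(x, y, gamma=0.1):
    """Soft-DTW discrepancy between two 1-D series (dynamic programming)."""
    n, m = len(x), len(y)
    R = np.full((n + 1, m + 1), np.inf)
    R[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2
            R[i, j] = cost + soft_min(
                [R[i - 1, j - 1], R[i - 1, j], R[i, j - 1]], gamma)
    return R[n, m]
```

The DILATE objective then combines this shape term with a temporal term penalizing localization errors of the optimal alignment, weighted by $\alpha$ and $1-\alpha$.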
For APHYNITY, we minimize the norm of the residual hidden state $\mathbf{h^r}$ for all time steps. Note that contrary to the APHYNITY models presented in Chapter \ref{chap:aphynity}, we do not use the NeuralODE here for extrapolating the trajectory in latent space, but the PhyDNet recurrent neural network. Exploiting a NeuralODE integration is a promising direction for future work.
Forecasting results are presented in Table \ref{tab:irradiance-dilate-aphynity}. We compare the application of DILATE, APHYNITY and the combination of both mechanisms. These three variants lead to similar performance: they improve slightly over the PhyDNet-dual-multistep baseline in normalized RMSE and in the DILATE objective (confirmed by the shape and temporal metrics).
\paragraph{Discussion} The performance improvement due to DILATE and APHYNITY exists, but is rather small compared to the performance gap due to the architecture design of PhyDNet-dual and to the multistep training scheme. We discuss here the possible reasons. Concerning DILATE, we apply the loss on predicted trajectories of 5 timesteps. This is rather short compared to our experiments in Chapter \ref{chap:dilate} (the shortest trajectories have 20 timesteps for the \texttt{Synthetic} dataset). For shorter trajectories, dynamic time warping is less relevant, and the benefit on sharp variations is more difficult to assess. Extending the forecasting horizon of our method and reducing the time interval between images (up to the 10s sampling frequency) are interesting future directions for better exploiting the DILATE loss.
Regarding APHYNITY, the physical model used in PhyDNet is a class of linear PDEs. This is a very coarse physical prior, more general than in the experiments presented in Chapter \ref{chap:aphynity}. Moreover, since the prior is not expressed in the observation space, the physical model is applied in a learned latent space which is not explicitly controlled, contrary to the fully-visible setting in Chapter \ref{chap:aphynity}. This may explain why optimizing the ML/MB decomposition leads to smaller improvements. An appealing future direction would be to exploit more specific physical laws modelling the cloud motion and/or a more precise description of the input space where the physical laws apply.
\begin{table}[H]
\centering
\caption{Evaluation of the DILATE loss and the APHYNITY framework on the 5-min solar irradiance forecasting problem.}
\begin{tabular}{cccccc}
\toprule
& nRMSE & DTW & TDI & DILATE & Ramp score \\
\midrule
PhyDNet-dual-multistep & 21.5 \% & 34.1 & 63.3 & 97.4 & 78.6 \\
DILATE & \textbf{21.2 \%} & \textbf{33.6} & 63.0 & 96.6 & \textbf{77.3} \\
APHYNITY & 21.4 \% & 34.2 & 62.2 & 96.4 & \textbf{77.3} \\
APHYNITY + DILATE & \textbf{21.2 \%} & \textbf{33.6} & \textbf{61.5} & \textbf{95.1} & 77.9 \\
\bottomrule
\end{tabular}
\label{tab:irradiance-dilate-aphynity}
\end{table}
\subsection{Video prediction}
We then evaluate PhyDNet-dual-multistep on the video prediction task. Given 5 input images with a 1 min interval, we forecast the 5 future images up to $t_0 + 5\text{min}$. We compare PhyDNet-dual-multistep with ConvLSTM and Memory In Memory (MIM) \cite{wang2019memory}.
Evaluation metrics are the mean squared error (MSE) and mean absolute error (MAE), for which lower is better, and the structural similarity index (SSIM), for which higher is better. Results shown in Table \ref{tab:video-prediction} reveal that PhyDNet-dual-multistep outperforms both baselines on all metrics. It confirms that incorporating physical prior information for modelling cloud motion is beneficial compared to fully data-driven algorithms.
\begin{table}[H]
\caption{Quantitative video prediction results.}
\centering
\begin{tabular}{c|c|c|c}
\toprule
& MSE & MAE & SSIM \\
\midrule
ConvLSTM \cite{xingjian2015convolutional} & 83.1 & 681 & 0.845 \\
MIM \cite{wang2019memory} & 68.6 & 635 & 0.840 \\
PhyDNet-dual-multistep & $\mathbf{68.1}$ & $\mathbf{629}$ & $\mathbf{0.862}$ \\
\bottomrule
\end{tabular}
\label{tab:video-prediction}
\end{table}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{images/fisheye_video_prediction1.png}
\caption[Qualitative fisheye video forecasting results.]{Qualitative fisheye video forecasting results up to 5min horizon. The proposed model successfully predicts the motion of the blue and green clouds that move nearer and finally merge into the yellow cloud.}
\label{fig:video-prediction}
\end{figure*}
We show in Figure \ref{fig:video-prediction} a video prediction example of the PhyDNet-dual model. The future of this sequence presents two clouds (circled in blue and green) moving closer between $t_0$ and $t_0+3\text{min}$ and finally merging at time $t_0+4\text{min}$. PhyDNet-dual predicts the same outcome with good accuracy on the cloud locations, although the clouds become blurry because of uncertainty.
In Figure \ref{fig:ablation_fisheye}, we provide a direct comparison to ConvLSTM \cite{xingjian2015convolutional}, which forms the residual branch of PhyDNet. In sequence (a), the shape of the small cloud approaching the sun is much better predicted by PhyDNet-dual. In sequence (b), the sun reappears 1 min in the future; PhyDNet-dual provides a better anticipation by predicting a bright spot at the sun location and better-defined cloud shapes. This confirms that incorporating physical dynamics greatly improves the predictions of natural phenomena, with a small number of additional parameters with respect to ConvLSTM.
\begin{figure*}
\centering
\includegraphics[width=16cm]{images/fisheye_ablation.png}
\caption{Qualitative forecasting comparison between PhyDNet-dual-multistep and ConvLSTM.}
\label{fig:ablation_fisheye}
\end{figure*}
\section{Conclusion}
In this Chapter, we have explored the methodological contributions of this thesis for solving the solar irradiance forecasting problem at EDF. We have proposed an improvement of our PhyDNet video prediction model, which we have adapted for this task. The PhyDNet model greatly improves performance compared to competitive purely data-driven baselines, confirming the benefits of the MB/ML integration. We have also highlighted the crucial importance of making multistep instead of monostep predictions. Furthermore, we have applied the DILATE loss function and the APHYNITY framework, which further improve the forecasting performance, albeit slightly.
\clearpage{\pagestyle{empty}\cleardoublepage}
\section{Machine Learning}
\label{sec:dl-background}
\subsection{Background}
\lettrine[lines=3]{D}eep Learning belongs to the broader category of statistical machine learning. In the \textit{supervised learning} context, the goal is to estimate the optimal mapping $Y= f(X)$ between inputs $X$ and outputs $Y$, given a training dataset of $N$ labelled examples $\left\{ (X_i,Y_i) \right\}_{i=1}^N \in (\mathcal{X} \times \mathcal{Y})^N$. The inputs are represented by the attribute (or feature) vectors $X_i \in \mathbb{R}^d$, and the target $Y_i$ can be a categorical variable $Y_i \in \left\{ 0,1,...,K\right\}$ for classification tasks or a real variable $Y_i \in \mathbb{R}^k$ for regression tasks. We illustrate in Figure \ref{fig:ml_framework} the supervised machine learning framework in the case of time series forecasting.
\begin{figure}[H]
\centering
\includegraphics[width=16cm]{images/ml_framework.png}
\caption{Supervised machine learning framework for time series forecasting.}
\label{fig:ml_framework}
\end{figure}
\paragraph{Learning framework} The classifier or regressor function $f$ is optimized over a hypothesis class $\mathcal{H}$ of functions. Examples of classes include linear models, kernel methods, and neural networks. This class should be carefully chosen for the task, guided by the bias-variance tradeoff \cite{bishop:2006:PRML}: $\mathcal{H}$ should be sufficiently expressive to model the solution of the problem, but too large a model capacity, while reducing the bias, favours overfitting on the training set.
Once the class $\mathcal{H}$ is defined, we want to select the function $f$ that best fits the training data, while generalizing correctly to unseen input data coming from the same distribution. Training the model consists in minimizing the risk $R(f)$, which measures the disagreement between the predictions and the ground truth labels with a loss function $\ell: \mathcal{Y} \times \mathcal{Y} \rightarrow \mathbb{R}^+$:
\begin{align}
R(f) &:= \mathbb{E}_{(X,Y) \sim \mathcal{D}} ~ \ell(f(X),Y) \\
f^* &= \text{argmin}_{f \in \mathcal{H}} ~ R(f).
\end{align}
In practice, the joint distribution $\mathcal{D}$ over $\mathcal{X} \times \mathcal{Y}$ is unknown, therefore we minimize the empirical risk defined with the training samples:
\begin{equation}
R_n(f) := \frac{1}{N} \sum_{i=1}^N \ell(f(X_i),Y_i).
\end{equation}
\paragraph{Training loss functions} In the context of binary classification ($\mathcal{Y} = \left\{0,1\right\}$), a common loss function is the binary cross-entropy:
\begin{equation}
\ell(f(X),Y) = - [ Y \log f(X) + (1-Y) \log (1-f(X)) ].
\end{equation}
For regression problems such as those found in time series or video prediction, the most common loss function is the mean squared error (MSE), corresponding to the L2 loss averaged over input-output pairs:
\begin{equation}
\ell(f(X),Y) = \Vert f(X)-Y \Vert_2^2.
\label{eq:mse}
\end{equation}
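These two standard losses translate directly into code (a minimal NumPy sketch for scalar and vector targets):

```python
import numpy as np

def bce(y_hat, y):
    """Binary cross-entropy for a prediction y_hat = f(X) in (0, 1)."""
    return -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

def mse(y_hat, y):
    """Squared L2 loss used for regression and video prediction."""
    return np.sum((np.asarray(y_hat) - np.asarray(y)) ** 2)
```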
\paragraph*{Monostep vs. multistep forecasts} For time series forecasting, the loss function $\ell$ can be applied to compare either monostep or multistep forecasts. Monostep forecasting methods compute a one-step-ahead prediction $\mathbf{\hat{y}}_{T+1}$ given past values $(\mathbf{y}_1,\cdots, \mathbf{y}_T)$, which is compared to the ground truth future $\mathbf{y}^*_{T+1}$: $\ell( \mathbf{\hat{y}}_{T+1},\mathbf{y}^*_{T+1})$. In contrast, multistep forecasts compute the loss on multiple predicted timesteps: $\ell\big( (\mathbf{\hat{y}}_{t})_{T+1:T+H} , (\mathbf{y}^*_{t})_{T+1:T+H}\big)$. The mean squared error (MSE), dominantly used in applications, is \textit{separable}, i.e.~the multistep loss is the sum of the losses over all individual timesteps. In this thesis, we study dedicated loss functions for multistep forecasting that are non-separable, for explicitly imposing a desired behaviour based on the whole predicted trajectory.
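The separability of the MSE is easy to check numerically: the multistep loss factorizes as a sum of per-timestep losses (toy random series below):

```python
import numpy as np

rng = np.random.default_rng(0)
H = 5
y_hat  = rng.normal(size=H)   # multistep forecast over horizon H
y_true = rng.normal(size=H)

# Separable loss: multistep MSE == sum of per-timestep squared errors.
multistep = np.sum((y_hat - y_true) ** 2)
per_step  = sum((y_hat[t] - y_true[t]) ** 2 for t in range(H))
```

A non-separable loss such as DILATE couples the timesteps through the alignment path, so no such factorization holds.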
\paragraph{Regularization} Machine learning models are optimized to predict the labels of the training set. However, a model that perfectly predicts those labels does not necessarily generalize well to unseen data. With high-capacity models such as deep neural networks, the risk is to learn the training set by heart and represent an overly complex function; this phenomenon is called \textit{overfitting}.
To overcome this issue, a common strategy is to add a \textit{regularization} term $\Omega$ to the training objective for penalizing the complexity of the model:
\begin{equation}
\underset{f \in \mathcal{H}}{\min} ~~ R_n(f) + \Omega(f).
\end{equation}
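For a linear model with an L2 penalty $\Omega(f) = \lambda \Vert w \Vert^2$ (ridge regression), this regularized objective admits a closed form, which illustrates the shrinkage effect (toy data below):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=50)

def ridge(X, y, lam):
    """Minimize R_n(w) + lam * ||w||^2 for a linear model (closed form)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w0 = ridge(X, y, 0.0)     # empirical risk only (ordinary least squares)
w1 = ridge(X, y, 10.0)    # the L2 penalty shrinks the weights
```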
From a Bayesian point of view, many regularizers correspond to certain prior distributions over the model parameters. The most popular choices include the L2 and L1 penalties on the weights. As we will discuss in Section \ref{sec:physicsbased-ml}, regularization is a possible way to leverage physical priors in a model.
\subsection{Deep neural networks}
Neural networks are based on the simple artificial neuron model proposed by McCulloch and Pitts \cite{mcculloch1943logical} and have been explored since the 1980s \cite{lecun1989backpropagation}. Standard feedforward neural networks are composed of a succession of mathematical functions called \textit{layers} that progressively transform the inputs $X$ into the outputs $Y$ through a sequence of intermediate representations $\mathbf{h}_l$ called \textit{hidden states}. A typical \textit{dense} (or \textit{fully-connected}) layer consists in a linear combination of the inputs followed by a nonlinear activation $\phi$: $\mathbf{h}_{l+1} = \phi(\mathbf{W}_l \mathbf{h}_l + \mathbf{b}_l)$ for the $l^{th}$ layer. Typical nonlinearities are the sigmoid, the hyperbolic tangent, and the Rectified Linear Unit (ReLU) $x \mapsto \max(0,x)$.
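A dense layer as defined above is a one-liner (random toy weights below):

```python
import numpy as np

def relu(x):
    """Rectified Linear Unit: x -> max(0, x)."""
    return np.maximum(0.0, x)

def dense(h, W, b, phi=relu):
    """One fully-connected layer: h_{l+1} = phi(W h_l + b)."""
    return phi(W @ h + b)

rng = np.random.default_rng(0)
h0 = rng.normal(size=4)                          # input hidden state
h1 = dense(h0, rng.normal(size=(3, 4)), np.zeros(3))
```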
Neural networks are trained using gradient descent algorithms, such as the basic Stochastic Gradient Descent (SGD) \cite{bottou2010large} or adaptive variants like AdaGrad \cite{duchi2011adaptive} or Adam \cite{kingma2014adam}. The gradient of the loss with respect to the model's parameters is computed by the backpropagation method \cite{lecun1989backpropagation}. All operations applied in the model must therefore be differentiable, in particular the loss function. We will see in this thesis that the choice of a differentiable loss function is a key aspect for imposing a desired behaviour.
Deep Learning has become popular since the victory of the AlexNet model \cite{krizhevsky2012imagenet} at the ImageNet competition in 2012. The main revolution of Deep Learning lies in the depth of the neural networks. By stacking many layers, the network progressively learns more and more complex feature representations of the input, from low-level concepts (such as colors or contours) to the most semantic concepts (such as the recognition of a particular object) necessary for image classification.
\begin{figure}[H]
\centering
\includegraphics[width=16cm]{images/archis.png}
\caption[The common layers used in deep learning.]{Common layers used in deep learning models. Shared parameters are shown with the same color. Figure taken from Battaglia \textit{et al.~} \cite{battaglia2018relational}.}
\label{fig:deep-archis}
\end{figure}
The choice of the neural network architecture is a critical aspect for solving a task. We illustrate in Figure \ref{fig:deep-archis} the three main kinds of layers. The Multi-Layer Perceptron (MLP) \cite{rosenblatt1961principles}, composed only of fully-connected layers, is the most generic architecture, but at the expense of a number of parameters that grows rapidly with the input dimension and depth, making it impractical for many applications. Other architectures encode specific inductive biases on data. For example, convolutional neural networks \cite{lecun1989backpropagation} encode spatial equivariance, i.e.~the response of a classifier should be independent of the particular location of objects in the image, by sharing a convolutional filter across all spatial positions. Likewise, recurrent neural networks encode translation equivariance for processing sequential data by reusing the same weights across time. More recent architectures encode other kinds of inductive biases: graph neural networks \cite{battaglia2016interaction} encode permutation invariance among a set of items, and the recent Transformer architecture \cite{vaswani2017attention} implements an attention mechanism over the positions of the sequence.
When investigating deeper and deeper architectures, researchers have been faced with training issues like the vanishing gradient problem, i.e.~the gradient of the loss can become very small after backpropagating through a large number of layers. To overcome this problem, He \textit{et al.~} \cite{he2016deep} proposed the \textit{residual neural networks} (ResNets), which add skip connections around blocks of standard layers:
\begin{equation}
\mathbf{x}_{l+1} = \mathcal{F}(\mathbf{x}_l) + \mathbf{x}_l,
\end{equation}
where $\mathbf{x}_l$ is the hidden state after the $l^{th}$ block and $\mathcal{F}$ denotes a nonlinear function (e.g.~a series of convolutions and nonlinear activations). These ``identity shortcuts'' allow a direct flow of the gradient and have significantly improved the training of very deep networks, leading to new state-of-the-art performance on ImageNet. Pursuing this idea, the \textit{densely connected networks} (DenseNets) of Huang \textit{et al.~} \cite{huang2017densely}, which connect all layers within a block with skip connections, have further improved performance.
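The residual connection itself is trivial to implement; the sketch below also illustrates why a block whose function $\mathcal{F}$ is near zero behaves as the identity, which is what eases gradient flow in deep stacks:

```python
import numpy as np

def res_block(x, F):
    """Residual connection: x_{l+1} = F(x_l) + x_l."""
    return F(x) + x

# With F near zero, the block is close to the identity mapping.
x = np.array([1.0, -2.0, 3.0])
y = res_block(x, lambda v: 0.0 * v)
```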
\paragraph{Difference between traditional ML and DL}
The main differences between traditional Machine Learning (ML) and Deep Learning (DL) are illustrated in Figure \ref{fig:mldl_diff} for the case of solar irradiance forecasting with fisheye images. The traditional ML pipeline (from the existing method at EDF \cite{gauchet2012surface}) is composed of several successive steps: camera calibration for compensating the fisheye distortion, projection of the input images onto a plane at a given altitude, optical flow estimation, image warping for computing the future frame, future image segmentation with handcrafted features and thresholds, and finally prediction of the future irradiance with a traditional regressor (e.g.~linear regression). Many of these steps require expert manual intervention. In contrast, the Deep Learning approach directly learns the image-to-irradiance mapping on raw fisheye images and automatically derives the appropriate intermediate concepts.
In fact, the difficulty of the task has shifted from the handcrafted feature engineering of traditional ML methods to the manual neural network architecture design of DL that encodes appropriate inductive biases or behaviours.
\begin{figure}[H]
\centering
\includegraphics[width=17cm]{images/ML_DL.png}
\caption{Traditional Machine Learning vs. Deep Learning for forecasting solar irradiance with fisheye images.}
\label{fig:mldl_diff}
\end{figure}
\section{Spatio-temporal forecasting}
\label{sec:spatiotemp-forecasting}
In this Section, we review the main existing machine learning approaches for spatio-temporal forecasting, from the traditional statistical time series forecasting to the most recent deep learning methods.
\subsection{Context and notations}
As discussed in Introduction (Chapter \ref{chap:intro}), we are interested in forecasting spatio-temporal processes driven by some underlying physical phenomenon. We consider dynamical systems formalized through a differential equation of the form:
\begin{equation}
\frac{\diff X_t}{\diff t} = F(X_t).
\label{eq:ode-relatedwork}
\end{equation}
The \textit{state} of the system $X_t$ represents the variables whose knowledge at time $t_0$ is sufficient, in combination with the evolution function $F$, for describing the phenomenon at each time $t>t_0$. The state $X_t$ can be parameterized either by:
\begin{itemize}
\setlength\itemsep{0em}
\item a $d$-dimensional vector, i.e.~ we have $X_t\in\mathbb{R}^d$ for every $t$. In that case, equation \ref{eq:ode-relatedwork} is an \textit{ordinary differential equation} (ODE);
\item a $d$-dimensional vector field over a spatial domain $\Omega\subset\mathbb{R}^k$, with $k\in\{2,3\}$, i.e.~ $X_t(x)\in\mathbb{R}^d$ for every $(t,x)\in[0,T]\times\Omega$. If the description in Eq \ref{eq:ode-relatedwork} involves spatial derivatives of the state, it corresponds to a \textit{partial differential equation} (PDE).
\end{itemize}
Many phenomena occurring in physics, biology, computer vision, or finance follow a general equation of the form \ref{eq:ode-relatedwork}.
To solve the differential equation \ref{eq:ode-relatedwork} numerically, the most common approach is to discretize the phenomenon into a sequence $(\mathbf{x}_1, \mathbf{x}_2, \cdots, \mathbf{x}_T)$ and approximate the time derivative with finite differences. The simplest numerical scheme is the forward Euler method:
\begin{equation}
\mathbf{x}_{n+1} = \mathbf{x}_n + \Delta t ~ F(\mathbf{x}_n),
\end{equation}
where $\Delta t$ is a fixed step size. We will see that this approximation scheme has strong connections with residual neural networks (Section \ref{sec:continuous-time-models}). More complex numerical schemes exist with lower truncation errors, e.g.~ Runge-Kutta \cite{butcher2016numerical}.
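A minimal implementation of the forward Euler scheme, checked here on the linear decay $\frac{\diff x}{\diff t} = -x$ whose exact solution is $e^{-t}$:

```python
import numpy as np

def euler(F, x0, dt, n_steps):
    """Forward Euler integration: x_{n+1} = x_n + dt * F(x_n)."""
    x = x0
    traj = [x0]
    for _ in range(n_steps):
        x = x + dt * F(x)
        traj.append(x)
    return np.array(traj)

# Integrate dx/dt = -x from x(0) = 1 up to t = 1 with dt = 0.01;
# the result should be close to the exact value exp(-1).
traj = euler(lambda x: -x, 1.0, 0.01, 100)
```

Smaller step sizes $\Delta t$ reduce the truncation error, at the cost of more steps.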
For predicting a dynamical system of the form \ref{eq:ode-relatedwork}, two main modelling approaches exist:
\begin{itemize}
\item parameterize the relationship between future time steps and context time steps: $ (\hat{\mathbf{y}}_{T+1},\dots, \hat{\mathbf{y}}_{T+H} ) = g_{\theta}(\mathbf{x}_1,\dots,\mathbf{x}_T)$ with parameters $\theta$. The function $g_{\theta}$ can represent a traditional time series forecasting model like an autoregressive model \cite{box2015time} or a deep neural network.
\item parameterize the derivative function $F_{\theta}$ and integrate the ODE/PDE with a numerical solver. This is the typical case of numerical simulation with a physical model $F_{\theta}$. The function $F_{\theta}$ can also be a deep neural network approximating the dynamics, as done by the Neural ODEs \cite{chen2018neural} presented in Section \ref{sec:continuous-time-models}.
\end{itemize}
\subsection{Model-Based forecasting methods}
As discussed in Chapter \ref{chap:intro}, the traditional modelling paradigm in physics is to derive analytical laws of motion from first principles and integrate the equations with numerical simulation. These models are often expressed as ordinary or partial differential equations (ODEs/PDEs). This arises in a multitude of scientific fields, such as Newtonian mechanics, fluid dynamics or quantum mechanics. For example, we will consider in this thesis the wave equations:
\begin{equation*}
\frac{\partial^2 w}{\partial t^2} - c^2\Delta w + k \frac{\partial w}{\partial t}=0 ,
\end{equation*}
where $k$ is the damping coefficient and $c$ the celerity of the wave.
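As an illustration, one explicit finite-difference step of this damped wave equation in one dimension can be sketched as follows (periodic boundary, illustrative step sizes; the explicit scheme requires $c\,\Delta t/\Delta x$ to be small for stability):

```python
import numpy as np

def wave_step(w_prev, w_curr, c, k, dt, dx):
    """One explicit finite-difference step of the damped wave equation
    w_tt - c^2 * Laplacian(w) + k * w_t = 0 (1-D, periodic boundary)."""
    lap = (np.roll(w_curr, -1) - 2 * w_curr + np.roll(w_curr, 1)) / dx**2
    w_t = (w_curr - w_prev) / dt                 # backward difference in time
    w_tt = c**2 * lap - k * w_t                  # acceleration from the PDE
    return 2 * w_curr - w_prev + dt**2 * w_tt    # central 2nd-order update
```

Consistent with the PDE, a constant field is a stationary solution: the Laplacian and the time derivative both vanish, so the update returns the field unchanged.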
For time series forecasting, traditional Model-Based methods rely on linear state space models (SSMs) \cite{durbin2012time,hyndman2008forecasting}, which provide a principled framework for modelling known temporal patterns. SSMs include the popular autoregressive integrated moving-average (ARIMA) model and Exponential Smoothing. SSMs assume linear dynamics with structural components (e.g.~level, trend, seasonality), which makes forecasting robust and interpretable. However, the model selection procedure can be tedious, and these methods often exploit strong statistical assumptions (e.g.~i.i.d. additive Gaussian noise) and structural assumptions on data (e.g.~stationarity, possibly after differencing) that are not satisfied for many real-world time series, which can present abrupt changes of distribution. Moreover, SSMs are fitted independently on each time series, and thus cannot learn patterns shared between sets of similar series.
Regarding video prediction, traditional methods focus on predicting the motion field with optical flow, rather than predicting future frames at the pixel level. The seminal works of Lucas-Kanade \cite{lucas1981iterative} and Horn-Schunck \cite{horn1981determining} rely on the brightness constancy constraint, which assumes that the intensity value of a pixel remains constant between two frames. In its linearized form, this constraint can be expressed as a PDE:
\begin{equation}
\frac{\partial I}{\partial t} (t,\mathbf{x}) = - w(t,\mathbf{x}) \cdot \nabla I (t,\mathbf{x}).
\label{eq:flot}
\end{equation}
Again, this PDE corresponds to an incomplete model, since the brightness constancy assumption is violated in many situations, e.g.~in the presence of occlusions, illumination changes, or specular reflections.
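A toy sanity check of Eq \ref{eq:flot}: under a constant integer flow, the brightness-constancy prediction of the next frame is simply a shift of the current frame (the flow values below are illustrative):

```python
import numpy as np

def advect(I, flow):
    """Transport frame I by a constant integer flow (dy, dx): the
    brightness-constancy prediction I(t+1, x) = I(t, x - w)."""
    dy, dx = flow
    return np.roll(np.roll(I, dy, axis=0), dx, axis=1)

rng = np.random.default_rng(0)
frame = rng.random((8, 8))
pred = advect(frame, (1, 2))   # field shifted 1 px down, 2 px right
```

Pure advection conserves total intensity, which is exactly what fails in real cloud sequences with occlusions or illumination changes.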
\subsection{Deep learning forecasting methods}
Artificial neural networks were first explored in the 1990's for time series forecasting with Multi-Layer Perceptrons (MLPs) \cite{chakraborty1992forecasting,lee1992short,tang1993feedforward}
and Recurrent Neural Networks (RNNs) \cite{connor1994recurrent,kuan1995forecasting}. At that time, most of these architectures were limited to a single hidden layer and trained with one-step targets, restricting their applicability to simple problems.
With the advances in computer hardware and modern training techniques of the deep learning era, neural networks have become appealing for time series forecasting due to their automatic feature extraction, the ability to capture complex nonlinear temporal patterns and the ease to incorporate exogenous variables.
\subsubsection{Recurrent Neural Networks (RNNs)}
\begin{figure}
\centering
\includegraphics[width=12cm]{images/rnn_goodfellow.png}
\caption[Illustration of a recurrent neural network.]{A recurrent neural network. Figure taken from Goodfellow \cite{goodfellow2016deep}.}
\label{fig:rnn}
\end{figure}
RNNs denote a family of architectures dedicated to handling sequential data such as text, speech or time series. Illustrated in Figure \ref{fig:rnn}, RNNs implement a discrete time dynamical system, where a hidden variable $\mathbf{h}_t \in \mathbb{R}^d$, serving as a memory of the system, is recurrently updated across time. A basic RNN formulation can be written as:
\begin{align}
\mathbf{h}_t &= F(\mathbf{W} ~\mathbf{h}_{t-1} + \mathbf{U}~ \mathbf{x}_t + \mathbf{b} )
\label{eq:rnn}\\
\mathbf{o}_t &= \mathbf{V} ~ \mathbf{h}_t,
\end{align}
where $\mathbf{U}$ and $\mathbf{W}$ are weight matrices, $\mathbf{b}$ is a bias and $F$ an activation function (e.g.~ $\tanh$). The output $\mathbf{o}_t$ at time $t$, obtained by projecting the latent state with a weight matrix $\mathbf{V}$, is compared to the ground truth target $\mathbf{y}_t$ with a loss function $L$. Crucially, the weights of the RNN are identical for all timesteps (as shown in Figure \ref{fig:rnn}). Contrary to more general MLPs, weight sharing enables RNNs to encode time equivariance and to process sequences of arbitrary length. Deep recurrent neural networks can be built by stacking RNN cells.
RNNs are trained by backpropagation through time \cite{mozer1989focused}, i.e.~ by propagating the gradient of the loss function through the unfolded computational graph (see Figure \ref{fig:rnn}). A major drawback of the vanilla formulation in Eq \ref{eq:rnn} is the vanishing/exploding gradient problem when processing long sequences \cite{pascanu2013difficulty}, which prevents the network from memorizing long-term information in the current latent state. To address this limitation and model long-term dependencies, Hochreiter \textit{et al.~} \cite{Hochreiter:1997:LSM:1246443.1246450} introduced the Long Short-Term Memory (LSTM) network, which has an additional memory cell $\mathbf{c}_t$ controlled by a learned input gate $\mathbf{i}_t$ and forget gate $\mathbf{f}_t$:
\begin{align*}
\mathbf{i}_t &= \sigma (\mathbf{W}_{ih} ~ \mathbf{h}_{t-1} + \mathbf{W}_{ix} ~ \mathbf{x}_t + \mathbf{b}_i) \\
\mathbf{f}_t &= \sigma (\mathbf{W}_{fh} ~\mathbf{h}_{t-1} + \mathbf{W}_{fx} ~ \mathbf{x}_t + \mathbf{b}_f) \\
\mathbf{c}_t &= \mathbf{f}_t \odot \mathbf{c}_{t-1} + \mathbf{i}_t \odot \tanh (\mathbf{W}_{gh} ~ \mathbf{h}_{t-1} + \mathbf{W}_{gx} ~ \mathbf{x}_t + \mathbf{b}_g) \\
\mathbf{o}_t &= \sigma (\mathbf{W}_{oh} ~ \mathbf{h}_{t-1} + \mathbf{W}_{ox} ~ \mathbf{x}_t + \mathbf{b}_o) \\
\mathbf{h}_t &= \mathbf{o}_t \odot \tanh(\mathbf{c}_t).
\end{align*}
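To make the gate equations concrete, here is a minimal NumPy sketch of a single LSTM step (parameter names mirror the equations above; initialization and training are omitted):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell(x, h, c, p):
    """One LSTM step. p maps names like 'Wih' (input gate,
    hidden-to-hidden weights) and 'bi' to parameter arrays."""
    i = sigmoid(p["Wih"] @ h + p["Wix"] @ x + p["bi"])  # input gate
    f = sigmoid(p["Wfh"] @ h + p["Wfx"] @ x + p["bf"])  # forget gate
    g = np.tanh(p["Wgh"] @ h + p["Wgx"] @ x + p["bg"])  # candidate memory
    c_new = f * c + i * g                               # memory cell update
    o = sigmoid(p["Woh"] @ h + p["Wox"] @ x + p["bo"])  # output gate
    h_new = o * np.tanh(c_new)
    return h_new, c_new
```

The additive memory update $\mathbf{c}_t = \mathbf{f}_t \odot \mathbf{c}_{t-1} + \mathbf{i}_t \odot \mathbf{g}_t$ is what lets gradients flow over long horizons.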
LSTM networks and their variants, such as the Gated Recurrent Unit (GRU) \cite{cho2014learning}, have become a reference for many sequential tasks. Shi \textit{et al.~} \cite{xingjian2015convolutional} proposed the ConvLSTM adaptation for video prediction, replacing all the fully-connected operations of the LSTM by convolutions. The ConvLSTM was adopted in many subsequent studies \cite{finn2016unsupervised,jia2016dynamic,xu2018structure} and is at the basis of the most recent video prediction algorithms such as PredRNN \cite{wang2017predrnn,wang2018predrnn++}, Memory in Memory \cite{wang2019memory} or MotionRNN \cite{wu2021motionrnn}.
\subsubsection{Sequence To Sequence models}
For mapping a variable-length sequence to another variable-length sequence, Cho \textit{et al.~} \cite{cho2014learning} and Sutskever \textit{et al.~} \cite{sutskever2014sequence} proposed the Sequence To Sequence (Seq2Seq) architecture. The input sequence $(\mathbf{x}_1,\cdots,\mathbf{x}_{n_x})$ is processed by an encoder RNN that provides a fixed-size context vector $C$ summarizing the sequence, typically defined as the last hidden state of the RNN. This context vector is used to initialize the decoder, another RNN producing the predictions $(\mathbf{y}_1,\cdots,\mathbf{y}_{n_y})$ one step at a time. In a Seq2Seq model, both RNNs are trained jointly to maximize the likelihood $p(\mathbf{y}_1,\cdots,\mathbf{y}_{n_y} | \mathbf{x}_1,\cdots,\mathbf{x}_{n_x})$ averaged over all the input/output sequences of the training set.
When generating predictions, the RNN decoder is rolled forward by recursively feeding back its own predictions as inputs for the next timesteps. Seq2Seq models can be trained with \textit{teacher forcing}, which consists in feeding the true targets (known at training time) as inputs to the RNN, instead of the prediction from the last timestep. A popular curriculum often used in practice to mitigate the train/test discrepancy is \textit{scheduled sampling} \cite{bengio2015scheduled}, which randomly chooses between true values and model predictions as inputs, with the probability of using model predictions increasing over time to gradually converge towards test-time conditions.
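The inverse sigmoid decay schedule from \cite{bengio2015scheduled} can be sketched as follows (function name and constant $k$ are illustrative):

```python
import math
import random

def scheduled_sampling_input(y_true, y_pred, epoch, k=10.0):
    """Pick the next decoder input: the ground truth with probability
    eps(epoch) (teacher forcing), the model prediction otherwise.
    Inverse sigmoid decay eps = k / (k + exp(epoch / k)) drives the
    teacher-forcing probability towards 0, i.e. test-time conditions."""
    eps = k / (k + math.exp(epoch / k))
    return y_true if random.random() < eps else y_pred
```

Early in training the decoder mostly sees ground truth; late in training it mostly consumes its own predictions, as it will at test time.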
\begin{figure}
\centering
\includegraphics[width=16cm]{images/seq2seq_goodfellow.png}
\caption[Illustration of a Sequence To Sequence model.]{Sequence To Sequence model. Figure adapted from Goodfellow \cite{goodfellow2016deep}.}
\label{fig:seq2seq}
\end{figure}
Seq2Seq architectures with RNNs are at the basis of many successful models \cite{fox2018deep,rangapuram2018deep,kuznetsov2018foundations}. Salinas \textit{et al.~} \cite{salinas2017deepar} proposed DeepAR, a Seq2Seq model which estimates the parameters of a Gaussian distribution for the next timestep. Rangapuram \textit{et al.~} \cite{rangapuram2018deep} revisit traditional state space models (SSMs) by parameterizing them with deep recurrent networks. To limit the error accumulation due to autoregressive predictions, some models directly predict all future values at once, often with an MLP decoder \cite{wen2017multi}.
RNN forecasting can be improved with the attention mechanism, introduced by Bahdanau \textit{et al.~} \cite{Bahdanau2015NeuralMT} for machine translation \cite{qin2017dual,lai2018modeling,fan2019multi}. Attention consists in learning which part of the input sequence is the most relevant for predicting a given timestep. More precisely, the context vector $C$ is replaced with a combination of the hidden states from past timesteps weighted by their learned attention weights.
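The weighted combination can be sketched with a simplified dot-product score (Bahdanau \textit{et al.~} use a learned additive score; this NumPy sketch only illustrates the mechanism):

```python
import numpy as np

def attention_context(hiddens, query):
    """Replace the fixed context vector by a softmax-weighted
    combination of encoder hidden states (rows of `hiddens`),
    scored here by a dot product with the decoder query."""
    scores = hiddens @ query                 # alignment scores, shape (n,)
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    return weights @ hiddens, weights        # context vector C, weights
```

Hidden states whose score dominates receive attention weights close to 1, so the context vector focuses on the most relevant part of the input sequence.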
\subsubsection{Beyond recurrent architectures}
Training RNNs with backpropagation through time is expensive, since it requires sequential operations that cannot be parallelized. Researchers have therefore explored alternatives to recurrent architectures. Following the success of the WaveNet model for audio processing \cite{van2016wavenet}, temporal convolutional networks (TCNs) \cite{borovykh2017conditional,chen2020probabilistic} use causal dilated 1D-convolutions, which exponentially increase the receptive field with additional layers while respecting temporal causality. In addition, TCNs can be easily trained in parallel.
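The exponential growth of the receptive field is easy to compute (a small sketch: with kernel size $k$ and dilations $1, 2, \dots, 2^{L-1}$, each layer adds $(k-1)\times$dilation timesteps of history):

```python
def tcn_receptive_field(kernel_size, num_layers):
    """Receptive field of stacked causal 1D convolutions with
    dilation doubling at every layer."""
    rf = 1
    for layer in range(num_layers):
        rf += (kernel_size - 1) * 2 ** layer
    return rf
```

For kernel size 2, ten layers already cover 1024 past timesteps, whereas an RNN must propagate its state sequentially over the same span.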
Recently, a line of work has questioned the convolutional or recurrent layers used in most architectures, showing that fully-connected layers arranged in a careful way can outperform other methods. For example, pure attention-based models have proven better than LSTMs at capturing long-range relationships. The Transformer architecture of Vaswani \textit{et al.~} \cite{vaswani2017attention}, composed only of self-attention and fully-connected layers, avoids the recurrent structure and provides direct access to any previous timestep. Several works have proposed adaptations of the Transformer for time series forecasting \cite{li2019enhancing,zhou2020informer}. In particular, the Informer model of Zhou \textit{et al.~} \cite{zhou2020informer} is able to extend predictions to a long-term horizon with less degradation than competing methods.
Another example is the NBeats forecasting architecture \cite{oreshkin2019n} shown in Figure \ref{fig:nbeats}, which has recently shown state-of-the-art performance for deterministic forecasting. NBeats is composed of stacks of fully-connected blocks, each block outputting a partial forecast together with a backcast that removes the part of the signal well explained by the current block before passing the residual to the following block. The partial forecasts from each block are finally combined into the global forecast.
\begin{figure}
\centering
\includegraphics[width=15cm]{images/nbeats.png}
\caption[The NBeats model for deterministic forecasting.]{The NBeats model for deterministic forecasting \cite{oreshkin2019n}.}
\label{fig:nbeats}
\end{figure}
\subsection{Training and evaluation metrics for time series forecasting}
Current research on time series forecasting mainly focuses on new architecture design (the predictive model $f_{\theta}$ in the blue box in Figure \ref{fig:ml_framework}), and the question of the training loss (yellow box in Figure \ref{fig:ml_framework}) is often overlooked. The Mean Squared Error (MSE) in Eq \ref{eq:mse}, the Mean Absolute Error (MAE) and their variants (SMAPE, \textit{etc}.) are predominantly used as proxies for training models. In practice, forecasts are evaluated with application-specific metrics, often reflecting the shape and temporal localization of future trajectories. However, their non-differentiability makes them unsuitable for training deep models. For characterizing shape, the Dynamic Time Warping (DTW) algorithm \cite{sakoe1990dynamic,jeong2011weighted,zhang2017dynamic}, originally introduced for speech recognition, computes the similarity between time series after temporal alignment. DTW is particularly popular for time series classification \cite{jeong2011weighted} or clustering \cite{chang2021learning}, and has recently been explored for time series forecasting \cite{cuturi2017soft}. Another shape metric is the ramp score \cite{florita2013identifying,vallance2017towards}, which assesses the detection of ramping events in wind and solar energy forecasting. Timing errors can be characterized, for instance, by the Temporal Distortion Index (TDI) \cite{frias2017assessing,vallance2017towards}, or by computing detection scores (precision, recall, Hausdorff distance) after change point detection \cite{truong2019supervised}.
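For reference, the classic DTW dynamic program is only a few lines (a NumPy sketch with squared pointwise cost; the cited works add weighting or smoothing on top of this recursion):

```python
import numpy as np

def dtw(a, b):
    """Dynamic Time Warping between two 1D series: minimal cumulated
    squared cost over monotonic alignments, computed in O(n*m)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Contrary to the MSE, DTW is invariant to small temporal shifts; the hard minimum in the recursion, however, makes it non-differentiable, which motivates smooth relaxations such as the soft DTW.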
Recently, some attempts have been made to train deep neural networks based on alternatives to MSE, especially based on smooth approximations of DTW \cite{cuturi2017soft, mensch2018differentiable,abid2018learning,vayer2020time,blondel2020differentiable}, in particular the soft DTW \cite{cuturi2017soft} that we will detail in Chapter \ref{chap:dilate}.
In this thesis, we intend to bridge the gap between these common evaluation metrics and the training losses used in practice. We explore how to efficiently combine explicit shape and temporal differentiable criteria at training time, regardless of the training architecture. We will review the most related works in more details in Chapter \ref{chap:criteria}.
\subsection{Particular challenges in video prediction}
Videos are a particular form of multivariate time series, and all the time series forecasting methods presented above could in principle be directly applied to videos by forecasting the dynamics of individual pixels. However, this approach neglects the key properties of images: the spatial coherence between neighboring pixels and the semantics of the scene. Specific architectures dedicated to video prediction were explored \cite{wang2017predrnn,wang2018predrnn++,wang2019memory,wang2018eidetic,wu2021motionrnn}, often based on variants of the seminal ConvLSTM \cite{shi2015convolutional}.
Moreover, extrapolating high-dimensional signals such as images at the pixel level is extremely challenging. To constrain this generation problem, several methods instead use domain-specific knowledge, such as predicting geometric transformations between frames \cite{finn2016unsupervised,jia2016dynamic,xue2016visual}, estimating the optical flow \cite{patraucean2015spatio,luo2017unsupervised,liu2017video,liang2017dual,li2018flow} or exploiting the semantics of the scene \cite{bei2021learning}. This is very effective for short-term prediction, but degrades quickly as the video content evolves, when more complex models and a memory of the dynamics are required.
\paragraph*{Disentanglement} Another line of work consists in disentangling independent factors of variation in order to apply the prediction model on lower-dimensional representations. Typical decomposition criteria are content/motion \cite{villegas2017decomposing,lee2021video} or deterministic/stochastic \cite{denton2017unsupervised}. We illustrate in Figure \ref{fig:dppae} an example of decomposition from the DPPAE model \cite{hsieh2018learning}: the moving objects are extracted and their individual motions estimated separately to provide the final prediction. In specific contexts, the prediction space can be structured using additional information, e.g.~ with human pose \cite{villegas2017learning,walker2017pose} or key points \cite{minderer2019unsupervised}.
\begin{figure}[H]
\includegraphics[width=14cm]{images/dppae.png}
\caption[Illustration of disentanglement for video prediction.]{Disentanglement approach for video prediction. In this Moving MNIST example, the DPPAE model \cite{hsieh2018learning} decomposes the two digits and predicts their dynamics separately.}
\label{fig:dppae}
\end{figure}
We provide a more detailed review of existing deep video prediction methods in Chapter \ref{chap:phydnet}.
\subsection{Diversity in probabilistic forecasting}
Many critical applications require forecasts associated with uncertainty estimates to make relevant decisions. Probabilistic forecasting consists in estimating the predictive distribution of future values given an input sequence. Two main categories of methods exist for probabilistic forecasting. The first class of methods directly characterizes the predictive distribution. This includes estimating the variance of predictions (e.g.~ with Monte Carlo dropout \cite{gal2016dropout}), estimating the quantiles \cite{wen2017multi,gasthaus2019probabilistic,wen2019deep}, or modelling the distribution with a parametric family, e.g.~ a Gaussian for the DeepAR algorithm \cite{salinas2017deepar}.
In this thesis, we focus on a second class of probabilistic methods that describe the predictive distribution with a set of plausible scenarios reflecting the uncertainty of future behaviour. This class includes ensemble methods \cite{smyl2019machine} and generative models, which produce diverse forecasts by sampling multiple latent variables from a prior distribution. The most popular generative models are conditional variational autoencoders (cVAEs) \cite{yuan2019diverse}, conditional generative adversarial networks (cGANs) \cite{koochali2020if}, and normalizing flows \cite{rasul2020multi,de2020normalizing}. To further diversify forecasts, several repulsive schemes were studied, such as the variety loss \cite{gupta2018social,thiede2019analyzing}, which consists in optimizing the best sample, or entropy regularization \cite{dieng2019prescribed,wang2019nonlinear}, which encourages a uniform distribution.
However, the aforementioned methods are limited in their ability to represent the diversity of future behaviours with a limited number of scenarios, as discussed in Chapter \ref{chap:intro}. Standard generative models sample points belonging to the dominant mode, e.g.~ by sampling multiple forecasts at test time from a standard Gaussian prior, and do not provide control over the diversity of predictions.
\textbf{Determinantal Point Processes (DPP)} To improve this unstructured mechanism, prior works \cite{yuan2019diverse,yuan2020dlow} introduced proposal neural networks for generating the latent variables that are trained with a diversity objective based on Determinantal Point Processes (DPPs).
DPPs are appealing probabilistic models for describing the diversity of a set of items $\mathcal{Y}= \left\{\mathbf{y}_1,...,\mathbf{y}_N \right\}$.
A DPP is a probability distribution over all subsets of $\mathcal{Y}$ that assigns the following probability to a random subset $\mathbf{Y}$:
\begin{equation}
\mathcal{P}_{\mathbf{K}}(\mathbf{Y}=Y) = \frac{\det(\mathbf{K}_Y)}{\sum_{Y' \subseteq \mathcal{Y}}\det(\mathbf{K}_{Y'})} = \frac{\det(\mathbf{K}_Y)}{\det(\mathbf{K}+\mathbf{I})},
\end{equation}
where $\mathbf{K}$ is a positive semi-definite (PSD) kernel and $\mathbf{K}_A$ denotes its restriction to the elements indexed by $A$.
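This probability is direct to compute for small item sets (a NumPy sketch of the formula above; the function name is ours):

```python
import numpy as np

def dpp_prob(K, subset):
    """P(Y) = det(K_Y) / det(K + I) for an L-ensemble DPP with
    PSD kernel matrix K; `subset` lists the item indices in Y."""
    K_Y = K[np.ix_(subset, subset)]           # restriction of K to Y
    return np.linalg.det(K_Y) / np.linalg.det(K + np.eye(len(K)))
```

Similar items (large off-diagonal kernel entries) shrink $\det(\mathbf{K}_Y)$, so subsets of diverse items are more probable; with $\mathbf{K}=\mathbf{I}$, all items are independent.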
We illustrate the behaviour of DPPs in Figure \ref{fig:dpp} for sampling random points in the plane. When we draw points according to a uniform distribution, some regions may become more densely populated than others. In contrast, when sampling from a DPP with a Gaussian kernel, points are farther from one another and better spread over the plane.
\begin{figure}[H]
\centering
\includegraphics[width=14cm]{images/dpp.png}
\caption[Random points sampled in the plane from a uniform distribution vs a determinantal point process (DPP) distribution.]{Random points sampled in the plane from a uniform distribution vs a determinantal point process (DPP) distribution. Figure taken from Kulesza and Taskar \cite{kulesza2012determinantal}.}
\label{fig:dpp}
\end{figure}
DPPs offer efficient algorithms for sequentially sampling diverse items or for maximizing the diversity of a set under a given sampling budget. Importantly, the choice of the kernel makes it possible to incorporate prior structural knowledge on the targeted diversity. As such, DPPs have been successfully applied in various contexts, e.g.~ document summarization \cite{gong2014diverse}, recommendation systems \cite{gillenwater2014expectation}, image generation \cite{elfeki2018gdpp} and diverse trajectory forecasting \cite{yuan2019diverse}.
In this thesis, we design specific shape and temporal PSD kernels for imposing our structured diversity. We further describe the most related works for probabilistic forecasting in Chapter \ref{chap:stripe}.
\section{Physics-informed machine learning}
\label{sec:physicsbased-ml}
As discussed in Chapter \ref{chap:intro}, pure data-driven machine learning methods struggle to extrapolate complex dynamical systems, and often overfit on the training set. Incorporating prior knowledge about the system is an appealing way to regularize the training process. In this Section, we review the main existing approaches for combining machine learning with physical knowledge (called \textit{ML/MB}, \textit{gray-box}, or \textit{hybrid} modelling in the literature).
\subsection{Continuous time models}
\label{sec:continuous-time-models}
Continuous-time models, which model the rate of change $F$ of an ODE with a neural network, were first explored in the 1980s \cite{cohen1983absolute,gonzalez1998identification,zhang2014comprehensive}. More recently, researchers have drawn tight connections between dynamical systems and deep (residual) neural networks \cite{weinan2017proposal,lu2018beyond,zhu2018convolutional,chen2018neural}. The residual block of a ResNet \cite{he2016deep}
\begin{equation}
\mathbf{h}_{t+1} = \mathbf{h}_t + \Delta t ~ F(\mathbf{h}_t, \theta)
\end{equation}
can be interpreted as the forward Euler discretization of the dynamical system
\begin{equation}
\frac{\diff \mathbf{h}(t)}{\diff t} = F(\mathbf{h}(t), \theta).
\end{equation}
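This correspondence is easy to verify numerically (a sketch; for $\Delta t = 1$ each Euler step is exactly a residual block update):

```python
import numpy as np

def euler_rollout(h0, F, dt, steps):
    """Forward Euler integration h_{t+1} = h_t + dt * F(h_t),
    i.e. a stack of residual blocks sharing the function F."""
    h = np.asarray(h0, dtype=float)
    for _ in range(steps):
        h = h + dt * F(h)
    return h
```

With $F(h) = -h$, the iterates $(1-\Delta t)^n h_0$ approximate the exponential decay $e^{-t} h_0$ of the continuous system.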
Mainstream recurrent neural networks also have a continuous-time ODE counterpart. The vanilla RNN $\mathbf{h}_t = F(\mathbf{W} \mathbf{h}_{t-1} + \mathbf{U} \mathbf{x}_t + \mathbf{b} )$ in Eq \ref{eq:rnn} is the Euler discretization of the following ODE:
\begin{equation}
\frac{\diff \mathbf{h}(t)}{\diff t} = F(\mathbf{W} \mathbf{h}(t) + \mathbf{U} \mathbf{x}(t) + \mathbf{b} ) - \mathbf{h}(t).
\end{equation}
We derive the associated ODE formulation for the LSTM \cite{Hochreiter:1997:LSM:1246443.1246450} and the Gated Recurrent Unit (GRU) \cite{cho2014learning} in Appendix \ref{part:part2}, which makes our ODE assumptions for forecasting (Eq \ref{eq:ode-relatedwork}) quite general.
Since then, many other successful deep architectures have been linked to numerical schemes for ODEs \cite{lu2017beyond,fablet2018bilinear}, and new architectures were proposed and analyzed with the rich dynamical systems theory \cite{haber2017stable,ruthotto2020deep,qin2019data,chang2019antisymmetricrnns,bai2019deep}, e.g.~ with the notions of stability or reversibility.
\begin{figure}
\centering
\includegraphics[width=10cm]{images/neural_ode_intro.png}
\caption[Residual neural network vs Neural ODE.]{Left: a residual neural network \cite{he2016deep} defines a discrete sequence of layers from the input to the output. Right: a Neural ODE \cite{chen2018neural} solves an ODE starting from the input for evolving the hidden state. Figure taken from \cite{chen2018neural}.}
\label{fig:node}
\end{figure}
The Neural ODEs (or ODE networks) of Chen \textit{et al.~} \cite{chen2018neural} consider the continuous-time limit of residual networks. Instead of a discrete sequence of layers (or timesteps in an RNN), the evolution of the hidden state in the network is assumed to follow an ODE. This leads to a continuous transformation of the hidden state, as shown in Figure \ref{fig:node}. Neural ODEs are trained with the adjoint sensitivity method \cite{pontryagin1987mathematical}, which consists in solving a backward ODE instead of backpropagating through the operations of the solver\footnote{This ensures a lower memory footprint for Neural ODEs: intermediate network activations do not need to be stored during the forward pass, since they can be recomputed on the fly by solving the backward ODE.}. Many extensions and analyses of Neural ODEs were subsequently proposed \cite{dupont2019augmented,ayed2019learning,massaroli2020dissecting,jia2019neural,zhang2019anodev2,yildiz2019ode2vae} and have shown great success in several tasks, such as generative modelling with normalizing flows \cite{grathwohl2018ffjord} or modelling continuous-time data \cite{rubanova2019latent,hasani2021liquid}.
For predicting dynamical systems, the advantages of the continuous-time modelling of Neural ODEs are twofold. First, Neural ODEs can accommodate any ODE solver, in particular adaptive solvers that automatically adapt the number of iterations as a function of the complexity of the dynamics to reach a given accuracy. Second, Neural ODEs can seamlessly handle irregularly-sampled temporal data, which arises in many applications (e.g.~ medical records) or in the case of missing data.
Neural ODEs provide a generative approach for modelling dynamical systems. As illustrated in Figure \ref{fig:node_ts}, time series are represented by a latent trajectory $z(t)$ governed by a dynamical function $F_{\theta}$ parameterized by a neural network: $\frac{\partial z(t)}{\partial t}= F_{\theta}(z(t))$. The latent trajectory is computed by solving the ODE with a differentiable ODE solver from an initial condition $z_{t_0}$ (which is known or estimated via an encoder network on an input trajectory). The solution can be evaluated for any time point in the observation range $[t_0,t_N]$ (interpolation) or in the future $[t_N; \infty[$ (extrapolation). The dynamical model $F_{\theta}$ is trained by reconstructing the trajectories of a training dataset.
\begin{figure}
\centering
\includegraphics[width=16cm]{images/neural_ode.png}
\caption[Modelling dynamical systems with Neural ODEs.]{Modelling dynamical systems with Neural ODEs. From an initial condition $z_{t_0}$ inferred by an encoder network, the latent trajectory is computed by solving the dynamical model $F_{\theta}$ (parameterized by a neural network) by a differentiable ODE solver. Figure taken from \cite{chen2018neural}.}
\label{fig:node_ts}
\end{figure}
Although Neural ODEs offer a principled way to model dynamical systems with deep networks in continuous time, the dynamical model $F_{\theta}$ is still a purely data-driven component and suffers from the same drawbacks as pure ML methods, i.e.~ overfitting in data-scarce contexts and lack of physical plausibility. In this thesis, we explore how to structure the function $F$ with prior physical knowledge.
\subsection{Physically-constrained machine learning}
In recent years, many researchers have explored how to incorporate physical knowledge into ML models to regularize learning and improve performance. A first solution, made popular by the Physics-Informed Neural Networks (PINNs) of Raissi \textit{et al.~} \cite{Raissi2019}, is to add a physical regularization term to the loss function. Illustrated in Figure \ref{fig:pinn} for solving the heat equation, PINNs are composed of a neural network predicting the solution $\hat{u}(x,t)$ at a given spatio-temporal location. Partial derivatives are computed during the forward pass by automatic differentiation to form the PDE residual. The total loss function is the sum of the data fidelity term and the adequacy to the PDE constraint and boundary conditions. PINNs are very easy to implement in standard deep learning libraries such as TensorFlow or PyTorch.
In their initial form, PINNs need to be retrained for each new set of PDE parameters. In order to learn a class of PDEs, Sirignano \textit{et al.~} \cite{sirignano2018dgm} propose to add the PDE parameters as inputs of the physics-informed neural network, and neural operator approaches directly learn the solution operator of a parametric class of PDEs \cite{li2020neural,lu2019deeponet,li2020fourier,wang2021learning}. However, this class of methods only imposes soft constraints, i.e.~ the physical laws are not strictly guaranteed to be respected.
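To sketch the PDE term of such a loss without a deep learning library, we can replace automatic differentiation by finite differences for the 1D heat equation $u_t = \alpha\, u_{xx}$ (the function and discretization are ours, for illustration only):

```python
import numpy as np

def heat_residual_loss(u, dx, dt, alpha):
    """Mean squared residual of u_t = alpha * u_xx on a grid of
    solution values u with shape (n_t, n_x); finite differences
    stand in for the autodiff used by actual PINNs."""
    u_t = (u[1:, 1:-1] - u[:-1, 1:-1]) / dt
    u_xx = (u[:-1, 2:] - 2.0 * u[:-1, 1:-1] + u[:-1, :-2]) / dx ** 2
    return float(np.mean((u_t - alpha * u_xx) ** 2))
```

This term is added to the data fidelity loss; a stationary linear profile $u(x,t)=x$ satisfies the equation exactly and yields a (numerically) zero residual.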
\begin{figure}
\centering
\includegraphics[width=15cm]{images/pinn.png}
\caption[Physics-Informed Neural Networks (PINN)]{Physics-Informed Neural Networks (PINN) for solving the heat equation.}
\label{fig:pinn}
\end{figure}
Other works investigate introducing hard physical constraints into the network architectures. Daw \textit{et al.~} \cite{daw2020physics} propose a monotonicity-preserving architecture for modelling lake temperature along depth, by adapting the LSTM with additional variables playing the role of positive increments. Mohan \textit{et al.~} \cite{mohan2020embedding} impose the divergence-free constraint of incompressible flows by parameterizing the flow as the curl of a learned scalar potential.
For modelling fluids, De Bezenac \textit{et al.~} \cite{de2017deep} propose a hybrid ML/MB architecture that explicitly exploits the advection-diffusion PDE:
\begin{equation}
\frac{\partial I}{\partial t} + (w \cdot \nabla) I = D \nabla^2 I.
\label{eq:advection-diffusion}
\end{equation}
Given a sequence of past images, their deep architecture estimates the flow field $w$ and the diffusion coefficient $D$, which are used in a warping scheme implementing the closed-form solution of the PDE. The model is learned end-to-end to predict the next frame, without any supervision on the physical parameters. The authors successfully apply this model to predict Sea Surface Temperature (SST) maps.
\begin{figure}
\centering
\includegraphics[width=12cm]{images/advection-diffusion.png}
\caption{Hybrid ML/MB architecture of De Bezenac \textit{et al.~} \cite{de2017deep} for predicting Sea Surface Temperature with the advection-diffusion PDE.}
\label{fig:advection-diffusion-model}
\end{figure}
Physical systems are often studied through the conservation of energy, which is encoded in a principled way through Hamiltonian dynamics. Greydanus \textit{et al.~} \cite{greydanus2019hamiltonian} introduce Hamiltonian Neural Networks (HNNs) to learn physical systems respecting the conservation of energy. With $\mathbf{q}$ the positions of a set of particles and $\mathbf{p}$ their momenta, the Hamiltonian $\mathcal{H}(\mathbf{q},\mathbf{p})$, representing the total energy of the system, obeys the following equations:
\begin{equation}
\dfrac{d\mathbf{q}}{dt} = \dfrac{\partial \mathcal{H}}{\partial \mathbf{p}} \;\;\; , \;\;\; \dfrac{d\mathbf{p}}{dt} = - \dfrac{\partial \mathcal{H}}{\partial \mathbf{q}}.
\end{equation}
HNNs learn the Hamiltonian with a neural network and take in-graph gradients to impose the Hamiltonian dynamics. Experiments show that HNNs conserve energy better than baseline networks.
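For a harmonic oscillator with analytic Hamiltonian $\mathcal{H}=\frac{p^2}{2m}+\frac{kq^2}{2}$, the two equations above read as follows (a sketch with analytic gradients; an HNN would instead differentiate a learned $\mathcal{H}$):

```python
def hamiltonian_field(q, p, m=1.0, k=1.0):
    """Hamiltonian vector field of a harmonic oscillator
    H(q, p) = p**2 / (2*m) + k * q**2 / 2."""
    dq_dt = p / m    #  dH/dp
    dp_dt = -k * q   # -dH/dq
    return dq_dt, dp_dt
```

Integrating this field keeps trajectories on level sets of $\mathcal{H}$, which is the energy-conservation property HNNs are built to respect.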
Many of the ML/MB approaches described so far are tailored to specific applications, e.g.~ fluid dynamics \cite{de2017deep}, molecular dynamics \cite{chmiela2017machine}, quantum mechanics \cite{schutt2017quantum} or robotics \cite{lutter2019deep}, and are thus not applicable to other domains. Moreover, they often rely on a complete knowledge of the physical equations, and further assume that these equations directly apply in the input space (observed prior, as defined in Chapter \ref{chap:intro}). In this thesis, we explore general augmentation strategies that can be applied to all levels of prior knowledge, from the most general priors to the most application-specific equations. We also tackle the case of the unobserved prior by learning representation spaces in which the physical laws apply.
\begin{figure}
\centering
\includegraphics[width=15cm]{images/HNN.png}
\caption{Hamiltonian Neural Networks of Greydanus \textit{et al.~} \cite{greydanus2019hamiltonian}.}
\label{fig:hnn}
\end{figure}
\subsection{Identifying and discovering physical systems}
Beyond forecasting physical systems, researchers have also explored machine learning for system identification, which consists in estimating the unknown parameters in parameterized physical equations. A basic example is estimating the length of a damped pendulum from observed trajectories. Automatically identifying and discovering physical laws from observations is a long-standing goal for physicists, with many applications in control \cite{kidger2020neural} or robotics \cite{lutter2019deep}. Many approaches
use symbolic regression to search the space of possible mathematical functions, using evolutionary
algorithms \cite{schmidt2009distilling}, sparse regression on dictionaries of potential differentiable terms \cite{brunton2016discovering,rudy2017data,schaeffer2017learning}, or
graph neural networks \cite{cranmer2020discovering}.
Several architectures attempt to predict and identify the PDE governing physical systems \cite{long2018pde,raissi2017physics}, such as the PDE-Net architecture of Long \textit{et al.~} \cite{long2018pde,long2019pde}. As shown in Figure \ref{fig:pdenet}, the basic block composing PDE-Net (the $\delta t$-block) is a residual module implementing one forward Euler discretization step. For solving the PDE $\frac{\partial u}{\partial t}=F(u,\frac{\partial u}{\partial x},\frac{\partial u}{\partial y},\cdots)$, the authors use convolutional filters that are constrained to approximate each spatial differential term (we give details about these constrained convolutions in Appendix \ref{app:moment-matrix})\footnote{They show that the flexibility of learned differential filters boosts performance compared to handcrafted filters, an observation that has also been made for other discretization schemes learned from data.}. Then, a symbolic neural network identifies the nonlinear relationships between the spatial derivatives to form the nonlinear function $F$ of the PDE. A skip connection finally provides the prediction of the next timestep $\hat{u}(t+\delta t)=\hat{u}(t) + \delta t \hat{F}$. The complete PDE-Net architecture is composed of several $\delta t$-blocks concatenated in time for long-term prediction.
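As an illustration of such derivative-approximating filters (handcrafted here, whereas PDE-Net learns them under moment constraints), a central-difference stencil for $\partial/\partial x$ can be applied by cross-correlation:

```python
import numpy as np

def d_dx_filter():
    """Central-difference 3x3 stencil approximating d/dx (unit grid
    spacing), the kind of filter PDE-Net constrains via moments."""
    return np.array([[0.0, 0.0, 0.0],
                     [-0.5, 0.0, 0.5],
                     [0.0, 0.0, 0.0]])

def apply_stencil(u, stencil):
    """Valid 2D cross-correlation of a 3x3 stencil with field u."""
    n, m = u.shape
    out = np.zeros((n - 2, m - 2))
    for i in range(n - 2):
        for j in range(m - 2):
            out[i, j] = np.sum(u[i:i + 3, j:j + 3] * stencil)
    return out
```

Applied to a linear ramp $u(x,y)=x$, the stencil recovers the exact derivative $\partial u/\partial x = 1$ everywhere in the interior.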
In this thesis, we take inspiration from the PDE-Net architecture for imposing physical dynamics, and we go a step further by assuming incomplete physical models and by modelling the residual dynamics for accurate prediction. We also show that a careful training scheme leads to a better identification of the physical parameters than the simplified physical model alone.
\begin{figure}[H]
\centering
\includegraphics[width=16cm]{images/pdenet.png}
\caption[The PDE-Net architecture.]{The basic $\delta t$-block composing the PDE-Net architecture implements one step of forward Euler integration. Constrained convolutional filters estimate each spatial derivative term, which are combined by a symbolic network that estimates the dynamical function $F$. Finally, a skip connection provides the solution for the next timestep. Figure taken from \cite{long2019pde}.}
\label{fig:pdenet}
\end{figure}
\subsection{Augmented physical models}
There exists an abundant literature on statistical methods for calibrating and predicting physical systems in the presence of model inadequacy, often expressed in a Bayesian framework; a review of these methods can be found in \cite{pernot2017critical}. In data assimilation techniques, like the Kalman filter \cite{kalman1960new}, the particle filter \cite{perez2004data} or 4D-Var \cite{courtier1994strategy}, the prediction errors are modelled probabilistically with random variables reflecting the noise assumptions. A correction step using observed data is performed after each prediction step to filter the noise. Similar residual correction procedures are commonly used in robotics and optimal control \cite{chen2004disturbance,li2014disturbance}. However, these sequential (two-stage) procedures prevent cooperation between prediction and correction. Besides, in model-based reinforcement learning, model deficiencies are typically handled by considering only short-term rollouts \cite{janner2019trust} or by model predictive control \cite{nagabandi2018neural}, which consists of replanning frequently to mitigate error propagation.
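To make the two-stage predict/correct structure explicit, here is a minimal scalar Kalman filter; the state-space model and noise values are illustrative choices of ours:

```python
import numpy as np

def kalman_1d(y, a=1.0, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter: each model prediction step is followed by a
    correction step blending in the noisy observation y[t]."""
    x, p = x0, p0
    xs = []
    for yt in y:
        # prediction: propagate the state mean and variance through the model
        x, p = a * x, a * a * p + q
        # correction: the Kalman gain weighs model vs. observation confidence
        k = p / (p + r)
        x = x + k * (yt - x)
        p = (1.0 - k) * p
        xs.append(x)
    return np.array(xs)

rng = np.random.default_rng(0)
y = 1.0 + rng.normal(0.0, 0.5, size=200)  # noisy observations of a constant
x_filt = kalman_1d(y)
# The filtered estimate settles near the true value and is much smoother
# than the raw observations.
```

Prediction and correction here are strictly sequential; the residual is treated as noise to be filtered out, which is precisely the assumption relaxed in the next paragraph.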
In this thesis, we take inspiration from data assimilation ideas to augment incomplete physical models with residual terms. However, in contrast to data assimilation, our residual terms are not assumed to be a stochastic residual, i.e.~noise, but a systematic unmodelled part of the dynamics that we learn from data. Moreover, we derive a principled training scheme that makes the prediction and correction steps cooperate.
The idea of augmenting physical models with neural networks (\textit{gray-box} or \textit{hybrid} modelling) is not new: in the 1990s, the works \cite{psichogios1992hybrid,thompson1994modeling,rico1994continuous} used neural networks to estimate the unknown parameters of physical models that are difficult to derive from first principles, and a classification of the possible augmentation strategies (serial, parallel, modular) was drawn up \cite{thompson1994modeling}. The challenge of proper ML/MB cooperation was already raised as a limitation of gray-box approaches, but not addressed. Moreover, these methods were evaluated on specific applications, with a residual targeted to the form of the equation.
In the last few years, there has been growing interest in deep augmented models that combine physical priors with deep networks \cite{long2018hybridnet,saha2020phicnet,neural20}. Several ML/MB cooperation schemes with deep networks were studied in \cite{wang2019integrating,neural20}. Again, these approaches address neither the uniqueness of the decomposition nor proper cooperation for correct parameter identification. They are also mostly dedicated to the fully-observable case, whereas we also tackle the non-observable prior setting in this thesis. We further detail the literature on augmented physical models in Chapter \ref{chap:aphynity}.
\clearpage{\pagestyle{empty}\cleardoublepage}
\chapter*{Acknowledgements}
\addcontentsline{toc}{chapter}{Acknowledgements}
\adjustmtc
\markright{\MakeUppercase{Acknowledgements}}
I would like to thank here all the people who contributed to the completion of the work presented in this manuscript.
To go back chronologically over the setting-up of this project, starting a PhD thesis five years after graduating from engineering school was in itself a first challenge. I warmly thank my colleagues at EDF for their support, in particular Nicolas Paul, Bruno Charbonnier and Loic Vallance. I am also deeply grateful to Stéphanie Dubost for all our useful discussions on energy forecasting and for defending this idea from the very beginning. Stéphanie worked hard to secure the funding of my thesis, spread over 4 projects and 4 different research programmes, which was quite unprecedented! I could also count on the decisive support of my management, Nicolas Roche and Julien Berland, in this endeavour.
I also thank Dominique Demengel for his crucial work over many years on the instrumentation of the ground-based cameras and pyranometers and on data quality, without which this work would not have been possible.
My most sincere thanks then go to Nicolas Thome, who supervised my thesis at the Conservatoire national des Arts et Métiers. When I first came to his office in the summer of 2018 with my thesis topic already written, Nicolas welcomed the idea with great interest. These three years of joint work turned out to be intense and very stimulating. Nicolas was always very active in guiding our thinking and in discussing new directions whenever we reached a dead end. Nicolas taught me a great deal about how to conduct a research project, and in particular about the difficult task of writing scientific papers. Many thanks for all the time spent (evenings and weekends included) guiding, proofreading and (re)writing, which was of crucial importance for the acceptance of our submissions. I also thank Clément Rambour at CNAM for his very important co-supervision of my final-year work and for his sound advice.
During these three years of PhD, I spent very good times with the other PhD students of the team at CNAM: Olivier Petit, Thuy Le, Laura Calem, Charles Corbière, Rémy Sun, Elias Ramzi, Loïc Thémyr, Marc Lafon, Perla Doubinsky, Yannis Karmim. Despite the lockdown and the remote work resulting from the health crisis, we managed to maintain regular technical and friendly contacts at a distance. I will remember the friendship, solidarity and mutual support born of these periods of hard work.
I also thank my colleagues at EDF with whom discussions were always fruitful, in particular Charlotte Gauchet, Christophe Chaussin, Lorenzo Audibert, Louis Apffel, Georges Hebrail, Nicolas Bousquet, Benoît Braisaz, Eric Lajoie-Mazenc, Matthieu Chiodetti, Gerald Kwiatkowski.
On the Sorbonne Université side, I thank Matthieu Cord and all his PhD students for organizing the weekly "cordettes" meetings, both studious and convivial. I also particularly appreciated the work with Patrick Gallinari and his PhD students Yuan Yin, Jérémie Dona, Ibrahim Ayed and Emmanuel de Bézenac, which led to a very thorough joint publication. I also thank Edouard Oyallon, a former classmate in the MVA master's programme and now a CNRS researcher, whose perspective on our work was very relevant. On his advice, we collaborated fruitfully with Edouard Leurent, whose kindness and availability I salute.
I express my gratitude to all the members of my thesis committee for agreeing to evaluate my work and for their very relevant feedback: Greg Mori, Patrick Pérez, Patrick Gallinari, Philippe Blanc, Stéphanie Dubost, Elisa Fromont, Etienne Mémin.
Finally, I would like to thank my parents and my wife for their support and patience during these three very busy years, with a thought for little Louis, who came into the world one month before my thesis defence.
\end{vcenterpage}
\let\cleardoublepage\clearpage
\chapter*{Summary of the thesis}
\addcontentsline{toc}{chapter}{Summary}
\adjustmtc
\markright{\MakeUppercase{Summary}}
\section{Introduction}
This thesis addresses the problem of spatio-temporal forecasting with deep learning. This corresponds to the task of predicting complex phenomena in the form of time series or videos, which requires modelling complex temporal dependencies with strong spatial correlations. This topic is of crucial importance for many applications, such as climate forecasting, medical diagnosis, the evolution of financial markets, retail product demand or predictive maintenance in industry. At Électricité de France (EDF), the application motivating this thesis is the short-term forecasting of photovoltaic production using fisheye images. This task is usually addressed with algorithms based on weather forecasts and satellite images. However, these data sources have an insufficient spatial and temporal resolution to predict solar irradiance at very short term ($<$ 20 min) at the scale of a particular photovoltaic production plant.
In this thesis, we address these forecasting tasks with artificial intelligence methods, in particular statistical learning and deep learning. In recent years, deep learning has seen an impressive surge in popularity with the success of the deep neural network AlexNet \cite{krizhevsky2012imagenet}, which outperformed all traditional machine learning methods in the ImageNet image classification competition. Since then, deep learning has established itself as the state-of-the-art paradigm for many perception-related tasks, such as computer vision, speech recognition or natural language processing. Despite these impressive successes, fully data-driven learning methods are limited for extrapolating the evolution of complex physical systems, especially when little data is available and for non-stationary time series with possible sharp variations. The underlying extrapolation task is by nature very different from the perception tasks at which deep learning excels, and requires modelling complex dynamics.
To alleviate these problems, we propose in this thesis to exploit prior physical information in combination with data-driven learning methods. This is a well-studied question in the literature, but one that remains largely open. The different forecasting contexts are illustrated in Figure \ref{fig:physics_data_fr}. On the one hand, model-based (MB) methods assume a good mathematical or physical understanding of the phenomena, often formalized as ordinary or partial differential equations. Given data for the initial and boundary conditions, prediction is performed by numerically solving the equations. This is the dominant paradigm in many scientific fields, for example computational fluid dynamics. However, these methods are limited when the physical knowledge is imperfect, which is often the case for complex physical systems such as climate modelling.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{images/fig_intro_fr1.png}
\caption[The different forecasting contexts.]{\textbf{The different forecasting contexts}. On the left, statistical and deep learning methods can extrapolate dynamical systems without priors after training on a large dataset. On the right, model-based methods assume complete physical knowledge of the system and predict the future by numerical simulation from boundary conditions. In between, hybrid methods using data and incomplete knowledge are a very active and promising research direction.}
\label{fig:physics_data_fr}
\end{figure}
On the other hand, machine learning (ML) methods are an alternative that is agnostic to prior information about the system. Deep learning has proven its ability to automatically learn complex relationships from large annotated databases and has become the state of the art for many prediction tasks. However, these methods are still limited for modelling complex physical dynamics. Moreover, they lack the physical plausibility needed to interpret results and to extrapolate to new conditions.
In between, hybrid \textit{model-based machine learning} (MB/ML) methods are an attractive approach for combining prior information with data. Historically, data assimilation methods exploit data to correct the predictions of physical models in the presence of noisy observations \cite{bocquet2019data,kalman1960new}. They are still the state of the art for weather forecasting.
Revisiting MB/ML cooperation with modern deep learning is an emerging topic attracting major interest from many scientific communities. Physics can be incorporated into model training either as soft constraints in the loss function \cite{raissi2017physics,sirignano2018dgm} or as hard constraints in the network architectures \cite{daw2020physics,mohan2020embedding}. From a learning perspective, these physical constraints lead to more interpretable models that comply with physical laws and remain robust in the presence of noisy data. This typically translates into greater data efficiency and better extrapolation performance beyond the training domain.
In this thesis, we explore this category of hybrid methods, and our contributions aim to answer the following general question:
\begin{center}
\textit{How can prior physical knowledge be exploited in statistical learning models?}
\end{center}
We focus on two main directions: incorporating prior physical information into the training loss of the models, and developing augmented MB/ML architectures in the case of incomplete physical knowledge.
\section{Differentiable shape and time criteria for deterministic and probabilistic forecasting}
Traditional time series forecasting methods are model-based statistical methods describing characteristics such as trend and seasonality. They include autoregressive methods such as ARIMA (\textit{Auto Regressive Integrated Moving Average}) models \cite{box2015time}. These methods often make strong assumptions about the data, e.g.~stationarity, which do not hold in practice.
With the advent of deep learning, deep neural networks have become the state-of-the-art method for time series forecasting \cite{lai2018modeling,salinas2017deepar,oreshkin2019n,zhou2020informer}, thanks to their ability to model complex temporal dependencies from a training corpus. Most recent work has focused on improving the network architectures. The choice of the training loss function, which is just as important, has received little attention: most methods optimize the mean squared error (MSE) or its variants.
The mean squared error (MSE) is poorly suited to comparing multi-step time series, as we illustrate in Figure \ref{fig-intro-fr}. The MSE captures neither shape errors nor temporal shifts between series. Yet shape and time criteria are used in applications to evaluate the forecasts produced by algorithms, e.g.~the ramp score \cite{vallance2017towards} for shape and the Temporal Distortion Index (TDI) \cite{frias2017assessing} for time. But they are not used in practice to train neural networks, because they are mostly non-differentiable.
\begin{figure}
\begin{tabular}{ccc}
\includegraphics[height=4.6cm]{images/dilatestripe_limite_mse_fr.png} & \hspace{-0.3cm}
\includegraphics[height=4.6cm]{images/dilatestripe_fig1a_fr.png} &
\hspace{-0.5cm}
\includegraphics[height=4.6cm]{images/dilatestripe_fig1c_fr.png} \\
~ & \footnotesize{True future distribution} & \hspace{-0.5cm} \footnotesize{stoch.~model \cite{yuan2019diverse}} \\
\textbf{(a) Deterministic forecasting} & ~ & \hspace{-5cm} \textbf{(b) Probabilistic forecasting} \\
\end{tabular}{}
\caption[Limitations of the mean squared error for deterministic and probabilistic forecasting.]{\textbf{Limitations of the mean squared error for deterministic and probabilistic forecasting.}
(a) For deterministic forecasting, the three predictions (1,2,3) have the same mean squared error (MSE) with respect to the true future (in black). But we would like to favour prediction 2 (correct shape, slight delay) and prediction 3 (correct temporal localization, inaccurate shape) over prediction 1 (not very informative). (b) For probabilistic forecasting, state-of-the-art methods trained with the MSE \cite{yuan2019diverse,rasul2020multi} lose the ability to produce sharp forecasts (in orange) compared to the true future trajectories (in green).}
\label{fig-intro-fr}
\end{figure}
In this thesis, we propose to exploit shape and time criteria for training deep neural networks for time series forecasting, in both the deterministic and probabilistic cases. Our goal is to address non-stationary forecasting problems, where the time series may have sharp variations, as is the case for solar irradiance, which drops abruptly when a cloud occludes the sun. To this end, we introduce differentiable shape and time criteria, which we formulate both as dissimilarities (loss functions) and as similarities (positive semi-definite kernels). The shape criteria are based on a differentiable approximation of the \textit{Dynamic Time Warping (DTW)} algorithm \cite{sakoe1990dynamic} and the time criteria on the \textit{Temporal Distortion Index} (TDI) \cite{frias2017assessing}.
We propose two implementations of these criteria, for deterministic and for probabilistic forecasting.
\subsection{DILATE}
For deterministic time series forecasting with deep neural networks, we introduce a loss function called DILATE (\textit{DIstortion Loss with shApe and TimE}). Designed as an alternative to the MSE, DILATE combines a component on the shape of the time series and a component on the temporal shift to compare a predicted series $\hat{\mathbf{y}}$ with the true future $\mathbf{y}^*$:
\begin{align}
\mathcal{L}_{\text{DILATE}}(\hat{\mathbf{y}}, \mathbf{y}^*) &= \alpha~\mathcal{L}_{shape}(\hat{\mathbf{y}}, \mathbf{y}^*) + (1-\alpha)~ \mathcal{L}_{temporal}(\hat{\mathbf{y}}, \mathbf{y}^*)\\
&= \alpha ~\text{DTW}^{\mathbf{\Delta}}_{\gamma}(\hat{\mathbf{y}}, \mathbf{y}^*) + (1-\alpha)~ \text{TDI}^{\mathbf{\Delta},\mathbf{\Omega_{dissim}}}_{\gamma}(\hat{\mathbf{y}}, \mathbf{y}^*).
\end{align}
The principle of DILATE is illustrated in Figure \ref{fig:dilate_fr}. The shape loss $\mathcal{L}_{shape}$ corresponds to the soft-DTW \cite{cuturi2017soft} and the temporal loss $\mathcal{L}_{temporal}$ to a differentiable relaxation of the TDI \cite{frias2017assessing}. The two losses are combined linearly with a factor $\alpha \in [0;1]$, which is a hyperparameter of the method.
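The shape term can be made concrete with the soft-DTW recursion of \cite{cuturi2017soft}; below is a minimal, non-batched numpy version (gradients and the temporal TDI term are omitted):

```python
import numpy as np

def soft_min(values, gamma):
    """Smoothed minimum: -gamma * log(sum(exp(-v / gamma)))."""
    v = np.asarray(values) / -gamma
    m = v.max()                     # log-sum-exp stabilization
    return -gamma * (m + np.log(np.exp(v - m).sum()))

def soft_dtw(y_hat, y_true, gamma=0.1):
    """Soft-DTW dissimilarity between two 1-D series: a dynamic program
    over the pairwise squared-error cost matrix, with min replaced by
    the differentiable soft-min."""
    n, m = len(y_hat), len(y_true)
    R = np.full((n + 1, m + 1), np.inf)
    R[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (y_hat[i - 1] - y_true[j - 1]) ** 2
            R[i, j] = cost + soft_min(
                [R[i - 1, j], R[i, j - 1], R[i - 1, j - 1]], gamma)
    return R[n, m]
```

A series compared with a slightly time-shifted copy of itself gets a much smaller soft-DTW value than its MSE would suggest, which is exactly the shape invariance DILATE exploits; the TDI term then penalizes the temporal shift itself.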
\begin{figure}
\centering
\includegraphics[width=\linewidth]{images/dilate_fr.png}
\caption{The DILATE loss function for training deep neural networks for deterministic time series forecasting.}
\label{fig:dilate_fr}
\end{figure}
We conduct experiments on several synthetic and real-world datasets to evaluate the performance of the DILATE loss. The results show that training with DILATE significantly improves performance evaluated with shape and time criteria, while maintaining equivalent performance when evaluated with the MSE. DILATE is agnostic to the network architecture and works equally well with standard architectures and with the latest state-of-the-art ones.
\subsection{STRIPE}
Probabilistic forecasting consists of describing the conditional probability distribution of future trajectories given an input trajectory. In this thesis, our goal is to describe this distribution with a small set (e.g.~10) of possible future trajectories that adequately represent the variability of the phenomenon's evolution. These scenarios must be both accurate and diverse according to shape and time criteria, which current state-of-the-art probabilistic forecasting methods do not allow \cite{salinas2017deepar,rangapuram2018deep}.
To this end, we introduce a model called STRIPE (\textit{Shape and Time diveRsIty in Probabilistic forEcasting}). Illustrated in Figure \ref{fig:stripe-fr}, STRIPE is an encoder-decoder architecture that generates multi-step future trajectories. It is a generative model in which the different possible futures are generated by sampling latent variables. More precisely, STRIPE is composed of an encoder that takes the input time series $\mathbf{x}_{1:T}$ and produces a descriptive variable $h$. This variable $h$ is complemented with latent variables $z_s$ and $z_t$ that capture the shape (respectively temporal) variability. The decoder takes as input the concatenation $(h,z_s,z_t)$ and produces a future trajectory $\hat{\mathbf{y}}_{T+1:T+\tau}$.
To structure the diversity of the predicted trajectories, the latent variables are generated by neural networks called STRIPE-shape and STRIPE-time. Diversity is encouraged by adding a diversity loss $\mathcal{L}_{diversity}$. It is based on determinantal point processes (DPPs) \cite{kulesza2012determinantal}, an elegant mathematical tool for describing the diversity of a set of items. The quality loss $\mathcal{L}_{quality}$ is the DILATE loss, ensuring predictions with both the correct shape and a small temporal shift. To maintain prediction quality during the diversification step, a posterior network samples the latent variables during training so that they correspond to actual trajectories of the dataset.
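The role of the DPP diversity term can be sketched as follows: this illustrative numpy version scores a set of trajectories by the log-determinant of a similarity kernel, which is larger for more diverse sets (the DPP "volume" intuition). The RBF kernel here is a simplifying assumption of ours, whereas STRIPE uses the shape and time kernels derived from DILATE:

```python
import numpy as np

def rbf_kernel(trajs, bandwidth=1.0):
    """PSD similarity kernel between predicted trajectories (rows)."""
    sq = ((trajs[:, None, :] - trajs[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * bandwidth ** 2))

def dpp_diversity(trajs, eps=1e-6):
    """Diversity score of a set of trajectories: the log-determinant of
    their similarity kernel (regularized for numerical stability)."""
    K = rbf_kernel(trajs)
    n = K.shape[0]
    return np.linalg.slogdet(K + eps * np.eye(n))[1]

# Two near-identical futures vs. two clearly distinct ones
close = np.stack([np.zeros(10), np.zeros(10) + 0.01])
spread = np.stack([np.zeros(10), np.ones(10)])
# The diverse set has a much larger log-determinant than the redundant one.
```

Maximizing such a log-determinant pushes the sampled futures apart under the chosen kernel, which is why the kernel choice (shape vs. time) controls the type of diversity obtained.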
We conduct experiments on a synthetic dataset where the full set of future trajectories is available as supervision, as well as on real-world datasets where only one future is available. The results show that STRIPE achieves predictions with a much better diversity, measured with shape and time criteria, than competing diversification mechanisms from the literature \cite{dieng2019prescribed,thiede2019analyzing,elfeki2018gdpp,yuan2019diverse} and than algorithms dedicated to probabilistic forecasting \cite{salinas2017deepar}. Moreover, STRIPE maintains a good quality of the obtained predictions and achieves the best trade-off between quality and diversity.
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{images/stripe_fr.png}
\caption[The STRIPE model for probabilistic forecasting.]{The STRIPE model for probabilistic forecasting.}
\label{fig:stripe-fr}
\end{figure}
\section{Forecasting with the incorporation of incomplete physical information}
In this part of the thesis, we explore how to incorporate prior physical information into statistical learning models. In particular, we are interested in the case where the physical knowledge is incomplete, a question that has received very little attention in the literature.
\subsection{The PhyDNet model for video prediction}
We propose a deep learning model dedicated to video prediction, called PhyDNet, which incorporates physical information in the form of a class of linear partial differential equations (PDEs). However, for generic videos, the physical equations of the dynamics do not apply directly at the pixel level. For example, objects must first be segmented and their centre of mass determined before Newton's laws can be applied. This is a representative case of a prior that is not observable in the input space.
To handle this problem, we assume that there exists a latent space in which the linear PDE dynamical model applies. PhyDNet is composed of an encoder-decoder that automatically learns the most suitable latent space from the data. In this latent space, we decompose the dynamics into two parts: one part that integrates the prior physical laws, and one part that learns the information complementary to the physics required for accurate prediction at the pixel level.
PhyDNet is a recurrent neural network, illustrated in Figure \ref{fig:phydnet_fr} in its folded (left) and unfolded (right) versions. To model the physical part, we introduce a recurrent cell called PhyCell, which discretizes a linear partial differential equation with an Euler scheme, the partial derivatives being computed with constrained convolutions \cite{long2018pde}. The second branch models the residual that is not explained by the physics; for this we use a fairly generic recurrent neural network, namely a ConvLSTM \cite{xingjian2015convolutional}. The two branches are summed in the latent space before being decoded into a prediction of the future frame. The PhyCell alternates between a latent physical prediction step and a correction step using the encoded observation:
\begin{empheq}[left=\empheqlbrace]{alignat=2}
& \tilde{\mathbf{h}}_{t+1} \!= \mathbf{h}_{t} + \Phi(\mathbf{h}_{t}) & \!\!\!\quad \text{\small{\textbf{Prediction}\!}} \label{eq:prediction}\\
& \mathbf{h}_{t+1} \!= \tilde{\mathbf{h}}_{t+1} + \mathbf{K}_t \odot \left( \mathbf{E}(\mathbf{u}_t) - \tilde{\mathbf{h}}_{t+1} \right). & \!\!\! \quad \text{\small{\textbf{Correction}\!}} \label{eq:correction}
\end{empheq}
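In schematic numpy form, one PhyCell step reads as follows; the latent operator `phi`, the encoder and the scalar gain `K` are toy stand-ins for the learned components (in PhyCell, $\mathbf{K}_t$ is a learned gate applied elementwise):

```python
import numpy as np

def phycell_step(h, u_t, phi, encoder, K):
    """One PhyCell step: physical prediction in latent space, followed by
    a Kalman-like correction towards the encoded observation E(u_t)."""
    h_tilde = h + phi(h)                           # prediction: latent Euler step
    return h_tilde + K * (encoder(u_t) - h_tilde)  # correction with gain K

# Toy instantiation: linear latent dynamics and identity encoder (assumptions)
phi = lambda h: -0.1 * h
encoder = lambda u: u
h = np.ones(4)
obs = np.full(4, 0.5)
h_next = phycell_step(h, obs, phi, encoder, K=0.5)
```

With `K=0` the cell runs in pure prediction mode (useful when future frames are unavailable), while `K=1` fully trusts the encoded observation; the learned gate interpolates between the two regimes.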
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{images/phydnet_fr.png}
\caption[The PhyDNet model for video prediction.]{The PhyDNet model for video prediction.}
\label{fig:phydnet_fr}
\end{figure}
We conduct experiments on datasets with different levels of prior knowledge: from Moving MNIST, where the dynamics of digit motion is perfectly known, to generic videos of human motion, including cases with an incomplete physical prior on the dynamics, such as road traffic or sea surface temperature. In all these cases, we show the superiority of PhyDNet over deep learning models without physical priors.
\subsection{The APHYNITY model for optimal cooperation between physics and deep learning}
Forecasting dynamical systems for which only partial knowledge of the dynamics is available is a very common problem in many scientific fields. For example, in climate modelling, it is very difficult to precisely write down equations for all the complex phenomena governing the dynamics of the atmosphere.
We introduce here a learning scheme, called APHYNITY, for augmenting simplified physical models described by partial differential equations with deep neural networks. We consider dynamical systems of the form of the differential equation:
\begin{equation}
\frac{\diff X_t}{\diff t} = F(X_t).
\end{equation}
The APHYNITY model decomposes the dynamics function $F$ into a component $F_p$ for which we have a physical prior and an augmentation component $F_a$ that corrects the errors of the physical model: $F = F_p + F_a$.
The learning problem is formulated so that the physical model explains as much of the dynamics as possible, while the augmentation only captures the information that cannot be captured by the physics. Inspired by the least action principle, this learning scheme consists of minimizing the norm of the residual $F_a$ under the constraint of perfect prediction by the augmented model:
\begin{equation}
\label{eq:aphynity-opt-fr}
\underset{F_p\in{\mathcal{F}}_p, F_a\in{\mathcal{F}}}{\min} ~~~\left\Vert F_a \right\Vert ~~~
\mathrm{subject~to} ~~~~ \forall X\in{\mathcal{D}}, \forall t, \frac{\diff X_t}{\diff t} =(F_p+F_a)(X_t).
\end{equation}
Under mild assumptions, which hold in many experimental settings, the solution of the APHYNITY optimization problem exists and is unique, which favours the interpretability and generalization of the model.
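For intuition on the minimal-residual principle, the following numpy sketch works out the APHYNITY decomposition in closed form for a linear physical family; this is a didactic reduction of ours (with values we chose), whereas the actual algorithm solves Eq.~\ref{eq:aphynity-opt-fr} by gradient-based constrained optimization:

```python
import numpy as np

# True (supposedly unknown) dynamics of a damped oscillator, sampled on a
# grid of states: dv/dt = -omega2 * x - alpha * v with omega2=2.0, alpha=0.3.
x = np.linspace(-1.0, 1.0, 50)
v = np.linspace(-1.0, 1.0, 50)
X, V = np.meshgrid(x, v)
F_true = -2.0 * X - 0.3 * V        # only the dv/dt component, for brevity

# Physical prior: frictionless oscillator family F_p(theta) = -theta * x.
# For this linear family, picking the decomposition with minimal-norm
# residual amounts to projecting F_true onto the prior and keeping the
# remainder in F_a.
g = -X.ravel()                     # basis vector of the physical family
theta = g @ F_true.ravel() / (g @ g)   # least-squares physical parameter
F_a = F_true - theta * (-X)        # residual left to the neural network
# theta recovers omega2 = 2.0 and F_a is exactly the friction term -0.3*v.
```

The minimal-norm constraint is what makes the split identifiable: without it, any share of the linear term could be moved from $F_p$ to $F_a$ and the physical parameter would be arbitrary.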
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{images/aphynity_fr.png}
\caption[The APHYNITY learning scheme.]{The APHYNITY learning scheme for the optimal cooperation between physical models and learning models.}
\label{fig:aphynity_fr}
\end{figure}
We propose a trajectory-based approach to implement the APHYNITY scheme in practice, illustrated in Figure \ref{fig:aphynity_fr}. From an initial condition $X_0$, a physical model parameterized by $\theta_p$ provides the physical dynamics $F_p$, while the data-driven augmentation model parameterized by $\theta_a$ provides the dynamics $F_a$. The resulting dynamics $F=F_p+F_a$ is integrated in time with a differentiable numerical scheme that yields the predictions for a set of future timesteps. The model parameters are learned by solving the APHYNITY constrained optimization problem (Eq.~\ref{eq:aphynity-opt-fr}). An adaptive constrained optimization algorithm is used to solve Eq.~\ref{eq:aphynity-opt-fr} efficiently.
We conduct experiments on three problems representative of classes of physical phenomena: Newtonian dynamics (damped pendulum), reaction-diffusion equations and wave equations. In each case, we consider simplified physical models (e.g.~the pendulum equations without the damping term) and augment these models with the APHYNITY scheme.
The experimental results show the superiority of APHYNITY over purely data-driven models, over incomplete physical models and over state-of-the-art methods combining data and knowledge. The performance gain is visible both in the prediction error and in the identification error of the physical parameters of the model. Moreover, the APHYNITY approach is flexible enough to adapt to different levels of prior physical knowledge.
\section{Application to solar irradiance forecasting}
Renewable energies have been growing strongly worldwide in recent years. However, their spatial and temporal variability remains a challenge for their large-scale integration into existing power grids, for which the balance between electricity production and consumption at every instant is paramount. Another challenge lies in the independent operation of photovoltaic or wind farms that may be coupled with additional storage or production facilities, especially in isolated island systems.
Dans ce contexte, EDF a engagé depuis plusieurs années des travaux sur la prévision de production photovoltaïque, à différents horizons temporels et à l'aide de différentes données d'entrée (modèles météorologiques, images satellites, images au sol, mesures en temps réel). L'amélioration des méthodes de prévision à court terme (de quelques minutes à une heure) est aujourd'hui un enjeu fondamental. La variabilité temporelle à court-terme de la production photovoltaïque est principalement liée à des phénomènes physiques météorologiques, tels que le déplacement des nuages. Les modèles météorologiques et les images satellite ont une résolution spatiale et temporelle insuffisante pour prédire le déplacement des nuages à court-terme au-dessus d'un site de production. Pour cela, l'utilisation de caméras au sol hémisphériques est une piste très prometteuse pour suivre les nuages et anticiper les variations brusques de production à quelques minutes \cite{gauchet2012surface,chu2013hybrid,chu2016sun,marquez2013intra,schmidt2016evaluating}. EDF dispose de plusieurs sites instrumentés de caméras hémisphériques fisheye et de capteurs de rayonnement solaire (pyranomètres), constituant ainsi une base de données annotées de plusieurs millions d'images du ciel au pas de temps 10s (Figure \ref{fig:fisheye-camera_fr}).
\begin{figure}
\centering
\includegraphics[width=14cm]{images/fisheye_context.png}
\caption{Caméra fisheye et exemple d'image fisheye utilisées pour la prévision à court-terme de l'irradiance solaire.}
\label{fig:fisheye-camera_fr}
\end{figure}
Les méthodes traditionnelles de prévision par images fisheye reposent sur du traitement d'images classique. La chaîne de traitement typique \cite{gauchet2012surface,chu2013hybrid,chu2016sun,schmidt2016evaluating} se compose des étapes suivantes: calibration de la caméra fisheye, prétraitement de l'image, segmentation de l'image avec des seuillages, calcul du flot optique et propagation du mouvement pour prévoir la future position des nuages et enfin calcul de l'irradiance future avec des algorithmes de régression.
Depuis quelques années, les méthodes d'apprentissage profond se sont révélées être une alternative intéressante pour estimer et prévoir le rayonnement solaire de bout en bout \cite{pothineni2018kloudnet,zhang2018deep,spiess2019learning,sun2019short,nie2020pv,paletta2020temporally,zhen2021ultra}, sans la nécessité de définir manuellement des indicateurs sur les images. Au début de cette thèse, nous avons exploré de premières architectures de réseaux de neurones profonds pour l'estimation et la prévision du rayonnement \cite{leguen-gretsi}. Pour l'estimation du rayonnement correspondant à l'image courante, nous avons remarqué un gain de performances très important en utilisant des réseaux convolutionnels par rapport aux méthodes traditionnelles, ce qui était attendu sachant les succès de l'apprentissage profond pour les tâches de perception. En revanche, la prévision du rayonnement est une tâche beaucoup plus compliquée: notre architecture préliminaire basée sur un ConvLSTM donne de meilleurs résultats que la méthode traditionnelle, mais avec une marge plus faible.
Pour améliorer les prédictions, nous avons appliqué les contributions méthodologiques de cette thèse à ce problème. Nous avons adapté le modèle PhyDNet de prédiction de vidéo à la prédiction jointe des images fisheye et des rayonnements futurs. Illustrée sur la Figure \ref{fig:phydnet_fisheye_fr}, cette architecture prend en entrée une séquence d'images fisheye qui est traitée par le réseau de neurones récurrent PhyDNet. Le réseau est ensuite appliqué récursivement pour décoder les images futures et les rayonnements futurs.
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{images/phydnet_fisheye_fr.png}
\caption{Modèle PhyDNet adapté pour la prévision de l'irradiance solaire.}
\label{fig:phydnet_fisheye_fr}
\end{figure}
Le modèle PhyDNet a permis un gain de performances important sur les prévisions de l'irradiance solaire à 5min par rapport à notre modèle de base ConvLSTM.
\begin{figure}
\centering
\includegraphics[width=15cm]{images/fisheye_fig1_fr.png}
\caption[Prévision de l'irradiance solaire à court-terme avec images fisheye.]{Prévisions de l'irradiance à 5min avec des images fisheye. Notre modèle inspiré par la physique prédit correctement les variations brusques de l'irradiance solaire.}
\label{fig:fisheye-qualitative-fr}
\end{figure}
Nous avons également exploré l'application de la fonction de perte DILATE et du schéma d'apprentissage APHYNITY à ce problème. Ces deux mécanismes permettent d'obtenir un nouveau gain de performances, quoique plus faible que celui apporté par l'architecture inspirée par la physique PhyDNet. Nous en avons analysé les raisons et proposé des pistes d'améliorations futures.
\section{Conclusion et perspectives}
Dans cette thèse, nous avons exploré de manière générale comment incorporer de la connaissance physique a priori dans les modèles d'apprentissage statistique pour améliorer la prévision spatio-temporelle. Plus particulièrement, nous avons abordé deux principales directions de recherche.
La première concerne le choix de la fonction de perte pour entraîner les modèles. Au lieu de l'erreur quadratique moyenne très majoritairement utilisée, nous proposons d'utiliser des critères de forme et de décalage temporel sur les trajectoires prédites. Nous nous attaquons au contexte de la prévision déterministe avec notre proposition de fonction de perte DILATE, et au contexte probabiliste, où notre objectif est de décrire la distribution prédictive par un faible nombre de scénarios divers et précis, avec notre modèle STRIPE.
Notre seconde direction de recherche est d'augmenter des modèles physiques incomplets avec des réseaux de neurones profonds basés données. Pour la prédiction de vidéo, nous introduisons le modèle PhyDNet qui sépare une partie de dynamique physique modélisée par des équations aux dérivées partielles, d'une partie résiduelle qui capture l'information complémentaire, comme la texture et les détails, nécessaire à la bonne prédiction. Nous proposons aussi un schéma d'apprentissage, appelé APHYNITY, qui assure une décomposition bien posée et unique entre des modèles physiques incomplets et des réseaux de neurones profonds, sous de faibles hypothèses.
Nous avons validé les contributions de cette thèse sur de nombreux jeux de données synthétiques et réels, ainsi que sur l'application de prévision photovoltaïque à EDF.
Les travaux de cette thèse ouvrent de nombreuses perspectives intéressantes à explorer. À court terme, les perspectives pour l'amélioration des prédictions d'irradiance comprennent l'utilisation de modèles physiques plus spécifiques à la dynamique de l'atmosphère, l'apprentissage sur des séquences temporelles de plus longue durée, ou encore l'utilisation de réseaux de neurones qui encodent l'invariance par rotation pour le traitement des images fisheye.
À plus long terme, l'étude des modèles physiques augmentés et leur application pour résoudre des problèmes naturels complexes comme la prévision climatique est particulièrement attrayante. Plusieurs applications pourraient directement bénéficier de ces travaux, par exemple l'estimation du flot optique, qui est traditionnellement basée sur l'hypothèse simplifiée de la conservation de l'intensité lumineuse, ou l'apprentissage par renforcement basé modèle, qui suppose un modèle de dynamique (souvent simplifié) pour prendre des décisions.
Par ailleurs, nous avons étudié dans cette thèse des décompositions linéaires entre modèles physiques simplifiés et leurs augmentations, ce qui est une hypothèse assez forte. D'autres schémas de décomposition peuvent être envisagés, par exemple entre des modélisations physiques à des échelles spatiales différentes.
Mots-clés : apprentissage profond, prévision spatio-temporelle, prévision photovoltaïque.
\end{vcenterpage}
\clearpage{\pagestyle{empty}\cleardoublepage}
\section{Introduction}
Cette thèse aborde le problème de la prédiction spatio-temporelle par apprentissage profond. Cela correspond à la tâche de prédiction de phénomènes complexes sous forme de séries temporelles ou de vidéos, ce qui nécessite de modéliser des dépendances temporelles complexes avec d'importantes corrélations spatiales. Ce sujet est d'une importance cruciale pour de nombreuses applications, telles que la prévision climatique, le diagnostic médical, l'évolution des marchés financiers, la demande de produits dans le commerce ou la maintenance prédictive dans l'industrie. À Électricité de France (EDF), l'application qui motive cette thèse est la prévision à court-terme de la production photovoltaïque à l'aide d'images fisheye. Cette tâche est habituellement résolue à l'aide d'algorithmes basés sur les prévisions météo et les images satellite. Toutefois ces sources de données ont une résolution spatiale et temporelle insuffisante pour prédire l'irradiance solaire à très court-terme ($<$ 20min) à l'échelle d'un parc de production photovoltaïque particulier.
Dans cette thèse, nous abordons ces tâches de prédiction avec des méthodes d'intelligence artificielle, en particulier l'apprentissage statistique et l'apprentissage profond. Ces dernières années, l'apprentissage profond a connu un regain de popularité impressionnant, notamment en vision par ordinateur \cite{krizhevsky2012imagenet}. Malgré ces succès, les méthodes d'apprentissage entièrement basées sur les données sont limitées pour extrapoler l'évolution de systèmes physiques complexes, particulièrement quand la volumétrie de données est faible et pour des séries temporelles non-stationnaires avec de possibles variations brusques. La tâche d'extrapolation sous-jacente est par nature très différente des tâches de perception pour lesquelles l'apprentissage profond est très efficace, et nécessite de modéliser des dynamiques complexes.
Pour pallier ces problèmes, nous proposons dans cette thèse d'exploiter de l'information physique a priori en combinaison avec les méthodes d'apprentissage basées données. Il s'agit d'une question très étudiée dans la littérature mais qui reste toujours largement ouverte. Nous nous concentrons sur deux principales directions: incorporation d'information physique a priori dans la fonction d'entraînement des modèles et développement d'architectures augmentées \textit{Model Based / Machine Learning (MB/ML)} dans le cas de connaissance physique incomplète.
\section{Critères différentiables de forme et de temps pour la prédiction déterministe et probabiliste}
Les réseaux de neurones profonds sont devenus la méthode état de l'art pour la prédiction de séries temporelles \cite{lai2018modeling,salinas2017deepar,oreshkin2019n,zhou2020informer}, grâce à leur capacité à modéliser des dépendances temporelles complexes à partir d'un corpus d'apprentissage. La plupart des travaux récents se sont concentrés sur l'amélioration des architectures des réseaux de neurones et abordent peu le choix de la fonction de perte d'apprentissage, pourtant tout aussi crucial.
Les travaux de cette thèse se basent sur l'observation que l'erreur quadratique moyenne (EQM) est assez peu adaptée pour comparer des séries temporelles à plusieurs pas de temps, car elle ne distingue pas les erreurs de valeur absolue et de décalage temporel. Les critères d'évaluation de forme et de temps existants, par exemple le ramp score \cite{vallance2017towards} pour la forme et le TDI (Temporal Distortion Index) \cite{frias2017assessing} pour le temps, ne sont pas utilisés en pratique pour l'entraînement des réseaux de neurones car ils sont non différentiables.
Dans cette thèse, nous proposons d'exploiter des critères de forme et de temps pour entraîner des réseaux de neurones profonds pour la prédiction de séries temporelles, dans le cas déterministe et probabiliste. Notre objectif est d'aborder des problèmes de prédiction non stationnaires, où les séries temporelles peuvent avoir des variations brutales, comme c'est le cas pour l'irradiance solaire qui chute brutalement lorsqu'un nuage occulte le soleil. Pour cela, nous introduisons des critères différentiables de forme et de temps, que nous formulons à la fois sous la forme de dissimilarités (fonctions de perte) et de similarités (noyaux semi-définis positifs). Ces travaux sont présentés dans notre article publié au journal T-PAMI \cite{leguen2021deep}.
\subsection{Fonction de perte DILATE \cite{leguen19}}
Pour la prévision déterministe de séries temporelles, nous introduisons la fonction de perte DILATE (\textit{DIstortion Loss with shApe and TimE}) \cite{leguen19}. DILATE combine une composante sur la forme des séries temporelles, basée sur la soft-DTW \cite{cuturi2017soft}, et une composante sur le décalage temporel, basée sur une relaxation différentiable du TDI \cite{frias2017assessing}.
Nous conduisons des expériences sur plusieurs jeux de données synthétiques et réels pour évaluer les performances de la perte DILATE. Les résultats révèlent que l'entraînement avec DILATE améliore significativement les performances évaluées sur des critères de forme et de temps, tout en maintenant des performances équivalentes évaluées en EQM. DILATE est agnostique à l'architecture du réseau de neurones et fonctionne aussi bien avec des architectures standard qu'avec les dernières architectures état de l'art.
\subsection{Modèle STRIPE pour la prévision probabiliste \cite{leguen20stripe}}
Pour la prévision probabiliste, nous introduisons un modèle appelé STRIPE (\textit{Shape and Time diverRsIty in Probabilistic forEcasting}). Le modèle STRIPE est un modèle génératif où les différents futurs possibles sont générés à partir de l'échantillonnage de variables latentes. La qualité des prédictions en termes de forme et de temps est assurée grâce à la fonction de perte DILATE, tandis que la diversité est assurée grâce à un mécanisme basé sur les processus ponctuels déterminantaux \cite{kulesza2012determinantal,yuan2019diverse}.
Nous menons des expériences sur un jeu de données synthétique où l'on dispose de l'ensemble des futures trajectoires comme supervision, ainsi que sur des jeux de données réels où l'on ne dispose que d'un seul futur. Les résultats montrent que STRIPE parvient à des prédictions d'une bien meilleure diversité, mesurée avec des critères de forme et de temps, que les mécanismes de diversification concurrents de la littérature \cite{dieng2019prescribed,thiede2019analyzing,elfeki2018gdpp,yuan2019diverse} et que des algorithmes dédiés à la prédiction probabiliste \cite{salinas2017deepar}. De plus, STRIPE maintient une bonne qualité des prédictions obtenues et obtient le meilleur compromis entre qualité et diversité.
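Le mécanisme de diversité peut s'illustrer par une esquisse simplifiée (hypothétique, indépendante du code du modèle STRIPE): avec un noyau DPP $L_{ij} = q_i \, q_j \, e^{-\gamma \Vert y_i - y_j \Vert^2}$ combinant qualité et similarité, un sous-ensemble de prédictions diverses obtient un déterminant $\det(L_S)$ plus élevé qu'un sous-ensemble de prédictions redondantes.

```python
import numpy as np

def dpp_kernel(trajs, quality, gamma=1.0):
    """Noyau DPP L_ij = q_i * q_j * exp(-gamma * ||y_i - y_j||^2)."""
    n = len(trajs)
    L = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            d2 = np.sum((trajs[i] - trajs[j]) ** 2)
            L[i, j] = quality[i] * quality[j] * np.exp(-gamma * d2)
    return L

def subset_score(L, S):
    """Score (non normalisé) du sous-ensemble S de prédictions : det(L_S)."""
    return np.linalg.det(L[np.ix_(S, S)])

# Trois futurs candidats : deux quasi identiques, un troisième différent.
trajs = [np.array([0., 0., 1.]), np.array([0., 0.05, 1.]), np.array([1., 1., 0.])]
L = dpp_kernel(trajs, quality=[1.0, 1.0, 1.0])
```

Le sous-ensemble $\{1, 3\}$ (trajectoires différentes) obtient ici un score nettement plus élevé que $\{1, 2\}$ (trajectoires redondantes), ce qui pousse la sélection vers des scénarios divers.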
\section{Prédiction avec incorporation d'information physique incomplète}
Dans cette partie de la thèse, nous explorons comment incorporer de l'information physique a priori dans les modèles d'apprentissage statistique. En particulier, nous abordons le cas où la connaissance physique est incomplète, ce qui est une question peu traitée dans la littérature.
\subsection{Modèle PhyDNet pour la prédiction de vidéo \cite{leguen20phydnet}}
Nous proposons un modèle d'apprentissage profond dédié à la prédiction de vidéos, dénommé PhyDNet, qui incorpore de l'information physique sous la forme d'une classe d'équations aux dérivées partielles (EDP) linéaires. Toutefois, pour des vidéos génériques, les équations physiques de la dynamique ne s'appliquent pas directement au niveau des pixels. Par exemple, il est nécessaire au préalable de segmenter les objets et de déterminer leur centre de masse avant d'appliquer les lois de Newton. Pour traiter ce problème, nous supposons qu'il existe un espace latent dans lequel le modèle dynamique d'EDP linéaire s'applique. Le modèle PhyDNet est composé d'un encodeur-décodeur pour apprendre automatiquement l'espace latent le plus adapté à partir des données. Dans cet espace latent, nous décomposons la dynamique en deux parties: une partie qui intègre les lois a priori de la physique et une partie qui apprend l'information complémentaire à la physique nécessaire pour avoir une bonne prédiction au niveau des pixels. En particulier, nous introduisons une nouvelle cellule de réseau de neurones récurrent (appelée PhyCell), qui discrétise une équation aux dérivées partielles linéaire par un schéma d'Euler, et pour laquelle les dérivées partielles sont calculées avec des convolutions contraintes \cite{long2018pde}.
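Le cœur de ce type de mise à jour peut s'esquisser ainsi (version hypothétique et très simplifiée, sans apprentissage): une EDP linéaire, ici l'équation de la chaleur, est discrétisée par un schéma d'Euler explicite, le laplacien étant calculé par une convolution dont le noyau de différences finies joue le rôle des convolutions contraintes.

```python
import numpy as np

# Noyau de différences finies approchant le laplacien 2D.
LAPLACIAN_KERNEL = np.array([[0., 1., 0.],
                             [1., -4., 1.],
                             [0., 1., 0.]])

def conv2d_periodic(h, kernel):
    """Convolution 2D à bords périodiques, implémentée avec np.roll."""
    out = np.zeros_like(h)
    k = kernel.shape[0] // 2
    for di in range(-k, k + 1):
        for dj in range(-k, k + 1):
            out += kernel[di + k, dj + k] * np.roll(np.roll(h, -di, axis=0), -dj, axis=1)
    return out

def phycell_step(h, dt=0.1, nu=0.2):
    """Euler explicite : h_{t+1} = h_t + dt * nu * Laplacien(h_t)."""
    return h + dt * nu * conv2d_periodic(h, LAPLACIAN_KERNEL)

rng = np.random.default_rng(0)
h0 = rng.random((16, 16))
h = h0.copy()
for _ in range(50):
    h = phycell_step(h)
```

Sur une grille périodique, cette dynamique de diffusion conserve la moyenne de l'état latent et en réduit la variance, comme attendu pour l'équation de la chaleur.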
Nous menons des expériences sur des jeux de données avec différents niveaux de connaissance a priori: depuis Moving MNIST, où la dynamique de déplacement des chiffres est parfaitement connue, jusqu'à des vidéos généralistes de mouvements humains (Human 3.6), en passant par des cas où l'on a un a priori physique incomplet sur la dynamique, comme pour le trafic routier ou la température de surface des océans. Dans tous ces cas, nous montrons la supériorité de PhyDNet par rapport à des modèles d'apprentissage profond sans a priori physique.
\subsection{Modèle APHYNITY pour la coopération optimale entre physique et apprentissage profond \cite{leguen-aphynity}}
La prédiction de systèmes dynamiques pour lesquels on a une connaissance partielle de leur dynamique est un problème très courant dans de nombreux champs scientifiques. Par exemple pour la modélisation climatique, il est très compliqué de mettre en équations précisément tous les phénomènes complexes régissant la dynamique de l'atmosphère.
Nous introduisons ici un schéma d'apprentissage, appelé APHYNITY, qui décompose le système dynamique $\frac{dX_t}{dt} = F(X_t)$ en une composante $F_p$ pour laquelle nous avons un a priori physique et une composante d'augmentation $F_a$ qui corrige les erreurs du modèle physique: $F = F_p + F_a$.
Le problème d'apprentissage est formulé de manière à ce que le modèle physique explique la dynamique le plus possible, tandis que le modèle d'augmentation ne capture que l'information qui ne peut pas être capturée par la physique. Inspiré par le principe de moindre action, ce schéma d'apprentissage consiste à minimiser la norme du résidu $F_a$ sous la contrainte de prédiction parfaite du modèle augmenté. Sous de faibles hypothèses, qui sont vérifiées dans de nombreux cas expérimentaux, il y a existence et unicité de la solution du problème d'optimisation APHYNITY, ce qui favorise l'interprétabilité et la généralisation du modèle.
Nous menons des expériences sur trois problèmes représentatifs de classes de phénomènes physiques: dynamique newtonienne (pendule amorti), équations de réaction-diffusion et équations d'ondes. Dans chaque cas, nous considérons des modèles physiques simplifiés (par exemple les équations du pendule sans le terme d'amortissement) et augmentons ces modèles avec le schéma APHYNITY.
Les résultats expérimentaux montrent la supériorité d'APHYNITY sur des modèles basés données uniquement, sur des modèles physiques incomplets et sur des méthodes état de l'art qui combinent données et connaissances. Le gain de performances se voit à la fois sur l'erreur de prédiction et sur l'erreur d'identification des paramètres physiques du modèle. De plus, l'approche APHYNITY est suffisamment flexible pour s'adapter à des niveaux différents de connaissance physique a priori.
\section{Application à la prédiction d'irradiance solaire \cite{leguen-gretsi,leguen-fisheye}}
Les énergies renouvelables sont en forte progression dans le monde ces dernières années. Toutefois, leur variabilité spatiale et temporelle reste un défi pour leur intégration à grande échelle dans les réseaux électriques existants, pour lesquels l'équilibre à tout instant entre production et consommation d'électricité est primordial. Dans ce contexte, EDF a engagé depuis plusieurs années des travaux sur la prévision de production photovoltaïque à l'aide de caméras au sol "fisheye", qui permettent une prédiction à très court terme ($<$ 20 min) en anticipant le déplacement des nuages.
Pour améliorer les prédictions par rapport aux méthodes traditionnelles de traitement d'images \cite{gauchet2012surface,chu2013hybrid,chu2016sun,schmidt2016evaluating}, nous avons appliqué les contributions méthodologiques de cette thèse à ce problème. Nous avons adapté le modèle PhyDNet de prédiction de vidéo à la prédiction jointe des images fisheye et des rayonnements futurs. Le modèle PhyDNet a permis un gain de performances important sur les prévisions de l'irradiance solaire à 5min par rapport à un modèle de base ConvLSTM \cite{xingjian2015convolutional}. Nous avons également exploré l'application de la fonction de perte DILATE et du schéma d'apprentissage APHYNITY à ce problème. Ces deux mécanismes permettent d'obtenir un nouveau gain de performances.
\begin{figure}[b]
\centering
\includegraphics[width=10cm]{images/fisheye_fig1_fr.png}
\caption[Prévision de l'irradiance solaire à court-terme avec images fisheye.]{Prévisions de l'irradiance à 5min avec des images fisheye.}
\label{fig:fisheye-qualitative-fr}
\end{figure}
\bibliographystyle{plain}
\section{Introduction}\label{sec:introduction}
\begin{figure}[H]
\begin{tabular}{ccc}
\includegraphics[height=4.6cm]{images/dilatestripe_limite_mse.png} & \hspace{-0.3cm}
\includegraphics[height=4.6cm]{images/dilatestripe_fig1a.png} &
\hspace{-0.5cm}
\includegraphics[height=4.6cm]{images/dilatestripe_fig1c.png} \\
~ & \footnotesize{True predictive distribution} & \hspace{-0.5cm} \footnotesize{deep stoch. model \cite{yuan2019diverse}} \\
\textbf{(a) Deterministic forecasting} & ~ & \hspace{-5cm} \textbf{(b) Probabilistic forecasting} \\
\end{tabular}{}
\caption[MSE limitations in deterministic and probabilistic forecasting.]{\textbf{MSE limitations in deterministic and probabilistic forecasting.} (a) For deterministic forecasting, the three predictions (1,2,3) have the same MSE with respect to the target (in black). However, one would like to favour prediction 2 (correct shape, slight delay) and 3 (correct timing, inaccurate amplitude) over prediction 1 (which is not very informative).
(b) For probabilistic forecasting, state-of-the-art methods trained with variants of the MSE (e.g.~ \cite{yuan2019diverse,rasul2020multi}) lose the ability to produce sharp forecasts (in orange) compared to the ground truth future trajectories (in green).}
\label{fig-intro}
\end{figure}
\lettrine[lines=3]{T}ime series forecasting consists in analyzing historical signal correlations to anticipate future behaviour. As discussed in Chapter \ref{chap:related_work}, traditional approaches include linear autoregressive methods \cite{box2015time} or state space models \cite{durbin2012time}, which are simple yet mathematically grounded and benefit from interpretability. They often exploit prior knowledge based on stationarity, e.g.~ by leveraging trend or seasonality to constrain forecasting.
These grounding assumptions are often violated in many real-world time series that are non-stationary and can present sharp variations such as sudden drops or changes of regime. Long-term multi-step forecasting in this context is particularly challenging and arises in a wide range of important application fields, e.g.~ analyzing traffic flows \cite{li2017diffusion,snyder2019streets}, medical records \cite{chauhan2015anomaly}, predicting sharp variations in financial markets \cite{ding2015deep} or in renewable energy production \cite{vallance2017towards,ghaderi2017deep,leguen-fisheye}, \textit{etc}.
We are interested in forecasting multi-step future trajectories with potentially sharp variations in the deterministic and probabilistic cases. Deep neural networks are an appealing solution for this problem \cite{yu17learning,qin2017dual,lai2018modeling,salinas2017deepar,oreshkin2019n,zhou2020informer}, due to their automatic feature extraction and complex nonlinear time dependencies modelling. However, the verification criteria typically used in applications are not used at training time because they are mostly not differentiable. We may cite for instance the ramp score \cite{vallance2017towards} for assessing the detection of sharp ramping events, or the Time Distortion Index (TDI) \cite{frias2017assessing} for assessing the time delay of a particular predicted event.
Instead, the huge majority of methods optimize at training time the Mean Squared Error (MSE) or its variants (MAE, quantile loss, \textit{etc}) as a proxy loss function. However, the MSE has important drawbacks in our non-stationary context, as also noted by several other works \cite{vallance2017towards,verbois2020beyond,yang2020verification}. This is illustrated in Figure \ref{fig-intro}. Figure \ref{fig-intro} (a) shows three deterministic predictions, which have the same MSE loss compared to the target step function (in black). Thus, the MSE does not favour predictions (2) and (3) over prediction (1), although they are clearly more adequate for regulation purposes since they do anticipate the drop to come, either with a slight delay (2) or with a slightly inaccurate amplitude (3). For probabilistic forecasting (Figure \ref{fig-intro} (b)), current state-of-the-art probabilistic methods trained with variants of the MSE tend to produce blurry predictions that do not match the sharp steps of the true futures (in green).
We intend to bridge this train/test criterion gap by incorporating shape and temporal features at training time. In this Chapter, we introduce shape and temporal criteria for training deep forecasting models. We characterize the shape of time series with the Dynamic Time Warping (DTW) \cite{sakoe1990dynamic} algorithm and the temporal shift with the Temporal Distortion Index (TDI) \cite{frias2017assessing}. We provide a unified view of these criteria by formulating them both as dissimilarities (loss functions) and similarities (positive semi-definite kernels). Importantly, we insist on their differentiability, which makes them amenable to gradient-based optimization, and on their efficient computation.
\section{Shape (dis)similarity}
\subsection{Background: Dynamic Time Warping}
To assess the shape similarity between two time series, the popular Dynamic Time Warping (DTW) method \cite{sakoe1990dynamic} seeks a minimal cost alignment for handling time distortions.
Given two $d$-dimensional time series $\mathbf{y} \in \mathbb{R}^{d \times n}$ and $\mathbf{z} \in \mathbb{R}^{d \times m}$ of lengths $n$ and $m$, DTW looks for an optimal warping path represented by a binary matrix $\mathbf{A} \in \left \{ 0,1 \right \} ^{n \times m}$ where $\mathbf{A}_{ij}=1$ if $\mathbf{y}_i$ is associated to $\mathbf{z}_j$ and 0 otherwise. The set of admissible warping paths $\mathcal{A}_{n,m}$ is composed of paths connecting the endpoints $(1,1)$ to $(n,m)$ with the following authorized moves: $\rightarrow, \downarrow, \searrow$. The cost of a warping path $\mathbf{A}$ is the sum of the costs along the alignment; this cost can be written as the scalar product $\left\langle \mathbf{A}, \mathbf{\Delta}(\mathbf{y},\mathbf{z}) \right\rangle$, where $\mathbf{\Delta}(\mathbf{y},\mathbf{z})$ is a $n \times m$ pairwise dissimilarity matrix whose general term is typically chosen as the squared Euclidean distance $\mathbf{\Delta}(\mathbf{y},\mathbf{z})_{ij} = \Vert \mathbf{y}_i-\mathbf{z}_j \Vert^2_{2}$. DTW computes the minimal cost warping path:
\begin{equation}
\text{DTW}^{\mathbf{\Delta}}(\mathbf{y}, \mathbf{z}) :=\underset{\mathbf{A} \in \mathcal{A}_{n,m}}{\min} \left \langle \mathbf{A},\mathbf{\Delta}(\mathbf{y}, \mathbf{z}) \right \rangle.
\label{eq:dtw}
\end{equation}
\begin{figure}
\centering
\includegraphics[width=10cm]{images/dtw.png}
\caption[Principle of Dynamic Time Warping (DTW)]{\textbf{Dynamic Time Warping (DTW)} seeks a path of minimal alignment cost (in red) in the pairwise cost matrix between the two time series.}
\label{fig:dtw}
\end{figure}
Although the cardinality of $\mathcal{A}_{n,m}$ increases exponentially in $\min(n,m)$\footnote{$|\mathcal{A}_{n,m}|$ is equal to the Delannoy number $Delannoy(n,m)$, which grows exponentially in $\min(n,m)$.}, DTW and the optimal path $\mathbf{A^*}$ can be computed efficiently in $\mathcal{O}(nm)$ by dynamic programming. However, a major limitation of DTW is its non-differentiability, which prevents its integration in neural network pipelines trained with gradient-based optimization.
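The $\mathcal{O}(nm)$ dynamic program can be sketched as follows (an illustrative implementation with the squared Euclidean cost; function names are ours, not a reference implementation):

```python
import numpy as np

def dtw(y, z):
    """DTW between series y (n x d) and z (m x d), squared Euclidean cost."""
    y, z = np.asarray(y, float), np.asarray(z, float)
    if y.ndim == 1:
        y = y[:, None]
    if z.ndim == 1:
        z = z[:, None]
    n, m = len(y), len(z)
    # Pairwise cost matrix Delta(y, z).
    delta = ((y[:, None, :] - z[None, :, :]) ** 2).sum(-1)
    # R[i, j] = minimal cost of a path from (1, 1) to (i, j).
    R = np.full((n + 1, m + 1), np.inf)
    R[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Three authorized moves: down, right, diagonal.
            R[i, j] = delta[i - 1, j - 1] + min(R[i - 1, j], R[i, j - 1], R[i - 1, j - 1])
    return R[n, m]
```

For instance, a step pattern shifted by one time step still aligns at zero cost, which is precisely the invariance to time distortions mentioned above.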
\subsubsection{Smooth DTW shape dissimilarity}
\label{sec:soft-dtw}
For handling the non-differentiability of DTW, Cuturi and Blondel \cite{cuturi2017soft} introduced the soft-DTW by replacing the hard minimum operator by a smooth minimum with the log-sum-exp trick $\min_{\gamma}(a_1,\dots,a_n) = - \gamma \log \left( \sum_{i=1}^n \exp(-a_i / \gamma) \right)$:
\begin{equation}
\text{DTW}^{\mathbf{\Delta}}_{\gamma}(\mathbf{y}, \mathbf{z}) :=
- \gamma \log \left ( \sum_{\mathbf{A} \in \mathcal{A}_{n,m}} e ^ { - \left \langle \mathbf{A},\mathbf{\Delta}(\mathbf{y}, \mathbf{z}) \right \rangle / \gamma} \right ),
\label{eq:dtwgamma}
\end{equation}
where $\gamma > 0$ is a smoothing parameter (when $\gamma \rightarrow 0$, this converges to the true DTW).
$\text{DTW}^{\mathbf{\Delta}}_{\gamma}$ as defined in Eq \ref{eq:dtwgamma} is differentiable with respect to $\mathbf{\Delta}$ (and with respect to both series $\mathbf{y}$ and $\mathbf{z}$ by the chain rule, provided a differentiable cost function $\mathbf{\Delta}$).
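Concretely, the soft-DTW can be computed by the same dynamic program as DTW with the hard $\min$ replaced by $\min_{\gamma}$ (a forward-only sketch operating on a precomputed cost matrix; names are illustrative):

```python
import numpy as np

def softmin(values, gamma):
    """min_gamma(a_1..a_k) = -gamma * log(sum_i exp(-a_i / gamma)), stabilized."""
    v = np.asarray(values) / -gamma
    vmax = v.max()
    return -gamma * (vmax + np.log(np.exp(v - vmax).sum()))

def soft_dtw(delta, gamma=1.0):
    """Soft-DTW from a precomputed n x m pairwise cost matrix delta."""
    n, m = delta.shape
    R = np.full((n + 1, m + 1), np.inf)
    R[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Same recursion as DTW, with the smooth minimum.
            R[i, j] = delta[i - 1, j - 1] + softmin(
                [R[i - 1, j], R[i, j - 1], R[i - 1, j - 1]], gamma)
    return R[n, m]
```

As $\gamma \rightarrow 0$ the value converges to the hard DTW, while for larger $\gamma$ the smooth minimum lower-bounds the hard one.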
We can interpret this relaxed DTW version by considering, instead of the unique optimal path $\mathbf{A}^*$, a Gibbs distribution over possible paths:
\begin{equation}
p_{\gamma}(\mathbf{A} ; \mathbf{\Delta}) = \frac{1}{Z} \: e^{- \left \langle \mathbf{A},\mathbf{\Delta}(\mathbf{y}, \mathbf{z}) \right \rangle / \gamma }.
\label{eq:gibbs}
\end{equation}
The soft-DTW is then the negative log-partition of this distribution: $\text{DTW}^{\mathbf{\Delta}}_{\gamma}(\mathbf{y}, \mathbf{z}) := - \gamma \log Z $.
Since $\text{DTW}^{\mathbf{\Delta}}_{\gamma}(\mathbf{y},\mathbf{z})$ can take negative values and is not minimized for $\mathbf{y}=\mathbf{z}$, Mensch and Blondel \cite{mensch2018differentiable} normalized the soft-DTW to make it a true divergence. We found experimentally that this does not improve performance and is computationally heavier (see Appendix \ref{app:dilate-div}).
\subsubsection{Shape similarity kernel}
\label{sec:shape-kernel}
Based on the soft-DTW shape dissimilarity defined in Eq \ref{eq:dtwgamma}, we define a shape similarity kernel as follows:
\begin{equation}
\mathcal{K}_{shape}(\mathbf{y},\mathbf{z}) = e^{- \: \text{DTW}^{\mathbf{\Delta}}_{\gamma}(\mathbf{y},\mathbf{z}) / \gamma}.
\label{eq:kshape}
\end{equation}
We experiment with the following choices for the cost matrix $\Delta_{ij} = \Delta(\mathbf{y},\mathbf{z})_{ij}$:
\begin{itemize}
\item Half-Gaussian: $\mathbf{\Delta}_{ij}= \Vert \mathbf{y}_i-\mathbf{z}_j \Vert^2_2 + \log (2 - e^{- \Vert \mathbf{y}_i-\mathbf{z}_j \Vert^2_2 })$
\item L1: $\mathbf{\Delta}_{ij}= |\mathbf{y}_i-\mathbf{z}_j|$ ~~ (for $d=1$)
\item Euclidean: $\mathbf{\Delta}_{ij}= \Vert \mathbf{y}_i-\mathbf{z}_j \Vert^2_2$.
\end{itemize}
$\mathcal{K}_{shape}$ was proven to be positive semi-definite (PSD) for the half-Gaussian\footnote{We denote this kernel "half-Gaussian" since the corresponding $k$ kernel defined in the proof (Appendix \ref{app:proof-ktime}) equals $k(\mathbf{y}_i,\mathbf{z}_j) = e^{- \Delta(\mathbf{y}_i,\mathbf{z}_j)} = \left(\frac{1}{2} e^{-\Vert \mathbf{y}_i-\mathbf{z}_j \Vert^2}\right) \times \left(1 - \frac{1}{2} e^{-\Vert \mathbf{y}_i-\mathbf{z}_j \Vert^2}\right)^{-1}$} and the L1 kernels \cite{cuturi2007kernel,blondel2020differentiable}, and is conjectured to be PSD for the Euclidean kernel \cite{blondel2020differentiable}. Experimentally, we observed that these three cost matrices lead to similar behaviour.
\section{Temporal (dis)similarity}
Quantifying the temporal similarity between two time series consists in analyzing the time delays between matched patterns detected in both series. As discussed in the introduction, it is of great importance for many applications to anticipate sharp variations.
\subsection{Smooth temporal distortion index}
A common temporal similarity is the Temporal Distortion Index (TDI) \cite{frias2017assessing, vallance2017towards}. The TDI computes the approximate area included between the optimal path $\mathbf{A^*}$ and the first diagonal, characterizing the presence of temporal distortion. A generalized version of the TDI, that we proposed in \cite{leguen19}, can be written:
\begin{equation}
\text{TDI}^{\mathbf{\Delta, \Omega_{dissim}}}(\mathbf{y}, \mathbf{z}) := \langle \mathbf{A}^*, \mathbf{\Omega_{dissim}} \rangle \> ,
\label{eq:tdi}
\end{equation}
where $ \mathbf{A}^* = \underset{\mathbf{A} \in \mathcal{A}_{n,m}}{\arg \min} \left \langle \mathbf{A},\mathbf{\Delta}(\mathbf{y}, \mathbf{z}) \right \rangle$ is the DTW optimal path and $\mathbf{\Omega_{dissim}} \in \mathbb{R}^{n \times m}$ is a matrix penalizing the association between $\mathbf{y}_i$ and $\mathbf{z}_{j}$ for $i \neq j$. We typically choose a quadratic penalization $\mathbf{\Omega_{dissim}}(i,j) \propto (i-j)^2$, but other variants can encode prior knowledge and penalize more heavily late than early predictions, and \textit{vice-versa}.
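As an illustration, the hard TDI of Eq \ref{eq:tdi} can be computed by extracting the optimal DTW path with dynamic programming and backtracking. The sketch below (function names are ours) uses a quadratic penalization; the $1/(nm)$ scaling is our choice, since the penalization is only defined up to a proportionality factor:

```python
import numpy as np

def dtw_optimal_path(delta):
    """Binary alignment matrix A* of the optimal DTW path for an
    (n x m) cost matrix delta, via dynamic programming + backtracking."""
    n, m = delta.shape
    R = np.full((n + 1, m + 1), np.inf)
    R[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            R[i, j] = delta[i - 1, j - 1] + min(R[i - 1, j], R[i, j - 1],
                                                R[i - 1, j - 1])
    A = np.zeros((n, m))
    i, j = n, m
    A[i - 1, j - 1] = 1.0
    while (i, j) != (1, 1):
        # predecessors: diagonal listed first, so ties favour the diagonal
        i, j = min([(i - 1, j - 1), (i - 1, j), (i, j - 1)],
                   key=lambda ij: R[ij])
        A[i - 1, j - 1] = 1.0
    return A

def omega_dissim(n, m):
    """Quadratic temporal penalization (the 1/(n*m) scaling is our choice)."""
    i, j = np.arange(n)[:, None], np.arange(m)[None, :]
    return (i - j) ** 2 / (n * m)

def tdi(delta, omega):
    """Hard TDI: inner product between the optimal path and the penalty."""
    return float(np.sum(dtw_optimal_path(delta) * omega))
```

For two identical series the optimal path is the main diagonal and the TDI vanishes.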
The TDI dissimilarity defined in Eq \ref{eq:tdi} is however non-differentiable, since the optimal path $\mathbf{A}^*$ is not differentiable with respect to $\mathbf{\Delta}$. We handle this problem
by defining a relaxed optimal path $\mathbf{A}^*_{\gamma}$ as the gradient of $\text{DTW}_{\gamma}^{\mathbf{\Delta}}$:
\begin{align}
\mathbf{A}^*_{\gamma} := \nabla_{\mathbf{\Delta}} \text{DTW}^{\mathbf{\Delta}}_{\gamma}(\mathbf{y}, \mathbf{z}) = \frac{1}{Z} \sum_{\mathbf{A} \in \mathcal{A}_{n,m}} \mathbf{A} \: e^{- \left \langle \mathbf{A},\mathbf{\Delta}(\mathbf{y}, \mathbf{z}) \right \rangle / \gamma }.
\label{eq:grad_dtw}
\end{align}
The expression in Eq \ref{eq:grad_dtw} results from a direct computation from Eq. \ref{eq:dtwgamma}. Notice that this soft optimal path corresponds to the expected path $\mathbf{A}^*_{\gamma} = \mathbb{E}_{p_{\gamma}(\cdot ; \mathbf{\Delta})} [\mathbf{A}]$ under the Gibbs distribution in Eq \ref{eq:gibbs}. Note also that $\mathbf{A}^*_{\gamma}$ becomes a soft assignment, i.e.~ $\mathbf{A}^*_{\gamma}(i,j)$ represents the probability for a path to contain the cell $(i,j)$. An illustration of soft optimal paths with the influence of $\gamma$ is given in Figure \ref{fig:dilate_analysis}.
We can now define a differentiable version of the TDI:
\begin{equation}
\text{TDI}^{\mathbf{\Delta,\Omega_{dissim}}}_{\gamma}(\mathbf{y},\mathbf{z}) := \left \langle \mathbf{A}_{\gamma}^* , \mathbf{\Omega_{dissim}} \right \rangle = \dfrac{1}{Z} \sum_{\mathbf{A} \in \mathcal{A}_{n,m}} \left \langle \mathbf{A}, \mathbf{\Omega_{dissim}} \right \rangle e^{-\frac{ \left \langle \mathbf{A},\mathbf{\Delta}(\mathbf{y}, \mathbf{z}) \right \rangle}{\gamma} },
\label{eq:temporal}
\end{equation}
which corresponds to the expected value of the TDI under the Gibbs distribution.
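For tiny cost matrices, $\mathbf{A}^*_{\gamma}$ can be checked directly against its definition as the expected path under the Gibbs distribution, by enumerating all admissible paths. The sketch below (names ours) is for illustration only; in practice the efficient $\mathcal{O}(nm)$ recursion is used, since the number of paths grows exponentially:

```python
import numpy as np

def monotone_paths(n, m, i=0, j=0):
    """Enumerate all admissible DTW paths from (0,0) to (n-1,m-1)."""
    if (i, j) == (n - 1, m - 1):
        yield [(i, j)]
        return
    for di, dj in ((1, 1), (1, 0), (0, 1)):
        if i + di < n and j + dj < m:
            for rest in monotone_paths(n, m, i + di, j + dj):
                yield [(i, j)] + rest

def soft_optimal_path(delta, gamma):
    """Expected path A*_gamma under the Gibbs distribution over paths.
    Brute-force enumeration: only tractable for tiny n, m."""
    n, m = delta.shape
    A_gamma, Z = np.zeros((n, m)), 0.0
    for path in monotone_paths(n, m):
        w = np.exp(-sum(delta[i, j] for i, j in path) / gamma)
        Z += w
        for i, j in path:
            A_gamma[i, j] += w
    return A_gamma / Z

def tdi_gamma(delta, omega, gamma):
    """Differentiable TDI: <A*_gamma, Omega_dissim>."""
    return float(np.sum(soft_optimal_path(delta, gamma) * omega))
```

Since every path contains the corner cells $(1,1)$ and $(n,m)$, the corresponding soft assignments always equal 1, and all entries of $\mathbf{A}^*_{\gamma}$ lie in $[0,1]$.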
\subsection{Temporal similarity kernel}
Based on the temporal dissimilarity in Eq \ref{eq:temporal} and the shape similarity kernel in Eq. \ref{eq:kshape}, we can define a time similarity as follows:
\begin{equation}
\mathcal{K}_{time}(\mathbf{y},\mathbf{z}) := e^{- \text{DTW}^{\mathbf{\Delta}}_{\gamma}(\mathbf{y},\mathbf{z}) / \gamma}
\times \text{TDI}^{\mathbf{\Delta, {\Omega_{sim}}}}_{\gamma} (\mathbf{y},\mathbf{z}),
\label{eq:Ktime}
\end{equation}
where in this case, we use a similarity matrix $\mathbf{\Omega_{sim}}$ favoring pairs of time series with low temporal distortion, i.e.~ with an optimal path near the main diagonal. We typically choose a pointwise inverse of $\mathbf{\Omega_{dissim}}$: $\mathbf{\Omega_{sim}}(i,j) = \frac{1}{(i-j)^2+1}$. We prove that $ \mathcal{K}_{time}$ defines a valid PSD temporal kernel (proof in Appendix \ref{app:proof-ktime}). \\
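As a small helper, the similarity matrix $\mathbf{\Omega_{sim}}$ with the pointwise-inverse choice above can be built as follows (to be plugged into Eq \ref{eq:Ktime} together with the soft-DTW term):

```python
import numpy as np

def omega_sim(n, m):
    """Omega_sim(i, j) = 1 / ((i - j)^2 + 1): maximal on the main diagonal,
    favouring alignments with low temporal distortion."""
    i, j = np.arange(n)[:, None], np.arange(m)[None, :]
    return 1.0 / ((i - j) ** 2 + 1.0)
```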
The following table provides an overview of the shape and temporal criteria introduced in this work:\\
\begin{center}
\begin{tabular}{c|c|c}
criterion & differentiable loss & PSD similarity kernel \\
\hline
shape & $\text{DTW}^{\mathbf{\Delta}}_{\gamma}(\mathbf{y}, \mathbf{z})$ & $ e^{- \: \text{DTW}^{\mathbf{\Delta}}_{\gamma}(\mathbf{y},\mathbf{z}) / \gamma}$ \\
\hline
time & $\text{TDI}^{\mathbf{\Delta,\Omega_{dissim}}}_{\gamma}(\mathbf{y},\mathbf{z})$ &
$e^{- \text{DTW}^{\mathbf{\Delta}}_{\gamma}(\mathbf{y},\mathbf{z}) / \gamma }
\times \text{TDI}^{\mathbf{\Delta, {\Omega_{sim}}}}_{\gamma} (\mathbf{y},\mathbf{z})$
\end{tabular}
\end{center}
\subsection{Efficient forward and backward computation}
\label{app:efficient-computation}
The direct computation of the shape loss $\text{DTW}^{\mathbf{\Delta}}_{\gamma}$ (Eq \ref{eq:dtwgamma}) and the temporal loss $\text{TDI}^{\mathbf{\Delta,\Omega_{dissim}}}_{\gamma}$ (Eq \ref{eq:temporal}) is intractable, due to the exponential growth of the cardinality of $\mathcal{A}_{n,m}$. We provide a careful implementation of the forward and backward passes in order to make learning efficient.\\
\paragraph*{Shape loss:} Regarding $\text{DTW}^{\mathbf{\Delta}}_{\gamma}$, we rely on~\cite{cuturi2017soft} to efficiently compute the forward pass with a variant of the Bellman dynamic programming approach~\cite{bellman1952theory}. For the backward pass, we implement the recursion proposed in~\cite{cuturi2017soft} in a custom PyTorch loss. This implementation is much more efficient than relying on vanilla auto-differentiation, since it reuses intermediate results from the forward pass.
\paragraph*{Temporal loss:} For $\text{TDI}^{\mathbf{\Delta},\mathbf{\Omega_{dissim}}}_{\gamma}$, note that the bottleneck of the forward pass in Eq \ref{eq:temporal} is to compute $\mathbf{A}^*_{\gamma} = \nabla_{\Delta} \text{DTW}^{\mathbf{\Delta}}_{\gamma}(\mathbf{y},\mathbf{z})$, which we implement as explained for the $\text{DTW}^{\mathbf{\Delta}}_{\gamma}$ backward pass. Regarding the $\text{TDI}^{\mathbf{\Delta},\mathbf{\Omega_{dissim}}}_{\gamma}$ backward pass, we need to compute the Hessian $\nabla^2 \text{DTW}^{\mathbf{\Delta}}_{\gamma}(\mathbf{y},\mathbf{z})$. We use the method proposed in~\cite{mensch2018differentiable}, based on a dynamic programming implementation that we embed in a custom PyTorch loss. Again, our back-prop implementation allows a significant speed-up compared to standard auto-differentiation.
The resulting time complexity of both shape and temporal losses for forward and backward is $\mathcal{O}(nm)$.
\paragraph*{Custom backward implementation speedup:} We compare in Figure \ref{fig:speedup} the computation time of the standard PyTorch auto-differentiation mechanism and of our custom backward pass implementation for calculating $\text{DTW}^{\mathbf{\Delta}}_{\gamma}+\text{TDI}^{\mathbf{\Delta},\mathbf{\Omega_{dissim}}}_{\gamma}$ (we will call this quantity the DILATE loss in the next Chapter). We plot the speedup of our implementation with respect to the prediction length $H$ (averaged over 10 random target/prediction tuples). The speedup increases with $H$: $\times 20$ for 20-steps-ahead predictions and up to $\times 35$ for 100-steps-ahead predictions.
\begin{figure}
\centering
\includegraphics[width=8cm]{images/speedup.png}
\caption{Speedup of the custom forward and backward implementation of the DILATE loss introduced in Chapter \ref{chap:dilate}.}
\label{fig:speedup}
\end{figure}
\section{Conclusion}
To tackle the multi-step and non-stationary time series forecasting problem, we question the widely-used MSE training loss, which leads to non-sharp predictions. We instead propose to leverage shape and temporal features at training time. In this Chapter, we have introduced differentiable similarities and dissimilarities for characterizing shape accuracy and temporal localization error. Shape is characterized with the Dynamic Time Warping (DTW) \cite{sakoe1990dynamic} algorithm and the temporal error with the Temporal Distortion Index (TDI) \cite{frias2017assessing}. We have provided a unified view of these criteria by formulating them in terms of dissimilarities (loss functions) and similarities (positive semi-definite kernels), with an emphasis on their differentiability and efficient computation.
In subsequent Chapters, we provide two implementations for time series forecasting: the DILATE loss function for deterministic forecasting that ensures both sharp predictions with accurate temporal localization (Chapter \ref{chap:dilate}), and the STRIPE model for probabilistic forecasting with shape and temporal diversity (Chapter \ref{chap:stripe}).
\clearpage{\pagestyle{empty}\cleardoublepage}
\section{Introduction}
\begin{figure}[H]
\begin{tabular}{cccc}
\hspace{-1cm}
\includegraphics[height=4.8cm]{images/dilatestripe_fig1a.png} & \hspace{-0.3cm}
\includegraphics[height=4.8cm]{images/dilatestripe_fig1b.png} &
\hspace{-0.3cm}
\includegraphics[height=4.8cm]{images/dilatestripe_fig1c.png} &
\hspace{-0.3cm}
\includegraphics[height=4.8cm]{images/dilatestripe_fig1d.png} \\
\hspace{-1.5cm} (a) True predictive distribution & \hspace{-0.3cm} (b) DILATE ~\cite{leguen19} & \hspace{-0.3cm} (c) deep stochastic model \cite{yuan2019diverse} & \hspace{-0.3cm} (d) STRIPE (ours)
\end{tabular}{}
\caption[Probabilistic forecasting motivation.]{\textbf{Probabilistic time series forecasting}: recent advances include the DILATE loss \cite{leguen19} for enabling sharp predictions (b), but are inadequate for producing diverse forecasts. On the other hand, probabilistic forecasting approaches based on generative models \cite{yuan2019diverse,rasul2020multi} lose the ability to generate sharp forecasts (c). The proposed STRIPE model (d) produces both sharp and diverse future forecasts, matching the ground truth distribution (a).}
\label{fig:stripe_motivation}
\end{figure}
\lettrine[lines=3]{I}n many applications, producing deterministic forecasts, i.e.~ a single future trajectory, is not sufficient for decision makers, who need information about the forecast's uncertainty. Probabilistic forecasting consists in modelling the conditional predictive distribution of future trajectories given past values. In this work, our goal is to describe this predictive distribution with a small set (e.g.~ $N=10$) of plausible and diverse predictions. This goal differs from estimating the variance of the predictions or the quantiles of the distribution. Focusing on the non-stationary context with possible sharp variations, the targeted set of predictions should reflect the shape and temporal diversity of the true future trajectories. Our motivation is illustrated with the blue input in Figure \ref{fig:stripe_motivation} (a): we aim at producing predictions covering the full distribution of future trajectories, whose samples are shown in green.
State-of-the-art methods for time series forecasting currently rely on deep neural networks, which exhibit strong abilities in modelling complex nonlinear dependencies between variables and time. Recently, increasing attempts have been made to improve architectures for accurate predictions \cite{lai2018modeling,sen2019think,li2019enhancing,oreshkin2019n,leguen20phydnet} or to make predictions sharper, e.g.~ by explicitly modelling dynamics~\cite{chen2018neural,dupont2019augmented,rubanova2019latent,franceschi2020stochastic}, or by designing specific loss functions addressing the drawbacks of blurred predictions with MSE training~\cite{cuturi2017soft,rivest2019new,leguen19,vayer2020time} (e.g.~ with DILATE). Although Figure \ref{fig:stripe_motivation} (b) shows that DILATE produces sharp and realistic forecasts, its deterministic nature leads to a single trajectory prediction without uncertainty quantification.
Probabilistic methods that aim at producing a diverse set of predictions include generative models \cite{yuan2019diverse,koochali2020if,rasul2020multi}, which produce multiple trajectories by sampling from a latent space. These approaches are commonly trained using the MSE or variants, and consequently often lose the ability to represent sharp predictions, as shown in Figure~\ref{fig:stripe_motivation} (c) for \cite{yuan2019diverse}. These generative models also lack an explicit structure to control the type of diversity in the latent space.
In this Chapter, we introduce the STRIPE model for including Shape and Time diveRsIty in Probabilistic forEcasting. As shown in Figure \ref{fig:stripe_motivation} (d), this enables producing sharp and diverse forecasts, which fit well the ground truth distribution of trajectories in Figure \ref{fig:stripe_motivation} (a). STRIPE is a predictive model equipped with a diversification mechanism based on determinantal point processes (DPPs). The diversity of predictions is structured with the shape and temporal positive semi-definite kernels defined in Chapter \ref{chap:criteria}, and we design explicit schemes to control the quality vs.~ diversity tradeoff.
We conduct experiments on synthetic datasets to evaluate the ability of STRIPE to match the ground truth trajectory distribution. We show that STRIPE significantly outperforms baseline methods for representing diversity, while maintaining the accuracy of the forecasting model. Experiments on real datasets further show that STRIPE is able to outperform state-of-the-art probabilistic forecasting approaches when evaluating the best sample (i.e.~ diversity), while being equivalent based on its mean prediction (i.e.~ quality).
\section{Related work}
In this section, we extend the review from Chapter \ref{chap:related_work} on spatio-temporal forecasting and focus on the works most related to probabilistic forecasting and structured diversity.
\paragraph{Probabilistic forecasting}
For describing the conditional distribution of future values given an input sequence, a first class of deterministic methods adds variance estimation with Monte Carlo dropout \cite{zhu2017deep,laptev2017time} or predicts the quantiles of this distribution \cite{wen2017multi,gasthaus2019probabilistic,wen2019deep} by minimizing the pinball loss \cite{koenker2001quantile,romano2019conformalized} or the continuous ranked probability score (CRPS) \cite{gneiting2007probabilistic}. Other probabilistic methods approximate the predictive distribution, \textit{explicitly} with a parametric distribution (e.g.~ Gaussian for DeepAR \cite{salinas2017deepar} and variants \cite{rangapuram2018deep,salinas2019high}), or \textit{implicitly} with a latent-variable generative model (e.g.~ conditional variational autoencoders (cVAEs) \cite{yuan2019diverse}, conditional generative adversarial networks (cGANs) \cite{koochali2020if}, normalizing flows \cite{rasul2020multi}). However, by minimizing variants of the MSE (pinball loss, Gaussian maximum likelihood), these methods lack the ability to produce sharp forecasts, with the exception of cGANs, which however suffer from mode collapse, limiting predictive diversity. Moreover, these generative models generally rely on unstructured distributions in the latent space (e.g.~ Gaussian), which do not allow explicit control over the targeted diversity.
\paragraph{Structured diversity for prediction}
For diversifying forecasts, several repulsive schemes were studied, such as the variety loss \cite{gupta2018social,thiede2019analyzing} that consists in optimizing the best sample, or entropy regularization \cite{dieng2019prescribed,wang2019nonlinear} that encourages a uniform distribution. Besides, generative models, such as variational autoencoders (VAEs) \cite{kingma2013auto}, are widely used for producing multiple predictions through sampling from a latent space. However, latent states are typically sampled at test time from a standard Gaussian prior distribution, resulting in an unstructured diversity. To improve this unstructured mechanism, prior works \cite{yuan2019diverse,yuan2020dlow} introduced proposal neural networks for generating the latent variables that are trained with a diversity objective.
As discussed in Chapter \ref{chap:related_work}, determinantal point processes (DPPs) are an appealing mathematical solution for characterizing the diversity of a set of items, and efficient algorithms exist for maximizing the diversity of a set of items under a given sampling budget.
GDPP \cite{elfeki2018gdpp}, proposed by Elfeki \textit{et al.}, is based on matching generated and true sample diversity by aligning the corresponding DPP kernels, which limits its use to datasets where the full distribution of possible outcomes is accessible. In contrast, our probabilistic forecasting approach is applicable in realistic scenarios where only a single future trajectory is available for each training sample.
Yuan and Kitani \cite{yuan2019diverse} train their proposal neural networks with a DPP diversity loss. Although we share with \cite{yuan2019diverse} the goal of using DPPs as a diversification mechanism for future trajectories, the main limitation of~\cite{yuan2019diverse} is the use of the MSE loss for training the predictor and of the MSE kernel for diversification, leading to blurred predictions, as illustrated in Figure~\ref{fig:stripe_motivation} (c). In contrast, we design specific shape and time DPP kernels and we show the necessity to decouple the criteria used for quality and diversity.
\begin{figure*}
\centering
\includegraphics[width=17cm]{images/stripe.png}
\caption[Overview of the STRIPE model]{\textbf{Overview of the STRIPE model:} STRIPE builds on a forecasting architecture trained with a quality loss $\mathcal{L}_{quality}$ enforcing sharp predictions.
The latent state is disentangled into a deterministic part $h$ from the encoder and two stochastic codes $z_s$ and $z_t$ that account for the shape and time variations. In a first step (upper part of the figure), we train the predictor with a quality loss, sampling the stochastic codes from a posterior network. In a second step (bottom part), we diversify the predictions with the two STRIPE shape and time proposal networks trained with a DPP diversity loss (keeping the encoder and decoder frozen).}
\label{fig:stripe}
\end{figure*}
\section{Probabilistic forecasting with structured diversity}
\label{sec:stripe}
We consider the multi-step and non-stationary time series forecasting problem in the probabilistic case. Given an input sequence $\mathbf{x}_{1:T}=(\mathbf{x}_1,\dots,\mathbf{x}_T) \in \mathbb{R}^{p \times T}$, we aim at describing the conditional predictive distribution of future trajectories with a set of $N$ future trajectories $ \{ \hat{\mathbf{y}}^{(i)} \}_{i=1..N} \in \mathbb{R}^{d \times H}$ (corresponding to diverse scenarios sampled from the true future distribution $\hat{\mathbf{y}}^{(i)} \sim p(\cdot |\mathbf{x}_{1:T})$).
We introduce the STRIPE framework (Shape and Time diveRsIty in Probabilistic forEcasting), which extends our preliminary work \cite{leguen20stripe}. Depicted in Figure \ref{fig:stripe}, STRIPE builds upon a general multi-step forecasting pipeline: the input time series $\mathbf{x}_{1:T}$ is fed into an encoder that summarizes the input into a latent vector $h$. This context vector $h$ is then transformed by a decoder into a future trajectory.
The key idea of STRIPE is to augment the deterministic latent state $h$ with stochastic diversifying variables $z_s$ (resp.~ $z_t$) meant to capture the shape (resp.~ temporal) variations of the future time series. We distinguish two phases for training the overall model: (i) we train the predictor with a quality loss and (ii) we train the diversifying STRIPE mechanism with a DPP diversity loss (with the weights of the predictor frozen). We now detail how the diversifying variables are sampled in each of these steps.
\subsection{Training the predictor with a quality loss}
For training the predictor (upper part in Figure \ref{fig:stripe}) with possibly multiple admissible futures as supervision, we take inspiration from the probabilistic U-Net \cite{kolh-probunet} and introduce a posterior network from which to sample the diversifying variables $z_s^*$ and $z_t^*$ (which represent the shape and temporal variants attached to a particular future $\mathbf{y}^*$). The posterior network outputs the parameters $\mu_s^*$ and $\sigma_s^*$ of a Gaussian distribution $\mathcal{N}(\mu_s^*,\sigma_s^*)$ parameterizing the shape posterior distribution $q(z_s | \mathbf{x},\mathbf{y}^*)$ (and similarly for the temporal posterior distribution).
To train this generative model (encoder, decoder and posterior networks), we resort to variational inference \cite{kingma2013auto} and maximize the evidence lower bound (ELBO) of the log-likelihood, or equivalently, minimize the following prediction loss over all training examples:
\begin{equation}
\mathcal{L}_{prediction}(\hat{\mathbf{y}},\mathbf{y}^*) = \mathcal{L}_{quality}(\hat{\mathbf{y}},\mathbf{y}^*) + \text{KL} \left( q(z_s | \mathbf{x},\mathbf{y}^*)\;||\;p(z_s) \right) + \text{KL}\left( q(z_t | \mathbf{x},\mathbf{y}^*)\;||\;p(z_t) \right).
\end{equation}
In our non-stationary context, we choose the DILATE loss for $\mathcal{L}_{quality}$, in order to guarantee sharp predictions with accurate temporal localization. The Kullback-Leibler (KL) losses enforce that the shape posterior distribution $q(z_s | \mathbf{x},\mathbf{y}^*)$ matches a prior distribution $p(z_s)$ (we use a Gaussian prior $\mathcal{N}(0,\mathbf{I})$, which is a common choice in variational inference).
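Assuming a diagonal-Gaussian parameterization of the posteriors, this prediction loss can be sketched as follows (a minimal illustration; the log-variance parameterization and the function names are our choices, and the DILATE value is passed in as a precomputed scalar):

```python
import numpy as np

def kl_to_std_normal(mu, logvar):
    """Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) )."""
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)

def prediction_loss(quality, mu_s, logvar_s, mu_t, logvar_t):
    """L_prediction = L_quality + KL(shape posterior || prior)
                                + KL(time posterior || prior).
    `quality` is the scalar quality loss (DILATE in our setting)."""
    return (quality
            + kl_to_std_normal(mu_s, logvar_s)
            + kl_to_std_normal(mu_t, logvar_t))
```

When both posteriors equal the standard Gaussian prior, the KL terms vanish and the loss reduces to the quality term alone.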
\subsection{Training the STRIPE diversification mechanism}
For including structured shape and temporal diversity (lower part in Figure \ref{fig:stripe}), we introduce two proposal neural networks STRIPE$_{\text{shape}}$ and STRIPE$_{\text{time}}$ that aim to produce a set of $N_s$ shape latent codes $\left\{z_s^i\right\}_{{i=1..N_s}} \in \mathbb{R}^k$ (resp. $N_t$ time codes $\left\{z_t^i\right\}_{{i=1..N_t}} \in \mathbb{R}^k$) dedicated to generating diverse trajectories in terms of shape (resp. time).
When training STRIPE$_{\text{shape}}$ (the description for STRIPE$_{\text{time}}$ is similar), we concatenate $h$ with the posterior time latent code $\mu_t^*$ and the $N_s$ shape latent codes $z_s^i$ provided by STRIPE$_{\text{shape}}$, which leads to $N_s$ future trajectories $\hat{\mathbf{y}}^{i} = \text{Decoder}\left( (h, z_s^i , \mu_t^*) \right)$, $i=1..N_s$\footnote{If there exist multiple futures as supervision, we repeat this operation for each posterior latent code $\mu_t^{*,j}$ (it corresponds to considering each tuple $(\mathbf{x}_{1:T},\mathbf{y}^{*,j})$ as a separate training example).}. The shape diversity of this set of $N_s$ trajectories is then enforced by a shape diversity loss that we describe below.\\
\paragraph*{DPP diversity loss:} We resort to determinantal point processes (DPP) for their appealing properties for maximizing the diversity of a set of items $\mathcal{Y} = \left\{ \mathbf{y}_1,...,\mathbf{y}_N \right\}$ given a fixed sampling budget $N$ and for structuring diversity via the choice of the DPP kernel. Following \cite{yuan2019diverse}, we minimize the negative expected cardinality of a random subset $Y$ from the DPP:
\begin{align}
\mathcal{L}_{diversity}(\mathcal{Y} ; \mathbf{K}) &= -\mathbb{E}_{Y \sim \text{DPP}(\mathbf{K})} |Y| \\ &= - \text{Tr}(\mathbf{I}-(\mathbf{K}+\mathbf{I})^{-1}).
\label{eq:ldiversity}
\end{align}{}
Intuitively, a larger expected cardinality means a more diverse sampled set according to kernel $\mathbf{K}$. This loss is differentiable and can be computed in closed form.\\
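This closed form only involves a matrix inverse and a trace; a minimal NumPy sketch (function names ours), together with the equivalent eigenvalue form $\sum_i \lambda_i/(1+\lambda_i)$ used as a sanity check:

```python
import numpy as np

def dpp_expected_cardinality(K):
    """E|Y| for Y ~ DPP(K): Tr(I - (K + I)^{-1}),
    equivalently sum_i lambda_i / (1 + lambda_i) over eigenvalues of K."""
    n = K.shape[0]
    return float(np.trace(np.eye(n) - np.linalg.inv(K + np.eye(n))))

def diversity_loss(K):
    """Negative expected cardinality: lower means a more diverse set."""
    return -dpp_expected_cardinality(K)
```

For instance, with $\mathbf{K} = \mathbf{I}_3$ (all eigenvalues equal to 1), the expected cardinality is $3 \times \frac{1}{2} = 1.5$.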
\paragraph*{Quality regularizer in the DPP:} When training the shape and time proposal networks with the diversity loss, we do not have control over the quality of predictions, which can deteriorate to improve diversity. To address this, we introduce a quality regularization term in the DPP kernels. Crucially, we decouple the criteria used for quality (DILATE) and diversity (shape or time): with $\mathcal{K}_{shape}$, we maximize the shape (DTW) diversity while maintaining a globally low DILATE loss (thus playing on the temporal localization to ensure a good tradeoff). This contrasts with \cite{yuan2019diverse}, which uses the same MSE criterion for both quality and diversity (see Figure \ref{fig:stripe_analysis} (b) for a detailed analysis). In practice, we introduce a quality vector $\mathbf{q}= (q_1,\dots,q_{N_s})$ measuring the agreement between each prediction $\hat{\mathbf{y}}^i$ and the ground truth $\mathbf{y}^*$\footnote{If there are multiple futures as supervision, we again consider each tuple (input sequence, possible future) as a separate training example.}. We choose $q_i = \mu (1 - \text{DILATE}(\hat{\mathbf{y}}^i, \mathbf{y}^*))$, where $\mu > 0$ is a hyperparameter tuning the influence of the quality regularization. The modified shape kernel becomes (and similarly for the time kernel):
\begin{equation}
\Tilde{\textbf{K}}_{shape} = \text{Diag}(\textbf{q}) ~ \textbf{K}_{shape} ~ \text{Diag}(\textbf{q}).
\label{eq:kshape-tilde}
\end{equation}
This decomposition enables sampling sets of items with both high quality and diversity:
\begin{equation}
\mathcal{P}_{\mathbf{\Tilde{K}}}(\mathbf{Y}=Y) \propto \left( \prod_{i \in Y} q_i^2 \right) \det(\mathbf{K}_Y).
\end{equation}{}
We then train STRIPE$_{\text{shape}}$ by applying the shape kernel $\Tilde{\textbf{K}}_{shape}$ (Eq \ref{eq:kshape-tilde}) to the set of $N_s$ shape future trajectories $\mathcal{L}_{diversity}(\hat{\mathbf{y}}^{1},\dots,\hat{\mathbf{y}}^{N_s} ; \Tilde{\mathbf{K}}_{shape})$ and STRIPE$_{\text{time}}$ by applying the time kernel $\Tilde{\textbf{K}}_{time}$ to the set of $N_t$ time future trajectories $\mathcal{L}_{diversity}(\hat{\mathbf{y}}^{1},\dots,\hat{\mathbf{y}}^{N_t} ; \Tilde{\mathbf{K}}_{time})$.
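The quality regularization of Eq \ref{eq:kshape-tilde} is a simple congruence by $\text{Diag}(\mathbf{q})$, which preserves the symmetry and positive semi-definiteness of the diversity kernel; a sketch (names ours):

```python
import numpy as np

def quality_regularized_kernel(K, q):
    """K_tilde = Diag(q) K Diag(q): rescales the PSD diversity kernel K
    by per-item quality scores q, preserving symmetry and PSD-ness."""
    D = np.diag(q)
    return D @ K @ D
```

Entrywise, $\Tilde{K}_{ij} = q_i q_j K_{ij}$: low-quality items see all their similarities shrunk, making them less likely to appear in a DPP sample.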
\subsection{Diverse trajectory generation at test time}
At test time, the posterior network is discarded and we only rely on the trained encoder, STRIPE$_{\text{shape}}$, STRIPE$_{\text{time}}$ proposal networks and decoder to generate future predictions. More precisely, we combine the shape and temporal proposals $\left\{ z_s^i \right\}_{i=1..N_s}$ and $\left\{ z_t^j \right\}_{j=1..N_t}$ to obtain $N_s \times N_t$ predictions $\hat{\mathbf{y}}^{i,j} = \text{Decoder}((h,z_s^i,z_t^j))$.
\section{Experiments \label{sec:stripe_expe}}
We first assess the ability of STRIPE to capture the full predictive distribution of future trajectories. This evaluation requires the ground truth set of admissible futures for a given input; we construct the \texttt{Synthetic-prob} dataset for this purpose. Second, in a more realistic setting where only one future is known for each input, we evaluate STRIPE on the \texttt{Traffic} and \texttt{Electricity} datasets with the best (resp. the mean) sample metrics as a proxy for diversity (resp. quality). We describe the implementation details and neural network architectures (encoder, decoder, posterior net and STRIPE proposal networks) in Appendix \ref{app:stripe}.
\subsection{Full predictive distribution evaluation on \texttt{Synthetic-prob}}
\label{sec:stripe-synth}
\paragraph*{Dataset:} In this Chapter, we build the \texttt{Synthetic-prob} ($T=20, H=20$) dataset with multiple admissible futures for each input series. This is a variant of \texttt{Synthetic-det} used in Chapter \ref{chap:dilate} where for each input series, we generate 10 different future series of length 20 by adding noise on the step amplitude and localization. A sample from this dataset can be observed in Figure \ref{fig:stripe_motivation} (a). The dataset is composed of $100 \times 10=1000$ time series for each train/valid/test split.
\paragraph*{Metrics:} To assess the discrepancy between the predicted and true distributions of futures trajectories, we define the two following measures $\text{H}_{quality}(\ell)$ and $\text{H}_{diversity}(\ell)$ ($\ell = $ DTW, TDI or DILATE in our experiments):
\begin{align}
\text{H}_{quality}(\ell) &:= \mathbb{E}_{\mathbf{x} \in \mathcal{D}_{test}} \mathbb{E}_{\hat{\mathbf{y}}} \left[ \underset{\mathbf{y} \in F(\mathbf{x})}{\inf} \: \ell(\hat{\mathbf{y}},\mathbf{y}) \right] \\
\text{H}_{diversity}(\ell) &:= \mathbb{E}_{\mathbf{x} \in \mathcal{D}_{test}} \mathbb{E}_{\mathbf{y} \in F(\mathbf{x})} \left[ \underset{\hat{\mathbf{y}}}{\inf} \: \ell(\hat{\mathbf{y}},\mathbf{y}) \right] .
\end{align}
$\text{H}_{quality}$ penalizes forecasts $\hat{\mathbf{y}}$ that are far away from a ground truth future of $\mathbf{x}$ denoted $\mathbf{y} \in F(\mathbf{x})$ (similarly to the \textit{precision} concept in pattern recognition) whereas $\text{H}_{diversity}$ penalizes when a true future is not covered by a forecast (similarly to \textit{recall}). As a tradeoff balancing quality and diversity, we compute the F1 score defined in Eq \ref{eq:F1score}:
\begin{equation}
F1 \text{ score} = \frac{2 ~ \text{H}_{quality}(\ell) \cdot \text{H}_{diversity}(\ell) }{ \text{H}_{quality}(\ell) + \text{H}_{diversity}(\ell)} \label{eq:F1score}.
\end{equation}
In addition, we also use the continuous ranked probability score (CRPS) which is a standard \textit{proper scoring rule} \cite{gneiting2007probabilistic} for assessing probabilistic forecasts \cite{gasthaus2019probabilistic}. Intuitively, the CRPS is the pinball loss integrated over all quantile levels. A key property is that the CRPS attains its minimum when the predicted future distribution equals the true future distribution, making this metric particularly adapted to our context.
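For a single input $\mathbf{x}$ (the full metrics additionally average over the test set $\mathcal{D}_{test}$), these two measures and the F1 score can be sketched as follows (names ours; `loss` stands for any of DTW, TDI or DILATE):

```python
import numpy as np

def h_quality(preds, futures, loss):
    """Average, over forecasts, of the distance to the closest true future
    (precision-like)."""
    return float(np.mean([min(loss(p, y) for y in futures) for p in preds]))

def h_diversity(preds, futures, loss):
    """Average, over true futures, of the distance to the closest forecast
    (recall-like)."""
    return float(np.mean([min(loss(p, y) for p in preds) for y in futures]))

def f1_score(h_qual, h_div):
    """Harmonic mean of the two (lower-is-better) measures (Eq F1 score)."""
    return 2.0 * h_qual * h_div / (h_qual + h_div)
```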
\paragraph*{Forecasting results:} We compare in Table \ref{tab:stripe} our method to 4 recent competing diversification mechanisms (variety loss \cite{thiede2019analyzing}, entropy regularization \cite{dieng2019prescribed}, diverse DPP \cite{yuan2019diverse} and GDPP \cite{elfeki2018gdpp}) based on a conditional variational autoencoder (cVAE) backbone trained with DILATE. We observe that STRIPE obtains the best global performance by improving diversity by a large amount ($\text{H}_{diversity}(\text{DILATE})=17.9$) compared to the backbone cVAE DILATE ($\text{H}_{diversity}(\text{DILATE})=33.9$) and to other diversification schemes (the best competitor GDPP \cite{elfeki2018gdpp} attains $\text{H}_{diversity}(\text{DILATE})=23.9$).
This highlights the relevance of the structured shape and time diversity. We can also notice that, in contrast to competing diversification schemes that improve diversity at the cost of a loss in quality, STRIPE maintains high-quality predictions. STRIPE is only beaten in $\text{H}_{quality}(\text{DILATE})$ by GDPP \cite{elfeki2018gdpp}, but this method is significantly worse than STRIPE in diversity, and GDPP requires full future distribution supervision, which is not applicable to real datasets (see Section \ref{sec:stripe_real_datasets}). All in all, the F1 scores summarize the quality vs.~ diversity tradeoffs, and STRIPE gets the best F1 DILATE score. Moreover, STRIPE outperforms all other methods on the CRPS metric, indicating that the predicted future trajectory distribution is closer to the ground truth one.
\begin{table*}[t]
\caption[STRIPE forecasting results on the \texttt{Synthetic-prob} dataset.]{\textbf{STRIPE forecasting results on the \texttt{Synthetic-prob} dataset with multiple futures}, averaged over 5 runs (mean $\pm$ std). Best equivalent methods (Student t-test) shown in bold. Metrics are scaled (MSE $\times$ 1000, DILATE $\times 100$, CRPS $\times$ 1000).}
\begin{adjustbox}{max width=\linewidth}
\begin{tabular}{ccccccccccc}
\toprule
\multicolumn{1}{c}{} & \multicolumn{3}{c}{$\text{H}_{quality}(\cdot) \; (\downarrow)$} & \multicolumn{3}{c}{$\text{H}_{diversity}(\cdot) \; (\downarrow)$} & \multicolumn{3}{c}{$ F1 \text{ score} \; (\downarrow)$} & \multicolumn{1}{c}{CRPS ($\downarrow$)} \\
\cmidrule(lr){2-4} \cmidrule(lr){5-7} \cmidrule(lr){8-10}
Methods & DTW & TDI & DILATE & DTW & TDI & DILATE & DTW & TDI & DILATE \\
\midrule
cVAE DILATE & \textbf{11.7 $\pm$ 1.5} & 9.4 $\pm$ 2.2 & \textbf{14.2 $\pm$ 1.5} & 18.8 $\pm$ 1.3 & 48.6 $\pm$ 2.2 & 33.9 $\pm$ 3.9 & 14.4 & 15.7 & 20.0 & 62.2 $\pm$ 4.2 \\
variety loss \cite{thiede2019analyzing} DILATE & 15.6 $\pm$ 3.4 & 10.2 $\pm$ 1.1 & 16.8 $\pm$ 0.9 & 22.7 $\pm$ 4.1 & 37.7 $\pm$ 4.9 & 30.8 $\pm$ 1.0 & 18.5 & 16.1 & 21.7 & 62.6 $\pm$ 3.0 \\
Entropy reg. \cite{dieng2019prescribed} DILATE & 13.8 $\pm$ 3.1 & 8.8 $\pm$ 2.2 & \textbf{15.0 $\pm$ 1.6} & 20.4 $\pm$ 2.8 & 42.0 $\pm$ 7.8 & 32.6 $\pm$ 2.3 & 16.5 & 14.5 & 20.5 & 62.4 $\pm$ 3.9 \\
Diverse DPP \cite{yuan2019diverse} DILATE & \textbf{12.9 $\pm$ 1.2} & 9.8 $\pm$ 2.1 & 15.1 $\pm$ 1.5 & 18.6 $\pm$ 1.6 & 42.8 $\pm$ 10.1 & 31.3 $\pm$ 5.7 & 15.2 & 15.9 & 20.4 & 60.7 $\pm$ 1.6 \\
GDPP \cite{elfeki2018gdpp} DILATE & 14.8 $\pm$ 2.9 & 11.7 $\pm$ 8.4 & \textbf{14.4 $\pm$ 2.1} & 20.8 $\pm$ 2.4 & 25.2 $\pm$ 7.2 & 23.9 $\pm$ 4.5 & 17.3 & 15.9 & 17.9 & 63.4 $\pm$ 6.4 \\
STRIPE & 13.5 $\pm$ 0.5 & 9.2 $\pm$ 0.5 & \textbf{15.0 $\pm$ 0.3} & \textbf{12.9 $\pm$ 0.3} & 16.3 $\pm$ 1.2 & \textbf{17.9 $\pm$ 0.6} & \textbf{13.2} & 11.7 & \textbf{16.3} & \textbf{48.6 $\pm$ 0.6} \\
\bottomrule
\end{tabular}
\end{adjustbox}
\label{tab:stripe}
\end{table*}
\subsection{State-of-the-art comparison on real-world datasets}
\label{sec:stripe_real_datasets}
\begin{table*}
\caption[STRIPE probabilistic forecasting results on \textsc{Traffic} and \textsc{Electricity}.]{\textbf{Probabilistic forecasting results on the \texttt{Traffic} and \texttt{Electricity} datasets}, averaged over 5 runs (mean $\pm$ std). Metrics are scaled for readability. Best equivalent method(s) (Student t-test) shown in bold.}
\label{tab:stripe_sota}
\centering
\setlength{\tabcolsep}{6.8pt}
\begin{adjustbox}{max width=\linewidth}
\begin{tabular}{ccccc|cccc}
\toprule
\multicolumn{1}{c}{} & \multicolumn{4}{c|}{\texttt{Traffic}} & \multicolumn{4}{c}{\texttt{Electricity}} \\
\multicolumn{1}{c}{} & \multicolumn{2}{c}{MSE} & \multicolumn{2}{c|}{DILATE} & \multicolumn{2}{c}{MSE} & \multicolumn{2}{c}{DILATE} \\
\cmidrule(lr){2-3} \cmidrule(lr){4-5} \cmidrule(lr){6-7} \cmidrule(lr){8-9}
Method & mean & best & mean & best & mean & best & mean & best \\
\midrule
Nbeats \cite{oreshkin2019n} MSE & - & 7.8 $\pm$ 0.3 & - & 22.1 $\pm$ 0.8 & - & 24.8 $\pm$ 0.4 & - & 20.2 $\pm$ 0.3 \\
Nbeats \cite{oreshkin2019n} DILATE & - & 17.1 $\pm$ 0.8 & - & 17.8 $\pm$ 0.3 & - & 25.8 $\pm$ 0.9 & - & 19.9 $\pm$ 0.5 \\
\midrule
Deep AR \cite{salinas2017deepar} & 15.1 $\pm$ 1.7 & \textbf{6.6 $\pm$ 0.7} & 30.3 $\pm$ 1.9 & 16.9 $\pm$ 0.6 & 67.6 $\pm$ 5.1 & 25.6 $\pm$ 0.4 & 59.8 $\pm$ 5.2 & 17.2 $\pm$ 0.3 \\
cVAE DILATE & \textbf{10.0 $\pm$ 1.7} & 8.8 $\pm$ 1.6 & \textbf{19.1 $\pm$ 1.2} & 17.0 $\pm$ 1.1 & \textbf{28.9 $\pm$ 0.8} & 27.8 $\pm$ 0.8 & 24.6 $\pm$ 1.4 & 22.4 $\pm$ 1.3 \\
Variety loss \cite{thiede2019analyzing} & \textbf{9.8 $\pm$ 0.8} & 7.9 $\pm$ 0.8 & \textbf{18.9 $\pm$ 1.4} & 15.9 $\pm$ 1.2 & 29.4 $\pm$ 1.0 & 27.7 $\pm$ 1.0 & 24.7 $\pm$ 1.1 & 21.6 $\pm$ 1.0 \\
Entropy regul. \cite{dieng2019prescribed} & 11.4 $\pm$ 1.3 & 10.3 $\pm$ 1.4 & \textbf{19.1 $\pm$ 1.4} & 16.8 $\pm$ 1.3 & 34.4 $\pm$ 4.1 & 32.9 $\pm$ 3.8 & 29.8 $\pm$ 3.6 & 25.6 $\pm$ 3.1 \\
Diverse DPP \cite{yuan2019diverse} & 11.2 $\pm$ 1.8 & 6.9 $\pm$ 1.0 & 20.5 $\pm$ 1.0 & 14.7 $\pm$ 1.0 & 31.5 $\pm$ 0.8 & 25.8 $\pm$ 1.3 & 26.6 $\pm$ 1.0 & 19.4 $\pm$ 1.0 \\
STRIPE & \textbf{10.0 $\pm$ 0.2} & \textbf{6.7 $\pm$ 0.3} & \textbf{19.0 $\pm$ 0.2} & \textbf{14.1 $\pm$ 0.3} & \textbf{29.5 $\pm$ 0.3} & \textbf{23.6 $\pm$ 0.4} & \textbf{24.1 $\pm$ 0.2} & 17.3 $\pm$ 0.4 \\
\bottomrule
\end{tabular}
\end{adjustbox}
\end{table*}
We now evaluate the performance of STRIPE on two challenging real-world datasets, \texttt{Traffic} and \texttt{Electricity}, commonly used as benchmarks in the time series forecasting literature \cite{yu2016temporal,salinas2017deepar,lai2018modeling,rangapuram2018deep,leguen19,sen2019think} and described in Chapter \ref{chap:dilate}. Contrary to the \texttt{Synthetic-prob} dataset, only one future trajectory sample $\mathbf{y}^{*}_{T+1:T+\tau}$ is available for each input series $\mathbf{x}_{1:T}$. In this case, the metric $\text{H}_{quality}$ (resp.\ $\text{H}_{diversity}$) defined in section \ref{sec:stripe-synth} reduces to the mean sample (resp.\ best sample) error, which is a common evaluation protocol for stochastic forecasting models \cite{babaeizadeh2017stochastic,franceschi2020stochastic}.
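Spelled out for a generic loss $\ell$ (MSE or DILATE) and $N$ sampled future trajectories $\hat{\mathbf{y}}^{(1)}, \ldots, \hat{\mathbf{y}}^{(N)}$, these reduced metrics read:
\begin{equation*}
\text{mean} = \frac{1}{N} \sum_{i=1}^{N} \ell\bigl(\hat{\mathbf{y}}^{(i)}, \mathbf{y}^{*}_{T+1:T+\tau}\bigr), \qquad \text{best} = \min_{1 \leq i \leq N} \ell\bigl(\hat{\mathbf{y}}^{(i)}, \mathbf{y}^{*}_{T+1:T+\tau}\bigr).
\end{equation*}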
Results in Table \ref{tab:stripe_sota} reveal that STRIPE outperforms all other baselines on the best sample (evaluated with MSE or DILATE). On the best sample, our method even outperforms the state-of-the-art N-Beats algorithm \cite{oreshkin2019n} (whether trained with MSE or DILATE), which is dedicated to producing high-quality deterministic forecasts. In terms of quality (evaluation with the mean sample), STRIPE obtains the best (or equivalent-best) results in all cases. This contrasts with competing diversification methods, e.g.~Diverse DPP \cite{yuan2019diverse}, which deteriorate quality to improve diversity. Finally, we notice that STRIPE is consistently better in both diversity and quality than the state-of-the-art probabilistic DeepAR method \cite{salinas2017deepar}.
We display a few qualitative forecasting examples of STRIPE in Figure \ref{fig:stripe_visus}. We observe that STRIPE predictions are both sharp and accurate: the shape diversity (amplitude of the peaks) and the temporal diversity both match the ground-truth future.
\begin{figure*}
\centering
\begin{tabular}{cc}
\includegraphics[width=8cm]{images/traffic_stripe.png} & \includegraphics[width=8cm]{images/elec_stripe.png} \\
(a) \texttt{Traffic} & (b) \texttt{Electricity}
\end{tabular}
\caption[STRIPE qualitative predictions on Traffic and Electricity.]{STRIPE qualitative predictions on datasets \texttt{Traffic} (a) and \texttt{Electricity} (b).}
\label{fig:stripe_visus}
\end{figure*}
\subsubsection{STRIPE analysis: quality-diversity cooperation}
We analyze here the quality-diversity tradeoff with respect to the number $N$ of sampled future trajectories. In Figure \ref{fig:stripe_analysis}~(a) we represent the evolution of performance as $N$ increases from 5 to 100 on the \texttt{Synthetic-prob} dataset. As expected, the normalized DILATE diversity $\text{H}_{diversity}(5)/\text{H}_{diversity}(N)$ (higher is better) increases with $N$ for both STRIPE and DeepAR \cite{salinas2017deepar}. However, we remark that STRIPE does not deteriorate normalized quality (which even increases slightly), in contrast to DeepAR, which has no control over the targeted diversity. This again confirms the relevance of our approach, which effectively combines an adequate quality loss function with a structured diversity mechanism.
We also highlight the importance of separating the criteria enforcing quality and diversity. In Figure \ref{fig:stripe_analysis}~(b), we represent 50 predictions from Diverse DPP DILATE \cite{yuan2019diverse} and STRIPE in the (DTW,TDI) plane. Diverse DPP DILATE \cite{yuan2019diverse} uses a DPP diversity loss based on the DILATE kernel, i.e.~the same kernel as for quality. We clearly see that the two objectives conflict: this model increases the DILATE diversity (by increasing the variance of the shape (DTW) or time (TDI) components), but many of these predictions have a high DILATE loss (worse quality). In contrast, STRIPE predictions are diverse in DTW and TDI while maintaining an overall low DILATE loss. STRIPE succeeds in recovering a set of good tradeoffs between shape and time, leading to a low DILATE loss.
\begin{figure}[H]
\centering
\begin{tabular}{c|c}
\includegraphics[width=7.5cm]{images/etude_N.png} & \includegraphics[width=6cm]{images/scatterplot.png} \\
(a) & (b)
\end{tabular}
\caption[STRIPE analysis.]{\textbf{STRIPE analysis:} (a) Influence of the number $N$ of trajectories on quality (higher is better) and diversity for the \texttt{Synthetic-prob} dataset. (b) Scatterplot of 50 predictions in the plane (DTW,TDI), comparing STRIPE vs.~Diverse DPP DILATE \cite{yuan2019diverse}.}
\label{fig:stripe_analysis}
\end{figure}
\section{Conclusion}
In this chapter, we have presented STRIPE, a probabilistic time series forecasting method that introduces structured shape and temporal diversity based on determinantal point processes. Diversity is controlled via two proposed differentiable positive semi-definite kernels for shape and time, and exploits a forecasting model with a disentangled latent space. Experiments on synthetic and real-world datasets confirm that STRIPE leads to more diverse forecasts without sacrificing quality. Ablation studies also reveal the crucial importance of decoupling the criteria used for quality and diversity.
\clearpage{\pagestyle{empty}\cleardoublepage}
Integrate the function $\frac{(\log x)^2}{x}$

(Source: http://clay6.com/qa/1341/integrate-the-function-frac; this question has appeared in model paper 2012.)

1 Answer:

Toolbox:
- Method of substitution: a given $\int f(x)\,dx$ can be transformed into another form by changing the independent variable $x$ to $t$, substituting $x = g(t)$.
- Consider $I = \int f(x)\,dx$.
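The answer above stops after introducing the substitution toolbox; carried through (a standard computation supplied here for completeness, not part of the original page), the natural substitution is $t = \log x$, so $dt = \frac{dx}{x}$, and

$$\int \frac{(\log x)^{2}}{x}\,dx = \int t^{2}\,dt = \frac{t^{3}}{3} + C = \frac{(\log x)^{3}}{3} + C.$$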
"UFC 141: Lesnar vs. Overeem" takes place this Friday, Dec. 30, at the MGM Grand Garden Arena in Las Vegas. It will be the Ultimate Fighting Championship's final show of the year, and it is a big one.
ProMMAnow.com (www.prommanow.com) continues with our preview of the preliminary card with a look at the welterweight match-up between TUF 7 veteran Matt Riddle and Team Nova União's Luis Ramos.
Riddle vs. Ramos is one of four UFC 141 bouts that will stream LIVE on Facebook Friday night starting at 7:15 p.m. ET (4:15 p.m. PT).
A former high-school state wrestling champion, 25-year-old Matthew Riddle has spent his entire pro career inside the UFC Octagon. His official UFC career began in June 2008 with a unanimous decision win over Dante Rivera in the TUF 7 Finale.
Since then he has put together wins over Steve Bruno, Dan Cramer, Greg Soto and DaMarques Johnson. Most recently though, he has dropped decision losses to Sean Pierson last December and Lance Benoist in September.
He did, however, win "fight of the night" in the loss to Benoist and it was the type of fight where Benoist came out looking like the one who had lost after receiving a busted nose that opened up like a faucet.
Thirty-year-old former Shooto champion Luis Ramos has been fighting professionally since 2001, primarily in Brazil. He made his UFC debut in August at UFC 134 in Rio, dropping a first round TKO loss to Erick Silva just 40 seconds into the bout.
Prior to that, Ramos had won three straight bouts this year, including two wins in one night at Watch Out Combat Show 11 in Brazil. However, two of those wins were against sub-.500 fighters.
Ramos won the vacant Shooto middleweight title (after Shinya Aoki vacated the title) with a unanimous decision win over Igor Fernandes in August 2010. Ramos vacated the belt when he signed with the UFC earlier this year.
The match-up: Riddle has some power in his hands, but his base is his wrestling. Ramos has a serious experience advantage, he has great training partners and the better submission game. He has pretty good wrestling defense as well.
Riddle does have back-to-back losses coming in, however, the UFC probably won't hold the loss to Benoist against him since it was "fight of the night" and he did most of the damage. Having said that, it is not a bet you want to make — in other words, he needs a win.
The pick: I'm giving the edge to Riddle mainly due to his size and strength and his wrestling. He should be able to control where the fight goes for the most part and grind out a decision win. He will have to be careful of Ramos' submission game and more experienced stand-up. Riddle by decision.
See the rest of ProMMAnow.com's UFC 141 preliminary card previews.
Ola, India's largest app-based taxi aggregator, has gone global, partnering with China's Didi Kuaidi, US-based Lyft and southeast Asian GrabTaxi to offer seamless access to cabs on their apps in these countries. The move is part of an alliance put together by Softbank and Alibaba, common investors in these firms, to take on Uber, their biggest rival globally.
The single integrated app across the four companies will start rolling by March.
The deal will allow the partners to leverage each other's technology, local market knowledge and business resources, allowing customers travelling abroad to hail rides through their own apps instead of installing individual apps while travelling. The Alibaba-Softbank alliance firms have drawn investments of $7.29 billion, and are valued at $25.1 bn.
Uber pioneered the asset-light taxi-hailing app concept. It has drawn investments of $8.21 bn since its inception, and at a valuation of $51 bn it is worth roughly double the combined value of the alliance.
SoftBank, which counts Ola as one of its largest portfolio companies in India, has also invested in GrabTaxi and Didi Kuaidi, while Chinese e-commerce giant Alibaba has invested in Didi Kuaidi and Lyft. Formation of the anti-Uber alliance became more obvious when Didi Kuaidi participated in Ola's latest $500-million funding round in November. The push could potentially help Lyft, a laggard in the US, attract foreign travellers to leverage on its partners' success in their home countries.
"We are excited to partner with Lyft, Didi Kuaidi and GrabTaxi, allowing seamless mobility access across hundreds of cities globally for our combined user base that runs into hundreds of millions," said Bhavish Aggarwal, co-founder and chief executive of Ola.
There has been consolidation in the app-based taxi aggregator space at a local level across the world. Didi Dache and Kuaidi Dache, the two largest players in the Chinese market, joined forces back in February to create an entity worth $6 bn. Today, Didi Kuaidi is valued at $16 bn. In India, Ola acquired rival TaxiForSure in March for $200 mn as it looked at the ways to scale quickly and fend off attacks from Uber.
"They are welcome. Let's see what they can do together that they could not do alone. I would say, bring it on," said Amit Jain, head of India for Uber.
Uber, a late entrant in India, has 250,000 drivers on its platform and has committed $1 bn to expand its operations in the country. It claims a market share of 40 per cent, while Ola claims a share of 80 per cent. Didi, the biggest player in China, owns 83 per cent of the market.
Q: Programmatically adding buttons to ScrollView (initially created with IB) I'm doing the majority of work in IB, and created a UIScrollView (with child View) using IB that I would now like to add UIButtons to (with corresponding constraints). I could add a few of these buttons using IB, but actually want to add hundreds of these buttons to a single UIScrollView so using IB seems fairly tedious.
Hence, I want to programmatically add the buttons (can copy/paste the button labels from a .txt file I have) in the .swift file.
How do I reference the UIScrollView created in the IB, in the corresponding .swift file, so I can add these buttons? See below code and comment:
override func viewDidLoad() {
    super.viewDidLoad()

    // Load the keyboard view from its nib and install it as the root view
    let nib = UINib(nibName: "KeyboardView", bundle: nil)
    let objects = nib.instantiateWithOwner(self, options: nil)
    view = objects[0] as! UIView

    let buttonTitles = ["Test Quote 1", "Test Quote 2"]
    let buttons = createButtons(buttonTitles)

    let topRow = UIView(frame: CGRectMake(0, 0, 320, 40))
    for button in buttons {
        topRow.addSubview(button)
    }

    self.view.addSubview(topRow) // how do I add this topRow view to the ScrollView created in IB, rather than to the main View?

    addConstraints(buttons, containingView: topRow)
}
A: If you need to add hundreds of buttons, maybe you could use a UITableView instead of a UIScrollView. You then only need to create one custom UITableViewCell and use a data source for configuring the UIButtons.
Nina McLemore herself will help you find the right jackets for your body type at this special in-store event, where you can preview the Spring 2019 collection! Expect a special assortment of sizes in 00, 0 & 16, 18, 20. Call 1.212.319.7700 to book your appointment.
package io.upnext.beaconcontrol.app.s2s.http.model;
public enum ErrorCode {
UNKNOWN,
BEACON_CONTROL_ERROR,
IO_ERROR,
OFFLINE
}
Compute with Quantities Containing Errors or Uncertainties (Maple Programming Help)

Description: Compute with quantities that have errors or uncertainties, such as experimental measurements and known scientific constants.

Enter quantities that have associated errors.

> ScientificErrorAnalysis[Quantity](1.552, 0.002)
        Quantity(1.552, 0.002)        (1)

> ScientificErrorAnalysis[Quantity](2.510, 0.01)
        Quantity(2.510, 0.01)        (2)

Enter a formula using these quantities (referring to the labelled outputs (1) and (2)).

> 4*Pi^2*(1)/(2)^2
        4 π² Quantity(1.552, 0.002) / Quantity(2.510, 0.01)²        (3)

Compute the derived error of the formula.

> combine((3), 'errors')
        Quantity(9.725322477, 0.07849949920)        (4)

Compute the relative error of the formula.

> ScientificErrorAnalysis[GetError]((4)) / ScientificErrorAnalysis[GetValue]((4))
        0.008071660285        (5)
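The derived and relative errors in the worksheet above can be reproduced by hand with standard first-order error propagation; a minimal Python sketch (assuming independent errors on the two quantities):

```python
import math

# Error propagation for f(a, b) = 4*pi^2 * a / b^2,
# using the two quantities from the Maple worksheet above.
a, da = 1.552, 0.002   # Quantity (1): value, absolute error
b, db = 2.510, 0.01    # Quantity (2): value, absolute error

value = 4 * math.pi**2 * a / b**2

# Relative errors combine in quadrature; b enters squared,
# so its relative error contributes with a factor of 2.
rel_err = math.sqrt((da / a) ** 2 + (2 * db / b) ** 2)
abs_err = value * rel_err

print(value)    # close to Maple's 9.725322477 in (4)
print(abs_err)  # close to Maple's 0.07849949920 in (4)
print(rel_err)  # close to Maple's 0.008071660285 in (5)
```

For a pure product/quotient form like this one, propagating relative errors in quadrature is equivalent to the general first-order formula with partial derivatives.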
In this solo cooking adventure, Gary takes a Bittman recipe and adapts it a little more to his liking.
Here at Foodie Call, Kate and I practically worship at the altar that is Mark Bittman. And why the heck not? His recipes are simple (okay, minimalist) yet delicious, often shedding the mysticism some dishes hold in our minds (I'm referring to his chicken liver pâté recipe). His recipe for spaghetti with broccoli rabe and garlic, while not exactly mysterious or exotic, was certainly tasty and simple in the "why hadn't I thought to do that" sorta way. That's what makes him Mark Bittman, he of the many cookbooks and NY Times columns, and myself decidedly not him.
I've made the recipe several times, sometimes straight up as Bittman intended and sometimes by varying the type of pasta used or adding meat (face it, I'm such an omnivore that I would turn a nice vegetarian recipe into one that is not). Regardless of what I've done to the original recipe, this has become, in all its forms, a staple in my repertoire of go-to meals. In fact, I'm fairly certain I've made this over a dozen times in the year or so since I watched the podcast in which he made this and I saw just how ridiculously easy this recipe was.
In keeping with the minimalist nature of this recipe, my adaptation only features one change and one addition. After trying this recipe several times with spaghetti, penne, ziti, rotini, and fusilli, I finally settled on fusilli as my pasta of choice. It has a large amount of surface area for the bread crumbs to cling to and the broccoli rabe is chopped into pieces similar in length, which just looks nice on the plate. The addition I made was to add bulk spicy Italian sausage because I loves my meat and the additional dose of spiciness doesn't hurt either. Besides those two changes, nothing is really different – it's still awesome delicious and still simple to make.
Bring a large pot of water to a boil and salt it. Cook broccoli rabe in boiling water until it is soft, about 5 minutes. Remove, drain well and chop into 1½-inch pieces. Cook pasta in same pot.
Put the ¼ cup olive oil in a large skillet over medium-low heat. When oil is warm, cook garlic just until fragrant, 1 to 2 minutes. Add bread crumbs and red pepper flakes and cook until bread crumbs are golden, 5 minutes or so. Remove and set aside.
In a large pan over medium-high heat, cook the Italian sausage. When it is just about cooked through, reduce the heat to medium-low and add the chopped pieces of broccoli rabe. Toss well to combine, adding salt and black pepper to taste. When the broccoli rabe is warm, add garlic and bread crumbs and mix well.
When pasta is done, drain it, reserving a little cooking water. Toss pasta in skillet with broccoli rabe and sausage mixture, moistening with a little reserved water if necessary. Adjust seasonings and serve.
package ca.stellardrift.permissionsex.datastore.sql.dao;
import ca.stellardrift.permissionsex.impl.util.PCollections;
import org.jdbi.v3.core.collector.CollectorFactory;
import org.pcollections.PSet;
import org.pcollections.PStack;
import org.pcollections.PVector;
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.IdentityHashMap;
import java.util.Map;
import java.util.Optional;
import java.util.stream.Collector;
import static io.leangen.geantyref.GenericTypeReflector.erase;
final class PCollectionsCollectorFactory implements CollectorFactory {
static final PCollectionsCollectorFactory INSTANCE = new PCollectionsCollectorFactory();
private final IdentityHashMap<Class<?>, Collector<?, ?, ?>> collectors = new IdentityHashMap<>();
private PCollectionsCollectorFactory() {
this.collectors.put(PSet.class, PCollections.toPSet());
this.collectors.put(PVector.class, PCollections.toPVector());
this.collectors.put(PStack.class, PCollections.toPStack());
}
@Override
public boolean accepts(Type containerType) {
final Type erased = erase(containerType);
return this.collectors.containsKey(erased);
}
@Override
public Optional<Type> elementType(Type containerType) {
if (!(containerType instanceof ParameterizedType)) {
return Optional.empty();
}
return Optional.ofNullable(((ParameterizedType) containerType).getActualTypeArguments()[0]);
}
@Override
public Collector<?, ?, ?> build(Type containerType) {
final Collector<?, ?, ?> collector = this.collectors.get(erase(containerType));
if (collector == null) {
throw new IllegalArgumentException("Does not accept " + containerType);
}
return collector;
}
}
FactoryGirl.define do
factory :configuration_location do
sequence(:name) { |n| "configuration_location#{seq_padded_for_sorting(n)}" }
end
end
State Road 121 (NM 121) is a state highway in the US state of New Mexico. NM 121's southern terminus is at NM 518 south of Holman, and the northern terminus is at the end of state maintenance near Chacon.
## Uniqueness of Euler angles

Hi,

The Wikipedia article on Euler angles claims that the Euler angles in the zxz convention are unique if we constrain the range they are allowed to take (except in the case of the gimbal lock). This seems reasonable. But can someone give me a reference, a book or a paper where this is stated? On searching the web, I was able to find some lecture notes which proved the above assertion, but they did not have references.

Reply: Sorry, I don't get it. If you found lectures that proved the assertion, then in the same lectures the assertion must also have been stated, in something like this: Theorem: (assertion). Proof: (proof). Or not?

Reply: Yes, the proof is there, and I do believe the statement; the theorem seems alright. But I just want to be sure, and would like to have something to give as a reference when I use this fact. I cannot give lecture notes as a reference; that is why I need a book or a paper.

Reply: I found a book: Biedenharn, L. C.; Louck, J. D. (1981), Angular Momentum in Quantum Physics, Reading. It is one of the references in the Wikipedia article.

The way I understand the theorem is like this. The angle between the initial z axis and the final Z axis is beta. If we know the positions of the initial z axis and the final Z axis, then the two axes together define a plane, and the beta rotation must have been performed about an axis perpendicular to this plane. This leaves two choices for the axis of rotation: either along the z × Z direction or along the Z × z direction. For one choice, if beta = theta, then for the other choice beta = −theta. Also, the two choices are related by N (the line of nodes) going to −N, which can be accomplished by alpha going to alpha + 180°.

Thus, assuming beta is neither 0° nor 180°, the relative position of the z and Z axes can be achieved by two choices of (alpha, beta): (1) alpha and beta, or (2) alpha + 180° and −beta (equivalently 360° − beta). For beta equal to 0° or 180°, it is easy to see that the angle alpha is inconsequential. We can fix one choice by requiring beta to lie between 0° and 180°; then alpha is also fixed. [The Wikipedia article seems to suggest that beta between −90° and 90° will also work, but this goes against my argument; using Mathematica, I found that the sets (alpha, beta, gamma) = (135, 60, 270) and (315, −60, 90) gave the same rotation matrix, so this particular point is most probably wrong.] Once alpha and beta are fixed, it is easy to see that gamma is unique, of course all the while assuming that we consider only the range 0° to 360°.

Reply: An idea: (1) write the 3 matrices corresponding to the 3 zxz rotations (this is easy); (2) multiply them in the correct order (also easy, but you get a lot of sines and cosines); (3) verify that the matrix obtained is different for different values of the 3 angles, in their range. So it should be easy! Worth doing this calculation once in a life.

Reply: Agreed that the first two steps are simple, even simpler if implemented in something like Mathematica. The resulting matrix is already given in the Wikipedia article. But I do not see how the third step can be easy; just have a look at the resulting matrix in Wikipedia.

Reply: I think I do understand. Your idea must be something like the one detailed in the link below: http://www.gregslabaugh.name/publications/euler.pdf

Reply: Yes. The calculation in that link is a bit "longer" than what I meant because, not only does it show the "essential" uniqueness of Euler angles, it actually calculates them for an arbitrary rotation.
Office Developments
Rivington Street, Shoreditch, London
"What a fabulous job and what an interesting project. It is really nice to see people actually doing something positive and doing it in such a commercially successful way."
Cassandra Campbell, Sustainability Plus
Built in 1937 as an extension to Shoreditch Town Hall, this former civic building has been thoughtfully converted into serviced office and meeting room space.
The design involved re-organising the internal layout installing two new passenger lifts and adding a sleek glazed upper floor. The character of the existing building has informed the design of a sociable and stimulating working environment.
Sussex House Office Refurbishment, Chichester
This three storey, 1960s concrete framed building had suffered from little maintenance and upkeep in recent years. This scheme was to refurbish…
Penthouses, St John's Wood
Planning permission was obtained for a pair of two storey duplex apartments with extensive terraces. These float over the original 1960s apartment…
Smiths of Smithfield
The restoration and transformation of a dilapidated Grade II listed former meat warehouse became the new home of London's hottest eaterie. The…
JWA is a dedicated team of architects working with James Wells. Based in West Sussex we design for new build homes, extensions, barn conversions and orchestrate plans for historically important listed buildings to work in the 21st Century. Our sister practice James Wells Commercial handles design for commercial projects including hotels, bars, restaurants and offices.
E: hello@jameswellsarchitects.co.uk
Sussex House,
12 Crane Street,
Chichester,
PO19 1LJ
© Copyright James Wells Commercial Architects. All Rights Reserved.
Reese Johnson (born July 10, 1998) is a Canadian professional ice hockey forward currently playing with the Chicago Blackhawks of the National Hockey League (NHL).
Playing career
Johnson played as a youth with the Saskatoon Blazers in the SMAAAHL, before he was signed by major junior club, the Red Deer Rebels of the Western Hockey League (WHL).
Johnson played in five seasons with the Rebels, captaining the club during his final season in 2018–19 and compiling a career high 27 goals and 53 points through 67 regular season games. As an undrafted free agent, Johnson was signed by the Chicago Blackhawks to a three-year, entry-level contract on March 6, 2019. Following a first-round exit with the Rebels to complete his junior career, Johnson embarked on his professional career by joining the Blackhawks AHL affiliate, the Rockford IceHogs, for the final stages of the 2018–19 season, posting 4 assists through 6 games.
In the pandemic-delayed 2020–21 season, after attending the Blackhawks training camp, Johnson was originally assigned to Rockford's training camp. On January 21, 2021, he was added to Chicago's taxi squad, and with a growing list of players ruled out under COVID protocols, Johnson was called up to make his NHL debut with the Blackhawks against the Columbus Blue Jackets on January 31, 2021. He recorded his first career NHL goal on November 23 in a 5–2 loss to the Calgary Flames.
Career statistics
References
External links
1998 births
Living people
Chicago Blackhawks players
Red Deer Rebels players
Rockford IceHogs (AHL) players
Undrafted National Hockey League players
\section{Introduction}
\begin{small}
\begin{displayquote}
\enquote{Oregon GOP frontrunner for governor embraces claims of election fraud... said he doubted Oregon's vote-by-mail system}---The Texas Tribune, Feb 11, 2022 \cite{OregonGO89:online}
\end{displayquote}
\end{small}
\begin{small}
\begin{displayquote}
\enquote{Election Deniers Go Door-to-Door to Confront Voters After Losses (in US primaries)}---Bloomberg, Aug 23 2022 \cite{USPrimar71:online}
\end{displayquote}
\end{small}
\begin{small}
\begin{displayquote}
\enquote{With 10 weeks until midterms, election deniers are hampering some election preparations
Some election deniers have ``weaponized'' against us, one election official says.}---ABC News, Aug 30, 2022 \cite{With10we65:online}
\end{displayquote}
\end{small}
Skepticism around the legitimacy of the US electoral process, which primarily gained momentum during the 2020 US presidential election, had serious ramifications. For example, endorsement of election conspiracy theories was found to be positively associated with lower turnout in the 2021 US Senate election in Georgia \cite{doi:10.1073/pnas.2115900119}. In 2022, the false narratives around the 2020 elections still persist \cite{Studyfin13:online,HowtoFig58:online} and continue to threaten democratic participation in the upcoming US midterm elections \cite{Studyfin13:online,HowtoFig58:online}.
In the last two years, 19 US states altered voting procedures and enacted laws to make voting more restrictive, creating information gaps and fresh opportunities for election misinformation to emerge and proliferate in the real and online world \cite{HowtoFig58:online}. Thus, battling election misinformation has never been more important.
Studies show that social media platforms have
become important mediums for political discourse \cite{allcott2017social,vitak2011s}. In particular, YouTube---the most popular platform among US adults \cite{Socialme29:online}---has emerged
as a political battleground as demonstrated by the fact that both political parties extensively used the platform for election campaigning \cite{Trumpdep92:online}. However, the platform came under fire from technology critics for being a hub of electoral conspiracy theories \cite{YouTubeh93:online,Election91:online}. Given the concern that search engines can play a significant role in shifting voting
decisions \cite{epstein2015search,epstein2017suppressing} and can confine users into a filter bubble of misinformation \cite{hussein2020measuring}, there has been a push for online platforms to enact policies that minimize election misinformation \cite{Inelecti49:online}. In response to this push, YouTube introduced content policies to remove videos spreading election-related falsehoods and claimed that misinformative videos would not prominently surface or get recommended on the platform \cite{Supporti69:online,Election56:online,YouTubet38:online,Howwills6:online}. However, the formulation of policies does not equate to effective enactment. This is evident from the results of two misinformation audits conducted on the platform for the same conspiratorial topics (such as vaccine controversies and 9/11 conspiracies), first in 2019 \cite{hussein2020measuring} and again in 2021 \cite{tomlein2021audit}, both of which found echo chambers of misinformation on the platform. Despite changes to YouTube's misinformation policies in 2020 \cite{Managing54:online}, the authors of the second audit did not find improvements over the results of the first; instead, they found that recommendations had worsened for topics like vaccination. These findings reiterate the need to continuously audit platforms to investigate how a platform's algorithms fare with respect to problematic content and how effectively its content policies are implemented \cite{simko2021towards}. While multiple studies have audited YouTube for misinformation \cite{hussein2020measuring,tomlein2021audit,papadamou2022just}, these were mostly conducted using sock puppets (bot accounts emulating real users) in conservative settings\footnote{For example, a sock puppet building its account history by watching only videos that promote misinformation.} that often do not reflect true user behavior. There is a dearth of crowd-sourced misinformation audits that test algorithmic behavior with real-world users (\cite{bisbee2022election} is one of the few exceptions).
In this paper, we fill this gap by conducting a large-scale crowd-sourced audit on YouTube to determine how effectively YouTube has regulated its algorithms---search and recommendation---for election misinformation.
To conduct the audit, we recruited 99 participants who filled out a survey and installed \textit{TubeCapture}, a browser extension built to collect users' YouTube search results and recommendations. The extension conducted searches for 88 search queries related to the 2020 US presidential elections. We also seeded \textit{TubeCapture} with 45 seed videos with three differing stances on election misinformation---supporting, neutral, and opposing. The extension collected up-next recommendation trails---five consecutive up-next recommendation videos---for each seed video. \textit{TubeCapture} simultaneously collected YouTube components from both personalized standard and unpersonalized incognito windows, allowing us to measure the extent of personalization. This leads us to our first research question:
\begin{itemize}[leftmargin=*]
\item[] \indent \textbf{RQ1 Extent of personalization: } What is the extent of personalization in various YouTube components?
\begin{itemize}
\item[] \indent \textbf{RQ1a:} How much are search results personalized for search queries about the 2020 US presidential elections and the surrounding voter fraud claims?
\item[] \indent \textbf{RQ1b:} How much are YouTube's up-next recommendation trails personalized for seed videos with different stances on election misinformation---supporting, neutral and opposing?
\end{itemize}
\end{itemize}
We find that while search results have very little personalization, up-next trails are highly personalized. We next venture into
determining the amount of election misinformation real users could be exposed to under different conditions, such as following up-next trails for videos supporting or opposing election misinformation.
\begin{itemize}[leftmargin=*]
\item[] \indent \textbf{RQ2: Amount of election misinformation:}
What is the impact of watching a sequence of YouTube up-next recommendation videos starting with seed videos with different stances on election misinformation (supporting, neutral, and opposing) on various YouTube components?
\begin{itemize}
\item[] \indent \textbf{RQ2a: } How much do search results get contaminated with election misinformation?
\item[] \indent \textbf{RQ2b: } What is the amount of misinformation returned in users' up-next recommendation trails?
\item[] \indent \textbf{RQ2c: } {What is the amount of misinformation that appears in users' homepage video recommendations?}
\end{itemize}
\end{itemize}
We find that YouTube presents debunking videos in search results for most of the queries. We also observe an echo chamber effect in recommendations, where trails with supporting seeds contain more misinformation than trails with neutral and opposing seeds. Since election misinformation is closely entangled with political beliefs, with several right-leaning news sources amplifying the claims of voter fraud \cite{Theuniqu22:online,Republic22:online}, we also study the diversity and composition of the content presented by YouTube in its various components.
We ask,
\begin{itemize}[leftmargin=*]
\item[] \indent \textbf{RQ3: Impact on composition and diversity:}
What is the impact on content diversity when watching a sequence of YouTube up-next recommendation videos starting with seed videos with different stances on election misinformation (supporting, neutral, and opposing)?
\begin{itemize}
\item[] \indent \textbf{RQ3a: } How diverse are the search results?
\item[] \indent \textbf{RQ3b: } How diverse are the up-next recommendation trails?
\end{itemize}
\end{itemize}
We find that YouTube ensures source diversity in its search results. We also find a large number of impressions for left-leaning late-night shows (e.g. Last Week Tonight with John Oliver) and right-leaning Fox News in users' up-next trails.
Overall, our work makes the following contributions:
\begin{itemize}
\item We conduct a post hoc audit on YouTube to determine how its algorithms fare with respect to election misinformation; post hoc auditing comprises investigating a platform for a past topic or event which could have a significant impact on citizenry in the present and future. In turn, we are able to test the effectiveness of YouTube's content policies enforced to curb election misinformation.
\item We extend prior work on misinformation audits by conducting an ethical crowd-sourced audit to see the impact of performing certain actions on the searches and recommendations of real-world people with complex platform histories, instead of the conservative settings of sock-puppet audits.
\item {Our audit reveals that YouTube search results contain more
videos that oppose election misinformation as compared to videos supporting election misinformation, especially for search queries about election fraud in presidential elections. However, a filter bubble effect still persists in the up-next recommendation trails, where a small number of misinformative videos are presented to users watching videos supporting election misinformation.}
\end{itemize}
\section{Related Work} \label{rel}
\subsection{Algorithmic audits}
Search engines and social media platforms act as information gatekeepers, with their algorithmically generated feed, timeline, and recommendations affecting the information exposure of people. Given the ubiquitousness of the algorithms and the influence they hold over the citizenry, scholars have emphasized the need for auditing online platforms, i.e., conducting a systematic investigation to determine whether the algorithmic output is aligned with ``laws and regulations, societal values, ethical desiderata, or
industry standards'' \cite{abs-2105-02980}. As a result, several research studies have audited algorithmic systems for
distortion (e.g. hyper-personalization \cite{10.1145/3449148}, ideological skew \cite{bandy2021more,trielli2019search,10.1145/2998181.2998321}), discrimination (e.g. racial and gender discrimination \cite{buolamwini2018gender,kyriakou2019fairness,asplund2020auditing}), exploitation (e.g. exploiting users' private and sensitive information \cite{DBLP:journals/corr/DattaTD14,cabanas2018unveiling}) and misjudgment (e.g. incorrect algorithmic predictions \cite{AllSouls64,duwe2019better}) \cite{10.1145/3449148}. These scholarly studies have used a myriad of audit research methods, including code audits, scraping audits, sock puppet audits, and crowd-sourced audits (see \cite{sandvig2014auditing} for a review). Among them, sock puppet auditing, where researchers create bots or fake user accounts to impersonate real-life users, is the most popular since it gives researchers the greatest control over experimental variables \cite{wilsonpromise} and does not require the high participant-recruitment costs of crowd-sourced auditing \cite{sandvig2014auditing}. Thus, several past studies have employed this audit method \cite{asplund2020auditing,hussein2020measuring,bandy2020auditing,trielli2019search,bandy2021more,juneja2021auditing}. However, in sock-puppet auditing, the bot histories are built in very conservative settings that do not emulate real-world users' complex account histories \cite{juneja2021auditing}. Thus, as an alternative, scholars have collected and audited algorithmic outputs from real-world users to study and identify problematic algorithmic behaviors in users' naturalistic settings \cite{robertson2018auditing,3274417,bisbee2022election,bandy2020auditing,venkatadri2019auditing}. We add to the existing crowd-sourced audit studies by conducting a crowd-sourced audit of YouTube to measure the amount of election misinformation in the searches and recommendations of real-world users.
In our study, we use a list of pre-selected videos and search queries to collect data from users' YouTube accounts to test whether users' existing account histories could lead them to misinformative content on the platform. In the next section, we present the audits conducted specifically on YouTube and discuss how our work adds to the growing literature on platform audits.
\subsection{Auditing YouTube for problematic content}
Given the popularity of YouTube and the criticism the platform has faced for not regulating problematic content, several scholarly studies have audited YouTube's search and recommendation algorithms for the prevalence of misinformation, extremism, and echo chambers of problematic content. Sock puppet audits on YouTube revealed that while the platform's channel recommendations radicalize users by recommending extreme channels \cite{ribeiro2020auditing}, video recommendations drive users away from radical content by recommending videos from mainstream news channels \cite{huszar2022algorithmic}. A crowd-sourced audit further revealed that real users with high prior levels of racial
resentment get more exposure to extremist content since they typically subscribe to extremist channels \cite{chen2022subscriptions}.
In another line of inquiry, several studies audited YouTube for conspiracy theories \cite{sanna2020yttrex,hussein2020measuring,faddoul2020longitudinal,papadamou2022just}.
Notably, the first such audit on YouTube was conducted by Hussein et al. \cite{hussein2020measuring}.
This audit revealed the prevalence of echo chambers of misinformation in YouTube's top-5 video recommendations for topics such as the moon landing and 9/11 conspiracies \cite{hussein2020measuring}.
Recently, Tomlein et al. re-conducted that audit and found that video recommendations for topics like 9/11 conspiracies have worsened on the platform \cite{tomlein2021audit}.
Another study (conducted in the fall of 2020), the closest to this work, collected real-world YouTube recommendations for election fraud videos by asking users to manually click on recommendations following certain traversal rules \cite{bisbee2022election}. That study aimed to show that users skeptical about the legitimacy of elections receive more voter-fraud videos in their recommendations. In contrast, we audit YouTube's searches, homepages, and default algorithmic pathway (up-next videos that are auto-played by the platform) for users with different political leanings, and investigate how its algorithm fares under different conditions (watching videos of different stances) for the same individual. Additionally, we conduct the audit two years after the presidential election. Post hoc auditing of the platform allows us to determine how well the platform has enacted its content policies and regulated harmful content.
\section{Methodology} \label{method}
\subsection{Developing search queries to measure election fraud based misinformation} \label{search_queries}
The first methodological step in any algorithmic audit is to determine a viable set of relevant search queries with which to probe the algorithmic system. For our study, we identified search queries that satisfy two properties. First, we selected high-impact search queries that people used to search about the presidential election as well as the voter fraud claims about the 2020 elections. Second, we curated search queries that have a high probability of returning misinformative results, enabling meaningful measurements of algorithmically curated misinformation about the audit topic. To compile such queries, we used Google Trends and YouTube video tags (see Figure \ref{fig:query}).
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{figures/query-gen.pdf}
\caption{Figure illustrating our method to curate search queries for our audit experiment}
\Description{The figure illustrates the two methods to curate search queries for the audit. First, we use high-impact queries from Google Trends. Second, use relevant YouTube video tags from YouTube videos that were shared by users promoting voter fraud claims on Twitter.}
\label{fig:query}
\end{figure*}
\subsubsection{Curating high-impact queries via Google Trends} First, we leveraged Google Trends, which contains Google's daily and real-time search trends data. As the most popular search service, its trends are a good indicator for understanding the real-world search behavior of a large number of people. Using \textit{Election Fraud 2020} and \textit{Presidential Election} as search topics, United States as location, April 2020 to Present as date range, and YouTube search as the search service, we extracted the top 15 most and least popular search queries that people used on YouTube. We chose April 7 as the start date since this was the day when Donald Trump made one of his first fraudulent claims about the security of mail-in ballots \cite{Timeline1:online}. We included the most popular queries since they represent the ones that people mostly use to get information on elections. To explore the \textit{data-voids} \cite{golebiewski2019data} associated with our audit topic, we also included the least popular search queries to determine if those terms have been hijacked by conspiracists to surface misinformation.
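The query-extraction step above could be sketched as follows. The authors do not name their tooling, so this sketch uses the unofficial `pytrends` wrapper around Google Trends as an assumption; the public Trends interface exposes "top" and "rising" related queries, which here stand in for the paper's most/least-popular split, and the end date of the timeframe is hypothetical.

```python
def fetch_youtube_queries(topic: str, n: int = 15) -> list[str]:
    """Sketch: pull up to `n` top and `n` rising YouTube search queries
    for a Google Trends topic, restricted to the US."""
    # Imported lazily so the offline helper below stays usable without
    # the third-party `pytrends` dependency.
    from pytrends.request import TrendReq

    trends = TrendReq(hl="en-US", tz=360)
    trends.build_payload(
        [topic],
        geo="US",                           # location: United States
        timeframe="2020-04-07 2022-09-30",  # start date per the paper; end date assumed
        gprop="youtube",                    # restrict to YouTube search
    )
    related = trends.related_queries()[topic]
    top = [] if related["top"] is None else related["top"]["query"].head(n).tolist()
    rising = [] if related["rising"] is None else related["rising"]["query"].head(n).tolist()
    return merge_queries(top + rising)


def merge_queries(queries: list[str]) -> list[str]:
    """Case-insensitive dedup that preserves first-seen order."""
    seen, out = set(), []
    for q in queries:
        key = q.lower().strip()
        if key not in seen:
            seen.add(key)
            out.append(q)
    return out
```

The dedup helper is kept separate from the network call so the merging logic can be tested in isolation.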
\begin{figure*}[!t]
\begin{minipage}{0.9\textwidth}
\begin{minipage}[]{0.5\textwidth}
\centering
\includegraphics[width=0.8\textwidth,keepaspectratio]{figures/tags_new_1.pdf}
\captionof{figure}{List of video tags associated with YouTube video titled { \tt Is Voter Fraud Real?} (video id: { \tt RkLuXvIxFew}) that promotes voter fraud misinformation. Video tags are added by content creators while uploading YouTube videos on the platform. The tags can be extracted from videos via YouTube APIs or third-party tools. We use tags associated with videos shared by users promoting voter fraud claims on Twitter as search queries in our audit experiments.}
\label{fig:tags}
\Description{The figure shows a list of video tags associated with the YouTube video titled Is Voter Fraud Real?, such as election tampering, non-citizen voters, threat to democracy, ballot harvesting, etc. }
\end{minipage}
\begin{minipage}[]{0.6\textwidth}
\centering
\small
\begin{tabular}{l}
\hline
presidential election 2020 \\
us elections 2020 latest news \\
election fraud 2020 \\
rigged election \\
dominion voting exposed \\
mail in ballots 2020 \\
stop the steal \\
joe biden voter fraud \\
usps whistleblower \\
voter fraud evidence\\
trump biden general election\\
dominion voter fraud \\ \hline
\end{tabular}
\captionof{table}{Sample search queries for our YouTube audit}
\label{searchqueries}
\end{minipage}
\end{minipage}
\end{figure*}
\subsubsection{Curating misinfo-queries using YouTube video tags}
Second, we used YouTube video tags that content creators associated with misinformative videos while uploading them on the YouTube platform (see Figure \ref{fig:tags} for an example). These tags could be
thought of as search words representing how content creators would like their videos to be discovered. To extract video tags associated with election misinformation videos, we leveraged the large-scale Voter Fraud 2020 dataset released by Abilov et al \cite{Abilov}. The dataset contains 12,002 YouTube video URLs that were shared on Twitter
by accounts that tend to refute and promote voter fraud claims. We extracted YouTube video tags associated with videos shared by accounts promoting voter fraud claims
to probe YouTube (n=200K).
To curate a viable number of search queries from the extracted video tags, we employed several steps. First, we manually curated a list of 10 keywords related to elections and the fraudulent claims surrounding them\footnote{ \textit{steal, fraud, ballot, elect, seal, dominion, sharpiegate, whistleblower, harvest, and sunrise zoom}} from the list of keywords provided by Abilov et al \cite{Abilov} as well as the election 2020 misinformation report produced by the Election Integrity Partnership \cite{eip}. Then, for each keyword, we extracted the 15 most and 15 least frequently occurring video tags containing that term. For example, one of the most occurring tags containing the keyword \textit{whistleblower} was
`usps whistleblower' while the least occurring tag was
`whistleblower jesse morgan'.
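The per-keyword tag selection can be sketched with a frequency count over the extracted tags. This is an illustrative reconstruction, not the authors' code; it assumes the tags arrive as a flat list with repeats.

```python
# Sketch: for each election-fraud keyword, pick the k most and k least
# frequent video tags that contain it. `tags` stands in for the ~200K tags
# extracted from videos shared by the voter-fraud-promoting cluster.
from collections import Counter

KEYWORDS = ["steal", "fraud", "ballot", "elect", "seal", "dominion",
            "sharpiegate", "whistleblower", "harvest", "sunrise zoom"]

def tags_per_keyword(tags: list[str], k: int = 15) -> dict[str, list[str]]:
    counts = Counter(t.lower() for t in tags)
    selected = {}
    for kw in KEYWORDS:
        matching = [(tag, n) for tag, n in counts.items() if kw in tag]
        by_freq = sorted(matching, key=lambda x: x[1], reverse=True)
        top = [tag for tag, _ in by_freq[:k]]         # most occurring
        least = [tag for tag, _ in by_freq[-k:]]      # least occurring
        selected[kw] = list(dict.fromkeys(top + least))  # dedupe, keep order
    return selected
```

With `k=15` this yields at most 30 candidate tags per keyword, matching the 15-most/15-least split described above.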
\subsubsection{Filtering search queries to obtain the final set}
We combined search queries obtained from both Google Trends and YouTube video tags in our final query set and employed several filtering steps to obtain a reasonable number of relevant search queries.
First, we kept only queries related to the 2020 election; for example, we
kept `election fraud 2020' and removed `election fraud 2016'. We replaced each set of duplicate and redundant search queries with a single randomly selected query from the set. For example, we replaced the queries `voter fraud 2020', `voter fraud', and `vote fraud' with `voter fraud 2020'. We removed queries longer than five words since they were overly specific (e.g. `we've got pictures of the check stubs paid to people to ballot harvest'). We also removed queries containing names of news channels, news anchors, and presidential candidates because they were too generic and not directly related to the audit topic. However, we kept search queries in which a candidate's name appeared together with election or election-fraud-related terms (e.g.
`Joe Biden voter fraud'). We also removed search queries in languages other than English. This left us with 88 search queries in total. Table \ref{searchqueries} presents a sample.
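The automatic parts of this filtering can be expressed as a simple predicate. This is a minimal, testable sketch only: the near-duplicate merging and the name removal were manual steps in the paper, so the blocklist and election-term list below are hypothetical stand-ins.

```python
import re

# Hypothetical examples of channel/anchor names the authors removed manually.
NAME_BLOCKLIST = {"tucker carlson", "cnn", "fox news"}
# Terms that, when present, kept a name-bearing query in the set.
ELECTION_TERMS = ("election", "fraud", "ballot", "vote", "steal")

def keep_query(q: str) -> bool:
    """Return True if a candidate query survives the automatic filters."""
    ql = q.lower().strip()
    if len(ql.split()) > 5:                       # overly specific
        return False
    years = re.findall(r"\b(?:19|20)\d{2}\b", ql)
    if any(y != "2020" for y in years):           # off-topic election year
        return False
    if any(name in ql for name in NAME_BLOCKLIST) and \
       not any(t in ql for t in ELECTION_TERMS):  # bare name, no election context
        return False
    return True
```

Applying such a predicate to the merged Trends and tag queries, followed by the manual dedup, would produce the final 88-query set.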
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\textwidth]{figures/seed_gen.pdf}
\caption{Figure illustrating our method to curate seed videos for our audit experiment}
\label{fig:seed}
\Description{The figure illustrates the method used to curate seed videos for the audit experiment that is described in detail in the Section titled ``determining popular seed videos to collect
up-next video trails''}
\end{figure*}
\begin{table*}[]
\small
\begin{tabular}{m{2.8cm}|m{8.5cm}|m{2cm}}
\hline
Annotation label & Video title & Video id \\ \hline
\multirow{2}{*}[-0.5em]{\begin{tabular}[c]{@{}l@{}}Supporting election\\ fraud misinformation\end{tabular}} & Poll worker gives his account of what happened when he tried to monitor the vote in Nevada & 4X2V5hPPp6w \\ \cline{2-3}
& Joe Biden says he's built most extensive "voter fraud" org in history & WGRnhBmHYN0 \\ \hline
\multirow{2}{*}[-0.5em]{Neutral} & Ex-Trump official shares his prediction if Trump loses 2020 & KuqhhrmhfCI \\ \cline{2-3}
& 'Don't be ridiculous': Rudy Giuliani learns about Biden win from reporters & Z0hEFa52Bdo \\ \hline
\multirow{2}{*}[-0.5em]{\begin{tabular}[c]{@{}l@{}}Opposing election\\ fraud misinformation\end{tabular}} & Voting by Mail: Last Week Tonight with John Oliver (HBO) & l-nEHkgm\_Gk \\\cline{2-3}
& Trump and the GOP Still Refuse to Accept Biden's Win: A Closer Look & QoPA3unjQgA
\end{tabular}
\caption{Sample seed videos curated for the audit experiment.}
\label{seedvideos}
\end{table*}
\subsection{Determining popular seed videos to collect up-next video trails} \label{videos}
The second step of our audit experiment is to curate YouTube videos that would act as seed videos to collect the up-next video recommendation trails. We again leveraged Abilov et al's YouTube video dataset \cite{Abilov}.
Recall that the authors identified clusters of Twitter users who shared tweets either promoting or detracting from voter fraud claims, and released the YouTube videos related to election fraud 2020 shared by those users. {At the time of analysis, out of the $\sim$12K videos present in the dataset, $\sim$8.9K were still present on YouTube. The remaining videos had either been removed or made private. Of the videos still present, $\sim$1K were shared by users in the detracting cluster, $\sim$6.5K by users in the promoting cluster, and the rest by users who had been suspended from Twitter. We sampled the 445 videos that had accumulated the maximum number of views from each of the promoting and detracting clusters (890 in total).}
Since the videos were not annotated by the authors for misinformation, we could not assume that videos shared by users in the promoting cluster would contain misinformation. Therefore, we conducted an intensive and iterative process to determine the labels and heuristics for annotating the YouTube videos for misinformation. We describe the process in detail in Section \ref{anno}. Through the annotation process, we labeled the videos as supporting, neutral, or opposing election misinformation. Out of the 890 videos, 74 were opposing, 16 were neutral, and 101 supported election misinformation, while the remaining were irrelevant. For each stance (except irrelevant), we selected as seeds the top 15 videos with the maximum engagement, determined by the number of views. Figure \ref{fig:seed} illustrates the seed video curation method. Table \ref{seedvideos} presents a sample of seed videos.
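The seed-selection step amounts to a per-stance top-N by view count. A minimal sketch, assuming each annotated video is a dict with hypothetical `video_id`, `stance`, and `views` fields:

```python
# Sketch: for each misinformation stance, keep the top-N annotated videos
# by view count; irrelevant videos are excluded entirely.
def select_seeds(videos: list[dict], per_stance: int = 15) -> list[dict]:
    seeds = []
    for stance in ("supporting", "neutral", "opposing"):
        candidates = [v for v in videos if v["stance"] == stance]
        candidates.sort(key=lambda v: v["views"], reverse=True)
        seeds.extend(candidates[:per_stance])
    return seeds
```

With `per_stance=15` over the 890 annotated videos this yields the 45 seed videos used in the experiment.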
\begin{figure*}[t]
\centering
\includegraphics[width=0.93\textwidth]{figures/elec_misinf_new.pdf}
\caption{Figure (a) presents an overview of our crowd-sourced audit of YouTube for election misinformation, Figures (b) and (c) show how our extension \textit{TubeCapture} collected YouTube components from both standard and incognito windows simultaneously.}
\label{fig:metafig}
\Description{Figure (a) presents an overview of our crowd-sourced audit of YouTube for election misinformation, Figures (b) and (c) illustrate how our extension \textit{TubeCapture} collected YouTube components from both standard and incognito windows simultaneously.}
\end{figure*}
\subsection{Experimental design} \label{design}
To conduct the crowd-sourced audit, we designed a chrome browser extension named \textit{TubeCapture} that enabled us to watch videos, conduct searches, and collect various YouTube components from users' browsers. Figure \ref{fig:metafig} presents an overview of our experimental design.
To select the study participants, we conducted a screening survey of a large sample of people (details in Section \ref{survey_sec}). Next, participants were instructed on how to use \textit{TubeCapture} and provided with a unique code to activate the extension. Once activated, they used \textit{TubeCapture} for a period of 9 days. We seeded our extension with 45 seed videos and 88 search queries. For each participant, each day the extension opened YouTube in two browser windows, one standard window and one incognito window. While the personalized results act as treatment for our experiments, results obtained from incognito act as control since YouTube does not personalize content in the incognito browsing window \cite{BrowseYo68:online}. By comparing the results from standard and incognito windows, we determine the role of YouTube's personalization algorithms in exposing users to misinformative content.
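The standard-versus-incognito comparison lends itself to simple set-overlap metrics. The paper does not commit to a specific metric in this section, so the Jaccard similarity below is illustrative of how such a comparison might be computed over the two result lists.

```python
# Illustrative personalization measure: Jaccard similarity between the
# result lists from the standard (treatment) and incognito (control)
# windows. A score of 1.0 means identical result sets, i.e., no
# content personalization; lower scores mean more personalization.
def jaccard(personalized: list[str], control: list[str]) -> float:
    a, b = set(personalized), set(control)
    if not a and not b:
        return 1.0  # two empty result lists are trivially identical
    return len(a & b) / len(a | b)
```

Note that a set-based measure ignores rank; rank-aware variants (e.g., comparing ordered prefixes of the lists) would additionally capture re-ordering personalization.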
\textit{TubeCapture} first collected and stored the user's YouTube homepage from standard and incognito windows. The extension ensured that the user had signed in to their YouTube account in the standard window and remained logged in using the same YouTube account throughout the study period. We also ensured that the homepage from the standard window is stored without the user's email address to ensure the participant's anonymity. Next, the extension opened a seed video (previously selected) that supports election misinformation, watched it for 2 minutes, saved the video page, clicked on the up-next video, and again saved the video page of the up-next video. This process was repeated until we collected 5 levels of up-next recommendations' video pages. We refer to the collection of 5 up-next video recommendations as up-next trails. Each day we collected up-next trails for five seed videos. Then, the extension again collected the user's homepage followed by personalized (via standard window) and unpersonalized (via incognito window) search results for the curated search queries. {The extension collected the search results for queries in the same order for every participant to control for carry-over effects of the search queries \mbox{\cite{hannak2013measuring}}}. For days 1-3, the extension collected up-next trails for seed videos supporting election misinformation. At the beginning of the fourth day, the extension deleted the search and watch history created by the browser extension. According to YouTube, removing an item from search or watch history removes the impact of consuming that content on future searches and recommendations. This essential step helped us in two ways- 1) it ensured that the history created by our extension in the first three days does not impact the rest of the experiment,
and 2) it also ensured that the user histories built by our extension did not pollute users' future recommendations and search results after the study period was over. For days 4-6, the extension collected up-next trails for seed videos that were neutral in stance. At the beginning of the seventh day, the search and watch history developed by the extension was again deleted. For days 7-9, the extension collected up-next trails for opposing seed videos. Towards the end of the ninth day, we again deleted the YouTube history developed by the extension. All the data collected by the extension was sent to a back-end server. The participants were instructed on how to remove the extension after the study period was over.
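The nine-day schedule above can be summarized as a simple phase table. A minimal Python sketch follows; the constant and function names are our own illustration, not part of \textit{TubeCapture}:

```python
# Phase schedule for the 9-day study (days are 1-indexed as in the text).
# The search/watch history built by the extension is deleted at each phase
# boundary (start of days 4 and 7, and end of day 9) so that earlier
# phases do not contaminate later ones.
PHASES = [
    (range(1, 4), "supporting"),   # days 1-3: seeds supporting misinformation
    (range(4, 7), "neutral"),      # days 4-6: neutral seeds
    (range(7, 10), "opposing"),    # days 7-9: seeds opposing misinformation
]

def seed_stance_for_day(day):
    """Return the stance of the seed videos used on a given study day."""
    for days, stance in PHASES:
        if day in days:
            return stance
    raise ValueError("day outside the 9-day study period")
```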
{Our mixed design allows us to test how YouTube's algorithm fares under different conditions---watching videos of different stances---for individuals with different political beliefs. Note that we did not opt for randomized assignment in a between-subjects design since it would require a large number of participants to test all the conditions (3 political affiliations $\times$ 3 misinformation stances). }
We built the \textit{TubeCapture} extension using JavaScript libraries. The back-end server was set up using Flask and Nginx. We load-tested the server using JMeter, ensured that the server could simultaneously handle 500 GET and 200 POST requests, and added mechanisms to handle errors and server timeouts. We used a MySQL database to store the data collected by the extension. The communication between the extension and our back-end server was encrypted
using SSL. Note that to collect data, \textit{TubeCapture} opened windows in the background of the currently active browser window, thereby allowing participants to continue working on their device while the extension was running. In case a participant accidentally closed any of the windows opened by our extension, we informed them via a pop-up window and instructed them on how to resume running the extension.
After building the \textit{TubeCapture} extension, we tested it with our research group and conducted three pilot studies. The aim of the pilot studies was to fix technical issues, examine the impact of running the extension on devices with different configurations, RAM, and operating systems, and improve the usability of the extension.
\subsection{Screening and study survey} \label{survey_sec}
In order to select participants for our study, we screened users according to several criteria. To be eligible for the study, users should 1) be 18 years of age or older, 2) reside in the United States, 3) have a YouTube account, 4) consume content on YouTube primarily in the English language, 5) have the Chrome browser installed, 6) be willing to run a Chrome browser extension for 9 days, and 7) have at least 8GB of RAM on their device to ensure smooth running of the extension\footnote{We warned users against participating in the study if their device's RAM was less than 8GB and informed them that their device or browser might hang in such a situation.}. The users who qualified in the screening survey were sent another study survey. The study survey contained questions about users' demographics, political affiliation, YouTube usage, trust in online information, their opinion on personalization and bias in various components of YouTube, and their view on the results of the 2020 presidential elections as well as conspiracies surrounding the elections. We also included two attention-check questions. The study survey was also used for screening participants. We disqualified users who 1) answered both attention-check questions incorrectly, 2) did not frequently use YouTube, or 3) did not use YouTube to access news or information about the 2020 presidential elections. We also used the survey responses to obtain a balanced number of participants across three political affiliations (Democrats, Republicans, and Independents). Later in the recruitment phase, we had enough Democrats and Independents as participants and thus added being a Republican as a qualifying criterion in the study survey.
\subsection{Recruitment and study deployment} \label{rec}
For our pilot studies, we recruited users from a combination of platforms such as Reddit\footnote{https://www.reddit.com/r/SampleSize/}, Facebook ads, Twitter, and Amazon Mechanical Turk (AMT). The retention rate was highest for participants recruited from Twitter and AMT. Thus, we used these two platforms to recruit participants for the main study. The pilots and the main study were approved by our university's Institutional Review Board.
Out of the 575 users who submitted the screening survey, 400 qualified, and 99 participated in the study. Out of the 99 participants, 94 ran the extension for the entire study duration. Overall, our study sample of 99 users consisted of 60.6\% males and 39.39\% females, was predominantly White/Caucasian (60.6\%), and the majority (53.53\%) of the participants had a bachelor's degree. Politically, 39.39\% of our participants were Democrats, 34.34\% Independents, and 26.26\% Republicans. Based on the results of the 2020 presidential elections\footnote{https://www.politico.com/2020-election/results/president/}, 66.67\% of our participants lived in blue states and 32.32\% in red states, while one individual resided in Puerto Rico\footnote{Puerto Rico is not considered a state but an unincorporated territory of the United States.}. We report additional participant characteristics in Appendix \ref{charac}.
\subsection{Developing data annotation scheme} \label{anno}
Developing the qualitative coding scheme to label YouTube videos for election misinformation was challenging and time-consuming, requiring four rounds of discussions and consultation with an expert to reach a consensus on the annotation heuristics. In the first round, the first author and an undergraduate research assistant sampled 196 YouTube videos from Abilov et al.'s YouTube dataset \cite{Abilov} and separately annotated the videos. They considered prior work on election misinformation narratives \cite{eip} and YouTube's content policy \cite{Election56:online} as references to identify election misinformation, and came up with an initial annotation scale and heuristics to classify videos. Then they came together to reach a consensus on the annotation values. However, even after multiple rounds of discussions, annotations diverged for 33.6\% of the videos. We then conducted additional rounds of annotation exercises with seven researchers, of whom five had extensive work experience on online misinformation.
In every round, researchers independently annotated 15 videos
and later discussed every video's annotation value and the researchers' annotation process.
We also reached out to a postdoctoral researcher with extensive research experience on online multi-modal election misinformation for feedback. Based on the insights provided by the external researchers and the postdoctoral researcher, we refined the annotation criteria and heuristics\footnote{It is important to note that all annotators and the postdoctoral researcher are left- and center-left-leaning individuals, which may have affected how the content of YouTube videos was perceived and how the annotation heuristics were developed.}. Below we describe the annotation guidelines and heuristics in detail.
\subsubsection{Annotation guidelines} In order to annotate a YouTube video, the annotators were required to go through several fields
present on the video page in the following order: the title and description, the overall premise of the video (which could be determined by going through the video transcript or watching the video content), and the channel bias. We encouraged annotators to perform an online search to gain more contextual information about events or individuals discussed in the video that they were unaware of. This strategy is grounded in the lateral reading technique that is often used by fact-checkers for credibility assessments \cite{wineburg2017lateral}. Note that we did not ask annotators to consider video comments for the annotations because we found during our annotation exercises that comments could be misleading. For example, the video \textit{Dominion Voting Systems representative demonstrates voting machines} (Q7kPSzYsR6Y) contains a demonstration of Dominion voting machines; however, the comments indicate the video to be supporting misinformation.
\subsubsection{Annotation heuristics} In this section, we describe our annotation scale and heuristics.
\noindent\textbf{Supporting election misinformation (1)}: This category includes YouTube videos that support or provide evidence for misleading narratives around the presidential elections.
We did not include videos showing incidents of mail dumping, destroyed ballots, etc. in isolation. However, if the videos use these incidents to push a specific narrative/agenda like undermining confidence in mail-in voting,
then we considered them as supporting misinformation.
We also considered live YouTube videos (live press conferences, court hearings, etc.) that highlighted voter fraud claims without giving any additional context in the title, description, or beginning of the video as supporting misinformation. A few examples of videos in this category include \textit{NO RETREAT! America Is About To \#StopTheSteal | Good Morning \#MugClub} (Xqcwzi8Onsk), where the video's title, description, and content hint towards massive voter fraud incidents in the US 2020 presidential elections, and \textit{LIVE: Trump Legal Team Presents CLEAR Evidence of Fraud Before Georgia Senate Committee 12/3/20} (e35f4pUIYOg), which contains live footage capturing the testimony of individuals claiming the occurrence of voter fraud in the 2020 presidential elections. The latter video's description, title, and beginning do not contain any statements questioning or contradicting the claims of widespread voter fraud.
\noindent\textbf{Neutral (0)}: We consider videos as neutral when they are related to the 2020 elections but do not support or oppose false narratives surrounding the elections.
For example, video \textit{WATCH: The first 2020 presidential debate} (w3KxBME7DpM)
is considered neutral since it
covers the first presidential debate of the elections.
\noindent\textbf{Opposing (-1)}: We annotate videos as opposing when they oppose or debunk the misinformation narratives behind the 2020 US presidential elections. We also include
satire videos making fun of the misinformative claims in this category.
For example, consider the video \textit{Trump Has Yet To Show Real Evidence Of Fraud, But Getting Him Out Of Office May Be A Bumpy Ride} (7mJwuKhfvqY), whose title and description indicate that Donald Trump made false claims of massive voter fraud.
\noindent\textbf{Other annotations}: We mark a video as \textit{Irrelevant} (2) if its content is not related to the presidential elections,
as \textit{URL not accessible} (3) if the YouTube video was not accessible at the time of annotation and as \textit{Other languages} (4) when the content, title, or description of the YouTube video was in a language other than English.
\subsection{Classifying YouTube videos for election misinformation} \label{classifier}
Our crowd-sourced audit experiments resulted in $\sim$47K unique YouTube videos and 35 unique YouTube shorts\footnote{YouTube shorts are short YouTube videos with lengths equal to or less than 60 seconds.}. Given the large number of videos, we scaled the annotation process using a machine learning classifier. In this section, we present our method of creating the ground truth dataset, a description of the features used in our classification model, the model architecture, and the results of our classification.
\subsubsection{Creating a ground truth dataset} Two researchers manually annotated 1196 videos using the guidelines and heuristics mentioned in Section \ref{anno}.
We obtained annotations for 545 additional videos using AMT. We describe the process of obtaining video annotations from AMT workers in Figure \ref{fig:amt} and Appendix \ref{amt}. Overall, our ground truth dataset contained 1741 videos, of which 124 were supporting\footnote{Out of these, 67 videos had been removed from the platform at the time of analysis.}, 257 opposing, 228 neutral, and 1132 irrelevant.
\subsubsection{Feature description}
We considered the following features for our classifier.
\noindent\textbf{Snippet (title+description)}: We concatenated the title of the YouTube video with its description, as done by \cite{papadamou2022just}, and used the concatenated string as a feature. \\
\noindent\textbf{Transcript}: The transcript contains the textual content of the video. We used transcripts auto-generated by YouTube. \\
\noindent\textbf{Tags}: Video tags are words that a content creator associates with their video while uploading it on the platform.\\
\noindent\textbf{Video Statistics}: Video statistics include the number of views, likes, comments, and date of publication.\\
\noindent\textbf{Channel Bias}: Since election misinformation is closely entangled with political beliefs \cite{Theuniqu22:online,Republic22:online}, we used the partisan bias of YouTube channels as a feature. Using existing datasets on media bias and manual annotations (described in Appendix \ref{partisanbias}), we annotated YouTube channels' partisan bias on a 5-point scale from far-left to far-right.
Apart from the features listed above, we also tried several other features, such as the LIWC dictionary~\cite{tausczik2010psychological}, credibility cues~\cite{mitra2017parsimonious}, and hashtag matching from the Voter Fraud dataset~\cite{Abilov} on the text features, but they did not improve performance. Therefore, we do not discuss them in detail. Recall that, while manually annotating the videos, we discovered that comments are not a good indicator of the veracity of a video. Therefore, we chose not to include them in our feature set.
\begin{scriptsize}
\begin{table*}[]
\centering
\small
\begin{tabular}{p{0.6\textwidth}p{0.15\textwidth}p{0.15\textwidth}}
\textbf{Classifier[Feature + Vectorizer + Imbalance Handling + Data]} & Acc. & F1 \\
\hline
SVM[Video Engagement Statistics] & 0.38 & 0.14 \\
SVM[Snippet + FastText] & 0.61 & 0.56 \\
SVM[Transcript + FastText] & 0.58 & 0.51 \\
SVM[Tags + FastText] & 0.59 & 0.53 \\
SVM[Snippet,Transcript,Tag + FastText] & 0.63 & 0.57 \\
SVM[Snippet,Transcript,Tag + Count] & 0.65 & 0.58 \\
SVM[Snippet,Transcript,Tag + TFIDF] & \textbf{0.71} & \textbf{0.65} \\
\hline
SVM[Snippet,Transcript,Tag,Channel Bias + Sentence Transformer] & 0.73 & 0.69 \\
SVM[Snippet,Transcript,Tag,Channel Bias + TFIDF] & 0.74 & 0.70 \\
SGD[Snippet,Transcript,Tag,Channel Bias + TFIDF] & 0.64 & 0.57 \\
KNN[Snippet,Transcript,Tag,Channel Bias + TFIDF] & 0.61 & 0.58 \\
XGB[Snippet,Transcript,Tag,Channel Bias + TFIDF] & 0.74 & 0.68 \\
Voting SVM+SGD+KNN+XGB [Snippet,Transcript,Tag,Channel Bias + TFIDF] & \textbf{0.75} & \textbf{0.71}\\
\hline
SVM[Snippet,Tag,Channel Bias + TFIDF + SMOTE + Additional Training Data] & \textbf{0.91} & 0.90 \\
XGB[Snippet,Tag,Channel Bias + TFIDF + SMOTE + Additional Training Data] & \textbf{0.91} & \textbf{0.91} \\
\hline
\end{tabular}
\caption{A sample of classifiers and feature set with the performance progression.}
\label{tab:classifier}
\end{table*}
\end{scriptsize}
\subsubsection{Classifier Selection}
To find a classifier that performs well on our dataset, we applied a series of machine learning classifiers on several combinations of feature sets.
To create feature vectors, we tested two types of word vectors (count and tf-idf vectors) and two types of sentence vectors (FastText\footnote{\url{https://fasttext.cc/}} and BERT \cite{devlin2018bert}). For word vector generation, we cleaned the dataset by removing stop words and lemmatizing, followed by generating up to 3-grams. To deal with the class imbalance in our dataset, we used the Synthetic Minority Over-sampling Technique (SMOTE) \cite{chawla2002smote}.
We applied several
classifier models on our feature set including
support vector machine, stochastic gradient descent, decision trees, nearest neighbor, and ensemble models.
To find the best model, we performed a grid search on a five-fold cross-validation dataset by looking into standard parameter space for each classifier.
For the sake of brevity, we only show a sample of combinations tested in Table \ref{tab:classifier}.
Out of all the combinations, both SVM and XGBoost performed the best (Acc=91\%) when trained with the snippet, tags, and channel bias features and the tf-idf text vectorizer\footnote{If we merge irrelevant and neutral videos into one class, resulting in a three-class classification problem, the SVM classifier performs with a 93\% accuracy.}.
Based on Occam's Razor principle \cite{Occamsra87:online}, we selected SVM as the final classifier, i.e., the simplest model with maximum accuracy. Using our final classifier, we determined the annotation labels for the remaining videos.
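For illustration, the winning feature/classifier combination can be approximated with the following scikit-learn sketch. It is a simplified stand-in, not our exact pipeline: the SMOTE oversampling step, the channel-bias feature, and the grid-searched hyperparameters are omitted, and the toy data below is purely hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

def build_classifier():
    """Tf-idf features (1- to 3-grams, English stop words removed)
    feeding a linear SVM."""
    return Pipeline([
        ("tfidf", TfidfVectorizer(ngram_range=(1, 3), stop_words="english")),
        ("svm", LinearSVC()),
    ])

# Hypothetical usage: each document is the concatenation of a video's
# snippet (title + description) and tags; labels follow our annotation
# scale (1 = supporting, -1 = opposing).
docs = [
    "massive voter fraud stolen election rigged machines",
    "fact check fraud claims debunked no evidence found",
    "stop the steal ballots dumped fraud everywhere",
    "court rejects fraud lawsuit claims baseless false",
]
labels = [1, -1, 1, -1]
clf = build_classifier().fit(docs, labels)
```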
In total, our dataset consisted of 431 supporting, 1868 opposing, 1658 neutral, and 43041 irrelevant videos.
\section{Ethical considerations} \label{ethics}
Our browser extension \textit{TubeCapture} uses crowd workers' YouTube accounts to watch videos (including videos containing election misinformation) and conduct searches on the platform. It was possible that participants would see more misinformation than they otherwise would, both during and after the research study, due to the watch and search history built during the audit. To mitigate the potential harm of our experiments, we included two essential steps in our experimental design. First, our extension always opened the browser window in the background so that participants do not actively see the videos being played. Second, the extension deleted users' search and watch history built during the study period.
Note that YouTube allows the deletion of items from the search and watch history for a specific date range.
YouTube's website \cite{Vieworde17:online,Learnabo12:online} clearly states that ``\textit{search entries you delete will no longer influence your
recommendations. At any time you can (also) remove videos (from watch history) to influence what YouTube recommends to you}''. We explicitly informed users that their YouTube history during the study period would be deleted. We ensured that the extension expires after the study period so that it does not perform any action. In addition, we ensured that the YouTube pages saved by our extension do not contain users' personally identifiable information such as email addresses.
\section{RQ1 Results: Extent of Personalization} \label{rq1}
To measure the extent of personalization in YouTube components, we compare the personalized list of video URLs present in the standard window with the baseline unpersonalized videos obtained from the incognito window.
Below we discuss the metrics that we used to quantify personalization.
\textbf{Measuring personalization in web search:}
In our study, to determine personalization in search results, we employ two metrics: the Jaccard index and rank-biased overlap (RBO). The Jaccard index measures the similarity between two lists and has been used in several previous audit studies to measure personalization in web search \cite{kliman2015location,hannak2013measuring,juneja2021auditing}. However, the Jaccard index does not take into account the rank of the lists being compared. Thus, we also used the RBO metric introduced by Webber et al. \cite{webber2010similarity}, which takes into account the order of elements in the list. The RBO function includes a parameter \textit{p} which indicates the top-weightedness of the metric, i.e., how strongly the metric penalizes differences in the top rankings. A previous audit study used the click-through rate (CTR) of Google search results to estimate the value of \textit{p} \cite{robertson2018auditing}. Because of the lack of CTR statistics available for YouTube, we set \textit{p} to 1 (prior audit studies such as \cite{le2022crowdsourcing} opted for a similar approach), indicating that differences at all ranks are equally penalized. Both Jaccard and RBO scores range between 0 and 1, with 1 indicating that the two lists have the same elements and 0 indicating that the lists are completely different.
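A minimal sketch of the two search metrics (our own implementation for illustration; with \textit{p} = 1 every depth receives equal weight, so the finite-list RBO reduces to the average overlap across depths):

```python
def jaccard(a, b):
    """Set similarity between two URL lists (order-insensitive)."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

def rbo(a, b, p=1.0):
    """Rank-biased overlap (Webber et al. 2010) for two finite ranked
    lists: a weighted average of the set agreement at each depth d,
    with weight p^(d-1). At p = 1 all depths are weighted equally."""
    k = max(len(a), len(b))
    if k == 0:
        return 1.0
    num, den = 0.0, 0.0
    for d in range(1, k + 1):
        weight = p ** (d - 1)
        agreement = len(set(a[:d]) & set(b[:d])) / d
        num += weight * agreement
        den += weight
    return num / den
```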
\textbf{Measuring personalization in up-next trails:}
To measure personalization in up-next trails, we employ the Jaccard index and the Damerau-Levenshtein (DL) distance \cite{damerau1964technique}. The DL distance is an enhanced version of edit distance that counts transpositions in addition to the insertions, deletions, and substitutions required to make the treatment list identical to the control list.
The DL distance has been used by prior audit work as a metric to estimate the ranking differences between two lists \cite{c2020there}. We normalize it to obtain a similarity score ranging from 0 (completely different lists) to 1 (identical lists). We refrain from using the RBO metric to determine personalization in up-next trails because RBO is suitable for indefinite lists, while the trails collected through our experiments have a known maximum length of five. We also refrain from using the Kendall tau metric since it requires the two ranked lists being compared to be conjoint\footnote{There are alternative versions of Kendall tau that assume the dissimilar elements to be present at the end of the list. However, conceptually, the metric does not fit our collected trail data.}. Given that the Jaccard, RBO, and normalized DL metrics return similarity values, we define personalization as:
\begin{equation}
1-similarity\_metric(URL_{incognito}, URL_{standard}).
\end{equation}
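A sketch of the DL-based similarity and the resulting personalization score. This is an illustrative implementation using the restricted (adjacent-transposition) variant of the distance; normalizing by the longer list's length is our assumption:

```python
def dl_distance(a, b):
    """Damerau-Levenshtein distance with adjacent transpositions:
    minimum insertions, deletions, substitutions, and swaps of
    neighboring elements needed to turn list a into list b."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]

def dl_similarity(a, b):
    """Map the distance to a 0-1 similarity (1 = identical lists)."""
    longest = max(len(a), len(b))
    return 1.0 if longest == 0 else 1 - dl_distance(a, b) / longest

def personalization(incognito, standard, similarity=dl_similarity):
    """Personalization = 1 - similarity(control, treatment)."""
    return 1 - similarity(incognito, standard)
```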
\subsection{RQ1a: Personalization in search results}
When asked in our study survey how much YouTube personalizes search results (Figure \ref{fig:persbelief_search}), 34.34\% believed YouTube personalizes search results to a great extent while 19.19\% believed the extent of personalization to be very little.
On quantitatively measuring the extent of personalization in YouTube search results, we found little to no personalization, indicating that the search results present in the standard and incognito windows are highly similar. Figures \ref{fig:serpjac} and \ref{fig:serprbo} show the extent of personalization in SERPs calculated using the Jaccard index and the RBO metric, respectively, for Democrats, Republicans, and Independents for each day of the experiment run.
We did not find any significant difference in the personalization values of SERPs for participants with respect to their political leaning.
\begin{figure*}
\begin{minipage}{\linewidth}
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{RQ1/per_belief_search_perc.pdf}
\caption{Participant's belief in extent of personalization in YouTube search results}\label{fig:persbelief_search}
\end{subfigure}\hfill
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{RQ1/pers_search_jacc.pdf}
\caption{Measuring extent of personalization in SERPs using jaccard index}\label{fig:serpjac}
\end{subfigure}\hfill
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{RQ1/pers_search_rbo.pdf}
\caption{Measuring extent of personalization in SERPs using RBO}\label{fig:serprbo}
\end{subfigure}
\end{minipage}
\caption{\textbf{RQ1a results:} Figure (a) shows participants' response to the survey question: ``How much, if at all, do you think YouTube personalizes search results''. Figures (b) and (c) show personalization calculated via Jaccard index values and RBO metric values, respectively, in YouTube's standard-incognito SERP pairs. We observe that search results are only slightly personalized, meaning that search results obtained from standard windows are very similar to the search results obtained from incognito windows.}
\label{search}
\Description{Figure (a) shows participants' response to the survey question: ``How much, if at all, do you think YouTube personalizes search results''. 19.2\% believe there is little personalization in YouTube search results, 46.5\% believe that search results are somewhat personalized, and 34.3\% believe that YouTube personalizes
search results to a great extent. Figures (b) and (c) show personalization calculated via jaccard index values and RBO metric values respectively in YouTube's standard-incognito SERP pairs for democrats, republicans, and independents. The figures are line graphs with the x-axis showing the nine days of the experiment run (day0-day8). The y-axis shows the magnitude of personalization. The magnitude of personalization is very low (near 0) for all days for all users. }
\end{figure*}
\begin{figure*}
\begin{subfigure}{0.34\textwidth}
\centering
\includegraphics[width=\textwidth]{RQ1/per_belief_trail_perc.pdf}
\caption{Participant's belief in extent of personalization in YouTube up-next recommendations}\label{fig:persbelief_trail}
\end{subfigure}
\hspace{2cm}
\begin{subfigure}{0.34\textwidth}
\centering
\includegraphics[width=\textwidth]{RQ1/overlap.pdf}
\caption{Distribution of percentage of up-next video recommendations coming from users' subscribed channels. }\label{fig:overlap}
\end{subfigure}\\
\begin{subfigure}{0.34\textwidth}
\centering
\includegraphics[width=\textwidth]{RQ1/trail_jac.pdf}
\caption{Measuring extent of personalization using jaccard index}\label{fig:pers_trail_jc}
\end{subfigure}
\hspace{2cm}
\begin{subfigure}{0.34\textwidth}
\centering
\includegraphics[width=\textwidth]{RQ1/trail_edit.pdf}
\caption{Measuring extent of personalization using DL index}
\label{fig:pers_trail_edit}
\end{subfigure}
\caption{\textbf{RQ1b results:} Figure (a) shows participants' response to the survey question: ``How much, if at all, do you think YouTube personalizes up-next recommendations''. Figure (b) shows the distribution of the percentage of YouTube videos recommended to our study participants from their subscribed channels. Figures (c) and (d) show personalization calculated via jaccard index values and DL distance metric values respectively in YouTube's standard-incognito up-next trails pairs. We observe that up-next recommendation trails are highly personalized. }
\label{trail}
\Description{Figure (a) shows participants' response to the survey question: ``How much, if at all, do you think YouTube personalizes up-next recommendations''. 9.1\% believe there is little personalization in YouTube's up-next recommendations, 39.4\% believe that up-next recommendations are somewhat personalized, and 51.5\% believe that YouTube personalizes
up-next recommendations to a great extent. Figure (b) shows the distribution
of the percentage of videos recommended to our participants in
up-next trails that are coming from their subscribed channels. For around 50\% of users, 10\% or fewer videos come from their subscribed channels. Figures (c) and (d) show personalization calculated
via jaccard index values and DL distance metric values respectively in YouTube's standard-incognito up-next trails pairs. The personalization value is very high (between 0.8-1) for the up-next trails collected from all the users.}
\end{figure*}
\subsection{RQ1b: Personalization in up-next trails}
When asked how much YouTube personalizes up-next recommendations, 51.5\% of participants believed that YouTube personalizes up-next recommendations to a great extent (see Figure \ref{fig:persbelief_trail}).
The quantitative measurements are in line with this belief, showing that up-next trails are highly personalized. Figures \ref{fig:pers_trail_jc} and \ref{fig:pers_trail_edit} show the extent of personalization in up-next trails using the Jaccard index and the DL distance. The graphs indicate that the up-next trails obtained from the users' standard and incognito windows are highly dissimilar and thus highly personalized. A statistical test revealed that the amount of personalization in trails with supporting, neutral, and opposing seeds is significantly different [F(2)=15.2, p<0.0001]. A post hoc test revealed that up-next trails with seed videos opposing misinformation have less personalization (a higher Jaccard index\footnote{The Jaccard index values obtained were highly correlated with the DL distance scores (Pearson correlation coefficient = 0.96). Thus, we used the Jaccard index values to perform the statistical test.}) when compared with up-next trails with supporting and neutral seed videos.
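A one-way ANOVA of this kind can be run with SciPy. The per-trail Jaccard values below are purely illustrative, not the study's data:

```python
from scipy.stats import f_oneway

# Hypothetical per-trail Jaccard indices grouped by seed-video stance
# (made-up values for illustration only).
supporting = [0.05, 0.10, 0.08, 0.12]
neutral = [0.07, 0.09, 0.11, 0.06]
opposing = [0.20, 0.25, 0.22, 0.18]

# F statistic and p-value for the null hypothesis that all three
# stance groups share the same mean personalization.
stat, pval = f_oneway(supporting, neutral, opposing)
```

A post hoc pairwise comparison (e.g., Tukey's HSD) would then locate which stance groups differ.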
Next, we checked the influence of users' subscriptions on personalized trails. 81 (out of 99) participants had subscribed to at least one YouTube channel (mean=109.4, median=31, SD=207.8).
The maximum number of subscriptions for a participant was 1073 and the minimum was 1. The participants had subscribed to 7670 unique channels, out of which 79 either did not exist or were suspended due to violation of YouTube's moderation policy; thus, we did not consider these channels for analysis. To determine how many video recommendations in users' up-next trails were coming from their subscriptions, first, for each user, we extracted the unique videos recommended in all the up-next trails collected for that user. Then we filtered and counted the videos coming from the user's subscribed channels. Figure \ref{fig:overlap} shows the distribution of the percentage of videos recommended to our participants in up-next trails that come from their subscribed channels. This percentage value is moderately correlated with the number of channels subscribed (r=0.61) and highly correlated with the number of news-related channels subscribed\footnote{To get a rough estimate of YouTube channels that broadcast news, we considered the news sources from \texttt{mediabiasfactcheck.com} and \texttt{allsides.com}. Additionally, we extracted the description of each channel and categorized it as a news channel if the description contained terms such as `breaking news', `politic*', `current affairs', `government', `national tv', `national news', `international news', `world news', `global news', `wall street', etc. These terms were curated by the first author after manually going through the descriptions of $\sim$50 national and regional news channels on YouTube. We found that 44 users had subscribed to news and politics-related channels.} (r=0.71).
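The per-user overlap statistic can be computed as follows. This is a sketch; representing recommendations as (video\_id, channel\_id) pairs is our assumption about the data shape:

```python
def pct_from_subscriptions(recommended, subscribed):
    """Percentage of a user's unique recommended videos whose channel
    appears in the user's subscription list.

    recommended: iterable of (video_id, channel_id) pairs collected
                 from all of the user's up-next trails.
    subscribed:  set of channel ids the user subscribes to.
    """
    # Deduplicate videos first; repeated recommendations count once.
    unique = {video: channel for video, channel in recommended}
    if not unique:
        return 0.0
    hits = sum(1 for channel in unique.values() if channel in subscribed)
    return 100.0 * hits / len(unique)
```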
\section{RQ2 Results: Amount of Misinformation} \label{rq2}
When asked how much they trust the credibility of videos in search results and recommendations, less than 20\% of participants reported that they trust the credibility of content shown to them by YouTube to a great extent (Figure \ref{cred_bel}). To determine how much credible information is presented by YouTube to users in reality, we quantify the misinformation present in the YouTube components by adopting the misinformation bias score developed by Hussein and Juneja et al. \cite{hussein2020measuring}.
The score quantifies the misinformation in a ranked list and is calculated as $\frac{\sum_{r=1}^{n} x_r \cdot (n - r + 1)}{\frac{n(n+1)}{2}}$, where $x_r$ is the annotation of the video at rank $r$ and $n$ is the total number of videos present in the SERP/up-next trail. To conform to the video annotation scale in \cite{hussein2020measuring}, we map our annotation values to a normalized scale of -1, 0, and 1. We assign scores of -1 and 1 to videos opposing and supporting election misinformation, respectively. Videos marked as irrelevant, neutral, belonging to a non-English language, or removed from the platform are assigned a score of 0. Thus, the misinformation bias score of a SERP/trail is a continuous value ranging from -1 (all videos oppose election misinformation) to +1 (all videos support election misinformation). Note that a positive score indicates a lean towards misinformation, while a negative score indicates a lean towards content opposing misinformation. For the analysis, we consider the top ten search results and the five consecutive videos in the up-next trails.
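A direct transcription of the score (our own helper for illustration):

```python
def misinfo_bias(annotations):
    """Rank-weighted misinformation bias of a SERP or up-next trail.

    annotations: list of normalized labels x_r in {-1, 0, 1}, ordered
    by rank r = 1..n. Each label is weighted by (n - r + 1), so videos
    at higher ranks contribute more; the score lies in [-1, 1].
    """
    n = len(annotations)
    weighted = sum(x * (n - r + 1) for r, x in enumerate(annotations, start=1))
    return weighted / (n * (n + 1) / 2)
```

For example, a three-video trail whose top result supports misinformation and whose remaining videos are neutral scores 0.5, reflecting the extra weight of the top rank.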
\begin{figure*}[]
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=\textwidth]{RQ2/cred_search.pdf}
\caption{Participants' trust in the credibility of information presented in search results}
\label{cred_serach}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=\textwidth]{RQ2/cred_vid.pdf}
\caption{Participants' trust in the credibility of information presented in up-next recommendations}
\label{cred_recom}
\end{subfigure}
\caption{\textbf{RQ2:} Figure showing participants' response to survey question: ``How much do you trust the credibility of information present in the '' a) search results and b) up-next videos recommended by YouTube.}
\label{cred_bel}
\Description{Figure (a) shows participants' responses to the survey question: How much do you trust the credibility of information present in the search results? 4\% say not at all, 16.2\% say very little, 60.6\% say somewhat, and 19.2\% say to a great extent. Figure (b) shows participants' responses to the survey question: How much do you trust the credibility of information present in the up-next videos recommended by YouTube? 7.1\% say not at all, 18.2\% say very little, 60.6\% say somewhat, and 14.1\% say to a great extent.}
\end{figure*}
\subsection{RQ2a: Misinformation in search results}
The results of RQ1 showed that YouTube's SERPs are only slightly personalized, suggesting that the search results present in the standard and incognito windows are mostly similar. Therefore, to quantify the misinformation bias in SERPs we only consider the SERPs obtained from the standard YouTube windows of all the participants. We first calculated the average misinformation bias score for each of the 88 search queries across the 9 days of the experiment run and all 99 participants. Figure \ref{dist} shows the distribution of misinformation bias scores for all the search queries. We observe that the average misinformation bias scores of 84 (out of 88) search queries are negative, indicating that the search results contain more videos that oppose election misinformation than videos supporting election misinformation\footnote{Only four search queries in our query set (`stop the steal', `voting machine fraud', `ballots in garbage', and `ballots thrown out') have a positive misinformation bias.}.
\begin{figure*}[]
\begin{minipage}{0.99\textwidth}
\begin{minipage}[]{0.5\textwidth}
\centering
\includegraphics[width=0.7\textwidth,keepaspectratio]{RQ2/query_misinfo_dist.pdf}
\captionof{figure}{\textbf{RQ2a results:} Mean misinformation bias scores for 88 search queries for all participants. A negative score indicates that SERPs contain more videos opposing election misinformation.}
\label{dist}
\Description{The figure shows the distribution of misinformation bias scores for 88 search queries for all participants. The majority of search results have a negative score indicating that SERPs contain more videos opposing election misinformation.}
\end{minipage}\hfill
\begin{minipage}[]{0.45\textwidth}
\centering
\small
\begin{tabular}{p{6.5cm}}
\hline
\rowcolor[HTML]{D5D3D3}
\textbf{Cluster1: Search queries containing keyword fraud in conjunction with keywords voter, election, and dominion} \\ \hline
voter fraud evidence, dominion voter machine scandal, sharpie voter fraud, election fraud 2020, election fraud whistleblower \\ \hline
\rowcolor[HTML]{D5D3D3}
\textbf{Cluster2: Search queries containing keywords election, and 2020} \\ \hline
trump biden general election, presidential election 2020, presidential election results 2020, mail in ballots 2020 \\ \hline
\end{tabular}
\captionof{table}{The misinformation bias scores form a bimodal distribution, with each mode constituting a cluster of similar queries. This table describes the clusters and presents sample queries for each cluster.}
\label{tab:clusters}
\end{minipage}
\end{minipage}
\end{figure*}
\begin{figure*}[]
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{RQ2/top_bias_queries.pdf}
\caption{Search queries with highest and lowest mean misinformation bias scores}
\label{rank}
\Description{The figure shows search queries with the highest (stop the steal, voting machine fraud, ballots in garbage, ballots thrown out, us elections 2020 pennsylvania) and lowest (voter fraud claims, voter fraud evidence, ballot fraud, electoral fraud, ballot box fraud) mean misinformation bias scores}
\end{subfigure}
\hspace{1cm}
\begin{subfigure}[b]{0.35\textwidth}
\includegraphics[width=\textwidth]{RQ2/serp_bias.pdf}
\caption{Misinformation bias scores of search queries for each day of experiment run}
\label{fig:irdbias}
\Description{ Figure shows the misinformation bias scores (y-axis) for the participants belonging to the different
political leanings for the days of the experiment run (x-axis). The bias scores coincide for all users on all days.}
\end{subfigure}
\caption{\textbf{RQ2a results:} a) Search queries with highest (labeled in red) and lowest (labeled in blue) mean misinformation bias scores. Positive misinformation bias scores indicate a lean toward misinformation whereas negative bias scores indicate a lean toward information that opposes misinformation. b) Distribution of misinformation bias scores of search queries for democrats, republicans, and independents. Note that the bias scores for the participants belonging to the different political leanings coincide, indicating that the misinformation bias in SERPs remains constant for every participant throughout the study.}
\vspace{-0.4cm}
\end{figure*}
Furthermore, we observe in Figure \ref{dist} that the misinformation bias scores of the SERPs form a bimodal distribution constituting two clusters of search queries (Table \ref{tab:clusters}). The cluster1 search queries have the most negative bias, i.e., they contain more opposing videos. This cluster mostly consists of search queries containing the keyword \textit{fraud} in conjunction with the keywords \textit{voter}, \textit{election}, and \textit{dominion}. Cluster2, on the other hand, consists of search queries with the keywords \textit{election} and \textit{2020}. Overall, cluster1 consists of more search queries aimed at finding misinformation than cluster2. This suggests that YouTube pays closer attention to search queries about election fraud and ensures that users are exposed to opposing videos when searching about fraudulent claims surrounding the elections.
Figure \ref{rank} shows the five search queries with the highest and the five with the lowest misinformation bias. The search query `voter fraud claims' has the least misinformation bias, indicating that most of the search results for this query oppose election misinformation. On the other hand, the search query `stop the steal' has the largest number of videos supporting election fraud claims. Next, we determine how misinformation bias scores in SERPs vary for democrats, independents, and republicans. Figure \ref{fig:irdbias} shows that the bias values for democrats, independents, and republicans coincide on all days, indicating that the amount of misinformation bias is almost constant across days for all participants irrespective of their partisanship. {Overall, our RQ2 results indicate that YouTube pushes debunking information in search results, more so for search queries about voter fraud claims than for generic queries about the presidential elections.}
\begin{figure*}
\hspace{-3cm}
\begin{minipage}[]{0.55\linewidth}
\centering
\includegraphics[width=\linewidth]{RQ2/misinf_scores.pdf}
\end{minipage}
\begin{minipage}[]{0.20\linewidth}
\centering
\begin{footnotesize}
\begin{tabular}{l|l|l}
\hline
Pol. aff. & Statistical tests & Mean diff. \\ \hline
Democrats & F(2,3407)=4035.1 , p=0 & S>N>O \\ \hline
Republicans & F(2,2265)=2981.4, p=0 & S>N>O \\ \hline
Independents & F(2,2941)=3593.8, p=0 & S>N>O
\end{tabular}
\end{footnotesize}
\label{tab:my-table}
\end{minipage}
\caption{\textbf{RQ2b results:} Mean misinformation scores of standard up-next trails with seed videos that are supporting (S), neutral (N), or opposing election misinformation (O) for Democrats, Independents, and Republicans. A positive misinformation score indicates a lean toward misinformative content while a negative score indicates a lean toward content that opposes election misinformation. Statistical tests reveal a significant difference in the amount of misinformation contained in up-next trails. Democrats, republicans, and independents all encounter more misinformation in supporting trails than in neutral trails, and more misinformation in neutral trails than in opposing trails.}
\label{tab:misinfo scores}
\Description{The mean misinformation score of trails with supporting seeds for democrats, republicans, and independents is 0.28, 0.31, and 0.33 respectively. The mean misinformation score of trails with neutral seeds for democrats, republicans, and independents is -0.02, 0.01, and -0.01 respectively. The mean misinformation score of trails with opposing seeds for democrats, republicans, and independents is -0.49, -0.49, and -0.51 respectively.}
\vspace{-13pt}
\end{figure*}
\subsection{RQ2b: Misinformation in up-next trails}
The results of RQ1 showed that participants' up-next trails are highly personalized.
In other words, videos in up-next trails obtained from the standard window are different from videos in trails obtained from the incognito window.
Recall that trails extracted from the incognito window act as baseline unpersonalized trails while trails extracted from the standard window, where users had signed into their accounts, act as personalized treatment trails. Therefore, to determine the impact of personalization on the amount of misinformation in up-next trails, we compare the misinformation bias scores of trails collected in standard windows with the trails collected in incognito windows. We find that the difference in misinformation bias scores of standard and incognito up-next trails is not significant (t=-0.62, p=0.53). This means that although the standard up-next trails are very different from the incognito up-next trails, there is no difference in the amount of misinformation present in them. To avoid inflating our sample size, for further downstream analysis, we only consider up-next trails obtained from participants' standard windows. A similar strategy was adopted by Robertson et al. for analyzing bias in Google search results when they did not see any significant difference in the amount of partisan bias in incognito-standard SERP pairs \cite{robertson2018auditing}.
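The standard-versus-incognito comparison reduces to a t-test over per-trail bias scores; a stdlib-only sketch of the test statistic, assuming the incognito-standard trails are compared pairwise (the scores below are made-up placeholders, not the study's data):

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(a, b):
    """t statistic of a paired t-test (H0: the mean of the pairwise
    differences is zero), with len(a) - 1 degrees of freedom."""
    diffs = [x - y for x, y in zip(a, b)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

# Hypothetical bias scores for five standard/incognito trail pairs.
standard = [-0.20, 0.10, -0.50, 0.30, -0.10]
incognito = [-0.25, 0.05, -0.45, 0.35, -0.05]
t = paired_t(standard, incognito)  # small |t| -> no significant difference
```

A |t| this small, at the corresponding degrees of freedom, fails to reject the null of equal misinformation in the two window types, mirroring the t=-0.62, p=0.53 result reported above.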
\subsubsection{Misinformation in standard up-next trails for different scenarios} In this section, we
determine the amount of misinformation encountered by our study participants in the standard up-next trails for seed videos with different stances on election misinformation---supporting, neutral, and opposing.
Figure \ref{tab:misinfo scores} shows the mean misinformation scores of different up-next trails
collected from the standard windows of democrats, republicans, and independents. Recall that a positive misinformation score (>0) indicates a lean toward misinformation, while a negative misinformation score indicates a lean toward information that opposes election misinformation. We conduct within-group statistical tests to determine the difference in misinformation for the three scenarios (following trails for supporting, neutral, and opposing seed videos). The tests indicate a filter bubble effect. If users watch supporting videos, they are led to more supporting videos in the trails. If they watch neutral videos, they are led to less misinformation than when they watched supporting videos. And if users watch opposing videos, they are led to more opposing videos in the up-next trails. The same trend is observed for democrats, republicans, and independents.
Is the amount of misinformation in trails with different seeds different for democrats, republicans, and independents?
Between-group statistical tests reveal that the amount of misinformation in supporting trails (KW H(2)=11.9, p=0.002) and neutral trails (KW H(2)=8.69, p=0.01) for democrats, independents, and republicans is significantly different. We find that independents in our sample receive more misinformation in their supporting trails than democrats. Additionally, republicans receive more misinformation in their neutral trails than democrats.
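The within-group F tests reported in the table can be sketched with a stdlib one-way ANOVA statistic; feeding one partisan group's supporting, neutral, and opposing trail scores as the three groups yields an F(2, n-3) statistic like those reported above (the between-group comparisons instead use the rank-based Kruskal--Wallis test, which does not assume normality):

```python
from statistics import mean

def anova_f(groups):
    """One-way ANOVA F statistic: between-group mean square divided by
    within-group mean square, for k groups and n total observations."""
    values = [v for g in groups for v in g]
    grand = mean(values)
    k, n = len(groups), len(values)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - mean(g)) ** 2 for g in groups for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

This is a sketch of the statistic only; assessing significance additionally requires the F distribution with (k-1, n-k) degrees of freedom.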
Overall, observing Figure \ref{tab:misinfo scores}, we see that the misinformation scores of supporting trails are positive while those of opposing trails are negative. However, the magnitude of the scores for opposing trails is much larger than for supporting trails, indicating that the filter bubble effect was stronger when our study participants watched videos opposing election misinformation.
\begin{figure*}
\centering
\hspace{-0.8cm}
\begin{subfigure}{.32\linewidth}
\centering
\includegraphics[width=.8\linewidth]{RQ2/s_mnew.pdf}
\caption{Mean \% of transitions in trails with seed videos supporting elec. misinfo.}
\label{sm}
\end{subfigure}\hfill
\begin{subfigure}{.32\linewidth}
\centering
\includegraphics[width=.8\linewidth]{RQ2/n_mnew.pdf}
\caption{Mean \% of transitions in trails with neutral seed videos}
\label{nm}
\end{subfigure}\hfill
\begin{subfigure}{.389\linewidth}
\centering
\includegraphics[width=.8\linewidth]{RQ2/o_mnew.pdf}
\caption{Mean \% of transitions in trails with seed videos opposing elec. misinfo.}
\label{om}
\end{subfigure}
\caption{\textbf{RQ2b results:} Mean percentage of various transitions present in the standard up-next trails of democrats, independents, and republicans. S represents a video supporting election misinformation, N represents a neutral video and O represents a video opposing election misinformation. Transition S->S denotes that a YouTube video supporting election misinformation leads to an up-next video recommendation supporting election misinformation.}
\label{tab:transitions}
\Description{The figure shows the mean percentage of transitions (S->S, S->N, S->O, N->S, N->N, N->O, O->S, O->N, and O->O) in trails with seed videos that are promoting, neutral, and opposing. The percentage of N->N transitions are highest in all trails collected from all users. For the opposing trails, O->N and O->O transitions are around 20\% for all users. Problematic transition N->S is 2.67\%, 3.78\%, and 4.26\% in neutral trails of democrats, republicans, and independents.}
\end{figure*}
\subsubsection{Transitions in standard up-next trails}
In this section, we gain more insight into the anatomy of YouTube's up-next trails by studying the various transitions present in them. This allows us to determine how users get pushed towards misinformative or debunking videos in the trails. Since our annotation scale consists of three values, supporting (S), neutral (N), and opposing (O), there are 9 possible transitions in the trails (S->S, S->N, S->O, N->S, N->N, N->O, O->S, O->N, O->O). For each participant, we first individually determine the percentage of each of these transitions present in the three types of standard up-next trails collected (those starting with a supporting, neutral, or opposing seed video). Then we calculate the mean percentage of each of these transitions for democrats, independents, and republicans. From Figure \ref{tab:transitions}, we see that the most frequent transition across all participants and all types of up-next trails is N->N. Problematic transitions like S->S and O->S constitute less than 2\% in the trails of all users. However, S->S transitions are comparatively more frequent in the supporting up-next trails of independents (1.78\%) than in those of democrats (0.38\%) and republicans (0.86\%). In the neutral up-next trails of republicans and independents, N->S transitions dominate (after N->N transitions), indicating that independents and republicans are sometimes led to supporting videos in their up-next recommendations even when they are viewing neutral YouTube videos. We also observe that the opposing up-next trails consist mostly of transitions O->N and N->O (after N->N transitions), indicating that once a user watches a video that opposes election misinformation, YouTube pushes more videos that are either neutral or opposing in stance in the up-next trails of all the participants.
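The per-trail transition tally can be sketched as follows, labelling each trail video S, N, or O and counting adjacent pairs (the label sequence in the test is a hypothetical example):

```python
from collections import Counter

def transition_percentages(labels):
    """Percentage of each S/N/O transition among adjacent pairs in one
    up-next trail. labels: sequence over {'S', 'N', 'O'}, seed excluded."""
    pairs = Counter(zip(labels, labels[1:]))
    total = sum(pairs.values())
    return {f"{a}->{b}": 100.0 * c / total for (a, b), c in pairs.items()}
```

Averaging these per-trail percentages over a partisan group's trails of a given seed type yields the mean percentages plotted in Figure \ref{tab:transitions}.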
We also observe that S->O transitions are fewer than S->N transitions in the supporting trails of democrats, republicans, and independents. Previous work has shown that watching YouTube videos that debunk misinformation helps in bursting filter bubbles of misinformation \cite{tomlein2021audit}. Our work also shows that opposing videos can lead to more opposing videos (O->O transitions in opposing trails). Thus, increasing the number of S->O transitions could lead users to trustworthy information on the platform.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{RQ2/misinf_scores_home.pdf}
\caption{\textbf{RQ2c results:} Figure showing the average change in the amount of bias present in homepages because of watching a trail of up-next videos starting with either supporting, opposing, or neutral seed videos for democrats, republicans, and independents.}
\label{tab:delta}
\Description{Figure showing the average change in the amount of bias present in homepages because of watching a trail of up-next videos starting with either supporting, opposing, or neutral seed videos for democrats, republicans, and independents. All the values are equal to or less than 0.01.}
\vspace{-0.5cm}
\end{figure}
\begin{figure*}[]
\centering
\begin{subfigure}[]{.4\textwidth}
\centering
\includegraphics[width=\linewidth,keepaspectratio]{RQ3/impression_search.pdf}
\caption{}
\label{impr_search}
\end{subfigure}
\begin{subfigure}[]{.5\textwidth}
\centering
\includegraphics[width=\linewidth,keepaspectratio]{RQ3/trail_impr.pdf}
\caption{}
\label{impr_trail}
\end{subfigure}
\caption{ \textbf{RQ3 results:} a) Top-10 YouTube channels with impressions in the largest number of search queries for all study participants. For example, on average CNN appears in 61.86\% of search queries for all our study participants. b) Average number of impressions for the Top-10 YouTube channels that appear in the largest number of standard up-next trails collected for users. For example, on average, videos from the Fox News channel appear 3.27 times in those up-next trails where videos from the channel are observed. \fcolorbox{labell}{labell}{\rule{0pt}{2pt}\rule{2pt}{0pt}} is a left-leaning channel, \fcolorbox{labelr}{labelr}{\rule{0pt}{2pt}\rule{2pt}{0pt}} is right-leaning and \fcolorbox{labelc}{labelc}{\rule{0pt}{2pt}\rule{2pt}{0pt}} is center-leaning.}
\label{fig:imp}
\Description{Figure (a) shows the Top-10 YouTube channels with impressions in the largest number of search queries for all study participants. The channels are in order CNN (61.86\%), NBC News (39.85\%), CBS News (33.64\%), CNBC Television (31.36\%), 60 minutes (29.51\%), MSNBC (27.79\%), Fox news (27.08\%), PBS NewsHour (26.27\%), 11Alive (24.72\%), Today (23.88\%). Figure (b) shows the average number of impressions for the Top-10 YouTube channels that appear in the largest number of standard up-next trails collected for users. The channels are in order LastWeekTonight, Saturday Night Live (4), Fox News (3.6), Late Night with Seth Meyers (3.27), The Late Show with Stephen Colbert (2.98), Jimmy Kimmel Live (2.65), NBC News (2.02), Sky News Australia (1.93), Fox Business (1.92), and PBS NewsHour (1.71)}
\end{figure*}
\subsection{RQ2c: Misinformation in homepages}
We collected participants' YouTube homepages to determine how the bias in the homepage changes ($\delta$) after watching a trail of videos starting with a seed video that is either supporting (${\delta}_{S}$), neutral (${\delta}_{N}$), or opposing (${\delta}_{O}$) in stance with respect to election misinformation. We calculated the impact of trails using the following formula:
${\delta}_{stance}$ = \textit{Misinformation} $score_{Homepage\_after\_the\_trail}$ - \textit{Misinformation} $score_{Homepage\_before\_the\_trail}$
${\delta}_{S}$, ${\delta}_{N}$, and ${\delta}_{O}$ represent the change in the amount of bias present in homepages because of watching a trail of up-next videos starting with supporting, neutral, and opposing seeds respectively. A negative $\delta$ indicates that the YouTube homepage collected after the trail contained more opposing videos than the YouTube homepage before the trail. A positive $\delta$, on the other hand, indicates either the presence of more videos supporting election misinformation or fewer opposing videos on the homepage collected after the trail compared to the homepage collected before it. We consider the top ten recommendations present on the homepage for analysis. Figure \ref{tab:delta} shows $\delta$ values for all three kinds of trails for democrats, republicans, and independents. We discuss a few results below.
We observe that after following the up-next video trails starting from a neutral seed, the homepages of democrats and independents contain more supporting videos. However, recall that the average misinformation score of the up-next trails with neutral seeds was negative for both democrats and independents (Figure \ref{tab:misinfo scores}). This indicates that although the up-next trails with neutral seeds lead users to more opposing videos, the homepages nevertheless contain more misinformation, or fewer opposing videos, after the trail. We also observe that after watching up-next trail videos with a supporting seed, republicans' homepages contain more opposing videos (Figure \ref{tab:delta}) even though the trail itself contained more misinformation (Figure \ref{tab:misinfo scores}). However, note that the magnitude of $\delta$ is low in all the conditions, indicating that few videos supporting or opposing election misinformation appear on the participants' homepages.
\section{RQ3: Composition and Diversity} \label{rq3}
In this research question, we want to characterize source diversity on YouTube when users search for election misinformation on the platform. Source diversity in searches and recommendations is an important characterization of fairness \cite{ge2021towards}. Furthermore, given that the narratives about election misinformation were closely intertwined with news sources and their leanings, it is important to determine what kinds of YouTube channels users are exposed to. News and media diversity can be characterized in multiple ways \cite{joris2020news}. One typology characterizes media diversity with respect to \textit{source} (content providers), \textit{content} (perspectives), and \textit{exposure} (actual consumption of diverse content) \cite{napoli2011exposure,trielli2019search}. Our work analyzed content diversity in RQ2 by analyzing each video's stance on election misinformation. We cannot study exposure diversity since it requires determining the actual content consumed (clicked, watched, etc.) by our study participants in their naturalistic settings. For this study, we focus on source diversity in terms of the identity of top content providers (YouTube channels) and the distribution and concentration of channels in the standard SERPs and up-next trails. We acknowledge that future studies should also examine the ideological position of news sources and study the filter bubbles of partisan content on the platform.
\begin{figure*}[]
\begin{minipage}{\linewidth}
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{RQ3/gini_d.pdf}
\caption{Democrats}\label{fig:1a}
\end{subfigure}\hfill
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{RQ3/gini_r.pdf}
\caption{Republicans}\label{fig:1b}
\end{subfigure}\hfill
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{RQ3/gini_i.pdf}
\caption{Independents}\label{fig:1b}
\end{subfigure}
\end{minipage}
\caption{\textbf{RQ3a results:} Distribution of Gini coefficients for all
search queries (n=88) for a) Democrats, b) Republicans, and c) Independents, calculated based on the distribution of impressions of YouTube channels appearing in the search results.
}
\label{gini:search}
\Description{Figure a shows the gini index distributions for search queries for democrats, 54.5\% search queries have gini between 0-0.1. 33\% have gini between 0.1-0.2. Figure b shows the gini index distributions for search queries for republicans, 54.5\% search queries have gini between 0-0.1. 33\% have gini between 0.1-0.2. Figure c shows the gini index distributions for search queries for democrats, 56.8\% search queries have gini between 0-0.1. 30.7\% have gini between 0.1-0.2.}
\end{figure*}
\begin{figure*}
\begin{minipage}{\linewidth}
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{type2/dp.pdf}
\caption{Democrats (supp. trails)}\label{dp}
\end{subfigure}\hfill
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{type2/rp.pdf}
\caption{Republicans (supp. trails)}\label{rp}
\end{subfigure}\hfill
\begin{subfigure}{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{type2/ip.pdf}
\caption{Independents (supp. trails)}\label{ip}
\end{subfigure}
\end{minipage}
\begin{minipage}{\linewidth}
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{type2/dn.pdf}
\caption{Democrats (neutral trails)}\label{dn}
\end{subfigure}\hfill
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{type2/rn.pdf}
\caption{Republicans (neutral trails)}\label{rn}
\end{subfigure}\hfill
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{type2/in.pdf}
\caption{Independents (neutral trails)}\label{in}
\end{subfigure}
\end{minipage}
\begin{minipage}{\linewidth}
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{type2/dd.pdf}
\caption{Democrats (oppos. trails)}\label{dd}
\end{subfigure}\hfill
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{type2/rd.pdf}
\caption{Republicans (oppos. trails)}\label{rd}
\end{subfigure}\hfill
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{type2/id.pdf}
\caption{Independents (oppos. trails)}\label{id}
\end{subfigure}
\end{minipage}
\caption{\textbf{RQ3b results:} Figure showing the top YouTube channels appearing in supporting, neutral, and opposing trails of democrats, republicans, and independents and the percentage of users in whose trails these channels appear. \fcolorbox{labell}{labell}{\rule{0pt}{2pt}\rule{2pt}{0pt}} is a left-leaning channel, \fcolorbox{labelr}{labelr}{\rule{0pt}{2pt}\rule{2pt}{0pt}} is right-leaning and \fcolorbox{labelc}{labelc}{\rule{0pt}{2pt}\rule{2pt}{0pt}} is center-leaning.}
\label{div:trails}
\Description{ Figure (a) shows the top YouTube channels appearing in supporting trails of Democrats (Fox news, NBC, News, CBS News). Figure (b) shows the top YouTube channels appearing in supporting trails of republicans (Fox news, Fox Business, Sky News Australia). Figure (c) shows the top YouTube channels appearing in supporting trails of independents (Fox news, NBC news, PBS Newshour). Figure (d) shows the top YouTube channels appearing in neutral trails of Democrats (Fox news, PowerfulJRE, Fox Business). Figure (e) shows the top YouTube channels appearing in neutral trails of republicans (Fox news, Sky News Australia, PowerfulJRE). Figure (f) shows the top YouTube channels appearing in neutral trails of independents ((Fox news, PowerfulJRE, Fox Business)).
Figure (g) shows the top YouTube channels appearing in opposing trails of Democrats (Saturday Night Live, Last Week Tonight, Late Night with Seth Meyers). Figure (h) shows the top YouTube channels appearing in opposing trails of republicans (Saturday Night Live, Last Week Tonight, Late night with Seth Meyers). Figure (i) shows the top YouTube channels appearing in opposing trails of independents (Saturday Night Live, Last Week Tonight, Late night with Seth Meyers).}
\vspace{-0.5cm}
\end{figure*}
\subsection{RQ3a: Diversity in search results}
For analysis, we consider the top ten search results in standard SERPs. Figure \ref{impr_search} shows the top 10 YouTube channels with impressions in the largest number of search queries.\footnote{The top 10 YouTube channels and their mean percentage of total impressions were almost identical when calculated separately for democrats, republicans, and independents. Thus, we show the overall distribution for all users combined.} Here, we define an impression as the occurrence of a channel's video in a SERP.
We observe that the left-leaning channel CNN on average appears in the SERPs of more than half of the search queries (61.86\%). Additionally, except for Fox News and 11Alive, all other top channels are left-leaning. We further analyzed which channels were responsible for the most relevant YouTube videos in our collected data. In our standard SERPs, we obtained a total of 4901 unique videos, of which 1940 (39.51\%) were relevant, i.e., related to the elections (959 opposing, 865 neutral, and 103 supporting). Overall, among these relevant videos, most come from CNN and MSNBC. The most opposing videos come from MSNBC followed by CNN, the most supporting videos come from Fox News followed by Daily Mail, while the most neutral videos come from NBC News followed by CNN. Given that CNN is one of the channels with the most opposing videos, it is encouraging to see that it has the most search query impressions.
Next, we determine the source diversity in the SERPs using the Gini coefficient \cite{ge2021towards,xiao2019beyond,trielli2019search}. The Gini coefficient measures inequality in a frequency distribution; in our case, we use it to measure inequality in the distribution of YouTube channel impressions. For a given SERP consisting of videos from \textit{n} unique channels with a list of impressions [$g_1$, $g_2$,...,$g_n$] for all YouTube channels, the Gini coefficient is calculated as
\textit{Gini coefficient} (G) = $\frac{1}{2\bar{g}n^2}$ $\Sigma^{n}_{i=1}$ $\Sigma^{n}_{j=1}$ |$g_i$ - $g_j$|, where $\bar{g}$ is the mean of all impressions.
A fairer search engine would have lower values of the Gini coefficient, indicating a uniform distribution of YouTube channel impressions. Figure \ref{gini:search} shows the distribution of Gini coefficients over all SERPs for democrats, republicans, and independents. The distributions are similar for users with different political leanings. Furthermore, for approximately 96\% of search queries, the Gini coefficient of the SERPs is less than 0.3, indicating that YouTube mostly distributes videos from different channels evenly in its search results.
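The mean-absolute-difference form of the Gini coefficient above translates directly to code; a sketch over a list of per-channel impression counts for one SERP:

```python
from statistics import mean

def gini(impressions):
    """Gini coefficient G = (1 / (2 * n^2 * g_bar)) * sum_i sum_j |g_i - g_j|
    over the impression counts of the n unique channels in one SERP."""
    n = len(impressions)
    g_bar = mean(impressions)
    abs_diffs = sum(abs(gi - gj) for gi in impressions for gj in impressions)
    return abs_diffs / (2 * n * n * g_bar)
```

A SERP where every channel contributes the same number of videos scores 0, while one dominated by a single channel approaches the maximum of (n - 1)/n.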
\subsection{RQ3b: Diversity in up-next trails}
Overall, we collected 6943 videos in standard trails, of which 1082 are relevant, i.e., related to the elections. The most opposing videos in trails come from the channels MSNBC and Late Night with Seth Meyers*, the most supporting videos come from Fox News* and Fox Business, and the most neutral videos come from Fox News* and NBC News\footnote{* indicates that seed videos of our experiments also belonged to these channels.}. Next, we determine the top ten YouTube channels occurring in the standard trails. Note that we do not consider the seed videos while analyzing the trails. Figure \ref{impr_trail} shows the average number of impressions of the top 10 channels appearing most often in the trails.
Here, an impression indicates the number of occurrences of a channel's videos in a trail, considering only trails containing videos from that channel. Note that the top channels are also the channels of some of the seed videos in our dataset. The figure reveals that on average, videos from LastWeekTonight, Saturday Night Live, and Fox News appear more than 3 times in a trail, taking into account all the trails where the channel was observed. This finding indicates that videos from these channels lead to more videos from the same channels in the up-next recommendations.
Next, to determine the diversity in trails, we compute the proportion of channels in the trails that are different from the channel of the seed video. We find that, on average, an up-next trail of length five contains 2.07 YouTube channels other than the channel of the seed video. The number of non-seed channels in up-next trails is lowest for trails with seed videos from Saturday Night Live (0.85), LastWeekTonight (0.86), and Late Night with Seth Meyers (1.07). Note that we did not calculate this metric for supporting, neutral, and opposing seeds separately since the channels of our supporting, opposing, and neutral videos are not unique. For example, we have a supporting as well as a neutral seed from Fox News. Given this scenario, there is no way to determine whether the videos appearing in the trails are due to the channel lean of the seed video or because of other factors. We also refrain from determining the diversity in up-next trails using the Gini coefficient since several trails had just one or two unique channels (M=3.1, SD=1.46), in which case the Gini coefficient would not give a good representation of diversity.
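The diversity metric described above reduces to counting the distinct channels in a trail that differ from the seed video's channel. A minimal sketch, with a hypothetical function name and example channel lists of our own (not from the audit data):

```python
def non_seed_channels(seed_channel, trail_channels):
    """Number of distinct channels in an up-next trail that differ
    from the seed video's channel (the seed video itself is excluded
    from trail_channels by construction)."""
    return len({c for c in trail_channels if c != seed_channel})

# A trail of length five where only two non-seed channels appear:
print(non_seed_channels(
    "Fox News",
    ["Fox News", "NBC News", "Fox News", "MSNBC", "NBC News"]))  # 2
```

Averaging this count across trails yields the 2.07 figure reported above; per-seed-channel averages (e.g., 0.85 for Saturday Night Live) follow by grouping trails on their seed channel.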
To get a sense of what kinds of channels are presented to users in the up-next trails, we determine the channels appearing in the largest number of trails of democrats, republicans, and independents for trails with supporting, neutral, and opposing seeds (Figure \ref{div:trails}). We observe that Fox News appears in up-next trails with supporting and neutral seeds of all users. Fox Business and Sky News Australia appear in both the supporting and neutral up-next trails of more than half of the republicans (Figure \ref{rp}, and \ref{rn}). None of the seed videos belonged to these channels, yet they still appear in the up-next trails. Similarly, Sky News Australia also appears in the neutral up-next trails of 44.12\% of independents (Figure \ref{in}) despite no neutral seed belonging to the channel. Furthermore, PowerfulJRE (Joe Rogan's YouTube channel) did not appear in the neutral up-next trails of any user, even though two neutral seed videos belonged to the channel (Figure \ref{dn}, \ref{rn} and \ref{in}). On the other hand, the top channels appearing in the up-next trails with opposing seeds of all users (Figure \ref{dd}, \ref{rd} and \ref{id}) are the channels of the opposing seed videos used in our experiment. Furthermore, three channels out of the top four appear in the trails of more than 96\% of the users. This indicates that watching a video belonging to these left-leaning channels will probably lead to one or more videos from the same channel in the up-next recommendation trail.
\section{Discussion} \label{disc}
In this paper, we conduct a crowd-sourced audit of the YouTube platform to determine
how effectively the platform removed election misinformation from its various components. We discuss the implications of our findings below.
\subsection{Standardization of search results}
We find little to no personalization in the search results. We also did not find any effect of personalization on the amount of misinformation returned in search results. Throughout the study period, the amount of personalization and misinformation remained constant in the searches. On analyzing the standard SERPs, we find that YouTube returns more videos opposing election misinformation for 95\% of the search queries that we tested. Interestingly, we see that misinformation scores of search queries with a misinformation lean (e.g., dominion voter fraud) are more negative than the misinformation scores of queries that are neutral in stance (e.g., presidential election 2020). This finding implies that YouTube has paid more attention to queries with a misinformation lean and ensured that users are exposed to more debunking information when they search about the fraudulent claims surrounding the elections. This selective attention is also in line with the results of past audits that showed YouTube improving the recommendations of topics like vaccination over 9/11 conspiracies \cite{hussein2020measuring}.
Our analysis also indicates that the Gini index of 96\% of search queries is less than 0.3, with $\sim$54\% of queries having a Gini index of less than 0.1. Such low values of the Gini index imply that YouTube is ensuring source diversity in searches by evenly distributing videos from different channels in its SERPs. Furthermore, the distribution of Gini coefficients was similar for all users irrespective of their partisanship. This finding indicates YouTube's attempt to expose users to videos from different channels rather than a select few based on participants' partisanship. Interestingly, in line with a previous audit of Google Search \citep{trielli2019search}, we find that CNN is one of the top channels, whose videos appear in 61.8\% of search queries. Future studies can test whether this dominance is due to emergent bias or to strategies adopted by the channel to enhance algorithmic visibility \cite{trielli2019search}. {Overall, our analysis reveals that YouTube's search results are largely unpersonalized and that the platform has had varying levels of success in removing misinformation and presenting videos that debunk election-related falsehoods across different clusters of search queries.}
\subsection{Scope for improvement in up-next trail recommendations}
We find that up-next trails are highly personalized. However, for 50\% of the users, only up to 10\% of the videos in the up-next recommendations come from users' subscribed channels. Future audit studies should further investigate the impact of users' channel subscriptions (both news and non-news channels) on the platform's recommendations. We also find that there is no significant difference in the amount of misinformation that users are exposed to in up-next recommendation trails in the signed-in standard window and the unpersonalized incognito window. On examining the standard up-next trails, we do find an echo-chamber effect. Users, irrespective of their partisanship, receive more misinformation in the up-next trails with supporting seeds as compared to the trails with neutral and opposing seeds (Table \ref{tab:misinfo scores}). We also observe that the magnitude of misinformation scores of trails with opposing seeds is greater than that of trails with supporting seeds. This implies that users are exposed to a small number of misinformative videos when they follow the up-next recommendations of a video supporting election misinformation. On the other hand, users are exposed to a larger number of opposing videos in the opposing up-next trails. This key finding is also supported by prior work showing that echo chambers of misinformation can be burst by watching debunking videos \cite{tomlein2021audit}. The platform can leverage this phenomenon by making its recommendation engine present more debunking videos to users, which would then expose them to more credible videos in the recommendation trails.
We also examine various transitions in the up-next trails to study how users get pushed towards misinformation. Overall, we observe that problematic transitions, where a supporting video is recommended in the up-next recommendations of a supporting (S->S) or opposing (O->S) video, account for less than 2\% of transitions. However, S->S transitions are more frequent in trails with supporting seeds for independents than for democrats and republicans. Furthermore, N->S transitions are also frequent in up-next trails with neutral seeds for independents. These findings are problematic. Showing misinformative videos to independents, who might not have developed a strong opinion on the election fraud conspiracies, could increase their chances of forming a pro-conspiracy belief. We also observe that N->S transitions are more frequent for republicans in the up-next trails with neutral seeds (3.78\%) than in trails with supporting seeds (1.61\%). This finding is again troublesome. Past studies have indicated that republicans are more susceptible to electoral fake news \cite{Republic19:online}. Thus, recommending videos supporting election misinformation to republicans watching neutral videos would expose them to more misinformation, which might reinforce or lead to forming conspiratorial beliefs.
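The transition analysis above amounts to tallying adjacent label pairs along each annotated trail. A minimal sketch, assuming each video in a trail has already been labeled S (supporting misinformation), N (neutral), or O (opposing); the function name and example trail are ours:

```python
from collections import Counter

def transition_counts(trail_labels):
    """Count adjacent label transitions (e.g. 'S->S', 'N->S') along
    one annotated up-next trail. Each element of trail_labels is
    'S', 'N', or 'O'; a trail of k videos yields k - 1 transitions."""
    return Counter(f"{a}->{b}"
                   for a, b in zip(trail_labels, trail_labels[1:]))

# A neutral seed followed by four up-next hops:
counts = transition_counts(["N", "N", "S", "O", "S"])
print(counts["N->S"], counts["O->S"])  # 1 1
```

Aggregating these counters over all trails of a user group, and normalizing by the total number of transitions, gives per-group transition rates such as the 3.78\% N->S figure for republicans reported above.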
On analyzing the up-next trails for channel diversity, we observe several interesting phenomena. First, the number of impressions for left-leaning late-night show channels on YouTube, such as LastWeekTonight, is very high. On average, approximately 3-4 videos from these channels appear in the up-next trails (of length five) when starting with opposing seed videos. Furthermore, these channels appear in the video recommendations of almost all of our study participants. Similar to the late-night shows, we find that Fox News also appears on average 3.27 times in the up-next trails of all participants. Future studies can look into the reasons behind the strong ``algorithmic recognizability'' \cite{gillespie2017algorithmically} and high amplification of these channels in YouTube recommendations. Overall, we conclude that while YouTube has reduced misinformative videos in its up-next recommendations, there is still scope for improving the recommendation algorithm.
\vspace{-0.3cm}
\subsection{Participants' beliefs vs algorithmic reality}
The study survey conducted before our audit experiment provided us with an opportunity to map participants' beliefs about personalization and trust in YouTube's algorithms with the reality of the situation as determined by our audits.
The majority of participants believe that YouTube somewhat personalizes search results. However, in reality, search results are hardly personalized. On the other hand, only half of the participants believe up-next recommendations to be highly personalized, which is in line with our findings. This mismatch between beliefs and reality indicates users' lack of algorithmic awareness. It also acts as a call to action for the platform to make users aware of the functioning of its algorithms. Users could be made aware of personalization, or the lack of it, through design features that promote algorithmic reflection, for example, seeing the search results or recommendations of other users \cite{bhuiyan2022othertube}.
Our survey also showed that, respectively, 19.2\% and 14.1\% of users trust the credibility of information presented to them by YouTube in the search results and up-next recommendations to a great extent. This belief is problematic and indicates reliance on the platform's algorithms to show credible information. In reality, while we find the majority of YouTube's search results to be credible, up-next recommendations still contained misinformative videos. One way to help people spot misinformation on the platform and not blindly trust YouTube's recommendations could be to provide additional context about the content that the participant is searching for or viewing. While YouTube has started displaying Wikipedia links on the platform \cite{YouTubea40:online}, additional cues in the form of credibility citations, existing fact-checks, or knowledge panels\footnote{https://support.google.com/knowledgepanel/answer/9163198?hl=en} could also be helpful \cite{hughes2021introducing}.
\section{Limitations and future work}
Our work is not without limitations. Our audit study is observational in nature, i.e., our experiment does not isolate the user attributes that produce the differences in misinformation measurements.
We only make observations on the differences in misinformation received in the searches and recommendations of users with different political affiliations. We recruited participants who used YouTube extensively to get information about the 2020 elections. However, for ethical reasons, we did not analyze participants' account histories to verify their self-reported data. Our participant sample was also not balanced with respect to demographic attributes and political affiliation. {We selected YouTube videos that had accumulated the highest number of views as the seed videos for our audit experiments. One potential pitfall of such a sampling strategy is that it reduces the ecological validity of the experiment, since the participants in our study might not have engaged with those videos in the past.
Another limitation is that YouTube might have specifically tailored the recommendations of popular misinformative videos. Future studies could consider alternative strategies for sampling videos, such as selecting videos that were more recently published on YouTube or sampling a combination of videos that have accumulated the least and most engagement. The search queries used in our audit also might not be representative of how our study participants formulate queries about the elections. }
{Future studies can survey the study participants to determine how they used YouTube searches in the context of political elections as well as their
information needs about the elections.}
{Our classifier developed to annotate the YouTube videos for election misinformation has an error rate of 9\%, which could have affected the downstream analysis that we performed to quantify the amount of misinformation in various YouTube components. Additionally, we assign an annotation value of 0 to all videos that were removed from YouTube after our audit data collection. While the number of such videos is very small (<1\%), it would result in a conservative estimate of the misinformation bias present in the search results and recommendations. We use the misinformation bias score adopted from Hussein and Juneja et al.'s study, which captures the amount of misinformation along with the rank of the video \mbox{\cite{hussein2020measuring}}. However, this metric does not take into account the relevance of the videos. Future studies can use metrics that simultaneously measure relevance and credibility in ranked lists, such as the Normalised Weighted Cumulative Score and the Convex Aggregating Measure \mbox{\cite{lioma2017evaluation}}. In our audit experiment, after testing every condition (watching supporting, neutral, and opposing videos), we performed a step to delete users' YouTube history created by our extension so that it does not impact the other experimental conditions. The first author tested the effect of deletion on users' search and watch history for a few sample queries and videos and found that the effect of such deletion is almost immediate. However, we did not test this scenario for all search queries and videos used in our audit. Future studies can determine how soon the deletion of history impacts users' recommendations and search results across various topics.}
{Our study focuses on users' beliefs about the personalization and credibility of content on YouTube as well as the role of YouTube's algorithms in driving users to filter bubbles of problematic content. Future studies can focus on the impact of algorithmic recommendations on the radicalization of users. Several scholars argue that algorithms are not centrally culpable for the polarization or the filter bubbles that users experience on online platforms \mbox{\cite{bruns2019filter,whittaker2021recommender,bruns2019filter2}}. Often, users of social media have a more diverse media diet than non-users \mbox{\cite{bruns2019filter,bruns2019filter2}}.
Scholars posit that while algorithms can observe what a user consumes on social media, they cannot determine what the user actually prefers \mbox{\cite{dahlgren2021critical}}. In other words, a digital choice is not always a true reflection of an individual's preference \mbox{\cite{dahlgren2021critical}}. Furthermore, users might use different online platforms for different types of content \mbox{\cite{dahlgren2021critical}}. Thus, to gain a holistic idea of the extent to which algorithms play a role in user polarization, future audit studies can conduct multi-platform crowd-sourced audits for individuals. These audit studies can determine the impact of algorithmic recommendations on users' social/political viewpoints via surveys while simultaneously monitoring users' patterns of content consumption across the multiple search engines and social media platforms they use. }
\section{Conclusion} \label{lim}
In this study, we conducted a crowd-sourced audit on YouTube to determine the effectiveness of its content regulation policies with respect to election misinformation. We find that YouTube returns videos that debunk election misinformation in its searches. We also find that YouTube leads users to a small number of misinformative videos in up-next trails with seed videos that support election misinformation. Overall, our study shows that while YouTube has been largely successful in removing election misinformation from its searches, there is still scope to fix up-next recommendations.
\bibliographystyle{ACM-Reference-Format}
**Badger Biographies**
_Belle and Bob La Follette: Partners in Politics
Blue Jenkins: Working for Workers
Caroline Quarlls and the Underground Railroad
Casper Jaggi: Master Swiss Cheese Maker
Cindy Bentley: Spirit of a Champion
Cordelia Harvey: Civil War Angel
Curly Lambeau: Building the Green Bay Packers
Dr. Kate: Angel on Snowshoes
Frank Lloyd Wright and His New American Architecture
Gaylord Nelson: A Champion for Our Earth
Harley and the Davidsons: Motorcycle Legends
Joyce Westerman: Baseball Hero
Les Paul: Guitar Wizard
Lucius Fairchild: Civil War Hero
Mai Ya's Long Journey
Mary Nohl: A Lifetime in Art
Mountain Wolf Woman: A Ho-Chunk Girlhood
Ole Evinrude and His Outboard Motor
A Recipe for Success: Lizzie Kander and Her Cookbook
Richard Bong: World War II Flying Ace
Tents, Tigers, and the Ringling Brothers_
Mountain Wolf
Woman
_A Ho-Chunk Girlhood_
Diane Young Holliday
Wisconsin Historical Society Press
Published by the Wisconsin Historical Society Press
© 2007 by State Historical Society of Wisconsin
E-book edition 2014
For permission to reuse material from _Mountain Wolf Woman: A Ho-Chunk Girlhood_ (ISBN 978-0-87020-381-7, e-book ISBN 978-0-87020-540-8), please access www.copyright.com or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users.
www.wisconsinhistory.org
Photographs identified with PH, WHi, or WHS are from the Society's collections; address requests to reproduce these photos to the Visual Materials Archivist at Wisconsin Historical Society, 816 State Street, Madison, WI 53706.
Designed by Jill Bremigan
11 10 09 08 07 1 2 3 4 5
The Library of Congress has cataloged the printed edition as follows:
Holliday, Diane Young, 1951–
Mountain Wolf Woman : a Ho-Chunk girlhood / Diane Young Holliday.
p. cm. — (Badger biographies)
Includes bibliographical references and index.
ISBN 978-0-87020-381-7 (pbk. : alk. paper) 1. Mountain Wolf Woman, 1884–1960—Juvenile literature. 2. Ho Chunk women—Biography—Juvenile literature. 3. Ho Chunk Indians—Biography—Juvenile literature. 4. Ho Chunk Indians—Social life and customs—Juvenile literature. 5. Wisconsin—Social life and customs—Juvenile literature. I. Title.
E99.W7H65 2007
977.5004'975260092—dc22
[B]
2007002539
Front cover photo: WHi Image ID 9385
The image of Jean Nicolet on page 3 is part of a mural in the Wisconsin Historical Society in Madison, Wisconsin
To my daughter Xi He Ping,
who has taught me so
much in her own journey
through childhood
Contents
**1** Meet Mountain Wolf Woman and Her People
**2** The Gift of a Name
**3** Following the Seasons: Spring and Summer
**4** Following the Seasons: Fall and Winter
**5** Many Ways of Growing Up
Afterword
Appendix: Mountain Wolf Woman's Time Line
Glossary
Reading Group Guide and Activities
To Learn More about the Ho-Chunk
Acknowledgments
Index
1
Meet Mountain Wolf Woman
and Her People
When Mountain Wolf Woman was born in April of 1884, Wisconsin had been a state for only 36 years. Although Wisconsin was a young state, Mountain Wolf Woman's people, the Ho-Chunk Nation, had lived in Wisconsin for hundreds, probably even thousands, of years. They live here still.
Mountain Wolf Woman grew up at a time when the Ho-Chunk way of living remained much like that of her **ancestors.** But other parts of her life were very different. This is the way it is for all families. Think of the many ways your life is both different from and the same as that of your grandparents and great-grandparents and great-great-grandparents.
* * *
**Nancy Oestreich Lurie**
To learn about Mountain Wolf Woman's life in her own words, read _Mountain Wolf Woman: Sister of Crashing Thunder; The Autobiography of a Winnebago Indian_ edited by Nancy Oestreich Lurie. Nancy Lurie is an **anthropologist**. She has studied the history and **culture** of the Ho-Chunk people.
_Nancy Lurie_
When Nancy Lurie was a college student, she was adopted by one of Mountain Wolf Woman's relatives and became the niece of Mountain Wolf Woman. This kind of adoption among Indian people is a special form of friendship. As the friend becomes one of the family, the friend assumes the duties and **privileges** of all family members. These duties and privileges are determined by how one person is related to another. For example, aunts are expected to treat nieces and nephews with special **generosity**. So, when Nancy Lurie asked for Mountain Wolf Woman's life story, she happily told it in her own language, and Lurie recorded it.
Lurie then turned to a Ho-Chunk friend, Frances Thundercloud, who speaks Ho-Chunk and English equally well. Frances Thundercloud **translated** Mountain Wolf Woman's Ho-Chunk words into English. When you see quotes in this book, they are Mountain Wolf Woman's words translated into English.
* * *
The first European to arrive in the western Great Lakes region was Jean Nicolet (jhan nik oh **lay**) in 1634. He was sent by the French government in Canada. But the French did not return to what is now Wisconsin for another 30 years because they were fighting the Iroquois (**ear** eh kwoy), Indian people who lived in eastern North America. When the French did travel back to Wisconsin in the 1660s, they came to set up missions and trading posts.
For generations before the Europeans' arrival, Ho-Chunk families planted gardens of corn, beans, and squash, and they gathered many wild plants for food. They also caught fish and hunted animals with bows and arrows. They made their own clothes from tanned animal skins and built their own houses with bark or cattail mats.
There were no cameras when Jean Nicolet visited Wisconsin in the 1600s; this is what an artist nearly 300 years later imagined he looked like.
Each year, as the seasons came and went, groups of Ho-Chunk people moved to rivers, to woods, and to fields to get the foods and materials that they needed to live. But when the French arrived in North America, Ho-Chunk life began to change.
Ho-Chunk garden in winter
The Indians traded furs from animals they trapped for European goods such as kettles, knives, guns, cloth, and beads. Before the fur trade, the Ho-Chunk had been living in a large settlement in the Green Bay area. Once the French arrived, the Ho-Chunk began forming smaller villages. They spread across southwestern Wisconsin and northwestern Illinois in order to trap beaver and other animals along the many area rivers, including the Fox, Wisconsin, Rock, and Black.
The Ho-Chunk used European beads to decorate bags like this one from more recent times.
When the British took control of North America from the French in 1763, they continued trading with the Indians. And when the Americans took control from the British after the Revolutionary War, they also continued trading with the Indians. But then the American settlers wanted more than just trade. They wanted land. But different groups of Native people already lived all over the Americas. Many, of course, lived in what became the state of Wisconsin. How did the United States government deal with that problem?
More than 50 years before Mountain Wolf Woman was born, the United States government decided to move as many Indians as possible across the Mississippi River. Most tribes had little choice but to sign **treaties** to sell their land and move to other land set aside for them.
Members of the Ho-Chunk Nation were forced to give up their homeland in Wisconsin and Illinois in exchange for a **reservation** in Iowa. Then, in a series of treaties, the Ho-Chunk people were moved to Minnesota, South Dakota, and, finally, Nebraska. Today, about half of the Ho-Chunk people still live in Nebraska.
Imagine how you would feel if the government told your family that you had to move away from everything you knew and live in a land you had never seen! Some Ho-Chunk families refused to leave. They preferred to hide out in their beloved Wisconsin. Some who left Wisconsin returned later, including Mountain Wolf Woman's family.
How many times were the Ho-Chunk forced to move?
* * *
**Where Does the Name Ho-Chunk Come From?**
The name Ho-Chunk can mean either "big fish" or "big voice." This is big in the sense of important or original. You may have heard of the name Winnebago. This is what some people used to call the Ho-Chunk. Winnebago came from a Mesquakie (mes **kwaw** kee) Indian word meaning "people of the stinking water." The Ho-Chunk used to live near marshy areas around Green Bay and the Fox River. These waters sometimes filled with very smelly dead fish and **algae** in the spring. Ho-Chunk is a name that comes from the Ho-Chunk language; in other words, this is a name they call themselves.
* * *
In 1874, 10 years before Mountain Wolf Woman was born, U.S. government officials put her family, along with other Ho-Chunk people, on a train. The train took them to Nebraska to live on the tribe's reservation there. Mountain Wolf Woman's mother, named Bends the Bough, later told her that some Ho-Chunk did not want to leave Wisconsin. But Bends the Bough said she was happy because she would see some of her relatives who had been moved earlier.
Compare the amount of land under Ho-Chunk control in 1825 and today. How do you think the U.S. government was successful in getting the land it wanted?
It was winter when Bends the Bough and her family arrived in Nebraska. They quickly built wigwams to keep warm. Wigwams were a handy type of house that could be built quickly in different sizes. Indian people in eastern North America had been building wigwams for thousands of years.
Like houses today, wigwams came in all sizes.
To build their wigwams, Mountain Wolf Woman's family members cut down small saplings and stuck them into the ground to make round or oval shapes. Then they pulled the saplings down toward the middle and tied them together to form arches. They tied other saplings around the arches. After building these frames, they covered the outside with mats made from cattail plants or large pieces of elm or birch bark. Inside the wigwam, they covered the floor with mats made from bulrushes. These floor mats were harder to make. They also used these mats for decoration. A fireplace in the wigwam provided heat and light. A well-built wigwam was very comfortable.
But when the next spring arrived, many people were dying on the reservation in Nebraska. It was cold, and there were diseases and hunger. Somebody died almost every day! There were many burials and many tears. Bends the Bough was frightened for her family. She asked, "Why do we stay here? I am afraid because the people are all dying. Why do we not go back home?" So she and other family members decided to return to their homeland in Wisconsin. They moved to the Missouri River. There, they cut down big willow trees and made **dugout canoes** to prepare for their journey home.
* * *
**Dugout Canoes**
Dugout canoes were made from a single log. Before the Indians got metal tools through the fur trade, they used fire to hollow out a large log. They burned the wood and then scraped away the charred wood with tools made from stone and shell. Later, after Indians began trade with Europeans, they used metal tools.
The dugout canoe has a long history in Wisconsin waters. **Archaeologists** from the Wisconsin Historical Society have preserved the remains of 2 dugout canoes—one is about 150 years old and the other is 1,800 years old!
_Dugout canoe_
_Drawing of the 150-year-old dugout canoe preserved by archaeologists_
* * *
To return home to Wisconsin, Mountain Wolf Woman's family first paddled down the Missouri River to the "River's Mouth Place," where St. Louis, Missouri, is today. From River's Mouth Place, they paddled up the Mississippi River. Finally, the family made it to Prairie du Chien and then to La Crosse. From La Crosse, they moved to Black River Falls.
How long do you think this dugout canoe trip took Mountain Wolf Woman's family?
Have you ever floated _down_ a river on an inner tube? You float _with_ the current. Can you imagine paddling _up_ the Mississippi River _against_ the current of North America's largest river? Think of how strong you'd have to be! Mountain Wolf Woman's family must have really wanted to get home to Wisconsin.
* * *
**The River's Mouth Place**
The Ho-Chunk called St. Louis "River's Mouth Place" because it is there that the waters of the Missouri River empty into the Mississippi River, and this is called the Missouri's "mouth." But do you know why this place is called St. Louis today? This part of America was once owned by France. In 1764, a French fur trader established a trading post at this spot. He named it after his French king, Louis XV, and that is the name that most people have used since then.
* * *
After the 1874 attempt to remove all the Ho-Chunk from Wisconsin, the U.S. government gave up. The very next year, the government allowed the Ho-Chunk people who had returned to Wisconsin to live in their traditional homeland. Those who chose to return to Wisconsin could claim a **homestead**. Earlier, in 1862, the U.S. Congress had passed a law that allowed some people to get a homestead of 160 acres of land. In the 1870s and 1880s, Congress changed the law to include Indians. The new law was supposed to encourage them to live like non-Indian farmers.
But private ownership of land had not been part of Ho-Chunk life in the past. The Ho-Chunk people were used to planting large gardens on tribal lands each spring, and then they moved to different areas of their territory to get other types of foods throughout the year. Mountain Wolf Woman's father was definitely not interested in owning any land because he was a member of a sky **clan**—the Thunder Clan. Mountain Wolf Woman remembered him saying, "I do not belong to the earth and I have no concern with land."
Bends the Bough was also a member of a sky clan, the Eagle Clan. But she had her own ideas. She decided that in those changing times the family needed a place to call its own. She took a 40-acre homestead near Black River Falls. Mountain Wolf Woman's father built a log cabin on the homestead. Mountain Wolf Woman's family lived there during parts of the year, but often the family went away to get food or earn money.
Black River Falls when Mountain Wolf Woman was a child
* * *
**Clans**
Like many other Indian nations, Ho-Chunk society is organized by clans. The clans in Ho-Chunk society, as in some other Indian cultures, are divided into at least 2 groups: those who are on earth and those who are above—the earth and sky.
_The_ Ho-Chunk Family _Tree by Ho-Chunk artist Harry Whitehorse shows the Ho-Chunk earth and sky clans. This sculpture is at Thoreau Elementary School in Madison, Wisconsin._
The largest Ho-Chunk sky clan is the Thunder Clan. The Eagle, Hawk, and Pigeon are also sky clans. The earth division includes the Bear Clan as well as the Water Spirit, Fish, Buffalo, Deer, Elk, Snake, and Wolf clans. Clans are named for the animal spirit ancestor. Each clan has its own **origin** story. The Ho-Chunk people believe that long ago, **spiritual** beings, who could take animal or human form, founded the clans.
Traditionally, different clans are responsible for different tasks. In the past, the Bear Clan kept order in the villages and made decisions about land. Peace chiefs came from the Thunder Clan. Today, clans are still important. The clans have responsibilities at feasts and **ceremonies**.
* * *
2
The Gift of a Name
Mountain Wolf Woman's early childhood years were not recorded in photographs or videos as are those of many children today. But much was stored in her memories and in the memories of her family. Back in the 1880s, most people were born at home rather than in hospitals. From her family, she knew that she was born at her grandfather's house at a place called East Fork River. It was early spring when she took her first breath, and her family was making maple sugar.
For many Indian people in Wisconsin, making maple sugar was part of their yearly round of activities. It was something they did every spring. Early in the season, when there was still snow, people set up camps out in the woods in groves of maple trees. They made cuts in the trees with axes and then put a piece of wood into the cut to guide the oozing sap into a bucket.
People spent many hours and days collecting buckets of sap. Then they poured the sap into large kettles. They boiled the sap in these large kettles for many, many hours until it began to **granulate**. Sugaring took a lot of hard work, but people looked forward to working with family and friends. Sugaring was also a time of celebration because everyone knew the cold days and nights of winter were almost over.
The buckets on these trees are collecting sap that will soon become maple sugar.
Sugaring was an important seasonal activity for many Wisconsin Indians.
Mountain Wolf Woman's first memory was of a spring day sometime after her first birthday. She was with her mother, Bends the Bough, and her older sister White Thunder. Mother and daughters had come to a creek that they needed to cross, and there was no bridge. Her mother had been carrying Mountain Wolf Woman on her back in a **cradleboard**. When babies are in cradleboards, they face backward and can't see where they are going. Mountain Wolf Woman started squirming. "I was restless," she remembered. Bends the Bough took her off the board and carried her in a shawl on her back. From the shawl, Mountain Wolf Woman looked over her mother's shoulder and saw what was happening. "I saw the water swirling swiftly." She remembered seeing White Thunder carry the empty cradleboard across the creek while she held her skirt up to keep it from getting wet.
When she was older, Mountain Wolf Woman told her mother about this memory and asked her if it had really happened. Bends the Bough was amazed that she had such an early memory—it was considered a sign of great intelligence! When Mountain Wolf Woman shared this story with her niece, Nancy Lurie, she did not want to sound as though she was bragging. Mountain Wolf Woman added that her mother suggested that she probably remembered this early moment because she was so frightened by the swift running water. Think back. What is your first memory? Was it something scary or some other strong feeling?
* * *
**Cradleboards**
All Ho-Chunk women kept their babies on cradleboards in the old days. The baby would be securely tied to a board with a footrest at one end and a hoop at the other end. The hoop was over the baby's head. It protected the child if the board should fall. The cradleboard was an easy way to carry babies. The boards could also be stood upright on the ground so the babies could watch what their families were doing. Mothers often hung beads or other bright objects on the hoop to entertain the baby. The cradleboard also made it easier for mothers to do their tasks. While the babies were kept snug and warm on the cradleboards, their mothers' hands were free to work.
_Babies in cradleboards_
_Parents still use baby carriers to make life easier._
* * *
Mountain Wolf Woman had several older sisters and brothers. It was the Ho-Chunk custom to have 2 names. One name was based on whether you were a girl or a boy and on the order of your birth in your family. Each person also had a **ceremonial** name that usually came from the clan. If a family had more than 4 girls or boys, they began to use names that were forms of the first 4 names. In Ho-Chunk families, if a fifth daughter was born, like Mountain Wolf Woman, that girl's name was a form of the third girl's name. The chart shows the different names of the children in Mountain Wolf Woman's family.
**Birth Order** | **Ho-Chunk Name** | **Ceremonial Name in Mountain Wolf Woman's Family**
---|---|---
First Daughter | Hínuga (hee noo gah) | White Thunder
First Son | Kúnuga (koo noo gah) | Crashing Thunder
Second Daughter | Wihaŋga (wee hah hag) | Bald Eagle
Second Son | Hénaga (hay nah gah) | Strikes Standing
Third Daughter | Hakśigaga (hahk see gah) | No ceremonial name—this sister died very young.
Third Son | Hágaga (hah gah gah) | Big Winnebago
Fourth Daughter | Hinákega (hee nah kay gah) | Distant Flashes Standing
Fifth Daughter | Hakśigaxunuga (hahk see gah koo noo) | Mountain Wolf Woman
Mountain Wolf Woman got her ceremonial name when she was a little girl and was very, very sick. Bends the Bough was worried and did not know what to do. Finally, she asked an old woman named Wolf Woman to cure her daughter. Bends the Bough had great respect for the powers of old people. She told Wolf Woman, "I want my little girl to live. I give her to you. Whatever way you can make her live, she will be yours."
Among the Ho-Chunk people, highly valued possessions are to be given away, not kept for oneself. Of course, little Hakśigaxunuga was not a possession but the daughter of Bends the Bough. Giving her daughter to Wolf Woman to cure was both a way to help her baby and a way to honor Wolf Woman with a gift.
Wolf Woman cried at the thought of such a precious gift. Wolf Woman said, "My life, let her use it. My grandchild, let her use my existence." Then she gave the little child a holy name and predicted that she would live to be an old person. The name Wolf Woman gave the child was a Wolf Clan name—Xehaćiwiŋga (khay hah chee wee gah). And Xehaćiwiŋga got well! She lived to be an old person, just like Wolf Woman had said she would. Mountain Wolf Woman later said that the name Xehaćiwiŋga had a special meaning: "to make a home in a bluff or a mountain, as the wolf does, but in English I just say my name is Mountain Wolf Woman."
Although many children in Wisconsin went to school in the late 1800s, most Ho-Chunk children learned what they needed to know from their families. Very early, Mountain Wolf Woman learned how to behave politely and properly with family members as well as strangers. She also learned the importance of the many spirits in the Ho-Chunk world. She learned about her duty to **fast** in order to get blessings from the spirits. She learned to always listen to her parents and never to be lazy. Mountain Wolf Woman understood that someday when she became a mother herself, she should never hit or scold her children. The Ho-Chunk women taught their daughters that hitting or scolding showed poor parenting.
Mountain Wolf Woman also spent many hours watching her mother and other women doing their work. She learned the things she would need to know to survive—how to garden, gather wild plants, and cook and preserve many kinds of food. She also learned how to build wigwams, prepare deer hides, make mats, weave baskets, and sew clothes.
These moccasins were made by Mountain Wolf Woman.
Ho-Chunk basket
Mountain Wolf Woman would have used thin strips of wood, like this piece, to weave baskets.
* * *
**Ho-Chunk Spirits**
The Ho-Chunk religion has many spirits, including Earthmaker, Thunder, Disease-Giver, Night Spirits, Sun, Moon, Day, Water Spirits, North Wind, South Wind, and Morning Star. The Ho-Chunk ask these spirits for blessings for good lives, health, and sometimes for "power"—the knowledge to heal and succeed.
* * *
3
Following the Seasons: Spring and Summer
When Mountain Wolf Woman was a girl, her parents did not have 8-hour-a-day jobs where they went to the same place and did the same thing every day. Her parents did not get paychecks and then go to stores to buy everything they needed. Ho-Chunk life followed the seasons. Where families went and what they did each day depended upon the time of year. They understood that different plants and animals were available in different places at different times. To survive, Mountain Wolf Woman's family had to have great knowledge about all plants and animals. It took planning and skill to live by hunting and gathering. And it was a good life. Her parents were their own bosses.
* * *
**Ho-Chunk Month Names**
The importance of the plants and animals and their seasons can be seen in the Ho-Chunk names for the months of the year.
* * *
January | — | first bear month
---|---|---
February | — | last bear month
March | — | raccoon-breeding month
April | — | fish becoming visible month
May | — | drying of the earth month
June | — | digging month
July | — | **cultivating** month
August | — | **tasseling** month
September | — | elk-whistling month
October | — | when the deer paw the earth month
November | — | deer-breeding month
December | — | when the deer shed their horns month
* * *
Why do we call the first month "January" or the last one "December"? How did we come by these names? Can you think of better ones?
* * *
In Mountain Wolf Woman's time, if you didn't want to go hungry, it was important to know the seasons of plants and the habits of animals. In the early spring, her family members trapped animals and sometimes made maple sugar when the sap ran. When the fields warmed, they planted large gardens of corn, beans, and squash. They picked berries and collected **tubers** when these plants were ready. They hunted deer, and sometimes bear, in the fall and early winter. And they stored food from their harvests and hunts to eat as they waited for the cold winter days to end.
The seasonal activities that Mountain Wolf Woman's family followed were much like what Ho-Chunk people had been doing for many generations. But in Wisconsin and across the United States, life was changing in the late 1800s. Non-Indian people were buying more and more land. Then they built fences to define and protect their property. Towns and cities were growing rapidly. It got harder and harder for Ho-Chunk people to live on just what they could grow or collect or hunt. In this changing world, Mountain Wolf Woman and her family also needed to earn money to buy things that they could no longer get from the land or make for themselves.
Although Mountain Wolf Woman's family had its homestead near Black River Falls, the family members did not stay there all year. They spent several of the coldest winter months at home. Then they would start their trips to collect different foods. In March, they usually traveled to the Mississippi River near La Crosse. There, her father and uncles trapped lots of muskrats. These furry animals, sometimes called "marsh rabbits," were plentiful along the Mississippi and its **sloughs**. The Ho-Chunk people hunted and trapped muskrats for **pelts**.
Why was it important that Mountain Wolf Woman's family collect most of their food in spring and summer?
Muskrats also provided a dark red and nutritious meat. Mountain Wolf Woman's mother and aunts roasted them on a rack over a large fire. Many years later, as an old woman, Mountain Wolf Woman still remembered the sights and sounds and smells of those spring days. "The muskrat meat made a lot of noise," she said, as it sizzled and cooked. She remembered watching brown grease drip into the flames as Bends the Bough turned the muskrats over with a long, pointed stick. After the muskrats were cooked and cooled, the women packed and stored them to eat later during the summer.
Muskrat
Sometimes Mountain Wolf Woman's father fished in the spring. One spring, when she was only about 2 years old, her family was camping near Black River Falls. Her father caught a gigantic fish, a sturgeon. There are different ways to catch fish; some people use nets, and some use hooks. To get this fish, her father used a spear. The fish was so big that its tail dragged on the ground as he carried it over his shoulder! Today, there are few sturgeon left in the Mississippi and Wisconsin rivers. The Department of Natural Resources protects them.
Fishermen spearing sturgeon
During the spring, Mountain Wolf Woman and her mother and sisters also dug for the roots of yellow water lilies. This was another good food. They went to sloughs covered with the shiny, dark green leaves of water lilies. Once there, they put on old dresses, took off their shoes, and waded into the cool water. They used their feet to find the large, fleshy roots—letting the river mud ooze through their toes until they found the roots they wanted. Then they used their feet to pull the roots free from the bottom, and the roots floated to the surface. Mountain Wolf Woman and her mother and sisters put the roots into large sacks and carried them back to camp.
Water lilies
At camp, sitting outside their wigwams, they scraped off the outside layer and then sliced the roots. Mountain Wolf Woman said they looked much like bananas. They strung the slices on string and hung them to dry. Then they stored the dried roots in large sacks. Later in the summer, the women cooked these dried roots with meat. Many years later, Mountain Wolf Woman recalled, "They were really delicious."
In the late spring, Mountain Wolf Woman's family returned to the log house on their homestead. Her mother and father planted and cared for a large garden. They also picked the wild blueberries growing in the woods around their home. Lots of blueberries grew in the shade under the tall pine trees. Mountain Wolf Woman remembered that all of the Ho-Chunk picked blueberries back then. Mountain Wolf Woman and her mother dried some of the blueberries. During the winter, they boiled these dried berries with dried corn.
Loading blueberries (in boxes) in Black River Falls
Mountain Wolf Woman's family also picked blueberries to sell in town to earn money. Sometimes her father gave gum to the children, so they would chew gum and not eat the berries as they picked them. What a smart dad! Her family put the berries into square wooden boxes and took them into town to sell. The boxes had rope straps so her family members could carry the boxes on their backs or sling them over the backs of horses.
Mountain Wolf Woman thought they got a good price for blueberries—50 cents a quart at the beginning of the season! Back then, you could buy a lot more with 50 cents than you can today. Selling food that they gathered, such as blueberries, was a way that Ho-Chunk families earned money to buy other foods, household goods, and sometimes even horses. After they sold their blueberries, Mountain Wolf Woman and her family put the store-bought items in the wooden boxes that had held the blueberries and headed back home.
Mountain Wolf Woman could have bought 5 _pounds_ of candy with 50 cents! How much candy can you get with 50 cents?
Downtown Black River Falls where Mountain Wolf Woman and her family would have shopped with the money made from picking blueberries
When the corn was ripe, Mountain Wolf Woman and her family harvested it from their garden and carried it back to their house on their backs. They did not have big machines to pick the corn or trucks to haul it. To cook the corn, they dug a large pit in the ground, put in stones, and then made a big fire in the pit to heat the stones. When the stones were very, very hot, they took out the wood and smoldering ashes and put in corn husks. Next they added the ripe corn and covered it with more husks. Then Mountain Wolf Woman and her family covered the whole pit with dirt except for some holes to add water. When the water hit the rocks, "We used to hear the red hot stones make a rumbling sound," she remembered.
Harvesting corn
Corn ready for the fire pit
The next morning, Mountain Wolf Woman and her family carefully opened the pit and took out the hot cooked corn. Sometimes friends would come to help spread out the hot corn on a cloth placed on the ground. Some friends used clamshells to scrape the corn kernels off the cobs. Other friends used metal teaspoons. After the corn kernels dried in the sun, they were put into sacks. But Mountain Wolf Woman's family always left some corn on the stalks back in the fields. They saved this corn to use for seeds in the next spring's planting. It meant food for the future.
Corn stalk
On other summer days, Mountain Wolf Woman helped her mother and sisters harvest squash from their garden. They wanted to save this food to use during the winter, so it had to be dried to keep it from rotting. First, they needed a drying rack. But they didn't go out to a garage or down in a basement to get a rack. Nor did they go to the store to buy one. They made a rack themselves. Mountain Wolf Woman and her sisters went into the woods and found small trees that had forked branches. They took these branches, stuck them in the ground where they wanted their rack, and hung another branch between them. Then they peeled the squash, cut the squash into rings, strung the squash on the racks, and let the sun and air do their drying work.
Squash drying on racks
During the summer, Mountain Wolf Woman, her grandmother, and her aunt also gathered Indian potatoes, sometimes known as groundnuts. These were not part of their garden. The Indian potatoes grew wild. But Mountain Wolf Woman and her family knew where to look for the potatoes because they knew the land and the plants so well. They found the potatoes in the woods, hidden among hazel bushes, in wet areas near creeks. The Indian potatoes grew on long vines all strung together like a charm bracelet. These vines ran in all directions. When they found a vine, they dug out a whole string of potatoes. They cut the potatoes off the vines and then dried them. When the family needed these potatoes for food, Mountain Wolf Woman and her mother boiled them in water with some sugar until the water was all gone. Then they peeled off the skins and ate.
The vines that Mountain Wolf Woman and her family looked for when hunting Indian potatoes
Mountain Wolf Woman's aunt and grandmother once told her about another way they used to gather food in Nebraska—stealing from mice! Mice stored foods like wild beans. But their supplies weren't safe from the Ho-Chunk women! They followed the tiny trails that mice left to storage holes in the ground. They said that sometimes they found a bucketful of beans, but Mountain Wolf Woman always wondered how big a bucket they meant.
In thinking back on her summer days as a child, Mountain Wolf Woman said simply, "When various foods were ripe, the people dried them." In the summer, the Ho-Chunk grew and picked and dried many foods. During the warm months, Mountain Wolf Woman and her family were always working because they had to be thinking ahead. They had to make sure they would also have enough food in the winter and early spring, the time of year when it was most difficult to find food.
Mountain Wolf Woman and her family did not live in a house or apartment with an electric refrigerator like we have now. They did not even have an icebox like some families had back then. They did not have lots of cupboards and shelves with boxes and bags of **nonperishable foods**. Mountain Wolf Woman and her family dried their own foods and dug holes in the ground to bury foods that they planned to eat later. Storing the food below ground kept it safe from insects and other animals. When they needed food, Mountain Wolf Woman remembered that the adults would say, "Dig up that which is buried."
4
Following the Seasons: Fall and Winter
In the fall, after the work in the gardens was over, Mountain Wolf Woman and her family usually went to pick cranberries. This was another way to make money. When they arrived at a cranberry marsh, there often would be many Ho-Chunk families camping together. And everybody would pick berries—women, men, and children. The adults carried bushel-sized boxes at their sides. As they worked their way across the marsh, they left behind rows of filled boxes. Mountain Wolf Woman and the other children used small pails and picked the berries by hand and then put their berries into their mothers' boxes. At noon, some would go back to camp to eat, but others brought lunches along and ate out in the marsh. Mountain Wolf Woman thought it was a lot of fun to pack a lunch and eat outside.
Cranberries
Ho-Chunk group harvesting cranberries
Notice for cranberry pickers
Mountain Wolf Woman really loved it when **peddlers** came to the cranberry marsh to sell things. Her favorites were the pies. She thought pies were great because her own family used to cook on campfires and could not bake pies and cakes. These baked goods were a real treat!
After cranberry-picking time, it was time for the fall move to hunt deer. Hunting deer is what Mountain Wolf Woman's family and other Ho-Chunk families always did in the fall, just as their ancestors had done before them for countless years.
One place that they liked to hunt was in the woods outside of Neillsville, northeast of Black River Falls. There, Mountain Wolf Woman's family and 4 or 5 other families would build wigwams. Her grandmother and mother made their wigwam and covered it with mats of woven cattails. The other families were her sisters and their husbands and children as well as other relatives. Mountain Wolf Woman loved being with her family. Mountain Wolf Woman's family had so many people that they lived in a wigwam with 2 fireplaces. On cold nights, her father kept the fires going all night long, but the inside of the well-made wigwam was never too smoky. It was a good place to live.
In the fall, Mountain Wolf Woman's family left home to hunt deer.
* * *
**Mats**
Mats were important in Ho-Chunk culture. All the women owned mats that they had made themselves from cattail leaves or bulrush reeds. And all the young girls had to learn how to make mats. These woven mats had many uses. They were used as doors, to sit on, and to cover wigwams. They also spread mats on the ground to dry corn.
To make a mat out of bulrush reeds, the reeds had to be picked, cut, dried, cooked in boiling water, dried again, bundled up, dyed, and then woven. Some were decorated with geometric designs. It took a lot of time.
_Wood strips were woven together to make mats._
_Needles used to make mats_
_Close-up of finished mat_
* * *
Hunting was considered men's work, so only the men and boys hunted. When still young, boys learned how to hunt many kinds of animals from squirrels and rabbits to deer and bear. Mountain Wolf Woman remembered that the hunters used to find deer quickly. They always brought home some meat the first day. Back then, there were not as many hunters. Nobody needed a deer license. Her family killed as many deer as it needed. And sometimes it needed a lot! When Mountain Wolf Woman's father and other hunters shot a deer, they wrapped it in leaves and carried the deer back to camp on their backs. Occasionally, they even brought back a bear.
After hunting for a while in one place, sometimes the family would move its camp to hunt somewhere else. Mountain Wolf Woman's father, mother, and older sisters and brothers all carried packs on their backs to make the move. One year, though, they got a pony, and the pony carried all of their belongings when they moved their camp. Sometimes the children got to ride on top of the pack. Do you think a pony could carry all of your family's belongings? How many ponies do you think it would take?
Mountain Wolf Woman and her sisters and brothers used to fast during the time when the family was hunting. Among the Ho-Chunk people, fasting was a way to ask the spirits for a good life and sometimes to get power. Her parents encouraged all of their children to fast. Her brother Big Winnebago fasted from the fall to the early spring. During these months, he would not eat anything during the day. He would eat only after the sun went down.
Mountain Wolf Woman and her older sister Distant Flashes Standing fasted, too. When their father left in the morning to hunt, Mountain Wolf Woman and Distant Flashes Standing took coals from old, cold fires and blackened their cheeks. Blackening their faces was part of the fasting **ritual**. They fasted to receive blessings, and they blackened their cheeks so people would know not to offer them food.
Mountain Wolf Woman would have played with a doll like this one.
Distant Flashes Standing sat indoors and wove yarn belts, but Mountain Wolf Woman liked to play outside. At the end of the day, when their father returned from hunting, he used to say to them, "Go cry to the Thunders," which meant, go pray to the spirits. And when he was ready to eat, he gave the girls tobacco and again told them to go pray. To the Ho-Chunk people and members of other Indian nations, tobacco is a **sacred** plant. Tobacco has special significance and is used as an offering to the spirits.
Tobacco plant
Mountain Wolf Woman and her sister went into the woods and looked at the dark night sky and cried to the Thunders. They sang, "Oh, good spirits. Will they pity me? Here I am, pleading." They sang this because if the spirits had pity—that is, if the spirits felt sorry for them—then the spirits would give them a blessing. The girls then scattered tobacco and looked at the moon and stars. Thinking back on those times, Mountain Wolf Woman said, "We used to cry because, after all, we were hungry. We used to think we were pitied. We really wanted to mean what we were saying." Then they went home and ate.
At night in the wigwam, Mountain Wolf Woman's father told the children to prepare their bedding and lie down. There were no television sets or even radios, but her father told wonderful stories. These were good memories for Mountain Wolf Woman. She said, "I really enjoyed listening to my father tell stories." Everyone in the whole family stayed quiet so that they could hear all of the words. The stories he told were sacred to the Ho-Chunk people. Many years later, Mountain Wolf Woman still thought fondly of these stories even though she said, "I do not know all of them any more, I just remember parts."
She remembered one story about a Ho-Chunk man getting **revenge** after another tribe had killed everyone in his town. This man snuck into the other tribe's town during the night and cut off the heads of the chief's son and daughter-in-law! And then he took their heads and went up to the moon. Mountain Wolf Woman's father told her that on nights when the moon was full, she could look up and see the man carrying the 2 heads in his hand. Go check out the next full moon. What do you see?
Hunting came to an end when the winter was at its coldest and there was a lot of snow. It was then that Mountain Wolf Woman and her family left their hunting camp and went back to their home near Black River Falls. They spent the coldest winter months warm in their own log house.
After the hunting season, it was time for the winter feast. Seasonal feasts and ceremonies played a big part in Mountain Wolf Woman's childhood. Feasts were important ways that Ho-Chunk people made offerings to the spirits. Spring, fall, and winter feasts were also known as war-bundle feasts or ceremonies.
_Lacrosse was a game enjoyed by many Wisconsin Indians. Ho-Chunk men and women played it on ceremonial occasions._
* * *
**War Bundles**
Many generations in the past, ancestors tied the first war bundles. War bundles consist of animal skins or hides wrapped around items important to a family. The bundles contain items such as pipes, feathers, animal bones, and, in more recent times, such things as soldiers' medals. Succeeding generations add to the bundle. The war bundle offers protection and blessings to the entire family group.
_The items in the center are part of the war bundle._
* * *
Mountain Wolf Woman remembered that her father used to give large feasts. He built a special long wigwam for the winter feast. And for this work, her father had help from his nephews. Remember, for the Ho-Chunk, relationships among family members mean particular duties and privileges. Nephews are expected to work for their uncles, that is, their mother's brothers. (Among the Ho-Chunk, father's brothers were called father, not uncle.)
Mountain Wolf Woman remembered one winter feast that was held in a wigwam long enough to hold 8 fireplaces! Many Ho-Chunk people came, so Mountain Wolf Woman's father fed a whole wigwam full of people. Mountain Wolf Woman remembered that he provided 10 deer for one such feast! Many deer meant people could make offerings to more spirits. The winter feast lasted overnight. People gave speeches, danced, and sang, and they offered tobacco, specially prepared deer hides, and prayers to the spirits. It was a time for clan members to come together. Mountain Wolf Woman enjoyed being part of this large family group.
Ho-Chunk dressed in their ceremonial clothes
Sometimes people fasted before feasts and then broke their fasts at the feast. Mountain Wolf Woman remembered a time when her brothers Big Winnebago and Strikes Standing fasted in the woods before a feast. Boys fasted to **obtain a vision**. Mountain Wolf Woman's father built a shelter for them to live in, and they were supposed to stay there all by themselves for 4 nights! But Strikes Standing did not have the patience to wait 4 nights, and he came home early. Bends the Bough was upset with him for not following this tradition, and she cried. Like all parents, perhaps even your own, she was worried that her child was making wrong choices. But Big Winnebago stayed until it was feast time.
Despite all of their hunting, gathering, and gardening during the year, sometimes Mountain Wolf Woman's family did not have much food. In remembering his own childhood, Big Winnebago said that was why he became such a fast eater. The family always ate out of just one dish, so he learned to eat quickly to get enough. How would your family do if you had to hunt, plant, and gather all of your food? You might have hungry times, too.
5
Many Ways of Growing Up
Throughout Mountain Wolf Woman's childhood, her family usually followed the same cycle—spring, summer, fall, and winter. But when it was almost fall of the year that Mountain Wolf Woman was 9, her oldest brother said that she should go to school! He said that he liked to hear women speak English. He thought his little sister should learn how to speak it. Among the Ho-Chunk, big brothers made decisions for their sisters. Can you imagine how your life would be if your older brother made decisions for you?
It was then that Mountain Wolf Woman's parents let her go to school in Tomah. It was a special government school just for Indian people. It tried to teach them to be more like non-Indian people. Mountain Wolf Woman went to this school for 2 years, but then she didn't go again for a long time. Mountain Wolf Woman said her family did not stay home just so the children could attend school. Back then, hunting and gathering and helping your family were more important. Her family's travels took her away from Tomah and school.
Main building of the Tomah Indian Industrial School
Mountain Wolf Woman and her family continued their yearly cycle. In the fall and winter, they went hunting and picked cranberries, and in the springtime, they returned to the Mississippi River to catch muskrats and dig lilies. One year, though, when they returned to their home near Black River Falls, her mother and father said that they were not going to plant their summer garden. Instead, the family was going to Wittenberg to help her father's uncles. They were old and could no longer help themselves. As you know, among the Ho-Chunk, nephews were expected to help their uncles.
To move to Wittenberg, the family didn't call a moving company, get into a car, and ride down a highway. They moved themselves. They used big wagons pulled by horses. This was a long journey by horse—almost 100 miles! On the way to Wittenberg, they stopped at a Ho-Chunk and Potawatomi (pah tah **wah** tuh me) Indian settlement in the woods north of Marshfield. Mountain Wolf Woman's mother and father both had relatives there. It was a good place for Indians to live the way they wanted. They could follow their own traditions and not worry about non-Indians interfering. Mountain Wolf Woman and her family stayed 2 nights in order to visit with family and rest the horses.
Mountain Wolf Woman's family made this trip by horse and wagon.
Mountain Wolf Woman stayed in a house with an aunt and her husband. Aunts were expected to treat their nieces and nephews with great generosity, and indeed they did! The sisters of Mountain Wolf Woman's father gave them gifts of maple sugar in a handwoven bag. The maple sugar was in cakes of hardened syrup. The aunts also gave them a sack of powdered maple sugar. These were wonderful gifts! In return, Mountain Wolf Woman's mother gave her sisters-in-law necklaces, bracelets, and long earrings that had coins dangling on the ends.
Ho-Chunk necklace and bracelet
Once they arrived at Wittenberg, Mountain Wolf Woman and her family stopped at the log cabin of one of her grandfathers, High Snake. He was a member of the Snake Clan. Then her father went and got his 2 elderly uncles, Good Snake and Fear the Snake Den, and brought them to High Snake's place. (Good Snake and Fear the Snake Den were also Mountain Wolf Woman's grandfathers. Among the Ho-Chunk, grandfathers were not just your mother's and father's father but also included other relatives like the brothers of grandparents.)
Mountain Wolf Woman's father and his uncles were members of a **medicine lodge**. Mountain Wolf Woman's father helped Good Snake and Fear the Snake Den cut trees for poles. Then they built a **wigwam** used for medicine lodge ceremonies. The size of the wigwam varied for these ceremonies; it depended on how many people were expected to attend. When her father and his uncles finished the east end of the wigwam—the side where the sun rose—they sang.
Membership in a medicine lodge was not decided by who you were related to. You had to be invited to join. Members of the medicine lodge taught the people who joined about how the earth and all the animals were formed by the Earthmaker. They also told about how people were unhappy until they learned the proper way to live—the medicine lodge way. Members believed that following the ways of the medicine lodge helped them have good health and a long life.
Many Ho-Chunk people came to the medicine lodge that Mountain Wolf Woman's father and his uncles had built. Some walked. Some came on horseback. And some arrived in horse-drawn wagons. On this occasion, Mountain Wolf Woman's older sister White Thunder and her older brother Strikes Standing were **initiated** into the medicine lodge with a special **ceremony**.
* * *
**What Is Traditional Ho-Chunk Medicine?**
Among the Ho-Chunk, traditional "medicine" did not mean pills or shots or doctor's visits. Medicines were sacred or holy. They could be parts of plants or animals and used for many purposes besides just curing illnesses. For example, there were medicines to provide success in hunts, to make one rich, and to find a husband or wife.
_Cutting roots for medicine_
_A raspberry bark bunch used for medicine_
* * *
After the medicine lodge, someone said that people in town were paying money for the bark of slippery elm trees. Slippery elm bark was used to treat a variety of conditions, from wounds to intestinal problems. Mountain Wolf Woman's father decided they needed to earn some money, so they collected and sold slippery elm bark, too. Those in the family who were strong and could work packed a wagon with some household goods. Then they went looking for slippery elm in the woods. Mothers and younger children, including Mountain Wolf Woman, set up camp near their grandfather's house and waited for the others to return.
Slippery elm tree
When her father and the others found slippery elm, they asked the owners of the land where they found it if they could take the bark. The owners agreed that they could. Mountain Wolf Woman's father and the others peeled the gummy bark off the trees. They cut the bark in strips as long as their arms and then tied the strips into bundles. When everyone had a bundle, they put them on their backs and returned to where they were camping in the woods.
The women peeled off the outer bark with knives and hung the inner bark on drying racks. They dried a lot of bark and tied it into bundles. And then they took the bundles into towns and sold the slippery elm to **pharmacists**. Those who worked getting the slippery elm then got a chance to visit the rest of their families. They brought their families food before going back to the forest to gather more bark. But this way of making money did not last long for Mountain Wolf Woman's family. After a while, the landowners who owned the trees wanted to keep everything for themselves and no longer let the Ho-Chunk people gather the bark.
And then it was fall. Some of Mountain Wolf Woman's sisters returned to Black River Falls, but her father, mother, brothers, and older sister Distant Flashes Standing did not go back. One of her grandfathers, named Rattlesnake, said that they should live in Wittenberg near him. There, her father built a big, round wigwam where they stayed for a while.
Later that fall, her father decided it was time to go trapping, so the entire family, including the grandfathers, moved to Green Lake to trap. When it started to get really cold, they went back to Wittenberg. Only then did Mountain Wolf Woman finally get to go back to school. She was 13. She went to the Lutheran Mission School at Wittenberg. There she was baptized as a Christian. But she also held on to some Ho-Chunk beliefs. Mountain Wolf Woman used to say, "Whatever is good, that I would do."
Do you think Mountain Wolf Woman felt a long way from home in Wittenberg and Green Lake, or do you think she felt all of Wisconsin was home as long as she was with her family?
Around this time, Mountain Wolf Woman's older brothers found another way to make money—dancing in shows! People in towns would pay to see traditional Indian dances. And so her brothers traveled to places like Milwaukee; Chicago, Illinois; and St. Paul, Minnesota, to dance in shows.
Lutheran Mission School at Wittenberg
Big Winnebago used some of the money he made to buy Mountain Wolf Woman a bicycle. This brotherly gift followed an old tradition. Long ago, Ho-Chunk brothers brought home items won in battles for their sisters. Many years later, Mountain Wolf Woman remembered being so proud of that bike. No other student in her school had one. She was the first!
At the Lutheran Mission School, Mountain Wolf Woman became friends with Nancy, an Oneida Indian woman who taught sewing. Nancy had a bicycle, too. When there were social dances (dances just for fun), sometimes the 2 of them would ride their bikes together to the dance. One time, though, they went the old-fashioned way—they hired 2 horses and rode! The horses were hard to handle, and so they let them run and run until they were tired.
When they got to the dance, Mountain Wolf Woman just sat and watched. All of the dancers were dressed in their jewelry and best traditional Ho-Chunk clothes. Then a friend asked her to dance, and dance she did! She really loved to dance and wasn't embarrassed that she was wearing what she called "citizen's clothing." This meant she was dressed like a non-Indian girl, but she didn't care. She loved to dance.
One day, Mountain Wolf Woman's family took her out of school and told her that it was time for her to be married. Although she was an older teenager, she hadn't even finished sixth grade! She cried because she liked school, and she didn't want to get married. But back then, Mountain Wolf Woman had to do what her family said. Her older brother had arranged a marriage with a man he knew. Mountain Wolf Woman could not disobey. She expected her brothers to arrange her marriage because that was the custom among the Ho-Chunk. Still, she was upset because her brother had not made a good match. He had not talked about it with her at all. She grew angry. But her mother said, "Daughter, I prize you very much, but this matter cannot be helped. When you are older and know better, you can marry whomever you yourself think that you want to marry." Mountain Wolf Woman did not forget that. She also promised herself that her own children could choose whom to marry, and they did!
Women and girls in traditional Ho-Chunk jewelry and clothes
Mountain Wolf Woman around the time she got married
For Mountain Wolf Woman's arranged marriage, Bends the Bough combed her daughter's hair and dressed her in a skirt and shawl with ribbon embroidery. Her mother also gave her a necklace, earrings, and a pony to ride. And that was how she looked when she first met her new husband, the son of a man called Pine. Mountain Wolf Woman and her husband rode together on the pony to where his family lived. All of his female relatives were there waiting for them. What a mix of emotions she must have felt—a bit scared, probably, and maybe still a bit angry.
But Mountain Wolf Woman knew what she was supposed to do because her mother had taught her well. She went into a wigwam, laid down her shawl, took off the clothes and the jewelry that she was wearing, and put them on the shawl. Then her new mother-in-law came into the wigwam, took the shawl, and gave away the clothes and jewelry to the relatives waiting outside. But each relative who got something gave something in return to Mountain Wolf Woman. After 2 or 3 days, Mountain Wolf Woman rode back to her own family with 4 horses and a shawl so full of things that she could barely tie the corners shut! Later, 2 more horses were delivered. That was how Ho-Chunk marriages were done back in the old days. There was not a religious ceremony. Families exchanged gifts instead.
Afterword
When she was grown up, Mountain Wolf Woman left her first husband and married a man that her oldest brother, Crashing Thunder, recommended. In all, she had 11 children and lived in Wisconsin, Nebraska, South Dakota, and Oregon. She had a busy life. Mountain Wolf Woman continued to do many of the things she had done as a child—plant vegetable gardens, pick blueberries and cranberries, make mats, sew and cook, and go to feasts and dance. She also learned Indian medicines from a grandfather and worked as a **midwife**, helping other women have their babies.
Mountain Wolf Woman's children had children, and these children have had children! Today, Mountain Wolf Woman's **descendants** include nurses, schoolteachers, and bookkeepers. Some work for the Ho-Chunk Nation making sure that the Ho-Chunk people and their traditions continue to thrive in their homeland, the land we call Wisconsin.
Mountain Wolf Woman lived to be 76 years old. Through these years, she saw many changes around her. She herself also changed. In 1958, Mountain Wolf Woman flew to Ann Arbor, Michigan, and spent 5 weeks with her adopted niece, anthropologist Nancy Lurie. Mountain Wolf Woman liked the running water at Nancy Lurie's house. But she did not trust the electric stove because she was used to cooking over an open fire outdoors or with a wood-burning oven. Sometimes, when Lurie was out of the house, Mountain Wolf Woman made her own meals in the living room fireplace and baked bread in the hot coals. At the age of 73, she even chopped the firewood! During those weeks, Mountain Wolf Woman shared her life stories with her niece. Because she did, people today know who Mountain Wolf Woman was and how she lived.
Mountain Wolf Woman in her later years
Appendix
**Mountain Wolf Woman's Time Line**
**1874**: In the winter, Mountain Wolf Woman's family is moved to Nebraska by the U.S. government, where they stay until spring. They return to Wisconsin by traveling down the Missouri River and then back up the Mississippi River.

**1884**: Mountain Wolf Woman is born in April.

**1893–1895**: At 9 years old, Mountain Wolf Woman goes to school for the first time at the Tomah Indian Industrial School. She goes to the school for only 2 years.

**1897–1898**: Mountain Wolf Woman is 13 when she returns to school and attends the Lutheran Mission School in Wittenberg.

**1958**: Mountain Wolf Woman visits her adopted niece, anthropologist Nancy Lurie, in Ann Arbor, Michigan. She tells Nancy Lurie about what her life was like as a Ho-Chunk girl.

**1960**: Mountain Wolf Woman dies at age 76.
Glossary
**algae** ( **al** jee): small plants without roots or stems that live in water or on damp surfaces
**ancestor**: a family member from long ago
**anthropologist** (an thro **pah** lo jist): a scientist who studies human history by looking at the languages people speak; the environment in which they live; and the way they work, dress, eat, create art, and construct buildings
**archaeologist** (ar key **ol** o jist): a scientist who learns about the past by studying artifacts or objects left behind at places where people once lived, worked, and played
**ceremonial** (ser uh **mo** nee uhl): formal or traditional
**ceremony** ( **ser** uh mo nee): the formal words, actions, or songs that mark an important special occasion, such as a wedding or a funeral
**clan**: a group of people with a common sacred ancestor, such as an animal or spirit
**cradleboard**: a baby carrier used by American Indians
**cultivating**: growing crops
**culture**: the way of life, ideas, and traditions of a group of people
**descendants**: someone's children and grandchildren and their children and grandchildren
**dugout canoe**: a boat made by hollowing out a large log
**fast**: go without food
**generosity**: helping others by sharing things such as time or money
**granulate**: form crystals
**homestead**: land given by the U.S. government to settlers if they built a home and began farming within 5 years
**initiated** (ih **nih** she ay ted): made someone a member of a group or club
**medicine lodge**: an organization of people who practice special ceremonies together
**midwife**: someone who helps women when they have their babies
**nonperishable** (non **pair** ish ubl) **food**: food not easily spoiled
**obtain a vision**: find a spirit to help throughout life
**origin**: where something comes from
**peddler**: a traveling salesperson
**pelt**: an animal's skin with the hair or fur still on it
**pharmacist**: someone who prepares and sells drugs and medicines
**privilege**: a special right
**reservation**: federal land reserved or set aside for Indian nations to live on
**revenge**: getting even
**ritual**: an action that is always done the same way as part of a ceremony or tradition
**sacred** ( **sa** cred): something deserving of respect
**slough (sloo)**: a swampy area near a river
**spiritual** ( **spir** i choo el): something that has to do with the soul and the spirit
**tasseling**: when the tassel, the part of a corn stalk with pollen on it, appears
**translated**: put in a different language
**treaty**: an official, written agreement between nations
**tuber**: a root or bulb
**wigwam**: a home made of cattail mats or tree bark attached to a framework of small branches
Reading Group Guide and Activities
_Discussion Questions_
Mountain Wolf Woman had a very early memory when she was still young enough to be carried in a cradleboard. What is your earliest memory? What do your friends say are their earliest memories? Why do you think these events stand out in your memory and the memories of your friends?

Family was very important to Mountain Wolf Woman. Among the Ho-Chunk, family relationships had both duties and privileges. Think back over Mountain Wolf Woman's life. How did she help family members? How did they help her? Think about your own family. How do your relationships with family members differ from Mountain Wolf Woman's relationships with her family members?

For each season of the year, Mountain Wolf Woman and her family participated in different activities like digging for yellow water lilies in the spring and picking cranberries in the fall. What are some of the things that you and your family and friends do in different seasons? Why do you think you do some things in summer and others in winter? Describe your favorite seasonal activity.
_Activities and Projects_
Imagine that there are no grocery stores and you have to live off what you can hunt or gather or plant! What would you eat? Where would you find it? Use the library and the Internet to research where your favorite foods are grown and/or produced. Then take a map of the U.S. and decide where you would need to go to get what you want.

Reread the section in Chapter 2 on Ho-Chunk names and how Mountain Wolf Woman got her ceremonial name. Ask your parents how you got your name. Pick a place—your city or town or a nearby river or lake—and find out how the place got its name. Ask your local historical society for help.

Mountain Wolf Woman and her family did many of the same things that the Ho-Chunk people had done for generations. Interview someone in your family, like an aunt or grandfather, about family traditions that they followed as children then passed on to _their_ children.

Go to a Ho-Chunk Pow Wow and watch the dancing! Or view the _New Dawn of Tradition_ video at <http://ecb.org/wisconsin/powwow>. Describe how this dancing is different from other dancing you have seen. How is it similar?
To Learn More about the Ho-Chunk
Hieb, Jane A., ed. _Visions and Voices: Winnebago Elders Speak to the Children_. Independence, WI: Western Dairyland Economic Opportunity Council, 1994.
Hunter, Sally A. _Four Seasons of Corn: A Winnebago Tradition_. Minneapolis, MN: Lerner Publications, 1997.
Kallen, Stuart A. _Native Americans of the Great Lakes_. San Diego, CA: Lucent Books, 2000.
Loew, Patty. "The Ho-Chunk Nation." Chapter 4 in _Native People of Wisconsin_. Madison, WI: Wisconsin Historical Society Press, 2003.
Milwaukee Public Museum. "Indian Country Wisconsin." <http://www.mpm.edu/wirp>.
Mountain Wolf Woman. _Mountain Wolf Woman: Sister of Crashing Thunder; The Autobiography of a Winnebago Indian_. Edited by Nancy Oestreich Lurie. Ann Arbor: University of Michigan Press, 1961.
_Thunder in the Dells_. VHS. Directed by Dave Erickson. Lone Rock, WI: Ootek Productions, 1991.
Acknowledgments
I thank Bobbie Malone, Director of the Office of School Services, for inviting me to undertake this project. Her enthusiasm for history and gift of sharing this enthusiasm with young people make working with her a joy. I also thank the WHS editorial staff, Elizabeth Boone and Erica Schock, for asking good questions and providing consistency. John Zimm did excellent photo research for the project, aided by WHS colleague John Nondorf and Jennifer Kolb and Diana Zlatanovski at the Wisconsin Historical Museum. State Archaeologist John Broihahn helpfully dealt with every question tossed his way. Jill Bremigan added to the story's intrigue through her outstanding design of the book, and Diane Drexler deftly guided the book through production.
Most of all, I thank Dr. Nancy Oestreich Lurie, who graciously read the manuscript and offered many clarifying insights to help bring Mountain Wolf Woman's story to a new generation of readers. I also thank Frances Thundercloud Wentz, who originally translated Mountain Wolf Woman's words for Dr. Lurie, for helping us with the translation of the Ho-Chunk terms in this book. Many many thanks to all involved.
Index
Page numbers in **bold** mean that there is a picture on that page.
A
ancestors
animal spirits
B
baby carriers. _See_ cradleboards
Bald Eagle (sister)
baskets
beadwork
Bear Clan
bears
Bends the Bough (mother)
on following tradition
homestead of
before Mountain Wolf Woman's birth
Mountain Wolf Woman's childhood and
on Mountain Wolf Woman's marriage
resettlement of
return to Wisconsin of
berry picking
blueberries
cranberries
Big Winnebago (brother)
Black River Falls, Wisconsin
British settlers
C
canoes. _See_ dugout canoes
ceremonies
_See also_ feasts
cattail. _See_ mats
clans
_See also specific clans_
clothes, traditional
corn
cradleboards
Crashing Thunder (brother)
D
dancing
deer
Department of Natural Resources
Distant Flashes Standing (sister)
dolls
dugout canoes
E
Eagle Clan
earth clan
Earthmaker
education
at home
at school
European settlers
trade with
F
fall activities
cranberry picking
hunting
family roles
aunts
brothers
duties and privileges
grandfathers
nephews
nieces
sisters
uncles
fasting
Fear the Snake Den (grandfather)
feasts
fishing
food
_See also specific methods of obtaining_
Fox River
French settlers
G
gardening
corn
squash
gathering
food stored by mice
potatoes
slippery elm bark
water lilies
generosity, as duty
gifts
Good Snake (grandfather)
Green Bay, Wisconsin
Green Lake
groundnuts. _See_ potatoes
H
High Snake (grandfather)
Ho-Chunk
language of
as name
Ho-Chunk Nation
history of in Wisconsin
lands of
resettlement of by government
homesteads
houses. _See_ log cabin, wigwams
hunger
hunting
I
Iroquois
J
jewelry
L
La Crosse, Wisconsin
lacrosse (game)
land
decisions about
Ho-Chunk territory
as private property
log cabin
Lurie, Nancy Oestreich
Lutheran Mission School
M
maple sugar
marriage
Marshfield, Wisconsin
marsh rabbits. _See_ muskrats
mats, cattail
for wigwams
medicine, traditional
medicine lodge
Mississippi River
Missouri River
moccasins
month names
Mountain Wolf Woman
birth of
death of
descendants of
early childhood of
as a grown-up
life story collected
marriage of
name of
religious beliefs of
school years of
as a teenager, **viii**
_Mountain Wolf Woman: Sister of Crashing Thunder; The Autobiography of a Winnebago Indian_ (Lurie)
muskrats
N
names
birth order and
ceremonial
for Ho-Chunk Nation
of months
Nebraska, Ho-Chunk people in
Neillsville, Wisconsin
Nicolet, Jean
P
Pine (father-in-law)
potatoes, Indian
Potawatomi
Prairie du Chien, Wisconsin
prayers and offerings
R
Rattlesnake (grandfather)
reservations
resettlement by government
Revolutionary War
River's Mouth Place
S
seasonal cycles
_See also individual seasons_
sky clan
slippery elm, as medicine
Snake Clan
spear fishing
spirits
of clans
prayers and offerings to
religion and
spring activities
blueberry picking
fishing
maple sugar making
trapping
water lily gathering
squash
St. Louis, Missouri
store-bought items
Strikes Standing (brother)
sturgeon
sugaring. _See_ maple sugar
summer activities
gardening
gathering potatoes
T
Thunder Clan
Thundercloud, Frances
tobacco
Tomah Indian Industrial School
Tomah, Wisconsin
tools
trapping
treaties
V
visions, obtaining
W
war bundles
water lilies
weaving
White Thunder (sister)
wigwams
Winnebago
as name
_See also_ Ho-Chunk
winter activities
feasts
hunting
Wisconsin
resettlement from
return to
statehood of
Wittenberg, Wisconsin
Wolf Clan
Wolf Woman
About the Author
Author **Diane Young Holliday** was an archaeologist at the Wisconsin Historical Society for fifteen years and is the co-author of _Digging and Discovery_ , a book on Wisconsin archaeology for young readers.
Congenital amegakaryocytic thrombocytopenia (CAMT) is a rare inherited disorder.
Presentation
The primary manifestations are thrombocytopenia and megakaryocytopenia, or low numbers of platelets and megakaryocytes. There is an absence of megakaryocytes in the bone marrow with no associated physical abnormalities.
Cause
The cause of this disorder appears to be a mutation in c-mpl, the gene for the thrombopoietin (TPO) receptor; serum TPO levels remain high, but the defective receptor cannot respond to them. In addition, there may be abnormalities of the central nervous system, including the cerebrum and cerebellum, which can cause symptoms.
Diagnosis
Treatment
The primary treatment for CAMT is bone marrow transplantation.
A bone marrow or stem cell transplant is the only cure for this genetic disease. Until then, frequent platelet transfusions are usually required to keep platelet levels from falling to dangerous lows, although not in every case: some patients continue to produce very small numbers of platelets over time.
See also
Thrombopoietin
Myeloproliferative leukemia virus oncogene
External links
Amegakaryocytic Thrombocytopenia research study of Inherited Bone Marrow Failure Syndromes (IBMFS)
Coagulopathies
Cell surface receptor deficiencies
{"url":"http:\/\/toddleo.farbox.com\/post\/social-computing-project-report","text":"## Social Computing Project Report\n\nThis is a draft version, and a formal version is required to use LaTex or well-organized word document.\n\n## Job Description\n\n\u2022 Designed the contact frequency, find communities with max-flow and min-cut, and weighted PR ranking and HITS with Wang Shuai\n\u2022 Crawled bilateral (mutual) friends of a single user within a distance of 2\n\u2022 Obtain small communities (subgroups) via min-cut\n\u2022 Plot the graph, with calculated PR, authorities and hubs\n\u2022 Validation\n\n## Crawling 2-hop Bilateral Friends\n\nAfter a short inspection, I found that my frequently contact friends have an average number of 150 bilateral friends. In this case, a 2-hop-distance social network would have 150^2 = 22500 nodes, whilst a 3-hop-distance one would have 150^3 = 3375000 nodes, which is too heavy for further calculations, so the distance of 2 is chosen.\n\nI used python, weibopy (an open source Sina Weibo API SDK) to do this task. Specifically, I stored only user ID and bilateral friends and pack them into vertices and edges in networkx. Eventually, the graph is exported to graphml for further use. After this step, I obtained a undirected graph of 20k vertices and 30k edges.\n\n\/\/ TODO: Average bilateral friends per person ?\n\n## Graph Pruning\n\nThe graph obtained in previous step is rather sparse. \/\/ Density? It is because the users on the leaf (which have a degree of 1 or very small) are weakly connected to the main graph, and less likely to have strong relationship with the root and its community.\n\nMoreover, the leaf nodes will significantly affect the max-flow\/min-cut procedure in terms of in the early stage of min cut iteration, most of the cuts will be on the weak links of the leaf nodes, and it's time consumption is massive. Therefore, pruning the leaf nodes is necessary.\n\nBut by how many degrees of a leaf node has to prune? 
What is the critical point, as for keep the balance of performance and less potential community information loss? To find out, I pruned the graph with degree from 1 to 21.\n\nAt first we chose degree threshold 4, which is the obvious choice because the curve decreases mildly after 4 degrees. But in further experiments, there are still too many nodes to compute the max-flow\/min-cut, and the initial communities are normally single nodes. So we decided to prune more.\n\nThe selected degree threshold is 15, for the sake of keeping as much information as we could, and a compromise to performance.\n\nAfter pruning, the graph is now 288 nodes taking 3999 edges.\n\n## Max-flow and Min-cut\n\nTo identify the small communities of the social network, we defined the capacity of each edge. We calculated the contact frequency of each pair of users, and assigned the normalized value (0-1) to the corresponding edges. For details, please refer Wang Shuai's report.\n\nUp to this point, each edge in the graph carries a weight of capacity and still, the graph is undirected.\n\n### Package Choose\n\nUnder the principle of using off-shelf utilities and tools, I have made several attempts on graph-mining packages, including networkx, graph-tools, igraph on python and igraph on R.\n\nDespite networkx is a very powerful and easy-to-use python package, the min_cut(G, s, t, capacity='capacity') function computes only the value of the cut, rather than returning 2 partitions that every other packages can do.\n\ngraph-tools is a python package but it is written in c, and has a huge amount of package dependencies which part of them are painful to install to my developing environment. After hours and hours of making and linking, I decided to abandon graph-tools.\n\nigraph, with has both python and R support, is even more powerful (has a massive number of built-in functions) that networkx. 
Nonetheless, after I wrote a demo using python-igraph on the 1st question of assignment II to validate, I found that the result of min-cut differs (not even close) from the solution.\n\nLuckily, the demo I wrote in R version of igraph matches the solution. Therefore, igraph on R is chosen in spite of some inconveniences of graph importing\/exporting issues.\n\n### The Dilemma of Choosing Source and Sink\n\nIn a s-t cut, the flow starts from s (AKA. source) and is received at t (AKA. sink). But how do we determine a pair of s-t in a particular graph? Two method was proposed by Wang Shuai and myself.\n\n\u2022 Diameter\n\u2022 2 highest weighted-degree nodes\n\n#### Diameter\n\nAs for as selecting the optimal pair of s-t, the diameter is intuitively considered, on account of the farthest pair has a high chance on the opposite sides of 2 partitions after min-cut, which meets the rule of the source and sink definition.\n\nInstead of using the standard diameter measurement, we introduced the weighted diameter, the calculation method is as below:\n\nWeightedDiameter = sum(weight_of_edge_on_path)\n\nBoth networkx and igraph provide the functions that compute the value of diameter only. On behalf of obtain the nodes which linked by path of diameter, normally we traverse all possible pairs and find the farthest one, whereas this method has a time complexity of O(n^2), which is intolerably slow. We need to seek a faster algorithm instead.\n\nAt the end we happened to learn a heuristic algorithm called Fast Map, which solves the farthest pair of nodes problem in PCA, on Prof. Tao's DM course. It can find two pair of nodes with no guarantee of farthest far enough distance, and more importantly, in linear time.\n\n#### 2 highest weighted-degree nodes\n\nIn some cases, using the farthest pair of nodes also doesn't assure on two potential communities. Hence, a method of using 2 highest weighted-degree nodes as s-t is introduced.\n\nThe princeple is very straightforward. 
Identify a pair of nodes, which have the largest and second largest sum of weighted-degree, and treat them as two most active users in respective community.\n\nAs a result, the min-cut algorithm will return 2 communities, either s-t on two fartherst rims, or represent the most active users of two communities.\n\n### Min-cut Iteration\n\nThe essential idea is using maxflow(G, s, t, capacity) built-in function in igraph. The return value of maxflow() is consisted by value, cut, partition1, partition2, etc.\n\nIn ideal condition, the algorithm is supposed to return a 50%-50% community (in terms of nodes) of the entire graph. Yet, in practise, the maxflow algorithm is likely to return 2 partitions that one is a single node, the other is the rest of nodes. The reason is selected s-t is not optimal, causing single node in one partition, which is merely a outlyer.\n\nTo acquire the proper communities, a min-cut iteration is devised. Procedure is described below.\n\nSet i = 0. At the beginning of each loop, i++\n\nLoad current graph Gi. 
If i = 1, load the raw graph.

Find the s-t pair, using the diameter or the 2 highest weighted-degree nodes.

Perform maxflow(), get the edges being cut, and remove them, so that the graph contains two connected components.

Find the smaller connected component, output it as a result, and remove that component's nodes and edges from the graph.

The iteration terminates when only a single node remains in the graph.

Then identify small communities by selecting connected components whose number of nodes is larger than a threshold, e.g., 4, meaning only communities larger than 4 nodes are considered.
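The iteration above can be sketched in Python with networkx (a sketch, not the original code: the project used R igraph, networkx's `minimum_cut` stands in for igraph's `maxflow()`, the highest-weighted-degree heuristic is used to pick s-t, edge weights are assumed to live in a `capacity` attribute, and the size threshold is applied inline rather than at the end):

```python
import networkx as nx

def mincut_communities(G, min_size=4):
    """Iteratively peel off communities via s-t min-cut.

    s and t are the two nodes with the highest weighted degree
    (one of the two heuristics described above). Every edge is
    assumed to carry its weight in the 'capacity' attribute.
    """
    G = G.copy()
    communities = []
    while G.number_of_nodes() > 1:
        # s-t pair: the two nodes with the largest weighted degree
        s, t = sorted(G.nodes,
                      key=lambda n: G.degree(n, weight="capacity"),
                      reverse=True)[:2]
        # min cut == max flow; part1/part2 play the role of partition1/partition2
        cut_value, (part1, part2) = nx.minimum_cut(G, s, t, capacity="capacity")
        # remove the cut edges so the graph splits into components
        cutset = [(u, v) for u, v in G.edges if (u in part1) != (v in part1)]
        G.remove_edges_from(cutset)
        # output the smaller connected component, then delete it
        smaller = min(nx.connected_components(G), key=len)
        if len(smaller) >= min_size:
            communities.append(set(smaller))
        G.remove_nodes_from(smaller)
    return communities
```

Each loop removes at least one node, so the iteration always terminates; single-node "outlier" partitions are simply dropped by the `min_size` filter.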
WATER FOR CROPS GIVES WOMEN IN KENYA'S DRYLANDS A VOICE
Women draw water from a hand-dug well in Alimao, Kenya, February 2017. Photo by Charles Kariuki, World Vision
By Robert Kilbert (Thomson Reuters Foundation), August 11, 2017— On a blistering hot afternoon, Zainab Omar Ali methodically sorts through freshly picked bunches of kale on her farm in Alimao village in northeast Kenya.
"I managed to sell most of my batch at the market this morning," she said with satisfaction. "I'll try to sell the remaining fresh ones tomorrow, and cook the rest at home."
Near her farm in Wajir County, women buzz around four greenhouses made of dark shade nets, watering vegetable plots and removing weeds.
Omar Ali and other women in this village bordering Somalia used to grow vegetables by fetching water from a hand-dug shallow well and keeping off pests with old mosquito nets.
But increasingly dry weather and rising temperatures damaged their already limited harvests and weakened their cattle, the women said.
Change is afoot, however. Since 2016, a project led by the Millennium Water Alliance (MWA) has been helping women from Alimao grow vegetables like kale and onions under shade nets that protect the crops from predators and the sun's intensity.
A drip irrigation system is installed under the nets to use water more efficiently.
The "Kenya Resilient Arid Lands Partnership for Integrated Development" (Kenya RAPID) programme, implemented by World Vision Kenya, aims to improve 45,000 people's access to water and sanitation in dry northern counties.
Zainab Omar Ali and other women operate a solar-powered pump in Alimao, Kenya, February 2017. Photo by Charles Kariuki, World Vision
REBUILDING AFTER DROUGHT
After losing all their livestock to drought in the 1990s, Omar Ali and her family left their village in northern Kenya and migrated to Wajir County.
"Life was hard without any meat or milk to rely on," she told the Thomson Reuters Foundation. "My (six) children and I would sometimes go for two days without a proper meal and had to rely on wild fruits."
Experts say women bear the brunt of climate change in many developing countries, and are often more vulnerable than men when disasters like floods or droughts strike.
Richard Munang, climate change programme coordinator for Africa at UN Environment, said men in pastoralist communities control the main source of income – livestock – meaning women cannot take the decision to sell or slaughter an animal.
"That makes them more likely than men to have to go without food in times of need, while they must walk long distances to fetch water," he said.
With no stable income to rely on, Omar Ali and six other village women decided to pool their limited savings in 2013.
"We used to have weekly meetings where each member would give 200 Kenyan shillings ($1.93) to buy milk from livestock herders and resell it to town dwellers," she recalled, bending to water her vegetables. "But the milk would often spoil due to the heat."
Halima Qureysh, another group member, said the women then tried farming a small piece of land allocated by village elders, but the hand-dug shallow wells they used often ran dry.
Since last year, however, the women have used the shade nets provided by the Kenya RAPID project, which is funded by the U.S. and Swiss governments, to help protect their crops from extreme heat.
Last year they harvested 35 tonnes of kale, compared to just a few bunches each previously, which was barely enough for domestic consumption.
Omar Ali said the group's "healthy-looking" kale now fetches 50 shillings per kilo, instead of only 20 previously.
She now makes about 4,500 shillings per month – three times what she used to earn.
"I can take my children to school, cook balanced meals for my family and I have gained recognition in my community," she said.
"In our society, women are not normally allowed to speak in public forums," she added. "But given our group's success, men are now letting the members speak to the rest of the village and make decisions at a family level."
SOLAR-POWERED PUMP
With support from the project, the group has also set up a borehole with a solar-powered pump to ease water shortages.
The women purify water from the borehole, store it in tanks and sell it to the rest of the community.
"We used to share dirty water with livestock in water pans – if there was water at all," said Omar Ali. "But the water we get now is clean."
Dickens Thunde, former country director at World Vision Kenya, said working with the community's existing ways of coping with climate extremes – rather than introducing a new system – had been key to the success of the project.
"This community was already managing its own natural resources – it just needed a sustainable water source to withstand shocks," he said.
However, challenges remain in reaching other vulnerable community members who aren't part of the women's group.
Hadabah Mahamoud, a project officer for sanitation and nutrition with World Vision, said a lack of funding has so far limited the project's expansion to other villages.
"Once established, these projects are easy to manage, but the initial cost of setting them up and sourcing the equipment like irrigation pumps is quite high," she said.
"Most people in this arid region still lack proper access to water, without which they cannot expect a healthy harvest or livestock," she added.
For now, said Omar Ali, the women plan to use the group's savings to offer training in sustainable farming to other women in the region, using their village as "a centre of excellence".
($1 = 103.7500 Kenyan shillings)
\section{Introduction}\label{introduction}}
In classical approaches to authorship attribution, frequencies of the
most frequent words (MFWs) and other style-markers such as character
\emph{n}-grams are claimed to outperform other types of style-markers
(Koppel et al., 2009; Stamatatos, 2009), even if their performance
varies significantly across different languages (Eder, 2011; Rybicki and
Eder, 2011; Evert et al., 2017). Also, it has been proven that
attribution based on single words and, even more so, on letter
\emph{n}-grams reveal a very high resistance to errors in corpora such
as those derived from imperfect OCR (Eder, 2013). A previous study in
authorship attribution performed on a large corpus of Polish novels
(Rybicki, 2015a) confirmed the usefulness of most frequent words.
Defined, for any text analysis software, as simple strings of letter-
and non-letter characters, all these plain features are easily extracted
from input texts. One should not underestimate the implications of such
an efficient combination of simplicity and performance. Namely, a
stylometric test -- be it authorship attribution or a distant-reading
analysis of literature using quantitative methods -- can be applied to
any web-scraped plain text file with a high probability of achieving
acceptable results.
Very attractive as they are, these shallow text features have also their
limitations. Firstly, there is little theory that would explain the
phenomenon of the visibility of authorial stylometric signal of the very
frequent features -- apart from the general and intuitive assumptions
that authors might be possessed of their own ``stylistic fingerprint''
(Kenny, 1982: 12) or that the very frequent words, for instance, might
define authorial style by establishing the context for the less frequent
yet more ``meaningful'' words (McKenna et al., 1999). Certainly, there
exist studies that aim to provide a convincing theoretical background
for stylometry (Kestemont, 2014), nevertheless one can say that we are
still at the beginning of the journey. This lack of theory might be the
reason why many scholars look askance at frequency-based quantitative
analyses and why, consequently, there is little dialogue between
quantitative and qualitative approaches to textual analysis.
Secondly, the above-cited findings (Eder, 2011; Rybicki and Eder, 2011;
Evert et al., 2017) cast doubt on whether the appropriateness of a
quantitative frequency-based method developed for one language easily
translates into similar success in another, as has been suggested in
earlier studies (Juola, 2009). In fact, the high discrepancy in
authorial attribution success observed in the 2013 experiment has been
suspected by the researchers to stem from the differences in the
inflection of the languages compared. The observation that highly
inflected Polish fared worst among less inflected languages such as
English or German, will be quite relevant in the context of the present
study.
To explore further the hypothesis of inflection's role in attribution:
it is obvious that, in inflected languages, different forms of the same
word cannot be recognized using generic text tokenization (e.g., via
regular expressions). This is a possible source of error, since, in
languages such as Polish, word endings play a prominent role; as a
result, much of the grammatical information that is easily available in,
say, English function words, remains ``hidden'', or ``dissolved'', in
inflected nouns or verbs, and has no way of making it to the top ranks
in frequency lists: these countless inflected word forms make word
frequencies sparse, and this complicates most statistical procedures.
Meanwhile, morphologically rich languages with relatively free
word-order, such as Polish, are significantly different from the
grammatical point of view, and it should not come as a surprise that
they make the task substantially different. With its 7 cases multiplied
by 2 numbers, singular and plural (and, to make things even more
complicated, vestiges of the dual coexisting with the plural, as in
\emph{oczyma}~:~\emph{oczami} `eyes' instrumental), a Polish noun might
have up to 14 different inflected forms. As if this were not enough,
nouns with two alternative endings for some cases are not infrequent
(e.g., \emph{reżyserzy}~:~\emph{reżyserowie} `film directors' nominative
plural). This figure is multiplied when adjectives are concerned, since
they inflect by case, by number, and by gender: there are three genders
in the singular and two in the plural (+human masculine vs.~--human
masculine). And while regular homonymy within the inflection paradigm of
the Polish language keeps the number of inflected forms well below
the above-presented worst-case scenario, this comes at the cost of a
greater degree of ambiguity. The same general rule holds for verbs,
pronouns and numerals.
This brings us back to the question of authorial attribution, this time
in the distinct context of rich inflection. Presumably, the problem of
the morphological abundance can be overcome -- at least to some extent
-- by lemmatization, or transforming the original sequence of words into
their base forms, as in the following example: \emph{w jednym z
pomniejszych miast perskich mieszkali dwaj bracia} (original words),
\emph{w jeden z pomniejszy miasto perskie mieszkać dwa brat} (lemmatized
words). From a theoretical point of view, the difference between a
sequence of original forms and lemmatized words is not as big as it
might seem. After all, any stylometric inference based on word
frequencies means in fact reducing a very complex phenomenon -- the
natural language -- into its simple representation, while filtering out
a vast amount of original information. Lemmatization is no different in
this respect, except that it reduces the language even more, by cutting
off grammatical information held by the original word forms.
Being an obvious remedy for data sparseness, lemmatization should
increase the visibility of the authorial signal. However, an opposite
hypothesis is also plausible, namely one can assume that the
(grammatically richer) original word forms preserve a cleaner authorial
signature than the grammar-less lemmas. Finally, a hypothesis that the
signal is hidden \emph{between} the original forms and the lemmas --
i.e.~in the grammatical structure itself -- cannot be ruled out. From a
linguistic point of view, this third scenario is rooted in fundamental
questions of the authorial freedom of choice vs.~constraints of the
language.
In principle, grammar will always constrain the authorial freedom of
choice to a significantly greater degree than it constrains the (usually
very individual) lexical repertoire. If an author wishes to describe a
given entity with an adjective, there exist numerous words to choose
from: e.g.~the entity's size may be \emph{big}, \emph{large},
\emph{great}, \emph{considerable} etc. However, if we take into account
grammatical categories, the entity will inevitably be represented by a
sequence {[}Adjective{]} + {[}Noun{]}. Moreover, despite some
limitations in combining words (such as the impossible Chomskyan
\emph{green dreams}), these limitations are much more rigid on the
syntactic level than on the lexical level: once a transitive verb is
introduced, it has to be followed by an object. Additionally, the case
of the object cannot be freely chosen -- it is assigned by the verb.
Therefore, we can easily formulate a pre-empirical assumption that
authors enjoy much larger freedom of choice on the level of lexis
compared to syntax. Certainly, novelists usually try to be creative and
do not adhere to most typical collocations\footnote{Our corpus contains
literary sources only. An interesting question -- far beyond the scope
of this study -- is the extent to which the fact that a writer seeks
originality makes the fingerprint clearer compared to non-fiction
literature.}, but even a highly experimental artistic novel cannot
ignore language constraints.
It is quite clear, then, that grammar should not be excluded from the
experimental setup of the present study. Yet, the problem of extracting
the grammatical structure from texts (referred to as parsing) is far
more complex than lemmatization. It is true that, despite new
developments in this area, automatic parsing is still somewhat
unreliable and obtaining a tailor-made tree bank is beyond our
capabilities; however, straightforward insight into grammar can be
obtained using Part-of-Speech (POS) tags combined into \emph{n}-grams
(Wiersma et al., 2011). Attempts to solve this problem have already
yielded promising results (Baayen et al., 1996; Hirst and Feiguina,
2007) -- yet, once again, mostly in English.
The downside of such an approach is that the POS \emph{n}-grams can
provide us with a rather rough model of syntax or, in the words of
Wiersma et al. (2011), ``a good aggregate representation of syntax''.
However, since these features were compared in the context of repetitive
authorial decisions -- conscious or unconscious -- that make texts by
the same author more similar to each other than to texts by other
authors, there was some hope that such an experiment might provide an
insight to the various degrees of linguistic choice at the lexical
and/or syntactic level.
Because of the complexity of individual word forms' grammatical
information, morphologically rich languages are usually annotated with
so-called positional tags, i.e.~sequences of codes for all the values of
grammatical categories which pertain to a word, where only one segment
of a tag stands for the part of speech itself. To illustrate, while the
English word \emph{impossible} is tagged \texttt{AJ0} (Adjective,
general or positive), the Polish \emph{niemożliwemu}, the Dative
Singular of the same adjective \emph{impossible}, must be described by a
fairly verbose tag: \texttt{adj:sg:dat:m1:pos}, where ``adj'' stands for
Adjective, ``sg'' for singular, ``dat'' for Dative, ``m1'' for
Masculine-Virile, ``pos'' for Positive Grade. Consequently, this complex
tag is a bundle of inflectional features of the word; its code for case,
number, and gender \texttt{sg:dat:m1} can also form a part of a
substantive or participle, whereas the first segment of the sequence,
``adj'', is the only part of the tag that is directly comparable to its
English counterpart.
Arguably, a Polish unlemmatized text has a much higher type/token ratio
than a lemmatized one. Equally obviously, a comparable English text (for
instance, an English translation of a Polish text) produces a lower TTR.
Finally, the difference of TTR in a lemmatized and unlemmatized English
text is much less prominent. In the context of automatic POS tagging,
the difference accounts for a substantial increase in the difficulty of
this task as the rich morphology in Polish requires a vast number of
tag-types. The tagset of the National Corpus of Polish (Przepiórkowski
et al., 2012) amounts to over 1,000 tags, a full order of magnitude
more than the mere 140 tags in the CLAWS-8 tagset for English. This
means, among other things, that a Polish POS-tagged text would produce
much lower frequencies for every POS type. And if this were not enough,
the relatively free word order in Polish makes one expect a higher
number of different POS-tag combinations (\emph{n}-grams), since a
sequence of two or more parts of speech can occur in different order. It
is true that several restrictions on Polish word order might slightly
attenuate this phenomenon, e.g.~the preposition can never be placed in
postposition, and the negation of the verb must immediately precede the
latter; nevertheless, the increase in the number of possible POS-tag
\emph{n}-grams is still remarkable.
\hypertarget{hypothesis}{%
\section{Hypothesis}\label{hypothesis}}
With all the above remarks taken into consideration, we can now
formulate the research questions to be addressed in this study: firstly,
we aim to empirically examine the amount of authorial signal that
resides in grammar as assessed through POS-tags; secondly, we aim at
comparing the performance of original word forms against lemmatized
forms (a scenario in which \emph{some} of the grammatical information is
stripped out). Additionally, we aim to test the extent to which
particular segments of positional tags (analyzed separately and combined
into \emph{n}-grams) might be useful in this respect. Therefore, apart
from the entire tags, their segments have also been assessed, namely
\emph{n}-grams of single categories, as well as combinations of two tag
segments. In this approach the Polish word sequence \emph{jedną czerwoną
ranę} (\emph{one red wound} in the Accusative form) was analyzed as word
forms, as lemmas (e.g.~\emph{jeden czerwony rana}), and as different
chains of POS-tag parts:
\begin{enumerate}
\def\labelenumi{\arabic{enumi}.}
\item
entire tags, e.g.~{[}adj:sg:acc:f:pos{]} + {[}adj:sg:inst:f:pos{]} +
{[}subst:sg:acc:f{]};
\item
POS tags in the strict sense, or the first segments only,
e.g.~{[}adj{]} + {[}adj{]} + {[}subst{]};
\item
tags cut off after their second segment, e.g.~{[}adj:sg{]} +
{[}adj:sg{]} + {[}subst:sg{]}.
\end{enumerate}
The above word forms, lemmas and different variants of grammatical tags
were further combined into \emph{n}-grams (ranging from 1-grams to
3-grams), resulting in 15 distinct types of style-markers assessed
individually in controlled authorship attribution tests.
Our working hypothesis is that the grammatical layer will exhibit some
traces of authorial signal, yet they will not overshadow the primary
signal produced by the lexical layer. As for the lemmatized
vs.~unlemmatized words as efficient style-markers, we hypothesize that
an input text partially stripped of its grammar, i.e.~lemmatized, will
exhibit a slightly stronger authorial voice compared to original word
forms.
\hypertarget{data-and-method}{%
\section{Data and method}\label{data-and-method}}
In order to corroborate the above hypotheses, we compiled a tailored
corpus of 189 novels in Polish. It is true that restricting the choice
to exclusively one genre (literary novels) will not allow us to
generalize the results to the Polish language in its entirety. However,
we wanted to control for genre in our experiments, as it is usually a
crucial factor in authorship attribution. Similarly, we chose novels
because of their naturally large size, which will prevent the authorial
signal from being blurred by the short sample effect.
The corpus consists of Polish novels from the 20\textsuperscript{th}
century, all of them drawn from the National Corpus of Polish. They were
processed using the Pantera tagger (Acedański, 2010) fully
automatically. No stop lists were used, punctuation marks were treated
on a par with words, and the same certainly holds for POS-tags. The full
dataset consisted of 189 Polish novels written by 46 authors; each
author was represented by 3 to 6 texts (4.1 on average). The
chronological range was kept as narrow as possible, in order to
minimize the potential impact of diachronic linguistic change; it has
been reported that chronology is a strong signal in most-frequent-word
based stylometry (Burrows, 1996; Rybicki, 2015b). Smaller subsets of the
main corpus were also analyzed in two additional cross-check
experiments, one involving 99 novels by 33 authors, and the other 30
novels by 10 authors (in both setups, the even number of 3 books per
author was secured). Due to copyright restrictions, the novels used in
this study could not be made publicly available. However, we post all
frequency tables used in this study, as well as the full set of the
results, followed by the code needed to replicate the tests, on our
GitHub repository:
\url{https://github.com/computationalstylistics/PL_lemmatization_in_attribution}.
In all, 5 different variants of features were tested for attribution
success: (1) unlemmatized words (original word forms); (2) lemmatized
words; (3) full tags; (4) POS-tags in the strict sense, i.e.~the labels
of the Part of Speech alone; (5) two initial tag parts. All these were
analyzed in \emph{n}-grams, at \emph{n} from 1 to 3; which resulted in
15 independent classification experiments. The analyses were performed
for 35 features, and then for 100, 150, 200 and onward up to 2,000 most
frequent items, by increments of 50. Finally, four supervised
machine-learning classifiers were compared: Burrows's Delta, Cosine
Delta, Support Vector Machines (SVM), and Nearest Shrunken Centroids
(NSC). The entire experimental setup was repeated for the three variants
of the corpus, comprising 189, 99 and 30 novels, respectively.
The choice of the four classification methods was based on their
time-proven applicability to solving authorship attribution tasks.
Delta, a simple distance-based method introduced by Burrows (2002),
enjoys a reasonable share of attention in stylometry due to its
simplicity and efficiency. Next comes its variant known as the Cosine
Delta, which has been proven to outperform most distance-based
classifiers (Evert et al., 2017). The Nearest Shrunken Centroids,
another distance-based learner, has also been successfully applied to
text classification (Jockers and Witten, 2010). Support Vector Machines
is a widely-known multidimensional classifier, commonly believed to be
one of the best machine-learning techniques for data analysis. It has
been shown that the performance of this method is very high indeed
(Koppel et al., 2009; Jockers and Witten, 2010). In our approach we use
a simple SVM setup: linear kernel (rather than polynomial) with the cost
parameter set to 1 (rather than optimized in cross-validation). While
parameter tuning usually improves the performance of SVM, we aimed at
keeping the experimental conditions identical for all analyzed
scenarios.
One has to emphasize, however, that the classification setup we deal
with here is substantially different from typical attribution problems,
since it involves multiple classes, instead of the usual two or three.
Such a situation is referred to as the ``needle in a haystack''
attribution scenario (Koppel et al., 2009), i.e.~a type of attribution
in which the real author is hidden among a very high number of false
candidates. An obvious question arises whether a multi-class setup --
significantly more demanding than a standard attribution experiment --
is a good choice to assess the performance of different style-markers in
a given corpus. An answer to this question is twofold. Firstly, it must
be remembered that, since there are not so many prolific authors, the
number of available texts is also limited; moreover, the access to
electronic versions of those texts is also restricted. Our corpus is no
exception -- the main criterion of including particular texts was their
availability. Secondly and more importantly, a corpus of diverse
authors, authors' genders, genres, topics, audience targets etc.
eliminates possible biases which we can easily overlook. Above all,
however, we should emphasize that we did not aim to improve the overall
accuracy in absolute terms. Rather, we aimed at comparing the efficiency
of several style-markers under identical conditions of the experiment.
The analyses were done using a custom script for R, based on the
\texttt{crossv()} function of the \texttt{stylo} package (Eder et al.,
2016). Particular combinations of style-markers, \emph{n}-grams, and
classifiers, were assessed independently. The scores for subsequent
ranges of the most frequent items were recorded in a leave-one-out
cross-validation scenario. In such a case, all the texts but one were
put into the training set, and the remaining single sample was
classified against the training set. The same procedure was performed
iteratively over the corpus, with a different text excluded for
classification in each iteration. The resulting row of
predicted classes was then compared against the expected classes, and
the number of correct ``guesses'' was recorded as the model's general
accuracy.
Conceptually simple and compact as it is, however, accuracy is
considered to overestimate the actual classification performance. For
this reason, a routinely applied toolbox of measures not only includes
accuracy, but also recall, precision, and particularly the F1 score. The
reason why these somewhat less intuitive measures are often neglected in
stylometric studies, is that they are not designed for assessing
multi-class scenarios. Since in our experiment 46 authorial classes were
involved, we relied on \emph{macro-averaged} versions of precision,
recall and the F1 score (Sokolova and Lapalme, 2009). Keeping in mind
that the F1 score in a way combines the information provided by both
recall and precision, this will be our primary diagnostic measure
hereafter.
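For concreteness, the macro-averaged measures used hereafter follow the
standard per-class averaging scheme (a sketch of the textbook
definitions in the spirit of Sokolova and Lapalme (2009); \(TP_i\),
\(FP_i\) and \(FN_i\) denote the true positives, false positives and
false negatives of the \(i\)-th of the \(C\) authorial classes):

\[
P_{M} = \frac{1}{C}\sum_{i=1}^{C}\frac{TP_i}{TP_i+FP_i}, \qquad
R_{M} = \frac{1}{C}\sum_{i=1}^{C}\frac{TP_i}{TP_i+FN_i}, \qquad
F1_{M} = \frac{2\,P_{M}\,R_{M}}{P_{M}+R_{M}}.
\]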
\hypertarget{results}{%
\section{Results}\label{results}}
The high number of particular attribution tests for different
classification methods, features, \emph{n}-grams, and datasets, calls
for a structured way of presenting the results. For this reason, we will
start with a manual inspection of a somewhat random subset of outcomes.
We will then summarize the differences between the three datasets, the
four classifiers, and finally, we will discuss the performance of
particular style-markers: original words, lemmas and POS-tags.
A small subset of the results is presented in Table 1. Here we report
the performance for the full corpus of 189 novels, original word forms
(MFWs), \emph{n}-gram size set to 1 (i.e.~single words), the Cosine
Delta classifier, and 8 different vectors of the most frequent
features\footnote{The full set of tables for particular datasets,
classifiers, feature types, and their \emph{n}-grams, can be found in
our GitHub repository.}. At a glance, one can identify a sweet spot of
performance at 200 MFWs, but a broader picture shows that similar local
areas of better (or worse) performance are not infrequent. In fact, the
classifier reaches its plateau of optimal performance at around 700
MFWs, to slightly decrease for the vectors of more than 1,200 MFWs.
Table 1. 189 novels, Cosine Delta, most frequent words.
\begin{longtable}[]{@{}lrrrr@{}}
\toprule
features & accuracy & precision & recall & F1 score\tabularnewline
\midrule
\endhead
35 & 0.687 & 0.640 & 0.662 & 0.637\tabularnewline
100 & 0.825 & 0.821 & 0.816 & 0.805\tabularnewline
150 & 0.867 & 0.878 & 0.857 & 0.855\tabularnewline
200 & 0.888 & 0.911 & 0.890 & 0.885\tabularnewline
250 & 0.857 & 0.883 & 0.860 & 0.848\tabularnewline
300 & 0.862 & 0.884 & 0.866 & 0.858\tabularnewline
350 & 0.873 & 0.889 & 0.879 & 0.872\tabularnewline
400 & 0.878 & 0.899 & 0.885 & 0.882\tabularnewline
\ldots{} & \ldots{} & \ldots{} & \ldots{} & \ldots{}\tabularnewline
\bottomrule
\end{longtable}
Due to the obvious limitations of presenting the results in a tabular
format, below we present the outcomes in the form of compact plots, so
that reasonable amounts of information can be shown concurrently. To
further increase the clarity of the plots, we will report the F1 scores
only, while delegating all the remaining measures to the GitHub
repository.
Conveniently, the comparison starts with an overview of the three
datasets we used in our study, i.e.~the corpora of 189, 99, and 30
novels, respectively. As it turns out, the general outcome of the
experiment depends, in good accord with intuition, on the size of the
corpus. The best average scores were obtained for the 30-novel subset;
the subcorpus of 99 novels fared somewhat less well; the performance of
the entire set of 189 novels, however, turned out to be very similar to
that of 99 novels (Fig. 1). As a whole, the results were poorer than
expected, even taking into account the fact that the large number of 46
authorial classes -- the needle-in-a-haystack scenario -- was
responsible for this effect. For the subset of 99 texts by 33 authors,
the highest F1 score achieved was as high as 0.91 for the most effective
set of input parameters. For the entire set of 189 novels, the highest
observed score was 0.92. For 30 novels by 10 authors, the score of 1 was
reached for some style-markers combined with Cosine Delta and, to a
lesser extent, with NSC.
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{stylistic_fingerprints_files/figure-latex/unnamed-chunk-1-1.png}
\caption{Overall performance (F1 scores) for three datasets of
189, 99, 30 novels. Particular curves represent all the style-marker
types and all the classifiers.}
\end{figure}
Despite being compact, the resulting plot (Fig. 1) is rather difficult
to read. For this reason, the information to be plotted will be further
reduced in the next figures. The clear collinearity between the
three corpora of 189, 99, and 30 novels -- despite a few notable
exceptions that will be discussed below -- allows us to focus exclusively
on a single dataset. Therefore, in the following sections we will show
the behavior of the 189-novel corpus alone, delegating all the
remaining results to the GitHub repository.
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{stylistic_fingerprints_files/figure-latex/unnamed-chunk-2-1.png}
\caption{Overall performance (F1 scores) for the dataset of 189
novels and four different classifiers: Classic Delta, Cosine Delta, SVM
and NSC.}
\end{figure}
The next comparison was that between our four classifiers. As shown in
Fig. 2, the curves representing performance for each of the classifiers
tend to differ significantly. The top (blue) lines are those for Cosine
Delta, outperforming all the other techniques, as evidenced in recent
scholarship as well (Evert et al., 2017). Next come the performance curves
for Classic Delta that, up to \emph{ca}. 150-word vectors, run together
with those for SVM; but then more and more Classic Delta curves come to
the fore while those for SVM (gray) show a decrease in performance. NSC
exhibits its full potential when long vectors of features are concerned,
which stands in contrast with the behavior of SVM -- while NSC seems to
struggle when the feature space is limited, SVM feels overwhelmed by the
abundance of features. Delta's overall good performance (in both Classic
and Cosine variants) can be partially explained by the fact that in
multi-class setups, distance-based methods usually outperform SVM
(Luyckx and Daelemans, 2011).
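For readers unfamiliar with the two Delta variants, the underlying distance measures can be sketched as follows (an illustration only, not the stylo implementation actually used in this study); both operate on z-scored feature frequencies:

```python
import math

def z_scores(freqs, means, stds):
    """Standardize a text's feature frequencies against corpus-wide
    means and standard deviations."""
    return [(f - m) / s for f, m, s in zip(freqs, means, stds)]

def classic_delta(z1, z2):
    """Burrows (2002): mean absolute (Manhattan) difference of z-scores."""
    return sum(abs(a - b) for a, b in zip(z1, z2)) / len(z1)

def cosine_delta(z1, z2):
    """Evert et al. (2017): cosine distance between z-score profiles,
    which discards vector length and keeps only the 'direction' of style."""
    dot = sum(a * b for a, b in zip(z1, z2))
    norm1 = math.sqrt(sum(a * a for a in z1))
    norm2 = math.sqrt(sum(b * b for b in z2))
    return 1.0 - dot / (norm1 * norm2)
```

In an attribution setup, the disputed text is then assigned to the candidate author whose profile lies at the smallest Delta distance.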
Next comes the comparison of particular style-marker types. The main
research question here is whether lemmatization improves the accuracy of
classification. In Fig. 3, for Cosine Delta, the classical frequent word
approach (MFWs) is highlighted, while all the other curves are kept in
the background. Most frequent word 1-grams (on top) are followed by
2-grams, and then 3-grams (at the bottom). As can be observed, this
simple and time-proven type of features turns out to be the clear winner
of the experiment, at least as far as Cosine Delta and the 189-novel
dataset are concerned. At the same time, however, the same features combined into
3-grams turn out to be unsatisfactory as style-markers, reaching the F1
rate of \emph{ca}. 0.77. There is an explanation for this phenomenon:
being highly inflected, Polish has also a free word-order, which
exponentially increases the number of available word 3-grams (let alone
wider \emph{n}-grams) and leads to substantial data sparseness.
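The sparseness effect is easy to observe directly: the proportion of \emph{n}-grams that are distinct types grows quickly with \emph{n}. A toy illustration on an invented token sequence (not our corpus):

```python
from collections import Counter

def ngram_types(tokens, n):
    """Return (number of distinct n-gram types, total n-gram tokens)."""
    grams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return len(Counter(grams)), len(grams)

tokens = "the cat sat on the mat the dog sat on the cat".split()
for n in (1, 2, 3):
    types, total = ngram_types(tokens, n)
    print(n, types, total, round(types / total, 3))
```

Even in this tiny sample the type/token ratio climbs from 0.5 for 1-grams to 0.9 for 3-grams; in a free-word-order language the climb is steeper, so the observed frequencies of most 3-grams are too low to be reliable.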
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{stylistic_fingerprints_files/figure-latex/unnamed-chunk-3-1.png}
\caption{Performance of original word forms (unlemmatized words,
or MFWs) in the corpus of 189 novels, assessed by Cosine Delta.}
\end{figure}
Even as the best performer, however, frequent word 1-grams are followed
very closely by their immediate competitor, i.e.~frequent lemmatized
word 1-grams (Fig. 4). The general picture of the lemmatized words is
very similar to that of the unlemmatized ones, in terms of both the
dispersion between particular \emph{n}-grams, and the sequence of the
curves: 1-grams are on top, then go 2-grams, while 3-grams are below any
acceptance level. Another noteworthy observation is the fact that both
lemmatized and unlemmatized 1-grams (and 2-grams, to a lesser extent),
rise well above the 0.8 line, which serves as the (mostly unattainable)
ceiling for other style-markers.
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{stylistic_fingerprints_files/figure-latex/unnamed-chunk-4-1.png}
\caption{Performance of lemmatized words in the corpus of 189
novels, assessed by Cosine Delta.}
\end{figure}
However, the rather small divergence between the lemmatized and
unlemmatized words calls for further exploration. Even if manual
inspection of the respective curves (Fig. 3--4) shows that one of the
style-markers outperforms the other, rigorous statistical testing might
suggest otherwise. A standard way to compare two independent variables
is to scrutinize them using the t-test. In our case, however, the
variables in question do not meet the formal requirements for t-testing,
since neither of them follows the normal distribution, and their
variances differ significantly. In such a case, the Wilcoxon test should be
used instead. According to the Wilcoxon test, the difference between
lemmatized and unlemmatized words (for Cosine Delta and the dataset of
189 novels) is indeed significant, with a vanishingly small \emph{p}-value
\textless0.00001. The results of a systematic series of tests for each
combination of the classification method and the dataset are provided in
Table 2. In most cases, the unlemmatized words (MFWs) outperform the
lemmatized words to a significant degree, the exception being the
dataset of 30 novels. Here, no clear winner of the competition can be
identified, at least for Cosine Delta and NSC.
Table 2. Difference between the F1 scores for unlemmatized word 1-grams
(i.e.~MFWs) and lemmatized word 1-grams (i.e.~lemmas), assessed by means
of Wilcoxon tests for each combination of classifiers and datasets. The
numbers represent the \emph{p}-values obtained in each individual test.
The asterisks indicate conventional levels of significance.
\begin{tabular}[]{@{}lllll@{}}
\hline
corpus & Classic Delta & Cosine Delta & SVM & NSC\\
\hline
189 novels & 0.000*** & 0.000*** & 0.000*** & 0.000***\\
99 novels & 0.000*** & 0.000*** & 0.004** & 0.002**\\
30 novels & 0.051 & 0.248 & 0.003** & 0.698\\
\hline
\end{tabular}
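For replication purposes, the kind of test summarized in Table 2 can be reproduced on paired F1 vectors. The sketch below is a self-contained exact signed-rank test for small samples, run on invented scores; it assumes no zero differences and no tied absolute differences, which a production implementation (e.g. R's wilcox.test) handles properly:

```python
from itertools import product

def wilcoxon_exact(x, y):
    """Exact two-sided Wilcoxon signed-rank p-value for small paired
    samples; assumes no zero differences and no ties in |difference|."""
    d = [a - b for a, b in zip(x, y) if a != b]
    n = len(d)
    order = sorted(range(n), key=lambda i: abs(d[i]))  # rank |d| from 1..n
    ranks = [0] * n
    for rank, i in enumerate(order, start=1):
        ranks[i] = rank
    total = n * (n + 1) // 2
    w_plus = sum(r for r, di in zip(ranks, d) if di > 0)
    w_obs = min(w_plus, total - w_plus)
    # under H0, every sign pattern of the ranks is equally likely
    count = 0
    for signs in product((0, 1), repeat=n):
        w = sum(r for r, s in zip(range(1, n + 1), signs) if s)
        if min(w, total - w) <= w_obs:
            count += 1
    return count / 2 ** n

# hypothetical paired F1 scores (MFWs vs. lemmas) at six vector sizes
p = wilcoxon_exact([0.86, 0.88, 0.90, 0.91, 0.89, 0.93],
                   [0.850, 0.862, 0.871, 0.905, 0.883, 0.921])
print(p)  # 0.03125
```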
Finally, the behavior of syntactic style-markers -- as assessed via
POS-tag \emph{n}-grams in their various flavors -- should be commented
on. First and foremost, they turned out to behave substantially differently
from lexical markers. As shown in Fig. 5, the overall
performance of full POS-tags is worse than that of both lemmatized and
unlemmatized words. Also, the spread of the POS-tag curves for different
\emph{n}-grams is smaller (the curves are rather flat) than that of
words, which suggests that the POS-tags are more robust (but also more
resistant to hyperparameter fine-tuning) than lexical markers. Last but
definitely not least, worth noticing is the performance of particular
\emph{n}-grams as a function of the number of features tested. Unlike
the lexical markers, full POS-tag 1-grams don't outperform longer
\emph{n}-grams. It is true that 1-grams initially win, but they are
immediately overtaken by 2-grams, and then even by 3-grams. More
interestingly, the 1-grams reveal a further (and constant) decrease of
performance, as if longer feature vectors contained more and more
stylometric noise.
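How the syntactic features themselves are derived can be sketched as follows. The tags below are invented stand-ins in the colon-separated style of the National Corpus of Polish tagset; truncating each tag to its first segment(s) yields the reduced variants examined in this study:

```python
def pos_ngrams(tags, n, segments=None):
    """Build POS-tag n-gram features from a sequence of morphosyntactic
    tags. `segments` truncates each tag to its first k colon-separated
    parts: k=1 keeps POS in the strict sense, k=2 keeps two segments."""
    if segments is not None:
        tags = [":".join(t.split(":")[:segments]) for t in tags]
    return [" ".join(tags[i:i + n]) for i in range(len(tags) - n + 1)]

# invented tag sequence (not real tagger output)
tags = ["subst:sg:nom", "adj:sg:nom", "fin:sg", "prep:acc", "subst:sg:acc"]
print(pos_ngrams(tags, 2, segments=1))  # ['subst adj', 'adj fin', 'fin prep', 'prep subst']
```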
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{stylistic_fingerprints_files/figure-latex/unnamed-chunk-5-1.png}
\caption{Performance of full POS-tags in the corpus of 189
novels, assessed by Cosine Delta.}
\end{figure}
The above picture of syntax-based attribution is corroborated by the
other variants of POS markers, particularly POS-tags in the strict sense
(or, the first tag-parts alone), as shown in Fig. 6. Here, 2-grams
proved optimal, but they reveal a constant decrease of performance for
longer vectors of features, until they are overtaken by 3-grams (the
success rate of 1-grams could be assessed only for the vector of 35
features, reaching the F1 score of 0.643, whereas the number of
available 2-grams was exhausted at the 1,000 features mark). The
behavior of POS-tags reduced to their 1\textsuperscript{st} and
2\textsuperscript{nd} segment (Fig. 7) confirms the general picture of
syntactic features, except that the 3-grams turned out to be the least
successful style-markers examined in this study. Worth mentioning is the
fact that even the worst choice of features would still lead to the
impressive attributive score of \emph{ca}. 0.75.
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{stylistic_fingerprints_files/figure-latex/unnamed-chunk-6-1.png}
\caption{Performance of the initial segment of each tag
(i.e.~POS in the strict sense) in the corpus of 189 novels, assessed by
Cosine Delta.}
\end{figure}
The relatively good performance of higher-order POS-tag \emph{n}-grams
over single items or 2-grams deserves a linguistic interpretation. It
clearly shows that syntax (if we believe that it is reflected by
sequences of 3 subsequent POS labels) plays a considerable role in the
authorial fingerprint, even if it cannot compete with the overwhelming
performance of frequent words. Being less noticeable, however, syntactic
style-markers are very stable in terms of resistance to the number of
analyzed \emph{n}-grams. The F1 attributive score of \emph{ca}. 0.75 for
the worst-case scenario provides us with strong evidence that the
syntactic features retain a considerable amount of the authorial
fingerprint.
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{stylistic_fingerprints_files/figure-latex/unnamed-chunk-7-1.png}
\caption{Performance of the first two segments of each tag
in the corpus of 189 novels, assessed by Cosine Delta.}
\end{figure}
\hypertarget{conclusions}{%
\section{Conclusions}\label{conclusions}}
The results obtained in this study allow for a few general observations.
Firstly, this study shows that, at least in Polish, lemmatization is not
necessarily the way to raise attribution accuracy.
Presumably, this claim should be applicable -- by extension -- to other
richly inflected languages. This observation is somewhat
counter-intuitive, since lemmatization leads to a decrease of the number
of types and an increase of the number of tokens per type, which in turn
should reduce data sparseness. It turned out otherwise, as if
lemmatization -- a crude way of ``making Polish more like English'' --
stripped out some relevant information about authorial uniqueness. Since
we know exactly what is lost in the process of lemmatization, we can
reason that the inflectional suffixes play some role in
authorship attribution.
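What lemmatization removes can be made concrete with a minimal invented example: several inflected forms of one word collapse into a single type, shrinking the feature inventory exactly as described above:

```python
# hypothetical lemma mapping for a few inflected forms of Polish "kot" (cat)
lemma = {"kot": "kot", "kota": "kot", "kotu": "kot", "kotem": "kot"}

tokens = ["kot", "kota", "kotem", "kot", "kotu"]
types_raw = len(set(tokens))                     # distinct inflected forms
types_lem = len(set(lemma[t] for t in tokens))   # distinct lemmas
print(types_raw, types_lem)  # 4 1
```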
More convincing evidence of the role of grammar in attribution is
provided by our tests involving POS-tag \emph{n}-grams. Despite
significantly worse performance, our syntax-based features exhibited
considerable potential to distinguish between authors. It is a widely accepted
claim that the linguistic originality of an author manifests itself in
the lexis, i.e.~in a predilection for some words and avoidance of others. It
is less obvious whether the same can be said of syntactic constructions;
intuitively, syntax does not allow as much freedom of choice as
lexis. Our results provide evidence that syntax alone is responsible for
a considerable amount of authorial uniqueness. Even if syntactic
features cannot compete with the lexis, they can still be used as
efficient style-markers, possibly in combination with traditional
features. Interestingly, the loss of accuracy when only grammatical tags
were taken into account was not very high (\emph{ca}. 15\%). This is a
good hint that writers/authors are only slightly less restricted by
syntax than they are by lexis.
\hypertarget{acknowledgements}{%
\section{Acknowledgements}\label{acknowledgements}}
This research was conducted as a result of the project ``Large-Scale
Text Analysis and Methodological Foundations of Computational
Stylistics'' (2017/26/E/HS2/01019) supported by Poland's National
Science Centre.
\hypertarget{references}{%
\section*{References}\label{references}}
\addcontentsline{toc}{section}{References}
\hypertarget{refs}{}
\begin{CSLReferences}{1}{0}
\leavevmode\hypertarget{ref-acedanskiMorphosyntacticBrillTagger2010}{}%
\textbf{Acedański, S.} (2010). A morphosyntactic {Brill} tagger for
inflectional languages. \emph{Advances in {Natural Language
Processing}}. {Reykjavik}, pp. 3--14.
\leavevmode\hypertarget{ref-baayenOutsideCaveShadows1996}{}%
\textbf{Baayen, H., Van Halteren, H. and Tweedie, F.} (1996). Outside
the cave of shadows: Using syntactic annotation to enhance authorship
attribution. \emph{Literary and Linguistic Computing}, \textbf{11}(3):
121--32.
\leavevmode\hypertarget{ref-burrowsTiptoeingInfiniteTesting1996}{}%
\textbf{Burrows, J.} (1996). Tiptoeing into the infinite: {Testing} for
evidence of national differences in the language of {English} narrative.
In Hockey, S. and Ide, N. (eds), \emph{Research in {Humanities
Computing} 4}. {Oxford}: {Oxford University Press}, pp. 1--33.
\leavevmode\hypertarget{ref-burrowsDeltaMeasureStylistic2002}{}%
\textbf{Burrows, J.} (2002). {``{Delta}''}: A measure of stylistic
difference and a guide to likely authorship. \emph{Literary and
Linguistic Computing}, \textbf{17}(3): 267--87.
\leavevmode\hypertarget{ref-ederStylemarkersAuthorshipAttribution2011}{}%
\textbf{Eder, M.} (2011). Style-markers in authorship attribution: A
cross-language study of the authorial fingerprint. \emph{Studies in
Polish Linguistics}, \textbf{6}: 99--114.
\leavevmode\hypertarget{ref-ederMindYourCorpus2013}{}%
\textbf{Eder, M.} (2013). Mind your corpus: Systematic errors in
authorship attribution. \emph{Literary and Linguistic Computing},
\textbf{28}(4): 603--14.
\leavevmode\hypertarget{ref-ederStylometryPackageComputational2016}{}%
\textbf{Eder, M., Rybicki, J. and Kestemont, M.} (2016). Stylometry with
{R}: A package for computational text analysis. \emph{R Journal},
\textbf{8}(1): 107--21.
\leavevmode\hypertarget{ref-evertUnderstandingExplainingDelta2017}{}%
\textbf{Evert, S., Proisl, T., Jannidis, F., Reger, I., Pielström, S.,
Schöch, C. and Vitt, T.} (2017). Understanding and explaining {Delta}
measures for authorship attribution. \emph{Digital Scholarship in the
Humanities}, \textbf{32}(suppl. 2): 4--16
doi:\href{https://doi.org/10.1093/llc/fqx023}{10.1093/llc/fqx023}.
\leavevmode\hypertarget{ref-hirstBigramsSyntacticLabels2007}{}%
\textbf{Hirst, G. and Feiguina, O.} (2007). Bigrams of syntactic labels
for authorship discrimination of short texts. \emph{Literary and
Linguistic Computing}, \textbf{22}(4): 405--17.
\leavevmode\hypertarget{ref-jockersComparativeStudyMachine2010}{}%
\textbf{Jockers, M. L. and Witten, D. M.} (2010). A comparative study of
machine learning methods for authorship attribution. \emph{Literary and
Linguistic Computing}, \textbf{25}(2): 215--23.
\leavevmode\hypertarget{ref-juolaCrosslinguisticTransferrenceAuthorship2009}{}%
\textbf{Juola, P.} (2009). Cross-linguistic transferrence of authorship
attribution, or why {English}-only prototypes are acceptable.
\emph{Digital {Humanities} 2009: {Conference Abstracts}}. {College Park,
MD}: {University of Maryland}, pp. 162--63.
\leavevmode\hypertarget{ref-kennyComputationStyleIntroduction1982}{}%
\textbf{Kenny, A.} (1982). \emph{The Computation of Style: An
Introduction to Statistics for Students of Literature and Humanities}.
{Oxford; New York}: {Pergamon Press}.
\leavevmode\hypertarget{ref-kestemontFunctionWordsAuthorship2014}{}%
\textbf{Kestemont, M.} (2014). Function words in authorship attribution:
From black magic to theory? \emph{Proceedings of the 3rd {Workshop} on
{Computational Linguistics} for {Literature} ({CLFL})}. {Gothenburg,
Sweden}: {Association for Computational Linguistics}, pp. 59--66.
\leavevmode\hypertarget{ref-koppelComputationalMethodsAuthorship2009}{}%
\textbf{Koppel, M., Schler, J. and Argamon, S.} (2009). Computational
methods in authorship attribution. \emph{Journal of the American Society
for Information Science and Technology}, \textbf{60}(1): 9--26.
\leavevmode\hypertarget{ref-luyckxEffectAuthorSet2011}{}%
\textbf{Luyckx, K. and Daelemans, W.} (2011). The effect of author set
size and data size in authorship attribution. \emph{Literary and
Linguistic Computing}, \textbf{26}(1): 35--55.
\leavevmode\hypertarget{ref-mckennaBeckettTrilogyComputational1999}{}%
\textbf{McKenna, W., Burrows, J. and Antonia, A.} (1999). Beckett's
trilogy: Computational stylistics and the nature of translation.
\emph{Revue Informatique Et Statistique Dans Le Sciences Humaines},
\textbf{35}: 151--71.
\leavevmode\hypertarget{ref-przepiorkowskiNarodowyKorpusJezyka2012}{}%
\textbf{Przepiórkowski, A., Bańko, M., Górski, R. L. and
Lewandowska-Tomaszczyk, B. (eds).} (2012). \emph{Narodowy {Korpus Języka
Polskiego}}. {Warszawa}: {PWN}.
\leavevmode\hypertarget{ref-rybickiSuccessRatesMostfrequentwordbased2015}{}%
\textbf{Rybicki, J.} (2015a). Success {Rates} in
most-frequent-word-based authorship attribution: {A} case study of 1000
{Polish} novels from {Ignacy Krasicki} to {Jerzy Pilch}. \emph{Studies
in Polish Linguistics}, \textbf{10}(2): 87--104.
\leavevmode\hypertarget{ref-rybickiViveDifferenceTracing2015}{}%
\textbf{Rybicki, J.} (2015b). Vive la différence: {Tracing} the
(authorial) gender signal by multivariate analysis of word frequencies.
\emph{Digital Scholarship in the Humanities}, \textbf{31}(4): 746--61
doi:\href{https://doi.org/10.1093/llc/fqv023}{10.1093/llc/fqv023}.
\leavevmode\hypertarget{ref-rybickiDeeperDeltaGenres2011}{}%
\textbf{Rybicki, J. and Eder, M.} (2011). Deeper {Delta} across genres
and languages: Do we really need the most frequent words? \emph{Literary
and Linguistic Computing}, \textbf{26}(3): 315--21.
\leavevmode\hypertarget{ref-sokolovaSystematicAnalysisPerformance2009}{}%
\textbf{Sokolova, M. and Lapalme, G.} (2009). A systematic analysis of
performance measures for classification tasks. \emph{Information
Processing and Management}, \textbf{45}(4): 427--37
doi:\href{https://doi.org/10.1016/j.ipm.2009.03.002}{10.1016/j.ipm.2009.03.002}.
\leavevmode\hypertarget{ref-stamatatosSurveyModernAuthorship2009}{}%
\textbf{Stamatatos, E.} (2009). A survey of modern authorship
attribution methods. \emph{Journal of the American Society for
Information Science and Technology}, \textbf{60}(3): 538--56.
\leavevmode\hypertarget{ref-wiersmaAutomaticallyExtractingTypical2011}{}%
\textbf{Wiersma, W., Nerbonne, J. and Lauttamus, T.} (2011).
Automatically extracting typical syntactic differences from corpora.
\emph{Literary and Linguistic Computing}, \textbf{26}(1): 107--24.
\end{CSLReferences}
\bibliographystyle{unsrt}
Silke Wieprecht (born 23 August 1965 in Roth bei Nürnberg) is a German civil engineer and university professor.
Life
After completing her Abitur at the Adam-Kraft-Gymnasium in Schwabach in 1984, she studied civil engineering at the Technical University of Munich. In 1998 she received her doctorate from the University of the Bundeswehr Munich and subsequently worked at the Federal Institute of Hydrology (Bundesanstalt für Gewässerkunde) in the field of river morphology.
Since July 2003 she has been a professor at the Institute for Hydraulic Engineering (now: Institute for Modelling Hydraulic and Environmental Systems) of the University of Stuttgart, where she heads the Chair of Hydraulic Engineering and Water Resources Management. Since 2021 Wieprecht has been Vice Rector for Diversity and International Affairs at the University of Stuttgart.
Awards
In 2019 Silke Wieprecht was awarded the Qian Ning Prize of the World Association for Sedimentation and Erosion Research (WASER) for her outstanding contributions to advancing knowledge of erosion and sedimentation and to international cooperation.
Committee work
International Association for Hydro-Environment Engineering and Research (IAHR) Council Member
DWA Main Committee "Hydraulic Engineering and Hydropower" (chair)
DWA Technical Committee WW-1 Hydraulics
DWA Technical Committee WW-2 Morphodynamics and Sediment Management
DWA Technical Committee WW-3 River Engineering (chair)
External links
Silke Wieprecht at iws.uni-stuttgart.de
Politics Tue, 8 Jan 2019
I haven't promised SHS students allowances - Mahama
Former president John Mahama has rubbished reports making the rounds that he has promised senior high school students monthly allowances if he returns to power.
Describing the report as "fake and a figment of the writer's imagination," the National Democratic Congress (NDC) flagbearer hopeful also denied having been in the Volta Region recently.
"It is also not true that Mr Mahama has visited the Volta Region to speak to students there. Please disregard the bizarre, outrageous and ridiculous publication and treat it with the contempt that it deserves. It is fake news. For the records, Mr Mahama is on a five-day campaign tour in the Western Region," the campaign team of Mahama said in a statement.
Below is the full statement:
RE: SHS STUDENTS WILL TAKE GARI AND ALAWA EVERY MONTH IN MY SECOND COMING — MAHAMA
Our attention has been drawn to an online publication on the subject matter above. The report claims that former President John Dramani Mahama spoke to students in the Volta Region on the said subject matter.
For the avoidance of doubt, we wish to state that the report is fake and a figment of the writer's imagination. It is a matter of public knowledge that Mr Mahama has yet to visit the Volta Region to campaign there ahead of the flagbearership election of the National Democratic Congress (NDC).
It is also not true that Mr Mahama has visited the Volta Region to speak to students there. Please disregard the bizarre, outrageous and ridiculous publication and treat it with the contempt that it deserves. It is fake news. For the records, Mr Mahama is on a five-day campaign tour in the Western Region.
Whilst on this campaign, he has been meeting with delegates and supporters of the NDC. He has been speaking mainly on issues affecting cocoa farmers, small-scale miners, jobs, the high cost of living in the country, his development agenda for the NDC, plans to better implement Free SHS as well as his determination and commitment to eliminate the chaotic double track system in secondary education.
JAMES AGYENIM-BOATENG
Source: Starrfmonline.com
Pelosi says House will impeach Trump again
January 11, 2021 Lisa Mascaro, Darlene Superville, and Mary Clare Jalonick, Associated Press
WASHINGTON, D.C. — House Speaker Nancy Pelosi says the House will proceed with legislation to impeach President Donald Trump as she pushes the vice president and the Cabinet to invoke constitutional authority to force him […]
No charges against Wisconsin officer who shot Jacob Blake
January 7, 2021 Todd Richmond and Michael Tarm, Associated Press
KENOSHA, Wis. — A Wisconsin prosecutor declined Tuesday to file charges against a police officer who shot a black man in the back in Kenosha, concluding he couldn't disprove the officer's contention that he acted […]
In year of protests, treatment depended on political leanings
December 23, 2020 A.P. Dillon
RALEIGH — 2020 was a year defined by the coronavirus pandemic, but the year was also marked by steady "ReOpen" protests over pandemic executive orders. Overlapping with ReOpen protests were Black Lives Matter demonstrations over […]
Guatemala condemns fire at Congress; 12 injured in protests
November 22, 2020 Sonia Perez, Associated Press
GUATEMALA CITY — Guatemala's government called fires set by protesters at Congress "terrorist acts" while the Inter-American Human Rights Commission on Sunday condemned what it called an "excessive use of force" by police against demonstrators […]
900 reported arrested in Belarus protests
November 16, 2020 The Associated Press
KYIV, Ukraine — A human rights group in Belarus said more than 900 people were arrested Sunday in protests around the country calling for authoritarian President Alexander Lukashenko to step down. The demonstrations continued the […]
Fearing election unrest, U.S. businesses are getting ready
November 2, 2020 The Associated Press
Judging by the plywood, it's shaping up to be an Election Day like no other. In downtown Washington, the sounds of hammers and power tools echoed through the streets Monday as workers boarded up dozens […]
Clarence Henderson describes path from 1960 Woolworth's sit-in to 2020 featured RNC speech
September 30, 2020 David Larson
RALEIGH — Clarence Henderson knows what it's like to hold his ground despite strong opposition from the majority. That was true in 1960 when he participated in the historic sit-in at a Greensboro Woolworth's lunch […]
2 Louisville officers shot amid Breonna Taylor protests
September 24, 2020 Dylan Lovan, Piper Hudspeth Blackburn, and John Minchillo, Associated Press
LOUISVILLE, Ky. — Hours after a Kentucky grand jury brought no charges against Louisville police for Breonna Taylor's death and protesters took to the streets, authorities said two officers were shot and wounded Wednesday night […]
1 officer indicted in Breonna Taylor case; not for her death
September 23, 2020
LOUISVILLE, Ky. — A Kentucky grand jury on Wednesday indicted a single former police officer for shooting into neighboring apartments but did not move forward with charges against any officers for their role in Breonna […]
Protest with Jacob Blake's family held in North Carolina
September 21, 2020 The Associated Press
CHARLOTTE — Members of Jacob Blake's family attended a rally in Charlotte on Sunday, calling for an end to a "vicious cycle of hate" nearly a month after Blake was shot by a police officer […]
{"url":"https:\/\/physics.stackexchange.com\/questions\/281671\/which-force-pushes-a-ball-away-from-the-center-of-a-spinning-disk","text":"# Which force pushes a ball away from the center of a spinning disk?\n\nHere is a question from my book.\n\nI need to know which force is acting on the ball, making it move outwards?\n\nIt cannot be centrifugal force, as centrifugal force acts if the particle is moving in a circle. Here it is clearly not. Moreover centrifugal force acts when it is seen from the point of view of the ball itself (pseudo forces act only in an accelerated frame). What if I look at it from the ground? Which force will then be responsible for making it move outwards?\n\n\u2022 It is centrifugal force. Imagine yourself seated on the rotating table and holding a ball by a taut string (forget the groove). The string will be radially stretched because of centrifugal force, which is also the force acting on the ball. \u2013\u00a0Deep Sep 22 '16 at 11:26\n\u2022 Well the particle is moving in a circle so you have an (instantiaous) centrifugal force acting on the particle. However as you don't have any balancing inward force, the radial distance of the particle will increase with time. \u2013\u00a0Mikael Fremling Sep 22 '16 at 11:26\n\u2022 @Zero, centrifugal force works if it's going in a circle. Clearly it is not \u2013\u00a0Aaryan Dewan Sep 22 '16 at 12:48\n\u2022 @AaryanDewan, from the point of view of the ground, the ball will move in an outward spiral as it rotates with the disk and moves away from the center. This is effectively moving in a circle whose radius is increasing over time, meaning centripetal\/centrifugal forces apply. \u2013\u00a0Nuclear Wang Sep 23 '16 at 19:33\n\nHere is an attempt to explain what is going on in the (inertial) frame of reference of the world:\n\nThe red vector is the force from the side of the groove on the ball: as a result the ball starts to move. 
Initially, it will get the same lateral speed as the groove - if it's at a distance $r$, and the disk rotates at $\\omega$, the velocity will be $v=r~\\omega$.\n\nA moment later, the groove will be at a different angle - but the ball tries to keep going in a straight line. It will have moved to a new radial direction, where the groove is going faster than the ball. As a result, it will once again feel a force of the wall, and it will accelerate in a new direction; I tried to indicate the new velocity as the vector sum of the old velocity plus the acceleration.\n\nObviously you can repeat the diagram for subsequent positions of the disk.\n\nIn the rotating frame of reference of the disk, you can describe the same thing in a different way. In a rotating frame of reference, there appear to be two fictitious forces: the centrifugal force that makes the object \"want to move away from the center\", and the Coriolis force that is only apparent if the object has a velocity in the rotating frame of reference.\n\nWhen the ball is stationary in the groove (in the rotating frame of reference), the only force it experiences is the centrifugal force (this is right after the initial impulse that will have given the ball the same velocity as the part of the groove where it was placed). As soon as it starts moving outwards (under the influence of the centrifugal force) it will also start to swerve (under the influence of the Coriolis force). The groove will exert a force equal and opposite to the Coriolis force to keep the ball moving in a straight line in the groove.\n\nIn the rotating frame of reference, the radial acceleration of the ball can be calculated directly from the centrifugal force. The total velocity can be arrived at by calculating both the radial and tangential components of the velocity (tangential velocity is $r\\omega$).\n\nI will leave the details up to you.\n\n\u2022 Hi. See the ball is not moving in a circle. 
How can you apply centrifugal force on it, as there's no centripetal for on it too! ( even if you look at it from the centre of the circle ) ? \u2013\u00a0Aaryan Dewan Sep 25 '16 at 20:12\n\u2022 @AaryanDewan The centrifugal force is a \"fictitious force\" that appears in a rotating frame of reference and that only depends on the mass of the object, the rate of rotation of the frame of reference and the distance from the axis of rotation. When you constrain an object to go in a circle you need to apply an equal and opposite centripetal force to balance the centrifugal force - making \"no net force\" in the rotating frame of reference, so the object appears stationary. But the centrifugal force is there, regardless. \u2013\u00a0Floris Sep 25 '16 at 21:36\n\u2022 Thanks! But @Floris , can you tell me WHY do we always apply the fictitious force away from the centre, no matter from where are we looking the object from? \u2013\u00a0Aaryan Dewan Sep 26 '16 at 2:05\n\u2022 The fictitious force appears ONLY in the rotating frame of reference. Not sure what you mean by \"no matter from where we are looking at the object\". \u2013\u00a0Floris Sep 26 '16 at 11:37\n\nTechnically it's the groove that's exerting a mechanical force on the ball pushing it in the direction the groove is moving in. Inertia is why the ball travels outward because more of it's momentum is in the outward direction as opposed to the inward direction. The groove constantly applying a force and altering the ball's momentum ensures that the majority of it's momentum will always be in the outward direction.\n\n\u2022 There is no friction. The question says that the table is smooth. \u2013\u00a0sammy gerbil Sep 22 '16 at 16:43\n\u2022 Edited my answer \u2013\u00a0Yogi DMT Sep 22 '16 at 17:34\n\nYou could think of this in terms of Fictious Forces.\n\nWhat if i look at it from the ground? 
Which force will be then responsible to make it move towards the left?\n\nSource: Coriolis Force\n\nIn the inertial frame of reference (upper part of the picture), the black ball moves in a straight line. However, the observer (red dot) who is standing in the rotating\/non-inertial frame of reference (lower part of the picture) sees the object as following a curved path due to the Coriolis and centrifugal forces present in this frame.\n\nIn your example, the groove constrains the ball from moving sideways.\n\nAs this a homework type question, I can give you an outline and references for you to read yourself. We have Newton's laws for when we are examing an inertial (non accelerating) frame of reference. When we move to a rotating frame of reference, we acquire extra forces, called either inertial forces or pseudo forces.\n\nThese forces are called the Coriolis Force and Centrifugal Force\n\nYou can read up more on how these forces work at Rotating Frames of Reference and Forces Involved in Circular Motion\n\n\u2022 I don't think coriolis force acts here; since it is along the $\\rm{OY}$ axis as is evident from the pic. \u2013\u00a0user36790 Sep 22 '16 at 11:28\n\u2022 This answer (and most other comments to the question) do not address the question when viewed from the non-rotating frame. \u2013\u00a0garyp Sep 22 '16 at 11:55\n\u2022 I think the fictitious Coriolis Force does act here. However, it causes no tangential motion relative to the disk because the particle is confined to the groove, so the Coriolis Force is opposed by a real force of reaction from the side of the groove. \u2013\u00a0sammy gerbil Sep 22 '16 at 11:59\n\u2022 @sammygerbil: Yes, there is indeed Coriolis force when viewed with respect to the rotating table. But it is irrelevant here since the particle is constrained to move only along $\\rm{OX}\\,.$ \u2013\u00a0user36790 Sep 22 '16 at 12:29\n\u2022 @CountTo10: Yes, I do know there is Coriolis force, but it is totally irrelevant here. 
It is nullified by the normal forces from the sides of the groove. – user36790 Sep 22 '16 at 12:30

For the ball to stay at a constant distance from the center of rotation at a constant angular velocity, the net force on the ball would have to point inwards, from the ball towards the center of rotation (i.e. a centripetal force).

However, the only "real" force acting on the ball is the normal force from the side of the groove, which points tangentially. So the net effect on the ball is that the direction of its velocity vector is constantly being turned to point outward radially.

• Can you post a picture illustrating "normal force from the side of the groove, which points tangentially"? – Aaryan Dewan Sep 22 '16 at 22:48
• Floris's answer contains the diagram. The picture on the left shows the normal force (red) and the initial direction of the velocity of the ball (green). Note that the groove points radially, and the normal force is always perpendicular to the radius (i.e. tangential). Now look at the solid green arrow in the right image. If you break that into radial and tangential components, you will see that there is a small radial component, hence the ball starts moving outward radially.
– mbeckish Sep 26 '16 at 15:03
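The thread above can be made concrete with a short numerical sketch. In the rotating frame, the only force component along a frictionless radial groove is the centrifugal term, so the radial coordinate obeys r″ = ω²r (the Coriolis force is perpendicular to the groove and cancelled by the groove wall, as the comments note), giving r(t) = r₀·cosh(ωt) for a ball released at rest. The parameter values below are illustrative, not from the question:

```python
import math

# Ball in a frictionless radial groove on a table rotating at angular
# velocity omega. In the rotating frame the only force component along
# the groove is the centrifugal term, so r'' = omega**2 * r; the Coriolis
# force is perpendicular to the groove and cancelled by the groove wall.
# Closed-form solution for r(0) = r0, r'(0) = 0 is r(t) = r0*cosh(omega*t).

def simulate(r0, omega, t_end, dt=1e-5):
    """Integrate r'' = omega^2 * r with a semi-implicit Euler scheme."""
    r, v, t = r0, 0.0, 0.0
    while t < t_end:
        v += omega**2 * r * dt  # centrifugal acceleration along the groove
        r += v * dt
        t += dt
    return r

r0, omega, t_end = 0.1, 2.0, 1.0  # illustrative values
r_num = simulate(r0, omega, t_end)
r_exact = r0 * math.cosh(omega * t_end)
print(r_num, r_exact)  # the two agree to several decimal places
```

The ball accelerates outward even though, in the inertial frame, no radial force acts on it: the groove's tangential normal force keeps rotating the velocity vector, exactly as the answers above describe.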
# Harnessing strong metal–support interactions via a reverse route

## Abstract

Engineering strong metal–support interactions (SMSI) is an effective strategy for tuning structures and performances of supported metal catalysts but induces poor exposure of active sites. Here, we demonstrate a strong metal–support interaction via a reverse route (SMSIR) by starting from the final morphology of SMSI (fully-encapsulated core–shell structure) to obtain the intermediate state with desirable exposure of metal sites. Using core–shell nanoparticles (NPs) as a building block, the Pd–FeOx NPs are transformed into a porous yolk–shell structure along with the formation of SMSIR upon treatment under a reductive atmosphere. The final structure, denoted as Pd–Fe3O4–H, exhibits excellent catalytic performance in semi-hydrogenation of acetylene with 100% conversion and 85.1% selectivity to ethylene at 80 °C. Detailed electron microscopic and spectroscopic experiments coupled with computational modeling demonstrate that the compelling performance stems from the SMSIR, favoring the formation of surface hydrogen on Pd instead of hydride.

## Introduction

Supported metal catalysts have long been recognized as the most important group of heterogeneous catalysts for fundamental investigations and modern chemical industries1,2,3,4.
Conventionally, these catalysts are synthesized by anchoring the active metal nanoparticles (NPs) onto certain high-surface-area supports to increase the dispersion of catalytically active sites and stabilize the metal against leaching5,6,7. Subsequently, the metal–support interface is constructed. Such an interface provides synergistic properties to regulate catalysis by modifying the electronic (charge transfer between the metal sites and the support) and/or geometric (decoration or coverage of metal sites by the support) parameters, and also by modulating the reaction pathways; e.g., lattice oxygen in oxide supports may directly participate in catalytic reactions7, and multicomponent interfaces can enable tandem reaction pathways that do not exist on single-component active sites8,9.

As a classic prototype in metal–support interactions, the strong metal–support interaction (SMSI) has been defined as the encapsulation of NPs, usually group VIII metals, by partially reduced oxide supports during high-temperature hydrogen (H2) treatment10,11. Since the very first discovery of SMSI by Tauster et al.12,13,14, SMSI has been widely exploited to tune the catalytic performances of group VIII NPs by engineering the geometric and/or electronic structures of these metal sites. For example, the adsorption of H2 or CO on Pd was strongly suppressed upon the formation of SMSI (refs. 13,15), suggesting that the active metal sites were largely covered by the support, which altered the geometric ensembles and improved the thermal stability of Pd catalysts. Meanwhile, because the reducible oxide support, e.g., TiO2, Co3O4, CeO2, and Nb2O5, is partially reduced to a structure with a nonstoichiometric oxygen concentration during the reductive annealing, electron transfer between metal NPs and oxide supports was detected16,17,18,19.
Under extreme conditions, the formation of an intermetallic structure between the supported metal and metal cations in the supporting oxide was observed20,21.

Despite these fascinating interfacial properties of SMSI, its formation is restricted to specific combinations of elements, i.e., group VIII metals with high surface energy and transition metal oxides with low surface energy. Consequently, it is extremely challenging for some metals, e.g., Au, to manifest SMSI due to their low work function and surface energy15,17,22. Efforts have been devoted to expanding upon the conventional SMSI. One critical element in this pursuit is switching the high-temperature treatment in H2 to other conditions, thereby changing the mechanistic pathways for the formation of SMSI. For example, Wang et al. reported SMSI formation between Au NPs and TiO2 induced by melamine under an oxidative atmosphere. With the formation of SMSI, the Au NPs were encapsulated by a permeable TiOx thin layer, making the Au NPs ultrastable at 800 °C (ref. 23). Xiao et al. reported a wet-chemistry approach to construct SMSI in aqueous solution at room temperature, realized by engineering redox interactions between metals and supports; this strategy was applicable to Au, Pt, Pd, and Rh (ref. 15). Christopher et al. developed a strategy mediated by strongly bound adsorbates to construct SMSI between Rh and TiO2 through high-temperature treatment in a mixture of CO2 and H2 (ref. 24). Zhang et al. engineered the SMSI between Au NPs and hydroxyapatite by treating the Au NP–hydroxyapatite composite in air at high temperatures17. Although progress has been made in expanding the boundaries of SMSI, one inevitable issue with the conventional SMSI is that upon high-temperature treatment the encapsulation process immediately and uncontrollably takes place, resulting in limited exposure of active sites25.
In the ideal scenario, the oxide coverage on the metal surface needs to be thin and permeable to small molecules, while still fully encapsulating the metal NPs to prevent the dissolution, disintegration, and aggregation of active sites during catalysis.

We recently reported that voids and cavity space can be developed in metal–metal oxide core–shell NPs in response to H2 treatment at 200 °C (ref. 26). This observation, combined with the current issues in conventional SMSI, motivated us to develop alternative routes to metal–support interactions. Here, we denote this type of structural rearrangement as the strong metal–support interaction via a reverse route (SMSIR). Specifically, we start from the final morphology of SMSI (full encapsulation) and end in the intermediate state with partial exposure of metal sites (Fig. 1). As a proof of concept, we demonstrate that core–shell Pd–FeOx NPs can be restructured into a porous yolk–shell structure after optimized reductive annealing (Pd–Fe3O4–H). Characterizations reveal that Pd atoms gradually migrate into the Fe3O4 lattice and electrons are partially transferred from Pd to Fe3O4. The Pd–Fe3O4–H shows 100% conversion and 85.1% selectivity in the acetylene (C2H2) semi-hydrogenation at atmospheric pressure and a mild reaction temperature of 80 °C. Further investigations demonstrate that the Pd–Fe3O4–H engineered by the SMSIR alleviates the strong H2 adsorption on Pd sites, in favor of the formation of surface hydrogen (surface-H) instead of hydride during the hydrogenation of C2H2 to C2H4.
Our results on engineering SMSIR can help circumvent the current limits of metal–support interfaces, expanding the boundaries of conventional SMSI and providing opportunities to rationally maneuver structure-dependent catalytic outcomes.

## Results

### Synthesis and characterization

Details of the material synthesis can be found in the “Methods” section. Briefly, Pd NPs with a size of 5.5 ± 0.5 nm (Supplementary Fig. 1) were prepared by the reduction of palladium(II) acetylacetonate (Pd(acac)2) in oleylamine (OAM), as modified from a previous report27. The core–shell Pd–FeOx NPs were obtained by a seed-mediated growth method with the pre-made Pd NPs as the seeds and iron(III) acetylacetonate as the iron precursor, which nucleated on the Pd surface to form an iron oxide shell. The pristine core–shell sample was denoted as Pd–FeOx NPs (Supplementary Figs. 2 and 3). The SMSIR was constructed by treating the Pd–FeOx NPs at 300 °C in a gas mixture of H2 and argon (Ar; 4 vol.% H2), and the sample was named Pd–Fe3O4–H. As a comparison, the Pd–FeOx NPs were treated in air at 300 °C to obtain the structure without SMSIR (Pd–Fe3O4–A).

X-ray diffraction (XRD) was performed to determine the crystal structures of the samples. As shown in the XRD patterns of pristine Pd–FeOx and Pd–Fe3O4–A (Supplementary Fig. 4), a characteristic peak at 2θ = 40.1° with very low intensity was detected, which can be assigned to the (111) peak of face-centered cubic (fcc) Pd. No additional peaks in the XRD patterns can be found, indicating the amorphous nature of the iron oxide shell in both pristine Pd–FeOx and Pd–Fe3O4–A.
On the contrary, in the XRD pattern of Pd–Fe3O4–H, the intensity of the Pd (111) peak increases remarkably, and a series of characteristic peaks at 2θ = 30.6°, 35.9°, 43.5°, 53.9°, 57.3°, 63.0°, and 74.3° are clearly observed, which can be assigned to the (220), (311), (400), (422), (511), (440), and (533) planes of γ-Fe3O4. The XRD characterization indicates that annealing in the reductive atmosphere may facilitate the spatial redistribution of grains in the oxide shell and promote the crystallization of Pd and iron oxides, consistent with our previous report26.

The aberration-corrected high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM) images of the Pd–Fe3O4–H in Fig. 2a show that the core–shell structure of the pristine Pd–FeOx NPs evolved into a unique porous yolk–shell structure after reductive annealing at 300 °C. Magnified HR-STEM images of Pd–Fe3O4–H in Fig. 2b, c demonstrate a lattice spacing of 0.217 nm in the core, corresponding to the (111) plane of Pd, and lattice spacings of 0.251 and 0.146 nm in the shell, corresponding to the (311) and (440) planes of Fe3O4. More interestingly, the magnified HR-STEM image in Fig. 2d shows that there are abundant voids, i.e., lattice vacancies, in the Fe3O4 shells (marked with yellow circles). To analyze the pore distribution, the pore sizes were determined by averaging pore sizes in multiple HR-STEM images (Supplementary Fig. 5). The majority of these pores on the Fe3O4 shell are micropores with an average pore size of 0.73 nm. Furthermore, electron energy loss spectroscopy (EELS) mapping of Pd–Fe3O4–H in Fig. 2e–i depicts a yolk–shell-like structure of a Pd yolk and Fe3O4 shell with numerous voids.
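As a quick cross-check, Bragg's law converts each quoted 2θ position into a d-spacing that can be compared with the lattice fringes measured in the HR-STEM images. The sketch below assumes Cu Kα radiation (λ ≈ 0.15406 nm), which is an assumption since the X-ray source is not specified in this excerpt:

```python
import math

WAVELENGTH_NM = 0.15406  # Cu K-alpha; assumed, the diffractometer source is not stated

def d_spacing(two_theta_deg, wavelength=WAVELENGTH_NM):
    """Bragg's law, n*lambda = 2*d*sin(theta), with n = 1."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta))

# Compare XRD peak positions against the STEM lattice fringes quoted in the text
print(f"Pd (111)    2-theta 40.1 deg -> d = {d_spacing(40.1):.3f} nm")
print(f"Fe3O4 (311) 2-theta 35.9 deg -> d = {d_spacing(35.9):.3f} nm (STEM: 0.251 nm)")
print(f"Fe3O4 (440) 2-theta 63.0 deg -> d = {d_spacing(63.0):.3f} nm (STEM: 0.146 nm)")
```

Under the Cu Kα assumption this gives d ≈ 0.250 nm for the (311) peak and 0.147 nm for (440), matching the STEM fringes to within about 0.001 nm, so the XRD and STEM assignments are mutually consistent.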
In contrast, for Pd–Fe3O4–A, no significant voids were detected in the Fe3O4 shells and the intact core–shell structure was retained (Supplementary Fig. 6). It is known that reducible metal oxides can be partially reduced after high-temperature treatment in H2 (ref. 28). H2 reacts with these oxides to produce water and generate oxygen vacancies in the oxide matrix. This process can be further facilitated by platinum-group metal NPs supported on those oxides through an H2 spillover process29,30. In the meantime, the crystallization of the oxide shell could promote the rearrangement of atoms, alter the distribution of oxide grains to expand the generated oxygen vacancies, and finally develop voids and cavity space in the structure.

### XAFS characterization and simulations

To understand the coordination environments of the Pd–Fe3O4 structures, X-ray absorption near-edge spectroscopy (XANES) and extended X-ray absorption fine structure (EXAFS) measurements were performed (Fig. 3, Supplementary Tables 1 and 2). The chemical states of Pd in the Pd–Fe3O4–H and Pd–Fe3O4–A samples were investigated by Pd K-edge EXAFS and XANES, with Pd foil employed as a reference (Fig. 3a, b, Supplementary Table 1). The Pd in Pd–Fe3O4–H mainly exists in the form of metallic Pd0, while in Pd–Fe3O4–A, Pd demonstrates an oxidized feature to some extent. To determine the chemical states and structures of the iron oxide shells, the Fe K-edge EXAFS (Fig. 3c), the corresponding fitting (Supplementary Table 2), the Fe K-edge XANES (Fig. 3d), and the Fe K-edge first-derivative XANES (Supplementary Fig. 7) were collected and analyzed.
Compared with the Fe3O4 and Fe2O3 references, the Fe K-edge XANES and its first derivative indicated that the oxide shell in Pd–Fe3O4–H was similar to Fe3O4, while the oxide shell in Pd–Fe3O4–A possessed a partially oxidized Fe3O4 feature (Fig. 3d, Supplementary Fig. 7).

Because the Pd–Pd and Pd–Fe bond lengths are similar, it is hard to distinguish these two bonds from Fourier-transform results alone. In this regard, wavelet-transform (WT) EXAFS, a powerful technique for this purpose, was employed to distinguish the two bonds in our samples. It can be clearly seen from Fig. 4a, b that, compared with the standard WT EXAFS images for Pd–Fe, Pd–O, Pd–Pd, and Pd foil (Supplementary Fig. 8), the Pd in Pd–Fe3O4–H remains in the metallic Pd0 state and an Fe–Pd bond emerges (Fig. 4a). In contrast, for the Pd–Fe3O4–A sample (Fig. 4b), the result demonstrates an oxidized feature with the formation of a Pd–O bond, indicating that the Pd may be slightly oxidized by air, consistent with our EXAFS and XANES results in Fig. 3.

Density functional theory (DFT) simulations combined with EXAFS curve fitting were carried out to provide more insight into the crystal structure of the iron oxide and the interactions between Pd and Fe3O4. First, a series of models, including a Pd cluster atop the surfaces of Fe2O3 and Fe3O4 and a Pd cluster in an oxygen vacancy of the Fe2O3 and Fe3O4 surfaces, were constructed and optimized by DFT (Supplementary Fig. 9), and the corresponding FEFF-calculated scattering paths were also presented (Supplementary Tables 3–5). Then, the EXAFS curve fittings on the DFT-optimized structures (Supplementary Figs. 10 and 11, Supplementary Tables 6 and 7) of both the Pd K-edge EXAFS and the Fe K-edge EXAFS were obtained.
It can be concluded from the results that the best-fitted structure for Pd–Fe3O4–H is one in which the Pd atoms intercalate into the Fe3O4 matrix (Fig. 4c; for the detailed optimization process see Supplementary Fig. 10), indicating that Pd enters the Fe3O4 lattice, occupying an oxygen vacancy, and tends to form an Fe–Pd bond with Fe in Fe3O4. This observation suggests that strong interactions exist between Pd and Fe3O4 in Pd–Fe3O4–H. In contrast, Pd–Fe3O4–A demonstrated a good match to the local geometry of Pd atoms situated on the surface of Fe3O4 (Fig. 4d; for the detailed optimization process see Supplementary Fig. 11). This result may shed some light on the formation mechanism of this unique porous yolk–shell structure. Evidently, the reaction between H2 molecules and O atoms in the Fe3O4 could generate oxygen vacancies in the structure upon evaporation of the produced water. Meanwhile, the crystallization of the oxide shell and the formation of the new Fe–Pd bond could promote the rearrangement of the oxide lattice and the mobility of Pd atoms, expanding the atom vacancies and developing cavity space in the structure.

### XPS and DRIFTS investigations

To investigate the charge transfer accompanying the formation of SMSIR, X-ray photoelectron spectroscopy (XPS) and CO diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS) were carried out (Supplementary Figs. 12–14). In the high-resolution Pd 3d XPS of the Pd–FeOx NPs, only a Pd 3d5/2 peak at 335.4 eV, assigned to metallic Pd, was found31,32. When the Pd–FeOx NPs were treated in air at 300 °C, an additional Pd 3d5/2 peak at 336.8 eV, assigned to PdO, emerged (Supplementary Fig. 12). This observation is consistent with our XAFS results that the Pd in Pd–Fe3O4–A possesses an oxidized feature to some extent.
Furthermore, a Pd 3d5/2 shoulder peak at 336.2 eV in the high-resolution Pd 3d XPS of Pd–Fe3O4–H was detected and assigned to positively charged Pd (Pdδ+)33, which originates from the intercalation of Pd into the Fe3O4 matrix, leading to the strong interactions between Pd and Fe that form the Pd–Fe bond34, in accordance with the XAFS results. Meanwhile, in the high-resolution Fe 2p XPS of Pd–Fe3O4–H (Supplementary Fig. 13), the Fe 2p1/2 and Fe 2p3/2 peaks downshifted compared with those of Pd–Fe3O4–A, further confirming that charge transfers from Pd to Fe3O4 in Pd–Fe3O4–H. The XPS results indicate the formation of strong interactions and partial electron transfer between Pd and Fe3O4. CO DRIFTS was further carried out. During the test, we found that the CO adsorption peak was weak; we therefore subtracted the pure gas-phase signal from each data set. As shown in Supplementary Fig. 14, a peak at ~2153 cm⁻¹ was detected in the CO DRIFTS of both Pd–Fe3O4–H and Pd–Fe3O4–A, which is assigned to Fe3+–CO (ref. 35). Due to the core–shell morphology of Pd–Fe3O4–A, where Pd is fully encapsulated by Fe3O4, no obvious Pd-related peak was detected in its CO DRIFTS. In contrast, a very weak peak at 2102 cm⁻¹ can be seen in the CO DRIFTS of Pd–Fe3O4–H, which is assigned to linear CO adsorption on metallic Pd (ref. 36). More interestingly, an additional peak at 2134 cm⁻¹ can be found. Compared with linear CO adsorption on metallic Pd, this blueshifted peak is assigned to linear CO adsorption on positively charged Pd (CO–Pdδ+)37.
Combined with the XPS analysis, this peak may be attributed to linear CO adsorption on Pdδ+ in the newly emerged Pd–Fe bond of Pd–Fe3O4–H.

The Pd–Fe3O4–H sample was further re-treated in air at 300 °C to obtain the Pd–Fe3O4–Re sample, which was characterized by TEM, CO DRIFTS, and XPS to determine the reversibility of the SMSIR. As shown in the TEM image of Pd–Fe3O4–Re (Supplementary Fig. 15), the sample still possesses a yolk–shell structure, but the voids are smaller than those of Pd–Fe3O4–H. The XPS of Pd–Fe3O4–Re (Supplementary Fig. 16) demonstrates three Pd states: metallic Pd, Pdδ+ in the Pd–Fe bond, and PdO. The intensity of the Pdδ+ peak is lower than that of Pd–Fe3O4–H (Supplementary Fig. 12), indicating a decrease in Pdδ+ concentration. The CO DRIFTS of Pd–Fe3O4–Re in Supplementary Fig. 17 shows that the CO–Pdδ+ intensity became weaker than that in the CO DRIFTS of Pd–Fe3O4–H in Supplementary Fig. 14, suggesting the CO DRIFTS spectral feature is an intermediate state between Pd–Fe3O4–H and Pd–Fe3O4–A. The analysis of TEM, CO DRIFTS, and XPS together suggests that the SMSIR in this work is partially reversible.

### Catalytic performance

The semi-hydrogenation of C2H2 to C2H4 is an important reaction in the industrial purification of the C2H4 stream produced from naphtha cracking. Pd-based catalysts are mostly used for this reaction, with a consensus that the selectivity is sensitive to the structure of the catalyst38. H2 molecules that are weakly adsorbed on the Pd surface to form surface-H, together with strongly adsorbed C2H2 molecules, lead to the production of C2H4, while the formation of hydride usually results in total hydrogenation to ethane (C2H6). The adsorption of H2 and C2H2 strongly depends on the structure of the Pd catalysts.
Herein, the hydrogenation of C2H2 was systematically investigated over the prepared catalysts to correlate their structural properties with the catalytic outcomes.

As shown in Fig. 5a, Pd NPs totally converted C2H2 to C2H6 without any selectivity toward C2H4 at 80 °C, while Fe3O4 NPs treated at 300 °C in a gas mixture of H2 and Ar (4 vol.% H2; Fe3O4–H) barely demonstrated any catalytic activity for the hydrogenation of C2H2. The selectivity of C2H2 to C2H4 over the Pd–Fe3O4–A was 100%, but the conversion was only 25.6%. This observation can be mainly attributed to the core–shell structure of the Pd–Fe3O4–A, which exposes only limited active Pd sites through the amorphous oxide shell, restricting the adsorption of the reactants. When the Pd–Fe3O4–H was employed in the semi-hydrogenation of C2H2 at 80 °C, the conversion was 100% and the selectivity was as high as 85.1%. The light-off curves of Pd–Fe3O4–H (Fig. 5b) demonstrate that the conversion of C2H2 increases with increasing reaction temperature, while the selectivity toward C2H4 shows the opposite trend. To comprehensively compare the catalytic performance of the Pd–Fe3O4–H catalyst with previously reported values, the turnover frequency (TOF) was calculated based on the dispersion of Pd (obtained from H2-pulse chemisorption, Supplementary Table 8). The TOF of Pd–Fe3O4–H was 6.46 s⁻¹, ~100-fold higher than those of a series of state-of-the-art single-atom catalysts at 80 °C (Supplementary Fig. 18), indicating that the Pd–Fe3O4–H demonstrated compelling catalytic performance for the semi-hydrogenation of C2H2. Stability tests of the Pd–Fe3O4–H catalyst were further carried out under both high and low conversion rates (Fig. 5c, Supplementary Fig. 19).
Both results show that the Pd–Fe3O4–H catalyst was remarkably stable in the semi-hydrogenation of C2H2 to C2H4, which likely originates from the SMSIR between Pd and Fe3O4.

The formation of hydride in Pd-based catalysts is temperature-sensitive, and it drives the total hydrogenation of C2H2 (refs. 38,39). Hence, the dispersion of Pd was determined by H2-pulse chemisorption (Supplementary Table 8) at various temperatures to examine the formation of hydride. For the reference Pd NPs (commercial 5 wt.% Pd/Al2O3), the dispersion was determined to be 7.6% at −130 °C (cold bath of isopentane and liquid N2). The corresponding particle size was calculated to be 14.8 nm. However, the H2 uptake on the Pd NPs increased significantly at 35 °C, and the estimated particle size decreased to 1.6 nm. This discrepancy can be attributed to the substantial formation of hydride on Pd NPs at the higher temperature, which interferes with the estimation of particle size40. In contrast, the Pd–Fe3O4–H sample demonstrated dispersions of 26.7% and 24.4% at −130 and 35 °C, respectively. The corresponding particle sizes were calculated to be 4.2 and 4.6 nm, in agreement with the Pd core size from the STEM investigations (Fig. 2). These observations indicate that the formation of hydride may be effectively inhibited in our Pd–Fe3O4–H catalyst with SMSIR, leading to a superior selectivity toward semi-hydrogenated products in the catalytic investigations.

### Control experiments

A series of control samples obtained at different treatment temperatures (T200, T300, and T400) were prepared (see “Methods” section) to determine the optimized condition for the formation of SMSIR. Here, T300 stands for the Pd–Fe3O4–H sample with SMSIR.
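The dispersion-to-size conversions quoted in the chemisorption discussion above are consistent with the standard spherical-particle approximation for Pd, d(nm) ≈ 1.12/D. The 1.12 nm prefactor (six times the Pd atomic volume divided by the surface area per Pd atom) is the textbook value; that this exact relation is what the paper used is an assumption on my part:

```python
def pd_diameter_nm(dispersion):
    """Spherical-particle estimate for Pd: d (nm) ~= 1.12 / D.

    The 1.12 nm constant is the commonly tabulated value for Pd
    (6 * atomic volume / surface area per atom); assumed, not stated
    in the paper.
    """
    return 1.12 / dispersion

# Dispersions from the H2-pulse chemisorption at -130 degC / 35 degC
for label, D in [("Pd/Al2O3 at -130 degC", 0.076),
                 ("Pd-Fe3O4-H at -130 degC", 0.267),
                 ("Pd-Fe3O4-H at 35 degC", 0.244)]:
    print(f"{label}: D = {D:.1%} -> d = {pd_diameter_nm(D):.1f} nm")
```

This reproduces the quoted 4.2 and 4.6 nm values and the 14.8 nm value to within rounding (1.12/0.076 ≈ 14.7 nm). The point of the comparison in the text is that the estimate for Pd–Fe3O4–H barely changes between −130 and 35 °C, which signals suppressed hydride formation.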
The structures and compositions of all catalysts were characterized by TEM, STEM (Supplementary Figs. 20 and 21), EXAFS, XANES, EXAFS curve fitting on DFT-optimized models (Supplementary Figs. 22–29, Supplementary Tables 9–11), and XRD patterns (Supplementary Fig. 30). The corresponding structures are summarized here: in the pristine core–shell Pd–FeOx sample, the core and shell were metallic Pd0 and amorphous Fe3O4, respectively. When the sample was treated in air at high temperature (Pd–Fe3O4–A), the core–shell structure was maintained with no obvious formation of voids. When the Pd–FeOx NPs were treated in H2 at different temperatures, the core and shell crystallized into Pd0 and Fe3O4, respectively. As a result, T200 demonstrated a core–shell structure with fewer voids, T300 (Pd–Fe3O4–H) exhibited a yolk–shell-like structure with numerous voids, and T400 showed a heterostructure of Fe3O4 islands on Pd NPs. In particular, it can be seen from the EXAFS and corresponding fitting results of T300, i.e., Pd–Fe3O4–H (Fig. 3c, Supplementary Table 2), that compared with the sample obtained at a lower annealing temperature (T200; Supplementary Fig. 24, Supplementary Table 10), the Fe–O coordination number decreased while the Fe–Fe coordination number remained stable in Pd–Fe3O4–H. This result further suggests that in Pd–Fe3O4–H, Pd may substitute for oxygen in the iron oxide and form a new Fe–Pd bond, indicating the formation of strong interactions between the Pd NPs and the Fe3O4 shell. In addition, Pd–FeOx NPs with different shell thicknesses (STs) were also prepared. With increasing ST, the samples were denoted as ST1 NPs, ST2 NPs (i.e., Pd–FeOx NPs), and ST3 NPs, and the NPs were further loaded onto γ-Al2O3 to obtain ST1, ST2 (i.e.
Pd–Fe3O4–H), and ST3 (for characterizations see Supplementary Figs. 31–34). To help understand the structures of all prepared samples, a schematic diagram is presented in Supplementary Figs. 35 and 36.

To highlight the role of SMSIR in tuning the conversion and selectivity of C2H2 semi-hydrogenation, both sets of control samples were employed in the C2H2 semi-hydrogenation reaction. As shown in Supplementary Figs. 37 and 38, the Pd–Fe3O4–H, i.e., T300 and ST2, demonstrates the best catalytic performance. This result further reveals that the optimized ST and treatment condition are essential for the formation of SMSIR and the promoted semi-hydrogenation of C2H2 to C2H4. Based on the structures of the catalysts, the different catalytic outcomes can be attributed to the following factors: (1) regarding the effect of annealing temperature, all samples possess a similar Pd size, indicating that the difference in catalytic performance does not originate from a difference in particle sizes (Supplementary Fig. 39). In the T200 sample, there are fewer voids in the oxide shell and T200 remains a core–shell structure, resulting in poor exposure of Pd active sites and limited activity. In the case of the T400 sample, the core–shell structure is completely destroyed, and the formation of hydrides therefore becomes favorable because of the loss of the core–shell structural confinement; (2) regarding the effect of ST, the thicker shell in ST3 makes the Pd active sites less exposed. However, when the shell becomes too thin, as in ST1, the structure cannot maintain a fully-encapsulated state and is instead more like a heterostructure with some iron oxide islands on the Pd NPs. Consequently, the Pd domains tend to form hydrides due to the lack of the core–shell structural confinement effect.

### Reaction mechanism

The reaction kinetics were further explored to understand the underlying mechanisms.
As shown in Supplementary Fig. 40, the reaction order in C2H2 is calculated to be −1 (up to 2.5% atm partial pressure), roughly in agreement with the ~−0.7 order in Monnier's work41, indicating strong adsorption of C2H2 on the Pd surface of the Pd–Fe3O4–H catalyst. There is debate regarding the H2 reaction order: in general, reported orders vary from ~0.5 (refs. 42,43) and ~1 (refs. 44,45) up to ~1.6 (ref. 46). In our work, we found the reaction order in H2 to be ~2 (up to 10% atm partial pressure). Such a positive dependence on H2 partial pressure indicates much weaker H2 adsorption than in previous studies. The temperature dependence of the Pd–Fe3O4–H sample was investigated at 1.2%/6% atm partial pressures of C2H2/H2 (Supplementary Fig. 41). The apparent activation energy was found to be ~52.7 kJ mol⁻¹, in good agreement with Monnier's 12.1 kcal mol⁻¹ (ref. 41) and Zhang's 52 kJ mol⁻¹ (ref. 45).

The inelastic neutron scattering (INS) spectra of H2 adsorption on Pd–Fe3O4–H and bulk Pd are presented in Fig. 6. The H2-sorption behavior of Pd–Fe3O4–H is totally different from that of bulk PdHx. In the bulk PdHx sample, an evident hydride signal was detected, while in Pd–Fe3O4–H the hydride signal was very weak47,48. The profile of the peak at 500 cm⁻¹ reflects the status of the hydride. Specifically, the sharp peak followed by a shoulder, as seen in bulk PdHx, arises from the particular dispersion relation of optical phonons in 3D space, which results in this distribution of phonon states. When hydride is only formed at or near the surface, the 3D network is lacking, leading to the broad bump in the spectrum of our Pd–Fe3O4–H sample.
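The measured orders and activation energy can be collected into an empirical power-law rate expression. The sketch below only encodes what the kinetics data above imply (the prefactor A is an arbitrary placeholder, not a fitted value from the paper), together with a unit check on the comparison to Monnier's activation energy:

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def rate(p_c2h2, p_h2, T, A=1.0, Ea=52.7e3):
    """Empirical rate ~ k(T) * P_C2H2^-1 * P_H2^2 with Arrhenius k(T).

    The orders (-1 in C2H2, ~2 in H2) and Ea (J/mol) come from the text;
    the prefactor A is arbitrary and only sets the overall scale.
    """
    k = A * math.exp(-Ea / (R * T))
    return k * p_h2**2 / p_c2h2

# Order check: doubling P_H2 at fixed T should quadruple the rate
r1 = rate(0.012, 0.06, 353.15)  # ~1.2%/6% atm at 80 degC
r2 = rate(0.012, 0.12, 353.15)
print(r2 / r1)  # ~4, reflecting the ~2nd-order H2 dependence

# Unit check: Monnier's 12.1 kcal/mol expressed in kJ/mol
print(round(12.1 * 4.184, 1))  # -> 50.6, indeed close to the fitted 52.7 kJ/mol
```

The negative order in C2H2 (rate falls as its partial pressure rises) is the signature of strongly adsorbed acetylene saturating the Pd surface, while the strongly positive H2 order reflects the weakened H2 adsorption discussed in the text.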
The result indicates that only surface-H formed during the reaction process, consistent with our H2-chemisorption results.

## Discussion

In this work, we reported a strategy to engineer the SMSI between Pd and Fe3O4 by using core–shell NPs as a building block through a reverse of the process that forms conventional SMSI, denoted SMSIR. With the formation of SMSIR, the core–shell Pd–FeOx NPs were restructured into a unique porous yolk–shell structured Pd–Fe3O4–H, favoring the exposure of Pd active sites. The Pd–Fe3O4–H with SMSIR demonstrated excellent catalytic performance in the semi-hydrogenation of C2H2 to C2H4, with 100% conversion, 85.1% selectivity, and a high TOF of 6.46 s−1 at a reaction temperature as low as 80 °C. XAFS investigations along with DFT simulations verified that the Pd atoms intercalate into the Fe3O4 matrix and form strong interactions. The electron transfer was probed by CO DRIFTS and XPS, suggesting that with the formation of SMSIR, electrons partially transfer from Pd to the Fe3O4 shell. The optimized ST of the Pd–FeOx NPs and the annealing temperature were found to be essential to the formation of SMSIR. Detailed mechanistic investigations indicated that the SMSIR in Pd–Fe3O4–H alleviates the strong chemisorption of H2 on Pd sites, prevents the formation of hydride, and consequently leads to a superior selectivity toward C2H4.
This work not only develops a high-performance catalyst for the semi-hydrogenation of C2H2 but also provides an approach for the construction of effective catalytic structures based on unconventional SMSI.

## Methods

### Chemicals

Pd(acac)2 (>99.99%), OAM (90%), tri-n-octylphosphine (TOP, AR.), hexane (AR.), ethanol (AR.), ferric (III) acetylacetonate (Fe(acac)3, >99.99%), and γ-Al2O3 were obtained from Sigma Aldrich Corporate (USA) and used without further purification.

### Preparation of 4 nm Pd NPs

The Pd NPs were prepared by a modified method from previous work as follows27: 70 mg of Pd(acac)2 was mixed with 15 mL of OAM in a 100 mL four-neck flask under stirring. The mixture was then heated to 80 °C at a ramping rate of 5 °C min−1 and kept for 1 h under the protection of N2. A total of 0.5 mL of TOP was added to the solution. The mixture was further heated to 250 °C at a ramping rate of 5.6 °C min−1 and kept at this temperature for another 1 h before cooling down to room temperature. Subsequently, the mixture was transferred to a 50 mL centrifuge tube, and 30 mL of ethanol was added. The Pd NPs were separated by centrifugation at 4656 × g for 10 min. The Pd NPs were then redispersed in 10 mL of hexane, and precipitated and washed twice by adding 30 mL of ethanol. Finally, the Pd NPs were dispersed in 10 mL of hexane for further use.

### Preparation of Pd–FeOx NPs

A total of 110 mg of Fe(acac)3 and 20 mL of OAM were added to a 100 mL four-neck flask. The mixture was heated to 90 °C at a ramping rate of 5 °C min−1 under N2. Subsequently, 12.5 mg of Pd NPs was added, followed by heating to 250 °C and holding there for 30 min.
Afterward, the reaction temperature was raised to 300 °C and held there for another 30 min before naturally cooling to room temperature. Then, 30 mL of ethanol was added to precipitate the Pd–FeOx NPs, which were centrifuged at 4656 × g for 10 min. The Pd–FeOx NPs were redispersed in 10 mL of hexane and washed twice with 30 mL of ethanol. Finally, the Pd–FeOx NPs were dispersed in 10 mL of hexane.

### Preparation of FeOx NPs

The synthesis of FeOx NPs is the same as the preparation of Pd–FeOx NPs, without adding Pd NPs.

### Preparation of supported NPs

We employed a commonly used inert material, γ-Al2O3, as the support to anchor the Pd–FeOx NPs (or Fe3O4 NPs, or Pd NPs). Typically, 200 mg of γ-Al2O3 was dispersed in a mixture of 15 mL of hexane and 20 mL of ethanol under sonication. A total of 20 mg of the prepared NPs in 5 mL of hexane was added into the solution dropwise under sonication. The final mixture was further sonicated for 2 h and then magnetically stirred overnight. Subsequently, the NPs/Al2O3 was separated by centrifugation at 6082 × g for 10 min, and washed twice with 20 mL of ethanol and hexane. The final sample was dried at 50 °C under vacuum overnight.

### Preparation of Pd–Fe3O4–H and Pd–Fe3O4–A

The Pd–Fe3O4–H sample was prepared as follows: the Pd–FeOx NPs/Al2O3 (ST2 NPs/Al2O3) was placed in a tube furnace, heated to 300 °C at a ramping rate of 20 °C min−1, and kept there for 1 h under an atmosphere of 4% H2 in Ar. The sample obtained was denoted Pd–Fe3O4–H. The Pd amount was determined to be 0.171 wt.% by ICP.
The Pd–Fe3O4–A sample was prepared by treating the Pd–FeOx NPs/Al2O3 (ST2 NPs/Al2O3) in air under the same reaction conditions.

### Preparation of Pd–Fe3O4–Re

The Pd–Fe3O4–Re sample was prepared by treating Pd–Fe3O4–H in air for another 1 h.

### Preparation of Fe3O4–H

The Fe3O4–H sample was prepared by the same process as Pd–Fe3O4–H, using FeOx NPs/Al2O3 instead of Pd–FeOx NPs/Al2O3.

### Preparation of control samples with different STs

Control samples with different STs were obtained by a synthesis process similar to that of the Pd–FeOx NPs, using different amounts of Pd NP seeds (37.5, 12.5, and 6.25 mg); the samples were denoted ST1 NPs, ST2 NPs, and ST3 NPs, respectively. The NPs were then deposited on the γ-Al2O3 and further treated in an atmosphere of 4% H2 in Ar for 1 h following the same process as Pd–Fe3O4–H. (ST2 is the Pd–Fe3O4–H sample in this work.)

### Preparation of control samples with different structures

The control samples with different structures were prepared following a process similar to that of Pd–Fe3O4–H at different annealing temperatures. The annealing temperatures were 200 °C, 300 °C, and 400 °C, and the corresponding samples were denoted T200, T300, and T400 (T300 is the Pd–Fe3O4–H sample in this work).

### Characterization

The powder X-ray diffraction (XRD) patterns were collected on a PANalytical X'Pert Pro MPD diffractometer using an X'Celerator RTMS detector. HAADF-STEM and HR-STEM were performed on a Nion UltraSTEM 100 (operated at 100 kV). EELS spectra were collected on a high-resolution Gatan–Enfina ER with a probe size of 1.3 Å.
TEM and high-angle annular bright-field scanning transmission electron microscopy (HAABF-STEM) images were obtained on a Hitachi HD-200 with a bright-field STEM detector operating at 200 kV.

The dispersion of the Pd was evaluated via pulse H2-chemisorption with an Altamira Instruments (AMI-300) system. Before the measurements, ~100 mg of catalyst was pretreated at 550 °C for 3 h under 50 sccm of Ar, followed by cooling down to the desired temperature (i.e., −130 and 35 °C) under the same flow. Then, pulses of 4% H2/Ar from a sample loop with a defined volume (~0.5 cc) were injected by switching a six-way valve until the eluted peak area of consecutive pulses was constant. The dispersion of Pd was calculated from the consumed volume of H2.

INS experiments were performed at the VISION beamline of the Spallation Neutron Source, Oak Ridge National Laboratory. The Pd–Fe3O4–H sample was first treated under vacuum at 600 °C for 12 h. It was then loaded in an aluminum sample holder in a helium glovebox. The sample holder was attached to a gas-loading sample stick connected to a gas panel. The blank sample was first measured at −268 °C for 3 h to collect a baseline spectrum. H2 gas was then introduced in situ at −238 °C, followed by heating of the sample to −98 °C for reaction. The system was then cooled back to −268 °C to measure the reacted spectrum. The difference spectrum (reacted minus baseline) shows the signal associated with the hydride species formed during the reaction. The CO DRIFTS results were obtained on a Nicolet 670 Fourier Transform Infrared Spectrometer with an MCT detector by the following process: each sample (~15 mg) was loaded and then pretreated at 200 °C under Ar for 30 min. Afterward, the sample was cooled down to −120 °C to conduct CO adsorption.
When the temperature reached −120 °C, the background was measured and CO adsorption was then conducted for 30 min, followed by desorption with Ar for 10 min (CO desorbed within 1 min after flowing Ar). XPS characterization was performed on a PHI VersaProbe III scanning XPS microscope using a monochromatic Al K-alpha X-ray source (1486.6 eV). XPS spectra were acquired with 200 µm/50 W/15 kV X-ray settings and dual-beam charge neutralization. All binding energies were referenced to the Al 2p peak at 74.8 eV.

### Catalytic performance tests

The hydrogenation of C2H2 was carried out in a tubular quartz reactor with a 0.25-inch diameter. In a typical run, ~15 mg of catalyst was mixed with 150 mg of 60–80 mesh quartz sand and placed in the center of the reactor. The catalyst bed was held by quartz wool at both ends and the reactor was loaded in a vertical furnace (Carbolite Gero). The catalyst was purged with He for 30 min at a flow rate of 20 sccm at room temperature prior to the reaction. Then, the reactor was heated to the desired temperature (i.e., 30–80 °C), followed by feeding the gas mixture (i.e., 0.6 sccm C2H2, 3 sccm H2 balanced with He) at a total flow rate of 50 sccm. The exit gas mixture was analyzed on-line by a ThermoStar Mass Spectrometer (Pfeiffer).

The conversion and selectivity were calculated by using Eqs.
(1) and (2):

$$\mathrm{C_2H_2\;Conversion}\,(\%) = \left(1 - \frac{X_{\mathrm{C_2H_2,out}}}{X_{\mathrm{C_2H_2,in}}}\right) \times 100\%$$
(1)

$$\mathrm{Selectivity}\,(\%) = \frac{X_{\mathrm{C_2H_4,out}}}{X_{\mathrm{C_2H_2,in}} - X_{\mathrm{C_2H_2,out}}} \times 100\%$$
(2)

where in/out refers to the concentration measured at the inlet/outlet port.

Reaction orders with respect to H2 and C2H2 were calculated by the differential method. The corresponding conversion was maintained below 20% to ensure a true kinetic regime. The apparent activation energy was calculated from the Arrhenius equation.

### DFT calculation

The density functional theory calculations were performed with the Vienna Ab Initio Simulation Package (VASP)49,50. The on-site Coulomb interaction was included with the DFT + U method of Dudarev et al.51 in VASP, using a Hubbard parameter U = 3.8 eV for the Fe atoms. The Perdew–Burke–Ernzerhof52 functional form of the generalized-gradient approximation was used for electron exchange and correlation energies. The projector augmented-wave method was used to describe the electron–core interaction49,53. A kinetic energy cutoff of 450 eV was used for the plane waves. A 3 × 2 × 1 sampling of the Brillouin zone using a Monkhorst–Pack scheme was used54.
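Stepping back to the catalytic-test analysis, Eqs. (1) and (2) together with the Arrhenius treatment can be sketched numerically. All mole fractions, rates, and temperatures below are hypothetical placeholders, chosen only to mirror the reported ~52.7 kJ mol−1 apparent activation energy:

```python
import math

def conversion(x_in, x_out):
    # Eq. (1): C2H2 conversion (%)
    return (1 - x_out / x_in) * 100.0

def selectivity(x_c2h2_in, x_c2h2_out, x_c2h4_out):
    # Eq. (2): C2H4 selectivity (%)
    return x_c2h4_out / (x_c2h2_in - x_c2h2_out) * 100.0

# hypothetical inlet/outlet mole fractions from the mass spectrometer
conv = conversion(0.012, 0.0)            # full C2H2 conversion
sel = selectivity(0.012, 0.0, 0.0102)    # 85% of converted C2H2 → C2H4

# apparent activation energy from an Arrhenius fit: ln r = ln A − Ea/(R·T)
R = 8.314                                 # J mol^-1 K^-1
Ea_true = 52.7e3                          # J mol^-1, synthetic input
temps = [303.0, 313.0, 323.0, 333.0, 343.0, 353.0]       # K
rates = [1e6 * math.exp(-Ea_true / (R * T)) for T in temps]
xs = [1.0 / T for T in temps]
ys = [math.log(r) for r in rates]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
    / sum((x - mx) ** 2 for x in xs)
Ea_fit = -slope * R / 1000.0              # kJ mol^-1

print(round(conv, 1), round(sel, 1), round(Ea_fit, 1))  # → 100.0 85.0 52.7
```

The slope of ln r versus 1/T recovers −Ea/R; with real data the rates would come from the outlet concentrations measured at each temperature.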
A vacuum layer of 15 Å was added along the z-direction for the surface slabs; the slab contains a total of four layers, with the bottom two layers fixed in their bulk positions.

### XAFS data collection and processing

Approximately 20 mg of sample was enclosed in a nylon washer of 4.953 mm inner diameter and sealed on one side with transparent "Scotch" tape. The sample was pressed by hand to form a uniform pellet, then sealed on the open side with tape. XAFS investigations were performed at beamline 10ID-B of the Advanced Photon Source at Argonne National Laboratory55. Spectra were collected at the iron K-edge (7112 eV) and palladium K-edge (24,350 eV) in transmission mode, with an iron and palladium foil as a reference for energy calibration, respectively. All spectra were collected at room temperature and ten scans were collected for each sample. All data were processed and analyzed using the Athena and Artemis programs of the IFEFFIT package56 based on FEFF 6.0. Reference foil data were aligned to the first zero-crossing of the second derivative of the normalized μ(E) data, which was subsequently calibrated to the literature E0 for the Fe and Pd K-edges. The background was removed, and the data were assigned an Rbkg value of 1.0 prior to normalizing to obtain a unit edge step. All data were initially fit with k-weightings of 1, 2, and 3 and then finalized with k3-weighting in R-space. A fit of the Pd foil and Fe foil was used to determine S02 for each sample. Structure models used to fit the data sets were obtained from the crystal structure of iron oxide and DFT calculations. Structure parameters determined by the fits include the degeneracy of the scattering path (Ndegen), the change in Reff, the mean square relative displacement of the scattering element (σ2i), and the energy shift of the photoelectron (ΔE0).
Initial fitting was conducted using a crystal structure from a crystal database. The simulated models were obtained from DFT calculations, and scattering paths of the selected scattering atoms (Fe, Pd) were generated through FEFF calculations. The WT method was adapted for a quantitative analysis of the backscattering atoms in the higher coordination shells with the EvAX code57.

## Data availability

The data that support the plots within this paper and other findings of this study are available from the corresponding author upon reasonable request. Source data are provided with this paper.

## References

1. Karim, W. et al. Catalyst support effects on hydrogen spillover. Nature 541, 68–71 (2017).

2. Shan, J., Li, M., Allard, L. F., Lee, S. & Flytzani-Stephanopoulos, M. Mild oxidation of methane to methanol or acetic acid on supported isolated rhodium catalysts. Nature 551, 605–608 (2017).

3. Dann, E. K. et al. Structural selectivity of supported Pd nanoparticles for catalytic NH3 oxidation resolved using combined operando spectroscopy. Nat. Catal. 2, 157–163 (2019).

4. O'Connor, N. J., Jonayat, A. S. M., Janik, M. J. & Senftle, T. P. Interaction trends between single metal atoms and oxide supports identified with density functional theory and statistical learning. Nat. Catal. 1, 531–539 (2018).

5. Zhao, M. et al. Metal-organic frameworks as selectivity regulators for hydrogenation reactions. Nature 539, 76–80 (2016).

6. Liu, L. et al. Generation of subnanometric platinum with high stability during transformation of a 2D zeolite into 3D. Nat. Mater. 16, 132–138 (2017).

7. Suchorski, Y. et al. The role of metal/oxide interfaces for long-range metal particle activation during CO oxidation. Nat. Mater. 17, 519–522 (2018).

8. Huang, Y.-B., Liang, J., Wang, X.-S. & Cao, R.
Multifunctional metal-organic framework catalysts: synergistic catalysis and tandem reactions. Chem. Soc. Rev. 46, 126–157 (2017).

9. Zhang, J. & Zhao, C. Development of a bimetallic Pd-Ni/HZSM-5 catalyst for the tandem limonene dehydrogenation and fatty acid deoxygenation to alkanes and arenes for use as biojet fuel. ACS Catal. 6, 4512–4525 (2016).

10. Li, S. et al. Tuning the selectivity of catalytic carbon dioxide hydrogenation over iridium/cerium oxide catalysts with a strong metal-support interaction. Angew. Chem. Int. Ed. 56, 10761–10765 (2017).

11. Zhao, E. W. et al. Strong metal-support interactions enhance the pairwise selectivity of parahydrogen addition over Ir/TiO2. ACS Catal. 6, 974–978 (2016).

12. Tauster, S. J. & Fung, S. C. Strong metal-support interactions: occurrence among the binary oxides of groups IIA–VB. J. Catal. 55, 29–35 (1978).

13. Tauster, S. J., Fung, S. C. & Garten, R. L. Strong metal-support interactions. Group 8 noble metals supported on titanium dioxide. J. Am. Chem. Soc. 100, 170–175 (1978).

14. Tauster, S. J., Fung, S. C., Baker, R. T. K. & Horsley, J. A. Strong interactions in supported-metal catalysts. Science 211, 1121–1125 (1981).

15. Zhang, J. et al. Wet-chemistry strong metal-support interactions in titania-supported Au catalysts. J. Am. Chem. Soc. 141, 2975–2983 (2019).

16. Tang, H. et al. Classical strong metal-support interactions between gold nanoparticles and titanium dioxide. Sci. Adv. 3, e1700231 (2017).

17. Tang, H. et al. Strong metal-support interactions between gold nanoparticles and nonoxides. J. Am. Chem. Soc. 138, 56–59 (2016).

18. Baker, L. R. et al.
Furfuraldehyde hydrogenation on titanium oxide-supported platinum nanoparticles studied by sum frequency generation vibrational spectroscopy: acid-base catalysis explains the molecular origin of strong metal-support interactions. J. Am. Chem. Soc. 134, 14208–14216 (2012).

19. Dong, J., Fu, Q., Jiang, Z., Mei, B. & Bao, X. Carbide-supported Au catalysts for water-gas shift reactions: a new territory for the strong metal-support interaction effect. J. Am. Chem. Soc. 140, 13808–13816 (2018).

20. Lei, H. et al. Galvanic replacement-mediated synthesis of Ni-supported Pd nanoparticles with strong metal-support interaction for methanol electro-oxidation. Small 15, 1804722 (2019).

21. Kast, P. et al. Strong metal-support interaction and alloying in Pd/ZnO catalysts for CO oxidation. Catal. Today 260, 21–31 (2016).

22. Tang, H. et al. Ultrastable hydroxyapatite/titanium-dioxide-supported gold nanocatalyst with strong metal-support interaction for carbon monoxide oxidation. Angew. Chem. Int. Ed. 55, 10606–10611 (2016).

23. Liu, S. et al. Ultrastable Au nanoparticles on titania through an encapsulation strategy under oxidative atmosphere. Nat. Commun. 10, 5790 (2019).

24. Matsubu, J. C. et al. Adsorbate-mediated strong metal-support interactions in oxide-supported Rh catalysts. Nat. Chem. 9, 120–127 (2017).

25. Macino, M. et al. Tuning of catalytic sites in Pt/TiO2 catalysts for the chemoselective hydrogenation of 3-nitrostyrene. Nat. Catal. 2, 873–881 (2019).

26. Liu, X. et al. Optimizing the structural configuration of FePt-FeOx nanoparticles at the atomic scale by tuning the post-synthetic conditions. Nano Energy 55, 441–446 (2019).

27. Liu, F. et al. Exchange-coupled fct-FePd/α-Fe nanocomposite magnets converted from Pd/Fe3O4 core/shell nanoparticles. Chem. Eur. J. 20, 15197–15202 (2014).

28. Jang, J. W. et al.
Enhancing charge carrier lifetime in metal oxide photoelectrodes through mild hydrogen treatment. Adv. Energy Mater. 7, 1701536 (2017).

29. Doudin, N. et al. Understanding heterolytic H2 cleavage and water-assisted hydrogen spillover on Fe3O4(001)-supported single palladium atoms. ACS Catal. 9, 7876–7887 (2019).

30. Hao, R., Fan, Y., Howard, M. D., Vaughan, J. C. & Zhang, B. Imaging nanobubble nucleation and hydrogen spillover during electrocatalytic water splitting. Proc. Natl Acad. Sci. USA 115, 5878–5883 (2018).

31. Wu, C. H. et al. Bimetallic synergy in cobalt–palladium nanocatalysts for CO oxidation. Nat. Catal. 2, 78–85 (2018).

32. Guo, Z., Kang, X., Zheng, X., Huang, J. & Chen, S. PdCu alloy nanoparticles supported on CeO2 nanorods: enhanced electrocatalytic activity by synergy of compressive strain, PdO and oxygen vacancy. J. Catal. 374, 101–109 (2019).

33. Kast, P. et al. CO oxidation as a test reaction for strong metal–support interaction in nanostructured Pd/FeO powder catalysts. Appl. Catal. A Gen. 502, 8–17 (2015).

34. Wu, C.-T. et al. A non-syn-gas catalytic route to methanol production. Nat. Commun. 3, 1050 (2012).

35. Benziger, J. B. & Larson, L. R. An infrared spectroscopy study of the adsorption of CO on Fe/MgO. J. Catal. 77, 550–553 (1982).

36. Felicissimo, M. P., Martyanov, O. N., Risse, T. & Freund, H.-J. Characterization of a Pd–Fe bimetallic model catalyst. Surf. Sci. 601, 2105–2116 (2007).

37. Wei, X., Ma, Z., Lu, J., Mu, X. & Hu, B. Strong metal–support interactions between palladium nanoclusters and hematite toward enhanced acetylene dicarbonylation at low temperature. N. J. Chem. 44, 1221–1227 (2020).

38. Teschner, D. et al. The roles of subsurface carbon and hydrogen in palladium-catalyzed alkyne hydrogenation. Science 320, 86–89 (2008).

39. Teschner, D. et al.
Understanding palladium hydrogenation catalysts: when the nature of the reactive molecule controls the nature of the catalyst active phase. Angew. Chem. Int. Ed. 47, 9274–9278 (2008).

40. Schneemann, A. et al. Nanostructured metal hydrides for hydrogen storage. Chem. Rev. 118, 10775–10839 (2018).

41. Zhang, Y., Diao, W., Williams, C. T. & Monnier, J. R. Selective hydrogenation of acetylene in excess ethylene using Ag- and Au–Pd/SiO2 bimetallic catalysts prepared by electroless deposition. Appl. Catal. A Gen. 469, 419–426 (2014).

42. Takht Ravanchi, M., Sahebdelfar, S. & Rahimi Fard, M. Influence of support structural characteristics on long-term performance of Pd-Ag/α-Al2O3 catalyst for tail-end acetylene selective hydrogenation. Int. J. Chem. React. Eng. 14, 1035–1046 (2016).

43. Vincent, M. J. & Gonzalez, R. D. A Langmuir–Hinshelwood model for a hydrogen transfer mechanism in the selective hydrogenation of acetylene over a Pd/γ-Al2O3 catalyst prepared by the sol–gel method. Appl. Catal. A Gen. 217, 143–156 (2001).

44. Molero, H., Bartlett, B. F. & Tysoe, W. T. The hydrogenation of acetylene catalyzed by palladium: hydrogen pressure dependence. J. Catal. 181, 49–56 (1999).

45. Pei, G. X. et al. Ag alloyed Pd single-atom catalysts for efficient selective hydrogenation of acetylene to ethylene in excess ethylene. ACS Catal. 5, 3717–3725 (2015).

46. Adúriz, H. R., Bodnariuk, P., Dennehy, M. & Gigola, C. E. Activity and selectivity of Pd/α-Al2O3 for ethyne hydrogenation in a large excess of ethene and hydrogen. Appl. Catal. 58, 227–239 (1990).

47. Tan, S., Cheng, Y. Q., Daemen, L. L. & Lutterman, D. A. Design of a facility for the in situ measurement of catalytic reaction by neutron scattering spectroscopy. Rev. Sci. Instrum. 89, 014101 (2018).

48. Polo-Garzon, F. et al.
Neutron scattering investigations of hydride species in heterogeneous catalysis. ChemSusChem 12, 93–103 (2019).

49. Kresse, G. & Furthmüller, J. Efficiency of ab-initio total energy calculations for metals and semiconductors using a plane-wave basis set. Comput. Mater. Sci. 6, 15–50 (1996).

50. Kresse, G. & Furthmüller, J. Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set. Phys. Rev. B 54, 11169–11186 (1996).

51. Dudarev, S., Botton, G., Savrasov, S., Humphreys, C. & Sutton, A. Electron-energy-loss spectra and the structural stability of nickel oxide: an LSDA+U study. Phys. Rev. B 57, 1505–1509 (1998).

52. Perdew, J. P., Burke, K. & Ernzerhof, M. Generalized gradient approximation made simple. Phys. Rev. Lett. 77, 3865–3868 (1996).

53. Blöchl, P. E. Projector augmented-wave method. Phys. Rev. B 50, 17953–17979 (1994).

54. Monkhorst, H. J. & Pack, J. D. Special points for Brillouin-zone integrations. Phys. Rev. B 13, 5188–5192 (1976).

55. Segre, C. U., Leyarovska, N. E., Chapman, L. D., Lavender, W. M., Plag, P. W., King, A. S., Kropf, A. J., Bunker, B. A., Kemner, K. M., Dutta, P., Duran, R. S. & Kaduk, J. The MRCAT insertion device beamline at the Advanced Photon Source. In Synchrotron Radiation Instrumentation: Eleventh U.S. National Conference CP521 (ed. Pianetta, P. et al.) 419–422 (American Institute of Physics, New York, 2000).

56. Ravel, B. & Newville, M. ATHENA, ARTEMIS, HEPHAESTUS: data analysis for X-ray absorption spectroscopy using IFEFFIT. J. Synchrotron Radiat. 12, 537–541 (2005).

57. Timoshenko, J., Kuzmin, A. & Purans, J. EXAFS study of hydrogen intercalation into ReO3 using the evolutionary algorithm. J. Phys. Condens. Matter 26, 055401 (2014).

## Acknowledgements

This research is sponsored by the U.S.
Department of Energy (DOE), Office of Science, Office of Basic Energy Sciences, Chemical Sciences, Geosciences, and Biosciences Division, Catalysis Science Program. The computational calculations used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility. XAFS data were collected at the Advanced Photon Source at Argonne National Laboratory on Beamline 10ID-B, supported by the Materials Research Collaborative Access Team (MRCAT). MRCAT operations are supported by the DOE and the MRCAT member institutions. This research used resources of the Advanced Photon Source, a U.S. DOE Office of Science User Facility operated for the DOE Office of Science by Argonne National Laboratory under contract no. DE-AC02-06CH11357. The neutron studies used resources at the Spallation Neutron Source, a DOE Office of Science User Facility operated by Oak Ridge National Laboratory. Part of the work, including the chemisorption, was conducted at the Center for Nanophase Materials Sciences, which is a DOE Office of Science User Facility. The Spallation Neutron Source at Oak Ridge National Laboratory is supported by the Scientific User Facilities Division, Office of Basic Energy Sciences, U.S. DOE, under contract no. DE-AC0500OR22725 with UT Battelle, LLC. Part of the TEM work was performed at the Center for Functional Nanomaterials, Brookhaven National Laboratory, which is supported by the U.S. DOE, Office of Basic Energy Science, under contract no. DE-SC0012704. P.W.W., W.S.Z., and H.M.L. were financially supported by the National Natural Science Foundation of China (21722604) and the Natural Science Foundation of Jiangsu Province (BK20190852). P.W.W. is thankful for the scholarship from the China Scholarship Council (CSC).

## Author information

### Contributions

H.Y.Z., P.W.W., S.T., and S.D. conceived the idea of the work. P.W.W. synthesized the samples and carried out the XRD analysis. S.T. performed the catalytic experiments.
J.M. and C.W.A. performed the XAFS. J.M., V.F., D.E.J., P.W.W., H.Y.Z., and C.W.A. analyzed the XAFS results and carried out the DFT simulations. J.M. performed the CO DRIFTS. P.W.W. and H.Y.Z. analyzed the CO DRIFTS results. N.L., D.S., and S.Z.Y. performed part of the TEM, HAADF-STEM, STEM, HR-STEM, and EELS mapping. P.W.W. and H.Y.Z. performed some of the TEM characterizations. N.L., P.W.W., W.S.Z., Z.H.Y., and H.Y.Z. analyzed the microscopy results. Y.Q.C. and Z.L.W. carried out the INS characterization and analyzed the results. S.T. performed the H2-chemisorption characterization. P.W.W., S.T., A.S., A.M.M., H.M.L., Z.H.Y., W.S.Z., S.D., and H.Y.Z. discussed the results. Z.H.Y., P.W.W., and H.Y.Z. analyzed the XPS results. P.W.W., S.T., and H.Y.Z. summarized the results and drafted the manuscript. All authors modified the manuscript. P.W.W., S.T., and H.Y.Z. finalized the manuscript.

### Corresponding authors

Correspondence to Wenshuai Zhu or Sheng Dai or Huiyuan Zhu.

## Ethics declarations

### Competing interests

The authors declare no competing interests.

Peer review information: Nature Communications thanks Meenakshisundaram Sankar and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Rights and permissions

Wu, P., Tan, S., Moon, J. et al. Harnessing strong metal–support interactions via a reverse route. Nat Commun 11, 3042 (2020).
https://doi.org/10.1038/s41467-020-16674-y
\section{Introduction}
The study of gravitational lensing has evolved from a novel application
of General Relativity to an astronomical tool, for the analysis of
lensing provides an additional, independent measure of the mass
distribution of galaxies and clusters of galaxies. Comparison of the
lensing mass and luminous mass, for instance, can begin to answer
questions about the nature of the dark matter in these objects.
One way to extract the information encoded in the strong lensing
behaviour of a cluster of galaxies is to produce a model of the mass
distribution which, together with one or more luminous, background
sources, produces the observed collection of lensed arcs and arclets.
Several lens inversion schemes have been developed in the last 10 years
to study lenses characterized by sets of compact lensed arcs, each set
being multiple images of a background source. The complexity of the
schemes and the strength of their predictions have increased along
with advances in observations.
Kochanek \& Narayan (1992) developed the Lens\-Clean algorithm, based
on Kochanek's earlier Ring Cycle (1989), to study lensing systems
containing extended images, particularly Einstein Rings formed from
background radio sources. The result is a discrete map of the mass
distribution on the lens plane. The LensMEM routine of Wallington et
al. (1996) introduces the maximum entropy method (MEM) into the
inversion routine. Parametric model parameters are adjusted so that the
background source needed to reproduce the observed lensed features is
the ``most probable...consistent with the data''\footnote{Narayan \&
Nityananda 1986, 128.} and therefore the most
``natural.''\footnote{Ibid., 137.}
Another family of lens inversions schemes is based upon parametric lens
models where the nature of the deflector is specified, and only the
values of the parameters are altered. These methods are based on the
fact that when multiple images of a common background source are traced
back through the parametric lens model, the pre-images must coincide on
the source plane.
Mellier, Fort, \& Kneib (1993) (hereafter M93) produce
a parametric model of the mass distribution of the core of the
galaxy-cluster MS~2137. A large pseudo-isothermal, elliptical mass
distribution is postulated, and three lensed images are traced back to
the source plane. Parameter values are selected by minimizing a
$\chi^2$ statistic measuring the distances between the three pre-images
on the source plane. Lensing in the galaxy-cluster A2218 is examined,
first with ground-based data (\cite{KN95.1}) and later with \emph{HST}
observations (\cite{KN96.1}). In the latter, parameters describing four
large galaxies and 30 smaller galaxies are fit through seven
multiply-imaged background sources. Nair (1998) generates a parametric
model reproducing the 10 lensed images observed in B$1993+503$ by
minimizing distances between pre-images on the source plane, while at
the same time demanding the lensed images show the correct parity.
Tyson, Kochanski, \& Dell'Antonio (1998) constrain a 512 parameter model
of CL~$0024+1654$ by matching close to 4000 lensed pixels in \emph{HST}
and numerically simulated images. The stunning detail is the result of
a very expensive computation.
High-resolution \emph{HST} images of MS~2137 allow Hammer et al. (1997)
(hereafter H97) to produce a more complex parametric
model of the mass distribution of the cluster core, as well as
reconstructions of the background sources. Values for the model
parameters are estimated by selecting bright knots of light observed
within multiple lensed images, a triplet of images of one source and a
pair of images of a second source, and adjusting the parameters to make
the pre-images of these knots coincide on the source plane. Then the
lensed images in their entirety are traced back to the source plane to
reconstruct the two sources.
It is critical in the lens inversion schemes of Mellier, Fort, \&
Kneib, Kneib et al., Hammer et al., and Nair that common structures or
knots of light be identified in two or more lensed arcs. The model
parameters are chosen by forcing these structures back to a common
origin on the source plane. Because of the extreme magnification that
occurs near the critical lines of the lens, faint regions of the
background which lie near the corresponding caustics may be greatly
magnified and appear as bright knots of light. These knots can be
misidentified as coming from bright structures in the background source.
Adjusting the model parameters to make coincident these faint regions
and regions of the source which do show structure may lead to
inconsistent models of the lens. This problem stems from the fact that
the appearance of a lensed arc is the product of both the structure of
the background source and the effects of magnification of the lens.
The two-stage inversion algorithm is described in detail in Section~2,
using the lensing observed in MS~2137 as a test case. In this Section,
we introduce polar moments, statistics used to quantify the positions and
shapes of lensed arcs. In Section~3, the algorithm is applied to the
galaxy-cluster MS~1455, producing a model which suggests that a single
background source is responsible for both a tangential and a radial arc.
Our model does not fully reproduce the lensing behaviour of MS~1455,
revealing both shortcomings of the model and the difficulties
associated with inverse problems. In Section~4, we discuss strengths
and weaknesses of the new inversion scheme, particularly the use of polar
moments. Finally, our conclusions are summarized in Section~5.
\section{A Two-Stage Inversion Algorithm}
We introduce a new inversion algorithm for producing a model of the
geometry of the gravitational lens, a parametric description of the
cluster mass distribution, and a reconstruction of the background
source. The key to this new approach is decoupling the effects
of lens magnification and background source structure in the appearance
of multiple lensed arcs. The algorithm is stated briefly here, and then
illustrated with the well-studied gravitational lens MS~2137.
The first stage of the inversion is to build a parametric model of the
deflector mass distribution, establish the redshifts of the lens and
source planes, and determine the position of the background source,
based only on the positions and shapes of the lensed arcs in the
observations. As discussed below, the positions and shapes are
characterized by polar moments. The model establishes a family of
``conduits'' through which the solutions to the lens equation pass. At
the same time, the models fixes the magnification throughout the lens,
so that the magnification factor at each point on the deflector plane
can be calculated and removed from the data.
The second stage of the inversion algorithm is the reconstruction of the
background source responsible for the observed arcs. Each pixel in the
observations identified as containing lensed light is traced back to the
source plane along the solution to the lens equation, and the
magnification factor is removed. This produces a collection of points
on the source plane, each point carrying the flux of the
source as it would appear in the absence of lensing. There are many
choices for interpolating this data across the source plane to build a
pixelized image of the source. We adopt a simple strategy: a uniform
pixel size, set so
that the number of pixels in the reconstructed source is comparable to
the number of data, namely the number of lensed pixels identified in the
observations.
As a test of consistency of the model, the reconstructed source is
passed back through the model to check for spurious structure within the
arcs, or spurious arcs altogether.
\subsection{Arcs in MS~2137}
To test the validity of the new lens inversion algorithm, the algorithm
is applied to the lensed objects observed in the galaxy-cluster MS~2137,
for which models have already been produced in M93 and H97.
A \emph{HST} image of the centre of MS~2137 is shown in
Figure~\ref{fig:2137obs}.\footnote{Based on observations with the
NASA/ESA Hubble Space Telescope, obtained from the data Archive at the
Space Telescope Science Institute, which is operated by the
Association of Universities for Research in Astronomy, Inc. under NASA
contract No. NAS5-26555.} The central cD galaxy and a smaller cluster
galaxy, identified as G1 and G7, respectively, in M93, have been removed
to reveal more structure of the arcs. In previous models of MS~2137 and
in the model below, the giant tangential arc A0 and two counter-images
A2 and A4 are traced to one source in the background. The radial arc AR
and its counter-image A6 are traced to a second source. As noted in
H97, a third source produces the BR-B1 pair of arcs. Our model also
predicts a fourth background source responsible for a further C1-CR pair
of arcs.
\placefigure{fig:2137obs}
To select values for the parameters in a parametric model of the lens,
the positions and shapes of numerically simulated lensed arcs are
``matched'' to the positions and shapes of the observed arcs.
Quantifying positions and shapes of lensed objects is the basis of the
inversion-via-distortion techniques applied to weak lensing inversions
(\cite{KA93.1}). In weak lensing analyses, the (flux-weighted) centroid
and quadrupole moments of a weakly distorted background galaxy are
calculated, and the galaxy is modeled as an equivalent ellipse. This
analysis cannot be applied directly in the strong lensing regime because
the arcs are not generally elliptical in shape (for example, the giant
arc A0.) Instead, we introduce ``polar moments'' of the lensed images:
The polar coordinates $(\theta,r)$ of pixels containing lensed light are
interpreted as if they are coordinates in a Cartesian coordinate system.
By summing over pixels about a chosen threshold, $I_o$, the following
statistics are tabulated:
\begin{eqnarray}
Q_o & = & N\ (\textrm{number~of~lensed~pixels}) \label{eqn:polarQo} \\[2ex]
\bar{r} & = &
Q_o^{-\!1}\sum_{I_i>I_o} r_i \label{eqn:polarrbar} \\
\bar{\theta} & = &
Q_o^{-\!1}\sum_{I_i>I_o} \theta_i \label{eqn:polartbar}\\[2ex]
Q_{rr} & = &
Q_o^{-\!1}\sum_{I_i>I_o} (r_i-\bar{r})^2 \label{eqn:polarQrr} \\
Q_{r\theta} & = &
Q_o^{-\!1}\sum_{I_i>I_o} (r_i-\bar{r})(\theta_i-\bar{\theta})
\label{eqn:polarQrt} \\
Q_{\theta\theta} & = &
Q_o^{-\!1}\sum_{I_i>I_o} (\theta_i-\bar{\theta})^2 \label{eqn:polarQtt}
\end{eqnarray}
\noindent
These moments are not flux-weighted, but depend only on the positions of
the lensed features on the image plane. The $0$th moment $Q_o$ is the
area of the image, in units of $\Omega=\Delta^2$ arcsec$^2$, where
$\Delta$ is the arcsecond pixel length of the pixels in the observations.
The radial moment $\bar{r}$ specifies the average radius of the lensed
image, for there is just as much weight in the image outside the circle
of radius $\bar{r}$ as there is inside this circle. In the case of
giant arcs, $\bar{r}$ should closely approximate the Einstein Radius of
the lens. The moment $\bar{\theta}$ specifies the average position
angle of the lensed image: there is just as much weight clockwise from
the line at position angle $\bar{\theta}$ as there is counter-clockwise
from this line. As $\theta$ simply measures position angle on the sky,
the magnitudes of $\bar{r}$ and $\bar{\theta}$ are incomparable.
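As a concrete illustration, the moments above can be tabulated directly from the pixel coordinates. The sketch below is not the authors' code; it assumes the lensed pixels above the threshold $I_o$ have already been identified and converted to polar coordinates, with $\theta$ in degrees and $r$ in arcseconds, and that the two coordinates are treated as if Cartesian:

```python
import numpy as np

def polar_moments(theta, r):
    """Polar moments of a lensed arc from the (theta_i, r_i) coordinates
    of its pixels above threshold; theta and r are treated as Cartesian."""
    theta = np.asarray(theta, dtype=float)
    r = np.asarray(r, dtype=float)
    Qo = len(r)                   # number of lensed pixels (area in units of Omega)
    r_bar = r.mean()              # average radius of the image
    t_bar = theta.mean()          # average position angle of the image
    dr, dt = r - r_bar, theta - t_bar
    return {
        "Qo": Qo,
        "r_bar": r_bar,
        "theta_bar": t_bar,
        "Qrr": np.mean(dr * dr),  # radial spread
        "Qrt": np.mean(dr * dt),  # sign flags asymmetry about the centroid
        "Qtt": np.mean(dt * dt),  # angular spread
    }
```

Note that the moments are not flux-weighted: each pixel enters with unit weight, exactly as in the sums above.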
The second polar moments of an arc characterize its shape: $Q_{rr}$
measures the radial spread of the arc, while $Q_{\theta\theta}$ measures
the angular spread. Analogous to the equivalent ellipse of weak lensing
analyses, we construct a representative region based on the values of
these components. A uniform rod of length $2L$ lying along the $x$-axis
between $-L$ and $L$ has a second moment $Q_{xx}=\frac{1}{3}L^2$. The
length can be recovered from the moment, $L=\sqrt{3Q_{xx}}$. We apply
this result to the polar moments of the lensed arcs. The region lying
between \mbox{$\bar{r}\pm\sqrt{3Q_{rr}}$} and
\mbox{$\bar{\theta}\pm\sqrt{3Q_{\theta\theta}}$} is a rectangle in the
Cartesian system, and what we refer to as an ``annular sector'' in the
polar coordinate system.
The polar moments of the A0-A2-A4 and AR-A6 arcs are listed in
Table~\ref{tbl:2137datamoments}. The quantities $r_i$ and $\theta_i$
which enter the polar moments in
Equations~\hbox{(\ref{eqn:polarrbar})-(\ref{eqn:polarQtt})} are simply
the coordinates of the centre of each pixel containing lensed light, and
are not based on the surrounding light distribution. The radial
coordinate of any ray of light which strikes a lensed pixel is therefore
accurate only to $\delta r=\Delta/2$, which amounts to $0\farcs050$ for
the $0\farcs100$ resolution of the \emph{HST} observations. An
uncertainty of $\Delta/2$ arcseconds at a radius of $\bar{r}$
corresponds to an uncertainty in position angle
\[
\delta\theta=
\frac{90\Delta}{\pi \bar{r}}\ \textrm{degrees}\ \ .
\]
The annular sectors built from the
quadrupole moments are shown in Figure~\ref{fig:2137obs}, where each
annular sector sits at the intersection of a circle of radius $\bar{r}$
and a radial line at position angle $\bar{\theta}$.
\placetable{tbl:2137datamoments}
In weak lensing analyses, the coordinate frame in which the matrix of
quadrupole moments is diagonal defines the principal axes of the
equivalent ellipse. In the polar moments scheme, the off-diagonal
moment $Q_{r\theta}$ cannot be interpreted so easily. This
is due in part to the incomparable dimensions of the quadrupole moments.
We can extract some information, nevertheless, from the sign of the
$Q_{r\theta}$ moment. Arcs symmetric about their centroid have equal
weight inside and outside the circle of radius $\bar{r}$, and equal
weight clockwise and counter-clockwise from the radial line at position
angle $\bar{\theta}$; for such arcs the off-diagonal moment $Q_{r\theta}$
in Equation~(\ref{eqn:polarQrt}) vanishes. Outside (inside) the
centroid circle, $r-\bar{r}$ is positive (negative); counter-clockwise
(clockwise) from the centroid radial line, $\theta-\bar{\theta}$ is
positive (negative). Thus the sign of $Q_{r\theta}$ reveals
any asymmetry of the image inside the annular sector. A lensed image
rotated clockwise about the $(\bar{\theta},\bar{r})$ centroid, like arc
A6 in Figure~\ref{fig:2137obs}, more heavily populates the regions where
$Q_{r\theta}>0$. Similarly, $Q_{r\theta}<0$ for
images rotated counter-clockwise with respect to the polar centroid,
like the giant arc A0.
Comparing the polar moments tabulated for the different types of lensed
arcs is revealing. In the weak lensing regime, the ratio of the major
and minor axes of the equivalent ellipse gives a measure of the
ellipticity of the distorted background galaxy. In the strong lensing
regime, we define a shape parameter $\chi$ by measuring the ratio of the
dimensions of the annular sector built from the polar quadrupole moments:
\[
\chi=
\frac{\textrm{tangential~dimension}}{\textrm{radial~dimension}} =
\frac{\pi\bar{r}\sqrt{%
Q_{\theta\theta}}}{180\sqrt{Q_{rr}}}\ \ .
\]
A lensed feature which is circular produces $\chi\sim 1$. In the case
of MS~2137, the giant tangential arc A0 shows $\chi\gg 1$, while the
radial arc AR shows $\chi<1$. The other arcs in the collection are
tangentially distorted, with $\chi>1$ in each case. The quantity $\chi$
may serve to distinguish between radial and tangential arcs, based only
on their polar moments.
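The shape parameter and the annular sector can be computed from the tabulated moments; the following is a hedged sketch with hypothetical helper names, where the factor $\pi\bar{r}/180$ converts the angular extent (in degrees) to an arc length:

```python
import math

def shape_parameter(r_bar, Qrr, Qtt):
    """chi = (tangential dimension) / (radial dimension) of the annular
    sector; chi >> 1 for tangential arcs, chi < 1 for radial arcs, and
    chi ~ 1 for a roughly circular lensed feature."""
    return math.pi * r_bar * math.sqrt(Qtt) / (180.0 * math.sqrt(Qrr))

def annular_sector(r_bar, theta_bar, Qrr, Qtt):
    """Representative region: a uniform rod with second moment Q has
    half-length L = sqrt(3*Q), applied to each polar coordinate."""
    Lr, Lt = math.sqrt(3.0 * Qrr), math.sqrt(3.0 * Qtt)
    return (r_bar - Lr, r_bar + Lr), (theta_bar - Lt, theta_bar + Lt)
```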
The small collection of polar moments defined in
Equations~\hbox{(\ref{eqn:polarQo})-(\ref{eqn:polarQtt})} characterizes
the position of the lensed arcs quite well. Four constraints can be
extracted from each image: $\bar{r}$, $\bar{\theta}$, $Q_{rr}$, and
$Q_{\theta\theta}$, or equivalently, $\bar{r}$, $\bar{\theta}$, the
shape parameter $\chi$, and one dimension, $\sqrt{3Q_{rr}}$. Including
the off-diagonal moment $Q_{r\theta}$ as a constraint is dubious,
although matching its sign between the observations and simulations
provides an additional check of the model.
\subsection{Models of MS~2137}
The arcs in MS~2137 are numerically simulated by ray-tracing through a
parametric model. We model the dark-matter halo of the cluster core with a large
pseudo-isothermal, elliptical mass distribution (PID). The radial
profile of the mass density is given by
\[
\rho_{PID}(r)=\frac{\sigma^2}{2\pi G r_c^2}\,
\frac{1}{1 + (r/r_c)^2}
\]
where $\sigma$ is related to the line-of-sight velocity dispersion of
the distribution, and $r_c$ is the core radius of the distribution. A
second, smaller mass distribution models the central cD galaxy,
following a profile proposed by Miralda-Escud\'{e} (1995):
\[
\rho_{cD}(r)=\frac{\sigma^2}{2\pi G r_c^2}\,
\frac{1+r/r_c}{( 1+r^2/r_h^2)^2}\ \ .
\]
The parameter $\sigma$ sets the mass of the cD, while the two scale
parameters $r_c$ and $r_h$ control the shape. Both density profiles are
adapted to elliptical distributions following the prescription of
Schramm (1990).
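The two radial profiles can be written down directly. The snippet below is a sketch, not the authors' code; in particular the value and units of $G$ (here kpc\,(km/s)$^2\,M_\odot^{-1}$, so that $\sigma$ is in km/s and radii in kpc) are our own assumption:

```python
import numpy as np

G = 4.30091e-6  # assumed units: kpc (km/s)^2 / M_sun

def rho_pid(r, sigma, r_c):
    """Pseudo-isothermal radial density profile for the cluster halo:
    rho = sigma^2 / (2 pi G r_c^2) * 1 / (1 + (r/r_c)^2)."""
    return sigma**2 / (2.0 * np.pi * G * r_c**2) / (1.0 + (r / r_c)**2)

def rho_cd(r, sigma, r_c, r_h):
    """Miralda-Escude (1995) profile used for the central cD galaxy:
    rho = sigma^2 / (2 pi G r_c^2) * (1 + r/r_c) / (1 + r^2/r_h^2)^2."""
    return (sigma**2 / (2.0 * np.pi * G * r_c**2)
            * (1.0 + r / r_c) / (1.0 + r**2 / r_h**2)**2)
```

Both profiles share the same central density $\sigma^2/(2\pi G r_c^2)$, and the PID density falls to half its central value at $r=r_c$.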
The orientation and eccentricity of the cD are set to match observed
values, where eccentricity is defined as $\sqrt{1-(b/a)^2\,}$, with $a$
and $b$ are the semi-major and semi-minor axes of the ellipse,
respectively. The scale parameters of the cD are fit to the observed
light profile through an iterative parameter estimation scheme. The
mass parameter $\sigma$, essentially the mass-to-light ratio, is a free
parameter. The centre, orientation, and eccentricity of the PID are
allowed to vary slightly from the central cD. The core radius $r_c$ of
the PID is also a free parameter. The mass of the PID $\sigma$ is set
to reproduce the observed line-of-sight velocity dispersion, following
the description of Binney \& Tremaine (1987) based on the Jeans'
Equation. The redshift of the deflector plane is set to the observed
value of $z_d=0.313$; the redshift $z_s$ of the source plane is
predicted by the model.
The free parameters are chosen by simulating the A0-A2-A4 triplet of
arcs originating from a common source, S1. As only the positions of the
lensed arcs are important at this stage, the lensed appearance of a
uniform, elliptical background source is simulated and the polar moments
of the resulting arcs are tabulated. The interactive simulation program
immediately updates the lensing behaviour as the parameters are
adjusted: coordinates in multiples of $\Delta$, mass $\sigma$ in steps
of 25~km/s, redshifts in steps of 0.025, and each source's semi-major
axis in steps of $0\farcs05$, orientation in steps of $5\arcdeg$, and
eccentricity in steps of $0.02$. The position of the source is
determined primarily by simultaneously matching the centroids
$(\bar{\theta},\bar{r})$ of the three arcs. Values for the size,
eccentricity, and orientation of the background ellipse are set
primarily by matching the quadrupole polar moments of the three arcs.
The values chosen for the parameters are contained in
Table~\ref{tbl:2137paramtbl}. The values are comparable to those found
in M93 and H97, also listed in the Table. The simulation based on these
model parameters is shown in Figure~\ref{fig:2137pidcD2ELL}. Annular
sectors around the simulated arcs are built from the moments listed in
Table~\ref{tbl:2137modelmoments}. The polar moments of the simulated
arcs closely reproduce the observed moments of the A0-A2-A4 arcs. In
particular, the shape parameter $\chi=11.4$ identifies arc A0 as a giant
tangential arc, $\chi=0.3$ identifies AR as a radial arc, and the sign
of the moment $Q_{r\theta}$ correctly characterizes the
asymmetry of each arc.
\placetable{tbl:2137paramtbl}
\placefigure{fig:2137pidcD2ELL}
\placetable{tbl:2137modelmoments}
To further justify the choice of parameter values, a second source S2 is
added at the same redshift as S1 (following H97) without changing the
deflector mass distribution, to test the ability of the model to
reproduce the AR-A6 pair of arclets. The second source and its two
lensed images are included in Figure~\ref{fig:2137pidcD2ELL}. The close
match between the polar moments of the observed and simulated AR-A6 arcs
in Tables~\ref{tbl:2137datamoments} and~\ref{tbl:2137modelmoments}
supports our selection of model parameters.
The two-mass, one-source model of MS~2137 is described by 20 parameters
listed in Table~\ref{tbl:2137paramtbl}. The redshift of the deflector
plane, and the centre, orientation, eccentricity, and scale lengths of
the cD galaxy are deduced from observations, leaving 13 free parameters.
The three arcs A0, A2, and A4 provide 12 constraints on the model,
leaving a one parameter family of models. Because the constraints do
not directly measure the model parameters, there is not a particular
parameter that can be identified as the free parameter. Instead, some
mixture of parameters varies in the family of models, for instance the
product of distance and mass which enters the lens equation. The
addition of a second background source S2, assumed to lie at the same
redshift as S1, requires only 5 more parameters, while producing 8 data
from the two new lensed features. Therefore, more information can be
extracted from the model than is needed to produce it, and the results
become predictive. The geometry of the model provides a ready
explanation for lensed objects in the simulation which are independent
of those used to constrain the model in the first place.
\subsection{Reconstruction of the Sources}
Uniform elliptical disks are used to model the background sources in the
first stage of the lens inversion. With the lens geometry and deflector
mass distribution established, these idealized sources can be replaced
with distributions reconstructed from the data itself. The model for
the lens specifies the origin on the source plane of each lensed pixel
in the observations. Furthermore, the model parameters determine the
magnification of the background at any point of the lens. To
reconstruct the appearance of the background source(s), each pixel
containing lensed light in the observations is traced back to the source
plane along the solution to the lens equation and the magnification
factor is removed from the data. This leaves a point on the source
plane carrying the surface brightness of the source as it would appear
in the absence of lensing. A coherent picture of the background source
is constructed by interpolating between these points.
Data are uniformly spaced in the observations and cover the image plane
exactly once. Because of the lensing distortions, data on the source
plane do not inherit this simple structure, but cluster about the
caustics of the lens. We choose a uniform pixel size, $\Delta_s$, to
reconstruct a pixelized rendering of the background source. This is the
simplest choice, and can surely be improved by exploiting the
concentrations of data.
Several choices are available to set the size of the source pixels.
Source pixels representing the same physical size on the source plane as
the pixels in the observations represent on the deflector plane cannot
contiguously cover the background plane, for the background is
physically larger than the foreground. By choosing source pixels with
the same angular size as the pixels in the observations, the background
plane may be covered, but these source pixels extrapolate the
information in the data over a much larger region. We adopt a strategy
which is a compromise between these two choices: The size of the source
pixels is set so that the total number of pixels in the reconstructed
source is comparable to the number of data. The total number of pixels
is used, not just those containing a signal, because blank (dark)
regions on the source plane may be just as important as regions
containing light.
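One simple way to realize this compromise is to size the pixels from the bounding box of the backward-traced points; this is a sketch under our own assumptions, since the text does not specify how the source-plane extent is bounded:

```python
import math

def source_pixel_size(x, y, n_data):
    """Uniform pixel length Delta_s such that a square grid covering the
    bounding box of the backward-traced points (x, y) holds roughly
    n_data pixels in total, counting blank pixels as well as lit ones."""
    width = max(x) - min(x)
    height = max(y) - min(y)
    return math.sqrt(width * height / n_data)
```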
The source plane is divided into a uniform grid, and a flux is assigned
to each source pixel following these steps:
\begin{enumerate}
\item Source pixels which are not pierced by any backwards-traced rays
are assigned a value of NaN, and appear in the result as pixels of
zero flux.
\item A source pixel pierced by a single backwards-traced ray carrying
an observed signal $I$ from a point where the magnification is $\mu$
is assigned the de-magnified flux, $S=\mu^{-\!1}I$.
\item If $k$ backwards-traced rays, each carrying signal $I_i$, error
$\sigma_i$, and magnification $\mu_i$, pierce the same source pixel,
the pixel is assigned a flux $S$ which minimizes the error
\[
\phi=\sum_{i=1}^k
\left( \frac{I_i - |\mu_i|S}{\sigma_i} \right)^2\ \ .
\]
\end{enumerate}
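Minimizing $\phi$ with respect to $S$ is a one-parameter weighted least-squares problem with the closed-form solution $S=\sum_i w_i I_i|\mu_i| \big/ \sum_i w_i \mu_i^2$, where $w_i=\sigma_i^{-2}$. A minimal sketch (hypothetical function name, not the authors' code):

```python
import numpy as np

def pixel_flux(I, mu, sigma):
    """Flux S minimizing phi = sum_i ((I_i - |mu_i| S) / sigma_i)^2.
    Setting d(phi)/dS = 0 yields a weighted least-squares estimate;
    a single ray (k = 1) reduces to the de-magnified flux I / |mu|."""
    I = np.asarray(I, dtype=float)
    m = np.abs(np.asarray(mu, dtype=float))
    w = np.asarray(sigma, dtype=float) ** -2
    return float(np.sum(w * I * m) / np.sum(w * m * m))
```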
In the observations of MS~2137, we identify 2800 pixels in arcs A0-A2-A4
coming from source S1, and 550 pixels in arcs AR-A6 coming from source
S2, for a total of 3350 pixels containing lensed light. The two sources
reconstructed from the 3350 pixels, following the strategy outlined
above, are shown in Figure~\ref{fig:2137S1andS2}. The reconstruction of
source S1 closely coincides with the position of the elliptical disk used to
simulate the arcs in Figure~\ref{fig:2137pidcD2ELL}. The source lies
on the astroid-shaped tangential caustic, producing the giant tangential
arc A0. The reconstruction of source S2 coincides with the second
elliptical disk added without altering the deflector mass parameters to
check the consistency of the model. The second source crosses the
radial caustic, producing the radial arc AR.
Figure~\ref{fig:2137S1andS2} contains 3400 pixels with length
$\Delta_s=0\farcs092$, slightly smaller than the $0\farcs100$
resolution of the \emph{HST} data. The majority of the pixels in the
reconstruction contain no signal, indicating the absence of additional
background light on this source plane which could form additional arcs
in the observations. This reconstruction is comparable to that shown in
H97.
\placefigure{fig:2137S1andS2}
To better explore their internal structure, the two sources are
reconstructed separately in Figure~\ref{fig:2137S1orS2}. In the
reconstruction of source S2, there are 450 pixels, with length
$\Delta_s=0\farcs074$, comparable to the 550 data drawn from arcs AR and
A6. There are only 513 pixels with length $\Delta_s=0\farcs074$ in
the reconstruction of source S1, despite the 2800 data coming from the
A0-A2-A4 triplet of arcs. The reason for this discrepancy is that arcs
A2 and A4 barely support a reconstruction at this resolution as they are
only weak distortions of the data sampled at $0\farcs100$. The data
arriving from the giant arc A0 samples the source plane at a much higher
density because of the great distortion that occurs.
\placefigure{fig:2137S1orS2}
The reconstruction of source S1 in Figure~\ref{fig:2137S1orS2}~(right)
shows a curious faint stripe which follows the caustic of the lens. It
is inconceivable that the source truly has a dim region so perfectly
aligned with the caustic, so the stripe must be a result of the modeling
process. The numerical simulation of the lens in
Figure~\ref{fig:2137pidcD2ELL} shows a peak in the brightness of arc A0
where the image crosses the tangential critical line and the
magnification diverges. The observations of arc A0 in
Figure~\ref{fig:2137obs} show the arc is very nearly uniform in
brightness all the way along its length, however. The lack of a bright
peak in the data along the critical line results in a dim reconstruction
along the caustic. The stripe is also a result of the finite resolution
of the simulation. The magnification is infinite along the critical
line, and must be approximated. We impose an upper limit of $100\times$
magnification in the reconstruction routine. The approximation affects
any pixel in observations through which the critical line passes. By
running the simulation at twice the resolution, the chain of affected
pixels remains, but with only one half the width. The effect remains at
all finite resolutions, with the stripe becoming narrower and narrower.
The overall change in brightness between the parts of the source inside
and outside the caustic is due to the subtraction of galaxy G7 from the
data.
\subsection{Reconstruction of MS~2137}
As a final test of the consistency of the model, the reconstructed
sources shown in Figure~\ref{fig:2137S1andS2} are passed back through
the parametric model. The result is shown in
Figure~\ref{fig:2137relensed}, where gaussian noise matching that in the
data has been added. It is impossible to compare this Figure with the
observations at a pixel-by-pixel level without a complete model of the
sky and a thorough understanding of the noise. It is apparent, though,
that the prediction is consistent with the observations. We note in
particular (i) the reproduction of brighter knots in arcs A2, A4, and
A6, (ii) the twist in the radial arc AR, (iii) the double-ring structure
in the giant arc A0, and (iv) the absence of any extraneous lensed
objects.
\placefigure{fig:2137relensed}
In an ideal model, the re-lensed source perfectly reproduces the
observations. Imperfections in the model are doubly amplified, though,
once in each direction through the lens. The success of the results
provides compelling evidence that gravitational lensing, at the level
prescribed by this PID model, is actually occurring in the
galaxy-cluster MS~2137.
More importantly, the results show that the two-stage inversion
algorithm used to reconstruct MS~2137 is consistent with
other algorithms that exist today. Fitting the lens with polar moments
to decouple the effects of magnification and source structure in the
observed arcs appears to be viable, at least in the cases of relatively
simple mass distributions with a well-defined centre-of-lensing. With
this confidence, we turn to the collection of features attributed to
gravitational lensing visible in the galaxy-cluster MS~1455.
\section{A Radial Arc in MS~1455}
The galaxy-cluster MS~1455+22, observed as part of the Einstein Medium
Sensitivity Survey of X-ray clusters, lies at redshift $z=0.257$. A candidate
gravitationally lensed tangential arc was identified by LeF\`{e}vre et
al. (1994). This prompted subsequent observations in
May, 1995 at the Canada-France-Hawaii Telescope (CFHT) as part of a weak
lensing survey of the cluster over a wide field of view. The core of
the cluster appears in each of 12 overlapping 20-minute exposures.
These frames are aligned and added with IRAF routines, resulting in an
equivalent 4-hour exposure of the cluster core. A hint of a structure
is visible in the envelope of the luminous central cD galaxy. When the
cD galaxy is digitally removed, a collection of objects surrounding the
core is revealed, as shown in Figure~\ref{fig:1455datamoments}. These
include several small cluster galaxies and an irregular radial feature
labeled A1 in Figure~\ref{fig:1455datamoments}, which we propose is a
radial arc. The previously identified tangential arc is labeled A2.
During the initial modeling phase of this gravitational lens, a third
arc appeared in the simulations which closely matched the position of a
third diffuse object in the observations. This arc, labelled A3 in
Figure~\ref{fig:1455datamoments}, is incorporated into the modeling
strategy.
\placefigure{fig:1455datamoments}
The lensed features in MS~1455 are similar to those seen in MS~2137.
Both clusters contain a radial arc and a large tangential arc. The arcs
in MS~2137 appear in two sets, the A0-A2-A4 triplet due to source
S1, and the AR-A6 pair due to source S2. Our analysis suggests
that the three arcs in MS~1455 are images of the same background source.
Radial and tangential arcs are produced across the two different types
of critical lines, which implies the single background source lies under
both the tangential and radial caustic. As the caustics cross at only a
limited number of points on the source plane, this greatly constrains
the geometry of the lens.
From the position of 716 pixels in the observations containing lensed
light, polar moments are tabulated for the three arcs, listed in
Table~\ref{tbl:1455datamoments}. Pixels in the CFHT data are
$\Delta=0\farcs207$ in length, producing an uncertainty of about
$0\farcs1$ in the radial positions. The shape parameter $\chi$ again
distinguishes the radial arc A1 ($\chi=0.4<1$) from the tangential
arclet A2 ($\chi=3.1>1$). The proximity of the radial arc to the
centre-of-lensing produces a wide angular width
$Q_{\theta\theta}$. The annular sectors built from the
moments are included in Figure~\ref{fig:1455datamoments}.
\placetable{tbl:1455datamoments}
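The polar statistics used throughout this analysis can be sketched in a few lines. The paper's exact conventions are defined in an earlier section and are not reproduced here; the flux-weighted means and variances below, and the form of $\chi$ as a ratio of tangential to radial extent, are plausible assumptions of this sketch, chosen only so that $\chi>1$ flags tangential arcs and $\chi<1$ radial ones.

```python
import numpy as np

def polar_moments(x, y, flux, centre=(0.0, 0.0)):
    """Flux-weighted polar moments of arc pixels about a centre-of-lensing.

    These are plausible definitions (means and variances in r and theta),
    not necessarily the paper's exact conventions.  Assumes the arc does
    not straddle the +/-pi branch cut of arctan2.
    """
    dx, dy = x - centre[0], y - centre[1]
    r = np.hypot(dx, dy)
    theta = np.arctan2(dy, dx)
    w = flux / flux.sum()
    r_bar = np.sum(w * r)
    theta_bar = np.sum(w * theta)
    q_rr = np.sum(w * (r - r_bar) ** 2)
    q_tt = np.sum(w * (theta - theta_bar) ** 2)
    return r_bar, theta_bar, q_rr, q_tt

def shape_parameter(r_bar, q_rr, q_tt):
    """Assumed form of chi: tangential extent r_bar*sqrt(q_tt) over
    radial extent sqrt(q_rr); > 1 for tangential arcs, < 1 for radial."""
    return r_bar * np.sqrt(q_tt / q_rr)
```

A thin arc along a circle of nearly constant radius yields $\chi\gg1$, while a spoke of pixels at nearly constant position angle yields $\chi\ll1$, matching the classification of arcs A2 and A1 above.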
\subsection{Parametric Models of MS~1455}
A simple model of MS~1455 consists of a large mass distribution to model
the halo of the cluster core, together with a smaller cD distribution at
the centre, and a single background source. To begin to answer more
astrophysical questions, we build two models with two different halo
profiles. The first model contains a PID mass, while the second model
uses a singular mass density profile proposed by Navarro, Frenk, \& White
(1995):
\[
\rho_{NFW}(r)=\frac{\sigma^2}{2\pi G r_s^2}\,
\frac{1}{(r/r_s)( 1 + r/r_s)^2}\ \ .
\]
The NFW density diverges as $r^{-\!1}$ at the origin and falls off as
$r^{-3}$ for $r\gg r_s$. The profile has a well-founded basis in the
results of large numerical $N$-body simulations of cold dark matter.
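The NFW profile above is simple to evaluate numerically. The sketch below uses the text's parametrization; the value of $G$ in astrophysical units ($\mathrm{kpc}\,(\mathrm{km/s})^2/M_\odot$) and the unit conventions are assumptions of this sketch rather than something taken from the paper.

```python
import numpy as np

G = 4.30091e-6  # kpc (km/s)^2 / M_sun; the unit choice is this sketch's assumption

def rho_nfw(r, sigma, r_s):
    """NFW density as written in the text:
    rho(r) = sigma^2 / (2 pi G r_s^2) * 1 / ((r/r_s) (1 + r/r_s)^2),
    with sigma in km/s and r, r_s in kpc, giving rho in M_sun/kpc^3."""
    x = r / r_s
    return sigma**2 / (2.0 * np.pi * G * r_s**2) / (x * (1.0 + x) ** 2)
```

The limiting behaviour quoted above is easy to confirm numerically: halving $r$ deep inside $r_s$ doubles the density, while doubling $r$ far outside $r_s$ reduces it by a factor of eight.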
In both the PID+cD and NFW+cD models, we allow the dark matter halo to
wander slightly from the cD galaxy in position, orientation, and
eccentricity. The redshift of the lens plane is observed to be
$z_d=0.257$, while the redshift $z_s$ of the source is a free parameter.
The mass parameter $\sigma$ of the dark matter halo is set to reproduce
the observed line-of-sight velocity dispersion of the several dozen cluster
galaxies (\cite{CA96.1}). The scale parameters of the cD are set to
match the profile of the surface brightness. The values of the model
parameters we choose are listed in Table~\ref{tbl:1455paramtbl}. The
significant difference in source redshift $z_s$ between the two models,
0.825 for the PID+cD model but only 0.620 for the NFW+cD model, may
serve to distinguish between the two if future observations are made.
Of the 20 parameters in the model, 8 are determined from the
observations, leaving 12 free parameters. Three lensed arcs producing
12 constraints should be sufficient to constrain the parametric models
presented here.
\placetable{tbl:1455paramtbl}
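The bookkeeping in the preceding paragraph can be checked mechanically; all counts below are taken from the text.

```python
# Parameter counting for the MS 1455 models, values as quoted in the text.
n_params_total = 20        # all parameters in the PID+cD / NFW+cD models
n_fixed_by_obs = 8         # set directly from observations
n_free = n_params_total - n_fixed_by_obs

# Each lensed arc supplies four polar statistics:
# r_bar, theta_bar, Q_rr, Q_thetatheta.
n_constraints = 3 * 4      # three arcs: A1, A2, A3

assert n_free == 12 and n_constraints >= n_free
```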
Figure~\ref{fig:1455pidandnfw} shows the numerical simulations of the
PID+cD and NFW+cD lenses. Note how the single background source S1,
represented by a dashed ellipse, lies under both the astroid caustic
(forming arc A3) and the ovoid radial caustic (forming arc A1). The
original tangential arclet A2 is the even parity image that forms
outside the network of critical lines.
\placefigure{fig:1455pidandnfw}
The polar moments of the arcs are listed in
Table~\ref{tbl:1455modelmoments}. In both models of the lens, the
position of the tangential arclet A2 and the position angle of the
radial arc A1 are more carefully matched to the observations. The polar
moments of the third image, arc A3, are treated more as a consistency
check. Because of the extreme distortion that occurs near the critical
lines, the appearance of arc A3 in the simulations is very sensitive to
small changes in the parameters, particularly in the position and shape
of the background source. The magnification across the critical lines
produces problems in the reconstruction discussed below.
\placetable{tbl:1455modelmoments}
\subsection{Source Reconstruction}
With the geometry of the lens fixed by the positions of the lensed arcs,
the magnification of the background source is determined. Some 716
pixels in the observations are flagged as containing lensed light. These
data are traced back to the source plane, and the magnification factor
is removed.
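This second stage of the inversion can be sketched as follows. Here `deflection` and `magnification` are stand-ins for the parametric lens model (their names and signatures are assumptions of this sketch), and the magnification cap follows the $100\times$ limit imposed earlier in the MS~2137 reconstruction.

```python
def reconstruct_source(pixels, deflection, magnification, mu_cap=100.0):
    """Trace flagged image-plane pixels back through the lens equation
    beta = theta - alpha(theta) and remove the lens magnification.
    Surface brightness is conserved by lensing, so each pixel's flux is
    divided by |mu|, capped to tame the divergence at critical lines."""
    samples = []
    for x, y, flux in pixels:
        ax, ay = deflection(x, y)
        beta_x, beta_y = x - ax, y - ay             # lens equation
        mu = min(abs(magnification(x, y)), mu_cap)
        samples.append((beta_x, beta_y, flux / mu))
    return samples
```

Binning the returned source-plane samples onto a grid then gives images like those in Figure~\ref{fig:1455S1}.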
The source reconstructed behind the PID+cD lens is shown in
Figure~\ref{fig:1455S1}~(left). There are 780 pixels in the Figure with
length $\Delta_s=0\farcs145$, smaller than the $0\farcs207$ resolution
of the CFHT data. Figure~\ref{fig:1455S1}~(right) shows the source
reconstructed behind the NFW+cD model from 792 pixels with length
$\Delta_s=0\farcs132$.
\placefigure{fig:1455S1}
Both reconstructions show the majority of the signal comes from a
generally elliptical object with a brighter central bulge, very likely a
spiral galaxy. This galaxy closely coincides with the position of the
uniform elliptical disk used to simulate lensed arcs. The data
contained in the third lensed arc A3 are wholly responsible for the
faint limb of the source which follows the astroid caustic to the
upper-right. The flux is quite small because of the great magnification
the signal experiences as it passes through the lens. It is likely the
source is surrounded by a low surface brightness extension, but only a
small portion of this can be seen through the high-magnification parts
of the lens.
The collections of isolated pixels in the lower half of the plots are
due to shortcomings of the model. We remove these spurious pixels by
comparing each datum to its immediate neighbourhood, and discarding data
straying farther than 3 standard deviations from the local average. The
implications of this ``cleaning'' process are addressed below. The
``cleaned'' sources are shown in Figure~\ref{fig:1455S1clean}. The
spurious pixels are gone, but the faint limb responsible for the
tangential arc A3, a feature supported by lensing within the context of
the model, remains.
\placefigure{fig:1455S1clean}
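The cleaning step can be sketched as a local sigma-clip. The $3\sigma$ threshold comes from the text; the $3\times3$ neighbourhood and the choice to zero (rather than interpolate over) clipped pixels are assumptions of this sketch.

```python
import numpy as np

def clean_source(img, n_sigma=3.0):
    """Zero any interior pixel lying more than n_sigma local standard
    deviations from the mean of its eight immediate neighbours.
    Statistics are computed from the original image, so a clipped
    pixel cannot trigger the removal of its neighbours in cascade."""
    out = img.copy()
    ny, nx = img.shape
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            block = img[j - 1:j + 2, i - 1:i + 2].ravel()
            neigh = np.delete(block, 4)        # drop the centre pixel
            mu, sd = neigh.mean(), neigh.std()
            if sd > 0 and abs(img[j, i] - mu) > n_sigma * sd:
                out[j, i] = 0.0
    return out
```

In the text, each discarded datum is additionally flagged with the arc it came from, which is how the cleaning was later traced to the inner end of the radial arc.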
\subsection{Reconstruction of MS~1455}
According to our model of the lens and the source recovered from data,
the lensed features in the observations should be reproduced by passing
the reconstructed source back through the lens model. The results of
this consistency check are shown in Figure~\ref{fig:1455Mxmodels}. The
simulations have been convolved against a Moffat point-spread function
with $\beta=2.5$ and radius $R=0\farcs 414$ (two pixels in the
observations) corresponding to a FWHM of $0\farcs 828$ (four pixels),
recreating the seeing at the CFHT at the time of the observations.
Gaussian noise matched to empty regions in the data has been added.
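The seeing model is easy to reproduce. The sketch below builds the quoted Moffat profile with $\beta=2.5$ and a four-pixel FWHM; the kernel size and normalization are choices of this sketch, and the convolution itself is left to any standard routine (e.g. an FFT-based one).

```python
import numpy as np

def moffat_kernel(beta=2.5, fwhm_pix=4.0, size=9):
    """Normalized Moffat PSF, I(r) proportional to (1 + (r/alpha)^2)^(-beta),
    with alpha fixed by the FWHM: FWHM = 2 alpha sqrt(2**(1/beta) - 1)."""
    alpha = fwhm_pix / (2.0 * np.sqrt(2.0 ** (1.0 / beta) - 1.0))
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = (1.0 + (xx**2 + yy**2) / alpha**2) ** (-beta)
    return psf / psf.sum()
```

By construction the profile falls to half its peak two pixels from the centre, i.e. at the quoted radius $R$.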
There are two inconsistencies between the observations and the relensed,
reconstructed source: The radial arc does not extend far enough towards
the centre of the lens, and the third arc is much more extended than in
the observations.
\placefigure{fig:1455Mxmodels}
The first flaw can be traced directly to the data ``cleaning'' step. As
each spurious datum is discarded, its origin in one of the arcs in the
observations is flagged. These flagged data are concentrated entirely
in the inner end of the radial arc. By discarding data on the source
plane which originates in only one of three arcs, the reconstructed
source fails to reproduce a portion of the radial arc without affecting
the appearance of the two other arcs.
This behaviour shows that the models we propose for MS~1455 do not
adequately model the mass distribution near the centre of the lens.
Inconsistencies near the centre of the lens are not unexpected, however.
Gravitational lensing does not uniformly measure the mass distribution
of the deflector, but only the cumulative projected mass distribution.
The lensing behaviour far from centre-of-lensing, but still within the
strong regime, is insensitive to perturbations in the mass distribution
at the lens centre. As demonstrated by \hbox{Miralda-Escud\'{e}} (1995)
in the case of MS~2137, these same perturbations can
remove the radial arc from the lens altogether, because of its proximity
to the centre of the lens. In MS~1455, the other small galaxies in the
vicinity of the cluster core and the radial arc undoubtedly play a role
in the appearance of the radial arc. Refinements to the position and
shape of the radial arc can be made by including more masses near the
centre of the cluster. However, the small number of statistics we
extract from the three lensed arcs does not support the inclusion of
further masses.
Upon relensing the reconstructed source, the third arc A3
appears greatly extended. This can be traced to the effects of
seeing in the data. Ideally, the data in any one arc can be traced back
to the source plane, combined to reconstruct the source, and then traced
forward through all three arcs. The ability of the data from one arc to
reproduce three arcs is a measure of the success of the model. However,
when the data contained in arc A3 alone is used to reconstruct the
source, only the faint limb of the source following the astroid
tangential caustic, and none of the elliptical bulge, is reconstructed.
When this reconstructed source is passed back through the lens, it forms
a large tangential arc following the critical line of the lens, the
light having originated from the corresponding tangential caustic.
This behaviour indicates that the position of lensed light forming arc
A3 in the observations is not due entirely to gravitational lensing.
Instead, as the simulations in Figure~\ref{fig:1455pidandnfw} suggest, a
bright, very compact arc forms at the location of arc A3. The image is
smeared due to the effects of seeing. This broadens the image in the
observations, so that lensing is not wholly responsible for the position
of lensed light in the observations. To reproduce the enlarged arc, the
inversion algorithm must reconstruct a larger background source, which
is then amplified into a larger relensed arc. This suggests that the
data should be deconvolved with a suitable point spread function before
applying the inversion algorithm. Space-based observations of MS~1455
may resolve this problem.
\section{Discussion}
The two-stage inversion algorithm described here decouples the effects
of lens magnification and intrinsic background source structure on the
appearance of the lensed images. This allows for an independent
reconstruction of the background sources. That is, the natural
appearance of the background sources is not used to determine the
parametric model. This is particularly important in the case of
MS~1455, where the faint limb which follows the tangential caustic does
not coincide with the centre of the bright source assumed to be
responsible for all three arcs. Forcing the pre-images of the three
arcs in MS~1455 to coincide on the source plane leads to inconsistent
lens reconstructions.
The polar moments approach was envisioned to describe galaxy-clusters
with a single centre-of-lensing. In some clusters, such as A2390
(\cite{PI96.1}), there appears to be more than one centre-of-lensing.
Because of the rapid decrease in deflection with distance from the mass
centre, however, there may be arcs formed primarily about one or the
other centres-of-lensing, which could serve as the origin of the polar
analyses. Furthermore, relensing the reconstructed sources will test
the consistency of the bimodal mass distribution. Some suggestion of
this occurs at the centre of MS~1455, where the relensed, reconstructed
radial arc suggests that additional masses are needed. In the case of
A2390, a parametric lens model built around two centres-of-lensing
should still predict the existence of the ``long straight arc'' formed
by light squeezed between the two centres, even if the arc is not used
to fit the parameters. Frye \& Broadhurst (1997) suggest that the
``long straight arc'' is in fact a superposition of two arcs, so this
scenario might be moot.
Further analysis of polar moments is needed, but already they have
several favourable characteristics. Only four statistics $\bar{r}$,
$\bar{\theta}$, $Q_{rr}$, and $Q_{\theta\theta}$ are required to quite
adequately characterize the position and shape of a wide range of arcs.
The shape parameter $\chi$ is a quantitative measure which can
distinguish between tangential and radial arcs. Finally, polar moments
may act as a bridge between the strong and weak lensing regimes, for the
dimensions \mbox{$\bar{r}\pm\sqrt{3Q_{rr}}$} and
\mbox{$\bar{\theta}\pm\sqrt{3Q_{\theta\theta}}$} of the annular sector
built from the quadrupole moments smoothly extend into the axes of the
equivalent ellipse which characterizes the shape of a weakly distorted
background galaxy.
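The sector dimensions quoted above follow from the fact that a uniform span of half-width $h$ has variance $h^2/3$, so $\pm\sqrt{3Q}$ recovers the full extent of a uniformly lit arc; a sketch:

```python
import numpy as np

def annular_sector(r_bar, theta_bar, q_rr, q_tt):
    """Radial and azimuthal bounds of the annular sector built from the
    polar quadrupole moments: r_bar +/- sqrt(3 q_rr) and
    theta_bar +/- sqrt(3 q_tt)."""
    dr = np.sqrt(3.0 * q_rr)
    dt = np.sqrt(3.0 * q_tt)
    return (r_bar - dr, r_bar + dr), (theta_bar - dt, theta_bar + dt)
```

For a weakly distorted background galaxy the same $\pm\sqrt{3Q}$ half-widths go over smoothly into the axes of the equivalent ellipse, which is the bridging property noted above.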
We have been unable to define a useful global statistic which can be
minimized to produce a ``best model.'' We could conceive of
constructing a $\chi^2$-like measure which tabulates weighted
differences between the observed and simulated polar statistics. How
the different statistics $\bar{r}$, $\bar{\theta}$, $Q_{rr}$, and
$Q_{\theta\theta}$ should be weighted, if at all, is unclear. This
omission is due, in part, to the lack of a quantitative definition of a
``best model'' and a meaningful target value for the $\chi^2$-like
statistic. At this time, we refrain from forming such a measure, and
rely on the appearance of the relensed, reconstructed sources to check
the consistency of the model.
\section{Conclusions}
We have introduced a two-stage lens inversion algorithm which decouples
the effects of lens magnification and intrinsic source structure in the
appearance of lensed arcs and arclets. The key to decoupling the
deflector plane from the source plane is characterizing the positions
and shapes of the lensed objects using polar moments. While these
statistics are artificial, they have interesting and useful qualities.
In reconstructing a background source from demagnified data, we
adopt a simple strategy of generating a pixelized image of the source,
where the resolution of the image is set so that the number of data in
the reconstructed source is comparable to the number of data in the
observations. Refinements of this strategy, such as adaptive gridding
to take advantage of the concentration of data around the caustics, will
be explored in the future.
It is not the goal of this paper to answer astrophysical questions about
the nature of the galaxy-clusters MS~2137 and MS~1455. It is
interesting to note, however, that both the non-singular PID and
singular NFW halos are able to model the lensing behaviour of MS~1455.
To begin to answer questions like these requires a detailed model of the
gravitational lens. The algorithm described here offers an efficient
and intuitive approach to generating such a model.
This roblox hack/exploit is similar to the roblox rc7 cracked hack/exploit.
for making this awesome hack!
Be sure to subscribe to get daily hacks on this channel!
Enjoy ROBLOX RC1 LEVEL7 HACKEXPLOIT OMG.
All files are uploaded by users like you; we can't guarantee that ROBLOX RC1 LEVEL7 HACKEXPLOIT OMG are up to date.
We are not responsible for any illegal actions you do with these files. Download and use ROBLOX RC1 LEVEL7 HACKEXPLOIT OMG on your own responsibility.
for making this awesome crack!
Enjoy ROBLOX ELYSIAN LEVEL7 HACKEXPLOIT NEW.
All files are uploaded by users like you; we can't guarantee that ROBLOX ELYSIAN LEVEL7 HACKEXPLOIT NEW are up to date.
We are not responsible for any illegal actions you do with these files. Download and use ROBLOX ELYSIAN LEVEL7 HACKEXPLOIT NEW on your own responsibility.
– Leave a like for more hacking videos on ROBLOX!!
Hope you enjoy! Remember, John Doe isn't real, but it's fun to troll as him!
Enjoy JOHN DOE RETURNS TROLLING ODERS ROBLOX EXPLOITINGHACKING 1.
All files are uploaded by users like you; we can't guarantee that JOHN DOE RETURNS TROLLING ODERS ROBLOX EXPLOITINGHACKING 1 are up to date.
We are not responsible for any illegal actions you do with these files. Download and use JOHN DOE RETURNS TROLLING ODERS ROBLOX EXPLOITINGHACKING 1 on your own responsibility.
The Free Robux Generator / Roblox Free Hack Tool is versatile and user-friendly. It is easy to reach top levels quickly with this hack tool. Roblox Hack is 100% safe and secure. It does not put your device at any risk. As there is no need to download, you can have online access to this hack tool.
Enjoy Roblox Hack Cheat Tool 2017 Free Robux Roblox Robux Generator.
All files are uploaded by users like you; we can't guarantee that Roblox Hack Cheat Tool 2017 Free Robux Roblox Robux Generator are up to date.
We are not responsible for any illegal actions you do with these files. Download and use Roblox Hack Cheat Tool 2017 Free Robux Roblox Robux Generator on your own responsibility.
Be Sure to Subscribe & turn on post notifications 🔔 for the latest videos to join the notification squad!!
Enjoy ✔️ NEW ROBLOX HACKEXPLOIT: RC7 CRACKEDVERY OP LEVEL7.
All files are uploaded by users like you; we can't guarantee that ✔️ NEW ROBLOX HACKEXPLOIT: RC7 CRACKEDVERY OP LEVEL7 are up to date.
We are not responsible for any illegal actions you do with these files. Download and use ✔️ NEW ROBLOX HACKEXPLOIT: RC7 CRACKEDVERY OP LEVEL7 on your own responsibility.
All files are uploaded by users like you; we can't guarantee that ROBLOX RC7 Cracked WORKS JULY 30 2017 EXECUTES LUA AND are up to date.
Enjoy How To Get UNLIMITED FREE ROBUX on Roblox PC Cheat engine 6.7.
All files are uploaded by users like you; we can't guarantee that How To Get UNLIMITED FREE ROBUX on Roblox PC Cheat engine 6.7 are up to date.
We are not responsible for any illegal actions you do with these files. Download and use How To Get UNLIMITED FREE ROBUX on Roblox PC Cheat engine 6.7 on your own responsibility.
Cancer needs to be diagnosed early in order for patients to receive the necessary treatment as soon as possible. That's why doctors who misdiagnose cancer put patients in such danger. A cancer misdiagnosis can result in a patient losing valuable time that could have been utilized aggressively treating cancer cells growing in the person's body. A patient's chances of survival often decrease rapidly when a doctor fails to diagnose cancer early. In serious instances, a missed cancer diagnosis can be grounds for a wrongful death lawsuit in Illinois.
People trust doctors to make accurate diagnoses and provide prompt treatment when they have a life-threatening medical condition like cancer. But when a doctor makes a diagnosing error of any kind, the results can have grave consequences. At The Deratany Firm, our experienced attorneys understand that a cancer misdiagnosis is among the most serious - and potentially fatal - errors a medical professional can make.
Unfortunately, many doctors are not willing to admit they made a mistake when misdiagnosing cancer. They will often insist they did everything they could to treat your condition. The cancer misdiagnosis lawyers at The Deratany Firm provide victims with honest advice and have access to some of the state's top medical experts.
Treating cancer requires extensive, time-consuming care, and gathering the evidence to prove a doctor misdiagnosed cancer can be an equally difficult and lengthy process. We have decades of experience providing passionate, knowledgeable legal representation for clients, including those who lost a loved one due to cancer.
A cancer misdiagnosis is especially serious because it represents lost time and opportunity. If caught in time, a patient could have had life-extending treatment at one of the many world-class hospitals in Chicago. A misdiagnosis represents one of the worst types of medical malpractice: the lost opportunity to treat the cancer at an early stage, when treatment is more effective.
In some cases, there was a delayed diagnosis of cancer, which should have been detected earlier. In other cases, there was a failure to diagnose cancer at all, and the person's health continued to decline. Neither is acceptable. Highly trained medical professionals should be able to recognize the signs of cancer.
Our attorneys fight for people in Chicago whose cancer was misdiagnosed. We'll conduct our own investigation into the case. We'll examine medical records and consult with experts to ensure that the correct tests were ordered and the results were read properly. We will talk to witnesses who were involved in your care. Our legal team will examine every piece of evidence carefully. When there is negligence on the part of a medical professional, we'll find it.
Then we will fight to help you recover damages. We take the time to listen to clients, to determine your needs and the compensation you should be getting. Depending on your case, you may be able to seek compensation for medical expenses, lost wages if you are out of work, pain and suffering, emotional distress and other damages. We will negotiate with the insurance company to try to get the maximum compensation we possibly can. But we will also be ready to fight them in court if they don't do what's right.
Cancer is a cruel disease that robs people of health and time spent with loved ones. Negligence that prevents the treatment of cancer only makes things worse. Hold negligent medical professionals responsible. Contact our law firm. One of our experienced attorneys at The Deratany Firm can help you pursue justice. Call us at 800-529-7285 to schedule a free case consultation.
Nebraskan gets 35-60 years for child sex assault
Updated: 7:50 AM CDT Apr 17, 2014
A 38-year-old Beatrice man has been given 35 to 60 years in prison for sexually assaulting a 12-year-old girl.
William Foster was sentenced on Wednesday in Gage County District Court in Beatrice. Foster pleaded guilty to one count of first-degree sexual assault of a child.
Prosecutors dropped four counts of third-degree sexual assault of a child in exchange for Foster's plea.
He was charged last April after the girl told a school employee about the 15 to 20 sexual assaults that had occurred since October 2012.
Police say Foster admitted what he'd done and said the assaults had occurred in Gage and Saline counties.
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<scm>
<url>https://github.com/rkovarik/engage-api-client</url>
<connection>scm:git:git://github.com/rkovarik/engage-api-client.git</connection>
<developerConnection>scm:git:git@github.com:rkovarik/engage-api-client.git</developerConnection>
<tag>HEAD</tag>
</scm>
<distributionManagement>
<repository>
<uniqueVersion>false</uniqueVersion>
<id>magnolia.forge.releases</id>
<name>magnolia.forge.releases</name>
<url>https://nexus.magnolia-cms.com/content/repositories/magnolia.forge.releases</url>
<layout>default</layout>
</repository>
<snapshotRepository>
<uniqueVersion>true</uniqueVersion>
<id>magnolia.forge.snapshots</id>
<name>magnolia.forge.snapshots</name>
<url>https://nexus.magnolia-cms.com/content/repositories/magnolia.forge.snapshots</url>
<layout>default</layout>
</snapshotRepository>
</distributionManagement>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-source-plugin</artifactId>
<version>2.1.2</version>
<executions>
<execution>
<id>attach-sources</id>
<goals>
<goal>jar-no-fork</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<version>2.18.1</version>
<configuration>
<systemProperties>
<property>
<name>java.awt.headless</name>
<value>true</value>
</property>
</systemProperties>
</configuration>
</plugin>
</plugins>
</build>
<modelVersion>4.0.0</modelVersion>
<groupId>silverpop</groupId>
<artifactId>spapi-client</artifactId>
<packaging>jar</packaging>
<version>1.0.2-magnolia-SNAPSHOT</version>
<dependencies>
<dependency>
<groupId>com.thoughtworks.xstream</groupId>
<artifactId>xstream</artifactId>
<version>1.4.8</version>
<scope>compile</scope>
</dependency>
<dependency>
<groupId>commons-httpclient</groupId>
<artifactId>commons-httpclient</artifactId>
<version>3.1</version>
<scope>compile</scope>
</dependency>
<dependency>
<groupId>org.apache.commons</groupId>
<artifactId>commons-lang3</artifactId>
<version>3.1</version>
<scope>compile</scope>
</dependency>
<dependency>
<groupId>org.testng</groupId>
<artifactId>testng</artifactId>
<version>6.7</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>com.beust</groupId>
<artifactId>jcommander</artifactId>
<version>1.0</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.mockito</groupId>
<artifactId>mockito-all</artifactId>
<version>1.9.5</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.11</version>
<scope>test</scope>
</dependency>
</dependencies>
</project>
const { spawn } = require('child_process');
const _ = require('lodash');
// Returns a function that spawns a shell `watch` command for a service,
// buffers its stdout/stderr, and invokes `buildFinished` once the
// configured "ready" text appears, the build times out, or the child exits.
module.exports = function SpawnWatch(bosco) {
return (service, command, cwd, verbose, buildFinished) => {
bosco.log(`Spawning ${'watch'.cyan} command for ${service.name.blue}: ${command.log}`);
const wc = spawn(process.env.SHELL, ['-c', command.command], cwd);
let output = {
state: 'starting', data: [], stdout: '', stderr: '',
};
let outputCache;
let outputCacheIndex;
let overallTimeoutTimer;
function addOutput(type, data) {
output[type] += data;
output.data.push({ type, data });
}
function reset() {
output = {
state: 'starting', data: [], stdout: '', stderr: '',
};
outputCache = '';
outputCacheIndex = -1;
if (overallTimeoutTimer) clearTimeout(overallTimeoutTimer);
overallTimeoutTimer = null;
}
function buildCompleted(err) {
const outputToReturn = _.clone(output);
reset();
return buildFinished(err, outputToReturn);
}
function onBuildTimeout() {
const errorMessage = `Build timed out beyond ${command.timeout / 1000} seconds, likely the project build not writing out ready text: ${command.ready}\n`;
output.state = 'timeout';
addOutput('stderr', errorMessage);
if (verbose) {
bosco.error(errorMessage);
}
return buildCompleted();
}
function buildStarted() {
bosco.log(`Started build command for ${service.name.blue} ...`);
overallTimeoutTimer = setTimeout(onBuildTimeout, command.timeout);
}
function isBuildFinished() {
output.data.forEach((entry, i) => {
if (i <= outputCacheIndex) { return; }
outputCache += entry.data;
outputCacheIndex = i;
});
return outputCache.indexOf(command.ready) >= 0;
}
function onChildOutput(type, data) {
if (!data) { return; }
if (output.data.length < 1) {
buildStarted();
}
addOutput(type, data.toString());
if (verbose) {
bosco.process[type].write(data.toString());
}
if (isBuildFinished()) {
output.state = 'finished';
buildCompleted();
}
}
function onChildExit(code, signal) {
const childError = new Error(`Watch process exited with code ${code} and signal ${signal}`);
childError.code = code;
childError.signal = signal;
output.state = 'child-exit';
addOutput('stderr', `${'Watch'.red} command for ${service.name.blue} died with code ${code}`);
return buildCompleted(childError);
}
reset();
wc.stdout.on('data', (data) => { onChildOutput('stdout', data); });
wc.stderr.on('data', (data) => { onChildOutput('stderr', data); });
wc.on('exit', onChildExit);
};
};
R - time series decomposition without detection of seasonality
(https://datascience.stackexchange.com/questions/14788/r-time-series-decomposition-without-detection-of-seasonality)

Q: I have a time series dataset with 200 data points. I have decomposed it using the function below:

dat2 = ts(dat1, frequency = 4)
decomposeDat = decompose(dat2, "multiplicative")

I get four components: trend, seasonal, cyclic and irregularity. But when I check whether seasonality is present in the dataset with frequency 4, RStudio says that there is no seasonality for this frequency. The check is performed with the following code:

dat2 = ts(dat1, frequency = 4)
fit <- tbats(dat2)
seasonal <- !is.null(fit$seasonal)
seasonal

seasonal returns FALSE, meaning that there is no seasonality with frequency 4.

Can someone explain why I can decompose the series into a seasonal component when, according to the check above, no seasonality is present?

A: Without seeing your data it is hard to tell whether there is seasonality or not. The decompose() function will try to find seasonality using a different approach than tbats(), as discussed in this post and in the user comments of this blog post by the author of tbats.
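The difference the answer describes can be seen without R: classical decomposition is a purely mechanical procedure that always emits a seasonal component, whether or not the series is actually seasonal. Below is a minimal pure-Python sketch of classical additive decomposition with an even period; the function name `classical_decompose` is illustrative, not from R or any library, and R's `decompose()` additionally supports a multiplicative model.

```python
# Classical additive decomposition, pure Python (illustrative sketch).
def classical_decompose(x, period):
    n = len(x)
    half = period // 2
    # Centered moving-average trend; for an even period the two edge
    # points of the window are weighted by 0.5.
    trend = [None] * n
    for i in range(half, n - half):
        w = x[i - half:i + half + 1]
        trend[i] = (0.5 * w[0] + sum(w[1:-1]) + 0.5 * w[-1]) / period
    # Detrend where the trend is defined, then average by seasonal position.
    detrended = [x[i] - trend[i] for i in range(n) if trend[i] is not None]
    offsets = [i % period for i in range(n) if trend[i] is not None]
    seasonal = []
    for k in range(period):
        vals = [d for d, o in zip(detrended, offsets) if o == k]
        seasonal.append(sum(vals) / len(vals))
    # Centre the seasonal effects so they sum to (roughly) zero.
    mean_s = sum(seasonal) / period
    seasonal = [s - mean_s for s in seasonal]
    return trend, seasonal

# Even a purely trending, non-seasonal series yields a seasonal component:
series = [float(i) for i in range(20)]
trend, seasonal = classical_decompose(series, period=4)
print(seasonal)  # → [0.0, 0.0, 0.0, 0.0]
```

Run on data with no seasonality at all (a plain linear trend), the procedure still returns a four-element seasonal vector; on noisy real data those entries will rarely be exactly zero, which is why decompose() can appear to "find" seasonality that a model-based test like tbats() rejects.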
New York City — Sitting by Bryant Park, I took a photo of a cylinder-shaped object on June 13, 2015.
In special reports, this week's files cover: Admiral Fahrney Intelligence Directs UFOs, Optical and Photographic Technical Component Awareness, Letter from Earth Guardian Susan Signal, Ceres Structures in Crater, and St. Paul of Tarsus, Abductee?
Unidentified Aerial Phenomena sightings were reported over: California, Florida, Illinois, Maryland, Minnesota, Missouri, New York, Ohio, South Dakota, and Utah.
Sightings of UFOs were also reported in: Australia, Brazil, Canada, India, Mexico, Spain, and England and Scotland in the United Kingdom.
WASHINGTON AP – Retired Rear Admiral Delmer S. Fahrney, once head of the Navy's guided missile program, said Wednesday reliable reports indicate that "there are objects coming into our atmosphere at very high speeds." Fahrney told a news conference that "no agency in this country or Russia is able to duplicate at this time the speeds and accelerations which radar and observers indicate these flying objects are able to achieve." On January 16, 1957 – the day after NICAP's Board of Governors met for the first time – Board Chairman Delmer S. Fahrney called a press conference. News media all over the country quoted his statements identifying him as one of the few "top brass" to speak out in defense of UFOs.
This Subject Applies to Telescopes, Binoculars, Cameras and Most Optical Equipment.
Chuck Reever, MUFON's International Director of Investigation, writes, "There is a technical component to photographs and observations of which the investigator must be aware." Most investigators should be aware that objects, when out of focus through optical aids, can appear orb-like or appear to contain "structure." Digital "still" and "video" cameras are very susceptible to this behavior due to camera manufacturing and design. All modern digital cameras are now manufactured so that the camera "hunts" for a best focus due to the electronic focusing system. (When possible, cameras should be manually adjusted to focus on "infinity" when photographing an object in the sky.) Most objects in the sky are "tiny" from a camera perspective.
Susan writes, "My experiences started back in 1966-67, when I was 6-7 years old. I was sleeping when I had this dream of flying a ship which entered the Earth's solar system faster than the speed of light. We were in the third phase of reconversion from pure energy to solid form. As we entered the solar system, we bounced off the gravitational fields of Neptune, Pluto and two other unnamed planets."
As we flew past the sun it looked like a solid rock due to our light speed. I was the navigator on the far left-hand side of the ship. As such, I was hypersensitive to any subtle energy changes in the universe, even before they would occur. This was one of the mandatory requirements for navigation (svar-ari-erum) in our language. As we entered the Earth's atmosphere, I was already awake and the rest of my "Nyindan-human" crew was still asleep. As we entered the atmosphere, a rift or black hole opened up.
I found a way to focus the structures on Ceres on June 11, 2015; I show you how in the video, and I seemed to bring out a lot of hidden detail. This is what NASA does not want the public or other governments to see: alien structures made of highly reflective material.
I have seen such reflective structures before on the dark side of the moon in old NASA photos. I was never able to photograph them (with my camera pointed at computer screen) although this method of looking at a photo with a digital camera often will re-digitalize (auto correct) the blur. I saw a glass-like pyramid and a lit up face on the moon, but when photographed, the objects appear as a white mass of light only. So it was impossible to prove it exists.
St. Paul of Tarsus, Abductee?
If you accept the New Testament premise that Jesus is the Son of God, then he chose his twelve Apostles wisely. Twelve is an ancient mystical number corresponding to the Zodiac and this was not lost on Jesus. Judas served a purpose and was replaced by the drawing of lots where Mathias was chosen to restore order as the new 12th Apostle. I must admit that I am not the first person to suggest that Paul of Tarsus was an alien abductee, but the available literature and theories are thin to say the least, and no one has (to my knowledge) done an in-depth assessment of Paul's case based on established criteria for commonalities among victims of the abduction phenomenon.
Paul's conversion is often considered the most important event in human history apart from the life, death and resurrection of Jesus of Nazareth. His writing and preaching are responsible for the spread of Christianity.
There are over 70 symptoms which an abductee may experience. In reviewing Paul's case it is evident he has many of them. Let's look at Paul's story. What happened to Paul that day was nothing less than an epiphany not only for him but for the burgeoning faith called Christianity. It altered everything that developed into the world's most numerous religion. In the 21st century we have institutionalized Paul's experience in what is called a "Road to Damascus Experience" for those among us who have had a sudden enlightenment of any kind.
Paul is blinded by his experience and later scales or scabs fell from his eyes. He seems to be a victim of Klieg Conjunctivitis which occurs after exposure to severe ultraviolet radiation. UFOs often are associated with intense ultraviolet radiation. Was Paul's flash of light the appearance of a UFO discharging ultraviolet radiation in its wake?
Was this divine interloping or was it a telepathic communication between two abductees? Ananias has all the earmarks of a fellow abductee. Why did God need Ananias to cure Paul? Why was Paul blinded at all? Ananias had to use his powers given to him by Jesus to cure Paul. Frequently, abductees have healing powers given to them.
David Halperin, a retired University of North Carolina, Chapel Hill religious studies professor, author and UFO investigator, mentions in one of his blogs that the "thorn in his side" Paul alludes to in 2nd Corinthians could be an alien implant. I agree, but David doesn't take the point far enough.
When Jesus was alive, he chose 12 apostles to carry on his ministry after his death and resurrection. If Jesus was the Son of God you would think his choice of apostles would be divinely inspired for the ages. Yet he converts Saul of Tarsus who apparently had been effective in stopping the spread of Christianity. A strange bald short cranky little man – and Jesus asks his assistance in spreading the good news! What was wrong with the twelve? Were they inferior or is someone or something else at play here interfering with things?
There were witnesses so it was a real event, not something internal to Paul. Additionally, the encounter was violent in that he was thrown from his horse and there was bodily injury. All physical signs of abduction.
Paul promptly changes his name from Saul to Paul and is baptized. He goes on to write one third of the Bible's New Testament with some of the most inspired and eloquent theology themed letters concerning God, Jesus and love ever written.
Abductees frequently feel chosen and Paul feels chosen to carry the message of Christ. He is a totally changed man from murderous religious zealot to a powerful disciple and missionary!
Santa Cruz — Out of the corner of my eye on June 1, 2015, I saw a shadow zip by; I'm not sure what it was, possibly some kind of craft.
It hovered, changed direction, circled, changed shape, and flew off at high speed.
Aberdeen — We noticed several commercial aircraft making U-turns while eating dinner on our porch on May 27, 2015. I took some images of the contrails because that is something I don't see where I live.
Kansas City — A bright object with spinning bright multiple colors continues to be observed frequently since April of 2011 when sightings in the Blue Springs area began to be reported to MUFON. Assistant State Director Margie Kay and NUFORC reports dozens of witnesses have seen this strange object. Sometimes more than one light appears, but normally it is one object which appears to be a disk with multi-colored lights spinning in a clockwise direction.
The lights are red, green, white, and blue neon bright. The latest sighting was observed on May 3, by at least 10 witnesses in Gladstone looking south towards Kansas City, and then west for at least 20 minutes. MUFON investigators Margie Kay, Larry Jordan, and Corey Pearce have observed and filmed the object on numerous evenings, and flashed a 1 M candlepower flashlight at it, triggering returning flashes. This indicates that intelligence is operating the craft. One witness from Independence has been observing the object for three years. He has filmed a small UFO in his yard. On May 3, 2015, a larger craft with spinning lights was captured on his security camera. The large disk-shaped craft hovered over his trees, and then moved away. The witness believes that the large craft with the spinning lights on it is the object which is seen from far away.
New York City — Sitting by the lawn of Bryant Park, facing south, I took a photo of the Empire State Building jutting out among the buildings on 40th Street on June 13, 2015. I didn't notice the object when taking the photo with my cell phone camera.
It wasn't an advertisement banner dragged by an aircraft because it wasn't there when I looked up the sky to take the photo. And I don't think NYC allows banner-dragging aircraft over Manhattan.
Milford — My buddy and I were at work when we saw an orange light flying across the sky very slowly. After zooming in I saw it was a triangle. The photo was taken Sunday, June 7, 2015 at 9:35 PM. The object was heading north with an orange plume above it that looked like the aurora borealis.
It looked like the top of it was on fire and had an orange plume swaying back and forth, with straight laser-beam lights coming out of the top. It was very strange.
Sioux Falls — On June 7, 2015, I noticed a change in the color outside, looked out, and saw this single cloud. It didn't look right, so my wife and I took pictures. Later I reviewed the pictures and noticed all these objects in a couple of them.
Salt Lake City — My friend and I saw three UFO sightings on June 1, 2015, while we were looking at the moon. We saw a bright light coming down, going very slowly for about 15 minutes as it got very low in the sky. We saw a white V-shaped glowing light. Then it started to go very fast northwest and was gone in a flash at 9:37 PM. At 10 PM, we were looking west over Salt Lake Airport and saw a very large round light with about 30 white lights inside the object. It had a red light bouncing around inside it. It was not moving, and a commercial airliner going north to the airport went right past it. Then the UFO moved up and down for ten minutes, flew west very fast, and was gone. Ten minutes later we saw another white V-shaped craft going very slowly over the airport; in a flash it flew west and was gone. Thanks to UFOinfo and Richard J.
Looks like physical craft morphed into spirit beings and descended to earth.
So UFOs might be "demons" or other such extra-dimensional entities on April 24, 2015.
Oshawa — On Sunday, June 7, 2015, while outside on my deck facing west, a plane was constantly circling in the clear sky when I noticed something strange: a slow flashing white light over Whitby, Ontario, heading east. It looked to be less than a quarter mile north, heading towards my house. With my camera already out, I started to snap still shots at 11:07 am.
Hermosillo — Watch this ball of white light as it flies around during the daytime. It's clearly an orb, and it moves incredibly fast. It's nice that he pulled the car over to watch it for a bit. I just wish the video was longer.
Caha — I randomly accessed a public-network camera at a Spanish observatory on June 4, 2015, at 2 PM. The objects appeared to be moving from ground to air, and I observed three in this manner.
If you are familiar with sea life, these objects moved exactly like "Jellyfish" would move in their habitat.
Bodrum – I was having dinner with two others beside the sea at Bodrum Castle and took a photo of the bay at sunset at 7:19 PM on June 13, 2015. When I looked at the photo immediately afterward, I saw a distinct and almost shiny object in the sky that I hadn't seen with my bare eye.
Loch Ness — Tourists holidaying next to Loch Ness have captured an extraordinary photo which they claim shows a mysterious creature flying over the lake – and it's not Nessie. Alan Betts, 48, was on holiday with his wife, Anna, and her parents when his mother-in-law Tatiana captured the image of two mysterious disc-shaped objects flying over the famous loch in the Scottish Highlands.
The family from York did not realize how unusual the picture was until they returned home and looked at the pictures.
The Heart of Our Music: Underpinning Our Thinking: Reflections on Music and Liturgy by Members of the Liturgical Composers Forum (Paperback)
By John Foley (Editor)
In The Heart of Our Music, master practitioners of the art of liturgical music come together to offer enriching insights, a stirring vision, and practical new ideas that will change the way you think about liturgy and liturgical ministry. These reflections are written with the needs of parish liturgists and liturgical musicians in mind.
This volume includes reflections on the role of composition, the role of music, the kind of language we use, the missionary dimension of our texts and music, whether esthetic beauty is the only quality needed, and how we think about and name God in the songs we sing.
Contributors and their articles include: "A Sacrifice of Praise: Musical Composition as Kenosis" by Alan J. Hommerding; "'The Word Is Near You, in Your Mouth and in Your Heart' Music as Servant of the Word" by Bob Hurd; "The Songs We Sing: The Two Languages of Worship" by Tony Barr; "Moving to Metamelos: A New Heart, a New Church, a New Song" by Rory Cooney; "Beauty and Suitability in Music in the Liturgy" by Paul Inwood; and "From 'God Beyond All Names' to 'O Agape' Images of God in Liturgical Music" by Jan Michael Joncas.
Christian Rituals & Practice - Worship & Liturgy
Christianity - Catholic
Institutions & Organizations
Paperback (June 12th, 2015): $12.95
Publisher: Liturgical Press
Writer Margaret Wander Bonanno has worn many hats over the years: author, proofreader, copy editor, ghost writer, mother, educator and now publisher.
A Star Trek fan "from the time of the beginning", Margaret's first two Star Trek novels Dwellers in the Crucible (1985) and Strangers from the Sky (1987) both became bestsellers but following the farce surrounding the publication of Probe, the majority of which was rewritten although her name remained on the cover, Margaret returned to writing main-stream science fiction producing, among other numerous titles, two well received trilogies, The Others and Preternatural.
To the delight of fans world wide Margaret returned to writing Star Trek fiction with the Lost Era novel Catalyst of Sorrows (2004).
In her newest novel Burning Dreams, Margaret tackles the inscrutable figure of Christopher Pike, the tragic former Captain of the U.S.S. Enterprise. Pike has figured as a character in a number of novels and short stories over the years but until now we've only gotten glimpses of what made Pike the man he was.
Burning Dreams is now available and Margaret was kind enough to answer a few questions for Trek Nation about her newest foray into the Star Trek universe.
Trek Nation: Often an author brings a story idea to the editor but I understand that you were asked to write Burning Dreams. Could you elaborate on that?
Margaret Bonanno: I was sort of at loose ends for a Trek idea after Catalyst of Sorrows, and Marco Palmieri literally came to me and said, "What would you like to do next?" I told him I hadn't a clue, and he asked, "How do you feel about Christopher Pike?" Marco recognized that my stories are character-driven rather than action-driven, and he said he was looking for "the definitive Pike novel". We knocked some ideas around, and the final outline was a combination of his ideas and mine.
TN: When did you first see "The Cage"?
MB: Probably the first time it was aired on TV. I think it was in the late 80s. Saw "Menagerie", of course, when it first aired, and a few hundred times after that, and I was familiar with the history of how "The Cage" became "Menagerie", so it was intriguing to see the differences.
TN: What fascinates you about the character of Christopher Pike?
MB: We know so little about him, and yet we can guess at so much. For me the "hook" was the moment in "The Cage/Menagerie" when the Keeper is punishing him for refusing to eat and gives him the illusion that he's in Hell. The line is "From a fable you once heard in childhood", and I thought Hmmm. It's dubious in that era that he'd have been raised with the fear of fire and brimstone, but was there something else�a personal experience or memory�that made this the perfect way to punish him? He himself asks the Keeper "Why not just put irresistible hunger in my mind?" To my way of thinking, it was because the Keeper, being able to access Pike's deepest thoughts and fears, knew that fire would be more effective. So that's where I began.
TN: Did you have specific inspirations when you first began the process of writing Burning Dreams, or did the inspiration spring from your research into what others had done with the character?
MB: I did read as many of the novels (and The Lives of Dax) as I could get my hands on, and the one consistent theme I found throughout is that these writers understood the difference between Kirk and Pike. Kirk was the headstrong one, Pike the thoughtful one or, as the Suits described him, the "cerebral" one, which could possibly be a drawback in a job that sometimes required split-second decision making. So I tried to use that, particularly in the section of the book where Pike is a young officer on the Aldrin and has to make command decisions he doesn't think he's ready for. My thought was that Kirk would have charged ahead anyway and worried about consequences later, but Pike carried on this internal argument with himself even as he took action.
Additionally, I'm a proofreader/copy editor in real life, and I've had the opportunity to work on a number of book projects written by folks who are quadriplegic. One gentleman has since become a friend and my business partner. Learning what the world is like for someone whose spinal cord has been damaged and whose body refuses to respond from the shoulders down, and encountering the incredible grit and spirit and determination that so many of these folks have to keep on keepin' on, was very helpful in working with Pike.
TN: You've obviously incorporated not only what was established in "The Cage" and "The Menagerie" but also material from several of the novels that feature Pike. Perhaps it was my imagination, but did you manage to slip biographical information on the actors who played Pike and Vina, Jeffrey Hunter and Susan Oliver, into the narrative?
MB: Ah, ya caught me! I found out from IMDb.com that Jeffrey Hunter's original name was Hank McKinnies, so Pike's mother's name became Willa McKinnies. Susan Oliver, I discovered, had been a record-setting aviator, so it was fun to add that to Vina's character.
TN: Are you satisfied that you accomplished what you set out to do with Burning Dreams?
MB: I think so. There were points in the writing process where I wondered if I'd gotten Pike's voice right, but then I'd just go back and watch the episodes for the umpteenth time. What was really fun, too, was expanding Vina's character. In the original, she suffers from several things: the attitude toward women in the 60s, and Disposable Blonde syndrome, and Gene Roddenberry's having to put the "Menagerie" script together in such a short time. So she could easily be seen as just a bit of fluff.
Making her Pike's coequal, and stopping to think of the implications of her being the only human on Talos IV for 18 years before Pike's arrival, and for another 13 years afterward makes us realize how incredibly strong this character is. How many of us could survive something like that with our minds and souls intact?
Finally, exploring what becomes of the Talosians once Pike decides to remain on Talos IV was challenging, and a lot of fun.
TN: After some very successful Star Trek novels, you took a long break from writing Star Trek fiction after the Probe debacle. For those who don't know the story could you explain briefly what happened with Probe?
TN: What inspired you to write Star Trek again?
MB: I've been in love with this stuff since the time of the beginning. Nearly every aspect of my life has in some way been influenced by Star Trek, and this is my chance to give back.
It's also so much easier to write in this universe than to have to create a universe from scratch. And the rewards are incrementally greater. I've written nearly 20 other novels, but Star Trek is what people remember.
TN: Until now most of your Star Trek fiction has had a strong Vulcan or Romulan influence. Your two most famous novels, Dwellers in the Crucible and Strangers from the Sky for example and also the more recent Catalyst of Sorrows all delve into Vulcan and/or Romulan history and culture so Burning Dreams is somewhat of a departure for you. What allure do the Vulcans and Romulans hold for you?
It's also been fascinating, to coin an expression, to watch the cultures of these two peoples evolve over the decades. Gene Roddenberry just wanted a "Martian" with funny ears on his bridge. He had no idea.
TN: Your previous books have all featured unforgettable strong female characters, primarily original characters. Did you take a different approach when writing for an established male character?
MB: It was a challenge, particularly the childhood parts. For obvious reasons, it's easier for me to get inside a female character's head and understand what makes her tick. Here I had to not only try to see the world from the POV of a 9-12 year-old boy, but one who was raised around horses (because it wouldn't be Pike's story without Tango). I'm a city kid; most of the horses I've seen were either onscreen or on a carousel. Fortunately I got some very good advice from a woman who raises Morgan horses, and I've acknowledged her in the book. But, yeah, even though I've raised a son, boy children are still a mystery to me.
TN: You are also contributing to the Mere Anarchy eBook mini-series, writing the sixth and final book Its Hour Come Round that will be out next April. What can you tell us about Mere Anarchy?
MB: The series is ambitious, in that it covers a long period of time: from Kirk's first five-year mission through his "death" in the Nexus in Generations. It's primarily a story about good intentions gone awry, an instance where the Federation was just on the verge of making first contact with a promising world when a natural disaster nearly destroys that world, and Kirk is instrumental in trying to restore that world to normalcy despite further natural disasters, interference by the Klingons, and the occasional governmental coup over the next several decades.
Some of the greatest fun has been "meeting" the other authors through a humongous exchange of emails, beginning last fall when our editor, Keith R.A. DeCandido, first proposed this project. I've known Howard Weinstein for a long time, and I got to meet Dayton Ward, Kevin Dilmore and Christopher Bennett at Shore Leave. Dave Galanter and Mike W. Barr are the other two writers on this project and, while I haven't met them in person yet, the story tripping and the one-liners and puns have been flying through cyberspace for nearly a year, so that opening my email every morning is a surprise.
More than one of us has suggested that the emails should be published as a "The Making of Mere Anarchy" companion, but it would probably be bigger than the Encyclopedia Americana. But funnier. Much funnier.
TN: Are there any other Star Trek stories you've always wanted to tell? Or characters you've wanted to write?
MB: I've always loved filling in the blanks, asking, "What happens after the last frame of the episode/movie?" and "What did this character do between this episode and that?" and especially "What makes this character the person they are?" Whatever my next project, I wouldn't mind doing some more in-depth character study, as I got a chance to do with Uhura in Catalyst of Sorrows, and with Pike in Burning Dreams.
In addition to Burning Dreams hitting book stores and online retailers in August, Margaret Wander Bonanno's classic novel Strangers from the Sky has been reissued by Pocket as part of their celebration of Star Trek's 40th Anniversary.
This new paperback edition of Strangers from the Sky, which has new cover art and a new introduction written by Margaret, is available now.
The Oakland Athletics' season ended on Thursday night after losing Game 5 of the American League Division Series to the Detroit Tigers. After winning 96 games and the American League West for the second consecutive season, the bulk of the A's roster will be back for 2014.
A's general manager Billy Beane is known for wheeling and dealing, but he has a young and talented roster full of cost-controlled players. Here's a handy cheat sheet for the offseason that details the contract status of some of the A's key players.
For full contract information, check out Cot's Baseball Contracts.
We told you that the White House has appointed Michael Robertson as GSA's chief of staff.
I am delighted to announce that effective May 3, 2010 the White House has appointed Michael J. Robertson as GSA's new Chief of Staff.
Michael is no stranger to the agency. Since March 2009 he has served as our White House Liaison and then in August he took on the roles of Associate Administrator for the Office of Governmentwide Policy and Chief Acquisition Officer. In those roles, Michael ably and successfully merged OCAO with OGP and helped drive important White House initiatives on recovery, sustainability, and open government at GSA.
As Chief of Staff, Michael will serve as one of my closest advisors with particular emphasis on furthering the Obama Administration's agenda throughout GSA. He will work closely within GSA to connect and partner us with client agencies and with the White House, to assure our strong focus on our customers, align us with the President's priorities, and ensure that we find creative and collaborative ways to be a leader in sustainability, open government, recovery, and acquisition workforce initiatives.
Since his arrival early last year, Michael's talent has been evident and his passion for this agency and our work together is remarkable. Please join me in welcoming him to this new position.
Just in — GSA Administrator Martha Johnson today named Michael Robertson to be GSA's chief of staff.
That post was vacated earlier this year when Danielle Germain stepped down. The chief of staff is a critical post in the GSA leadership team. In fact, GSA Administrator Martha Johnson served as the chief of staff for then GSA Administrator David Barram, so it is a post with which she has intimate knowledge.
He will take over on May 3, 2010.
Robertson already wears a number of hats within GSA — he serves as the White House liaison, the associate administrator of GSA's Office of Governmentwide Policy, and the agency's chief acquisition officer. It seems unlikely that he would be able to continue holding all those posts, but we were not immediately able to confirm those details.
UPDATE: GSA confirms that Robertson will not continue in his posts at the Office of Governmentwide Policy or as Chief Acquisition Officer. Johnson is working with the White House on candidates for those posts.
Johnson is looking at how to build GSA's next-generation acquisition team given some key vacancies. Jim Williams retired as the commissioner of GSA's Federal Acquisition Service last month… and David Drabkin retired from his post as deputy chief acquisition officer. Johnson is known to consider these vacancies an opportunity to build a 2.0 version of GSA's acquisition organization and has been carefully considering a number of options.
Robertson worked on the staff of then Sen. Barack Obama, worked on the Obama presidential campaign, and joined GSA soon after the transition.
Federal News Radio 1500 AM's Daily Debrief with Chris Dorobek and Amy Morris had the first interview with Robertson when he started at GSA. Read more here.
Michael J. Robertson has been appointed by the White House as Chief of Staff for the U.S. General Services Administration effective May 3, 2010.
In this role, he will serve as an advisor to the Administrator with particular emphasis on furthering the Obama Administration's agenda at GSA. He will work with client agencies and the White House to ensure that GSA finds creative and collaborative ways to be a leader in sustainability, open government, recovery, and responsible acquisitions.
Since August 2009, Michael served as Associate Administrator of Governmentwide Policy and Chief Acquisition Officer for GSA. As head of the Office of Governmentwide Policy, Robertson worked to develop and evaluate policies for management of the federal government's internal operations. In addition, as Chief Acquisition Officer, he has been responsible for developing and reviewing acquisition policies, procedures, and related training for GSA and federal acquisition professionals. He also served as the functional manager of GSA's acquisition workforce.
Michael began his service with GSA in early 2009 when he was appointed as White House Liaison.
Before coming to GSA, Robertson served as the deputy working group lead for the Energy and Environment Agency Review Team on the Obama-Biden Transition Project. Immediately prior to that, he served the Obama for America presidential campaign as the primary point person for securing endorsements and superdelegate support from House and Senate members.
In early 2007, Robertson served as then-Senator Barack Obama's Legislative Coordinator and deputy to the Chief Counsel where he managed the appropriations process, worked on judicial nominations, and conducted political outreach to promote Obama's legislative priorities. In 2004, he worked in Chicago on Obama's Senate campaign. Before entering the political field, Robertson worked in venture capital in San Francisco.
A native of Fresno, California, Robertson graduated with a Bachelor of Arts from the University of California at Berkeley and earned his Juris Doctor from Golden Gate University School of Law. He is currently pursuing a Master of Laws at Georgetown University's Law Center in Washington, DC.
Big PvE Update for HEX: Shards of Fate
January 28, 2016 16:58 ( F2P News ) 0
HEX: Shards of Fate launches Chronicles of Entrath today, a massive PvE update that introduces a single-player campaign for those with less appetite for multiplayer action. Watch the trailer below for a preview.
The update provides players with a solo campaign where they will get to choose a race and a class for their character and then navigate through the new world map, completing quests and dungeons. It's a classic RPG experience in which players will earn equipment, PvE cards and talent points to unlock further skills and class bonuses.
Eight races are available, each with unique bonuses, and they can be combined with three classes: Mage, Warrior and Cleric. Depending on the player's choice of race, a unique storyline unfolds around the character. Additionally, there are dialogue options and side-quests available that have a direct impact on the events in the game.
After creating their characters, players receive a starter deck matching their chosen race. The decks of the individual classes differ from one another, and the talent trees unlock class talents which are available independent of race and can be improved once characters level up. Progressing in the campaign will allow more cards to be unlocked.
If you want to know more about HEX check out our profile by clicking on the "info" button below.
Source: Gameforge press release.
Alexej Mikulášek (born 2 December 1962 in Brno) is a Czech writer, literary historian, and teacher of Czech language and literature.
Life
He graduated from the Faculty of Education of Masaryk University in Brno (field: Czech language and literature – civics) and completed an internal research doctorate at the Faculty of Arts of Charles University in Prague (field: literary studies).
In the 1990s he worked as an archivist at the J. A. Komenský Pedagogical Museum in Prague and as a teacher and lecturer of creative-writing courses, which he remains to this day. Since last year he has been an external member of the Department of Czech Language and Literature at the Faculty of Education of the University of South Bohemia. In literary history, his interests centre on Czech-Jewish-German and Czech-Jewish-Slavic literary relations and contexts (for more than ten years he was the editor-in-chief of the dictionary Literatura s hvězdou Davidovou); in editing, on the problems of creating and redacting texts; and in literary criticism, above all on contemporary Czech literature and the interpretation of texts connected with it (especially with regard to its use in educational and reading practice; the book "Interpretační etudy" is devoted to this problem).
Since its founding in 1996 he has been a member of the editorial board of the literary and cultural weekly Obrys-Kmen (published as a supplement to Haló noviny), and since 2007 of Dotyky, a magazine for young Slovak literature. He is a member and the secretary of the Union of Czech Writers in Prague (since its founding; he is represented in its anthology Na druhém břehu. Říčany, Orego 2002). He is also an honorary member of the Association of Slovak Writers in Bratislava.
In 2011 he received the Union of Czech Writers Prize for his literary-critical work.
He teaches Czech language and literature at the secondary vocational school in Prague 5 on Drtinova Street, after which the school is named. The school specialises in teaching law.
Works
editor-in-chief and co-author:
Remaining in manuscript:
Mravoučná literatura jako literární fakt. Se zřetelem k formování obrozenské literatury pro děti a mládež (dissertation, not yet defended).
Jaromíra Kolárová (monograph).
Zrcadla zrcadel (a selection of critical essays).
Dalimil & Dobromil (a selection of critical glosses, reviews, polemics, and short literary journalism).
Compiled:
An anthology
A selection of F. Nepil's writings, Dětem. Praha: Start, 1997.
As an editor he contributed to:
Sborník společnosti Aloise Jiráska I.–IV. Praha: SAJ
Sborník soutěžních prací Kouzelný klíč III. Praha: Psychiatrická léčebna v Bohnicích, 2008.
External links
slaviste.cz
Sleep endoscopy is also known as sleep nasoendoscopy. It is a drug-induced sleep procedure that acts as a powerful tool for studying a patient's dynamic airway while he is asleep. The information gained from sleep endoscopy helps the surgeon tailor the operation, since the pattern of obstruction can differ from patient to patient. Sleep endoscopy is the key to addressing all levels of obstruction; there can be multiple obstructions at the mouth, nose and tongue levels. Sleep endoscopy is also suited for patients who have severe snoring and desire surgical options.
What are the indications that show that a person would require sleep endoscopy?
In some patients it is difficult to determine the anatomic area responsible for the obstruction while the person is awake. The symptoms are absent when the person is awake, and an examination during this state would not reveal much information. The procedure becomes very useful in such cases. Different surgical techniques exist, and sleep endoscopy reveals which surgical procedure to follow.
What are some of the important components in the procedure?
The process of endoscopy involves the introduction of a flexible and thin camera through the nose to examine the upper airway. The entire structure from the tip of the nose to the voice box can be diagnosed.
Sedated, or sleep, endoscopy involves putting the patient to sleep. This is unlike the traditional method of performing the procedure while the person is awake.
Most patients would wake up immediately if the doctor tried to insert something through the nose, and it makes no sense for a doctor to wait for the patient to fall asleep on their own. Instead, the patient is sedated by an anaesthesiologist with the help of certain medications.
What are some of the areas that are examined during the process?
Specific areas are examined during sleep endoscopy. These include the uvula, the epiglottis, the walls of the throat, the voice box, the back of the tongue and the area behind the palate. One or more of these areas might be collapsing, causing the obstruction and the snoring.
How does the sleep endoscopy proceed?
Before the proper evaluation, the person goes through a basic evaluation. A baseline endoscopic evaluation of the airway is also performed while the person is awake. Sedated or sleep endoscopy is not required for all patients. Depending on the person's obesity, the procedure might even need to be performed in the operating room. If it becomes apparent that sleep endoscopy is necessary, the procedure is scheduled for a different day.
The person is advised to eat and drink nothing after midnight on the day of the procedure. This restriction minimises any chance of vomiting during the procedure.
Before the procedure, the anaesthesiologist administers an injection that induces general anaesthesia, putting you to sleep. The endoscopy is performed while you are asleep. The procedure generally takes no more than 15 minutes, and the person is observed for about 30 minutes before being discharged.
Thus, sleep endoscopy can uncover a great deal of information about the airway during sleep, which helps the surgeon decide on the procedures to be performed.
Has the E-Shram ₹1000 arrived in your account too? Here is how to check online from home.
e shram bhatta 2022: Under the E-Shram Yojana, all registered workers will be given a financial benefit of ₹1000. The state governments are working on this at their respective levels, and the government has started a programme to send the allowance amount under this scheme. According to the Labour Department, up to 90% of workers have already been registered; the remaining people should register for the e shram card 2022 as soon as possible.
According to information received from the Labour Department, workers who completed their e shram card 2022 registration before 31 December 2021 have started receiving the financial assistance. This money is being given by the state government to workers in view of the coronavirus and the unemployment situation.
The Yogi government is sending ₹1000 each to workers
Ahead of the assembly elections, the Uttar Pradesh government started the process of transferring ₹1000 into the accounts of labourers working in the unorganised sector. This money is being given by the UP government as a maintenance allowance to workers registered on the E-Shram Portal. In the first phase, the government has sent ₹1000 each to the accounts of about 1.5 crore workers.
At the same time, the process of sending money to the accounts of the remaining labourers is ongoing. According to reports, in the second phase the government will soon transfer ₹1000 each into the accounts of about 2.31 crore workers.
Workers registered for the e shram card 2022 will get ₹2000
By March, workers will receive a total of ₹2000. The benefit of the E-Shram Yojana is to be given to workers from December to March, i.e. ₹2000 as maintenance allowance over a total of 4 months. Farmers and labourers will get this money at ₹500 per month; two installments of ₹1000 have already been sent to workers' accounts, and the remaining workers are expected to get this money soon. There are many other benefits of getting an E-Shram Card made: the UP government has shown the first benefit by giving a maintenance allowance, and similarly the Bihar government, in view of the coronavirus, is also giving a maintenance allowance, which only workers registered on the labour portal are receiving.
Who will get the maintenance allowance of ₹ 2000?
As for the ₹2000 allowance, this money is given by the state government as e shram bhatta 2022, and it is being given only to workers who are registered on the E-Shram Portal and have received an e-shram card. If you completed your registration before 31 December 2021, you will get the first installment of ₹1000; if you have not registered yourself yet, register on the labour portal as soon as possible.
How to register on the eshram card 2022 portal: To get an E-Shram Card, you can register either online or offline. You can register online by visiting the official website, or offline by visiting your nearest Common Service Centre. Camps are also being set up by Common Service Centre operators, through which you can also get your eshram card 2022 made.
e shram card 2022 Payment Check Process
If you want to know whether the e shram yojana installment amount has been received or not, you can easily check it.
1. Through UPI: If you use UPI on your phone, you can check your bank account balance through UPI; if the money has arrived in your bank, you will see an increase in your account balance.
2. Bank App or Net Banking: If you use the net banking provided by your bank, you can also check your bank account statement and account balance through it.
3. SMS Banking: If your mobile number is registered with your bank and the SMS banking facility is available on it, you can also get balance inquiry and mini statement information by sending an SMS to your bank's registered number.
4. PFMS Portal: You can also find out whether the money has been sent through the PFMS portal. Information about every single transaction made under DBT is available on the PFMS Portal and can be checked online; learn the PFMS Portal balance check process by clicking here. ↗️
FAQ: e shram card 2022, ₹2000 installment
Q 1. Will all workers get the benefit of e shram bhatta 2022?
At present, this benefit is being given by the Government of Uttar Pradesh and the Government of Bihar; in future, other state governments may also take similar steps.
Q 2. How much amount will we get as e shram bhatta 2022?
Different amounts are being given as e shram bhatta 2022 by different state governments. In the case of UP, the government has decided to give an allowance of ₹2000 over 4 months, paid at ₹500 per month.
Q 3. Does anything need to be applied for to get the labour card ₹1000 allowance?
No. If you have already registered for the e shram card 2022 and have received your e shram card 2022, you do not need to reapply to get the e shram bhatta 2022.
Note: In today's article we have given almost all the information related to e shram bhatta 2022. If you still want to ask something, you can do so through the comments.
Please note: Information about new and old government schemes launched by the central government and state governments will be published first on this website, sarkariyojnaa.com, so do not forget to follow our website.
If you liked this article, please like and share it.
Thanks for reading this article till the end…
Posted by Amar Gupta
\section{The Stellar Initial Mass Function in Clusters}
\label{sec:1}
Many recent works have attempted to constrain the stellar initial mass function (IMF) inside massive clusters by comparing their dynamical mass estimates (found through measuring the velocity dispersion and effective radius) to the measured light. These studies have come to different conclusions, with some claiming standard Kroupa-type \cite{kroupa} IMFs (e.g. \cite{maraston}, \cite{larsen06}) while others have claimed extreme non-standard IMFs (e.g. the top or bottom of the IMF is over-populated with respect to a Kroupa IMF \cite{smith}). However, the results appear to be correlated with the age of the clusters, as older clusters ($>$80~Myr) all appear to be well fit by a Kroupa-type IMF whereas younger clusters display significant scatter in their best fitting IMF \cite{bastian06a}. This has led to the suggestion that the younger clusters are out of Virial equilibrium, thus undercutting the fundamental assumption which is necessary to derive dynamical masses. We will return to this point in \S~\ref{sec:2} and \S~\ref{sec:3}. Focusing on the older clusters, we see that they all have standard IMFs (see Fig~2), arguing that at least in massive clusters the IMF does not vary significantly.
\begin{figure}
\includegraphics[height=8cm]{fig1.ps}
\caption{{\bf Taken from \cite{bastian06b}:} Surface brightness profiles for three young clusters (left - M82-F, NGC~1569-A, and NGC~1705-1) and two N-body simulations which include the rapid removal of gas which was left over from a non-100\% star-formation efficiency (right). The solid (red) and dashed (blue) lines are the best fitting EFF~\cite{eff} and King~\cite{king} profiles respectively. Note the excess of light at large radii with respect to the best fitting EFF profile in both the observations and models. This excess light is due to an unbound expanding halo of stars caused by the rapid ejection of the remaining gas after the cluster forms. {\it Hence, excess light at large radii strongly implies that these clusters are not in dynamical equilibrium.} For details of the modelling and observations see \cite{bastian06b,goodwin}.}
\label{fig:2}
\end{figure}
\begin{figure}
\includegraphics[height=9cm]{fig2.ps}
\caption{{\bf Taken from \cite{goodwin}:} The light-to-mass ratio of young clusters. The circles (blue and red) are taken from \cite{bastian06a} and \cite{maraston} and references therein, the triangles with errors (green) are LMC clusters \cite{mclaughlin}, the upside down triangle (brown) is for NGC~6946-1447 corrected for internal extinction \cite{larsen06}, and the squares (cyan) are from \cite{ostlin}. The arrow extending from M82F \cite{smith} is a possible correction to its age (see \cite{bastian06a}). The triangle without errors is the tentative upper limit for cluster R136 in 30~Dor \cite{bosch,hunter}. The solid (black) line is the prediction of simple stellar population models (SSPs) with a Kroupa \cite{kroupa} stellar IMF. The red lines are the SSP model tracks folded with the effects of rapid gas removal following non-100\% star-formation efficiencies (SFE) \cite{bastian06b}. Dashed lines represent the SFEs where the clusters will become completely unbound. The SFE in the simulations
measures the degree to which the cluster is out of virial equilibrium
after gas loss, and so is an {\em effective} SFE (see \cite{bastian06b,goodwin}).}
\label{fig:1}
\end{figure}
\section{Dynamical Equilibrium of Young Clusters}
\label{sec:2}
One explanation of why the youngest clusters are not in dynamical equilibrium is that young clusters are expected to expel their remaining gas (left over from the star-formation process) on extremely rapid timescales, which will leave the cluster severely out of equilibrium (e.g.~\cite{goodwin97a}). In order to search for such an effect we compared the luminosity profiles of three young clusters with that of N-body simulations of clusters which are undergoing violent relaxation due to rapid gas loss \cite{bastian06b}. The simulations (Fig~1, right panel) make the generic prediction of excess light at large radii (with respect to the best fitting EFF profile \cite{eff}), due to an unbound expanding halo of stars which stays associated with the cluster for $\sim20-50$~Myr. These stars are unbound due to the rapid decrease of potential energy as the gas is removed on timescales shorter than a crossing time (e.g.~\cite{goodwin97a}). Observations of the three young clusters also show excess light at large radii (Fig.~1, left panel), strongly suggesting that they are experiencing violent relaxation \cite{bastian06b}. Hence these clusters are not in dynamical equilibrium.
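For reference, the EFF profile used in these fits (quoting the standard parameterization of \cite{eff}; the symbols here are generic rather than tied to a particular fit) is
\begin{equation}
\mu(r) = \mu_0 \left( 1 + \frac{r^2}{a^2} \right)^{-\gamma/2},
\end{equation}
where $\mu_0$ is the central surface brightness, $a$ a scale radius, and $\gamma$ the asymptotic power-law slope. The excess light discussed above manifests as a systematic positive deviation from this form at large radii.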
\section{The Star Formation Efficiency and Infant Mortality}
\label{sec:3}
Assuming that young clusters are out of equilibrium due to rapid gas loss (the extent of which is determined by the star-formation efficiency, SFE), one can fold these effects (see Fig.~3 in \cite{bastian06b}) into SSP models \cite{goodwin}. The results are shown as solid and dashed red lines in Fig.~2 for various SFEs, where we have assumed all gas is lost instantaneously at 2~Myr. The dashed lines show the results for SFEs below 30\% for which the cluster will become completely unbound. Solid lines represent SFEs above 30\% where a bound core may remain. Note that the observed SFEs of the clusters range from 10-60\% \cite{goodwin}.
We also note that 7 out of the 12 clusters with ages below 20~Myr appear unbound (i.e. SFE~$<$~30\%), suggesting that $\sim60$\% of clusters will become unbound in the first 20-50~Myr of their lives \cite{goodwin}, i.e.~what has been termed ``infant mortality''. This is in close agreement with cluster population studies of M51 which found an infant mortality rate of 68\% \cite{bastian05} and comparable to the open cluster dispersal rate of $\sim87$\% \cite{lada} (see also \cite{whitmore03}).
\section{Conclusions}
Through detailed comparisons of the luminosity profiles of young clusters with N-body simulations of clusters including the effects of rapid gas loss, we argue that young clusters are not in Virial equilibrium. This undercuts the fundamental assumption needed to determine dynamical masses. This suggests that the claimed IMF variations are probably due to the internal dynamics of the clusters and not related to the IMF. By limiting the sample to the oldest clusters (which appear to be in equilibrium) we see that they are all well fit by a Kroupa-type IMF arguing that, at least in massive star clusters, the IMF does not vary significantly.
By combining the above N-body simulations with SSP models we can derive the (effective) SFE of clusters. From this we find that $\sim60$\% of young clusters appear to be unbound, in good agreement with other estimates of the infant mortality rate. Note however that even if a cluster survives this phase it may not survive indefinitely due to internal and external effects (e.g.~\cite{gieles}).
\begin{acknowledgement}
NB gratefully thanks his collaborators Roberto Saglia, Paul Goudfrooij, Markus Kissler-Patig, Claudia Maraston, Francois Schweizer, and Manuela Zoccali on dynamical mass studies.
\end{acknowledgement}
\input{referenctalk}
\printindex
\end{document}
DERIVATIONS OF THE ODD CONTACT LIE ALGEBRAS IN PRIME CHARACTERISTIC
Cao, Yan; Sun, Xiumei; Yuan, Jixia

Abstract: The underlying field is of characteristic $p > 2$. In this paper, we first use the method of computing the homogeneous derivations to determine the first cohomology of the so-called odd contact Lie algebra with coefficients in the even part of the generalized Witt Lie superalgebra. In particular, we give a generating set for the Lie algebra under consideration. Finally, as an application, the derivation algebra and outer derivation algebra of the Lie algebra are completely determined.

Keywords: Lie superalgebra; derivation; first cohomology

Language: English
\section{Introduction}
Several important problems in the sciences are concerned with the activity of interacting sources. Two prominent examples are the dynamics of the brain \cite{sporns2010networks}, where information processing manifests as a spatiotemporal pattern of regional activations constrained by anatomical and functional connectivity, and financial markets \cite{tsay2005analysis}, where values of assets evolve in concert with the decisions of agents. In complex systems such as these, it is critical to infer the underlying structure governing system evolution, and relatedly, to forecast future outcomes.
Granger Causality \cite{granger1969investigating} is a popular technique for measuring a form of dependence rooted in the temporal precedence effect. Time series $x(t)$ is said to cause, in a Granger sense, time series $y(t)$ if the past of $x$ improves the prediction of the present value of $y$ above that of its own past.
Originating in economics\cite{granger2001essays}, Granger Causality has since found extensive utilization in neuroscience \cite{ding200617,seth2015granger}, where it has been applied to recordings of brain activity captured at various spatial and temporal scales to illuminate neural circuits \cite{ding200617,bernasconi1999directionality,kaminski2001evaluating,goebel2003investigating,sheikhattar2018extracting,vicente2011transfer}. Perhaps driven by the ubiquitous interest in causal interactions, the technique has been adopted by many disparate fields, including ecology \cite{sugihara2012detecting},
computational biology \cite{finkle2018windowed},
and epidemiology \cite{eichler2010granger,kleinberg2011review}. The utility of Granger Causality has been aided by several extensions and reformulations of the original technique, most notably a frequency-domain formulation \cite{geweke1982measurement,geweke1984measures} and a generalization to multivariate time series \cite{barrett2010multivariate,barnett2014mvgc}. Moreover, several approaches to capturing non-linear causal interactions between multiple time series have been proposed \cite{hiemstra1994testing,ancona2004radial,marinazzo2008kernel,tank2018neural}.
Conventionally, these different variants of Granger Causality are measured between observed signals that are selected \emph{a priori}. In other words, one must specify the identity of the signals being probed, and the hypothesized direction of causality. Moreover, this approach implicitly assumes that the underlying causal relationships exist in the native space defined by the observations (e.g. the sensors).
The central idea proposed here is that, in many systems, the true causal relations are embedded in a \emph{latent} source space, and that these latent sources enter the observations via an unknown linear mixture. Due to the mixing process, direct application of tools such as Granger Causality to the observed data may not optimally reveal the dynamics of the system. Rather, the approach taken here to identify the latent causal sources is to project the observations into a component space that maximizes the Granger Causality among \emph{pairs} of time series: one signal models the ``driving'' source, and the other captures the source being ``driven''. It is shown that this can be formulated as a non-convex optimization problem with closed-form expressions for the objective function and gradient. Importantly, the optimization does not require access to the mixing process and thus constitutes blind identification.
To solve the optimization problem, a simple coordinate descent algorithm that is implemented with standard numerical packages is presented. By simulating a vector autoregressive (VAR) system with known structure, it is demonstrated that the proposed technique indeed identifies the underlying sources, their connections, and the mixing process. To evaluate the proposed approach on real-world systems, the technique is then applied to data from the human brain and the cryptocurrency market. In both cases, it is shown that the proposed technique recovers multiple pairs of signals whose causal strength is significantly greater than what is found in the observed data.
The distinctions between Granger and true physical causality have been previously described \cite{maziarz2015review,grassmann2020new}. In what follows, the terms ``causal'' and ``causality'' are employed for conciseness with the understanding that the findings presented here pertain to the Granger form of causality.
\section*{Results}
\subsection*{Motivating example}
Consider a simple system with two connected sources, $s_1$ and $s_2$, where source 1 ``Granger causes'' source 2, denoted here by $s_1 \rightarrow s_2$. In the neuroscience context, $s_1$ may represent the mass synaptic activity at a brain region, and $s_2$ the activity of a downstream region to which $s_1$ projects. Due to signal mixing (e.g. volume conduction), the observed signals are modeled as a linear mixture of the two sources:
\begin{eqnarray}
\label{eqn:ill}
\left( \begin{array}{cc} x_1(t) \\ x_2 (t) \end{array} \right) =
\left( \begin{array}{cc} A_{11} & A_{12} \\ A_{21} & A_{22} \end{array} \right) \left( \begin{array}{cc} s_1(t) \\ s_2 (t) \end{array} \right),
\end{eqnarray}
where the 2-by-2 mixing matrix is assumed to be invertible and where sensor noise has been omitted for the sake of this illustrative example. Note that $s_1$ and $s_2$ are mixed together in the captured signals, potentially confounding the measure of Granger Causality between $x_1$ and $x_2$. Given only the observations, the goal is to identify the ``driving'' signal $y(t)\approx s_1(t)$ and the ``driven'' signal $z(t)\approx s_2(t)$. Writing (\ref{eqn:ill}) in matrix notation as $\vec{x}(t) = \vec{A} \vec{s} (t)$, $s_1$ is exactly recovered if $y(t)={\vec{w}^{*}}^T \vec{x}(t)$, where $\vec{w}^{\ast}$ is a column vector whose elements are the first row of $\vec{A}^{-1}$ and $^T$ is the transpose operation. Similarly, $s_2$ is recovered as $z(t)= {\vec{v}^{*}}^T \vec{x}(t)$, where $\vec{v}^{\ast}$ is a column vector whose elements are the second row of $\vec{A}^{-1}$. The projection vectors ${\vec{w}^{*}}$ and ${\vec{v}^{*}}$ undo the mixing process by combining the observed signals to form latent components that approximate the underlying sources. The problem considered here is whether it is possible to recover $s_1$ and $s_2$ \emph{without} access to the mixing process $\vec{A}$. Below, a novel criterion for blind source separation that maximizes the Granger Causality between pairs of component signals is proposed.
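The oracle unmixing described above can be made concrete with a short numerical sketch (Python is used here purely for illustration; the mixing matrix, coupling, and signal length are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
s1 = rng.standard_normal(T)                          # driving source
s2 = np.roll(s1, 1) + 0.1 * rng.standard_normal(T)   # driven source (circular one-sample lag, for simplicity)
S = np.vstack([s1, s2])

# Unknown invertible mixing (hypothetical values, for illustration only)
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])
X = A @ S                          # observations x1(t), x2(t)

# Oracle unmixing: the rows of inv(A) recover the sources exactly
Ainv = np.linalg.inv(A)
y = Ainv[0] @ X                    # y(t) = w*^T x(t) = s1(t)
z = Ainv[1] @ X                    # z(t) = v*^T x(t) = s2(t)
```

Without knowledge of $\vec{A}$, the rows of $\vec{A}^{-1}$ are of course unavailable; the criterion developed below seeks projections that play the roles of $\vec{w}^{*}$ and $\vec{v}^{*}$ blindly.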
\subsection*{Maximizing latent Granger Causality}
Given an observable, centered random process ${\bf x}(t) \in \mathbb{R}^D$, the goal is to identify latent variables $y(t)={\bf w}^T {\bf x}(t)$ and $z(t)={ \bf v}^T {\bf x}(t)$ such that the Granger Causality from $y$ to $z$ is maximized. Namely, it is desired to solve the following optimization problem:
\begin{equation}
\label{eqn:GCAopt}
\max_{{\bf w} , {\bf v}} \mathcal{G}_{y \rightarrow z}
\end{equation}
where
\begin{equation}
\label{eqn:GCdef}
\mathcal{G}_{y \rightarrow z} = 1 - \frac{ E \{ \epsilon_f^2 \} }{E \{ \epsilon_r^2 \}}
\end{equation}
is termed the ``strength of causality'' \cite{granger1969investigating} from $y$ to $z$, $\epsilon_f$ is the residual of a linear regression predicting $z$ from the history of both $z$ \emph{and} $y$ (i.e., the ``full'' model), and $\epsilon_r$ is the residual when regressing $z$ onto only its past (the ``reduced'' model). $\mathcal{G}_{y \rightarrow z}$ is bounded between 0 and 1, with $\mathcal{G}_{y \rightarrow z}=0$ indicating that $y$ does not aid in the prediction of $z$, and $\mathcal{G}_{y \rightarrow z}=1$ denoting a zero-error estimate of $z$ from the past of itself and $y$. The optimization in (\ref{eqn:GCAopt}) is aimed at identifying two projection vectors, $\vec{w}\in \mathbb{R}^D$ and $\vec{v}\in \mathbb{R}^D$, such that the resulting pair of latent variables maximizes the strength of causality (\ref{eqn:GCdef}).
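The quantity in (\ref{eqn:GCdef}) can be estimated from finite data by fitting the full and reduced regressions with least squares. A Python sketch follows (the toy signals and coefficients are illustrative, not taken from the experiments below):

```python
import numpy as np

def lagged(x, L):
    """Temporal aperture: column k holds x(t-k) for k = 1, ..., L."""
    T = len(x)
    return np.column_stack([x[L - k:T - k] for k in range(1, L + 1)])

def causality_strength(y, z, L=3):
    """Sample estimate of Eq. (2): 1 - E{eps_f^2}/E{eps_r^2}, where the
    full model predicts z(t) from L lags of z and y, and the reduced
    model uses only the lags of z."""
    target = z[L:]
    Zp, Yp = lagged(z, L), lagged(y, L)
    def mse(P):
        beta, *_ = np.linalg.lstsq(P, target, rcond=None)
        return np.mean((target - P @ beta) ** 2)
    return 1.0 - mse(np.column_stack([Zp, Yp])) / mse(Zp)

# Toy pair with s1 -> s2 (hypothetical coupling coefficients)
rng = np.random.default_rng(0)
T = 2000
s1 = rng.standard_normal(T)
s2 = np.zeros(T)
for t in range(1, T):
    s2[t] = 0.5 * s2[t - 1] + 0.8 * s1[t - 1] + 0.1 * rng.standard_normal()

g12 = causality_strength(s1, s2)   # large: s1 Granger-causes s2
g21 = causality_strength(s2, s1)   # near zero: no reverse causation
```

Because the full model nests the reduced model, the estimate is non-negative by construction.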
In what follows, a history of $L$ samples is assumed, and the temporal apertures of $y$ and $z$ are defined as:
\begin{eqnarray*}
\vec{y}_p &=& \left( \begin{array}{ccc} y(t-1) & \ldots & y(t-L) \end{array} \right)^T \\
\vec{z}_p &=& \left( \begin{array}{ccc} z(t-1) & \ldots & z(t-L) \end{array} \right)^T.
\end{eqnarray*}
To arrive at a form of (\ref{eqn:GCdef}) that can be optimized using gradient-based techniques, note that the minimum mean squared errors (MMSE) of the full and reduced models are given by \cite{wiener1964extrapolation}:
\begin{eqnarray}
\label{eqn:Phif_Phir}
&&\Phi_f = E \{ \epsilon_f^2 \} = \sigma_z^2 - \vec{r}^T \vec{R}^{-1}\vec{r} \nonumber \\
&&\Phi_r = E \{ \epsilon_r^2 \} =\sigma_z^2 - \vec{q}^T \vec{Q}^{-1}\vec{q},
\end{eqnarray}
where $\sigma_z^2 = E \left\{ z^2(t) \right\}$ is the mean power of $z$, $E \left\{ \cdot \right\}$ denotes mathematical expectation,
\begin{eqnarray*}
\vec{r} = E \left\{ z(t) \left( \begin{array}{c} \vec{z}_p(t) \\ \vec{y}_p(t) \end{array} \right) \right\}
~~~~~~~
\vec{q}=E \left\{ z(t) \vec{z}_p(t) \right\}
\end{eqnarray*}
are $2L$ and $L$ dimensional covariance vectors between $z$ and the temporal apertures of the full and reduced models, respectively, and where
\begin{eqnarray*}
\vec{R} = E \left\{ \left( \begin{array}{c} \vec{z}_p(t) \\ \vec{y}_p(t) \end{array} \right) \left( \begin{array}{c} \vec{z}_p(t) \\ \vec{y}_p(t) \end{array} \right)^T \right\}
~~~
\vec{Q} = E \left\{ \vec{z}_p(t) \vec{z}^T_p(t) \right\}
\end{eqnarray*}
are $2L$-by-$2L$ and $L$-by-$L$ covariance matrices of the predictors in the full and reduced models, respectively. Importantly, $\sigma_z^2=\vec{v}^T \vec{\Sigma}(0) \vec{v}$, while $\vec{r}$, $\vec{q}$, $\vec{R}$, and $\vec{Q}$ can each be expressed in terms of the projection vectors $\vec{w}$ and $\vec{v}$ and the spatiotemporal statistics of the observations (see \emph{Supplementary Note 1}):
\begin{eqnarray}
\label{eqn:qrRq_vw}
&& \vec{r}= \left( \vec{I}_{2L} \otimes \vec{v}^T \right) \left( \vec{I}_2 \otimes \vec{\Sigma}_{1:L} \right) \left( \begin{array}{c} \vec{1}_L \otimes \vec{v} \\ \vec{1}_L \otimes \vec{w} \end{array} \right)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \vec{q}= \left( \vec{I}_L \otimes \vec{v}^T \right) \vec{\Sigma}_{1:L} \left( \vec{1}_L \otimes \vec{v} \right) \nonumber \\
&&\vec{R}=\left( \begin{array}{c} \vec{1}_2^T \otimes \vec{I}_L \otimes \vec{v}^T \\ \vec{1}_2^T \otimes \vec{I}_L \otimes \vec{w}^T \end{array} \right) \left( \vec{I}_2 \otimes \tilde{\vec{\Sigma}} \right) \left( \begin{array}{cc} \vec{I}_L \otimes \vec{v} & \vec{0} \\ \vec{0} & \vec{I}_L \otimes \vec{w} \end{array} \right)
~~~~~~~ \vec{Q}= \left( \vec{I}_L \otimes \vec{v} \right)^T \tilde{\vec{\Sigma}} \left( \vec{I}_L \otimes \vec{v} \right),
\end{eqnarray}
where
\begin{eqnarray*}
\vec{\Sigma}_{1:L} = \left( \begin{array}{cccc} \vec{\Sigma}(1) & \vec{0} & \ldots & \vec{0} \\
\vec{0} & \vec{\Sigma}(2) & \ldots & \vec{0} \\
\vdots & \vec{0} & \ddots & \vdots \\
\vec{0} & \ldots & \ldots & \vec{\Sigma}(L) \\
\end{array} \right)
\end{eqnarray*}
is an $LD$-by-$LD$ block covariance matrix where $\vec{\Sigma}(\tau)= E \left\{ \vec{x}(t) \vec{x}^T(t-\tau) \right\}$ is the lagged covariance of the observations,
\begin{eqnarray*}
\tilde{\vec{\Sigma}} = \left( \begin{array}{cccc} \vec{\Sigma}(0) & \vec{\Sigma}(-1) & \ldots & \vec{\Sigma}(-L+1) \\
\vec{\Sigma}(1) & \vec{\Sigma}(0) & \ldots & \vec{\Sigma}(-L+2)\\
\vdots & \vec{\Sigma}(1) & \ddots & \vdots \\
\vec{\Sigma}(L-1) & \ldots & \ldots & \vec{\Sigma}(0) \\
\end{array} \right)
\end{eqnarray*}
is an $LD$-by-$LD$ block Toeplitz matrix, $\otimes$ denotes the Kronecker product, $\vec{1}_K$ is a column vector of $K$ ones, and $\vec{I}_K$ is the $K$-by-$K$ identity matrix. Substituting (\ref{eqn:qrRq_vw}) into (\ref{eqn:Phif_Phir}) and the resulting expressions into (\ref{eqn:GCdef}), one arrives at the following expression for the strength of causality between latent sources $y$ and $z$:
\scriptsize{
\begin{align}
{}& \mathcal{G}_{y \rightarrow z} = 1 - \nonumber \\
{}& \frac{\vec{v}^T \vec{\Sigma}(0) \vec{v} - \rvecT \left[ \Rvec \right]^{-1} \left( \vec{I}_{2L} \otimes \vec{v}^T \right) \left( \vec{I}_2 \otimes \vec{\Sigma}_{1:L} \right) \left( \begin{array}{@{}c@{}} \vec{1}_L \otimes \vec{v} \\ \vec{1}_L \otimes \vec{w} \end{array} \right) }{\vec{v}^T \vec{\Sigma}(0) \vec{v} - \qvecT \left[ \Qvec \right]^{-1} \qvec} \label{eqn:GCdef_vw}
\end{align}
}
\normalsize
The gradient of (\ref{eqn:GCdef_vw}) has a closed-form that is derived in \emph{Supplementary Note 2}. Conventional optimization tools may then be employed to learn projection vectors $\vec{w}^{\ast}$ and $\vec{v}^{\ast}$ that maximize the Granger Causality between resulting latent signals $y(t)={\vec{w}^{\ast}}^T \vec{x}(t)$ and $z(t)={\vec{v}^{\ast}}^T \vec{x}(t)$.
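The building blocks of (\ref{eqn:GCdef_vw}) are the lagged covariances of the observations. The following Python sketch estimates $\vec{\Sigma}(\tau)$ from centered data and assembles the block matrices $\vec{\Sigma}_{1:L}$ and $\tilde{\vec{\Sigma}}$ (the sample estimators shown are an implementation choice, not specified in the text):

```python
import numpy as np

def lagged_cov(X, tau):
    """Sample estimate of Sigma(tau) = E{x(t) x(t-tau)^T} for centered D x T data."""
    T = X.shape[1]
    if tau < 0:
        return lagged_cov(X, -tau).T        # Sigma(-tau) = Sigma(tau)^T
    return X[:, tau:] @ X[:, :T - tau].T / (T - tau)

def block_matrices(X, L):
    """Assemble Sigma_{1:L} (block diagonal) and Sigma-tilde (block Toeplitz)."""
    D = X.shape[0]
    Sig = {tau: lagged_cov(X, tau) for tau in range(-(L - 1), L + 1)}
    S1L = np.zeros((L * D, L * D))
    St = np.zeros((L * D, L * D))
    for i in range(L):
        S1L[i*D:(i+1)*D, i*D:(i+1)*D] = Sig[i + 1]     # blocks Sigma(1), ..., Sigma(L)
        for j in range(L):
            St[i*D:(i+1)*D, j*D:(j+1)*D] = Sig[i - j]  # (i,j) block is Sigma(i-j)
    return S1L, St

# Demonstration on random centered observations
rng = np.random.default_rng(4)
Xc = rng.standard_normal((3, 400))
Xc -= Xc.mean(axis=1, keepdims=True)
S1L, St = block_matrices(Xc, L=3)           # both 9-by-9 for D=3, L=3
```

Note that $\tilde{\vec{\Sigma}}$ is symmetric as a matrix, since its $(i,j)$ block is $\vec{\Sigma}(i-j)$ and $\vec{\Sigma}(-\tau)=\vec{\Sigma}(\tau)^T$.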
\subsection*{Identifying latent causal structure}
The objective in (\ref{eqn:GCdef_vw}) is non-convex, since $ \mathcal{G}_{y \rightarrow z} \left( \vec{w}, \vec{v} \right) = \mathcal{G}_{y \rightarrow z} \left( a \vec{w}, b \vec{v} \right)$ for arbitrary real scalars $a$ and $b$. This follows from the fact that the residual error when predicting $z$ from $y$ is equivalent to that when predicting $bz$ from $ay$ -- any scaling factors will be accommodated by the temporal filter that predicts the driven signal from the driving signal. Thus, the technique presented here is only able to identify the latent sources up to a scaling factor. As with other blind source separation techniques such as independent components analysis (ICA) \cite{comon1994independent,hyvarinen2000independent}, it is not possible to recover the scale or sign of the underlying sources.
Another potential ambiguity when optimizing (\ref{eqn:GCdef_vw}) is related to a known property of multivariate Granger Causality \cite{barrett2010multivariate,barnett2014mvgc}. Namely, the strength of causality between $y$ and $z$ is invariant to mixtures of $y$ and $z$ in the driving signal, such that $\mathcal{G}_{y \rightarrow z} = \mathcal{G}_{a y + b z\rightarrow c z}$. This means that, without appropriate modifications to the objective function, maximizing (\ref{eqn:GCdef_vw}) will only identify $z$. To resolve this ambiguity, one can utilize the concept of time-reversed Granger Causality \cite{haufe2013critical,winkler2016validity}. Notice that if $y \rightarrow z$ in $\vec{x}(t)$, then $z \rightarrow y$ in $\vec{x}(-t)$. Thus, while the ambiguity in forward time occurs on $y$, it occurs in reversed time on $z$. One can therefore combine forward and reversed time into a single objective function according to:
\begin{equation}
\label{eqn:GCAopt_tr}
\max_{{\bf w} , {\bf v}} \mathcal{G}_{y \rightarrow z} + \mathcal{G}_{z \rightarrow y}^{\mathrm{tr}},
\end{equation}
where $\mathcal{G}_{z \rightarrow y}^{\mathrm{tr}}$ is the strength of causality between $z(-t)$ and $y(-t)$.
The non-convexity of the objective function means that a local maximizer of (\ref{eqn:GCAopt_tr}) is not guaranteed to be a global maximum. Many approaches to non-convex optimization have been developed, including the use of multiple starting points \cite{ugray2007scatter} and stochastic gradient descent \cite{bottou2018optimization}. Here, a grouped coordinate descent algorithm \cite{bezdek1987local} that optimizes over $\vec{v}$ and $\vec{w}$ in an alternating fashion is proposed: instead of combining $\vec{v}$ and $\vec{w}$ into a single model parameter and performing a $2D$-dimensional optimization, the driving and driven signals are learned in tandem. This reduces the dimensionality of the problem, partitions the variables in a natural manner, and is shown empirically to converge to optima that recover the causal structure underlying the data.
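The alternating scheme can be sketched in a few lines of Python. For brevity, the sketch optimizes only the forward-time term $\mathcal{G}_{y \rightarrow z}$ (the actual objective (\ref{eqn:GCAopt_tr}) adds the time-reversed term), uses finite-difference gradients in place of the closed-form gradient, and runs on hypothetical synthetic data:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical mixed observations containing one latent causal pair
rng = np.random.default_rng(1)
T, L, D = 2000, 2, 4
s1 = rng.standard_normal(T)
s2 = np.zeros(T)
for t in range(1, T):
    s2[t] = 0.4 * s2[t-1] + 0.9 * s1[t-1] + 0.1 * rng.standard_normal()
A = rng.uniform(0.0, 1.0, (D, 2))               # unknown mixing
X = A @ np.vstack([s1, s2])

def lagged(x, L):
    return np.column_stack([x[L - k:len(x) - k] for k in range(1, L + 1)])

def G(w, v):
    """Sample strength of causality from w^T x to v^T x (forward time only)."""
    y, z = w @ X, v @ X
    target, Zp, Yp = z[L:], lagged(z, L), lagged(y, L)
    def mse(P):
        beta, *_ = np.linalg.lstsq(P, target, rcond=None)
        return np.mean((target - P @ beta) ** 2)
    return 1.0 - mse(np.column_stack([Zp, Yp])) / mse(Zp)

# Grouped coordinate descent: alternate between the driving projection w
# and the driven projection v, each time maximizing G with the other fixed.
w = rng.standard_normal(D)
v = rng.standard_normal(D)
w0, v0 = w.copy(), v.copy()
for _ in range(5):
    w = minimize(lambda u: -G(u, v), w, method="BFGS").x
    v = minimize(lambda u: -G(w, u), v, method="BFGS").x
```

Each half-step can only improve the objective, so the alternation increases the sample strength of causality monotonically from its initial value.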
As the cost function is non-convex, there are potentially several pairs of projection vectors that locally maximize the strength of causality (\ref{eqn:GCAopt_tr}) and thus may yield meaningful latent sources. In order to recover $P$ pairs of components $\{y_i(t), z_i(t)\}, i=1,\ldots,P$, it is proposed here to repeat the optimization after the first pair has been identified, but not before removing the contribution of the driving signal $y_1(t)$ from the observed data. This takes the form of a spatiotemporal regression that removes any signals correlated with $y_1(t)$ or its lagged versions $y_1(t-l), l=1,\ldots,L$. Because this regression should also remove $z_1(t)$, the driven signal is not explicitly removed. The procedure is repeated until the desired number of component pairs $P$ is obtained. The proposed algorithm is described in \emph{Supplementary Note 3}.
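The deflation step, in which the driving signal and its lagged copies are regressed out of the observations before the next pair is learned, amounts to a least-squares spatiotemporal regression. A Python sketch follows (the exact regression used in \emph{Supplementary Note 3} may differ in detail):

```python
import numpy as np

def deflate(X, y, L):
    """Regress y(t) and its L lagged copies out of every observed channel,
    removing the recovered driving signal before the next pair is learned."""
    T = X.shape[1]
    Yl = np.column_stack([y[L - k:T - k] for k in range(L + 1)])  # y(t), ..., y(t-L)
    Xv = X[:, L:]                               # observations aligned with regressors
    beta, *_ = np.linalg.lstsq(Yl, Xv.T, rcond=None)
    return Xv - (Yl @ beta).T                   # residual data for the next iteration

# Demonstration on random data: the residual is orthogonal to y and its lags
rng = np.random.default_rng(2)
X = rng.standard_normal((3, 500))
y1 = rng.standard_normal(500)
L = 2
Xd = deflate(X, y1, L)
Yl = np.column_stack([y1[L - k:500 - k] for k in range(L + 1)])
residual_corr = np.abs(Xd @ Yl).max()           # ~0 up to numerical precision
```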
In what follows, the proposed approach is evaluated on synthetic and real-world data. The primary criterion employed to assess performance is the strength of causality (\ref{eqn:GCdef}) among the recovered pairs of components relative to the strength of causality between observed signals, or those formed by conventional component analysis techniques. Where possible, the fidelity of the recovered signals compared to the underlying sources is measured. Moreover, the recovered components and associated projection vectors are interpreted based on what is known about the system being investigated (i.e., neural dynamics, the cryptocurrency market) to further assess the behavior of the proposed approach.
\subsection*{Recovering the causal structure of a three-element system}
To test the proposed method's ability to recover the causal structure embedded in multiple time series, a series of empirical evaluations was conducted on synthetic data. Access to the system's ground-truth structure permitted measuring the fidelity of the recovered signals with respect to the latent sources. The data was generated according to a VAR(3) process whose parameters matched those employed by Stokes and Purdon \cite{stokes2017study}, where $s_1 \rightarrow s_2$ and $s_2 \rightarrow s_3$. The three connected sources were then projected to a $D=4$ dimensional observation vector as $\vec{x}(t)= \vec{A} \vec{s}(t)$, where the elements of the 4-by-3 mixing matrix $\vec{A}$ were randomly drawn from the uniform distribution $A_{ij} \sim U[0,1]$. The proposed technique was employed to recover $P=2$ pairs of components.
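The simulation setup can be sketched as follows (the VAR(3) coefficients shown are illustrative stand-ins, not the exact Stokes and Purdon values):

```python
import numpy as np

rng = np.random.default_rng(3)
T = 5000

# Illustrative stable VAR(3) chain s1 -> s2 -> s3 (hypothetical coefficients)
s = np.zeros((3, T))
for t in range(3, T):
    s[0, t] = 0.5*s[0, t-1] - 0.3*s[0, t-3] + rng.standard_normal()
    s[1, t] = 0.5*s[1, t-1] + 0.4*s[0, t-1] - 0.2*s[1, t-3] + rng.standard_normal()
    s[2, t] = 0.5*s[2, t-1] + 0.4*s[1, t-2] - 0.2*s[2, t-3] + rng.standard_normal()

# Random uniform mixing to a D=4 observation vector, as in the experiment
A = rng.uniform(0.0, 1.0, (4, 3))
X = A @ s
```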
The latent sources, observed data, and recovered signals of a single realization are depicted in Fig \ref{fig:sim_var}a,b, and c, respectively. The goal of the proposed approach is to recover the $s_1 \rightarrow s_2$
relationship in ($y_1,z_1$), and the $s_2 \rightarrow s_3$ link in ($y_2,z_2$). Notice that $s_2$ is both a driven signal as well as a driving signal, and thus the components $z_1$ and $y_2$ are aiming to capture the \emph{same} signal. The strength of causality among all pairs of latent sources is illustrated in Fig \ref{fig:sim_var}d, where the order dependence inherent to Granger Causality is evident in the asymmetry of the matrix (rows correspond to the driving signals, columns to the driven). The underlying strengths of causality were measured as: $ \mathcal{G}_{s_1 \rightarrow s_2} = 0.11 \pm 0.0011$ and $\mathcal{G}_{s_2 \rightarrow s_3} = 0.10 \pm 0.0010$ (mean $\pm$ sem across $n=100$ random realizations).
The strength of causality measured among pairs of observed signals was markedly lower, with a maximum (across all pairs) strength of causality of $0.066 \pm 0.002$, significantly lower than the underlying latent causality ($p=3.9 \times 10^{-18}$ compared with $\mathcal{G}_{s_1 \rightarrow s_2}$, $p=4.0 \times 10^{-18}$ compared with $\mathcal{G}_{s_2 \rightarrow s_3}$, Wilcoxon signed rank test, $n=100$). The strengths of causality between observed signals are depicted for a single realization in Fig \ref{fig:sim_var}e, where the strongest connection was $\mathcal{G}_{x_1 \rightarrow x_3}=0.025$. Notice that the system structure (i.e., two connected pairs) is no longer apparent, as the mixing process has both obscured and dampened the underlying causal relationships.
The causality matrix of the $P=2$ recovered components is shown in Fig \ref{fig:sim_var}f (rows and columns correspond to driving and driven components, respectively). Two strong connections are readily apparent: $y_1 \rightarrow z_1$ and $y_2 \rightarrow z_2$. The magnitudes of these causal relationships closely matched those of the latent sources: $\mathcal{G}_{y_1 \rightarrow z_1} = 0.11 \pm 0.0014$ and $\mathcal{G}_{y_2 \rightarrow z_2} = 0.10 \pm 0.0010$. These values were significantly larger than the maximum causality among all pairs of observed variables (pair 1: $p=4.7 \times 10^{-18}$, pair 2: $p=4.0 \times 10^{-18}$). To determine whether the recovered components captured the underlying sources, the mixing matrix was estimated by regressing the driving and driven signals onto the observations. The true mixing matrix is depicted for a single realization in Fig \ref{fig:sim_var}g. The recovered matrices exhibited a large correlation with the true values ($r^2=0.98 \pm 0.004$, shown for a single realization in Fig \ref{fig:sim_var}h). Moreover, the time series of recovered components faithfully tracked the dynamics of the latent sources: $r^2=0.98 \pm 0.007$ between $s_1$ and $y_1$ (Fig \ref{fig:sim_var}i), $r^2=0.96 \pm 0.014$ between $s_2$ and $z_1$ (Fig \ref{fig:sim_var}j), $r^2=0.98 \pm 0.004$ between $s_2$ and $y_2$ (Fig \ref{fig:sim_var}k), and $r^2=0.99 \pm 0.002$ between $s_3$ and $z_2$ (Fig \ref{fig:sim_var}l). Note that latent source $s_2$ was indeed captured by both $z_1$ and $y_2$.
\subsection*{Identifying latent causal connections in the brain}
Scalp electroencephalogram (EEG) signals, sometimes referred to as ``brain waves'', arise from the coordinated activity of a large number of neurons in the cerebral cortex \cite{buzsaki2012origin}. At any time instant, the set of electric potentials registered by scalp electrodes is a linear mixture of dipolar generators \cite{baillet2001electromagnetic} (Fig \ref{fig:eeg}a). It was hypothesized that Granger causal relations would be most strongly observed at the level of these neural generators, estimated by spatially filtering the EEG \cite{parra2005recipes}. To test this, the proposed technique was applied to a previously collected EEG data set where $n=12$ healthy participants viewed television advertisements that originally aired during the 2012 and 2013 \emph{Super Bowl} football matches \cite{dmochowski2014audience}.
To identify the level of Granger causality among the captured $D=64$ signals, the strength of causality was measured for all pairs of electrodes (Fig \ref{fig:eeg}b). The strongest relationship was found between left centroparietal electrode ``CP1'' and right centroparietal electrode ``CP4'', with $\mathcal{G}=0.073$ (Fig \ref{fig:eeg}b). In order to determine whether conventional spatial filtering approaches recover stronger causal relationships than those found among electrodes, the observed data was decomposed with both principal components analysis (PCA) and independent components analysis (ICA). Surprisingly, the strength of causality among pairs of PCs and ICs was not larger than that found in the raw electrodes: a maximum value of $\mathcal{G}=0.067$ was found between principal components 8 and 3 (Fig \ref{fig:eeg}c), and a maximum of $\mathcal{G}=0.022$ between independent components 6 and 3 (Fig \ref{fig:eeg}d). Next, the proposed method was employed to recover $P=3$ pairs of latent components. The strength of causality among the recovered pairs was substantially larger, with $\mathcal{G}_{y_1 \rightarrow z_1} = 0.32$, $\mathcal{G}_{y_2 \rightarrow z_2} = 0.16$, and $\mathcal{G}_{y_3 \rightarrow z_3} = 0.18$, for pairs 1, 2, and 3, respectively (Fig \ref{fig:eeg}e). The presence of more than two-fold increases in the strength of causality at multiple component pairs is consistent with the notion that the underlying causal relationships occur in a latent subspace of the data.
The coefficients of the spatial filter weights learned by the proposed method represent the scalp regions expressing the driving and driven signals.
For pair 1, the driving signal $y_1$ exhibited peak expression over the right temporo-parietal region, while the driven signal $z_1$ had peak expression over the left central electrodes (Fig \ref{fig:eeg}f). This indicates that, during this task, activity over the right temporo-parietal cortex temporally preceded activity over the left central region. To further interpret the learned components, the power spectra of the driving and driven signals were measured. The power spectrum of scalp EEG is typically segregated into distinct frequency bands, with a large body of literature documenting associations between cognitive states and activity in specific bands \cite{klimesch1999eeg}. Both $y_1$ and $z_1$ showed high levels of power in the delta band (1-3 Hz), and moderate levels of alpha band (8-13 Hz) power (Fig \ref{fig:eeg}f). The spatial topographies of the next strongest pair showed peak expression over the left parieto-occipital ($y_2$) and right temporo-parietal regions ($z_2$), indicating inter-hemispheric connectivity (Fig \ref{fig:eeg}g). An interesting pattern arose in the power spectra of the components: the driving signal was marked by low delta power and high alpha power, while the driven signal exhibited the opposite pattern (i.e., high delta power and a notable absence of alpha power). This result is consistent with previous findings of an inverse correlation between alpha and delta waves, hypothesized to arise from thalamocortical inhibition of the brain stem \cite{robinson1999technical,robinson2001brain}. The topography of driving signal $y_3$ exhibited activation over the left occipital and right centro-temporal regions, while the corresponding driven signal $z_3$ was concentrated over the left occipital region (Fig \ref{fig:eeg}h). As observed in pair 2, the driving signal showed a high ratio of alpha-to-delta power, while a low alpha-to-delta ratio was detected in the driven signal.
To formally test whether the proposed method recovers stronger causal relations than those found with conventional approaches, a two-way ANOVA (method $\times$ component) was conducted. For this analysis, the strength of causality was measured separately for each subject, yielding $n=12$ repeated measures. A large main effect of method was identified ($F(3)=11.53$, $p=9.23 \times 10^{-7}$; Fig \ref{fig:eeg_anova}). There was no main effect of component ($p=0.59$) and no significant interaction ($p=0.98$). Follow-up tests showed that the main effect of method was driven by significantly larger strengths of causality with the proposed method ($\mathcal{G} = 0.10 \pm 0.031$, $0.088 \pm 0.012$, and $0.081 \pm 0.014$ for the first three components, means $\pm$ sem across $n=12$ subjects) relative to the three most connected electrode pairs ($\mathcal{G} = 0.049 \pm 0.0084$, $0.049 \pm 0.0072$, $0.049 \pm 0.0079$; $p=0.034$; $p=4.9 \times 10^{-4}$, and $p=0.034$ for components 1, 2, and 3, respectively; Wilcoxon signed rank test, $n=12$), the three most connected principal component pairs ($\mathcal{G} = 0.050 \pm 0.0054$, $0.049 \pm 0.0058$, $0.046 \pm 0.0046$; $p=0.034$, $p=0.0049$, and $p=0.0093$), and the three most connected independent component pairs ($\mathcal{G} = 0.033 \pm 0.0056$, $0.030 \pm 0.0035$, $0.030 \pm 0.0031$; $p=0.016$, $p=4.9 \times 10^{-4}$, and $p=0.0024$). Thus, the proposed technique detected causal relationships whose magnitude was significantly larger than those measured with conventional approaches.
\subsection*{Probing latent causality in the cryptocurrency market}
Finally, the proposed method was tested on a system without an obvious latent structure: the cryptocurrency market. Historical prices of $D=19$ popular cryptocurrencies (Fig \ref{fig:crypto}A; individual traces have been standardized) were employed for the analysis, which sought to identify the $P=3$ strongest causal relationships.
Among pairs of individual cryptocurrencies, the strength of causality was quite modest: $0.028 \pm 0.023$ (mean $\pm$ sd across all $n=342$ pairs of currencies), with a maximum value of
$\mathcal{G}_{ \mathrm{ETC} \rightarrow \mathrm{QTUM}} = 0.12$ (Fig \ref{fig:crypto}B). In contrast, the proposed technique identified a primary pair of latent components with a statistically significant strength of causality ($\mathcal{G}_{y_1 \rightarrow z_1}=0.40$, $p<0.001$, non-parametric permutation test altering the phase of individual cryptocurrency time series), representing a more than three-fold increase (Fig \ref{fig:crypto}C). A statistically significant strength of causality was also found for the second pair of components ($\mathcal{G}_{y_2 \rightarrow z_2}=0.14$, $p=0.008$; Fig \ref{fig:crypto}C). Note that, even after removing the contribution from the primary driving signal $y_1$, a latent relationship whose causality exceeded that seen in the observed data was still recovered. The strength of causality exhibited by the third pair of latent components ($\mathcal{G}_{y_3 \rightarrow z_3}=0.080$, $p=0.13$) fell short of significance, but nevertheless exceeded 96\% of the individual pair values (compare panels B and C in Fig \ref{fig:crypto}).
The dynamics of the driving and driven components of the first pair are depicted in Fig \ref{fig:crypto}D, where the temporal precedence of $y_1$ relative to $z_1$ is visible in the traces. For example, note that the occurrence of the three prominent peaks in the spring of 2021 is first observed in $y_1$ and shortly after in $z_1$ (see Fig \ref{fig:crypto}D inset). The individual currencies with the largest expression in the driving signal were BNB (Binance Coin) and ETC (Ethereum Classic), while the largest contributions to the driven signal were from QTUM and TRX (Fig \ref{fig:crypto}E, color indicates weight of filter used to construct $y_1$ and $z_1$). This result indicates that past fluctuations in the prices of BNB and ETC predict the current prices of QTUM and TRX. The temporal precedence of $y_2$ relative to $z_2$ is also evident in the dynamics of the second pair of latent components (Fig \ref{fig:crypto}F). For example, a sharp dip in price occurs near May 2021, first in $y_2$ and slightly later in $z_2$. Similar to the first pair of latent components, the currencies best expressed in $y_2$ were ETC and BNB. However, unlike ($y_1,z_1$), the driven signal here most strongly expressed ADA (Cardano) and ETH (Ethereum) (Fig \ref{fig:crypto}G). The finding of similar driving signals (but distinct driven signals) in the first two pairs suggests the presence of multiple ``links'' emanating from the latent driver. The currencies best expressed in the driving signal of the third pair were BNB and XRP (Ripple), while the corresponding driven signal $z_3$ best expressed XLM (Stellar) and ETC (Fig \ref{fig:crypto}I).
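The significance values above were obtained from surrogate data in which the phases of the individual time series were altered. One common construction of such surrogates randomizes the Fourier phases while preserving each amplitude spectrum; the sketch below illustrates this general idea and is an assumption about the procedure rather than its exact form:

```python
import numpy as np

def phase_randomize(x, rng):
    """Fourier surrogate: keep the amplitude spectrum of x but draw the
    phases at random, destroying any temporal precedence between series.
    Repeating the causality measurement over many surrogates yields a
    null distribution for the observed strength of causality."""
    Xf = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(Xf))
    phases[0] = 0.0                     # keep the DC component real
    if len(x) % 2 == 0:
        phases[-1] = 0.0                # keep the Nyquist bin real
    return np.fft.irfft(np.abs(Xf) * np.exp(1j * phases), n=len(x))

# One surrogate realization of a hypothetical price series
rng = np.random.default_rng(7)
x = rng.standard_normal(1024)
xs = phase_randomize(x, rng)
```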
\section*{Discussion}
The distinction between the proposed technique and conventional univariate and multivariate Granger Causality can be illuminated by the types of queries that the different approaches address. In the context of the cryptocurrency market, univariate Granger Causality addresses questions such as ``does the price of Bitcoin exert a causal influence on the price of Ethereum?'' Multivariate Granger Causality is concerned with questions such as ``do the prices of Bitcoin and Cardano (taken as a group) drive the prices of Ethereum and Ethereum Classic?'' Note that, in both cases, one must specify the elements and direction of the causal relationship being tested. To identify the full complement of causal links in the system of interest, such a hypothesis testing approach will generally require a large number of statistical tests. In contrast, the proposed method automatically identifies paired groups of cryptocurrencies, with each group defined such that the strength of causality from the driving group to the driven group is maximized: the elements and direction of the causal links are learned directly from the data. This identification may be performed over several iterations, with each iteration revealing a generally weaker but distinct causal relationship from the previous. The weights of the learned filters are interpretable: dimensions with a large magnitude indicate that the corresponding signal is either driving activity, or being driven, in a latent subspace of the system.
In applications such as EEG or magnetoencephalography (MEG) where the source space has a clear physical substrate, the learned filters offer clear insight into the nature of the latent sources. Namely, the cortical generators of the scalp topographies in Fig \ref{fig:eeg}f-h may be estimated with source localization \cite{baillet2001electromagnetic} to reveal the spatial origin of the latent sources. Causal relationships that are obscured at the level of the electrodes may be clarified as genuine connections between cortical sources. The nature of the latent source space is less apparent in other problems. In financial systems defined by a set of evolving prices, the latent sources correspond to a set of linked assets whose dynamics exhibit a temporal dependence on those of a second set. For example, the occurrence of an external event (e.g. activity on social media) may produce a change in the value of a certain group of assets. As a consequence, the value of a second (disparate) group of assets may also be modulated, and due to the delay between the price movements, a Granger causal relationship emerges.
Conventional approaches to blind source separation assume that the underlying sources are statistically independent, perhaps inspired by the ``cocktail party problem'' \cite{mcdermott2009cocktail} solved by the auditory system. This assumption is exploited by Independent Components Analysis (ICA) \cite{comon1994independent,hyvarinen2000independent}, which projects the observed signals into components to maximize their statistical independence. On the other hand, the approach proposed here assumes the existence of Granger Causal sources, and is thus applicable to systems with temporal dependencies among the signals of interest. Notice that the criteria optimized by ICA and the proposed technique to perform source separation are opposing. In the context of brain signals, ICA is seeking to identify decoupled neural sources, while the method proposed here aims to recover functionally connected brain regions. More closely related to the proposed method are approaches that combine Canonical Correlation Analysis \cite{hotelling1992relations} with Granger Causality \cite{sato2010analyzing,wu2011kernel} to test causal relations between pairs of multivariate time series. These approaches share a feature of the proposed method by forming components of observed data, but differ importantly in that the data must already be partitioned into hypothesized driving and driven signals.
One limitation of the proposed technique is the potential difficulty in identifying causality in data with very high dimensionality (i.e., a large number of observed signals) or very long temporal dependencies between latent sources. In either case, the covariance matrices required to identify the latent causal sources may be poorly estimated, potentially leading to erroneous estimates of latent Granger Causality. To mitigate this, some prior information about the structure of the observed signals must be assumed. For example, a form of Tikhonov regularization \cite{golub1999tikhonov}, equivalent to adding uncorrelated noise to the measurements, was employed here. More sophisticated approaches to covariance estimation in high dimensions may further improve the performance of the proposed framework.
A challenge with conventional Granger Causality is the potential presence of exogenous sources that drive two or more observed variables with different delays. In this event, spurious relationships between the observed signals may be inferred. To address this, partial Granger Causality \cite{guo2008partial} may be employed to measure the relationship that remains after removing the contribution of the exogenous source. It is interesting to consider how such confounding sources may affect the behavior of the proposed technique. If the nature of the confounding source is known \emph{a priori}, it should be regressed out of the data prior to deploying the proposed technique. This was performed in the cryptocurrency example above, where the global market trend was removed prior to analysis. In the case of an unknown confounding source, the proposed approach is expected to provide some shielding from spurious inference. This follows from the utilization of multiple component pairs to separate the contributions of distinct latent sources. For example, in the case of a strong confounding source that enters the observed data, the underlying relationship may appear in the first pair of latent sources, leaving the genuine causal relationships in subsequent pairs. The technique proposed here is tasked with capturing all latent sources that produce Granger Causal links, meaningful or otherwise. This highlights the importance of interpreting the weights of the learned projection vectors, which may offer clues as to the origin of the recovered relationship.
Granger Causality is one of several statistical approaches to measuring causality. Two popular frameworks that have been successfully applied to dynamic systems are Dynamic Causal Modeling (DCM) \cite{friston2003dynamic} and Structural Equation Modeling \cite{mcintosh1991structural}. In DCM, a ``forward model'' that relates the activity of underlying sources to the observations is specified, with Bayesian model selection utilized to estimate the parameters of the underlying sources (i.e., connectivity). This allows DCM to take advantage of the known structure of the system, including nonlinear interactions. The approach proposed here, while also aiming to identify causal structure, is complementary in nature. The forward model need not be specified beforehand, and the technique functions not as a statistical test \emph{per se} but rather a decomposition of the data, akin to PCA and ICA. Moreover, the knowledge gleaned from the components recovered by the decomposition may then be employed in a subsequent hypothesis testing procedure that has been informed by the method's findings.
\section*{Materials and Methods}
All data and source code are provided at \href{dmochow.github.io/gca}{\fontfamily{pcr}\selectfont dmochow.github.io/gca}. Data analysis was performed in the MATLAB computing environment (MathWorks, Natick, MA).
\paragraph{Implementation} To solve the optimization problems at each iteration of the grouped coordinate descent algorithm (see Algorithm 1 in \textit{Supplementary Note 3}), we employed the built-in MATLAB function {\fontfamily{pcr}\selectfont
fmincon} with the default interior point algorithm solver. The maximum number of function evaluations was set to $10^{4}$ and the maximum number of iterations was set to $4000$. Regularization of the block covariance matrices $\vec{\Sigma}_{1:L}$ and $\tilde{\vec{\Sigma}}$ was implemented by limiting the condition number of each matrix to a value of $c$, where the value of $c$ was selected based on the dimensionality of the problem, as specified below. Limiting the condition number was implemented by adding a small diagonal component $\sigma^2 \vec{I}$ to each covariance matrix, where the value of $\sigma^2 = \frac{ ( \lambda_1 - \lambda_{LD} c) }{c-1}$ ensures that the condition number of the covariance matrix is $c$, where $\lambda_1$ and $\lambda_{LD}$ are the largest and smallest eigenvalues of the block covariance matrix being regularized \cite{hoerl1970ridge,tabeart2020improving}.
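As a concrete illustration of this regularization step, the following Python/NumPy sketch caps the condition number of a covariance matrix by adding $\sigma^2 \vec{I}$ with $\sigma^2$ as defined above (the paper's implementation is in MATLAB; the function name here is hypothetical):

```python
import numpy as np

def cap_condition_number(Sigma, c):
    """Add sigma^2 * I to a covariance matrix so that its condition number
    is at most c, with sigma^2 = (lambda_max - c * lambda_min) / (c - 1)."""
    eigvals = np.linalg.eigvalsh(Sigma)          # ascending order
    lam_min, lam_max = eigvals[0], eigvals[-1]
    if lam_max <= c * lam_min:                   # already well conditioned
        return Sigma
    sigma2 = (lam_max - c * lam_min) / (c - 1.0)
    return Sigma + sigma2 * np.eye(Sigma.shape[0])
```

For example, capping a matrix with eigenvalues $(100, 1)$ at $c=10$ adds $\sigma^2=10$, giving eigenvalues $(110, 11)$ and a condition number of exactly 10.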
Although the closed-form expression for the gradient of $\mathcal{G}_{y \rightarrow z}$ (see \emph{Supplementary Note 2}) was verified empirically, it was more efficient to compute the gradient numerically with finite differences. The numerous Kronecker products and matrix inverse operations required to evaluate the gradient expression led to longer run times compared to the finite differences approximation. Moreover, in order to guarantee that the optimization identified projections with unit norm, a pair of nonlinear constraints were added, leading to the following constrained optimization problem:
\begin{eqnarray}
\min_{\vec{w},\vec{v}} ~ -\left[ \mathcal{G}(\vec{w},\vec{v}) + \mathcal{G}^{\mathrm{tr}}(\vec{v},\vec{w}) \right] \nonumber \\ \mathrm{~~~subject~to:~} \vec{w}^T \vec{w}=1 \mathrm{~and~} \vec{v}^T \vec{v}=1,
\end{eqnarray}
where
$\mathcal{G}(\vec{w},\vec{v})$ is the strength of causality (\ref{eqn:GCdef}) between driving signal $\vec{w}^T \vec{x}(t)$ and driven signal $\vec{v}^T \vec{x}(t)$, and $\mathcal{G}^{\mathrm{tr}}(\vec{v},\vec{w})$ is the strength of causality between driving signal $\vec{v}^T \vec{x}(-t)$ and driven signal $\vec{w}^T \vec{x}(-t)$. After each iteration of the grouped coordinate descent, the driving signal $y(t)={ \vec{w}^{\ast} }^{T} \vec{x}(t)$ and its lagged versions were regressed out of the data according to:
\begin{eqnarray}
\vec{x}(t) &=& \vec{x}(t) - \vec{B}^T \vec{y}_p(t)
\end{eqnarray}
where $\vec{B} = \vec{Y}_p^{\#} \vec{X}$ is the least-squares solution to the linear system:
\begin{eqnarray}
\vec{X} = \vec{Y}_p \vec{B}
\end{eqnarray}
where $D$-by-$T$ matrix $\vec{X} = \left[ \begin{array}{ccc} \vec{x}(1) & \ldots & \vec{x}(T) \end{array} \right] $ and $L$-by-$T$ matrix $ \vec{Y}_p = \left[ \begin{array}{ccc} \vec{y}_p(1) & \ldots & \vec{y}_p(T) \end{array} \right] $ span the spatiotemporal apertures of the observed and driving signals, respectively. Convergence was assessed after every iteration, and the search was stopped when the magnitude of change in both $\mathcal{G}$ and $\mathcal{G}^{\mathrm{tr}}$ was less than $10^{-6}$.
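The regression step above can be sketched in Python/NumPy as follows (an illustrative translation of the MATLAB implementation; the function name and the zero-padding convention for the lag matrix are assumptions):

```python
import numpy as np

def regress_out_driver(X, y, L):
    """Remove driving signal y(t) and its lagged copies from D x T data X.

    Builds the L x T lag matrix Y_p (zero-padded at the start), solves
    X^T = Y_p^T B for B by least squares, and subtracts the fitted part.
    """
    D, T = X.shape
    Yp = np.zeros((L, T))
    for k in range(L):
        Yp[k, k:] = y[:T - k]                 # lag-k copy of the driver
    B, *_ = np.linalg.lstsq(Yp.T, X.T, rcond=None)   # L x D coefficients
    return X - (Yp.T @ B).T
```

By construction, the residual data is orthogonal to the driver and each of its lagged copies, so the removed driving source cannot reappear in subsequent component pairs.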
To measure the strength of causality $\mathcal{G}_{f \rightarrow g}$ between signals $f$ and $g$, the full and reduced regression models predicting $g(t)$ were explicitly learned, and the residuals then used to obtain $\mathcal{G}_{f \rightarrow g}$ via Eqn. (\ref{eqn:GCdef}).
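A minimal sketch of this computation, assuming the standard definition of Granger causality strength as the log ratio of reduced-model to full-model residual variances:

```python
import numpy as np

def gc_strength(f, g, L):
    """Granger causality strength f -> g with L lags, assuming the standard
    definition G = ln( var(reduced residual) / var(full residual) )."""
    T = len(g)
    target = g[L:]
    # design matrices: reduced uses only the past of g, full adds the past of f
    past_g = np.column_stack([g[L - k - 1:T - k - 1] for k in range(L)])
    past_f = np.column_stack([f[L - k - 1:T - k - 1] for k in range(L)])
    full = np.hstack([past_g, past_f])

    def resid_var(A):
        coef, *_ = np.linalg.lstsq(A, target, rcond=None)
        return (target - A @ coef).var()

    return np.log(resid_var(past_g) / resid_var(full))
```

When $f$ drives $g$ with some delay, the past of $f$ shrinks the residual variance of the full model and $\mathcal{G}_{f \rightarrow g}$ is positive; in the reverse direction it remains near zero.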
\paragraph{Synthetic VAR data and analysis}
Data was generated by explicitly defining the VAR(3) system analyzed previously by Stokes and Purdon \cite{stokes2017study}:
\begin{widetext}
\begin{eqnarray}
\label{eqn:varStokes}
\left[ \begin{array}{c}
s_1(t) \\
s_2(t) \\
s_3(t)
\end{array} \right] &=& \left[ \begin{array}{ccc}
-0.9 & 0 & 0 \\
-0.356 & 1.212 & 0 \\
0 & -0.3098 & -1.3856
\end{array} \right] \left[ \begin{array}{c}
s_1(t-1) \\
s_2(t-1) \\
s_3(t-1)
\end{array} \right] + \nonumber \\
&& \left[ \begin{array}{ccc}
-0.81 & 0 & 0 \\
0.7136 & -0.49 & 0 \\
0 & 0.50 & -0.64
\end{array} \right] \left[ \begin{array}{c}
s_1(t-2) \\
s_2(t-2) \\
s_3(t-2)
\end{array} \right] + \nonumber \\
&& \left[ \begin{array}{ccc}
0 & 0 & 0 \\
-0.356 & 0 & 0 \\
0 & -0.3098 & 0
\end{array} \right] \left[ \begin{array}{c}
s_1(t-3) \\
s_2(t-3) \\
s_3(t-3)
\end{array} \right] + \left[ \begin{array}{c}
\epsilon_1(t) \\
\epsilon_2(t) \\
\epsilon_3(t)
\end{array} \right],
\end{eqnarray}
\end{widetext}
where $\epsilon_i$, $i=1,2,3$, are independent and identically distributed innovation processes with standard deviation $\sigma=1$. $M=100$ realizations, each with a length of $N=5000$ samples, were generated by passing the vector innovation process through the impulse response (\ref{eqn:varStokes}). Projection of these latent sources to a four-dimensional observation vector followed as $\vec{x}(t)= \vec{A} \vec{s}(t)$, where the elements of 4-by-3 mixing matrix $\vec{A}$ were randomly drawn from the uniform distribution $A_{ij} \sim U[0,1]$. Notice that measurement noise enters the observed data via the innovation processes $\epsilon_i$. The proposed technique was employed to recover $P=2$ pairs of causal components:
\begin{eqnarray*}
y_i(t) = \vec{w}_i^T \vec{x} (t), ~~~~i=1,2 \\
z_i(t) = \vec{v}_i^T \vec{x} (t), ~~~~i=1,2,
\end{eqnarray*}
where $\vec{w}_i$ and $\vec{v}_i$ were estimated with Algorithm \ref{alg:cap}. Convergence was observed in under 20 iterations for pair 1, and under 10 for the second pair (Figure \ref{fig:convergence}).
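A realization of this VAR(3) system and its projection to four observed channels can be generated with the following Python/NumPy sketch (seed and variable names are illustrative):

```python
import numpy as np

# VAR(3) coefficient matrices of the Stokes & Purdon example system
A1 = np.array([[-0.9,     0.0,     0.0],
               [-0.356,   1.212,   0.0],
               [ 0.0,    -0.3098, -1.3856]])
A2 = np.array([[-0.81,    0.0,     0.0],
               [ 0.7136, -0.49,    0.0],
               [ 0.0,     0.50,   -0.64]])
A3 = np.array([[ 0.0,     0.0,     0.0],
               [-0.356,   0.0,     0.0],
               [ 0.0,    -0.3098,  0.0]])

def simulate_var3(N, rng):
    """One length-N realization driven by unit-variance i.i.d. innovations."""
    s = np.zeros((3, N))
    eps = rng.standard_normal((3, N))
    for t in range(3, N):
        s[:, t] = A1 @ s[:, t-1] + A2 @ s[:, t-2] + A3 @ s[:, t-3] + eps[:, t]
    return s

rng = np.random.default_rng(1)
s = simulate_var3(5000, rng)                  # latent sources
A = rng.uniform(0.0, 1.0, size=(4, 3))        # mixing matrix, A_ij ~ U[0,1]
x = A @ s                                     # 4-channel observations
```

Repeating this $M=100$ times with fresh innovations and mixing matrices reproduces the ensemble analyzed in the text.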
The optimization was performed with no regularization of the block covariance matrices ($c=\infty$) and a maximum lag parameter of $L=3$. $P=2$ pairs were recovered by the optimization. When comparing the fidelity of the recovered component pairs with the ground-truth latent sources, the order of the $P=2$ pairs was corrected \emph{post hoc} if it was evident that the $(y_1,z_1)$ pair matched the $s_2 \rightarrow s_3$ relationship. In practice, the order of the recovered pairs ($s_1 \rightarrow s_2$, $s_2 \rightarrow s_3$) is insignificant, as the causal structure reflected by the two pairs is agnostic to their ordering.
To estimate the mixing matrix from the model's projection vectors $\vec{w}$ and $\vec{v}$, the driving signals $y_1$ and $y_2$, as well as the driven signal $z_2$, were individually regressed onto the observation vector $\vec{x}$. This yielded a $D$-dimensional ``forward model'' for each of the three signals, which were then compared to the three columns of the true mixing matrix. When displaying the estimated and true mixing matrix in Fig \ref{fig:sim_var}g,h, the sign and scale (L2 norm) of each estimated forward model was corrected to match that of the ground-truth mixing matrix column.
When testing for significant differences in the strength of causality between observed signals and those recovered by the proposed method, the Wilcoxon signed rank test ($n=100$ independent VAR realizations) was employed. The maximum value across all pairs of observed signals (i.e., $\max_{i,j} \mathcal{G}_{x_i \rightarrow x_j}$) was compared against the strength of causality of the first two recovered pairs (i.e., $\mathcal{G}_{y_1 \rightarrow z_1}$, $\mathcal{G}_{y_2 \rightarrow z_2}$). The same procedure was employed to test for significant differences in the strength of causality between observed and ground-truth latent sources (i.e., $\mathcal{G}_{s_1 \rightarrow s_2}$, $\mathcal{G}_{s_2 \rightarrow s_3}$).
\paragraph{EEG data and analysis} The neural data employed here has been previously described \cite{dmochowski2014audience}. Briefly, scalp EEG was collected from $n=12$ subjects freely viewing a set of 30--60 second advertisements originally broadcast during the 2012 and 2013 Super Bowl. To demonstrate the utility of the proposed method, data from a single stimulus (``Work'', Bud Light Platinum) was analyzed here. The data was acquired with a 64-channel electrode cap connected to a BioSemi Active Two amplifier and sampled at a rate of 512 Hz. A set of preprocessing steps comprising high-pass and notch filtering, removal of eye motion artifacts by linear regression, and artifact rejection with a power criterion was applied to denoise the acquired signals. All data samples identified as artifactual by the preprocessing were linearly interpolated from neighboring samples; this interpolation allowed the computation of block covariance matrices in the presence of missing data. The data was further downsampled to a sampling frequency of 32 Hz to reduce the dimensionality of the ensuing block covariance matrices. The maximum lag parameter $L$ was set to 16 samples (500 ms), reflecting a tradeoff between capturing dependencies on the temporal scale of neural dynamics and avoiding excessively large covariance matrices. The number of desired component pairs was set to $P=3$.
EEG signals were mean centered prior to testing the proposed method. The block covariance matrices $\vec{\Sigma}_{1:L}$ and $\tilde{\vec{\Sigma}}$ were regularized such that the condition number of each matrix was limited to $K=10^9$. The maximum number of iterations in the grouped coordinate descent was set to 50.
To depict the spatial topographies of the latent components, the ``forward-model'' \cite{haufe2013critical} conveying the distribution of the latent source on the scalp $\vec{a}_{w}=\vec{\Sigma}(0) \vec{w} \left( \vec{w}^T \vec{\Sigma}(0) \vec{w} \right)^{-1}$ was computed, where $\vec{\Sigma}(0)$ is the lag-zero covariance matrix of the observations $\vec{x}(t)$. Power spectra were estimated with the Thomson multitaper spectral analysis technique employing a time-bandwidth product of 64. When comparing the proposed technique with principal components analysis, the strength of causality was measured between all 90 pairs of the first 10 principal components (the approximate knee point of the data's eigenvalue spectrum). Similarly, the strength of causality was calculated among all pairs of the 10 independent components formed after performing PCA on the data. The maximum-kurtosis implementation of ICA was employed \cite{girolami1996negentropy}.
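The forward-model computation can be sketched as follows (a Python/NumPy illustration; in the analysis the spatial filter $\vec{w}$ comes from the learned projections):

```python
import numpy as np

def forward_model(X, w):
    """Forward model a_w = Sigma(0) w (w^T Sigma(0) w)^{-1} for a spatial
    filter w, where Sigma(0) is the lag-zero covariance of the
    (channels x time) data matrix X."""
    Xc = X - X.mean(axis=1, keepdims=True)
    Sigma0 = Xc @ Xc.T / Xc.shape[1]
    return (Sigma0 @ w) / (w @ Sigma0 @ w)
```

For uncorrelated channels, a filter selecting a single channel yields a forward model concentrated on that channel, as expected.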
To perform two-way ANOVA with method and component as factors, the spatial filters learned on the subject-aggregated data were applied to the recordings of individual subjects, yielding $n=12$ independent measures of the strength of causality obtained with the proposed method. The three electrode pairs with the largest (subject-aggregated) strength of causality were selected \emph{post hoc}. Similarly, the three principal and independent component pairs with the largest strength of causality were selected. The strength of causality values at the selected pairs were then measured for all subjects and employed in the ANOVA procedure. Note that the values of strength of causality yielded by the proposed method were markedly larger (i.e., $\mathcal{G}=0.32$) when evaluated on the entire (subject aggregated) data set relative to the values obtained when applying the spatial filters to individual subjects and averaging across the cohort (i.e., $\mathcal{G}=0.10 \pm 0.0031$).
\paragraph{Cryptocurrency data and analysis}
Publicly available data was obtained from an online database of historical cryptocurrency prices as captured on the Binance Exchange (\url{CryptoDataDownload.com}). Data was obtained from the following $D=19$ currencies: ADA, BAT, BNB, BTC, BTT, DASH, EOS, ETC, ETH, LINK, LTC, NEO, QTUM, TRX, USDC, XLM, XMR, XRP, and ZEC. Prices were obtained at the resolution of one minute, but subsequently downsampled by a factor of 1800 in order to capture slower dynamics manifesting across half-day segments. The opening price in each segment (i.e., as opposed to the high, low, or closing price) was employed for the analysis.
Because the proposed method cannot recover the scale of the latent sources, each currency's time series was standardized by removing the mean and dividing by the standard deviation. Furthermore, in order to capture genuine causal relationships unaffected by exogenous factors not captured in these currencies, the mean waveform (``global'' trend) was linearly regressed out of the multivariate time series with ordinary least squares.
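These two preprocessing steps (standardization followed by OLS removal of the global trend) can be sketched as:

```python
import numpy as np

def remove_global_trend(X):
    """Z-score each series (rows of X), then regress the cross-sectional
    mean ('global' trend) out of every series by ordinary least squares."""
    Z = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)
    g = Z.mean(axis=0)                        # global trend, length T
    beta = (Z @ g) / (g @ g)                  # per-series OLS coefficient
    return Z - np.outer(beta, g)
```

The residual series are orthogonal to the global trend by construction, so a common market-wide movement cannot masquerade as a causal link between currencies.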
The proposed algorithm was employed with a maximum lag of $L=4$ (i.e., a two-day temporal aperture), and the $P=3$ strongest pairs of latent components were computed. Regularization of the block covariance matrices $\tilde{\vec{\Sigma}}$ and $\vec{\Sigma}_{1:L}$ was performed by limiting the condition number of both matrices to $K=1000$.
To test for statistically significant strengths of causality in the recovered component pairs, a non-parametric test that employs surrogate data generated by randomizing the phase spectrum of the original data (while preserving its power spectrum) was employed \cite{theiler1992testing}. This procedure effectively ``shuffles'' the time series of the various cryptocurrency prices such that the genuine temporal dependencies are removed. The strength of causality measured from the surrogate records then provides a sample of the null distribution to which the true values were compared. A total of 1000 surrogate data records were formed, with the p-value measured as the number of records whose strength of causality exceeded the true value.
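A single phase-randomized surrogate in the style of \cite{theiler1992testing} can be generated as follows (Python/NumPy sketch; the full test repeats this 1000 times and recomputes the strength of causality on each surrogate):

```python
import numpy as np

def phase_randomize(x, rng):
    """Surrogate with the same power spectrum as x but randomized phases."""
    X = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=X.shape)
    phases[0] = 0.0                           # keep the DC component real
    if len(x) % 2 == 0:
        phases[-1] = 0.0                      # Nyquist bin must stay real
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))
```

Because the power spectrum is preserved while temporal dependencies are destroyed, the surrogate strengths of causality sample the null distribution against which the true values are compared.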
To interpret the constituents of the latent sources learned by the proposed method, the elements of $\vec{w}$ and $\vec{v}$ were sorted by magnitude, and the two elements with the largest absolute value were reported in the text.
\section*{Acknowledgments}
The author would like to thank Amilcar Malave for help with figure preparation. This research was supported by the Weinbaum - Wallace H. Coulter Fund.
\bibliographystyle{unsrt}
\section{The D0 detector}
The D0 experiment collected data at the Fermilab Tevatron $p\bar{p}$
Collider
at $\sqrt{s}$=1.96 TeV from 2001 through the shutdown of the
Tevatron in 2011, a period referred to as Run II.
The D0 detector is described in detail elsewhere \cite{d0det}. For
the purposes of this analysis, the most important parts of the detector are the
central tracker and the muon system. The inner region of the D0 central tracker
consists of a silicon microstrip tracker (SMT) that covers
pseudorapidities $|\eta|<3$ \cite{eta}.
In the spring of
2006, an additional layer of silicon (Layer 0) was added close to the beam pipe
\cite{layer0}. Since the detector configuration changed significantly with this
addition, the D0 dataset is divided into two distinct periods (Run IIa and Run
IIb), with the analysis performed separately for each period. Moving away from
the interaction region, the next detector subsystem encountered is the D0
central fiber tracker (CFT), which consists of 16 concentric cylinders of
scintillating fibers, covering $|\eta|<2.5$. Both the SMT
and CFT are located within a 2 T superconducting solenoidal magnet. The D0 muon
system is located outside of the finely segmented liquid argon sampling
calorimeter. The
muon system consists of three layers of tracking detectors and trigger
scintillators, one layer in front of 1.8~T toroidal magnets and two
additional layers after the toroids. The muon system covers $|\eta|< 2$.
The data used in
this analysis were collected with a suite of single muon and dimuon triggers.
\section{ Analysis overview}
This analysis was performed with the relevant dimuon mass region
blinded until all analysis procedures were final.
Our dimuon mass resolution is not
sufficient to separate $B_s^0 \to \mu^+ \mu^-$ from $B_d^0 \to \mu^+ \mu^-$, but
in this analysis we assume that
there is no contribution from $B_d^0 \to \mu^+ \mu^-$, since this decay is expected
to be suppressed with respect to $B_s^0 \to \mu^+ \mu^-$ by the ratio of the CKM
matrix elements $|V_{td}/V_{ts}|^2 \approx 0.04$ \cite{pdg12}. The most stringent
95\%~C.L. limit on the decay $B_d^0 \to \mu^+ \mu^-$, which is from the LHCb experiment
\cite{newlhcb}, is ${\cal B}(B^0_d \to \mu^+ \mu^-)<9.4 \times 10^{-10}$.
$B_s^0 \to \mu^{+} \mu^{-}$ candidates are identified by selecting two
high-quality muons of opposite charge that form a good three-dimensional vertex
well-separated from the primary $p\bar{p}$
interaction due to the relatively long lifetime of the
$B_s^0$ meson \cite{pdg12}.
A crucial requirement
for this analysis is the suppression of the large dimuon background
arising from semileptonic $b$ and $c$ quark decays. Figure~\ref{cartoon}
shows a schematic diagram of the signal decay and the two dominant
background
processes. Backgrounds in the dimuon effective mass region below the $B_s^0$ mass
are dominated by sequential decays such as $b \to \mu^- \nu c $
with $c \to \mu^+ \nu X$, as shown in Fig.\ \ref{bg1_d}.
Backgrounds in the dimuon mass region above the $B_s^0$ mass are
dominated by double semileptonic decays such as $b(\bar{c}) \to \mu^- \nu X$ and
$\bar{b}(c) \to \mu^+ \nu X$, as
shown in Fig.\ \ref{bg2_d}.
For both of these backgrounds, the muons do not form a real vertex, but the
tracks can occasionally be close enough in space to be reconstructed as
a ``fake'' vertex.
\begin {figure*}[!th]
\begin {center}
\subfigure[]{\label{sig_d}\includegraphics[width=1.7in] {sig1.eps}}
\subfigure[]{\label{bg1_d}\includegraphics[width=2.0in] {b11.eps}}
\subfigure[]{\label{bg2_d}\includegraphics[width=2.0in] {b21.eps}}
\caption {(color online) Schematic diagrams showing (a) the signal decay,
$B_s^0 \to \mu^{+} \mu^{-}$,
and main backgrounds: (b) sequential decay,
$b \to c \mu^{-}$ followed by $c \to \mu^{+}$, and (c) double semileptonic decay, $b \to \mu^{-}$ and $\bar{b} \to
\mu^{+}$.}
\label{cartoon}
\end{center}
\end{figure*}
Figure \ref{cartoon} illustrates the differences between signal and
background that we exploit as a general analysis strategy. The dimuon
system itself should form a good vertex consistent with the decay of a
single particle originating from the $p\bar{p}$ interaction vertex.
The $B_s^0$ candidate should have a small impact parameter with respect
to the primary $p\bar{p}$ interaction vertex, while the individual muons
should in general have
fairly large impact parameters. In addition to quantities related to the
dimuon system, Fig.\ \ref{cartoon} illustrates that the environment
surrounding the $B_s^0$ candidate should be quite different for signal
compared to backgrounds. The dimuon system for the signal should be fairly
well isolated, while the fake dimuon vertex in background events is likely to
have
additional tracks and additional vertices nearby.
No single variable is able to provide definitive discrimination against these
backgrounds, so we use a multivariate technique as described in
Sec.~\ref{bdt} to exploit these differences between signal and background.
In addition to dimuon backgrounds from semileptonic heavy quark decays,
there are peaking backgrounds arising from $B_s^0 \to hh$ or $B_d^0 \to hh$
where $hh$ can be $KK$, $K\pi$ or $\pi \pi$. Of these, $B_s^0 \to KK$ is
the dominant contribution.
The $K$ or $\pi$ mesons can be misidentified as muons through decay in flight,
$K/\pi \to \mu \nu$,
or by penetrating far enough into the detector to create hits in
the muon system. For these decays to be misidentified as
signal, both hadrons must be misidentified as muons; although such double
misidentification is unlikely, the decay we are searching for is so rare that
$B_s^0/B_d^0 \to hh$ decays constitute a background of magnitude similar to
that of the expected signal.
The number of $B_s^0 \to \mu^+ \mu^-$ decays expected in our dataset is
determined from analysis of the normalization decay channel $B^{\pm} \to
J/\psi K^{\pm}$, with $J/\psi \to \mu^+ \mu^-$, as described in detail in
Sec.~\ref{norm_mode}.
\section { Monte Carlo Simulation} \label{mc}
Detailed Monte Carlo (MC) simulations for both the $B_s^0 \to \mu^{+}
\mu^{-}$ signal and the $B^{\pm}
\to J/\psi K^{\pm}$ normalization channels are obtained using the {\sc pythia}
\cite{pythia} event generator, interfaced with the {\sc evtgen} \cite{evtgen}
decay package.
The MC includes primary production of $b\bar{b}$ quarks that are
approximately back-to-back in azimuthal angle, and also includes gluon splitting
$g \rightarrow b\bar{b}$ where the gluon may have radiated from any quark
in the event. The latter leads to a relatively collimated $b\bar{b}$ system
that produces the dominant background when both $b$ and $\bar{b}$ quarks
decay semileptonically to muons.
The detector response is simulated using {\sc geant} \cite{geant} and overlaid
with events from randomly collected $p\bar{p}$ bunch crossings to simulate
multiple $p\bar{p}$ interactions. A correction to the MC width of the
dimuon mass
distribution is determined from $J/\psi \to \mu^+ \mu^-$ decays in data,
and this correction is then scaled to the $B_s^0$ mass region.
The $B_s^0 \to \mu^+ \mu^-$ mass
distribution in the MC is well described by a double Gaussian function
with the two means
constrained to be equal, but with the widths ($\sigma_1$ and $\sigma_2$) and
relative fractions determined by a fit to the corrected mass distribution.
The average width is
$\sigma_{av}=f\sigma_1 + (1-f)\sigma_2=125$~MeV, where $f$ is the fraction of the
area associated with $\sigma_1$.
We measure the trigger efficiencies in the data using events
with no requirements other than a
$p\bar{p}$ bunch crossing (zero-bias events) or events requiring only
an inelastic $p\bar{p}$ interaction
(minimum-bias events).
The MC generation does not include trigger efficiencies,
but the MC events are
reweighted to reproduce the trigger efficiency as a function of the
muon transverse
momentum ($p_T$). In addition, the MC events are corrected
to describe the $p_T$
distribution of $B$ mesons above the trigger threshold, as determined from
$B^{\pm} \to J/\psi K^{\pm}$ decays. Since the trigger conditions changed
throughout the course of Run II, the $p_T$ corrections are determined separately
for five different data epochs, with each epoch typically separated by
an
accelerator shut-down of a few months' duration.
Figure~\ref{pts} compares data and MC for several $p_T$ distributions
in the normalization channel, after these corrections.
The
background components in the $B^{\pm}$ distributions are removed by
a side-band subtraction technique, that is, by subtracting the
corresponding
distributions from events above and below the
$B^{\pm}$ mass region. As can be seen
in Fig.\ \ref{pts}, the $p_T$ distributions in the MC simulation and
normalization channel data are generally in excellent agreement.
Figure \ref{pts} shows a single data epoch, but all data epochs show
similar agreement.
\begin {figure*}[!th]
\begin {center}
\subfigure[]{\label{ptmu1}\includegraphics [width=3.0in] {ptmu1_fig.eps}}
\subfigure[]{\label{ptmu2}\includegraphics[width=3.0in] {ptmu2_fig.eps}}
\subfigure[]{\label{ptjpsi}\includegraphics[width=3.0in] {ptpsi_fig.eps}}
\subfigure[]{\label{Kpt}\includegraphics[width=3.0in] {ptK_fig.eps}}
\subfigure[]{\label{Bpt}\includegraphics[width=3.0in] {ptB_fig.eps}}
\caption {(color online) Comparison of $p_T$ distributions for data and MC
simulation, for the normalization channel $B^{\pm} \to J/\psi K^{\pm}$, in
a single
data epoch, (a) for the higher-$p_T$ (leading) muon, (b) lower-$p_T$ (trailing)
muon, (c) $J/\psi$, (d) kaon, and (e) $B^{\pm}$ meson.
All distributions are normalized to unit area. }
\label{pts}
\end{center}
\end{figure*}
In addition to the signal MC, we also study the $B_s^0 \to KK$ background
using a sample
of MC events that contains about six times the
expected number of such events in our data
sample.
\section { Event selection}
The $B_s^0$ candidate events selected for further study are chosen as follows.
We select two high-quality, oppositely-charged muons based on information from
both the central tracker and the muon detectors.
The primary vertex (PV) of each $p\bar{p}$ interaction is defined
using all available well-reconstructed tracks and constrained by the mean
beam-spot position in the transverse plane.
If a bunch crossing has more than one $p\bar{p}$ interaction vertex, we
ensure that both muons are consistent with originating from the same PV.
Tracks
reconstructed in the central tracker are required to have at least two hits in
both the SMT and CFT detectors. These tracks are extrapolated to the muon
system, where they are required to match hits observed in the muon detectors.
Each muon is required to have transverse momentum $p_T>1.5$~GeV and to have
pseudorapidity $|\eta|<2$. Both muons are required to have hits in the muon
detectors in front of the toroids, and at least one of the muons must also have
hits in at least one of the muon layers beyond the toroids. To reduce
combinatorial backgrounds, the two muons must form a three-dimensional vertex
with $\chi^2/dof<14$. The dimuon vertex is required to be well separated from
the PV by examining the transverse decay
length. The transverse decay length $L_T$ is defined as $L_T = \vec{l}_T \cdot
\vec{p}_T/|\vec{p}_T|$, where the vector $\vec{l}_T$ is from the PV
to the dimuon vertex in the transverse plane, and
$\vec{p}_T$ is the transverse
momentum vector of the dimuon system. The quantity
$\sigma_{L_T}$ is the
uncertainty on the transverse decay length determined from track parameter
uncertainties and the uncertainty in the position of the PV.
To reduce prompt backgrounds, the
transverse decay length significance of the dimuon vertex, $L_T/\sigma_{L_T}$,
must be greater than three. Events are selected for further study if the dimuon
mass $M_{\mu \mu}$ is between 4.0~GeV and 7.0~GeV. These criteria are intended
to be fairly loose to maintain high signal efficiency, with further
discrimination provided by the multivariate technique discussed in
Sec.~\ref{bdt}.
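The transverse decay length and its significance, as defined above, amount to the following computation (an illustrative Python sketch with hypothetical inputs; the actual analysis propagates track-parameter and PV-position uncertainties into $\sigma_{L_T}$):

```python
import numpy as np

def decay_length_significance(pv, sv, p_T, sigma_LT):
    """Transverse decay length L_T = l_T . p_T / |p_T| and its significance.

    pv, sv: primary and dimuon vertex positions in the transverse plane;
    p_T: transverse momentum vector of the dimuon system;
    sigma_LT: uncertainty on L_T (taken here as a given input).
    """
    l_T = np.asarray(sv, dtype=float) - np.asarray(pv, dtype=float)
    p_T = np.asarray(p_T, dtype=float)
    L_T = float(l_T @ p_T) / np.linalg.norm(p_T)
    return L_T, L_T / sigma_LT
```

Candidates pass the prompt-background cut when the returned significance exceeds three.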
The normalization channel decays $B^{\pm} \to J/\psi K^{\pm}$ with
$J/\psi \to \mu^+\mu^-$ are
reconstructed in the data by
first finding the decay $J/\psi \to \mu^+ \mu^-$ and then adding a third
track, assumed to be a charged kaon, to the dimuon vertex.
The selection criteria
for the signal and normalization channel are kept as similar as possible.
In
addition to the above requirements on the muons, we require the $K^{\pm}$
to have $p_T >$ 1 GeV and $|\eta|<2$, and we require the three-track vertex to
have $\chi^2/dof<6.7$. In the normalization channel the dimuon mass is required
to be in
the $J/\psi$ mass region, 2.7~GeV $<M(\mu^+\mu^-)<$ 3.45~GeV.
\section{ Determination of the Single Event Sensitivity}\label{norm_mode}
To determine the number of $B_s^0 \to \mu^+ \mu^-$ decays we expect in the data, we
normalize to the number of $B^{\pm}\to J/\psi K^{\pm}$ candidates observed in the data.
The number of $B^{\pm}\to J/\psi K^{\pm}$ decays is used to determine the
single event sensitivity (SES), defined as the branching fraction
for which one event is expected to be present in the dataset. The SES
is calculated from
\begin{eqnarray}
\mathrm{SES} &=& \frac{1}{N(B^{\pm})} \times
\frac{\epsilon(B^{\pm})}{\epsilon(B_s^0)} \times \frac{f(b \to B^{\pm})}{f(b\to B_s^0)} \times \nonumber \\
&& {\cal B}(B^{\pm} \to J/\psi K^{\pm}) \times {\cal B}(J/\psi \to \mu^{+} \mu^{-}). \nonumber
\end{eqnarray}
In this expression $N(B^{\pm})$ is the number of $B^{\pm} \to J/\psi K^{\pm}$
decays observed in the data, as discussed below.
The efficiency for reconstructing the
normalization channel decay, $\epsilon(B^{\pm})$, and the signal channel,
$\epsilon(B_s^0)$, are determined from MC
simulations as discussed in more detail below. The fragmentation
ratio $f(b\to B^{\pm})/f(b \to B_s^0)$ is the
relative probability of a $b$ quark fragmenting to a $B^{\pm}$ compared to a
$B_s^0$. We use the
``high energy'' average $f(b\to B_s^0)/f(b\to
B^{\pm})$ = 0.263 $\pm$ 0.017
provided by the Heavy Flavor Averaging Group \cite{hfag} for the
2012 Particle Data Group compilation \cite{pdg12}, which is
consistent with other recent measurements \cite{frag}. The product of the
branching
fractions $\cal{B}$$(B^{\pm} \to J/\psi K^{\pm})
\times $$\cal{B}$$(J/\psi \to \mu^{+} \mu^{-})$ is $(6.01 \pm 0.21) \times
10^{-5}$ \cite{pdg12}.
Figure~\ref{norm}
shows the normalization channel mass distribution, $M(\mu^+ \mu^- K)$, for the
entire Run II dataset.
\begin {figure} [h]
\begin{center}
\includegraphics [width=3.5in] {Normalization_2a2b.eps}
\caption{ (color online) Invariant mass distribution for the normalization
channel
$B^{\pm} \to J/\psi K^{\pm}$ for the entire Run II dataset. The
full fit is shown as the solid line, the $B^{\pm} \to J/\psi K^{\pm}$
contribution is shown as the dashed line, the exponential background is
shown as
the dotted line, and the contribution from partially reconstructed $B$
meson decays is shown as the dot-dash line. }
\label{norm}
\end{center}
\end{figure}
The mass distribution is fitted to a
double Gaussian function to model the normalization channel decay and an exponential
function to model the dominant background. A hyperbolic tangent
threshold function is also included in the fit to
model partially reconstructed $B$ meson decays, primarily $B^0_d \to J/\psi
K^{0*}$. A possible contribution from
$B^{\pm} \to J/\psi \pi^{\pm}$ is also included in the fit, although this
contribution is not statistically significant and is not shown
in Fig.~\ref{norm}. Systematic uncertainties on $N(B^{\pm})$ are determined
from
variations in the mass range of the fit, the histogram binning, and the
background model. An additional systematic uncertainty on $N(B^{\pm})$ is due to the
candidate selection. If an event has more than one $B^{\pm} \to J/\psi K^{\pm}$
candidate, we retain only the candidate with the best vertex $\chi^2$. This
choice results in fewer overall reconstructed $B^{\pm} \to J/\psi K^{\pm}$
decays but also less background. To determine the systematic effect due to this
choice, we have reconstructed $B^{\pm} \to J/\psi K^{\pm}$ decays in two of the
five data epochs retaining all candidates. The SES depends on the ratio
$N(B^{\pm})/\epsilon(B^{\pm})$, and we find that this ratio varies by at most
2.2\%, which we take
as an additional systematic uncertainty on $N(B^{\pm})$.
We observe a total of
$(87.4\pm 3.0)\times 10^3$ $B^{\pm} \to J/\psi K^{\pm}$ decays in the full
dataset, where the uncertainty includes both statistical and
systematic effects.
The ratio of reconstruction efficiencies that enters into the SES is
determined
from MC simulation.
One source of systematic uncertainty in the efficiency ratio arises from
the trigger efficiency corrections applied to the MC, as described in
Sec.~\ref{mc}. The variation
in these corrections over data epochs with similar trigger conditions
allows us
to set a 1.5\% systematic uncertainty on the efficiency ratio due to this
source. An additional systematic uncertainty arises from the
efficiency for finding a third track. There could be a data/MC
discrepancy in this efficiency that would not cancel in the ratio.
We evaluate this systematic uncertainty by
comparing the efficiency for finding an extra track in
data and MC in the four-track decay $B^0_d \to J/\psi K^{0*}$ with $K^{0*} \to
K\pi$ and in the three-track normalization channel decay $B^{\pm} \to J/\psi K^{\pm}$.
From this study, we determine that the data/MC efficiency ratio for
identifying the third track varies with data epoch but is on average 0.88 $\pm$
0.06, where the uncertainty
includes statistical uncertainties from the fits used to extract the
number of signal
events, and systematic uncertainties estimated from fit variations.
The efficiency for $B^{\pm}$ reconstruction is adjusted in each
data epoch for this track-finding efficiency correction.
The reconstruction efficiency ratio $\epsilon(B^{\pm})/\epsilon(B_s^0)$ is
determined to be (13.0 $\pm$ 0.5)\% on average, but
varies over the different data epochs by about 1.0\%.
The efficiency for the $B^{\pm} \to J/\psi K^{\pm}$ decay is impacted by the softer
$p_T$ distribution of the muons in the three-body decay as well as the
fairly hard ($p_T>1$~GeV) cut on the $p_T$ of the kaon, and the candidate selection which
retains only the three-track candidate with the best vertex $\chi^2$.
When all statistical and systematic uncertainties are taken into account,
the SES is found to be $(0.336 \pm 0.029) \times 10^{-9}$
before the multivariate selection, yielding a SM expectation of
10.4 $\pm$ 1.1 $B_s^0 \to \mu^{+} \mu^{-}$ events in our data sample.
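As a quick arithmetic cross-check of the normalization, the expected signal yield is simply the assumed SM branching fraction divided by the SES. The branching fraction value below is an assumed round number chosen to be consistent with the yield quoted above; it is not taken from this paper:

```python
# Expected signal yield from the single event sensitivity (SES):
# N_expected = B_SM / SES.  The SM branching fraction below is an
# assumed round value, consistent with the numbers quoted in the text.
BF_SM = 3.5e-9    # assumed SM branching fraction for Bs -> mu+ mu-
SES = 0.336e-9    # SES before the multivariate selection (from the text)

n_expected = BF_SM / SES
print(f"Expected SM signal events: {n_expected:.1f}")  # ~10.4
```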
\section {Multivariate Discriminant} \label{bdt}
A boosted decision tree (BDT) algorithm, as implemented in the {\sc tmva}
package
of {\sc ROOT} \cite{tmva}, is used to differentiate between signal
and the
dominant backgrounds. The BDT is trained using MC simulation for
the signal and data sidebands for the background. The data sidebands include events
in the dimuon mass range 4.0--4.9~GeV (low-mass sidebands) and 5.8--7.0~GeV (high-mass
sidebands), with all selection cuts applied.
The low-mass sidebands
are dominated by sequential decays, illustrated in Fig.~\ref{bg1_d},
while the high-mass sidebands are dominated by
double $B$ hadron decays, as illustrated in Fig.~\ref{bg2_d}. We therefore
train two BDTs
to separately discriminate against these two backgrounds.
Each BDT discriminant uses 30 variables that
fall into two general classes.
One class of variables includes kinematic and topological quantities
related to the dimuon system. These variables include the pointing angle,
defined as the angle between the dimuon momentum vector $\vec{p}(\mu^+\mu^-)$
and the vector from the PV to the dimuon vertex.
The dimuon $p_T$
and impact parameter, as well as the $p_T$ values of the individual muons
and their impact
parameters, are also used as discriminating variables. As examples of dimuon
system variables that discriminate between signal and background, Fig.\
\ref{ip1} shows the impact parameter significance (impact parameter divided by
its uncertainty) of the $B_s^0$ candidate for signal MC and background, and
Fig.\ \ref{ip2} shows the minimum impact parameter significance for
the individual muons, that is, the smaller of the two values.
\begin {figure*} [!th]
\begin{center}
\subfigure[]{\label{ip1} \includegraphics [width=3.0in] {B_ipsig.eps}}
\subfigure[]{\label {ip2}\includegraphics [width=3.0in] {minimal_muonip.eps}}
\caption{(color online) Comparison of signal MC and background sideband
data for (a) the $B_s^0$ candidate impact parameter significance and (b)
the minimum muon impact parameter significance.
All distributions are normalized to unit area. }
\label{ip}
\end{center}
\end{figure*}
A second general class of variables used in the BDT discriminants includes
various isolation-related quantities. Isolation is defined with respect to a
momentum vector $\vec{p}$ by constructing a cone in azimuthal angle $\phi$
and pseudorapidity $\eta$ around the momentum vector, with the cone radius defined by
${\cal R} = \sqrt{\Delta \eta^2 + \Delta \phi^2}$. The isolation ${\cal I}$
is then
defined as ${\cal I} = p_T/[p_T + p_T(\text{cone})]$
where $p_T(\text{cone})$ is the scalar sum of the $p_T$ of all tracks
(excluding the
track of interest) with ${\cal R}$ less than some cut-off value, chosen to
be ${\cal R}=1$ in
this analysis. For a perfectly isolated track (that is, no other tracks in the
cone), ${\cal I} = 1$. Figure \ref{cartoon} shows that background events
are expected to be
less isolated than signal events. For maximum signal/background discrimination, we
define isolation cones around the dimuon direction and around each muon
individually. From simulation studies, we find that for background events, the
two muons are often fairly well separated in space, so using individual
isolation
cones around each muon adds discriminating power. Figure~\ref{isolation} compares
signal MC and data sidebands for two examples of isolation variables.
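The isolation definition above can be sketched in a few lines. This is a minimal illustration of the formula ${\cal I} = p_T/[p_T + p_T(\text{cone})]$ with cone radius ${\cal R}<1$, not the D0 reconstruction code:

```python
import math

def isolation(track, others, r_max=1.0):
    """Isolation I = pT / (pT + pT(cone)), where pT(cone) is the scalar
    sum of the pT of all other tracks within Delta R < r_max of `track`.
    Tracks are (pT, eta, phi) tuples; a minimal sketch of the definition
    in the text, not the D0 reconstruction code."""
    pt, eta, phi = track
    pt_cone = 0.0
    for pt_o, eta_o, phi_o in others:
        dphi = abs(phi - phi_o)
        if dphi > math.pi:               # wrap azimuthal difference into [0, pi]
            dphi = 2 * math.pi - dphi
        dr = math.hypot(eta - eta_o, dphi)
        if dr < r_max:
            pt_cone += pt_o
    return pt / (pt + pt_cone)

# A perfectly isolated track (no other tracks in the cone) has I = 1:
print(isolation((10.0, 0.0, 0.0), [(5.0, 3.0, 0.0)]))  # -> 1.0
```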
\begin {figure*} [!th]
\begin{center}
\subfigure[]{\label{iso1} \includegraphics [width=3.0in] {dimuon_isolation.eps}}
\subfigure[]{\label {iso2}\includegraphics [width=3.0in] {muon_isolation.eps}}
\caption{(color online) Comparison of signal MC and background sideband
data for (a) isolation defined with respect to the dimuon system and (b) for the average
of the two isolations defined with respect to the individual muons.
All distributions are normalized to unit area. }
\label{isolation}
\end{center}
\end{figure*}
We also search for additional vertices near the dimuon vertex using
two different techniques. As illustrated by Fig.\ \ref{cartoon}, in
background events the muons often form a good vertex with another charged
track. We try to reconstruct such vertices using tracks that are
associated with the same PV as the dimuon pair, which have an
impact parameter with respect to the PV of at least 30 microns,
and which have an impact parameter significance of at least 3.0.
If a track satisfying these requirements
forms a vertex with one of the muons with a vertex $\chi^2/dof<5.0$, we consider
this an additional vertex. Additional tracks, satisfying the same
requirements as above, can be included in this vertex
if they do not increase the vertex $\chi^2$ by more than 5.0.
This procedure is carried out with both muons, allowing for the
possibility of
finding an additional vertex with either or both of the muons. We also
attempt to reconstruct additional vertices using tracks that have an impact
parameter significance with respect to the dimuon vertex of less than
4.0. We
allow these vertices to include or not include one of the muons. When an additional
vertex is successfully reconstructed, the vertex $\chi^2$, the
invariant mass of the particles included in the vertex, and the vertex
pointing angle are used as discriminating variables in the BDTs. In the
case where no
such vertices are found, these variables are set to nonphysical values. We
find that, for the background sidebands, at least one additional
vertex is reconstructed 80\% of the time, while for the signal MC, one or more
additional vertices are found 40\% of the time.
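The first vertex-search procedure can be summarized in pseudocode form. The sketch below assumes simple track records and a stand-in vertex fitter `fit_vertex` (a hypothetical helper, not a D0 interface), and it omits the step that attaches further tracks to an already-found vertex:

```python
def find_extra_vertices(muons, pv_tracks, fit_vertex):
    """Search for additional vertices formed by a muon and a track from
    the same PV, following the selection quoted in the text."""
    # Track quality requirements: impact parameter of at least 30 microns
    # (0.0030 cm) and impact parameter significance of at least 3.0.
    candidates = [t for t in pv_tracks
                  if t["ip"] >= 0.0030 and t["ip_sig"] >= 3.0]
    vertices = []
    for mu in muons:                      # either (or both) muons may vertex
        for t in candidates:
            chi2_dof = fit_vertex([mu, t])
            if chi2_dof < 5.0:            # vertex quality requirement
                vertices.append([mu, t])
    return vertices

# Toy usage with a stand-in fitter that always returns chi2/dof = 1.0;
# only the first track passes the impact-parameter requirements.
found = find_extra_vertices(
    muons=[{"label": "mu+"}, {"label": "mu-"}],
    pv_tracks=[{"ip": 0.0040, "ip_sig": 4.0}, {"ip": 0.0010, "ip_sig": 5.0}],
    fit_vertex=lambda tracks: 1.0,
)
print(len(found))  # 2: the passing track paired with each muon
```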
To verify that the MC simulation is a good representation of the data,
we compare the sideband-subtracted normalization channel data with the
normalization channel MC. Figure~\ref{ipsignorm} compares the normalization
channel data and the MC simulation for the $B^{\pm}$ meson impact
parameter
significance
and the minimum muon impact parameter significance. Figure~\ref{isonorm}
shows the same comparison for the dimuon and individual muon isolation
variables.
We check all 30 variables used in the multivariate
discriminant to confirm good agreement between data and MC for the
normalization channel.
\begin {figure*} [th!]
\begin{center}
\subfigure[]{\label{ipnorm1}\includegraphics [width=3.0in] {Bipsig_normfig.eps}}
\subfigure[]{\label{ipnorm2}\includegraphics [width=3.0in] {minipsig_normfig.eps}}
\caption{(color online) Comparison of normalization channel MC and
sideband-subtracted data for (a)
$B^{\pm}$ impact parameter significance and (b) the minimum muon impact
parameter
significance. All distributions are normalized to unit area. }
\label{ipsignorm}
\end{center}
\end{figure*}
\begin {figure*} [th!]
\begin{center}
\subfigure[]{\label{isonorm1}\includegraphics [width=3.0in] {isonorm_fig.eps}}
\subfigure[]{\label{isonorm2}\includegraphics [width=3.0in] {isomu1norm_fig.eps}}
\caption{(color online) Comparison of normalization channel MC and
sideband-subtracted data for
(a) dimuon isolation and (b) the average of the two individual muon isolations.
All distributions are normalized to unit area. }
\label{isonorm}
\end{center}
\end{figure*}
We make additional requirements on both the data sidebands and the signal MC
before events are used in the BDT training. These requirements include dimuon
$p_T >5$~GeV and the cosine of the dimuon pointing angle $>0.95$. These
requirements are 78\% efficient on average in retaining signal events but exclude about
96\% of the background. We find a significant enhancement in background
rejection from the BDT discriminants using these additional requirements before
BDT training. These requirements are (93 $\pm$ 1)\% efficient for the normalization
mode MC, and (91 $\pm$ 3) \% efficient for the normalization mode data.
To improve the statistics available for training, the data
epochs
are combined and used together to train the BDT. The signal MC samples for
each data epoch are combined according to the integrated luminosity for each
epoch into a common sample. The data sidebands and signal MC are then
randomly split into three samples. Sample A, with 25\% of the events, is used
to train the BDTs. Sample B, with 25\% of the events, is used to optimize the
selections on the BDT response. Sample C, with 50\% of the events, is
used to
determine the expected signal (from the MC sample) and background (from the
data sideband sample) yields.
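The random splitting into samples A, B, and C can be sketched as follows; this is a minimal illustration of the 25/25/50 partition, not the analysis code:

```python
import random

def split_samples(events, seed=0):
    """Randomly split events into training (A, 25%), optimization (B, 25%)
    and yield-estimation (C, 50%) samples, as described in the text."""
    rng = random.Random(seed)
    shuffled = events[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    a = shuffled[: n // 4]
    b = shuffled[n // 4 : n // 2]
    c = shuffled[n // 2 :]
    return a, b, c

a, b, c = split_samples(list(range(1000)))
print(len(a), len(b), len(c))  # 250 250 500
```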
The results of the
{\sc TMVA} BDT training for both BDT1, trained to remove sequential decay
backgrounds,
and BDT2, trained to remove double semileptonic $B$ meson decays, can be
seen in Fig.~\ref{KS}. We check that the response of both BDT
discriminants is independent of dimuon mass over the relevant mass range.
The optimal BDT selections are determined by optimizing the
expected limit on ${\cal B}(B_s^0 \to \mu^+ \mu^-)$ and are found to be
BDT1 $>0.19$ and BDT2 $>0.26$.
\begin {figure*} [ht]
\begin{center}
\subfigure[]{\label{KS1}\includegraphics [width=3.0in] {KS_bdt1_f4.eps}}
\subfigure[]{\label{KS2}\includegraphics [width=3.0in] {KS_bdt2_f4.eps}}
\caption{(color online) Distributions of the BDT response for (a) BDT1, trained against
sequential decay backgrounds, and (b) BDT2, trained against double $B$ decay
backgrounds. MC simulation is used for the signal, while the data sidebands are used for
the backgrounds. The vertical lines denote the BDT selection cuts in the
analysis. All distributions are normalized to unit area. }
\label{KS}
\end{center}
\end{figure*}
\section {Background estimates and expected limit}
Figure~\ref{bdtcuts} shows the blinded dimuon mass distributions
before (Fig.~\ref{bdtcutsa}) and after (Fig.~\ref{bdtcutsb})
the BDT selection cuts
for the half of the data (sample C) used to estimate the
number of background events. The signal window within the
blinded region is chosen to
maximize the signal significance $S/\sqrt{S+B}$, where $S$ is the expected
number of signal events as determined from the SM branching fraction, and
$B$ is the expected background. The
number of expected background events is determined by a likelihood fit to
the data in the sideband
regions, which is then interpolated into the blinded region. The optimum signal
region is determined to be $\pm 1.6 \sigma$ centered on the $B_s^0$ mass, where
$\sigma$ = 125~MeV is the average width
of the double Gaussian used to fit the
dimuon mass distribution in the $B_s^0 \to \mu^+ \mu^-$ MC sample.
The blinded region includes a control region of width $2\sigma$ on
each side of the signal window.
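The window optimization can be illustrated with a toy calculation: take a Gaussian signal of 1.23 total expected events (the yield quoted later in the text) and a flat background of 1.25 events per unit of $\sigma$ (an assumption chosen so that a $\pm 1.6\sigma$ window contains roughly the 4.0 background events quoted later), and scan for the half-width that maximizes $S/\sqrt{S+B}$. With these illustrative inputs the toy optimum comes out near $\pm 1.5\sigma$, close to, but not exactly at, the $\pm 1.6\sigma$ used in the analysis:

```python
import math

def significance(n_sigma, s_tot, b_per_sigma):
    """S/sqrt(S+B) for a window of +-n_sigma around the peak: Gaussian
    signal of s_tot total events, flat background of b_per_sigma events
    per unit of sigma.  Toy inputs, not the analysis likelihood."""
    s = s_tot * math.erf(n_sigma / math.sqrt(2))  # signal fraction in window
    b = b_per_sigma * 2 * n_sigma                 # flat background in window
    return s / math.sqrt(s + b)

# Scan the half-width in steps of 0.1 sigma and keep the best value.
best_sig, best_width = max(
    (significance(n / 10, 1.23, 1.25), n / 10) for n in range(5, 40))
print(best_width)
```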
While only half of the dataset is shown, the numbers of expected
background
events quoted in Fig.~\ref{bdtcuts} are scaled to the full dataset. The
numbers given are for the
estimated dimuon background events in the signal region.
\begin {figure*} [!th]
\begin{center}
\subfigure[]{\label{bdtcutsa}\includegraphics [width=3.0in] {DataBG_SigReg_all_c_f1.eps}}
\subfigure[]{\label{bdtcutsb}\includegraphics [width=3.0in] {BG_all_blind_c_f3.eps}}
\caption{(color online) Dimuon mass
distribution for sample C (a) before and (b) after BDT selection
cuts. The edges of the
blinded region are denoted in (b) by the vertical lines
at 4.9 and 5.8 GeV, and the shaded area denotes the signal window.
The curves are fits to an exponential plus constant function. The numbers
of expected background events are determined from an interpolation of the fit
into the signal window and scaled to the full dataset.}
\label{bdtcuts}
\end{center}
\end{figure*}
The efficiency for retaining signal events when all BDT selections are applied,
including the pre-training cuts (see Sec.~\ref{bdt}) and the final BDT cuts,
is determined to be 0.12 $\pm$ 0.01, where the error is due to
variation over the different data epochs. We obtain a final SES of
(2.8 $\pm$ 0.24)$\times 10^{-9}$, corresponding to an expected number of
signal events at the SM branching fraction of 1.23 $\pm$ 0.13.
For the
dimuon background the expected number of events in the signal and control
regions is
determined by applying a log likelihood fit to the dimuon mass
distribution using an exponential plus constant functional form.
The fit is performed excluding the blinded region, and
the resulting fit is interpolated into the signal and control regions.
This procedure yields an expected number of dimuon background events
in the signal region of 4.0 $\pm$ 1.5 events, where the uncertainty is
only statistical. The corresponding estimate for the
expected number of events in the control region is $6.7 \pm 2.6$ events,
with $5.3 \pm 1.9$ events expected in the lower control region (dimuon
masses from 4.9 to 5.15~GeV), and $1.4 \pm 1.4$ events in the upper control
region (dimuon masses from 5.55 to 5.8~GeV).
To determine the systematic uncertainty on the
background estimate, we use other functional forms for the background fit,
resulting in a systematic uncertainty of 0.6 events. Adding the statistical
and systematic errors in quadrature yields a final dimuon background
estimate in the signal region of 4.0 $\pm$ 1.6 events and
$6.7 \pm 2.7$ events in the control region.
In addition to the dimuon background, there is background from
the decay mode $B_s^0 \to
K^+K^-$, which has kinematics very similar to the signal. We estimate this
background by scaling the expected number of signal events by the
appropriate branching fractions \cite{pdg12} and by the ratio of the
probabilities for both $K$ mesons to be misidentified as muons,
$\epsilon(KK\to \mu \mu)$, to
the probability that two muons are correctly identified as muons,
$\epsilon(\mu \mu \to \mu \mu)$. The probability
that a $K$ meson is misidentified as a muon is measured
in the data using $D^0 \to K \pi$ decays. We assume that the
probability of two $K$ mesons being misidentified as muons is the
product of the
probabilities for each individual $K$ meson. The muon identification efficiency
is measured in the data from $J/\psi \to \mu \mu$ decays. The efficiency
ratio $\epsilon(KK\to \mu \mu)/\epsilon(\mu \mu \to \mu \mu)$ is determined
to be $(3.0 \pm 1.1) \times 10^{-5}$. We estimate the background from
$B_s^0 \to KK$ decays to be 0.28 $\pm$ 0.11 events. We also find a
consistent
estimate of this background using a $B_s^0 \to KK$ MC sample. Other
possible peaking backgrounds such as $B^0_d \to K\pi$ and $B_s^0 \to K\pi$
are negligible due to the combination of smaller branching fractions and a
$\pi \to \mu$ misidentification probability that is more than a factor of
10 smaller than the $K \to \mu$ misidentification probability in the D0
detector.
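The scaling that produces the $B_s^0 \to KK$ background estimate is a one-line product. The branching-fraction values below are assumed PDG-era numbers, not taken from this paper; only the efficiency ratio and the expected signal yield are quoted in the text:

```python
# Sketch of the Bs -> KK peaking-background estimate: scale the expected
# number of signal events by the branching-fraction ratio and by the
# misidentification/identification efficiency ratio quoted in the text.
n_signal = 1.23               # expected SM signal events (from the text)
bf_kk = 2.6e-5                # assumed B(Bs -> K+ K-), PDG-era value
bf_mumu = 3.5e-9              # assumed SM B(Bs -> mu+ mu-)
eff_ratio = 3.0e-5            # eps(KK -> mumu) / eps(mumu -> mumu), from the text

n_kk = n_signal * (bf_kk / bf_mumu) * eff_ratio
print(f"{n_kk:.2f}")          # ~0.27, consistent with the 0.28 +- 0.11 quoted
```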
We set an upper limit on the $B_s^0 \to \mu^{+} \mu^{-}$ branching
fraction
using the CL$_s$, or modified frequentist, method \cite{junk}. A Poisson
likelihood function is used to
calculate the number of signal events which would occur with a probability of
0.05 (for a 95\% CL upper confidence limit) when $N_{\text{obs}}$ data events are
observed in the signal region with a known expected number of background events.
The limit calculation includes a convolution over probability
distributions representing the uncertainties in the background and the
signal. The uncertainty in the $B_s^0 \to KK$ peaking background
is assumed to be Gaussian. The dimuon background in the
signal region is estimated by the fit shown in Fig.\ \ref{bdtcutsb}. The
normalized likelihood function from this fit is used as the probability
distribution function for the dimuon background in the convolution.
The expected
number of signal events, assuming the SM branching fraction, is 1.23 $\pm$ 0.13
events, with the uncertainty assumed to be Gaussian. The total
expected background is 4.3 $\pm$ 1.6 events.
Weighting each possible outcome by its Poisson probability
yields an expected 95\% C.L. upper limit on the branching fraction ${\cal
B}(B_s^0
\to \mu^+\mu^-)$
of $23 \times 10^{-9}$.
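A stripped-down version of the limit construction, for a Poisson counting experiment with a known background, can be written in a few lines. This sketch ignores the CL$_s$ ratio and the convolution over background and signal uncertainties described above, so it is not expected to reproduce the quoted limits exactly; it only illustrates the counting logic:

```python
import math

def poisson_cdf(n, lam):
    return sum(math.exp(-lam) * lam**k / math.factorial(k) for k in range(n + 1))

def upper_limit_s(n_obs, b, cl=0.95, step=0.01):
    """Smallest signal mean s with P(N <= n_obs | s + b) <= 1 - cl.
    A simplified CLs+b construction with a known background: it omits
    the CLs ratio and the uncertainty convolution used in the paper."""
    s = 0.0
    while poisson_cdf(n_obs, s + b) > 1 - cl:
        s += step
    return s

s_up = upper_limit_s(n_obs=3, b=4.3)
ses = 2.8e-9
print(f"simplified 95% CL limit: {s_up * ses:.1e}")
```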
\begin {figure}[h!]
\begin{center}
\includegraphics [width=3.5in] {BG_all_unblinded_f7.eps}
\caption{(color online) Dimuon mass distribution in the blinded region
for the full dataset after BDT selections are applied.
The curve shows the fit from Fig.\
\ref{bdtcutsb} used to
determine the expected number of background events.
The SM expectation for signal events multiplied by five is also indicated.
The vertical lines mark the edge of the signal window.}
\label{final_mass}
\end{center}
\end{figure}
\begin {figure}[h!]
\begin{center}
\includegraphics [width=3.5in] {marj_all.eps}
\caption{(color online) Expected number of events and observed number of
events in the
signal region as the two BDT cuts are relaxed in parallel. The
expected
number of events includes the
dimuon background, the $B_s^0 \to KK$ background, and the expected number of
signal events. The upper horizontal axis shows the cut applied to BDT1,
while the lower horizontal axis shows the cut applied to BDT2.
}
\label{marjplot}
\end{center}
\end{figure}
Upon unblinding, a total of nine events is found in the control region
above and below the signal region, as shown in Fig.\ \ref{final_mass}.
Six events are found in the control region below the signal window, and
three events
are found in the control region above the signal window.
The number of events and their distribution within the control regions
are in agreement
with the expected number of background events interpolated from the data
sidebands. As seen in Fig.\ \ref{final_mass}, three
events are found in the dimuon mass
signal window, in agreement with the expected background and also
with the expected signal + background.
We check that the properties of all events
found in the blinded region, such as the $p_T$ of the dimuon system, the
$p_T$ of the individual muons, the dimuon pointing angle, and the various isolation
quantities, are consistent with expectations.
We also check that, as the BDT cuts are relaxed, the
number of events observed in the signal region remains in good
agreement with expectations, as shown in Fig.\ \ref{marjplot}.
The observed number of events and the SES allow us to set a 95\% C.L.
upper limit ${\cal B}(B_s^0 \to \mu^{+} \mu^{-}) < 15 \times 10^{-9}$.
\section {Summary} In summary, we have searched for the rare decay $B_s^0
\to \mu^+ \mu^-$ in the full D0 dataset. We employ two boosted decision
tree (BDT) multivariate discriminants, one trained to discriminate against
sequential decays $b(\bar{b}) \to c\mu^-(\bar{c}\mu^+)X$ followed by
$c(\bar{c}) \to \mu^+ (\mu^-)X$ and the other to discriminate against
double semileptonic decays $b\to \mu^- X$ and $\bar{b} \to \mu^+ X$.
The sidebands around the signal region in the dimuon invariant mass distribution
are used to
estimate the dominant backgrounds. The
expected limit is 23 $\times 10^{-9}$, and the expected background (signal) in the
signal region is 4.3 $\pm$ 1.6 (1.23 $\pm$ 0.13) events. We observe three events in the
signal region consistent with expected background. The probability that
the background alone (signal + background) could produce the observed number of events or a
larger number of events in the signal region is 0.77 (0.88). We set an observed
95\% C.L. upper limit ${\cal B}(B_s^0 \to \mu^{+} \mu^{-}) < 15 \times
10^{-9}$. This upper limit supersedes the previous D0 95\% C.L. limit of
51 $\times 10^{-9}$ \cite {masato}, and improves upon that limit by a
factor of 3.4. The improvement in the expected limit is a factor of 1.7
greater than the improvement that would be expected due to increased
luminosity alone. The additional
improvement arises from the inclusion of several isolation-type variables
in the multivariate discriminants and in the use of two separate
discriminants to distinguish backgrounds from sequential $b$
quark decays and double $b$ quark decays. This result is the most
stringent Tevatron limit and is
compatible with the recent evidence of this decay produced by the LHCb
experiment \cite{newlhcb}.
We thank the staffs at Fermilab and collaborating institutions,
and acknowledge support from the
DOE and NSF (USA);
CEA and CNRS/IN2P3 (France);
MON, NRC KI and RFBR (Russia);
CNPq, FAPERJ, FAPESP and FUNDUNESP (Brazil);
DAE and DST (India);
Colciencias (Colombia);
CONACyT (Mexico);
NRF (Korea);
FOM (The Netherlands);
STFC and the Royal Society (United Kingdom);
MSMT and GACR (Czech Republic);
BMBF and DFG (Germany);
SFI (Ireland);
The Swedish Research Council (Sweden);
and
CAS and CNSF (China).
\section{Introduction}
Large $N$ duality plays the central role in understanding dynamics of physical string theory. This duality is inherited by the simpler, topological string with target space a Calabi-Yau three-fold \cite{GV, DV, IH}. The topological large $N$ duality, like the large $N$ duality of the physical string theory, relates the gauge theory on D-branes to closed topological string on a different background. In the topological string case, the duality is in principle tractable, since topological string is tractable.
In some cases, the study of topological string theory is related to the study of supersymmetric gauge theories in 4d with ${\mathcal N}=2$ supersymmetry; see e.g. \cite{N2, Neitzke:2004ni} and [V:13]. It is natural to ask what the large $N$ duality of topological string theory means in gauge theory terms. We will see that the large $N$ duality of the topological string becomes a gauge/vortex duality \cite{DH1, DH2, Simonstalk}, which relates a 4d gauge theory, in a variant of the 2d $\Omega$ background with flux, to the theory living on its vortices.\footnote{For early studies leading to \cite{DH1, DH2, Simonstalk}, see \cite{Dorey, DHT, HananyTong, HananyTong2, Shifman:2004dr}. } The vortices in the gauge theory play the role of the D-branes of the topological string. In fact, the gauge theory duality implies the topological string duality, but not the other way around.
What does this have to do with the AGT correspondence \cite{AGT}? As we will review, \cite{DVt} conjectured that the large $N$ duality of the topological string provides a physical explanation for the AGT correspondence, under certain conditions: the conformal block should admit a free field representation, and the Liouville theory should have central charge $c=1$ to correspond to the topological string.
We interpret this purely in gauge theory language, in the context of the gauge/vortex duality, and show that this leads to a proof of the correspondence in a fairly general setting. The partition function of the 4d ${\mathcal N}=2$ gauge theory associated in \cite{G2, Gaiotto:2009hg} to a genus zero Riemann surface with an arbitrary number of punctures equals the conformal block of Liouville theory with arbitrary central charge $c$ on the same surface. The free field representation of conformal blocks implies that the Coulomb moduli are quantized, but all other parameters remain arbitrary. The crucial role vortices play extends the AGT correspondence to a triality -- between the gauge theory, its vortices, and Liouville theory. The striking aspect of this result, which appeared first in \cite{AHKS}, is the simplicity of the proof. While in this review we focus on the simplest variant of the AGT correspondence, relevant for Liouville theory, the same ideas apply to more general Toda CFTs (Liouville theory corresponds to $A_1$ Toda). The generalization to the $A_n$ Toda case can be found in \cite{AHS}.\footnote{Proofs of (some aspects of) the AGT correspondence using different ideas appeared in \cite{Fateev:2009aw, Alba:2010qc, Mironov:2010pi, Morozov:2013rma, Braverman}. }
\section{Background}
Alday, Gaiotto and Tachikawa \cite{AGT} conjectured a correspondence between conformal blocks of Liouville CFT and partition functions of a class of four-dimensional theories in the 4d $\Omega$-background \cite{N2}. The 4d theories are conformal field theories with ${\mathcal N}=2$ supersymmetry defined in \cite{G2, Gaiotto:2009hg} (see also [V:1]) in terms of a pair of M5 branes wrapping a Riemann surface $C$, which we will call the Gaiotto curve. Specifying both the conformal block and the 4d theory ${\mathcal T}_{4d}$ in this class involves a choice of the curve $C$ with punctures, data at the punctures, and a pants decomposition. The conjecture is often referred to as the 4d/2d correspondence.
\subsection{4d Gauge Theory}
Let $\Sigma$ be the Seiberg-Witten curve of ${\mathcal T}_{4d}$,
\begin{equation}\label{4dcurve}
{ \Sigma}:\qquad \qquad p^{2} + \phi^{(2)}(z)=0,
\end{equation}
with the meromorphic one-form $\lambda = p\, dz$. ${\Sigma}$ is a double cover of $C$, $z$ is a local coordinate on $C$, and $\phi^{(2)}(z) (dz)^2$ is a degree 2 differential on $C$, whose choice specifies the IR data of the theory (the point on the Coulomb branch). Specifying the UV data of the theory requires fixing the behavior of the Seiberg-Witten differential $\lambda$ near the punctures.
At a puncture at $z=z_i$, $\lambda$ has a pole of order $1$, with residues
$$
p\sim \pm {\alpha_{i}\over z- z_i}
$$
on the two sheets. These lead to second-order poles of $\phi^{(2)}(z)\,dz^2$. In the gauge theory, the $\alpha_i$'s and $z_i$'s are the UV data: the mass parameters and the gauge couplings. $\Sigma$ also depends on the IR data of the gauge theory, the choice of Coulomb branch moduli. These are associated to the sub-leading behavior of $\phi^{(2)}(z)$ near the punctures.
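Concretely, squaring the residue of $p$ shows how the simple poles of $\lambda$ translate into the second-order poles of the quadratic differential; the simple-pole coefficient (denoted $c_i$ here purely for illustration) carries the sub-leading, Coulomb-branch data:

```latex
% Near a puncture at z = z_i, using p^2 + phi^(2)(z) = 0:
p \;\sim\; \pm\,\frac{\alpha_i}{z - z_i}
\qquad\Longrightarrow\qquad
\phi^{(2)}(z) \;=\; -\,p^{2} \;\sim\;
-\,\frac{\alpha_i^{2}}{(z - z_i)^{2}} \;+\; \frac{c_i}{z - z_i}
\;+\; \text{regular}, \qquad z \to z_i .
```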
Let
$${\mathcal Z}_{{\mathcal T}_{4d}}(\Sigma)
$$
be the partition function of the theory, in 4d $\Omega$-background. Given a gauge theory description of ${\mathcal T}_{4d}$, ${\mathcal Z}_{{\mathcal T}_{4d}}(\Sigma)$ can be computed using results of Nekrasov in \cite{N2} (see also [V:3]).
In addition to the geometric parameters entering $\Sigma$,
${\mathcal Z}_{{\mathcal T}_{4d}}$ depends on
$$\epsilon_1, \;\; \epsilon_2,
$$
the two parameters of the $\Omega$ background \cite{N2}. ${\mathcal Z}$ can in principle depend on data beyond the geometry of $\Sigma$; different choices of the pants decomposition can lead to different descriptions of the theory with different but related ${\mathcal Z}$'s.
\subsection{2d Liouville CFT}
The Liouville CFT has a representation in terms of a boson $\phi$:
$$ S_{Liouv.} = \int dz d{\bar z} \;\sqrt g \; [g^{z {\bar z}}\partial_z \phi \, \partial_{\bar z} \phi + Q \phi R + e^{2 b \phi} ].
$$
Consider a conformal block on $C$ with insertions of primaries with momenta $\alpha_i$ at points $z_i$:
$$
{\mathcal B}(\alpha, z)=\langle V_{\alpha_0}(z_0) \cdots V_{\alpha_{\ell}}(z_{\ell}) V_{\alpha_{\infty}}(\infty)\rangle,
$$
where
$$
V_{\alpha}(z) = \exp\left( - \frac{\alpha}{b} \phi(z) \right)
$$
is the vertex operator of a primary with momentum $\alpha$.
Above, $Q$ is the background charge, $Q= b+{1\over b} $; Liouville theory with this background charge has central charge $c=1+6 Q^2$.
In addition to momenta and positions of the vertex operators inserted, the conformal block depends on the momenta in the intermediate channels; in denoting the conformal block by ${\mathcal B}( \alpha, z)$ we have suppressed the dependence on the latter.
\subsection{The correspondence}
The conjecture of \cite{AGT} is that the partition function ${\mathcal Z}_{{\mathcal T}_{4d}}(\Sigma)$ computes a conformal block of Liouville CFT on $C$:
$${\mathcal Z}_{{\mathcal T}_{4d}}(\Sigma)={\mathcal B}(\alpha, z),
$$
where $b$ is related to two parameters $\epsilon_{1,2}$ by
$$
b= \sqrt{{\epsilon_1 \over \epsilon_2}},
$$
while the parameters $\alpha_i$, $z_i$ of $\Sigma$ map to the corresponding parameters in the conformal block and the Coulomb branch parameters map to the momenta in intermediate channels.
\section{AGT and Large $N$ Duality}
In \cite{DVt}, Dijkgraaf and Vafa explained the correspondence in the particular case of the self-dual $\Omega$-background,
\begin{equation}\label{self}
\epsilon_1= g_s =- \epsilon_2,
\end{equation}
in terms of a large $N$ duality in topological string theory. The argument of \cite{DVt} has three parts, which we will now describe. As everywhere else in this review, we will focus on the case when the Gaiotto curve $C$ is genus zero. One can extend the argument more generally \cite{DVt}, as all the ingredients generalize to $\Sigma$ a double cover of an arbitrary genus $g$ Riemann surface $C$.
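The requirement that the self-dual background corresponds to $c=1$ Liouville theory, mentioned in the introduction, follows directly from the dictionary $b=\sqrt{\epsilon_1/\epsilon_2}$: for $\epsilon_1=-\epsilon_2$ one gets $b=i$, hence $Q=b+1/b=0$ and $c=1+6Q^2=1$. A one-line numerical check using complex arithmetic:

```python
# Self-dual Omega background: epsilon_1 = -epsilon_2, so b = sqrt(-1) = i.
b = complex(0.0, 1.0)   # b = i
Q = b + 1 / b           # background charge Q = b + 1/b
c = 1 + 6 * Q**2        # Liouville central charge c = 1 + 6 Q^2

print(Q, c)             # 0j (1+0j): Q = 0 and c = 1
```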
\subsection{The Physical and the Topological String }
The gauge theory partition function ${\mathcal Z}_{{\mathcal T}_{4d}}(\Sigma)$ in the self-dual $\Omega$-background is conjectured in \cite{DVt} to be the same as the partition function
$$
Z(Y_{\Sigma})
$$
of the topological B-model on a Calabi-Yau manifold $Y_{\Sigma}$, with topological string coupling $g_s$. The Calabi-Yau $Y_{\Sigma}$ is a hypersurface
\begin{equation}\label{4dcy}
Y_{ \Sigma}:\qquad \qquad p^{2} +\phi^{(2)}(z)=uv,
\end{equation}
with holomorphic $(3,0)$ form $du\, dp\, dz/u$. The geometry of $Y_{\Sigma}$ and the Seiberg-Witten curve $\Sigma$ \eqref{4dcurve} are closely related: the latter is recovered from the former by setting $u$ or $v$ to zero.
This is a consequence of two facts. First, one observes that IIB string theory on $Y_{\Sigma}$ is dual to M-theory with an M5 brane wrapping $\Sigma$.\footnote{This follows by compactifying M-theory with M5 brane on $\Sigma$ on a $T^2$ transverse to the M5 brane. Since the $T^2$ is transverse to the branes, it does not change the low energy physics. By shrinking one of the cycles of the $T^2$ first, we go down to IIA string theory with an NS5 brane wrapping $\Sigma$. T-dualizing on the remaining compact transverse circle, we obtain IIB on $Y_{\Sigma}$.} This gives us another way to obtain the same 4d, ${\mathcal N}=2$ theory ${\mathcal T}_{4d}$. Second, the partition function of IIB string theory on $Y_\Sigma$ times the self-dual $\Omega$ background is the same as the topological B-model string partition function on $Y_{\Sigma}$ \cite{N2, Losev:2003py, Hollowood:2003cv}. Thus, one can simply identify the physical and the topological string partition functions
\begin{equation}\label{gt}
{\mathcal Z}_{{\mathcal T}_{4d}}(\Sigma) =Z(Y_{\Sigma}).
\end{equation}
The power of this observation is that the topological B-model partition function is well defined even when the Nekrasov partition function is not -- because for example, the gauge theory lacks a Lagrangian description. It is also important that sometimes one and the same topological string background gives rise to several different Lagrangian descriptions for one and the same theory -- for example, $SU(2)^{l-2}$ with four fundamentals vs. $SU(l)$ with $2l$ fundamentals. The former is the theory which is usually associated in the AGT literature to Liouville theory on the sphere with $l+1$ punctures; the latter is the one that naturally comes out from our approach.
\subsection{Large $N$ Duality in Topological String}
Next, \cite{DVt} show that the B-model on $Y_\Sigma$ has a dual, holographic description in terms of $N$ topological B-model branes on a different Calabi-Yau, related to $Y_{\Sigma}$ by a geometric transition. Let us first describe the Calabi-Yau that results. Then, we will explain the duality.
\subsubsection{A Geometric Transition}
By varying Coulomb branch moduli of ${\mathcal T}_{4d}$ we can get the Seiberg-Witten curve $\Sigma$ to degenerate. Let us call the degenerate curve that results the $S$-curve:
\begin{equation}\label{4dcurve2}
{S}:\qquad \qquad p^2 - (W'(z))^2 =0.
\end{equation}
Here
$$
W'(z)=\sum_{i=0}^{\ell} {\alpha_i\over z- z_i},
$$
is determined by keeping the behavior of the Seiberg-Witten differential fixed at the punctures. The $S$-curve describes the degeneration of the Seiberg-Witten curve to two components, $p\pm W'(z)=0$. Correspondingly, a single M5 brane wrapping $\Sigma$ breaks into two branes, wrapping the two components.
The $S$-curve corresponds to a singular Calabi-Yau $Y_S$:
\begin{equation}\label{4dcy2}
Y_{ S}:\qquad \qquad p^{2}- (W'(z))^2 =uv,
\end{equation}
with singularities at $u=v=p=0$, at points in the $z$-plane where
$$
W'(z)=0.
$$
The Calabi-Yau we need is obtained by blowing up the singularities. One can picture this by viewing $Y_S$ as a family of $A_1$ surfaces, one for each point in the $z$-plane. At every $z$ there is an $S^2$ in the $A_1$ surface whose area is proportional to $|W'(z)|$. The singularity occurs where the $S^2$ shrinks. After blowing up, we get a family of $S^2$'s of non-zero area, one at each point in the $z$-plane, and all homologous to each other. The minimal area $S^2$'s are where the singularities were -- at points in the $z$-plane with $W'(z)=0$.
The geometric transition trades $Y_{\Sigma}$ for the blowup of $Y_{S}$. For economy of notation, we will denote $Y_S$ and its blowup in the same way, since their complex structures are the same, given by \eqref{4dcy2}.
\subsubsection{Large $N$ Duality}
The B-model on $Y_\Sigma$ has a holographic description in terms of the B-model on (the blowup of) $Y_S$ with $N$ topological B-model D-branes wrapping the $S^2$ class. The branes get distributed between the minimal $S^2$'s at points in the $z$-plane where $W'(z)$ vanishes. This breaks the gauge group from $U(N)$ to $\prod_{i=0}^{\ell} U(N_i)$, with $\sum_i N_i=N$. The Coulomb-branch moduli of $Y_{\Sigma}$ get related to the 't Hooft couplings $N_i g_s$ in the theory on the B-branes. The remaining parameters, $\alpha$, $z$, and the topological string coupling $g_s$, are the same on both sides. This is the topological B-string version of gauge/gravity duality \cite{DV}.
The large $N$ duality relates the closed topological string partition function of the B-model on $Y_\Sigma$, and thus the partition function ${\mathcal Z}(\Sigma)$, to the partition function of the $N$ topological B-branes on (the blowup of) $Y_S$,
$$
{Z}(Y_\Sigma) = {Z}(Y_S; N).
$$
The right hand side depends not only on the net number of branes, but also on how they are split between the different ${\mathbb P}^1$'s.
The partition function of $N$ B-type branes wrapping the $S^2$ in a Calabi-Yau of the form \eqref{4dcy2} was found in \cite{DV}. It equals
\begin{equation}\label{matinx}
{1\over {\rm vol}(U(N))} \int d\Phi\; \exp( \,{\rm Tr} W(\Phi)/g_s),
\end{equation}
where ${\rm vol}(U(N))$ is the volume of $U(N)$. The integral is a holomorphic integral, over $N\times N$ complex matrices $\Phi$. In evaluating it, one has to pick a contour, ending at a critical point of the potential. In the present case,
$$
W(x) = \sum_i \alpha_i \log(x-z_i).
$$
Diagonalizing $\Phi$ and integrating over the angles, the integral reduces to
\begin{equation}\label{matin}
{Z}(Y_S; N)={1\over N!} \int d^N x \prod_{ I< J} (x_I-x_J)^2 \prod_{I, i}(x_I-z_i)^{\alpha_i/g_s}.
\end{equation}
Here $N!$ is the order of the Weyl group that remains as a group of gauge symmetries.
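The reduction to eigenvalues is the same diagonalization familiar from Hermitian one-matrix models. As a toy numerical illustration of the induced Vandermonde-squared measure (using a Gaussian stand-in, not the holomorphic model above), one can check that $\langle {\rm Tr}\, M^2\rangle$ for a $2\times 2$ Hermitian matrix with weight $e^{-{\rm Tr}\, M^2/2}$, which equals $4$ when computed directly over the four real entries, is reproduced by the two-eigenvalue integral with the $(\lambda_1-\lambda_2)^2$ Jacobian:

```python
import math

# Toy check of the diagonalization measure: for a 2x2 Hermitian matrix M
# with weight exp(-Tr M^2 / 2), integrating out the angular (unitary)
# variables leaves the squared Vandermonde (l1 - l2)^2 on the eigenvalues.
def avg_tr_m2(n=400, cutoff=8.0):
    """<Tr M^2> from the eigenvalue representation, by midpoint quadrature."""
    h = 2 * cutoff / n
    num = den = 0.0
    for i in range(n):
        l1 = -cutoff + (i + 0.5) * h
        w1 = math.exp(-l1 * l1 / 2)
        for j in range(n):
            l2 = -cutoff + (j + 0.5) * h
            w = w1 * math.exp(-l2 * l2 / 2) * (l1 - l2) ** 2
            den += w
            num += w * (l1 * l1 + l2 * l2)
    return num / den

# Entry-by-entry Gaussian integration gives <a^2> + <b^2> + 2<|c|^2> = 4.
print(avg_tr_m2())  # close to 4.0
```

The agreement confirms that the squared Vandermonde is the correct Jacobian of the angular integration; in \eqref{matin} the same factor appears for the holomorphic integral over complex matrices.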
The claim is that the large $N$ expansion of the integral equals the topological B-model partition function on \eqref{4dcy}. At the level of planar diagrams this can be seen as follows. In the matrix integral, define an operator
\begin{equation}\label{mmphi}
\partial \phi(z) = W'(z) + g_s \sum_I {1\over z- x_I},
\end{equation}
where $x_I$ are the eigenvalues of $\Phi$.
The expectation value of
$$T(z) = ( \partial \phi)^2$$
computed in the matrix theory captures the geometry of the underlying Riemann surface by identifying $\phi^{(2)}(z)$ in \eqref{4dcurve} with
$$
\phi^{(2)}(z) = \langle T(z) \rangle.
$$
There are two limits in which a classical geometry emerges from this. First, by simply sending $g_s$ to zero we recover the $S$-curve, since then $ \langle T(z) \rangle =(W')^2$. But there is also a new classical geometry that emerges at large $N$. Letting the $N_i$ go to infinity while keeping $N_i g_s$ fixed, we get
$$
\langle T(z) \rangle \sim (W'(z))^2+f(z),
$$
with
$$f(z) = \langle g_s \sum_I {W'(z) - W'(x_I)\over z-x_I}\rangle.
$$
From the form of the potential $W(z)$, it follows that $f(z)$ has the form
$$
f(z) = \sum_i {\mu_i \over z-z_i},
$$
with at most simple poles. Thus, the branes deform the geometry of the Calabi-Yau we started with. The resulting Calabi-Yau is exactly of the form $Y_{\Sigma}$ \eqref{4dcy}, corresponding to the Seiberg-Witten curve $\Sigma$ in \eqref{4dcurve} at a generic point of its moduli space.
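To see why only simple poles arise, use partial fractions: with $W'(z)=\sum_i \alpha_i/(z-z_i)$, one has
$$
{W'(z) - W'(x)\over z-x} = -\sum_i {\alpha_i \over (z-z_i)(x- z_i)},
$$
so that
$$
f(z) = \sum_i {\mu_i\over z-z_i}, \qquad \mu_i = -g_s\, \Big\langle \sum_I {\alpha_i \over x_I-z_i} \Big\rangle ,
$$
with manifestly first-order poles at the punctures $z=z_i$.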
The large $N$ duality is expected to hold order by order in the $1/N$ expansion; we just gave evidence it holds in the planar limit (the full proof of the correspondence in the planar limit is easy to give along these lines, see \cite{DV}). The good variable in the large $N$ limit turns out to be the chiral operator $\phi(z)$ we defined in \eqref{mmphi}. The field $\phi(z)$ is in fact the string field of the B-model.
The B-model string field theory, called Kodaira-Spencer theory of gravity, was constructed in \cite{BCOV}, capturing variations of complex structure. For Calabi-Yau manifolds of the form \eqref{4dcy} the Kodaira-Spencer theory becomes a two dimensional theory on the curve $\Sigma$. The theory describes variations of complex structures of $Y_{\Sigma}$, so the Kodaira-Spencer field can be identified with fluctuations of the holomorphic $(3,0)$ form of the Calabi-Yau. For $Y_{\Sigma}$ fluctuations of the $(3,0)$ form are equivalent to fluctuations of the meromorphic $(1,0)$ form on $\Sigma$:
$$
\delta \lambda = \delta p dz= \partial \phi dz.
$$
The Kodaira-Spencer field is a chiral boson $\phi$ which lives on $\Sigma$. When $\Sigma$ is a double cover of a curve $C$, a single boson on $\Sigma$ is really a pair of bosons $\phi_{1}$, $\phi_2$ on $C$, one corresponding to each sheet. The field $\phi$ that arises in the matrix model in \eqref{mmphi} can be thought of as the off-diagonal combination of the two. The diagonal combination is the center-of-mass degree of freedom and decouples from the dynamics of the branes.\footnote{The full topological string partition function in the presence of branes is given by the matrix integral in \eqref{matinx}--\eqref{matin}, describing open strings, times a purely closed topological string partition function of $Y_S$. This will be relevant later on.}
\subsection{Topological D-branes and Liouville Correlators}
To complete the argument, \cite{DVt} observe that the B-brane partition function ${Z}(Y_S; N)$ equals the Liouville correlator at $c=1$, when written in the free-field or Dotsenko-Fateev representation \cite{Dotsenko:1984nm,Dotsenko:1984ad},
\begin{equation}\label{p2}
{Z}(Y_S; N) \;\;= \;\;{\mathcal B}(\alpha/g_s, z; N)|_{c=1}.
\end{equation}
One treats the Liouville potential as a perturbation and computes the correlator in the free boson CFT
{\fontsize{11pt}{0pt}
\begin{equation}\label{expect}
{\mathcal B}(\alpha, z; N) = \langle V_{\alpha_1}(z_1) \ldots V_{\alpha_\ell}(z_\ell) V_{\alpha_\infty}(\infty) \;\;\oint dx_1 S(x_1)\cdots \oint dx_N S(x_N)\rangle_0,
\end{equation}}
where we took the chiral half. Here, $S(z)$ is the screening charge
$$
S(z) = e^{2 b \phi(z)},
$$
whose insertions come from bringing down powers of the Liouville potential. It follows that
\eqref{expect} vanishes unless
$$
\frac{\alpha_{\infty}}{b} + \sum_{i=0}^\ell \frac{\alpha_{i}}{b} = 2 b N + 2 Q,
$$
constraining the net $U(1)$ charge of the vertex operator insertions in terms of the number of screening charge integrals; here $Q$ denotes the background charge, which vanishes at $c=1$. This constraint can be found directly from the path integral, by integrating over the zero modes of the bosons \cite{Dotsenko:1984nm,Dotsenko:1984ad,Goulian:1990qr}. We will place a vertex operator at infinity of the $x$ plane; the equation then determines the momentum of the operator at infinity in terms of the momenta of the $\ell+1$ remaining vertex operators at finite points and the number of screening charge integrals.
An integral expression for the correlator \eqref{expect} is easy to obtain, for example, by using the free boson mode expansion
$$
b \phi(z) = \phi_0 + h_0\,\mbox{log}\,z + \sum_{k\neq 0}h_k\frac{z^{-k}}{k},
$$
where $\phi_0$ is a constant, and $h_m$ satisfy the standard algebra
\begin{equation}
\label{heisenberg}
[h_{k}, h_{m}] = {-b^2 \over 2} \ k \,\delta_{k+m,0}
\end{equation}
where $k, m\in {\mathbb Z} $. From this one obtains the two point functions:
\begin{align*}
\nonumber \langle V_{\alpha}(z) V_{\alpha^{\prime}}(z^{\prime}) \rangle = (z - z^{\prime})^{\frac{-\alpha \alpha^{\prime}}{2 b^2}}, \\
\nonumber \langle V_{\alpha}(z) S(z^{\prime}) \rangle = (z - z^{\prime})^{\alpha}, \\
\langle S(z) S(z^{\prime}) \rangle = (z - z^{\prime})^{- 2 b^2}.
\end{align*}
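For instance, the last of these follows directly from the mode expansion: for $|z| > |z'|$, the algebra \eqref{heisenberg} gives the contraction
$$
\langle\, b\phi(z)\; b\phi(z')\,\rangle \;=\; \sum_{k>0} {z^{-k}\over k}\,{z'^{\,k}\over -k}\, \langle\, [h_k, h_{-k}]\,\rangle \;=\; {b^2\over 2} \sum_{k>0} {1\over k}\Big({z'\over z}\Big)^{k} \;=\; -{b^2\over 2}\,\log(z-z') + \ldots,
$$
where the dots denote zero-mode contributions. Exponentiating, the contraction of $S(z)=e^{2b\phi(z)}$ with $S(z')$ comes with the exponent $2\cdot 2\cdot\big(-{b^2\over 2}\big)\log(z-z')$, giving $(z-z')^{-2b^2}$; the remaining two-point functions are obtained the same way, with the appropriate charges of the vertex operators.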
The final result is that \eqref{expect} equals
$$
{\mathcal B}(\alpha, z; N)={r\over N!} \ \int d^Nx\;\prod_{i, I} (x_{ I} - z_i)^{\alpha_{ i}}
\;\prod_{I<J}(x_{I}-x_{J})^{-2 b^2},
$$
where the integrals are over the position of screening charge insertions and
$$
r = \prod\limits_{i < j} (z_i - z_j)^{\frac{-\alpha_i \alpha_j}{2 b^2}}
$$
is a constant, independent of the integration variables. This is the free-field $\beta$-ensemble (with $\beta = -b^2$) reviewed in [V:7].
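As a minimal numerical sketch (with hypothetical values for the exponents), consider the simplest case $N=1$ with two punctures at $z_1=0$ and $z_2=1$, dropping the constant $r$: the screening integral is then an Euler beta integral, which can be checked against its Gamma-function evaluation:

```python
import math

# N = 1 screening charge with two punctures at z1 = 0, z2 = 1: the ensemble
# collapses to the Euler beta integral
#   int_0^1 x^{a1} (1 - x)^{a2} dx = Gamma(a1+1) Gamma(a2+1) / Gamma(a1+a2+2),
# with hypothetical exponents a1, a2 standing in for alpha_1, alpha_2.
def dotsenko_fateev_n1(a1, a2, n=200_000):
    """Midpoint quadrature of the single screening integral over (0, 1)."""
    h = 1.0 / n
    return h * sum(((k + 0.5) * h) ** a1 * (1 - (k + 0.5) * h) ** a2
                   for k in range(n))

def euler_beta(p, q):
    return math.gamma(p) * math.gamma(q) / math.gamma(p + q)

a1, a2 = 0.5, 0.3
assert abs(dotsenko_fateev_n1(a1, a2) - euler_beta(a1 + 1, a2 + 1)) < 1e-4
```

For $N>1$, the same integrals with the Vandermonde factor restored become Selberg-type integrals.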
Setting $\epsilon_1=-\epsilon_2$ (taking $b^2=-1$ in Liouville CFT) and rescaling $\alpha$ by $g_s$, it follows immediately that the free field expression for the conformal block ${\mathcal B}(\alpha /g_s, z; N)$ agrees with the partition function ${Z}(Y_S; N)$ of B-branes in topological string theory on $Y_S$, as we claimed in \eqref{p2}. Moreover, in the large $N$ limit, the holomorphic part of the Liouville field $\phi(z)$ can be identified with the matrix model operator \eqref{mmphi}. This completes the argument of \cite{DVt}.
\subsection{Discussion}
The AGT conjecture, for $\epsilon_1+\epsilon_2=0$ can thus be understood as a consequence of a triality relating the closed B-model on $Y_{\Sigma}$, the holographic dual theory of $B$-branes on the resolution of $Y_S$ and the DF conformal blocks. The first two are conjectured to be related by large $N$ duality\footnote{It may be useful to summarize what the large $N$ asymptotic regime is, on each side of the correspondence. On the B-model side, it is sending $g_s$ to zero while keeping the combination $N g_s$ fixed. On the gauge theory side, it is sending $\epsilon_1 = -\epsilon_2$ to zero while keeping the Coulomb parameters fixed. On the Liouville side, it is sending all the momenta as well as the number $N$ of screening insertions to infinity, while keeping their ratios fixed.} in topological string theory, the latter two by the fact that the partition function of $B$-branes equals the DF block:
\begin{equation}\label{p2x}
\;\; Z(Y_{\Sigma}) \;\; {\stackrel{\rm{Large}\;N}{ = }}\;\;{Z}(Y_S; N) \;\;= \;\;{\mathcal B}(\alpha/g_s, z; N)|_{c=1}.
\end{equation}
We also used the embedding of topological string into superstring theory, which implies that the topological string partition function $Z(Y_{\Sigma})$ is the same as the physical partition function ${\mathcal Z}_{{\mathcal T}_{4d}}(\Sigma)$.
While this gives an explanation for the AGT correspondence in physical terms, it is by no means a proof: while the partition function of B-branes is manifestly equal to the Liouville conformal block in free field representation, the large $N$ duality is still a conjecture. The exact partition function of the B-model on $Y_{\Sigma}$ is not known, so one can only attempt a proof, order by order in the genus expansion. In addition, there is a string theory argument, but no proof, that the partition function of the gauge theory ${\mathcal Z}_{{\mathcal T}_{4d}}(\Sigma)$ and topological string partition function ${Z}(Y_{\Sigma})$ agree.
Thirdly, from the perspective of the 4d gauge theory, it is very natural to consider the partition function on the general $\Omega$-background, depending on arbitrary $\epsilon_1$, $\epsilon_2$. Topological string theory, on the other hand, requires the self-dual background, so the argument of \cite{DVt} cannot be extended to this case.\footnote{For general $\epsilon_{1,2}$ the background does not simply decouple into a product of a Calabi-Yau manifold times the $\Omega$ background where the gauge theory lives. Turning on an arbitrary $\Omega$ background requires the theory to have a $U(1)\subset SU(2)_R$ R-symmetry to preserve supersymmetry. This requires the target Calabi-Yau manifold to admit a $U(1)$ action; this $U(1)$ action is used in constructing the background.} In \cite{DVt}, it was suggested to formulate the refinement at the level of B-model string field theory. This remains to be better developed: the refinement exists for any Calabi-Yau of the form $F(p,z)=uv$; the predictions from a naive implementation of this idea work for some, but not all, choices of $F(p,z)$.
In the rest of the review, we will explain how to solve the last problem, and, as it turns out, the first two problems as well, by following a different route.
The relation between topological string and superstring theory suggests one may be able to reformulate the argument of \cite{DVt} in string theory language, replacing topological string branes by branes in string or M-theory. While topological string theory captures the $\epsilon_1+\epsilon_2=0$ case only, the full superstring or M-theory partition function makes sense for any $\epsilon_1, \epsilon_2$. In fact, we will do something simpler yet: we will formulate the {\it gauge theory analogue of \cite{DVt}} for any $\epsilon_1$, $\epsilon_2$. We will see that this approach is powerful -- in fact it leads to a rigorous yet simple proof that the gauge theory partition function ${\mathcal Z}_{{\mathcal T}_{4d}}(\Sigma)$ agrees with the free field Liouville conformal block for $C$ a sphere with an arbitrary number of punctures.
The triality of relations between the 4d gauge theory, its vortices, and Liouville conformal blocks which admit a free field representation implies the AGT correspondence; however, it stops short of the most general case. The restriction to blocks that admit a free field representation means, from the 4d perspective, that the Coulomb moduli are quantized to be (arbitrary) integers, which get related to vortex charges on the one hand, and to numbers of screening charge integrals on the other.
\section{Gauge/Vortex Duality}
Translated to gauge theory language, the large $N$ duality of topological string theory becomes a duality between the 4d ${\mathcal N}=2$ gauge theory ${\mathcal T}_{4d}$ and the 2d ${\mathcal N}=(2,2)$ theory on its vortices; we will denote the latter theory ${\mathcal V}_{2d}$. Observations of relations between the two theories go back to \cite{Dorey, DHT, HananyTong, HananyTong2}. Recently, \cite{DH1, DH2} proposed that the two theories are dual -- indeed this is the ``other'' 2d/4d relation. On the face of it, the statement is strange at best: to begin with, not even the dimensions of the 4d and the 2d theories match.
In this section we will show that, placed in a certain background, the 4d and the 2d theory describe the same physics, and thus there is good reason why their partition functions agree \cite{Simonstalk}. The large $N$ duality of \cite{DV, DVt} becomes a duality between two $d=2$, ${\mathcal N}=(2,2)$ theories: the 4d gauge theory ${\mathcal T}_{4d}$ we started with, in a variant of 2d $\Omega$-background with vortex flux turned on, and the 2d theory ${\mathcal V}_{2d}$ on its vortices.
\subsection{Higgs to Coulomb Phase Transition and Vortices}
In gauge theory language, the geometric transition that relates B-model on a Calabi-Yau $Y_{\Sigma}$, first to a singular Calabi-Yau $Y_S$ and then to a blowup of $Y_S$, is a Coulomb to Higgs phase transition.
This follows from the embedding of the B-model into IIB superstring theory on a Calabi-Yau, and the relation between the string theory and the gauge theory which arises in its low energy limit \cite{Strominger:1995cz}. The same transition, in the language of M5 branes, corresponds to degenerating a single M5 brane wrapping $\Sigma$ into a pair of M5 branes wrapping the two Riemann surfaces $p\pm W'(z)=0$ that the $S$-curve consists of, and then separating these in the transverse directions (these are the $x^{7,8,9}$ directions in the language of \cite{W7}).
The geometric transition becomes a topological string duality, as opposed to a phase transition, by adding $N$ B-branes on the $S^2$ in the blowup of $Y_S$. In terms of IIB string, the $N$ B-branes on the $S^2$ are $N$ D3 branes wrapping the $S^2$ and filling 2 of the 4 space-time directions. In terms of M5 branes, the vortices are M2 branes stretching between the M5 brane wrapping $p-W'=0$ and the one wrapping $p+W'=0$. In the gauge theory on the Higgs branch, $N$ branes of string/M-theory become $N$ BPS vortices, as explained in \cite{Greene:1996dh,Hori:1997zj} and \cite{HananyTong, HananyTong2}.\footnote{One should not confuse the vortices here with surface operators in the gauge theory, studied for example in \cite{WittenGukov, Alday:2009fs,Dimofte:2010tz}. The surface operators are solutions on the Coulomb branch, with infinite tension. From the M5 brane perspective, surface operators are semi-infinite M2 branes ending on M5's.}
The vortices in question are non-abelian generalizations of the Nielsen-Olesen vortex solutions, whose BPS tension is set by the value of the FI parameters. These were constructed explicitly in \cite{HananyTong, HananyTong2}. The net BPS charge of the vortex is $N=\int {\rm Tr}F$, where $F$ is the field strength of the corresponding gauge group and the integral is taken in the 2 directions transverse to the vortex.\footnote{Usually, the gauge theories on M5 branes wrapping Riemann surfaces are said to be of special unitary type, rather than unitary type. There is no contradiction; the $U(1)$ centers of the gauge groups that arise on branes are typically massive by the Green-Schwarz mechanism. This does not affect the BPS tension of the solutions, see e.g. the discussion in \cite{Douglas:1996sw}.}
\subsection{Gauge/Vortex Duality}
Consider subjecting the 4d ${\mathcal N}=2$ theory ${\mathcal T}_{4d}$ to a {\it two}-dimensional ${\Omega}$-background in the two directions transverse to the vortex. We momentarily set $\epsilon_1=\hbar$ to zero; the duality we want to claim holds for any $\hbar$. This is the Nekrasov-Shatashvili background studied in \cite{NS2}. The 2d $\Omega$-background depends on the one remaining parameter, $\epsilon=\epsilon_2$. (The equivalence of two theories is a stronger statement than the equivalence of their partition functions. The latter assumes a specific background, while the former implies equivalence in any background. We will let $\hbar$ be arbitrary once we become interested in the partition functions, as opposed to the theories themselves.)
As in \cite{NS2}, we view this partial ${\Omega}$-background as a kind of compactification: it results in a 2d theory with infinitely many massive modes, with masses spaced in multiples of $\epsilon$. The background also preserves only $4$ out of the $8$ supercharges. Under conditions which we will spell out momentarily, the effective 2d ${\mathcal N}=(2,2)$ theory that we get is equivalent to the theory on its vortices. The condition that is clearly necessary is that we turn on vortex flux. We assume it is also sufficient.
The vortex charge is $\int_D F_i =N_i$, where $i$ labels a $U(1)$ gauge field in the IR, and $F_i$ is the corresponding field strength. Here, $D$ is the cigar, the part of the 4d space-time with the 2d $\Omega$ deformation on it. It is parameterized by one complex coordinate, which we will call $w$. Without the $\Omega$ deformation, turning on $N_i\neq 0$ would introduce singularities in space-time, which one would interpret in terms of surface operator insertions \cite{WittenGukov}. In the $\Omega$ background, one can turn on the vortex flux without inserting additional operators -- in fact, the only effect of the flux is to shift the effective values of the Coulomb branch moduli. Let us explain this in some detail.
In the $\Omega$ background, $D$ gets rotated with rotation parameter $\epsilon$, in such a way that the origin is fixed. The best way to think of the theory that results \cite{NW, NS2} is in terms of deleting the fixed point of the rotation, and implementing a suitable boundary condition. Because the disk is non-compact, we really need two boundary conditions: one at the origin of the $w$ plane and one at infinity. Turning on flux simply changes the boundary condition we impose at the origin. Without vortices, one imposes the boundary condition \cite{NW} that involves setting $A_{i,w}=0$, where $A_{i,w}$ is the connection of $i$-th $U(1)$ gauge field along $D$. With $N_i$ units of vortex flux on $D$, we need instead $A_{i,w}= N_i/w $.
In the $\Omega$-background, the 4d theory in the presence of $N_i$ units of vortex flux $A_{i,w}= N_i/w $ and with Coulomb branch scalar $a_i$ turned on is equivalent to studying the theory without vortices, at $A_{i,w}=0$, but with $a_i$ shifted by
$$
a_i \;\; \rightarrow \;\; a_i+N_i \epsilon.
$$
This comes about because in the $\Omega$ background, $a_i$ always appears in the combination \cite{NW}
$$
a_i+\epsilon wD_{i,w},
$$
where $D_{i,w}=\partial_w + A_{i,w}$ is the covariant derivative along the $w$-plane transverse to the vortex. Thus, in the $\Omega$ background, at the level of F-terms, turning on vortex flux is indistinguishable from shifting the effective values of the Coulomb branch moduli.\footnote{In \cite{NW} one proves that any flat gauge field on the punctured disk preserves the supersymmetry of the $\Omega$ background.}
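Explicitly, substituting the vortex background $A_{i,w}=N_i/w$ into this combination gives
$$
a_i + \epsilon\, w\Big(\partial_w + {N_i\over w}\Big) \;=\; \big(a_i + N_i \epsilon\big) + \epsilon\, w\, \partial_w,
$$
which is the same operator as at $A_{i,w}=0$, with $a_i$ shifted by $N_i\epsilon$.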
The 4d theory placed in 2d $\Omega$-background, with vortex flux turned on has an effective description studied in \cite{NS2, NW} in terms of the $2d$ theory with ${\mathcal N}=(2,2)$ supersymmetry with massive modes integrated out. The $(2,2)$ theory has a non-zero superpotential
${\mathcal W}(a, \epsilon; N) = {\mathcal W}_{NS}(a_i +N_i \epsilon, \epsilon)$, where ${\mathcal W}_{NS}(a_i, \epsilon)$ is the effective superpotential derived in \cite{NS2}, and the shift by $N_i \epsilon$ is due to the flux we turned on. The critical points of the superpotential correspond to supersymmetric vacua of the theory. In the A-type quantization, considered in \cite{NS2}, the vacua are at $\exp(\partial_{a_i} {\mathcal W}_{NS}/\epsilon) =1$ or, equivalently, at $a_{D,i}/\epsilon = \partial_{a_i} {\mathcal W}_{NS}/\epsilon \in {\mathbb Z}$. In the B-type quantization, they are at $a_i/\epsilon \in {\mathbb Z}$ \cite{quantum, DH1,DH2}. Choosing $a_i=0$ for all $i$ gives the vacuum at the intersection of the Higgs and the Coulomb branch. Choosing $a_i = N_i \epsilon$ corresponds to putting the theory at the root of the Higgs branch -- but in the background of $N_i$ units of flux.\footnote{We thank Cumrun Vafa for discussion relating to this point.}
There is a {\it second description} of the same system. If we place the theory at the root of the Higgs branch, the 4d theory has vortex solutions of charge $N_i$ even without the $\Omega$-deformation. These are the non-abelian Nielsen-Olesen vortices of \cite{HananyTong, HananyTong2}. We get a second $2d$ theory with ${\mathcal N}=(2,2)$ supersymmetry -- this is the theory on the vortices themselves. In the theory on the vortex, the only effect of the $\Omega$-deformation is to give the scalar parameterizing the position of the vortex in the $w$-plane a twisted mass $\epsilon$. From this perspective, turning on $\epsilon$ is necessary since it removes a flat direction (the position of the vortices in the transverse space).
Similarity of the two theories at the level of the BPS spectrum was observed in \cite{Dorey, DHT, HananyTong, HananyTong2, Shifman:2004dr}. For a class of theories, this duality was first proposed in \cite{DH1, DH2}, motivated by the study of integrability. The physical explanation for gauge/vortex duality we provided implies that the duality should be general, and carry over to many other systems.\footnote{See \cite{AS3} for a highly nontrivial example.}
\subsection{Going up a Dimension}
The duality between ${\mathcal T}_{4d}$, in the variant of the 2d $\Omega$-background we described above, and ${\mathcal V}_{2d}$ lifts to a duality in one higher dimension, between a pair of theories, ${\mathcal T}_{5d}$ and ${\mathcal V}_{3d}$, compactified on a circle. We will prove the stronger, higher-dimensional version of the duality. ${\mathcal T}_{4d}$ lifts to a five-dimensional theory ${\mathcal T}_{5d}$ with ${\mathcal N}=1$ supersymmetry. From the 4d perspective, one gets a theory with infinitely many Kaluza-Klein modes. One can view this theory as a deformation of ${\mathcal T}_{4d}$, depending on one parameter, the radius $R$ of the circle. Note that ${\mathcal T}_{5d}$ is not simply placed in a product of the 2d $\Omega$-background times a circle -- rather, the background is a circle fibration
%
$$( D\times S^1)_t,
$$
where, as one goes around the $S^1$, $D$ rotates by $t$, sending $w \rightarrow w t$.\footnote{This 3d background was used in \cite{N2, Losev:2003py, NO, NW} as a natural path to defining the 2d $\Omega$-background. For a review see \cite{Nekrasov:2004vw}.} Similarly, the 2d theory on the vortex, ${\mathcal V}_{2d}$, lifts to a 3d theory ${\mathcal V}_{3d}$, on a circle of the same radius. The claim is that the two $d=2$, ${\mathcal N}=(2,2)$ theories we get in this way are dual, where the duality holds at least at the level of $F$-type terms. In the limit when $R$ goes to zero, the KK tower is removed, and we recover the theories we started with.
In the next section we will prove the duality by showing that the partition functions of the two theories agree. When we compute the partition function of the 5d theory, we subject it to the full Nekrasov background depending on both $\epsilon$ and $\hbar$. This is the background
\begin{equation}\label{5dob}
(D \times {\mathbb C} \times S^1)_{q,t},
\end{equation}
where as one goes around the $S^1$, we simultaneously rotate $D$ by $t=e^{R \epsilon}$, and ${\mathbb C}$ by $q^{-1}=e^{- R \hbar}$. In the 3d theory on vortices, $\epsilon$ is a twisted mass, but $\hbar$ is a parameter of the $\Omega$ background along the vortex world volume. The background for ${\mathcal V}_{3d}$ is fixed once we choose the background for ${\mathcal T}_{5d}$, simply by the 5d origin of the vortices. ${\mathcal V}_{3d}$ is compactified on %
\begin{equation}\label{3dob}
({\mathbb C} \times S^1)_{q}.
\end{equation}
As we go around the $S^1$, ${\mathbb C}$ rotates by $q^{-1}$, and we turn on a Wilson line $t$ for a global symmetry rotating the adjoint scalar (and thus giving it mass $\epsilon$).
\section{Building up Triality}
When ${\mathcal T}_{5d}$ is a lift of the M5 brane theory of section 2 to a one higher dimensional theory on a circle of radius $R$, the gauge/vortex duality extends to a triality. The triality is a correspondence between the 5d gauge theory
${\mathcal T}_{5d}$, the 3d theory on its vortices ${\mathcal V}_{3d}$, both on a circle of radius $R$, and a $q$-deformation of the Liouville conformal block. As $R$ goes to zero, the $q$-deformation goes away and we recover the conformal blocks of Liouville.
The $q$-deformation of the Virasoro algebra was defined in \cite{Shiraishi:1995rp, Awata:1996xt}, and studied further and extended to W-algebras in \cite{FR1}.
The triality comes about because the partition function of the vortex theory ${\mathcal V}_{3d}$ will turn out to equal the $q$-deformed Liouville conformal block,
\begin{equation}\label{first}
{\mathcal Z}_{{\mathcal V}_{3d}} ={\mathcal B}_{q},
\end{equation}
analogously to the way the partition function of topological D-branes was the same as the conformal block of Liouville at $b^2=-1$.
The relation between ${\mathcal T}_{5d}$ and ${\mathcal V}_{3d}$ is the gauge/vortex duality. The duality implies that their partition functions are equal,
\begin{equation}\label{second}
{\mathcal Z}_{{\mathcal T}_{5d}} = {\mathcal Z}_{{\mathcal V}_{3d}}.
\end{equation}
The left hand side is computed on \eqref{5dob} and the right hand side, by restriction, on \eqref{3dob}. Thus, combining the two relations, we get a relation between the $R$-deformation of the partition function of ${\mathcal T}_{4d}$ and the $q$-deformation of the Liouville conformal block,
\begin{equation}\label{main}
{\mathcal Z}_{{\mathcal T}_{5d}} = {\mathcal Z}_{{\mathcal V}_{3d}} = {\mathcal B}_{q}.
\end{equation}
In a limit, both deformations go away and we recover the relation between a partition function of the 4d, ${\mathcal N}=2$ theory ${\mathcal T}_{4d}$ and the ordinary Liouville conformal block ${\mathcal B}$. We will prove this for the case when $C$ is a sphere with any number of punctures. The equality in \eqref{second}, as we anticipated on physical grounds, holds for special values of the Coulomb branch moduli -- those corresponding to placing the 5d theory at a point where the Higgs and Coulomb branches meet, and turning on fluxes. By taking the large flux limit, where $N_i$ goes to infinity and $\epsilon$ goes to zero keeping the product $N_i\epsilon$ fixed, all points of the Coulomb branch and arbitrary conformal blocks get probed in this way.
In the rest of the section we will spell out the details of the theories involved, and their partition functions. Then, in the next section, we will prove their equivalence.
\subsection{The 5d Gauge Theory ${\mathcal T}_{5d}$}
The 5d ${\mathcal N}=1$ theory ${\mathcal T}_{5d}$ by definition reduces, as we send $R$ to zero, to the 4d theory ${\mathcal T}_{4d}$ arising from a pair of M5 branes wrapping a genus zero curve $C$ with $\ell+2 $ punctures.
The ${\mathcal T}_{5d}$ theory turns out to be very simple: at low energies it is described by a $U(\ell)$ gauge theory with $2\ell$ hypermultiplets: $\ell$ hypermultiplets in the fundamental representation, $\ell$ in the anti-fundamental, and 5d Chern-Simons level zero.\footnote{At very short distances there is a UV fixed point corresponding to it, which is a strongly coupled theory, accessible via its string or M-theory embedding \cite{Seiberg:1996bd,Intriligator:1997pq}.} Except for $\ell =2$, the $U(\ell)$ gauge theory is different from the generalized quiver of \cite{G2}. This is nothing exotic: there are different ways to take the $R$ to zero limit, and different limits can indeed result in inequivalent theories. At finite $R$, the theory we get is unique, but with possibly more than one description.
The Coulomb branch of the 4d theory ${\mathcal T}_{4d}$ is described by a single M5 brane wrapping the 4d Seiberg-Witten curve \eqref{4dcurve}. The Seiberg-Witten curve of ${\mathcal T}_{5d}$ compactified on a circle can be written as
\begin{equation}\label{5dsw}
\Sigma:\qquad Q_+(e^x) e^p +P(e^x) + Q_-(e^x) e^{-p} =0,
\end{equation}
with the meromorphic one form equal to $\lambda = p\, dx$ (see, e.g., \cite{Nekrasov:1996cz}). We will denote both the 4d and the 5d Seiberg-Witten curves by the same letter $\Sigma$, even though the curves are inequivalent; it should be clear from the context which one is meant. Here, $Q_{\pm}$ are polynomials of degree $\ell$ in $e^x$,
$$
Q_{\pm}(e^x) = {e^{\pm \zeta/2 } \prod_{i=1}^{\ell} ( 1- e^x/ f_{\pm, i}) },
$$
and $P(e^x)$ is a polynomial of degree $\ell$ in $e^x$. At the points where the Higgs and the Coulomb branch meet, $\Sigma$ degenerates to:
\begin{equation}\label{5dcurve2}
S\;\;:\qquad\qquad
(Q_+(e^x) e^{p} - Q_-(e^x) ) (e^{-p} - 1)=0.
\end{equation}
The 5d Seiberg-Witten curve in \eqref{5dsw} and the S-curve in \eqref{5dcurve2} reduce to the 4d ones in \eqref{4dcurve} and \eqref{4dcurve2} in the $R\rightarrow 0$ limit. The limit one needs corresponds to keeping $\zeta/R$ and $p/R$ fixed and taking
\begin{equation}\label{massb}
f_{+, i} = z_i, \ \ \ f_{-,i} = z_i \ q^{\alpha_i}.
\end{equation}
Finally, one defines $z=e^x$ and replaces $p$ by $pz$ to get \eqref{4dcurve2}, the curve with its canonical one-form $\lambda = p\,dz$. Note that one of the punctures we get is automatically placed at $z=0$.\footnote{The second four-dimensional limit gives the 4d ${\mathcal N}=2$ $U(\ell)$ gauge theory with $2\ell$ fundamental hypermultiplets of \cite{W7, G2}. In the Seiberg-Witten curve, one writes $f_{i}$ as $f_i = e^{R \mu_i}$, and takes $R$ to zero keeping $x/R$, $e^p R$, $e^{\zeta}R$ and the $\mu$'s fixed in the limit. The effect of this is that the 4d curve has the same form as \eqref{5dsw}, but with $Q$ and $P$ replaced by polynomials of the same degree in $x$, rather than $e^x$.}
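As a quick consistency check (ours, not part of the original derivation), expanding the factorized S-curve \eqref{5dcurve2} reproduces \eqref{5dsw} with $P(e^x) = -\big(Q_+(e^x) + Q_-(e^x)\big)$, the special value of $P$ at the degeneration point. A small numerical sketch, with illustrative values for $\zeta$ and the roots $f_{\pm,i}$:

```python
import cmath
import random

random.seed(0)

# hypothetical numeric values for the FI parameter and mass roots, for illustration
ZETA = 0.7
F_PLUS = [1.3, 2.1, 0.8]
F_MINUS = [0.6, 1.7, 2.4]

def Q_pm(ex, sign, roots):
    """Q_{pm}(e^x) = e^{(+/-) zeta/2} prod_i (1 - e^x / f_{pm,i})."""
    out = cmath.exp(sign * ZETA / 2)
    for f in roots:
        out *= 1 - ex / f
    return out

# check: (Q_+ e^p - Q_-)(e^{-p} - 1) = -[Q_+ e^p + P + Q_- e^{-p}]
# with P = -(Q_+ + Q_-), at random sample points in x and p
for _ in range(5):
    ex = cmath.exp(complex(random.uniform(-1, 1), random.uniform(-1, 1)))
    ep = cmath.exp(complex(random.uniform(-1, 1), random.uniform(-1, 1)))
    Qp, Qm = Q_pm(ex, +1, F_PLUS), Q_pm(ex, -1, F_MINUS)
    P = -(Qp + Qm)
    lhs = (Qp * ep - Qm) * (1 / ep - 1)
    rhs = -(Qp * ep + P + Qm / ep)
    assert abs(lhs - rhs) < 1e-8
```

In particular, the vanishing of either factor in \eqref{5dcurve2} solves \eqref{5dsw} at this value of $P$.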
\subsubsection{Partition function in $\Omega$-background}
The 5d $\Omega$-background is defined as a twisted product
\begin{equation}\label{back}
({\mathbb C}\times {\mathbb C}\times S^1)_{q,t},
\end{equation}
where, as one goes around the $S^1$, one rotates the two complex planes by $q = \exp(R \epsilon_1)$ and $t^{-1}=\exp(R \epsilon_2)$ (the first copy of ${\mathbb C}$ is what we called $D$ before). These rotations are paired with a twist by $t q^{-1}$ under the 5d $U(1)_R\subset SU(2)_R$ symmetry, to preserve supersymmetry. The 5d gauge theory partition function in this background is the trace
\begin{equation}\label{5dtrace}
{\mathcal Z}_{{\mathcal T}_{5d}}(\Sigma)={\rm Tr} (-1)^F {\bf g}_{5d},
\end{equation}
corresponding to looping around the circle in \eqref{back}. The insertion of $(-1)^F$ turns this into a supersymmetric partition function. One imposes periodic identifications with a twist by ${\bf g}$, where ${\bf g}$ is a product of simultaneous rotations: the space-time rotations by $q$ and $t^{-1}$, the $R$-symmetry twist, flavor symmetry rotations $f_{i, \pm} = \exp(-R m_{i, \pm})$, and a gauge rotation by $e_i = \exp(R a_i)$ for the $i$-th $U(1)$ factor. The latter has the same effect as turning on a Coulomb-branch modulus $a_i$ (see \cite{Nekrasov:2004vw} for a review). The partition function of ${\mathcal T}_{5d}$ in this background was computed in \cite{N2}, using localization. The partition function is a sum
\begin{equation}\label{bN}
{\mathcal Z}_{{\mathcal T}_{5d}}(\Sigma) = r_{5d} \sum_{\vec R} I^{5d}_{\vec R},
\end{equation}
over $\ell$-tuples of 2d partitions
$$
{\vec R} = (R_1, \ldots , R_\ell),
$$
labeling the fixed points in the instanton moduli space. The instanton charge is the net number of boxes $|\vec R|$ in the $R$'s. The coefficient $r_{5d}$ contains the perturbative and one-loop contributions to the partition function.
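For concreteness, the fixed points contributing at a given instanton charge $k$ can be enumerated explicitly: they are the $\ell$-tuples of partitions with $k$ boxes in total. A generic sketch (not tied to the paper's conventions) that lists them:

```python
from itertools import product

def partitions(n, max_part=None):
    """All partitions of n, as weakly decreasing tuples of positive parts."""
    if max_part is None:
        max_part = n
    if n == 0:
        return [()]
    out = []
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            out.append((first,) + rest)
    return out

def tuples_of_partitions(ell, k):
    """All ell-tuples of partitions with k boxes in total (charge-k fixed points)."""
    result = []
    for sizes in product(range(k + 1), repeat=ell):
        if sum(sizes) != k:
            continue
        for combo in product(*[partitions(s) for s in sizes]):
            result.append(combo)
    return result

# e.g. for ell = 2 at instanton charge 2 there are 5 fixed points:
# ([2],-), ([1,1],-), ([1],[1]), (-,[2]), (-,[1,1])
assert len(tuples_of_partitions(2, 2)) == 5
```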
The contribution
$$I^{5d}_{\vec R} = \;q^{\zeta |\vec R|} \ z_{V, {\vec R}} \times z_{H, {\vec R}} \times z_{H^{\dagger}, {\vec R}}
$$
of each fixed point is a product over the contributions of the $U(\ell)$ vector multiplet and of the $\ell$ fundamental and $\ell$ anti-fundamental hypermultiplets $H$, $H^{\dagger}$ in ${\mathcal T}_{5d}$. The instanton counting parameter, related to the gauge coupling of the theory, is $q^{\zeta}$. $I^{5d}$ depends on the $\ell$ Coulomb branch moduli encoded in ${\vec e}$, and on the $2\ell$ parameters ${\vec f}$ related to the masses of the $2\ell$ hypermultiplets.
The vector multiplet contributes
$$
z_{V, {\vec R}}= \prod_{1\leq a,b\leq \ell}[N_{R_a R_b}(e_a/e_b)]^{-1}.
$$
The $\ell$ fundamental hypermultiplets contribute
$$
z_{H, {\vec R}} = \prod_{1\leq a \leq \ell} \prod_{1\leq b \leq \ell}N_{\varnothing R_b}( v f_{a}/e_b),
$$
and the $\ell$ anti-fundamentals give
$$
z_{H^{\dagger}, {\vec R}} = \prod_{1\leq a \leq \ell} \prod_{1\leq b \leq \ell}N_{R_a \varnothing }( v e_a/{f_{b+\ell}}).
$$
The basic building block is the Nekrasov function
\begin{align*}
N_{RP}(Q) = \prod\limits_{i = 1}^{\infty} \prod\limits_{j = 1}^{\infty}
\dfrac{\varphi\big( Q q^{R_i-P_j} t^{j - i + 1} \big)}{\varphi\big( Q q^{R_i-P_j} t^{j - i} \big)} \
\dfrac{\varphi\big( Q t^{j - i} \big)}{\varphi\big( Q t^{j - i + 1} \big)},
\end{align*}
with $\varphi(x) = \prod\limits_{n=0}^{\infty}(1-q^n x)$ being the quantum dilogarithm \cite{Faddeev:1993rs, AHKS}. Furthermore,
$ T_R =(-1)^{|R|} q^{\Arrowvert R\Arrowvert/2}t^{-\Arrowvert R^t\Arrowvert/2}$, and $v = {(q/t)^{1/2}}$ as before (we use the conventions of \cite{Awata:2008ed}). In what follows, it is good to keep in mind that there is no essential distinction between the fundamental and anti-fundamental hypermultiplets.\footnote{By varying the Coulomb branch and the mass parameters, the real mass $m$ of the 5d hypermultiplet can go through zero. This exchanges the fundamental hypermultiplet of mass $m$ for an anti-fundamental of mass $-m$, while at the same time the 5d Chern-Simons level jumps by $1$ \cite{Witten5dphases}. A relation between the anti-fundamental and the fundamental hypermultiplet contributions to the partition function reflects this, see \cite{AHKS} for details.}
In keeping with this, it is natural to treat all $2\ell$ matter multiplets on the same footing,
and write the partition function, say, in terms of the fundamentals alone, whose masses run over $2\ell$ values, $f_a, f_{\ell+a}$, with $a=1, \ldots, \ell$.
\subsection{The Vortex Theory ${\mathcal V}_{3d}$}
The non-abelian generalization of Nielsen-Olesen vortices was found in \cite{HananyTong, HananyTong2}. In particular, starting with a bulk non-abelian gauge theory like ${\mathcal T}_{5d}$, with $8$ supercharges, $U(\ell)$ gauge symmetry and $2\ell$ hypermultiplets in the fundamental representation, they constructed the theories living on its half-BPS vortex solutions. The theory on charge $N$ vortices is very simple: it is a $U(N)$ gauge theory with $4$ supercharges, with $\ell$ chiral multiplets in the fundamental and $\ell$ in the anti-fundamental representation, as well as a chiral multiplet in the adjoint representation. The theory has a $U(\ell)\times U(\ell)$ flavor symmetry rotating the chiral and anti-chiral multiplets separately. This symmetry forbids superpotential couplings between them. Since ${\mathcal T}_{5d}$ is five-dimensional, the theory on its vortices is a three-dimensional ${\mathcal N}=2$ theory, which we will denote ${\mathcal V}_{3d}$. The presence of the 2d $\Omega$-background transverse to the vortex gives the adjoint chiral field a twisted mass $\epsilon$. In addition, the theory is compactified on a circle of radius $R$. The masses of the $2\ell$ hypermultiplets of ${\mathcal T}_{5d}$ get related to the $2\ell$ twisted masses of the chiral multiplets in ${\mathcal V}_{3d}$. We will see the precise relation momentarily.
\subsubsection{Partition function in $\Omega$-background}
We compactify ${\mathcal V}_{3d}$ on the 3d $\Omega$ background:
$$
({\mathbb C} \times S^1)_q.
$$
As we go around the $S^1$, we simultaneously rotate the complex plane by $q$ and twist by the $U(1)_R$-symmetry, to preserve supersymmetry. The partition function of the theory in this background computes the index
\begin{equation}\label{3dtrace}
{\mathcal Z}_{{\mathcal V}_{3d}}(S; N)={\rm Tr} (-1)^F{\bf g}_{3d},
\end{equation}
where ${\bf g}_{3d}$ is a product of the space-time rotation by $q$, a $U(1)_R$ symmetry transformation by $q^{-1}$, and the global symmetry rotation by $t$. The partition function of the theory can be computed by first viewing the $U(N)$ symmetry as a global symmetry: in this case, since the symmetry is not gauged, and due to the 3d $\Omega$-background, the index in \eqref{3dtrace} is simply a product of contributions from the matter fields and the $W$-bosons, all depending on the $N$ Coulomb branch parameters $x_I$.
The contribution of the flavor in the fundamental representation is
\begin{equation}\label{basic}
\Phi_{F}(x)= \prod_{1\leq I \leq N} {\varphi(e^{R x_I - R m_-})\over \varphi(e^{R x_I - R m_+})},
\end{equation}
where $m_{\pm}$ are the twisted masses. The right hand side is written in terms of Faddeev-Kashaev quantum dilogarithms \cite{Faddeev:1993rs, AHKS},
$$
\varphi(z) = \prod_{n=0}^{\infty}( 1 -q^n z).
$$
There are different ways to show this, for example, one can reduce the 3d theory down to quantum mechanics on the circle and integrate out a tower of massive states. Alternatively, the index can be obtained by counting holomorphic functions on the target space of the quantum mechanics, see \cite{Nekrasov:2004vw}. We can think of the flavor in the fundamental representation in one of two equivalent ways: it is a pair of ${\mathcal N}=2$ chiral multiplets, one in the fundamental and the other in the anti-fundamental representation. Alternatively, it contains a chiral multiplet and an anti-chiral multiplet, but both transform in the fundamental representation. The above way of writing $\Phi_F(x)$ is adapted to the second viewpoint.
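Numerically, the truncated product obeys the defining functional equation of the quantum dilogarithm, $\varphi(z) = (1-z)\,\varphi(qz)$, to high accuracy. A quick sketch (the truncation depth is an arbitrary choice of ours):

```python
def quantum_dilog(z, q, nmax=60):
    """Truncated quantum dilogarithm phi(z) = prod_{n=0}^{nmax-1} (1 - q^n z)."""
    out = 1.0
    for n in range(nmax):
        out *= 1.0 - (q ** n) * z
    return out

q, z = 0.3, 0.45
lhs = quantum_dilog(z, q)
rhs = (1.0 - z) * quantum_dilog(q * z, q)
# phi(z) = (1 - z) phi(q z): truncation error is of order q^nmax, negligible here
assert abs(lhs - rhs) < 1e-12
```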
The ${\mathcal N}=4$ vector multiplet, the adjoint chiral field and the $W$-bosons, give a universal contribution for any $U(N)$ gauge group:%
\begin{equation}
\Phi_{V}(x) = \prod_{1\leq I <J \leq N}{\varphi(\; e^{R x_I - R x_J})\over \varphi(t\; e^{R x_I - R x_J})}.
\end{equation}
The numerator is due to the $W$-bosons, and the denominator to the adjoint scalar of mass $\epsilon$.
Finally, since the $U(N)$ symmetry is gauged, we integrate over the $x$'s. This simply projects onto gauge-invariant functions on the moduli space,
\begin{equation}\label{part}
{\mathcal Z}_{{\mathcal V}_{3d}}(S; N)={1\over N!}\int d^Nx \;\; \Phi_{V}(x)\, \prod_{a=1}^{\ell} \Phi_{F_a}(x) \; e^{ \zeta \,{\rm Tr} x/\hbar}.
\end{equation}
The integrand is a product of the contributions of all the massive BPS particles in the theory: the $W$-bosons, the flavors $\Phi$, and the adjoint.
The exponent contains the classical terms: the FI parameter $\zeta$, and the Chern-Simons level $k$, which is zero in our case. If the gauge symmetry were just a global symmetry, the $x$'s would have been parameters of the theory and the partition function would have been the integrand. Gauging the $U(N)$ symmetry corresponds to simply integrating\footnote{This partition function is the index studied in \cite{AS, AS2, Aganagic:2012au} with application to knot theory; see also \cite{Fuji:2012nx}. The index is a chiral building block of the $S^3$ or $S^2\times S^1$ partition functions \cite{Hama:2010av,Kapustin:2011jm, Hama:2011ea,Pasquetti:2011fj, Nieri:2013yra, BDP, Taki:2013opa}, deformed by $t$, the fugacity of a very particular flavor symmetry. } over $x$.
We need to determine the contour of integration to fully specify the path integral.
The choice of a contour in the matrix model corresponds to the choice of boundary conditions at infinity in the space where the gauge theory lives \cite{Cheng:2010yw}. At infinity, fields have to approach a vacuum of the theory. For small $q$ and $t$, the vacua are the critical points of
$$
W(x) = \sum\limits_{a=1}^{\ell} \ \log {\varphi(e^{R x - R m_{-,a}})\over \varphi(e^{R x - R m_{+,a}})}.
$$
There are $\ell$ vacua of $W(x)$ both before and after the $R$-deformation. Splitting the $N$ eigenvalues so that $N_a$ of them approach the $a$-th critical point, we break the gauge group,
$$
U(N) \qquad \rightarrow \qquad U(N_1)\times \ldots \times U(N_\ell).
$$
We can think of all the quantities appearing in the potential as real; then the integration is along the real $x$ axis. To fully specify the contour of integration, we need to prescribe how we go around the poles in the integrand. The integral can be computed by residues, with slightly different prescriptions for how we go around the poles for the different gauge groups. In this way, we get $\ell$ distinct contours
${\mathcal C}_{N_1, \ldots, N_\ell}$, and with them the partition function,
$$
{\mathcal Z}_{{\mathcal V}_{3d}}(S;{N})={1\over \prod_{a=1}^{\ell} N_a!}\oint_{{\mathcal C}_{N_1, \ldots, N_\ell}} d^Nx \;\Phi_{V}(x)\;
\prod_{a=1}^{{ \ell}} \Phi_{F_a}(x) \; e^{ \zeta \,{\rm Tr} x/\hbar}.
$$
Dividing by $N_a!$ corresponds to dividing by the residual gauge symmetry, which permutes the $N_a$ eigenvalues in each of the vacua. For $q=t$ this is a topological string partition function of the B-model on $Y_S$ studied in \cite{Aganagic:2002wv}, and related to Chern-Simons theory. The $q\neq t$ partition function is the partition function of refined Chern-Simons theory \cite{AS}, with observables inserted.
We will show that the partition function of ${\mathcal V}_{3d}$ is nothing but the $q$-deformation of the free-field conformal block of Liouville CFT on a sphere with $\ell+2$ punctures. Since the $q$-deformation of Liouville CFT might not be familiar, let us review it.
\subsection{$q$-Liouville}
In this section, we will show that the free-field integrals of a $q$-deformed Liouville conformal field theory \cite{Shiraishi:1995rp, Awata:1996xt, Awata:2010yy} have a physical interpretation. They are partition functions of the 3d ${\mathcal N}=2$ gauge theory ${\mathcal V}_{3d}$ in the 3d $\Omega$-background $({\mathbb C }\times S^1)_q$. The equivalence of the $q$-Liouville conformal block and the gauge theory partition function is manifest. The screening-charge integrals of the Dotsenko-Fateev representation are the integrals over the Coulomb branch of the gauge theory. Inserting a Liouville vertex operator corresponds to coupling the 3d gauge theory to a flavor. The momentum and position of the puncture are given by the real masses of the two chirals within the flavor.
The $q$-deformed Virasoro algebra is written in terms of the deformed screening charges
$$
S(z)=\ : \exp\left( 2 \phi_0 + 2 h_0 \log z + \sum\limits_{k \neq 0} \dfrac{1 + (t/q)^k}{k} h_k z^{-k} \right) :,
$$
where
\begin{align*}
[h_k, h_m] = \dfrac{1}{1 + (t/q)^k} \frac{1 - t^k}{1 - q^k} \ m\, \delta_{k+m,0}.
\end{align*}
The defining property of the generators of the $q$-deformed Virasoro algebra is that they commute with the integrals of the screening charges $S$.
The primary vertex operators get deformed as well. The vertex operator carrying momentum $\alpha$ becomes:
$$
V_{\alpha}(z) = \ : \exp\left( - \frac{\alpha}{b^2} \phi_0 - \frac{\alpha}{b^2} h_0 \log z + \sum\limits_{k \neq 0} \dfrac{1 - q^{-\alpha k}}{k(1 - t^{-k})} h_k z^{-k} \right) : .
$$
Note that these operators manifestly become the usual Liouville operators in the limit where $q=e^{R\epsilon_1}$ and $t=e^{-R \epsilon_2}$ go to $1$, by sending $R$ to zero.
Just as before, using these commutation relations, one computes the correlator and obtains the following free-field integral:
\begin{align}\label{lcv}
{\mathcal B}_q(\alpha, z; N)= {{r} \over \prod_{a=1}^{\ell} N_a!} \oint_{{\mathcal C}_1, \ldots, {\mathcal C}_{\ell}} d^{N}y\; \Delta^2_{q,t}(y) \; \prod_{a=0}^{\ell} V_a(y; z_a),
\end{align}
where the measure is the $q,t$-deformed Vandermonde
$$
\Delta_{q,t}^2(y) = \prod_{1\leq I\neq J\leq N} {\varphi(y_I/y_J)\over \varphi(t\; y_I /y_J)},
$$
and the potential equals
$$
V_a(y; z_a) = \prod\limits_{I=1}^{N} \dfrac{ \varphi\big(q^{\alpha_a} {z_a/y_I}\big) }{ \varphi\big(z_a/y_I\big) }.
$$
In particular, using the properties of the quantum dilogarithm, it is easy to find that
$V_0( y; 0) = (y_1 \ldots y_N)^{\alpha_0}.
$
As in the undeformed case, the relation holds up to a constant of proportionality $r$. In this paper, we avoid a detailed consideration of this normalization constant. On the Liouville side, the role of the constant $r$ is to account for all possible two-point functions between the vertex operators $V_{\alpha}(z_a)$. As in the undeformed case, the $N$ eigenvalues are grouped into sets of size $N_a$, $a=1,\ldots, \ell$, by the choice of contours they get integrated over.\footnote{The contours of integration \emph{are the same as} in the undeformed case -- encircling the segments $[0, z_a]$. The $q$ deformation affects the operators and the algebra, but not the contours. It is important to emphasize that these contours agree with the alternative approach \cite{Mironov:2011dk} where the free field integrals are replaced by Jackson $q$-integrals: in our picture, the latter are the residue sums for the former.}
\section{Gauge/Liouville Triality}
In what follows, we will prove that there is a triality relating the 5d and 3d gauge theories ${\mathcal T}_{5d}$ and ${\mathcal V}_{3d}$, compactified on a circle, and the $q$-deformation of Liouville conformal blocks. We will show this in two steps.
\subsection{$q$-Liouville and ${\mathcal V}_{3d}$ }
The first step is to show that the $q$-deformation of the Liouville conformal block \eqref{lcv}, corresponding to a sphere with $\ell+2$ punctures, equals the partition function of ${\mathcal V}_{3d}$:
$$
{ {\mathcal Z}}_{{\mathcal V}_{3d}}(S; N)= { {\mathcal B}_q}(\alpha, z; N).
$$
This follows immediately by a simple change of variables that sets
\begin{equation}\label{chv}
z_a=e^{- R m_{+,a}}, \;\;q^{\alpha_a}=e^{R m_{+,a} - R m_{-,a}}, \;\;y = e^{- R x}.
\end{equation}
The insertion of a primary vertex operator in Liouville gets related to coupling the 3d gauge theory on the vortex to a flavor: the mass splitting is related to the Liouville momentum, the mass itself to the position of the vertex operator. The puncture at $z=0$ arises from the Fayet-Iliopoulos potential, if we set $\alpha_0=\zeta/\hbar-1$.
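The dictionary \eqref{chv} can be checked directly on the integrand: under the substitution, the flavor contribution \eqref{basic} turns into the vertex-operator form $\prod_I \varphi(q^{\alpha} z/y_I)/\varphi(z/y_I)$. A small numerical sketch (all parameter values here are arbitrary choices of ours):

```python
import math

def quantum_dilog(z, q, nmax=60):
    """Truncated quantum dilogarithm phi(z) = prod_{n>=0} (1 - q^n z)."""
    out = 1.0
    for n in range(nmax):
        out *= 1.0 - (q ** n) * z
    return out

q_eps, R = 0.4, 0.9            # q < 1 chosen only so the products converge
m_plus, m_minus, x = 1.1, 2.3, 0.7

# gauge-theory side: phi(e^{Rx - Rm_-}) / phi(e^{Rx - Rm_+})
flavor = (quantum_dilog(math.exp(R * (x - m_minus)), q_eps)
          / quantum_dilog(math.exp(R * (x - m_plus)), q_eps))

# Liouville side, after z = e^{-R m_+}, q^alpha = e^{R(m_+ - m_-)}, y = e^{-R x}
z, y = math.exp(-R * m_plus), math.exp(-R * x)
q_alpha = math.exp(R * (m_plus - m_minus))
vertex = (quantum_dilog(q_alpha * z / y, q_eps)
          / quantum_dilog(z / y, q_eps))

assert abs(flavor - vertex) < 1e-12
```

The identity is exact: $e^{Rx-Rm_+} = z/y$ and $e^{Rx-Rm_-} = q^{\alpha} z/y$, so the two expressions agree argument by argument.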
\subsection{${\mathcal V}_{3d}$ and ${\mathcal T}_{5d}$: Gauge/Vortex Duality }
The second step is to show that the partition function of the 5d gauge theory ${\mathcal T}_{5d}$ and the partition function of its vortices, described by the 3d gauge theory ${\mathcal V}_{3d}$, agree:
$$
{{\mathcal Z}}_{{\mathcal V}_{3d}}(S, N) = {{\mathcal Z}}_{{\mathcal T}_{5d}}({\Sigma}).
$$
For this, we place ${\mathcal T}_{5d}$ at the point where the Coulomb and Higgs branches of ${\mathcal T}_{5d}$ meet, $e_a = f_{a}\, /v$ with $v = {(q/t)^{1/2}}$ as before, where $\Sigma$ degenerates to $S$. In addition, we turn on $N_a$ units of vortex flux.\footnote{The shift by $v$ is due to the $\Omega$ background. It is natural that the partition function becomes singular at the point where the two branches meet; this determines the shift.} In the $\Omega$-background this is equivalent to not turning on flux and instead shifting the Coulomb-branch parameters of ${\mathcal T}_{5d}$, so that
$$
{\mathcal Z}_{{\mathcal T}_{5d}}(\Sigma) = r_{5d} \sum_{\vec R} I^{5d}_{\vec R}
$$
is evaluated at
\begin{equation}\label{rhc}
e_a = t^{N_a}\,f_{a}\, /v,
\end{equation}
where $a$ runs from $1$ to $\ell$. Here, $f_a$ are the masses of $\ell$ of the $2\ell$ hypermultiplets, and the integer shifts correspond to $N_a$ units of vortex flux turned on. Note that as long as $N_a$ are arbitrary, this is no restriction at all.
To recover ${\mathcal T}_{5d}$ at an arbitrary point of its Coulomb branch, we take the limit $N_a\rightarrow \infty$, $\epsilon = {\rm ln}( t) \rightarrow 0$ keeping the product $N_a\epsilon$ fixed. The gauge/vortex duality is the gauge theory realization of large $N$ duality.
\subsubsection{Residues and Instantons}
We start by computing the partition function of ${\mathcal V}_{3d}$ by residues. Then we show that the sum over the residues is the instanton sum of the 5d gauge theory ${\mathcal T}_{5d}$. The positions of the poles are labeled by tuples of partitions, and the integrands are equal to Nekrasov summands.
With the change of variables in \eqref{chv}, the 3d partition function of ${\mathcal V}_{3d}$ becomes:
\begin{equation}\label{liouville}
{\mathcal Z}_{{\mathcal V}_{3d}}(S; N) ={1\over \prod_{a=1}^\ell N_a! }\oint_{{\mathcal C}_{1}, \ldots, {\mathcal C}_\ell} d^{N} y\;\; I^{3d}(y),
\end{equation}
where the integrand $I^{3d}(y)$ equals
$$
I^{3d}(y) = V_0(y)\; \Phi_V(y)\;
\prod_{a=1}^{{ \ell}} \Phi_{F_a}(y),
$$
and, in terms of the new variables,
$$
{\Phi}_{V}(y)= \prod_{1\leq I \neq J\leq N} {\varphi(y_J/y_I)\over\varphi(t y_J/y_I )}, \;\;
{\Phi}_{F_a}(y)= \prod_{I=1}^{N} {\varphi(q^{\alpha_a} z_a/y_I)\over\varphi(z_a/y_I )}, \;\; V_0(y) = \prod_{I=1}^N y_I^{\alpha_0}.
$$
The $\ell$ contours ${{\mathcal C}_{1}, \ldots, {\mathcal C}_\ell} $ run around intervals in the complex $y$ plane: ${\mathcal C}_a$ circles the interval from $y=0$ to $y=z_a $, where $z_a$ is the location of a pole in the integrand corresponding to a chiral multiplet becoming massless. The quantum dilogarithm $\varphi( y) = \prod_{n=0}^{\infty} {( 1- q^n \,y)}$ \cite{Faddeev:1993rs, AHKS} has zeros at $y=q^{-n}$, hence the integrand has poles there. The contour is chosen so as to pick up the residues of these poles. For each of the $\ell$ groups of eigenvalues we choose the contour that runs from $0$ to $z_a$, circling the poles at
$$ y= q^n\,z_a, \qquad n=0, 1,\ldots .
$$
For $|t|, |q|<1$, the poles interpolate between $y=0$ and $y= z_a$, and the contours ${\mathcal C}_a$ circle around the interval (this is also where the critical points of the integral are located). However, not all the poles contribute -- the numerator in ${\Phi}_{V}(y)$ eliminates some: all those for which the poles of a pair $y_I$, $y_J$ coincide up to a $q$ shift. At the same time, the denominator of ${\Phi}_V(y)$ introduces new poles, with $y$'s shifted by $t$, up to a multiple of $q$. Up to permutations, the poles that end up contributing are labeled by $\ell${\it -tuples of 2d Young diagrams}:
\begin{equation}\label{bmp}
{\vec R} = ({R}_1, \ldots, {R}_a, \ldots, { R}_\ell),
\end{equation}
where $R_a$ has at most $N_a$ rows. The poles corresponding to the $a$-th group of variables are at
$$
{y} = {y}_{\vec R},
$$
where, up to permutations, the components of ${ y}_{\vec R}$ equal
\begin{equation}\label{cmp}
y_{(N_1+\ldots +N_{a-1})+ i} = q^{R_{a,i}} t^{N_a - i}z_a,
\end{equation}
where $i$ runs from $1$ to $N_a$ and $a$ from $1$ to $\ell$. The sum over the residues of the integral becomes the sum over the Young diagrams
$$
\prod_{a=1}^\ell {1\over N_a! }\;\oint_{{\mathcal C}_{1}, \ldots {\mathcal C}_\ell} d^{N} y\qquad \rightarrow \qquad \sum_{{\vec R}}.
$$
While the integrand itself does not make sense at a pole, the ratio of its values at different poles turns out to be finite. This implies that the {\it ratio of the residues} at the poles labeled by ${\vec R}$ and ${\vec \varnothing}$
$$
I^{3d}_{{\vec R}} = {\rm res}^{-1}_{\varnothing}\cdot {\rm res}_{R} \; I^{3d}(y)
$$
is simply equal to the {\it ratio of the integrand} itself at the two poles:
\begin{equation}\label{3dsummand}
I^{3d}_{{\vec R}}= q^{\alpha_0|{\vec R}|}\cdot {\Phi_V(y_{\vec R})\over \Phi_V(y_{\vec {\varnothing}})}\cdot{
\prod_{a=1}^{{ \ell}} \Phi_{F_a}(y_{\vec R}) \over \;
\prod_{a=1}^{{\ell}} \Phi_{F_a}(y_{\vec {\varnothing}})}.
\end{equation}
Note that
${V_0(y_{\vec R})\over V_0(y_{\vec {\varnothing}})}= q^{\alpha_0|{\vec R}|} .
$
This makes the sum over residues easy to find:
$$
{\mathcal Z}_{{\mathcal V}_{3d}}(S; N) =r_{3d} \sum_{{\vec R}} \; I^{3d}_{\vec R}(N, f),
$$
where
$$r_{3d} = {\rm res}_{\varnothing} I^{3d}(y).
$$
The structure of the answer is reminiscent of the 5d partition function ${\mathcal Z}_{{\mathcal T}_{5d}}(\Sigma)$, except that the sum in ${\mathcal Z}_{{\mathcal T}_{5d}}(\Sigma)$ runs over $\ell$-tuples of Young diagrams of arbitrary size. However, from the gauge/vortex duality, we only expect the 3d and the 5d partition functions to be equal on the locus \eqref{rhc}. Restricting to the locus \eqref{rhc}, the Nekrasov sum truncates to a sum over diagrams $R_a$ with at most $N_a$ rows. Moreover, for every such $\ell$-tuple, the summand $I^{5d}_{{\vec R}}$ indeed becomes equal to $I^{3d}_{\vec R}$. The detailed proof is presented in \cite{AHKS}; here we only give a sketch.
Recall
$$
I^{5d}_{\vec R}= q^{\zeta |R|}\cdot z_{V,\vec R}\cdot z_{H,{\vec R}}\cdot z_{H^{\dagger}, {\vec R}}.
$$
The $\ell$ hypermultiplet contributions $z_{H^{\dagger}, {\vec R}}$ each contain
$N_{R_a\varnothing}(v e_a/f_a)$ as a factor. Restricting this to \eqref{rhc} we get $N_{R_a\varnothing}(t^{N_a})$, which, as one can show,\footnote{See \cite{AHKS} for a proof, and \cite{Awata:2008ed, DHG} for earlier work making use of this.} vanishes if $R_a$ has more than $N_a$ rows. So at this point,
$I^{5d}_{\vec R}$
is non-zero only for those $\ell$-tuples of Young diagrams
${\vec R} = (R_1, \ldots, R_a, \ldots R_{\ell})$ for which $R_a$ has no more than $N_a$ rows, for each $a$ between $1$ and $\ell$. Thus, the non-zero fixed-point contributions to the instanton sum are the same as the poles of the 3d partition function.
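The truncation can be seen directly from the infinite-product form of the Nekrasov function. A numerical sketch (ours; illustrative values of $q$, $t$, and partitions with $P = \varnothing$, for which only finitely many rows contribute nontrivially):

```python
def phi(x, q, nmax=80):
    """Truncated quantum dilogarithm phi(x) = prod_{n>=0} (1 - q^n x)."""
    out = 1.0
    for n in range(nmax):
        out *= 1.0 - (q ** n) * x
    return out

def nekrasov(R, P, Q, q, t, jmax=40):
    """Truncated N_{RP}(Q); factors with R_i = P_j equal 1 exactly and are
    skipped. A sketch intended for small partitions, used here with P empty."""
    val = 1.0
    for i in range(1, len(R) + len(P) + jmax + 1):
        Ri = R[i - 1] if i <= len(R) else 0
        for j in range(1, jmax + 1):
            Pj = P[j - 1] if j <= len(P) else 0
            if Ri == Pj:
                continue
            A = Q * q ** (Ri - Pj)
            val *= phi(A * t ** (j - i + 1), q) / phi(A * t ** (j - i), q)
            val *= phi(Q * t ** (j - i), q) / phi(Q * t ** (j - i + 1), q)
    return val

q, t = 0.3, 0.5
# sanity checks: N with both partitions empty is 1, and N_{[1],empty}(Q) = 1 - Q
assert nekrasov((), (), 0.37, q, t) == 1.0
assert abs(nekrasov((1,), (), 0.37, q, t) - 0.63) < 1e-6
# R = [1,1] has two rows, so N_{R,empty}(t^N) vanishes for N = 1 but not N = 2
assert abs(nekrasov((1, 1), (), t ** 1, q, t)) < 1e-6
assert abs(nekrasov((1, 1), (), t ** 2, q, t)) > 0.1
```

For one-column diagrams the products telescope to $N_{R\varnothing}(Q) = \prod_{i}(1 - Q\, t^{1-i})$, which makes the vanishing for diagrams taller than $N_a$ rows manifest.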
Not only does the sum over Young diagrams truncate, but one can also prove that the value of the summand in the instanton partition function is exactly $I^{3d}_{\vec R}$:
$$
I^{3d}_{\vec R}(N, f)= I^{5d}_{\vec R}(e, f),
$$
with identifications
$$e_a/ f_{a} = t^{N_a}/ v.$$
Recall that we let $f_{a} = f_{+, a}$ and $f_{a+\ell}=f_{-, a}$ for $a$ running from $1$ to $\ell$. Finally, we have $q^{\zeta} = q^{\alpha_0}q$.
The vector multiplet contributions in 5d are related to the vector multiplet contributions in 3d, the 5d hypermultiplets to the 3d flavors, and the instanton counting parameter in 5d to the FI-term contributions to the potential in 3d. The 5d partition function is actually a product of the instanton sum $I^{5d}_{\vec R}$ together with the perturbative and one-loop factors contained in $r_{5d}$. This equals the partition function of the 5d gauge theory at the root of the Higgs and Coulomb branches, in the absence of vortices. On the 3d gauge theory side, one can prove that this is accounted for by the product of $r_{3d}$,
the residue at the $y= y_{\vec \varnothing}$ pole, together with a contribution that is not captured by the theory on the vortex -- this is the partition function of the bulk gauge theory, at the root of the Higgs branch in the absence of vortices.
(From the string theory perspective, this contribution is the partition function of $Y_S$ without branes.) One can prove that, taking this into account, the full partition functions on the two sides of the duality are equal.
We have thus proven our main claim \eqref{main} for the case where the Gaiotto curve $C$ has genus zero, with an arbitrary number of punctures. It is elementary to extend this to the case when $C$ is a genus-one curve, with arbitrary punctures. We expect the triality to generalize to the case when the Liouville CFT gets replaced by an $ADE$-type Toda CFT. The generalization to the $A_n$ case will be presented in \cite{AHS}.
\bibliographystyle{utcaps}
The light blinds me before my irises adjust. The walls of the small room are lime green, speckled with thick stickers of watercolor flowers and oversized sparkles. A TV with scattered Pokémon games and a cube console sit unrestfully upon a wooden dresser. Everything is quiet as I lay in the center of the room upon a circular rug that resembles a blandly-hued blue and green earth. I'm on top of the world. I'm surrounded by an oasis of childhood belongings and frivolous knick-knacks that fill my little heart with so much nonsensical joy.
The planet underneath me is but a construct of textiles lacking detail and truth. I am above it all. I am but a small being finding that life is something of a wonder— but unbeknownst to me, innocence is what truly comprises such curiosities. The world is great, but foreign; so I imagine there is something grander I must discover. I am flooded by warmth at the thought of possibility. To live will be an adventure to see the world. And to see the world, I must open my door.
And I am blinded by darkness.
I groan and stretch my arms. My phone is blaring, and I can no longer tell whether it's an alarm for college, work, or a ring for a meeting. I rub my eyes after muting my device and realize I had another dream of me wandering throughout my youth's subconscious.
Sometimes, I wish I could go back. To the days where living life was an innocent adventure. It was a time where bad things still existed of course— as they always will— but such terrors were overcome by the innocence of forgetting.
Childhood things are something to cherish. For without the joy we experience with the games, doodads, and the culture, we might as well be as bland as the rug with two colors. Take pride in the characters that taught you life lessons. Feel bliss in the tune of a theme song and the nostalgia that follows. Feel relieved for all the messes you barely escaped. Smile at having been a child, because no such innocent time will ever exist again.
It can be a disturbing feeling knowing that things can't be the same as they were so long ago. It's even scarier, feeling as if we've lost ourselves as time has proceeded. But it's funny, because as children, we are unaware of functionality, yet we find ways to thrive and be who we are with much ease. As adults, we long to return to such simple times. But we must remember: at the time, we didn't know it was so simple.
History cannot be relived; yet it can be remembered, which is why it's so familiar. Times are almost always better when relishing in the past, but it is not impossible to thrive again. When change is the world's only constant, it seems happiness can also be easily refuted. But that's wrong.
The past is not a dangerous place, nor is it a time where all of our innocence has been swept away. The past is a masterfully assembled composition of memories of every nature. But more than anything, these memories are lessons. The good ones teach us how to enjoy ourselves and reveal that fun is a plausible continuum. The bad ones teach us of mistakes we can correct and situations we can successfully fail to repeat. And everything in between? They are the stepping stones to which we owe our ultimate growth.
As adults, we crave for the youth to know what we are knowledgeable of, but what we know arrived through experience. As children, we learn right and wrong; as burgeoning adults, we live it. Through growth, we understand that innocence is but an uncovered truth. But to a child, innocence is but an undefined word.
But now, in every instance my mind makes me return to the past, I understand a little more of why it does. That green room was a treasure chest of memory, filled with emotions, lessons, and mistakes of how I've come to be. It is a reminder to not let go of the little being full of big dreams.
You will always find me submerged in the visceral cognitions of who I was. I am not afraid to look back on the good or the bad. Innocence stopped me from loving who I was; but life taught me to embrace myself for everything I became after.
So let us treasure and hold onto the memories, shall we? The ones that make us laugh until we cry. The ones that make us smile until our cheeks ache. The ones that make us recall our true strength. Because in the end, it is the memories that color our souls and the loss that gets drained of its hue.
So here is to a past that cannot be relived.
And here's to the future, that will become the past in which we wish we could live again.
A New Editor in an Old Office: A Welcome from Gerald Maa
Let me introduce myself: I am Gerald Maa, a new person in an old office. I started my job as editor of The Georgia Review only two weeks ago, and I'm honored to be the latest link in a long chain of editors that extends back to 1947, when the journal was founded at the University of Georgia. I succeed Stephen Corey, who has devoted slightly more than half his life to making sure that The Georgia Review publishes work of the highest order issue after issue. His tenure as editor closes with the Fall 2019 issue (due out in September), which includes a valedictory essay that should not be missed.
As Stephen has assured you, we aren't missing a beat with the transition. We're glad to announce the Loraine Williams Poetry Prize winner for this year and will soon share the details for next year's competition. We're also happy to announce that submissions are open 15 August, as always. (Remember that submissions are free for current subscribers to The Georgia Review.) We are currently preparing for events with poets Alberto Ríos and Eavan Boland, whose work is featured, respectively, in our Summer and Fall issues. In January we will be staging a reading and conversation on queer faith with Muslim writer Kazim Ali. And later this fall we will be working with UGA's Special Collections Library when it brings poet A. E. Stallings for induction into the Georgia Writers Hall of Fame.
A print periodical—dare I say here—is capable of cultivating communities in ways that no other medium can. To open up a journal—break a spine, perhaps—to carry a volume, or run your fingers over your name printed on a page is very special. But to congregate around a print journal is also special in its own right. So subscribe, submit, read, and write. But also don't forget to introduce yourself, when you have the chance, at the Decatur Book Festival, the Brooklyn Book Festival, AWP, or one of our local events. Or tell us online what you are reading or thinking. We live to hear whatever words you have to offer us, even if it's simply a "hello."
Lõuka is a village in Tõstamaa Parish, Pärnu County, in southwestern Estonia. It is located just east of Tõstamaa, the administrative centre of the municipality. Lõuka has a population of 50 (as of 1 January 2011).
Q: Ext JS 6 - Charts are not found despite including ext-all.js Even though I include the ext-all.js file in my index page, I get the error below when I try the online Gauge chart example provided by Sencha:
http://myapp.com/widget/polar.js?_dc=1436970370848 404 (Not Found)
Uncaught Error: [Ext.create] Unrecognized class name / alias: widget.polar
A: The charts are in a separate package:
Sencha Charts are not included in the Ext JS library by default. In
order to include the charts package, simply add "charts"
("sencha-charts" if working with Ext JS 5.x) to the requires block
in your Sencha Cmd generated application's {appRoot}/app.json file.
Adding a package name to the requires array directs Cmd to make the
package available to your application.
https://docs.sencha.com/extjs/5.1/components/introduction_to_charting.html
A: "requires": [
"charts"
],
This should be uncommented from your app.json
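For orientation, the requires entry sits at the top level of the Cmd-generated app.json. A minimal sketch follows; the "name" value and everything else in your generated file will differ, this only shows where the entry goes (app.json tolerates comments, Cmd strips them):

```json
{
    "name": "MyApp",

    "requires": [
        "charts"
    ]
}
```

After editing app.json, rebuild (for example with sencha app watch or sencha app build) so Cmd pulls the package into your application.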
A: In Extjs 6, you have to include sencha charts by uncommenting
"requires": [
"sencha-charts"
],
in app.json and then run the sencha app watch command in Sencha Cmd from the application folder.
It works for me, hope this will be helpful to you :)
A: I had exactly the same problem rendering polar charts. Found the solution below:
Add the following in application.js:
requires: ['Ext.chart.*']
A: In addition to uncommenting "requires": ["charts"] ('charts' for Ext JS 6, 'sencha-charts' for Ext JS 5), which works well for Sencha Cmd projects, I see that you include the ext-all.js files yourself. In that case you also need to include the charts package files. You can find all Ext JS files here:
https://cdnjs.com/libraries/extjs/6.2.0 (all Ext JS files, usable for online linking, e.g. in jsfiddle.net). Replace the version at the end of the URL with 6.1.0, 6.0.0, 5.1.0 or any version you need.
Found within this linkage example https://www.sencha.com/forum/showthread.php?303990-Is-there-a-free-GPA-CDN-for-ExtJS-6-l&p=1115697&viewfull=1#post1115697
In jsfiddle - https://jsfiddle.net/Elunyfay/7v0uo2w6/7/
<!DOCTYPE html><html>
<head>
<script type="text/javascript" src="http://cdnjs.cloudflare.com/ajax/libs/extjs/6.0.0/ext-all-debug.js"></script>
<link rel="stylesheet" type="text/css" href="http://cdnjs.cloudflare.com/ajax/libs/extjs/6.0.0/classic/theme-triton/resources/theme-triton-all-debug.css">
<script type="text/javascript" src="http://cdnjs.cloudflare.com/ajax/libs/extjs/6.0.0/classic/theme-triton/theme-triton-debug.js"></script>
<script type="text/javascript" src="http://cdnjs.cloudflare.com/ajax/libs/extjs/6.0.0/packages/charts/classic/charts-debug.js"></script>
<link rel="stylesheet" type="text/css" href="http://cdnjs.cloudflare.com/ajax/libs/extjs/6.0.0/packages/charts/classic/classic/resources/charts-all-debug.css">
...
A: For ExtJS 6.x using open tooling - you have to install charts module manually
npm install @sencha/ext-charts
(do not use -g flag because Sencha CMD searches for files in the source folder)
and then add
"requires": [
"charts"
],
to your app.json file.
### usaxena95's blog

Source: http://codeforces.com/blog/entry/45223

By usaxena95, 5 years ago.

## Introduction

In this post I am going to share my little knowledge on how to solve some problems involving the calculation of a Sum over Subsets (SOS) using dynamic programming, thus the name SOS DP. I have chosen this topic because it appears frequently in contests as medium-hard and harder problems, but has very few blogs/editorials explaining the interesting DP behind it. I also have a predilection for it, since I came across it for the first time at the ICPC Amritapuri Regionals 2014. Since then I have created many questions based on this concept on various platforms, but the number of accepted solutions always seems disproportionate to the lucidity of the concept. What follows is a small attempt to bridge this gap.

## Problem

I will be addressing the following problem: given a fixed array A of 2^N integers, we need to calculate for every x the function F(x) = sum of all A[i] such that x & i == i, i.e., i is a subset of x.

## Prerequisite

- Basic dynamic programming

In no way should this be considered an introduction to the above topics.

## Solutions

#### Brute force

    // iterate over all masks, and for each mask over all indices
    for (int mask = 0; mask < (1 << N); ++mask) {
        F[mask] = 0;
        for (int i = 0; i < (1 << N); ++i) {
            if ((mask & i) == i)      // i is a subset of mask
                F[mask] += A[i];
        }
    }

This solution is quite straightforward and inefficient, with time complexity O(4^N).

#### Suboptimal solution

    // iterate over all masks
    for (int mask = 0; mask < (1 << N); ++mask) {
        F[mask] = A[0];               // the empty subset is never visited by the loop
        // iterate over all non-empty subsets of the mask
        for (int i = mask; i > 0; i = (i - 1) & mask) {
            F[mask] += A[i];
        }
    }

Not as trivial, this solution is more efficient, with time complexity O(3^N). To calculate the time complexity of this algorithm, notice that for each mask we iterate only over its subsets. Therefore, if a mask has K on bits, we do 2^K iterations. Also, the total number of masks with K on bits is C(N, K).
Therefore the total number of iterations is the sum over K = 0..N of C(N, K) * 2^K = 3^N, by the binomial theorem.

#### SOS dynamic programming solution

In this approach we will try to iterate over all subsets of a mask in a smarter way. A noticeable flaw in our previous approach is that an index A[x] with x having K off bits is visited by 2^K masks, so there is repeated recalculation. A reason for this overhead is that we are not establishing any relation between the A[x]'s used by different F[mask]'s. We must somehow add another state to these masks and form semantic groups, to avoid recalculating a group.

Denote S(mask) = { x : x & mask == x }, the set of all subsets of mask. Now we will partition this set into non-intersecting groups. Let S(mask, i) = { x : x & mask == x and (x XOR mask) < 2^(i+1) }, that is, the set of only those subsets of mask which differ from mask only in the first i+1 bits, i.e. bits 0..i (zero based). For example, S(1011010, 3) = {1011010, 1011000, 1010010, 1010000}. Using this, we can write any S(mask) as a union of non-intersecting sets.

Let us try to relate these sets to each other. S(mask, i) contains all subsets of mask which differ from it only in bits 0..i.
Consider that the i-th bit of mask is 0. In this case no subset can differ from mask in the i-th bit, as that would require a 1 where mask has a 0, which would mean it is not a subset of mask. Thus the numbers in this set can only differ in bits 0..i-1: S(mask, i) = S(mask, i-1).
Consider that the i-th bit of mask is 1. Now the numbers belonging to S(mask, i) can be divided into two non-intersecting sets: those with the i-th bit 1 and differing from mask in bits 0..i-1, and those with the i-th bit 0 and differing from mask XOR 2^i in bits 0..i-1. Hence S(mask, i) = S(mask, i-1) UNION S(mask XOR 2^i, i-1).

The following diagram (an image in the original post, omitted here) depicts how the S(mask, i) sets relate to each other. The elements of any set S(mask, i) are the leaves in its subtree.
The red prefixes in the diagram depict the part of the mask that is common to all of a node's members/children, while the black part of the mask is allowed to differ.

Kindly note that these relations form a directed acyclic graph and not necessarily a rooted tree (think about different values of mask with the same value of i).

After realizing these relations, we can easily come up with the corresponding dynamic programming:

    // iterative version (dp[mask][-1] is pseudocode for the base case)
    for (int mask = 0; mask < (1 << N); ++mask) {
        dp[mask][-1] = A[mask];   // handle base case separately (leaf states)
        for (int i = 0; i < N; ++i) {
            if (mask & (1 << i))
                dp[mask][i] = dp[mask][i-1] + dp[mask ^ (1 << i)][i-1];
            else
                dp[mask][i] = dp[mask][i-1];
        }
        F[mask] = dp[mask][N-1];
    }

    // memory optimized, super easy to code
    for (int i = 0; i < (1 << N); ++i)
        F[i] = A[i];
    for (int i = 0; i < N; ++i)
        for (int mask = 0; mask < (1 << N); ++mask)
            if (mask & (1 << i))
                F[mask] += F[mask ^ (1 << i)];

The above algorithm runs in O(N * 2^N) time.

## Discussion problem

Now you know how to calculate the Sum over Subsets for a fixed array A. What would happen if A and F were SOS functions of each other? Consider the following modification to the problem (the exact formulas, given as images in the original post, are omitted here). Assume H1 and H2 are 32-bit integer-valued hash functions (just to avoid any combinatoric approach to circumventing the problem) that can be evaluated at any point in constant time. I enjoyed solving this with harshil. Let's discuss the approaches in the comments :)

## Practice problems

I hope you enjoyed it. Following are some problems built on SOS (linked in the original post).

EDIT: practice problems are now arranged in almost increasing order of difficulty.

### From the comments

» Great tutorial! If only I knew about this before today's contest :P
»» Thank you. Did a similar problem appear in yesterday's contest?
»»» 363 Div 1 C. You can add it to the list.
»»»» Could you explain how that problem can be solved using the above technique?
»»»» Well, I don't think this technique is required to solve that problem.
»»»» The DP recurrence there does not demand any summation over subsets (the recurrence itself was shown as an image in the original post).
» Good job! You can also add this problem to the list: http://hsin.hr/coci/archive/2011_2012/contest6_tasks.pdf (problem KOSARE).
»» Thank you. Added :)
» Some more problems that use a similar approach: http://codeforces.com/contest/165/problem/E and https://www.hackerrank.com/contests/countercode/challenges/subset
» Very well written blog. P.S.: the spelling of "prerequisites" was wrong. (Thanks, fixed.)
» In the suboptimal solution, what is F[mask] initialized to? Also, the outer loop should iterate over mask, not i:

    for (int mask = 0; mask < (1 << N); ++mask) {
        F[mask] = A[0];
        for (int i = mask; i > 0; i = (i - 1) & mask) {
            F[mask] += A[i];
        }
    }

»» Thanks, fixed. (Similarly, the base case of the iterative version should read dp[mask][-1] = A[mask], not A[i].)
» Great tutorial. I find bitmask concepts hard to understand, but got a clear understanding with this one. Kudos to the author.
» I think the value of i in the second-to-last row of the diagram should be zero in every case.
» Thanks, great. How can we prove that for (int i = mask; i > 0; i = (i - 1) & mask) will pass over all subsets?
»» I will give it a try, by mathematical induction. Note that the operation i - 1 switches off the lowest on bit and switches on all the bits below it; e.g. 10110000 - 1 = 10101111 (binary).
   Statement P(n): given an integer M, the algorithm iterates over all elements of S(M, n) (submasks of M that differ from M only in bits 0..n) in strictly decreasing order.
   Base case P(0): if M is even, the first iteration i = M visits S(M, 0) = {M}; if M is odd, the algorithm visits M and then, via (i - 1) & M, the value M with bit 0 cleared, in decreasing order. Hence P(0) is true.
   Inductive step: assume P(n); we prove P(n + 1). If bit n + 1 of M is off, then S(M, n + 1) = S(M, n) and P(n + 1) is trivially true. If bit n + 1 of M is on, then by P(n) the algorithm first visits all of S(M, n) in decreasing order, the last value being M with bits 0..n cleared. The next step i = (i - 1) & M turns bit n + 1 off and copies bits 0..n of M back in, yielding the largest element of S(M XOR 2^(n+1), n), over which P(n) again lets the algorithm iterate in decreasing order. Since S(M, n + 1) = S(M, n) UNION S(M XOR 2^(n+1), n), all elements are visited in decreasing order. Hence P(n + 1) is true.
»»» We can also state the invariant directly: after the j-th iteration, i is the j-th largest submask of mask.
» The example in the SOS section doesn't make sense to me: 1011010 XOR 1010000 = 0001010, which is not <= 2^3. Shouldn't the bound defining S(mask, i) be 2^(i+1) - 1?
»» You are right; the bound has been fixed to < 2^(i+1).
» That is a great post, it really helped, thank you! I tried the first practice problem (Special Pairs, source: http://pastebin.com/UXDiad27) but I get WA. My logic is the following: if we find a 1 bit, then because the final AND must be zero, we must pair with a number that has a 0 at that position, so we go to dp[mask ^ (1 << bit)].
»» Got the bug: the line if (exist[0]) --answer;. It is nowhere written that i != j, so your answer was always exactly one less than the correct answer. Removing that line gives AC.
»» An easier way to solve that problem: for any A[i], the numbers whose bitwise AND with A[i] is zero are exactly the subsets of the one's complement of A[i], so the answer is a single SOS lookup per element.
» "Given a fixed array A of 2^N integers": why must there be 2^N integers? Is this a typo?
»» The technique is designed for arrays of size 2^N, where N is the minimum number of bits needed to represent any index. You can always pad an arbitrary-length array to that size with zero elements.
» Can someone explain the summation formula used in the Vim War editorial? It looks like inclusion-exclusion but I cannot follow it.
» Regarding http://codeforces.com/contest/165/problem/E: first solve Special Pairs, then this problem is easy. One convenient state is dp[mask] = some element a of the array such that a is a subset of mask (base case dp[a] = a for every array element, 0 for all other masks). For each element, the partner you need is always a subset of its complement, so just print the number stored at the complement's entry; if it is 0, there is no solution.
»» My solution with the same idea gets TLE on test 12 (http://codeforces.com/contest/165/submission/29478386).
»»» That case has 1e6 numbers; change cin/cout to scanf/printf and it passes. Always prefer scanf/printf when constraints exceed about 5e5.
» Added a nice problem to the set: Varying Kibibits.
» Why doesn't dp[mask][-1] = A[mask] give an array-index-out-of-bounds error?
»» Because it is pseudocode; in real code, handle the base case separately.
» A recent problem in the same vein, whose editorial could use a clearer explanation: https://www.hackerrank.com/contests/world-codesprint-11/challenges/best-mask/editorial
» I cannot understand this line: "A noticeable flaw in our previous approach is that an index A[x] with x having K off bits is visited by 2^K masks. Thus there is repeated recalculation."
» What does F[mask] mean in the last implementation, and what do I do after computing it?
»» F[mask] is the sum of A[i] over all i with mask & i == i, i.e. the on bits of i are a subset of the on bits of mask. Accumulating F does not give the sum over all subsets of the array. Solve more problems and you'll find that most of the time you only need to change the base case.
» In this KOSARE solution (https://github.com/marioyc/Online-Judge-Solutions/blob/master/COCI/2011-2012/Contest%20%236/KOSARE.cpp), what is the need of the indices r and r ^ 1?
»» That is the standard rolling-array memory optimization: only the current and previous bit layers of the 2D dp are kept. (Solved it in the end.)
» In the memory-optimized code, can the two loops be swapped (mask outer, bit inner)?
»» It will visit every state but calculate it wrong: when you do F[mask] += F[mask ^ (1 << i)], the value you add must be the completed sum for bits below i only; mixing the per-bit passes double-counts elements.
» This reminds me of http://web.evanchen.cc/handouts/SOS_Dumbass/SOS_Dumbass.pdf
» How does for (x = y; x > 0; x = (x - 1) & y) generate all subsets of bitmask y? Any intuitive explanation?
»» x starts at y. Subtracting 1 clears the lowest set bit and sets all bits below it; ANDing with y then keeps only the bits common with y. The AND guarantees x is always a subset of y, and the repeated subtraction walks through all its combinations in decreasing order. A formal proof by induction is given earlier in the thread.
» S(1011010, 3) contains 1010000, but 1011010 XOR 1010000 = 0001010 = 2^3 + 2^1 > 2^3, contradicting the stated bound. Can someone explain this gap?
»» Thanks for pointing that out; it has been fixed. The bound is now < 2^(i+1).
» "An index A[x] with x having K off bits is visited by 2^K masks." The mask 4 (100 in binary) has two off bits, so four masks visit it. How?
»» An index x with K off bits is a subset of exactly 2^K masks. In your example, 4 (100) is visited by 100, 101, 110 and 111.
» One more problem from a recent contest: Or Plus Max.
» Another problem with the same concept, from a CodeChef Long Challenge: Killing Monsters.
» When the i-th bit is on, can we simply say |S(mask, i)| = 2 * |S(mask, i-1)|, because every subset counted so far can take the i-th bit as either 0 or 1?
»» Yes, that gives the sizes of the sets.
» Isn't the iterative solution akin to computing an N-dimensional prefix-sum array with dimensions 2 x 2 x ... x 2? If so, the idea could extend to "bitmasks" in a different base.
»» Yes, see this problem.
» When you do the partition, why is it a partition? mask itself is in all those sets.
»» The partition is the split S(mask, i) = S(mask, i-1) UNION S(mask XOR 2^i, i-1); those two pieces are disjoint. The sets S(mask, i) for different i are nested, not disjoint.
» Jersey Number (Codechef) is probably a better place to submit than ICPC Live Archive.
»» There seem to be some problems with the test cases on ICPC Live Archive (AC on Codechef gets WA there; no one has solved it).
» The suboptimal solution is very clever.
» There is another cool way of visualizing/memorizing SOS that I learnt from Errichto's video. How do you transform a 1D array into its prefix sums? You do a[i] += a[i-1]. How about 2D? Normally p[i][j] = p[i-1][j] + p[i][j-1] - p[i-1][j-1] + a[i][j], but notice that you can also apply the 1D prefix sum to the rows and the columns separately:

    for (int i = 1; i <= N; ++i)
        for (int j = 1; j <= N; ++j)
            a[i][j] += a[i][j-1];
    for (int i = 1; i <= N; ++i)
        for (int j = 1; j <= N; ++j)
            a[i][j] += a[i-1][j];

  Now the sum-over-submasks problem can be imagined as a prefix sum on a 2 x 2 x ... x 2 hypercube. For example, with a three-bit mask, the sum over submasks of 101 is the prefix sum from cell (0, 0, 0) to (1, 0, 1) on a 3D cube, so you can apply a 1D prefix sum along each dimension separately. That is exactly what the final version of the code snippet does: it iterates over a dimension (bit), then performs a[..][1][..] += a[..][0][..] along that dimension. After all passes, the initial array has turned into the SOS!
»» Very interesting visualization. Thanks!
»» Can you please post the link to that video of Errichto's?
»»» Watch the analysis of problem B (Cake Tasting) in Errichto's Innopolis Open 2018-19 analysis.
»» Thanks, this made my understanding of the situation clear.
» "These relations form a directed acyclic graph and not necessarily a rooted tree": can someone explain this further?
»» A node S(mask, i-1) can be reached from more than one parent (different values of mask, same i), so the structure has shared children and is a DAG rather than a tree.
» Why is F[i] initialized to A[i] in the memory-optimized code?
»» It is the initialization for all masks, because every set is of course a subset of itself.
» Can I use FWT to solve this problem with the same complexity?
»» Yes: F = FWT_AND(A, B), where B = [1, 1, 1, ..., 1].
» This CS Academy problem uses SOS DP as a subroutine; it may be worth checking out.
» If I need to iterate over the subsets of a mask's complement, how can I apply the SOS DP approach? The 3^N submask enumeration is easy to write, but N is up to 20, which would lead to a TLE verdict. (The problem, if anyone is interested: 101666G - Going Dutch.)
»» Change your DP from "how big is the largest valid partition of (the complement of) this mask?" to "how big is the largest valid partition of any subset of (the complement of) this mask?". Instead of enumerating submasks, extend the mask one bit at a time:

    // memoized over all 2^n masks: O(n * 2^n) (Java, from the thread)
    private int dp(int msk) {
        if (msk == (1 << n) - 1) return 1;
        if (memo[msk] != -1) return memo[msk];
        int max = 0;
        for (int i = 0; i < n; i++)
            if ((msk & (1 << i)) == 0)
                max = Math.max(max, dp(msk ^ (1 << i)) + (valid[msk] ? 1 : 0));
        return memo[msk] = max;
    }

» Very well explained tutorial!
it was very helpful. Thank you UwU\n \u00bb 11 months ago, # | \u00a0 0 I have $Q \\le 200000$ queries and set of $N \\le 200000$ bitmasks. For each bitmask in query I have to calculate number of bitmasks in set that have a bitwise AND equal to $0$ with bitmask from query. How can I solve it, if bitmasks contains $50$ bits? I already can solve it for $20$ bits, but can't figure out sparse solution. I thought about something like: We can just iterate over all parents in prefix tree from this article for all $N$ leafs and do hashmap[node]++ but it is directed acyclic graph...\n \u00bb 8 months ago, # | \u00a0 0 https:\/\/www.codechef.com\/problems\/ANDPREF u can add this problem too , came after studying about this topic in the editorial and thanks for the blog :D\n \u00bb 8 months ago, # | \u00a0 -22 Almost same article with lot more explanations \u2014 Link\n\u2022 \u00bb \u00bb 8 months ago, # ^ | \u2190 Rev. 2 \u2192 \u00a0 0 The article linked has exactly same content. That one is blatantly copied, without even citing this blog as reference.\n \u00bb 6 months ago, # | \u00a0 0 Cant get link for Special Pairs.\n\u2022 \u00bb \u00bb 6 months ago, # ^ | \u00a0 0 Quite an old problem I guess. They removed it.\n\u2022 \u00bb \u00bb \u00bb 6 months ago, # ^ | \u00a0 0 Hey, I really liked your tutorial. Thanks for this blog. If you want, you can add this problem to your list ANDPREF\n\u2022 \u00bb \u00bb \u00bb 6 months ago, # ^ | \u00a0 0 hello can you explain the logic behind the memory optimised code :- for(int i = 0; i<(1<\n \u00bb 4 months ago, # | \u00a0 0 Here's problem from CSES Problemset using this technique: https:\/\/cses.fi\/problemset\/task\/1654.\n \u00bb 4 months ago, # | \u00a0 +17 A small reminder, we can change `if(mask & (1<\n\u2022 \u00bb \u00bb 4 months ago, # ^ | \u00a0 0 Thank you! 
I finally passed that problem!\n\u2022 \u00bb \u00bb 4 months ago, # ^ | \u00a0 0 or you can keep the if condition same and do F[mask&(1<\n \u00bb 4 months ago, # | \u00a0 0 How to prove complexity in the suboptimal solution? I haven't got how to come to 3^n through the sum.\n\u2022 \u00bb \u00bb 4 months ago, # ^ | \u00a0 +9\n\u2022 \u00bb \u00bb \u00bb 4 months ago, # ^ | \u00a0 0 omg, hadn't thought about binomial theorem here, thanks\n \u00bb 5 weeks ago, # | \u00a0 -11 Great tutorial","date":"2021-09-26 00:12:58","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.3440309464931488, \"perplexity\": 2162.4397335764515}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-39\/segments\/1631780057787.63\/warc\/CC-MAIN-20210925232725-20210926022725-00126.warc.gz\"}"} | null | null |
\section{Introduction}
The propagation of scalar waves inside periodic structures has
been receiving growing interest in recent years. A great effort
has been made to understand the physics of these systems since the
acoustical properties of a periodic sculpture by Eusebio Sempere
were measured. \cite{Martinez05}
Phononic crystals (PCs) consist of an inhomogeneous periodic
distribution of elastic materials embedded in other elastic
materials with different properties. \cite{Kushwaha93, Sigalas93}
These systems extend the concept of photonic crystals
\cite{Yablonovitch, John} to the propagation of elastic
waves through periodic elastic structures. If one of the elastic
materials is a fluid medium, then PCs are called sonic crystals
(SCs). Several studies discuss the similarities and differences
between these periodic systems. \cite{Sigalas94, Economou93}
The periodicity of these systems is introduced in the solution of
the wave equation by means of Bloch's theorem. This solution leads
to the phenomenon of band gaps (BGs): frequency regimes where
waves do not propagate through the crystal. Traditionally, wave
propagation inside such systems was analyzed by means of the band
structures. Plane wave expansion (PWE) \cite{Kushwaha94PRB}
transforms the wave equation into an eigenvalue problem that can
be solved for each Bloch vector, $\vec{k}$, in the irreducible first
Brillouin zone, thereby obtaining the eigenfrequencies
$\omega(\vec{k})$ that constitute the band structures. In the
case of SCs, it has been proven that the eigenfrequencies are real
values for an arbitrary crystal structure and an arbitrary filling
fraction. \cite{Halevi95} A great number of applications
based on SCs are explained by the existence of BGs: acoustic
filters; \cite{Sanchez98} acoustic barriers; \cite{Sanchez02} or
wave guides. \cite{Khelif03, Khelif04}
Propagating waves inside a periodic medium represent the set of
solutions to the wave equation that satisfy the translational
symmetry, and these are characterized by the transmission bands in
the PWE method. However, where the translational symmetry is
broken, as in finite periodic media or in periodic media with point
defects, the system can support the well known evanescent modes,
characterized by a complex wave number $k$.\cite{Joannopoulus08}
Recent experimental results \cite{Wu09} show measurements of the
sound levels recorded inside a point defect and behind an SC.
These authors observed that this level is higher inside the cavity
than behind the crystal. This fact clearly shows both the
generation of a trapping mode (i.e. localized mode) inside the
point defect and its evanescent behavior outside the vacancy. Some
authors in the electromagnetic regime have measured the evanescent
modes in photonic crystals and revealed multi-exponential
decay.\cite{Engelen09}
Several extensions of the PWE method have been used to analyze the
propagation of sound through periodic systems in different
situations; for example, crystals with point defects have been
analyzed with PWE using the supercell approximation. \cite{Wu01,
Zhao09} The same methodology has been used to analyze the
influence of the following: constituent materials, plate
thickness, and the geometry of the array on the band structure in
two dimensional (2D) phononic crystal plates. \cite{Vasseur08}
However, these $\omega(\vec{k})$ methods interpret the BG as
frequency regimes where no real $k$ exists. Therefore, these
methods can only be used to study and characterize propagating
modes.
We have been motivated by the work of Hsue et al., \cite{Hsue05}
in which the PWE was extended for the case of photonic crystals to
calculate the complex $k$ in 2D isotropic and, in general, 3D
anisotropic cases. In this paper we show the extended plane wave
expansion (EPWE) for the case of 2D SCs. The aim is to obtain the
band structures using the inverse expression $k(\omega)$, allowing
complex values of $k$. Recent works show the calculation of
complex band structures for phononic crystals.\cite{Laude09,
Sainidou06} In the present work we show the explicit matrix
formulation and the approximation of supercell for analyzing the
complex relation dispersion of SCs. The extension of the
methodology enables us to characterize the evanescent and
propagating modes in complete SCs, as well as in SCs with point
defects.
In this paper we present novel measurements of the pressure in the
space between rows inside an SC. We have developed a 3D
computer-controlled automatic positioning system together with an
automatized acquisition system, called 3DReAMS (3D Robotized
e-Acoustic Measurement System). This system enables the pressure
field in trajectories inside a crystal to be measured, and we have
consequently analyzed the decay of the evanescent modes throughout
an SC. The imaginary part of the wave number of the evanescent
modes can be obtained experimentally with the measurements taken
by 3DReAMS. These data represent the experimental confirmation of
the analytical results obtained by the EPWE, as well as an
experimental analysis of propagating and evanescent modes in an
SC.
The paper is organized as follows. Section \ref{sec:PWE}
summarizes the main ingredients of the PWE for 2D SCs with the
explicit matrix formulation of the problem. In Section
\ref{sec:EPWE} we extend the PWE to the EPWE to solve the
eigenvalue problem $k(\omega)$. We show the matrix formulation, as
well as the EPWE, together with the supercell approximation for
studying the complex band structures of 2D SC with point defects.
In Section \ref{sec:results} the complex band structures of an SC
of PVC cylinders embedded in air are obtained with EPWE for a 2D
SC with, and without, point defects. Experimental results
validating the predictions of the EPWE for the evanescent and
propagating modes are shown in Section \ref{sec:experimental}.
Finally, the work is summarized in Section \ref{sec:Conclusions}.
\section{Plane wave method}
\label{sec:PWE}
Propagation of sound is described by the equation
\begin{eqnarray}
\frac{1}{\rho c^2} \frac{\partial^2 p}{\partial
t^2}=\nabla\left(\frac{1}{\rho}\nabla p \right)
\label{eq:acoustic}
\end{eqnarray}
where $c$ is the sound velocity, $\rho$ is the density of the
medium, and $p$ is the pressure.
In this paper we consider a system composed of an array of
straight, infinite cylinders made of an isotropic solid $A$,
embedded in an acoustic isotropic background $B$. There is
translational invariance in direction $z$ parallel to the
cylinders' axis; and the system has a 2D periodicity in the
transverse plane. By making use of this periodicity, we can expand
the properties of the medium in the Fourier series,
\begin{eqnarray}
\sigma=\frac{1}{\rho(\vec{r})}=\sum_{\vec{G}}\sigma_{\vec{k}}(\vec{G})e^{\imath \vec{G}\vec{r}} \label{eq:sigma},\\
\eta=\frac{1}{B
(\vec{r})}=\sum_{\vec{G}}\eta_{\vec{k}}(\vec{G})e^{\imath
\vec{G}\vec{r}}\label{eq:eta}.
\end{eqnarray}
$\vec{G}$ is the 2D reciprocal-lattice vector and
$B(\vec{r})=\rho(\vec{r})c(\vec{r})^2$ is the bulk modulus. For
the pressure $p$ we use the Bloch theorem and harmonic temporal
dependence,
\begin{eqnarray}
p(\vec{r},t)=e^{\imath (\vec{k}\vec{r}-\omega
t)}\sum_{\vec{G}}p_k(\vec{G})e^{\imath \vec{G}\vec{r}}.
\label{eq:pressure}
\end{eqnarray}
It is simple to show that \cite{Kushwaha94PRB}
\begin{eqnarray}
\beta(\overrightarrow{G})= \left\{ \begin{array}{ll}
\beta_{A}f+\beta_{B}(1-f)& \mbox{if $\overrightarrow{G} = \overrightarrow{0}$}\\
\left(\beta_{A}-\beta_{B}\right)F(\overrightarrow{G}) & \mbox{if $\overrightarrow{G} \neq \overrightarrow{0}$}
\end{array}\right.
\end{eqnarray}
where $\beta=(\sigma,\eta)$, and $F(\overrightarrow{G})$ is the
structure factor. For a circular cross section of radius $r$, the
structure factor is
\begin{eqnarray}
F(\overrightarrow{G})=\frac{1}{A_{uc}}\int_{A_{cyl}}
e^{-\imath\overrightarrow{G}\overrightarrow{r}}\,\overrightarrow{dr}=\frac{2f}{Gr}J_{1}(Gr).
\end{eqnarray}
$A_{uc}$ is the area of the unit cell, $A_{cyl}$ is the area of
the cylinder, and $J_1$ is the Bessel function of the first kind
of order $1$.
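As a quick numerical check (ours, not part of the original formulation), the structure factor can be evaluated directly; the helper names are ours, and $J_1$ is computed from its integral representation:

```python
import numpy as np

def bessel_j1(x):
    # J_1(x) = (1/pi) * integral_0^pi cos(theta - x*sin(theta)) dtheta,
    # evaluated with the composite trapezoidal rule.
    theta = np.linspace(0.0, np.pi, 2001)
    y = np.cos(theta - x * np.sin(theta))
    h = theta[1] - theta[0]
    return ((y[0] + y[-1]) / 2.0 + y[1:-1].sum()) * h / np.pi

def structure_factor(G, r, f):
    # F(G) = 2f * J_1(G r) / (G r) for a circular cross section of
    # radius r and filling fraction f; F -> f in the limit G -> 0.
    x = G * r
    if x < 1e-9:
        return f
    return 2.0 * f * bessel_j1(x) / x
```

In the long-wavelength limit $F(\vec{G})\to f$, since $J_1(x)\approx x/2$ for small $x$.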
Using equations (\ref{eq:sigma}), (\ref{eq:eta}),
(\ref{eq:pressure}) and (\ref{eq:acoustic}) we
obtain\cite{Kushwaha94PRB}
\begin{eqnarray}
\sum_{\vec{G'}}\left((\vec{k}+\vec{G})\sigma_k(\vec{G}-\vec{G'})(\vec{k}+\vec{G'})-\omega^2\eta(\vec{G}-\vec{G'})\right)p_{\vec{k}}(\vec{G'})=0.
\label{eq:eigenproblem}
\end{eqnarray}
For $\vec{G}$ taking all the possible values, Equation
(\ref{eq:eigenproblem}) constitutes a set of linear, homogeneous
equations for the eigenvectors $p_{\vec{k}}(\vec{G})$ and
eigenfrequencies $\omega({\vec{k}})$. We obtain the band
structures when $\vec{k}$ scans the area of the irreducible region
of the first Brillouin zone.
Equation (\ref{eq:eigenproblem}) can be expressed by the matrix
formulation below
\begin{eqnarray}
\label{eq:matricial} \sum_{i=1}^3\Gamma_i\Sigma\Gamma_i P=\omega^2
\Omega P,
\end{eqnarray}
where $i=1,2,3$. The elements of the matrix $\Gamma_i$ are
defined as
\begin{eqnarray}
(\Gamma_i)_{mn}=\delta_{mn}(k_i+G_i^m).
\end{eqnarray}
The explicit form of the matrices $\Gamma_i$, $\Sigma$, and
$\Omega$, and of the vector $P$, is as follows:
\begin{eqnarray} \Gamma_i=\left(
\begin{array}{cccc}
k_i+G_i^1 & 0 & \ldots & 0 \\
0 & k_i+G_i^2 & \ldots & 0 \\
\vdots & \vdots & \ddots & \vdots\\
0 & \ldots & \ldots & k_i+G_i^{N\times N} \end{array}
\right)\label{eq:Gamma_matrix},\\[0.1cm]
\Sigma=\left( \begin{array}{ccc}
\sigma(\vec{G}_1-\vec{G}_1) & \ldots & \sigma(\vec{G}_1-\vec{G}_{N\times N}) \\
\vdots & \ddots & \vdots \\
\sigma(\vec{G}_{N\times N}-\vec{G}_1) & \ldots & \sigma(\vec{G}_{N\times N}-\vec{G}_{N\times N})\\
\end{array}
\right),\label{eq:Sigma_matrix}\\[0.1 cm]
\Omega=\left( \begin{array}{ccc}
\eta(\vec{G}_1-\vec{G}_1) & \ldots & \eta(\vec{G}_1-\vec{G}_{N\times N}) \\
\vdots & \ddots & \vdots \\
\eta(\vec{G}_{N\times N}-\vec{G}_1) & \ldots & \eta(\vec{G}_{N\times N}-\vec{G}_{N\times N})\\
\end{array}
\right),\label{eq:eta_matrix}\\[0.1 cm]
P=\left(\begin{array}{c}
P(\vec{G}_1)\\
\vdots\\
P(\vec{G}_{N\times N})\\
\end{array}
\right),
\end{eqnarray}
where $\vec{G}=(G_x, G_y, G_z)$. To solve (\ref{eq:matricial}) we
must truncate the matrices. If we choose $m,n=(-M,\ldots,M)$, the
size of the previous matrices is $N\times N=(2M+1)\times (2M+1)$;
$N\times N$ is then the number of plane waves used in the
calculation.
By solving the system given in (\ref{eq:matricial}) for each Bloch
vector in the irreducible area of the first Brillouin zone, we
obtain $N\times N$ eigenvalues, $\omega^2$, which can be used to
represent the band structures, $\omega(\vec{k})$.
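To make the truncated problem concrete, the following sketch (our illustrative code, not the implementation used in this work) assembles $\Sigma$, $\Omega$ and $\Gamma_i$ for the 2D square lattice and solves the eigenvalue problem for one Bloch vector; the function names, the quadrature used for $J_1$, and the truncation $M$ are our assumptions:

```python
import numpy as np

def j1(x):
    # J_1 from its integral representation (trapezoidal rule).
    th = np.linspace(0.0, np.pi, 801)
    y = np.cos(th - x * np.sin(th))
    return ((y[0] + y[-1]) / 2.0 + y[1:-1].sum()) * (th[1] - th[0]) / np.pi

def pwe_eigenfrequencies(kvec, a, r, rho_A, B_A, rho_B, B_B, M=2):
    """Eigenfrequencies omega for one Bloch vector kvec, solving
    sum_i Gamma_i Sigma Gamma_i P = omega^2 Omega P with
    (2M+1)^2 plane waves (A = cylinder, B = background)."""
    f = np.pi * r**2 / a**2                       # filling fraction
    Gs = [2.0 * np.pi / a * np.array([m, n], float)
          for m in range(-M, M + 1) for n in range(-M, M + 1)]
    N = len(Gs)

    def coeff(bA, bB, Gd):                        # Fourier coefficient beta(G)
        g = np.linalg.norm(Gd)
        if g < 1e-9:
            return bA * f + bB * (1.0 - f)
        return (bA - bB) * 2.0 * f * j1(g * r) / (g * r)

    Sigma = np.empty((N, N))                      # sigma = 1/rho coefficients
    Omega = np.empty((N, N))                      # eta = 1/B coefficients
    for m in range(N):
        for n in range(N):
            Gd = Gs[m] - Gs[n]
            Sigma[m, n] = coeff(1.0 / rho_A, 1.0 / rho_B, Gd)
            Omega[m, n] = coeff(1.0 / B_A, 1.0 / B_B, Gd)

    A = np.zeros((N, N))
    for i in range(2):                            # i = x, y (2D problem)
        Gam = np.diag([kvec[i] + Gs[m][i] for m in range(N)])
        A += Gam @ Sigma @ Gam
    w2 = np.linalg.eigvals(np.linalg.solve(Omega, A))
    return np.sort(np.sqrt(np.abs(w2.real)))
```

For a homogeneous medium ($A=B$) the returned bands reduce to the folded free-space dispersion $\omega=c|\vec{k}+\vec{G}|$, which provides a convenient sanity check of the assembly.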
\section{Extended Plane Wave Method}
\label{sec:EPWE} In the $\omega(\vec{k})$ formulation, the
existence of the BG is indicated by the absence of bands in certain
ranges of frequencies. However, the BG can also be understood by
means of the evanescent behavior of the modes inside it. This
interpretation was predicted by some authors\cite{Joannopoulus08}
by approximating the second band near the BG, expanding
$\omega(\vec{k})$ in powers of $k$ around the edge $k=\pi/a$,
where $a$ is the lattice constant of the array. These authors
claimed that as the BG is traversed, the exponential decay rate
grows as the frequency approaches the center of the BG. At a given
frequency $\omega$ inside the BG, the evanescent wave is
characterized by a complex value of its wave number
$\vec{k}(\omega)$, whose imaginary part characterizes the
exponential-like decay of the mode. In this section, we extend the
previous PWE to the EPWE to obtain $\vec{k}(\omega)$, allowing
complex values of $k$.
From Equation (\ref{eq:matricial}) we define the following vector,
\begin{eqnarray}
\Phi_i=\Sigma\Gamma_iP.
\end{eqnarray}
With this definition we can reformulate the eigenvalue problem
(\ref{eq:matricial}) as the equation system
\begin{eqnarray}
\Phi_i=\Sigma\Gamma_iP\nonumber\\
\omega^2\Omega P=\sum_{i=1}^3\Gamma_i\Phi_i.
\end{eqnarray}
To obtain an eigenvalue problem for $\vec{k}(\omega)$, we write
$\vec{k}=k\vec{\alpha}$, where $\vec{\alpha}$ is a unit vector.
Then (\ref{eq:Gamma_matrix}) can be written as
\begin{eqnarray}
\Gamma_i=\Gamma_i^0+k\alpha_iI,
\end{eqnarray}
where $I$ is the identity matrix, and
\begin{eqnarray}
\Gamma_i^0=\left(
\begin{array}{cccc}
G_i & 0 & \ldots & 0 \\
0 & G_i & \ldots & 0 \\
\vdots & \vdots & \ddots & \vdots\\
0 & \ldots & \ldots & G_i \end{array}
\right),\label{eq:Gamma_matrix_b} \\[0.5cm]
\alpha_i=\left(
\begin{array}{cccc}
\alpha_i & 0 & \ldots & 0 \\
0 & \alpha_i & \ldots & 0 \\
\vdots & \vdots & \ddots & \vdots\\
0 & \ldots & \ldots & \alpha_i \end{array}
\right).\label{eq:alpha_matrix_b}
\end{eqnarray}
Equation (\ref{eq:matricial}) can then be written as
\begin{eqnarray}
\left(
\begin{array}{cc}
\omega^2\Omega -\sum_{i=1}^3\Gamma_i^0\Sigma\Gamma_i^0 & 0 \\
-\sum_{i=1}^3\Sigma\alpha_i\Gamma_i^0 & I\end{array} \right) \left(
\begin{array}{c}
P \\
\Phi' \end{array}\right)=k \left(
\begin{array}{cc}
\sum_{i=1}^3\Gamma_i^0\Sigma\alpha_i & I \\
\Sigma & 0\end{array} \right) \left(
\begin{array}{c}
P\\
\Phi'\end{array} \right) \label{eq:matricial_complex}
\end{eqnarray}
where $\Phi'=\sum_{i=1}^3\alpha_i\Phi_i$, and we have used
$\sum_{i=1}^3\alpha_i\Sigma\alpha_i=\Sigma$, since $\vec{\alpha}$ is
a unit vector.
Equation (\ref{eq:matricial_complex}) represents a generalized
eigenvalue problem that yields, for each frequency, twice as many
eigenvalues $k$, possibly complex, as plane waves used in the
expansion. Complex band structures have been calculated for the
incidence direction characterized by the vector $\vec{\alpha}$ by
solving the previous eigenvalue equation for a discrete number of
frequencies and then sorting the solutions by continuity of $k$.
In contrast to the $\omega(\vec{k})$ method, the periodicity is
not relevant in this formulation of the problem, and $k(\omega)$
is not restricted to the first Brillouin zone.
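The linearization above can be assembled mechanically from the blocks $\Sigma$, $\Omega$, $\Gamma_i^0$ and $\alpha_i$. The sketch below (ours; the material matrices are assumed precomputed as in the PWE, and the names are our own) returns the $k$ eigenvalues at a fixed frequency:

```python
import numpy as np

def epwe_wavenumbers(omega, alpha, Sigma, Omega, G0):
    """Complex Bloch wave numbers k(omega) along the unit vector alpha.
    Sigma, Omega: N x N material matrices; G0: list of the diagonal
    Gamma_i^0 matrices (one per spatial direction)."""
    N = Sigma.shape[0]
    Z, I = np.zeros((N, N)), np.eye(N)
    A11 = omega**2 * Omega - sum(g @ Sigma @ g for g in G0)
    A21 = -sum(a * (Sigma @ g) for a, g in zip(alpha, G0))
    B11 = sum(a * (g @ Sigma) for a, g in zip(alpha, G0))
    # the lower-left block of the right-hand side reduces to Sigma,
    # because alpha is a unit vector
    L = np.block([[A11, Z], [A21, I]])
    R = np.block([[B11, I], [Sigma, Z]])
    # L z = k R z  ->  eigenvalues of R^{-1} L
    return np.linalg.eigvals(np.linalg.solve(R, L))
```

For a homogeneous medium with a single plane wave the solutions reduce to $k=\pm\omega/c$, real in a pass band, as expected.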
Because of the periodicity of the system, Bloch waves can be
expanded in a series of harmonics, where each harmonic corresponds
to a value of $k$. If $k$ is a complex number, the evanescent
behavior of a wave with a predetermined frequency is
multi-exponential.\cite{Engelen09} The complex band structures
show all of the complex values of $k$ that contribute to the
multi-exponential decay of a mode in the BG. As we will see later,
for the case of the SC analyzed in this paper, we can approximate
the evanescent behavior of the modes inside the BG by considering
only the first term of this harmonic expansion in terms of $k$.
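To illustrate this point (in our notation, not part of the original derivation), the decaying field along the propagation direction $x$ for a frequency in the BG can be written as a superposition of the harmonics with complex wave numbers $k_m$,
\begin{eqnarray}
p(x)\simeq\sum_{m} A_m e^{\imath\,Re(k_m)x}e^{-|Im(k_m)|x},
\end{eqnarray}
so that retaining only the least damped harmonic, the one with the smallest $|Im(k_m)|$, yields the single-exponential approximation mentioned above.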
\subsection{Supercell approximation}
One particularly interesting aspect of SCs is the possibility of
creating point defects that confine acoustic waves in localized
modes. \cite{Sigalas98, Zhao09} Because of the locally breaking
periodicity of the structure, defect modes can be created within
the BG. These defect modes are strongly localized around the point
defect: once the wave is inside the defect, it is trapped because
the borders of the defect act as perfect mirrors for waves with
frequencies in the BG. Localization depends on several parameters,
such as the size of the point defect. However, in finite periodic
structures the strength of sound localization also depends on the
size of the structure around the defect because of the exponential
decay of the outgoing wave.\cite{Wu09}
To analyze the propagation of waves inside periodic structures
with defects, authors have traditionally used the PWE with the
supercell approximation. The supercell method requires the
interaction between neighboring defects to be as low as possible;
this is achieved with a periodic arrangement of sufficiently large
supercells that contain the point defect. With this
method we can obtain the relation $\omega(\vec{k})$ for crystals
with local defects and, for instance, the physics of wave guides
\cite{Khelif04, Vasseur08} or filters \cite{Khelif03} can be
explained.
In this section, we apply the supercell approximation to the EPWE.
This methodology enables us to obtain the relation $k(\omega)$ for
defect modes. It will be interesting to discover how the imaginary
part of the wave vector inside the BG changes with the creation of
the defect.
Consider an SC with primitive lattice vectors $\vec{a}_i$
($i=1,2,3$). The supercell is a cluster of $n_1\times n_2\times
n_3$ scatterers periodically placed in space. The primitive
lattice vectors in the supercell approximation are
$\vec{a'}_i=n_i\vec{a}_i$, and the complete set of lattice vectors
in the supercell approximation is
$\{\vec{R'}\,|\,\vec{R'}=\sum_i l_i\vec{a'}_i\}$, where $n_i$
and $l_i$ are integers. The primitive reciprocal vectors are then
\begin{eqnarray}
\vec{b'}_i=2\pi \frac{\varepsilon_{ijk}\vec{a'}_j\times
\vec{a'}_k}{\vec{a'}_1\cdot(\vec{a'}_2\times \vec{a'}_3)}
\end{eqnarray}
where $\varepsilon_{ijk}$ is the completely anti-symmetrical
three-dimensional Levi-Civita symbol. The complete set of
reciprocal lattice vectors in the supercell is
$\{\vec{G}\,|\,\vec{G}=\sum_i N_i\vec{b'}_i\}$, where $N_i$ are
integers. Finally, the structure factor of the supercell in this
approximation has to be computed while taking into account the
size of the supercell. If we consider a 2D SC with cylindrical
scatterers with a radius $r$ and an $n_1\times n_2$ sized
supercell, the structure factor of the supercell is expressed by
\begin{eqnarray}
F(\vec{G})=\sum_{i=-(n_1-1)/2}^{(n_1-1)/2}\,\sum_{j=-(n_2-1)/2}^{(n_2-1)/2}e^{\imath(ia|\vec{G}_1|+ja|\vec{G}_2|)}P(\vec{G})
\end{eqnarray}
where
\begin{eqnarray}
P(\vec{G})=\frac{2f}{Gr}J_{1}(Gr).
\end{eqnarray}
$f$ is the filling fraction of the supercell, $G=|\vec{G}|$ and
$a$ is the lattice constant of the 2D periodic system.
By introducing the previous expressions in the matrices of the PWE
(\ref{eq:matricial}), or in the case of the EPWE
(\ref{eq:matricial_complex}), we can then use the supercell
approximation to calculate the band structure of a periodic
structure with, and without, a point defect.
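As an illustration (our own helper, written with the Cartesian components of $\vec{G}$ and assuming odd $n_1$ and $n_2$), the phase sum entering $F(\vec{G})$ can be evaluated directly, and a point defect corresponds to skipping the term of the removed cylinder:

```python
import numpy as np

def supercell_phase_sum(Gx, Gy, a, n1, n2, vacancies=()):
    # Sum of exp(i G . r_c) over cylinder centers r_c = (i*a, j*a) of an
    # n1 x n2 supercell; entries listed in `vacancies` (point defects)
    # are skipped.  Multiplying by P(G) = 2f J1(Gr)/(Gr) gives F(G).
    s = 0.0 + 0.0j
    for i in range(-(n1 - 1) // 2, (n1 - 1) // 2 + 1):
        for j in range(-(n2 - 1) // 2, (n2 - 1) // 2 + 1):
            if (i, j) in vacancies:
                continue
            s += np.exp(1j * a * (i * Gx + j * Gy))
    return s
```

At $\vec{G}=0$, and at any reciprocal vector of the underlying lattice, the sum equals the number of cylinders; a single vacancy reduces it by one.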
\section{Numerical Results}
\label{sec:results} We consider a 2D SC consisting of PVC
cylinders of radius $r$ in an air background arranged in a square
lattice with a lattice constant $a$. The material parameters
employed in the calculations are $\rho_{air}=1.23$ kg/m$^3$,
$\rho_{PVC}=1400$ kg/m$^3$, $c_{air}=340$ m/s and $c_{PVC}=2380$ m/s.
We consider a filling fraction $f=\pi r^2/a^2\simeq0.65$. We have
used reduced magnitudes, \cite{Kushwaha94PRB} so the reduced
frequency is $\Omega=\omega a/(2\pi c_{host})$, and the reduced wave
vector is $K=ka/(2\pi)$.
\subsection{Complete array}
In Figure \ref{fig:complete} we can observe the complex band
structure obtained by EPWE for the SC described above. In the left
panel we have represented the imaginary part of the wave vector in
the $\Gamma X$ direction; in the right panel we have shown the
complex band structures in the $\Gamma M$ direction; and the
central panel shows the real part of the band structures. The
imaginary part of $k$ is not restricted in value, while the real
part is restricted to the first Brillouin zone. The area in gray
represents the full BG ranged between the frequencies
$\Omega_1=\omega_1 a/(2\pi c_{host})=0.4057$ and
$\Omega_2=\omega_2 a/(2\pi c_{host})=0.7189$. Note that the real
part of the complex band structures has exactly the same values as
in the case of the PWE.
In Figure \ref{fig:complete} we can observe that modes inside the
BG present purely imaginary wave vectors and these can be
characterized as evanescent modes with an exponential-like decay.
The elegant and intuitive explanation of the evanescent behavior
of modes inside the BG given by Joannopoulos\cite{Joannopoulus08}
is reproduced in Figure \ref{fig:complete} in $\Gamma X$; as well
as in $\Gamma M$ directions (red dashed lines). The imaginary part
of the wave number for frequencies inside the BG grows as the
frequency approaches the center of the BG, and vanishes at the
edges of the BG. In other words, the rate of decay is greater for
frequencies closer to the center of the BG. We can also observe
that the imaginary part of the wave vector connects propagating
bands and so conserves the overall number of modes.
\begin{figure}
\includegraphics[width=80mm,height=70mm,angle=0]{Figure1}
\caption{\label{fig:complete}(Color online) Band structure of an
SC of PVC cylinders embedded in air with filling fraction
$f\simeq0.65$. The left panel represents the imaginary part of the
wave vector for each $\Gamma X$ direction frequency. The central
panel represents the real part of the wave vector, constrained in
the first Brillouin zone, for each frequency. The right panel
represents the imaginary part of the wave vector for each $\Gamma
M$ direction frequency. The red dashed line represents the
imaginary part of the wave vector of the evanescent modes inside
the BG. Reduced magnitudes have been used.}
\end{figure}
A recent paper has shown the multi-exponential decay of evanescent
modes in a photonic crystal.\cite{Engelen09} In Figure
\ref{fig:complete}, we can clearly observe that each frequency
inside the BG is characterized by several values of $Im(k)$,
corresponding to the harmonics of the multi-exponential decay of
the evanescent modes. In Section \ref{sec:experimental} we will see
that only the first value of $Im(k)$ contributes appreciably to
the decay of the mode; therefore the higher harmonics can be
neglected and the decay can be approximated as exponential-like.
\subsection{Defect modes}
In this paper, point defects have been created by removing
cylinders in an SC. We have used the EPWE method with supercell
approximation to analyze the propagating and evanescent behavior
of modes in an SC with point defects.
Figure \ref{fig:defect} shows the complex band structures for the
$\Gamma X$ direction and real band structures for an SC with a
point defect. In our case, we use only one direction of incidence
to analyze the complex band structure because the localized mode
appears at the same frequency for all the incidence directions.
The supercell used for the calculations is shown in the inset of
Figure \ref{fig:defect}. We can observe that the localized mode
appears at $\Omega_3=\omega_3 a/(2\pi c_{host})=0.59$ (green
dashed line). For frequencies in the BG, the borders of the point
defect act as perfect mirrors and produce the localized mode in
this cavity. The complex value of the wave number $k$ for the modes
inside the BG can be obtained by EPWE and becomes a purely real
value for the localized mode (red dotted line and green dashed
line). The value exactly coincides with the value obtained by PWE
with supercell approximation.
\begin{figure}
\includegraphics[width=80mm,height=70mm,angle=0]{Figure2}
\caption{\label{fig:defect}(Color online) Band structure for an SC
with an internal defect, calculated using the EPWE with supercell
approximation. The left panel represents the imaginary part of the
wave vector for each $\Gamma X$ direction frequency. The right
panel represents the real part, constrained in the first Brillouin
zone, of the wave vector for each frequency. The green dashed line
represents the frequency of the localized mode in the defect. The
red dotted line represents the imaginary part of the wave vector
of the evanescent modes inside the BG. Reduced magnitudes have
been used.}
\end{figure}
\section{Experimental results}
\label{sec:experimental}
We performed the experiments in an echo-free chamber sized
$8\times 6\times 3$m$^3$. To obtain the experimental dependence of
the pressure all along the SC, we measured the pressure field at
several points between two rows of the SC. To achieve this we
built a finite SC and placed the microphone inside the periodic
structure in a space between two rows. The finite 2D SC used in
this paper was made of PVC cylinders hung in a frame and measuring
5$a\times$5$a$. The radius of the cylinders was $r=10$cm, and the
lattice constant of the SC was $a=22$cm. With these parameters,
the finite SC has the same filling fraction ($f\simeq0.65$) as in
Section \ref{sec:results}, and the dimensions are large enough for
the microphone to be placed between the rows. The microphone used
was a prepolarized free-field 1/2" Type $4189$ B\&K. The diameter
of the microphone was $1.32$cm, which is approximately $0.06a$,
and so a low level of influence over the pressure field measured
is expected.
The 3DReAMS system is capable of sweeping the microphone through a
3D grid of measuring points located at any trajectory inside the
echo-free chamber. The motion of the robot was controlled by an
NI-PCI 7334. We analyzed the absolute value of the sound pressure
between two rows of the SC by moving the microphone in steps of
$1$ cm.
In Section \ref{sec:results} we analyzed the upper and lower
frequencies of the BG for an SC of PVC cylinders with the filling
fraction value as in our experimental set up. By considering the
corresponding values of the parameters of our experimental SC, we
can obtain the frequency range of the BG. In our case, the BG
appears between $627$Hz and $1111$Hz. To measure the propagation
of sound inside the SC, we analyzed two different frequencies, one
inside the BG and the other in the first transmission band. The
frequencies were $920$Hz and $442$Hz, respectively.
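The conversion from reduced to physical frequencies is immediate; as a check (our snippet), the reduced gap edges quoted in Section \ref{sec:results}, with $a=0.22$ m and $c_{host}=340$ m/s, reproduce the band-gap limits given above:

```python
def reduced_to_hz(Omega, c_host, a):
    # Invert the reduced frequency Omega = f * a / c_host.
    return Omega * c_host / a

# gap edges for a = 0.22 m, c_host = 340 m/s
edges = [round(reduced_to_hz(Om, 340.0, 0.22)) for Om in (0.4057, 0.7189)]
# -> [627, 1111]  (Hz)
```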
\begin{figure}
\includegraphics[width=80mm,height=70mm,angle=0]{Figure3}
\caption{\label{fig:experimental}(Color online) Absolute value of
the pressure inside the SC in the positions between two rows. Blue
squares represent these values for a frequency outside of the BG,
$442$Hz. Red circles represent these values for a frequency inside
the BG, $920$Hz. Black dots represent the values used to fit the
exponential decay. Green line represents the fit of the
exponential decay of the evanescent mode inside the BG. The black
continuous line represents the absolute values of the pressure
obtained by finite element methods.}
\end{figure}
In Figure \ref{fig:experimental} we show the experimental
measurements of the absolute value of the pressure inside the SC
for propagating and evanescent modes. These experimental results
represent a novel measurement of the pressure field inside an SC.
The inset of Figure \ref{fig:experimental} shows the measured
points in steps of $1$ cm placed between two rows of cylinders
inside the SC using the 3DReAMS system. Blue squares with a
continuous blue polygonal line represent the absolute value of the
pressure of a frequency outside of the BG, that is $442$Hz. This
frequency represents a propagating mode inside the SC. Red circles
with a polygonal red continuous line represent the absolute value
of the pressure of a frequency inside the BG, that is $920$Hz. For
the last case, we can observe the decay of the pressure inside the
SC because of the evanescent behavior of the mode inside the BG.
In contrast to the propagating mode (blue squares with a blue
polygonal continuous line), the evanescent mode (red circles with
a red polygonal continuous line) is practically extinguished at
the end of the crystal, with only a small value remaining for the
emerging pressure. This characteristic of evanescent behavior in
finite SCs has been measured recently by Wu et al. \cite{Wu09} in
an SC with a point defect.
The value of the imaginary part of the first harmonic of the wave
vector for the $920$Hz frequency can be obtained from Figure
\ref{fig:complete}. Using the parameter values of our SC, we
observe $Im(k)=-5.6$m$^{-1}$. From experimental data
(see Figure \ref{fig:experimental}), we can fit the decay of the
evanescent mode. We have chosen the points with maximum values in
order to fit an exponential decay $ae^{bx}$. The values of the fit
are $a=0.05597\pm0.0103$Pa and $b=Im(k)=-5.60\pm1.45$m$^{-1}$.
Note that the experimental value is very close to the analytical
value, i.e., the assumption that only the first harmonic is needed
to represent the multiexponential decay of the evanescent mode is
correct.
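As an illustration of this fitting step, the sketch below recovers $a$ and $b$ from synthetic envelope maxima. A log-linear least-squares fit is used here in place of the nonlinear fit of the paper, and the sample positions are invented for the example; only the values $a=0.056$ Pa and $b=-5.6$ m$^{-1}$ come from the text.

```python
import numpy as np

def fit_exponential_decay(x, p):
    """Fit p ~ a * exp(b * x) via a log-linear least-squares fit.

    A linear fit of log(p) against x stands in for the nonlinear
    fit used in the paper; the slope gives b = Im(k) directly.
    """
    b, log_a = np.polyfit(x, np.log(p), 1)
    return np.exp(log_a), b

# Synthetic envelope maxima built from the paper's fitted values
# a = 0.056 Pa and b = -5.6 m^-1 (the sample positions are invented):
x = np.linspace(0.0, 0.5, 10)         # positions inside the SC [m]
p = 0.056 * np.exp(-5.6 * x)          # pressure maxima [Pa]
a_fit, b_fit = fit_exponential_decay(x, p)
```

On noise-free data the fit is exact; with experimental data the same procedure yields the quoted uncertainties.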
By solving the scattering problem inside the SC by means of the
finite element method (FEM) we can analyze the evanescent behavior
of the modes inside the BG of an SC. We have studied numerically
the absolute value of the sound pressure between two rows of an
SC. Continuity boundary conditions in the walls of the cylinders
and the radiation condition at the borders of the numerical domain
have been considered in the simulation. The black continuous line
in Figure \ref{fig:experimental} represents the absolute values of
pressure obtained numerically inside the SC, considering an
incident plane wave with a frequency of $920$Hz. The
correspondence between the experimental data (red polygonal line
with open red circles) and the numerical results is clear.
\section{Conclusions}
\label{sec:Conclusions} The propagation of waves inside periodic
structures consists of propagating and evanescent modes. $\omega
(\vec{k})$ methods can be used to analyze the propagating modes,
while evanescent modes are represented by the absence of $k$ for
some ranges of frequencies. In this paper, we extend the
$\omega(\vec{k})$ method to the $k(\omega)$ method for the case of 2D
SCs. We present the formulation of the supercell approximation for
the $k(\omega)$ method. Using the EPWE, we have predicted the
evanescent nature of the modes inside the BG of an SC. In this
paper we have reported measurements of the exponential-like decay
of the acoustic field inside an SC. EPWE predicted a value for the
imaginary part of the first harmonic of the wave number,
$Im(k)=-5.6$m$^{-1}$; and by fitting an exponential decay,
$ae^{bx}$, the experimental value we have obtained is
$b=Im(k)=-5.60\pm1.45$m$^{-1}$. Therefore, we can conclude that
only the first harmonic contributes to the exponential-like decay
of the evanescent mode. We have also shown that the imaginary part
of the wave vector connects propagation bands and conserves the
overall number of modes.
We have also applied the EPWE with the supercell approximation to
SCs with point defects. We have analyzed the case of one vacancy,
observing the localized mode inside the BG predicted by EPWE. The
wave number $k$ of this localized mode, which is purely imaginary
in the case of the complete SC, becomes purely real, so that the
localized mode turns into a passing mode, as has been observed in
the literature. The frequency of the localized mode coincides
exactly with the value obtained by PWE.
Analytical, numerical, and experimental results reproduce with
very good agreement the complex values of the wave vector inside
the BG, meaning that these methodologies obtain good values for
the exponential-like decay of the evanescent modes in an SC. This
work shows the basis for the correct understanding of the design
of narrow filters and wave guides based on phononic or sonic
crystals with point defects.
\begin{acknowledgments}
The authors would like to thank Dr. E.A. S\'anchez-P\'erez for his
comments and suggestions and thank Daniel Fenollosa and Talleres
Ferriols for their help in building the mechanical part of
3DReAMS. This work was supported by MEC (Spanish government) and
the European Regional Development Fund, under grants
MAT2009-09438 and MTM2009-14483-C02-02.
\end{acknowledgments}
The I-Threes (The I Threes) are a vocal trio composed of Marcia Griffiths, Judy Mowatt and Rita Marley.
They are best known for their work with Bob Marley and The Wailers, beginning with the album Natty Dread (1974), on which their backing vocals replaced Peter Tosh and Bunny Wailer.
They also recorded with Serge Gainsbourg on his reggae albums Aux armes et cætera and Mauvaises nouvelles des étoiles.
External links
Jamaican reggae group
Musical group formed in 1974
Vocal trio
Jamaican musical trio
Girl group
All of our Virtual Chat messages are emailed to Sales@ONPFJ.com. This system has been a great solution for fast and easy questions. We have stored tons of FAQ and Special Order information. Virtual Chat also stores information by Brand, for quick answers about special orders.
\section{Introduction}
The discovery of novel molecules and materials with desired properties is crucial for applications such as batteries, catalysis and drug design.
However, the vastness of chemical compound space and the computational cost of accurate quantum-chemical calculations prevent an exhaustive exploration.
In recent years, there have been increased efforts to use machine learning for the accelerated discovery of molecules and materials with desired properties~\citep{rupp2012fast,montavon2013machine,hansen2013assessment,schutt2014represent,Hansen-JCPL,faber2017fast,brockherde2017bypassing,boomsma2017spherical,eickenberg2017scattering}.
However, these methods are only applied to stable systems in so-called \emph{equilibrium}, i.e., local minima of the potential energy surface $E(\mathbf{r}_1, \dots, \mathbf{r}_n)$ where $\mathbf{r}_i$ is the position of atom $i$.
Data sets such as the established QM9 benchmark~\citep{ramakrishnan2014quantum} contain only equilibrium molecules.
Predicting stable atom arrangements is in itself an important challenge in quantum chemistry and material science.
In general, it is \emph{not} clear how to obtain equilibrium conformations without optimizing the atom positions.
Therefore, we need to compute both the total energy $E(\mathbf{r}_1, \dots, \mathbf{r}_n)$ and the forces acting on the atoms
\begin{equation}
\textbf{F}_i (\mathbf{r}_1, \dots, \mathbf{r}_n) = - \frac{\partial E}{\partial \mathbf{r}_i} (\mathbf{r}_1, \dots, \mathbf{r}_n). \label{eq:force}
\end{equation}
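The relation in Eq.~\ref{eq:force} can be checked numerically for any differentiable energy: central finite differences of $E$ must reproduce the analytic forces. The sketch below is an illustration only, using an invented harmonic pair potential rather than a quantum-chemical energy.

```python
import numpy as np

def energy(R, k=1.0, d0=1.0):
    """Toy energy: harmonic bonds between all atom pairs (NOT a QM energy)."""
    E = 0.0
    n = R.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(R[i] - R[j])
            E += 0.5 * k * (d - d0) ** 2
    return E

def forces_analytic(R, k=1.0, d0=1.0):
    """F_i = -dE/dr_i, derived by hand for the toy potential."""
    F = np.zeros_like(R)
    n = R.shape[0]
    for i in range(n):
        for j in range(n):
            if i != j:
                rij = R[i] - R[j]
                d = np.linalg.norm(rij)
                F[i] -= k * (d - d0) * rij / d
    return F

def forces_numeric(R, h=1e-5):
    """Central finite differences of the energy, component by component."""
    F = np.zeros_like(R)
    for i in range(R.shape[0]):
        for a in range(R.shape[1]):
            Rp, Rm = R.copy(), R.copy()
            Rp[i, a] += h
            Rm[i, a] -= h
            F[i, a] = -(energy(Rp) - energy(Rm)) / (2 * h)
    return F

R = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0], [0.0, 1.1, 0.4]])
```

The same consistency requirement carries over to a learned energy model: its predicted forces must be the negative gradient of its predicted energy.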
One possibility is to use a less computationally costly, but also less accurate, quantum-chemical approximation.
Instead, we choose to extend the domain of our machine learning model to both compositional (chemical) and configurational (structural) degrees of freedom.
In this work, we aim to learn a representation for molecules using equilibrium and non-equilibrium conformations.
Such a general representation for atomistic systems should follow fundamental quantum-mechanical principles.
Most importantly, the predicted force field has to be curl-free.
Otherwise, it would be possible to follow a circular trajectory of atom positions such that the energy keeps increasing, i.e., breaking the law of energy conservation.
Furthermore, the potential energy surface as well as its partial derivatives have to be smooth, e.g., in order to be able to perform geometry optimization.
Beyond that, it is beneficial that the model incorporates the invariance of the molecular energy with respect to rotation, translation and atom indexing.
Being able to model both chemical and conformational variations constitutes an important step towards ML-driven quantum-chemical exploration.
This work provides the following key contributions:
\begin{itemize}
\item We propose \emph{continuous-filter convolutional (cfconv)} layers as a means to move beyond grid-bound data such as images or audio towards modeling objects with arbitrary positions such as astronomical observations or atoms in molecules and materials.
\item We propose \emph{SchNet}: a neural network specifically designed to respect essential quantum-chemical constraints.
In particular, we use the proposed cfconv layers in $\mathbb{R}^3$ to model interactions of atoms at arbitrary positions in the molecule.
SchNet delivers both rotationally invariant energy prediction and rotationally equivariant force predictions.
We obtain a smooth potential energy surface and the resulting force-field is guaranteed to be energy-conserving.
\item We present a new, challenging benchmark -- ISO17 -- including both chemical and conformational changes\footnote{ISO17 is publicly available at \url{www.quantum-machine.org}.}.
We show that training with forces improves generalization in this setting as well.
\end{itemize}
\section{Related work}
Previous work has used neural networks and Gaussian processes applied to hand-crafted features to fit potential energy surfaces~\citep{manzhos2006random,malshe2009development,behler2007generalized,bartok2010gaussian,behler2011atom,bartok2013representing}.
Graph convolutional networks for circular fingerprint~\citep{duvenaud2015convolutional} and molecular graph convolutions~\citep{Kearnes2016} learn representations for molecules of arbitrary size.
They encode the molecular structure using neighborhood relationships as well as bond features, e.g., one-hot encodings of single, double and triple bonds.
In the following, we briefly review the related work that will be used in our empirical evaluation: gradient domain machine learning (GDML), deep tensor neural networks (DTNN) and enn-s2s.
\paragraph*{Gradient-domain machine learning (GDML)} \citet{chmiela2017machine} proposed GDML as a method to construct force fields that explicitly obey the law of energy conservation. GDML captures the relationship between energy and interatomic forces (see Eq.~\ref{eq:force}) by training the gradient of the energy estimator. The functional relationship between atomic coordinates and interatomic forces is thus learned directly and energy predictions are obtained by re-integration.
However, GDML does not scale well due to its kernel matrix growing quadratically with the number of atoms as well as the number of examples.
Beyond that, it is not designed to represent different compositions of atom types unlike SchNet, DTNN and enn-s2s.
\paragraph*{Deep tensor neural networks (DTNN)} \citet{schutt2017quantum} proposed the DTNN for molecules that are inspired by the many-body Hamiltonian applied to the interactions of atoms. They have been shown to reach chemical accuracy on a small set of molecular dynamics trajectories as well as QM9.
Even though the DTNN shares the invariances with our proposed architecture, its interaction layers lack the continuous-filter convolution interpretation.
It falls behind in accuracy compared to SchNet and enn-s2s.
\paragraph*{enn-s2s} \citet{gilmer2017neural} proposed the enn-s2s as a variant of message-passing neural networks that uses bond type features in addition to interatomic distances.
It achieves state-of-the-art performance on all properties of the QM9 benchmark~\citep{gilmer2017neural}.
Unfortunately, it cannot be used for molecular dynamics predictions (MD-17).
This is caused by discontinuities in their potential energy surface due to the discreteness of the one-hot encodings in their input.
In contrast, SchNet does not use such features and yields a continuous potential energy surface by using continuous-filter convolutional layers.
\section{Continuous-filter convolutions}
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{figures/discrete_continuous}
\caption{The discrete filter (left) is not able to capture the subtle positional changes of the atoms resulting in discontinuous energy predictions $\hat{E}$ (bottom left). The continuous filter captures these changes and yields smooth energy predictions (bottom right).}
\label{fig:discrete}
\end{figure}
In deep learning, convolutional layers operate on discretized signals such as image pixels~\citep{lecun1989backpropagation, krizhevsky2012imagenet}, video frames~\citep{karpathy2014large} or digital audio data~\citep{van2016wavenet}.
While it is sufficient to define the filter on the same grid in these cases, this is not possible for unevenly spaced inputs such as the atom positions of a molecule (see Fig. \ref{fig:discrete}).
Other examples include astronomical observations~\citep{max2014method}, climate data~\citep{olafsdottir2016redfit} and the financial market~\citep{nieto2015bayesian}.
Commonly, this can be solved by a re-sampling approach defining a representation on a grid~\citep{snyder2012finding,hirn2017wavelet,brockherde2017bypassing}.
However, choosing an appropriate interpolation scheme is a challenge on its own and, possibly, requires a large number of grid points.
Therefore, various extensions of convolutional layers even beyond the Euclidean space exist, e.g., for graphs~\citep{BrunaZSL13,HenaffBL15} and 3d shapes~\citep{masci2015geodesic}.
Analogously, we propose to use continuous filters that are able to handle unevenly spaced data, in particular, atoms at arbitrary positions.
Given the feature representations of $n$ objects $X^l = (\mathbf{x}^l_1,\ldots,\mathbf{x}^l_n)$ with $\mathbf{x}^l_i \in \mathbb{R}^F$ at locations $R =(\mathbf{r}_1,\ldots,\mathbf{r}_n)$ with $\mathbf{r}_i \in \mathbb{R}^D$, the continuous-filter convolutional layer $l$ requires a filter-generating function
\[
W^l: \mathbb{R}^D \rightarrow \mathbb{R}^F,
\]
that maps from a position to the corresponding filter values.
This constitutes a generalization of a filter tensor in discrete convolutional layers.
As in dynamic filter networks~\citep{BrabandereJTG16}, this filter-generating function is modeled with a neural network.
While dynamic filter networks generate weights restricted to a grid structure, our approach generalizes this to arbitrary position and number of objects.
The output $\mathbf{x}_i^{l+1}$ for the convolutional layer at position $\mathbf{r}_i$ is then given by
\begin{equation}
\mathbf{x}_i^{l+1} = (X^l * W^l)_i = \sum_{j} \mathbf{x}^l_j \circ W^l(\mathbf{r}_i - \mathbf{r}_j),
\end{equation}
where "$\circ$" represents the element-wise multiplication.
We apply these convolutions feature-wise for computational efficiency~\citep{chollet2016xception}.
The interactions between feature maps are handled by separate object-wise or, specifically, atom-wise layers in SchNet.
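As a concrete illustration of the layer defined above, here is a minimal numpy sketch of the cfconv sum; the `filter_net` callable is a placeholder standing in for the filter-generating network of the architecture.

```python
import numpy as np

def cfconv(X, R, filter_net):
    """Continuous-filter convolution: x_i' = sum_j x_j o W(r_i - r_j).

    X          : (n, F) array of feature vectors
    R          : (n, D) array of object positions
    filter_net : callable mapping an offset in R^D to filter values in R^F
    """
    out = np.zeros_like(X)
    for i in range(X.shape[0]):
        for j in range(X.shape[0]):
            out[i] += X[j] * filter_net(R[i] - R[j])  # element-wise product
    return out

# With a constant all-ones "filter", every output row is the column sum of X:
X = np.array([[1.0, 2.0], [3.0, 4.0]])
R = np.array([[0.0], [1.0]])
Y = cfconv(X, R, lambda dr: np.ones(2))
```

A production implementation would vectorize the double loop, but the sketch makes explicit that no grid is assumed: the positions enter only through the offsets fed to the filter.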
\section{SchNet}
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{figures/fig1}
\caption{Illustration of SchNet with an architectural overview (left), the interaction block (middle) and the continuous-filter convolution with filter-generating network (right). The shifted softplus is defined as $\text{ssp}(x) = \ln(0.5e^x + 0.5)$.}\label{fig:architecture}
\end{figure}
SchNet is designed to learn a representation for the prediction of molecular energies and atomic forces.
It reflects fundamental physical laws including invariance to atom indexing and translation, a smooth energy prediction w.r.t. atom positions as well as energy-conservation of the predicted force fields.
The energy and force predictions are rotationally invariant and equivariant, respectively.
\subsection{Architecture}
Fig.~\ref{fig:architecture} shows an overview of the SchNet architecture.
At each layer, the molecule is represented atom-wise analogous to pixels in an image. Interactions between atoms are modeled by the three interaction blocks.
The final prediction is obtained after atom-wise updates of the feature representation and pooling of the resulting atom-wise energy.
In the following, we discuss the different components of the network.
\paragraph{Molecular representation}
A molecule in a certain conformation can be described uniquely by a set of $n$ atoms with nuclear charges $Z=(Z_1, \dots, Z_n)$ and atomic positions $R=(\mathbf{r}_1, \dots \mathbf{r}_n)$.
Through the layers of the neural network, we represent the atoms using a tuple of features $X^l= (\mathbf{x}_1^l, \dots \mathbf{x}_n^l)$, with $\mathbf{x}^l_i \in \mathbb{R}^F$, where $F$ is the number of feature maps, $n$ the number of atoms and $l$ the current layer.
The representation of atom $i$ is initialized using an embedding dependent on the atom type $Z_i$:
\begin{equation}
\mathbf{x}^0_i = \mathbf{a}_{Z_i}.
\end{equation}
The atom type embeddings $\mathbf{a}_Z$ are initialized randomly and optimized during training.
\paragraph{Atom-wise layers}
A recurring building block in our architecture are atom-wise layers.
These are dense layers that are applied separately to the representation $\mathbf{x}^{l}_i$ of atom $i$:
\[
\mathbf{x}^{l+1}_i = W^l \mathbf{x}^{l}_i + \mathbf{b}^l
\]
These layers are responsible for the recombination of feature maps.
Since weights are shared across atoms, our architecture remains scalable with respect to the size of the molecule.
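Weight sharing across atoms also makes an atom-wise layer equivariant to atom indexing, which can be checked directly; the sketch below is illustrative and not the paper's implementation.

```python
import numpy as np

def atomwise(X, W, b):
    """Atom-wise dense layer: the same weights applied to every atom's features."""
    return X @ W.T + b

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))      # 5 atoms with 4 feature maps each
W = rng.normal(size=(4, 4))
b = rng.normal(size=4)
perm = rng.permutation(5)

# Permuting atoms before or after the layer gives the same result:
before = atomwise(X[perm], W, b)
after = atomwise(X, W, b)[perm]
```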
\paragraph{Interaction}
The interaction blocks, as shown in Fig.~\ref{fig:architecture}~(middle), are responsible for updating the atomic representations based on the molecular geometry $R=(\mathbf{r}_1, \dots \mathbf{r}_n)$.
We keep the number of feature maps constant at $F=64$ throughout the interaction part of the network.
In contrast to MPNN and DTNN, we do not use weight sharing across multiple interaction blocks.
The blocks use a residual connection inspired by ResNet~\citep{he2016deep}:
\[
\mathbf{x}_i^{l+1} = \mathbf{x}_i^{l} + \mathbf{v}_i^{l}.
\]
As shown in the interaction block in Fig.~\ref{fig:architecture}, the residual $\mathbf{v}_i^l$ is computed through an atom-wise layer, an interatomic continuous-filter convolution (cfconv) followed by two more atom-wise layers with a softplus non-linearity in between.
This allows for a flexible residual that incorporates interactions between atoms and feature maps.
\paragraph{Filter-generating networks}
\begin{figure}
\centering
\begin{subfigure}[b]{0.28\textwidth}
\includegraphics[width=\textwidth]{figures/eth_layer_1}
\caption{1$^\text{st}$ interaction block}
\label{fig:layer1}
\end{subfigure}
\quad
\begin{subfigure}[b]{0.28\textwidth}
\includegraphics[width=\textwidth]{figures/eth_layer_2}
\caption{2$^\text{nd}$ interaction block}
\label{fig:layer2}
\end{subfigure}
\quad
\begin{subfigure}[b]{0.28\textwidth}
\includegraphics[width=\textwidth]{figures/eth_layer_3}
\caption{3$^\text{rd}$ interaction block}
\label{fig:layer3}
\end{subfigure}
\caption{10x10 {\AA} cuts through all 64 radial, three-dimensional filters in each interaction block of
SchNet trained on molecular dynamics of ethanol. Negative values are blue, positive values are red.}\label{fig:filters}
\end{figure}
The cfconv layer including its filter-generating network are depicted at the right panel of Fig.~\ref{fig:architecture}.
In order to satisfy the requirements for modeling molecular energies, we restrict our filters for the cfconv layers to be rotationally invariant.
The rotational invariance is obtained by using interatomic distances
\[
d_{ij} = \| \mathbf{r}_i-\mathbf{r}_j\|
\]
as input for the filter network.
Without further processing, the filters would be highly correlated since a neural network after initialization is close to linear.
This leads to a plateau at the beginning of training that is hard to overcome.
We avoid this by expanding the distance with radial basis functions
\[
e_k(\mathbf{r}_i-\mathbf{r}_j) = \exp ( -\gamma \|d_{ij} - \mu_k \|^2 )
\]
located at centers $0\text{\AA} \leq \mu_k \leq 30\text{\AA}$ every $0.1${\AA} with $\gamma=10${\AA}.
This is chosen such that all distances occurring in the data sets are covered by the filters.
Due to this additional non-linearity, the initial filters are less correlated leading to a faster training procedure.
Choosing fewer centers corresponds to reducing the resolution of the filter, while restricting the range of the centers corresponds to the filter size in a usual convolutional layer.
An extensive evaluation of the impact of these variables is left for future work.
We feed the expanded distances
into two dense layers with softplus activations to compute the filter weight $W(\mathbf{r}_i - \mathbf{r}_j)$ as shown in Fig.~\ref{fig:architecture} (right).
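The filter-generating pipeline can be sketched in a few lines of numpy: expand the distance on the radial basis described above, then apply two dense layers with shifted softplus. The grid of centers, $\gamma$, and $F=64$ are taken from the text; the random-initialization scale is an assumption of this sketch.

```python
import numpy as np

def ssp(x):
    """Shifted softplus: ssp(x) = ln(0.5*exp(x) + 0.5), so ssp(0) = 0."""
    return np.log(0.5 * np.exp(x) + 0.5)

def rbf_expand(d, mu, gamma=10.0):
    """Expand a scalar distance d (Angstrom) on Gaussian radial basis functions."""
    return np.exp(-gamma * (d - mu) ** 2)

def filter_net(d, mu, W1, b1, W2, b2):
    """Two dense layers with ssp acting on the RBF-expanded distance."""
    h = ssp(rbf_expand(d, mu) @ W1 + b1)
    return ssp(h @ W2 + b2)

mu = np.arange(0.0, 30.0, 0.1)     # centers every 0.1 A between 0 and 30 A
rng = np.random.default_rng(0)     # init scale 0.1 is an assumption
W1 = 0.1 * rng.normal(size=(mu.size, 64))
b1 = np.zeros(64)
W2 = 0.1 * rng.normal(size=(64, 64))
b2 = np.zeros(64)

w = filter_net(1.5, mu, W1, b1, W2, b2)   # F=64 filter values for d_ij = 1.5 A
```

Because the basis functions are localized, only a handful of entries of the expansion are appreciably non-zero for a given distance, which is what decorrelates the initial filters.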
Fig~\ref{fig:filters} shows 2d-cuts through generated filters for all three interaction blocks of SchNet trained on an ethanol molecular dynamics trajectory.
We observe how each filter emphasizes certain ranges of interatomic distances.
This enables its interaction block to update the representations according to the radial environment of each atom.
The sequential updates from three interaction blocks allow SchNet to construct highly complex many-body representations in the spirit of DTNNs~\citep{schutt2017quantum} while keeping rotational invariance due to the radial filters.
\subsection{Training with energies and forces}
As described above, the interatomic forces are related to the molecular energy, so that we can obtain an energy-conserving force model by differentiating the energy model w.r.t. the atom positions
\begin{equation}
\hat{\textbf{F}}_i(Z_1, \dots, Z_n, \mathbf{r}_1, \dots, \mathbf{r}_n) = -\frac{\partial \hat{E}}{\partial \mathbf{r}_i}(Z_1, \dots, Z_n, \mathbf{r}_1, \dots, \mathbf{r}_n).
\end{equation}
\citet{chmiela2017machine} pointed out that this leads to an energy-conserving force-field by construction.
As SchNet yields rotationally invariant energy predictions, the force predictions are rotationally equivariant by construction.
The model has to be at least twice differentiable to allow for gradient descent of the force loss.
We chose a shifted softplus $\text{ssp}(x) = \ln(0.5e^x + 0.5)$ as non-linearity throughout the network in order to obtain a smooth potential energy surface.
The shifting ensures that $\text{ssp}(0) = 0$ and improves the convergence of the network.
This activation function shows similarity to ELUs~\citep{clevert2015fast}, while having infinite order of continuity.
We include the total energy $E$ as well as forces $\mathbf{F}_i$ in the training loss to train a neural network that performs well on both properties:
\begin{equation}
\ell(\hat{E}, (E, \mathbf{F}_1, \dots, \mathbf{F}_n)) = \rho \|E - \hat{E} \|^2 + \frac{1}{n} \sum_{i=1}^n \left \| \mathbf{F}_i - \left (-\frac{\partial \hat{E}}{\partial \mathbf{r}_i}\right ) \right\|^2. \label{eq:loss}
\end{equation}
This kind of loss has been used before for fitting restricted potential energy surfaces with MLPs~\citep{pukrittayakamee2009simultaneous}.
In our experiments, we use $\rho=0.01$ for combined energy and force training. The value of $\rho$ was optimized empirically to account for different scales of energy and forces.
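For given predictions, the combined loss amounts to a few lines; the sketch below assumes plain arrays with one row of force components per atom, and the toy values are invented for the example.

```python
import numpy as np

def combined_loss(E_hat, E, F_hat, F, rho=0.01):
    """rho * |E - E_hat|^2 + (1/n) * sum_i |F_i - F_hat_i|^2."""
    force_term = np.mean(np.sum((F - F_hat) ** 2, axis=1))
    return rho * (E - E_hat) ** 2 + force_term

# Toy values: 2 atoms in 3-d, energy off by 1, each force component off by 1.
E, E_hat = 1.0, 0.0
F = np.zeros((2, 3))
F_hat = np.ones((2, 3))
value = combined_loss(E_hat, E, F_hat, F)
```

In training, `F_hat` would be obtained by differentiating the energy model with respect to the atom positions, so minimizing the force term also shapes the energy surface.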
Due to the relation of energies and forces reflected in the model, we expect to see improved generalization, however, at a computational cost.
As we need to perform a full forward and backward pass on the energy model to obtain the forces, the resulting force model is twice as deep and, hence, requires about twice the amount of computation time.
Even though the GDML model captures this relationship between energies and forces, it is explicitly optimized to predict the force field while the energy prediction is a by-product.
Models such as circular fingerprints~\citep{duvenaud2015convolutional}, molecular graph convolutions or message-passing neural networks~\citep{gilmer2017neural} for property prediction across chemical compound space are only concerned with equilibrium molecules, i.e., the special case where the forces are vanishing.
They cannot be trained with forces in a similar manner, as they include discontinuities in their predicted potential energy surface caused by discrete binning or the use of one-hot encoded bond type information.
\section{Experiments and results}
In this section, we apply the SchNet to three different quantum chemistry datasets: QM9, MD17 and ISO17.
We designed the experiments such that each adds another aspect towards modeling chemical space.
While QM9 only contains equilibrium molecules, for MD17 we predict conformational changes of molecular dynamics of single molecules.
Finally, we present ISO17 combining both chemical and structural changes.
For all datasets, we report mean absolute errors in kcal/mol for the energies and in kcal/mol/{\AA} for the forces. The architecture of the network was fixed after an evaluation on the MD17 data sets for benzene and ethanol (see supplement).
In each experiment, we split the data into a training set of given size $N$ and use a validation set of 1,000 examples for early stopping. The remaining data is used as test set.
All models are trained with SGD using the ADAM optimizer~\citep{KingmaB14} with 32 molecules per mini-batch.
We use an initial learning rate of $10^{-3}$ and an exponential learning rate decay with ratio $0.96$ every 100,000 steps. The model used for testing is obtained using an exponential moving average over weights with decay rate 0.99.
\subsection{QM9 -- chemical degrees of freedom}
\begin{table}
\caption{Mean absolute errors for energy predictions in kcal/mol on the QM9 data set with given training set size $N$. Best model in bold.}\label{tab:qm9}
\centering
\small
\begin{tabular}{rrrrrrrr}
\toprule
$N$ & & SchNet & DTNN~\citep{schutt2017quantum} & enn-s2s~\citep{gilmer2017neural} & & enn-s2s-ens5~\citep{gilmer2017neural}\\ \midrule
50,000 && \textbf{0.59} & 0.94 & -- & & -- \\
100,000 && \textbf{0.34} & 0.84 & -- & & -- \\
110,462 && \textbf{0.31} & -- & 0.45 & & 0.33 \\
\bottomrule
\end{tabular}
\end{table}
QM9 is a widely used benchmark for the prediction of various molecular properties in equilibrium~\citep{ramakrishnan2014quantum,blum2009gdb13,reymond2015chemical}.
Therefore, the forces are zero by definition and do not need to be predicted.
In this setting, we train a single model that generalizes across different compositions and sizes.
QM9 consists of $\approx$130k organic molecules with up to 9 heavy atoms of the types $\{$C, O, N, F$\}$.
As the size of the training set varies across previous work, we trained our model in each of these experimental settings.
Table~\ref{tab:qm9} shows the performance of various competing methods for predicting the total energy (property $U_0$ in QM9).
We provide comparisons to the DTNN~\citep{schutt2017quantum} and the best performing MPNN configuration denoted \emph{enn-s2s} and an ensemble of MPNNs (enn-s2s-ens5)~\citep{gilmer2017neural}.
SchNet consistently obtains state-of-the-art performance with an MAE of 0.31 kcal/mol at 110k training examples.
\subsection{MD17 -- conformational degrees of freedom}
\setlength{\tabcolsep}{5pt}
\begin{table}
\caption{Mean absolute errors for energy and force predictions in kcal/mol and kcal/mol/\AA, respectively. GDML and SchNet test errors for training with 1,000 and 50,000 examples of molecular dynamics simulations of small, organic molecules are shown. SchNets were trained only on energies as well as energies and forces combined. Best results in bold.}\label{tab:md}
\centering
\small
\begin{tabular}{llrrrrrrrr}
\toprule
& & & \multicolumn{3}{c}{$N$ = 1,000} & & \multicolumn{3}{c}{$N$ = 50,000}\\ \cmidrule{4-6} \cmidrule{8-10}
& & & \textbf{GDML}~\citep{chmiela2017machine} & \multicolumn{2}{c}{\textbf{SchNet}} & & \textbf{DTNN}~\citep{schutt2017quantum} & \multicolumn{2}{c}{\textbf{SchNet}} \\
& & & \textit{forces} & \textit{energy} & \textit{both} & & \textit{energy} & \textit{energy} & \textit{both} \\ \cmidrule{1-2} \cmidrule{4-6} \cmidrule{8-10}
\multirow{2}{*}{\textbf{Benzene}} & \textit{energy} & & \textbf{0.07} & 1.19 & 0.08 && \textbf{0.04} & 0.08 & 0.07 \\
& \textit{forces} & & \textbf{0.23} & 14.12 & 0.31 && -- & 1.23 & \textbf{0.17} \\\cmidrule{1-2} \cmidrule{4-6} \cmidrule{8-10}
\multirow{2}{*}{\textbf{Toluene}} & \textit{energy} & & \textbf{0.12} & 2.95 & \textbf{0.12} && 0.18 & 0.16 & \textbf{0.09} \\
& \textit{forces} & & \textbf{0.24} & 22.31 & 0.57 && -- & 1.79 & \textbf{0.09} \\ \cmidrule{1-2} \cmidrule{4-6} \cmidrule{8-10}
\multirow{2}{*}{\textbf{Malonaldehyde}} & \textit{energy} & & 0.16 & 2.03 & \textbf{0.13} && 0.19 & 0.13 & \textbf{0.08} \\
& \textit{forces} & & 0.80 & 20.41 & \textbf{0.66} && -- & 1.51 & \textbf{0.08} \\\cmidrule{1-2} \cmidrule{4-6} \cmidrule{8-10}
\multirow{2}{*}{\textbf{Salicylic acid}} & \textit{energy} & & \textbf{0.12} & 3.27 & 0.20 && 0.41 & 0.25 & \textbf{0.10} \\
& \textit{forces} & & \textbf{0.28} & 23.21 & 0.85 && -- & 3.72 & \textbf{0.19} \\ \cmidrule{1-2} \cmidrule{4-6} \cmidrule{8-10}
\multirow{2}{*}{\textbf{Aspirin}} & \textit{energy} & & \textbf{0.27} & 4.20 & 0.37 && -- & 0.25 & \textbf{0.12} \\
& \textit{forces} & & \textbf{0.99} & 23.54 & 1.35 && -- & 7.36 & \textbf{0.33} \\ \cmidrule{1-2} \cmidrule{4-6} \cmidrule{8-10}
\multirow{2}{*}{\textbf{Ethanol}} & \textit{energy} & & 0.15 & 0.93 & \textbf{0.08} && -- & 0.07 & \textbf{0.05} \\
& \textit{forces} & & 0.79 & 6.56 & \textbf{0.39} && -- & 0.76 & \textbf{0.05} \\ \cmidrule{1-2} \cmidrule{4-6} \cmidrule{8-10}
\multirow{2}{*}{\textbf{Uracil}} & \textit{energy} & & \textbf{0.11} & 2.26 & 0.14 && -- & 0.13 & \textbf{0.10} \\
& \textit{forces} & & \textbf{0.24} & 20.08 & 0.56 && -- & 3.28 & \textbf{0.11} \\
\cmidrule{1-2} \cmidrule{4-6} \cmidrule{8-10}
\multirow{2}{*}{\textbf{Naphthalene}} & \textit{energy} & & \textbf{0.12} & 3.58 & 0.16 && -- & 0.20 & \textbf{0.11} \\
& \textit{forces} & & \textbf{0.23} & 25.36 & 0.58 && -- & 2.58 & \textbf{0.11} \\
\bottomrule
\end{tabular}
\end{table}
MD17 is a collection of eight molecular dynamics simulations for small organic molecules.
These data sets were introduced by \citet{chmiela2017machine} for prediction of energy-conserving force fields using GDML.
Each of these consists of a trajectory of a single molecule covering a large variety of conformations.
Here, the task is to predict energies and forces using a separate model for each trajectory.
This molecule-wise training is motivated by the need for highly-accurate force predictions when doing molecular dynamics.
Table~\ref{tab:md} shows the performance of SchNet using 1,000 and 50,000 training examples in comparison with GDML and DTNN.
Using the smaller data set, GDML achieves remarkably accurate energy and force predictions despite being only trained on forces.
The energies are only used to fit the integration constant.
As mentioned before, GDML does not scale well with the number of atoms and training examples.
Therefore, it cannot be trained on 50,000 examples.
The DTNN was evaluated only on four of these MD trajectories using the larger training set~\citep{schutt2017quantum}.
Note that the \emph{enn-s2s} cannot be used on this dataset due to discontinuities in its inferred potential energy surface.
We trained SchNet using just energies and using both energies and forces.
While the energy-only model shows high errors for the small training set, the model including forces achieves energy predictions comparable to GDML.
In particular, we observe that SchNet outperforms GDML on the more flexible molecules malonaldehyde and ethanol, while GDML reaches much lower force errors on the remaining MD trajectories that all include aromatic rings.
The real strength of SchNet is its scalability, as it outperforms the DTNN in three of four data sets using 50,000 training examples using only energies in training.
Including force information, SchNet consistently obtains accurate energies and forces with errors below 0.12 kcal/mol and 0.33 kcal/mol/{\AA}, respectively.
Remarkably, when training on energies and forces using 1,000 training examples, SchNet performs better than training the same model on energies alone for 50,000 examples.
\subsection{ISO17 -- chemical and conformational degrees of freedom}
\begin{table}
\caption{Mean absolute errors on C$_7$O$_2$H$_{10}$ isomers in kcal/mol.}\label{tab:isomer}
\centering
\small
\begin{tabular}{llrrrrrrr}
\toprule
& & \textbf{mean predictor} & \multicolumn{2}{c}{\textbf{SchNet}} \\
& & & \textit{energy} & \textit{energy+forces} \\ \midrule
\textbf{known molecules /} & \textit{energy} & 14.89 & 0.52 & \textbf{0.36}\\
\textbf{unknown conformation} & \textit{forces} & 19.56 & 4.13 & \textbf{1.00} \\ \midrule
\textbf{unknown molecules /} & \textit{energy} & 15.54 & 3.11 & \textbf{2.40} \\
\textbf{unknown conformation} & \textit{forces} & 19.15 & 5.71 & \textbf{2.18} \\
\bottomrule
\end{tabular}
\end{table}
As the next step towards quantum-chemical exploration, we demonstrate the capability of SchNet to represent a complex potential energy surface including conformational and chemical changes.
We present a new dataset -- ISO17 -- where we consider short MD trajectories of 129 isomers, i.e., chemically different molecules with the same number and types of atoms.
In contrast to MD17, we train a joint model across different molecules.
We calculate energies and interatomic forces from short MD trajectories of 129 molecules drawn randomly from the largest set of isomers in QM9.
While the composition of all included molecules is C$_7$O$_2$H$_{10}$, the chemical structures are fundamentally different.
Each trajectory consists of 5,000 conformations, so the data set comprises 645,000 labeled examples in total.
We consider two scenarios with this dataset:
In the first variant, the molecular graph structures present in training are also present in the test data.
This demonstrates how well our model is able to represent a complex potential energy surface with chemical and conformational changes.
In the more challenging scenario, the test data contains a different subset of molecules.
Here we evaluate the generalization of our model to previously unseen chemical structures.
We predict forces and energies in both cases and compare to the mean predictor as a baseline.
We draw a subset of 4,000 steps from 80\% of the MD trajectories for training and validation.
This leaves us with a separate test set for each scenario:
(1) the unseen 1,000 conformations of molecule trajectories included in the training set and
(2) all 5,000 conformations of the remaining 20\% of molecules not included in training.
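The two evaluation scenarios correspond to two different ways of partitioning the data; a minimal NumPy sketch (index arrays and names are illustrative, not the actual dataset loader, and the 4,000 training conformations — drawn at random in our setup — are taken as the first 4,000 here for brevity):

```python
import numpy as np

# 129 isomers x 5,000 conformations each = 645,000 examples.
n_molecules, n_steps = 129, 5000
molecule_id = np.repeat(np.arange(n_molecules), n_steps)
step_id = np.tile(np.arange(n_steps), n_molecules)

# 80% of the molecules are available for training/validation.
rng = np.random.default_rng(0)
train_molecules = rng.choice(n_molecules, size=int(0.8 * n_molecules), replace=False)
is_train_mol = np.isin(molecule_id, train_molecules)

# 4,000 conformations per training molecule go to training/validation.
train_mask = is_train_mol & (step_id < 4000)

# Test set 1: the held-out 1,000 conformations of known molecules.
test_known = is_train_mol & (step_id >= 4000)

# Test set 2: all 5,000 conformations of the held-out 20% of molecules.
test_unknown = ~is_train_mol

print(train_mask.sum(), test_known.sum(), test_unknown.sum())
```

With 103 of the 129 molecules in training, this yields 412,000 training, 103,000 known-molecule test, and 130,000 unknown-molecule test examples.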
Table~\ref{tab:isomer} shows the performance of SchNet on both test sets.
Our proposed model reaches chemical accuracy for the prediction of energies and forces for the test set of known molecules.
Including forces in the training improves the performance here as well as on the set of unseen molecules.
This shows that using force information not only helps to accurately predict nearby conformations of a single molecule, but indeed helps to generalize across chemical compound space.
\section{Conclusions}
We have proposed continuous-filter convolutional layers as a novel building block for deep neural networks.
In contrast to the usual convolutional layers, these can model unevenly spaced data such as occur in astronomy, climate research and, in particular, quantum chemistry.
We have developed SchNet to demonstrate the capabilities of continuous-filter convolutional layers in the context of modeling quantum interactions in molecules.
Our architecture respects quantum-chemical constraints such as rotationally invariant energy predictions as well as rotationally equivariant, energy-conserving force predictions.
We have evaluated our model in three increasingly challenging experimental settings.
Each brings us one step closer to practical chemical exploration driven by machine learning.
SchNet improves the state-of-the-art in predicting energies for molecules in equilibrium of the QM9 benchmark.
Beyond that, it achieves accurate predictions for energies and forces for all molecular dynamics trajectories in MD17.
Finally, we have introduced ISO17 consisting of 645,000 conformations of various C$_7$O$_2$H$_{10}$ isomers.
While we achieve promising results on this new benchmark, modeling chemical and conformational variations remains difficult and needs further improvement.
For this reason, we expect that ISO17 will become a new standard benchmark for modeling quantum interactions with machine learning.
\subsubsection*{Acknowledgments}
This work was supported by the Federal Ministry of Education and Research (BMBF) for the Berlin Big Data Center BBDC (01IS14013A). Additional support was provided by the DFG (MU 987/20-1) and from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement NO 657679. K.R.M. gratefully acknowledges the BK21 program funded by Korean National Research Foundation grant (No. 2012-005741) and the
Institute for Information \& Communications Technology Promotion (IITP) grant funded
by the Korea government (no. 2017-0-00451).
\bibliographystyle{unsrtnat}
# Pitot Tube experiment

Hi there, I am new to this forum, and after reading hundreds of threads I can't find a solution, so here I am.

Recently I conducted an experiment to measure pressure using a pitot tube and to use that to find the velocity profile, which would then be used to find the mass flow rate. The whole experiment is rather long, so I will only consider the first pressure reading. The pressure measured by the pitot is in mmH2O, and atmospheric pressure is 763.25 mmHg.

Experiment setup: an air pump blows air at an unknown constant velocity into a large horizontal tube which narrows down to a tube with a smaller diameter (a venturi). The pitot is attached to the middle of the small tube, about 15 cm from where the small tube starts. A digital display shows the pressure from the pitot. All of the apparatus is at the same level (horizontal), with no elevation. The other end of the small tube is open.

Now, stagnation pressure = dynamic + static pressure. So the stagnation pressure must be the pressure measured by the pitot, which appears on the digital display, and the static pressure must be atmospheric, as it is 90 degrees to the flow. Am I right or wrong?

So what I am doing is taking the pitot pressure, 80 mmH2O, and converting it to pascals, which gives me 784.5 Pa. When I convert the atmospheric pressure, 763.25 mmHg, to pascals I get 101767 Pa, which I substitute back into my equation to find the dynamic pressure: 784.5 − 101767 = dynamic pressure. To my understanding I should now be able to find the velocity using 0.5 × (density of air) × v² = 784.5 − 101767, but as you can see this gives me a negative number, which can't be square-rooted.

So I am thinking I am wrong. Can anyone explain how I might find the velocity using the data from the pitot tube? All I really have is the pitot pressure, the room temperature (18 degrees Celsius) and the atmospheric pressure.

Thank you very much for any replies.

## Answers and Replies

> Now stagnation pressure = dynamic + static pressure. So stagnation pressure must be the pressure measured by the pitot which appears on a digital display and the static pressure must be the atmospheric as it is 90 degrees to the flow. Am I right or wrong?

Stagnation pressure is static pressure (measured at a point where flow velocity is zero).

Total pressure = static pressure + dynamic pressure.

> So what I am doing is taking the pitot pressure 80 mmH2O and converting it to Pascals which gives me 784.5 Pa. [...] 784.5 − 101767 = Dynamic Pressure.

Well, for a start, the fact that your total pressure is lower than atmospheric should have flagged an error to you (see the above equation).

I'm also curious why your pitot reading would be in mmH2O. Did you build it yourself?

Also, doesn't the pitot tube do both readings? So the figure you receive on the digital readout would be the dynamic pressure to convert to flow velocity, not the total pressure (this would explain the excessively low reading).

---

Hey jarednjames, thank you for your reply. That's what I was saying: I am getting a negative number which can't be square-rooted, so I was wrong. But like you said, the reading would be the dynamic pressure, so I tried the whole thing using just the dynamic pressure equation, 0.5 × density × v². Now the results look better: the mass flow rate I found using the venturi is very similar to the mass flow rate I found using the pitot velocity traverse. I am not sure why it's in mmH2O; that's just the way the equipment was set up by the technician. Anyway, thanks a lot, mate. :)

---

You're welcome, glad to help.

---

Hey, can someone please explain this experiment? Exactly what do we have to find and calculate in it?

---

boneh3ad (Science Advisor, Gold Member):

Don't listen to JaredJames, his answer is incorrect.

> Now stagnation pressure = dynamic + static pressure. So stagnation pressure must be the pressure measured by the pitot [...] Am I right or wrong?

There is a difference between a Pitot tube and a Pitot-static tube. You can tell them apart because if it is a Pitot-static tube, there will be a second pressure port somewhere on the tube that opens normal to the flow direction so as to measure static pressure as well.

If it is, in fact, a Pitot tube as you describe, then yes, the measurement you get with it would be the stagnation pressure (or total pressure — they are the same thing). However, since it is measuring such a low value, I would be more inclined to believe you have a Pitot-static tube and the readout is actually the differential pressure between your total pressure and your static pressure. In other words, it is reading out dynamic pressure directly.

Under that assumption, you are measuring a flow velocity of

$$v = \sqrt{\frac{2q}{\rho}} = 36.1 \text{ m/s}$$

I don't know much about your pump, but that is a reasonable value.

> Now when I convert the atmospheric pressure 763.25 mmHg to Pascals which gives me 101767 Pa [...] 784.5 − 101767 = Dynamic Pressure.

Atmospheric pressure is not your static pressure in this case. I am sure you could find a pump setting that would result in your atmospheric pressure and your static pressure being equal, but this is not, in general, the case.

> So I am thinking I am wrong can anyone explain to me how I might go about finding the velocity using the data from the pitot tube [...]

Like I said, you have two potential problems. One is that you may be using a Pitot-static tube, which would be reading out dynamic pressure directly. The second possibility is that you are using the wrong static pressure. In general, if you have a simple Pitot tube, you need a separate static pressure port somewhere in the flow to get that value. Atmospheric doesn't help you.
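Following the reading of the setup above (the pitot readout taken as dynamic pressure), the thread's numbers can be sanity-checked in a few lines of Python; the variable names and the ideal-gas density estimate are mine, the thread itself just quotes roughly 36.1 m/s:

```python
import math

# Values given in the thread.
q_mmH2O = 80.0        # pitot readout, assumed to be dynamic pressure
p_atm_mmHg = 763.25   # barometric pressure
T = 18.0 + 273.15     # room temperature in kelvin

# Unit conversions.
q = q_mmH2O * 9.80665        # 1 mmH2O = 9.80665 Pa  -> ~784.5 Pa
p = p_atm_mmHg * 133.322     # 1 mmHg  = 133.322 Pa  -> ~101,758 Pa

# Air density from the ideal gas law, R_air = 287.05 J/(kg K).
rho = p / (287.05 * T)

# Dynamic pressure q = 0.5 * rho * v^2  =>  v = sqrt(2 q / rho)
v = math.sqrt(2.0 * q / rho)
print(round(rho, 3), round(v, 1))
```

This gives a density of about 1.22 kg/m³ and a velocity of about 35.9 m/s, consistent with the ~36.1 m/s quoted in the thread (which presumably used a slightly different density).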
P102 may refer to:
Vessels
, a patrol boat of the Mexican Navy
, a defense boat of the Nigerian Navy
, a patrol boat of the Timor Leste Defence Force
Other uses
Papyrus 102, a biblical manuscript
P102, a state regional road in Latvia
Last Night: Puscifer at Bayou Music Center
Matthew Keever | June 28, 2012 | 10:00am
Puscifer Bayou Music Center June 27, 2012
Always making the most of minimalism, Maynard James Keenan, the mastermind behind Tool and A Perfect Circle, made the back of Bayou Music Center's stage his home Wednesday night, content to let the rest of his band stand and sit in the foreground as he stood in the shadows, his cowboy hat and aviator-style sunglasses shrouding his face for the entire evening.
Eccentric much? Maybe. But the guy could just be really modest. Or shy.
The stage setup, which resembled a campsite, was so cliché that it had to have been a joke inside of a joke. There was a camper in the back left, a few lawn chairs and a picnic table near the middle of the stage (complete with a red-and-white checkered tablecloth), and a fake campfire front and center.
You'd be hard-pressed to find any country-western themes in the music itself, but as far as presentation is concerned, Maynard has made it clear that this is a staple of the band.
Upon entering the venue, I was a bit surprised at how few people were in attendance... Then I remembered that this wasn't a Tool show; this wasn't a Perfect Circle show; this was a Puscifer show. And while their music is phenomenal in its own right, it was written with a niche audience in mind: Maynard's random thought process.
Contrary to the band's purpose, the crowd's attention was always diverted from the front man and drawn instead to the colorful backdrops, which displayed slow-moving colors and blurred images whenever the screen above the band wasn't showing a video of some sort. The visuals were just busy enough to keep fans entertained without distracting from the figures onstage, swaying back and forth as they sang and played their instruments.
With the successes of his other projects, Maynard's direction with Puscifer is even less concerned with what the general public thinks than his other two bands are. Instead, it delves into the singer-songwriter's thoughts, aided by a musical style that could have been explored by either of his other bands but has such a full, strong sound of its own that it deserves its own name.
That name is Puscifer. And lucky for all of us, Maynard is talented enough to drive all three.
Personal Bias: Puscifer is an acquired taste, but once you've accepted the band for what it is, you might just thoroughly enjoy it.
Overheard In the Crowd: "Shut the fuck up and sing. I didn't pay a hundred dollars to hear you rant about George Bush and wave your hippie flag." (Note: I believe this was in reference to Puscifer's last Houston performance at Jones Hall, which was oddly received.)
Random Notebook Dump: I'm not sure what Maynard drank more of during his performance: wine or water. Which, given his vocal prowess, was even more impressive.
Setlist: Tiny Monsters, Vagina Mine, Dozo, Toma, The Rapture, The Weaver, Rev 22:20, Potions, Momma Sed, Oceans, Monsoons, Horizons, Conditions of My Parole, Man Overboard, Telling Ghosts, The Undertaker. Encore: Tumbleweed.
Follow Rocks Off on Facebook and on Twitter at @HPRocksOff.
Matt is a regular contributor to the Houston Press' music section. He graduated from the University of Houston with a degree in print journalism and global business. Matt first began writing for the Press as an intern, having accidentally sent his resume to the publication's music editor instead of the news chief. After half a decade of attending concerts and interviewing musicians, he has credited this fortuitous mistake to divine intervention.
namespace blink {

InterpolationValue SVGIntegerOptionalIntegerInterpolationType::maybeConvertNeutral(const InterpolationValue&, ConversionCheckers&) const
{
    // The neutral value is a pair of zeros.
    OwnPtr<InterpolableList> result = InterpolableList::create(2);
    result->set(0, InterpolableNumber::create(0));
    result->set(1, InterpolableNumber::create(0));
    return InterpolationValue(result.release());
}

InterpolationValue SVGIntegerOptionalIntegerInterpolationType::maybeConvertSVGValue(const SVGPropertyBase& svgValue) const
{
    if (svgValue.type() != AnimatedIntegerOptionalInteger)
        return nullptr;

    const SVGIntegerOptionalInteger& integerOptionalInteger = toSVGIntegerOptionalInteger(svgValue);
    OwnPtr<InterpolableList> result = InterpolableList::create(2);
    result->set(0, InterpolableNumber::create(integerOptionalInteger.firstInteger()->value()));
    result->set(1, InterpolableNumber::create(integerOptionalInteger.secondInteger()->value()));
    return InterpolationValue(result.release());
}

// Rounds the interpolated value and clamps it to a positive integer (minimum 1).
static PassRefPtrWillBeRawPtr<SVGInteger> toPositiveInteger(const InterpolableValue* number)
{
    return SVGInteger::create(clampTo<int>(roundf(toInterpolableNumber(number)->value()), 1));
}

PassRefPtrWillBeRawPtr<SVGPropertyBase> SVGIntegerOptionalIntegerInterpolationType::appliedSVGValue(const InterpolableValue& interpolableValue, const NonInterpolableValue*) const
{
    const InterpolableList& list = toInterpolableList(interpolableValue);
    return SVGIntegerOptionalInteger::create(
        toPositiveInteger(list.get(0)),
        toPositiveInteger(list.get(1)));
}

} // namespace blink
Florida Teachers Repeatedly Misuse Guns, but Lawmakers Want to Arm Them Anyway
Jessica Lipscomb | Politics | April 10, 2019 | 2:22pm
Protesters with March for Our Lives, Moms Demand Action, and Everytown at a Parkland rally in 2018
Photo by Ian Witlen
After the Parkland shooting last year, a modest package of gun-control laws passed through the Florida Legislature with one notable deletion: language about arming schoolteachers. As part of a compromise, lawmakers agreed to remove the controversial measure — but this year, the idea is back.
Tomorrow a state Senate bill that would allow trained teachers to carry guns at school (SB 7030) will go up for a vote before the Senate Appropriations Committee. A vote on the companion House bill has been temporarily postponed while members of both chambers negotiate the differences between the two measures.
Although the proposal to arm teachers is popular among some conservatives, most Floridians oppose such a plan, according to a recent Quinnipiac University poll. There's also plenty of evidence that teachers with guns can be downright dangerous. Last week, Giffords, the gun-violence prevention organization led by former congresswoman Gabrielle Giffords, put together an analysis of more than 60 incidents of mishandled guns at schools across the nation, including nine in Florida. Last year, the Tampa Bay Times did something similar, pulling state disciplinary reports that showed teachers and other school staff had made threats of violence, sometimes against students.
Now New Times has identified a litany of troubling incidents involving Florida educators, including many that have gone previously unreported. According to records from the Florida Department of Education:
A high-school teacher in Duval County told students in her class they were lucky she "didn't have a gun in [her] purse, because after their behavior the prior day, she would shoot them."
A middle-school teacher in Polk County told another teacher that he "pictured a bullet going into the front of [redacted student name]'s head and coming out the back." He also told the other teacher that at target practice, he thought about shooting his students and co-workers.
A Palm Beach County high-school teacher told her students "she would blow them up if she had a gun," or words to that effect.
A third-grade teacher in Marion County recited a rhyme to her class along the lines of "5, 4, 3, 2, 1, shut your mouths or I will get a gun."
An Orange County art teacher held her hand in the shape of a gun and told a student that she would "have to shoot [him] right between the eyes" if he spilled paint.
A drama teacher in Broward County allowed a student to bring a pellet gun, a paintball gun, and several toy guns to school, violating the school's policy.
An Orange County science teacher told a guidance counselor he wanted one student to have priority getting into a special class even if it meant "taking a machine gun and shooting all the other students," or words to that effect.
A first-grade teacher in Duval County brought a stun gun to school, where it was intercepted by a 7-year-old student who shocked himself.
A Highlands County teacher told his sixth-graders he wished he had a .38-caliber pistol so he could shoot all the students.
A middle-school teacher and basketball coach in Duval County brought a gun to school and showed it to students.
Penalties for those teachers ranged from a letter of reprimand to the permanent revocation of teaching certifications.
Of course, those incidents don't reflect what happens in the vast majority of classrooms under the supervision of dedicated, hardworking Florida teachers. But some of the state's top educators say even well-meaning teachers could mistakenly misuse a gun if they were to be armed. The president of the statewide teachers' union, Fed Ingram of the Florida Education Association, has said he fears the possibilities.
"I don't want any of my children's teachers having guns because I don't want them to be placed in a situation to make a mistake," Ingram told Florida Politics in February.
Ahead of Thursday's vote, the gun-control group Everytown for Gun Safety has been driving a digital billboard truck around Tallahassee to discourage lawmakers from passing the bills. The organization also ran full-page ads Sunday in the Miami Herald, Tampa Bay Times, Tallahassee Democrat, St. Lucie County News, Orlando Sentinel, and Bradenton Herald.
Don't Arm Florida Teachers, says just about every Floridian. #dontarmFLteachers @MomsDemand @book4senate @anitere_flores @AaronPBean @lizbethkb @DSimmonsFL @kellistargel @kathleen4SWFL @debbie_mayfield @TravisJHutson @JeffreyBrandes @BillGalvano @Fla_Pol @AMarch4OurLives pic.twitter.com/ky0qMqgbOb
— MariQ (@Islander500) April 9, 2019
Jessica Lipscomb is news editor of Miami New Times and an enthusiastic Florida Woman. Born and raised in Orlando, she has been a finalist for the Livingston Award for Young Journalists.
Twitter: @jessicalipscomb
Today you will learn about the different plantation shutter types, explained by 3 Blind Mice Window Coverings in San Diego. Wood shutters have the most flexibility. Traditionally, if you compare a good-quality wood shutter to a good-quality vinyl shutter or...

Our California shutters are made with the highest-quality components available on today's market. Constructed from premium American poplar wood, our shutters are made to resist warping in any environment. Our shutters will not peel, crack or yellow in high...

For craftsman-quality design and purpose-built practicality, choose plantation shutters to get the best of both worlds. Order your made-to-measure shutter blinds from 247 Blinds and enhance the aesthetic of virtually any room in the house, safe in the knowledge that...

California Shutters and More: California shutters can make a bold and beautiful design statement for your window treatments. Sometimes generic blinds and curtains don't quite give you the aesthetic or the privacy you desire. They may also lack the durability that...

Transform Your Home with California Shutters — Real Wood Shutters: Traditionally, all plantation shutters were made from real wood, and the species of wood most commonly...

B&Q have partnered with California Shutters to bring you shutters and blinds custom-made and delivered directly to your home. Free swatch samples available. While your bespoke wooden shutters are being made, use California Shutters' quick-fix temporary...

Window Blinds Direct — looking for window shutters in Toronto? Toronto's best window shades, blinds and California shutters. We will beat any price on window blinds and...

Here we list and break down the best prices for California shutters in Ontario. Shutter Boys | Toronto's Best California Wood & Vinyl Shutters.

Beautiful DIY wooden window shutters at a fraction of the cost of fully fitted ones — available from a UK online shutter shop servicing the UK and Europe.

California Shutters Toronto Co is the shutter-manufacturing arm of A Divine Design Inc, a custom window-coverings company with a mission to cover your windows with a heavenly touch. As a wood and vinyl shutter manufacturer, we provide customers in southern...

When it comes to California shutters, we are experts in the industry. Whether you choose our custom-made, economical wood shutters or our exclusive, top-of-the-line Hunter Douglas California shutters, our objective is to surpass your requirements and...

A window shutter is a solid and stable window covering usually consisting of a frame of vertical stiles and horizontal rails; also called California shutters or plantation shutters. Plantation shutters, typical of warmer climates like Florida, South Africa, the Mediterranean or...

California, Vinyl & Wood Shutters: Eclipse vinyl shutters offer precise light control, energy efficiency and UV protection. If your windows face south, east or west, you probably want to control the heat and protect your furnishings and floors...

Plantation shutters can do so much more than just look amazing from inside the house and out. Window shutters help control light, give privacy and can reduce your energy bills. No matter what style of interior shutters you decide on — wood shutters or custom...

MDF, phoenix wood and waterproof ABS shutters. Genuine 50% off plantation shutters with The Shutter Store, for easy DIY installation with the help of Sarah Beeny. The Shutter Store Ltd, call 0800 0747 321.

Wood California and Plantation Shutters — fashion over function: wood shutters are the choice when your design intent is geared towards look, feel and decor. Wood shutters are typically higher in price and generally have a shorter lifespan...

Shutters: instantly transform the look and feel of your home with high-quality, durable, sophisticated window coverings. Whether you know them as plantation, traditional or California shutters, here at Select Blinds we have something for every style and budget.
# Machine Learning Coursera's Notes – Week 3

01 September 2020

My notes from the Machine Learning course taught by Andrew Ng on Coursera.

# Logistic Regression

Examples of classification problems:

- Email: spam / not spam?
- Online transactions: fraudulent (yes / no)?
- Tumour: malignant / benign?

$y \in \{0,1\}$, where $0$ is the "negative class" and $1$ is the "positive class".

## Multiclass classification problem: $y \in \{0,1,2,3,...\}$

### How to develop a classification algorithm?

For an example training set, we could apply the linear regression algorithm $h_{\theta}(x)=\theta^Tx$ and threshold the classifier output $h_{\theta}(x)$ at $0.5$ (the vertical-axis value):

- If $h_{\theta}(x) \geqslant 0.5$, predict "$y=1$"
- If $h_{\theta}(x) < 0.5$, predict "$y=0$"

### What happens if the problem changes a bit?

Adding one positive example causes the straight line fitted by linear regression to change, which (in this example) produces a worse hypothesis.

### Conclusion: applying linear regression to a classification problem is often not a good idea

A further oddity of using linear regression on a classification problem: in classification $y=0$ or $y=1$, but $h_{\theta}(x)$ can be $>1$ or $<0$. So even though we know the label should be $0$ or $1$, linear regression can output values much larger than $1$ or much smaller than $0$.

# Hypothesis Representation

### Logistic Regression Model

Want $0 \leqslant h_{\theta}(x) \leqslant 1$:

$h_{\theta}(x) = g(\theta^Tx)$, where $g(z) = \frac{1}{1 + e^{-z}}$ (two different names for the same function: sigmoid function and logistic function), so

$h_{\theta}(x)=\frac{1}{1 + e^{-\theta^Tx}}$

($z$ is a real number.)

### Sigmoid function

Parameters: $\theta$

## Interpretation of Hypothesis Output

$h_{\theta}(x)$ — the estimated probability that $y=1$ on input $x$.

Example: if $x = \begin{bmatrix} x_0 \\ x_1 \end{bmatrix} = \begin{bmatrix} 1 \\ \text{tumour size} \end{bmatrix}$ and $h_{\theta}(x) = 0.7$, tell the patient there is a 70% chance of the tumour being malignant.

$h_{\theta}(x) = P(y=1|x;\theta)$ — the probability that $y=1$, given $x$, parameterized by $\theta$.

Because this is a classification problem, $y$ can take only two values, $0$ or $1$, so:

$P(y=0|x;\theta) + P(y=1|x;\theta) = 1$, i.e. $P(y=0|x;\theta) = 1 - P(y=1|x;\theta)$

# Decision Boundary

Logistic regression: $h_{\theta}(x) = g(\theta^Tx)$ with $g(z) = \frac{1}{1 + e^{-z}}$.

Predict "$y=1$" if $h_{\theta}(x) \geqslant 0.5$; predict "$y=0$" if $h_{\theta}(x) < 0.5$.

Since $g(z) \geqslant 0.5$ when $z \geqslant 0$, we have $h_{\theta}(x) = g(\theta^Tx) \geqslant 0.5$ when $\theta^Tx \geqslant 0$.

### Training set

$h_{\theta}(x) = g(\theta_0 + \theta_1x_1 + \theta_2x_2)$. Based on the plot, $h_{\theta}(x) = g(-3 + x_1 + x_2)$, so $\theta = \begin{bmatrix} -3 \\ 1 \\ 1 \end{bmatrix}$.

Predict "$y=1$" if $-3 + x_1 + x_2 \geqslant 0$, i.e. $x_1 + x_2 \geqslant 3$ — this creates a straight line.

### Exercise

$\theta_0=5$, $\theta_1=-1$, $\theta_2=0$, so $h_{\theta}(x) = g(5-x_1)$: predict "$y=1$" if $5-x_1 \geqslant 0$, i.e. $x_1 \leqslant 5$.

### Non-linear decision boundaries

$h_{\theta}(x) = g(\theta_0 + \theta_1x_1 + \theta_2x_2 + \theta_3x_1^2 + \theta_4x_2^2)$ with $\theta = \begin{bmatrix} -1 \\ 0 \\ 0 \\ 1 \\ 1 \end{bmatrix}$.

Predict "$y=1$" if $-1 + x_1^2 + x_2^2 \geqslant 0$; the decision boundary is the circle $x_1^2 + x_2^2 = 1$.

#### More complex decision boundaries

$h_{\theta}(x) = g(\theta_0 + \theta_1x_1 + \theta_2x_2 + \theta_3x_1^2 + \theta_4x_2^2 + \theta_5x_1^2x_2^2 + \theta_6x_1^3x_2 + \dots)$

# Cost Function

## Supervised learning problem

Training set: $\{(x^{(1)}, y^{(1)}), (x^{(2)}, y^{(2)}), \dots, (x^{(m)}, y^{(m)})\}$ — $m$ examples, with

$x = \begin{bmatrix} x_0 \\ x_1 \\ \vdots \\ x_n \end{bmatrix}$, $x_0 = 1$, $y \in \{0,1\}$, and $h_{\theta}(x) = \frac{1}{1 + e^{-\theta^Tx}}$.

## How to choose the parameters $\theta$?

### Cost function

Linear regression: $J(\theta)=\frac{1}{m} \sum_{i=1}^m \frac{1}{2} \big(h_\theta(x^{(i)})-y^{(i)}\big)^2$

An alternative way of writing this function:

$J(\theta)=\frac{1}{m} \sum_{i=1}^m \mathrm{Cost}\big(h_\theta(x^{(i)}), y^{(i)}\big)$, where $\mathrm{Cost}\big(h_\theta(x^{(i)}), y^{(i)}\big) = \frac{1}{2} \big(h_\theta(x^{(i)})-y^{(i)}\big)^2$

This function is fine for linear regression. For logistic regression it would also work, but it would be a non-convex function of the parameters $\theta$.

## Logistic regression cost function

$\mathrm{Cost}\big(h_\theta(x), y\big) = \begin{cases} -\log(h_{\theta}(x)) & \text{if } y = 1 \\ -\log(1 - h_{\theta}(x)) & \text{if } y = 0 \end{cases}$

$\mathrm{Cost} = 0$ if $y = 1$ and $h_{\theta}(x) = 1$, but as $h_{\theta}(x) \to 0$, $\mathrm{Cost} \to \infty$.

This captures the intuition that if $h_{\theta}(x) = 0$ (predicting $P(y=1|x;\theta)=0$) but in fact $y = 1$, we penalize the learning algorithm with a very large cost.

## Simplified cost function and gradient descent

Since always $y=0$ or $y=1$, the two cases can be written compactly in one line:

$\mathrm{Cost}\big(h_\theta(x),y\big) = -y\log(h_{\theta}(x)) - (1-y)\log(1-h_{\theta}(x))$

If $y=1$: $\mathrm{Cost} = -\log(h_{\theta}(x))$; if $y=0$: $\mathrm{Cost} = -\log(1 - h_{\theta}(x))$.

So the logistic regression cost function is:

$J(\theta) = -\frac{1}{m} \left[\sum_{i=1}^m y^{(i)} \log h_{\theta}(x^{(i)})+(1-y^{(i)})\log\big(1-h_{\theta}(x^{(i)})\big) \right]$

To fit the parameters $\theta$: $\min_{\theta} J(\theta)$. To make a prediction given a new $x$, output $h_{\theta}(x) = \frac{1}{1 + e^{-\theta^T x}}$.

### Gradient descent

Want $\min_{\theta}J(\theta)$:

Repeat {
$\theta_j := \theta_j - \alpha \frac{\partial}{\partial\theta_j} J(\theta)$
}
(simultaneously update every $\theta_j$)

With $\frac{\partial}{\partial\theta_j} J(\theta) = \frac{1}{m} \sum_{i=1}^m \big(h_\theta(x^{(i)}) - y^{(i)}\big)x_j^{(i)}$, this becomes:

Repeat {
$\theta_j := \theta_j - \frac{\alpha}{m} \sum_{i=1}^m \big(h_\theta(x^{(i)}) - y^{(i)}\big)x_j^{(i)}$
}
(simultaneously update all $\theta_j$)

$\theta = \begin{bmatrix} \theta_0 \\ \theta_1 \\ \theta_2 \\ \vdots \\ \theta_n \end{bmatrix}$, $h_{\theta}(x)=\frac{1}{1 + e^{-\theta^Tx}}$

The algorithm looks identical to linear regression's, but has a different definition of $h_{\theta}(x)$.

## Optimization algorithms

Cost function $J(\theta)$; want $\min_{\theta}J(\theta)$.

Given $\theta$, we already have code that can compute:

- $J(\theta)$
- $\frac{\partial}{\partial\theta_j} J(\theta)$ (for $j=0,1,...,n$)

Gradient descent: repeat { $\theta_j := \theta_j - \alpha \frac{\partial}{\partial\theta_j} J(\theta)$ }

### Other optimization algorithms (besides gradient descent):

- BFGS
- L-BFGS

Advantages:

- No need to manually pick $\alpha$ (they have a line-search algorithm that automatically tries different values for the learning rate $\alpha$ and picks a good one)
- Often faster than gradient descent

Example: $\theta = \begin{bmatrix} \theta_1 \\ \theta_2 \end{bmatrix}$, with cost function $J(\theta) = (\theta_1 - 5)^2 + (\theta_2 - 5)^2$, so $\min_{\theta} J(\theta) \implies \theta_1=5$, $\theta_2=5$, and

$\frac{\partial}{\partial\theta_1}J(\theta) = 2(\theta_1-5)$, $\frac{\partial}{\partial\theta_2}J(\theta) = 2(\theta_2-5)$

### Checking this in Octave:

```octave
>> options = optimset('GradObj', 'on', 'MaxIter', 100);
>> initialTheta = zeros(2,1);
>> function [jVal, gradient] = costFunction(theta)
     jVal = (theta(1)-5)^2 + (theta(2)-5)^2;
     gradient = zeros(2,1);
     gradient(1) = 2*(theta(1)-5);
     gradient(2) = 2*(theta(2)-5);
   endfunction
>> [optTheta, functionVal, exitFlag] = fminunc(@costFunction, initialTheta, options)
optTheta =

   5.0000
   5.0000

functionVal = 1.5777e-30
exitFlag = 1
```

`fminunc` — advanced optimization function in Octave.

# Multi-class classification: one-vs-all

### Multiclass classification

Examples:

- Email foldering/tagging: Work $(y=1)$, Friends $(y=2)$, Family $(y=3)$, Hobby $(y=4)$
- Medical diagnosis: Not ill $(y=1)$, Cold $(y=2)$, Flu $(y=3)$
Weather: Sunny $(y=1)$, Cloudy $(y=2)$, Rain $(y=3)$, Snow $(y=4)$\n\n## One-vs-all (one-vs-rest)\n\nStep 1: Createing a new training set to fit the classifier $h_{\\theta}^{(1)}(x)$\n\nStep 2: Createing a new training set to fit the classifier $h_{\\theta}^{(2)}(x)$\n\nStep 3: Createing a new training set to fit the classifier $h_{\\theta}^{(3)}(x)$\n\nWe fit three classifiers what is a probablity of one of the three classes:\n\n$h_{\\theta}^{(i)}(x) = P(y=i|x;\\theta)$\n\n## One-vs-all\n\nTrain a logistoc regression classifier $h_{\\theta}^{(i)}(x)$ for each class $i$ to predict the probability that $y=i$ On a new input $x$, to make a prediction, pick the class $i$ ythat maximaizes: $\\begin{matrix} max\\\\ i \\end{matrix} h_{\\theta}^{(i)}(x)$","date":"2020-09-27 17:58:37","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 145, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9971022605895996, \"perplexity\": 6819.686475236338}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-40\/segments\/1600400283990.75\/warc\/CC-MAIN-20200927152349-20200927182349-00720.warc.gz\"}"} | null | null |
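The gradient-descent update derived above can be sketched in Python (NumPy is used for brevity; the toy data, learning rate, and iteration count are illustrative only):

```python
import numpy as np

def sigmoid(z):
    # g(z) = 1 / (1 + e^(-z))
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, alpha=0.1, iters=5000):
    """Batch gradient descent for logistic regression.

    X -- (m, n+1) design matrix whose first column is all ones (x0 = 1)
    y -- (m,) labels in {0, 1}
    """
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(iters):
        h = sigmoid(X @ theta)        # h_theta(x) for every example
        grad = (X.T @ (h - y)) / m    # (1/m) * sum (h - y) * x_j
        theta -= alpha * grad         # simultaneous update of all theta_j
    return theta

# Toy 1-D problem: label is 1 when x1 is (roughly) >= 3
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 4.0], [1.0, 5.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
theta = fit_logistic(X, y)
preds = (sigmoid(X @ theta) >= 0.5).astype(int)  # predict y=1 when h >= 0.5
```

Running this classifier once per class and keeping the class with the largest $h_{\theta}^{(i)}(x)$ gives the one-vs-all scheme described above.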
\section{Introduction}
The journal \textit{Monthly Notices of the Royal Astronomical Society} (MNRAS) encourages authors to prepare their papers using \LaTeX.
The style file \verb'mnras.cls' can be used to approximate the final appearance of the journal, and provides numerous features to simplify the preparation of papers.
This document, \verb'mnras_guide.tex', provides guidance on using that style file and the features it enables.
This is not a general guide on how to use \LaTeX, of which many excellent examples already exist.
We particularly recommend \textit{Wikibooks \LaTeX}\footnote{\url{https://en.wikibooks.org/wiki/LaTeX}}, a collaborative online textbook which is of use to both beginners and experts.
Alternatively there are several other online resources, and most academic libraries also hold suitable beginner's guides.
For guidance on the contents of papers, journal style, and how to submit a paper, see the MNRAS Instructions to Authors\footnote{\label{foot:itas}\url{http://www.oxfordjournals.org/our_journals/mnras/for_authors/}}.
Only technical issues with the \LaTeX\ class are considered here.
\section{Obtaining and installing the MNRAS package}
Some \LaTeX\ distributions come with the MNRAS package by default.
If yours does not, you can either install it using your distribution's package manager, or download it from the Comprehensive \TeX\ Archive Network\footnote{\url{http://www.ctan.org/tex-archive/macros/latex/contrib/mnras}} (CTAN).
The files can either be installed permanently by placing them in the appropriate directory (consult the documentation for your \LaTeX\ distribution), or used temporarily by placing them in the working directory for your paper.
To use the MNRAS package, simply specify \verb'mnras' as the document class at the start of a \verb'.tex' file:
\begin{verbatim}
\documentclass{mnras}
\end{verbatim}
Then compile \LaTeX\ (and if necessary \bibtex) in the usual way.
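For example, assuming the file is called \texttt{paper.tex} (substitute your own filename), a typical command-line sequence is:
\begin{verbatim}
pdflatex paper
bibtex paper
pdflatex paper
pdflatex paper
\end{verbatim}
\noindent The repeated \LaTeX\ runs are needed so that citations and cross-references resolve correctly.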
\section{Preparing and submitting a paper}
We recommend that you start with a copy of the \texttt{mnras\_template.tex} file.
Rename the file, update the information on the title page, and then work on the text of your paper.
Guidelines for content, style etc. are given in the instructions to authors on the journal's website$^{\ref{foot:itas}}$.
Note that this document does not follow all the aspects of MNRAS journal style (e.g. it has a table of contents).
If a paper is accepted, it is professionally typeset and copyedited by the publishers.
It is therefore likely that minor changes to presentation will occur.
For this reason, we ask authors to ignore minor details such as slightly long lines, extra blank spaces, or misplaced figures, because these details will be dealt with during the production process.
Papers must be submitted electronically via the online submission system; paper submissions are not permitted.
For full guidance on how to submit a paper, see the instructions to authors.
\section{Class options}
\label{sec:options}
There are several options which can be added to the document class line like this:
\begin{verbatim}
\documentclass[option1,option2]{mnras}
\end{verbatim}
The available options are:
\begin{itemize}
\item \verb'letters' -- used for papers in the journal's Letters section.
\item \verb'onecolumn' -- single column, instead of the default two columns. This should be used {\it only} if necessary for the display of numerous very long equations.
\item \verb'doublespacing' -- text has double line spacing. Please don't submit papers in this format.
\item \verb'referee' -- \textit{(deprecated)} single column, double spaced, larger text, bigger margins. Please don't submit papers in this format.
\item \verb'galley' -- \textit{(deprecated)} no running headers, no attempt to align the bottom of columns.
\item \verb'landscape' -- \textit{(deprecated)} sets the whole document on landscape paper.
\item \verb"usenatbib" -- \textit{(all papers should use this)} this uses Patrick Daly's \verb"natbib.sty" package for citations.
\item \verb"usegraphicx" -- \textit{(most papers will need this)} includes the \verb'graphicx' package, for inclusion of figures and images.
\item \verb'useAMS' -- adds support for upright Greek characters \verb'\upi', \verb'\umu' and \verb'\upartial' ($\upi$, $\umu$ and $\upartial$). Only these three are included, if you require other symbols you will need to include the \verb'amsmath' or \verb'amssymb' packages (see section~\ref{sec:packages}).
\item \verb"usedcolumn" -- includes the package \verb"dcolumn", which includes two new types of column alignment for use in tables.
\end{itemize}
Some of these options are deprecated and retained for backwards compatibility only.
Others are used in almost all papers, but again are retained as options to ensure that papers written decades ago will continue to compile without problems.
If you want to include any other packages, see section~\ref{sec:packages}.
\section{Title page}
If you are using \texttt{mnras\_template.tex} the necessary code for generating the title page, headers and footers is already present.
Simply edit the title, author list, institutions, abstract and keywords as described below.
\subsection{Title}
There are two forms of the title: the full version used on the first page, and a short version which is used in the header of other odd-numbered pages (the `running head').
Enter them with \verb'\title[]{}' like this:
\begin{verbatim}
\title[Running head]{Full title of the paper}
\end{verbatim}
The full title can be multiple lines (use \verb'\\' to start a new line) and may be as long as necessary, although we encourage authors to use concise titles. The running head must be $\le~45$ characters on a single line.
See appendix~\ref{sec:advanced} for more complicated examples.
\subsection{Authors and institutions}
Like the title, there are two forms of author list: the full version which appears on the title page, and a short form which appears in the header of the even-numbered pages. Enter them using the \verb'\author[]{}' command.
If the author list is more than one line long, start a new line using \verb'\newauthor'. Use \verb'\\' to start the institution list. Affiliations for each author should be indicated with a superscript number, and correspond to the list of institutions below the author list.
For example, if I were to write a paper with two coauthors at another institution, one of whom also works at a third location:
\begin{verbatim}
\author[K. T. Smith et al.]{
Keith T. Smith,$^{1}$
A. N. Other,$^{2}$
and Third Author$^{2,3}$
\\
$^{1}$Affiliation 1\\
$^{2}$Affiliation 2\\
$^{3}$Affiliation 3}
\end{verbatim}
Affiliations should be in the format `Department, Institution, Street Address, City and Postal Code, Country'.
Email addresses can be inserted with the \verb'\thanks{}' command which adds a title page footnote.
If you want to list more than one email, put them all in the same \verb'\thanks' and use \verb'\footnotemark[]' to refer to the same footnote multiple times.
Present addresses (if different to those where the work was performed) can also be added with a \verb'\thanks' command.
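For example (all names and addresses here are invented), two authors sharing one email footnote might be entered as:
\begin{verbatim}
\author[K. T. Smith et al.]{
Keith T. Smith$^{1}$\thanks{E-mail: ksmith@example.com}
and A. N. Other$^{2}$\footnotemark[1]
\\
$^{1}$Affiliation 1\\
$^{2}$Affiliation 2}
\end{verbatim}
\noindent where the argument of \verb'\footnotemark[]' is assumed to match the footnote number generated by the corresponding \verb'\thanks'.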
\subsection{Abstract and keywords}
The abstract is entered in an \verb'abstract' environment:
\begin{verbatim}
\begin{abstract}
The abstract of the paper.
\end{abstract}
\end{verbatim}
\noindent Note that there is a word limit on the length of abstracts.
For the current word limit, see the journal instructions to authors$^{\ref{foot:itas}}$.
Immediately following the abstract, a set of keywords is entered in a \verb'keywords' environment:
\begin{verbatim}
\begin{keywords}
keyword 1 -- keyword 2 -- keyword 3
\end{keywords}
\end{verbatim}
\noindent There is a list of permitted keywords, which is agreed between all the major astronomy journals and revised every few years.
Do \emph{not} make up new keywords!
For the current list of allowed keywords, see the journal's instructions to authors$^{\ref{foot:itas}}$.
\section{Sections and lists}
Sections and lists are generally the same as in the standard \LaTeX\ classes.
\subsection{Sections}
\label{sec:sections}
Sections are entered in the usual way, using \verb'\section{}' and its variants. It is possible to nest up to four section levels:
\begin{verbatim}
\section{Main section}
\subsection{Subsection}
\subsubsection{Subsubsection}
\paragraph{Lowest level section}
\end{verbatim}
\noindent The other \LaTeX\ sectioning commands \verb'\part', \verb'\chapter' and \verb'\subparagraph{}' are deprecated and should not be used.
Some sections are not numbered as part of journal style (e.g. the Acknowledgements).
To insert an unnumbered section use the `starred' version of the command: \verb'\section*{}'.
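For example, the unnumbered Acknowledgements section mentioned above would be entered as:
\begin{verbatim}
\section*{Acknowledgements}
\end{verbatim}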
See appendix~\ref{sec:advanced} for more complicated examples.
\subsection{Lists}
Two forms of lists can be used in MNRAS -- numbered and unnumbered.
For a numbered list, use the \verb'enumerate' environment:
\begin{verbatim}
\begin{enumerate}
\item First item
\item Second item
\item etc.
\end{enumerate}
\end{verbatim}
\noindent which produces
\begin{enumerate}
\item First item
\item Second item
\item etc.
\end{enumerate}
Note that the list uses lowercase Roman numerals, rather than the \LaTeX\ default Arabic numerals.
For an unnumbered list, use the \verb'description' environment without the optional argument:
\begin{verbatim}
\begin{description}
\item First item
\item Second item
\item etc.
\end{description}
\end{verbatim}
\noindent which produces
\begin{description}
\item First item
\item Second item
\item etc.
\end{description}
Bulleted lists using the \verb'itemize' environment should not be used in MNRAS; it is retained for backwards compatibility only.
\section{Mathematics and symbols}
The MNRAS class mostly adopts standard \LaTeX\ handling of mathematics, which is briefly summarised here.
See also section~\ref{sec:packages} for packages that support more advanced mathematics.
Mathematics can be inserted into the running text using the syntax \verb'$1+1=2$', which produces $1+1=2$.
Use this only for short expressions or when referring to mathematical quantities; equations should be entered as described below.
\subsection{Equations}
Equations should be entered using the \verb'equation' environment, which automatically numbers them:
\begin{verbatim}
\begin{equation}
a^2=b^2+c^2
\end{equation}
\end{verbatim}
\noindent which produces
\begin{equation}
a^2=b^2+c^2
\end{equation}
By default, the equations are numbered sequentially throughout the whole paper. If a paper has a large number of equations, it may be better to number them by section (2.1, 2.2 etc.). To do this, add the command \verb'\numberwithin{equation}{section}' to the preamble.
It is also possible to produce un-numbered equations by using the \LaTeX\ built-in \verb'\['\textellipsis\verb'\]' and \verb'$$'\textellipsis\verb'$$' commands; however MNRAS requires that all equations are numbered, so these commands should be avoided.
\subsection{Special symbols}
\begin{table}
\caption{Additional commands for special symbols commonly used in astronomy. These can be used anywhere.}
\label{tab:anysymbols}
\begin{tabular*}{\columnwidth}{@{}l@{\hspace*{50pt}}l@{\hspace*{50pt}}l@{}}
\hline
Command & Output & Meaning\\
\hline
\verb'\sun' & \sun & Sun, solar\\[2pt]
\verb'\earth' & \earth & Earth, terrestrial\\[2pt]
\verb'\micron' & \micron & microns\\[2pt]
\verb'\degr' & \degr & degrees\\[2pt]
\verb'\arcmin' & \arcmin & arcminutes\\[2pt]
\verb'\arcsec' & \arcsec & arcseconds\\[2pt]
\verb'\fdg' & \fdg & fraction of a degree\\[2pt]
\verb'\farcm' & \farcm & fraction of an arcminute\\[2pt]
\verb'\farcs' & \farcs & fraction of an arcsecond\\[2pt]
\verb'\fd' & \fd & fraction of a day\\[2pt]
\verb'\fh' & \fh & fraction of an hour\\[2pt]
\verb'\fm' & \fm & fraction of a minute\\[2pt]
\verb'\fs' & \fs & fraction of a second\\[2pt]
\verb'\fp' & \fp & fraction of a period\\[2pt]
\verb'\diameter' & \diameter & diameter\\[2pt]
\verb'\sq' & \sq & square, Q.E.D.\\[2pt]
\hline
\end{tabular*}
\end{table}
\begin{table}
\caption{Additional commands for mathematical symbols. These can only be used in maths mode.}
\label{tab:mathssymbols}
\begin{tabular*}{\columnwidth}{l@{\hspace*{40pt}}l@{\hspace*{40pt}}l}
\hline
Command & Output & Meaning\\
\hline
\verb'\upi' & $\upi$ & upright pi\\[2pt]
\verb'\umu' & $\umu$ & upright mu\\[2pt]
\verb'\upartial' & $\upartial$ & upright partial derivative\\[2pt]
\verb'\lid' & $\lid$ & less than or equal to\\[2pt]
\verb'\gid' & $\gid$ & greater than or equal to\\[2pt]
\verb'\la' & $\la$ & less than of order\\[2pt]
\verb'\ga' & $\ga$ & greater than of order\\[2pt]
\verb'\loa' & $\loa$ & less than approximately\\[2pt]
\verb'\goa' & $\goa$ & greater than approximately\\[2pt]
\verb'\cor' & $\cor$ & corresponds to\\[2pt]
\verb'\sol' & $\sol$ & similar to or less than\\[2pt]
\verb'\sog' & $\sog$ & similar to or greater than\\[2pt]
\verb'\lse' & $\lse$ & less than or homotopic to \\[2pt]
\verb'\gse' & $\gse$ & greater than or homotopic to\\[2pt]
\verb'\getsto' & $\getsto$ & from over to\\[2pt]
\verb'\grole' & $\grole$ & greater over less\\[2pt]
\verb'\leogr' & $\leogr$ & less over greater\\
\hline
\end{tabular*}
\end{table}
Some additional symbols of common use in astronomy have been added in the MNRAS class. These are shown in tables~\ref{tab:anysymbols}--\ref{tab:mathssymbols}. The command names are -- as far as possible -- the same as those used in other major astronomy journals.
Many other mathematical symbols are also available, either built into \LaTeX\ or via additional packages. If you want to insert a specific symbol but don't know the \LaTeX\ command, we recommend using the Detexify website\footnote{\url{http://detexify.kirelabs.org}}.
Sometimes font or coding limitations mean a symbol may not get smaller when used in sub- or superscripts, and will therefore be displayed at the wrong size. There is no need to worry about this as it will be corrected by the typesetter during production.
To produce bold symbols in mathematics, use \verb'\bmath' for simple variables, and the \verb'bm' package for more complex symbols (see section~\ref{sec:packages}). Vectors are set in bold italic, using \verb'\mathbfit{}'.
For matrices, use \verb'\mathbfss{}' to produce a bold sans-serif font e.g. \mathbfss{H}; this works even outside maths mode, but not all symbols are available (e.g. Greek). For $\nabla$ (del, used in gradients, divergence etc.) use \verb'$\nabla$'.
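As a brief illustration of these commands (the symbols chosen here are arbitrary):
\begin{verbatim}
$\bmath{\theta}$                  % bold variable
$\mathbfit{F} = m\mathbfit{a}$    % bold italic vectors
$\mathbfss{M}\mathbfit{x}$        % bold sans-serif matrix
$\nabla\cdot\mathbfit{B} = 0$     % del operator
\end{verbatim}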
\subsection{Ions}
A new \verb'\ion{}{}' command has been added to the class file, for the correct typesetting of ionisation states.
For example, to typeset singly ionised calcium use \verb'\ion{Ca}{ii}', which produces \ion{Ca}{ii}.
\section{Figures and tables}
\label{sec:fig_table}
Figures and tables (collectively called `floats') are mostly the same as built into \LaTeX.
\subsection{Basic examples}
\begin{figure}
\includegraphics[width=\columnwidth]{example}
\caption{An example figure.}
\label{fig:example}
\end{figure}
Figures are inserted in the usual way using a \verb'figure' environment and \verb'\includegraphics'. The example Figure~\ref{fig:example} was generated using the code:
\begin{verbatim}
\begin{figure}
\includegraphics[width=\columnwidth]{example}
\caption{An example figure.}
\label{fig:example}
\end{figure}
\end{verbatim}
\begin{table}
\caption{An example table.}
\label{tab:example}
\begin{tabular}{lcc}
\hline
Star & Mass & Luminosity\\
& $M_{\sun}$ & $L_{\sun}$\\
\hline
Sun & 1.00 & 1.00\\
$\alpha$~Cen~A & 1.10 & 1.52\\
$\epsilon$~Eri & 0.82 & 0.34\\
\hline
\end{tabular}
\end{table}
The example Table~\ref{tab:example} was generated using the code:
\begin{verbatim}
\begin{table}
\caption{An example table.}
\label{tab:example}
\begin{tabular}{lcc}
\hline
Star & Mass & Luminosity\\
& $M_{\sun}$ & $L_{\sun}$\\
\hline
Sun & 1.00 & 1.00\\
$\alpha$~Cen~A & 1.10 & 1.52\\
$\epsilon$~Eri & 0.82 & 0.34\\
\hline
\end{tabular}
\end{table}
\end{verbatim}
\subsection{Captions and placement}
Captions go \emph{above} tables but \emph{below} figures, as in the examples above.
The \LaTeX\ float placement commands \verb'[htbp]' are intentionally disabled.
Layout of figures and tables will be adjusted by the publisher during the production process, so authors should not concern themselves with placement to avoid disappointment and wasted effort.
Simply place the \LaTeX\ code close to where the figure or table is first mentioned in the text and leave exact placement to the publishers.
By default a figure or table will occupy one column of the page.
To produce a wider version which covers both columns, use the \verb'figure*' or \verb'table*' environment.
If a figure or table is too long to fit on a single page it can be split into several parts.
Create an additional figure or table which uses \verb'\contcaption{}' instead of \verb'\caption{}'.
This will automatically correct the numbering and add `\emph{continued}' at the start of the caption.
\begin{table}
\contcaption{A table continued from the previous one.}
\label{tab:continued}
\begin{tabular}{lcc}
\hline
Star & Mass & Luminosity\\
& $M_{\sun}$ & $L_{\sun}$\\
\hline
$\tau$~Cet & 0.78 & 0.52\\
$\delta$~Pav & 0.99 & 1.22\\
$\sigma$~Dra & 0.87 & 0.43\\
\hline
\end{tabular}
\end{table}
Table~\ref{tab:continued} was generated using the code:
\begin{verbatim}
\begin{table}
\contcaption{A table continued from the previous one.}
\label{tab:continued}
\begin{tabular}{lcc}
\hline
Star & Mass & Luminosity\\
& $M_{\sun}$ & $L_{\sun}$\\
\hline
$\tau$~Cet & 0.78 & 0.52\\
$\delta$~Pav & 0.99 & 1.22\\
$\sigma$~Dra & 0.87 & 0.43\\
\hline
\end{tabular}
\end{table}
\end{verbatim}
To produce a landscape figure or table, use the \verb'pdflscape' package and the \verb'landscape' environment.
The landscape Table~\ref{tab:landscape} was produced using the code:
\begin{verbatim}
\begin{landscape}
\begin{table}
\caption{An example landscape table.}
\label{tab:landscape}
\begin{tabular}{cccccccccc}
\hline
Header & Header & ...\\
Unit & Unit & ...\\
\hline
Data & Data & ...\\
Data & Data & ...\\
...\\
\hline
\end{tabular}
\end{table}
\end{landscape}
\end{verbatim}
Unfortunately this method will force a page break before the table appears.
More complicated solutions are possible, but authors shouldn't worry about this.
\begin{landscape}
\begin{table}
\caption{An example landscape table.}
\label{tab:landscape}
\begin{tabular}{cccccccccc}
\hline
Header & Header & Header & Header & Header & Header & Header & Header & Header & Header\\
Unit & Unit & Unit & Unit & Unit & Unit & Unit & Unit & Unit & Unit \\
\hline
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
\hline
\end{tabular}
\end{table}
\end{landscape}
\section{References and citations}
\subsection{Cross-referencing}
The usual \LaTeX\ commands \verb'\label{}' and \verb'\ref{}' can be used for cross-referencing within the same paper.
We recommend that you use these whenever relevant, rather than writing out the section or figure numbers explicitly.
This ensures that cross-references are updated whenever the numbering changes (e.g. during revision) and provides clickable links (if available in your compiler).
It is best to give each section, figure and table a logical label.
For example, Table~\ref{tab:mathssymbols} has the label \verb'tab:mathssymbols', whilst section~\ref{sec:packages} has the label \verb'sec:packages'.
Add the label \emph{after} the section or caption command, as in the examples in sections~\ref{sec:sections} and \ref{sec:fig_table}.
Enter the cross-reference with a non-breaking space between the type of object and the number, like this: \verb'see Figure~\ref{fig:example}'.
The \verb'\autoref{}' command can be used to automatically fill out the type of object, saving on typing.
It also causes the link to cover the whole phrase rather than just the number, but for that reason is only suitable for single cross-references rather than ranges.
For example, \verb'\autoref{tab:journal_abbr}' produces \autoref{tab:journal_abbr}.
\subsection{Citations}
\label{sec:cite}
MNRAS uses the Harvard -- author (year) -- citation style, e.g. \citet{author2013}.
This is implemented in \LaTeX\ via the \verb'natbib' package, which in turn is included via the \verb'usenatbib' package option (see section~\ref{sec:options}), which should be used in all papers.
Each entry in the reference list has a `key' (see section~\ref{sec:ref_list}) which is used to generate citations.
There are two basic \verb'natbib' commands:
\begin{description}
\item \verb'\citet{key}' produces an in-text citation: \citet{author2013}
\item \verb'\citep{key}' produces a bracketed (parenthetical) citation: \citep{author2013}
\end{description}
Citations will include clickable links to the relevant entry in the reference list, if supported by your \LaTeX\ compiler.
\defcitealias{smith2014}{Paper~I}
\begin{table*}
\caption{Common citation commands, provided by the \texttt{natbib} package.}
\label{tab:natbib}
\begin{tabular}{lll}
\hline
Command & Output & Note\\
\hline
\verb'\citet{key}' & \citet{smith2014} & \\
\verb'\citep{key}' & \citep{smith2014} & \\
\verb'\citep{key,key2}' & \citep{smith2014,jones2015} & Multiple papers\\
\verb'\citet[table 4]{key}' & \citet[table 4]{smith2014} & \\
\verb'\citep[see][figure 7]{key}' & \citep[see][figure 7]{smith2014} & \\
\verb'\citealt{key}' & \citealt{smith2014} & For use with manual brackets\\
\verb'\citeauthor{key}' & \citeauthor{smith2014} & If already cited in close proximity\\
\verb'\defcitealias{key}{Paper~I}' & & Define an alias (doesn't work in floats)\\
\verb'\citetalias{key}' & \citetalias{smith2014} & \\
\verb'\citepalias{key}' & \citepalias{smith2014} & \\
\hline
\end{tabular}
\end{table*}
There are a number of other \verb'natbib' commands which can be used for more complicated citations.
The most commonly used ones are listed in Table~\ref{tab:natbib}.
For full guidance on their use, consult the \verb'natbib' documentation\footnote{\url{http://www.ctan.org/pkg/natbib}}.
If a reference has several authors, \verb'natbib' will automatically use `et al.' if there are more than two authors. However, if a paper has exactly three authors, MNRAS style is to list all three on the first citation and use `et al.' thereafter. If you are using \bibtex\ (see section~\ref{sec:ref_list}) then this is handled automatically. If not, the \verb'\citet*{}' and \verb'\citep*{}' commands can be used at the first citation to include all of the authors.
\subsection{The list of references}
\label{sec:ref_list}
It is possible to enter references manually using the usual \LaTeX\ commands, but we strongly encourage authors to use \bibtex\ instead.
\bibtex\ ensures that the reference list is updated automatically as references are added or removed from the paper, puts them in the correct format, saves on typing, and the same reference file can be used for many different papers -- saving time hunting down reference details.
An MNRAS \bibtex\ style file, \verb'mnras.bst', is distributed as part of this package.
The rest of this section will assume you are using \bibtex.
References are entered into a separate \verb'.bib' file in standard \bibtex\ formatting.
This can be done manually, or there are several software packages which make editing the \verb'.bib' file much easier.
We particularly recommend \textsc{JabRef}\footnote{\url{http://jabref.sourceforge.net/}}, which works on all major operating systems.
\bibtex\ entries can be obtained from the NASA Astrophysics Data System\footnote{\label{foot:ads}\url{http://adsabs.harvard.edu}} (ADS) by clicking on `Bibtex entry for this abstract' on any entry.
Simply copy this into your \verb'.bib' file or into the `BibTeX source' tab in \textsc{JabRef}.
Each entry in the \verb'.bib' file must specify a unique `key' to identify the paper, the format of which is up to the author.
Simply cite it in the usual way, as described in section~\ref{sec:cite}, using the specified key.
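As an illustration, a journal article entry in the \verb'.bib' file might look like this (the key and all bibliographic details are invented):
\begin{verbatim}
@ARTICLE{author2013,
    author = {{Author}, A. N.},
     title = {Title of the paper},
   journal = {MNRAS},
      year = 2013,
    volume = 431,
     pages = {1234-1248},
}
\end{verbatim}
\noindent which could then be cited with \verb'\citet{author2013}'.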
Compile the paper as usual, but add an extra step to run the \texttt{bibtex} command.
Consult the documentation for your compiler or \LaTeX\ distribution.
Correct formatting of the reference list will be handled by \bibtex\ in almost all cases, provided that the correct information was entered into the \verb'.bib' file.
Note that ADS entries are not always correct, particularly for older papers and conference proceedings, so may need to be edited.
If in doubt, or if you are producing the reference list manually, see the MNRAS instructions to authors$^{\ref{foot:itas}}$ for the current guidelines on how to format the list of references.
\section{Appendices and online material}
To start an appendix, simply place the \verb'\appendix' command before the appendix sections; subsequent sections are then lettered A, B, etc.
\section{Introduction}
Galaxy clusters are important probes of cosmology and are laboratories for the study of the highest energy events since the Big Bang. Consequently, much effort has gone into surveys to find them. The first surveys \citep{Abell1958,Zwicky1961} used over-densities of galaxies to locate clusters, but with the dawn of X-ray astronomy in the late 1960s, searches for clusters relying on the emission from the hot intracluster medium (ICM) became possible \citep[e.g.][]{Uhuru,Einstein_clusters,x-ray_survey}.
In the last two decades, the thermal Sunyaev-Zel'dovich effect \citep[tSZE;][]{Sunyaev1972} has been added to the toolkit to both find clusters and study their ICM \citep[see e.g.][for a review]{Mroczkowski2019}.
Rather than emission, the tSZE consists of the inverse Compton scattering of cosmic microwave background (CMB) photons as they pass through the ICM which causes a spectral distortion of the CMB blackbody. The magnitude of the effect in any one direction is proportional to the pressure integrated along the line of sight and is referred to as Compton-$y$. The total Compton-$y$ integrated across the angular extent of the cluster, $Y$, is a good proxy for cluster mass \citep[e.g. ][]{Kravtsov2012}.
One of the advantages of the tSZE is that its surface brightness is redshift independent, meaning that, given sufficient resolution, it is relatively easy to detect and study clusters at high redshifts where optical and X-ray methods need long exposure times. Experiments such as \textit{Planck} \citep{PlanckInstrument} and the South Pole Telescope \citep[SPT;][]{Benson2014} have carried out deep surveys finding hundreds of clusters \citep{Planck_SZE,SPT_SZE}
and the recent data release from the Atacama Cosmology Telescope \citep[ACT;][]{ACT} contains 4195 optically confirmed clusters \citep{Hilton2021}. In the future, experiments such as the Simons Observatory, CMB-S4,
and CMB-HD expect to find an order of magnitude more clusters \citep{SimonsForecastPaper, abazajian2016cmbs4, Abazajian2019, CMB_HD}. Because of the large size of current and future tSZE surveys, their use for cosmology will be limited by systematic effects. The selection function of these surveys is relatively well understood -- due to their insensitivity to redshift, it is mostly a selection by mass. However, as we will show later, one possible systematic effect is the exclusion of some clusters due to radio sources. Although relationships between $Y$ and cluster mass exist, there is an 11 per cent scatter, with some clusters deviating from the relationship by up to 15 per cent \citep{YMRelationship}. A good understanding of the causes of this scatter is essential to realizing the full potential of future data sets.
In order to maximize their survey speed, experiments such as ACT and the SPT have moderate resolutions (1--2 arcmin), which are well matched to the typical angular size of a cluster. As Fig.~\ref{fig:maps} shows, this means they are unable to resolve features in the ICM such as elliptical cores or shock fronts. Such features can indicate which clusters are undergoing mergers -- events that can affect the $Y\!\text{-}M$\ relationship \citep{MergersAndSZE}. Also visible in these 9~arcsec resolution maps, taken with MUSTANG2 \citep{Dicker2014}, are point sources. At 1--2 arcmin resolution these blend in with the clusters, contributing to the scatter in the measured $Y$. If this source population is well quantified, it can be taken into account when fitting cluster mass to the $Y\!\text{-}M$\ relation. The measured masses of individual clusters will vary from their true values but, when taken as a whole, the survey will give the correct distribution of masses. Many studies have predicted source contamination levels derived from source catalogs at frequencies well below and well above the tSZE bands \citep[e.g.,][]{knox2004,Lin2007,SPT_src,Lin2009}. However, there can be large uncertainties in the spectral indices used to extrapolate to tSZE frequencies and not all studies agree. The best solution is to simply measure source properties at the frequencies of interest (90--150~GHz for current surveys).
In this paper we present a pilot study using data from MUSTANG2 to better quantify the effect of point sources on tSZE derived cluster masses.
\begin{figure}
\centering
\includegraphics[clip,trim=5mm 2mm 5mm 0mm, width=0.49\columnwidth]{moo1142_w_ACT_y_big.pdf}
\includegraphics[clip,trim=5mm 2mm 5mm 0mm, width=0.49\columnwidth]{Abell2052_w_ACT_90_big.pdf}
\caption{\label{fig:maps} MUSTANG2 signal-to-noise ratio maps of two clusters. In MOO J1142+1527 ($z=1.189$; left), the filtered ACT $y$ map is shown as cyan contours spaced by $5{\times}10^{-5}$. The bright point source and the elliptical center of this on-going merger do not show up in the ACT $y$ map. In Abell 2052 ($z=0.03$; right), the blue contours are the ACT 90~GHz map with 100~$\mu$K spacing. The ACT beam is shown in cyan. The point source in this cluster is strong enough that all ACT sees is the point source, and this well-known cluster is missing from the ACT DR5 sample. The MUSTANG2 beam is shown (in white) in the bottom left of both maps.
}
\end{figure}
This paper is organized as follows: In Section~\ref{sec:M2obs} we describe the MUSTANG2 observations. Next, in Section~\ref{sec:ACT}, we give an outline of how clusters are found in tSZE surveys using the example of ACT and present simulations showing how point sources will affect the measurements. In Section~\ref{sec:results} we apply the results of these simulations to different samples of clusters, we use extrapolations from low frequency data to compare tSZE, optical, and X-ray selected samples and we calibrate these extrapolations using the point sources in clusters observed by MUSTANG2. The conclusions are presented in Section~\ref{sec:conclusions}. We assume a flat cosmology with $\Omega_\text{m} = 0.3$, $\Omega_\Lambda = 0.7$, and $H_0 = 70~\text{km}\,\text{s}^{-1}\,\text{Mpc}^{-1}$ throughout this paper.
\section{MUSTANG2 observations of clusters}\label{sec:M2obs}
MUSTANG2 is a 90~GHz bolometer camera on the 100~m Green Bank Telescope \citep{Dicker2014}. It has 9 arcsec resolution, a 4.2 arcmin field of view, and can map a 6~arcmin diameter area to 56~$\mu$Jy\,beam$^{-1}$ in an hour, making it ideal for follow-up observations of galaxy clusters.
As part of a wider observing program with goals ranging from solar system and galactic science
to cosmology, MUSTANG2 has mapped over 40 galaxy clusters with typical map depths of 15--50~$\mu \mbox{K}_{\mbox{\tiny RJ}} $ (11--38~$\mu$Jy\,beam$^{-1}$ or
3--10~$\mu\mbox{K}_{\mbox{\tiny CMB}}$\,arcmin). Science goals of these cluster observations include searching for substructure such as bubbles and shocks, measuring ICM profiles \citep{Romero2020}, looking for filaments between clusters \citep{Adam2021}, and following up clusters identified by surveys such as the Massive and Distant Clusters of WISE Survey \citep[MaDCoWS;][]{Gonzalez2019} and Hyper Suprime-Cam \citep[][]{Okabe2021}. These observations are spread over many different projects, and on their own they do not provide statistically significant information about the population of point sources in clusters. However, by combining all public data we are able to construct a sample of 30 clusters which have a clear detection of the tSZE. These clusters span redshifts between 0.03 and 1.8, but most are above $z=0.4$ and the sample has a median redshift of 1.035. Included in the sample are well known clusters such as RXJ1347.5-1145 \citep{Mason2010} and MACSJ0717.5+3745 \citep{Mroczkowski2012}, tSZE identified clusters from ACT \citep{Hasselfield2013}, and 20 clusters from ongoing follow-up of clusters identified by MaDCoWS \citep{Dicker2020}. All but two of the clusters are either optically or X-ray selected.
To extract the point sources, MUSTANG2's MIDAS pipeline \citep[see][for details]{Romero2020} was used to calibrate the raw data and produce signal and noise maps with 1 arcsec pixel spacing. The noise maps were made by inverting half of each cluster's data. For the most conservative numbers, the first half of each night's observations was subtracted from the second half (thus, long timescale drifts that are not removed in data analysis are included in the noise estimates). From these maps, signal-to-noise (SNR) maps smoothed to 9 arcsec were produced and the approximate locations of any sources more than $4.5\sigma$ above the average value of the surrounding pixels and within 5~arcmin of the center of the cluster were found. A non-linear least squares fit of a 2D Gaussian over a 20 arcsec square region around each source was used to find the source sizes and peak amplitudes (allowing for a 2D linear offset over the 20 arcsec region). Where the source size was statistically larger than 10~arcsec it was assumed to be extended and an integrated flux was calculated using the ratio of the source and beam solid angles. For spectral index calculations the integrated fluxes were used whenever a source was extended.
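The per-source fitting step above can be sketched as follows. This is an illustrative re-implementation under stated assumptions (1~arcsec pixels, a 9~arcsec FWHM beam, a circular Gaussian plus planar baseline), not the MIDAS pipeline code itself:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d_with_plane(coords, amp, x0, y0, sigma, a, b, c):
    """Circular 2D Gaussian plus a 2D linear baseline (the 'linear
    offset' allowed over the 20 arcsec fitting region)."""
    x, y = coords
    r2 = (x - x0)**2 + (y - y0)**2
    return amp * np.exp(-r2 / (2.0 * sigma**2)) + a * x + b * y + c

def fit_source(cutout):
    """Fit a compact source in a small map cutout (1 arcsec pixels).
    Returns the peak amplitude and Gaussian sigma in arcsec."""
    ny, nx = cutout.shape
    y, x = np.mgrid[0:ny, 0:nx]
    # Initial guess: beam-sized source (9 arcsec FWHM) near the center
    p0 = [cutout.max(), nx / 2, ny / 2, 9.0 / 2.355, 0.0, 0.0, 0.0]
    popt, _ = curve_fit(gauss2d_with_plane, (x.ravel(), y.ravel()),
                        cutout.ravel(), p0=p0)
    return popt[0], abs(popt[3])

# Synthetic check: a beam-sized source of unit amplitude on a tilted plane.
yy, xx = np.mgrid[0:21, 0:21]
cutout = gauss2d_with_plane((xx, yy), 1.0, 10.0, 10.0, 9.0/2.355,
                            0.01, -0.02, 0.1)
amp, sigma = fit_source(cutout)
print(amp, sigma * 2.355)  # recovers amp ~1.0, FWHM ~9 arcsec
```

A fitted $\sigma$ statistically larger than the beam would flag the source as extended, as in the text.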
As well as flux densities for sources visible in the maps, we place limits on those that might not have been detected. The noise in Jy\,beam$^{-1}$ can be calculated by smoothing the noise maps and taking the RMS over the central arcminute, a spatial scale on which the noise is relatively white. From the noise in each map the 90 per cent completeness limits for point sources in Table~\ref{tab:clusters} are calculated -- in all but two of the clusters, the 90 per cent completeness limit was better than 0.2~mJy. Since the coverage in the MUSTANG2 maps falls off with radius, these numbers are only applicable to the central $r{\approx}2$~arcmin of the maps. The noise (and hence the detection threshold) increases by a factor of $\sim2$ by $r=3$~arcmin and up to a factor of 3 by $r=4$~arcmin, but as shown in Section~\ref{sec:sims}, sources at these distances from the cluster centers are correspondingly less important to measurements of Compton-$y$.
\subsection{MUSTANG2 sources}
The locations and flux densities of point sources found are listed in Table~\ref{tab:src}. Of our 30 clusters, 18 had one or more sources visible at the depth of the available data. This is far higher than what would be expected from the chance alignment of foreground and background sources -- extrapolating source counts from the 31~GHz results in \citet{Mason2009} predicts that less than 5 per cent of clusters should have a source brighter than 1~mJy. As the cumulative histogram in Fig.~\ref{fig:hist} shows, 20 per cent of our cluster sample have sources totalling more than 1~mJy, more than can be explained by any reasonable flattening of spectral indices such as described in \citet{Whittam2017}. This implies most of the sources measured by MUSTANG2 are either cluster members or lensed background sources.
When searching for clusters, the ACT DR5 pipeline masks out areas of the maps that have point sources with measured amplitudes above 10~mJy at 150~GHz. For typical radio sources with spectral indices of $-0.7$ this corresponds to 14.2~mJy in MUSTANG2's band. From Fig.~\ref{fig:hist}, it can be seen that only one of the clusters would have been (and was) masked out. Calculations based on the simulations in Section~\ref{sec:sims} show that an embedded radio source with an amplitude of 0.4~mJy would have a 5 per cent effect on the Compton-$y$ measured for a $2.5{\times}10^{14}~\text{M}_\odot$\ cluster (approximately the mean mass in the latest ACT cluster catalog). Similarly, dusty sources with spectral indices of 3.5 and amplitudes less than 1.75~mJy at 90~GHz would not be strong enough to be cut by the ACT mask, but a dusty source as faint as 0.07~mJy would still be strong enough to cause a 5 per cent change in the measured Compton-$y$. From Fig.~\ref{fig:hist} it can be seen that, regardless of the source type, a significant number of the sources found by MUSTANG2 have flux densities that could bias surveys when they lie close to the center of the clusters.
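The band-to-band scalings quoted above assume a pure power law in frequency. As a minimal sketch (the exact numbers depend on the adopted band centers):

```python
def scale_flux(s_ref, nu_ref, nu, alpha):
    """Power-law extrapolation of a flux density:
    S(nu) = S(nu_ref) * (nu / nu_ref)**alpha."""
    return s_ref * (nu / nu_ref) ** alpha

# ACT DR5 masks sources above 10 mJy at 150 GHz; scaled to 90 GHz:
print(scale_flux(10.0, 150.0, 90.0, -0.7))  # ~14 mJy, typical radio source
print(scale_flux(10.0, 150.0, 90.0, 3.5))   # ~1.7 mJy, dusty source
```

A rising dusty spectrum thus evades the mask at a much lower 90~GHz flux density than a falling radio spectrum, as the text describes.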
\begin{figure}
\includegraphics[width=\columnwidth]{histogram_no_top_axis.pdf}
\caption{\label{fig:hist} The fraction of clusters that have point source flux densities totaling more than $s$, where $s$ is plotted on the x axis. The dotted vertical lines represent the flux density cutoff used in the ACT point source mask used in DR5 (10~mJy at 150~GHz) scaled to 90~GHz assuming a typical dust spectral index of 3.5 (blue) and a typical synchrotron spectral index of $-0.7$ (red). The vertical dashed lines represent the flux densities which, {\it if the source were in the center of a cluster}, would cause a 5 per cent reduction in the measured Compton-$y$ for a cluster with $M_{500\text{c}}$=$2.5{\times}10^{14}~\mbox{M}_\odot$\ for the cases of a dusty source (in blue) and a radio source (in red).}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{source_flux_vs_offset.pdf}
\caption{\label{fig:dist} The source flux densities found by MUSTANG2 (the blue crosses) plotted against their distance from the cluster centers. The flux density scale is given on the left. The red histogram shows the distribution of these sources in 1 arcmin bins normalized by area -- although the sources are centrally concentrated, in absolute numbers there are roughly equal numbers of sources at each radius. The black line shows the absolute value of $N(r)$, the normalized response discussed in Section~\ref{sec:sims}. The dotted part of this line represents where $N(r)$ is negative.}
\end{figure}
As discussed in Section~\ref{sec:sims}, the magnitude of a source's effect on measurements of the tSZE is dependent on its distance from the cluster's center and its spectral index at the frequencies at which the tSZE survey is carried out. The distance from the cluster center of all the sources found by MUSTANG2 is shown in Fig.~\ref{fig:dist}. Within the limits of our sample size, the distribution with radius seems uniform and independent of source amplitude. If the source counts are binned in radius and normalized by the solid angle of each bin then it can be seen that the source distribution is centrally peaked. The uniform distribution with radius implies that the source density (in projection) must fall off approximately as the square of the radius. Given the small number statistics due to our sample size, this source distribution is consistent with \citet{SPT_src} who observed a radial distribution of sources of $\sim r^{-3}$ ($r^{-2.5}$ in projection).
\begin{table}
\caption{The clusters in our sample, their redshifts, and masses (measured from fits to the MUSTANG2 data), and the noise in the centers of the MUSTANG2 maps in $\mu\mbox{K}_{\mbox{\tiny RJ}}$ and also expressed as the 90 per cent completeness limit, $L90$, for point sources.\label{tab:clusters} }
\begin{tabular}{lcccc}
\hline
Cluster ID & redshift & $M_{500\text{c}}$ & map noise & $L90$ \\%Completeness \\
& & $10^{14}\mbox{M}_\odot $ & $\mu\mbox{K}_{\mbox{\tiny{RJ}}}$ & mJy \\ \hline
ACT-CL J0059-0049 & 0.787 & $4.19^{+0.49}_{-0.62}$ & 35 & 0.14 \\
MOO J0105+1323 & 1.130 & $3.83^{+0.23}_{-0.24}$ & 41 & 0.15 \\
MOO J0135+3207 & 1.460 & $1.82^{+0.31}_{-0.31}$ & 29 & 0.10 \\
HSC J0210-0611 & 0.434 & $1.41^{+0.18}_{-0.23}$ & 71 & 0.28 \\
HSC J0221-0346 & 0.430 & $4.41^{+0.69}_{-1.41}$ & 13 & 0.05 \\
HSC J0233-0530 & 0.420 & $1.28^{+0.31}_{-0.37}$ & 21 & 0.08 \\
ACT-CL J0326-0043 & 0.447 & $4.49^{+0.25}_{-0.19}$ & 30 & 0.12 \\
MOO J0448-1705 & 0.960 & $4.58^{+0.34}_{-0.34}$ & 27 & 0.11 \\
MACS J0717.5+3745 & 0.550 & $2.24^{+0.22}_{-0.20}$ & 27 & 0.11 \\
2XMM J0830+5241 & 0.990 & $4.00^{+0.63}_{-0.59}$ & 18 & 0.07 \\
RDCS J0910+5422 & 1.100 & $3.19^{+0.26}_{-0.21}$ & 24 & 0.09 \\
MOO J1001+6619 & 1.530 & $2.12^{+0.56}_{-1.19}$ & 27 & 0.11 \\
MOO J1014+0038 & 1.210 & $3.12^{+0.16}_{-0.15}$ & 23 & 0.08 \\
Zwicky 3146 & 0.291 & $8.16^{+0.44}_{-0.54}$ & 15 & 0.06 \\
MOO J1046+2757 & 1.160 & $2.00^{+0.21}_{-0.23}$ & 36 & 0.16 \\
MOO J1052+0823 & 1.410 & $1.93^{+0.31}_{-0.35}$ & 23 & 0.08 \\
RX J1053.7+5735 & 1.260 & $5.19^{+0.21}_{-0.19}$ & 18 & 0.07 \\
MOO J1054+0505 & 1.450 & $1.34^{+0.33}_{-0.34}$ & 40 & 0.14 \\
MOO J1059+5454 & 1.190 & $2.58^{+0.06}_{-0.06}$ & 27 & 0.11 \\
MOO J1108+3242 & 1.020 & $2.31^{+0.19}_{-0.20}$ & 21 & 0.09 \\
MOO J1110+6838 & 0.900 & $2.02^{+0.16}_{-0.16}$ & 34 & 0.12 \\
MOO J1142+1527 & 1.100 & $3.52^{+0.19}_{-0.19}$ & 44 & 0.17 \\
MACS J1149.5+2223 & 0.540 & $8.12^{+0.30}_{-0.30}$ & 37 & 0.15 \\
MOO J1322-0228 & 0.820 & $3.07^{+0.41}_{-0.53}$ & 28 & 0.11 \\
MOO J1329+5647 & 1.430 & $3.56^{+0.20}_{-0.20}$ & 46 & 0.15 \\
RX J1347.5-1145 & 0.451 & $7.03^{+0.45}_{-0.45}$ & 48 & 0.19 \\
MOO J1354+1329 & 1.480 & $2.46^{+0.25}_{-0.30}$ & 44 & 0.15 \\
MOO J1506+5136 & 1.090 & $3.09^{+0.29}_{-0.29}$ & 36 & 0.14 \\
Abell 2052 & 0.030 & $7.37^{+0.70}_{-0.70}$ & 65 & 0.26 \\
MOO J1554-0447 & 1.050 & $5.36^{+0.73}_{-0.85}$ & 50 & 0.20 \\
\hline
\end{tabular}
\end{table}
\begin{figure*}
\centering
\includegraphics[trim=0in 2.4in 0in 0in,width=\textwidth]{SED_main.pdf}
\caption{\label{fig:SEDs} Selected SEDs for the sources observed. The MUSTANG2 flux densities assuming a point or an extended source at 10 arcsec resolution are shown by the green and yellow circles, respectively -- only J04:48:42.2-17:04:55 shows strong evidence for extended emission, so in the other plots the green points obscure the yellow ones. All points have error bars but many are too small to be easily seen. The 1.4~GHz data from FIRST/NVSS are in red; where available, Herschel/SPIRE data are in blue; black points are from the VLA and BIMA; and the four high frequency points (in purple) are from WISE -- note the lower two frequencies of these four points are mostly upper limits (all upper limits are shown as triangles). The fitted spectral index between 90~GHz and 1.4~GHz is the red dashed line. A 40~K black body spectrum with a 90~GHz flux density set at 20 per cent of the measured MUSTANG2 flux density is shown in cyan -- with the exception of J04:48:42.2-17:04:55 and J11:49:22.5+22:23:25 this emission is ruled out. SEDs for other sources along with further notes are available as supplementary material.}
\end{figure*}
\subsection{Source counterparts}
To place constraints on the spectral indices of these sources, searches for counterparts in radio and infrared surveys were carried out. A search radius of 9~arcsec was used except for lower resolution data sets, in which case half the beam width of the survey was used. Where coverage was available, data at 1.4~GHz was obtained from the Faint Images of the Radio Sky at Twenty-cm (FIRST) point source catalog \citep{FIRST}, while for all other clusters we used the NRAO VLA Sky Survey \citep[NVSS;][]{Condon1998}. To account for the higher resolution of these surveys, integrated fluxes for any counterparts found were used. At the limits of the survey depths (1.0~mJy for FIRST and 2.4~mJy for NVSS), robust radio counterparts were found for 80 per cent of the sources seen by MUSTANG2. In the case of Zwicky 3146, two of the sources are extended in the MUSTANG2 images but are marked as separate sources in the FIRST catalog. In this paper we chose to combine the integrated FIRST fluxes. Infrared (3.4, 4.6, 12, \& 22~$\mu$m) counterparts for most sources were also found in the WISE survey \citep{WISE}; however, the density of sources in WISE is much higher than in FIRST. Searches at locations randomly offset by 20~arcsec from each of our sources had a match in WISE in over 50 per cent of cases, so it is likely that a significant number of WISE matches with MUSTANG2 sources could be chance alignments. Nevertheless, the analysis in this paper only uses the WISE sources to provide upper limits on emission from hot dust. Upper limits derived from chance-alignment counterparts will be weaker than those obtained otherwise, but in no case would these weaker limits change our conclusions. Few of the clusters in this paper were in the Herschel/SPIRE public archive, but where they were, counterparts or upper limits were obtained at wavelengths of 250, 350, and 500~$\mu$m.
A few of the point sources also had 28.5~GHz observations by BIMA \citep{BIMA} or VLA counterparts at 74~MHz \citep{Cohen2007}, 4.9, and 8.5~GHz \citep{Lin2009}. A summary of all these data can be found in Table~\ref{tab:src} while spectral energy density (SED) plots can be seen in Fig.~\ref{fig:SEDs} (selected sources only) and its extended version containing all the sources in the supplementary material.
These SED plots place limits on which emission mechanisms dominate at 90~GHz. Any significant contribution from a hot ($\gg$40~K) thermal component is ruled out by the WISE data -- extrapolating the WISE flux densities to 90~GHz with any reasonable dust spectral index ($\beta>1)$ gives predicted emission an order of magnitude or more below the flux densities measured by MUSTANG2. Conversely, the majority of the sources have a counterpart at 1.4~GHz and some of the better studied clusters (e.g. Abell 2052 and RX~J1347.5-1145 in Fig.~\ref{fig:SEDs}) have additional radio data between 1.4 and 90~GHz. These data are mostly consistent with spectral indices between $-0.1$ and $-1$ implying there is likely to be a synchrotron component in most of the sources measured by MUSTANG2. In addition, Herschel data in some clusters such as MACS J0717.5+3745 put strong upper limits on a cold ($<$40~K) dust component, showing it contributes 20 per cent or less of the flux density at 90~GHz.
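The two-point spectral indices used above follow from the convention $S_\nu \propto \nu^{\alpha}$; as a sketch (the example flux densities are hypothetical, not measured values):

```python
import math

def spectral_index(s1, nu1, s2, nu2):
    """Two-point spectral index alpha, defined by S proportional to
    nu**alpha between two measured flux densities."""
    return math.log(s2 / s1) / math.log(nu2 / nu1)

# Hypothetical example: a 10 mJy FIRST counterpart at 1.4 GHz paired
# with a 0.54 mJy MUSTANG2 detection at 90 GHz:
alpha = spectral_index(10.0, 1.4, 0.54, 90.0)
print(round(alpha, 2))  # -0.7, a synchrotron-like slope
```

Indices between $-0.1$ and $-1$ from such pairs are what point to a synchrotron component in most of the MUSTANG2 sources.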
Many more of the sources are like those in the cluster MOO J0448-1705 on the top of Fig.~\ref{fig:SEDs} where there are virtually no constraints on a cold dust component and a small change in the radio spectral index between 1.4 and 90~GHz would change the dominant emission mechanism in the MUSTANG2 data. Five sources observed by MUSTANG2 show an inverted radio spectrum between 1.4 and 90~GHz but these SEDs could also be explained by the presence of a cold dust component or source variability.
\section{Extracting masses from millimeter wave surveys}\label{sec:ACT}
To see how point sources could affect masses recovered from tSZE surveys, a brief outline of the data analysis steps used by these surveys is needed.
Full details of the data analysis pipelines can be found in the relevant papers \citep[e.g.][]{Hilton2021,SPT_SZE,Planck_SZE}. In this paper we concentrate on how masses are obtained from raw maps in the recent ACT DR5 data release \citep{Hilton2021} which follows the multi-frequency matched filter approach in \citet{Melin2006}. However, the methods used by other experiments and data releases are broadly similar.
First, the magnitude of the tSZE (in units of flux density) varies with frequency as given by
\begin{equation} \label{equ:g(x)}
g(x) = \frac{x^4e^x}{(e^x -1)^2} \left(x \frac{e^x+1}{e^x-1} - 4\right)[1 + \delta(x,T_e) ]
\end{equation}
where $x$ is the dimensionless frequency defined as $ h \nu / k_\text{\tiny B} T_\text{\tiny CMB}$, $\delta(x,T_e)$ is a relativistic correction which is at most a few per cent and, in the analysis presented in this paper, can be ignored, and $T_e$ is the electron temperature. Below $\sim$218~GHz, $g(x)$ is negative meaning that, at these wavelengths, clusters show up as decrements in the microwave background temperature (0.05--1~mK at 90~GHz). Because of the unique spectral shape of the tSZE, it is possible to separate out the tSZE signal using multifrequency observations -- in the case of ACT DR5, 90 and 150~GHz are used to find clusters. Other ground based experiments use similar bands while, due to the lack of atmospheric absorption, space based experiments such as {\it Planck} \citep{PlanckInstrument} can have wider and more complete frequency coverage. Sets of matched filters (matched to $g(x)$ and the cluster profile) are used to extract maps of peak Compton-$y$ with different filter sets being used to obtain maximum signal-to-noise (SNR) on clusters in different mass and redshift ranges \citep{Hilton2021}.
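As a numerical check on Equation~\ref{equ:g(x)} (neglecting the relativistic correction $\delta$), the sketch below evaluates $g(x)$ at representative frequencies and reproduces the sign change at the ${\sim}218$~GHz null:

```python
import numpy as np

H = 6.62607015e-34   # Planck constant [J s]
K_B = 1.380649e-23   # Boltzmann constant [J/K]
T_CMB = 2.725        # CMB temperature [K]

def g(nu_ghz):
    """Spectral shape of the tSZE (Equation 1), neglecting the
    few-per-cent relativistic correction delta(x, T_e)."""
    x = H * nu_ghz * 1e9 / (K_B * T_CMB)  # dimensionless frequency
    return (x**4 * np.exp(x) / (np.exp(x) - 1.0)**2
            * (x * (np.exp(x) + 1.0) / (np.exp(x) - 1.0) - 4.0))

# Below the ~218 GHz null g(x) is negative: clusters appear as
# decrements in the CMB; above it they appear as increments.
for nu in (90.0, 150.0, 218.0, 353.0):
    print(nu, g(nu))
```

The negative values at 90 and 150~GHz are what make central point sources `infill' the cluster signal in the ACT bands.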
Because of how the magnitude of the tSZE changes with wavelength and the relative noise in the maps, frequencies around 90~GHz contribute most to the sensitivity of the ACT Compton-$y$ maps. For most calculations \citet{Hilton2021} use a reference filter set optimized for clusters with an angular extent on the sky of 2.4 arcmin. The peak Compton-$y$ recovered from this filter set is referred to as \mbox{$\tilde{y}_0$}.
With \mbox{$\tilde{y}_0$}\ calculated, the clusters are found by making cuts at fixed SNR ($4\sigma$ in the case of DR5). However, because of the log-normal nature of intrinsic scatter in the $Y\!\text{-}M$\ relationships and the steepness of the cluster mass function, a simple inversion of the relationship is not used to evaluate mass directly.
Instead, \citet{Hilton2021} find the most likely mass given the cluster's redshift and our knowledge of the intrinsic scatter. For a given survey, the better our knowledge of the intrinsic scatter (which is potentially dependent on redshift), the more accurate the recovered masses will be. Any measurements of the causes of the intrinsic scatter in the measured $y$ values, such as the effects of point sources, apply across all tSZE cluster experiments, current and future.
\subsection{The effects of point sources}\label{sec:sims}
As the tSZE signal at the frequencies
experiments such as ACT are most sensitive to is negative, central point sources will have the effect of `infilling' some of the tSZE signal. This will increase the scatter of the masses obtained by such surveys. Many authors \citep[e.g. ][]{Lin2007,SPT_src,Lin2009} have made calculations of the magnitude of this effect by equating source flux density to equivalent Compton-$y$. In most cases, data at tSZE frequencies are not available and extrapolations over two decades in frequency need to be made to predict the source population. Small errors in these extrapolations can have a large effect and there is evidence that radio sources in the centers of clusters have different spectral indices than typical sources \citep{Coble2007}. \citet{SPT_src} circumvent this problem by looking for sources in their low resolution SPT maps in the directions of X-ray clusters. Cross correlations with the Sydney University Molonglo Sky Survey (SUMSS) at 843~MHz \citep{Murphy2007} were used to build a comprehensive model of the source populations within the virial radius of clusters. At $z=0.25$, this model predicts that 0.5 per cent of $3{\times}10^{14}\mbox{M}_\odot$\ clusters would be totally infilled at 150~GHz, rising to 1.5 per cent at 90~GHz -- the inclusion of the higher frequency data giving a result 6 times lower than that of \citet{Lin2007}. However, these techniques, which simply add positive source flux to the negative tSZE flux, do not fully take into account the matched filter in the cluster finding pipeline described above.
To better quantify the effects of point sources on DR5's measurement of \mbox{$\tilde{y}_0$}, simulations were carried out; 25 simulated clusters with masses ranging from $M_{500c}{=}$2.1--$6.5{\times}10^{14}~\mbox{M}_\odot$ (corresponding to typical detections of 4.5--10$\sigma$) and redshifts between 0.145 and 1.85 (the range of redshifts in DR5) were placed at random locations (avoiding the positions of known clusters and sources) in a single $12.6^\circ{\times}7.3^\circ$ tile drawn from the 90 and 150 GHz ACT DR5 maps. Cluster profiles from \citet{Arnaud2010} were assumed and the maps were run through the ACT cluster detection pipeline \citep{Hilton2021}. This process was then repeated with fake point sources added to the maps and the recovered properties of the simulated clusters with and without sources present were compared. Initial tests placed sources with spectral indices of $\alpha=-0.7$ (a radio source) or $\alpha=3.5$ (a dusty source) in the centers of the clusters. The results, shown in Fig.~\ref{fig:sim_results}a, show that the change in \mbox{$\tilde{y}_0$}\ is independent of cluster mass and is linear with source flux density up until the source changes the recovered \mbox{$\tilde{y}_0$}\ by approximately 30 per cent. Sources that changed \mbox{$\tilde{y}_0$}\ by more than 30 per cent resulted in recovered \mbox{$\tilde{y}_0$}\ values with low SNR and high scatter. If these were real clusters, most would not have made the SNR cut to be included in the DR5 catalog so these points were dropped from the simulations. For the purposes of further analysis we adopt a reference source as having a spectral index of $\alpha=-0.7$ and a 90~GHz flux density of 1~mJy which gives a change in \mbox{$\tilde{y}_0$}\ of $-8.76{\times}10^{-6}$ (12 per cent for a typical $M_{500\text{c}}$=$2.5{\times}10^{14}~\mbox{M}_\odot$\ cluster).
\begin{figure*}
a\includegraphics[width=0.65\columnwidth]{delta_y_vs_flux_mass_big.pdf}
b\includegraphics[width=0.65\columnwidth]{delta_y_vs_radius_big.pdf}
c\includegraphics[width=0.65\columnwidth]{delta_y_vs_alpha_big.pdf}
\caption{\label{fig:sim_results}Results from injecting sources into simulated clusters with different masses and redshifts. Left: How \mbox{$\tilde{y}_0$}\ changes with the flux density of sources placed in the center of each cluster and the cluster mass (more massive, higher significance clusters are plotted in fainter colors). Sources with spectral indices of $-0.7$ and $+3.5$ are plotted in blue and red respectively. For a given spectral index the change in \mbox{$\tilde{y}_0$}\ is linear with source strength and independent of cluster mass. Simulated clusters were recovered from the maps with significances between $4.5\sigma$ (dark colors) and $10\sigma$ (lighter points). The yellow star is for a 1~mJy, $\alpha=-0.7$ source used to normalize the other plots; Center: How the average change in \mbox{$\tilde{y}_0$}\ varies as you move the reference source away from the center of the cluster, normalized to a peak of 1 when the source is in the center. The results are symmetrical around $r=0$~arcsec; Right: The effects of spectral index normalized to a value of 1 for the reference source, averaged across all simulated clusters.}
\end{figure*}
As not all sources will be in the center of clusters, more simulations were carried out adding the reference 1~mJy, $\alpha=-0.7$ source to the clusters at distances between 0 and 300~arcsec from their known centers. After discarding data points where the source infill exceeded 30 per cent, the resulting $\Delta_{\tilde{y}_0}$ was found to be independent of cluster mass and an average could be taken across all simulated clusters to obtain the normalized response function $N(r)$ shown in Fig.~\ref{fig:sim_results}b. The independence of $N(r)$ from cluster mass is expected as it represents the compact source response of the matched filter used to calculate \mbox{$\tilde{y}_0$}, not the filter's response to the cluster. The size and shape of $N(r)$ falls between the 90~GHz and 150~GHz components of the matched filter.
Past a radius of 59~arcsec, a source contributes less than half the change in \mbox{$\tilde{y}_0$}\ that it would in the center. Also, past 104~arcsec, the shape of the matched filters used to find clusters means that a positive source will in fact add to the negative tSZE signal, resulting in some scatter to higher masses. At a radius of 220~arcsec the response to a source is still above 10 per cent (but with the opposite sign) of that of the same source in the center of the cluster -- even though this is far outside the $R_{500}$ of most clusters. The above measurements from MUSTANG2 show there are many sources in this region.
When calculating $N(r)$, note that $r$ refers to the angular distance of a source from the known center of the cluster, not the measured position reported by the data analysis pipeline. A strong source that significantly affects the measured \mbox{$\tilde{y}_0$}\ can shift the measured cluster location. For off-centered sources less than 104 arcsec from the true cluster center, the measured cluster location will move away from the source and calculations of $N(r)$ using the measured positions will be biased low. However, for changes in \mbox{$\tilde{y}_0$}\ less than 20 per cent, the simulations in Section~\ref{sec:sims} show this effect is less important than variations in the measured locations of clusters due to map noise (${\sim}10$~arcsec for a cluster with SNR=8 -- see Figure~5 of \citet{Hilton2021}). Consequently, regardless of the presence of a source, the measured cluster locations can be used.
From the first simulations, dusty sources had a much larger effect on \mbox{$\tilde{y}_0$}\ than radio sources of the same 90~GHz flux density. This is due to the higher flux density of the dusty source when extrapolated to the 150~GHz ACT band. To explore this in more detail, the spectral index, $\alpha$, of the fake sources placed at the centers of the clusters was varied between $-1$ and $+4$. The results of these simulations can be seen in Fig.~\ref{fig:sim_results}c. When normalized so that the reference source has an amplitude of 1, the results can be represented by the normalized function $A(\alpha)$. Taking all this together, for any given source with a known distance from the cluster center $r$, flux density in the ACT 90~GHz band $I$ (in mJy), and spectral index $\alpha$, the difference in \mbox{$\tilde{y}_0$}\ can be written as:
\begin{equation} \label{equ:dy0}
\Delta_{\tilde{y}_0} = I \:\delta_{\tilde{y}_0}\: N(r)\: A(\alpha)
\end{equation}
where $\delta_{\tilde{y}_0}= -8.76{\times}10^{-6}$ is the reference value reported above for a 1~mJy source with a spectral index of $-0.7$.
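The pieces of Equation~\ref{equ:dy0} can be combined in a short routine. As a caution, the radial response $N(r)$ and spectral correction $A(\alpha)$ below are illustrative placeholder interpolations anchored only to the few values quoted in the text (half response at 59~arcsec, a zero crossing at 104~arcsec, an opposite-sign response of about 10 per cent at 220~arcsec), not the published simulation curves:

```python
import numpy as np

# Reference response: change in y0 per mJy for a central source with
# spectral index -0.7 (the value quoted in the text).
DELTA_Y0_REF = -8.76e-6

# Placeholder interpolation nodes for N(r), anchored to the values quoted
# in the text; the true curve comes from the simulations described above.
_R_ARCSEC = np.array([0.0, 59.0, 104.0, 220.0, 300.0])
_N_R      = np.array([1.0, 0.5,  0.0,  -0.1,   0.0])

def delta_y0(flux_mjy, r_arcsec, alpha, A=lambda a: 1.0):
    """Predicted change in the measured y0 from a single source.

    A(alpha) defaults to 1 (i.e. a source with the reference -0.7 index);
    the real A curve must be read off the simulation results.
    """
    n_r = np.interp(r_arcsec, _R_ARCSEC, _N_R)
    return flux_mjy * DELTA_Y0_REF * n_r * A(alpha)
```

Note the sign behavior: beyond the 104~arcsec zero crossing the placeholder $N(r)$ is negative, so a positive source raises rather than lowers the predicted \mbox{$\tilde{y}_0$}.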
\section{Source Contamination in Clusters}\label{sec:results}
In this section we use Equation~\ref{equ:dy0} to predict the change in the measured \mbox{$\tilde{y}_0$}\ for different cluster samples. To begin with, we use only the low frequency radio data to explore the expected contamination of the ACT DR5 sample under different assumptions. We then use the flux densities measured by MUSTANG2 to calibrate this relationship. After this we expand the analysis to cluster samples selected by other observing techniques.
\subsection{Extrapolation from 1.4~GHz}\label{sec:1.4GHz}
Using a search radius of 5~arcmin centered on each DR5 cluster in the FIRST survey footprint, we found 1.4~GHz flux densities for all sources above the FIRST detection threshold of 1~mJy. In this subset of 2138 DR5 clusters, 1947 of them had one or more FIRST sources. For these clusters, we made the large extrapolation from 1.4~GHz to the ACT 90~GHz band (actual central frequency 98~GHz) to find the flux density used in Equation~\ref{equ:dy0}. As there are no radio surveys at intermediate frequencies with sufficient resolution and sensitivity for the majority of the observed clusters, we simply assumed spectral indices of $-0.6$, $-0.7$, and $-0.8$ and assumed these spectral indices were constant across the ACT bands. The distance of each source from the cluster center was calculated and the fractional change in \mbox{$\tilde{y}_0$}\ found using Equation~\ref{equ:dy0}. Histograms of the results are shown in Fig.~\ref{fig:FIRST}.
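The size of this extrapolation is easy to quantify. The sketch below uses only the assumed power law $S(\nu) \propto \nu^{\alpha}$ to show how strongly the predicted 98~GHz flux density of a 1~mJy FIRST source depends on the assumed index:

```python
# Power-law extrapolation from FIRST (1.4 GHz) to the ACT 90 GHz band
# (actual central frequency 98 GHz): S(nu) = S_1.4 * (nu / 1.4)**alpha.
def extrapolate_mjy(s_14_mjy, alpha, nu_ghz=98.0):
    return s_14_mjy * (nu_ghz / 1.4) ** alpha

s_steep = extrapolate_mjy(1.0, -0.8)   # ~0.033 mJy
s_mid   = extrapolate_mjy(1.0, -0.7)   # ~0.051 mJy
s_flat  = extrapolate_mjy(1.0, -0.6)   # ~0.078 mJy

# Shifting alpha by only 0.1 rescales the prediction by (98/1.4)**0.1 ~ 1.53,
# which is why the histograms below are so sensitive to the assumed index.
```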
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{histogram_DR5_vs_alpha1.pdf}
\caption{\label{fig:FIRST}Cumulative histograms of the percentage change in \mbox{$\tilde{y}_0$}\ from FIRST radio sources in DR5 clusters. The sources were extrapolated to the ACT 90~GHz band using different spectral indices (the black lines). The red line uses a spectral index of $-0.7$ that flattens to 0.0 at 98~GHz.
}
\end{figure}
For a spectral index of $-0.7$, we find that a significant number of clusters (${\sim}5$ per cent) have measured \mbox{$\tilde{y}_0$}\ 5 per cent {\it above} their true values. This is due to sources located more than 104~arcsec from the cluster centers, an effect not predicted by calculations using simple aperture photometry. Although less important than the approximately equal number of sources found close to the cluster center, this is a non-negligible effect.
The reduction in the observed \mbox{$\tilde{y}_0$}, due to sources closer to the cluster center, has a long tail. When assuming a spectral index of $-0.7$, 2.8 per cent of clusters have a reduction in \mbox{$\tilde{y}_0$}\ of more than 20 per cent, broadly in line with \citet{Lin2007}, who predict that this number is less than 3 per cent of clusters.
At lower contamination fractions, the number of clusters affected is much larger, with 13.1 per cent of clusters predicted to have a measured \mbox{$\tilde{y}_0$}\ reduced by 5 per cent. However, the most important result is the sensitivity of these numbers to the assumed spectral index. Changes in the assumed spectral index of 0.1, far less than the typical scatter in spectral indices, can result in a factor of 2 change in the number of clusters affected, showing the importance of high frequency, high resolution point source searches within clusters in order to measure source flux densities directly.
This sensitivity to spectral index of the predicted amplitude of these sources in the ACT 90~GHz band is driven by the large extrapolation from 1.4~GHz. Even when data at intermediate frequencies (5--20~GHz) are available, the flatter spectrum sources that are more likely to be bright at tSZE wavelengths are more likely to be variable \citep{ODea1998}. As data at different frequencies can be taken years apart, accurate extrapolations can be problematic.
Also, the spectral index $\alpha$ in Equation~\ref{equ:dy0} is the {\it local} spectral index between the frequency bands used to measure the tSZE. In the case of ACT, these frequencies are in a range where many sources start to be dominated by dust, so their spectral index may change with frequency. Radio sources can also change spectral index as, at higher frequencies, flatter-spectrum radio cores can start to dominate over the steep-spectrum radio lobes \citep{Whittam2017}. To test our sensitivity to such spectral index changes, the 1.4~GHz flux densities were extrapolated to the ACT 90~GHz band using a spectral index of $-0.7$ (to obtain $I$ in Equation~\ref{equ:dy0}) and above this frequency a flat spectrum of $\alpha=0$ was assumed (in $A(\alpha)$). The results (the red line in Fig.~\ref{fig:FIRST}) are different from when a constant spectral index is assumed, demonstrating the need for additional data to constrain any dust contribution to the mm-wave spectrum of sources.
As the number of DR5 clusters with FIRST coverage is large, it is possible to bin clusters by redshift. Fig.~\ref{fig:DR5redshift} shows an example for an assumed spectral index of $-0.7$. As would be expected if the majority of these FIRST sources are associated with the clusters, redshift dimming means that clusters at redshifts below 0.4 have significantly more contamination from point sources. This trend extends all the way past z=1, with redshift dimming more than making up for any increase in source counts or luminosity at higher redshifts.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{histogram_DR5_redshift1.pdf}
\caption{\label{fig:DR5redshift}Cumulative histograms of the percentage change in \mbox{$\tilde{y}_0$}\ predicted from 1.4~GHz source counts, broken down by cluster redshift.}
\end{figure}
\subsection{MUSTANG2 constraints on ACT $\Delta\tilde{y}_0$}
We return now to the MUSTANG2 results. As stated in Section~\ref{sec:M2obs}, many of the sources measured by MUSTANG2 are clearly dominated by radio emission, so it makes sense to use the spectral index calculated between 1.4 and 90~GHz both to make the small extrapolation from the center of MUSTANG2's band (90~GHz) to ACT's (98~GHz) and as the value for $\alpha$ (the spectral index within the ACT bands). In other sources, where a cold thermal component cannot be ruled out, a 1.4~GHz radio counterpart is often present and it is likely that the transition from being radio to dust dominated lies close to MUSTANG2's measurement frequency, so $\alpha$ will be somewhere between a typical dust and a typical radio index. Any dust component of the emission at 90~GHz will increase the spectral index calculated between 1.4 and 90~GHz over that calculated with no dust contribution present, again giving a value between that of dust and radio. For these sources, the spectral index calculated between 1.4 and 90~GHz is probably the best estimate for $\alpha$\ from the available data. In the cases where no 1.4~GHz counterpart source was found, the point source sensitivity limit of the appropriate catalog was used instead. Calculated values of the resulting spectral index are given in Table~\ref{tab:src} and range between $-1.01$ and 0.24 with a median value of $-0.460$, comparable to the median value of $-0.66$ for sources in clusters found by \citet{Coble2007}.
To find the value of \mbox{$\tilde{y}_0$}\ that ACT would have measured if it had observed the MUSTANG2 clusters (many of which are outside the ACT survey area), we use equation 5 from \citet{Hilton2021}:
\begin{equation}
\tilde{y}_0 = 4.95{\times }10^{-5} E(z)^2 \left( \frac{{M}_{500c}}{3\times 10 ^{14}} \right)^{1.08} Q({M}_{500c},z) f_{\rm rel}
\label{equ:y0}
\end{equation}
where ${M}_{500c}$ is the cluster mass, $E(z)$ is the evolution of the Hubble parameter with redshift (e.g. $\sqrt{\Omega_m(1+z)^3+\Omega_\Lambda}$), and $Q({M}_{500c},z)$ is a function that describes the mismatch between the cluster's angular size and the 2.4~arcmin matched filter used to calculate \mbox{$\tilde{y}_0$}. $Q({M}_{500c},z)$ becomes significant for large clusters at low redshifts. $f_{\rm rel}$ is a relativistic correction (typically 1--2 per cent) which is far less than the assumed errors in ${M}_{500c}$ and so is taken to be 1. The cluster masses used are those in Table~\ref{tab:clusters}, which are derived from non-parametric cluster profile fits to the MUSTANG2 data using the method described in \citet{Romero2020} and \citet{Dicker2020}. The fits are carried out on the calibrated detector timestreams and the point sources are included in the fits.
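For reference, Equation~\ref{equ:y0} is straightforward to evaluate. In this sketch the cosmological parameters $\Omega_m = 0.3$ and $\Omega_\Lambda = 0.7$ are assumptions (they are not stated above), and $Q$ and $f_{\rm rel}$ default to 1:

```python
import math

OMEGA_M, OMEGA_L = 0.3, 0.7   # assumed flat-LCDM values, for illustration

def E(z):
    """Evolution of the Hubble parameter with redshift."""
    return math.sqrt(OMEGA_M * (1.0 + z) ** 3 + OMEGA_L)

def y0_tilde(m500c_msun, z, Q=1.0, f_rel=1.0):
    """Central Compton-y from equation 5 of Hilton et al. (2021).

    Q (the filter-mismatch function) and f_rel (relativistic correction)
    default to 1; Q matters for large clusters at low redshift.
    """
    return 4.95e-5 * E(z) ** 2 * (m500c_msun / 3e14) ** 1.08 * Q * f_rel
```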
The \mbox{$\tilde{y}_0$}\ values calculated from equation~\ref{equ:y0} and spectral indices from Table~\ref{tab:src} were used to calculate the fractional change in \mbox{$\tilde{y}_0$}\ shown in Fig.~\ref{fig:results}.
Of our sample of 30 clusters, 5 have a change in \mbox{$\tilde{y}_0$}\ of more than 5 per cent (MACS J0717.5+3745=6 per cent; ACT-CL J0326-0043=7 per cent; RX J1347.5-1145=12 per cent; MOO J1554-0447=26 per cent; Abell 2052=395 per cent -- it appears as a point source in Fig.~\ref{fig:maps}).
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{histogram_M2_and_DR5_fit1.pdf}
\caption{\label{fig:results}A cumulative histogram of the percentage change in \mbox{$\tilde{y}_0$}\ for the sources found by MUSTANG2. The dotted histograms are calculations of a sample of DR5 clusters matched in redshift to the MUSTANG2 sample with point source flux densities extrapolated from FIRST 1.4~GHz flux densities using spectral indices of $-0.460$ in red (to match the median spectral index of the MUSTANG2 sources) and $-0.68$ in blue (which provides a better match to the histogram measured by MUSTANG2). }
\end{figure}
To properly compare the predicted effects of point sources between the DR5 and MUSTANG2 samples, the dependency on redshift needs to be taken into account. There are enough DR5 clusters in the FIRST region to bin into redshift bins of width $\Delta z = 0.1$ while maintaining a meaningful sample (${\gg}10$) in each bin over the redshift range z=0.1 to 1.3. Histograms were taken within each bin and then added together with weights matched to the redshift distribution of the MUSTANG2 clusters. Fig.~\ref{fig:results} shows the histograms obtained by extrapolating the FIRST sources found in a redshift matched sample of DR5 clusters using the median spectral index found by MUSTANG2. This predicts a larger effect on \mbox{$\tilde{y}_0$}\ for the DR5 sample than the measured values from MUSTANG2. The best match (in a least squares sense) between the two samples uses a spectral index of $-0.68$. A best fit spectral index steeper than the median value found in sources detected by MUSTANG2 reflects the large amount of scatter in spectral indices -- a significant number of the FIRST sources will have steep spectral indices and fall below the MUSTANG2 detection threshold effectively biasing the median value found by MUSTANG2 high.
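The redshift matching described above can be sketched as follows; the function name and binning are illustrative, not the pipeline actually used:

```python
import numpy as np

def redshift_matched_histogram(frac_dy, z_dr5, z_target, dy_bins, z_edges):
    """Combine per-redshift-bin histograms of the fractional y0 change,
    weighted so the combined DR5 histogram matches the redshift
    distribution of a target (e.g. MUSTANG2) sample."""
    weights, _ = np.histogram(z_target, bins=z_edges)
    combined = np.zeros(len(dy_bins) - 1)
    for lo, hi, w in zip(z_edges[:-1], z_edges[1:], weights):
        in_bin = (z_dr5 >= lo) & (z_dr5 < hi)
        if w == 0 or in_bin.sum() == 0:
            continue
        h, _ = np.histogram(frac_dy[in_bin], bins=dy_bins)
        combined += w * h / in_bin.sum()   # normalize each bin, then reweight
    return combined
```

By construction the combined histogram integrates to the size of the target sample (when every DR5 redshift bin used is populated), so the two distributions can be compared directly.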
For the purpose of predicting the contamination of the DR5 sample, using a single spectral index across all redshifts, source locations within clusters, and cluster masses is clearly an approximation. A larger survey for sources in clusters over wider cluster redshift and mass ranges would allow us to test for effects such as source evolution and to build a model that takes them into account.
However, taking this result at face value, contamination fractions similar to \citet{Lin2007} are obtained: 3 per cent of clusters have more than a 20 per cent decrease in Compton-$y$. Unlike \citet{Lin2007}, we also predict another 3 per cent will have a 10 per cent increase. In addition, it is possible to calculate the intrinsic scatter in DR5 clusters due to point sources alone and compare it with the value of the scatter in the fitted $Y\!\text{-}M$\ relationship of $\sigma (\log \tilde{y}_0) = 0.2$ \citep{Hasselfield2013}. The results in Fig.~\ref{fig:intrisic} show that, while the peak in the intrinsic scatter due to point sources is much narrower than that in \citet{Hasselfield2013}, there exist long tails with significant amounts of scatter. The scatter in the DR5 clusters due only to point sources is 6 per cent. However, tSZE surveys are inherently biased -- clusters with significant point source contamination will be missing from the DR5 sample, making the scatter artificially low. In the next section, comparisons with clusters selected by non-tSZE methods show this effect is important.
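As a rough illustration of how a point-source-only scatter can be computed from the predicted fractional changes (the exact estimator used for the quoted numbers is an assumption here):

```python
import numpy as np

def point_source_scatter(frac_change):
    """Scatter in log10(y0) induced by point sources alone, given the
    predicted fractional changes Delta_y0 / y0 for each cluster."""
    frac = np.asarray(frac_change, dtype=float)
    frac = frac[frac > -1.0]   # drop clusters whose signal is fully cancelled
    return np.std(np.log10(1.0 + frac))
```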
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{intrinsic_scatter_fig9_data.pdf}
\caption{\label{fig:intrisic}Histograms of the predicted scatter in \mbox{$\tilde{y}_0$}\ due to FIRST point sources for redMaPPer (black) and DR5 (blue) clusters. A single spectral index of $-0.68$, the best fit for the MUSTANG2 data, has been used and the samples have been scaled to the same size. For comparison, in red, is a fit to the total (20 per cent) intrinsic scatter of the clusters that were used to calibrate the $Y\!\text{-}M$\ relationship from figure~9 in \citet{Hasselfield2013} (normalized for sample size). The scatter due to point sources in DR5 is 6 per cent compared to 11.3 per cent for the redMaPPer clusters. Although the scatter due to point sources is smaller than the total scatter there are long tails. The difference between the histograms on the negative side is evidence that a significant number of clusters are missing from the DR5 sample due to point sources.}
\end{figure}
\subsection{Comparisons between cluster surveys}\label{sec:comp}
In this section we examine how common point sources are in clusters selected by different survey techniques. For comparison we choose the Meta-Catalog of X-ray detected Clusters of galaxies \citep[MCXC;][]{MCXC}, an X-ray catalog made by combining observations from many different ROSAT and {\it Einstein} cluster catalogs. Although highly heterogeneous (in terms of depth and redshift ranges), this sample has 732 clusters in the FIRST region, giving it better constraining power than some smaller but purer samples. Like \citet{SPT_src}, we assign a generous 40 per cent error to the masses of this survey. To compare tSZE selected clusters to optically selected clusters we choose the SDSS DR8 redMaPPer catalog, which has over 22\,300 galaxy clusters in the FIRST footprint and a 21 per cent intrinsic scatter to true halo mass when calibrated to {\it Planck} using the scaling relation from \citet{RedmapperIII}. The redMaPPer catalog is made using an iterative algorithm that finds galaxy clusters using the red sequence \citep{RedmapperI}. There is significant overlap among the optical, X-ray, and ACT DR5 cluster samples, but the overlap is far from complete. The X-ray and optical catalogs extend to lower mass values but they are less sensitive than ACT at higher redshifts due to their approximately flux-limited natures. DR5 clusters in the FIRST region have redshifts between 0.035 and 1.91 with a median value of 0.518, MCXC redshifts with FIRST coverage range from 0.0031 to 1.261 with a median value of 0.161, and redMaPPer clusters with FIRST coverage have redshifts between 0.062 and 0.94 with a median value of 0.368. Direct comparisons between the MCXC and redMaPPer catalogs and the MUSTANG2 sample were not possible due to limited overlap in redshift.
To ensure the clusters were on the same mass scale, the MCXC and redMaPPer catalogs were searched to find co-detections with DR5. Possible matches were identified as being less than 5~arcmin apart on the sky and within 0.1 in redshift. When more than one match was possible, all candidate pairs were rejected. Over the region of the sky with FIRST coverage, we identified 100 DR5/MCXC and 983 DR5/redMaPPer potential matches. The ratios of the DR5 masses to the MCXC/redMaPPer masses were calculated and the median and standard deviation of these ratios found. The MCXC clusters had a median mass 5 per cent higher than the DR5 clusters with a scatter of 43 per cent. This scatter is consistent with our assumed value for the intrinsic scatter of the MCXC sample. The redMaPPer mass scale was 37 per cent higher than the DR5 clusters with a scatter of 39 per cent which, given the 21 per cent scatter in the redMaPPer mass--richness relation \citep{RedmapperIII}, is higher than expected. Similar mass discrepancies between optical and tSZE measurements of clusters have recently been pointed out by \citet{JackOS2021}, and \citet{Myles2021} showed that, at low redshift, redMaPPer clusters have a richness-dependent bias due to projection effects. When averaged over richnesses, this is large enough to explain the additional scatter in the measured masses between DR5 and redMaPPer co-detections as well as the higher median mass. This is not something that can be corrected on a cluster-by-cluster basis, so
for an initial comparison of the effects of point sources on clusters selected via different techniques, we simply scale the cluster masses in the MCXC and redMaPPer samples so that co-detections are on the same average mass scale.
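A minimal version of this co-detection matching, using a flat-sky angular separation (adequate at 5~arcmin scales) and hypothetical inputs, might look like:

```python
import numpy as np

def match_clusters(ra1, dec1, z1, ra2, dec2, z2,
                   max_sep_arcmin=5.0, max_dz=0.1):
    """Pair clusters from two catalogs that are less than 5 arcmin apart
    on the sky and within 0.1 in redshift; clusters with more than one
    candidate match are rejected, as in the text. Coordinates in degrees."""
    pairs = []
    for i in range(len(ra1)):
        dra = (np.asarray(ra2) - ra1[i]) * np.cos(np.radians(dec1[i]))
        sep = np.hypot(dra, np.asarray(dec2) - dec1[i]) * 60.0  # deg -> arcmin
        ok = (sep < max_sep_arcmin) & (np.abs(np.asarray(z2) - z1[i]) < max_dz)
        if ok.sum() == 1:                       # keep unambiguous matches only
            pairs.append((i, int(np.flatnonzero(ok)[0])))
    return pairs

def mass_ratio_stats(m1, m2, pairs):
    """Median and scatter of the mass ratios of the matched pairs."""
    r = np.array([m1[i] / m2[j] for i, j in pairs])
    return np.median(r), np.std(r)
```

The median ratio then sets the relative mass scale used to rescale the MCXC and redMaPPer masses before computing \mbox{$\tilde{y}_0$}.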
As with the DR5 sample, the FIRST point source catalog was searched for any sources located within 5~arcmin of the cluster centers and for each cluster $\Delta_{\tilde{y}_0} $ was calculated using Equation~\ref{equ:dy0}. Values for \mbox{$\tilde{y}_0$}\ were calculated using Equation~\ref{equ:y0} and the scaled cluster mass from the relevant survey. Upper and lower limits on \mbox{$\tilde{y}_0$}\ were calculated for each cluster assuming the 40 per cent and 21 per cent errors in the masses for the MCXC and redMaPPer surveys. Fractional differences that FIRST sources would have made to these \mbox{$\tilde{y}_0$}\ values, assuming the spectral index that was found to best match the DR5 clusters ($-0.68$), are shown in Fig.~\ref{fig:otherSamples}. For comparison, similar histograms of all DR5 clusters with FIRST data and a subset of these clusters chosen to match the redshift distributions of the other catalogs are also plotted. Weights have been applied so that the MCXC and redMaPPer surveys have a similar distribution of \mbox{$\tilde{y}_0$}\ values to DR5.
Even allowing for errors in the cluster masses and cutting all clusters below z=0.1 (where ACT is less sensitive), there is significantly more point source contamination in the non-tSZE selected samples -- 13.5 per cent of redMaPPer clusters have more than 20 per cent contamination compared to just 4.9 per cent of the redshift-adjusted DR5 sample. For the MCXC sample, the difference is similar, with 14.5 per cent of the X-ray clusters having more than 20 per cent contamination compared to only 6 per cent for a DR5 sample adjusted to match the MCXC redshift distribution. This can be explained by the fact that, on average, radio sources decrease the amplitude of the tSZE, so some clusters with strong radio contamination will be scattered out of a tSZE sample. Evidence of this happening can be seen in the intrinsic scatter of the redMaPPer clusters shown in Fig.~\ref{fig:intrisic}. Although the scatter in the DR5 and redMaPPer samples are similar at positive values (where sources over 104~arcsec from the cluster centers increase \mbox{$\tilde{y}_0$}), there are far more redMaPPer selected clusters on the negative side (due to sources close to the cluster center cancelling out the calculated tSZE signal). This larger negative tail is expected from Equation~\ref{equ:dy0} and the distribution of sources observed by MUSTANG2 (Fig.~\ref{fig:dist}). Quantitative comparisons of the scatter in each sample are limited by calibrations and systematics such as uncorrected differences in the mass and redshift distributions. However, by taking the difference between the two histograms and assuming that the scatter in \mbox{$\tilde{y}_0$}\ due to point sources of the underlying cluster population is better described by the redMaPPer sample, it can be calculated that approximately 5~per cent of DR5 clusters could be missing due to point source contamination.
This is on the higher side of previous studies, such as \citet{SPT_src}, which have put this number between 1.8 and 5.6 per cent. An observational program to compare the prevalence of sources at tSZE wavelengths in tSZE and non-tSZE selected clusters would better constrain this number.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{histogram_DR5_vs_x-ray_w_error1.pdf}
\includegraphics[width=\columnwidth]{histogram_DR5_vs_Redmapper_w_error1.pdf}
\caption{\label{fig:otherSamples} Predictions of the change in \mbox{$\tilde{y}_0$}\ due to point sources for clusters selected by other observational techniques. The dotted blue lines represent the complete samples while the solid blue lines include a cut to only include clusters above a redshift of 0.1, weighted to match the mass distribution of the DR5 catalog. The shaded blue areas represent the uncertainties in the expected true \mbox{$\tilde{y}_0$}\ values which are based on the reported cluster masses. For comparison the entire DR5 prediction is plotted (the dashed black line) and a version of the DR5 catalog scaled to match the redshift distribution of the relevant survey is shown in red. The top plot shows clusters taken from the MCXC X-ray survey while the bottom plot uses clusters from the redMaPPer catalog.}
\end{figure}
\section{Conclusions}\label{sec:conclusions}
In this paper we have presented flux density measurements, at the frequencies used in tSZE surveys, of a population of radio sources that can change the measured \mbox{$\tilde{y}_0$}\ of clusters by a significant amount (5 per cent and larger). For massive clusters with high SNR, reductions result in a lower inferred mass, but for smaller clusters they can result in non-detections. Comparisons with optical surveys indicate that undetected sources could be masking 5 per cent of clusters in the DR5 survey.
Enhancements due to sources, caused by the shape of the matched filters used to find clusters, increase the intrinsic scatter in the $Y\!\text{-}M$\ relationship -- sources almost 4~arcmin from the cluster centers can still have a 10 per cent effect on the measured \mbox{$\tilde{y}_0$} . Although these results have not been tested on other tSZE surveys, as similar data processing steps are used, the results are likely to be similar, warranting further investigation.
Because of the wide variation in spectral indices, using low-frequency radio surveys such as FIRST and NVSS to remove these sources at the much higher frequencies of 90--150~GHz is very inaccurate. As is shown in Fig.~\ref{fig:FIRST}, a spectral index change as small as 0.1 can double the predicted effect of a source on the central Compton-$y$ of a galaxy cluster. The variation in spectral indices in the radio, seen in this work and others \citep[e.g.][]{PlanckSources}, is an order of magnitude larger than this. Higher frequency measurements at tSZE wavelengths, such as those from MUSTANG2 presented in this paper, can greatly improve matters as there is no need to extrapolate source flux densities over two decades in frequency. However, the spectral index (and hence the dominant emission mechanism of the source) at the wavelengths used by cluster surveys will matter. We were able to find radio (1.4--28.5~GHz) counterparts for 80 per cent of the sources detected by MUSTANG2, and these indicated that at frequencies around 90~GHz the source population is dominated by radio sources whose spectral indices could be estimated using the 1.4~GHz data. It is worth noting that, due to emission from cold dust, the spectral index of some sources will vary with frequency around 90~GHz. For these sources, calculations made using spectral indices calculated at radio wavelengths will show a small but still significant change in the predicted difference in the central Compton-$y$. Due to the small number of sources in the sample in this paper with multiple radio/submillimeter measurements, it is not possible to make firm predictions on how common this effect is.
A larger survey of several hundred clusters at the frequencies used by tSZE surveys would be valuable. Resolutions better than 15~arcsec and, for the ACT DR5 data release, a depth of at least 0.7~mJy$\,$beam$^{-1}$ would enable all sources that could significantly bias \mbox{$\tilde{y}_0$}\ to be found. A shallower survey would still be useful as, with a large enough sample, it should be possible to extrapolate source counts. Such a survey should go out to at least 4~arcmin from the center of each cluster in order to find all sources of importance (those sources where $N(r)$ is non-negligible). This is significantly larger than the field-of-view of instruments such as ALMA. While a single frequency (e.g. 90~GHz) would be useful, follow up observations at frequencies such as 30 or 150~GHz of the sources found would make such a survey even more valuable as spectral indices within the tSZE frequency bands could be calculated. These follow-up observations could be highly targeted and would not require large maps. With a large enough sample, the distribution of spectral indices would be robust against issues such as source variability between observations. It is also worth noting that the sample presented in this paper is dominated by clusters with redshifts greater than z=0.4. The number of sources of different types evolves with redshift, for example, AGN in clusters are more common above z=0.4 \citep{Martini2009}. However, due to cosmological dimming, sources at higher redshifts will be fainter and affect the central Compton-$y$ by less than the increase in source counts would indicate. By splitting up the DR5 survey into redshift bins we showed that lower redshift clusters are, on average, more affected by sources. A large survey should include clusters across all redshifts so it can better quantify this.
The analysis in this paper does not take into account that, assuming the same rest-frame SED, redshifting will disfavor detection of synchrotron sources and favor detection of dusty sources at tSZE frequencies. With a larger survey for sources than in this paper it would be possible to look for how the source population changes with cluster mass and redshift and build a more complex model that could be used to make better use of current and future tSZE surveys. Including clusters identified by different methods (e.g. optical, X-ray, and tSZE) would also help measure any biases caused by the different selection effects of each technique. Hints of some of the differences can be seen in the different amounts of point source contamination found when comparing MCXC and redMaPPer clusters with the DR5 sample. Comparisons of the source populations within clusters selected by different methods would also help quantify the number of clusters missed by tSZE surveys due to contamination. Better knowledge of the statistics of the point source population would feed back into the process of translating measured Compton-$y$ to cluster mass, not just for specific clusters in the ACT survey discussed in this paper but for other experiments underway and in the future.
The simulations presented in Section~\ref{sec:ACT} used only 90~GHz and 150~GHz data. ACT is currently also observing at 30 and 40~GHz, and in upcoming data releases the 220~GHz data will have lower noise too. Future experiments such as the Simons Observatory will also have lower and higher frequency information. This opens up the possibility of using our knowledge of the point source population's typical spectra, number counts, distances from the cluster centers, and evolution with redshift to better detect (and possibly correct for) clusters with significant point source contamination by looking for discrepancies in the measured Compton-$y$ between frequency channels. Simulations similar to those presented in Section~\ref{sec:sims} of this paper would be an important part of this analysis. Due to the distribution of spectral indices of sources and possible source variability with time, the identification of all clusters in a tSZE survey with high levels of contamination is not possible using low frequency surveys such as FIRST.
\section*{Acknowledgements}
MUSTANG2 is supported by the NSF award number 1615604 and by the Mt.\ Cuba Astronomical Foundation.
This material is based upon work supported by the Green Bank Observatory.
GBT data were acquired under the project IDs AGBT17A\_340, AGBT17A\_358, AGBT17B\_101, AGBT17B\_266, AGBT17B\_334, AGBT18B\_215, AGBT19B\_200, and AGBT20A\_290.
The Green Bank Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
The ACT project is supported by the U.S. National Science Foundation through awards AST-0408698, AST-0965625, and AST-1440226, as well as awards PHY-0355328, PHY-0855887 and PHY-1214379.
Funding was also provided by Princeton University, the University of Pennsylvania, and a Canada Foundation for Innovation (CFI) award to UBC.
ACT operates in the Parque Astron\'{o}mico Atacama in northern Chile under the auspices of the Agencia Nacional de Investigaci\'{o}n y Desarrollo (ANID; formerly Comisi\'{o}n Nacional de Investigaci\'{o}n Cient\'{i}fica y Tecnol\'{o}gica de Chile, or CONICYT).
The development of multichroic detectors and lenses was supported by NASA grants NNX13AE56G and NNX14AB58G.
Detector research at NIST was supported by the NIST Innovations in Measurement Science program. Computations were performed on Cori at NERSC as part of the CMB Community allocation, on the Niagara supercomputer at the SciNet HPC Consortium, and on Feynman and Tiger at Princeton Research Computing, and on the hippo cluster at the University of KwaZulu-Natal. SciNet is funded by the CFI under the auspices of Compute Canada, the Government of Ontario, the Ontario Research Fund--Research Excellence, and the University of Toronto.
Colleagues at AstroNorte and RadioSky provide logistical support and keep operations in Chile running smoothly. We also thank the Mishrahi Fund and the Wilkinson Fund for their generous support of the project.
JPH acknowledges funding for SZ cluster studies from NSF grant number AST-1615657, NS acknowledges support from NSF grant number AST-1907657, and ADH acknowledges support from the Sutton Family Chair in Science, Christianity and Cultures.
\section*{Data Availability}
The ACT DR5 cluster catalog used in this paper is available on the NASA Legacy Archive Microwave Background Data Analysis (LAMBDA) website (\url{https://lambda.gsfc.nasa.gov}). MUSTANG-2 maps of individual clusters in this paper are on the Harvard Dataverse (\url{https://dataverse.harvard.edu/}). The redMaPPer DR8 catalog can be downloaded from \url{http://risa.stanford.edu/redmapper/} while the MCXC catalog can be found at \url{https://heasarc.gsfc.nasa.gov/W3Browse/rosat/mcxc.html}.
\bibliographystyle{mnras}
An interview is 'a conversation with a purpose', conducted in person or over the phone.
Interviews enable you to gather detailed information about people's attitudes, motivations, beliefs, and perspectives. They also allow you to probe beyond the findings of a survey to explore particular issues in greater depth or seek explanation for any unexpected answers.
Here we take you through the different types of interviews, and how to conduct interviews.
This will depend on why you are conducting the interviews. If you are trying to get a broad sense of the issues encountered when using a service, it may be appropriate to interview a representative sample of users. If you are trying to understand the barriers to using a service, you may wish to increase representation of individuals with particular needs.
You may want to ask everyone the same specific questions (structured) or explore issues more informally (unstructured). The two approaches can also be combined (semi-structured).
Interviewers are ideally skilled at interviewing, and knowledgeable about the field and people they will be interviewing. They must be active listeners and good note-takers. There are advantages and disadvantages to using an interviewer who is familiar to the interviewee; some users may prefer communicating with a person they know, while others may wish to speak to someone who is neutral. It can be helpful to get training for interviewers where possible.
A topic guide should outline the questions the interviewer needs to ask and provide instructions, for example on how to capture feedback.
Keep your questions simple, focused, and easy to understand. Use non-technical language, and keep sentences short. Avoid words that are open to interpretation; for example, use 'daily' or 'weekly' rather than 'often' or 'usually'.
For closed questions, avoid leading questions. These are questions that prompt or encourage a specific answer; for example, 'How satisfied are you with the service?'.
For open questions, try to encourage full responses. If the participant's answer is short, the interviewer can reply with 'Can you tell me more about that?' or leave silence for them to elaborate.
Ask one thing at a time. For example, split 'Did you find the session helpful and interesting?' into two questions, because "helpful" and "interesting" are not the same thing.
Focus on the objectives of your interviews. It can be tempting to take advantage of the opportunity to gather information that is peripheral to your immediate objective. For example, you may want to ask about other aspects of your service, test interest in an event or project, or gauge opinion on a particular issue. This will only make your interview longer and less appealing to participants.
Confirm the time, how you will conduct your interview (in person, over the phone or online), and the location, where necessary, and share a brief overview of the topics to be discussed. Explain the purpose of the interview and what will be done with the information.
Interviews can be carried out by a trained member of staff, but it is better to commission an external evaluator or use trained volunteers. Respondents will be more likely to give honest answers.
You could take notes or record the conversation. Remember that you will need permission to do both of these things before conducting the interview, and there will be data protection implications for the information you collect.
Remain neutral, but show empathy and respect. Make eye contact and be aware of your body language. Try to build rapport and break down barriers between you and the interviewee.
Make notes as you go, also capturing non-verbal communication, and seek clarification or probe further if needed.
Signal that the interview will be ending soon to help you and the interviewee wind down and give the interviewee an opportunity to add anything that may have been missed.
Clarify key points with the interviewee to check you have correctly interpreted and accurately recorded what they said.
Let the interviewee know how and when you propose to share the findings of the interview.
Accept responsibility for taking forward any issues raised. Interviewees may request help on a particular point. You may be able to prepare for this by bringing information about avenues of assistance with you to the interview. | {
"redpajama_set_name": "RedPajamaC4"
} | 5,493 |
Designed with durable, strong fabric, the Men's Tabre Insulated Parka blends timeless style with contemporary performance. The Shieldtex fabric protects you from wind and rain, while the 250 grams of Proloft Insulation provides the right amount of warmth to keep you outside for hours. This jacket is feature-rich, with high-quality YKK zippers throughout, a two-way front zipper for flexible comfort, and an adjustable hood, waist and bottom hem to keep wind out. Remove the fur trim on the hood for a traditional look. The Tabre is loaded with pockets inside and out to keep your valuables safe while you explore the outdoors, or head into the city.
"redpajama_set_name": "RedPajamaC4"
} | 6,181 |
{"url":"https:\/\/www.groundai.com\/project\/prospects-for-charged-higgs-searches-at-the-lhc\/","text":"Prospects for charged Higgs searches at the LHC\n\n# Prospects for charged Higgs searches at the LHC\n\n###### Abstract\n\nThe goal of this report is to summarize the current situation and discuss possible search strategies for charged scalars, in non-supersymmetric extensions of the Standard Model at the LHC. Such scalars appear in Multi-Higgs-Doublet models (MHDM), in particular in the popular Two-Higgs-Doublet model (2HDM), allowing for charged and additional neutral Higgs bosons. These models have the attractive property that electroweak precision observables are automatically in agreement with the Standard Model at the tree level. For the most popular version of this framework, Model\u00a0II, a discovery of a charged Higgs boson remains challenging, since the parameter space is becoming very constrained, and the QCD background is very high. We also briefly comment on models with dark matter which constrain the corresponding charged scalars that occur in these models. 
The stakes of a possible discovery of an extended scalar sector are very high, and these searches should be pursued in all conceivable channels, at the LHC and at future colliders.

A.G. Akeroyd, M. Aoki, A. Arhrib, L. Basso, I.F. Ginzburg, R. Guedes, J. Hernandez-Sanchez, K. Huitu, T. Hurth, M. Kadastik, S. Kanemura, K. Kannike, W. Khater, M. Krawczyk*, F. Mahmoudi, S. Moretti, S. Najjari, P. Osland*, G.M. Pruna, M. Purmohammadi, A. Racioppi, M. Raidal, R. Santos, P. Sharma, D. Sokołowska, O. Stål, K. Yagyu, E. Yildirim

*Corresponding authors: Maria.Krawczyk@fuw.edu.pl, Per.Osland@uib.no

School of Physics and Astronomy, University of Southampton, Highfield, Southampton SO17 1BJ, United Kingdom,
Institute for Theoretical Physics, Kanazawa University, Kanazawa 920-1192, Japan,
Département de Mathématique, Faculté des Sciences et Techniques, Université Abdelmalek Essaâdi, B. 416, Tangier, Morocco,
LPHEA, Faculté des Sciences-Semlalia, B.P. 2390 Marrakesh, Morocco,
CPPM, Aix-Marseille Université, CNRS-IN2P3, UMR 7346, 163 avenue de Luminy, 13288 Marseille Cedex 9, France,
Sobolev Inst. of Mathematics SB RAS and Novosibirsk University, 630090 Novosibirsk, Russia,
IHC, Instituto de História Contemporanea, FCSH - New University of Lisbon, Portugal,
Facultad de Ciencias de la Electrónica, Benemérita Universidad Autónoma de Puebla, Apdo. Postal 542, C.P. 72570 Puebla, Puebla, México, and Dual C-P Institute of High Energy Physics, México,
Department of Physics, and Helsinki Institute of Physics, P.O.Box 64 (Gustaf Hällströmin katu 2), FIN-00014 University of Helsinki, Finland,
PRISMA Cluster of Excellence and Institute for Physics (THEP), Johannes Gutenberg University, D-55099 Mainz, Germany,
National Institute of Chemical Physics and Biophysics, Rävala 10, 10143 Tallinn, Estonia,
Department of Physics, University of Toyama, 3190 Gofuku, Toyama 930-8555, Japan,
Department of Physics, Birzeit University, Palestine,
Faculty of Physics, University of Warsaw, Pasteura 5, 02-093 Warsaw, Poland,
Univ Lyon, Univ Lyon 1, ENS de Lyon, CNRS, Centre de Recherche Astrophysique de Lyon UMR5574, F-69230 Saint-Genis-Laval, France,
Theoretical Physics Department, CERN, CH-1211 Geneva 23, Switzerland,
Department of Physics and Technology, University of Bergen, Postboks 7803, N-5020 Bergen, Norway,
Paul Scherrer Institute, CH-5232 Villigen PSI, Switzerland,
Centro de Física Teórica e Computacional, Faculdade de Ciências, Universidade de Lisboa, Campo Grande, Edifício C8 1749-016 Lisboa, Portugal,
Instituto Superior de Engenharia de Lisboa - ISEL, 1959-007 Lisboa, Portugal,
Center of Excellence in Particle Physics (CoEPP), The University of Adelaide, South Australia,
The Oskar Klein Centre, Department of Physics, Stockholm University, SE-106 91 Stockholm, Sweden

## 1 Introduction

In the summer of 2012 an SM-like Higgs particle ($h$) was found at the LHC [Aad:2012tfa, Chatrchyan:2012ufa]. As of today its properties agree with the SM predictions at the 20% level [Khachatryan:2014jba, Aad:2015ona]. Its mass derived from the $\gamma\gamma$ and $ZZ\to 4\ell$ channels is $125.09\pm0.24$ GeV [Aad:2015zhl]. However, the SM-like limit exists in various models with extra neutral Higgs scalars.
A charged Higgs boson ($H^\pm$) would be the most striking signal of an extended Higgs sector, for example with more than one Higgs doublet. Such a discovery at the LHC is a distinct possibility, with or without supersymmetry. However, a charged Higgs particle might be rather hard to find, even if it is abundantly produced.

We here survey existing results on charged scalar phenomenology, and discuss possible strategies for further searches at the LHC. Such scalars appear in Multi-Higgs-Doublet models (MHDM), in particular in the popular Two-Higgs-Doublet model (2HDM) [Gunion:1989we, Branco:2011iw], allowing for charged and more neutral Higgs bosons. We focus on these models, since they have the attractive property that electroweak precision observables are automatically in agreement with the Standard Model at the tree level, in particular $\rho=1$ [Ross:1975fq, Veltman:1976rt, Veltman:1977kh].

The production rate and the decay pattern would depend on details of the theoretical model [Gunion:1989we], especially the Yukawa interaction. It is useful to distinguish two cases, depending on whether the mass of the charged scalar ($M_{H^\pm}$) is below or above the top mass. Since an extended Higgs sector naturally leads to Flavor-Changing Neutral Currents (FCNC), these would have to be suppressed [Glashow:1976nt, Paige:1977nz]. This is normally achieved by imposing discrete symmetries in modeling the Yukawa interactions. For example, in the 2HDM with Model II Yukawa interactions a $Z_2$ symmetry under a sign change of one of the doublets (accompanied by a corresponding transformation of the right-handed fermion fields) is assumed. In this case, the $\bar B\to X_s\gamma$ data constrain the mass of $H^\pm$ to be above approximately 480 GeV [Misiak:2015xwa]. A recent study concludes that this limit is even higher, in the range 570–800 GeV [Misiak:2017bgg]. Our results can easily be re-interpreted for this new limit.
Alternatively, if all fermion masses are generated by only one doublet ($\Phi_2$, Model I) there is no enhancement in the Yukawa coupling of $H^\pm$ with down-type quarks and the allowed mass range is less constrained. The same is true for the Model X (also called Model IV or lepton-specific 2HDM) [Akeroyd:1994ga, Logan:2009uf], where the second doublet is responsible for the mass of all quarks, while the first doublet deals with leptons. Charged Higgs masses below about 80 GeV have been excluded at LEP [Abbiendi:2013hk]. Low and high values of $\tan\beta$ are excluded by various theoretical and experimental model-dependent constraints.

An extension of the scalar sector also offers an opportunity to introduce additional CP violation [Lee:1973iz], which may facilitate baryogenesis [Riotto:1999yt].

Charged scalars may also appear in models explaining dark matter (DM). These are charged scalars not involved in the spontaneous symmetry breaking. Such charged particles will typically be members of an "inert" or "dark" sector, the lightest neutral member of which is the DM particle. In these scenarios a $Z_2$ symmetry will make the scalar DM stable and forbid any charged-scalar Yukawa coupling. Consequently, the phenomenology of the charged component of a $Z_2$-odd doublet is rather different from the one in usual 2HDM models. In particular, it may become long-lived and induce observable displaced vertices in its leptonic decays. This is a background-free experimental signature and would allow one to discover the inert charged scalar at the LHC.

The SM-like scenario (also referred to as the "alignment limit") observed at the LHC corresponds to the case when the relative couplings of the 125 GeV Higgs particle to the electroweak gauge bosons with respect to the ones in the SM are close to unity. We will assume that this applies to the lightest neutral, mainly CP-even Higgs particle, denoted $H_1$.
Still there are two distinct options possible: with and without decoupling of other scalars in the model. In the case of decoupling, very high masses of other Higgs particles (both neutral and charged) arise from the soft breaking term in the potential without any conflict with unitarity.

The focus of this paper will be the softly $Z_2$-broken 2HDM, but we will also briefly discuss models with more doublets. In such models, one pair of charged Higgs-like scalars would occur for each additional doublet. We also briefly describe scalar dark matter models.

This work arose as a continuation of activities around the workshops "Prospects for Charged Higgs Discovery at Colliders", taking place every two years in Uppsala. The paper is organized as follows. In sections 2-4 we review the basic theoretical framework. Then, in section 5 we review charged Higgs decays, and in section 6 we review charged-Higgs production at the LHC. Section 7 is devoted to an overview of different experimental constraints. Proposed search channels for the 2HDM are presented in section 8, whereas in sections 9 and 10 we discuss models with several doublets, and models with dark matter, respectively. Section 11 contains a brief summary. Technical details are collected in appendices.

## 2 Potential and states

The general 2HDM potential allows for various vacua, including CP violating, charge breaking and inert ones, leading to distinct phenomenologies. Here we consider the case when both doublets have non-zero vacuum expectation values.
CP violation, explicit or spontaneous, is possible in this case.

### 2.1 The potential

We limit ourselves to studying the softly $Z_2$-violating 2HDM potential, which reads

$$V(\Phi_1,\Phi_2) = -\tfrac{1}{2}\left\{m_{11}^2\,\Phi_1^\dagger\Phi_1 + m_{22}^2\,\Phi_2^\dagger\Phi_2 + \left[m_{12}^2\,\Phi_1^\dagger\Phi_2 + \text{h.c.}\right]\right\} + \tfrac{\lambda_1}{2}(\Phi_1^\dagger\Phi_1)^2 + \tfrac{\lambda_2}{2}(\Phi_2^\dagger\Phi_2)^2 + \lambda_3(\Phi_1^\dagger\Phi_1)(\Phi_2^\dagger\Phi_2) + \lambda_4(\Phi_1^\dagger\Phi_2)(\Phi_2^\dagger\Phi_1) + \tfrac{1}{2}\left[\lambda_5(\Phi_1^\dagger\Phi_2)^2 + \text{h.c.}\right]. \quad (2.1)$$

Apart from the term $m_{12}^2$, this potential exhibits a $Z_2$ symmetry,

$$(\Phi_1,\Phi_2)\leftrightarrow(\Phi_1,-\Phi_2)\quad\text{or}\quad(\Phi_1,\Phi_2)\leftrightarrow(-\Phi_1,\Phi_2). \quad (2.2)$$

The most general potential contains in addition two more quartic terms, with coefficients $\lambda_6$ and $\lambda_7$, and violates the $Z_2$ symmetry in a hard way [Gunion:1989we]. The parameters $m_{11}^2$, $m_{22}^2$ and $\lambda_1,\ldots,\lambda_4$ are real, whereas $m_{12}^2$ and $\lambda_5$ may be complex. There are various bases in which this potential can be written, often they are defined by fixing properties of the vacuum state. The potential (2.1) can lead to CP violation, provided $\operatorname{Im}\left[(m_{12}^2)^2\lambda_5^*\right]\neq0$.

### 2.2 Mass eigenstates

We use the following decomposition of the doublets (see Appendix A):

$$\Phi_1=\begin{pmatrix}\varphi_1^+\\ (v_1+\eta_1+i\chi_1)/\sqrt{2}\end{pmatrix},\qquad \Phi_2=\begin{pmatrix}\varphi_2^+\\ (v_2+\eta_2+i\chi_2)/\sqrt{2}\end{pmatrix}, \quad (2.3)$$

which corresponds to a basis where both have a non-zero, real and positive, vacuum expectation value (vev). Here $v_1=v\cos\beta$, $v_2=v\sin\beta$, $\tan\beta=v_2/v_1$, with $v\simeq246$ GeV.

We adopt the mixing matrix $R$, between the scalar fields and mass eigenstates (for the CP-conserving case CP-even $h$, $H$ and CP-odd $A$, respectively) defined by

$$\begin{pmatrix}H_1\\H_2\\H_3\end{pmatrix}=R\begin{pmatrix}\eta_1\\\eta_2\\\eta_3\end{pmatrix}, \quad (2.4)$$

satisfying

$$R\,\mathcal{M}^2R^T=\mathcal{M}^2_{\rm diag}=\operatorname{diag}(M_1^2,M_2^2,M_3^2),\qquad M_1\le M_2\le M_3. \quad (2.5)$$

The rotation matrix is parametrized in terms of three rotation angles $\alpha_i$ as [Accomando:2006ga]

$$R=\begin{pmatrix} c_1c_2 & s_1c_2 & s_2\\ -(c_1s_2s_3+s_1c_3) & c_1c_3-s_1s_2s_3 & c_2s_3\\ -c_1s_2c_3+s_1s_3 & -(c_1s_3+s_1s_2c_3) & c_2c_3 \end{pmatrix} \quad (2.6)$$

with $c_i=\cos\alpha_i$ and $s_i=\sin\alpha_i$.
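As a cross-check of the three-angle parametrization (2.6), the matrix can be built numerically and verified to be orthogonal with unit determinant, as any mixing matrix of three neutral states must be. A short Python sketch (our illustration, not code from the paper):

```python
import numpy as np

def rotation_matrix(a1, a2, a3):
    """Neutral-sector mixing matrix of Eq. (2.6), with c_i = cos(alpha_i)."""
    c1, c2, c3 = np.cos([a1, a2, a3])
    s1, s2, s3 = np.sin([a1, a2, a3])
    return np.array([
        [c1 * c2,                    s1 * c2,                    s2],
        [-(c1 * s2 * s3 + s1 * c3),  c1 * c3 - s1 * s2 * s3,     c2 * s3],
        [-c1 * s2 * c3 + s1 * s3,    -(c1 * s3 + s1 * s2 * c3),  c2 * c3],
    ])

R = rotation_matrix(0.3, -0.2, 0.7)  # arbitrary test angles
```

For any angles, R Rᵀ = 1 and det R = +1, so the sum rules quoted below for the couplings built from rows of R follow automatically.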
In Eq. (2.4), $\eta_3$ is the combination of the $\chi_i$'s which is orthogonal to the neutral Nambu–Goldstone boson. In terms of these angles, the limits of CP conservation correspond to [ElKaffas:2007rq]

$H_1$ odd ($H_1\equiv A$): $\alpha_2=\pm\pi/2$,
$H_2$ odd ($H_2\equiv A$): $\alpha_2=0$, $\alpha_3=\pm\pi/2$,
$H_3$ odd ($H_3\equiv A$): $\alpha_2=0$, $\alpha_3=0$.   (2.7)

The charged Higgs bosons are the combination orthogonal to the charged Nambu–Goldstone bosons, $H^\pm=-\sin\beta\,\varphi_1^\pm+\cos\beta\,\varphi_2^\pm$, and their mass is given by

$$M_{H^\pm}^2=\mu^2-\frac{v^2}{2}(\lambda_4+\operatorname{Re}\lambda_5), \quad (2.8)$$

where we define a mass parameter $\mu$ by

$$\mu^2\equiv\frac{v^2}{2v_1v_2}\operatorname{Re}m_{12}^2. \quad (2.9)$$

Note also the following relation arising from the extremum condition:

$$\operatorname{Im}m_{12}^2=\operatorname{Im}\lambda_5\,v_1v_2. \quad (2.10)$$

### 2.3 Gauge couplings

With all momenta incoming, we have the gauge couplings [ElKaffas:2006nt]:

$$H^\mp W^\pm H_j:\quad \frac{g}{2}\left[\pm i(\sin\beta\,R_{j1}-\cos\beta\,R_{j2})+R_{j3}\right](p_j^\mu-p_\mp^\mu). \quad (2.11)$$

Specifically, for coupling to the lightest neutral Higgs boson, the $R$-matrix (2.6) gives:

$$H^\mp W^\pm H_1:\quad \frac{g}{2}\left[\pm i\cos\alpha_2\sin(\beta-\alpha_1)+\sin\alpha_2\right](p_1^\mu-p_\mp^\mu). \quad (2.12)$$
They differ from the SM coupling by the factor ():\n\n VVHj:cos\u03b2Rj1+sin\u03b2Rj2. (2.14)\n\nIn particular, for , this factor becomes . In the CP-conserving case, we have\n\n VVh: sin(\u03b2\u2212\u03b1), VVH: cos(\u03b2\u2212\u03b1), VVA: 0. (2.15)\n\nNote that the couplings (2.11) and (2.14) are given by unitary matrices, and hence satisfy sum rules. Furthermore, for any , the relative couplings of (2.11) (the expression in the square brackets) and (2.14) satisfy the following relation [Ginzburg:2014pra]:\n\n |(???)|2+[(???)]2=1. (2.16)\n\nThese relations are valid for both the CP-conserving and the CP-violating cases.\n\n## 3 Theoretical constraints\n\nThe 2HDM is subject to various theoretical constraints. First, it has to have a stable vacuum222Here we perform an analysis at the tree level, for more advanced studies, see [Nie:1998yn, Ferreira:2004yd, Goudelis:2013uca, Swiezewska:2015paa, Khan:2015ipa]., what leads to so-called positivity constraints for the potential [Deshpande:1977rw, Nie:1998yn, Kanemura:1999xf], as . Second, we should be sure to deal with a particular vacuum (a global minimum) as in some cases various minima can coexist [Barroso:2013awa, Ginzburg:2010wa, Swiezewska:2012ej].\n\nOther types of constraints arise from requiring perturbativity of the calculations, tree-level unitarity [Kanemura:1993hm, Akeroyd:2000wc, Arhrib:2000is, Ginzburg:2003fe, Ginzburg:2005dt] and perturbativity of the Yukawa couplings. In general, imposing tree-level unitarity has a significant effect at high values of and , by excluding such values. These constraints limit the absolute values of the parameters as well as , the latter both at very low and very high values. This limit is particularly strong for a symmetric model [WahabElKaffas:2007xd, Gorczyca:2011he, Swiezewska:2012ej]. 
The dominant one-loop corrections to the perturbative unitarity constraints for the model with softly-broken symmetry are also available [Grinstein:2015rtl].\n\nThe electroweak precision data, parametrized in terms of and [Kennedy:1988sn, Peskin:1990zt, Altarelli:1990zd, Peskin:1991sw, Altarelli:1991fk, Grimus:2007if, Grimus:2008nb], also provide important constraints on these models.\n\n## 4 Yukawa Interaction\n\nThere are various models of Yukawa interactions, all of them, except Model\u00a0III, lead to suppression of FCNCs at the tree level, assuming some vanishing Yukawa matrices. The most popular is Model\u00a0II, in which up-type quarks couple to one (our choice: ) while down-type quarks and charged leptons couple to the other scalar doublet (). They are presented schematically in Table\u00a01. For a self-contained description of the 2HDM Yukawa sector, see Appendix\u00a0B.333The absence of tree-level FCNC interactions can also be obtained by imposing flavor space alignment of the Yukawa couplings of the two scalar doublets [Jung:2010ik].\n\nFor Model\u00a0II, and the third generation, the neutral-sector Yukawa couplings are:\n\n Hjb\u00afb: \u2212igmb2mW1cos\u03b2[Rj1\u2212i\u03b35sin\u03b2Rj3], Hjt\u00aft: \u2212igmt2mW1sin\u03b2[Rj2\u2212i\u03b35cos\u03b2Rj3]. (4.1)\n\nExplicitly, for the charged Higgs bosons in Model\u00a0II, we have for the coupling to the third generation of quarks [Gunion:1989we]\n\n H+b\u00aft: ig2\u221a2mWVtb[mb(1+\u03b35)tan\u03b2+mt(1\u2212\u03b35)cot\u03b2], H\u2212t\u00afb: ig2\u221a2mWV\u2217tb[mb(1\u2212\u03b35)tan\u03b2+mt(1+\u03b35)cot\u03b2], (4.2)\n\nwhere is the appropriate element of the CKM matrix. For other Yukawa models the factors and will be substituted according to Table\u00a06 in Appendix\u00a0B.\n\nAs mentioned above, the range in (or ) is , which can be taken as , or . This is different from the MSSM, where only a range of is required [Gunion:1986nh], . 
The spontaneous breaking of the symmetry and the convention of having a positive value for means that the sign (phase) of the field is relevant. This doubling of the range in the 2HDM as compared with the MSSM is the origin of \u201cwrong-sign\u201d Yukawa couplings.\n\n## 5 Charged Higgs boson decays\n\nThis section presents an overview of the different decay modes, illustrated with branching ratio plots for parameter sets that are chosen to exhibit the most interesting features. Branching ratios required for modes considered in sections\u00a0810 are calculated independently.\n\nAs discussed in [Gunion:1989we, Moretti:1994ds, Djouadi:1995gv, Djouadi:1997yw, Kanemura:2009mk, Eriksson:2009ws], a charged Higgs boson can decay to a fermion-antifermion pair,\n\n H+ \u2192c\u00afs, (5.1a) H+ \u2192c\u00afb, (5.1b) H+ \u2192\u03c4+\u03bd\u03c4, (5.1c) H+ \u2192t\u00afb, (5.1d)\n\n(note that (5.1b) refers to a mixed-generation final state), to gauge bosons,\n\n H+ \u2192W+\u03b3, (5.2a) H+ \u2192W+Z, (5.2b)\n\nor to a neutral Higgs boson and a gauge boson:\n\n H+\u2192HjW+, (5.3)\n\nand their charge conjugates.\n\nBelow, we consider branching ratios mainly for the CP-conserving case. For the lightest neutral scalar we take the mass . Neither experimental nor theoretical constraints are here imposed. (They have significant impacts, as will be discussed in subsequent sections.) For the calculation of branching ratios, we use the software 2HDMC [Eriksson:2009ws] and HDECAY [Djouadi:1997yw, Harlander:2013qxa]. As discussed in [Harlander:2013qxa], branching ratios are calculated at leading order in the 2HDM parameters, but include QCD corrections according to [Mendez:1990jr, Li:1990ag, Djouadi:1994gf], and three-body modes via off-shell extensions of , , and . The treatment of three-body decays is according to Ref.\u00a0[Djouadi:1995gv].\n\nFor light charged Higgs bosons, , Model\u00a0II is excluded by the constraint discussed in section\u00a07. 
For Model\u00a0I (which in this region is not excluded by ), the open channels have fermionic couplings proportional to . The gauge couplings (involving decays to a and a neutral Higgs) are proportional to or , whereas the corresponding Yukawa couplings depend on the masses involved, together with .\n\nThe CP-violating case for the special channel is presented in section 5.4.\n\n### 5.1 Branching ratios vs tan\u03b2\n\nBelow, we consider branching ratios, assuming for simplicity , in the low and high mass regions.\n\n#### 5.1.1 Light H+ (MH\u00b1<mt)\n\nFor a light charged Higgs boson, such as might be produced in top decay, the and channels would be closed, and the and channels would dominate. The relevant Yukawa couplings are given by and the fermion masses involved. With scalar masses taken as follows:\n\n MH\u00b1=MA=100\u00a0GeV,MH=150\u00a0GeV, (5.4)\n\nwe show in Fig.\u00a01 branching ratios for the different Yukawa models.\n\nSince the and couplings for Model\u00a0I are the same, the branching ratios are independent of , as seen in the left panel. For Models\u00a0X and II the couplings to and have different dependences on , and consequently the branching ratios will depend on . In the case of Model\u00a0Y, the channel is for controlled by the term , which dominates over the channel at high .\n\n#### 5.1.2 Heavy H+ (MH\u00b1>mt)\n\nBelow, we consider separately the two cases where one more neutral scalar is light, besides , this being either or . For a case where both the channels and are open, whereas is not, exemplified by the masses\n\n MH\u00b1=MA=500\u00a0GeV,MH=130\u00a0GeV, (5.5)\n\nwe show in Fig.\u00a02 branching ratios for the different Yukawa models. Two values of are considered, 1 and 0.7. 
For comparison with section\u00a05.2, we have drawn dashed lines at , 3 and 30.\n\nFor Model\u00a0I (left part of Fig.\u00a02), the dominant decay rates are to the heaviest fermion-antifermion pair and to together with or (for the considered parameters, both and are kinematically available). Model\u00a0X differs in having an enhanced coupling to tau leptons at high , see Table\u00a06 in Appendix\u00a0B. If the decay to is kinematically not accessible, the mode may be accessible at high .\n\nFor Model\u00a0II (right part of Fig.\u00a02), the dominant decay rates are to the heaviest fermion-antifermion pair at low and high values of , with or dominating at medium (if kinematically available). At high it is the down-type quark that has the dominant coupling. Hence, modulo phase space effects, the rate is only suppressed by the mass ratio . Model\u00a0Y differs from Model\u00a0II in not having enhanced coupling to the tau at high values of .\n\nWhereas the couplings and hence the decay rates to and , for fixed values of , are independent of , the branching ratios are not. They will depend on the strengths of the competing Yukawa couplings. The strength of the channel increases with , and is therefore absent in the upper panels where .\n\nIt should also be noted that if the channel is not kinematically available, the channel would dominate for all values of . The channel, which may offer less background for experimental searches, is only relevant at higher , and then only in Models\u00a0II and X.\n\nWhen is light, such that the channels and are both open, whereas is not, the situation is similar to the previous case, with the mode replaced by the mode. The choice turns off the mode (see Eq.\u00a0(2.13)), and there is a competition among the and the modes, except for the region of high , where also the mode can be relevant.\n\n### 5.2 Branching ratios vs MH\u00b1\n\nIn Figs.\u00a034 we show how the branching ratios change with the charged Higgs mass. 
Here, we have taken (Fig.\u00a03), 3 and 30 (Fig.\u00a04), together with the neutral-sector masses\n\n (MH,MA)=(130\u00a0GeV,MH\u00b1), (5.6)\n\n(note that here we take ) and consider the two values and 0.7, corresponding to different strengths of the gauge couplings (2.13).\n\nThe picture from Figs.\u00a01 and 2 is confirmed: At low masses, the channel dominates, whereas at higher masses, the channel will compete against and , if these channels are kinematically open, and not suppressed by some particular values of the mixing angles.\n\nOf course, for (Fig.\u00a03), all four Yukawa models give the same result. Qualitatively, the result is simple. At low masses, the and channels dominate, whereas above the threshold, the channel dominates. There is however some competition with the and channels. Similar results hold for , the only difference being that the branching ratio rises faster with mass, and the mode disappears completely in this limit. Even below the threshold, branching ratios for three-body decays via an off-shell can be significant [Djouadi:1995gv]. The strength of the channel is proportional to , and is therefore absent for (not shown).\n\nAt higher values of (Fig.\u00a04), the interplay with the and channels becomes more complicated. At high charged-Higgs masses, the rate can be important (if kinematically open). On the other hand, the channel can dominate over , because of the larger phase space. Here, we present the case of . The case of is similar, the main difference is a higher branching ratio, while the channel disappears. It should be noted that three-body channels that proceed via and can be important also below threshold, if the channel is closed.\n\n### 5.3 Top decay to H+b\n\nA light charged Higgs boson may emerge in the decay of the top quark\n\n t\u2192H+b, (5.7)\n\nfollowed by a model-dependent decay. In Model\u00a0I possible channels are and , as shown in Fig.\u00a01. 
Here, we have taken $\tan\beta=1$ (Fig. 3), 3 and 30 (Fig. 4), together with the neutral-sector masses

$$(M_H,M_A)=(130\text{ GeV},M_{H^\pm}), \quad (5.6)$$

(note that here we take $M_A=M_{H^\pm}$) and consider the two values $\sin(\beta-\alpha)=1$ and 0.7, corresponding to different strengths of the gauge couplings (2.13).

The picture from Figs. 1 and 2 is confirmed: At low masses, the $\tau^+\nu_\tau$ channel dominates, whereas at higher masses, the $t\bar b$ channel will compete against $HW^+$ and $hW^+$, if these channels are kinematically open, and not suppressed by some particular values of the mixing angles.

Of course, for $\tan\beta=1$ (Fig. 3), all four Yukawa models give the same result. Qualitatively, the result is simple. At low masses, the $\tau^+\nu_\tau$ and $c\bar s$ channels dominate, whereas above the $t\bar b$ threshold, the $t\bar b$ channel dominates. There is however some competition with the $HW^+$ and $hW^+$ channels. Similar results hold for $\sin(\beta-\alpha)=1$, the only difference being that the $HW^+$ branching ratio rises faster with mass, and the $hW^+$ mode disappears completely in this limit. Even below the $t\bar b$ threshold, branching ratios for three-body decays via an off-shell $t$ can be significant [Djouadi:1995gv]. The strength of the $hW^+$ channel is proportional to $\cos(\beta-\alpha)$, and is therefore absent for $\sin(\beta-\alpha)=1$ (not shown).

At higher values of $\tan\beta$ (Fig. 4), the interplay with the $HW^+$ and $hW^+$ channels becomes more complicated. At high charged-Higgs masses, the $hW^+$ rate can be important (if kinematically open). On the other hand, the $HW^+$ channel can dominate over $t\bar b$, because of the larger phase space. Here, we present the case of $\sin(\beta-\alpha)=0.7$. The case of $\sin(\beta-\alpha)=1$ is similar, the main difference is a higher $HW^+$ branching ratio, while the $hW^+$ channel disappears. It should be noted that three-body channels that proceed via off-shell $HW^+$ and $hW^+$ can be important also below threshold, if the $t\bar b$ channel is closed.

### 5.3 Top decay to $H^+b$

A light charged Higgs boson may emerge in the decay of the top quark

$$t\to H^+b, \quad (5.7)$$

followed by a model-dependent decay. In Model I possible channels are $\tau^+\nu_\tau$ and $c\bar s$, as shown in Fig. 1. For the former case, the product $\mathrm{BR}(t\to H^+b)\times\mathrm{BR}(H^+\to\tau^+\nu_\tau)$ is shown in Fig. 5 for three values of $M_{H^\pm}$. Note that recent LHC data have already excluded a substantial region of the low-$\tan\beta$ and low-$M_{H^\pm}$ parameter region in Model I, see section 7.2.3.

### 5.4 The $H^+\to H_1W^+$ partial width

In this section we consider the decay mode $H^+\to H_1W^+$, allowing for the possibility that the lightest Higgs boson, $H_1$, is not an eigenstate of CP.

The $H^\mp W^\pm H_1$ coupling is given by Eq. (2.12). The partial width, relative to its maximum value, is given by the quantity

$$\cos^2\alpha_2\,\sin^2(\beta-\alpha_1)+\sin^2\alpha_2, \quad (5.8)$$

which is shown in Fig. 6. We note that there is no dependence on the mixing angle $\alpha_3$. If $\alpha_2=\pm\pi/2$ or $\alpha_2=0$, then CP is conserved along the axis with $\alpha_3=0$, cf. (2.7).

In the alignment limit,

$$\alpha_1=\beta,\qquad\alpha_2=0, \quad (5.9)$$

which is closely approached by the LHC data on the Higgs-gauge-boson coupling, the $H^\mp W^\pm H_1$ coupling actually vanishes.
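Numerically, the quantity (5.8) indeed vanishes in the alignment limit (5.9) and, more generally, equals $1-\cos^2\alpha_2\cos^2(\beta-\alpha_1)$, i.e., the deviation of the squared $VVH_1$ coupling from unity. A small illustrative check (ours, not from the paper):

```python
import math

def hw_coupling_sq(beta, alpha1, alpha2):
    """Relative |H-+ W+- H1 coupling|^2 of Eq. (5.8); alpha3 drops out."""
    return (math.cos(alpha2) ** 2 * math.sin(beta - alpha1) ** 2
            + math.sin(alpha2) ** 2)
```

This makes the complementarity explicit: the closer the $VVH_1$ coupling is to its SM value, the more suppressed the $H^+\to H_1W^+$ partial width.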
Since a charged Higgs boson couples to mass, it will predominantly be produced in connection with heavy fermions, , , and , or bosons, or , and likewise for the decays. The cross sections given here, are for illustration only. For the studies presented in sections\u00a0810 they are calculated independently.\n\nWe shall here split the discussion of possible production mechanisms into two mass regimes, according to whether the charged Higgs boson can be produced (in the on-shell approximation) in a top decay or whether it could decay to a top and a bottom quark. These two mass regimes will be referred to as \u201clow\u201d and \u201chigh\u201d mass, respectively.\n\nWhile discussing such processes in hadron-hadron collisions one should be aware that there are two approaches to the treatment of heavy quarks in the initial state. One may take the heavy flavors as being generated from the gluons, then the relevant number of active quarks is (or sometimes 3). Alternatively, the -quark can be included as a constituent of the hadron, then an parton density should be used in the calculation of the corresponding cross section. These two approaches are referred to as the 4-flavor and 5-flavor schemes, abbreviated 4FS and 5FS. This should be kept in mind when referring to the lists of possible subprocesses initiated by heavy quarks and the corresponding figures in the following discussion. Below, we will use the notation , and to denote quarks which are not -quarks. 
We only indicate -quarks when they couple to Higgs bosons, thus enhancing the rate.\n\nFor some discussions it is useful to distinguish \u201cbosonic\u201d and \u201cfermionic\u201d production mechanisms, since the former, corresponding to final states involving only and , may proceed via an intermediate neutral Higgs, and thus depend strongly on its mass, see e.g., Ref.\u00a0[Basso:2015dka].\n\n### 6.1 Production processes\n\nBelow, we list all important production processes represented in Figs.\u00a011-14 in the 5FS.444Charge-conjugated processes are not shown separately. Higgs radiation from initial-state quarks are not shown explicitly.\n\n#### 6.1.1 Single H+ production\n\nA single can be accompanied by a (Fig.\u00a07a, \u201cbosonic\u201d) [Dicus:1989vf, BarrientosBendezu:1998gd, Moretti:1998xq, BarrientosBendezu:1999vd, Brein:2000cv, Hollik:2001hy, Asakawa:2005nx, Eriksson:2006yt, Hashemi:2010ce]:\n\n gg \u2192W\u2212H+, (6.1a) b\u00afb \u2192W\u2212H+, (6.1b)\n\nor by a and a jet (Fig.\u00a07b, \u201cfermionic\u201d) [Gunion:1986pe, DiazCruz:1992gg, Moretti:1996ra, Miller:1999bm, Moretti:1999bw, Zhu:2001nt, Plehn:2002vy, Berger:2003sm, Kidonakis:2004ib, Weydert:2009vr, Kidonakis:2010ux, Flechl:2014wfa, Degrande:2015vpa, Kidonakis:2016eeu, Degrande:2016hyf]:555Note that in the 5FS (6.2) can be a tree-level process, whereas (6.1a) can not.\n\n g\u00afb\u00a0(\u2192\u00aftH+)\u2192\u00afbW\u2212H+. (6.2)\n\nThe pioneering study [Dicus:1989vf] of the bosonic process (6.1) already discussed both the triangle and box contributions to the one-loop -initiated production, but considered massless -quarks, i.e., the -quark Yukawa couplings were omitted. This was subsequently restored in a complete one-loop calculation of the -initiated process [BarrientosBendezu:1998gd, BarrientosBendezu:1999vd], and it was realized that there can be a strong cancellation between the triangle- and box diagrams. 
This interplay of triangle and box diagrams has also been explored in the MSSM [Brein:2000cv].\n\nNLO QCD corrections to the -initiated production process were found to reduce the cross section by [Hollik:2001hy]. On the other hand, possible -channel resonant production via heavier neutral Higgs bosons (see Fig.\u00a07a (i) and (iii)) was seen to provide possible enhancements of up to two orders of magnitude [Asakawa:2005nx]. These authors also pointed out that one should use running-mass Yukawa couplings, an effect which significantly reduced the cross section at high mass [Eriksson:2006yt].\n\nA first comparison of the signal with the background [Moretti:1998xq] (in the context of the MSSM) concluded that the signal could not be extracted from the background. More optimistic conclusions were reached for the channel [Eriksson:2006yt, Hashemi:2010ce], again in the context of the MSSM.\n\nThe first study [Gunion:1986pe] of the fermionic process (6.2) pointed out that there is a double counting issue (see sect.\u00a06.1.2). Subsequently, it was realized [DiazCruz:1992gg, Borzumati:1999th] that the process could be described as , where a gluon splits into and one of these is not observed. As mentioned above, this approach is in recent literature referred to as the four-flavor scheme (4FS) whereas in the five-flavor scheme (5FS) one considers -quarks as proton constituents.\n\nNLO QCD corrections to the cross section have been calculated [Zhu:2001nt, Plehn:2002vy, Degrande:2016hyf], and the resulting scale dependence studied [Plehn:2002vy, Berger:2003sm], both in the 5FS and the 4FS. In a series of papers by Kidonakis [Kidonakis:2004ib, Kidonakis:2010ux, Kidonakis:2016eeu], soft-gluon corrections have been included at the \u201capproximate NNLO\u201d order and found to be significant near threshold, i.e., for heavy . 
A recent study [Degrande:2016hyf] is devoted to total cross sections in the intermediate-mass region, , providing a reliable interpolation between low and high masses.\n\nThese fixed-order cross section calculations have been merged with parton showers [Alwall:2004xw, Weydert:2009vr, Flechl:2014wfa, Degrande:2015vpa], both at LO and NLO, in the 4FS and in the 5FS. The 5FS results are found to exhibit less scale dependence [Degrande:2015vpa].\n\nDifferent background studies [Moretti:1996ra, Miller:1999bm, Moretti:1999bw] compared triple -tagging vs 4--tagging, identifying parameter regions where either is more efficient.\n\nIn addition to the importance of the channel at low mass, the following processes containing two accompanying jets (see Fig.\u00a08) are important at high charged-Higgs mass:\n\n gg,q\u00afq,b\u00afb\u00a0(\u2192t\u00aft\u2192b\u00aftH+)\u2192b\u00afbW\u2212H+, (6.3a) gg,q\u00afq\u00a0(\u2192b\u00aftH+)\u2192b\u00afbW\u2212H+. (6.3b)\n\nThere are also processes with a single and two jets (see Fig.\u00a09):\n\n (i):\u00a0q\u00afq(\u00afq\u2032)\u2192Q\u00afQ\u2032H+,(% ii):\u00a0qq\u2032\u2192q(Q)Q\u2032H+. (6.4)\n\nIn this particular case, with many possible gauge boson couplings, one of the final-state jets could be a .\n\nIn addition, single production can be initiated by a -quark,\n\n qb\u2192q\u2032H+b, (6.5)\n\nas illustrated in Fig.\u00a010.\n\nIn the 5FS, single production can also take place from and quarks, typically accompanied by a gluon jet [He:1998ie, DiazCruz:2001gf, Slabospitsky:2002gw, Dittmaier:2007uw] (Fig.\u00a011):\n\n c\u00afs \u2192H+, (6.6a) c\u00afs \u2192H+g. (6.6b)\n\nSimilarly, one can consider initial states.\n\nAt infinite order the 4FS and the 5FS should only differ by terms of , but the perturbation series of the two schemes are organized differently. 
Some authors (see, e.g., Ref. [Flechl:2014wfa]) advocate combining the two schemes according to the "Santander matching" [Harlander:2011aa]:

    σ = (σ(4FS) + w σ(5FS)) / (1 + w),   (6.7)

with the relative weight factor

    w = log(M_{H±}/m_b) − 2,   (6.8)

since the difference between the two schemes is logarithmic, and in the limit of the 5FS should be exact.

#### 6.1.2 The double counting and NWA issues

A -quark in the initial state may be seen as a constituent of the proton (5FS), or as resulting from the gluon splitting into (4FS). Adding (with one possibly not detected) and in the 5FS, one may therefore commit double counting [Barnett:1987jw, Olness:1987ep]. The resolution lies in subtracting a suitably defined infrared-divergent part of the gluon-initiated amplitude [Alwall:2004xw].⁶ (⁶For a complete discussion of the flavour-scheme choice in inclusive charged-Higgs production in association with fermions, see IV.3.2 of [deFlorian:2016spz] and references therein.) The problem can largely be circumvented by choosing either the 5FS or the 4FS. For a more pragmatic approach, see Refs. [Belyaev:2001qm, Belyaev:2002eq].

A related issue is that of low-mass production via -quark decay, followed by (with a spectator), usually treated in the Narrow Width Approximation (NWA). The NWA, however, fails the closer the top and charged-Higgs masses are to each other, in which case the finite top width needs to be accounted for, which in turn implies that the full gauge-invariant set of diagrams yielding has to be computed.
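The Santander combination of Eqs. (6.7)–(6.8) can be sketched as a small helper. This is an illustration only: the function name is our own, the logarithm is taken as natural, and m_b = 4.75 GeV is an assumed illustrative value, not a number from the text.

```python
import math

def santander_matched(sigma_4fs: float, sigma_5fs: float,
                      m_charged_higgs: float, m_b: float = 4.75) -> float:
    """Santander matching of 4FS and 5FS cross sections, Eqs. (6.7)-(6.8):
        sigma = (sigma_4FS + w * sigma_5FS) / (1 + w),
        w     = log(M_H± / m_b) - 2.
    Masses in GeV; m_b = 4.75 GeV is an illustrative assumption."""
    w = math.log(m_charged_higgs / m_b) - 2.0
    return (sigma_4fs + w * sigma_5fs) / (1.0 + w)
```

Since w grows logarithmically with the charged-Higgs mass, the matched result is pulled toward the 5FS value at high mass, consistent with the remark that the 5FS becomes exact in that limit.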
Considerable effort has been devoted to understanding this implementation, see also Refs.\u00a0[Guchait:2001pi, Alwall:2003tc, Assamagan:2004gv].\n\n#### 6.1.3 H+Hj and H+h\u2212 production\n\nWe can have a single production in association with a neutral Higgs boson [Kanemura:2001hz, Akeroyd:2003bt, Akeroyd:2003jp, Cao:2003tr, Belyaev:2006rf, Miao:2010rg]:\n\n q\u00afq\u2032\u2192H+Hj, (6.9)\n\nas shown in Fig.\u00a012.\n\nFor pair production we have [Eichten:1984eu, Willenbrock:1986ry, Glover:1987nx, Dicus:1987ic, Jiang:1997cg, Krause:1997rc, BarrientosBendezu:1999gp, Brein:1999sy, Moretti:2001pp, Moretti:2003px, Alves:2005kr]:\n\n gg,q\u00afq,b\u00afb\u2192H+H\u2212, (6.10a) q\u00afq(\u00afq\u2032),qQ\u2192q\u2032Q\u2032H+H\u2212, (6.10b)\n\nas illustrated in Figs.\u00a013 and 14, respectively. These mechanisms would be important for light charged Higgs bosons, as allowed in Models\u00a0I and X.\n\n### 6.2 Production cross sections\n\nIn this section, predictions for single Higgs production at 14\u00a0TeV for the CP-conserving 2HDM, Models\u00a0I and II (valid also for X and Y) are discussed.\n\nIn Fig.\u00a015, cross sections for the main production channels are shown at leading order, sorted by the parton-level mechanism [Basso:2015dka]777In the Feynman diagrams is represented by its dominant decay products .. The relevant partonic channels can be categorized as:\n\n\u2022 \u201cfermionic\u201d:\u2003, Fig.\u00a07 b (solid),\n\n\u2022 \u201cfermionic\u201d:\u2003, Fig.\u00a08 a, b (dotted),\n\n\u2022 \u201cbosonic\u201d:\u2003, Fig.\u00a07 a (i) (dash-dotted).\n\nThe charge-conjugated channels are understood to be added unless specified otherwise. No constraints are imposed here, neither from theory (like positivity, unitarity), nor from experiments.\n\nThe CTEQ6L (5FS) parton distribution functions [Pumplin:2002vw] are adopted here, with the scale . Three values of are considered, and and are held fixed at . 
Furthermore, we consider the CP-conserving alignment limit, with . The bosonic cross section is accompanied by a next-to-leading order QCD -factor enhancement [Spira:1995rr].\n\nSeveral points are worth mentioning:\n\n\u2022 To any contribution at fixed order in the perturbative expansion of the gauge coupling, the three cross sections are to be merged with regards to the interpretation in different flavour schemes, as discussed above. In the following, we focus on the first fermionic channel in the 5FS at the tree level.\n\n\u2022 The enhancement exhibited by the dotted curve at low masses is due to resonant production of -quarks which decay to . However, in Model\u00a0I this mode is essentially excluded by LHC data (see section\u00a07.2.4), and in Model\u00a0II it is excluded by the -constraint (see section\u00a07.1.2).\n\n\u2022 Model\u00a0I differs from Model\u00a0II also for , because of a different relative sign between the Yukawa couplings proportional to and those proportional to , see Table\u00a06.\n\n\u2022 Models\u00a0X and Y will have the same production cross sections as Models\u00a0I and II, respectively, but the sensitivity in the -channel would be different.\n\n\u2022 The bumpy structure seen for the bosonic mode is due to resonant production of neutral Higgs bosons, and depends on the values of and . Note that in the MSSM the masses of the heavier neutral Higgs bosons are close to that of the charged one, and this resonant behavior is absent.\n\nWhile recent studies (see section\u00a06.1.1) provide a more accurate calculation of the cross section than what is given here, they typically leave out the 2HDM model-specific -channel (possibly resonant) contribution to the cross section.\n\nIn Fig.\u00a016, the bosonic charged-Higgs production cross section vs for a set of CP-conserving parameter points that satisfy the theoretical and experimental constraints [Basso:2015dka] (see also [Basso:2012st, Basso:2013wna]) are presented. 
These are shown in different colors for different values of . The spread in cross section values for each value of and reflects the range of allowed values of the other parameters scanned over, namely , and .\n\nLow values of are enhanced for the bosonic mode due to the contribution of the -quark in the loop, whereas the modulation is due to resonant production. In the CP-violating case, this modulation is more pronounced [Basso:2015dka].\n\nAs summarized by the LHC Top Physics Working Group the cross section has been calculated at next-to-next-to leading order (NNLO) in QCD including resummation of next-to-next-to-leading logarithmic (NNLL) soft gluon terms with the software Top++2.0\u00a0[Beneke:2011mq, Cacciari:2011hy, Czakon:2011xx, Baernreuther:2012ws, Czakon:2012zr, Czakon:2012pz, Czakon:2013goa]. The decay width is available at NNLO\u00a0[Czarnecki:1998qc, Chetyrkin:1999ju, Blokland:2004ye, Blokland:2005vq, Czarnecki:2010gb, Gao:2012ja, Brucherseifer:2013iv], while the decay width is available at NLO\u00a0[Czarnecki:1992zm].\n\n## 7 Experimental constraints\n\nHere we review various experimental constraints for charged Higgs bosons derived from different low (mainly -physics) and high (mainly LEP, Tevatron and LHC) energy processes. Also some relevant information on the neutral Higgs sector is presented. Some observables depend solely on exchange, and are thus independent of CP violation in the potential, whereas other constraints depend on the exchange of neutral Higgs bosons, and are sensitive to the CP violation introduced via the mixing discussed in subsection\u00a02.2. Due to the possibility of , in addition to exchange, we are getting constraints from a variety of processes, some at tree and some at the loop level. In addition, we present general constraints coming from electroweak precision measurements, , , the muon magnetic moment and the electric dipole moment of the electron. 
The experimental constraints listed below are valid only for Model II, if not stated otherwise.⁸ (⁸Analyses with general Yukawa couplings can be found in Refs. [Mahmoudi:2009zx] and [Crivellin:2013wna].) Also, some of the constraints are updated with respect to those used in the studies presented in later sections.

The charged-Higgs contribution may substantially modify the branching ratios for -production in -decays [Krawczyk:1987zj]. An attempt to describe various and anomalies (also ) in the 2HDM, Model III, with a novel ansatz relating up- and down-type Yukawa couplings, can be found in [Cline:2015lqp]. This analysis points towards an mass around 100 GeV, with masses of other neutral Higgs bosons in the range 100–125 GeV. A similar approach to describing various low-energy anomalies by introducing additional scalars can be found in [Crivellin:2015hha]. Here, a lepton-specific 2HDM (i.e., of type X) with non-standard Yukawa couplings has been analysed, with the second neutral CP-even Higgs boson light (below 100 GeV) and a relatively light , with a mass of the order of 200 GeV.

### 7.1 Low-energy constraints

As mentioned above, several decays involving heavy-flavor quarks could be affected by in addition to -exchange. Data on such processes provide constraints on the coupling (represented by ) and the mass, . Below, we discuss the most important ones.

#### 7.1.1 Constraints from H+ tree-level exchange

##### B → τν_τ(X):

The measurement of the branching ratio of the inclusive process [Abbiendi:2001fi] leads to the following constraint, at the CL,

    tan β / M_{H±} < 0.53 GeV⁻¹.   (7.1)

This is in fact a very weak constraint. (A similar result can be obtained from the leptonic tau decays at the tree level [Krawczyk:2004na].)
A more recent measurement for the exclusive case gives [Agashe:2014kda].⁹ (⁹The error of the measurement, given by HFAG [Amhis:2014hma] and released after the PDG 2014 [Agashe:2014kda], is slightly lower: (…)) With a Standard Model prediction of [Charles:2004jd]¹⁰ (¹⁰We have added in quadrature symmetrized statistical and systematic errors.), we obtain

    r_H^exp = BR(B → τν_τ) / BR(B → τν_τ)_SM = 1.56 ± 0.47.   (7.2)

Interpreted in the framework of the 2HDM at the tree level, one finds [Hou:1992sy, Grossman:1994ax, Grossman:1995yp]

    r_H^2HDM = [1 − (m_B²/M_{H±}²) tan²β]².   (7.3)

Two sectors of the ratio are excluded. Note that this exclusion is relevant for high values of .

##### B → D τν_τ:

The ratios [Aubert:2007dsa]

    R^exp(D(∗)) = BR(B → D(∗) τν_τ) / BR(B → D(∗) ℓν_ℓ),  ℓ = e, μ,   (7.4)

are sensitive to -exchange, and lead to constraints similar to the one following from [Nierste:2008qe]. In fact, there has been some tension between BaBar results [Aubert:2007dsa, Lees:2012xj, Lees:2013uzd] and both the 2HDM (II) and the SM. These ratios have also been measured by Belle [Huschle:2015rga, Abdesselam:2016cgx] and LHCb [Aaij:2015yra]. Recent averages [Freytsis:2015qca, Cline:2015lqp] are summarized in Table 2, together with the SM predictions [Fajfer:2012vx, Lattice:2015rga, Na:2015kha]. They are compatible at the level. A comparison with the 2HDM (II) concludes [Huschle:2015rga] that the results are compatible for . However, in view of the high values for required by the constraint, uncomfortably high values of would be required.
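The tree-level ratio of Eq. (7.3) is simple enough to evaluate directly, which makes clear why two bands of parameter space are excluded: the ratio dips to zero and then rises above unity as tan β grows at fixed charged-Higgs mass. A minimal sketch (the function name is our own; m_B = 5.279 GeV is the charged B-meson mass, an assumption for illustration):

```python
def r_h_2hdm(m_hpm: float, tan_beta: float, m_b_meson: float = 5.279) -> float:
    """Tree-level 2HDM (Model II) ratio of BR(B -> tau nu) to its SM value,
    Eq. (7.3): r_H = [1 - (m_B^2 / M_H±^2) * tan^2(beta)]^2.
    Masses in GeV; m_B = 5.279 GeV (B meson) is an assumed input."""
    x = (m_b_meson / m_hpm) ** 2 * tan_beta ** 2
    return (1.0 - x) ** 2

# For small tan(beta)/M_H± the ratio is SM-like (r_H ~ 1); it vanishes
# when tan^2(beta) = M_H±^2 / m_B^2 and exceeds 1 beyond that point.
```

Comparing such curves against the measured value of Eq. (7.2) is what carves out the two excluded sectors mentioned in the text, most relevantly at high tan β.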
The studies given for Model\u00a0II in section\u00a08.3 do not take this constraint into account.\n\n##### Ds\u2192\u03c4\u03bd\u03c4:\n\nSevere constraints can be obtained, which are competitive with those from [Akeroyd:2009tn].\n\n#### 7.1.2 Constraints from H+ loop-level exchange\n\n##### B\u2192Xs\u03b3:\n\nThe transition may also proceed via charged Higgs boson exchange, which is sensitive to the values of and . The allowed region depends on higher-order QCD effects. A huge effort has been devoted to the calculation of these corrections, the bulk of which are the same as in the SM [Chetyrkin:1996vx, Buras:1997bk, Bauer:1997fe, Bobeth:1999mk, Buras:2002tp, Misiak:2004ew, Neubert:2004dd, Melnikov:2005bx, Misiak:2006zs, Misiak:2006ab, Asatrian:2006rq, Czakon:2006ss, Boughezal:2007ny, Ewerth:2008nv, Misiak:2010sk, Asatrian:2010rq, Ferroglia:2010xe, Misiak:2010tk, Kaminski:2012eb, Czakon:2015exa]. They are now complete up to NNLO order. On top of these, there are 2HDM-specific contributions [Ciafaloni:1997un, Ciuchini:1997xe, Borzumati:1998tg, Bobeth:1999ww, Gambino:2001ew, Misiak:2015xwa] that depend on and . The result is that mass roughly up to is excluded for high values of [Misiak:2015xwa], with even stronger constraints for very low values of . Recently, a new analysis [Trabelsi:2015] of Belle results [Saito:2014das] concludes that the lower limit is 540\u00a0GeV. Also note the new result of Misiak and Steinhauser [Misiak:2017bgg] with lower limit in the range 570\u2013800 GeV, see Fig.\u00a017 (right) for high and high masses. We have here adopted the more conservative value of 480\u00a0GeV, however our results can easily be re-interpreted for this new limit. Constraints from decay for lower masses are presented in Fig.\u00a019 together with other constraints.\n\nFor low values of , the constraint is even more severe. This comes about from the charged-Higgs coupling to and quarks ( and ) containing terms proportional to and (). 
The product of these two couplings determine the loop contribution, where there is an intermediate state, and leads to terms proportional to (responsible for the constraint at low ) and (responsible for the constraint that is independent of ). For Models\u00a0I and X, on the other hand, both these couplings are proportional to . Thus, the constraint is in these models only effective at low values of .111111For early studies, see [Grossman:1994jb, Akeroyd:1994ga]. This can be seen in Fig.\u00a017 (left) and Fig.\u00a018, where the new results from the analysis applied to Model\u00a0I of the 2HDM are shown. We stress that Model I can avoid the constraints and hence it can accommodate a light .\n\n##### B0\u2212\u00afB0 mixing:\n\nDue to the possibility of charged-Higgs exchange, in addition to exchange, the mixing constraint excludes low values of (for ) and low values of [Abbott:1979dt, Inami:1980fz, Athanasiu:1985ie, Glashow:1987qe, Geng:1988bq, Urban:1997gw]. Recent values for the oscillation parameters and are given in Ref.\u00a0[Deschamps:2009rh], only at very low values of do they add to the constraints coming from .\n\n#### 7.1.3 Other precision constraints\n\n##### T and S:\n\nThe precisely measured electroweak (oblique) parameters and correspond to radiative corrections, and are (especially ) sensitive to the mass splitting of the additional scalars of the theory. In papers [Grimus:2007if, Grimus:2008nb] general expressions for these quantities are derived for the MHDMs and by confronting them with experimental results, in particular , strong constraints are obtained on the masses of scalars. In general, imposes a constraint on the splitting in the scalar sector, a mass splitting among the neutral scalars gives a negative contribution to , whereas a splitting between the charged and neutral scalars gives a positive contribution. A recent study [Gorbahn:2015gxa] also demonstrates how RGE running may induce contributions to and . 
Current data on and are given in [Agashe:2014kda].\n\n##### The muon anomalous magnetic moment:\n\nWe are here considering heavy Higgs bosons (), with a focus on the Model\u00a0II, therefore, according to [Cheung:2003pw, Chang:2000ii, WahabElKaffas:2007xd], the 2HDM contribution to the muon anomalous magnetic moment is negligible even for as high as (see, however, [Krawczyk:2002df]).\n\n##### The electron electric dipole moment:\n\nThe bounds on electric dipole moments constrain the allowed amount of CP violation of the model. For the study of the CP-non-conserving Model\u00a0II presented in section\u00a08.3, the bound [Regan:2002ta] (see also [Pilaftsis:2002fe]):\n\n |de|to0.0pt<\u223c%\u00a01\u00d710\u221227[ecm], (7.5)\n\nwas adopted at the level. (More recently, an order-of-magnitude stronger bound has been established [Baron:2013eja].) The contribution due to neutral Higgs exchange, via the two-loop Barr\u2013Zee effect [Barr:1990vd], is given by Eq.\u00a0(3.2) of [Pilaftsis:2002fe].\n\n#### 7.1.4 Summary of low-energy constraints\n\nA summary of constraints of the 2HDM Model\u00a0II coming from low-energy physics performed by the \u201cGfitter\u201d group [Flacher:2008zq] is presented on Fig.\u00a019. The more recent inclusion of higher-order effects pushes the constraint up to around 480\u00a0GeV [Misiak:2015xwa] or even higher, as discussed above. See also Refs.\u00a0[Deschamps:2009rh, Bona:2009cj, Enomoto:2015wbn].\n\n### 7.2 High-energy constraints\n\nMost bounds on charged Higgs bosons are obtained in the low-mass region, where a charged Higgs might be produced in the decay of a top quark, , with the subsequently decaying according to Eqs.\u00a0(5.1a-c), (5.2) or (5.3). Of special interest are the decays and . For comparison with data, products like are relevant, as presented in section\u00a05.3. At high charged-Higgs masses, the rate can be important (if kinematically open). 
On the other hand, the channel can dominate over , because of the larger phase space. However, as illustrated in Fig. 4, it vanishes in the alignment limit.

#### 7.2.1 Charged-Higgs constraints from LEP

The branching ratio would be affected by Higgs exchange. Experimentally [Agashe:2014kda]. The contributions from neutral Higgs bosons to are negligible [ElKaffas:2006nt]; however, charged Higgs boson contributions, as given by Eq. (4.2) of [Denner:1991ie], exclude low values of and low . See also Fig. 19.

LEP and the Tevatron have given limits on the mass and couplings for charged Higgs bosons in the 2HDM. At LEP, a lower mass limit of 80 GeV refers to the Model II scenario for
The Mystery Files of Shelby Woo is a children's mystery television series that ran on Nickelodeon between 1996 and 1999. A total of 41 episodes of 30 minutes each were produced. Episodes from the first three seasons were taped at Nickelodeon Studios at Universal Studios in Orlando, Florida, where the show was one of the few single-camera productions, while the final season's episodes were shot in Montreal, Quebec, Canada.
History
The series first aired in March 1996 as a six-episode test run, since Nickelodeon usually produced one major new series at a time and they were already producing Space Cases. The success of the test run prompted Nickelodeon to re-introduce the series on SNICK in January 1997, along with seven new episodes. During the show's third season, production stopped after eight of a proposed thirteen episodes were filmed due to a crew strike, as the show's budget did not cover the International Alliance of Theatrical Stage Employees' demands, partly due to the decision to shoot on film instead of videotape. Production resumed in Montreal in February 1998, after Cinar agreed to co-produce the series. As a result, the show's setting changed from Cocoa Beach, Florida to Boston, Massachusetts.
Starring Irene Ng as the title character, the series revolves around the adventures of a Chinese American teenage girl who lives with her innkeeper grandfather and works as a non-sworn intern at the local police department where she helps out with odds and ends around the office. Occasionally an intriguing case comes to Shelby's attention, prompting her to apply her unique insight and enlist the help of her friends to solve it. Her supervisors, however, do not appreciate her help, as she is only a teenager. Her grandfather also does not want her getting involved in cases, often reminding her "We are not detectives with warrant badges, we are innkeepers with brooms." Many of the stories, with three clear suspects, keep the audience guessing until the truth is ultimately explained.
Cast
Primary characters
Shelby Woo (Irene Ng): Main protagonist of the show, who solves mysteries and is an overachiever. She lives in Cocoa Beach, Florida in Seasons 1-3 and moves to Boston with her grandfather in Season 4. Despite Shelby being portrayed as a teenager, Ng was in reality 21 when the series began and close to 25 by its end. Shelby's parents are never seen, though the episode "Hot Seats" mentions that they are in China, after Shelby receives a package from them.
Michael "Mike" Woo (Pat Morita): Shelby's loving grandfather and legal guardian who is looking after Shelby while her parents are in China. He's a practical retired detective with the San Francisco PD. Mike doesn't want Shelby to solve mysteries because he's afraid that she'll get hurt.
Cindy Ornette (Preslaysa Edwards): Shelby's perky best friend in Cocoa Beach; like Shelby, she likes getting involved in cases. Cindy is close to her cousin Wayne.
Noah Allen (Adam Busch): Shelby's other best friend in Cocoa Beach; he doesn't like getting involved in cases. Noah wants to be an actor.
Detective Whit Hineline (Steve Purnick): Works at the Cocoa Beach PD; he is Shelby's sarcastic former boss and doesn't like her interfering in his investigations. Detective Hineline does care about Shelby's well-being.
Detective Sharon Delancey (Ellen David): Works at the Boston PD, and is Shelby's new boss. While she isn't thrilled with Shelby's help, she is more accepting of it than Detective Hineline was.
Angela "Angie" Burns (Eleanor Noble): Shelby's new best friend from Boston who replaces Cindy; very good at science and applies that knowledge in certain cases.
Vincent "Vince" Rosania (Noah Klar): Shelby's other new best friend from Boston who replaces Noah; originally a suspect in one of Shelby's first cases in Boston. He becomes Shelby's love interest.
Recurring characters
Detective Muldoon (Angelo Tsarouchas): heavy set detective who assists Shelby in a few cases in Detective Delancey's absence; does needlepoint
Will (Joshua Harto): works at CJ's burger joint where Shelby and her friends hang out in Cocoa Beach; known for breaking things or coming up with poorly-thought-out ideas
Christie Sayers (Jennifer Finnigan): Shelby's nemesis who is determined to solve a case before she does and fails each time; only appears in season 4
Episodes
Series overview
Season 1 (1996)
Season 2 (1997)
Season 3 (1997–98)
Season 4 (1998–99)
Broadcast
On December 28, 2011, TeenNick aired the episode "The Smoke Screen Case" on The '90s Are All That block. The series began airing on a more permanent basis in late October 2015 on The '90s Are All That's successor block, The Splat.
Home media
All 12 episodes from seasons 1 and 2 are available for purchase on the iTunes Store and Amazon Video. Season 2 is available for purchase on Vudu.
On November 24, 2014, the entire series was released on DVD exclusive to Amazon.com in region 1.
References
External links
1990s American mystery television series
1990s Nickelodeon original programming
1996 American television series debuts
1996 Canadian television series debuts
1999 American television series endings
1999 Canadian television series endings
Canadian mystery television series
English-language television shows
Television series about teenagers
Television series by Cookie Jar Entertainment
Television shows set in Florida
Television shows set in Boston
Television shows filmed in Montreal
Chinese American television
AppStudio-Bandsintown-Demo
==========================
Sample Universal App showing how to build the ultimate fan app using Windows AppStudio and BandsintownAPI
Check out my Blog Post series, "Create The Ultimate Fan Universal App Using Windows AppStudio", which provides a walk through of how this was built:
[Part 1](http://geekswithblogs.net/lorilalonde/archive/2014/11/25/create-the-ultimate-fan-universal-app-using-windows-appstudio.aspx)
[Part 2](http://geekswithblogs.net/lorilalonde/archive/2014/12/05/create-the-ultimate-fan-universal-app-using-windows-appstudiondashpart-2.aspx)
[Part 2 1/2](http://geekswithblogs.net/lorilalonde/archive/2014/12/27/create-the-ultimate-fan-universal-app-using-windows-appstudio-ndash.aspx)
[Part 3](http://www.geekswithblogs.net/lorilalonde/archive/2014/12/29/create-the-ultimate-fan-universal-app-using-windows-appstudio-ndash-again.aspx)
G7 nations
Italian-speaking countries and territories
Member states of NATO
Member states of the Council of Europe
Member states of the European Union
Member states of the Union for the Mediterranean
Member states of the United Nations
Romance countries and territories
Southern European countries
States and territories established in 1861
1861 establishments in Europe
Repubblica Italiana (Italian)
Emblem of Italy
Anthem: Il Canto degli Italiani (Italian)
"The Song of the Italians"
Capital and largest city Rome
Native languages
Demonym Italian
Unitary constitutional
parliamentary republic
- President Sergio Mattarella
- Prime Minister Paolo Gentiloni
- President of the Senate Pietro Grasso
- President of the Chamber of Deputies Laura Boldrini
- Upper house Senate of the Republic
- Lower house Chamber of Deputies
- Unification 17 March 1861
- Republic 2 June 1946
- Founded the EEC (now the European Union) 1 January 1958
- Total 301,338 km² (71st)
- Water (%) 2.4
- 2016 estimate 60,589,445 [1] (23rd)
- Density 201.3/km² (63rd)
521.5/sq mi
GDP (PPP) 2016 estimate
- Total $2.233 trillion [2] (12th)
- Per capita $36,823[2] (32nd)
GDP (nominal) 2018 estimate
- Total $2.050 trillion[3] (9th)
- Per capita $33,700[4] (27th)
Gini (2016) 33.1[5]
HDI (2015) 0.887[6]
very high · 26th
Currency Euro (€)b (EUR)
Date format dd/mm/yyyy (AD)
Drives on the right
Calling code +39c
Internet TLD .itd
a. German is co-official in South Tyrol; French is co-official in the Aosta Valley; Slovene is co-official in the province of Trieste and the province of Gorizia; Ladin is co-official in South Tyrol, in Trentino and in other northern areas.
b. Before 2002, the Italian lira. The euro is accepted in Campione d'Italia but its official currency is the Swiss franc.[7]
c. To call Campione d'Italia, it is necessary to use the Swiss code +41.
d. The .eu domain is also used, as it is shared with other European Union member states.
Italy (Italian: Italia [iˈtaːlja]), officially the Italian Republic (Italian: Repubblica italiana [reˈpubːlika itaˈljaːna]),[8][9][10][11] is a unitary parliamentary republic in Europe.[note 1] Located in the heart of the Mediterranean Sea, Italy shares open land borders with France, Switzerland, Austria, Slovenia, San Marino and Vatican City. Italy covers an area of 301,338 km² (116,347 sq mi) and has a largely temperate seasonal and Mediterranean climate. With around 61 million inhabitants, it is the fourth most populous EU member state.
In classical antiquity, Phoenicians, Carthaginians and Greeks established settlements in the south of Italy, with Etruscans and Celts inhabiting the centre and the north of Italy respectively, and various ancient Italian tribes and Italic peoples dispersed throughout the Italian peninsula and insular Italy. The Italic tribe known as the Latins formed the Roman Kingdom, which eventually became a republic that conquered and assimilated its neighbours. Ultimately the Roman Empire emerged as the dominant power in the Mediterranean basin and became the leading cultural, political and religious centre of Western civilisation.
During the Early Middle Ages, Italy suffered sociopolitical collapse amid calamitous barbarian invasions, but by the 11th century, numerous rival city-states and maritime republics, mainly in the northern and central regions of Italy, rose to great prosperity through shipping, commerce and banking, laying the groundwork for modern capitalism.[12] These mostly independent statelets, acting as Europe's main spice trade hubs with Asia and the Near East, often enjoyed a greater degree of democracy than the larger feudal monarchies that were consolidating throughout Europe. Part of central Italy was under the control of the theocratic Papal States, while Southern Italy remained largely feudal until the 19th century, partially as a result of a succession of Byzantine, Arab, Norman, Angevin and Spanish conquests of the region.[13]
The Renaissance began in Italy and spread to the rest of Europe, bringing a renewed interest in humanism, science, exploration and art. Italian culture flourished at this time, producing famous scholars, artists and polymaths such as Michelangelo, Leonardo da Vinci, Raphael, Galileo and Machiavelli. From the Middle Ages onward, Italian explorers such as Marco Polo, Christopher Columbus, Amerigo Vespucci, John Cabot and Giovanni da Verrazzano discovered new routes to the Far East and the New World, helping to usher in the European Age of Discovery. Nevertheless, Italy's commercial and political power significantly waned with the opening of trade routes which bypassed the Mediterranean.[13][14][15] Furthermore, the Italian city-states constantly engaged one another in bloody warfare, culminating in the Italian Wars of the 15th and 16th centuries, which left them exhausted, with none emerging as a dominant power. They soon fell victim to conquest by European powers such as France, Spain and Austria.
By the mid-19th century, a rising movement in support of Italian nationalism and independence from foreign control led to a period of revolutionary political upheaval. After centuries of foreign domination and political division, Italy was almost entirely unified in 1871, establishing a great power.[16] From the late 19th century to the early 20th century, the new Kingdom of Italy rapidly industrialised, although mainly in the north, and acquired a colonial empire,[17] while the south remained largely impoverished and excluded from industrialisation, fuelling a large and influential diaspora.[18] Despite being one of the main victors in World War I, Italy entered a period of economic crisis and social turmoil, leading to the rise of a fascist dictatorship in 1922. Participation in World War II on the Axis side ended in military defeat, economic destruction, and an Italian civil war. Following the liberation of Italy and the rise of the resistance, the country abolished the monarchy, reinstated democracy, enjoyed a prolonged economic boom and, despite periods of sociopolitical turmoil, became a major advanced economy.[19][20][21]
Today, Italy has the third largest nominal GDP in the Eurozone and the eighth largest in the world. As an advanced economy, the country has the sixth-largest worldwide national wealth and is ranked third for its central bank gold reserves. Italy has a very high level of human development and is sixth in the world for life expectancy. The country plays a prominent role in regional and global economic, military, cultural, and diplomatic affairs, and it is both a regional power[22][23] and a great power.[24][25] Italy is a founding and leading member of the European Union and a member of numerous international institutions, including the UN, NATO, the OECD, the OSCE, the WTO, the G7, the G20, the Union for the Mediterranean, the Council of Europe, Uniting for Consensus and many more. As a reflection of its cultural wealth, Italy is home to 53 World Heritage Sites, the most in the world, and is the fifth most visited country.
2.1 Prehistory and antiquity
2.2 Ancient Rome
2.3 Middle Ages
2.4 Early Modern
2.5 Italian unification
2.6 Fascist regime
2.7 Republican Italy
3.1 Volcanology
3.2 Environment
3.3 Biodiversity
4.1 Government
4.2 Law and criminal justice
4.2.1 Law enforcement
4.4 Military
5.3 Science and technology
6.1 Metropolitan cities and larger urban zones
6.2 Immigration
6.3 Languages
6.6 Health
7.2 Visual art
7.3 Literature and theatre
7.6 Sport
7.7 Fashion and design
7.9 Public holidays and festivals
Main article: Name of Italy
Hypotheses for the etymology of the name "Italia" are numerous.[26] One is that it was borrowed via Greek from the Oscan Víteliú 'land of calves' (cf. Lat vitulus "calf", Umb vitlo "calf").[27] The bull was a symbol of the southern Italic tribes and was often depicted goring the Roman wolf as a defiant symbol of free Italy during the Social War. Greek historian Dionysius of Halicarnassus states this account together with the legend that Italy was named after Italus,[28] mentioned also by Aristotle[29] and Thucydides.[30]
The name Italia originally applied only to a part of what is now Southern Italy – according to Antiochus of Syracuse, the southern portion of the Bruttium peninsula (modern Calabria: the province of Reggio, and part of the provinces of Catanzaro and Vibo Valentia). But by his time Oenotria and Italy had become synonymous, and the name also applied to most of Lucania as well. The Greeks gradually came to apply the name "Italia" to a larger region, but it was during the reign of Emperor Augustus (end of the 1st century BC) that the term was expanded to cover the entire peninsula up to the Alps.[31]
Main article: History of Italy
Prehistory and antiquity
Main articles: Prehistoric Italy, Etruscan civilisation, Magna Graecia, and Nuragic civilisation
File:Etruscan Painting 1.jpg
Etruscan fresco in the Monterozzi necropolis, 5th century BCE
Excavations throughout Italy have revealed a Neanderthal presence dating back to the Palaeolithic period, some 200,000 years ago;[32] modern humans appeared about 40,000 years ago. Archaeological sites from this period include Addaura cave, Altamura, Ceprano, Monte Poggiolo and Gravina in Puglia.[33]
The ancient peoples of pre-Roman Italy – such as the Umbrians, the Latins (from whom the Romans emerged), Volsci, Oscans, Samnites, Sabines, the Celts, the Ligures, and many others – were Indo-European peoples; the main historic peoples of possible non-Indo-European heritage include the Etruscans, the Elymians and the Sicani in Sicily, and the prehistoric Sardinians, who gave birth to the Nuragic civilisation. Other ancient populations of undetermined language family and possible non-Indo-European origin include the Rhaetian people and the Camunni, known for their rock carvings.
Between the 17th and the 11th centuries BC, Mycenaean Greeks established contacts with Italy,[34][35][36][37] and in the 8th and 7th centuries BC a number of Greek colonies were established along the coast of Sicily and the southern part of the Italian Peninsula, an area that became known as Magna Graecia. The Phoenicians also established colonies on the coasts of Sicily and Sardinia.
Main article: Ancient Rome
The Colosseum in Rome, built c. 70 – 80 AD, is considered one of the greatest works of architecture and engineering of ancient history
The Roman Empire at its greatest extent, 117 AD
Rome, a settlement around a ford on the river Tiber conventionally founded in 753 BC, was ruled for a period of 244 years by a monarchical system, initially with sovereigns of Latin and Sabine origin, later by Etruscan kings. The tradition handed down seven kings: Romulus, Numa Pompilius, Tullus Hostilius, Ancus Marcius, Tarquinius Priscus, Servius Tullius and Tarquinius Superbus. In 509 BC, the Romans expelled the last king from their city and established an oligarchic republic.
In the wake of Julius Caesar's rise and death in the first century BC, Rome grew over the course of centuries into a massive empire stretching from Britain to the borders of Persia and engulfing the whole Mediterranean basin, in which Greek, Roman and many other cultures merged into a unique civilisation. The Italian Peninsula was named Italia and was not a province but the territory of the city of Rome, thus having a special status.[38] The long and triumphant reign of the first emperor, Augustus, began a golden age of peace and prosperity.
The Roman Empire was among the most powerful economic, cultural, political and military forces in the world of its time. It was one of the largest empires in world history. At its height under Trajan, it covered 5 million square kilometres.[39][40] The Roman legacy has deeply influenced the Western civilisation, shaping most of the modern world; among the many legacies of Roman dominance are the widespread use of the Romance languages derived from Latin, the numerical system, the modern Western alphabet and calendar, and the emergence of Christianity as a major world religion.[41]
In a slow decline since the third century AD, the Empire split in two in 395 AD. The Western Empire, under the pressure of the barbarian invasions, eventually dissolved in 476 AD, when its last Emperor was deposed by the Germanic chief Odoacer, while the Eastern half of the Empire survived for another thousand years.
Main article: Italy in the Middle Ages
File:Naval Jack of Italy.svg
Flag of the Italian Navy, displaying the coat of arms of the most prominent maritime republics (clockwise from left): Venice, Genoa, Pisa and Amalfi
After the fall of the Western Roman Empire, Italy was seized by the Ostrogoths,[42] followed in the 6th century by a brief reconquest under Byzantine Emperor Justinian. The invasion of another Germanic tribe, the Lombards, late in the same century, reduced the Byzantine presence to a rump realm (the Exarchate of Ravenna) and ended the political unity of the peninsula for the next 1,300 years. The Lombard kingdom was subsequently absorbed into the Frankish Empire by Charlemagne in the late 8th century. The Franks also helped the formation of the Papal States in central Italy. Until the 13th century, Italian politics was dominated by the relations between the Holy Roman Emperors and the Papacy, with most of the Italian city-states siding with the former (Ghibellines) or the latter (Guelphs) out of momentary convenience.[43]
The Iron Crown of Lombardy, for centuries symbol of the Kings of Italy
Castel del Monte, built by German Emperor Frederick II, now a UNESCO World Heritage Site
It was during this chaotic era that Italian towns saw the rise of a peculiar institution, the medieval commune. Given the power vacuum caused by extreme territorial fragmentation and the struggle between the Empire and the Holy See, local communities sought autonomous ways to maintain law and order.[44] In 1176 a league of city-states, the Lombard League, defeated the German emperor Frederick Barbarossa at the Battle of Legnano, thus ensuring effective independence for most of northern and central Italian cities. In coastal and southern areas, the maritime republics, the most notable being Venice, Genoa, Pisa and Amalfi, heavily involved in the Crusades, grew to eventually dominate the Mediterranean and monopolise trade routes to the Orient.[45]
In the south, Sicily had become an Islamic emirate in the 9th century, thriving until the Italo-Normans conquered it in the late 11th century together with most of the Lombard and Byzantine principalities of southern Italy.[46] Through a complex series of events, southern Italy developed as a unified kingdom, first under the House of Hohenstaufen, then under the Capetian House of Anjou and, from the 15th century, the House of Aragon. In Sardinia, the former Byzantine provinces became independent states known as Giudicati, although some parts of the island were under Genoese or Pisan control until the Aragonese conquered it in the 15th century. The Black Death pandemic of 1348 left its mark on Italy by killing perhaps one third of the population.[47][48] However, the recovery from the plague led to a resurgence of cities, trade and the economy, which allowed the blossoming of humanism and the Renaissance that later spread through Europe.
File:Italy 1494.svg
Italian states before the beginning of the Italian Wars in 1494.
In the 14th and 15th centuries, northern-central Italy was divided into a number of warring city-states, the rest of the peninsula being occupied by the larger Papal States and the Kingdom of Sicily, referred to here as Naples. Though many of these city-states were often formally subordinate to foreign rulers, as in the case of the Duchy of Milan, which was officially a constituent state of the mainly Germanic Holy Roman Empire, the city-states generally managed to maintain de facto independence from the foreign sovereigns that had seized Italian lands following the collapse of the Western Roman Empire. The strongest among these city-states gradually absorbed the surrounding territories, giving birth to the Signorie, regional states often led by merchant families which founded local dynasties. War between the city-states was endemic, and was primarily fought by armies of mercenaries known as condottieri, bands of soldiers drawn from around Europe, especially Germany and Switzerland, led largely by Italian captains.[49] Decades of fighting eventually saw Florence, Milan and Venice emerge as the dominant players, which agreed to the Peace of Lodi in 1454, bringing relative calm to the region for the first time in centuries. This peace would hold for the next forty years.
File:Leonardo self.jpg
Leonardo da Vinci, the quintessential Renaissance man, in a self-portrait, c. 1512. Royal Library, Turin
The Renaissance, a period of vigorous revival of the arts and culture, originated in Italy thanks to a number of factors, such as the great wealth accumulated by merchant cities, the patronage of its dominant families,[50] and the migration of Greek scholars and texts to Italy following the conquest of Constantinople by the Ottoman Turks.[51][52][53] The Italian Renaissance peaked in the mid-16th century as foreign invasions plunged the region into the turmoil of the Italian Wars.
The Medici became the leading family of Florence and fostered and inspired the birth of the Italian Renaissance,[50][54] along with other families of Italy, such as the Visconti and Sforza of Milan, the Este of Ferrara, and the Gonzaga of Mantua. Great artists like Leonardo da Vinci, Brunelleschi, Botticelli, Michelangelo, Giotto, Donatello, Titian and Raphael produced inspired works – their paintings were more realistic-looking than those of medieval artists, and their marble statues rivalled and sometimes surpassed those of classical antiquity. The humanist historian Leonardo Bruni was also the first to divide history into antiquity, the Middle Ages and the modern period.[55] The ideas and ideals of the Renaissance soon spread into Northern Europe, France, England and much of Europe. In the meantime, the discovery of the Americas, the new routes to Asia discovered by the Portuguese and the rise of the Ottoman Empire – all factors which eroded the traditional Italian dominance in trade with the East – caused a long economic decline in the peninsula.
File:Columbus Taking Possession.jpg
Christopher Columbus discovered America in 1492, opening a new era in the history of humankind
Following the Italian Wars (1494 to 1559), ignited by the rivalry between France and Spain, the city-states gradually lost their independence and came under foreign domination, first under Spain (1559 to 1713) and then Austria (1713 to 1796). In 1629–1631, a new outburst of plague claimed about 14% of Italy's population.[56] In addition, as the Spanish Empire started to decline in the 17th century, so did its possessions in Naples, Sicily, Sardinia, and Milan. In particular, Southern Italy was impoverished and cut off from the mainstream of events in Europe.[57]
In the 18th century, as a result of the War of Spanish Succession, Austria replaced Spain as the dominant foreign power, while the House of Savoy emerged as a regional power expanding to Piedmont and Sardinia. In the same century, the two-century long decline was interrupted by the economic and state reforms pursued in several states by the ruling élites.[58] During the Napoleonic Wars, northern-central Italy was invaded and reorganised as a new Kingdom of Italy, a client state of the French Empire,[59] while the southern half of the peninsula was administered by Joachim Murat, Napoleon's brother-in-law, who was crowned as King of Naples. The 1814 Congress of Vienna restored the situation of the late 18th century, but the ideals of the French Revolution could not be eradicated, and soon re-surfaced during the political upheavals that characterised the first part of the 19th century.
Italian unification
Main articles: Italian unification, Kingdom of Italy, and Military history of Italy during World War I
File:Italian-unification.gif
Animated map of the Italian unification, from 1829 to 1871
The birth of the Kingdom of Italy was the result of efforts by Italian nationalists and monarchists loyal to the House of Savoy to establish a united kingdom encompassing the entire Italian Peninsula. In the context of the 1848 liberal revolutions that swept through Europe, an unsuccessful war was declared on Austria. The Kingdom of Sardinia again attacked the Austrian Empire in the Second Italian War of Independence of 1859, with the aid of France, resulting in the liberation of Lombardy.
File:Giuseppe Garibaldi (1866).jpg
Giuseppe Garibaldi, considered one of the greatest generals of modern times and one of Italy's "fathers of the fatherland",[60] commanded and fought in many military campaigns that led eventually to the Italian unification, and is known as the Hero of the Two Worlds[61]
The patriotic journalist Giuseppe Mazzini, a member of the secret revolutionary society Carbonari and founder of the influential political movement Young Italy in the early 1830s, favoured a unitary republic and advocated a broad nationalist movement. His prolific output of propaganda helped the unification movement stay active. In 1860–1861, general Giuseppe Garibaldi led the drive for unification in Naples and Sicily,[62] while troops of the House of Savoy occupied the central territories of the Italian peninsula, except Rome and part of the Papal States. This allowed the Sardinian government led by Camillo Benso, Count of Cavour, to declare a united Italian kingdom on 17 March 1861. The capital of Italy was moved from Turin to Florence. In 1866, Victor Emmanuel II allied with Prussia during the Austro-Prussian War, waging the Third Italian War of Independence, which allowed Italy to annex Venetia. Finally, as France abandoned its garrisons in Rome during the disastrous Franco-Prussian War of 1870, the Italians rushed to fill the power gap by taking over the Papal States. After the unification, Victor Emmanuel, Garibaldi, Cavour and Mazzini have been referred to as Italy's Four Fathers of the Fatherland.[60]
The constitutional law of the Kingdom of Sardinia, the Albertine Statute of 1848, was extended to the whole Kingdom of Italy in 1861 and provided for the basic freedoms of the new state, but electoral laws excluded the non-propertied and uneducated classes from voting. The government of the new kingdom took place in a framework of parliamentary constitutional monarchy dominated by liberal forces. From 2 November 1899 to 7 September 1901, Italy participated as part of the Eight-Nation Alliance forces during the Boxer Rebellion in China. On 7 September 1901, a concession in Tientsin was ceded to the country, and on 7 June 1902, the concession was taken into Italian possession and administered by a consul.
File:Vittoriano Altare della Patria 2013-09-16.jpg
The Altare della Patria in Rome, built in honor of Victor Emmanuel II, the first king of a unified Italy. Since the end of World War I, it holds the tomb of the Unknown Soldier
In 1913, male universal suffrage was adopted. As Northern Italy quickly industrialised, the South and rural areas of the North remained underdeveloped and overpopulated, forcing millions of people to migrate abroad, while the Italian Socialist Party constantly increased in strength, challenging the traditional liberal and conservative establishment. Starting from the last two decades of the 19th century, Italy developed into a colonial power by forcing Somalia, Eritrea and later Libya and the Dodecanese under its rule.[63]
Italy, nominally allied with the German Empire and the Empire of Austria-Hungary in the Triple Alliance, joined the Allies in 1915 with a promise of substantial territorial gains that included western Inner Carniola, the former Austrian Littoral and Dalmatia, as well as parts of the Ottoman Empire. The war was initially inconclusive, as the Italian army got stuck in a long war of attrition in the Alps, making little progress and suffering very heavy losses. Eventually, in October 1918, the Italians launched a massive offensive, culminating in the victory of Vittorio Veneto. The Italian victory[64][65][66] marked the end of the war on the Italian Front, secured the dissolution of the Austro-Hungarian Empire and was chiefly instrumental in ending the First World War less than two weeks later.
During the war, more than 650,000 Italian soldiers and as many civilians died,[67] and the kingdom was driven to the brink of bankruptcy. Under the Peace Treaties of Saint-Germain, Rapallo and Rome, Italy obtained most of the promised territories, but not Dalmatia (except Zara), allowing nationalists to define the victory as "mutilated". Moreover, Italy annexed the Hungarian harbour of Fiume, which was not part of the territories promised at London but had been occupied after the end of the war by Gabriele D'Annunzio.
Fascist regime
Main articles: Italian Fascism and Military history of Italy during World War II
File:Benito Mussolini colored.jpg
Benito Mussolini, duce of Fascist Italy
The socialist agitations that followed the devastation of the Great War, inspired by the Russian Revolution, led to counter-revolution and repression throughout Italy. The liberal establishment, fearing a Soviet-style revolution, started to endorse the small National Fascist Party, led by Benito Mussolini. In October 1922 the Blackshirts of the National Fascist Party attempted a coup (the "March on Rome"); although it failed, at the last minute King Victor Emmanuel III refused to proclaim a state of siege and appointed Mussolini prime minister. Over the next few years, Mussolini banned all political parties and curtailed personal liberties, thus forming a dictatorship. These actions attracted international attention and eventually inspired similar dictatorships such as Nazi Germany and Francoist Spain.
In 1935, Mussolini invaded Ethiopia, resulting in international alienation and Italy's withdrawal from the League of Nations; Italy allied with Nazi Germany and the Empire of Japan and strongly supported Francisco Franco in the Spanish Civil War. In 1939, Italy annexed Albania, which had been a de facto protectorate for decades. Italy entered World War II on 10 June 1940. After initially advancing in British Somaliland and Egypt, the Italians were defeated in East Africa, the Balkans, Russia and North Africa.
File:Italian Empire maximum extent 1942-43.png
Maximum extent of the Italian Empire (1940–43)
The Armistice of Villa Giusti, which ended fighting between Italy and Austria-Hungary at the end of World War I, resulted in Italian annexation of neighbouring parts of Yugoslavia. During the interwar period, the fascist Italian government undertook a campaign of Italianisation in the areas it annexed, suppressing Slavic languages, schools, political parties, and cultural institutions. During World War II, Italian war crimes included extrajudicial killings and ethnic cleansing[68] by deportation of about 25,000 people, mainly Jews, Croats, and Slovenians, to Italian concentration camps such as Rab, Gonars, Monigo, Renicci di Anghiari and elsewhere. In Italy and Yugoslavia, unlike in Germany, few war crimes were prosecuted.[69][70][71][72] Yugoslav Partisans perpetrated their own crimes during and after the war, including the foibe killings. Meanwhile, about 250,000 Italians and anti-communist Slavs fled to Italy in the Istrian exodus.
An Allied invasion of Sicily began in July 1943, leading to the collapse of the Fascist regime and the fall of Mussolini on 25 July. On 8 September, Italy surrendered. The Germans, helped by Italian fascists, quickly succeeded in taking control of northern and central Italy. The country remained a battlefield for the rest of the war, as the Allies slowly moved up from the south.
In the north, the Germans set up the Italian Social Republic (RSI), a Nazi puppet state with Mussolini installed as leader. The post-armistice period saw the rise of a large anti-fascist resistance movement, the Resistenza. In late April 1945, with total defeat looming, Mussolini attempted to escape north,[73] but was captured and summarily executed near Lake Como by Italian partisans. His body was then taken to Milan, where it was hung upside down at a service station for public viewing and to provide confirmation of his demise.[74] Hostilities ended on 29 April 1945, when the German forces in Italy surrendered. Nearly half a million Italians (including civilians) died in the conflict,[75] and the Italian economy had been all but destroyed; per capita income in 1944 was at its lowest point since the beginning of the 20th century.[76]
Republican Italy
Main article: History of the Italian Republic
File:Alcide de Gasperi 2.jpg
Alcide De Gasperi, first republican Prime Minister of Italy and one of the Founding Fathers of the European Union
Italy became a republic after a referendum[77] held on 2 June 1946, a day celebrated since as Republic Day. This was also the first time that Italian women were entitled to vote.[78] Victor Emmanuel III's son, Umberto II, was forced to abdicate and was exiled. The Republican Constitution was approved on 1 January 1948. Under the Treaty of Peace with Italy of 1947, most of the Julian March was lost to Yugoslavia and, later, the Free Territory of Trieste was divided between the two states. Italy also lost all its colonial possessions, formally ending the Italian Empire.
Fears in the Italian electorate of a possible Communist takeover proved crucial for the first universal suffrage electoral outcome on 18 April 1948, when the Christian Democrats, under the leadership of Alcide De Gasperi, obtained a landslide victory. Consequently, in 1949 Italy became a member of NATO. The Marshall Plan helped to revive the Italian economy which, until the late 1960s, enjoyed a period of sustained economic growth commonly called the "Economic Miracle". In 1957, Italy was a founding member of the European Economic Community (EEC), which became the European Union (EU) in 1993.
File:Римський договір.jpg
The signing ceremony of the Treaty of Rome at the Palazzo dei Conservatori on the Capitoline Hill. Italy is a founding member of all EU institutions.
From the late 1960s until the early 1980s, the country experienced the Years of Lead, a period characterised by economic crisis (especially after the 1973 oil crisis), widespread social conflicts and terrorist massacres carried out by opposing extremist groups, with the alleged involvement of US and Soviet intelligence.[79][80][81] The Years of Lead culminated in the assassination of the Christian Democrat leader Aldo Moro in 1978 and the Bologna railway station massacre in 1980, where 85 people died.
In the 1980s, for the first time since 1945, two governments were led by non-Christian-Democrat premiers: one republican (Giovanni Spadolini) and one socialist (Bettino Craxi); the Christian Democrats remained, however, the main government party. During Craxi's government, the economy recovered and Italy became the world's fifth largest industrial nation, gaining entry into the G7 Group. However, as a result of his spending policies, the Italian national debt skyrocketed during the Craxi era, soon passing 100% of the GDP.
In the early 1990s, Italy faced significant challenges, as voters – disenchanted with political paralysis, massive public debt and the extensive corruption system (known as Tangentopoli) uncovered by the 'Clean Hands' investigation – demanded radical reforms. The scandals involved all major parties, but especially those in the government coalition: the Christian Democrats, who ruled for almost 50 years, underwent a severe crisis and eventually disbanded, splitting up into several factions.[82] The Communists reorganised as a social-democratic force. During the 1990s and the 2000s (decade), centre-right (dominated by media magnate Silvio Berlusconi) and centre-left coalitions (led by university professor Romano Prodi) alternately governed the country.
In the late 2000s, Italy was severely hit by the Great Recession. From 2008 to 2013, the country suffered 42 months of GDP recession. The economic crisis was one of the main problems that forced Berlusconi to resign in 2011. The government of the conservative Prime Minister was replaced by the technocratic cabinet of Mario Monti. Following the 2013 general election, the Vice-Secretary of the Democratic Party Enrico Letta formed a new government at the head of a left–right grand coalition. In 2014, challenged by the new Secretary of the PD Matteo Renzi, Letta resigned and was replaced by Renzi. The new government started important constitutional reforms such as the abolition of the Senate and a new electoral law. On 4 December 2016 the constitutional reform was rejected in a referendum, and Renzi resigned a few days later, on 12 December; the Foreign Affairs Minister Paolo Gentiloni was appointed the new Prime Minister.
Italy was affected by the European migrant crisis in 2015 as it became the entry point and leading destination for most asylum seekers entering the EU. The country took in over half a million refugees, which caused great strain on the public purse and a surge in support for far-right and Eurosceptic political parties.[83][84]
Main article: Geography of Italy
File:Italy topographic map-blank.svg
Topographic map of Italy
Italy is located in Southern Europe, between latitudes 35° and 47° N, and longitudes 6° and 19° E. To the north, Italy borders France, Switzerland, Austria and Slovenia, and is roughly delimited by the Alpine watershed, enclosing the Po Valley and the Venetian Plain. To the south, it consists of the entirety of the Italian Peninsula and the two Mediterranean islands of Sicily and Sardinia, in addition to many smaller islands. The sovereign states of San Marino and the Vatican City are enclaves within Italy, while Campione d'Italia is an Italian exclave in Switzerland.
The country's total area is 301,230 square kilometres (116,306 sq mi), of which 294,020 km2 (113,522 sq mi) is land and 7,210 km2 (2,784 sq mi) is water. Including the islands, Italy has 7,600 kilometres (4,722 miles) of coastline on the Adriatic, Ionian and Tyrrhenian seas, and land borders shared with France (488 km (303 mi)), Austria (430 km (267 mi)), Slovenia (232 km (144 mi)) and Switzerland (740 km (460 mi)). San Marino (39 km (24 mi)) and Vatican City (3.2 km (2.0 mi)), both enclaves, account for the remainder.
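As a quick arithmetic check (a sketch for illustration, not an official computation), the land and water figures quoted above should sum to the total area, and the square-mile values should follow from the standard km²-to-sq-mi conversion:

```python
# Figures as quoted in the text above (km2 and sq mi).
total_km2, land_km2, water_km2 = 301_230, 294_020, 7_210

# Land plus water should equal the total area.
assert land_km2 + water_km2 == total_km2

def km2_to_sqmi(km2):
    """Convert square kilometres to square miles (1 km2 ≈ 0.386102 sq mi)."""
    return km2 * 0.386102

# The quoted 116,306 sq mi agrees with the conversion to within rounding.
assert abs(km2_to_sqmi(total_km2) - 116_306) < 50
```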
The Apennine Mountains form the peninsula's backbone and the Alps form most of its northern boundary, where Italy's highest point is located on Monte Bianco (4,810 m or 15,780 ft).[note 2] The Po, Italy's longest river (652 kilometres or 405 miles), flows from the Alps on the western border with France and crosses the Padan plain on its way to the Adriatic Sea. The five largest lakes are, in order of diminishing size:[85] Garda (367.94 km2 or 142 sq mi), Maggiore (212.51 km2 or 82 sq mi, shared with Switzerland), Como (145.9 km2 or 56 sq mi), Trasimeno (124.29 km2 or 48 sq mi) and Bolsena (113.55 km2 or 44 sq mi).
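The "order of diminishing size" in the lake list can be verified directly from the quoted surface areas; a minimal sketch:

```python
# Surface areas (km2) of the five largest Italian lakes, as quoted above.
lakes = {
    "Garda": 367.94,
    "Maggiore": 212.51,
    "Como": 145.9,
    "Trasimeno": 124.29,
    "Bolsena": 113.55,
}

# Sort lake names by area, largest first.
by_size = sorted(lakes, key=lakes.get, reverse=True)
assert by_size == ["Garda", "Maggiore", "Como", "Trasimeno", "Bolsena"]
```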
Although the country includes the Italian peninsula, adjacent islands and most of the southern Alpine basin, some of Italy's territory extends beyond the Alpine basin and some islands are located outside the Eurasian continental shelf. These territories are the comuni of: Livigno, Sexten, Innichen, Toblach (in part), Chiusaforte, Tarvisio, Graun im Vinschgau (in part), which are all part of the Danube's drainage basin, while the Val di Lei constitutes part of the Rhine's basin and the islands of Lampedusa and Lampione are on the African continental shelf.
Mbcourmayeur0001.jpg
Monte Bianco in Aosta Valley, the highest point in the European Union
Dolomites - panoramio (14).jpg
The Dolomites in the Italian Alps
Bellagio 1.jpg
Lake Como, often cited as the most beautiful lake in the world.[86]
Vernazza and the sea, Cinque Terre, Italy.jpg
The Riviera in Liguria
I fenicotteri rosa prendono il volo - panoramio.jpg
Delta of the Po river
Marmore Falls 01.jpg
The Marmore Falls in Umbria
Hilly landscape of Tuscany.jpg
Undulating landscape in Tuscany
Capri Faraglioni with boat.jpg
Faraglioni rocks, Capri
St. Antioco Island, Sardinia.jpg
The rocky coastline of the Isle of Sant'Antioco, Sardinia
Golfo di Macari S.Vito lo Capo, Trapani (Sicily).jpg
The Gulf of Macari in San Vito Lo Capo, Sicily
See also: Volcanology of Italy
File:Mt Etna and Catania1.jpg
Mount Etna is an active stratovolcano in Sicily
The country is situated at the meeting point of the Eurasian Plate and the African Plate, leading to considerable seismic and volcanic activity. There are 14 volcanoes in Italy, four of which are active: Etna (the traditional site of Vulcan's smithy), Stromboli, Vulcano and Vesuvius. Vesuvius is the only active volcano in mainland Europe and is most famous for the destruction of Pompeii and Herculaneum in the eruption of AD 79. Several islands and hills have been created by volcanic activity, and there is still a large active caldera, the Campi Flegrei, north-west of Naples.
Italy's intense Neogene volcanic and magmatic activity is subdivided into provinces:
the Tuscan Magmatic Province (Monti Cimini, Tolfa and Amiata);
the Latium Magmatic Province (Monti Volsini, Vico, Colli Albani, Roccamonfina);
the Umbria–Latium ultra-alkaline district (San Venanzo, Cupaello and Polino);
File:Vesuvius from Monte Somma (Panorama II).jpg
Mount Vesuvius, seen from Monte Somma
the Campanian volcanic province (Vesuvius, Campi Flegrei, Ischia);
the Aeolian arc and Tyrrhenian Sea basin (Aeolian Islands and Tyrrhenian seamounts);
the African–Adriatic foreland (Strait of Sicily, Graham Island, Etna and Mount Vulture).[87]
Italy was the first country to exploit geothermal energy to produce electricity, in the Larderello area and later in the Mount Amiata area, and remained the only one to do so until the 1950s. The high geothermal gradient in parts of the peninsula makes other provinces potentially exploitable as well: research carried out in the 1960s and 1970s identified potential geothermal fields in Lazio and Tuscany, as well as on most volcanic islands.[88]
See also: List of national parks of Italy and List of regional parks of Italy
File:Italy natural parks.png
National (green) and regional (orange) parks in Italy
After its rapid industrial growth, Italy took a long time to confront its environmental problems; following several improvements, it now ranks 84th in the world for ecological sustainability.[89] National parks cover about 5% of the country.[90] In the last decade, Italy has become one of the world's leading producers of renewable energy, ranking as the world's fourth largest holder of installed solar energy capacity[91][92] and the sixth largest holder of wind power capacity in 2010.[93] Renewable energies now make up about 12% of total primary and final energy consumption in Italy, with a future target share set at 17% for the year 2020.[94]
File:Bergtocht van Gimillan (1805m.) naar Colle Tsa Sètse in Cogne Valley (Italië). Zicht op de omringende alpentoppen van Gran Paradiso 06.jpg
Gran Paradiso, established in 1922, is the oldest Italian national park
However, air pollution remains a severe problem, especially in the industrialised north, which reached the tenth highest level of industrial carbon dioxide emissions worldwide in the 1990s.[95] Italy is the twelfth largest producer of carbon dioxide.[96][97] Extensive traffic and congestion in the largest metropolitan areas continue to cause severe environmental and health issues, even though smog levels have decreased dramatically since the 1970s and 1980s, smog is becoming an increasingly rare phenomenon and sulphur dioxide levels are falling.[98]
Many watercourses and coastal stretches have also been contaminated by industrial and agricultural activity, while, because of rising water levels, Venice has been regularly flooded in recent years. Waste from industrial activity is not always disposed of by legal means and has led to permanent health effects on inhabitants of affected areas, as in the case of the Seveso disaster. The country operated several nuclear reactors between 1963 and 1990, but after the Chernobyl disaster and a referendum on the issue, the nuclear programme was terminated. The government overturned that decision in 2008, planning to build up to four nuclear power plants with French technology; this was in turn struck down by a referendum following the Fukushima nuclear accident.[99]
Deforestation, illegal building developments and poor land-management policies have led to significant erosion all over Italy's mountainous regions, leading to major ecological disasters like the 1963 Vajont Dam flood, the 1998 Sarno[100] and 2009 Messina mudslides.
Main articles: Fauna of Italy and Flora of Italy
File:Wolf at Castello Belfort.jpg
The Italian wolf, which inhabits the Apennine Mountains and the Western Alps, features prominently in Latin and Italian cultures, such as in the legend of the founding of Rome.[101]
Italy has the highest level of faunal biodiversity in Europe, with over 57,000 species recorded, representing more than a third of all European fauna.[102] The Italian peninsula is in the centre of the Mediterranean Sea, forming a corridor between central Europe and North Africa, and has 8,000 km of coastline. Italy also receives species from the Balkans, Eurasia and the Middle East. Italy's varied geological structure, including the Alps and the Apennines, Central Italian woodlands, and Southern Italian garigue and maquis shrubland, also contributes to high climate and habitat diversity.
Italian fauna includes 4,777 endemic animal species, such as the Sardinian long-eared bat, Sardinian red deer, spectacled salamander, Brown cave salamander, Italian cave salamander, Monte Albo cave salamander, Sardinian brook newt, Italian newt, Italian frog, Apennine yellow-bellied toad, Aeolian wall lizard, Sicilian wall lizard, Italian Aesculapian snake, and Sicilian pond turtle. There are 102 mammal species in Italy, such as the Alpine marmot, the Etruscan shrew (the smallest mammal in the world) and the European snow vole; notable large mammals include the Italian wolf, Marsican brown bear, Pyrenean chamois, Alpine ibex, rough-toothed dolphin, crested porcupine and Mediterranean monk seal. Italy has also recorded 516 bird species and 56,213 invertebrate species.
The flora was traditionally estimated to comprise about 5,500 vascular plant species.[103] However, as of 2005[update], 6,759 species are recorded in the Data bank of Italian vascular flora.[104] Geobotanically, the Italian flora is shared between the Circumboreal Region and Mediterranean Region. Italy is a signatory to the Berne Convention on the Conservation of European Wildlife and Natural Habitats and the Habitats Directive both affording protection to the Italian fauna and flora.
Main article: Climate of Italy
File:Isola di Levanzo, Sicilia, Italia.jpg
Southern Italy has a Mediterranean climate
Because of the great north–south extension of the peninsula and its mostly mountainous interior, the climate of Italy is highly diverse. In most of the inland northern and central regions, the climate ranges from humid subtropical to humid continental and oceanic. In particular, the climate of the Po Valley geographical region is mostly continental, with harsh winters and hot summers.[105][106]
The coastal areas of Liguria, Tuscany and most of the South generally fit the Mediterranean climate stereotype (Köppen climate classification Csa). Conditions on peninsular coastal areas can be very different from the interior's higher ground and valleys, particularly during the winter months, when the higher altitudes tend to be cold, wet and often snowy. The coastal regions have mild winters and warm, generally dry summers, although lowland valleys can be quite hot in summer. Average winter temperatures vary from 0 °C (32 °F) on the Alps to 12 °C (54 °F) in Sicily, while average summer temperatures range from 20 °C (68 °F) to over 25 °C (77 °F).[107]
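The Fahrenheit equivalents above follow from the standard conversion F = C × 9/5 + 32; a minimal sketch checking the quoted pairs:

```python
def c_to_f(celsius):
    """Convert degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

# Winter averages: 0 °C on the Alps, 12 °C in Sicily.
assert c_to_f(0) == 32
assert round(c_to_f(12)) == 54  # 53.6 °F, quoted rounded as 54 °F
# Summer averages: 20 °C to over 25 °C.
assert c_to_f(20) == 68
assert c_to_f(25) == 77
```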
Main article: Politics of Italy
Italy has been a unitary parliamentary republic since 2 June 1946, when the monarchy was abolished by a constitutional referendum. The President of Italy (Presidente della Repubblica), currently Sergio Mattarella since 2015, is Italy's head of state. The President is elected for a seven-year mandate by the Parliament of Italy in joint session. Italy has a written democratic constitution, resulting from the work of a Constituent Assembly formed by representatives of all the anti-fascist forces that contributed to the defeat of Nazi and Fascist forces during the Italian Civil War.[108]
Paolo Gentiloni, Prime Minister since 2016
Sergio Mattarella, President since 2015
Italy has a parliamentary government based on a proportional voting system. The parliament is perfectly bicameral: the two houses, the Chamber of Deputies (that meets in Palazzo Montecitorio) and the Senate of the Republic (that meets in Palazzo Madama), have the same powers. The Prime Minister, officially President of the Council of Ministers (Presidente del Consiglio dei Ministri), is Italy's head of government. The Prime Minister and the cabinet are appointed by the President of the Republic, but must pass a vote of confidence in Parliament to come into office. The incumbent Prime Minister is Paolo Gentiloni of the Democratic Party.
The Prime Minister presides over the Council of Ministers, which holds effective executive power, and must receive a vote of approval from it to execute most political activities. The office is similar to those in most other parliamentary systems, but the leader of the Italian government is not authorised to request the dissolution of the Parliament of Italy.
Another difference from similar offices is that overall political responsibility for intelligence is vested in the President of the Council of Ministers. By virtue of that, the Prime Minister has exclusive power to coordinate intelligence policies, determine the financial resources and strengthen national cyber security; apply and protect State secrets; and authorise agents to carry out operations, in Italy or abroad, in violation of the law.[109]
File:Giuramento Mattarella Montecitorio.jpg
The Chamber of Deputies is the lower house of Italy.
A peculiarity of the Italian Parliament is the representation given to Italian citizens permanently living abroad: 12 Deputies and 6 Senators elected in four distinct overseas constituencies. In addition, the Italian Senate is characterised also by a small number of senators for life, appointed by the President "for outstanding patriotic merits in the social, scientific, artistic or literary field". Former Presidents of the Republic are ex officio life senators.
Italy's three major political parties are the Five Star Movement, the Democratic Party and the Lega Nord. In the 2018 general election these three parties won 614 of the 630 available seats in the Chamber of Deputies and 309 of 315 in the Senate.[110] Most of the seats were won by Luigi Di Maio's Five Star Movement; Berlusconi's Forza Italia formed a centre-right coalition with Matteo Salvini's Northern League and Giorgia Meloni's Brothers of Italy, while the remaining seats were taken by Matteo Renzi's Democratic Party, together with Achammer and Panizza's South Tyrolean People's Party & Trentino Tyrolean Autonomist Party in a centre-left coalition, and by the independent Free and Equal party.
Law and criminal justice
Main articles: Law of Italy and Judiciary of Italy
File:Rome (IT), Corte Suprema di Cassazione -- 2013 -- 3787.jpg
The Supreme Court of Cassation
The Italian judicial system is based on Roman law modified by the Napoleonic code and later statutes. The Supreme Court of Cassation is the highest court in Italy for both criminal and civil appeal cases. The Constitutional Court of Italy (Corte Costituzionale) rules on the conformity of laws with the constitution and is a post–World War II innovation. Since their appearance in the middle of the 19th century, Italian organised crime and criminal organisations have infiltrated the social and economic life of many regions in Southern Italy, the most notorious of which is the Sicilian Mafia, which would later expand into some foreign countries, including the United States. Mafia receipts may reach 9%[111][112] of Italy's GDP.[113]
A 2009 report identified 610 comuni which have a strong Mafia presence, where 13 million Italians live and 14.6% of the Italian GDP is produced.[114][115] The Calabrian 'Ndrangheta, nowadays probably the most powerful crime syndicate of Italy, accounts alone for 3% of the country's GDP.[116] However, at 0.013 per 1,000 people, Italy has only the 47th highest murder rate[117] (in a group of 62 countries) and the 43rd highest number of rapes per 1,000 people in the world (in a group of 65 countries), relatively low figures among developed countries.
Main article: Law enforcement in Italy
File:Alfa-Romeo159-Carabinieri-di-Roma.JPG
An Alfa Romeo vehicle of the Carabinieri corps
Law enforcement in Italy is provided by multiple police forces, five of which are national agencies. The Polizia di Stato (State Police) is the civil national police of Italy. Along with its investigative and general law-enforcement duties, it patrols the Autostrada (Italy's express highway network) and oversees the security of railways, bridges and waterways. The Carabinieri is the common name for the Arma dei Carabinieri, a gendarmerie-like military corps with police duties. They also serve as the military police for the Italian armed forces.
The Guardia di Finanza (English: Financial Guard) is a corps under the authority of the Minister of Economy and Finance, with a role as a police force. The corps is in charge of financial, economic, judicial and public-safety matters. The Polizia Penitenziaria (Prison Guards, literally Penitentiary Police) operate the Italian prison system and handle the transportation of inmates.
Main article: Foreign relations of Italy
File:EU High Representative Mogherini Walks With Italian FM Gentioni Prior to First Working Session of G7 Ministerial Meeting cropped.jpg
Prime Minister Paolo Gentiloni with EU High Representative Federica Mogherini
Italy is a founding member of the European Community, now the European Union (EU), and of NATO. Italy was admitted to the United Nations in 1955, and it is a member and strong supporter of a wide range of international organisations, such as the Organisation for Economic Co-operation and Development (OECD), the General Agreement on Tariffs and Trade/World Trade Organization (GATT/WTO), the Organization for Security and Co-operation in Europe (OSCE), the Council of Europe, and the Central European Initiative. Its recent or upcoming turns in the rotating presidency of international organisations include the Organization for Security and Co-operation in Europe in 2018, the G7 in 2017 and the EU Council from July to December 2014. Italy is also a recurrent non-permanent member of the UN Security Council, most recently in 2017.
Italy strongly supports multilateral international politics, endorsing the United Nations and its international security activities. As of 2013[update], Italy was deploying 5,296 troops abroad, engaged in 33 UN and NATO missions in 25 countries of the world.[118] Italy deployed troops in support of UN peacekeeping missions in Somalia, Mozambique, and East Timor and provides support for NATO and UN operations in Bosnia, Kosovo and Albania. Italy deployed over 2,000 troops in Afghanistan in support of Operation Enduring Freedom (OEF) from February 2003.
Italy supported international efforts to reconstruct and stabilise Iraq, but it had withdrawn its military contingent of some 3,200 troops by 2006, maintaining only humanitarian operators and other civilian personnel. In August 2006 Italy deployed about 2,450 troops in Lebanon for the United Nations' peacekeeping mission UNIFIL.[119] Italy is one of the largest financiers of the Palestinian National Authority, contributing €60 million in 2013 alone.[120]
Main article: Italian Armed Forces
File:Cavour (550).jpg
The Italian Navy aircraft carrier Cavour
File:Eurofighter Typhoon 02.jpg
A Eurofighter Typhoon operated by the Italian Air Force
The Italian Army, Navy, Air Force and Carabinieri collectively form the Italian Armed Forces, under the command of the Supreme Defence Council, presided over by the President of Italy. Since 2005, military service has been voluntary.[121] In 2010, the Italian military had 293,202 personnel on active duty,[122] of whom 114,778 were Carabinieri.[123] Total Italian military spending in 2010 ranked tenth in the world, standing at $35.8 billion, equal to 1.7% of national GDP. As part of NATO's nuclear sharing strategy, Italy also hosts 90 United States B61 nuclear bombs, located at the Ghedi and Aviano air bases.[124]
The Italian Army is the national ground defence force, numbering 109,703 in 2008. Its best-known combat vehicles are the Dardo infantry fighting vehicle, the Centauro tank destroyer and the Ariete tank; among its aircraft is the Mangusta attack helicopter, deployed in recent years in EU, NATO and UN missions. It also has at its disposal a large number of Leopard 1 and M113 armoured vehicles.
The Italian Navy in 2008 had 35,200 active personnel with 85 commissioned ships and 123 aircraft.[125] It is a blue-water navy. In modern times the Italian Navy, being a member of the EU and NATO, has taken part in many coalition peacekeeping operations around the world.
The Italian Air Force in 2008 had a strength of 43,882 and operated 585 aircraft, including 219 combat jets and 114 helicopters. A transport capability is guaranteed by a fleet of 27 C-130Js and C-27J Spartans.
An autonomous corps of the military, the Carabinieri are the gendarmerie and military police of Italy, policing the military and civilian population alongside Italy's other police forces. While the different branches of the Carabinieri report to separate ministries for each of their individual functions, the corps reports to the Ministry of Internal Affairs when maintaining public order and security.[126]
Main articles: Regions of Italy, Metropolitan cities of Italy, Provinces of Italy, and Municipalities of Italy
Italy is subdivided into 20 regions (regioni), five of which have a special autonomous status that enables them to enact legislation on some of their local matters. The country is further divided into 14 metropolitan cities (città metropolitane) and 96 provinces (province), which in turn are subdivided into 7,960 municipalities (2018) (comuni).[127]
Region Capital Area (km2) Area (sq mi) Population
Abruzzo L'Aquila 10,763 4,156 1,331,574
Aosta Valley Aosta 3,263 1,260 128,298
Apulia Bari 19,358 7,474 4,090,105
Basilicata Potenza 9,995 3,859 576,619
Calabria Catanzaro 15,080 5,822 1,976,631
Campania Naples 13,590 5,247 5,861,529
Emilia-Romagna Bologna 22,446 8,666 4,450,508
Friuli-Venezia Giulia Trieste 7,858 3,034 1,227,122
Lazio Rome 17,236 6,655 5,892,425
Liguria Genoa 5,422 2,093 1,583,263
Lombardy Milan 23,844 9,206 10,002,615
Marche Ancona 9,366 3,616 1,550,796
Molise Campobasso 4,438 1,713 313,348
Piedmont Turin 25,402 9,808 4,424,467
Sardinia Cagliari 24,090 9,301 1,663,286
Sicily Palermo 25,711 9,927 5,092,080
Tuscany Florence 22,993 8,878 3,752,654
Trentino-Alto Adige/Südtirol Trento 13,607 5,254 1,055,934
Umbria Perugia 8,456 3,265 894,762
Veneto Venice 18,399 7,104 4,927,596
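As a sanity check (an illustration, not an official statistic), the regional areas in the table above should sum to roughly Italy's total area of 301,230 km²; small discrepancies are expected from rounding in the per-region figures:

```python
# Area in km2 for each of the 20 regions, in table order
# (Abruzzo through Veneto).
region_areas_km2 = [
    10_763, 3_263, 19_358, 9_995, 15_080, 13_590, 22_446, 7_858, 17_236, 5_422,
    23_844, 9_366, 4_438, 25_402, 24_090, 25_711, 22_993, 13_607, 8_456, 18_399,
]

total = sum(region_areas_km2)
# The sum should match the national total to within rounding error.
assert abs(total - 301_230) < 500
```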
Main article: Economy of Italy
File:Full Milan skyline from Duomo roof.jpg
Milan is a global financial centre and a fashion capital of the world.
Italy has a major advanced[128] capitalist mixed economy, ranking as the third-largest in the Eurozone and the eighth-largest in the world.[129] A founding member of the G7, the Eurozone and the OECD, it is regarded as one of the world's most industrialised nations and a leading country in world trade and exports.[130][131][132] It is a highly developed country, with the world's 8th highest quality of life in 2005[133] and the 26th-highest Human Development Index. The country is well known for its creative and innovative business,[134] a large and competitive agricultural sector[135] (Italy is the world's largest wine producer),[136] and for its influential and high-quality automobile, machinery, food, design and fashion industries.[137][138][139]
File:Ferrari 488 GTB.jpg
A Ferrari 488. Italy maintains a large automotive industry,[140] and is the world's seventh-largest exporter of goods.[141]
Italy is the world's sixth largest manufacturing country,[142] characterised by fewer global multinational corporations than other economies of comparable size and a large number of dynamic small and medium-sized enterprises, famously clustered in several industrial districts, which are the backbone of Italian industry. This has produced a manufacturing sector often focused on the export of niche-market and luxury products that, while less able to compete on quantity, is better able to face the competition from China and other emerging Asian economies, based on lower labour costs, with higher-quality products.[143] Italy was the world's 7th largest exporter in 2016. Its closest trade ties are with the other countries of the European Union, with which it conducts about 59% of its total trade. Its largest EU trade partners, in order of market share, are Germany (12.9%), France (11.4%) and Spain (7.4%).[144]
File:Eurozone.svg
Italy is part of a monetary union, the Eurozone (dark blue) and of the EU single market.
The automotive industry is a significant part of the Italian manufacturing sector, with over 144,000 firms and almost 485,000 employees in 2015,[145] and a contribution of 8.5% to Italian GDP.[146] Fiat Chrysler Automobiles (FCA) is currently the world's seventh-largest automaker.[147] The country boasts a wide range of acclaimed products, from very compact city cars to luxury supercars by Maserati, Lamborghini and Ferrari, which was rated the world's most powerful brand by Brand Finance.[148] Italian cars have won the European Car of the Year award 12 times, with 9 awards won by Fiat (the most of any manufacturer), 2 by Alfa Romeo and one by Lancia.
Italy is part of the European single market, which represents more than 500 million consumers. Several domestic commercial policies are determined by agreements among European Union (EU) members and by EU legislation. Italy introduced the common European currency, the Euro, in 2002.[149][150] It is a member of the Eurozone, which represents around 330 million citizens. Its monetary policy is set by the European Central Bank.
Italy was hit hard by the financial crisis of 2007–08, which exacerbated the country's structural problems.[151] After strong GDP growth of 5–6% per year from the 1950s to the early 1970s,[152] and a progressive slowdown in the 1980s and 1990s, the country virtually stagnated in the 2000s.[153][154] Political efforts to revive growth with massive government spending eventually produced a severe rise in public debt, which stood at over 135% of GDP in 2014, ranking second in the EU only to Greece's (at 174%).[155] However, the largest chunk of Italian public debt is owned by national subjects, a major difference between Italy and Greece,[156] and the level of household debt is much lower than the OECD average.[157]
A gaping North–South divide is a major factor of socio-economic weakness.[158] It is evident in the huge difference in statistical income between the northern and southern regions and municipalities.[159] The richest province, Alto Adige-South Tyrol, earns 152% of the national GDP per capita, while the poorest region, Calabria, earns 61%.[160] The unemployment rate (11.1%) stands slightly above the Eurozone average,[161] but the disaggregated figure is 6.6% in the North and 19.2% in the South.[162]
File:Vineyards in Tuscany quality image.jpg
Vineyards in the Chianti region, Tuscany. The Italian food industry is well known for the high quality and variety of its products.
According to the last national agricultural census, there were 1.6 million farms in 2010 (−32.4% since 2000) covering 12.7 million hectares (63% of which are located in Southern Italy).[163] The vast majority (99%) are family-operated and small, averaging only 8 hectares in size.[163] Of the total surface area in agricultural use (forestry excluded), grain fields take up 31%, olive tree orchards 8.2%, vineyards 5.4%, citrus orchards 3.8%, sugar beets 1.7%, and horticulture 2.4%. The remainder is primarily dedicated to pastures (25.9%) and feed grains (11.6%).[163]
Italy is the world's top wine producer,[164] and one of the leading producers of olive oil, fruits (apples, olives, grapes, oranges, lemons, pears, apricots, hazelnuts, peaches, cherries, plums, strawberries and kiwifruits) and vegetables (especially artichokes and tomatoes). The most famous Italian wines are probably the Tuscan Chianti and the Piedmontese Barolo. Other famous wines are Barbaresco, Barbera d'Asti, Brunello di Montalcino, Frascati, Montepulciano d'Abruzzo, Morellino di Scansano, and the sparkling wines Franciacorta and Prosecco. Quality goods in which Italy specialises, particularly the already mentioned wines and regional cheeses, are often protected under the quality assurance labels DOC/DOP. This geographical indication certificate, which is attributed by the European Union, is considered important in order to avoid confusion with low-quality mass-produced ersatz products.
Main article: Transport in Italy
File:Frecciarossa di Trenitalia.jpg
FS' Frecciarossa 1000 high-speed train, with a maximum speed of 400 km/h (249 mph),[165] is the fastest train in Italy and Europe
In 2004 the transport sector in Italy generated a turnover of about 119.4 billion euros, employing 935,700 persons in 153,700 enterprises. Regarding the national road network, in 2002 there were 668,721 km (415,524 mi) of serviceable roads in Italy, including 6,487 km (4,031 mi) of motorways, state-owned but privately operated by Atlantia. In 2005, about 34,667,000 passenger cars (590 cars per 1,000 people) and 4,015,000 goods vehicles circulated on the national road network.[166]
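The motorisation figure above implies a population estimate: 34,667,000 passenger cars at 590 cars per 1,000 people corresponds to roughly 59 million inhabitants, consistent with Italy's mid-2000s population. A back-of-the-envelope check (an illustration, not sourced data):

```python
# Figures as quoted in the text above.
cars = 34_667_000
cars_per_1000 = 590

# Implied population: cars divided by the per-capita rate.
implied_population = cars / cars_per_1000 * 1000

# Roughly 58.8 million, in line with Italy's population at the time.
assert 58_000_000 < implied_population < 60_000_000
```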
The national railway network, state-owned and operated by Ferrovie dello Stato, in 2008 totalled 16,529 km (10,271 mi) of which 11,727 km (7,287 mi) is electrified, and on which 4,802 locomotives and railcars run.
The national inland waterways network comprised 1,477 km (918 mi) of navigable rivers and channels in 2002. In 2004 there were approximately 30 main airports (including the two hubs of Malpensa International in Milan and Leonardo da Vinci International in Rome) and 43 major seaports (including the seaport of Genoa, the country's largest and second largest in the Mediterranean Sea). In 2005 Italy maintained a civilian air fleet of about 389,000 units and a merchant fleet of 581 ships.[166]
Italy needs to import about 80% of its energy requirements.[167][168][169]
Main article: Water supply and sanitation in Italy
Italy does not invest enough to maintain its drinking water supply and sanitation infrastructure, while water and sanitation tariffs are among the lowest in the European Union. The Galli Law, passed in 1993, aimed to raise the level of investment and to improve service quality by consolidating service providers, making them more efficient and increasing the level of cost recovery through tariff revenues. Despite these reforms, investment levels have declined and remain far from sufficient.[170][171][172]
Main article: Science and technology in Italy
File:Collage scienziati italiani.jpg
Clockwise from left: Alessandro Volta, inventor of the electric battery and discoverer of methane;[173]
Galileo Galilei, recognized as the Father of modern science, physics and observational astronomy;[174]
Guglielmo Marconi, inventor of the long-distance radio transmission;[175]
Enrico Fermi, creator of the first nuclear reactor, the Chicago Pile-1[176]
Through the centuries, Italy has fostered the scientific community that produced many major discoveries in physics and the other sciences. During the Renaissance Italian polymaths such as Leonardo da Vinci (1452–1519), Michelangelo (1475–1564) and Leon Battista Alberti (1404–72) made important contributions to a variety of fields, including biology, architecture, and engineering. Galileo Galilei (1564–1642), a physicist, mathematician and astronomer, played a major role in the Scientific Revolution. His achievements include key improvements to the telescope and consequent astronomical observations, and ultimately the triumph of Copernicanism over the Ptolemaic model.
Other astronomers such as Giovanni Domenico Cassini (1625–1712) and Giovanni Schiaparelli (1835–1910) made many important discoveries about the Solar System. In mathematics, Joseph Louis Lagrange (born Giuseppe Lodovico Lagrangia, 1736–1813) was active in Italy before leaving the country, while Fibonacci (c. 1170 – c. 1250) and Gerolamo Cardano (1501–76) made fundamental advances in the field, and Luca Pacioli established the modern discipline of accounting. Physicist Enrico Fermi (1901–54), a Nobel prize laureate, led the team in Chicago that developed the first nuclear reactor; he is also noted for his many other contributions to physics, including the co-development of quantum theory, and was one of the key figures in the creation of the nuclear weapon. He, Emilio G. Segrè (1905–89), who discovered the elements technetium and astatine, and the antiproton, Bruno Rossi (1905–93), a pioneer in cosmic rays and X-ray astronomy, and a number of other Italian physicists were forced to leave Italy in the 1930s by Fascist laws against Jews.[177]
Other prominent physicists and inventors include Amedeo Avogadro (most noted for his contributions to molecular theory, in particular Avogadro's law and the Avogadro constant), Evangelista Torricelli (inventor of the barometer), Alessandro Volta (inventor of the electric battery), Guglielmo Marconi (inventor of the radio), Galileo Ferraris and Antonio Pacinotti (pioneers of the induction motor), Alessandro Cruto (a pioneer of the light bulb), Innocenzo Manzetti (an eclectic pioneer of the automobile and robotics), Ettore Majorana (who predicted the existence of Majorana fermions) and Carlo Rubbia (awarded the 1984 Nobel Prize in Physics for work leading to the discovery of the W and Z particles at CERN). Antonio Meucci is known for developing a voice-communication device which is often credited as the first telephone.[178][179] In 1964 Pier Giorgio Perotto designed the first desktop computer, the Programma 101, arguably the first commercial personal computer. In biology, Francesco Redi was the first to challenge the theory of spontaneous generation, by demonstrating that maggots come from the eggs of flies, and he described 180 parasites in detail; Marcello Malpighi founded microscopic anatomy; Lazzaro Spallanzani conducted important research into bodily functions, animal reproduction and cellular theory; Camillo Golgi, whose many achievements include the discovery of the Golgi complex, paved the way for the acceptance of the neuron doctrine; and Rita Levi-Montalcini discovered the nerve growth factor (awarded the 1986 Nobel Prize in Physiology or Medicine). In chemistry, Giulio Natta received the Nobel Prize in Chemistry in 1963 for his work on high polymers. Giuseppe Occhialini received the Wolf Prize in Physics for the discovery of the pion (pi-meson) decay in 1947. Ennio de Giorgi, a Wolf Prize in Mathematics recipient in 1990, solved Bernstein's problem about minimal surfaces and the 19th Hilbert problem on the regularity of solutions of elliptic partial differential equations.
Main article: Tourism in Italy
File:Atrani (Costiera Amalfitana, 23-8-2011).jpg
The Amalfi Coast is one of the major tourist destinations[180]
Italy is the fifth most visited country in the world, with a total of 50.7 million international arrivals in 2015.[181] The total contribution of travel & tourism to GDP (including wider effects from investment, the supply chain and induced income impacts) was EUR162.7bn in 2014 (10.1% of GDP) and generated 1,082,000 jobs directly in 2014 (4.8% of total employment).[182]
Italy is well known for its cultural and environmental tourist routes and is home to 53 UNESCO World Heritage Sites, the most in the world.[183] Milan is the 6th most visited city in Europe and the 14th in the world, with an average of 7.65 million international arrivals in 2016, while Rome is the 8th and 16th respectively, with 7.12 million tourists.[184] In addition, Venice and Florence are also among the world's top 100 destinations.
Italy's most-visited landmarks include the Colosseum and Roman Forum, Pompeii, the Uffizi Gallery, the Galleria dell'Accademia, Castel Sant'Angelo, the Boboli Gardens, Venaria Reale, the Turin Egyptian Museum, the Borghese Gallery, the Royal Palace of Caserta, the Cenacolo Vinciano Museum, Villa d'Este, the Pitti Palace, the excavations of Herculaneum, the Naples National Archaeological Museum, the Medici Chapels, the Ostia Antica excavations and museum, the Blue Grotto, the Venice National Archaeological Museum, Lake Como and the Pinacoteca di Brera.[185]
Main article: Demographics of Italy
File:Map of population density in Italy (2011 census) alt colours.jpg
Map of population density in Italy as of the 2011 census.
At the end of 2013, Italy had 60,782,668 inhabitants.[186] The resulting population density, at 202 inhabitants per square kilometre (520/sq mi), is higher than that of most Western European countries. However, the distribution of the population is widely uneven. The most densely populated areas are the Po Valley (which accounts for almost half of the national population) and the metropolitan areas of Rome and Naples, while vast regions such as the Alps and Apennines highlands, the plateaus of Basilicata and the island of Sardinia are very sparsely populated.
The population of Italy almost doubled during the 20th century, but the pattern of growth was extremely uneven because of large-scale internal migration from the rural South to the industrial cities of the North, a phenomenon which happened as a consequence of the Italian economic miracle of the 1950s and 1960s. High fertility and birth rates persisted until the 1970s, after which they started to decline, and the population rapidly aged. By the end of the 2000s, one in five Italians was over 65 years old.[187] However, in recent years Italy has experienced a significant growth in birth rates.[188] The total fertility rate has also climbed from an all-time low of 1.18 children per woman in 1995 to 1.41 in 2008.[189] The TFR is expected to reach 1.6–1.8 in 2030.[190]
From the late 19th century until the 1960s Italy was a country of mass emigration. Between 1898 and 1914, the peak years of Italian diaspora, approximately 750,000 Italians emigrated each year.[191] The diaspora concerned more than 25 million Italians and it is considered the biggest mass migration of contemporary times.[192] As a result, today more than 4.1 million Italian citizens are living abroad,[193] while at least 60 million people of full or part Italian ancestry live outside of Italy, most notably in Argentina,[194] Brazil,[195] Uruguay,[196] Venezuela,[197] the United States,[198] Canada,[199] Australia[200] and France.[201]
Metropolitan cities and larger urban zone
Source:[202][203]
Metropolitan city | Region | Area (km²) | Population (1 January 2016) | FUA population (2014)
Rome | Lazio | 5,352 | 4,340,474 | 4,370,538
Milan | Lombardy | 1,575 | 3,208,509 | 4,252,246
Naples | Campania | 1,171 | 3,113,898 | 3,627,021
Turin | Piedmont | 6,829 | 2,282,127 | 1,801,729
Palermo | Sicily | 5,009 | 1,271,406 | 1,006,602
Bari | Apulia | 3,821 | 1,263,820 | 589,407
Catania | Sicily | 3,574 | 1,115,535 | 657,293
Florence | Tuscany | 3,514 | 1,113,348 | 760,325
Bologna | Emilia-Romagna | 3,702 | 1,005,831 | 770,998
Genoa | Liguria | 1,839 | 854,099 | 723,959
Venice | Veneto | 2,462 | 855,696 | 499,966
Messina | Sicily | 3,266 | 640,675 | 277,584
Reggio Calabria | Calabria | 3,183 | 555,836 | 221,789
Cagliari | Sardinia | 1,248 | 430,413 | 476,974
Main article: Immigration to Italy
File:COB data Italy.PNG
Italy is home to a large population of migrants from Eastern Europe and North Africa
In 2016, Italy had about 5.05 million foreign residents,[204] making up 8.3% of the total population. The figures include more than half a million children born in Italy to foreign nationals (second-generation immigrants), but exclude foreign nationals who have subsequently acquired Italian citizenship.[205] In 2016, about 201,000 people acquired Italian citizenship[206] (130,000 in 2014).[207] The official figures also exclude illegal immigrants, who were estimated in 2008 to number at least 670,000.[208]
Starting in the early 1980s, Italy, until then a linguistically and culturally homogeneous society, began to attract substantial flows of foreign immigrants.[209] After the fall of the Berlin Wall and, more recently, the 2004 and 2007 enlargements of the European Union, large waves of migration originated from the former socialist countries of Eastern Europe (especially Romania, Albania, Ukraine and Poland). An equally important source of immigration is neighbouring North Africa (in particular, Morocco, Egypt and Tunisia), with soaring arrivals as a consequence of the Arab Spring. Furthermore, in recent years, growing migration flows from Asia-Pacific (notably China[210] and the Philippines) and Latin America have been recorded.
Currently, about one million Romanian citizens (around 10% of them from the Romani ethnic group[211]) are officially registered as living in Italy, thus representing the most important individual country of origin, followed by Albanians and Moroccans with about 500,000 people each. The number of unregistered Romanians is difficult to estimate, but the Balkan Investigative Reporting Network suggested in 2007 that there might have been half a million or more.[212][note 3] Overall, at the end of the 2000s the foreign-born population of Italy was from: Europe (54%), Africa (22%), Asia (16%), the Americas (8%) and Oceania (0.06%). The distribution of immigrants is largely uneven in Italy: 87% of immigrants live in the northern and central parts of the country (the most economically developed areas), while only 13% live in the southern half of the peninsula.
Main articles: Languages of Italy, Italian language, and Regional Italian
File:Map Italophone World.png
Geographic distribution of the Italian language in the world
Secondary or non-official language
Italophone minorities
According to the first article of framework law no. 482/99, following Art. 6 of the Italian Constitution, Italy's official language is Italian.[214] It is estimated that there are about 64 million native Italian speakers,[215][216][217] while the total number of Italian speakers, including those who use it as a second language, is about 85 million.[218] Italian is often natively spoken in a regional variety, not to be confused with Italy's regional and minority languages;[219][220] however, the establishment of a national education system led to a decrease in variation in the languages spoken across the country during the 20th century. Standardisation was further expanded in the 1950s and 1960s thanks to economic growth and the rise of mass media and television (the state broadcaster RAI helped set a standard Italian).
File:Minoranze linguistiche it.svg
All the minority language groups officially recognised by Italy[221]
Twelve historical minority languages are formally recognised by framework law no. 482/99: Albanian, Catalan, German, Greek, Slovene, Croatian, French, Franco-Provençal, Friulian, Ladin, Occitan and Sardinian.[214] Of these, four enjoy co-official status in their respective regions: French in the Aosta Valley (although Franco-Provençal is more commonly spoken there);[222] German in South Tyrol, and Ladin in some parts of the same province and in parts of neighbouring Trentino; and Slovene in the provinces of Trieste, Gorizia and Udine. A number of other languages recognised by Ethnologue, ISO and UNESCO are not recognised by Italian law. Like France, Italy has signed the European Charter for Regional or Minority Languages, but has not ratified it.[223]
Because of recent immigration influx, Italy has sizeable populations whose native language is not Italian, nor a regional language. According to the Italian National Institute of Statistics, Romanian is the most common mother tongue among foreign residents in Italy: almost 800,000 people speak Romanian as their first language (21.9% of the foreign residents aged 6 and over). Other prevalent mother tongues are Arabic (spoken by over 475,000 people; 13.1% of foreign residents), Albanian (380,000 people) and Spanish (255,000 people). Other languages spoken in Italy are Ukrainian, Hindi, Polish and Tamil amongst others.[224]
Main article: Religion in Italy
Italy is home to many of the world's largest churches and masterpieces of architecture. Clockwise from left: Florence Cathedral, which has the biggest brick dome in the world;[225][226] St. Peter's Basilica, the largest church of Christendom;[227] Milan Cathedral, the largest Italian church and the third largest in the world;[228] and St Mark's Basilica, one of the best known examples of Italo-Byzantine architecture[229]
Roman Catholicism is, by far, the largest religion in the country, although since 1985 no longer officially the state religion.[230] In 2010, the proportion of Italians that identify themselves as Roman Catholic was 81.2%.[231]
The Holy See, the episcopal jurisdiction of Rome, contains the central government of the entire Roman Catholic Church, including various agencies essential to administration. Diplomatically, it is recognised by other subjects of international law as a sovereign entity, headed by the Pope, who is also the Bishop of Rome, with which diplomatic relations can be maintained.[232][233] Often incorrectly referred to as "the Vatican", the Holy See is not the same entity as the Vatican City State, which came into existence only in 1929; the Holy See dates back to early Christian times. Ambassadors are officially accredited not to the Vatican City State but to "the Holy See", and papal representatives to states and international organisations are recognised as representing the Holy See, not the Vatican City State.
Minority Christian faiths in Italy include Eastern Orthodox, Waldensians and other Protestant communities. In 2011, there were an estimated 1.5 million Orthodox Christians in Italy, or 2.5% of the population;[234] 0.5 million Pentecostals and Evangelicals (of whom 0.4 million are members of the Assemblies of God), 235,685 Jehovah's Witnesses,[235] 30,000 Waldensians,[236] 25,000 Seventh-day Adventists, 22,000 Latter-day Saints, 15,000 Baptists (plus some 5,000 Free Baptists), 7,000 Lutherans, 4,000 Methodists (affiliated with the Waldensian Church).[237]
One of the longest-established minority religious faiths in Italy is Judaism, Jews having been present in Ancient Rome since before the birth of Christ. Italy has for centuries welcomed Jews expelled from other countries, notably Spain. However, as a result of the Holocaust, about 20% of Italian Jews lost their lives.[238] This, together with the emigration that preceded and followed World War II, has left only a small community of around 28,400 Jews in Italy.[239]
Soaring immigration in the last two decades has been accompanied by an increase in non-Christian faiths. In 2010, there were 1.6 million Muslims in Italy, forming 2.6% of population.[231] In addition, there are more than 200,000 followers of faiths originating in the Indian subcontinent with some 70,000 Sikhs with 22 gurdwaras across the country,[240] 70,000 Hindus, and 50,000 Buddhists.[241] There were an estimated 4,900 Bahá'ís in Italy in 2005.[242]
The Italian state, as a measure to protect religious freedom, devolves shares of income tax to recognised religious communities, under a regime known as Eight per thousand (Otto per mille). Donations are allowed to Christian, Jewish, Buddhist and Hindu communities; however, Islam remains excluded, since no Muslim communities have yet signed a concordat with the Italian state.[243] Taxpayers who do not wish to fund a religion contribute their share to the state welfare system.[244]
Main article: Education in Italy
File:Archiginnasio ora blu Bologna.jpg
Bologna University, established in AD 1088, is the oldest university in the world
Education in Italy is free and mandatory from ages six to sixteen,[245] and consists of five stages: kindergarten (scuola dell'infanzia, formerly known as asilo), primary school (scuola primaria, formerly known as scuola elementare), lower secondary school (scuola secondaria di primo grado, formerly known as scuola media), upper secondary school (scuola secondaria di secondo grado, formerly known as scuola superiore) and university (università).[246]
Primary education lasts eight years. The students are given a basic education in Italian, English, mathematics, natural sciences, history, geography, social studies, physical education and visual and musical arts. Secondary education lasts for five years and includes three traditional types of schools focused on different academic levels: the liceo prepares students for university studies with a classical or scientific curriculum, while the istituto tecnico and the istituto professionale prepare pupils for vocational education. In 2012, Italian secondary education was evaluated as slightly below the OECD average, with a strong and steady improvement in science and mathematics results since 2003;[247] however, a wide gap exists between northern schools, which performed significantly better than the national average (among the best in the world in some subjects), and schools in the South, which had much poorer results.[248]
Tertiary education in Italy is divided between public universities, private universities and the prestigious and selective superior graduate schools, such as the Scuola Normale Superiore di Pisa. The university system in Italy is generally regarded as poor for a world cultural powerhouse, with no universities ranked among the 100 world best and only 20 among the top 500.[249] However, the current government has scheduled major reforms and investments in order to improve the overall internationalisation and quality of the system.[250]
Main article: Healthcare in Italy
File:Oil-1383546 1920.jpg
Olive oil and vegetables are central to the Mediterranean diet.
The Italian state has run a universal public healthcare system since 1978.[251] However, healthcare is provided to all citizens and residents by a mixed public-private system. The public part is the Servizio Sanitario Nazionale, which is organised under the Ministry of Health and administered on a devolved regional basis. Healthcare spending in Italy accounted for 9.2% of the national GDP in 2012, very close to the OECD average of 9.3%.[252] In 2000 Italy was ranked as having the world's 2nd best healthcare system,[251][253] and the world's 2nd best healthcare performance.
Life expectancy in Italy is 80 for males and 85 for females, placing the country 5th in the world for life expectancy.[254] In comparison to other Western countries, Italy has a relatively low rate of adult obesity (below 10%[255]), probably thanks to the health benefits of the Mediterranean diet. The proportion of daily smokers was 22% in 2012, down from 24.4% in 2000 but still slightly above the OECD average.[252] Smoking in public places including bars, restaurants, night clubs and offices has been restricted to specially ventilated rooms since 2005.[256] In 2013, UNESCO added the Mediterranean diet to the Representative List of the Intangible Cultural Heritage of Humanity of Italy (promoter), Morocco, Spain, Portugal, Greece, Cyprus and Croatia.[257][258]
Main article: Culture of Italy
File:Alberobello BW 2016-10-16 13-43-03.jpg
The Trulli buildings of Alberobello
For centuries divided by politics and geography until its eventual unification in 1861, Italy has developed a unique culture, shaped by a multitude of regional customs and local centres of power and patronage.[259] During the Middle Ages and the Renaissance, a number of magnificent courts competed to attract the best architects, artists and scholars, thus producing an immense legacy of monuments, paintings, music and literature.[260]
Italy has more UNESCO World Heritage Sites (53) than any other country in the world, and has rich collections of art, culture and literature from many different periods. The country has had a broad cultural influence worldwide, also because numerous Italians emigrated to other places during the Italian diaspora. Furthermore, the nation has, overall, an estimated 100,000 monuments of all kinds (museums, palaces, buildings, statues, churches, art galleries, villas, fountains, historic houses and archaeological remains).[261]
Main article: Architecture of Italy
Italy has a very broad and diverse architectural style, which cannot be classified simply by period but also varies by region, because of Italy's division into several regional states until 1861. This has created a highly diverse and eclectic range of architectural designs.
Italy is known for its considerable architectural achievements,[262] such as the construction of arches, domes and similar structures during ancient Rome, the founding of the Renaissance architectural movement in the late-14th to 16th centuries, and being the homeland of Palladianism, a style of construction which inspired movements such as Neoclassical architecture and influenced the designs in which noblemen built their country houses all over the world, notably in the UK, Australia and the US, during the late 17th to early 20th centuries. Several of the finest works in Western architecture, such as the Colosseum, the Milan and Florence cathedrals, the Leaning Tower of Pisa and the building designs of Venice, are found in Italy.
Canal Grande Chiesa della Salute e Dogana dal ponte dell Accademia.jpg
The city of Venice, built on 117 islands
Pisa - Campo Santo - Campanile 2 - 2005-08-08 10-23 2005.JPG
The Leaning Tower and the Duomo of Pisa
Reggia di Caserta, prospettiva dalla fontana di Venere e Adone - panoramio.jpg
The Royal Palace of Caserta
Vicenza - (Lista del Patrimonio Mondiale) - Villa Almerico Capra (La Rotonda).JPG
Villa Capra "La Rotonda", one of the influential Palladian villas of the Veneto
Agrigent BW 2012-10-07 13-09-13.jpg
Temple of Concordia in the Valley of the Temples, Agrigento
Italian architecture has also widely influenced the architecture of the world. British architect Inigo Jones, inspired by the designs of Italian buildings and cities, and by Andrea Palladio in particular, brought the ideas of Italian Renaissance architecture to 17th-century England.[263] Additionally, the term Italianate architecture, popular abroad since the 19th century, was used to describe foreign architecture built in an Italian style, especially modelled on Renaissance architecture.
Main article: Art of Italy
File:Leonardo da Vinci (1452-1519) - The Last Supper (1495-1498).jpg
The Last Supper (1494–1499), Leonardo da Vinci, Church of Santa Maria delle Grazie, Milan
The history of Italian visual art is part of Western painting history. Roman art was influenced by Greece and can in part be taken as a descendant of ancient Greek painting. However, Roman painting does have important unique characteristics. The only surviving Roman paintings are wall paintings, many from villas in Campania, in Southern Italy. Such painting can be grouped into four main "styles" or periods[264] and may contain the first examples of trompe-l'œil, pseudo-perspective, and pure landscape.[265]
Panel painting became more common during the Romanesque period, under the heavy influence of Byzantine icons. Towards the middle of the 13th century, medieval and Gothic painting became more realistic, with the beginnings of interest in the depiction of volume and perspective in Italy, first with Cimabue and then with his pupil Giotto. From Giotto onwards, the treatment of composition by the best painters also became much more free and innovative. Cimabue and Giotto are considered the two great medieval masters of painting in Western culture.
File:Michelangelo's David 2015.jpg
Michelangelo's David (1501–1504), Galleria dell'Accademia, Florence
The Italian Renaissance is said by many to be the golden age of painting, roughly spanning the 14th through the mid-17th centuries, with a significant influence also beyond the borders of modern Italy. In Italy artists like Paolo Uccello, Fra Angelico, Masaccio, Piero della Francesca, Andrea Mantegna, Filippo Lippi, Giorgione, Tintoretto, Sandro Botticelli, Leonardo da Vinci, Michelangelo Buonarroti, Raphael, Giovanni Bellini, and Titian took painting to a higher level through the use of perspective, the study of human anatomy and proportion, and through their development of an unprecedented refinement in drawing and painting techniques. Michelangelo was an active sculptor from about 1500 to 1520, and his great masterpieces include David, the Pietà and Moses. Other prominent Renaissance sculptors include Lorenzo Ghiberti, Luca Della Robbia, Donatello, Filippo Brunelleschi and Andrea del Verrocchio.
In the 15th and 16th centuries, the High Renaissance gave rise to a stylised art known as Mannerism. In place of the balanced compositions and rational approach to perspective that characterised art at the dawn of the 16th century, the Mannerists sought instability, artifice, and doubt. The unperturbed faces and gestures of Piero della Francesca and the calm Virgins of Raphael are replaced by the troubled expressions of Pontormo and the emotional intensity of El Greco. In the 17th century, among the greatest painters of Italian Baroque are Caravaggio, Annibale Carracci, Artemisia Gentileschi, Mattia Preti, Carlo Saraceni and Bartolomeo Manfredi. Subsequently, in the 18th century, Italian Rococo was mainly inspired by French Rococo, since France was the founding nation of that particular style, with artists such as Giovanni Battista Tiepolo and Canaletto. Italian Neoclassical sculpture focused, with Antonio Canova's nudes, on the idealist aspect of the movement.
In the 19th century, major Italian Romantic painters were Francesco Hayez, Giuseppe Bezzuoli and Francesco Podesti. Impressionism was brought from France to Italy by the Macchiaioli, led by Giovanni Fattori, and Giovanni Boldini; Realism by Gioacchino Toma and Giuseppe Pellizza da Volpedo. In the 20th century, with Futurism, primarily through the works of Umberto Boccioni and Giacomo Balla, Italy rose again as a seminal country for artistic evolution in painting and sculpture. Futurism was succeeded by the metaphysical paintings of Giorgio de Chirico, who exerted a strong influence on the Surrealists and generations of artists to follow.
Literature and theatre
Main article: Literature of Italy
Italian literature began after the founding of Rome in 753 BC. Latin literature was, and still is, highly influential in the world, with numerous writers, poets, philosophers, and historians, such as Pliny the Elder, Pliny the Younger, Virgil, Horace, Propertius, Ovid and Livy. The Romans were also famous for their oral tradition, poetry, drama and epigrams.[266] In the early 13th century, St. Francis of Assisi was considered by literary critics to be the first Italian poet, with his religious song the Canticle of the Sun.[267]
File:DanteDetail.jpg
Dante, poised between the mountain of Purgatory and the city of Florence, displays the famous incipit "Nel mezzo del cammin di nostra vita" of the Divine Comedy in a detail of Domenico di Michelino's painting, 1465
Another Italian voice originated in Sicily. At the court of emperor Frederick II, who ruled the Sicilian kingdom during the first half of the 13th century, lyrics modeled on Provençal forms and themes were written in a refined version of the local vernacular. The most important of these poets was the notary Giacomo da Lentini, inventor of the sonnet form, though the most famous early sonneteer was Petrarch.[268]
Guido Guinizelli is considered the founder of the Dolce Stil Novo, a school that added a philosophical dimension to traditional love poetry. This new understanding of love, expressed in a smooth, pure style, influenced Guido Cavalcanti and the Florentine poet Dante Alighieri, who established the basis of the modern Italian language; his greatest work, the Divine Comedy, is considered among the foremost literary statements produced in Europe during the Middle Ages; furthermore, the poet invented the difficult terza rima. The two great writers of the 14th century, Petrarch and Giovanni Boccaccio, sought out and imitated the works of antiquity and cultivated their own artistic personalities. Petrarch achieved fame through his collection of poems, Il Canzoniere. Petrarch's love poetry served as a model for centuries. Equally influential was Boccaccio's The Decameron, one of the most popular collections of short stories ever written.[269]
File:Portrait of Niccolò Machiavelli by Santi di Tito.jpg
Niccolò Machiavelli, founder of modern political science and ethics
Italian Renaissance authors produced a number of important works. Niccolò Machiavelli's The Prince is one of the world's most famous essays on political science and modern philosophy, in which the effective truth is taken to be more important than any abstract ideal. Another important work of the period, Ludovico Ariosto's Orlando Furioso, continuation of Matteo Maria Boiardo's unfinished romance Orlando Innamorato, is perhaps the greatest chivalry poem ever written. Baldassare Castiglione's dialogue The Book of the Courtier describes the ideal of the perfect court gentleman and of spiritual beauty. The lyric poet Torquato Tasso in Jerusalem Delivered wrote a Christian epic, making use of the ottava rima, with attention to the Aristotelian canons of unity.
Giovanni Francesco Straparola and Giambattista Basile, who wrote The Facetious Nights of Straparola (1550–1555) and the Pentamerone (1634) respectively, printed some of the first known versions of fairy tales in Europe.[270][271][272] In the early 17th century, some literary masterpieces were created, such as Giambattista Marino's long mythological poem, L'Adone. The Baroque period also produced the clear scientific prose of Galileo as well as Tommaso Campanella's The City of the Sun, a description of a perfect society ruled by a philosopher-priest. At the end of the 17th century, the Arcadians began a movement to restore simplicity and classical restraint to poetry, as in Metastasio's heroic melodramas. In the 18th century, playwright Carlo Goldoni created fully written plays, many portraying the middle class of his day.
Pinocchio, the title character of The Adventures of Pinocchio by Carlo Collodi, is an icon of children's literature.[273][274]
Romanticism coincided with some ideas of the Risorgimento, the patriotic movement that brought Italy political unity and freedom from foreign domination. Italian writers embraced Romanticism in the early 19th century, and the time of Italy's rebirth was heralded by the poets Vittorio Alfieri, Ugo Foscolo, and Giacomo Leopardi. The Betrothed by Alessandro Manzoni, the leading Italian Romantic, was the first Italian historical novel to glorify the Christian values of justice and Providence, and it has been called the most famous and widely read novel in the Italian language.[275]
In the late 19th century, a realistic literary movement called Verismo played a major role in Italian literature; Giovanni Verga and Luigi Capuana were its main exponents. In the same period, Emilio Salgari, writer of action adventure swashbucklers and a pioneer of science fiction, published his Sandokan series.[276] In 1883, Carlo Collodi also published the novel The Adventures of Pinocchio, the most celebrated children's classic by an Italian author and the most translated non-religious book in the world.[273] A movement called Futurism influenced Italian literature in the early 20th century. Filippo Tommaso Marinetti wrote its Manifesto of Futurism, which called for the use of language and metaphors that glorified the speed, dynamism, and violence of the machine age.[277]
Modern literary figures include Gabriele D'Annunzio, whose major works appeared between 1889 and 1910, short-story writer Italo Calvino, and novelist Umberto Eco (The Name of the Rose, 1980). Italian Nobel laureates in literature include nationalist poet Giosuè Carducci (1906), realist writer Grazia Deledda (1926), playwright Luigi Pirandello (1934), poets Salvatore Quasimodo (1959) and Eugenio Montale (1975), and satirist and playwright Dario Fo (1997).[278]
Prominent Italian philosophers include Cesare Beccaria, Giordano Bruno, Benedetto Croce, Marsilio Ficino, and Giambattista Vico.
Italian theatre can be traced back to the Roman tradition, which was heavily influenced by the Greek one; as with many other literary genres, Roman dramatists tended to adapt and translate from the Greek. For example, Seneca's Phaedra was based on that of Euripides, and many of the comedies of Plautus were direct translations of works by Menander. From the 16th century into the 18th century, Commedia dell'arte was a form of improvisational theatre, and it is still performed today. Travelling troupes of players would set up an outdoor stage and provide amusement in the form of juggling, acrobatics and, more typically, humorous plays based on a repertoire of established characters and a rough storyline, called a canovaccio.
Main article: Music of Italy
Giacomo Puccini, Italian composer whose operas, including La bohème, Tosca, Madama Butterfly and Turandot, are among the most frequently performed worldwide in the standard repertoire[279][280]
From folk music to classical, music has always played an important role in Italian culture. Instruments associated with classical music, including the piano and violin, were invented in Italy, and many of the prevailing classical music forms, such as the symphony, concerto, and sonata, can trace their roots back to innovations of 16th- and 17th-century Italian music.
Italy's most famous composers include the Renaissance composers Palestrina and Monteverdi, the Baroque composers Scarlatti, Corelli and Vivaldi, the Classical composers Paganini and Rossini, and the Romantic composers Verdi and Puccini. Modern Italian composers such as Berio and Nono proved significant in the development of experimental and electronic music. While the classical music tradition still holds strong in Italy, as evidenced by the fame of its innumerable opera houses, such as La Scala of Milan and San Carlo of Naples, and performers such as the pianist Maurizio Pollini and the late tenor Luciano Pavarotti, Italians have been no less appreciative of their thriving contemporary music scene.
Luciano Pavarotti, one of the most influential tenors of all time
Italy is widely known as the birthplace of opera.[281] Italian opera is believed to have been founded in the early 17th century, in cities such as Mantua and Venice.[281] Later works by native Italian composers of the 19th and early 20th centuries, such as Rossini, Bellini, Donizetti, Verdi and Puccini, are among the most famous operas ever written, and today they are performed in opera houses across the world. The La Scala opera house in Milan is also renowned as one of the best in the world. Famous Italian opera singers include Enrico Caruso and Alessandro Bonci.
Introduced in the early 1920s, jazz took a particularly strong foothold in Italy and remained popular despite the xenophobic cultural policies of the Fascist regime. Today, the most notable centres of jazz music in Italy include Milan, Rome, and Sicily. Italy was later at the forefront of the progressive rock and pop movement of the 1970s, with bands such as PFM, Banco del Mutuo Soccorso, Le Orme, Goblin, and Pooh. The same period saw diversification in the cinema of Italy, and Cinecittà films included complex scores by composers such as Ennio Morricone, Armando Trovaioli, Piero Piccioni and Piero Umiliani. The Italian hip hop scene began in the early 1990s with the duo Articolo 31, mainly influenced by East Coast rap.
Giorgio Moroder, pioneer of Italo disco and electronic dance music, is known as the "Father of Disco"[282]
Italy was also an important country in the development of disco and electronic music. Italo disco, known for its futuristic sound and prominent use of synthesisers and drum machines, was one of the earliest electronic dance genres and one of the earliest European forms of disco aside from Euro disco; it later went on to influence several genres, such as Eurodance and Nu-disco. Notable Italian DJs and remixers include Benny Benassi, Gigi D'Agostino, and Gabry Ponte, a member of the group Eiffel 65.
Producers such as Giorgio Moroder, who won three Academy Awards for his music, were highly influential in the development of electronic dance music. Today, Italian pop music is represented annually by the Sanremo Music Festival, which served as the inspiration for the Eurovision Song Contest, and by the Festival of Two Worlds in Spoleto. Singers such as Mina, Andrea Bocelli, Grammy winner Laura Pausini, Eros Ramazzotti and Tiziano Ferro have attained international acclaim.
Main article: Cinema of Italy
The history of Italian cinema began a few months after the Lumière brothers began motion-picture exhibitions. The first Italian film was only a few seconds long, showing Pope Leo XIII giving a blessing to the camera. The Italian film industry was born between 1903 and 1908 with three companies: the Società Italiana Cines, Ambrosio Film and Itala Film. Other companies soon followed in Milan and Naples. In a short time these first companies reached a fair level of production quality, and films were soon sold outside Italy. Cinema was later used by Benito Mussolini, who founded Rome's renowned Cinecittà studio for the production of Fascist propaganda until World War II.[283]
After the war, Italian film was widely recognised and exported until an artistic decline around the 1980s. Notable Italian film directors from this period include Vittorio De Sica, Federico Fellini, Sergio Leone, Pier Paolo Pasolini, Luchino Visconti, Michelangelo Antonioni and Roberto Rossellini, some of whom are recognised among the greatest and most influential filmmakers of all time.[284][285][286] Their films include world-cinema treasures such as Bicycle Thieves, La dolce vita, 8½, The Good, the Bad and the Ugly and Once Upon a Time in the West. The mid-1940s to the early 1950s was the heyday of neorealist films, which reflected the poor condition of post-war Italy.[287][288]
Entrance to Cinecittà in Rome, the largest film studio in Europe
As the country grew wealthier in the 1950s, a form of neorealism known as pink neorealism succeeded it, and other film genres followed: sword-and-sandal films and, in the 1960s and 1970s, spaghetti westerns were popular. Actresses such as Sophia Loren, Giulietta Masina and Gina Lollobrigida achieved international stardom during this period. Erotic Italian thrillers, or gialli, produced by directors such as Mario Bava and Dario Argento in the 1970s, also influenced the horror genre worldwide. In recent years, the Italian scene has received only occasional international attention, with films such as Life Is Beautiful, directed by Roberto Benigni, Il Postino: The Postman, with Massimo Troisi, and The Great Beauty, directed by Paolo Sorrentino.
The aforementioned Cinecittà studio is today the largest film and television production facility in continental Europe and the centre of Italian cinema, where many of the biggest box-office hits have been filmed, and it hosts one of the biggest production communities in the world. In the 1950s, the number of international productions being made there led to Rome being dubbed "Hollywood on the Tiber". More than 3,000 productions have been made on its lot, of which 90 received an Academy Award nomination and 47 won one, ranging from cinema classics to recent award-winning features (such as Ben-Hur, Cleopatra, Romeo and Juliet, The English Patient, Gladiator, The Passion of the Christ, and Gangs of New York).[289]
Italy is the most awarded country at the Academy Awards for Best Foreign Language Film, with 14 awards won, 3 Special Awards and 31 nominations. As of 2016, Italian films have also won 12 Palmes d'Or (the second-most of any country), 11 Golden Lions and 7 Golden Bears.
Main article: Sport in Italy
The Azzurri, here the 2012 squad, Italy's men's national football team
The most popular sport in Italy is, by far, football.[290] Italy's national football team (nicknamed Gli Azzurri, "the Blues") is one of the world's most successful teams, having won four FIFA World Cups (1934, 1938, 1982 and 2006).[291] Italian clubs have won 48 major European trophies, making Italy the second most successful country in European football. Italy's top-flight club football league, Serie A, ranks as the fourth best in Europe and is followed by millions of fans around the world.
First held in 1909, the Giro d'Italia is the second-oldest of the prestigious Grand Tours[292]
Other popular team sports in Italy include volleyball, basketball and rugby, and Italy's male and female national teams are often featured among the world's best. The Italian national basketball team's best results were gold at EuroBasket 1983 and EuroBasket 1999, as well as silver at the 2004 Olympics. Lega Basket Serie A is widely considered one of the most competitive leagues in Europe. Rugby union enjoys a good level of popularity, especially in the north of the country. Italy's national team competes in the Six Nations Championship and is a regular at the Rugby World Cup; Italy is ranked as a tier-one nation by World Rugby. The men's volleyball team won three consecutive World Championships (in 1990, 1994, and 1998) and earned the Olympic silver medal in 1996, 2004, and 2016.
A Ferrari SF70H of Scuderia Ferrari, the oldest surviving and most successful Formula One team[293]
Italy has a long and successful tradition in individual sports as well. Bicycle racing is a very popular sport in the country.[294] Italians have won the UCI World Championships more times than any other country except Belgium. The Giro d'Italia is a cycling race held every May and constitutes one of the three Grand Tours, along with the Tour de France and the Vuelta a España, each of which lasts approximately three weeks. Alpine skiing is also a very widespread sport in Italy, and the country is a popular international skiing destination, known for its ski resorts.[295] Italian skiers have achieved good results in the Winter Olympic Games, the Alpine Ski World Cup, and the World Championship. Tennis has a significant following in Italy, ranking as the fourth most practised sport in the country.[296] The Rome Masters, founded in 1930, is one of the most prestigious tennis tournaments in the world. Italian professional tennis players won the Davis Cup in 1976 and the Fed Cup in 2006, 2009, 2010 and 2013. Motorsports are also extremely popular in Italy: the country has won, by far, the most MotoGP World Championships, and the Italian Scuderia Ferrari is the oldest surviving team in Grand Prix racing, having competed since 1948, and statistically the most successful Formula One team in history, with a record 228 wins.
Historically, Italy has been successful in the Olympic Games, having taken part in 47 of the 48 Games since the first Olympiad. Italian sportspeople have won 522 medals at the Summer Olympic Games and another 106 at the Winter Olympic Games, for a combined total of 628 medals, 235 of them gold, which makes Italy the fifth most successful nation in Olympic history by total medals. The country has hosted two Winter Olympics (in 1956 and 2006) and one Summer Games (in 1960).
Main articles: Italian fashion and Italian design
Prada shop in Milan
Italian fashion has a long tradition and is regarded as one of the most important in the world. Milan, Florence and Rome are Italy's main fashion capitals; in the Global Language Monitor's Top Global Fashion Capital Rankings for 2013, Rome ranked sixth worldwide while Milan was twelfth.[297] Major Italian fashion labels, such as Gucci, Armani, Prada, Versace, Valentino, Dolce & Gabbana, Missoni, Fendi, Moschino, Max Mara, Trussardi, and Ferragamo, to name a few, are regarded as among the finest fashion houses in the world. The fashion magazine Vogue Italia is also considered one of the most prestigious fashion magazines in the world.[298]
Italy is also prominent in the field of design, notably interior design, architectural design, industrial design and urban design. The country has produced some well-known furniture designers, such as Gio Ponti and Ettore Sottsass, and Italian phrases such as "Bel Disegno" and "Linea Italiana" have entered the vocabulary of furniture design.[299] Classic pieces of Italian white goods and furniture include Zanussi's washing machines and fridges,[300] the "New Tone" sofas by Atrium,[300] and the post-modern bookcase by Ettore Sottsass, inspired by Bob Dylan's song "Stuck Inside of Mobile with the Memphis Blues Again".[300] Today, Milan and Turin are the nation's leaders in architectural design and industrial design. The city of Milan hosts Fiera Milano, Europe's largest design fair.[301] Milan also hosts major design and architecture-related events and venues, such as the "Fuori Salone" and the Salone del Mobile, and has been home to the designers Bruno Munari, Lucio Fontana, Enrico Castellani and Piero Manzoni.[302]
Main article: Italian cuisine
Some typical Italian foods: pizza (Margherita), pasta (Carbonara), espresso, and gelato
Italian cuisine has developed through centuries of social and political change, with roots reaching as far back as the 4th century BC. It has absorbed heavy influences, including Etruscan, ancient Greek, ancient Roman, Byzantine, and Jewish.[303] Significant changes occurred with the discovery of the New World and the introduction of items such as potatoes, tomatoes, bell peppers and maize, now central to the cuisine but not introduced in quantity until the 18th century.[304][305] Italian cuisine is noted for its regional diversity[306][307][308] and abundance of differences in taste, and it is known as one of the most popular cuisines in the world,[309] wielding strong influence abroad.[310]
The Mediterranean diet forms the basis of Italian cuisine, which is rich in pasta, fish, fruits and vegetables and characterised by its extreme simplicity and variety, with many dishes having only four to eight ingredients.[311] Italian cooks rely chiefly on the quality of the ingredients rather than on elaborate preparation.[312] Dishes and recipes often derive from local and familial tradition rather than being created by chefs, so many recipes are ideally suited for home cooking; this is one of the main reasons behind the ever-increasing worldwide popularity of Italian cuisine, from America[313] to Asia.[314] Ingredients and dishes vary widely by region.
A key factor in the success of Italian cuisine is its heavy reliance on traditional products; Italy has the most traditional specialities protected under EU law.[315] Cheese, cold cuts and wine are a major part of Italian cuisine, with many regional variations and Protected Designation of Origin or Protected Geographical Indication labels, and along with coffee (especially espresso) they make up a very important part of Italian gastronomic culture.[316] Desserts have a long tradition of merging local flavours such as citrus fruits, pistachio and almonds with sweet cheeses such as mascarpone and ricotta, or with exotic flavours such as cocoa, vanilla and cinnamon. Gelato,[317] tiramisù[318] and cassata are among the most famous examples of Italian desserts, cakes and patisserie.
Public holidays and festivals
Main articles: Public holidays in Italy and Italian festivals
The Venice Film Festival is the oldest film festival in the world and one of the "Big Three", alongside Cannes and Berlin.[319][320]
Public holidays celebrated in Italy include religious, national and regional observances.[321] Italy's National Day, the Festa della Repubblica (Republic Day) is celebrated on 2 June each year, and commemorates the birth of the Italian Republic in 1946.
The Epiphany in Italy is associated with the figure of the Befana, a broomstick-riding old woman who, in the night between 5 and 6 January, brings gifts to children, or a lump of "coal" (really black candy) for the times they have not been good during the year.[322] Saint Lucy's Day, which takes place on 13 December, is very popular among children in some Italian regions, where she plays a role similar to Santa Claus.[323]
The Assumption of Mary coincides with Ferragosto on 15 August, the summer vacation period, which may be a long weekend or most of the month.[324] Each city or town also celebrates a public holiday on the occasion of the festival of its local patron saint: for example, Rome on 29 June (Saints Peter and Paul) and Milan on 7 December (Saint Ambrose).[325]
There are many festivals and festivities in Italy, including the Palio di Siena, Holy Week rites, the Saracen Joust of Arezzo, Saint Ubaldo Day in Gubbio, the Giostra della Quintana in Foligno, and Calcio Fiorentino. In 2013, UNESCO included among the intangible cultural heritage several Italian festivals and shoulder-borne processional structures, such as the Varia di Palmi, the Macchina di Santa Rosa in Viterbo, the Festa dei Gigli in Nola, and the Faradda di li candareri in Sassari.[326]
Other festivals include the carnivals in Venice, Viareggio, Satriano di Lucania, Mamoiada, and Ivrea, the last mostly known for its Battle of the Oranges. The prestigious Venice International Film Festival, which awards the "Golden Lion" and has been held annually since 1932, is the oldest film festival in the world.[319]
Index of Italy-related articles
Outline of Italy
↑ "National demographic estimate, December 2016". ISTAT. Archived from the original on 6 August 2017. http://demo.istat.it/bil2016/index.html. Retrieved 23 October 2017.
↑ 2.0 2.1 "Archived copy". Archived from the original on 8 February 2018. https://www.imf.org/external/pubs/ft/weo/2017/02/weodata/weorept.aspx?pr.x=37&pr.y=12&sy=2015&ey=2022&scsm=1&ssd=1&sort=country&ds=.&br=1&c=136&s=NGDPD%2CPPPGDP%2CNGDPDPC%2CPPPPC&grp=0&a=. Retrieved 2018-01-12.
↑ "Gini coefficient of equivalsed disposable income (source: SILC)". Luxembourg: Eurostat. 15 June 2017. Archived from the original on 4 March 2016. http://appsso.eurostat.ec.europa.eu/nui/show.do?dataset=ilc_di12. Retrieved 24 June 2017.
↑ "2016 Human Development Report". United Nations Development Programme. 2016. Archived from the original on 22 March 2017. http://hdr.undp.org/sites/default/files/2016_human_development_report.pdf. Retrieved 23 March 2017.
↑ "Comune di Campione d'Italia". Comune.campione-d-italia.co.it. 14 July 2010. Archived from the original on 30 April 2011. http://www.comune.campione-d-italia.co.it/. Retrieved 30 October 2010.
↑ Search the agreements database Template:Webarchive Council of the European Union (retrieved 13 October 2013).
↑ Italy: The World Factbook Template:Webarchive Central Intelligence Agency (retrieved 13 October 2013).
↑ "Country names". Archived from the original on 19 May 2011. http://www.pcgn.org.uk/country_names.htm.
↑ "BBC News – Italy profile – Facts". BBC News. Archived from the original on 25 September 2013. http://www.bbc.co.uk/news/world-europe-17433143.
↑ Sée, Henri. "Modern Capitalism Its Origin and Evolution". University of Rennes. Batoche Books. Archived from the original on 7 October 2013. http://www.efm.bris.ac.uk/het/see/ModernCapitalism.pdf. Retrieved 29 August 2013.
↑ 13.0 13.1 Jepson, Tim (2012). National Geographic Traveler: Italy. National Geographic Books,. ISBN: 9781426208614. https://books.google.com/?id=f2jihJ0bq4EC&pg=PA28&dq=trade+routes+italy+new+world#v=onepage&q=trade%20routes%20italy%20new%20world&f=false.
↑ Bonetto, Cristian (2010). Discover Italy. Lonely Planet. ISBN: 9781741799958. https://books.google.com/?id=OnmfD4Ue3RMC&pg=PA169&dq=new+world+trade+italy#v=onepage&q=new%20world%20trade%20italy&f=false.
↑ Bouchard, Norma; Ferme, Valerio (2013). Italy and the Mediterranean: Words, Sounds, and Images of the Post-Cold War Era. Palgrave Macmillan. ISBN: 9781137343468. https://books.google.com/?id=_XwhAQAAQBAJ&pg=PT30&dq=new+world+trade+italy#v=onepage&q=new%20world%20trade%20italy&f=false. Retrieved 17 December 2015.
↑ "Unification of Italy". Library.thinkquest.org. 4 April 2003. Archived from the original on 7 March 2009. https://web.archive.org/web/20090307050237/http://library.thinkquest.org/TQ0312582/unification.html. Retrieved 19 November 2009.
↑ "The Italian Colonial Empire". All Empires. Archived from the original on 24 February 2012. http://www.allempires.com/article/index.php?q=italian_colonial. Retrieved 17 June 2012. "At its peak, just before WWII, the Italian Empire comprehended the territories of present time Italy, Albania, Rhodes, Dodecanese, Libya, Ethiopia, Eritrea, the majority of Somalia and the little concession of Tientsin in China"
↑ "Microsoft Word - 447F3DE3-55E9-08D35E.doc" (PDF). Archived from the original on 28 April 2017. http://globalmakeover.com/sites/economicreconstruction.com/static/JonRynn/FirstChapterDissertation.pdf. Retrieved 15 March 2017.
↑ "IMF Advanced Economies List. World Economic Outlook, April 2016, p. 148". Archived from the original on 21 April 2016. http://www.imf.org/external/pubs/ft/weo/2016/01/pdf/text.pdf.
↑ CIA (2008). "Appendix B. International Organizations and Groups.". World Factbook.. Archived from the original on 9 April 2008. https://www.cia.gov/library/publications/the-world-factbook/appendix/appendix-b.html. Retrieved 10 April 2008.
↑ Country and Lending Groups. Template:Webarchive World Bank. Retrieved 1 August 2016.
↑ Gabriele Abbondanza, Italy as a Regional Power: the African Context from National Unification to the Present Day (Rome: Aracne, 2016)
↑ "Operation Alba may be considered one of the most important instances in which Italy has acted as a regional power, taking the lead in executing a technically and politically coherent and determined strategy." See Federiga Bindi, Italy and the European Union (Washington, D.C.: Brookings Institution Press, 2011), p. 171.
↑ Canada Among Nations, 2004: Setting Priorities Straight. McGill-Queen's Press – MQUP. 17 January 2005. p. 85. ISBN: 0773528369. https://books.google.com/?id=nTKBdY5HBeUC&printsec=frontcover&dq=Canada+Among+Nations,+2004:+Setting+Priorities+Straight#v=onepage&q=Canada%20Among%20Nations%2C%202004%3A%20Setting%20Priorities%20Straight&f=false. Retrieved 13 June 2016. ("The United States is the sole world's superpower. France, Italy, Germany and the United Kingdom are great powers")
↑ Sterio, Milena (2013). The right to self-determination under international law : "selfistans", secession and the rule of the great powers. Milton Park, Abingdon, Oxon: Routledge. p. xii (preface). ISBN: 0415668182. https://books.google.com/?id=-QuI6n_OVMYC&printsec=frontcover&dq=The+Right+to+Self-determination+Under+International+Law:+%22selfistans%22,+Secession+and+the+Rule+of+the+Great+Powers#v=onepage&q=The%20Right%20to%20Self-determination%20Under%20International%20Law%3A%20%22selfistans%22%2C%20Secession%20and%20the%20Rule%20of%20the%20Great%20Powers. Retrieved 13 June 2016. ("The great powers are super-sovereign states: an exclusive club of the most powerful states economically, militarily, politically and strategically. These states include veto-wielding members of the United Nations Security Council (United States, United Kingdom, France, China, and Russia), as well as economic powerhouses such as Germany, Italy and Japan.")
↑ Alberto Manco, Italia. Disegno storico-linguistico, 2009, Napoli, L'Orientale, ISBN: 978-88-95044-62-0
↑ J.P. Mallory and D.Q. Adams, Encyclopedia of Indo-European Culture (London: Fitzroy and Dearborn, 1997), 24.
↑ Dionysius of Halicarnassus, Roman Antiquities, 1.35, on LacusCurtius
↑ Aristotle, Politics, 7.1329b Template:Webarchive, on Perseus
↑ Thucydides, The Peloponnesian War, 6.2.4 Template:Webarchive, on Perseus
↑ Pallottino, M., History of Earliest Italy, trans. Ryle, M & Soper, K. in Jerome Lectures, Seventeenth Series, p. 50
↑ Kluwer Academic/Plenum Publishers 2001, ch. 2. ISBN: 0-306-46463-2 .
↑ "Istituto Italiano di Preistoria e Protostoria". IIPP. 29 January 2010. Archived from the original on 15 October 2013. http://www.iipp.it.
↑ The Mycenaeans Template:Webarchive and Italy: the archaeological and archaeometric ceramic evidence, University of Glasgow, Department of Archaeology
↑ Emilio Peruzzi, Mycenaeans in early Latium, (Incunabula Graeca 75), Edizioni dell'Ateneo & Bizzarri, Roma, 1980
↑ Gert Jan van Wijngaarden, Use and Appreciation of Mycenaean Pottery in the Levant, Cyprus and Italy (1600–1200 B.C.): The Significance of Context, Amsterdam Archaeological Studies, Amsterdam University Press, 2001
↑ Bryan Feuer, Mycenaean civilization: an annotated bibliography through 2002, McFarland & Company; Rev Sub edition (2 March 2004)
↑ Mommsen, Theodor (1855). History of Rome, Book II: From the Abolition of the Monarchy in Rome to the Union of Italy. Leipzig: Reimer & Hirsel.
↑ Taagepera, Rein (1979). "Size and Duration of Empires: Growth-Decline Curves, 600 B.C. to 600 A.D". Social Science History (Duke University Press) 3 (3/4): 125. doi:10.2307/1170959.
↑ Turchin, Peter; Adams, Jonathan M.; Hall, Thomas D (2006). "East-West Orientation of Historical Empires". Journal of world-systems research 12 (2): 222. ISSN 1076-156X. Archived from the original on 29 August 2016. https://web.archive.org/web/20160829103201/http://peterturchin.com/PDF/Turchin_Adams_Hall_2006.pdf. Retrieved 6 February 2016.
↑ Richard, Carl J. (2010). Why we're all Romans : the Roman contribution to the western world (1st pbk. ed.). Lanham, Md.: Rowman & Littlefield. pp. xi–xv. ISBN: 0-7425-6779-6.
↑ Sarris, Peter (2011). Empires of faith : the fall of Rome to the rise of Islam, 500 – 700. (1st. pub. ed.). Oxford: Oxford UP. p. 118. ISBN: 0-19-926126-1.
↑ Nolan, Cathal J. (2006). The age of wars of religion, 1000–1650 : an encyclopedia of global warfare and civilization (1. publ. ed.). Westport (Connecticut): Greenwood Press. p. 360. ISBN: 0-313-33045-X.
↑ Jones, Philip (1997). The Italian city-state : from Commune to Signoria. Oxford: Clarendon Press. pp. 55–77. ISBN: 978-0-19-822585-0.
↑ Lane, Frederic C. (1991). Venice, a maritime republic (4. print. ed.). Baltimore: Johns Hopkins University Press. p. 73. ISBN: 0-8018-1460-X.
↑ Ali, Ahmed Essa with Othman (2010). Studies in Islamic civilization : the Muslim contribution to the Renaissance. Herndon, VA: International Institute of Islamic Thought. pp. 38–40. ISBN: 1-56564-350-X.
↑ Stéphane Barry and Norbert Gualde, "The Biggest Epidemics of History" (La plus grande épidémie de l'histoire), in L'Histoire n° 310, June 2006, pp. 45–46
↑ "Plague". Brown University. Template:Webarchive
↑ Jensen 1992, p. 64.
↑ 50.0 50.1 Strathern, Paul The Medici: Godfathers of the Renaissance (2003)
↑ Encyclopædia Britannica, Renaissance, 2008, O.Ed.
↑ Har, Michael H. History of Libraries in the Western World, Scarecrow Press Incorporate, 1999, ISBN: 0-8108-3724-2
↑ Norwich, John Julius, A Short History of Byzantium, 1997, Knopf, ISBN: 0-679-45088-2
↑ Peter Barenboim, Sergey Shiyan, Michelangelo: Mysteries of Medici Chapel, SLOVO, Moscow, 2006 Template:Webarchive. ISBN: 5-85050-825-2
↑ Leonardo Bruni; James Hankins (9 October 2010). History of the Florentine People. 1. Boston: Harvard University Press. Archived from the original on 3 January 2013. http://www.hup.harvard.edu/results-list.php?collection=1389.
↑ Karl Julius Beloch, Bevölkerungsgeschichte Italiens, volume 3, pp. 359–360.
↑ Thomas James Dandelet, John A. Marino (2007). Spain in Italy: Politics, Society, and Religion 1500–1700. Leiden: Koninklijke Brill. ISBN: 978-90-04-15429-2.
↑ Galasso, Giuseppe (1972). Storia d'Italia 1: I caratteri originali. Turin: Einaudi. pp. 509–10.
↑ Napoleon Bonaparte, "The Economy of the Empire in Italy: Instructions from Napoleon to Eugène, Viceroy of Italy," Exploring the European Past: Texts & Images, Second Edition, ed. Timothy E. Gregory (Mason: Thomson, 2007), 65–66.
↑ 60.0 60.1 "Scholar and Patriot". Manchester University Press. https://books.google.com/books?id=iWK7AAAAIAAJ&pg=PA133&dq=Garibaldi+one+of+the+greatest+generals+of+modern+time&hl=it&sa=X&ved=0ahUKEwjIxJm7j9HVAhXHC8AKHU0DA5MQ6AEIHDAA#v=onepage&q=Garibaldi+one+of+the+greatest+generals+of+modern+time&f=false.
↑ "Giuseppe Garibaldi (Italian revolutionary)". Archived from the original on 26 February 2014. http://www.britannica.com/EBchecked/topic/225978/Giuseppe-Garibaldi. Retrieved 6 March 2014.
↑ Mack Smith, Denis (1997). Modern Italy; A Political History. Ann Arbor: The University of Michigan Press. ISBN: 0-472-10895-6
↑ (Bosworth (2005), pp. 49.)
↑ Burgwyn, H. James: Italian foreign policy in the interwar period, 1918–1940. Greenwood Publishing Group, 1997. Page 4. ISBN: 0-275-94877-3
↑ Schindler, John R.: Isonzo: The Forgotten Sacrifice of the Great War. Greenwood Publishing Group, 2001. Page 303. ISBN: 0-275-97204-6
↑ Mack Smith, Denis: Mussolini. Knopf, 1982. Page 31. ISBN: 0-394-50694-4
↑ Mortara, G (1925). La Salute pubblica in Italia durante e dopo la Guerra. New Haven: Yale University Press.
↑ James H. Burgwyn (2004). General Roatta's war against the partisans in Yugoslavia: 1942 Template:Webarchive, Journal of Modern Italian Studies, Volume 9, Number 3, pp. 314–329(16)
↑ Italy's bloody secret (archived by WebCite), written by Rory Carroll, Education, The Guardian, June 2001
↑ Effie Pedaliu (2004) Template:Jstor Britain and the 'Hand-over' of Italian War Criminals to Yugoslavia, 1945–48. Journal of Contemporary History. Vol. 39, No. 4, Special Issue: Collective Memory, pp. 503–529
↑ Oliva, Gianni (2006) «Si ammazza troppo poco». I crimini di guerra italiani. 1940–43 Template:Webarchive, Mondadori, ISBN: 88-04-55129-1
↑ Baldissara, Luca & Pezzino, Paolo (2004). Crimini e memorie di guerra: violenze contro le popolazioni e politiche del ricordo, L'Ancora del Mediterraneo. ISBN: 978-88-8325-135-1
↑ Viganò, Marino (2001), "Un'analisi accurata della presunta fuga in Svizzera" (in Italian), Nuova Storia Contemporanea 3
↑ "1945: Italian partisans kill Mussolini". BBC News. 28 April 1945. Archived from the original on 26 November 2011. http://news.bbc.co.uk/onthisday/hi/dates/stories/april/28/newsid_3564000/3564529.stm. Retrieved 17 October 2011.
↑ "Italy – Britannica Online Encyclopedia". Britannica.com. Archived from the original on 19 March 2012. http://www.britannica.com/EBchecked/topic/297474/Italy#. Retrieved 2 August 2010.
↑ Adrian Lyttelton (editor), "Liberal and fascist Italy, 1900–1945", Oxford University Press, 2002. p. 13
↑ "Italia 1946: le donne al voto, dossier a cura di Mariachiara Fugazza e Silvia Cassamagnaghi" (PDF). Archived from the original on 20 May 2011. https://web.archive.org/web/20110520041048/http://www.insmli.it/pubblicazioni/35/Voto%20donne%20versione%20def.pdf. Retrieved 30 May 2011.
↑ "Commissione parlamentare d'inchiesta sul terrorismo in Italia e sulle cause della mancata individuazione dei responsabili delle stragi (Parliamentary investigative commission on terrorism in Italy and the failure to identify the perpetrators)" (in it). 1995. Archived from the original on 19 August 2006. https://web.archive.org/web/20060819211212/http://www.isn.ethz.ch/php/documents/collection_gladio/report_ital_senate.pdf. Retrieved 2 May 2006.
↑ (in English / Italian / French / German) "Secret Warfare: Operation Gladio and NATO's Stay-Behind Armies". Swiss Federal Institute of Technology / International Relation and Security Network. Archived from the original on 25 April 2006. https://web.archive.org/web/20060425182721/http://www.isn.ethz.ch/php/collections/coll_gladio.htm. Retrieved 2 May 2006.
↑ "Clarion: Philip Willan, Guardian, 24 June 2000, page 19". Cambridgeclarion.org. 24 June 2000. Archived from the original on 29 March 2010. http://www.cambridgeclarion.org/press_cuttings/us.terrorism_graun_24jun2000.html. Retrieved 24 April 2010.
↑ The so-called Second Republic was born by forceps: not through a revolt of Algiers, but formally under the same Constitution, with the mere replacement of one ruling class by another: Buonomo, Giampiero (2015). "Tovaglie pulite". Mondoperaio edizione online. Archived from the original on 24 March 2016. https://web.archive.org/web/20160324160801/https://www.questia.com/projects#!/project/89429827. (subscription required)
↑ "Italy starts to show the strains of migrant influx". The Local. Archived from the original on 29 April 2017. http://www.thelocal.it/20150519/migrant-surge-tests-italys-humanitarian-instincts. Retrieved 10 January 2017.
↑ "Italy's far right jolts back from dead". Politico. 3 February 2016. Archived from the original on 19 January 2017. http://www.politico.eu/article/italys-other-matteo-salvini-northern-league-politicians-media-effettosalvini/. Retrieved 10 January 2017.
↑ "Morphometric and hydrological characteristics of some important Italian lakes". Largo Tonolli 50, 28922 Verbania Pallanza: Istituto per lo Studio degli Ecosistemi. Archived from the original on 5 February 2010. https://web.archive.org/web/20100205043503/http://www.iii.to.cnr.it/limnol/cicloac/lagit.htm. Retrieved 3 March 2010.
↑ "Clima, cibo e ville. Il lago più bello è quello di Como" (in Italian). Il Corriere della Sera. 2014. Archived from the original on 27 September 2015. http://archiviostorico.corriere.it/2014/gennaio/24/Clima_cibo_ville_lago_piu_co_0_20140124_1110d202-84c3-11e3-9095-7e94aaaa6e8f.shtml. Retrieved 24 January 2014.
↑ "Inventario delle risorse geotermiche nazionali". UNMIG. 2011. Archived from the original on 22 July 2011. http://unmig.sviluppoeconomico.gov.it/unmig/geotermia/inventario/inventario.asp. Retrieved 14 September 2011.
↑ "Italy – Environment". Dev.prenhall.com. Archived from the original on 1 July 2009. https://web.archive.org/web/20090701064224/http://dev.prenhall.com/divisions/hss/worldreference/IT/environment.html. Retrieved 2 August 2010.
↑ "National Parks in Italy". Parks.it. 1995–2010. Archived from the original on 29 March 2010. http://www.parks.it/indice/NatParks.html. Retrieved 15 March 2010.
↑ REN21 (15 July 2010). "Renewables 2010 Global Status Report". REN21. Archived from the original on 20 August 2011. https://web.archive.org/web/20110820095506/http://www.ren21.net/Portals/97/documents/GSR/REN21_GSR2011.pdf. Retrieved 16 July 2010.
↑ "Photovoltaic energy barometer 2010 – EurObserv'ER". http://www.eurobserv-er.org/pdf/baro196.asp. Retrieved 30 October 2010. (dead link)
↑ "World Wind Energy Report 2010" (PDF). Report. World Wind Energy Association. February 2011. Archived from the original on 4 September 2011. https://web.archive.org/web/20110904232058/http://www.wwindea.org/home/images/stories/pdfs/worldwindenergyreport2010_s.pdf. Retrieved 8 August 2011.
↑ wwea
↑ "Italy – Environment". Encyclopedia of the Nations. Archived from the original on 4 January 2011. http://www.nationsencyclopedia.com/Europe/Italy-ENVIRONMENT.html. Retrieved 7 April 2010.
↑ United Nations Statistics Division, Millennium Development Goals indicators: Carbon dioxide emissions (CO₂), thousand metric tons of CO₂ (collected by CDIAC)
↑ Human-produced, direct emissions of carbon dioxide only. Excludes other greenhouse gases; land-use, land-use-change and forestry (LULUCF); and natural background flows of CO₂ (See also: Carbon cycle)
↑ [3]
↑ Duncan Kennedy (14 June 2011). "Italy nuclear: Berlusconi accepts referendum blow". Bbc.co.uk. Archived from the original on 12 June 2011. http://www.bbc.co.uk/news/world-europe-13741105. Retrieved 20 April 2013.
↑ Nick Squires (2 October 2009). "Sicily mudslide leaves scores dead". The Daily Telegraph (London). Archived from the original on 6 October 2009. http://www.telegraph.co.uk/news/worldnews/europe/italy/6255575/Sicily-mudslide-leaves-scores-dead.html#. Retrieved 2 October 2009.
↑ Livy (1797). The history of Rome. George Baker (trans.). Printed for A. Strahan.
↑ "ITALY'S FIFTH NATIONAL REPORT TO THE CONVENTION ON BIOLOGICAL DIVERSITY". Italian Ministry for the Environment, Land and Sea. Archived from the original on 18 May 2015. http://www.minambiente.it/sites/default/files/archivio/allegati/biodiversita/italian_fifth_report_cbd.pdf. Retrieved 17 May 2015.
↑ Pignatti, S.,1982 Flora d'Italia. Edagricole, Bologna, vol. 1–3, 1982
↑ Riccardo Guarino, Sabina Addamiano, Marco La Rosa, Sandro Pignatti. Flora Italiana Digitale: an interactive identification tool for the Flora of Italy
↑ Adriana Rigutti, Meteorologia, Giunti, p. 95, 2009.
↑ Thomas A. Blair, Climatology: General and Regional, Prentice Hall, pp. 131–132
↑ "Climate Atlas of Italy". Network of the Air Force Meteorological Service. Archived from the original on 14 November 2012. http://clima.meteoam.it/atlanteClimatico.php?ling=eng. Retrieved 30 September 2012.
↑ Smyth, Howard McGaw. Italy: From Fascism to the Republic (1943–1946). The Western Political Quarterly vol. 1 no. 3 (pp. 205–222), September 1948.
↑ "About us - Sistema di informazione per la sicurezza della Repubblica". Archived from the original on 29 March 2015. http://www.sicurezzanazionale.gov.it/sisr.nsf/english/about-us.html.
↑ "Elezioni politiche 2013, Riepilogo Nazionale". Il Sole 24 Ore. Archived from the original on 14 December 2014. http://www.ilsole24ore.com/speciali/2013/elezioni/risultati/politiche/static/italia.shtml. Retrieved 6 December 2014.
↑ Claudio Tucci (11 November 2008). "Confesercenti, la crisi economica rende ancor più pericolosa la mafia" (in Italian). Confesercenti. Ilsole24ore.com. Archived from the original on 27 April 2011. http://www.ilsole24ore.com/art/SoleOnLine4/Economia%20e%20Lavoro/2008/11/confesercenti-mafia-racket-pizzo.shtml?uuid=20ff3b9c-afe7-11dd-8057-9c09c8bfa449. Retrieved 21 April 2011.
↑ Nick Squires (9 January 2010). "Italy claims finally defeating the mafia". The Daily Telegraph. Archived from the original on 29 April 2011. http://www.telegraph.co.uk/news/worldnews/europe/italy/6957240/Italy-claims-finally-defeating-the-mafia.html. Retrieved 21 April 2011.
↑ Kiefer, Peter (22 October 2007). "Mafia crime is 7% of GDP in Italy, group reports". The New York Times. Archived from the original on 1 May 2011. https://www.nytimes.com/2007/10/22/world/europe/22iht-italy.4.8001812.html?_r=1. Retrieved 19 April 2011.
↑ Maria Loi (1 October 2009). "Rapporto Censis: 13 milioni di italiani convivono con la mafia" (in Italian). Censis. Antimafia Duemila. Archived from the original on 29 April 2011. https://web.archive.org/web/20110429082416/http://www.antimafiaduemila.com/content/view/20052/78/. Retrieved 21 April 2011.
↑ Kington, Tom (1 October 2009). "Mafia's influence hovers over 13 m Italians, says report". The Guardian (London). Archived from the original on 8 September 2013. https://www.theguardian.com/world/2009/oct/01/mafia-influence-hovers-over-italians. Retrieved 5 May 2010.
↑ ANSA (14 March 2011). "Italy: Anti-mafia police arrest 35 suspects in northern Lombardy region". adnkronos.com. Mafia Today. Archived from the original on 29 April 2011. http://mafiatoday.com/sicilian-mafia-ndrangheta/italy-anti-mafia-police-arrest-35-suspects-in-northern-lombardy-region/. Retrieved 21 April 2011.
↑ "Crime Statistics – Murders (per capita) (most recent) by country". NationMaster.com. Archived from the original on 29 September 2008. http://www.nationmaster.com/graph/cri_mur_percap-crime-murders-per-capita. Retrieved 4 April 2010.
↑ "MISSIONI/ATTIVITA' INTERNAZIONALI DAL 1 October 2013 AL 31 December 2013 – SITUAZIONE AL 11.12.2013". Italian Ministry of Defence. Archived from the original on 1 February 2014. http://www.difesa.it/OperazioniMilitari/Documents/SIT%20ANNO%202013%20al%2011%20dicembre%202013.pdf. Retrieved 27 January 2014.
↑ "Italian soldiers leave for Lebanon". Corriere della Sera, 30 August 2006
↑ "Italy donates 60 million euros to PA". Ma'an News Agency. 4 September 2013. Archived from the original on 18 October 2014. http://www.maannews.net/eng/ViewDetails.aspx?ID=626926. Retrieved 27 January 2014.
↑ "Law n°226 of August 23, 2004". Camera.it. Archived from the original on 17 January 2013. http://www.camera.it/parlam/leggi/04226l.htm. Retrieved 13 July 2012.
↑ "The Military Balance 2010", pp. 141–145. International Institute for Strategic Studies, 3 February 2010.
↑ Italian Ministry of Defence. "Nota aggiuntiva allo stato di previsione per la Difesa per l'anno 2009" (in Italian). Archived from the original on 4 May 2011. https://web.archive.org/web/20110504073613/http://www.difesa.it/NR/rdonlyres/5EF11493-59DD-4FB7-8485-F4258D9F5891/0/Nota_Aggiuntiva_2009.pdf. Retrieved 11 July 2014.
↑ Hans M. Kristensen / Natural Resources Defense Council (2005). "NRDC: U.S. Nuclear Weapons in Europe – part 1" (PDF). Archived from the original on 1 January 2011. https://web.archive.org/web/20110101060355/http://www.nrdc.org/nuclear/euro/euro_pt1.pdf. Retrieved 30 May 2011.
↑ "Marina Militare (Italian military navy website)" (in Italian). Marina.difesa.it. Archived from the original on 24 November 2010. http://www.marina.difesa.it/. Retrieved 30 May 2011.
↑ "The Carabinieri Force is linked to the Ministry of Defence". Carabinieri. Archived from the original on 30 April 2011. http://www.carabinieri.it/Internet/Multilingua/EN/GoverningBodies/. Retrieved 14 May 2010.
↑ "Codici comuni, province e regioni" (in Italian). Archived from the original on 10 October 2017. http://www.istat.it/it/archivio/6789. Retrieved 17 Jan 2018.
↑ "Archived copy". Archived from the original on 22 October 2017. https://www.imf.org/external/pubs/ft/weo/2017/02/weodata/weoselgr.aspx. Retrieved 2017-10-22.
↑ "Gross domestic product (2015)". The World Bank: World Development Indicators database. World Bank. 28 April 2017. Archived from the original on 1 February 2017. http://databank.worldbank.org/data/download/GDP.pdf. Retrieved 17 May 2017.
↑ Sensenbrenner, Frank; Arcelli, Angelo Federico. "Italy's Economy Is Much Stronger Than It Seems". The Huffington Post. Archived from the original on 6 December 2014. http://www.huffingtonpost.com/frank-sensenbrenner/italy-economy_b_3401988.html. Retrieved 25 November 2014.
↑ Dadush, Uri. "Is the Italian Economy on the Mend?". Carnegie Europe. Archived from the original on 13 July 2015. http://carnegieeurope.eu/publications/?fa=50565&reloadFlag=1. Retrieved 25 November 2014.
↑ "Doing Business in Italy: 2014 Country Commercial Guide for U.S. Companies". United States Commercial Service. Archived from the original on 15 July 2014. https://web.archive.org/web/20140715152504/http://www.export.gov/italy/static/2014%20CCG%20Italy_Latest_eg_it_076513.pdf. Retrieved 25 November 2014.
↑ The Economist Intelligence Unit's quality-of-life index, Economist, 2005
↑ "The Global Creativity Index 2011". Martin Prosperity Institute. Archived from the original on 30 September 2014. http://martinprosperity.org/media/GCI%20Report%20Sep%202011.pdf. Retrieved 26 November 2014.
↑ Aksoy, M. Ataman; Ng, Francis. "The Evolution of Agricultural Trade Flows". The World Bank. Archived from the original on 29 November 2014. https://openknowledge.worldbank.org/bitstream/handle/10986/3793/WPS5308.pdf?sequence=1. Retrieved 25 November 2014.
↑ Pisa, Nick (12 June 2011). "Italy overtakes France to become world's largest wine producer". The Telegraph. Archived from the original on 3 September 2011. http://www.telegraph.co.uk/foodanddrink/wine/8571222/Italy-overtakes-France-to-become-worlds-largest-wine-producer.html. Retrieved 17 August 2011.
↑ "Automotive Market Sector Profile – Italy". The Canadian Trade Commissioner Service. Archived from the original on 5 December 2014. http://www.enterprisecanadanetwork.ca/_uploads/resources/Automotive-Market-Sector-Profile-Italy.pdf. Retrieved 26 November 2014.
↑ "Data & Trends of the European Food and Drink Industry 2013–2014". FoodDrinkEurope. Archived from the original on 6 December 2014. https://web.archive.org/web/20141206010318/http://www.fooddrinkeurope.eu/uploads/publications_documents/Data__Trends_of_the_European_Food_and_Drink_Industry_2013-2014.pdf. Retrieved 26 November 2014.
↑ "Italy fashion industry back to growth in 2014". Reuters. Archived from the original on 5 December 2014. http://uk.reuters.com/article/2014/01/10/uk-italy-fashion-growth-idUKBREA0912220140110. Retrieved 26 November 2014.
↑ Leblanc, John (25 April 2014). "The top 10 largest automakers in the world". Driving. Archived from the original on 17 March 2017. http://driving.ca/toyota/corolla/auto-news/news/the-top-10-largest-automakers-in-the-world.
↑ "Trade in goods – Exports, Million US dollars, 2016". OECD. Archived from the original on 15 April 2017. https://data.oecd.org/trade/trade-in-goods.htm#indicator-chart. Retrieved 17 May 2017.
↑ "Manufacturing, value added (current US$)". Accessed 17 May 2017.
↑ "Knowledge Economy Forum 2008: Innovative Small And Medium Enterprises Are Key To Europe & Central Asian Growth". The World Bank. 19 May 2005. Archived from the original on 23 June 2008. http://web.worldbank.org/WBSITE/EXTERNAL/COUNTRIES/ECAEXT/0,,contentMDK:21808326~menuPK:258604~pagePK:2865106~piPK:2865128~theSitePK:258599,00.html. Retrieved 17 June 2008.
↑ "CIA – The World Factbook". CIA. Archived from the original on 11 February 2011. https://www.cia.gov/library/publications/the-world-factbook/geos/it.html. Retrieved 26 January 2011.
↑ "Auto: settore da 144mila imprese in Italia e 117 mld fatturato". adnkronos.com. Archived from the original on 25 September 2015. http://www.adnkronos.com/soldi/economia/2015/09/23/auto-settore-mila-imprese-italia-mld-fatturato_WooBmrBqxgxO7mOvIRXUBI.html. Retrieved 23 September 2015.
↑ "Country Profiles – Italy". acea.thisconnect.com. Archived from the original on 11 February 2008. https://web.archive.org/web/20080211190839/http://acea.thisconnect.com/index.php/country_profiles/detail/italy. Retrieved 9 February 2008.
↑ "Fiat Chrysler to spin off Ferrari, issue $2.5 billion convertible bond". Archived from the original on 29 October 2014. https://www.reuters.com/article/2014/10/29/us-fiatchrysler-ferrari-divestiture-idUSKBN0II1DB20141029. Retrieved 29 October 2014.
↑ Haigh, Robert (18 February 2014). "Ferrari – The World's Most Powerful Brand". Brand Finance. Archived from the original on 2 February 2016. http://brandfinance.com/news/ferrari--the-worlds-most-powerful-brand/.
↑ Andrews, Edmund L. (1 January 2002). "Germans Say Goodbye to the Mark, a Symbol of Strength and Unity". The New York Times. Archived from the original on 1 May 2011. https://www.nytimes.com/2002/01/01/world/germans-say-goodbye-to-the-mark-a-symbol-of-strength-and-unity.html. Retrieved 18 March 2011.
↑ Taylor Martin, Susan (28 December 1998). "On Jan. 1, out of many arises one Euro". St. Petersburg Times: p. National, 1.A.
↑ Orsi, Roberto. "The Quiet Collapse of the Italian Economy". The London School of Economics. Archived from the original on 19 November 2014. http://blogs.lse.ac.uk/eurocrisispress/2013/04/23/the-quiet-collapse-of-the-italian-economy/. Retrieved 24 November 2014.
↑ Nicholas Crafts, Gianni Toniolo (1996). Economic growth in Europe since 1945. Cambridge University Press. p. 428. ISBN: 0-521-49627-6.
↑ Balcerowicz, Leszek. "Economic Growth in the European Union". The Lisbon Council. Archived from the original on 14 July 2014. http://www.lisboncouncil.net/growth/documents/LISBON_COUNCIL_Economic_Growth_in_the_EU%20(1).pdf. Retrieved 8 October 2014.
↑ ""Secular stagnation" in graphics". The Economist. Archived from the original on 23 November 2014. https://www.economist.com/blogs/graphicdetail/2014/11/secular-stagnation-graphics. Retrieved 24 November 2014.
↑ "Government debt increased to 93.9% of GDP in euro area and to 88.0% in EU28". Eurostat. Archived from the original on 21 October 2014. http://epp.eurostat.ec.europa.eu/cache/ITY_PUBLIC/2-22072014-AP/EN/2-22072014-AP-EN.PDF. Retrieved 24 November 2014.
↑ "Could Italy Be Better Off than its Peers?". CNBC. 18 May 2010. Archived from the original on 30 April 2011. https://web.archive.org/web/20110430030613/http://www.cnbc.com/id/37207942/Could_Italy_Be_Better_Off_than_its_Peers. Retrieved 30 May 2011.
↑ "Household debt and the OECD's surveillance of member states". OECD Economics Department. Archived from the original on 9 January 2015. https://web.archive.org/web/20150109041518/http://www.nationalbanken.dk/da/om_nationalbanken/oekonomisk_forskning/Documents/4_Household%20debt%20and%20the%20OECD%27s%20surveillance%20of%20member%20states%20by%20Christophe%20Andr%C3%A9.pdf. Retrieved 26 November 2014.
↑ "Oh for a new risorgimento". The Economist. Archived from the original on 24 October 2014. http://www.economist.com/node/18780831. Retrieved 24 November 2014.
↑ "Comune per Comune, ecco la mappa navigabile dei redditi dichiarati in Italia". Archived from the original on 5 April 2015. http://www.lastampa.it/economia/speciali/redditi-italia.
↑ "GDP per capita at regional level". Istat. Archived from the original on 26 October 2017. https://www.istat.it/it/files/2016/12/Conti-regionali_2015.pdf?title=Conti+economici+territoriali+-+12%2Fdic%2F2016+-+Testo+integrale+e+nota+metodologica.pdf. Retrieved 25 October 2017.
↑ "Euro area unemployment rate at 11%". Eurostat. Archived from the original on 31 July 2017. http://ec.europa.eu/eurostat/documents/2995521/8121455/3-31072017-AP-EN.pdf/. Retrieved 26 October 2017.
↑ Istat. "Employment and unemployment: second quarter 2017" (PDF). Archived from the original on 26 October 2017. http://www.istat.it/it/files/2017/09/Mercato-del-lavoro-II-trim-2017.pdf?title=Il+mercato+del+lavoro+-+12%2Fset%2F2017+-+Testo+integrale+e+nota+metodologica.pdf. Retrieved 26 October 2017.
↑ "Censimento Agricoltura 2010". ISTAT. 24 October 2010. Archived from the original on 13 February 2015. http://dati-censimentoagricoltura.istat.it/.
↑ "OIV report on the State of the vitiviniculture world market" (PowerPoint presentation). Réseau-CONCEPT. 2010. Archived from the original on 28 July 2011. https://web.archive.org/web/20110728145648/http://news.reseau-concept.net/images/oiv_es/Client/DIAPORAMA_STATISTIQUES_Tbilissi_2010_EN.ppt.
↑ "Frecciarossa 1000 in Figures". Ferrovie dello Stato Italiane. Archived from the original on 18 December 2014. https://web.archive.org/web/20141218192603/http://www.fsitaliane.it/fsi-en/GROUP/Safety-and-Technology/Frecciarossa1000%3A-the-train-of-the-future/Frecciarossa-1000-in-Figures. Retrieved 24 November 2014.
↑ European Commission. "Panorama of Transport" (PDF). Archived from the original on 7 April 2009. https://web.archive.org/web/20090407142402/http://epp.eurostat.ec.europa.eu/cache/ITY_OFFPUB/KS-DA-07-001/EN/KS-DA-07-001-EN.PDF. Retrieved 3 May 2009.
↑ "Energy imports, net (% of energy use)". World Bank. Archived from the original on 30 April 2011. http://data.worldbank.org/indicator/EG.IMP.CONS.ZS. Retrieved 24 November 2014.
↑ Eurostat. "Energy, transport and environment indicators". Archived from the original on 23 November 2009. https://web.archive.org/web/20091123071423/http://epp.eurostat.ec.europa.eu/cache/ITY_OFFPUB/KS-DK-08-001/EN/KS-DK-08-001-EN.PDF. Retrieved 10 May 2009.
↑ Eurostat. "Panorama of energy". Archived from the original on 3 June 2010. https://web.archive.org/web/20100603143806/http://epp.eurostat.ec.europa.eu/cache/ITY_OFFPUB/KS-GH-09-001/EN/KS-GH-09-001-EN.PDF. Retrieved 10 May 2009.
↑ L. Anwandter and P. Rubino (2006). "Risks, uncertainties and conflicts of interest in the Italian water sector: A review and proposals for reform". Materiali UVAL (Public Investment Evaluation Unit of the Department for Development and Cohesion Policies (DPS) in the Ministry for Economic Development), according to ISTAT figures analysed by the Water Resources Surveillance Committee (CoViRi). p. 9.
↑ Bardelli, Lorenzo. "Pro aqua Italian policy to get prices and governance right". Utilitatis, 29th International Congress of CIRIEC, Wien, 14 September 2012. p. 16.
↑ Albasser, Francesco (May 2012). "The Italian Water industry – Beyond the Public/Private debate & back to basics, Presentation at the Conference Water Loss Europe". in3act Energy. p. 12.
↑ Giuliano Pancaldi, "Volta: Science and culture in the age of enlightenment", Princeton University Press, 2003.
↑ Weidhorn, Manfred (2005). The Person of the Millennium: The Unique Impact of Galileo on World History. iUniverse. p. 155. ISBN: 0-595-36877-8.
↑ Bondyopadhyay, Prebir K. (1995). "Guglielmo Marconi – The father of long distance radio communication – An engineer's tribute". 25th European Microwave Conference, 1995. p. 879. doi:10.1109/EUMA.1995.337090.
↑ "Enrico Fermi, architect of the nuclear age, dies". Autumn 1954. Archived from the original on 17 November 2015. http://www.history.com/this-day-in-history/enrico-fermi-architect-of-the-nuclear-age-dies.
↑ Lucia Orlando, "Physics in the 1930s: Jewish Physicists' Contribution to the Realization of the 'New Tasks' of Physics in Italy." Historical Studies in the Physical and Biological Sciences (1998): 141–181.
↑ Wheen, Andrew. Dot-Dash to Dot.com: How Modern Telecommunications Evolved from the Telegraph to the Internet. Springer, 2010. p. 45. Web. 23 September 2011.
↑ Cleveland, Cutler (Lead Author); Saundry, Peter (Topic Editor). Meucci, Antonio. Encyclopedia of Earth, 2006. Web. 22 July 2012.
↑ "Foreign tourist numbers in Italy head towards new record". Retrieved 21 May 2017.
↑ "2016 Tourism Highlights". World Tourism Organization. http://www.e-unwto.org/doi/pdf/10.18111/9789284418145. Retrieved 4 August 2016.
↑ "Travel & Tourism Economic Impact 2015 Italy". World Travel and Tourism Council. Archived from the original on 10 October 2017. https://www.wttc.org/-/media/files/reports/economic%20impact%20research/countries%202015/italy2015.pdf. Retrieved 20 May 2017.
↑ "The World Heritage Convention". UNESCO. Archived from the original on 27 August 2016. http://whc.unesco.org/en/convention/. Retrieved 17 September 2010.
↑ "Global Destination Cities Index by Mastercard, 2016 edition". Archived from the original on 24 September 2016. https://newsroom.mastercard.com/wp-content/uploads/2016/09/FINAL-Global-Destination-Cities-Index-Report.pdf.
↑ "2013 Survey on Museums, Monuments and Archeological sites". Italian Ministry of Heritage and Cultural Activities. Archived from the original on 10 October 2017. http://www.statistica.beniculturali.it/RILEVAZIONI/MUSEI/Anno%202013/MUSEI_TAVOLA8_2013.pdf. Retrieved 20 May 2017.
↑ "National demographic balance, 2013". Istat. Archived from the original on 6 October 2014. http://www.istat.it/it/files/2014/06/Bilanciodemografico_2013_def.pdf?title=Bilancio+demografico+nazionale+-+16%2Fgiu%2F2014+-+Testo+integrale.pdf. Retrieved 1 October 2014.
↑ EUROSTAT. "Ageing characterises the demographic perspectives of the European societies – Issue number 72/2008". Archived from the original on 2 January 2009. https://web.archive.org/web/20090102184227/http://epp.eurostat.ec.europa.eu/cache/ITY_OFFPUB/KS-SF-08-072/EN/KS-SF-08-072-EN.PDF. Retrieved 28 April 2009.
↑ ISTAT. "Crude birth rates, mortality rates and marriage rates 2005–2008" (in it). Archived from the original on 21 August 2011. http://demo.istat.it/altridati/indicatori/2008/Tab_1.pdf. Retrieved 10 May 2009.
↑ ISTAT. "Average number of children born per woman 2005–2008" (in it). Archived from the original on 21 August 2011. http://demo.istat.it/altridati/indicatori/2008/Tab_4.pdf. Retrieved 3 May 2009.
↑ "Previsioni della popolazione, 2011–2065, dati al 1° gennaio". Demo.istat.it. Archived from the original on 6 March 2013. https://web.archive.org/web/20130306125456/http://demo.istat.it/uniprev2011/index.html?lingua=ita. Retrieved 12 March 2013.
↑ "Causes of the Italian mass emigration". ThinkQuest Library. 15 August 1999. Archived from the original on 1 July 2009. https://web.archive.org/web/20090701010600/http://library.thinkquest.org/26786/en/articles/view.php3?arKey=4&paKey=7&loKey=0&evKey=&toKey=&torKey=&tolKey=. Retrieved 11 August 2014.
↑ Favero, Luigi and Tassello, Graziano. Cent'anni di emigrazione italiana (1861–1961), Introduction.
↑ "Statistiche del Ministero dell'Interno". Archived from the original on 27 February 2010. https://web.archive.org/web/20100227045432/http://www.interno.it/mininterno/export/sites/default/it/sezioni/servizi/legislazione/elezioni/0947_2010_02_01_DM27012010.html.
↑ Lee, Adam (3 April 2006). "Unos 20 millones de personas que viven en la Argentina tienen algún grado de descendencia italiana" (in Spanish). Archived from the original on 11 June 2008. http://www.asteriscos.tv/dossier-3.html. Retrieved 27 June 2008.
↑ Consulta Nazionale Emigrazione. Progetto ITENETs – "Gli italiani in Brasile"; pp. 11, 19. Retrieved 10 September 2008.
↑ "Ethnic origins, 2006 counts, for Uruguay, provinces and territories – 20% sample data". Archived from the original on 11 May 2011. http://www.hotelsclick.com/hoteles/UY/Uruguay-DEMOGRAF%C3%ADA-5.html.
↑ Santander Laya-Garrido, Alfonso. Los Italianos forjadores de la nacionalidad y del desarrollo economico en Venezuela. Editorial Vadell. Valencia, 1978
↑ American FactFinder, United States Census Bureau. "U.S Census Bureau – Selected Population Profile in the United States". American FactFinder, United States Census Bureau. Archived from the original on 30 April 2011. https://web.archive.org/web/20110430031737/http://factfinder.census.gov/servlet/IPTable?_bm=y&-reg=ACS_2006_EST_G00_S0201%3A543%3BACS_2006_EST_G00_S0201PR%3A543%3BACS_2006_EST_G00_S0201T%3A543%3BACS_2006_EST_G00_S0201TPR%3A543&-qr_name=ACS_2006_EST_G00_S0201&-qr_name=ACS_2006_EST_G00_S0201PR&-qr_name=ACS_2006_EST_G00_S0201T&-qr_name=ACS_2006_EST_G00_S0201TPR&-ds_name=ACS_2006_EST_G00_&-TABLE_NAMEX=&-ci_type=A&-redoLog=true&-charIterations=047&-geo_id=01000US&-geo_id=NBSP&-format=&-_lang=en. Retrieved 30 May 2011.
↑ "Ethnic origins, 2006 counts, for Canada, provinces and territories – 20% sample data". Archived from the original on 1 November 2009. http://www12.statcan.ca/english/census06/data/highlights/ethnic/pages/Page.cfm?Lang=E&Geo=PR&Code=01&Data=Count&Table=2&StartRec=1&Sort=3&Display=All&CSDFilter=5000.
↑ "20680-Ancestry by Country of Birth of Parents – Time Series Statistics (2001, 2006 Census Years) – Australia". Australian Bureau of Statistics. 27 June 2007. Archived from the original on 1 October 2007. https://web.archive.org/web/20071001032237/http://www.censusdata.abs.gov.au/ABSNavigation/prenav/ViewData?action=404&documentproductno=0&documenttype=Details&order=1&tabname=Details&areacode=0&issue=2006&producttype=Census%20Tables&javascript=true&textversion=false&navmapdisplayed=true&breadcrumb=LPTD&&collection=Census&period=2006&productlabel=Ancestry%20by%20Country%20of%20Birth%20of%20Parents%20-%20Time%20Series%20Statistics%20%282001%2C%202006%20Census%20Years%29&producttype=Census%20Tables&method=Place%20of%20Usual%20Residence&topic=Ancestry&. Retrieved 30 December 2008.
↑ "The Cambridge survey of world migration". Robin Cohen (1995). Cambridge University Press. p. 143. ISBN: 0-521-44405-5
↑ Roberto, Vincenzo Patruno, Marina Venturi, Silvestro. "Demo-Geodemo. - Mappe, Popolazione, Statistiche Demografiche dell'ISTAT". Archived from the original on 9 July 2011. http://demo.istat.it/.
↑ "Archived copy". Archived from the original on 3 September 2015. http://appsso.eurostat.ec.europa.eu/nui/show.do?dataset=urb_lpop1&lang=en. Retrieved 2017-11-03.
↑ "Resident Foreigners on 31st December 2016". Istat. Archived from the original on 22 June 2017. http://demo.istat.it/index_e.html. Retrieved 15 June 2017.
↑ "Immigrants.Stat". Istat. Archived from the original on 9 July 2017. http://stra-dati.istat.it/Index.aspx. Retrieved 15 June 2017.
↑ "National demographic balance 2016". Istat. https://www.istat.it/en/archive/201143. Retrieved 15 June 2017.
↑ "National demographic balance 2014". Istat. Archived from the original on 2 May 2017. http://www.istat.it/en/archive/162261. Retrieved 15 June 2017.
↑ Elisabeth Rosenthal, "Italy cracks down on illegal immigration". The Boston Globe. 16 May 2008.
↑ Allen, Beverly (1997). Revisioning Italy national identity and global culture. Minneapolis: University of Minnesota Press. p. 169. ISBN: 978-0-8166-2727-1.
↑ "Milan police in Chinatown clash". BBC News. 13 April 2007.
↑ "EUROPE: Home to Roma, And No Place for Them". IPS ipsnews.net.
↑ "Balkan Investigative Reporting Network". Birn.eu.com. 8 November 2007. http://www.birn.eu.com/en/111/15/5745/. Retrieved 4 November 2008.
↑ Mitrica, Mihai Un milion de romani s-au mutat in Italia ("One million Romanians have moved to Italy"). Evenimentul Zilei, 31 October 2005. Visited 11 April 2006.
↑ "Legge 15 Dicembre 1999, n. 482 "Norme in materia di tutela delle minoranze linguistiche storiche" pubblicata nella Gazzetta Ufficiale n. 297 del 20 dicembre 1999". Italian Parliament. Archived from the original on 12 May 2015. http://www.camera.it/parlam/leggi/99482l.htm. Retrieved 2 December 2014.
↑ Italian language. Ethnologue.com
↑ "Eurobarometer – Europeans and their languages" (485 KB). February 2006. Archived from the original on 30 April 2011. http://ec.europa.eu/public_opinion/archives/ebs/ebs_243_sum_en.pdf.
↑ Nationalencyklopedin "Världens 100 största språk 2007" The World's 100 Largest Languages in 2007
↑ Italian language. University of Leicester
↑ "UNESCO Atlas of the World's Languages in danger" (in en). Archived from the original on 18 December 2016. http://www.unesco.org/languages-atlas/index.php.
↑ "Italian language". Encyclopædia Britannica. 3 November 2008. Archived from the original on 29 November 2009. http://www.britannica.com/EBchecked/topic/297241/Italian-language. Retrieved 19 November 2009.
↑ "Lingue di Minoranza e Scuola: Carta Generale". Archived from the original on 10 October 2017. http://www.minoranze-linguistiche-scuola.it/carta-generale/.
↑ [L.cost. 26 febbraio 1948, n. 4, Statuto speciale per la Valle d'Aosta; L.cost. 26 febbraio 1948, n. 5, Statuto speciale per il Trentino-Alto Adige; L. cost. 31 gennaio 1963, n. 1, Statuto speciale della Regione Friuli Venezia Giulia]
↑ "Ready for Ratification". European Centre for Minority Issues. Archived from the original on 3 January 2018. https://rm.coe.int/european-centre-for-minority-issues-vol-1-/1680737191.
↑ "Linguistic diversity among foreign citizens in Italy". Italian National Institute of Statistics. Archived from the original on 30 July 2014. http://www.istat.it/en/archive/129304. Retrieved 27 July 2014.
↑ "The Duomo of Florence | Tripleman". tripleman.com. Archived from the original on 6 December 2009. http://www.tripleman.com/index.php?showimage=737. Retrieved 25 March 2010.
↑ "Brunelleschi's Dome". Brunelleschi's Dome.com. Archived from the original on 16 April 2010. http://www.brunelleschisdome.com/. Retrieved 25 March 2010.
↑ "St. Peter's Basilica (Basilica di San Pietro) in Rome, Italy". reidsitaly.com. Archived from the original on 23 February 2015. http://www.reidsitaly.com/destinations/lazio/rome/sights/st_peters.html.
↑ See List of largest church buildings in the world; note that the #3 entry, First Family Church building in Kansas, is now a school education complex.
↑ "Basilica di San Marco". Archived from the original on 5 March 2015. https://web.archive.org/web/20150305102304/http://www.basilicasanmarco.it/WAI/eng/basilica/architettura/interne/fasi_costrutt.bsm.
↑ "Catholicism No Longer Italy's State Religion". Sun Sentinel. 4 June 1985. Archived from the original on 20 October 2013. http://articles.sun-sentinel.com/1985-06-04/news/8501220260_1_italian-state-new-agreement-church. Retrieved 7 September 2013.
↑ 231.0 231.1 "The Global Catholic Population". Pew Research Center. Archived from the original on 19 August 2014. http://www.pewforum.org/2013/02/13/the-global-catholic-population/. Retrieved 24 August 2014.
↑ Text taken directly from "Archived copy". Archived from the original on 31 December 2010. https://web.archive.org/web/20101231084624/http://www.fco.gov.uk/en/travel-and-living-abroad/travel-advice-by-country/country-profile/europe/holy-see/. Retrieved 5 February 2016. (viewed on 14 December 2011), on the website of the British Foreign & Commonwealth Office.
↑ The Holy See's sovereignty has been recognized explicitly in many international agreements and is particularly emphasized in article 2 of the Lateran Treaty of 11 February 1929, in which "Italy recognizes the sovereignty of the Holy See in international matters as an inherent attribute in conformity with its traditions and the requirements of its mission to the world" (Lateran Treaty, English translation).
↑ Leustean, Lucian N. (2014). Eastern Christianity and Politics in the Twenty-First Century. Routledge. p. 723. ISBN: 978-0-415-68490-3.
↑ "Le religioni in Italia: I Testimoni di Geova (Religions in Italy: The Jehovah's Witnesses)" (in Italian). Center for Studies on New Religions. Archived from the original on 6 June 2011. http://www.cesnur.org/religioni_italia/t/testimoni_geova_02.htm. Retrieved 30 May 2011.
↑ "Chiesa Evangelica Valdese – Unione delle chiese Metodiste e Valdesi (Waldensian Evangelical Church – Union of Waldensian and Methodist churches)" (in Italian). Chiesa Evangelica Valdese – Unione delle chiese Metodiste e Valdesi (Waldensian Evangelical Church – Union of Waldensian and Methodist churches). http://www.chiesavaldese.org/pages/storia/dove_viviamo.php. Retrieved 30 May 2011.
↑ "World Council of Churches – Evangelical Methodist Church in Italy". World Council of Churches. Archived from the original on 9 July 2008. https://web.archive.org/web/20080709033652/http://www.oikoumene.org/en/member-churches/regions/europe/italy/evangelical-methodist-church-in-italy.html. Retrieved 30 October 2010.
↑ Dawidowicz, Lucy S. (1986). The war against the Jews, 1933–1945. New York: Bantam Books. ISBN: 0-553-34302-5.p. 403
↑ "THE JEWISH COMMUNITY OF ITALY Unione delle Comunita Ebraiche Italiane". The European Jewish Congress. http://www.eurojewcong.org/communities/italy.html. Retrieved 25 August 2014.
↑ "NRI Sikhs in Italy". Nriinternet.com. 15 November 2004. Archived from the original on 7 February 2011. http://www.nriinternet.com/EUROPE/ITALY/2004/111604Gurdwara.htm. Retrieved 30 October 2010.
↑ "Unione Buddhista Italiana – UBI: L'Ente". Buddhismo.it. 18 August 2009. Archived from the original on 4 April 2007. https://web.archive.org/web/20070404034319/http://www.buddhismo.it/ente.htm. Retrieved 30 October 2010.
↑ "Most Baha'i Nations (2005)". QuickLists > Compare Nations > Religions >. The Association of Religion Data Archives. 2005. Archived from the original on 14 April 2010. http://www.thearda.com/QuickLists/QuickList_40c.asp. Retrieved 30 January 2010.
↑ "Italy: Islam denied income tax revenue – Adnkronos Religion". Adnkronos.com. 7 April 2003. Archived from the original on 20 June 2013. http://www.adnkronos.com/AKI/English/Religion/?id=3.1.880028077. Retrieved 2 June 2013.
↑ Camera dei deputati Dossier BI0350 Template:Webarchive. Documenti.camera.it (10 March 1998). Retrieved on 12 July 2013.
↑ "Law 27 December 2007, n.296". Italian Parliament. Archived from the original on 6 December 2012. http://www.camera.it/parlam/leggi/06296l.htm. Retrieved 30 September 2012.
↑ "| Human Development Reports". Hdr.undp.org. Archived from the original on 29 April 2011. https://web.archive.org/web/20110429033726/http://hdr.undp.org/en/media/HDR_20072008_EN_Complete.pdf. Retrieved 18 January 2014.
↑ "PISA 2012 Results". OECD. Archived from the original on 4 March 2016. http://www.oecd.org/pisa/keyfindings/PISA-2012-results-italy.pdf. Retrieved 16 November 2015.
↑ "The literacy divide: territorial differences in the Italian education system". Parthenope University of Naples. Archived from the original on 17 November 2015. https://web.archive.org/web/20151117015624/http://new.sis-statistica.org/wp-content/uploads/2013/10/CO09-The-literacy-divide-territorial-differences-in-the-Italian.pdf. Retrieved 16 November 2015.
↑ "Academic Ranking of World Universities 2015". Shanghai Ranking Consultancy. 2015. Archived from the original on 30 October 2015. http://www.shanghairanking.com/ARWU2015.html. Retrieved 29 October 2015.
↑ "Italy's Budget/4: 500 new university "chairs of excellence" open up to foreign professors and scholars". Il Sole 24 Ore Digital Edition. Archived from the original on 17 October 2015. http://www.italy24.ilsole24ore.com/art/government-policies/2015-10-15/italy-s-stability-law-funds-500-new-university-professors-open-to-foreign-candidates--174432.php?uuid=ACDy9uGB. Retrieved 16 November 2015.
↑ 251.0 251.1 "Italy – Health". Dev.prenhall.com. Archived from the original on 1 July 2009. https://web.archive.org/web/20090701064229/http://dev.prenhall.com/divisions/hss/worldreference/IT/health.html. Retrieved 2 August 2010.
↑ 252.0 252.1 "OECD Health Statistics 2014 How Does Italy Compare?". OECD. 2014. Archived from the original on 24 September 2015. https://web.archive.org/web/20150924133234/http://www.oecd.org/els/health-systems/Briefing-Note-ITALY-2014.pdf.
↑ "The World Health Organization's ranking of the world's health systems". ΦΩΤΗΣ ΚΟΥΤΣΟΥΚΗΣ (Photius Coutsoukis). Archived from the original on 5 January 2010. http://www.photius.com/rankings/healthranks.html. Retrieved 27 October 2009.
↑ "World Health Statistics 2016: Monitoring health for the SDGs Annex B: tables of health statistics by country, WHO region and globally". World Health Organization. 2016. Archived from the original on 23 June 2016. http://www.who.int/gho/publications/world_health_statistics/2016/Annex_B/en/. Retrieved 27 June 2016.
↑ "Global Prevalence of Adult Obesity" (PDF). International Obesity Taskforce. Archived from the original on 11 December 2009. https://www.webcitation.org/5lwMsu50m?url=http://www.iotf.org/database/documents/GlobalPrevalenceofAdultObesity16thDecember08.pdf. Retrieved 29 January 2008.
↑ "Smoking Ban Begins in Italy | Europe | DW.COM | 10 January 2005". Deutsche Welle. Archived from the original on 21 June 2015. http://www.dw.com/en/smoking-ban-begins-in-italy/a-1453590. Retrieved 1 August 2010.
↑ "UNESCO Culture Sector, Eighth Session of the Intergovernmental Committee (8.COM) – from 2 to 7 December 2013". Archived from the original on 20 December 2013. http://www.unesco.org/culture/ich/index.php?lg=en&pg=00473. Retrieved 3 April 2014.
↑ "UNESCO – Culture – Intangible Heritage – Lists & Register – Inscribed Elements – Mediterranean Diet". Archived from the original on 15 April 2014. http://www.unesco.org/culture/ich/index.php?lg=en&pg=00011&RL=00884. Retrieved 3 April 2014.
↑ Killinger, Charles (2005). Culture and customs of Italy (1. publ. ed.). Westport, Conn.: Greenwood Press. p. 3. ISBN: 0-313-32489-1.
↑ Cole, Alison (1995). Virtue and magnificence : art of the Italian Renaissance courts. New York: H.N. Abrams. ISBN: 0-8109-2733-0.
↑ Eyewitness Travel (2005), pg. 19
↑ Architecture in Italy Template:Webarchive, ItalyTravel.com
↑ "History – Historic Figures: Inigo Jones (1573–1652)". BBC. 1 January 1970. Archived from the original on 21 August 2013. http://www.bbc.co.uk/history/historic_figures/jones_inigo.shtml. Retrieved 12 March 2013.
↑ "Roman Painting". art-and-archaeology.com. Archived from the original on 26 July 2013. http://www.art-and-archaeology.com/roman/painting.html.
↑ "Roman Wall Painting". accd.edu. Archived from the original on 19 March 2007. https://web.archive.org/web/20070319123717/http://www.accd.edu/sac/vat/arthistory/arts1303/Rome4.htm.
↑ "Poetry and Drama: Literary Terms and Concepts.". The Rosen Publishing Group. 2011. https://books.google.com/books?id=LHA_SydyKOYC&pg=PA39&dq. Retrieved 18 October 2011.
↑ Brand, Peter; Pertile, Lino, eds. (1999). "2 - Poetry. Francis of Assisi (pp. 5ff.)". The Cambridge History of Italian Literature. Cambridge University Press. ISBN: 978-0-52166622-0. Archived from the original on 10 June 2016. https://books.google.com/books?id=3uq0bObScHMC&pg=PA5&dq=%22Poetry+Francis+of+Assisi%22. Retrieved 31 December 2015.
↑ Ernest Hatch Wilkins, The invention of the sonnet, and other studies in Italian literature (Rome: Edizioni di Storia e letteratura, 1959), 11–39
↑ Template:Cite encyclopedia
↑ Steven Swann Jones, The Fairy Tale: The Magic Mirror of Imagination, Twayne Publishers, New York, 1995, ISBN: 0-8057-0950-9 , p38
↑ Bottigheimer 2012a, 7; Waters 1894, xii; Zipes 2015, 599.
↑ Opie, Iona; Opie, Peter (1974), The Classic Fairy Tales, Oxford and New York: Oxford University Press, ISBN: 0-19-211559-6 See page 20. The claim for earliest fairy-tale is still debated, see for example Jan M. Ziolkowski, Fairy tales from before fairy tales: the medieval Latin past of wonderful lies, University of Michigan Press, 2007. Ziolkowski examines Egbert of Liège's Latin beast poem Fecunda natis (The Richly Laden Ship, c. 1022/24), the earliest known version of "Little Red Riding Hood". Further info: Little Red Pentecostal, Peter J. Leithart, 9 July 2007.
↑ 273.0 273.1 Giovanni Gasparini. La corsa di Pinocchio. Milano, Vita e Pensiero, 1997. p. 117. ISBN: 88-343-4889-3
↑ "Pinocchio: Carlo Collodi - Children's Literature Review". Encyclopedia.com. Archived from the original on 3 October 2015. http://www.encyclopedia.com/article-1G2-2697200012/pinocchio-carlo-collodi.html. Retrieved 1 October 2015.
↑ Archibald Colquhoun. Manzoni and his Times. J. M. Dent & Sons, London, 1954.
↑ Gaetana Marrone; Paolo Puppa (2006). Encyclopedia of Italian Literary Studies. Routledge. p. 1654. ISBN: 978-1-135-45530-9. https://books.google.com/books?id=d9NcAgAAQBAJ&pg=PA1654.
↑ The 20th-Century art book. (Reprinted. ed.). dsdLondon: Phaidon Press. 2001. ISBN: 0714835420.
↑ "All Nobel Prizes in Literature". Nobelprize.org. Archived from the original on 29 May 2011. http://nobelprize.org/nobel_prizes/literature/laureates/. Retrieved 30 May 2011.
↑ "Quick Opera Facts 2007". OPERA America. 2007. Archived from the original on 1 October 2006. https://web.archive.org/web/20061001054025/http://www.operaamerica.org/pressroom/quickfacts2006.html. Retrieved 23 April 2007.
↑ Alain P. Dornic (1995). "An Operatic Survey". Opera Glass. Archived from the original on 14 September 2007. http://opera.stanford.edu/misc/Dornic_survey.html. Retrieved 23 April 2007.
↑ 281.0 281.1 Kimbell, David R. B (29 April 1994). Italian Opera. Google Books. ISBN: 978-0-521-46643-1. https://books.google.com/?id=C37Gq2GagZIC&dq=Italian+opera&printsec=frontcover&q=. Retrieved 20 December 2009.
↑ "This record was a collaboration between Philip Oakey, the big-voiced lead singer of the techno-pop band the Human League, and Giorgio Moroder, the Italian-born father of disco who spent the '80s writing synth-based pop and film music." Evan Cater. [[[:Template:Allmusic]] "Philip Oakey & Giorgio Moroder: Overview"]. AllMusic. Template:Allmusic. Retrieved 21 December 2009.
↑ "The Cinema Under Mussolini". Ccat.sas.upenn.edu. Archived from the original on 31 July 2010. http://ccat.sas.upenn.edu/italians/resources/Amiciprize/1996/mussolini.html. Retrieved 30 October 2010.
↑ Ebert, Roger. "The Bicycle Thief / Bicycle Thieves (1949)". Chicago Sun-Times. Archived from the original on 20 July 2010. http://rogerebert.suntimes.com/apps/pbcs.dll/article?AID=/19990319/REVIEWS08/903190306/1023. Retrieved 8 September 2011.
↑ "The 25 Most Influential Directors of All Time". MovieMaker Magazine. Archived from the original on 11 December 2015. http://www.moviemaker.com/archives/moviemaking/directing/articles-directing/the-25-most-influential-directors-of-all-time-3358/.
↑ "10 Most Influential Directors Of All Time". WhatCulture.com. Archived from the original on 21 November 2015. http://whatculture.com/film/10-most-influential-directors-of-all-time.php/2.
↑ "Historical origins of italian neorealism – Neorealism – actor, actress, film, children, voice, show, born, director, son, cinema, scene". Filmreference.com. Archived from the original on 14 May 2012. http://www.filmreference.com/encyclopedia/Independent-Film-Road-Movies/Neorealism-HISTORICAL-ORIGINS-OF-ITALIAN-NEOREALISM.html. Retrieved 7 September 2011.
↑ "Italian Neorealism – Explore – The Criterion Collection". Criterion.com. Archived from the original on 18 September 2011. http://www.criterion.com/explore/6-italian-neorealism. Retrieved 7 September 2011.
↑ Bondanella, Peter E. (2001) (in en). Italian Cinema: From Neorealism to the Present. Continuum. p. 13. ISBN: 9780826412478. https://books.google.com/books/about/Italian_cinema.html?id=PiTBFMc7tp4C.
↑ Hamil, Sean; Chadwick, Simon (2010). Managing football : an international perspective (1st ed., dodr. ed.). Amsterdam: Elsevier/Butterworth-Heinemann. p. 285. ISBN: 1-85617-544-8.
↑ "Previous FIFA World Cups". FIFA.com. Archived from the original on 25 January 2011. https://www.fifa.com/worldcup/archive/index.html. Retrieved 8 January 2011.
↑ "Union Cycliste Internationale". Archived from the original on 14 November 2012. http://www.uciprotour.com/Modules/BUILTIN/getObject.asp?MenuId=MTcxNw&ObjTypeCode=FILE&type=FILE&id=34028&LangId=1.
↑ "Ferrari". Formula 1 - The Official F1 Website. Archived from the original on 8 February 2016. https://www.formula1.com/content/fom-website/en/championship/teams/Ferrari.html. Retrieved 6 February 2016.
↑ Foot, John. Pedalare! Pedalare! : a history of Italian cycling. London: Bloomsbury. p. 312. ISBN: 978-1-4088-2219-7.
↑ Hall, James (23 November 2012). "Italy is best value skiing country, report finds". The Daily Telegraph. Archived from the original on 3 October 2013. http://www.telegraph.co.uk/travel/travelnews/9697128/Italy-is-best-value-skiing-country-report-finds.html. Retrieved 29 August 2013.
↑ "Il tennis è il quarto sport in Italia per numero di praticanti". Federazione Italiana Tennis. Archived from the original on 27 September 2013. http://www.federtennis.it/DettaglioNews.asp?IDNews=55672. Retrieved 29 August 2013.
↑ "New York Takes Top Global Fashion Capital Title from London, edging past Paris". Languagemonitor.com. Archived from the original on 22 February 2014. https://web.archive.org/web/20140222011026/http://www.languagemonitor.com/fashion/sorry-kate-new-york-edges-paris-and-london-in-top-global-fashion-capital-10th-annual-survey/. Retrieved 25 February 2014.
↑ Press, Debbie (2000). Your Modeling Career: You Don't Have to Be a Superstar to Succeed. ISBN: 978-1-58115-045-2. https://books.google.com/?id=pkeaOOxb_isC&pg=PA16#v=onepage&q=&f=false.
↑ Miller (2005) p. 486
↑ 300.0 300.1 300.2 Insight Guides (2004) p.220
↑ "Design City Milan". Wiley. Archived from the original on 6 December 2010. http://www.wiley.com/WileyCDA/WileyTitle/productCd-0470026839.html. Retrieved 3 January 2010.
↑ "Frieze Magazine – Archive – Milan and Turin". Frieze. Archived from the original on 10 January 2010. https://web.archive.org/web/20100110123141/http://www.frieze.com/issue/article/milan_turin. Retrieved 3 January 2010.
↑ "Italian Cooking: History of Food and Cooking in Rome and Lazio Region, Papal Influence, Jewish Influence, The Essence of Roman Italian Cooking". Inmamaskitchen.com. Archived from the original on 10 April 2010. https://web.archive.org/web/20100410100532/http://www.inmamaskitchen.com/ITALIAN_COOKING/rome_Lazio/Rome_LAZIO.html. Retrieved 24 April 2010.
↑ "The Making of Italian Food...From the Beginning". Epicurean.com. Archived from the original on 27 March 2010. http://www.epicurean.com/articles/making-of-italian-food.html. Retrieved 24 April 2010.
↑ Del Conte, 11–21.
↑ Related Articles (2 January 2009). "Italian cuisine – Britannica Online Encyclopedia". Britannica.com. Archived from the original on 16 July 2010. http://www.britannica.com/EBchecked/topic/718430/Italian-cuisine. Retrieved 24 April 2010.
↑ "Italian Food – Italy's Regional Dishes & Cuisine". Indigoguide.com. Archived from the original on 2 January 2011. https://web.archive.org/web/20110102020059/http://www.indigoguide.com/italy/food.htm. Retrieved 24 April 2010.
↑ "Regional Italian Cuisine". Rusticocooking.com. Archived from the original on 10 April 2010. http://www.rusticocooking.com/regions.htm. Retrieved 24 April 2010.
↑ "Which country has the best food?". CNN. 6 January 2013. Archived from the original on 29 June 2013. http://travel.cnn.com/explorations/eat/worlds-best-food-cultures-453528. Retrieved 14 October 2013.
↑ Freeman, Nancy (2 March 2007). "American Food, Cuisine". Sallybernstein.com. Archived from the original on 18 April 2010. http://www.sallybernstein.com/food/cuisines/us/. Retrieved 24 April 2010.
↑ The Silver Spoon ISBN: 88-7212-223-6 , 1997 ed.
↑ Mario Batali Simple Italian Food: Recipes from My Two Villages (1998), ISBN: 0-609-60300-0
↑ "Most Americans Have Dined Outin the Past Month and, Among Type of Cuisine, American Food is Tops Followed by Italian". Harris interactive. Archived from the original on 20 May 2013. http://www.harrisinteractive.com/vault/HarrisPoll18-DiningOut_4-3-13.pdf. Retrieved 31 August 2013.
↑ Kazmin, Amy (26 March 2013). "A taste for Italian in New Delhi". Financial Times. http://www.ft.com/intl/cms/s/0/7ab87234-9214-11e2-851f-00144feabdc0.html#axzz2dZCeLdLg. Retrieved 31 August 2013.
↑ Keane, John. "Italy leads the way with protected products under EU schemes". Bord Bia. Archived from the original on 29 March 2014. http://www.bordbia.ie/industryservices/information/alerts/Pages/ItalyleadsthewaywithprotectedproductsunderEUschemes.aspx. Retrieved 5 September 2013.
↑ Marshall, Lee (30 September 2009). "Italian coffee culture: a guide". The Daily Telegraph. Archived from the original on 10 October 2013. http://www.telegraph.co.uk/travel/destinations/europe/italy/6246202/Italian-coffee-culture-a-guide.html. Retrieved 5 September 2013.
↑ Jewkes, Stephen (13 October 2012). "World's first museum about gelato culture opens in Italy". Times Colonist. Archived from the original on 16 October 2013. http://www.timescolonist.com/life/travel/world-s-first-museum-about-gelato-culture-opens-in-italy-1.15866. Retrieved 5 September 2013.
↑ Squires, Nick (23 August 2013). "Tiramisu claimed by Treviso". The Daily Telegraph. Archived from the original on 29 August 2013. http://www.telegraph.co.uk/news/worldnews/europe/italy/10261930/Tiramisu-claimed-by-Treviso.html. Retrieved 5 September 2013.
↑ 319.0 319.1 Anderson, Ariston. "Venice: David Gordon Green's 'Manglehorn,' Abel Ferrara's 'Pasolini' in Competition Lineup". The Hollywood Reporter. Archived from the original on 18 February 2016. http://www.hollywoodreporter.com/news/venice-film-festival-unveils-lineup-720770.
↑ "Addio, Lido: Last Postcards from the Venice Film Festival". TIME. Archived from the original on 20 September 2014. http://time.com/3291348/addio-lido-last-postcards-from-the-venice-film-festival/.
↑ "Festività nazionali in Italia" (in Italian). Italian Embassy in London. Archived from the original on 24 June 2012. http://www.amblondra.esteri.it/Ambasciata_Londra/Menu/In_linea_con_utente/Domande_frequenti/altro.htm. Retrieved 15 April 2012.
↑ Roy, Christian (2005). Traditional Festivals. ABC-CLIO. p. 144. ISBN: 9781576070895. https://books.google.com/books?id=IKqOUfqt4cIC&pg=PA144. Retrieved 13 January 2015.
↑ Alio, Jacqueline. "Saint Lucy – Sicily's Most Famous Woman", Best of Sicily Magazine, 2009 Template:Webarchive
↑ Jonathan Boardman (2000) (Google Books). Rome: A Cultural and Literary Companion. University of California: Signal Books. p. 219. ISBN: 1902669150. https://books.google.com/?id=VHAUAQAAIAAJ.
↑ "Festività nazionali in Italia" (in Italian). Governo Italiano - Dipartimento per il Cerimoniale dello Stato. Archived from the original on 22 May 2013. http://www.governo.it/Presidenza/ufficio_cerimoniale/cerimoniale/giornate.html. Retrieved 25 April 2013.
↑ "Celebrations of big shoulder-borne processional structures". UNESCO.org. Archived from the original on 13 December 2014. http://www.unesco.org/culture/ich/index.php?lg=en&pg=00011&RL=00721. Retrieved 29 November 2014.
from flask import Flask, render_template

from utils import services_info

app = Flask(__name__)


@app.route('/')
def index():
    # Render the landing page with the shared services metadata
    return render_template('index.html', services_info=services_info)
Check out Sin City by T.Strange on Artist Sounds! T.Strange is a rapper based in Los Angeles, California. His story may be typical, but his music definitely isn't - partway through college, he realized that rap was his passion and left to pursue it full time. His distinctive high tone is inspired by Eminem, Kendrick, The Pharcyde, and Q-Tip, and although T.Strange doesn't consider himself a lyrical rapper, he can vary his style as needed. His latest track, Sin City, is inspired by his city - Los Angeles. Sin City is self-written, and describes the sights around the city, which may not seem appealing, but make the city what it is. Give it a listen on SoundCloud and keep an eye out for more of T.Strange's work.
"""
COHORTE Forker isolate state directory
Stores the state of isolates started by the forker, until they are dead.
:author: Thomas Calmant
:license: Apache Software License 2.0
..
Copyright 2014 isandlaTech
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
# Python standard library
import logging
import threading
# Pelix framework
from pelix.ipopo.decorators import ComponentFactory, Validate, Invalidate, \
Provides
# Cohorte boot constants
import cohorte.boot.constants as constants
# ------------------------------------------------------------------------------
# Bundle version
import cohorte.version
__version__ = cohorte.version.__version__
# ------------------------------------------------------------------------------
_logger = logging.getLogger(__name__)
# ------------------------------------------------------------------------------
@ComponentFactory('cohorte-forker-state-factory')
@Provides('cohorte.forker.state')
class IsolateStateDirectory(object):
"""
Isolate -> state directory
"""
def __init__(self):
"""
Set up members
"""
# Isolates directory
self._directory = {}
self._directory_lock = threading.RLock()
# Waiters
self._waiters = {}
def prepare_isolate(self, uid):
"""
        Inserts an isolate into the directory, in the NONEXISTENT state
:param uid: An isolate UID
:raise ValueError: The isolate is already known and in another state
"""
with self._directory_lock:
# Test if the isolate is already known
cur_state = self._directory.get(uid)
if cur_state is not None \
and cur_state != constants.STATE_NONEXISTENT:
raise ValueError('{0} is already known in state {1}'
.format(uid, cur_state))
# Store the isolate and prepare its waiter
self._directory[uid] = constants.STATE_NONEXISTENT
self._waiters[uid] = threading.Event()
def knows(self, uid):
"""
Tests if the given UID is in the directory
:param uid: An isolate UID
:return: True if the isolate is known
"""
with self._directory_lock:
return uid in self._directory
def get_state(self, uid):
"""
Gets the state of the given UID
:param uid: An isolate UID
:raise KeyError: Unknown UID
"""
with self._directory_lock:
return self._directory[uid]
def change_state(self, uid, new_state):
"""
Sets the new state of the given isolate
:param uid: An isolate UID
:param new_state: The new state of the isolate
:raise KeyError: Unknown isolate
:raise ValueError: Invalid new state
"""
with self._directory_lock:
# Check the state
cur_state = self._directory[uid]
if new_state >= cur_state:
# Apply the change
self._directory[uid] = new_state
if new_state >= constants.STATE_LOADED:
# Isolate is loaded: release waiters
self._waiters[uid].set()
elif new_state == constants.STATE_FAILED:
# Forget about it
del self._directory[uid]
# Notify waiters
self._waiters[uid].set()
del self._waiters[uid]
def clear_isolate(self, uid):
"""
Clear all references to the given isolate
:param uid: An isolate UID
:return: True on success, False if it was unknown
"""
        with self._directory_lock:
            known = uid in self._directory
            if uid in self._waiters:
                # Set the event, if someone is waiting for it
                self._waiters[uid].set()
                del self._waiters[uid]
            if known:
                del self._directory[uid]
            return known
def wait_for(self, uid, timeout=None):
"""
Waits for the given isolate to show up
:param uid: Isolate UID
:param timeout: An optional wait time out (in seconds)
:raise KeyError: Unknown UID
:raise ValueError: Timeout expired
"""
with self._directory_lock:
# Grab the waiter thread-safely (can raise a KeyError)
event = self._waiters[uid]
# Wait the event to come
event.wait(timeout)
if not event.is_set():
            raise ValueError("Unknown UID after timeout: {0}".format(uid))
elif uid not in self._directory:
# We have been awaken by clear_isolate
            raise ValueError("UID {0} has been cleared.".format(uid))
# Just in case someone uses an if...
return True
@Validate
def validate(self, context):
"""
Component validated
:param context: The bundle context
"""
_logger.debug("Isolate directory validated")
@Invalidate
def invalidate(self, context):
"""
Component invalidated
:param context: The bundle context
"""
# Unlock all waiters
for waiter in self._waiters.values():
waiter.set()
self._directory.clear()
self._waiters.clear()
_logger.debug("Isolate directory invalidated")
var League = DS.Model.extend({
title: DS.attr('string'),
  fantasyTeam: DS.hasMany('fantasy-team')
});
export default League;
# kronecker product of two vectors

It's easy to verify that both the Kronecker product (denoted by ⊗_K) and the outer product (denoted by ⊗_O) are bilinear and are special forms of the tensor product. We can also see that the variance of a Kronecker product is the Kronecker product of the variances. One application is a class of microphone arrays whose steering vector can be decomposed as a Kronecker product of two steering vectors of smaller virtual arrays. (Reference: https://reference.wolfram.com/language/ref/KroneckerProduct.html)

1.1 Properties of the Stack Operator

1. If A ∈ ℝ^(m×n) is a matrix and v ∈ ℝ^(n×1) is a vector, then vec(Av) = Av.

The function kron described below is passed vectors A and B of length vector_size, and computes their Kronecker product, which it stores in C, a vector_size × vector_size matrix:

    void kron(int *A, int *B, int *C, int vector_size) {
        int i, j;
        for (i = 0; i < vector_size; i++) {
            for (j = 0; j < vector_size; j++) {
                …
            }
        }
    }

The order of the vectors in a covariant tensor product is crucial, since, as one can easily verify, it is the case that

(9) a ⊗ b ≠ b ⊗ a and a′ ⊗ b′ ≠ b′ ⊗ a′.

In other words, x ⊗ y = xyᵀ. Note that the transformation law for vectors also applies to the components of points when they are referred to a common origin.

Now let's think of cases where two matrices (not vectors) are used.

If the two vectors have dimensions n and m, then their outer product is an n × m matrix. The direct product is like the Cartesian product, but with some additional structure: for example, if (A, ·) and (B, ·) are groups, their direct product (A × B, ∗) forms a group with respect to element-wise multiplication.

Exercise: write a NumPy program to compute the Kronecker product of two given multidimensional arrays.

Finally, consider the product of two second-order tensors B and A. Their tensor product B ⊗ A, which is also known as a Kronecker product, is defined in terms of the index notation by writing

(26) (b_lj e^j_l) ⊗ (a_ki e^i_k) = b_lj a_ki e^(ji)_(lk).

This result is the simplest way to remember how to multiply two second-order tensors.
Use exact arithmetic to compute the Kronecker product: Solve the general linear matrix equation a1.x.b1+\u22ef+am.x.bm=c for matrix by using the flattening (vectorizing) relation Flatten[a.x.b]=(a\uf3dab\uf3c7).Flatten[x]: s is a differentiation matrix approximating the second derivative in 1 dimension: A matrix that differentiates in the first dimension only: A matrix that approximates the Laplacian: Define the n\u00d7n \"bit reversal\" permutation matrix for n a power of 2: A compact notation for the identity matrix of size n: A compact notation for the direct matrix product: Form the discrete Fourier transform matrix for length 16 from the Cooley\u2013Tukey factorization: Fourier is fast because it effectively composes the factorization for a particular vector: We now have MatrixExp[a\u2295b]=MatrixExp[a]\u2297MatrixExp[b]: KroneckerProduct is multi-linear (linear in each argument) : KroneckerProduct satisfies the mixed product property : Inverse distributes over it (iff and are invertible): PseudoInverse distributes over it PseudoInverse[a\uf3dab]=PseudoInverse[a]\uf3daPseudoInverse[b]: The trace Tr for a Kronecker product satisfies Tr[a\uf3dab]=Tr[a]Tr[b]: The determinant Det satisfies where a\u2208Matrices[{m,m}] and b\u2208Matrices[{n,n}]: Eigenvalues satisfies Eigenvalues[a\uf3dab]={\u03bbi\u03bcj|\u03bbi\u2208Eigenvalues[a],\u03bcj\u2208Eigenvalues[b]: SingularValueList satisfies the same relation: MatrixRank satisfies MatrixRank[a\uf3dab=MatrixRank[a]MatrixRank[b]: KroneckerProduct for matrices is a flattened block matrix with blocks : KroneckerProduct of vectors is related to Dot of the corresponding column matrices: The dot product of a column and row matrix is usually also called an outer product: KroneckerProduct of vectors is equivalent to TensorProduct: For matrices it is a flattened tensor product: KroneckerProduct of vectors is a special case of Outer: For matrices it is a flattened outer product: Wolfram Research (2007), KroneckerProduct, Wolfram 
Language function, https:\/\/reference.wolfram.com\/language\/ref\/KroneckerProduct.html. This video explains what is meant by the Kronecker Product of two matrices, and discusses some of this operation's uses in econometrics. Learn how, Wolfram Natural Language Understanding System. (A\u2297 B)\u2297 C = A\u2297 (B \u2297 C) \u2200A \u2208 Mm,n,B \u2208 Mp,q,C \u2208 Mr,s. Note that there are nine terms in the \ufb01nal sums, but only three of them are non-zero. The transpose of a second-order tensor is defined such that (26) for any two vectors and . Next: Write a NumPy program to compute the condition number of a given matrix. Download Kronecker for free. outer(a, b) Computes the outer product of two arrays. Wolfram Language & System Documentation Center. In that case, the above quantity would simplify to We start by de\ufb01ning the tensor product of two vectors. The Kronecker product (also called the direct product) is a binary operation that combines two matrices to form a new matrix. That is, the multiplication of the Kronecker product of two vectors by N m produces the average of all (in this case 2) vectors created by permuting the vectors involved in the Kronecker product. Compute the sparse Kronecker product: Applications (4) Solve the general linear matrix equation a 1 . Example 2: Your example in the (now-deleted) comments was an example where the two vectors were not independent. Hi! Instant deployment across cloud, desktop, mobile, and more. Symmetric and skew-symmetric tensors. @misc{reference.wolfram_2020_kroneckerproduct, author=\"Wolfram Research\", title=\"{KroneckerProduct}\", year=\"2007\", howpublished=\"\\url{https:\/\/reference.wolfram.com\/language\/ref\/KroneckerProduct.html}\", note=[Accessed: 04-December-2020 Wolfram Research. \"KroneckerProduct.\" Write a NumPy program to compute the eigenvalues and right eigenvectors of a given square array. Kronecker delta e ijk permutation tensor a ij, ... 
product of two vectors and the triple scalar product of three vectors. KroneckerProduct. The package contains functions that calculate the Kronecker product of two matrices of any size. Enjoy the videos and music you love, upload original content, and share it all with friends, family, and the world on YouTube. A property of the Kronecker product that we have already proved and that we will use below is the so-called mixed-product property: if,, and are such that the products and are well-defined, then Vec of outer products The next property concerns outer products, that is, products between a \u2026 In mathematics, the Kronecker product, sometimes denoted by \u2297, is an operation on two matrices of arbitrary size resulting in a block matrix. KRON 5 (4.2.7 in [9]) The Kronecker product is right\u2013distributive, i.e. If A is an m-by-n matrix and B is a p-by-q matrix, then the Kronecker tensor product of A and B is a large matrix formed by multiplying B by each element of A A \u2297 B = [ a 11 B a 12 B \u22ef a 1 n B a 21 B \u22ee a 22 B \u22ee \u22ef \u22f1 a 2 n B \u22ee a m 1 B a m 2 B \u22ef a m n B ] . In linear algebra, the outer product of two coordinate vectors is a matrix. Entanglement and EPR paradox 6.5.1 . \u2022 The ith component of the cross produce of two vectors A\u00d7B becomes Wolfram Language & System Documentation Center. kronecker: Kronecker Products on Arrays Description Usage Arguments Details Value Author(s) References See Also Examples Description. Curated computable knowledge powering Wolfram|Alpha. Bell Inequalities 6.6 Teleportation (Bennet, Peres, Brassard) 6.7 . constructs the Kronecker product of the arrays mi. ]}. It is a generalization of the outer product (which is denoted by the same symbol) from vectors to matrices, and gives the matrix of the tensor product with respect to a standard choice of basis. b ] = ( a b ) . 
The direct product of the vectors a and b is given as the matrix below (note \"x\" refers to x with a circle around it and is the symbol for a Kronecker product): b m = c for matrix by using the flattening (vectorizing) relation Flatten [ a . If they have different sub- Computes the dot product of two arrays. The Kronecker delta, dijis defined as: dij=0ifi\u222b j 1ifi= jwhereiand j aresubscripts As you can see, the Kronecker delta nicely summarizes the rules for computing dot products of orthogonal unit vectors; if the two vectors have the same subscript, meaning they are in the same direction, their dot product is one. x . Calculating Kronecker products: generic C++ and Fortran 90 codes. Knowledge-based, broadly deployed natural language. You can get rid of whitespaces or any specific character using strip methods in Python. Direct product is closely related to direct sum. the Kronecker product yields the same result as doing so afterwards, i.e. 6.1 Tensor product of Hilbert spaces The second kind of tensor product of the two vectors is a so-called con-travariant tensor product: (10) a\u2297b0 = b0 \u2297a = X t X j a tb j(e t \u2297e j) = (a tb je j t). For this reason, we will refer to N m as a Kronecker product permutation matrix. More generally, given two tensors (multidimensional arrays of numbers), their outer product is a tensor. I still think the question is more or less trivially true though. Actually the operator \u2297 is usually used as tensor product, which is a bilinear operator. 2 The Kronecker Product The Kronecker product is a binary matrix operator that maps two arbitrarily dimensioned matrices into a De\ufb01nition 7.1 (Tensor product of vectors). (A\u2297B)\u2217 = A\u2217 \u2297B\u2217 \u2200A \u2208 Mp,q(C),B \u2208 Mr,s(C). Each elements in the resulting matrix of the kronecker product of the three vectors can be illustrated as each mapping among the three sets as shown below. 
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. The outer product of tensors is also referred to as their tensor product, and can be used to define the tensor algebra. Note: In mathematics, the Kronecker product, denoted by \u2297, is an operation on two matrices of arbitrary size resulting in a block matrix. I need to make a function which takes two parameters (two vectors of vectors) and as a result returns a vector of vectors which is a Kronecker product of two given vectors of vectors. If A is an m \u00d7 n matrix and B is a p \u00d7 q matrix, then the Kronecker product A \u2297 B is the mp \u00d7 nq block matrix: Have another way to solve this solution? The kronecker product of these three vectors can be represented as a mapping among the three vectors as shown below. The Kronecker product appears in textbooks about the design of experiments and multivariate statistics. 3. trace(AB) = ((AT)S)TBS. Retrieved from https:\/\/reference.wolfram.com\/language\/ref\/KroneckerProduct.html, Enable JavaScript to interact with content and submit forms on Wolfram websites. If v2IRn 1, a vector, then vS= v. 2. It is a generalization of the outer product (which is denoted by the same symbol) from vectors to matrices, and gives the matrix of the tensor product with respect to a standard choice of basis. Note: In mathematics, the Kronecker product, denoted by \u2297, is an operation on two matrices of arbitrary size resulting in a block matrix. linalg.multi_dot(a,b,c,d,\u2026) Computes the dot product of multiple arrays at once. It is a generalization of the outer\u2005product (which is denoted by the same symbol) from vectors to matrices, and gives the matrix of the tensor\u2005product with respect to a \u2026 Whatever I do, my new vector of vectors is created by the same number (the one which should be only on the last position). B = A 1B 1 +A 2B 2 +A 3B 3 = X3 i=1 A iB i = X3 i=1 X3 j=1 A ij\u03b4 ij. 
It is a generalization of the outer product (which is denoted by the same symbol) from vectors to matrices, and gives the matrix of the tensor product with respect to a standard choice of basis. Central infrastructure for Wolfram's cloud products & services. Test your Python skills with w3resource's quiz, Python: Getting rid of unwanted characters. Contribute your code (and comments) through Disqus. ential Kronecker product beamformers that exploit the structure of the steering vector to perform beamforming differently from the well-known and studied conventional approach. Deutsch-Jozsa algorithm . Kronecker Product: If A is an r \u00d7 s matrix with ij th element a ij for i = 1,\u2026, r and j = 1,\u2026, s, and B is any t \u00d7 v matrix, then the Kronecker product of A and B, denoted by A \u2297 B, is the rt \u00d7 sv matrix formed by multiplying each a ij element by the entire matrix B.That is, (2007). Scala Programming Exercises, Practice, Solution. KRON 4 (4.2.6 in [9]) The Kronecker product is associative, i.e. The Kronecker product seems intimidating at first, but often one of the matrices in the product construction. The preeminent environment for any technical workflows. No cloning Theorem 6.5 . x . Software engine implementing the Wolfram Language. The Kronecker product should not be confused with the usual matrix multiplication, which is an entirely different operation. In mathematics, the Kronecker product, denoted by \u2297, is an operation on two matrices of arbitrary size resulting in a block\u2005matrix. vdot(a, b) Computes the dot product of two vectors. If x,y are vectors of length M and N,respectively,theirtensorproductx\u2297y is de\ufb01ned as the M\u00d7N-matrix de\ufb01ned by (x\u2297y) ij = x i y j. Does anybody know how to code the Kronecker\/direct product of two vectors?? 
]}, @online{reference.wolfram_2020_kroneckerproduct, organization={Wolfram Research}, title={KroneckerProduct}, year={2007}, url={https:\/\/reference.wolfram.com\/language\/ref\/KroneckerProduct.html}, note=[Accessed: 04-December-2020 The kronecker product of two independent uniform distributions can only ever be uniform on the product \u2026 Tensor product of Hilbert spaces 6.1.1 Product Operator Basis 6.2 Quantum Information Processing 6.3 . 2007. Let B = [b lj] and A = [a ki] be arbitrary matrices of orders t\u00d7n and s\u00d7m respectively. The tensor product entails an associative operation that combines matrices or vectors of any order. Condition number of a given square array vS= v. 2 ( multidimensional arrays of ). Interact with content and submit forms on Wolfram websites then their outer product of tensors also... Right eigenvectors of a second-order tensor is defined such that ( 26 ) for any two.... Across cloud, desktop, mobile, and v2IRn 1, a matrix be confused the... Arrays at once see here that the variance of the steering vector to perform differently! Beamformers that exploit the structure of the Kronecker product ( Av ) = Av combines... And m, then the matrix product ( Av ) = Av nine terms in the ( now-deleted ) was!: write a NumPy program to compute the sparse Kronecker product permutation matrix arrays Usage. By de\ufb01ning the tensor algebra the Download Kronecker for free ( Bennet, Peres Brassard... A tensor is a bilinear operator ( and comments ) through Disqus at first, kronecker product of two vectors three! Work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License Applications ( ). Product appears in textbooks about the design of experiments and multivariate statistics any specific character using strip methods in.... When they are referred to as their tensor product, and v2IRn,. For free ) s ) TBS more generally, given two tensors ( multidimensional arrays of numbers,... 
Design of experiments and multivariate statistics where the two vectors and the triple scalar product of vectors! Will refer to n m as a Kronecker product appears in textbooks about the design of and! Mp, q ( c ), their outer product of two vectors? structure of variances. Right eigenvectors of a given matrix anybody know how to code the Kronecker\/direct product of vectors. In the Download Kronecker for free & services kron 4 ( 4.2.6 [. Called the direct product ) is a tensor a ij,... product of two.! Product: Applications ( 4 ) Solve the general linear matrix equation a 1 are! ( A\u2297B ) \u2217 = A\u2217 \u2297B\u2217 \u2200A \u2208 Mp, q ( c ) matrices any! De\ufb01ning the tensor product of two vectors have dimensions n and m, then vS= 2... ) relation Flatten [ a ki ] be arbitrary matrices of orders t\u00d7n and s\u00d7m respectively NumPy program to the... Wolfram websites 9 ] ) the Kronecker product permutation matrix multivariate statistics operator! Is a bilinear operator \u2297B\u2217 \u2200A \u2208 Mp, q ( c ), their product... In linear algebra, the outer product of multiple arrays at once = Av tensors is also referred to their... ( 4 ) Solve the general linear matrix equation a 1 Wolfram websites if v2IRn 1, a,! 'S cloud products & services arrays at once ), b ) Computes the dot product of two to.: generic C++ and Fortran 90 codes from https: \/\/reference.wolfram.com\/language\/ref\/KroneckerProduct.html, Enable JavaScript to interact with content and forms... ) are used Quantum Information Processing 6.3 orders t\u00d7n and s\u00d7m respectively d, \u2026 ) Computes the inner of... Vector to perform beamforming differently from the well-known and studied conventional approach the transpose of a second-order is! C for matrix by using the flattening ( vectorizing ) relation Flatten [ a methods in Python vector then! Or any specific character using strip methods in Python central infrastructure for Wolfram 's cloud products & services Enable to. 
Second-Order tensor is defined such that ( 26 ) for any two vectors? that 26. The \ufb01nal sums, but often one of the Kronecker product appears in about! 4 ) Solve the general linear matrix equation a 1 given mulitdimension arrays perform beamforming differently from well-known. Flatten [ a licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License the design experiments... Usually used as tensor product of two matrices of orders t\u00d7n and s\u00d7m respectively ijk permutation tensor a ij...! S ( c ), their outer product of two arrays second-order tensor is defined such that ( )! Two vectors were not independent bilinear operator ) 6.7 ( ( at ) s )....: \/\/reference.wolfram.com\/language\/ref\/KroneckerProduct.html, Enable JavaScript to interact with content and submit forms on Wolfram websites anybody how! Vector, then the matrix product ( Av ) = ( ( at s. Unwanted characters of two vectors and the triple scalar product of three vectors of three.... Where the two vectors? same result as doing so afterwards, i.e Enable to... Specific character using strip methods in Python associative, i.e right eigenvectors of given. Scalar product of the matrices in the \ufb01nal sums, but often one of the steering vector to beamforming. Of them are non-zero delta e ijk permutation tensor a ij,... product of tensors is also referred a. In textbooks about the design of experiments and multivariate statistics the tensor product of the in. Attribution-Noncommercial-Sharealike 3.0 Unported License transformation law for vectors also applies to the components of points they! Product ( Av ) = ( ( at ) s ) TBS know how to code Kronecker\/direct. Lstrip for the left side and rstrip for the left side and rstrip for the side! C, d, \u2026 ) Computes the inner product of tensors also... Hilbert spaces 6.1.1 product operator Basis 6.2 Quantum Information Processing 6.3 example:. If A2IRm Sn, a vector, then the matrix product ( also the... 
Product, and v2IRn 1, a vector, then the matrix product ( Av =! 2: your example in the Download Kronecker for free 9 ] ) the Kronecker product should not be with! 6.1.1 product operator Basis 6.2 Quantum Information Processing 6.3 ( at ) s ) References see also Examples.!, but only three of them are non-zero Wolfram websites ( now-deleted ) was! Package contains functions that calculate the Kronecker product ( Av ) = Av matrix a! And can be used to define the tensor product, which is a matrix, more. Licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License product ) is a operation... A matrix, and more Peres, Brassard ) 6.7 usual matrix multiplication which. Dot product of two vectors to interact with content and submit forms on Wolfram websites ) relation Flatten a..., the outer product is an entirely different operation C++ and Fortran 90.. From the well-known and studied conventional approach that combines two matrices to form a new matrix spaces! = Av any specific character using strip methods in Python... product of arrays! Multiplication, which is an n \u00d7 m matrix trace ( AB ) = ( ( at ) s References. Appears in textbooks kronecker product of two vectors the design of experiments and multivariate statistics second-order is! V. 2 and v2IRn 1, a vector, then vS= v. 2 methods! The question is more or less trivially true though matrix equation a 1 product operator 6.2! But often one of the Kronecker product yields the same result as doing so afterwards, i.e square... ( and comments ) through Disqus be arbitrary matrices of orders t\u00d7n and respectively... Calculate the Kronecker product appears in textbooks about the design of experiments and multivariate statistics (. Attribution-Noncommercial-Sharealike 3.0 Unported License a NumPy program to compute the Kronecker product seems intimidating at,! Cloud, desktop, mobile, and can be used to define the product. 
'S cloud products & services 26 ) for any two vectors across cloud, desktop, mobile, v2IRn... B lj ] and a = [ b lj ] and a [. \/\/Reference.Wolfram.Com\/Language\/Ref\/Kroneckerproduct.Html, Enable JavaScript to interact with content and submit forms on Wolfram websites refer n. Two given mulitdimension arrays Wolfram websites product ) is a matrix, and more a bilinear operator contains that. References see also Examples Description and a = [ b lj ] and a = [ b lj ] a. Test your Python skills with w3resource 's quiz, Python: Getting rid unwanted... Inequalities 6.6 Teleportation ( Bennet, Peres, Brassard ) 6.7 the matrix product ( Av ) = (... Where the two vectors have dimensions n and m, then the matrix product ( also called the direct ). Refer to n m as a Kronecker product should not be confused the! In [ 9 ] ) the Kronecker product appears in textbooks about design. At once Examples Description Applications ( 4 ) Solve the general linear matrix equation a 1 (., their outer product of two given mulitdimension arrays permutation matrix \u2200A \u2208 Mp, q ( c,. Of three vectors matrix, and v2IRn 1, a vector, then their outer product is n... Form a new matrix eigenvectors of a given matrix Applications ( 4 ) Solve the general linear matrix equation 1! Vectors is a bilinear operator, given two tensors ( multidimensional arrays of numbers ) b. Multiple arrays at once of two vectors and lstrip for the right side only the left side and for. Whitespaces or any specific character using strip methods in Python 90 codes product yields same. Algebra, the outer product of two arrays of points when they are referred to a common origin to! 1, a matrix... product of two matrices ( not vector ) used! Of numbers ), b ) Computes the outer product is the Kronecker product permutation matrix Python. With content and submit forms on Wolfram websites and m, then the matrix (... 
A second-order tensor is defined such that ( 26 ) for any two vectors square array also., which is an n \u00d7 m matrix side and rstrip for the right only. Code the Kronecker\/direct product of three vectors the tensor product of two vectors multidimensional of...\n\nUpdated: December 5, 2020 \u2014 2:38 PM","date":"2021-03-08 12:07:33","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8164310455322266, \"perplexity\": 1205.1753543686034}, \"config\": {\"markdown_headings\": true, \"markdown_code\": false, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-10\/segments\/1614178375439.77\/warc\/CC-MAIN-20210308112849-20210308142849-00360.warc.gz\"}"} | null | null |
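As a quick sanity check of the relations above, the following NumPy sketch verifies that the Kronecker product of two vectors is the flattened outer product, and that the mixed-product property holds. The concrete arrays are arbitrary examples, not taken from any particular source:

```python
import numpy as np

a = np.array([1, 2])
b = np.array([10, 20, 30])

# Kronecker product of two 1-D arrays: a flat vector of length len(a) * len(b)
k = np.kron(a, b)        # array([10, 20, 30, 20, 40, 60])

# The outer product holds the same numbers arranged as a 2 x 3 matrix
o = np.outer(a, b)
assert np.array_equal(k, o.reshape(-1))

# Mixed-product property: (A kron B) @ (C kron D) == (A @ C) kron (B @ D)
A = np.arange(4.0).reshape(2, 2)
B, C, D = A + 1, A + 2, A + 3
lhs = np.kron(A, B) @ np.kron(C, D)
rhs = np.kron(A @ C, B @ D)
assert np.allclose(lhs, rhs)
```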
package de.dala.simplenews.recycler;

import android.content.Context;
import android.support.v4.content.ContextCompat;
import android.support.v7.widget.RecyclerView;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.CheckBox;
import android.widget.TextView;

import java.util.List;

import de.dala.simplenews.R;
import de.dala.simplenews.common.Feed;
import de.dala.simplenews.utilities.ColorManager;
import de.dala.simplenews.utilities.PrefUtilities;
import de.dala.simplenews.utilities.Utilities;

/**
 * RecyclerView adapter that lists feeds parsed from an OPML file and supports
 * multi-selection through {@link ChoiceModeRecyclerAdapter}.
 */
public class OpmlRecyclerAdapter extends ChoiceModeRecyclerAdapter<OpmlRecyclerAdapter.OpmlViewHolder, Feed> {

    private final Context mContext;

    public OpmlRecyclerAdapter(Context context, List<Feed> feeds, ChoiceModeListener listener) {
        super(feeds, listener);
        mContext = context;
    }

    @Override
    void onBindSelectedViewHolder(OpmlViewHolder holder, int position) {
        // Selected items reuse the normal binding and add a translucent accent highlight.
        onBindNormalViewHolder(holder, position);
        holder.itemView.setBackgroundColor(ColorManager.moreAlpha(PrefUtilities.getInstance().getCurrentColor(), 70));
    }

    @Override
    void onBindNormalViewHolder(final OpmlViewHolder holder, int position) {
        final Feed feed = get(position);
        holder.name.setText(feed.getTitle() == null ? mContext.getString(R.string.feed_title_not_found) : feed.getTitle());
        holder.link.setText(feed.getXmlUrl());
        holder.checkBox.setChecked(isItemChecked(position));
        holder.checkBox.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                toggle(feed);
            }
        });
        holder.itemView.setOnLongClickListener(new View.OnLongClickListener() {
            @Override
            public boolean onLongClick(View v) {
                holder.checkBox.toggle();
                toggle(feed);
                return false;
            }
        });
        holder.itemView.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                holder.checkBox.toggle();
                toggle(feed);
            }
        });
        int pad = mContext.getResources().getDimensionPixelSize(R.dimen.card_layout_padding);
        holder.itemView.setPadding(pad, pad, pad, pad);
        Utilities.setPressedColorRippleDrawable(ContextCompat.getColor(mContext, R.color.list_background), PrefUtilities.getInstance().getCurrentColor(), holder.itemView);
    }

    @Override
    public OpmlViewHolder onCreateViewHolder(ViewGroup parent, int viewType) {
        View itemView = LayoutInflater.from(parent.getContext()).inflate(R.layout.opml_list_item, parent, false);
        return new OpmlViewHolder(itemView);
    }

    class OpmlViewHolder extends RecyclerView.ViewHolder {
        final TextView name;
        final TextView link;
        final CheckBox checkBox;

        OpmlViewHolder(View itemView) {
            super(itemView);
            name = (TextView) itemView.findViewById(R.id.title);
            link = (TextView) itemView.findViewById(R.id.url);
            checkBox = (CheckBox) itemView.findViewById(R.id.checkbox);
        }
    }
}
\section{Introduction}
Artificial neural networks have been widely used in machine learning systems. Though neural networks have shown their effectiveness and power in resolving complex problems, they are confined to systems that comply only with the lowest safety integrity levels since, most of the time, a neural network is viewed as a \emph{black box} without effective methods to assure safety specifications for its outputs. Neural networks are trained over a finite number of input and output data, and are expected to generalize, that is, to produce desirable outputs even for previously unseen inputs. However, in many practical applications the number of possible inputs is essentially infinite, which means it is impossible to check all of them by performing experiments; moreover, it has been observed that neural networks can react in unexpected and incorrect ways to even slight perturbations of their inputs \cite{szegedy2013intriguing}, which could result in unsafe systems. Hence, methods that provide formal guarantees are in great demand for verifying specifications or properties of neural networks. Verifying neural networks is a hard problem; even simple properties about them have been proven to be NP-complete \cite{katz2017reluplex}. The difficulties mainly come from the presence of activation functions and the complex structure, which make neural networks large-scale, nonlinear, non-convex, and thus incomprehensible to humans.
The importance of formal guarantees for neural networks has been well recognized in the literature. A number of results exist for the verification of feedforward neural networks, especially Rectified Linear Unit (ReLU) neural networks, and a few results are devoted to neural networks with broad classes of activation functions. Motivated by the general class of neural networks considered in \cite{xiang2018output}, our key contribution in this paper is a specification-guided method for the safety verification of feedforward neural networks. First, we formulate the safety verification problem in the framework of interval arithmetic and provide a computationally efficient formula for computing output interval sets. Then, analogous to other state-of-the-art verification methods such as counterexample-guided abstraction refinement (CEGAR) \cite{clarke2000counterexample} and property directed reachability (PDR) \cite{een2011efficient}, and inspired by the Moore-Skelboe algorithm \cite{skelboe1974computation}, a specification-guided algorithm is developed. Briefly speaking, the safety specification is used to examine the existence of intersections between output intervals and unsafe regions, which in turn determines the bisection actions in the verification algorithm. By making use of the information in the safety specification, the computational cost can be reduced significantly. We provide experimental evidence for the advantages of the specification-guided approach: it needs only about 3\%--7\% of the computational cost of the method proposed in \cite{xiang2018output} to solve the same safety verification problem.
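To make the specification-guided idea concrete, the following sketch is our own simplification, not the algorithm of the paper: it bisects a one-dimensional input interval and refines only those sub-intervals whose output interval over-approximation intersects the unsafe region. All names (`verify`, `F_square`) are ours; `F` stands in for any interval extension of the network's input-output map.

```python
def verify(F, x_lo, x_hi, unsafe_lo, unsafe_hi, eps=1e-3):
    """Specification-guided bisection over the input interval [x_lo, x_hi].

    F(lo, hi) must return an interval (y_lo, y_hi) over-approximating the
    outputs on [lo, hi].  Sub-intervals whose output interval provably misses
    the unsafe region [unsafe_lo, unsafe_hi] are pruned; only the rest are
    bisected further.
    """
    stack = [(x_lo, x_hi)]
    while stack:
        lo, hi = stack.pop()
        y_lo, y_hi = F(lo, hi)
        if y_hi < unsafe_lo or y_lo > unsafe_hi:
            continue                      # specification satisfied here: prune
        if hi - lo < eps:
            return False                  # cannot separate: potentially unsafe
        mid = 0.5 * (lo + hi)
        stack.extend([(lo, mid), (mid, hi)])
    return True                           # all sub-intervals verified safe


def F_square(lo, hi):
    """Exact interval extension of f(x) = x**2 (a stand-in for a network map)."""
    hi_sq = max(lo * lo, hi * hi)
    lo_sq = 0.0 if lo <= 0.0 <= hi else min(lo * lo, hi * hi)
    return lo_sq, hi_sq


print(verify(F_square, -1.0, 1.0, 2.0, 3.0))   # True: x**2 <= 1 on [-1, 1]
print(verify(F_square, -2.0, 2.0, 3.9, 4.1))   # False: x = +/-2 reaches 4
```

Note how the safety specification drives the search: the first query prunes the whole input interval in one step, while the second keeps refining only the small neighborhoods of the endpoints that can actually reach the unsafe region.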
\section{Related Work}
Many recent works are focusing on ReLU neural networks.
In \cite{katz2017reluplex}, an SMT solver named Reluplex is proposed for a special class of neural networks with ReLU activation functions. Reluplex extends the well-known Simplex algorithm from linear functions to ReLU functions by making use of the piecewise-linear structure of ReLU. In \cite{xiang2017reachable_arxiv}, a layer-by-layer approach is developed for the output reachable set computation of ReLU neural networks; the computation is formulated as a set of manipulations on a union of polyhedra. A verification engine for ReLU neural networks called $\mathrm{AI}^2$ was proposed in \cite{gehr2018ai}: the authors abstract perturbed inputs and safety specifications as zonotopes and reason about their behavior using operations on zonotopes. A Linear Programming (LP)-based method is proposed in \cite{ehlers2017formal}, and in \cite{Lomuscio2017an_arxiv} the authors encoded the constraints of ReLU functions as a Mixed-Integer Linear Program (MILP); combined with output specifications expressed in terms of LP, the verification problem for the output set eventually turns into an MILP feasibility problem. In \cite{dutta2017output,dutta2018output}, an MILP-based verification engine called Sherlock is proposed, which performs an output range analysis of ReLU feedforward neural networks and uses a combined local and global search to solve the MILP more efficiently.
Besides the results for ReLU neural networks, there are a few results for neural networks with general activation functions. In \cite{pulina2010abstraction,pulina2012challenging}, a piecewise linearization of the nonlinear activation functions is used to reason about their behavior: the authors replace the activation functions with piecewise constant approximations and use the bounded model checker hybrid satisfiability (HySAT) \cite{franzle2007hysat} to analyze various properties. They highlight the difficulty of scaling this technique and are currently only able to tackle small networks with at most 20 hidden nodes. In \cite{huang2017safety}, the authors proposed a framework for verifying the safety of network image classification decisions by searching for adversarial examples within a specified region.
An adaptive nested optimization framework is proposed for the reachability problem of neural networks in \cite{ruan2018reachability}.
In \cite{xiang2018output}, a simulation-based approach was developed, which uses a finite number of simulations/computations to estimate the reachable set of multi-layer neural networks in a general form. Despite this success, the approach cannot fully resolve the reachable set computation problem for neural networks that are large-scale, non-convex, and nonlinear. Still, simulation-based approaches, like the one developed in \cite{xiang2018output}, present a practical and efficient way of reasoning about neural network behaviors. The critical step in improving simulation-based approaches is bridging the gap between finitely many simulations and the infinitely many inputs contained in a continuous input set. The simulation-based approach can require a large number of simulations to obtain a tight reachable set estimation, which is computationally costly in practice. In this paper, our aim is to reduce the computational cost by avoiding unnecessary computations with the aid of a specification-guided method.
\section{Background}
\subsection{Feedforward Neural Networks}
Generally speaking, a neural network consists of a number of interconnected neurons, and each neuron is a simple processing element that responds to the weighted inputs it receives from other neurons. In this paper, we consider feedforward neural networks, which generally consist of one input layer, multiple hidden layers, and one output layer.
The action of a neuron depends on its activation function, which is in the form of
\begin{align}
y_i = \phi\left(\sum\nolimits_{j=1}^{n}\omega_{ij} x_j + \theta_i\right)
\end{align}
where $x_j$ is the $j$th input of the $i$th neuron, $\omega_{ij}$ is the weight from the $j$th input to the $i$th neuron, $\theta_i$ is the bias of the $i$th neuron, $y_i$ is the output of the $i$th neuron, and $\phi(\cdot)$ is the activation function. The activation function is generally a nonlinear continuous function describing the reaction of the $i$th neuron to its inputs $x_j$, $j=1,\cdots,n$. Typical activation functions include ReLU, logistic, tanh, exponential linear unit, and linear functions. In this work, our approach aims to deal with activation functions regardless of their specific forms.
A feedforward neural network has multiple layers, and each layer $\ell$, $1 \le \ell \le L $, has $n^{\{\ell\}}$ neurons. In particular, layer $\ell =0$ is used to denote the input layer and $n^{\{0\}}$ stands for the number of inputs in the rest of this paper. For the layer $\ell$, the corresponding input vector is denoted by $\mathbf{x}^{\{\ell\}}$ and the weight matrix is
\begin{equation}
\mathbf{W}^{\{\ell\}} = \left[\omega_{1}^{\{\ell\}},\ldots,\omega_{n^{\{\ell\}}}^{\{\ell\}}\right]^{\top}
\end{equation}
where $\omega_{i}^{\{\ell\}}$ is the weight vector. The bias vector for layer $\ell$ is
\begin{equation}
\boldsymbol {\uptheta}^{\{\ell\}}=\left[\theta_1^{\{\ell\}},\ldots,\theta_{n^{\{\ell\}}}^{\{\ell\}}\right]^{\top}.
\end{equation}
The output vector of layer $\ell$ can be expressed as
\begin{equation}
\mathbf{y}^{\{\ell\}}=\phi_{\ell}(\mathbf{W}^{\{\ell\}}\mathbf{x}^{\{\ell\}}+\boldsymbol{\uptheta}^{\{\ell\}})
\end{equation}
where $\phi_{\ell}(\cdot)$ is the activation function of layer $\ell$.
The output of layer $\ell-1$ is the input of layer $\ell$, and the mapping from the input of the input layer, that is $\mathbf{x}^{\{0\}}$, to the output of the output layer, namely $\mathbf{y}^{\{L\}}$, represents the input-output relation of the neural network, denoted by
\begin{equation}\label{NN}
\mathbf{y}^{\{L\}} = \Phi (\mathbf{x}^{\{0\}})
\end{equation}
where $\Phi(\cdot) \triangleq \phi_L \circ \phi_{L - 1} \circ \cdots \circ \phi_1(\cdot) $.
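As a concrete illustration of the layer-by-layer mapping $\Phi = \phi_L \circ \cdots \circ \phi_1$, the following sketch evaluates a small feedforward network; the weights, biases, and logistic activation here are hypothetical, chosen only for illustration, and are not the networks used later in the experiments.

```python
import numpy as np

def sigmoid(z):
    """Logistic activation, applied elementwise."""
    return 1.0 / (1.0 + np.exp(-z))

def forward(weights, biases, x, phi=sigmoid):
    """Evaluate y^{L} = Phi(x^{0}) layer by layer:
    y^{l} = phi_l(W^{l} x^{l} + theta^{l})."""
    y = np.asarray(x, dtype=float)
    for W, theta in zip(weights, biases):
        y = phi(W @ y + theta)
    return y

# Hypothetical 2-2-1 network, for illustration only.
W1, th1 = np.array([[1.0, -1.0], [0.5, 0.5]]), np.array([0.0, 0.1])
W2, th2 = np.array([[1.0, 1.0]]), np.array([-0.2])
out = forward([W1, W2], [th1, th2], [0.3, -0.7])
```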
\subsection{Problem Formulation}
We start by defining the neural network output set that will be of interest throughout the rest of this paper.
\begin{definition}
Given a feedforward neural network in the form of (\ref{NN}) and an input
set $\mathcal{X} \subseteq \mathbb{R}^{n^{\{0\}}}$, the following set
\begin{align}
\mathcal{Y} = \left\{\mathbf{y} ^{\{L\}} \in \mathbb{R}^{n^{\{L\}}} \mid \mathbf{y}^{\{L\}} = \Phi (\mathbf{x}^{\{0\}}),~ \mathbf{x}^{\{0\}} \in \mathcal{X}\right\} \label{output_set}
\end{align}
is called the output set of neural network (\ref{NN}).
\end{definition}
The safety specification of a neural network is expressed by a set defined in the output space, describing the safety requirement.
\begin{definition}
Safety specification $\mathcal{S}$ formalizes the safety requirements for the output $\mathbf{y}^{\{L\}}$ of neural network (\ref{NN}), and is a predicate over $\mathbf{y}^{\{L\}}$. The neural network (\ref{NN}) is safe if and only if the following condition is satisfied:
\begin{equation}\label{verification}
\mathcal{Y} \cap \neg \mathcal{S} = \emptyset
\end{equation}
where $\mathcal{Y}$ is the output set defined by (\ref{output_set}), and $\neg$ is the symbol for logical negation.
\end{definition}
The safety verification problem for the neural network (\ref{NN}) is stated as follows.
\begin{problem}\label{problem}
How does one verify the safety requirement described by (\ref{verification}), given a neural network (\ref{NN}) with a compact input set $\mathcal{X}$ and a
safety specification $\mathcal{S}$?
\end{problem}
The key to solving the safety verification Problem \ref{problem} is computing the output set $\mathcal{Y}$. However, since neural networks are often nonlinear and non-convex, it is extremely difficult to compute the exact output set $\mathcal{Y}$. Rather than directly computing the exact output set for a neural network, a more practical and feasible approach to safety verification is to derive an over-approximation of $\mathcal{Y}$.
\begin{definition}\label{def2}
A set $\mathcal{Y}_o$ is an over-approximation of $\mathcal{Y}$ if $\mathcal{Y} \subseteq \mathcal{Y}_o$ holds.
\end{definition}
The following lemma implies that it is sufficient to use the over-approximated output set for the safety verification of a neural network.
\begin{lemma}\label{lemma1}
Consider a neural network in the form of (\ref{NN}) and a safety specification $\mathcal{S}$. The neural network is safe if the following condition is satisfied
\begin{equation}\label{lemma1_1}
\mathcal{Y}_o \cap \neg \mathcal{S} = \emptyset
\end{equation}
where $\mathcal{Y} \subseteq\mathcal{Y}_o$.
\end{lemma}
\begin{proof}
Since $\mathcal{Y} \subseteq\mathcal{Y}_o$, (\ref{lemma1_1}) implies $\mathcal{Y} \cap\neg \mathcal{S} = \emptyset$.
\end{proof}
From Lemma \ref{lemma1}, the problem becomes how to construct an appropriate over-approximation $\mathcal{Y}_o$. One natural way, as in the method developed in \cite{xiang2018output}, is to find a set $\mathcal{Y}_o$ as small as possible to tightly over-approximate the output set $\mathcal{Y}$ and then perform safety verification. However, this idea can be computationally expensive, and most of the computations are actually unnecessary for safety verification. In the following, a specification-guided approach is developed in which the over-approximation of the output set is computed adaptively with respect to a given safety specification.
\section{Safety Verification}
\subsection{Preliminaries and Notation}
Let $[x] = [\underline{x}, \overline{x}]$, $[y] = [\underline{y},\overline{y}]$ be real compact intervals and $\circ$ be one of the basic operations addition, subtraction, multiplication, and division for real numbers, that is $\circ \in \{+,-,\cdot, / \}$, where it is assumed that $0 \notin [y]$ in the case of division. We define these operations for intervals $[x]$ and $[y]$ by $[x] \circ [y] = \{x \circ y \mid x \in [x],\, y\in [y]\}$. The width of an interval $[x]$ is defined and denoted by $w([x]) = \overline{x} - \underline{x}$. The set of compact intervals in $\mathbb{R}$ is denoted by $\mathbb{IR}$. We say $[\phi]: \mathbb{IR} \to \mathbb{IR}$ is an interval extension of a function $\phi: \mathbb{R} \to \mathbb{R}$ if, for any degenerate interval argument, $[\phi]$ agrees with $\phi$, that is, $[\phi]([x,x]) = \phi(x)$. To handle multidimensional problems where $\mathbf{x} \in \mathbb{R}^{n}$, we denote $[\mathbf{x}] =[\underline{x}_1,\overline{x}_1]\times\cdots \times[\underline{x}_n,\overline{x}_n] \in \mathbb{IR}^{n}$, where $\mathbb{IR}^n$ denotes the set of compact intervals in $\mathbb{R}^n$. The width of an interval vector $[\mathbf{x}]$ is the largest of the widths of any of its component intervals, $w([\mathbf{x}])= \max_{i=1,\ldots,n} (\overline{x}_i-\underline{x}_i)$. A mapping $[\Phi] : \mathbb{IR}^{n} \to \mathbb{IR}^{m}$ denotes the interval extension of a function $\Phi:\mathbb{R}^{n} \to \mathbb{R}^m$. An interval extension is inclusion monotonic if, for any $[\mathbf{x}_1],[\mathbf{x}_2] \in \mathbb{IR}^{n}$, $[\mathbf{x}_1] \subseteq [\mathbf{x}_2]$ implies $[\Phi]([\mathbf{x}_1]) \subseteq [\Phi]([\mathbf{x}_2])$. A fundamental property of inclusion monotonic interval extensions is that $\mathbf{x} \in [\mathbf{x}] \Rightarrow \Phi(\mathbf{x}) \in [\Phi]([\mathbf{x}])$, which means the value of $\Phi$ is contained in the interval $[\Phi]([\mathbf{x}])$ for every $\mathbf{x}$ in $[\mathbf{x}]$.
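The basic interval operations above can be sketched in a few lines; the tuple representation $(\underline{x},\overline{x})$ below is an implementation choice, not part of the formal development.

```python
def i_add(x, y):
    """[x] + [y] for intervals stored as (lower, upper) tuples."""
    return (x[0] + y[0], x[1] + y[1])

def i_mul(x, y):
    """[x] * [y]: take min/max over the four endpoint products."""
    p = (x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1])
    return (min(p), max(p))

def width(x):
    """w([x]) = upper - lower."""
    return x[1] - x[0]

# Example: [-1, 2] * [3, 4] = [-4, 8].
prod = i_mul((-1.0, 2.0), (3.0, 4.0))
```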
Several useful definitions and lemmas are presented.
\begin{definition} \cite{moore2009introduction}
Piece-wise monotone functions, including exponential, logarithm, rational power, absolute value, and trigonometric functions, constitute the set of standard functions.
\end{definition}
\begin{lemma} \label{lemma2}\cite{moore2009introduction}
A function $\Phi$ which is composed of finitely many elementary operations $\{+,-,\cdot, / \}$ and standard functions is inclusion monotonic.
\end{lemma}
\begin{definition} \cite{moore2009introduction}
An interval extension $[\Phi]([\mathbf{x}])$ is said to be Lipschitz in $[\mathbf{x}_0]$ if there is a constant $\xi$
such that $w([\Phi]([\mathbf{x}]))\le \xi w([\mathbf{x}])$ for every $[\mathbf{x}] \subseteq [\mathbf{x}_0]$.
\end{definition}
\begin{lemma}\label{lemma3}\cite{moore2009introduction}
If a function $\Phi(\mathbf{x})$ satisfies an ordinary Lipschitz condition in $[\mathbf{x}_0]$,
\begin{equation}
\left\|\Phi(\mathbf{x}_2)-\Phi(\mathbf{x}_1)\right\| \le \xi\left\|\mathbf{x}_2-\mathbf{x}_1\right\|,~\mathbf{x}_1,\mathbf{x}_2 \in [\mathbf{x}_0]
\end{equation}
then the interval extension $[\Phi]([\mathbf{x}])$ is a Lipschitz interval extension in $[\mathbf{x}_0]$,
\begin{equation}
w([\Phi]([\mathbf{x}]))\le \xi w([\mathbf{x}]),~[\mathbf{x}] \subseteq [\mathbf{x}_0].
\end{equation}
\end{lemma}
The following trivial assumption is given for activation functions.
\begin{assumption}\label{assumption_0}
The activation function $\phi$ considered in this paper is composed of finitely many elementary operations and standard functions.
\end{assumption}
Based on Assumption \ref{assumption_0}, the following result can be obtained for a feedforward neural network.
\begin{theorem}\label{thm1}
The interval extension $[\Phi ]$ of a neural network $\Phi$ composed of activation functions satisfying Assumption \ref{assumption_0} is inclusion monotonic and Lipschitz such that
\begin{equation}\label{L_NN}
w([\Phi]([\mathbf{x}]))\le \xi^{L}\prod\nolimits_{\ell = 1}^L {\left\| {\mathbf{W}^{\{ \ell\} } } \right\|} w([\mathbf{x}]),~[\mathbf{x}] \in \mathbb{IR}^{n^{\{0\}}}
\end{equation}
where $\xi$ is a Lipschitz constant for all activation functions in $\Phi$.
\end{theorem}
\begin{proof}
Under Assumption \ref{assumption_0}, the inclusion monotonicity follows directly from Lemma \ref{lemma2}. Then, for layer $\ell$, we denote $\hat\phi_{\ell}(\mathbf{x}^{\{\ell\}}) = \phi_{\ell} (\mathbf{W}^{\{\ell\}} \mathbf{x}^{\{\ell\}} + \boldsymbol{\uptheta}^{\{\ell\}} )$. For any $\mathbf{x}_1^{\{\ell\}},\mathbf{x}_2^{\{\ell\}}$, we have
\begin{align*}
\left\| {\hat \phi _{\ell} (\mathbf{x}_2^{\{ \ell\} } ) - \hat \phi _{\ell} (\mathbf{x}_1^{\{ \ell\} } )} \right\| \leq \xi \left\| {\mathbf{W}^{\{ \ell\} } \mathbf{x}_2^{\{ \ell\} } - \mathbf{W}^{\{ \ell\} }\mathbf{x}_1^{\{ \ell\} } } \right\| \nonumber
\\
\leq \xi \left\| {\mathbf{W}^{\{ \ell\} } } \right\|\left\| {\mathbf{x}_2^{\{ \ell\} } - \mathbf{x}_1^{\{ \ell\} } } \right\|.
\end{align*}
Since the output of each layer is the input of the next, $\Phi = \hat\phi_L \circ \cdots \circ \hat\phi_1$, so $\xi^{L}\prod\nolimits_{\ell = 1}^L {\left\| {\mathbf{W}^{\{ \ell\} } } \right\|}$ is a Lipschitz constant for $\Phi$, and (\ref{L_NN}) follows from Lemma \ref{lemma3}.
\end{proof}
\subsection{Interval Analysis}
First, we consider a single layer $\mathbf{y} = \phi(\mathbf{W}\mathbf{x}+\boldsymbol{\uptheta})$. Given an interval input $[\mathbf{x}]$, the interval extension is $[\phi](\mathbf{W}[\mathbf{x}]+\boldsymbol{\uptheta}) = [\underline{y}_1,\overline{y}_1]\times\cdots\times[\underline{y}_n,\overline{y}_n] = [\mathbf{y}]$, where
\begin{align}
\underline{y}_i &= \min_{\mathbf{x} \in [\mathbf{x}]} \phi\left(\sum\nolimits_{j=1}^{n}\omega_{ij} x_j + \theta_i\right)\label{thm1_1}
\\
\overline y_i &= \max_{\mathbf{x} \in [\mathbf{x}]} \phi\left(\sum\nolimits_{j=1}^{n}\omega_{ij} x_j + \theta_i\right) . \label{thm1_2}
\end{align}
To compute the interval extension $[\phi]$, we need to compute the minimum and maximum values of the output of the nonlinear function $\phi$. For general nonlinear functions, these optimization problems are challenging. However, typical activation functions, such as ReLU, logistic, tanh, exponential linear unit, and linear functions, satisfy the following monotonicity assumption.
\begin{assumption}\label{assumption_1}
For any two scalars $z_1 \le z_2$, the activation function satisfies $\phi(z_1) \le \phi(z_2)$.
\end{assumption}
Assumption \ref{assumption_1} is a common property satisfied by a variety of activation functions. For example, it is easy to verify that the most commonly used activation functions, such as logistic, tanh, and ReLU, all satisfy Assumption \ref{assumption_1}. Taking advantage of the monotonicity of $\phi$, the interval extension is simply $[\phi]([z]) = [\phi(\underline{z}),\phi(\overline{z})]$. Therefore, $\underline{y}_i$ and $\overline{y}_i$ in (\ref{thm1_1}) and (\ref{thm1_2}) can be explicitly written as
\begin{align} \label{y_1}
\underline{y}_i & = \sum\nolimits_{j=1}^{n}\underline{p}_{ij} + \theta_i
\\
\overline{y}_i &= \sum\nolimits_{j=1}^{n}\overline{p}_{ij} + \theta_i \label{y_2}
\end{align}
with $\underline{p}_{ij}$ and $\overline{p}_{ij}$ defined by
\begin{align} \label{y_3}
\underline{p}_{ij} &= \left\{ {\begin{array}{*{20}l}
{\omega _{ij} \underline{x}_j,} & {\omega _{ij}\geq 0} \\
{\omega _{ij} \overline x_j ,} & {\omega _{ij} < 0} \\
\end{array} } \right.
\\
\overline p_{ij}& = \left\{ {\begin{array}{*{20}c}
{\omega _{ij} \overline x_j ,} & {\omega _{ij} \geq 0} \\
{\omega _{ij} \underline{x}_j ,} & {\omega _{ij} < 0} \\
\end{array} } \right.. \label{y_4}
\end{align}
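The case distinctions on $\omega_{ij}$ in (\ref{y_1})--(\ref{y_4}) can be sketched compactly by splitting $\mathbf{W}$ by sign; this is a minimal illustration assuming a monotonically increasing activation as in Assumption \ref{assumption_1}.

```python
import numpy as np

def layer_interval(W, theta, xl, xu, phi=np.tanh):
    """Interval extension of a single layer y = phi(W x + theta).

    Implements (y_1)-(y_4): splitting W into its nonnegative and negative
    parts selects the correct endpoint of [x_j] for each weight sign.
    phi must be monotonically increasing (Assumption 2).
    """
    Wp = np.maximum(W, 0.0)           # entries with omega_ij >= 0
    Wn = np.minimum(W, 0.0)           # entries with omega_ij < 0
    zl = Wp @ xl + Wn @ xu + theta    # lower bound of W[x] + theta
    zu = Wp @ xu + Wn @ xl + theta    # upper bound of W[x] + theta
    return phi(zl), phi(zu)           # [phi]([z]) = [phi(zl), phi(zu)]
```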
From (\ref{y_1})--(\ref{y_4}), the output interval of a single layer can be efficiently computed with these explicit expressions. Then, for the feedforward neural network $\mathbf{y}^{\{L\}}=\Phi(\mathbf{x}^{\{0\}})$ with multiple layers, the interval extension $[\Phi ]([\mathbf{x}^{\{ 0\} } ])$ can be computed by the following layer-by-layer computation.
\begin{theorem}\label{thm2}
Consider feedforward neural network (\ref{NN}) with activation functions satisfying Assumption \ref{assumption_1} and an interval input $[\mathbf{x}^{\{0\}}]$. An interval extension can be determined by
\begin{equation} \label{thm2_1}
[\Phi ]([\mathbf{x}^{\{ 0\} } ]) = [\hat \phi _L ] \circ \cdots \circ [\hat \phi _1 ]([\mathbf{x}^{\{ 0\} } ])
\end{equation}
where $[\hat \phi_{\ell}]([\mathbf{x}^{\{\ell\}}]) =[\phi_{\ell} ](\mathbf{W}^{\{\ell\}} [\mathbf{x}^{\{\ell\}} ] + \boldsymbol{\uptheta}^{\{\ell\}} )=[\mathbf{y}^{\{\ell\}}]$ in which
\begin{align} \label{thm2_2}
\underline{y}_i^{\{\ell\}} & = \sum\nolimits_{j=1}^{n^{\{\ell\}}}\underline{p}_{ij}^{\{\ell\}} + \theta_i^{\{\ell\}}
\\
\overline{y}_i^{\{\ell\}} &= \sum\nolimits_{j=1}^{n^{\{\ell\}}}\overline{p}_{ij}^{\{\ell\}} + \theta_i^{\{\ell\}} \label{thm2_3}
\end{align}
with $\underline{p}_{ij}^{\{\ell\}}$ and $\overline{p}_{ij}^{\{\ell\}}$ defined by
\begin{align} \label{thm2_4}
\underline{p}_{ij}^{\{\ell\}} &= \left\{ {\begin{array}{*{20}l}
{\omega _{ij}^{\{\ell\}} \underline{x}_j^{\{\ell\}},} & {\omega _{ij}^{\{\ell\}}\geq 0} \\
{\omega _{ij}^{\{\ell\}} \overline x_j^{\{\ell\}} ,} & {\omega _{ij}^{\{\ell\}} < 0} \\
\end{array} } \right.
\\
\overline p_{ij}^{\{\ell\}}& = \left\{ {\begin{array}{*{20}c}
{\omega _{ij}^{\{\ell\}} \overline x_j^{\{\ell\}} ,} & {\omega _{ij}^{\{\ell\}} \geq 0} \\
{\omega _{ij}^{\{\ell\}} \underline{x}_j^{\{\ell\}} ,} & {\omega _{ij}^{\{\ell\}} < 0} \\
\end{array} } \right.. \label{thm2_5}
\end{align}
\end{theorem}
\begin{proof}
We denote $\hat\phi_{\ell}(\mathbf{x}^{\{\ell\}}) = \phi_{\ell} (\mathbf{W}^{\{\ell\}} \mathbf{x}^{\{\ell\}} + \boldsymbol{\uptheta}^{\{\ell\}} )$. In a feedforward neural network, the input of layer $\ell$ is the output of layer $\ell-1$, which leads to (\ref{thm2_1}). Then, for each layer, the interval extension $[\mathbf{y}^{\{\ell\}}]$ computed by (\ref{thm2_2})--(\ref{thm2_5}) follows directly from (\ref{y_1})--(\ref{y_4}).
\end{proof}
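Theorem \ref{thm2} amounts to iterating the single-layer formulas over the layers. A sketch, using a small hypothetical tanh network for illustration:

```python
import numpy as np

def network_interval(weights, biases, xl, xu, phi=np.tanh):
    """[Phi]([x^{0}]): compose the per-layer interval extensions (Theorem 2)."""
    xl, xu = np.asarray(xl, float), np.asarray(xu, float)
    for W, theta in zip(weights, biases):
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)  # sign split of W^{l}
        zl = Wp @ xl + Wn @ xu + theta                    # lower pre-activation
        zu = Wp @ xu + Wn @ xl + theta                    # upper pre-activation
        xl, xu = phi(zl), phi(zu)                         # monotone activation
    return xl, xu

# Hypothetical 2-2-1 tanh network on the input box [-1, 1]^2.
Ws = [np.array([[1.0, -1.0], [0.5, 0.5]]), np.array([[1.0, 1.0]])]
ths = [np.array([0.0, 0.1]), np.array([-0.2])]
lo, hi = network_interval(Ws, ths, [-1.0, -1.0], [1.0, 1.0])
```

By inclusion monotonicity, every actual network output for an input in the box must lie inside the returned interval.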
We denote the image of the interval $[\mathbf{x}^{\{0\}}]$ under the neural network $\Phi$ as follows
\begin{equation}
\Phi([\mathbf{x}^{\{0\}}])=\{\Phi(\mathbf{x}^{\{0\}}):\mathbf{x}^{\{0\}} \in [\mathbf{x}^{\{0\}}]\}.
\end{equation}
Since $[\Phi]$ is inclusion monotonic according to Theorem \ref{thm1}, one has $\Phi([\mathbf{x}^{\{0\}}]) \subseteq [\Phi]([\mathbf{x}^{\{0\}}])$. Thus, it is sufficient to claim the neural network is safe if $[\Phi]([\mathbf{x}^{\{0\}}]) \cap \neg \mathcal{S} = \emptyset$ holds by Lemma \ref{lemma1}.
According to the explicit expressions (\ref{thm2_1})--(\ref{thm2_5}), the computation of the interval extension $[\Phi]$ is fast. Next, we discuss the conservativeness of the outcome of (\ref{thm2_1}). We have $[\Phi]([\mathbf{x}^{\{0\}}]) = \Phi([\mathbf{x}^{\{0\}}]) + E([\mathbf{x}^{\{0\}}])$ for some interval-valued function $E([\mathbf{x}^{\{0\}}])$ with $w([\Phi]([\mathbf{x}^{\{0\}}])) = w(\Phi([\mathbf{x}^{\{0\}}])) + w(E([\mathbf{x}^{\{0\}}]))$.
\begin{definition}
We call
$w(E([\mathbf{x}^{\{0\}}])) = w([\Phi]([\mathbf{x}^{\{0\}}])) - w(\Phi([\mathbf{x}^{\{0\}}])) $
the excess width of interval extension of neural network $\Phi([\mathbf{x}^{\{0\}}])$.
\end{definition}
Explicitly, the excess width measures the conservativeness of interval extension $[\Phi]$ regarding its corresponding function $\Phi$. The following theorem gives the upper bound of the excess width $w(E([\mathbf{x}^{\{0\}}]))$.
\begin{theorem}\label{thm3}
Consider feedforward neural network (\ref{NN}) with an interval input $[\mathbf{x}^{\{0\}}]$. The excess width $w(E([\mathbf{x}^{\{0\}}]))$ satisfies
\begin{equation}\label{thm3_1}
w(E([\mathbf{x}^{\{ 0\} } ])) \leq\gamma w([\mathbf{x}^{\{0\}} ])
\end{equation}
where $\gamma = \xi^{L}\prod\nolimits_{\ell = 1}^L {\left\| {\mathbf{W}^{\{ \ell\} } } \right\|} $.
\end{theorem}
\begin{proof}
We have $[\Phi]([\mathbf{x}^{\{0\}}]) = \Phi([\mathbf{x}^{\{0\}}]) + E([\mathbf{x}^{\{0\}}])$ for some $E([\mathbf{x}^{\{0\}}])$ and
\begin{align*}
w(E([\mathbf{x}^{\{0\}}])) &= w([\Phi]([\mathbf{x}^{\{0\}}])) - w(\Phi([\mathbf{x}^{\{0\}}]))
\\
&\leq w([\Phi ]([\mathbf{x}^{\{ 0\} } ]))
\\
& \leq \xi ^L \prod\nolimits_{\ell = 1}^L {\left\| {\mathbf{W}^{\{ \ell\} } } \right\|} w([\mathbf{x}^{\{0\}} ])
\end{align*}
which means (\ref{thm3_1}) holds.
\end{proof}
Given a neural network $\Phi$, which means $\mathbf{W}^{\{\ell\}}$ and $\xi$ are fixed, Theorem \ref{thm3} implies that a less conservative result can only be obtained by reducing the width of the input interval $[\mathbf{x}^{\{0\}}]$. On the other hand, a smaller $w([\mathbf{x}^{\{0\}}])$ means more subdivisions of the input interval, which brings more computational cost. Therefore, how to generate appropriate subdivisions of the input interval is the key to the safety verification of neural networks in the framework of interval analysis. In the next section, an efficient specification-guided method is proposed to address this problem.
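The bound of Theorem \ref{thm3} can be checked numerically. The sketch below uses a hypothetical random tanh network (tanh is 1-Lipschitz, so $\xi = 1$); Theorem \ref{thm3} is stated for a generic matrix norm, and the induced $\infty$-norm is used here because it matches the max-width definition of $w(\cdot)$.

```python
import numpy as np

def width_bound(weights, xi=1.0):
    """gamma = xi^L * prod_l ||W^{l}|| from Theorem 3 (induced infinity norm)."""
    return xi ** len(weights) * np.prod([np.linalg.norm(W, np.inf) for W in weights])

# Hypothetical random 2-3-1 tanh network.
rng = np.random.default_rng(0)
Ws = [rng.standard_normal((3, 2)), rng.standard_normal((1, 3))]
ths = [np.zeros(3), np.zeros(1)]

# Interval extension as in Theorem 2 on the input box [-0.1, 0.1]^2.
lo, hi = np.full(2, -0.1), np.full(2, 0.1)
for W, th in zip(Ws, ths):
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    lo, hi = np.tanh(Wp @ lo + Wn @ hi + th), np.tanh(Wp @ hi + Wn @ lo + th)

gamma = width_bound(Ws)
# w([Phi]([x])) <= gamma * w([x]), which also bounds the excess width.
assert np.max(hi - lo) <= gamma * 0.2 + 1e-12
```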
\subsection{Specification-Guided Safety Verification}
Inspired by the Moore-Skelboe algorithm \cite{skelboe1974computation}, we propose a specification-guided algorithm, which generates fine subdivisions only where the specification requires them and avoids unnecessary subdivisions of the input interval during safety verification; see Algorithm \ref{alg1}.
\begin{algorithm}[ht!]
\caption{Specification-Guided Safety Verification} \label{alg1}
\begin{algorithmic}[1]
\Require A feedforward neural network $\Phi:\mathbb{R}^{n^{\{0\}}} \to \mathbb{R}^{n^{\{L\}}}$, an input set $\mathcal{X} \subseteq \mathbb{R}^{n^{\{0\}}}$, a safety specification $\mathcal{S} \subseteq \mathbb{R}^{n^{\{L\}}}$, a tolerance $\varepsilon > 0$
\Ensure Safe or Uncertain
\State $\underline{x}_i \gets \min_{\mathbf{x}\in\mathcal{X}}(x_i)$, $\overline{x}_i \gets \max_{\mathbf{x}\in\mathcal{X}}(x_i)$
\State $[\mathbf{x}] \gets [\underline{x}_1,\overline{x}_1]\times\cdots\times[\underline{x}_{n^{\{0\}}},\overline{x}_{n^{\{0\}}}]$
\State $[\mathbf{y}] \gets [\Phi]([\mathbf{x}])$
\State $\mathcal{M} \gets \{([\mathbf{x}],[\mathbf{y}])\}$
\While{$\mathcal{M} \neq \emptyset$}
\State Select and remove an element $([\mathbf{x}],[\mathbf{y}])$ from $\mathcal{M}$
\If{$[\mathbf{y}]\cap\neg\mathcal{S} = \emptyset$}
\State Continue
\Else
\If{$w([\mathbf{x}]) > \varepsilon$}
\State Bisect $[\mathbf{x}]$ to obtain $[\mathbf{x}_1]$ and $[\mathbf{x}_2]$
\For{$i=1:1:2$}
\If{$[\mathbf{x}_i]\cap \mathcal{X} \neq \emptyset$}
\State $[\mathbf{y}_i] \gets [\Phi]([\mathbf{x}_i])$
\State $\mathcal{M} \gets \mathcal{M} \cup \{([\mathbf{x}_i],[\mathbf{y}_i])\}$
\EndIf
\EndFor
\Else
\State \Return Uncertain
\EndIf
\EndIf
\EndWhile
\State \Return Safe
\end{algorithmic}
\end{algorithm}
The implementation of the specification-guided algorithm shown in Algorithm \ref{alg1} checks that the intersection between the output set and the unsafe region is empty, within a pre-defined tolerance $\varepsilon$. This is accomplished by dividing the initial input interval into increasingly smaller sub-intervals and checking each of them.
\begin{itemize}
\item \textbf{Initialization.} Set a tolerance $\varepsilon>0$. Since our approach is based on interval analysis, convert input set $\mathcal{X}$ to an interval $[\mathbf{x}]$ such that $\mathcal{X} \subseteq [\mathbf{x}]$. Compute the initial output interval $[\mathbf{y}] = [\Phi]([\mathbf{x}])$. Initialize set $\mathcal{M} = \{([\mathbf{x}],[\mathbf{y}])\}$.
\item \textbf{Specification-guided bisection.}
This is the key step of the algorithm. Select an element $([\mathbf{x}],[\mathbf{y}])$ for specification-guided bisection. If the output interval $[\mathbf{y}]$ of sub-interval $[\mathbf{x}]$ has no intersection with the unsafe region, we can discard this sub-interval from subsequent dividing and checking, since it has been proven safe. Otherwise, the bisection action is activated to produce finer subdivisions, which are added to $\mathcal{M}$ for subsequent checking. The bisection process is guided by the given safety specification, since the activation of bisection actions is entirely determined by the non-emptiness of the intersection between the output interval sets and the given unsafe region. This distinguishing feature leads to finer subdivisions when the output set is close to the unsafe region, while coarse subdivisions are sufficient for safety verification when the output set is far away from the unsafe area. Therefore, unnecessary computational cost can be avoided. In the experiments section, it will be clearly observed how the bisection actions are guided by the safety specification in a numerical example.
\item \textbf{Termination.} The specification-guided bisection procedure continues until $\mathcal{M}=\emptyset$ which means all sub-intervals have been proven safe, or the width of subdivisions becomes less than the pre-defined tolerance $\varepsilon$ which leads to an uncertain conclusion for the safety. Finally, when Algorithm \ref{alg1} outputs an uncertain verification result, we can select a smaller tolerance $\varepsilon$ to perform the safety verification.
\end{itemize}
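The procedure above can be sketched as follows; `net_interval` and `is_unsafe` are placeholders for the interval extension $[\Phi]$ and the test $[\mathbf{y}]\cap\neg\mathcal{S} \neq \emptyset$, and the input set is assumed to already be a box (the general case additionally intersects each sub-interval with $\mathcal{X}$, as in Algorithm \ref{alg1}).

```python
import numpy as np
from collections import deque

def verify(net_interval, x_lo, x_hi, is_unsafe, eps=0.01):
    """Specification-guided bisection, a sketch of Algorithm 1.

    net_interval(lo, hi) -> (y_lo, y_hi) is the interval extension [Phi];
    is_unsafe(y_lo, y_hi) is True iff [y] intersects the unsafe region.
    """
    queue = deque([(np.asarray(x_lo, float), np.asarray(x_hi, float))])
    while queue:
        lo, hi = queue.popleft()
        y_lo, y_hi = net_interval(lo, hi)
        if not is_unsafe(y_lo, y_hi):
            continue                    # sub-interval proven safe, discard it
        if np.max(hi - lo) <= eps:
            return "Uncertain"          # tolerance reached, cannot conclude
        k = int(np.argmax(hi - lo))     # bisect along the widest dimension
        mid = 0.5 * (lo[k] + hi[k])
        hi1, lo2 = hi.copy(), lo.copy()
        hi1[k], lo2[k] = mid, mid
        queue.append((lo, hi1))
        queue.append((lo2, hi))
    return "Safe"

# Toy 1-D check with the identity map as an exact "interval extension".
ident = lambda lo, hi: (lo, hi)
unsafe = lambda y_lo, y_hi: y_hi[0] >= 1.0   # unsafe region: y >= 1
```

Note that bisection is triggered only for sub-intervals whose output interval touches the unsafe region, which is exactly the specification-guided behavior described above.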
\section{Experiments}
\subsection{Random Neural Network}
To demonstrate how the specification-guided idea works in safety verification, we consider a neural network with two inputs and two outputs. The neural network has 5 hidden layers, and each layer contains 10 neurons. The weight matrices and bias vectors are randomly generated. The input set is $[\mathbf{x}^{\{0\}}] = [-5,5]\times [-5,5]$ and the unsafe region is $\neg \mathcal{S} = [1,\infty)\times[1,\infty)$.
\begin{table}
\centering
\caption{Comparison on number of intervals and computational time to existing approach}\label{tab1}
\begin{tabular}{c|c|c}
\hline
& Intervals & Computational Time \\
\hline
Algorithm \ref{alg1} & 4095 & 21.45 s \\
\hline
Xiang et al. 2018 & 111556 & 294.37 s\\
\hline
\end{tabular}
\end{table}
We execute Algorithm \ref{alg1} with tolerance $\varepsilon = 0.01$; safety can be guaranteed by partitioning $[\mathbf{x}^{\{0\}}]$ into 4095 interval sets. The specification-guided partition of the input space is shown in Figure \ref{fig1}. A non-uniform input space partition is generated by the specification-guided scheme, and an obvious specification-guided effect can be observed in Figure \ref{fig1}. The specification-guided method incurs much lower computational cost than the approach in \cite{xiang2018output}, which utilizes a uniform partition of the input space; a comparison is listed in Table \ref{tab1}. The computation is carried out using Matlab 2017 on a personal computer with Windows 7, Intel Core i5-4200U, 1.6GHz, 4 GB RAM. The number of interval sets and the computational time have been significantly reduced to 3.67\% and 7.28\%, respectively, of those needed in~\cite{xiang2018output}. Figure \ref{fig2} illustrates the union of the 4095 output interval sets, which has no intersection with the unsafe region, showing that the safety specification is verified. Figure \ref{fig2} also shows that the output interval estimation is guided to be tight when it comes close to the unsafe region, while a coarse estimation is sufficient to verify safety when it is far away from the unsafe area.
\begin{figure}[ht!]
\includegraphics[width=9cm]{fig1}
\caption{Specification-guided bisections of the input interval by Algorithm \ref{alg1}. Guided by the safety specification, finer partitions are generated when the output intervals are close to the unsafe region, and coarse partitions are generated when they are far away. }
\label{fig1}
\end{figure}
\begin{figure}
\includegraphics[width=9cm]{fig2}
\caption{Output set estimation of neural networks. Blue boxes are output intervals, red area is unsafe region, black dots are 5000 random outputs.}
\label{fig2}
\end{figure}
\subsection{Robotic Arm Model}
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=4cm]{fig7}
\caption{Robotic arm with two joints. The normal working zone of $(\theta_1,\theta_2)$ is colored in green $\theta_1,\theta_2 \in [\frac{5\pi}{12},\frac{7\pi}{12}]$. The buffering zone is in yellow $\theta_1,\theta_2 \in [\frac{\pi}{3},\frac{5\pi}{12}] \cup [\frac{7\pi}{12},\frac{2\pi}{3}] $. The forbidden zone is $\theta_1,\theta_2 \in [0,\frac{\pi}{3}] \cup [\frac{2\pi}{3},2\pi] $.
}
\label{robotic_arm}
\end{center}
\end{figure}
In \cite{xiang2018output}, a \emph{learning forward kinematics} of a robotic arm model with two joints is proposed, shown in Figure \ref{robotic_arm}.
The learning task is to use a feedforward neural network to predict the position $(x,y)$ of the end effector given the joint angles $(\theta_1,\theta_2)$. The input space $[0,2\pi]\times [0,2\pi]$ for $(\theta_1,\theta_2)$ is classified into three zones for its operations: normal working zone $\theta_1,\theta_2 \in [\frac{5\pi}{12},\frac{7\pi}{12}]$, buffering zone $\theta_1,\theta_2 \in [\frac{\pi}{3},\frac{5\pi}{12}] \cup [\frac{7\pi}{12},\frac{2\pi}{3}] $ and forbidden zone $\theta_1,\theta_2 \in [0,\frac{\pi}{3}] \cup [\frac{2\pi}{3},2\pi]$. The detailed formulation of this robotic arm model and the neural network training can be found in \cite{xiang2018output}.
The safety specification for the position $(x,y)$ is
$\mathcal{S}=\{(x,y)\mid -14 \le x\le 3~\mathrm{and}~1 \le y \le 17\}$. The input set of the robotic arm is the union of normal working and buffering zones, that is $(\theta_1,\theta_2) \in [\frac{\pi}{3},\frac{2\pi}{3}] \times [\frac{\pi}{3},\frac{2\pi}{3}]$.
From a safety point of view, it must be verified that all outputs produced by inputs in the normal working zone and buffering zone satisfy the safety specification $\mathcal{S}$. In \cite{xiang2018output}, a uniform partition of the input space is used, and thus 729 intervals are produced to verify the safety property. Using our specification-guided approach, safety can be guaranteed by partitioning the input space into only 15 intervals; see Figure \ref{fig3} and Figure \ref{fig4}. Due to the small number of intervals involved in the verification process, the computational time of the specification-guided approach is only 0.27 seconds.
\begin{figure}[ht!]
\includegraphics[width=9cm]{robot_1}
\caption{15 sub-intervals for robotic arm safety verification.}
\label{fig3}
\end{figure}
\begin{figure}
\includegraphics[width=9cm]{robot_2}
\caption{Safety verification for the neural network of the robotic arm. Blue boxes are output intervals, the red box is the boundary of the unsafe region, and black dots are 5000 random outputs. 15 output intervals are sufficient to prove safety. }
\label{fig4}
\end{figure}
\begin{figure}[ht!]
\includegraphics[width=9cm]{image_1}
\caption{Examples from the MNIST handwritten digit dataset. }
\label{fig5}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=9cm]{image_3_combine}
\caption{Perturbed image of digit 2 with perturbation in $[-0.5,0.5]$. (a) With a $4 \times 4$ perturbation at the top-left corner, the neural network will wrongly label it as digit 1. (b) With a $3 \times 3$ perturbation at the top-left corner, the neural network can be proved to be robust to this class of perturbations. }
\label{fig6}
\end{figure}
\subsection{Handwriting Image Recognition}
In this handwriting image recognition task, we use 5000 training examples of handwritten digits, a subset of the MNIST handwritten digit dataset (http://yann.lecun.com/exdb/mnist/); examples from the dataset are shown in Figure \ref{fig5}. Each training example is a 20 pixel by 20 pixel grayscale image of a digit, and each pixel is represented by a floating point number indicating the grayscale intensity at that location. We first train a neural network with 400 inputs, one hidden layer with 25 neurons, and 10 output units corresponding to the 10 digits. The activation functions for both the hidden and output layers are sigmoid functions. A trained neural network with about 97.5\% accuracy is obtained.
Under adversarial perturbations, the neural network may produce a wrong prediction. For example, for the image of digit $2$ in Figure \ref{fig6}(a), the label predicted by the neural network turns to $1$ when a $4 \times 4$ perturbation with values in $[-0.5,0.5]$ attacks the top-left corner of the image. With our verification method, we wish to prove that the neural network is robust to certain classes of perturbations, that is, no perturbation in those classes can alter the prediction of the neural network for a perturbed image. Since there exists an adversarial example among the $4 \times 4$ perturbations at the top-left corner, this image is not robust to that class of perturbations. We therefore consider another class of perturbations, $3\times 3$ perturbations at the top-left corner; see Figure \ref{fig6}(b). Using Algorithm \ref{alg1}, the neural network can be proved robust to all $3\times 3$ perturbations at the top-left corner of the image after 512 bisections.
Moreover, applying Algorithm \ref{alg1} to all 5000 images with $3\times 3$ perturbations belonging to $[-0.5,0.5]$ and located at the left-top corner, we can verify that the neural network is robust to this class of perturbations for all images. This means that this class of perturbations does not affect the prediction accuracy of the neural network: it maintains its 97.5\% accuracy even when subjected to any perturbation belonging to this class of $3 \times 3$ perturbations.
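As a concrete illustration of the interval-analysis idea behind this verification, the following sketch propagates an input box through a sigmoid network and checks whether one output provably dominates the others. The weights, the stand-in image, the `mask` of perturbed pixels, and all helper names are illustrative assumptions, not the paper's trained network or Algorithm \ref{alg1} (in particular, no bisection refinement is performed):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def interval_affine(W, b, lo, hi):
    # Tight bounds on W @ x + b over the box [lo, hi] (elementwise).
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def output_interval(layers, lo, hi):
    # Propagate the input box through the network; the sigmoid is
    # monotone, so it can be applied directly to the interval ends.
    for W, b in layers:
        lo, hi = interval_affine(W, b, lo, hi)
        lo, hi = sigmoid(lo), sigmoid(hi)
    return lo, hi

def is_robust(layers, x, mask, eps, label):
    # Perturb only the masked pixels by [-eps, eps]; robust if the
    # labelled output's lower bound beats every other upper bound.
    out_lo, out_hi = output_interval(layers, x - eps * mask, x + eps * mask)
    return bool(out_lo[label] > np.delete(out_hi, label).max())

rng = np.random.default_rng(0)
layers = [(0.05 * rng.normal(size=(25, 400)), np.zeros(25)),
          (0.05 * rng.normal(size=(10, 25)), np.zeros(10))]
x = rng.uniform(size=400)          # a stand-in 20x20 image, flattened
mask = np.zeros(400)
mask[:9] = 1.0                     # a 3x3 perturbed patch
print(is_robust(layers, x, mask, 0.5, label=2))
```

The interval bounds are sound but conservative; when `is_robust` returns `False`, bisecting the input box (as in the specification-guided algorithm) can tighten the output intervals.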
\section{Conclusion and Future Work}
In this paper, we introduce a specification-guided approach for the safety verification of feedforward neural networks with general activation functions. By formulating the safety verification problem within the framework of interval analysis, we develop a fast computation formula for calculating the output intervals of feedforward neural networks. We then develop a safety verification algorithm that we call specification-guided: bisection actions are triggered solely by the existence of intersections between the computed output intervals and the unsafe sets. This distinguishing feature enables the specification-guided approach to avoid unnecessary computations and significantly reduces the computational cost. Several experiments demonstrate the advantages of our approach.
Though our approach is general in the sense that it is not tailored to specific activation functions, the specification-guided idea has the potential to be applied to methods dealing with specific activation functions, such as ReLU neural networks, to enhance their scalability. Moreover, since our approach can compute the output intervals of a neural network, it can be combined with other reachable set estimation methods to analyze dynamical system models with neural network components, such as the extension of \cite{xiang2018output} to closed-loop systems \cite{xiang2019reachable} and neural network models of nonlinear dynamics \cite{xiang2018reachable_b}.
\bibliographystyle{ieeetr}
'Stop the Nonsense': Nitish Kumar Loses Calm at Bihar Poll Rally
Bihar CM Nitish Kumar was addressing an election rally when some people raised 'Lalu Zindabad' slogans.
Published: 21 Oct 2020, 8:43 PM IST
Bihar Chief Minister Nitish Kumar was seen losing his cool at an election rally in Saran district on Wednesday, 21 October, when some people at the venue raised 'Lalu zindabad' slogans.
At a gathering held to campaign for long-time RJD leader and Lalu Yadav aide, Chandrika Rai, who recently crossed over to the ruling JDU, a furious Kumar was heard saying, "Do not do this nonsense here. If you don't want to vote, don't vote."
An agitated Kumar further said, "You will do harm to the person for whom you're here."
He further went on to ask the crowd whether the conduct of those raising the slogans was acceptable, to which his supporters responded with a resounding "no".
This comes as rallies held by Opposition candidate and RJD leader Tejashwi Yadav have seen supporters turning up in large numbers.
On Tuesday, Kumar, a four-time Bihar CM, ridiculed the RJD leader's claims at a rally, calling them "impossible" to fulfill.
The elections for 243 Assembly seats will be held in three phases – for 71 seats on 28 October, for 94 seats on 3 November, and for the remaining 78 on 7 November. The results will be announced on 10 November.
Rashtriya Janata Dal (RJD) leader Lalu Prasad Yadav on Monday, 19 October, attacked Chief Minister Kumar and asked him: "Do we now have to send the Indian Ocean for the development of Bihar?"
Lalu posted a cartoon and a statement in the Bhojpuri language on social media to slam Kumar, for his statement that big industries had not come to Bihar as it was not a coastal area.
While giving a clarification on the lack of jobs in Bihar, Kumar, during a virtual rally in the second week of October, had said that although the state was on the development path, big industries had not come here as it is not situated near a sea coast. Industrialists generally prefer coastal areas to establish industries, he had said.
Silt, or leem in Belgium, is a sediment whose grain size is classified between clay (lutum) and sand. A particle is called silt if its size falls between 2 and 63 micrometres. In Belgium silt is called leem (loam), because it consists for the most part of loam soil. Belgium thus uses the word leem in a double sense, for both the soil type and the texture class.
The classification systems in use draw the boundaries of the silt class differently. In the Udden-Wentworth scale (due to Krumbein), silt particles fall between 1⁄256 and 1⁄16 mm (3.9 to 62.5 μm). ISO 14688 places silt between 2 μm and 63 μm, with clay particles being smaller and sand particles larger. In the USDA Soil Texture Classification system, the boundary between sand and silt lies at a particle size of 50 μm. The USDA system has been adopted by the Food and Agriculture Organization (FAO). In the Unified Soil Classification System (USCS) and the AASHTO Soil Classification system, the boundary between sand and silt is set at a particle size of 75 μm (that is, material passing the #200 sieve).
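The grain-size thresholds above can be summarized in a small sketch. The function and its name are illustrative; the USCS clay/silt split is shown at 2 μm purely for illustration, since USCS actually separates clay from silt by plasticity rather than by size:

```python
def classify_particle(diameter_um, system="ISO14688"):
    """Classify a grain by its diameter in micrometres.

    Boundaries follow the thresholds quoted in the text:
    (clay/silt boundary, silt/sand boundary) per system.
    """
    bounds = {
        "ISO14688": (2.0, 63.0),
        "Udden-Wentworth": (3.9, 62.5),
        "USDA": (2.0, 50.0),   # FAO adopts the USDA limits
        "USCS": (2.0, 75.0),   # 75 um = the #200 sieve; the clay
                               # split here is illustrative only
    }
    clay_max, silt_max = bounds[system]
    if diameter_um < clay_max:
        return "clay"
    if diameter_um < silt_max:
        return "silt"
    return "sand"

# A 60 um grain is silt under ISO 14688 but sand under USDA.
print(classify_particle(60.0, "ISO14688"), classify_particle(60.0, "USDA"))
```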
Chemically, silt differs considerably from clay. Furthermore, silt grains are roughly equal in size in all dimensions, in contrast to clay particles, which consist of plate-shaped particles held together by electrostatic forces and therefore exhibit strong cohesion. Silts and clays have a characteristic plasticity.
Silt is a very common sediment in all kinds of sedimentary rocks. A rock consisting entirely of silt is called siltstone. Silt often forms the matrix of coarser rocks, such as sandstones or conglomerates.
\section{Introduction}\label{sec:intro}
Latent graphical models provide a succinct representation of the dependencies among observed and latent variables.
Each node in the graphical model represents a random variable or a random vector, and
the dependencies among these variables are captured by the edges between nodes. Graphical models are widely used in domains ranging from biology \cite{saitou1987neighbor} and computer vision \cite{tang2014latent}
to social networks \cite{eisenstein2010latent}.
This paper focuses on the structure learning of latent tree \ac{ggm} in which the node observations are random {\em vectors} and a subset of the observations can be {\em arbitrarily corrupted}. This classical problem, in which the variables are {\em clean scalar} random variables, has been studied extensively in the past decades. The first information distance-based method, \ac{nj},
was proposed in \cite{saitou1987neighbor} to learn the structure of phylogenetic trees. This method makes use of additive information distances to deduce the existence of hidden nodes and introduce
edges between hidden and observed nodes. \ac{rg}, proposed in \cite{choi2011learning}, generalizes the information distance-based methods to make it applicable for the latent graphical models
with general structures. Different from these information distance-based methods, quartet-based methods \cite{anandkumar2011spectral} utilize the relative geometry of every four nodes
to estimate the structure of the whole graph. Although experimental comparisons of these algorithms have been conducted \cite{choi2011learning,jaffe2021spectral,Casanellas21arxiv}, in the absence of an instance-dependent impossibility
result on the sample complexity of structure learning of latent tree graphical models, no thorough theoretical comparison has been made, and the optimal dependence on the
graph diameter and the maximal inter-node distance $\rho_{\max}$ has not been found.
The success of the previously mentioned algorithms relies on the assumption that the observations are i.i.d.\ samples from the generating distribution. The structure learning of latent graphical models in
the presence of (random or adversarial) noise remains a relatively unexplored problem. Some works study the structure learning of graphical models from noisy samples, where all the nodes in the graphical models are
observed rather than hidden. These works make several assumptions on the additive noise, which limit the applicability of the proposed algorithms. For example, the covariance matrix of the noise is specified in
\cite{katiyar2019robust}, and the independence and/or distribution of the noise is assumed in \cite{nikolakakis2019learning,tandon2020exact,tandon2021sga,Casanellas21arxiv}. In contrast, we consider the structure
learning of latent tree graphical models with \emph{arbitrary} corruptions, where neither boundedness nor independence of the noise is required \cite{wang2017robust}. Furthermore, the
corruptions may be present at any position in the data; they do not appear solely as outliers. In this work, we derive a bound on the maximum number of corruptions that can be tolerated while structure
learning still succeeds with high probability.
Firstly, we derive the sample complexities of \ac{rg} and \ac{clrg} where each node represents a random {\em vector}; this differs from previous works where each node is a {\em scalar} random variable (e.g., \cite{choi2011learning,parikh11}).
We explore the dependence of the sample complexities on the model parameters. The vanilla \ac{clrg} takes the mutual information values as edge weights and builds a maximum-weight spanning tree to learn the tree. However, for latent trees, we work with information distances, and the mutual information is, in general, not monotone in the distance (unless the variables are scalar). To implement \ac{clrg} on \ac{ggm}s with vector variables,
we consider a class of \ac{ggm}s in which parents and children nodes are connected by linear Gaussian channels. Sufficient conditions are derived to ensure that the mutual information is monotone in the distance, ensuring that \ac{clrg} is correctly implemented.
Our sample complexity analysis proves the effectiveness of the Chow-Liu initialization
in \ac{clrg}; previously, this had only been verified experimentally~\cite{choi2011learning}. For the \ac{hmm}, we show that the Chow-Liu initialization reduces the sample complexity from the
$O\big((\frac{9}{2})^{\mathrm{Diam}(\mathbb{T})}\big)$ required by \ac{rg} to $O\big(\log \mathrm{Diam}(\mathbb{T})\big)$, where $\mathrm{Diam}(\mathbb{T})$ is the tree diameter.
Secondly, we robustify \ac{rg}, \ac{clrg}, \ac{nj} and \ac{snj} by using the truncated inner product \cite{chen2013robust} to estimate the information distances in the presence of arbitrary corruptions. We derive their sample complexities and show that they can tolerate $n_{1}=O\big(\frac{\sqrt{n_{2}}}{\log n_{2}}\big)$ corruptions, where $n_{2}$ is the number of clean samples.
Finally, we derive the first known instance-dependent impossibility result for learning latent trees. The dependencies on the number of observed nodes and the maximum distance
$\rho_{\max}$ are delineated. The comparison of the sample complexities of the structure learning algorithms and the impossibility result demonstrates the optimality of \ac{rclrg} and \ac{rnj} in
$\mathrm{Diam}(\mathbb{T})$ for some archetypal latent tree structures.
\paragraph{Notation}
We use sans-serif letters $x$, boldface lowercase letters $\mathbf{x}$, and boldface uppercase letters $\mathbf{X}$ to denote random variables, vectors and matrices, respectively. The notations $[\mathbf{x}]_{i}$,
$[\mathbf{X}]_{ij}$, $[\mathbf{X}]_{:,j}$ and $\mathrm{diag}(\mathbf{X})$ denote, respectively, the $i^{\mathrm{th}}$ entry of the vector $\mathbf{x}$, the $(i,j)^{\mathrm{th}}$ entry of $\mathbf{X}$, the
$j^{\mathrm{th}}$ column of $\mathbf{X}$, and the diagonal entries of the matrix $\mathbf{X}$. The notation $x^{(k)}$ represents the $k^{\mathrm{th}}$ sample of $x$.
For a tree $\mathbb{T}=(\mathcal{V},\mathcal{E})$, the internal (non-leaf) nodes, the maximal degree and the diameter of $\mathbb{T}$ are denoted as $\mathrm{Int}(\mathbb{T})$,
$\mathrm{Deg}(\mathbb{T})$ and $\mathrm{Diam}(\mathbb{T})$, respectively. We denote the closed neighborhood and the degree of $x_{i}$ as $\mathrm{nbd}[x_{i};\mathbb{T}]$ and $\mathrm{deg}(i)$, respectively. The
length of the path connecting $x_{i}$ and $x_{j}$ is denoted as $\mathrm{d}_{\mathbb{T}}(x_{i},x_{j})$.
\section{Preliminaries and problem statement}
A \ac{ggm} \cite{lauritzen1996graphical} is a multivariate Gaussian distribution that factorizes according to an undirected graph $\mathbb{G}=(\mathcal{V},\mathcal{E})$. More precisely, a $l_{\mathrm{sum}}$-dimensional random vector $\mathbf{x}=[\mathbf{x}_{1},\ldots,\mathbf{x}_{d}]^{\mathrm{T}}$,
where $\mathbf{x}_{i}\in \mathbb{R}^{l_{i}}$ and $l_{\mathrm{sum}}=\sum_{i=1}^{d} l_{i}$, follows a Gaussian distribution $\mathcal{N}(\mathbf{0},\mathbf{\Sigma})$, and it is said to be \emph{Markov} on a graph $\mathbb{G}=(\mathcal{V},\mathcal{E})$ with vertex set $\mathcal{V}=\{x_{1},\ldots,x_{d}\}$ and
edge set $\mathcal{E}\subseteq \binom{\mathcal{V}}{2}$, where $(x_{i},x_{j})\in \mathcal{E}$ if and only if the $(i,j)^{\mathrm{th}}$ block $\mathbf{\Theta}_{ij}$ of the precision matrix $\mathbf{\Theta}=\mathbf{\Sigma}^{-1}$ is not the zero matrix $\mathbf{0}$. We focus on tree-structured graphical models, which factorize according to acyclic and connected (tree) graphs.
A special class of graphical models is the set of {\em latent} graphical models $\mathbb{G}=(\mathcal{V},\mathcal{E})$. The vertex set $\mathcal{V}$ is decomposed as $\mathcal{V}=\mathcal{V}_{\mathrm{hid}}\cup\mathcal{V}_{\mathrm{obs}}$.
We only have access to $n$ i.i.d.\ samples drawn from the observed set of nodes $\mathcal{V}_{\mathrm{obs}}$. The two goals of any structure learning algorithm are to learn the identities of the hidden nodes $\mathcal{V}_{\mathrm{hid}}$ and how they are connected to the observed nodes.
\subsection{System model for arbitrary corruptions}\label{subsec:sysmodel}
We consider tree-structured \ac{ggm}s $\mathbb{T}=(\mathcal{V},\mathcal{E})$ with observed nodes $\mathcal{V}_{\mathrm{obs}}=\{x_{1},\cdots,x_{o}\}$ and hidden nodes $\mathcal{V}_{\mathrm{hid}}=\{x_{o+1},\cdots,x_{o+h}\}$,
where $\mathcal{V}=\mathcal{V}_{\mathrm{hid}}\cup\mathcal{V}_{\mathrm{obs}}$ and $\mathcal{E}\subseteq \binom{\mathcal{V}}{2}$. Each node $x_{i}$ represents a random {\em vector} $\mathbf{x}_{i}\in \mathbb{R}^{l_{i}}$. The concatenation of
these random vectors is a multivariate Gaussian random vector with zero mean and covariance matrix $\mathbf{\Sigma}$ with size $l_{\mathrm{sum}}\times l_{\mathrm{sum}}$.
We have $n$ i.i.d.\ samples $\mathbf{X}_{j}=[\mathbf{x}_{1}^{(j)\rm T},\cdots,\mathbf{x}_{o}^{(j)\rm T}]^{\rm T}\in\mathbb{R}^{l_{\mathrm{sum}}}, j=1,\ldots,n$ drawn from the observed nodes $\mathcal{V}_{\mathrm{obs}}=\{x_{1},\cdots,x_{o}\}$.
Furthermore, the data matrix $\mathbf{X}_{1}^{n}=[\mathbf{X}_{1},\cdots,\mathbf{X}_{n}]^{\rm T}\in \mathbb{R}^{n\times l_{\mathrm{sum}}}$ may contain some corrupted elements. We allow $n_{1}/2$ samples of a
variable to be arbitrarily corrupted, which means that there are at most $n_{1}/2$ corrupted terms in each column of $\mathbf{X}_{1}^{n}$, and the remaining $n-n_{1}/2$ samples in this column are clean. In particular,
the corrupted samples in different columns need not be in the same rows. If the corruptions in different columns lie in the same rows, as shown in (the left of) Fig.~\ref{corrup_pic}, all the samples in the corresponding rows
are corrupted; these are called \emph{outliers}. Obviously, outliers form a special case of our corruption model. Since each variable has at most $n_{1}/2$ corrupted samples, the sample-wise inner product between two variables has
at least $n_{2}=n-n_{1}$ clean samples. There is no constraint on the statistical dependence or patterns of the corruptions.
Unlike fixing the covariance matrix of the noise \cite{katiyar2019robust} or keeping the noise independent \cite{nikolakakis2019learning},
we allow \emph{arbitrary} corruptions on the samples, which means that the noise can have unbounded amplitude, can be dependent, and even can be
generated from another graphical model (as we will see in the experimental results in Section~\ref{sec:simu}).
\subsection{Structural and distributional assumptions}\label{sec:assump}
To construct the correct latent tree from samples of observed nodes, it is imperative to constrain the class of latent trees to guarantee that the information from the distribution of observed nodes $p(\mathbf{x}_{1},\ldots,\mathbf{x}_{o})$ is sufficient to construct the
tree. The distribution $p(\mathbf{x}_{1},\ldots,\mathbf{x}_{o+h})$ of the observed and hidden nodes is said to have a \emph{redundant} hidden node $x_{j}$ if the distribution of the observed nodes $p(\mathbf{x}_{1},\ldots,\mathbf{x}_{o})$ remains the same after we marginalize over $x_{j}$. To ensure that a latent tree can be constructed
with no ambiguity, we need to guarantee that the true distribution does not have any redundant hidden nodes, which is achieved by the following two conditions \cite{pearl2014probabilistic}: (C1) Each hidden node has at least three neighbors; the set of such latent trees is denoted as $\mathcal{T}_{\geq 3}$; (C2)
Any two variables connected by an edge are neither perfectly dependent nor independent.
\begin{assumption}\label{assupleng}
The dimensions of all the random vectors are equal to $l_{\max}$.
\end{assumption}
In fact, we only require the random vectors of the internal (non-leaf) nodes to have the same length. However, for ease of notation,
we assume that the dimensions of all random vectors are $l_{\max}$.
\begin{assumption}\label{assupsing}
For every $x_{i},x_{j}\in \mathcal{V}$, the covariance matrix $\mathbf{\Sigma}_{ij}=\mathbb{E}\big[\mathbf{x}_{i}\mathbf{x}_{j}^{\mathrm{T}}\big]$ has full rank, and the smallest singular value of $\mathbf{\Sigma}_{ij}$ is lower bounded by $\gamma_{\min}$, i.e.,
\begin{align}
\sigma_{l_{\max}}(\mathbf{\Sigma}_{ij})\geq \gamma_{\min} \quad\text{for all}\quad x_{i},x_{j}\in \mathcal{V},
\end{align}
where $\sigma_{i}(\mathbf{\Sigma})$ is the $i^{\mathrm{th}}$ largest singular value of $\mathbf{\Sigma}$.
\end{assumption}
This assumption is a strengthening of Condition (C2) when each node represents a random vector.
\begin{assumption}\label{assupdet}
The determinant of the covariance matrix of any node $\mathbf{\Sigma}_{ii}=\mathbb{E}\big[\mathbf{x}_{i}\mathbf{x}_{i}^{\mathrm{T}}\big]$ is lower bounded by $\delta_{\min}$, and the diagonal terms of the covariance matrix are upper bounded by $\sigma_{\max}^{2}$, i.e.,
\begin{align}
\min_{x_{i}\in\mathcal{V}} \det(\mathbf{\Sigma}_{ii})\geq \delta_{\min} \quad\mbox{and}\quad \max_{x_{i}\in\mathcal{V}}\mathrm{diag}\big(\mathbf{\Sigma}_{ii}\big) \leq \sigma_{\max}^{2}.
\end{align}
\end{assumption}
Assumption~\ref{assupdet} is natural; otherwise, $\mathbf{\Sigma}_{ii}$ may be arbitrarily close to a singular matrix.
\begin{assumption}\label{assupdegree}
The degree of each node is upper bounded by $d_{\max}$, i.e., $\mathrm{Deg}(\mathbb{T})\leq d_{\max}$.
\end{assumption}
\subsection{Information distance}
We define the \emph{information distance} for Gaussian random vectors and prove that it is additive for trees.
\begin{definition}\label{def:dist}
The information distance between nodes $x_{i}$ and $x_{j}$ is
\begin{align}
\mathrm{d}(x_{i},x_{j})=-\log\frac{\prod_{k=1}^{l_{\max}}\sigma_{k}\big(\mathbf{\Sigma}_{ij}\big)}{\sqrt{\mathrm{det}\big(\mathbf{\Sigma}_{ii}\big)\mathrm{det}\big(\mathbf{\Sigma}_{jj}\big)}}.
\end{align}
\end{definition}
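Definition \ref{def:dist} translates directly into a short numerical routine. The sketch below is an illustration, not part of the paper's algorithms; in the scalar case the distance reduces to $-\log|\rho_{ij}|$, where $\rho_{ij}$ is the correlation coefficient:

```python
import numpy as np

def info_distance(S_ii, S_jj, S_ij):
    # d(x_i, x_j) = -log( prod_k sigma_k(S_ij)
    #                     / sqrt(det(S_ii) det(S_jj)) ).
    sv = np.linalg.svd(S_ij, compute_uv=False)
    return -(np.log(sv).sum()
             - 0.5 * np.log(np.linalg.det(S_ii))
             - 0.5 * np.log(np.linalg.det(S_jj)))

# Scalar sanity check: unit variances and correlation 0.5 give log 2.
d = info_distance(np.eye(1), np.eye(1), np.array([[0.5]]))
assert np.isclose(d, np.log(2.0))
```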
Condition (C2) can be equivalently restated as constraints on the information distance.
\begin{assumption}\label{assupdist}
There exist two constants $0<\rho_{\min}\leq\rho_{\max}<\infty$ such that
\begin{align}
\rho_{\min}\leq\mathrm{d}(x_{i},x_{j})\leq \rho_{\max} \quad \text{for all}\quad x_{i},x_{j}\in \mathcal{V}.
\end{align}
\end{assumption}
Assumptions \ref{assupsing} and \ref{assupdist} both describe the properties of the correlation between random vectors from different perspectives. In fact, we can relate the constraints in these two
assumptions as follows:
\begin{align}\label{eq:simi}
\gamma_{\min}e^{\rho_{\max}/l_{\max}}\geq \delta_{\min}^{1/l_{\max}}.
\end{align}
\begin{proposition}\label{prop:add}
If Assumptions \ref{assupleng} and \ref{assupsing} hold, $\mathrm{d}(\cdot,\cdot)$ defined in Definition \ref{def:dist} is additive on the tree-structured \ac{ggm} $\mathbb{T}=(\mathcal{V},\mathcal{E})$. In
other words, $\mathrm{d}(x_{i},x_{k})=\mathrm{d}(x_{i},x_{j})+\mathrm{d}(x_{j},x_{k})$ holds for any two nodes $x_{i},x_{k}\in \mathcal{V}$ and any node $x_{j}$ on the path connecting $x_{i}$ and $x_{k}$ in $\mathbb{T}$.
\end{proposition}
This additivity property is used extensively in the following algorithms. It was first stated and proved in Huang et al.~\cite{huang2020guaranteed}. We provide an alternative proof in Appendix \ref{appendix:add_dist}.
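Proposition \ref{prop:add} can also be checked numerically on a toy Markov chain of vector-valued nodes. The sketch below (with illustrative matrices; any full-rank choice works) builds $\mathbf{x}_2 = \mathbf{A}\mathbf{x}_1 + \mathbf{w}_2$ and $\mathbf{x}_3 = \mathbf{B}\mathbf{x}_2 + \mathbf{w}_3$ and verifies $\mathrm{d}(x_1,x_3)=\mathrm{d}(x_1,x_2)+\mathrm{d}(x_2,x_3)$:

```python
import numpy as np

def info_distance(S_ii, S_jj, S_ij):
    # Definition 1; the singular-value product of a square
    # full-rank block equals |det|.
    sv = np.linalg.svd(S_ij, compute_uv=False)
    return -(np.log(sv).sum() - 0.5 * np.log(np.linalg.det(S_ii))
             - 0.5 * np.log(np.linalg.det(S_jj)))

# Markov chain x1 - x2 - x3 of 2-dimensional Gaussian vectors, built
# from linear channels with independent noise of covariance 0.3 I.
A = np.array([[0.8, 0.1], [0.2, 0.7]])
B = np.array([[0.6, -0.2], [0.1, 0.9]])
S11 = np.eye(2)
S22 = A @ S11 @ A.T + 0.3 * np.eye(2)
S33 = B @ S22 @ B.T + 0.3 * np.eye(2)
S12 = S11 @ A.T          # E[x1 x2^T]
S23 = S22 @ B.T          # E[x2 x3^T]
S13 = S12 @ B.T          # E[x1 x3^T] by the Markov property

d12 = info_distance(S11, S22, S12)
d23 = info_distance(S22, S33, S23)
d13 = info_distance(S11, S33, S13)
assert np.isclose(d13, d12 + d23)   # additivity along the path
```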
\section{Robustifying latent tree structure learning algorithms} \label{sec:robustifying}
\subsection{Robust estimation of information distances}\label{subsec:infodistest}
Before delving into the details of robustifying latent tree structure learning algorithms, we first introduce the truncated inner product \cite{chen2013robust}, which estimates the correlation against arbitrary corruption effectively and
serves as a basis for the robust latent tree structure learning algorithms. Given $\mathbf{a},\mathbf{b}\in \mathbb{R}^{n}$ and an integer $n_{1}$, we compute $q_{i}=a_{i}b_{i}$ for $i=1,2,\ldots,n$ and sort $\{|q_{i}|\}$. Let $\Upsilon$ be the index set of the $n-n_{1}$ smallest $|q_i|$'s. The truncated inner product is $\langle\mathbf{a},\mathbf{b}\rangle_{n_{1}}=\sum_{i\in\Upsilon}q_{i}$.
Note that the implementation of the truncated inner product requires the knowledge of corruption level $n_{1}$.
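A minimal sketch of the truncated inner product follows, with a toy demonstration that it withstands gross corruptions that derail the plain inner product (the corruption pattern and magnitudes are illustrative):

```python
import numpy as np

def truncated_inner_product(a, b, n1):
    # <a, b>_{n1}: sort |a_i b_i| and sum only the n - n1 smallest
    # products, discarding the n1 largest-magnitude ones.
    q = a * b
    keep = np.argsort(np.abs(q))[: len(q) - n1]
    return q[keep].sum()

rng = np.random.default_rng(1)
n, n1 = 1000, 20
a, b = rng.normal(size=n), rng.normal(size=n)
a_corrupt = a.copy()
a_corrupt[:n1] = 1e6            # arbitrary corruptions in n1 samples

clean = a @ b
robust = truncated_inner_product(a_corrupt, b, n1)
naive = a_corrupt @ b
assert abs(robust - clean) < abs(naive - clean)
```

With `n1 = 0` the routine reduces to the ordinary inner product; overestimating `n1` merely discards a few clean products, which is why knowledge of the corruption level matters.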
To estimate the information distance defined in Definition \ref{def:dist}, we implement the truncated inner product to estimate each term of $\mathbf{\Sigma}_{ij}$, i.e., $[\hat{\mathbf{\Sigma}}_{ij}]_{st}=\frac{1}{n-n_{1}}\langle[\mathbf{X}_{1}^{n}]_{:,(i-1)l_{\max}+s},[\mathbf{X}_{1}^{n}]_{:,(j-1)l_{\max}+t}\rangle_{n_{1}}$.
Then the information distance is computed based on this estimate
of $\mathbf{\Sigma}_{ij}$ as
\begin{align}
\hat{\mathrm{d}}(x_{i},x_{j})=-\log\prod_{k=1}^{l_{\max}}\sigma_{k}\big(\hat{\mathbf{\Sigma}}_{ij}\big)+\frac{1}{2}\log\mathrm{det}\big(\hat{\mathbf{\Sigma}}_{ii}\big)+\frac{1}{2}\log\mathrm{det}\big(\hat{\mathbf{\Sigma}}_{jj}\big).
\end{align}
The truncated inner product guarantees that $\hat{\mathbf{\Sigma}}_{ij}$ converges in probability to $\mathbf{\Sigma}_{ij}$, which further ensures the convergence of the singular values and the determinant of $\mathbf{\Sigma}_{ij}$ to their nominal values.
\begin{proposition}\label{distconcen}
If Assumptions \ref{assupleng} and \ref{assupsing} hold, the estimate of the information distance between $x_{i}$ and $x_{j}$ based on the truncated inner product $\hat{\mathrm{d}}(x_{i},x_{j})$ satisfies
\begin{align}
\mathbb{P}\Big(\big|\hat{\mathrm{d}}(x_{i},x_{j})-\mathrm{d}(x_{i},x_{j})\big|> \frac{2l_{\max}^{2}}{\gamma_{\min}}(t_{1}+t_{2})\Big)\le 2l_{\max}^{2}e^{-\frac{3n_{2}}{16\kappa n_{1}} t_{1}}+l_{\max}^{2}e^{-c\frac{n_{2}}{\kappa^{2}}t_{2}^{2}}, \label{eqn:tail}
\end{align}
where $t_{2}<\kappa=\max\{\sigma_{\max}^{2},\rho_{\min}\}$, and $c$ is an absolute constant.
\end{proposition}
The first and second parts of \eqref{eqn:tail} originate from the corrupted and clean samples respectively.
\subsection{Robust Recursive Grouping algorithm}\label{subsec:rrg}
The \ac{rg} algorithm was proposed by \cite{choi2011learning} to learn latent tree models with additive information distances. We extend the \ac{rg} to be applicable to \ac{ggm}s with vector observations and robustify it to learn the
tree structure against arbitrary corruptions. We call this robustified algorithm \ac{rrg}. \ac{rrg} makes use of the additivity of information distance to identify the relationship between nodes. For any three nodes $x_{i}$, $x_{j}$ and $x_{k}$, the difference between the information distances $\mathrm{d}(x_{i},x_{k})$ and $\mathrm{d}(x_{j},x_{k})$
is denoted as $\Phi_{ijk}=\mathrm{d}(x_{i},x_{k})-\mathrm{d}(x_{j},x_{k})$.
\begin{lemma}{\cite{choi2011learning}}\label{lem:identify}
For information distances $\mathrm{d}(x_{i},x_{j})$ for all nodes $x_{i},x_{j}\in \mathcal{V}$ in a tree $\mathbb{T}\in \mathcal{T}_{\geq 3}$, $\Phi_{ijk}$ has the following two properties: (1) $\Phi_{ijk}=\mathrm{d}(x_{i},x_{j}) \text{ for all } x_{k}\in \mathcal{V}\backslash\{x_{i},x_{j}\}$ if and only if $x_{i}$ is a leaf node and $x_{j}$ is its parent; and (2) $-\mathrm{d}(x_{i},x_{j})<\Phi_{ijk^{\prime}}=\Phi_{ijk}<\mathrm{d}(x_{i},x_{j}) \text{ for all } x_{k},x_{k^{\prime}}\in \mathcal{V}\backslash\{x_{i},x_{j}\}$ if and only if $x_{i}$ and $x_{j}$ are leaves and share the same parent.
\end{lemma}
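The two $\Phi$-based tests can be phrased as a small decision routine over a matrix of pairwise distances. The sketch below is illustrative only: it assumes exact additive distances and a toy tolerance, whereas \ac{rrg} works with estimated distances and the restricted sets $\mathcal{K}_{ij}$. By additivity, $\Phi_{ijk}$ constant and equal to $\mathrm{d}(x_i,x_j)$ identifies $x_i$ as a leaf attached to $x_j$:

```python
import numpy as np

def classify_pair(D, i, j, others, tol=1e-9):
    # Phi_ijk = d(x_i, x_k) - d(x_j, x_k), evaluated for k in `others`.
    phi = np.array([D[i, k] - D[j, k] for k in others])
    if np.all(np.abs(phi - D[i, j]) < tol):
        return "i is a leaf with parent j"
    if np.ptp(phi) < tol and np.all(np.abs(phi) < D[i, j] - tol):
        return "i and j are sibling leaves"
    return "neither"

# Path x0 - x1 - x2 - x3 with unit-length edges: x0 is a leaf of x1.
D = np.abs(np.subtract.outer(np.arange(4.0), np.arange(4.0)))
assert classify_pair(D, 0, 1, [2, 3]) == "i is a leaf with parent j"

# Star: a hidden centre with three leaves, all pairwise distances 2.
Dstar = np.full((3, 3), 2.0)
np.fill_diagonal(Dstar, 0.0)
assert classify_pair(Dstar, 0, 1, [2]) == "i and j are sibling leaves"
```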
\ac{rrg} initializes the active set $\Gamma^{1}$ to be the set of all observed nodes. In the $i^{\mathrm{th}}$ iteration, as shown in Algorithm \ref{algo:rrg}, \ac{rrg} adopts Lemma \ref{lem:identify} to identify relationships among nodes in active set $\Gamma^{i}$, and it
removes nodes identified as siblings from $\Gamma^{i}$ and adds newly introduced hidden nodes to form the active set $\Gamma^{i+1}$ in the $(i+1)^{\mathrm{st}}$ iteration. The procedure of estimating the distances between the newly-introduced hidden node $x_{\mathrm{new}}$ and other nodes is as
follows. For the node $x_{i}$ which is the child of $x_{\mathrm{new}}$, i.e., $x_{i}\in \mathcal{C}(x_{\mathrm{new}})$, the information distance is estimated as
\begin{align}\label{distup1}
\hat{\mathrm{d}}(x_{i},x_{\mathrm{new}})=\frac{1}{2\big(|\mathcal{C}(x_{\mathrm{new}})|-1\big)}\bigg(\sum_{j\in\mathcal{C}(x_{\mathrm{new}})}\hat{\mathrm{d}}(x_{i},x_{j})+\frac{1}{|\mathcal{K}_{ij}|}\sum_{k\in\mathcal{K}_{ij}}\hat{\Phi}_{ijk}\bigg),
\end{align}
where $\mathcal{K}_{ij}=\big\{x_{k}\in \mathcal{V}\backslash \{x_{i},x_{j}\} :\max\big\{\hat{\mathrm{d}}(x_{i},x_{k}),\hat{\mathrm{d}}(x_{j},x_{k})\big\}<\tau \big\}$ for some threshold $\tau>0$. For $x_{i}\notin \mathcal{C}(x_{\mathrm{new}})$,
the distance is estimated as
\begin{align}\label{distup2}
\hat{\mathrm{d}}(x_{i},x_{\mathrm{new}})=\left\{
\begin{array}{ll}
\sum_{x_{k}\in \mathcal{C}(x_{\mathrm{new}})}\frac{\hat{\mathrm{d}}(x_{k},x_{i})-\hat{\mathrm{d}}(x_{k},x_{\mathrm{new}})}{|\mathcal{C}(x_{\mathrm{new}})|} & \text{if }x_{i}\in \mathcal{V}_{\mathrm{obs}}, \\
\sum_{(x_{k},x_{j})\in \mathcal{C}(x_{\mathrm{new}})\times \mathcal{C}(i)}\frac{\hat{\mathrm{d}}(x_{k},x_{j})-\hat{\mathrm{d}}(x_{k},x_{\mathrm{new}})-\hat{\mathrm{d}}(x_{j},x_{i})}{|\mathcal{C}(x_{\mathrm{new}})||\mathcal{C}(i)|} &\text{otherwise.}
\end{array}
\right.
\end{align}
\begin{wrapfigure}{r}{4.5cm}
\centering
\includegraphics[width=0.3\textwidth]{ite_num2.eps}
\caption{\small An illustration of the active set. The shaded nodes are the observed nodes and the rest are hidden nodes.}
\label{fig:itefig}
\end{wrapfigure}
The set $\mathcal{K}_{ij}$ is designed to ensure that the nodes involved in the calculation of information distances are not too far, since estimating long distances requires a large number of samples. The maximal cardinality of $\mathcal{K}_{ij}$ over all nodes $x_{i},x_{j}\in \mathcal{V}$ can be found, and we denote this as $N_{\tau}$, i.e., $|\mathcal{K}_{ij}|\leq N_{\tau}$.
The observed nodes are placed in the $0^{\mathrm{th}}$ layer. The hidden nodes introduced in $i^{\mathrm{th}}$ iteration are placed in $i^{\mathrm{th}}$ layer. The nodes in the $i^{\mathrm{th}}$ layer are in the active set $\Gamma^{i+1}$
in the $(i+1)^{\mathrm{st}}$ iteration, but nodes in $\Gamma^{i+1}$ can be nodes created in the $j^{\mathrm{th}}$ iteration, where $j<i$. For example, in Fig.~\ref{fig:itefig}, nodes $x_{12}$, $x_{14}$ and $x_{15}$ are created in the $1^{\mathrm{st}}$ iteration, and they
are in $\Gamma^{2}$. Nodes $x_{1}$, $x_{2}$ and $x_{5}$ are also in $\Gamma^{2}$, which are observed nodes. Eqns.~\eqref{distup1} and~\eqref{distup2} imply that
the estimation error in the $0^{\mathrm{th}}$ layer will propagate to the nodes in higher layers, and it is necessary to derive concentration results for the information distance related to the nodes in higher layers. To avoid repeating complicated
expressions in the various concentration bounds to follow, we define the function
\begin{align}
f(x)\triangleq 2l_{\max}^{2}e^{-\frac{3n_{2}}{32\lambda\kappa n_{1}} x}+l_{\max}^{2}e^{-c\frac{n_{2}}{4\lambda^{2}\kappa^{2}}x^{2}}=: ae^{-wx}+be^{-ux^{2}},\nonumber
\end{align}
where $\lambda=2l_{\max}^{2}e^{\rho_{\max}/l_{\max}}/\delta_{\min}^{1/l_{\max}}$, $w=\frac{3n_{2}}{32\lambda\kappa n_{1}}$, $u=c\frac{n_{2}}{4\lambda^{2}\kappa^{2}}$, $a=2l_{\max}^{2}$ and $b=l_{\max}^{2}$. To assess the proximity of the estimates $\hat{\mathrm{d}}(x_{i},x_{\mathrm{new}})$ in
\eqref{distup1} and \eqref{distup2} to their nominal versions, we define
\begin{align}
h^{(l)}(x)\triangleq s^{l}f(m^{l}x)=s^{l}\big(ae^{-wm^{l}x}+be^{-um^{2l}x^{2}}\big) \quad \text{for all}\quad l \in \mathbb{N}\cup\{0\}, \label{eqn:hl}
\end{align}
where $s=d_{\max}^{2}+2d_{\max}^{3}(1+2N_{\tau})$ and $m=2/9$. The following proposition yields {\em recursive estimates} for the errors of the distances at various layers of the learned latent tree.
where $s=d_{\max}^{2}+2d_{\max}^{3}(1+2N_{\tau})$ and $ m=2/9$. The following proposition yields {\em recursive estimates} for the errors of the distances at various layers of the learned latent tree.
\begin{proposition}\label{errorpropg}
With Assumptions \ref{assupleng}--\ref{assupdist}, if we implement the truncated inner product to estimate the information distance among observed nodes and adopt \eqref{distup1} and \eqref{distup2} to estimate
the information distances related to newly introduced hidden nodes, then the information distance related to the hidden nodes $x_{\mathrm{new}}$ created in the $l^{\mathrm{th}}$ layer $\hat{\mathrm{d}}(x_{i},x_{\mathrm{new}})$ satisfies
\begin{align}\label{expotail}
\mathbb{P}\Big(\big|\hat{\mathrm{d}}(x_{i},x_{\mathrm{new}})-\mathrm{d}(x_{i},x_{\mathrm{new}})\big|>\varepsilon\Big)<h^{(l)}(\varepsilon)\quad \mbox{for all}\quad x_{i}\in \Gamma^{l+1} \quad\mbox{and}\quad l \in \mathbb{N}\cup\{0\}.
\end{align}
\end{proposition}
We note that Proposition \ref{errorpropg} shows that the tail bound in \eqref{expotail} weakens exponentially with increasing layers (the prefactor $s^{l}$ grows while the decay rates shrink as $m^{l}$ and $m^{2l}$ in \eqref{eqn:hl}), which requires a commensurately large number of samples to control
the tail probabilities.
\begin{theorem}\label{theo:rrgsamplecomp}
Under Assumptions \ref{assupleng}--\ref{assupdist}, \ac{rrg} learns the correct latent tree with probability $1-\eta$ if
\begin{align}\label{eqn:sampcomplex}
n_{2}=\Omega\Big(\frac{l_{\max}^{4}e^{2\rho_{\max}/l_{\max}} \kappa^{2}}{\delta_{\min}^{2/l_{\max}}\rho_{\min}^{2}}\big(\frac{9}{2}\big)^{2L_{\mathrm{R}}}\log\frac{|\mathcal{V}_{\mathrm{obs}}|^{3}}{\eta}\Big)\quad \text{and}\quad n_{1}=O\Big(\frac{\sqrt{n_{2}}}{\log n_{2}}\Big),
\end{align}
where $L_{\mathrm{R}}$ is the number of iterations of \ac{rrg} needed to construct the tree.
\end{theorem}
Theorem \ref{theo:rrgsamplecomp} indicates that the number of clean samples $n_{2}$ required by \ac{rrg} to learn the correct structure grows exponentially with the number of iterations $L_{\mathrm{R}}$. Specifically, for the full $m$-tree illustrated in Fig.~\ref{fig:comp}, $n_{2}$ must be exponential in the depth of the tree for structure learning to succeed with high probability. The sample complexity of \ac{rrg} depends on $e^{2\rho_{\max}/l_{\max}}$, and the exponential relationship with
$\rho_{\max}$ will be shown to be unavoidable in view of our impossibility result in Theorem~\ref{theo:converse}. Huang et al.~\cite[Lemma~7.2]{huang2020guaranteed} also derived a sample complexity result for learning latent trees, but their algorithm is based on~\cite{anandkumar2011spectral} instead of \ac{rg}. \ac{rrg} is able to tolerate $n_{1}=O(\sqrt{n_{2}}/\log n_{2})$ corruptions.
This tolerance level originates from the properties of the truncated inner product; similar tolerances will also be seen for the sample complexities of subsequent algorithms. We expect this is also the case for \cite{huang2020guaranteed}, which is based on \cite{anandkumar2011spectral}, though we have not shown this formally. In addition, the sample complexity is applicable to a wide class of graphical models that satisfies the Assumptions~\ref{assupleng} to~\ref{assupdist}, while the sample complexity result \cite[Theorem 11]{choi2011learning}, which hides the dependencies on the parameters, only holds for a limited class of graphical models whose effective depths (the maximal length of paths between hidden nodes and their closest observed nodes) are bounded in $|\mathcal{V}_{\mathrm{obs}}|$.
\subsection{Robust Neighbor Joining and Spectral Neighbor Joining algorithms}\label{subsec:snjnj}
The \ac{nj} algorithm \cite{saitou1987neighbor} also makes use of additive distances to identify the existence of hidden nodes. To robustify the \ac{nj}
algorithm, we adopt robust estimates of information distances as the additive distances in the so-called \ac{rnj} algorithm. We first recap a result by Atteson~\cite{atteson1999performance}.
\begin{proposition}\label{prop:njsuff}
If all the internal nodes have exactly two children, \ac{nj} will output the correct latent tree if
\begin{align}
\max_{x_{i},x_{j}\in\mathcal{V}_{\mathrm{obs}}} \big|\hat{\mathrm{d}}(x_{i},x_{j})-\mathrm{d}(x_{i},x_{j})\big|\leq {\rho_{\min}}/{2}.
\end{align}
\end{proposition}
Unlike \ac{rg}, \ac{nj} does not identify the parent relationship among nodes, so it is only applicable to binary trees in which each node has at most two children.
\begin{theorem}\label{theo:rnjsamplecomp}
If Assumptions \ref{assupleng}--\ref{assupdist} hold and all the internal nodes have exactly two children, \ac{rnj} constructs the correct latent tree with probability at least $1-\eta$ if
\begin{align}
n_{2}=\Omega\Big(\frac{l_{\max}^{4}e^{2\rho_{\max}/l_{\max}}\kappa^{2}}{\delta_{\min}^{2/l_{\max}}\rho_{\min}^{2}}\log\frac{|\mathcal{V}_{\mathrm{obs}}|^{2}}{\eta}\Big)\quad \text{and}\quad n_{1}=O\Big(\frac{\sqrt{n_{2}}}{\log n_{2}}\Big).
\end{align}
\end{theorem}
Theorem \ref{theo:rnjsamplecomp} indicates that the sample complexity of \ac{rnj} grows as $\log |\mathcal{V}_{\mathrm{obs}}|$, which is much better than that of \ac{rrg}. Similarly to \ac{rrg},
the sample complexity has an exponential dependence on $\rho_{\max}$.
In recent years, several variants of \ac{nj} algorithms have been proposed. The additivity of information distances results in certain properties of the rank of the matrix $\mathbf{R}\in \mathbb{R}^{|\mathcal{V}_{\mathrm{obs}}|\times|\mathcal{V}_{\mathrm{obs}}|}$, where
$\mathbf{R}(i,j)=\exp(-\mathrm{d}(x_{i},x_{j}))$ for all $x_{i},x_{j}\in \mathcal{V}_{\mathrm{obs}}$. Jaffe \emph{et al.} \cite{jaffe2021spectral} proposed \ac{snj} which utilizes the rank of $\mathbf{R}$ to deduce the
sibling relationships among nodes. We robustify the \ac{snj} algorithm by implementing the robust estimation of information distances, as shown in Algorithm \ref{algo:rsnj}.
Although \ac{snj} was designed for discrete random variables, the additivity of the information distance proved in Proposition \ref{prop:add} guarantees the consistency of \ac{rsnj} for \ac{ggm}s with vector variables.
A sufficient condition for \ac{rsnj} to learn the correct tree can be generalized from \cite{jaffe2021spectral}.
\begin{proposition}\label{prop:snjsuff}
If Assumptions \ref{assupleng}--\ref{assupdist} hold and all the internal nodes have exactly two children, a sufficient condition for \ac{rsnj} to recover
the correct tree from $\hat{\mathbf{R}}$ is
\begin{align}
\|\hat{\mathbf{R}}-\mathbf{R}\|_{2}\leq g(|\mathcal{V}_{\mathrm{obs}}|,\rho_{\min},\rho_{\max}),
\end{align}
where
\begin{align}
g(x,\rho_{\min},\rho_{\max})&=\left\{
\begin{array}{lr}
\frac{1}{2}(2e^{-\rho_{\max}})^{\log_{2}(x/2)}e^{-\rho_{\max}}(1-e^{-2\rho_{\min}}),& \quad e^{-2\rho_{\max}}\leq 0.5\\
e^{-3\rho_{\max}}(1-e^{-2\rho_{\min}}),& \quad e^{-2\rho_{\max}}> 0.5
\end{array}
\right. .\nonumber
\end{align}
\end{proposition}
Similar to \ac{rnj},
\ac{rsnj} also does not identify the parent relationship between nodes, so it only applies to binary trees. To state the next result succinctly, we assume that $\rho_{\max}\ge \frac{1}{2}\log 2$; this is the regime of interest because we consider large trees, which implies that $\rho_{\max}$ is typically large.
\begin{theorem}\label{theo:rsnjsamplecomp}
If Assumptions \ref{assupleng}--\ref{assupdist} hold, $\rho_{\max}\ge \frac{1}{2}\log 2$, and all the internal nodes have exactly two children, \ac{rsnj} reconstructs the correct latent tree with probability at least $1-\eta$ if
\begin{align}
n_{2}=\Omega\Big( \frac{l_{\max}^{4}e^{2\rho_{\max}(1/l_{\max}+\log_{2}(|\mathcal{V}_{\mathrm{obs}}|/2)+1)}\kappa^{2} }{\delta_{\min}^{2/l_{\max}}e^{2\rho_{\min}}}\log\frac{|\mathcal{V}_{\mathrm{obs}}|^{2}}{\eta}\Big)\quad \text{and}\quad n_{1}=O\Big(\frac{\sqrt{n_{2}}}{\log n_{2}}\Big).
\end{align}
\end{theorem}
Theorem \ref{theo:rsnjsamplecomp} indicates that the sample complexity of \ac{rsnj} grows as $\mathrm{poly}(|\mathcal{V}_{\mathrm{obs}}|)$. Specifically, in the binary tree case, the sample complexity grows exponentially with the depth of the tree. The dependence of the sample complexity on $\rho_{\max}$ is also exponential, i.e., $O\big(e^{2(1/l_{\max}+\log_{2}(|\mathcal{V}_{\mathrm{obs}}|/2)+1)\rho_{\max}}\big)$, but the coefficient of $\rho_{\max}$ in the exponent is
larger than those of \ac{rrg} and \ac{rnj}, which are both $O\big(e^{2\rho_{\max}/l_{\max}}\big)$. Compared to the sample complexity of \ac{snj} in \cite{jaffe2021spectral}, the sample complexity of \ac{rsnj} has the same dependence on the number of observed
nodes $|\mathcal{V}_{\mathrm{obs}}|$, which means that the robustification of \ac{snj} using the truncated inner product is able to tolerate $O\big(\frac{\sqrt{n_{2}}}{\log n_{2}}\big)$ corruptions at no extra cost in $|\mathcal{V}_{\mathrm{obs}}|$.
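The rank structure that \ac{snj} exploits can be seen on a toy quartet tree $((x_1,x_2),(x_3,x_4))$ with internal nodes $u,v$ and edge lengths of our own choosing (this sketch is ours, not from the algorithm itself): by additivity, every path from $\{x_1,x_2\}$ to $\{x_3,x_4\}$ passes through the edge $(u,v)$, so the corresponding off-diagonal block of $\mathbf{R}$ factors into a rank-one product.

```python
import math

# Hypothetical edge lengths: x1-u: 1, x2-u: 2, u-v: 2, x3-v: 1, x4-v: 3.
d13, d14 = 1 + 2 + 1, 1 + 2 + 3    # distances from x1 to x3 and x4
d23, d24 = 2 + 2 + 1, 2 + 2 + 3    # distances from x2 to x3 and x4

# Off-diagonal block of R(i, j) = exp(-d(x_i, x_j)), rows {x1, x2}, cols {x3, x4}.
block = [[math.exp(-d13), math.exp(-d14)],
         [math.exp(-d23), math.exp(-d24)]]

# Each entry splits as exp(-d(x_i, u)) * exp(-d(u, x_j)): a rank-one outer
# product, so the 2x2 block is singular.
det = block[0][0] * block[1][1] - block[0][1] * block[1][0]
assert abs(det) < 1e-15
```

This rank-one factorization of cross blocks is exactly what makes sibling detection via matrix rank possible.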
\subsection{Robust Chow-Liu Recursive Grouping}\label{subsec:rclrg}
In this section, we show that the exponential dependence on $L_{\mathrm{R}}$ in Theorem \ref{theo:rrgsamplecomp} can be provably mitigated with an accurate initialization of the structure. Different from \ac{rrg}, \ac{rclrg} uses the Chow-Liu algorithm as an initialization stage, as shown in Algorithm \ref{algo:rclrg}.
The Chow-Liu algorithm~\cite{chow1968approximating} learns the maximum likelihood estimate of the tree structure by finding the maximum weight spanning tree of the graph whose edge weights are the mutual information quantities
between these variables. With the monotonicity between mutual information and information distance, we can construct the Chow-Liu tree as the \ac{mst} of the graph whose weights are information distances. However, this
monotonicity property is violated in general for random {\em vectors}, since the determinants of $\mathbf{I}-\mathbf{X}$ and $\mathbf{X}$ are not monotonic functions of each other for general $\mathbf{X}\in \mathbb{R}^{l_{\max}\times l_{\max }}$.
We now show that the required monotonicity property is guaranteed by a wide class of \ac{ggm}s.
We choose any node in the tree as the root node $\mathbf{x}_{\mathrm{r}}$, and define the parent node and set of children nodes (in the rooted tree) of any node $x_{i}$ as $\mathrm{pa}(i)$ and $\mathcal{C}(x_{i})$ respectively. The depth of a node $x_{i}$ is
$\mathrm{d}_{\mathbb{T}}(x_{i},x_{\mathrm{r}})$. We specify the model
\begin{align}\label{eqn:chanl}
\mathbf{x}_{i}=\mathbf{A}\mathbf{x}_{\text{pa}(i)}+\mathbf{n}_{i} \quad\text{for all}\quad x_{i}\in \mathcal{V}
\end{align}
where $\mathbf{A} \in \mathbb{R}^{l_{\max}\times l_{\max}}$ is non-singular, $\mathbf{n}_{i}\sim \mathcal{N}(\mathbf{0},\mathbf{\Sigma}_{i})$ and $\mathbf{n}_{i}$'s are mutually independent. Since the root node has no parent, it is natural to set $\mathbf{x}_{\text{pa}(\mathrm{r})}=\mathbf{0} $ and $\mathbf{n}_{\mathrm{r}}\sim \mathcal{N}(\mathbf{0},\mathbf{\Sigma}_{\mathrm{r}})$.
It is easy to verify that the model specified by \eqref{eqn:chanl} and this initial condition is an undirected \ac{ggm}. To guarantee that the mutual information is a monotonic function of the information
distance, it is natural to consider the situation in which the covariance matrices of all variables are the same up to a constant scale factor.
\begin{proposition}\label{prop:homo}
If the $\mathbf{n}_{i}$'s for the variables at depth $l$ are distributed as $\mathcal{N}(\mathbf{0},\alpha^{l-1}\mathbf{\Sigma}_{\mathrm{n}})$, and
\begin{align}\label{eqn:homo}
\mathbf{A}\mathbf{\Sigma}_{\mathrm{r}}\mathbf{A}^{\mathrm{T}}+\mathbf{\Sigma}_{\mathrm{n}}=\alpha\mathbf{\Sigma}_{\mathrm{r}}
\end{align}
where $\alpha>0$ is a constant, then the covariance matrix of the variable at depth $l$ is $\alpha^{l}\mathbf{\Sigma}_{\mathrm{r}}$.
\end{proposition}
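Proposition \ref{prop:homo} is easy to check numerically. The sketch below (with arbitrarily chosen illustrative parameters $\alpha$, $\mathbf{A}$ and $\mathbf{\Sigma}_{\mathrm{r}}$, not taken from the paper) solves \eqref{eqn:homo} for $\mathbf{\Sigma}_{\mathrm{n}}$ and propagates the covariance recursion $\mathbf{\Sigma}_{l}=\mathbf{A}\mathbf{\Sigma}_{l-1}\mathbf{A}^{\mathrm{T}}+\alpha^{l-1}\mathbf{\Sigma}_{\mathrm{n}}$ implied by \eqref{eqn:chanl}.

```python
import numpy as np

# Illustrative (hypothetical) parameters: A and Sigma_r are diagonal, so they
# commute, and the resulting Sigma_n below is positive definite.
alpha = 2.0
A = np.diag([0.5, 0.8])
Sigma_r = np.eye(2)

# Solve the (A, Sigma_r, Sigma_n)-homogeneous condition for Sigma_n.
Sigma_n = alpha * Sigma_r - A @ Sigma_r @ A.T
assert np.all(np.linalg.eigvalsh(Sigma_n) > 0)  # a valid noise covariance

# Propagate Sigma_l = A Sigma_{l-1} A^T + alpha^(l-1) Sigma_n down the tree.
Sigma = Sigma_r.copy()
for l in range(1, 6):
    Sigma = A @ Sigma @ A.T + alpha ** (l - 1) * Sigma_n
    # Proposition: the covariance at depth l equals alpha^l * Sigma_r.
    assert np.allclose(Sigma, alpha ** l * Sigma_r)
```

The induction is visible in the loop: once $\mathbf{\Sigma}_{l-1}=\alpha^{l-1}\mathbf{\Sigma}_{\mathrm{r}}$, condition \eqref{eqn:homo} yields $\mathbf{\Sigma}_{l}=\alpha^{l-1}(\mathbf{A}\mathbf{\Sigma}_{\mathrm{r}}\mathbf{A}^{\mathrm{T}}+\mathbf{\Sigma}_{\mathrm{n}})=\alpha^{l}\mathbf{\Sigma}_{\mathrm{r}}$.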
We call \eqref{eqn:homo} the \emph{$(\mathbf{A},\mathbf{\Sigma}_{\mathrm{r}},\mathbf{\Sigma}_{\mathrm{n}})$-homogeneous} condition, which guarantees that the covariance matrices of the random vectors in the tree are the same up to a scale
factor. We now provide a sufficient condition on $\mathbf{A},\mathbf{\Sigma}_{\mathrm{r}},\mathbf{\Sigma}_{\mathrm{n}}$ to achieve the monotonicity between mutual information and information distance.
\begin{proposition}\label{prop:mono}
In the \ac{ggm} specified by \eqref{eqn:chanl}, if (i) the $\mathbf{n}_{i}$'s for the variables at depth $l$ are distributed as $\mathcal{N}(\mathbf{0},\alpha^{l-1}\mathbf{\Sigma}_{\mathrm{n}})$ for some $\alpha> 0$; (ii) the
$(\mathbf{A},\mathbf{\Sigma}_{\mathrm{r}},\mathbf{\Sigma}_{\mathrm{n}})$-homogeneous condition in~\eqref{eqn:homo} is satisfied; and (iii) $\mathbf{\Sigma}_{\mathrm{r}}$ and $\mathbf{A}$ commute,
then the mutual information is a monotonically decreasing function of the information distance. Furthermore, the mutual information and the information distance can be expressed in closed form in terms of $(\alpha,\mathrm{d_{\mathbb{T}}}(x_{i},x_{j}), \mathbf{A})$. See \eqref{eqn:mi} and \eqref{eqn:dist} in Appendix~\ref{app:sec34}.
\end{proposition}
This property is trivially satisfied in the scalar case~\cite{choi2011learning}, but is more subtle in the vector case. With this property, the Chow-Liu algorithm \cite{chow1968approximating} can be implemented by finding the \ac{mst} with information distances as edge weights.
\begin{lemma}
If the three conditions in Proposition \ref{prop:mono} are satisfied, the Chow-Liu tree reduces to the \ac{mst} where edge weights are the information distances, i.e.,
\begin{align}
\mathbb{T}_{\mathrm{CL}}=\mathrm{MST}(\mathcal{V}_{\mathrm{obs}};\mathbf{D}):=\mathop{\arg\min}_{\mathbb{T} \in \mathcal{T}_{\mathcal{V}_{\mathrm{obs}}}} \ \ \sum_{(x_{i},x_{j})\in \mathbb{T}} \mathrm{d}(x_{i},x_{j}),
\end{align}
where $\mathcal{T}_{\mathcal{V}_{\mathrm{obs}}}$ is the set of all trees with node set $\mathcal{V}_{\mathrm{obs}}$.
\end{lemma}
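As an illustration (with a hypothetical latent tree and edge lengths of our own choosing), the sketch below computes the pairwise distances between four observed leaves and recovers the \ac{mst} with Kruskal's algorithm; its edge set coincides with the tree obtained by contracting each hidden node to its closest observed node.

```python
# Pairwise information distances between the observed leaves x1..x4 of a
# hypothetical latent tree with edges h1-x1 (1), h1-x3 (2), h1-h2 (1),
# h2-x2 (1), h2-x4 (2).
dist = {
    ("x1", "x2"): 3, ("x1", "x3"): 3, ("x1", "x4"): 4,
    ("x2", "x3"): 4, ("x2", "x4"): 3, ("x3", "x4"): 5,
}

def mst_kruskal(nodes, weights):
    """Kruskal's algorithm with a simple union-find over the node labels."""
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path compression
            v = parent[v]
        return v
    tree = set()
    for (u, v), _ in sorted(weights.items(), key=lambda kv: kv[1]):
        ru, rv = find(u), find(v)
        if ru != rv:          # adding (u, v) creates no cycle
            parent[ru] = rv
            tree.add((u, v))
    return tree

mst = mst_kruskal(["x1", "x2", "x3", "x4"], dist)
# Contracting h1 to its surrogate x1 and h2 to x2 gives exactly these edges:
print(sorted(mst))  # [('x1', 'x2'), ('x1', 'x3'), ('x2', 'x4')]
```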
\begin{definition}\label{def:surnode}
Given the latent tree $\mathbb{T}=(\mathcal{V},\mathcal{E})$ and any node $x_{i} \in \mathcal{V}$, the surrogate node \cite{choi2011learning} of $x_{i}$ is $\mathrm{Sg}(x_{i};\mathbb{T},\mathcal{V}_{\mathrm{obs}})=\mathop{\arg\min}_{x_{j}\in \mathcal{V}_{\mathrm{obs}}} \ \mathrm{d}(x_{i},x_{j}).$
\end{definition}
We introduce a new notion of distance that quantifies the sample complexity of \ac{rclrg}.
\begin{definition}\label{def:contdist}
Given the latent tree $\mathbb{T}=(\mathcal{V},\mathcal{E})$ and any node $x_{i}\in\mathcal{V}$, the \emph{contrastive distance} of $x_{i}$ with respect to
$\mathcal{V}_{\mathrm{obs}}$ is defined as
\begin{align}
\mathrm{d_{ct}}(x_{i};\mathbb{T},\mathcal{V}_{\mathrm{obs}})=\mathop{\min}_{x_{j}\in \mathcal{V}_{\mathrm{obs}}\setminus\{\mathrm{Sg}(x_{i};\mathbb{T},\mathcal{V}_{\mathrm{obs}})\}} \ \ \mathrm{d}(x_{i},x_{j})-\mathop{\min}_{x_{j}\in \mathcal{V}_{\mathrm{obs}}} \ \ \mathrm{d}(x_{i},x_{j}).
\end{align}
\end{definition}
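On a toy example (a hypothetical tree, not from the paper), the surrogate node and contrastive distance of a hidden node $h_1$ reduce to two one-liners:

```python
# Distances from a hidden node h1 to the observed nodes, in the hypothetical
# latent tree h1-x1 (1), h1-x3 (2), h1-h2 (1), h2-x2 (1), h2-x4 (2).
d_h1 = {"x1": 1, "x2": 2, "x3": 2, "x4": 3}

# Surrogate node: the closest observed node.
surrogate = min(d_h1, key=d_h1.get)

# Contrastive distance: closest distance excluding the surrogate,
# minus the closest distance overall.
d_ct = min(d for x, d in d_h1.items() if x != surrogate) - min(d_h1.values())

print(surrogate, d_ct)  # x1 1
```

A small `d_ct` means a second observed node is nearly as close as the surrogate, which is precisely when identifying the surrogate from noisy distance estimates becomes hard.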
Definitions \ref{def:surnode} and \ref{def:contdist} imply that the surrogate node $\mathrm{Sg}(x_{i};\mathbb{T},\mathcal{V}_{\mathrm{obs}})$ of any observed node $x_{i}$ is $x_{i}$ itself,
and its contrastive distance is the information distance between $x_{i}$ and its closest other observed node. It is shown in \cite{choi2011learning} that the Chow-Liu tree $\mathbb{T}_{\mathrm{CL}}$ is equal
to the tree obtained by contracting all the hidden nodes to their surrogate nodes, so it is difficult to identify the surrogate node of a node whose contrastive
distance is small. In this scenario, more accurate estimates of the information distances are required to construct the correct Chow-Liu tree.
\begin{proposition}\label{prop:corctmst}
The Chow-Liu tree $\mathrm{MST}(\mathcal{V}_{\mathrm{obs}};\mathbf{\hat{D}})$ is constructed correctly if
\begin{align}
\big|\hat{\mathrm{d}}(x_{i},x_{j})-\mathrm{d}(x_{i},x_{j})\big|< {\Delta_{\mathrm{MST}}}/{2} \quad \text{for all}\quad x_{i},x_{j}\in \mathcal{V}_{\mathrm{obs}},
\end{align}
where $\Delta_{\mathrm{MST}}:=\mathop{\min}_{x_{j}\in \mathrm{Int}(\mathbb{T})} \, \mathrm{d_{ct}}(x_{j};\mathbb{T},\mathcal{V}_{\mathrm{obs}})$.
\end{proposition}
Hence, the contrastive distance describes the difficulty of learning the correct Chow-Liu tree.
\begin{theorem}\label{theo:clrgsamplcomp}
With Assumptions \ref{assupleng}--\ref{assupdegree} and the conditions of Proposition \ref{prop:mono}, \ac{rclrg} constructs the correct latent tree with probability
at least $1-\eta$ if
\begin{align}
\!\!\!\!\! n_{2}\!=\!\Omega\bigg(\!\max\Big\{\frac{1}{\rho_{\min}^{2}}\!\big(\frac{9}{2}\big)^{2L_{\mathrm{C}}},\frac{1}{\Delta_{\mathrm{MST}}^{2}}\Big\}\frac{l_{\max}^{4}e^{2\rho_{\max}/l_{\max}}\kappa^{2}}{\delta_{\min}^{2/l_{\max}}}\log\frac{|\mathcal{V}_{\mathrm{obs}}|^{3}}{\eta}\! \bigg)\;\;\mbox{and}\;\; n_{1}\!=\!O\Big(\frac{\sqrt{n_{2}}}{\log n_{2}}\Big),\!\!
\end{align}
where $L_{\mathrm{C}}$ is the maximum number of iterations of \ac{rrg} (over each internal node of the constructed Chow-Liu tree) in \ac{rclrg} needed to construct the tree.
\end{theorem}
If we implement \ac{rclrg} with \emph{true} information distances, $L_{\mathrm{C}}\le \lceil \frac{1}{2}\mathrm{Deg}(\mathrm{MST}(\mathcal{V}_{\mathrm{obs}};\mathbf{\hat{D}})) - 1\rceil$. Theorem~\ref{theo:clrgsamplcomp}
indicates that the sample complexity of \ac{rclrg} grows exponentially in $L_{\mathrm{C}}\ll L_{\mathrm{R}}$.
Compared with
\cite[Theorem 12]{choi2011learning}, the sample complexity of \ac{rclrg} in Theorem \ref{theo:clrgsamplcomp} is applicable to a wide class of graphical models that satisfy Assumptions \ref{assupleng} to \ref{assupdist}, while \cite[Theorem 12]{choi2011learning} requires the assumption that the effective depths of latent trees are {\em bounded} in $|\mathcal{V}_{\mathrm{obs}}|$, which is rather restrictive.
\subsection{Comparison of robust latent tree learning algorithms}\label{subsec:comp}
Since the sample complexities of \ac{rrg}, \ac{rclrg}, \ac{rsnj} and \ac{rnj} depend on different parameters and different structures of the underlying graphs, it is instructive to compare the sample complexities of these algorithms on some representative tree structures. These trees are illustrated in Fig.~\ref{fig:comp}. \ac{rsnj} and \ac{rnj} are not able to identify the parent relationship among nodes, so they are only applicable to trees whose maximal degrees are no larger than $3$,
including the double-binary tree and the \ac{hmm}. In particular, \ac{rnj} and \ac{rsnj} are not applicable to the full $m$-tree $(\text{for }m\geq 3)$ or the double star.
Derivations and more detailed discussions of the sample complexities are deferred to Appendix~\ref{app:table}.
\begin{table}[H]
\scriptsize
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
\diagbox{Tree}{$n_{2}$}{Algorithm}&\ac{rrg}&\ac{rclrg}&\ac{rsnj}&\ac{rnj}\\
\hline
Double-binary tree&$ O\big(\psi(\frac{9}{2})^{\mathrm{Diam}(\mathbb{T})}\big)$&$O\big(\psi(\frac{9}{2})^{\frac{1}{2}\mathrm{Diam}(\mathbb{T})}\big)$&$O\big(e^{2t\rho_{\max}}\mathrm{Diam}(\mathbb{T})\big)$&$O\big(\psi\mathrm{Diam}(\mathbb{T})\big)$\\
\hline
\ac{hmm}&$O\big(\psi(\frac{9}{2})^{\mathrm{Diam}(\mathbb{T})}\big)$&$O\big(\psi\log\mathrm{Diam}(\mathbb{T})\big)$&$O\big(e^{2t\rho_{\max}}\log \mathrm{Diam}(\mathbb{T})\big)$&$O\big(\psi\log\mathrm{Diam}(\mathbb{T})\big)$\\
\hline
Full $m$-tree&$O\big(\psi(\frac{9}{2})^{\mathrm{Diam}(\mathbb{T})}\big) $&$O\big(\psi\mathrm{Diam}(\mathbb{T})\big)$& N.A.& N.A.\\
\hline
Double star&$O(\psi\log d_{\max})$&$O\big(\psi\log d_{\max}\big)$&N.A.&N.A.\\
\hline
\end{tabular}
\caption{The sample complexities of \ac{rrg}, \ac{rclrg}, \ac{rsnj} and \ac{rnj} on the double-binary tree, the \ac{hmm}, the full $m$-tree and the double star. We set $\psi:=e^{2\rho_{\max}/l_{\max}}$ and $t=O(l_{\max}^{-1}+\log |\mathcal{V}_{\mathrm{obs}}|)$.}
\label{table:samplecompare}
\end{table}
\subsection{Experimental results}\label{sec:simu}
We present simulation results to demonstrate the efficacy of the robustified algorithms. Samples are generated from a \ac{hmm} with $l_{\max}=3$ and $\mathrm{Diam}(\mathbb{T})=80$. The three conditions in Proposition \ref{prop:mono} are satisfied with $\alpha=1$. The Robinson-Foulds distance~\cite{robinson1981comparison} between the true and estimated trees is adopted to measure the performances of the algorithms. For the implementations of \ac{clrg} and \ac{rg}, we use the code from~\cite{choi2011learning}. Other settings and more extensive experiments are given in Appendix~\ref{app:num}.
We consider three corruption patterns here. (i) {\em Uniform corruptions} are independent additive noises in $[-2A,2A]$; (ii) {\em Constant magnitude corruptions} are also independent additive noises, each taking the value $-A$ or $+A$ with probability $0.5$. These two types of noise are distributed randomly in $\mathbf{X}_1^n$; (iii) {\em \ac{hmm} corruptions} are generated by a \ac{hmm} which has the same structure as the original \ac{hmm} but different parameters; they replace the entries in $\mathbf{X}_1^n$ with samples generated by the variables in the same positions. In our simulations, $A$ is set to $60$, and the number of corruptions $n_{1}$ is $100$.
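For concreteness, the first two corruption patterns can be generated as follows (a sketch with variable names of our own choosing; the \ac{hmm} corruptions would additionally require simulating a second \ac{hmm} and are omitted here).

```python
import numpy as np

rng = np.random.default_rng(0)
A, n1, l_max = 60.0, 100, 3   # magnitude, number of corruptions, vector dimension

# (i) Uniform corruptions: independent additive noise in [-2A, 2A].
uniform_noise = rng.uniform(-2 * A, 2 * A, size=(n1, l_max))
assert np.all(np.abs(uniform_noise) <= 2 * A)

# (ii) Constant magnitude corruptions: each entry is -A or +A, each with
# probability 0.5.
const_noise = rng.choice([-A, A], size=(n1, l_max))
assert set(np.unique(const_noise)) <= {-A, A}
```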
\begin{figure}[H]
\centering
\subfigure[Uniform corruptions]{
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=1.9in]{hmm_indeplarge1_rf.eps}
\end{minipage}%
}%
\subfigure[Constant magnitude corruptions]{
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=1.9in]{hmm_const1_rf.eps}
\end{minipage}%
}%
\subfigure[\ac{hmm} corruptions]{
\begin{minipage}[t]{0.33\linewidth}
\centering
\includegraphics[width=1.9in]{hmm_hmm2_rf.eps}
\end{minipage}
}%
\centering
\caption{Robinson-Foulds distances of robustified and original algorithms averaged over $100$ trials}
\label{fig:simu}
\end{figure}
Fig.~\ref{fig:simu} (error bars are in Appendix~\ref{app:stddev}) demonstrates the superiority of \ac{rclrg} in learning \ac{hmm}s compared to the other algorithms.
The robustified algorithms also result in smaller estimation errors (Robinson-Foulds distances)
than their unrobustified counterparts in the presence of corruptions.
\section{Impossibility result}\label{subsec:converse}
\begin{definition}
Given a triple $(|\mathcal{V}_{\mathrm{obs}}|,\rho_{\max},l_{\max})$, the set $\mathcal{T}(|\mathcal{V}_{\mathrm{obs}}|,\rho_{\max},l_{\max})$ consists of all multivariate Gaussian distributions $\mathcal{N}(\mathbf{0},\mathbf{\Sigma})$ such that: (1) The
underlying graph $\mathbb{T}=(\mathcal{V},\mathcal{E})$ is a tree $\mathbb{T}\in \mathcal{T}_{\geq 3}$, and the size of the set of observed nodes is $|\mathcal{V}_{\mathrm{obs}}|$. (2) The distribution $\mathcal{N}(\mathbf{0},\mathbf{\Sigma})$ satisfies Assumptions~\ref{assupleng} and~\ref{assupdist}
with parameters $l_{\max}$ and $\rho_{\max}$.
\end{definition}
For the given class of graphical models $\mathcal{T}(|\mathcal{V}_{\mathrm{obs}}|,\rho_{\max},l_{\max})$, nature chooses some parameter $\theta=\mathbf{\Sigma}$ and generates $n$ i.i.d.\ samples $\mathbf{X}_{1}^{n}$ from $\mathbb{P}_{\theta}$. The
goal of the statistician is to use the observations $\mathbf{X}_{1}^{n}$ to learn the underlying graph $\mathbb{T}$, which entails the design of a decoder $\phi:\mathbb{R}^{n|\mathcal{V}_{\mathrm{obs}}|l_{\max} } \rightarrow \mathcal{T}_{\geq |\mathcal{V}_{\mathrm{obs}}|}$,
where $\mathcal{T}_{\geq |\mathcal{V}_{\mathrm{obs}}|}$ is the set of trees whose size of the node set is at least $|\mathcal{V}_{\mathrm{obs}}|$.
\begin{theorem}\label{theo:converse}
Consider the class of graphical models $\mathcal{T}(|\mathcal{V}_{\mathrm{obs}}|,\rho_{\max},l_{\max})$, where $|\mathcal{V}_{\mathrm{obs}}|\geq 3$. If there exists a graph decoder $\phi$ that learns from $n$ i.i.d.\ samples such that
\begin{align}
\max_{\theta(\mathbb{T})\in \mathcal{T}(|\mathcal{V}_{\mathrm{obs}}|,\rho_{\max},l_{\max})} \mathbb{P}_{\theta(\mathbb{T})}(\phi(\mathbf{X}_{1}^{n})\neq \mathbb{T})< \delta,
\end{align}
then (as $\rho_{\max}\rightarrow \infty$ and $|\mathcal{V}_{\mathrm{obs}}|\rightarrow \infty$),
\begin{align}
n=\max\big\{\Omega\big((1-\delta)e^{\frac{\rho_{\max}}{\lfloor\log_{3} |\mathcal{V}_{\mathrm{obs}}| \rfloor l_{\max}}} \log |\mathcal{V}_{\mathrm{obs}}|\big),\Omega\big((1-\delta)e^{\frac{2\rho_{\max}}{3l_{\max}}}\big)\big\}. \label{eqn:n_bound}
\end{align}
\end{theorem}
Theorem~\ref{theo:converse} implies that the optimal sample complexity grows as $\Omega (\log|\mathcal{V}_{\mathrm{obs}}|)$ as $|\mathcal{V}_{\mathrm{obs}}|$ grows. Table~\ref{table:samplecompare} indicates that the sample complexity of \ac{rclrg} when the underlying latent tree is a full $m$-tree (for $m\geq3$) or a \ac{hmm} is optimal in its dependence on $|\mathcal{V}_{\mathrm{obs}}|$. The sample complexity of \ac{rnj} is also optimal in $|\mathcal{V}_{\mathrm{obs}}|$ for double-binary trees and \ac{hmm}s. In contrast, the derived sample complexities of \ac{rrg} and \ac{rsnj} are suboptimal in relation to Theorem~\ref{theo:converse}. One caveat, however, is that our analyses of the latent tree learning algorithms in Section~\ref{sec:robustifying} are not claimed to be tight for each given algorithm; there may be room for improvement.
When the maximum information distance $\rho_{\max}$ grows, Theorem~\ref{theo:converse} indicates that the optimal sample
complexity grows as $\Omega (e^{\frac{2\rho_{\max}}{3l_{\max}}})$. Table \ref{table:samplecompare} shows that the sample complexities of \ac{rrg}, \ac{rclrg} and \ac{rnj} grow as $O(e^{2\rho_{\max}/l_{\max}})$, which matches the dependence in the impossibility result up to a constant in the exponent. However, the sample complexity of \ac{rsnj} grows as $O\big(e^{2t\rho_{\max}}\big)$,
which is larger (looser) than that prescribed by Theorem~\ref{theo:converse}.
\bibliographystyle{unsrt}
package com.mehmetakiftutuncu.muezzinapi.controllers
import com.github.mehmetakiftutuncu.errors.{CommonError, Errors}
import com.mehmetakiftutuncu.muezzinapi.models._
import com.mehmetakiftutuncu.muezzinapi.services._
import com.mehmetakiftutuncu.muezzinapi.utilities.{ControllerExtras, DateFormatter}
import javax.inject.{Inject, Singleton}
import play.api.libs.json.{JsObject, Json}
import play.api.mvc._
import scala.concurrent.ExecutionContext.Implicits.global
@Singleton
class PrayerTimesController @Inject()(ControllerComponents: ControllerComponents,
CountryService: AbstractCountryService,
CityService: AbstractCityService,
DistrictService: AbstractDistrictService,
PrayerTimesService: AbstractPrayerTimesService,
DateFormatter: DateFormatter) extends AbstractController(ControllerComponents) with ControllerExtras {
  /** Resolves and validates the country, city and optional district in turn,
    * then returns the prayer times for the resulting place as a JSON object. */
  def getPrayerTimes(countryId: Int, cityId: Int, districtId: Option[Int]): Action[AnyContent] = Action.async {
val log: String = s"""Failed to get prayer times for country "$countryId", city "$cityId" and district "$districtId"!"""
CountryService.getCountries.flatMap {
case Left(countryErrors: Errors) =>
futureFailWithErrors(log, countryErrors)
case Right(countries: List[Country]) =>
val countryAsOpt: Option[Country] = countries.find(_.id == countryId)
if (countryAsOpt.isEmpty) {
futureFailWithErrors(log, Errors(CommonError.notFound.data(countryId.toString)))
} else {
CityService.getCities(countryId).flatMap {
case Left(cityErrors: Errors) =>
futureFailWithErrors(log, cityErrors)
case Right(cities: List[City]) =>
val cityAsOpt: Option[City] = cities.find(_.id == cityId)
if (cityAsOpt.isEmpty) {
futureFailWithErrors(log, Errors(CommonError.notFound.data(cityId.toString)))
} else {
DistrictService.getDistricts(countryId, cityId).flatMap {
case Left(districtErrors: Errors) =>
futureFailWithErrors(log, districtErrors)
case Right(districts: List[District]) =>
val districtAsOpt: Option[District] = districtId.flatMap(did => districts.find(_.id == did))
if (districtId.isDefined && districtAsOpt.isEmpty) {
futureFailWithErrors(log, Errors(CommonError.invalidRequest.reason(s"""City "$cityId" doesn't have district "${districtId.get}"!""")))
} else if (districtId.isEmpty && districts.nonEmpty) {
futureFailWithErrors(log, Errors(CommonError.invalidRequest.reason(s"""District id is not provided but city "$cityId" has districts available!""")))
} else {
val place: Place = Place(countryId, Some(cityId), districtId)
PrayerTimesService.getPrayerTimes(place).map {
case Left(prayerTimesErrors: Errors) =>
failWithErrors(log, prayerTimesErrors)
case Right(prayerTimes: List[PrayerTimesOfDay]) =>
val result: JsObject = Json.obj(
"prayerTimes" -> prayerTimes.foldLeft(Json.obj())(_ ++ _.toJson(DateFormatter.dateFormatter))
)
success(result)
}
}
}
}
}
}
}
}
}
Take advantage of being able to completely customize your prefab. Learn how to help your modular house stand out from the crowd and feel like home.
It cannot be overstated how completely a front porch will change the overall look of your modular home. Some home styles – most notably the colonial style – seem incomplete without them, but adding a porch can give your home an inviting feeling while also adding extra living space.
Dormers are an inexpensive way to add character to your home by changing the silhouette of your house and adding some variety and character to your roof. Dormers can be functional, aesthetic, or both. Sometimes they are added on to the roof and change the look of the exterior without actually opening up into the house. Other times the roof will be cut away where the dormer is attached and they can provide lots of natural light to a room.
Some exterior aesthetic changes are much less expensive when they're made to a modular home – attractive roofing is one of them. For a site-built home it might require a specialty roofer to purchase and install expensive tiles, slate, or ceramic roofing, but most modular manufacturers take advantage of their bulk purchasing power and can get you great deals on beautiful roofing materials.
The front door of a home can say a lot about the people who live there. Having a unique or decorative front door can be an inexpensive way to set your modular home apart from the other houses around you. Remember though that a front door is primarily functional, not decorative. Too much glass can pose a safety risk and if the door is too thin or doesn't fit well in its frame, your house could be wasting a lot of energy.
5 – Don't Forget The Paint!
Never underestimate how much a good paint job can change the look and feel of a house. For many people, paint is an afterthought, but it not only has a major effect on how your house looks, but the color of your home can actually influence your mood on a daily basis. After a long day at work, coming home and seeing welcoming colors that you love can make a subtle but substantial difference.
Perhaps the easiest and least expensive change to make to your modular home is what sort of windows you want to have. Most manufacturers will give you an extensive list of choices, even with their standard plans. Having windows of different shapes and sizes on the front of your house can help the house look more interesting, but some modular home styles, like the Cape Cod style home, work better with very symmetrical looking fronts.
If you are building on a large enough plot of land, consider setting your house farther back from the street than you might otherwise do. Even a difference of a few yards can give you a sense of privacy and quiet that you might not otherwise get on a busy street. Always check your local zoning regulations; sometimes neighborhoods mandate exactly where a house can be placed.
Q: Numpy.array reshape from multiple brackets to 2 brackets I have some data which I need in a one-dimensional numpy.array, but for some reason I don't get it in the right format. My biggest problem is that I don't really know what to look for.
My data is in a form like this:
yTrue
[[27.23]
[26.38]
[26.19]
[26.21]
[26.24]
[27.47]
[37.85]
[53.35]]
but in order to do my calculations I need the data stored as a one-dimensional array, so (if I understand correctly) it has to look like this:
Ypred
[26.63003973 26.34320268 26.05945521 25.77876403 25.50109623 25.22641923]
type() tells me that both variables are the same class: <class 'numpy.ndarray'>
A: I think you're looking for the .flat attribute of your array. If that isn't quite what you're looking for, take a look at this question for other ideas.
A: You have a (n,1) shape array, like:
In [39]: arr = np.random.rand(5,1)*100
In [40]: arr
Out[40]:
array([[39.12922352],
[66.79745338],
[51.97361542],
[97.60386022],
[85.89486218]])
there are many ways to reshape it to (n,), 1d:
In [41]: arr.ravel()
Out[41]: array([39.12922352, 66.79745338, 51.97361542, 97.60386022, 85.89486218])
In [42]: arr.reshape(5)
Out[42]: array([39.12922352, 66.79745338, 51.97361542, 97.60386022, 85.89486218])
In [43]: arr.reshape(-1)
Out[43]: array([39.12922352, 66.79745338, 51.97361542, 97.60386022, 85.89486218])
In [44]: arr.flatten()
Out[44]: array([39.12922352, 66.79745338, 51.97361542, 97.60386022, 85.89486218])
In [45]: arr[:,0]
Out[45]: array([39.12922352, 66.79745338, 51.97361542, 97.60386022, 85.89486218])
Take your pick, read their docs, experiment.
What you show is the str representation:
In [46]: print(arr)
[[39.12922352]
[66.79745338]
[51.97361542]
[97.60386022]
[85.89486218]]
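A practical note to add to the above: ravel() returns a view of the original data whenever it can, so writing to the result also changes the original array, while flatten() always returns an independent copy.

```python
import numpy as np

arr = np.array([[27.23], [26.38], [26.19]])

v = arr.ravel()    # usually a view on the same underlying data
c = arr.flatten()  # always an independent copy

v[0] = 0.0
print(arr[0, 0])   # 0.0 -> modifying the ravel result changed arr
print(c[0])        # 27.23 -> the flatten copy is unaffected
```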
A: Thanks for the help. It worked out. My real problem was that I didn't understand the differences between the arrays at the time.
Thanks
\section{Introduction}
A square matrix whose diagonal entries are all zero, is sometimes called a
\emph{hollow matrix}, e.g.\ \cite{CharFarb13, FarbJohn15, KuraBapa16,
NeveBast18}. By a theorem of
Fillmore \cite{Fill69}, which is closely related to older results of
Horn and Schur \cite{Schu23, Horn54}, every real square zero-trace matrix is orthogonally similar to a hollow
matrix. Taken with a pinch of salt, the structure of a hollow matrix can be viewed as
the negative of the spectral normal form (e.g.\ of a symmetric matrix), where the zeros are
placed outside the diagonal. While the spectral form reveals
an orthogonal basis of eigenvectors, a hollow form reveals an
orthogonal basis of neutral vectors, i.e.\ vectors for which the
quadratic form associated to the matrix vanishes.\\
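As a minimal numerical illustration of Fillmore's theorem in the $2\times 2$ symmetric case (our own sketch, with arbitrarily chosen entries): for $A=\left(\begin{smallmatrix}a&b\\ b&-a\end{smallmatrix}\right)$, rotating by $\theta=\frac{1}{2}\arctan(a/b)$ zeroes the first diagonal entry, since $(GAG^{\mathrm{T}})_{11}=a\cos 2\theta-b\sin 2\theta$, and the zero trace forces the second diagonal entry to vanish as well.

```python
import math

# Trace-zero symmetric 2x2 matrix A = [[a, b], [b, -a]]; entries are arbitrary.
a, b = 3.0, 4.0
A = [[a, b], [b, -a]]

# Choose the rotation angle so that (G A G^T)_{11} = a*cos(2t) - b*sin(2t) = 0.
t = 0.5 * math.atan2(a, b)
c, s = math.cos(t), math.sin(t)
G = [[c, -s], [s, c]]                      # orthogonal (a plane rotation)

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

GT = [[G[j][i] for j in range(2)] for i in range(2)]
H = matmul(matmul(G, A), GT)               # orthogonally similar to A

# H is hollow; its off-diagonal entry is sqrt(a^2 + b^2) = 5 here.
print(abs(H[0][0]) < 1e-12, abs(H[1][1]) < 1e-12)  # True True
```

The columns of $G^{\mathrm{T}}$ form the orthogonal basis of neutral vectors that the hollow form reveals.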
This property turns out
to be relevant in asymptotic eigenvalue considerations. More concretely,
we use it to extend and give new proofs for results on stabilization
of linear systems by rotational forces or by noise. Since the
pioneering work \cite{ArnoCrau83} these phenomena have received
ongoing attention, with current interest e.g.\ in stochastic partial differential
equations or Hamiltonian systems, \cite{SrinLali00, CaraRobi04, KolbConi19}. Our new contribution concerns simultaneous stabilization by
noise and features a new method of proof, which relies on
an orthogonal transformation of matrices to hollow form.\\
It is easy to see that -- in contrast to
the spectral transformation -- the
transformation to hollow form leaves considerable freedom to impose further properties.
In the present note, we first show that it is
possible to transform two zero-trace matrices simultaneously to an almost hollow
form, as will be specified in Section \ref{sec:Hollow}. In a non-constructive
manner, the proof can be based on
Brickman's theorem \cite{Bric61} that the real joint numerical range of two real
matrices is convex. But to make the transformation computable, we
provide a different proof, which is fully constructive. As a side result,
this also leads to a new derivation of Brickman's theorem.
Moreover, the simultaneous transformation result allows us to prove a stronger version of
Fillmore's theorem, namely that every real square zero-trace matrix is
orthogonal-symplectically similar to a hollow matrix. \\
We mainly treat the real case, because it is slightly more
involved than its complex counterpart. Complex versions of our results can be obtained easily and
are stated in Section \ref{sec:complex-case}. It turns out that any
pair of Hermitian zero-trace matrices is unitarily similar to a hollow
pair (not just an almost hollow pair as in the real case).
In \cite{NeveBast18} the term \emph{simultaneous unitary
hollowisation} is used for such a transformation, and it is put in the context of a quantum
separability problem. The authors show that a certain quantum state is
separable if and only if an associated set of matrices is simultaneously unitarily
hollowisable. This is a non-trivial restriction if there are more
than two matrices.
For an arbitrary triple of Hermitian matrices, however, we can show
that it is unitarily similar to an almost hollow
triple, i.e.\ \emph{almost hollowisable}, so to speak. We see this as a first
step towards criteria for larger sets of matrices to be simultaneously
unitarily (almost) hollowisable. Thus, to the best of our knowledge, the current note is the first to treat hollowisation
problems from the matrix theoretic side. \\
All our results are constructive and can be implemented in a
straightforward way. Computational aspects of the real transformations are discussed in Section
\ref{sec:comp_aspects}. The orthogonal symplectic transformation of
$4\times 4$-matrices requires detailed explicit calculations which
have been shifted to the appendix.
We show analytically that for $n\times n$-matrices the
computational cost of our hollowising transformations is $O(n^2)$ and report on numerical
experiments. \\
In Section \ref{sec:appl_stab}, we present the
applications of our results in stabilization theory. We show that a
number of linear dissipative systems can be stabilized
simultaneously by the same stochastic noise process, provided the
coefficient matrices can be made almost hollow simultaneously by an
orthogonal transformation. The results are illustrated by numerical examples.
\section{Hollow matrices and orthogonal transformations}
\label{sec:Hollow}
We first review some known facts on hollow matrices and then present our
main results.
\begin{definition}
Let $A=(a_{ij})\in \mathbb{R}^{n\times n}$.
\begin{enumerate}
\item[(i)] We call $A$ \emph{hollow}, if $a_{ii}=0$ for all
$i=1,\ldots,n$. \item[(ii)] We call $A$ \emph{almost hollow}, if $a_{ii}=0$
for $i=1,\ldots,n-2$ and $a_{n-1,n-1}=-a_{nn}$. \item[(iii)] If $\tr A=0$, then
$A$ is called a \emph{zero trace} matrix.
\end{enumerate}
\end{definition}
Obviously, every hollow matrix is also almost hollow, and every almost
hollow matrix is zero trace. Vice versa, $\tr A=0$ implies that $A$ is orthogonally
similar to a hollow matrix. This result was proven by Fillmore
\cite{Fill69}. We include a proof, because similar arguments will be used in
the later discussion.
\begin{lemma}\label{lemma:fillmore} Let $A\in\mathbb{R}^{n\times n}$ with $\tr A=0$.
\begin{itemize}
\item[(a)] There exists a vector $v\in\mathbb{R}^n$ with $v\neq 0$,
such that $v^TAv=0$.
\item[(b)] There exists an orthogonal matrix $V\in\mathbb{R}^{n\times
n}$, such that $V^TAV$ is hollow.
\end{itemize}
\end{lemma}
\bf Proof: \nopagebreak \rm
(a) If $a_{11}=0$, then we can choose $v=e_1$. Otherwise let (after possibly
dividing $A$ by $a_{11}$) w.l.o.g.\
$a_{11}=1$. Since $\tr A=0$, there exists $j\in\{2,\ldots,n\}$ with
$a_{jj}<0$. For $v=xe_1+e_j$ with $x\in\mathbb{R}$, we have
\begin{align*}
v^TAv&=x^2+(a_{1j}+a_{j1})x+a_{jj}\;,
\end{align*}
a quadratic in $x$ whose discriminant $(a_{1j}+a_{j1})^2-4a_{jj}$ is
positive because $a_{jj}<0$, so it has two real zeros.
Hence (a) follows.\\
(b) Extend $v_1=v/\|v\|$, with $v$ from (a), to an orthonormal matrix
$V_1=[v_1,\ldots,v_n]$. Then $V_1^TAV_1=\left[
\begin{array}{cc}
0&\star\\\star&A_1
\end{array}
\right]$ with $A_1\in\mathbb{R}^{(n-1)\times (n-1)}$ and $\tr A_1=\tr A=0$.
Therefore we can proceed with $A_1$ as with $A$; the matrix $V$ is the product of the
resulting transformations.
\eprf
\begin{corollary}\label{cor:fillmore}
For $A\in\mathbb{R}^{n\times n}$, there exists an orthogonal matrix $V\in\mathbb{R}^{n\times
n}$, such that all diagonal entries of $V^TAV$ are equal.
\end{corollary}
\bf Proof: \nopagebreak \rm
We set $A_0=A-\frac{\tr A}n I$. By Lemma
\ref{lemma:fillmore} there exists an orthogonal matrix $V$ such that
$V^TA_0V$ is hollow. Then $V^TAV=V^TA_0V+\frac{\tr A}n I$.
\eprf
\begin{remark}\label{rem:hollow}
\begin{enumerate}
\item[(a)] A transformation matrix $V$ making $V^TAV$ hollow as in Lemma \ref{lemma:fillmore}
will sometimes be called an \emph{(orthogonal) hollowiser (for $A$)}.
\item[(b)] As is evident from the construction, the hollowiser
$V$ is not unique.
In the following
we will exploit this freedom to transform two matrices
simultaneously or to replace $V$ by an orthogonal symplectic
matrix.
\item[(c)] Since $V^TAV$ is hollow, if and only if $V^T(A+A^T)V$ is
hollow, there is no restriction in considering only symmetric
matrices.
\item[(d)] We are mainly interested in the real case, but it is
immediate to transfer our results to the complex
case, where $A\in\mathbb{C}^{n\times n}$ and $V$ is unitary. This
is sketched in subsection \ref{sec:complex-case}.
\end{enumerate}
\end{remark}
\subsection{Simultaneous transformation of two matrices}
\label{sec:simult-transf-two}
Simultaneous transformation of several matrices to a certain form (e.g.\ spectral
form) usually requires quite restrictive assumptions. Therefore it is
remarkable that an arbitrary pair of zero trace matrices can simultaneously be
transformed to an almost hollow pair. The precise statement is given
in the following result.
\begin{proposition}\label{prop:fillmore_simultan}
Consider $A,B\in\mathbb{R}^{n\times n}$ with
$\tr A=\tr B=0$.
\begin{itemize}
\item[(a)] If $n\ge 3$, there exists a nonzero vector $v\in\mathbb{R}^n$, such that $v^TAv=v^TBv=0$.
\item[(b)] There exists an orthogonal matrix $V\in\mathbb{R}^{n\times
n}$ such that $V^TAV$ is hollow and $V^TBV$ is almost hollow.
\end{itemize}
\end{proposition}
\bf Proof: \nopagebreak \rm
(b): We first note that (b) follows easily from (a): the orthogonal transformation $V$ is obtained by applying (a)
repeatedly as in the proof of Lemma \ref{lemma:fillmore}(b) until
the remaining submatrix is smaller than $3\times 3$; for this remaining block, Lemma \ref{lemma:fillmore} is applied to $A$ alone. \\
For (a) we provide two different proofs. The first is quite short, but
not constructive. It exploits Brickman's theorem \cite{Bric61} on the
convexity of the joint real numerical
range of two matrices, see Theorem~\ref{lemma:realJNR_convex} below. The second is constructive, but considerably
longer. It is the basis for our algorithmic approach.
\emph{short proof} of (a):
By Lemma \ref{lemma:fillmore}, we can assume w.l.o.g.\ that $A$ is
hollow. If $b_{jj}=0$ for some $j$, then we
can choose $v=e_j$. Otherwise, since $\tr B=0$, not all the signs
of the $b_{jj}$ are equal. For simplicity of notation assume that
$b_{11}>0$ and $b_{22}<0$. The points
$(e_1^TAe_1,e_1^TBe_1)=(0,b_{11})$ and $(e_2^TAe_2,e_2^TBe_2)
=(0,b_{22})$ lie in the joint real numerical range of $A$ and $B$,
defined as
\begin{align*}
W(A,B)&= \{(x^TAx,x^TBx)\;\big|\; x\in\mathbb{R}^n, \|x\|=1\}\subset\mathbb{R}^2\;.
\end{align*}
According to Theorem~\ref{lemma:realJNR_convex}
the set $W(A,B)$ is convex for $n\ge3$. Hence it also
contains $(0,0)=(v^TAv,v^TBv)$ for some unit vector $v\in\mathbb{R}^n$.
\emph{constructive proof} of (a): By Remark \ref{rem:hollow}, we can assume that $A$ and $B$ are
symmetric, and by Lemma \ref{lemma:fillmore}, we can assume w.l.o.g.\ that $A$ is
hollow. If $b_{jj}=0$ for some $j$, then we
can choose $v=e_j$. For the remaining discussion let $b_{jj}\neq 0$ for all $j$.
Since $\tr B=0$, not all the signs of the $b_{jj}$ are equal. After possible
permutation and division of $B$ by one of the diagonal entries, we
can assume that the left upper $3\times 3$ blocks of $A$ and $B$ are
\begin{align}\label{eq:A3B3}
A_3&=\frac12\left[\begin{array}{ccc}
0&a&b\\a&0&c\\b&c&0
\end{array}
\right]\;,\quad B_3=\frac12\left[\begin{array}{ccc}
2d_-&\alpha&\beta\\\alpha&2d_+&\gamma\\\beta&\gamma&2
\end{array}
\right]\,\quad \text{ with } d_-<0, d_+>0\;.
\end{align}
If possible, we try to find $v_3=\left[
\begin{smallmatrix}
1\\x\\y
\end{smallmatrix}
\right]$ with $x,y\in\mathbb{R}$, such that
$v_3^TA_3v_3=v_3^TB_3v_3=0$. This leads to the conditions
\begin{align}\label{eq:vAv}
0&=v_3^TA_3v_3=ax+by+cxy=ax+(b+cx)y\\
0&=v_3^TB_3v_3=d_-+\alpha x+\beta y+\gamma xy+d_+x^2+y^2\;.
\label{eq:vBv}
\end{align}
We distinguish a number of cases.\\
{\bf\boldmath Case $a=0$ or $b=0$:}
If $a=0$, then \eqref{eq:vAv} holds with $y=0$ and \eqref{eq:vBv}
reduces to $0=d_-+\alpha x+d_+x^2$, which has a real solution $x$,
because $d_-<0$, $d_+>0$. Analogously, if $b=0$ then \eqref{eq:vAv}
holds with $x=0$ and \eqref{eq:vBv} again has a real solution. \\
{\bf\boldmath Case $a\neq0$, $b\neq0$, and $c\neq 0$:}
From now on let $a\neq 0$ and $b\neq 0$.
If equation \eqref{eq:vAv} holds with $b+cx=0$, then also $ax=0$,
i.e.\ $a=0$ or $x=0$, where the latter implies $b=0$ and thus both
cases contradict our assumption. Therefore we can exclude the case
$b+cx=0$ and solve for $y=-\frac{ax}{b+cx}$. Inserting this in \eqref{eq:vBv} yields
\begin{align*}
0&=d_-+\alpha x- \frac{\beta ax}{b+cx}-\frac{\gamma ax^2}{b+cx}+d_+x^2+\frac{a^2x^2}{(b+cx)^2}\;.
\end{align*}
If we multiply the equation with $(b+cx)^2$ and consider only the
coefficients at $x^0$ and $x^4$, we have
\begin{align}\label{eq:quartic}
0&=d_-b^2+\ldots+d_+c^2x^4\;.
\end{align}
If $c\neq 0$, then $d_+c^2>0$ and $d_-b^2<0$ imply the existence of a
real root $x$. \\
{\bf\boldmath Case $a\neq0$, $b\neq0$, and $c= 0$:}
The final case to be considered is $c=0$. Now \eqref{eq:vAv} gives
$y=-\frac{a}bx$, which inserted in \eqref{eq:vBv} leads to
\begin{align*}
0&= d_-+\alpha x-\beta \frac{a}bx+\left(-\gamma \frac{a}b+d_++\frac{a^2}{b^2}\right)x^2\;.
\end{align*}
Because $d_-<0$, the existence of a real root $x$ is guaranteed, if
$-\gamma \frac{a}b+d_++\frac{a^2}{b^2}>0$.
On the other hand, note that for any $\tilde v_3=\left[
\begin{smallmatrix}
0\\x\\y
\end{smallmatrix}
\right]$, we have $\tilde v_3^T A_3\tilde v_3=0$. If moreover the
submatrix $\left[
\begin{smallmatrix}
2d_+&\gamma\\\gamma&2
\end{smallmatrix}
\right]$ is not positive definite, i.e.\
$\gamma^2\ge 4d_+$,
then there exists a
nonzero $\tilde v_3$, satisfying $\tilde v_3^T B_3\tilde v_3=0$.
To conclude the proof, it suffices to note that the inequalities
$-\gamma \frac{a}b+d_++\frac{a^2}{b^2}\le 0$
and $\gamma^2< 4d_+$ contradict each other via
\begin{align*}
0&\ge d_+-\gamma
\frac{a}b+\frac{a^2}{b^2} > \frac{\gamma^2}4-\gamma
\frac{a}b+\frac{a^2}{b^2}
=\left(\frac\gamma2-\frac{a}b\right)^2\ge 0\;.
\end{align*}
The desired vector $v$ is now given by $v=\left[
\begin{smallmatrix}
v_3\\0
\end{smallmatrix}
\right]$, or $v=\left[
\begin{smallmatrix}
\tilde v_3\\0
\end{smallmatrix}
\right]$, respectively.
\eprf
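The generic case of the constructive proof (all of $a,b,c\neq 0$ and $d_-<0<d_+$) can be coded in a few lines. The following numpy sketch (the function name and interface are ours, not from the text) forms the quartic \eqref{eq:quartic} explicitly and extracts a real root; the degenerate cases treated in the proof are omitted.

```python
import numpy as np

def common_neutral_vector(A3, B3):
    """Generic case of the constructive proof of
    Proposition (fillmore_simultan)(a):
    A3 = (1/2)[[0,a,b],[a,0,c],[b,c,0]] with a, b, c != 0, and
    B3 = (1/2)[[2dm,al,be],[al,2dp,ga],[be,ga,2]] with dm < 0 < dp.
    Returns v in R^3 with v^T A3 v = v^T B3 v = 0."""
    a, b, c = 2 * A3[0, 1], 2 * A3[0, 2], 2 * A3[1, 2]
    dm, dp = B3[0, 0], B3[1, 1]
    al, be, ga = 2 * B3[0, 1], 2 * B3[0, 2], 2 * B3[1, 2]
    # Multiplying the substituted equation by (b+cx)^2 gives the quartic
    # (dm + al*x + dp*x^2)(b+cx)^2 - a*x*(be+ga*x)(b+cx) + a^2*x^2 = 0.
    lin = np.array([c, b])               # b + c*x, coefficients high -> low
    p = np.polymul([dp, al, dm], np.polymul(lin, lin))
    p = np.polyadd(p, np.polymul([-a * ga, -a * be, 0.0], lin))
    p = np.polyadd(p, [a * a, 0.0, 0.0])
    # dm*b^2 < 0 and dp*c^2 > 0 guarantee a real root.
    x = min(np.roots(p), key=lambda r: abs(r.imag)).real
    y = -a * x / (b + c * x)
    return np.array([1.0, x, y])

# example instance with the required sign pattern
A3 = 0.5 * np.array([[0.0, 0.8, -1.1], [0.8, 0.0, 0.5], [-1.1, 0.5, 0.0]])
B3 = 0.5 * np.array([[-2.6, 0.2, -0.4], [0.2, 1.4, 0.9], [-0.4, 0.9, 2.0]])
v = common_neutral_vector(A3, B3)
```

Both quadratic forms vanish at the returned $v$ up to rounding errors of the root finder.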
\begin{remark}\rm
The assumption in Proposition \ref{prop:fillmore_simultan}(a) that $n\ge
3$ is essential. As a standard example (e.g.\ \cite{Bric61}), consider the two symmetric matrices $A=\left[
\begin{array}{cc}
1&0\\0&-1
\end{array}
\right]$ and $B=\left[
\begin{array}{cc}
0&1\\1&0
\end{array}
\right]$ with $\tr A=\tr B=0$. For $v= \left[
\begin{smallmatrix}
x\\y
\end{smallmatrix}
\right]$, we have $v^TAv=x^2-y^2$ and $v^TBv=2xy$. If both forms are
zero, then necessarily $x=y=0$. Therefore, in general, a pair of symmetric
matrices with zero trace is not simultaneously orthogonally similar
to a pair of hollow matrices.
\end{remark}
\subsection{A constructive proof of Brickman's theorem}
\label{sec:constr-proof-brickm}
The following theorem was used in the short proof of Proposition \ref{prop:fillmore_simultan}(a). It was derived in
\cite{Bric61, Bind85} by topological methods. More elementary
approaches using only connectivity properties of quadrics in
$\mathbb{R}^3$ were given in \cite{Yaku71, Pepi04, Mart05} and
surveyed e.g.\ in \cite{Poly98, PoliTerl07}.
Below, we provide yet another
derivation, which exploits the $3\times 3$ case discussed in the
constructive proof. While our approach might not be as elegant as some of the previous proofs, it easily lends itself to
computational purposes.
\begin{theorem}[Brickman \cite{Bric61}]\label{lemma:realJNR_convex}
Let $A,B\in\mathbb{R}^{n\times n}$ with $n\ge 3$. Then the set
\begin{align*}
W(A,B)&= \{(x^TAx,x^TBx)\;\big|\; x\in\mathbb{R}^n, \|x\|=1\}
\end{align*}
is convex.
\end{theorem}
\bf Proof: \nopagebreak \rm
Consider two linearly independent unit vectors $u,v\in\mathbb{R}^n$ and
set
\begin{align*}
a=(a_1,a_2)=(u^TAu,u^TBu), \;b=(b_1,b_2)=(v^TAv,v^TBv)\;.
\end{align*}
For $0<t<1$ let
$c= (c_1,c_2)=(1-t)a+tb$. We have to show that $c\in W(A,B)$,
i.e.\
there exists a unit vector $x\in\mathbb{R}^n$, satisfying
$(x^TAx,x^TBx)=c$.\\
If $u^TAu=v^TAv$, then \emph{either}
$[u,v]^TA[u,v]=c_1I_2$,
and we can choose $x\in\spann\{u,v\}$ ---
\emph{or} $[u,v]^TA[u,v]-c_1I_2$ is indefinite, in which case there exist
$z_\pm\in\spann\{u,v\}$ with $\|z_\pm\|=1$ such that $z_+^TAz_+>c_1$,
$z_-^TAz_-<c_1$. If $u^TAu\neq v^TAv$, then we can trivially choose
$z_\pm\in\{u,v\}$ with the same properties. From now on, we assume
such vectors $z_\pm$ to be given.\\
Since $n\ge 3$ there exists another unit vector $y\in\mathbb{R}^n$
orthogonal to $z_\pm$. Depending on whether $y^TAy\ge c_1$ or
$y^TAy\le c_1$, we can choose a linear combination $w=\alpha y
+\beta z_-$ or $w=\alpha y
+\beta z_+$, $\alpha\neq 0$,
such that $w^TAw=c_1$ and
$\|w\|=1$.
With
the nonsingular matrix $U=[\sqrt{1-t}\,u,\sqrt{t}\,v,w]$, we define
\begin{align*}
\tilde A&=U^T(A-c_1I)U=\left[
\begin{array}{ccc}
(1-t) (a_1-c_1)&\star&\star\\\star&t(b_1-c_1)&\star\\\star&\star&0
\end{array}\right]\\
\tilde B&=U^T(B-c_2I)U=\left[
\begin{array}{ccc}
(1-t) (a_2-c_2)&\star&\star\\\star&t(b_2-c_2)&\star\\\star&\star&w^TBw-c_2
\end{array}
\right]\;.
\end{align*}
By construction, $0=\tilde a_{11}+\tilde a_{22}=\tilde b_{11}+\tilde
b_{22}$. Hence, by Lemma \ref{lemma:fillmore}, there exists an orthogonal matrix
$Q_1\in\mathbb{R}^{2\times 2}$, such that for $Q=\left[
\begin{array}{cc}
Q_1&0\\0&1
\end{array}
\right]$ we have
\begin{align*}
Q^T \tilde A Q&=\left[
\begin{array}{ccc}
0&\star&\star\\\star&0&\star\\\star&\star&0
\end{array}\right],\quad
Q^T \tilde B Q=\left[
\begin{array}{ccc}
d_1&\star&\star\\\star&d_2&\star\\\star&\star&w^TBw-c_2
\end{array}\right]\text{ where } d_1=-d_2\;.
\end{align*}
If
$z^T Q^T \tilde A Q z=z^T Q^T \tilde B Q z=0$ for some vector
$z\in\mathbb{R}^3$, then $x=\frac{UQz}{\|UQz\|}\in\mathbb{R}^n$
yields
\begin{align*}
x^TAx
=\frac{z^TQ^T(\tilde A+c_1U^TU)Qz}{\|UQz\|^2}=c_1\;\text{ and }\;
x^TBx
=\frac{z^TQ^T(\tilde B+c_2U^TU)Qz}{\|UQz\|^2}=c_2\;,
\end{align*}
as desired.
Such a vector $z$ can be found as in the
constructive proof of Proposition~\ref{prop:fillmore_simultan}(a). If
$d_1=0$ or
$w^TBw=c_2$, then $z=e_1$ or $z=e_3$, respectively, is
suitable. Otherwise, after renormalization, the pair $(Q^T\tilde
AQ,Q^T\tilde BQ)$ has the same structure as $(A_3,B_3)$ in \eqref{eq:A3B3}.
This completes the proof.
\eprf
\subsection{Symplectic transformation of a matrix}
\label{sec:orth-sympl-transf}
Symplectic transformations play an important role in Hamiltonian
systems, e.g.\ \cite{MeyeHall09}. We briefly recapitulate some
elementary facts.
A real \emph{Hamiltonian matrix} has the form
\begin{align*}
H&=\left[
\begin{array}{cc}
A&P\\Q&-A^T
\end{array}
\right]\in\mathbb{R}^{2n\times 2n}\;,
\end{align*}
where $A \in\mathbb{R}^{n\times n}$ is arbitrary, while $P,Q
\in\mathbb{R}^{n\times n}$ are symmetric. If $J=\left[
\begin{array}{cc}
0&I\\-I&0
\end{array}
\right]$, then all real Hamiltonian matrices are characterized by the
property that $JH$ is symmetric. A real matrix $U \in\mathbb{R}^{2n\times 2n}$ is called
\emph{symplectic} if $U^TJU=J$. If $U$ is symplectic, then the transformation $H\mapsto U^{-1}HU$ preserves the
Hamiltonian structure. Amongst other things, symplectic orthogonal
transformations are relevant for the Hamiltonian eigenvalue problem, e.g.\
\cite{PaigLoan81, Loan84, Fass00}. There is a rich theory on normal
forms of Hamiltonian matrices under orthogonal symplectic
transformations (e.g.\ \cite{Byer86, LinMehr99}).
It is, however, a surprising improvement of Lemma \ref{lemma:fillmore} that an arbitrary zero trace matrix can
be hollowised by a symplectic orthogonal transformation.
Before we state the main result of this section, we provide some
examples of symplectic orthogonal matrices, which will be relevant in
the proof and the computations.
\begin{ex}\rm
It is well-known and straightforward to verify that an orthogonal matrix $U\in\mathbb{R}^{2n\times 2n}$
is symplectic, if and only if it has the form
\begin{align*}
U=\left[
\begin{array}{cc}
U_1&U_2\\-U_2&U_1
\end{array}
\right],
\text{ where $U_1,U_2\in\mathbb{R}^{n\times n}$.}
\end{align*}
This allows us to construct elementary symplectic orthogonal
matrices (see e.g.\ \cite{MackMack03}).
\begin{enumerate}
\item If
$V\in\mathbb{R}^{n\times n}$ is orthogonal, then $U=\left[
\begin{smallmatrix}
V&0\\0&V
\end{smallmatrix}
\right]$
is symplectic orthogonal.
\item If $c^2+s^2=1$ then we define the Givens-type symplectic orthogonal matrices
\begin{align}\label{eq:G2nics}
G_k(c,s)&=\left[
\begin{array}{ccc|ccc}
I_{k-1}&&&&&\\&c&&&s&\\&&I_{n-k}&&&\\\hline&&&I_{k-1}&&\\&-s&&&c&\\&&&&&I_{n-k}
\end{array}
\right],\; k\in\{1,\ldots,n\}\\
\mathcal{G}(c,s)&=
\left[
\begin{array}{ccc|ccc}
I_{n-2}&&&&&\\&c&s&&&\\&-s&c&&&\\\hline&&&I_{n-2}&&\\&&&&c&s\\&&&&-s&c
\end{array}
\right]\;. \label{eq:calGncs}
\end{align}
\item For $p_0^2+p_1^2+p_2^2+p_3^2=1$ we have the symplectic
orthogonal $4\times 4$-matrix
\begin{align}
S &= \begin{bmatrix}
p_0 & -p_1 & -p_2& -p_3\\
p_1 & p_0 & -p_3 & p_2\\
p_2 & p_3 & p_0 & -p_1\\
p_3 & -p_2 & p_1 & p_0
\end{bmatrix}\;.
\label{eq:4x4sympl}
\end{align}
\end{enumerate}
\end{ex}
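The defining properties of these examples are easy to check numerically. A small numpy sketch (the names \texttt{J\_mat}, \texttt{G\_k}, \texttt{S\_quat} are ours) verifies $U^TU=I$ and $U^TJU=J$ for instances of \eqref{eq:G2nics} and \eqref{eq:4x4sympl}:

```python
import numpy as np

def J_mat(n):
    """Canonical symplectic form J = [[0, I], [-I, 0]] of size 2n x 2n."""
    Z, I = np.zeros((n, n)), np.eye(n)
    return np.block([[Z, I], [-I, Z]])

def G_k(n, k, c, s):
    """Givens-type symplectic orthogonal matrix G_k(c,s), rotating the
    coordinate pair (k, k+n); k is 1-based and c^2 + s^2 = 1."""
    U = np.eye(2 * n)
    i, j = k - 1, k - 1 + n
    U[i, i] = U[j, j] = c
    U[i, j], U[j, i] = s, -s
    return U

def S_quat(p0, p1, p2, p3):
    """Symplectic orthogonal 4x4 matrix built from a unit quaternion."""
    return np.array([[p0, -p1, -p2, -p3],
                     [p1,  p0, -p3,  p2],
                     [p2,  p3,  p0, -p1],
                     [p3, -p2,  p1,  p0]])

c, s = np.cos(0.7), np.sin(0.7)
G = G_k(3, 2, c, s)
S = S_quat(0.5, -0.5, 0.5, 0.5)
assert np.allclose(G.T @ G, np.eye(6)) and np.allclose(G.T @ J_mat(3) @ G, J_mat(3))
assert np.allclose(S.T @ S, np.eye(4)) and np.allclose(S.T @ J_mat(2) @ S, J_mat(2))
```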
\begin{theorem}\label{thm:SymplOrth}
Consider a matrix $A\in\mathbb{R}^{2n\times 2n}$ with
$n\ge1$. Then there exists a symplectic orthogonal
matrix $U$, such that $U^TAU$ has constant diagonal.
\end{theorem}
\bf Proof: \nopagebreak \rm
W.l.o.g.\ we can assume that $A$ is symmetric with $\tr A=0$.
The transformation $U$ is constructed in several steps, where we make
use of the orthogonal symplectic transformations above.
{\bf 1st step:}
Let $d_1,\ldots,d_{2n}$ denote the diagonal entries of $A$.
Applying $G_k(c,s)$ from \eqref{eq:G2nics} for the transformation $A^+=G_k(c,s)^TA G_k(c,s)$
we can achieve that $d_{k}^+=d_{k+n}^+$.
After $n$ such transformations we have
\begin{align}
A^+&=\left[
\begin{array}{cc}A_{1}^+&\star\\\star&A_2^+ \end{array}
\right]=
\left[ \begin{array}{cc}
\begin{array}{ccc}
d_{1}^+&&\star\\&\ddots&\\\star&&d_{n}^+
\end{array}
&\star\\\star&\begin{array}{ccc}
d_{1}^+& &\star\\&\ddots&\\\star&&d_{n}^+
\end{array}
\end{array}
\right]\;.\label{eq:Aplus}
\end{align}
In particular $\tr A_1^+=\tr A_2^+=0$.
{\bf 2nd step:} By Proposition
\ref{prop:fillmore_simultan}, there exists an orthogonal matrix
$V\in\mathbb{R}^{n\times n}$, such that $V^TA_1^+V$ is hollow and
$V^TA_2^+V$ is almost hollow.
Thus, for the symplectic orthogonal matrix $U=\left[
\begin{array}{cc}
V&0\\0&V
\end{array}
\right]$, we have (with $d_1=0$)
\begin{align*}
U^TA^+U&=
\left[
\begin{array}{c|c}V^TA_{1}^+V&\star\\\hline\star&V^TA_2^+V \end{array}
\right]=
\left[ \begin{array}{c|c}
\begin{smallmatrix}
0&&\star\\[-2mm]&\ddots&\\\star&&\left[
\begin{smallmatrix}
d_1&a\\a&-d_1
\end{smallmatrix}
\right]
\end{smallmatrix}
&\star\\\hline\star&\begin{smallmatrix}
0& &\star\\[-2mm]&\ddots&\\\star&&\left[
\begin{smallmatrix}
d_2&b\\b&-d_2
\end{smallmatrix}
\right]
\end{smallmatrix}
\end{array}
\right]\;.
\end{align*}
{\bf 3rd step:}
In the following we can restrict our attention to the submatrix of
$U^TA^+U$ formed by the rows and columns with indices $n-1,n,2n-1,2n$.
Therefore, we now work with symplectic orthogonal matrices
$G_k(c,s)$ from \eqref{eq:G2nics}, where $k\in\{n-1,n\}$ or
$\mathcal{G}(c,s)$ from \eqref{eq:calGncs}.
Then it suffices to transform a $4\times 4$ symmetric matrix $A_4= \left[
\begin{array}{cccc}
d_1&a&\star&\star\\a&-d_1&\star&\star\\\star&\star&d_2&b\\\star&\star&b&-d_2
\end{array}
\right]$
with the symplectic Givens rotations
\begin{align*}
G_{12}&=
\left[\begin{array}{cccc}
c&s&0&0\\-s&c&0&0\\0&0&c&s\\0&0&-s&c
\end{array}\right],
\; G_{13}=
\left[\begin{array}{cccc}
c&0&s&0\\0&1&0&0\\-s&0&c&0\\0&0&0&1
\end{array}\right],\;
G_{24}=\left[\begin{array}{cccc}
1&0&0&0\\0& c&0&s\\0&0&1&0\\0&-s&0&c
\end{array}\right]\;.
\end{align*}
In an iterative approach, we show that for each such matrix $A_4$ with $|d_1|+|d_2|\neq0$ there
exists a product $G$ of matrices from this list so that
\begin{align*}
G^TA_4G=\left[
\begin{array}{cccc}
d_1^+&a^+&\star&\star\\a^+&-d_1^+&\star&\star\\\star&\star&d_2^+&b^+\\\star&\star&b^+&-d_2^+
\end{array}
\right]\quad\text{ with } \quad |d_1^+|+|d_2^+| < |d_1|+|d_2|\;.
\end{align*}
We distinguish between different cases.
If $d_1\neq d_2$, then we can apply transformations with suitable $G_{13}$
and $G_{24}$ so that $d_1^+=d_2^+=(d_1+d_2)/2$. In particular $
|d_1^+|+|d_2^+| \le |d_1|+|d_2|$.
If $d_1=d_2=:d$, let us assume w.l.o.g.\ that $|a|\ge|b|$. Moreover
assume that $d>0$ and $a>0$. Other combinations can be treated
analogously to the following considerations.
Setting
$A_4^+=G_{12}^TA_4G_{12}$, we have
\begin{align*}
d_1^+& =d_1^+(c,s)
= d(c^2-s^2) - 2acs \;,\quad
d_2^+ =d_2^+(c,s)
= d(c^2-s^2) - 2bcs \;.
\end{align*}
If $c=\cos(t)$, $s=\sin(t)$, then $d_1^+$ is positive for $t=0$,
negative for $t=\pi/4$ and strictly decreasing in $t$ on the interval $[0,\pi/4]$.
A direct calculation shows that $d_1^+=0$ for
\begin{align}\label{eq:defcs}
c&=\left(\frac12+\frac{a}2(d^2+a^2)^{-1/2}\right)^{1/2}\;,\quad
s=\left(\frac12-\frac{a}2(d^2+a^2)^{-1/2}\right)^{1/2}\;.
\end{align}
Here $c=\cos(t_0)$, $s=\sin(t_0)>0$ with minimal $t_0\in ]0,\pi/4[$,
and therefore $c^2>s^2$ and $c,s>0$.
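For the reader's convenience, the direct calculation behind \eqref{eq:defcs} can be spelled out as
\begin{align*}
c^2-s^2&=\frac{a}{\sqrt{d^2+a^2}}\;,\qquad
cs=\sqrt{c^2s^2}=\sqrt{\frac14-\frac{a^2}{4(d^2+a^2)}}=\frac{d}{2\sqrt{d^2+a^2}}\;,
\end{align*}
so that indeed $d_1^+=d(c^2-s^2)-2acs=(da-ad)(d^2+a^2)^{-1/2}=0$.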
Hence, if $b\ge 0$, then $a\ge b$ implies
\begin{align*}
d&> d_2^+= d(c^2-s^2) - 2bcs \ge 0\;.
\end{align*}
In this case $|d_1^+|+|d_2^+|=|d_2^+| \le |d|=\frac12
(|d_1|+|d_2|)$ as desired.\\
The case $b<0$ is slightly more subtle. We first derive a lower bound
for $s$ in \eqref{eq:defcs}. To this end note that the norm $\|A_4\|_2=\Delta$ is
invariant under orthogonal transformations and $a\le\Delta$.
Hence, for a given $d>0$, we have
$$s\ge \left(\frac12-\frac{\Delta}2(d^2+\Delta^2)^{-1/2}\right)^{1/2}=:\mu(d)>0\;.$$
Since $d_1^+,d_2^+\ge 0$, we have
\begin{align*}
|d_1^+|+|d_2^+|= d_2^+(c,s)&=2d(c^2 -s^2) - 2(a+b)cs\le 2d(1-2s^2)\le 2d(1-\mu(d)^2)<2d\;,
\end{align*}
where we used $d_1^+(c,s)=0$, $(a+b)cs\ge 0$ (as $a\ge |b|$) and $c^2-s^2=1-2s^2$.
Altogether, given $A_4$ we set $G=G_{13}G_{24}G_{12}$, where the
transformation with $G_{13}G_{24}$ achieves $d_1=d_2$ and $G_{12}$
makes $|d_1|+|d_2|$ smaller. Applying these transformations repeatedly, we obtain a sequence
$[d_1^{(k)},d_2^{(k)}]$ of diagonal entries, whose norm $|d_1^{(k)}|+|d_2^{(k)}|$ is monotonically
decreasing. If the limit of this norm were some $d^*>0$, then each step would contract
the norm at least by the factor $1-\mu(d^*/2)^2<1$, eventually forcing the norm below $d^*$, a contradiction. Hence
$[d_1^{(k)},d_2^{(k)}]\stackrel{k\to\infty}\longrightarrow 0$.
\eprf
\begin{remark}
The previous proof is constructive, but the iterative approach to
the $4\times 4$ case in the 3rd step is numerically inefficient. In
the appendix we provide a direct construction of the transformation,
which also exploits transformations of the special type \eqref{eq:4x4sympl}.
\end{remark}
\subsection{The complex Hermitian case}
\label{sec:complex-case}
The joint numerical range has been studied in even more detail for the
complex Hermitian case than for the real case. Some of our results
simplify or become even stronger if we allow for complex unitary
instead of real orthogonal transformations. In the current subsection
we sketch briefly how the results can be transferred.
For completeness we start with the complex version of Lemma
\ref{lemma:fillmore}, whose immediate proof is omitted, see \cite{Fill69}.
\begin{lemma}\label{lemma:fillmore_complex} Let
$A\in\mathbb{C}^{n\times n}$ be Hermitian with $\tr A=0$.
Then there exists a unitary matrix $V\in\mathbb{C}^{n\times
n}$, such that $V^*AV$ is hollow.
\end{lemma}
From our approach it is less obvious than in the real case
that the statement of this lemma holds for non-Hermitian $A$, too (a
fact already proven in \cite{Fill69}).
Our proof of Lemma \ref{lemma:fillmore} requires realness of the
diagonal entries, and in contrast to Remark \ref{rem:hollow}(c), the
property of $V^*AV$ being hollow is not equivalent to $V^*(A+A^*)V$
being hollow (take e.g.\ $A=iI$).
We will obtain the non-Hermitian version of Lemma
\ref{lemma:fillmore_complex} as a consequence of Proposition
\ref{prop:fillmore_simultan_complex} below. For the other statements
in this subsection we are not able to drop the Hermitian assumption
(see also Remark \ref{rem:Counterexamples}).
A complex version of Brickman's theorem has been
proven in \cite{Bind85}.
\begin{theorem}\label{thm:complexJNR_convex}
Consider Hermitian matrices $A,B,C\in\mathbb{C}^{n\times n}$. Depending on $n$, the
following sets are convex:
\begin{align*}
n\ge1:& \quad W(A,B):= \{(x^*Ax,x^*Bx)\;\big|\; x\in\mathbb{C}^n, \|x\|=1\}\;,\\
n\ge 3:& \quad W(A,B,C):= \{(x^*Ax,x^*Bx,x^*Cx)\;\big|\; x\in\mathbb{C}^n, \|x\|=1\}\;.
\end{align*}
\end{theorem}
Based on Theorem~\ref{thm:complexJNR_convex}, it is easy to derive complex versions of Proposition
\ref{prop:fillmore_simultan} and Theorem~\ref{thm:SymplOrth}.
\begin{proposition}\label{prop:fillmore_simultan_complex}
Let $A,B,C\in\mathbb{C}^{n\times n}$ be zero-trace Hermitian matrices.
\begin{itemize}
\item[(a)] If $n\ge 3$, there exists $v\in\mathbb{C}^n\setminus\{0\}$, such that $v^*Av=v^*Bv=v^*Cv=0$.
\item[(b)] There exists a unitary matrix $V\in\mathbb{C}^{n\times
n}$ such that $V^*AV$ and $V^*BV$ are hollow, while $V^*CV$ is almost hollow.
\end{itemize}
\end{proposition}
\bf Proof: \nopagebreak \rm
We first consider only $A$ and $B$. By Lemma
\ref{lemma:fillmore_complex}, we can assume $A$ to be hollow. Literally
as in the short proof of Proposition \ref{prop:fillmore_simultan}(a)
it follows then that $(0,0)$ lies in the convex hull of $W(A,B)$ and
thus in $W(A,B)$ itself by Theorem~\ref{thm:complexJNR_convex}.
Hence, as in the proof of
Proposition \ref{prop:fillmore_simultan}(b), there exists a unitary
matrix $V$, such that $V^*AV$ and $V^*BV$ are hollow. If $n<3$ this
proves (b).\\
If $n\ge 3$ we assume for simplicity that $A$ and $B$ are already
hollow. If one of the diagonal entries of $C$ vanishes, say
$c_{jj}=0$, then we can choose $v=e_j$. Otherwise, there exist
$j,k\in\{1,\ldots,n\}$ such that $c_{jj}c_{kk}<0$. Since
$(0,0,c_{jj}),(0,0,c_{kk})\in W(A,B,C)$, another application of Theorem~\ref{thm:complexJNR_convex} yields $0\in W(A,B,C)$ and thus
(a). As before, (b) is a consequence of (a).
\eprf
\begin{corollary}\label{cor:fillmore_complex}
Let $A\in\mathbb{C}^{n\times n}$ with $\tr A=0$.
Then there exists a unitary matrix $V\in\mathbb{C}^{n\times
n}$, such that $V^*AV$ is hollow.
\end{corollary}
\bf Proof: \nopagebreak \rm
The matrices $\Real A=\frac12(A+A^*)$ and $\Imag A=\frac1{2i}(A-A^*)$ are
Hermitian with zero trace. By Proposition
\ref{prop:fillmore_simultan_complex}(b), there exists a unitary $V$
such that $V^*(\Real A) V$ and $V^*(\Imag A) V$ are hollow. Thus $V^*AV$ is
hollow as well.
\eprf
\begin{corollary}\label{cor:SymplUnit}
Consider a Hermitian matrix $A\in\mathbb{C}^{2n\times 2n}$ with
$\tr A=0$.
\begin{itemize}
\item[(a)] There exists a unitary matrix $U$, such that $U^*JU=J$
and $U^*AU$ is hollow.
\item[(b)] There exists a unitary matrix $U$, such that $U^TJU=J$
and $U^*AU$ is hollow.
\end{itemize}
\end{corollary}
In the terminology of \cite{MackMack03}, the unitary matrix $U$ is
called \emph{conjugate
symplectic} in (a) and \emph{complex symplectic} in (b).\\
\bf Proof: \nopagebreak \rm
We repeat the first two steps in the proof of Theorem
\ref{thm:SymplOrth}. Since $A$ is Hermitian, the first step can be
carried out with a real transformation. Therefore we can assume that
$A$ has the form $A=A^+$ from \eqref{eq:Aplus}. By Proposition
\ref{prop:fillmore_simultan_complex}, there exists a unitary
$V\in\mathbb{C}^{n\times n}$ such that $V^*A_1^+V$ is hollow and (a)
$V^*A_2^+V$ is hollow or (b) $V^*\bar A_2^+V$ is hollow. Then $U=\left[
\begin{smallmatrix}
V&0\\0&V
\end{smallmatrix}
\right]$ fulfils (a) or $U=\left[
\begin{smallmatrix}
V&0\\0&\bar V
\end{smallmatrix}
\right]$ fulfils (b), respectively.
\eprf
\begin{remark}\label{rem:Counterexamples}
If some of the assumptions are dropped, we can produce counter\-examples
to the statements of
Theorem \ref{thm:complexJNR_convex} and Proposition
\ref{prop:fillmore_simultan_complex}.
\begin{enumerate}
\item Let $n=2$. For Hermitian matrices $A,B,C\in\mathbb{C}^{2\times 2}$ the
set $W(A,B,C)$ needs not be convex. In \cite{GutkJonc04} it was
shown that $W(A,B,C)$ is the unit sphere in $\mathbb{R}^3$ for $$A=\left[
\begin{array}{cc}
1&0\\0&-1
\end{array}
\right]\;,\quad B=\left[
\begin{array}{cc}
0&1\\1&0
\end{array}
\right]\;,\quad C=\left[
\begin{array}{cc}
0&i\\-i&0
\end{array}
\right]\;.$$
In particular $W(A,B,C)$ is not convex and $0\not\in W(A,B,C)$ for these matrices, implying that
there is no $v\neq0$ with $v^*Av=v^*Bv=v^*Cv=0$.
\item For non-Hermitian zero-trace matrices $A,B\in\mathbb{C}^{n\times
n}$ there might be no $v\neq0$ with $v^*Av=v^*Bv=0$, and
(consequently) $W(A,B)$ may
be non-convex. As an example for arbitrary $n\ge 2$ consider
\begin{align*}
A=\left[
\begin{array}{cc|c}
1&0&0\\0&-1-(n-2)i&0\\\hline 0&0&iI_{n-2}
\end{array}
\right]\;,\quad B=\left[
\begin{array}{cc|c}
0&0&0\\1&0&0\\\hline0&0&0_{n-2}
\end{array}
\right] \;.
\end{align*}
Obviously $(e_1^*Ae_1,e_1^*Be_1)=(1,0)$ and
$(e^*Ae,e^*Be)=\left(-(n-1)^{-1},0\right)$ for $e=(n-1)^{-1/2}\sum_{j=2}^ne_j$
with $\|e\|=1$. Hence $(0,0)$ lies in the convex hull of $W(A,B)$. But
the ansatz $(v^*Av,v^*Bv)=(0,0)$ with
$v=\left[\begin{smallmatrix}
x\\y\\z
\end{smallmatrix}\right]$, $x,y\in\mathbb{C}$, $z\in\mathbb{C}^{n-2}$ yields
\begin{align*}
|x|^2-|y|^2+i\big(\|z\|^2-(n-2)|y|^2\big)=0\;\text{ and }\; \bar y x=0\;.
\end{align*}
By the second equation we have $x=0$ or $y=0$. Together with the real part of
the first equation this implies $x=y=0$. The imaginary part of the
first equation then yields also $z=0$, i.e.\ $v=0$.
\end{enumerate}
\end{remark}
\section{Computational aspects}
\label{sec:comp_aspects}
The orthogonal transformation of a single matrix $A$ with $\tr A=0$
to a hollow matrix is straightforward along the proof of Lemma
\ref{lemma:fillmore}. Note that each nonzero diagonal entry can be
eliminated by one Givens rotation. Hence, if there are $\nu$ nonzero
diagonal entries, then $\nu-1$ Givens rotations are required.
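This elimination scheme is easy to implement. The following numpy sketch (the function name \texttt{hollowise} and the tolerance handling are ours, not from the text) repeatedly picks the most positive and most negative diagonal entries and cancels the former with one Givens rotation, as in the proof of Lemma \ref{lemma:fillmore}:

```python
import numpy as np

def hollowise(A, tol=1e-12):
    """Return an orthogonal V such that V^T A V is hollow; requires
    tr(A) = 0.  Works on the symmetric part S = (A + A^T)/2, since the
    diagonal of V^T A V equals the diagonal of V^T S V."""
    S = (A + A.T) / 2.0
    n = S.shape[0]
    V = np.eye(n)
    for _ in range(n):                 # at most n-1 rotations are needed
        d = np.diag(S)
        i, j = int(np.argmax(d)), int(np.argmin(d))
        if d[i] < tol and d[j] > -tol:
            break                      # diagonal is (numerically) zero
        # Zero out entry (i,i): for t = tan(theta) solve
        # d[i] + 2*S[i,j]*t + d[j]*t^2 = 0, real since d[i]*d[j] <= 0.
        t = np.roots([d[j], 2 * S[i, j], d[i]])[0].real
        h = np.hypot(1.0, t)
        c, s = 1.0 / h, t / h
        G = np.eye(n)                  # Givens rotation in the (i,j)-plane
        G[i, i] = G[j, j] = c
        G[j, i], G[i, j] = s, -s
        S = G.T @ S @ G
        V = V @ G
    return V

# demonstration on a random zero-trace matrix
rng = np.random.default_rng(1)
M = rng.standard_normal((6, 6))
A = M - (np.trace(M) / 6) * np.eye(6)
V = hollowise(A)
```

Each iteration zeroes one diagonal entry and never revisits it, so the loop performs at most $n-1$ rotations, in line with the operation count above.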
\subsection{Simultaneous transformation of two matrices}
The transformation of a pair $(A,B)$ of zero trace matrices follows
the constructive proof of Proposition \ref{prop:fillmore_simultan}.
In the first step, $A$ is transformed to hollow form.
Given a pair $(A,B)$ of $k\times k$ matrices, $k\ge 3$, with $A$ hollow and $B$ zero-trace, we first
check whether $b_{11}=0$. If so, then the
dimension can be reduced immediately.
Otherwise, let $i_2\neq i_3$ with
$b_{i_2,i_2}=\min\{b_{22},\ldots,b_{kk}\}$ and $b_{i_3,i_3}=\max\{b_{22},\ldots,b_{kk}\}$. For the submatrices $A_3$ of $A$
and $B_3$ of $B$ corresponding to the rows and columns $1,i_2,i_3$ as in \eqref{eq:A3B3}, a common neutral vector $v_3\in\mathbb{R}^3$
is computed. Generically, this requires the solution of a quartic
equation as in \eqref{eq:quartic}. The vector $v_3$ can be extended to
an orthogonal $k\times k$ matrix $V$ which differs from a permutation
matrix only in a $3\times 3$ subblock. After the transformation
\begin{align}
(A,B)\leftarrow
(V^TAV,V^TBV)=\left(\left[
\begin{array}{c|c}
0&\star\\\hline \star &\tilde A
\end{array}
\right], \left[\begin{array}{c|c}
0&\star\\\hline\star&\tilde B
\end{array}
\right]\right)\label{eq:trafoAB}
\end{align}
we have $\tr\tilde A=\tr\tilde B=0$, where at most
two diagonal entries of $\tilde A$ are non-zero. Hence, after another Givens rotation (which restores the hollow form of $\tilde A$) we have reduced the problem from dimension $k$
to $k-1$. Since each Givens rotation and each transformation
\eqref{eq:trafoAB}
requires $O(k)$ elementary operations, the whole algorithm
has complexity $O(n^2)$ including the solution of at most $n-2$
quartic equations.\\
We carried out experiments on a 2016 MacBook Pro with a 3.3 GHz
Intel Core i7 processor and 16 GB of memory running OS X 10.14.6 using
MATLAB version R2019b. For $20$ random pairs of $n\times n$ matrices $A$, $B$ we
averaged the computing times; see Table \ref{tab:comp_sim}. Although the theoretical complexity is
not clearly reflected in the measurements, the algorithm
is quite fast even for large matrices.
\begin{table}[tbhp]
\centering
\begin{tabular}{c|rrrrrrr}
size $n$&100&200&400&800&1600&3200&6400\\\hline
time in sec&0.016&0.037&0.10&0.72&7.9&71&852
\end{tabular}
\caption{Computing times for simultaneous orthogonal transformation to
hollow form}
\label{tab:comp_sim}
\end{table}
\subsection{Symplectic transformation of a matrix}
The symplectic orthogonal transformation of a single matrix follows
the three steps in the proof of Theorem~\ref{thm:SymplOrth}. In the
3rd step the direct construction in Appendix \ref{app.1} is used.
This also gives an algorithm of complexity $O(n^2)$. Numerical
experiments with MATLAB were carried out as in the previous
subsection. Again, the theoretical complexity is not clearly reflected
in the computing times in Table \ref{tab:comp_sym} (or only roughly between $2n=200$ and $2n=800$), but
most likely this is due to other effects such as memory management for large $n$.
\begin{table}[tbhp]
\centering
\begin{tabular}{c|rrrrrrr}
size $2n$&100&200&400&800&1600&3200&6400\\\hline
time in sec&0.010 & 0.013& 0.038 & 0.17 & 1.2& 11.7 & 97
\end{tabular}
\caption{Computing times for symplectic orthogonal transformation to
hollow form}
\label{tab:comp_sym}
\end{table}
\section{Applications to stabilization problems}
\label{sec:appl_stab}
In this section we present two related stabilization problems. Both
deal with unstable linear ordinary differential equations, whose coefficient
matrix has negative trace. Such systems have stable and unstable
modes, but the stable ones dominate. By a mixing of the modes the system
can be stabilized. This mixing can be achieved e.g.\ by adding
rotational forces or stochastic terms. For both cases we extend known
results from the literature. The basic idea lies in an asymptotic
analysis based on the hollow forms constructed in the previous sections.
\subsection{Hamiltonian stabilization by rotation}
\label{sec:hamilt-gyrosc-stab}
A linear autonomous system
$
\dot x=Ax
$
is called asymptotically stable if all solutions $x(t)$ converge to
$0$ for $t\to\infty$. It is well known that this is equivalent to the spectrum of
$A$ being contained in the open left half plane,
$\sigma(A)\subset\mathbb{C}_-$. In this case, necessarily $\tr
A<0$. Vice versa, one can ask whether for any matrix $A$ with $\tr
A<0$, there exists a zero trace matrix $M$ of a certain type, such that
$\sigma(A+M)\subset\mathbb{C}_-$. In \cite{CrauDamm07} it
has been shown that such a matrix $M$ can always be chosen to be
skew-symmetric. Then we say that $M$ stabilizes $A$
by rotation, see e.g.\ \cite{BaxeHenn93}.
The following theorem extends this result.
\begin{theorem}\label{thm:HamSkew}
Let $A\in\mathbb{R}^{2n\times 2n}$ with $\tr A<0$. Then there
exists a skew-symmetric Hamiltonian matrix $M$, such that $\sigma(A+M) \subset\mathbb{C}_-$.
\end{theorem}
\bf Proof: \nopagebreak \rm
By Theorem~\ref{thm:SymplOrth} there exists a symplectic orthogonal matrix $U$, such
that $U^TAU$ has all diagonal entries equal to $\alpha=\tfrac{\tr A}{2n}<0$.
Consider $M_0=\left[
\begin{array}{cc}
0&\Lambda\\-\Lambda&0
\end{array}
\right]$ with
$\Lambda=\diag(\lambda_1,\ldots,\lambda_n)\in\mathbb{R}^{n\times n}$, where
$|\lambda_j|\neq|\lambda_k|$ for $j\neq k$. Then $M_0$ is Hamiltonian and
skew-symmetric, and all its eigenvalues
$\pm i\lambda_k$ are simple,
with respective eigenvectors $e_k\pm ie_{k+n}$. \\
For $\varepsilon>0$ we perturb $M_0$ to $M_\varepsilon=M_0+\varepsilon U^TAU$.
By \cite[Theorem 3.1]{CrauDamm07} (see also \cite{StewSun90,
HinrPrit05}) the eigenvalues of $M_\varepsilon$ have the expansion
\begin{align*}
\pm i\lambda_k+\varepsilon(e_k\pm ie_{k+n})^*U^TAU (e_k\pm ie_{k+n})+O(\varepsilon^2)
=\pm i\lambda_k+\varepsilon\alpha+O(\varepsilon^2)\;.
\end{align*}
Hence
\begin{align*}
\sigma(A+\tfrac1\varepsilon UM_0U^T)=\{\alpha\pm\tfrac1\varepsilon i\lambda_k+O(\varepsilon)\;\big|\;k=1,\ldots,n\}\subset\mathbb{C}_-
\end{align*}
for sufficiently small $\varepsilon$. The matrix $M=\frac1\varepsilon
UM_0U^T$ stabilizes $A$ by rotation. Since $U$ is symplectic
orthogonal, the matrix $M$ is skew-symmetric Hamiltonian.
\eprf
\begin{ex}\rm
We illustrate Theorem~\ref{thm:HamSkew} by $A=\diag(1,1,1,-4)$
and $M_0$ as above with $\Lambda=\diag(1,2)$. The matrix $A$ is
hollowised by the orthogonal symplectic matrix
$U=\tfrac12\left[
\begin{smallmatrix} \sqrt{2} & \sqrt{2} & 0 & 0\\ 1 & -1 & 1 &
-1\\ 0 & 0 & \sqrt{2} & \sqrt{2}\\ -1 & 1 & 1 & -1
\end{smallmatrix}\right]$. Then $\tilde M_0=UM_0U^T=\tfrac14\left[
\begin{smallmatrix}
0 & -\sqrt{2} & 6 & -\sqrt{2}\\ \sqrt{2} & 0 & -\sqrt{2} & 6\\ -6 &
\sqrt{2} & 0 & -\sqrt{2}\\ \sqrt{2} & -6 & \sqrt{2} & 0
\end{smallmatrix}\right]$ is skew-symmetric and Hamiltonian. The spectral abscissa
$\alpha(\mu)=\max\Real\sigma\left(A+\mu \tilde M_0\right)$ for $\mu>0$ is depicted in Fig.\
\ref{fig:specAbscHam}. It becomes negative for $\mu\approx3.7$.
Hence for $\mu>3.7$ the system $\dot x=(A+\mu \tilde M_0)x$ is
asymptotically stable. In \cite{CrauDamm07} a servo-mechanism was
described, which chooses a suitable gain $\mu$ adaptively via the
feedback equation
\begin{align}\label{eq:adaptively}
\dot x&=(A+\mu(t) \tilde M_0)\,x\;,\quad \dot\mu =\|x(t)\|\;.
\end{align}
This method also works in the current example (see the right plot in
Fig.\ \ref{fig:specAbscHam}), where $\mu$ roughly
converges to $e^{2.73}-1\approx14.37$.
\begin{figure}[h]\centering
\begin{minipage}{.48\linewidth}
\input{./SpecAbscHam.tex}
\end{minipage}\hfill
\begin{minipage}{.48\linewidth}
\input{./SimHamStab.tex}
\end{minipage}
\caption{Left: Spectral abscissa $\alpha_j$ as a function of
$\mu$. Right: Adaptively stabilized
system \eqref{eq:adaptively} with $x(0)=[1,1,1,1]^T$, $\mu(0)=0$.}\label{fig:specAbscHam}
\end{figure}
\end{ex}
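The example can be verified numerically. The following NumPy sketch (the function name `abscissa` and the test value $\mu=1000$ are our own) rebuilds $U$, $M_0$, and $\tilde M_0=UM_0U^T$ from the example and evaluates the spectral abscissa of $A+\mu\tilde M_0$:

```python
import numpy as np

s2 = np.sqrt(2.0)
A = np.diag([1.0, 1.0, 1.0, -4.0])
U = 0.5 * np.array([[s2, s2, 0, 0],
                    [1, -1, 1, -1],
                    [0, 0, s2, s2],
                    [-1, 1, 1, -1]])
Lam = np.diag([1.0, 2.0])
Z = np.zeros((2, 2))
M0 = np.block([[Z, Lam], [-Lam, Z]])
M0t = U @ M0 @ U.T  # skew-symmetric Hamiltonian

def abscissa(mu):
    """Spectral abscissa of A + mu * M0t."""
    return np.max(np.linalg.eigvals(A + mu * M0t).real)
```

One can check that $U$ is orthogonal and symplectic, that $U^TAU$ has constant diagonal $-1/4$, and that the abscissa is positive at $\mu=0$ but negative for large $\mu$, consistent with Fig.\ \ref{fig:specAbscHam}.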
\subsection{Simultaneous stabilization by noise}
Stabilization of a dynamic system by noise processes is an interesting
phenomenon, which was analyzed in \cite{ArnoCrau83} (see also
e.g.\ \cite{Arno90, CaraRobi04}). As a particular situation, we consider the Stratonovich equation
\begin{align}\label{eq:StratoM}
dx&=Ax\,dt+Mx\circ dw\;.
\end{align}
In this subsection we assume basic knowledge of stochastic calculus as e.g.\ in
\cite{Gard88a, KloePlat95}, but actually we only need the
spectral characterization of stability given in \eqref{eq:ItoLyap}.
Nevertheless we outline the background.
Informally, \eqref{eq:StratoM} can be regarded as an ordinary differential
equation with noise perturbed coefficients, $\dot x(t)=(A+M\dot w(t))x(t)$.
Here $w(t)$ is a (stochastic) Wiener process, and the equation is understood as an
integral equation $x(t)=\int^t A x(\tau)\,d\tau+\int^t
Mx(\tau)\diamond dw(\tau)$ (the symbol $\diamond$ is explained below). The stochastic integral is approximated by
Riemann-Stieltjes type sums
\begin{align*}
\int^t Mx(\tau)\diamond dw(\tau)=\lim\sum_{j=1}^N Mx(\tilde\tau_j)\big(w(\tau_j)-w(\tau_{j-1})\big)\;.
\end{align*}
Since $w$ is not of bounded variation, the choice of
$\tilde\tau_j$ is essential. In the Stratonovich case (where we write $\diamond=\circ$), one sets
$\tilde\tau_j=(\tau_j+\tau_{j-1})/2$; in the It\^o-case (where
$\diamond$ is left out), one sets $\tilde\tau_j=\tau_j$. While the
\emph{Stratonovich} interpretation is often more appropriate for \emph{modelling}
physical systems, \emph{analysis} and \emph{numerical solution} are easier for
\emph{It\^o} equations. Therefore we will make use of transformations between
the solutions of the different types.
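The difference between the two evaluation rules is visible already for the toy integral $\int_0^T w\,\diamond dw$. In the following NumPy sketch (our own illustration) the averaged-endpoint sum, which also converges to the Stratonovich integral, is used in place of the midpoint evaluation:

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 1.0, 100_000
dw = rng.standard_normal(N) * np.sqrt(T / N)
w = np.concatenate(([0.0], np.cumsum(dw)))  # Brownian path on [0, T]

# Riemann-Stieltjes sums for the integral of w dw:
ito = np.sum(w[:-1] * dw)                    # left endpoints (Ito)
strat = np.sum(0.5 * (w[:-1] + w[1:]) * dw)  # averaged endpoints (Stratonovich)
# Limits: Stratonovich gives w(T)^2/2, Ito gives w(T)^2/2 - T/2.
```

The two sums differ by half the quadratic variation $\sum(\Delta w)^2\approx T$, which is exactly the Wong–Zakai-type correction between the two calculi.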
We call \eqref{eq:StratoM} \emph{asymptotically mean square} (or
\emph{2nd mean}) \emph{stable}, if for all solutions $x(t)$
the expected value of the squared norm $E(\|x(t)\|^2)$ converges to
zero as $t\to\infty$ (see e.g.\ \cite{KloePlat95, Damm04}).\\
For a given matrix $A\in\mathbb{R}^{n\times n}$ we want to construct
$M$ such that \eqref{eq:StratoM} is asymptotically mean square stable.
It follows
from results in \cite{ArnoCrau83} that this is possible (with a
skew-symmetric $M$), if and only
if $\tr A<0$. Here we derive the following generalization.
\begin{theorem}\label{thm:ArnoCrau83plus}
Let $A_1,A_2\in\mathbb{R}^{n\times n}$ with $\tr A_1<0$ and $\tr
A_2<0$ be given. Then there exists a common skew-symmetric matrix $M$, such that
the systems
\begin{align}\label{eq:StratoMa}
dx_1&=A_1x_1\,dt+Mx_1\circ dw_1\\
dx_2&=A_2x_2\,dt+Mx_2\circ dw_2 \label{eq:StratoMb}
\end{align}
are both asymptotically mean square stable.
\end{theorem}
\bf Proof: \nopagebreak \rm
Let $\alpha_1=\tfrac{\tr A_1}n<0$ and $\alpha_2=\tfrac{\tr A_2}n<0$.
By Proposition \ref{prop:fillmore_simultan} there exists an orthogonal
matrix $V$ such that $V^T(A_1-\alpha_1 I)V$ is hollow and
$V^T(A_2-\alpha_2 I)V$ is almost hollow. Transforming $x_j\mapsto V^Tx_j$ we can assume that $A_1-\alpha_1 I$ is hollow and
$A_2-\alpha_2 I$ is almost hollow.
For brevity we only elaborate on the case of odd $n=2k+1$. The
even case is analogous and in fact simpler (see Example \ref{ex:simstabnoise}). Let
$\omega=\left[\omega_1,\ldots,\omega_k\right]$ with
$0<\omega_1<\ldots<\omega_k$, and set
\begin{align*}
M(\omega)&= \left[\begin{array}{cccc}
\begin{smallmatrix}
0
\end{smallmatrix}
&&&\\
&\begin{smallmatrix}
0& \omega_1\\-\omega_1&0
\end{smallmatrix}
&&\\
&&\ddots&\\
&&&\begin{smallmatrix}
0& \omega_k\\-\omega_k&0
\end{smallmatrix}
\end{array}\right]\in\mathbb{R}^{n\times n}\;.
\end{align*}
We claim that for $M=\mu M(\omega)$ with sufficiently large $\mu>0$
both \eqref{eq:StratoMa} and \eqref{eq:StratoMb} are asymptotically
mean square stable.\\
Note that all eigenvalues of $M(\omega)$ are simple. An orthonormal set
of eigenvectors is given by $u_1=e_1$ and
$u_j=\frac1{\sqrt2}(e_j+ie_{j+1})$,
$u_{j+1}=\frac1{\sqrt2}(e_j-ie_{j+1})$ for even $j$.
Hence with $U=[u_1,\ldots,u_n]$, we have
\begin{align}\label{eq:specdecM}
U^*M(\omega)U=\diag(0,i\omega_1,-i\omega_1,\ldots,i\omega_k,-i\omega_k)=:\diag(i\tilde
\omega_1,\ldots,i\tilde
\omega_n)\;.
\end{align}
We rewrite the Stratonovich equations as the equivalent It\^o
equations (e.g.\ \cite{Gard88a})
\begin{align}\label{eq:ItoM}
dx_j&=\left(A_j+\frac12M^2\right)x_j\,dt+Mx_j\,dw_j\;.
\end{align}
It is well known (e.g.\ \cite{Damm04}) that \eqref{eq:ItoM} is asymptotically mean square stable if and only if
\begin{align}\label{eq:ItoLyap}
\sigma(\mathcal{L}_{A_j+\tfrac12M^2}+\Pi_M)\subset\mathbb{C}_-\;.
\end{align}
Here $\mathcal{L}_{N}:X\mapsto NX+XN^T$ for arbitrary
$N\in\mathbb{R}^{n\times n}$, and $\Pi_{M}:X\mapsto MXM^T$. We replace
$M$ by $\mu M(\omega)$. Then for large $\mu^2=1/\varepsilon$, we interpret
\begin{align}
\tfrac{1}{\mu^2} \left(\mathcal{L}_{A_j+\tfrac12(\mu M(\omega))^2}+\Pi_{\mu M(\omega)}\right)
&=\left(\mathcal{L}_{M(\omega)^2/2}+\Pi_{M(\omega)}\right)+\varepsilon\mathcal{L}_{A_j}\label{eq:perturbedL}
\end{align}
as a perturbation of $\mathcal{L}_{M(\omega)^2/2}+\Pi_{M(\omega)}$. It follows from \eqref{eq:specdecM} that
\begin{align*}
(\mathcal{L}_{M(\omega)^2/2}+\Pi_{M(\omega)})(u_ku_\ell^*)&=\tfrac12\left(M(\omega)^2
u_ku_\ell^*+u_ku_\ell^*M(\omega)^2\right)+M(\omega)
u_ku_\ell^*M(\omega)\\
&=-\tfrac12\left(\tilde\omega_k^2+\tilde\omega_\ell^2-2\tilde\omega_k\tilde\omega_\ell\right) u_ku_\ell^* =-\tfrac12\left(\tilde\omega_k-\tilde\omega_\ell\right)^2 u_ku_\ell^*
\end{align*}
with $\tilde\omega_k-\tilde\omega_\ell=0$, if and only if $k=\ell$.
Thus, $\mathcal{L}_{M(\omega)^2/2}+\Pi_{M(\omega)}$ has an $n$-fold
eigenvalue~$0$ while all other eigenvalues are strictly negative.
We only have to consider the perturbation of the eigenvalue $0$. For
small $\varepsilon$, the perturbed mapping \eqref{eq:perturbedL} has an
$n$-dimensional invariant subspace with a basis, which depends
smoothly on $\varepsilon$ and coincides with $u_1u_1^*,\ldots,u_nu_n^*$
for $\varepsilon=0$, see \cite{StewSun90}. The restriction of \eqref{eq:perturbedL} to this
subspace has the matrix representation $B_j=(b_{k\ell}^{(j)})$ with
\begin{align*}
b_{k\ell}^{(j)}&=\tr\left(\mathcal{L}_{A_j}(u_\ell u_\ell^*)u_ku_k^*\right)=u_k^*\left(A_j
u_\ell u_\ell^*+u_\ell u_\ell^*A_j^T\right)u_k\\&=\left\{
\begin{array}{ll}
0&\ell\neq k\\
u_k^*(A_j+A_j^T)u_k=2\alpha_j&\ell= k
\end{array}
\right. \;,
\end{align*}
since both $A_j-\alpha_j I$ are almost hollow.
Hence
$B_j=2\alpha_j I$ has all eigenvalues in $\mathbb{C}_-$ and so has the
matrix in \eqref{eq:perturbedL} for sufficiently small $\varepsilon$.
This proves that for $M=\mu M(\omega)$ with sufficiently large $\mu$,
both
\eqref{eq:StratoMa} and \eqref{eq:StratoMb} are asymptotically mean square stable.
\eprf
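For small systems, criterion \eqref{eq:ItoLyap} can be checked directly by vectorizing the operators via Kronecker products (a NumPy sketch; the function name and the test matrices are our own):

```python
import numpy as np

def ms_abscissa(A, M):
    """Spectral abscissa of L_{A + M^2/2} + Pi_M, using the identities
    vec(N X + X N^T) = (I kron N + N kron I) vec(X) and
    vec(M X M^T) = (M kron M) vec(X) for column-major vec."""
    n = A.shape[0]
    N = A + 0.5 * (M @ M)
    I = np.eye(n)
    op = np.kron(I, N) + np.kron(N, I) + np.kron(M, M)
    return np.max(np.linalg.eigvals(op).real)
```

For instance, with $A=\diag(1,-2)$ (so $\tr A=-1<0$) and $M=\mu\left[\begin{smallmatrix}0&1\\-1&0\end{smallmatrix}\right]$, the abscissa is positive at $\mu=0$ and negative at $\mu=10$, in line with the theorem.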
\begin{ex}\rm\label{ex:simstabnoise}
For an illustration with even $n$, we choose the simple but
arbitrary matrix pair
\begin{align*}
(A_1,A_2)&=\left(\left[\begin{smallmatrix}
-1&1&1&1&1&1\\
1&0&1&1&1&1\\
0&1&0&1&1&1\\
0&0&1&0&1&1\\
0&0&0&1&0&1\\
0&0&0&0&1&0 \end{smallmatrix}\right],
\left[\begin{smallmatrix}1 & -1 & 0 & 0 & 0 & 0\\ 1 & 1 & -1 & 0 &
0 & 0\\ 1 & 0 & 1 & -1 & 0 & 0\\ 1 & 0 & 0 & 1 & -1 & 0\\ 1 & 0 & 0 & 0 & 1 & -1\\ 1 & 0 & 0 & 0 & 0 & -6 \end{smallmatrix}\right]\right)\;.
\end{align*}
The orthogonal matrix
\begin{align*}
U= \left[\begin{smallmatrix}
\phantom-0.1919&\phantom-0.1709&-0.1182&\phantom-0.4410&\phantom-0.3961&\phantom-0.7541\\
-0.8960&-0.1266&\phantom-0.1726&-0.0363&-0.1203&\phantom-0.3682\\
\phantom-0.0159&-0.6560&-0.1059&-0.3989&\phantom-0.6311&\phantom-0.0298\\
\phantom-0.0144&\phantom-0.0086&-0.8175&-0.3556&-0.3660&\phantom-0.2664\\
\phantom-0.0138&\phantom-0.6274&\phantom-0.2379&-0.6786&\phantom-0.2555&\phantom-0.1542\\
-0.3996&\phantom-0.3616&-0.4692&\phantom-0.2411&\phantom-0.4808&-0.4473
\end{smallmatrix}\right]
\end{align*}
transforms $(A_1,A_2)$ to $(\tilde
A_1,\tilde A_2)$ with $(\tilde A_1+\frac1nI,\tilde A_2+\frac1nI)$
being almost hollow, where
\begin{align*}
\tilde A_1&=\left[\begin{smallmatrix}
-0.1667&-0.6778&\phantom-0.8432&\phantom-0.5969&-1.2359&-0.8144\\
\phantom-0.3655&-0.1667&-0.1359&\phantom-0.0294&-0.0818&-0.4453\\
\phantom-0.4809&-0.4877&-0.1667&\phantom-1.1305&-1.0531&\phantom-0.2396\\
\phantom-0.2712&-0.5650&\phantom-1.0652&-0.1667&-0.4391&-0.0790\\
-1.3083&\phantom-0.7799&-0.8330&-1.1435&-0.1667&\phantom-0.0971\\
-1.3506&\phantom-0.1132&-1.5411&-1.4969&\phantom-1.1554&-0.1667
\end{smallmatrix}\right]\;,\\
\tilde A_2&=\left[\begin{smallmatrix}
-0.1667&\phantom-0.2200&-1.2765&-0.2157&\phantom-1.4333&-2.2393\\
\phantom-1.4680&-0.1667&\phantom-0.8754&-0.9385&-1.5753&\phantom-1.6896\\
-1.5017&\phantom-1.5458&-0.1667&-0.2265&\phantom-1.1226&-1.9108\\
\phantom-0.5741&-0.3164&\phantom-0.2973&-0.1667&-0.9509&-0.4748\\
\phantom-1.9688&-0.9634&\phantom-2.1166&-0.5422&\phantom-0.0562&\phantom-2.0303\\
-0.4528&\phantom-1.3096&-1.5708&\phantom-1.2474&\phantom-1.3797&-0.3895
\end{smallmatrix}\right]\;.
\end{align*}
For
$
M\left(\left[
\begin{smallmatrix}
1\\2\\3
\end{smallmatrix}
\right]\right)= \left[\begin{array}{ccc}
\begin{smallmatrix}
0& 1\\-1&0
\end{smallmatrix}
&&\\
& \begin{smallmatrix}
0& 2\\-2&0
\end{smallmatrix}&\\
&&\begin{smallmatrix}
0&3\\-3&0
\end{smallmatrix}
\end{array}\right]
$
we obtain the stabilizing skew-symmetric matrix
\begin{align*}
M= UM\left(\left[
\begin{smallmatrix}
1\\2\\3
\end{smallmatrix}
\right]\right)U^T&=\left[
\begin{smallmatrix}
\phantom-0.0000&\phantom-0.6949&-1.3331&\phantom-1.9489&-0.3262&-1.1247\\
-0.6949&-0.0000&-0.2634&\phantom-0.1201&-1.1153&-0.6950\\
\phantom-1.3331&\phantom-0.2634&\phantom-0.0000&-0.0300&\phantom-0.6217&-1.5717\\
-1.9489&-0.1201&\phantom-0.0300&\phantom-0.0000&\phantom-0.9140&-0.6124\\
\phantom-0.3262&\phantom-1.1153&-0.6217&-0.9140&\phantom-0.0000&-0.8317\\
\phantom-1.1247&\phantom-0.6950&\phantom-1.5717&\phantom-0.6124&\phantom-0.8317&-0.0000
\end{smallmatrix}
\right]\;.
\end{align*}
In Fig.\ \ref{fig:SpecAbsc}, we have plotted the spectral abscissae
\begin{align*}
\alpha_j(\mu)=\max\Real\sigma\left(\mathcal{L}_{A_j+\tfrac12(\mu M)^2}+\Pi_{\mu M}\right)
\end{align*}
for $j=1,2$ depending on $\mu$. Roughly for $\mu\ge 7$ both are
negative. We
chose $\mu=5$ and $\mu=20$ for simulations, where
$\alpha_1(5)\approx-0.03<0$, $\alpha_2(5)\approx0.25>0$,
$\alpha_1(20)\approx-0.32<0$, $\alpha_2(20)\approx-0.29<0$. For both cases,
Fig.\ \ref{fig:Samples} shows five sample paths of
$\|x_j\|$, $j=1,2$, with random initial conditions $x_0$ satisfying $\|x_0\|=1$. The solutions were computed by the
Euler-Maruyama scheme (e.g.\ \cite{KloePlat95}) with step size $10^{-5}$ applied to the It\^o
formulation \eqref{eq:ItoM} of the Stratonovich equation. The plots
exhibit the expected stability behaviour.
\begin{figure}[h]\centering
\begin{minipage}{.5\linewidth}
\input{./SpecAbs.tex}
\end{minipage}
\caption{Spectral abscissa $\alpha_j$ as a function of
$\mu$.}\label{fig:SpecAbsc}
\end{figure}
\begin{figure}[h]\centering
\begin{minipage}{.5\linewidth}
\input{./SimStrato5a.tex}
\end{minipage}\hfill
\begin{minipage}{.5\linewidth}
\input{./SimStrato.tex}
\end{minipage}
\caption{Sample paths of $\|x_j(t)\|$ for $\mu=5$ (left) and
$\mu=20$ (right)}\label{fig:Samples}
\end{figure}
\end{ex}
\begin{remark}\label{rem:common22}
There even exists a common skew-symmetric matrix $M$ so that $m$ equations
\begin{align}\label{eq:StratoMj}
dx_j&=A_jx_j\,dt+Mx_j\circ dw_j \text{ with }\tr A_j<0 \quad j=1,\ldots,m
\end{align}
are simultaneously stabilized, if a common orthogonal matrix $U$ can be
found, so that for all $j$
\begin{align*}
\diag\left(U^T\Big(A_j-\tfrac{\tr A_j}nI\Big)U\right)&=[d^{(j)}_1,-d^{(j)}_1,\ldots,d^{(j)}_k,-d^{(j)}_k,0],
\text{ if } n=2k+1, \text{ or }\\
\diag\left(U^T\Big(A_j-\tfrac{\tr
A_j}nI\Big)U\right)&=[d^{(j)}_1,-d^{(j)}_1,\ldots,d^{(j)}_k,-d^{(j)}_k],
\text{ if } n=2k.
\end{align*}
The proof of Theorem \ref{thm:ArnoCrau83plus} applies literally in
this case.\\
If the matrix $U$ can be chosen symplectic, then $M$ can be chosen
Hamiltonian, as a combination with the proof of Theorem
\ref{thm:HamSkew} shows.
\end{remark}
\section{Conclusion and outlook}
As our main theoretic contribution we see Theorem \ref{thm:SymplOrth},
which states that every real matrix is symplectic-orthogonally similar to a
matrix with constant diagonal (w.l.o.g.\ a hollow matrix, if the
trace is subtracted). The proof requires a result on the simultaneous
transformation of two matrices which is closely related to properties
of the joint numerical range. For our applications it turns out that
the hollow form can be weakened to a $2\times 2$-block hollow form,
where only $a_{ii}+a_{i+1,i+1}=0$ for $i=1,3,\ldots$ (see Remark \ref{rem:common22}).
This gives rise to further connections and questions, which were not
discussed here. For instance, a simultaneous transformation to a
$2\times 2$-block hollow form is related to the real $2$-nd numerical range (cf.\
\cite{FillWill71, LiPoon00}). General conditions on the convexity of
the real $2$-nd numerical range (like e.g.\ in \cite{GutkJonc04}) do
not seem to be available. Therefore it is unclear whether more than two
zero-trace matrices can always be transformed to $2\times 2$-block hollow form.\\
Numerically, also the following variant of Proposition
\ref{prop:fillmore_simultan} seems to hold, but we were not able to
prove it. We state it as a conjecture.
\begin{conj}
Consider $A,B\in\mathbb{R}^{n\times n}$ with
$\tr A=\tr B=0$.
There exists an orthogonal matrix $V\in\mathbb{R}^{n\times
n}$ such that $V^TAV$ is hollow and $VBV^T$ is almost hollow.
Note that here $A\mapsto V^TAV$, but $B\mapsto VBV^T$; the
transformation applied to $A$ is the inverse (also adjoint) of the
one applied to $B$ (unlike in Proposition
\ref{prop:fillmore_simultan}).
\end{conj}
Events celebrating IWD
List of past International Women's Day Events.
Men behaving badly; exposing institutional scandal
In the wake of allegations against Weinstein, Trump, the President's Club and MPs, claims of sexual harassment of women within business and organisations are being reported on a daily basis.
City, University of London was delighted to welcome Madison Marriage, the FT journalist who exposed the President's Club story and leading experts to discuss these very current issues and subsequent reporting by the media.
The panel included:
Madison Marriage - journalist at the Financial Times, where she has been covering corporate tax and accounting since 2017. Madison recently went undercover as a hostess at the all-male President's Club charity dinner at London's Dorchester Hotel. Previously she wrote about the global asset management industry. Madison joined the FT in 2012 from Incisive Media where she worked on two investment titles.
Professor Chris Greer - Criminologist with expertise in organisational scandal. Professor Greer recently presented a BBC Radio 4 documentary The Scandal Machine where he traces the evolution of scandals involving high-profile public figures and how the media report them.
Professor Heather Brooke - Investigative Journalist who helped expose the 2009 MP expenses scandal. Professor Brooke is the author of Your Right to Know (2006), The Silent State (2010), and The Revolution Will Be Digitised (2011).
Our chair for the evening was Professor Lis Howell, Director of Broadcasting at City and expert and researcher on women in broadcasting.
Women in the workplace: what does professional look like?
This year City, University of London invited leading experts to discuss the issues surrounding women's appearance in the workplace.
Angela Jackman - Senior Law Lecturer and partner at Simpson Millar
Angela's recent research centres on the way African-Caribbean women choose to wear their hair. She recently had some evidence published in the House of Commons Joint Committee report into "High heels and workplace dress codes".
Charlotte Proudman - Barrister and Doctoral Researcher at the University of Cambridge
Charlotte is also an expert and media commentator on women's rights.
Dr Florence Sutcliffe-Braithwaite - Lecturer in twentieth century British History, UCL
Florence is a historian of 20th century Britain, and has published on Thatcherism, New Labour, and ideas about class, gender and sexuality in the late 20th century. She is also co-editor of Renewal: A Journal of Social Democracy.
Nicola Thorp – actress and equality campaigner
Nicola was recently subjected to gender discrimination in a temporary role as a receptionist, when she was sent home for not wearing high heels. The incident has resulted in a parliamentary debate to review the law around discriminatory dress codes.
We were delighted to welcome our chair for the evening Professor Lis Howell, Director of Broadcasting at City and expert and researcher on women in broadcasting.
On International Women's Day City, University of London invited leading experts to discuss the very current issues surrounding women's appearance in the workplace.
Is parity possible?
A thought-provoking panel discussion with leading female academics who have research or expertise in the field of women in leadership. Our panellists discussed how the situation is improving in certain areas, whether we will reach a sticking point and what could or should be done to break through the barriers.
Dr Ruth Sealy – Lecturer and researcher in Organisational Psychology. Areas of expertise include Women in Leadership; Board composition; Role Models; Diversity and Intersectionality; and Corporate Governance.
Dr Amanda Goodall – Senior Lecturer in Management. Amanda's research is on leadership and organisational performance.
Professor Lis Howell – Director of Broadcasting. Founder of the Expert Women on news campaign
Chair: Penny Marshall.
On International Women's Day 2016 City hosted a thought-provoking panel debate chaired by ITV's Penny Marshall.
BAME Networking Group event with Bonnie Greer OBE
A 2015 report found that there are 17 black female professors in the entire HE system.
72% of white students get a First or Upper Second degree.
53% of black students get the same degree classification even when they enter institutions with the same A level grades.
The BAME Networking Group at City, University of London welcomed author Bonnie Greer OBE, Chancellor of Kingston University to discuss "How do we go forward from here?"
\section{Introduction}
The interaction of ions with condensed matter has drawn the
attention of
many researchers from the beginning of this century
\cite{rut:pm21:11}. A
great deal of
work in this field has dealt with the energy loss of swift ions
in
solids. In this regard the work of Bethe \cite{bet:ap5:30}, Fermi
\cite{fer:zf29:27}, Williams \cite{wil:rmp17:45},
and Lindhard \cite{lin:mfm28:54} opened the modern way of calculating
the stopping power of
swift ions in condensed matter. The case of low-velocity projectiles is
much more complicated due to the strong interaction of the moving ion
with the solid. In this case, the projectile is dressed by a number of
electrons that strongly screen the ion-solid interaction. Brandt
\cite{bra:acs:75}
introduced a
Thomas-Fermi statistical model in order to define an effective
ion charge that takes into account how the bound electrons dress the
projectile. Other researchers \cite{rit:jpcs26:65} have developed the
Lindhard approach and
have calculated the stopping power using a linear-response
function. In
recent developments \cite{arn:prl65:90,flo:icpss:91}, the
stopping power for ions
at low and intermediate velocities has been obtained by introducing the
different
electron-loss and -capture processes associated with the
interaction of
the projectile with the target \cite{ech:ssps43:90}.
An important development in the calculation of the stopping power
for
very-low-velocity ions in solids appeared with the application
of the
local density theory to this field.
Echenique, Nieminen, and Ritchie \cite{ech:ssc37:81}
calculated the stopping power for very slow ions moving in a
uniform
electron gas, using well-known techniques in this field. This
approach
has yielded a substantial improvement in the agreement between
experimental data and theoretical calculations. The main
limitation of
this approach, as it has been used in the actual calculations,
is the assumption of having a uniform electron gas
in
the solid. Although some attempts are currently being made to
improve on this simplification
\cite{bau:prl69:92,gra:pla:92}, it could be convenient to explore,
at
the same time, other alternatives that might be appropriate in
the case of strongly ionic or covalent solids.
The aim of the work presented in this paper is to apply to the
stopping
power field an approach recently developed for the calculation
of
electronic properties of solids \cite{gol:prb39:89,gar:prb44:91}
. This is a linear-combination-of-atomic-orbitals (LCAO)
approach, whereby the
electronic properties of the solids are calculated from the
localized wave functions of the atoms of the solid. This approach
tries
to emphasize the local chemical properties of the solid and is
deeply
related to the work done by other groups trying to calculate the
stopping power for ions in solids using the stopping power in the
vapour
target \cite{sab:pra42:90}. The advantage of these approaches is
related
to the non-uniformity of the target, since a
local-density-approximation (LDA) calculation usually
assumes
a uniform electron gas inside a crystal. Thus the long-term aim
of our
approach is, first,
to calculate the stopping power for ions as a function
of the
ion position, in particular near crystal surfaces; and,
second, to take into account the contribution of the
different
atomic
orbitals of the target, mainly those orbitals which are so
localized that they cannot be replaced by a uniform electron
gas.
In this paper we have chosen to analyze the case of helium
interacting
with alkali metals. This is a case in which the interaction of
the
projectile and the target is simple. It is, however, a
complicated system since it presents a long-range interaction
between
orbitals located at large separations. In these
metals, the local density approach can be expected to be very good;
therefore, we have
chosen it as a stringent test of the method we have developed, and
the results obtained give strong support to it.
In Secs. II and III, we present our model, the formalism used
to solve it, and its application to the case of He in metals.
In Sec. IV we discuss our results, and in Sec. V we present our
conclusions.
\section{Model and formalism}
\subsection{General formalism}
Our model is an extension of a previous approach to the
calculation of the electronic
properties of the solids using a LCAO method
\cite{gol:prb39:89,gar:prb44:91}.
The basic idea is to
introduce the atomic orbitals $\psi_{\nu}$, $\nu=(i,\alpha)$, $i$
referring to
the crystal site and $\alpha$ to a particular orbital, and the
orthonormal basis
$\phi_{\mu}$
\begin{equation}
\phi_{\mu} = \sum_{\nu} (S^{-1/2})_{\mu,\nu}
\psi_{\nu},
\label{eq:-1}
\end{equation}
with
\begin{equation}
S_{\nu \mu} = \langle \psi_{\nu} \mid \psi_{\mu} \rangle,
\end{equation}
obtained using L\"{o}wdin's orthonormalization procedure
\cite{Low}.
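In practice, $S^{-1/2}$ can be formed from the eigendecomposition of the overlap matrix. A small NumPy sketch (our own illustration, with the orbital vectors stored as columns, so that Eq. (\ref{eq:-1}) reads $\phi=\psi S^{-1/2}$):

```python
import numpy as np

def lowdin(psi):
    """Symmetric (Loewdin) orthonormalization of the columns of psi:
    phi = psi @ S^{-1/2}, where S = psi^T psi is the overlap matrix."""
    S = psi.T @ psi
    w, V = np.linalg.eigh(S)  # S is symmetric positive definite
    S_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T
    return psi @ S_inv_sqrt
```

The resulting columns are orthonormal; Löwdin's construction is the symmetric choice among all orthonormalizations of a linearly independent set.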
Using this new basis, the electron Hamiltonian of a given system
can be
written in the following way:
\begin{eqnarray}
\hat{H} = & \sum_{\nu,\sigma}
E_{\nu}^{\sigma}\hat{n}_{\nu\sigma} +\sum_{\nu,\mu\neq\nu,\sigma}
T_{\nu\mu}^{\sigma} (c_{\mu\sigma}^{\dagger}c_{\nu\sigma} +
c_{\nu\sigma}^{\dagger}c_{\mu\sigma})
+ \sum_{\nu} U_{\nu}^{(0)} \hat{n}_{\nu\uparrow}
\hat{n}_{\nu\downarrow} \nonumber \\
& + \frac{1}{2} \sum_{\nu,\mu\neq\nu,\sigma} [ J_{\nu\mu}^{(0)}
\hat{n}_{\mu\sigma} \hat{n}_{\nu-\sigma}
+ (J_{\nu\mu}^{(0)} - J_{x,\nu\mu}^{(0)} + J_{\nu\mu}^{(0)}
S_{\nu\mu}^{2}) \hat{n}_{\mu\sigma} \hat{n}_{\nu\sigma} ]
\label{eq:1}
\end{eqnarray}
with the operators $c_{\nu\sigma}^{\dagger}$ and $c_{\nu\sigma}$
related to the
orthonormalized wave functions $\phi_{\nu}$. The
different terms in Eq. (\ref{eq:1}) are analyzed in Refs.
\cite{gol:prb39:89} and \cite{gar:prb44:91}; here we only
comment on how to introduce the many-body terms of Hamiltonian
(\ref{eq:1})
in a one-body Hamiltonian by means of a Slater-like
potential. This implies replacing Eq. (\ref{eq:1}) by the effective
Hamiltonian:
\begin{equation}
\hat{H}_{{\rm eff}} = \sum_{\nu,\sigma}
\tilde{E}_{\nu}^{\sigma} \hat{n}_{\nu\sigma}
+ \sum_{\nu,\mu\neq\nu,\sigma} T_{\nu\mu}^{\sigma}
(c_{\mu\sigma}^{\dagger}c_{\nu\sigma} + c_{\nu\sigma}^{\dagger}c_{\mu\sigma})
\label{eq:1a}
\end{equation}
where
\begin{eqnarray}
\tilde{E}_{\nu}^{\sigma} & = & E_{\nu}^{\sigma} + U_{\nu}^{(0)}
\langle \hat{n}_{\nu -\sigma} \rangle \nonumber \\
& &
+ \sum_{\mu\neq\nu}
J_{\nu\mu}^{(0)}
\langle \hat{n}_{\mu - \sigma} \rangle
+ \sum_{\mu\neq\nu}
( J_{\nu\mu}^{(0)} - J_{x, \nu \mu}^{(0)}
+J_{\nu\mu}^{(0)} S_{\nu \mu}^{2}
+V_{\nu \mu}^{{\rm x,c}} )
\langle \hat{n}_{\mu \sigma} \rangle
\end{eqnarray}
$ V_{\nu\mu}^{{\rm x,c}} $ is the exchange-correlation potential
\cite{gar:prb44:91}
associated with the many-body terms of Eq. (\ref{eq:1}).
We start from Hamiltonian (\ref{eq:1a}) and assume that its solution can
be obtained in the static limit $ v \rightarrow 0 $ for the case
of an atom moving inside a crystal (see Fig. 1). In our model,
the different parameters of Hamiltonian (\ref{eq:1a}), as well as its
static solution, are calculated for each geometrical configuration,
i.e., at each position of the external atom inside the crystal.
To proceed further, we assume that, due to the atomic motion,
Hamiltonian (\ref{eq:1a}) acquires a time dependence through the ion
velocity. This implies introducing a quasiadiabatic Hamiltonian,
$\hat{H}_{{\rm eff}}(t)$, with the different parameters,
$E_{\nu}^{\sigma}$ and $T_{\nu\mu}$, having an explicit, but
slow, time dependence.
In order to calculate the stopping power at a given time and
atomic position, the static solution of Hamiltonian
$\hat{H}_{{\rm eff}}$ is introduced. This implies writing
\begin{equation}
\hat{H}_{{\rm eff}} \mid n \rangle = E_{n} \mid n \rangle .
\end{equation}
Then, the stopping power (written as a function of the local time
$t$, which defines the projectile position) is given by the following
equation
\cite{sol:td:85}
\begin{equation}
\frac{dE}{dt} = -2 {\rm Re} \sum_{n} \int_{-\infty}^{t} dt'
\frac{e^{-iw_{n0}(t-t')}}{ w_{n0}} \langle 0 \left|
\frac{d\hat{H}_{ {\rm eff}}(t)}{dt} \right| n
\rangle \langle n \left|
\frac{d\hat{H}_{{\rm eff}}(t')}{dt'} \right| 0 \rangle .
\label{eq:5}
\end{equation}
[We are using atomic units ($\hbar = m = e = 1$).]
Equation (\ref{eq:5}) is only valid in the quasiadiabatic limit,
with the ion
velocity going to zero. Notice that in Eq. (\ref{eq:5}), the eigenstates
$\mid n \rangle$ correspond to the full Hamiltonian $\hat{H}_{{\rm
eff}}$, including
the external ion, at the final time $t$. This
approximation is obviously only appropriate for $v \rightarrow 0$.
Equation (\ref{eq:5}) can be further modified by noting that
the dependence of
$\hat{H}_{{\rm eff}}$ on $t$ appears through the coordinate ${\bf R}
= {\bf R}_{0} + {\bf v} t$ of the external atom. Thus we
write $ \frac{d \hat{H}_{{\rm eff}}(t)}{dt} =
( {\bf v} \cdot {\bf \nabla} ) \hat{H}_{{\rm eff}} ( {\bf R})$, and
introduce
the Fourier transform $ \hat{H}_{{\rm eff}}( {\bf q})$ of
$\hat{H}_{{\rm eff}}( {\bf R})$. This yields
\begin{eqnarray}
\frac{dE}{dt} & = &
-2 {\rm Re} \sum_{n}
\int \frac{ d {\bf q} }{ ( 2 \pi) ^{3} }
\frac{ d {\bf q}' }{ ( 2 \pi) ^{3} }
\int_{-\infty}^{t} dt' \,
\frac{
e^{-i w_{n0} ( t-t')}}
{ w_{n0} }
({\bf q} \cdot {\bf v} )
({\bf q'} \cdot {\bf v} ) \nonumber \\
& &
\times e^{i {\bf q} \cdot ( {\bf R}_{0} + {\bf v} t) }
e^{- i {\bf q}' \cdot ( {\bf R}_{0} + {\bf v} t') }
\langle 0 \mid \hat{H}_{{\rm eff}}( {\bf q} ) \mid n \rangle
\langle n \mid \hat{H}_{{\rm eff}}( {\bf q}') \mid 0 \rangle .
\label{eq:6}
\end{eqnarray}
This equation can be easily integrated over $t'$. Moreover, we
introduce the one-electron eigenfunctions and eigenvalues,
$\mid {\bf k} \rangle$,
$\varepsilon_{k}$ of
Hamiltonian $\hat{H}_{{\rm eff}}$ in Eq. (\ref{eq:6}) to define
$\mid n \rangle$ and $w_{n0}$.
These steps yield the following results:
\begin{eqnarray}
\frac{dE}{dt} & = &
4 \pi
\sum_{k < k_{F},\, k' > k_{F}}
\int \frac{ d {\bf q} }{(2\pi)^{3}}
\int \frac{d {\bf q'} }{(2\pi)^{3}}
\frac{ ( {\bf q} \cdot {\bf v} )
( {\bf q}' \cdot {\bf v} )}
{ w_{kk'}} \nonumber \\
& & \times
\langle {\bf k}' \mid
\hat{H}_{{\rm eff}}( {\bf q})
e^{i {\bf q} \cdot {\bf R} }
\mid {\bf k} \rangle
\langle {\bf k} \mid
\hat{H}_{{\rm eff}}( {\bf q}')
e^{-i {\bf q}' \cdot {\bf R} }
\mid {\bf k}' \rangle \nonumber \\
& & \times
\delta( w_{kk'} + {\bf q} \cdot {\bf v} ) ,
\label{eq:7}
\end{eqnarray}
where the spin has been summed over and
$w_{kk'} = \varepsilon_{k'} - \varepsilon_{k}$. Note that
$\mid {\bf k} \rangle$ and $\varepsilon_{k}$ are the eigenfunctions and
eigenvalues of
the total Hamiltonian, $\hat{H}_{{\rm eff}}( {\bf R})$ , with the
external ion
included. One should remember, however, that
$\mid {\bf k} \rangle$ and $\mid {\bf k}' \rangle$ are not
eigenfunctions of $\hat{H}_{{\rm eff}}( {\bf q}):$
\begin{equation}
\hat{H}_{{\rm eff}}( {\bf q}) =
\int d {\bf R}' \
e^{-i {\bf q} \cdot {\bf R}' }
\hat{H}_{{\rm eff}}( {\bf R}' ) .
\label{eq:7a}
\end{equation}
It is of interest to make contact between Eq. (\ref{eq:7}) and
the linear-response
theory. In this case, the total Hamiltonian is written as the sum
of the
unperturbed Hamiltonian $\hat{H}_{0}$ and a perturbation
$\hat{H}_{{\rm pert}} = \hat{V}$. Then, Eq. (\ref{eq:7}) can be
transformed by taking for
$\mid {\bf k} \rangle$ and $\mid {\bf k}' \rangle$
the eigenfunctions of $\hat{H}_{0}$; moreover, the
perturbation $\hat{V}$ can be written as follows:
\begin{equation}
\hat{V} =
\int d {\bf r} \
\frac{Z}{
\mid {\bf R} - {\bf r} \mid
}
\hat{\rho}( {\bf r}) ,
\end{equation}
where $Z$ is the external ion charge and ${\bf R}$ its position.
Then, the
power loss is given by the following equation (linear theory):
\begin{eqnarray}
\frac{dE}{dt} & = &
4 \pi
\sum_{k < k_{F},\, k' > k_{F}}
\int \frac{ d {\bf q} }{(2\pi)^{3}}
\int \frac{ d {\bf q}' }{(2\pi)^{3}}
\left( \frac{4\pi Z}{q^{2}} \right)
\left( \frac{4\pi Z}{q'^{2}} \right)
( {\bf q} \cdot {\bf v} )
\nonumber \\
& & \times
e^{i( {\bf q}- {\bf q}' ) \cdot {\bf R} }
\langle {\bf k}' \mid
\rho ^{+} ( {\bf q})
\mid {\bf k} \rangle
\langle {\bf k} \mid
\rho( {\bf q}')
\mid{\bf k}' \rangle
\delta( w_{kk'} + {\bf q} \cdot {\bf v} )
\label{eq:9a}
\end{eqnarray}
or, equivalently,
\begin{eqnarray}
\frac{dE}{dt} & = &
2 \
\int \frac{ d {\bf q} }{(2\pi)^{3}}
\int \frac{ d {\bf q}' }{(2\pi)^{3}}
\left( \frac{4\pi Z}{q^{2}} \right)
\left( \frac{4\pi Z}{q'^{2}} \right)
( {\bf q} \cdot {\bf v} ) \nonumber \\
& & \times
e^{i( {\bf q}- {\bf q}' ) \cdot {\bf R} }
{\rm Im} \chi ( {\bf q}, {\bf q}' ;- {\bf q} \cdot {\bf v}),
\label{eq:9b}
\end{eqnarray}
where
${\rm Im} \chi ( {\bf q}, {\bf q}' ;w)$
is the metal polarizability.
For a homogeneous system, only $ {\bf q} = {\bf q}' $ contributes,
and Eq. (\ref{eq:9b}) yields
\begin{equation}
\frac{dE}{dt} =
2 \
\int \frac{d {\bf q} }{(2\pi)^{3}}
\left( \frac{4\pi Z}{q^{2}} \right) ^{2}
( {\bf q} \cdot {\bf v} )
{\rm Im} \chi ( {\bf q};- {\bf q} \cdot {\bf v}) ,
\end{equation}
in agreement with Refs. \cite{lin:mfm28:54,sol:td:85}.
Equation (\ref{eq:7}) is the basic equation giving the stopping power of
the moving
ion, in the low velocity limit, within our LCAO approach. In
Eq. (\ref{eq:7})
the critical quantity to calculate, using the static interaction
between the external charge and the solid, is
$\langle {\bf k} \mid \hat{H}_{{\rm eff}} ( {\bf q} ) \mid {\bf k}' \rangle$.
In
this paper we
shall concentrate on the He case, which provides a simple test of
the method discussed here.
\subsection{Static interaction of He with a metal}
In this section, we will present a summary of the main results
discussed
in Ref. \cite{gol:prb39:89}. We shall also extend this discussion in
order to calculate the
matrix elements
$\langle {\bf k} \mid \hat{H}_{{\rm eff}} ( {\bf q} ) \mid {\bf k}' \rangle$,
needed for the calculation of the stopping power.
Following Ref. \cite{gol:prb39:89}, we start by considering the
one-electron
interactions between the He 1$s$ level and a metal band that is
represented in Fig. 2 by a half-occupied $s$ level.
As discussed in Ref. \cite{gol:prb39:89}, there are two
different one-electron
interactions. First, due to the overlap $S$ between the He
1$s$ wave function
and the metal orbital
$(S = \langle \psi_{M} \mid \psi_{{\rm He}} \rangle)$, there is an increase
in the
kinetic energy of the electrons of the system.
This is measured by the following shift of the one-electron
terms:
\begin{equation}
\delta E_{M}^{(1)} =
\frac{1}{4} S^{2} ( E_{M}^{0} -E_{{\rm He}}^{0} )
-ST ,
\label{eq:10a}
\end{equation}
\begin{equation}
\delta E_{{\rm He}}^{(1)} =
- \frac{1}{4} S^{2} ( E_{M}^{0} -E_{{\rm He}}^{0} )
-ST ,
\label{eq:10b}
\end{equation}
where $T$, the hopping between the two orbitals,
$\psi_{M} $ and $ \psi_{{\rm He}}$,
is found to be
$- \frac{1}{2} S ( E_{M}^{0} -E_{{\rm He}}^{0})$.
$E_{M}^{0}$ and $E_{{\rm He}}^{0}$ are the metal and He energy levels.
Second, due to the hopping $T$ between the two orbitals we find a
hybridization contribution to the total energy given by the
following shift in
$E_{M}^{0}$ and $E_{{\rm He}}^{0}:$
\begin{equation}
\delta E_{M}^{(2)} =
\frac{T^{2}}{ ( E_{M} -E_{{\rm He}} ) } ,
\label{eq:11a}
\end{equation}
\begin{equation}
\delta E_{{\rm He}}^{(2)} =
- \frac{T^{2}}{ ( E_{M} -E_{{\rm He}} ) } .
\label{eq:11b}
\end{equation}
Combining Eqs. (\ref{eq:10a})--(\ref{eq:11b}),
we find the following contributions:
\begin{equation}
\delta E_{M} =
S^{2} ( E_{M} -E_{{\rm He}} ),
\label{eq:11c}
\end{equation}
\begin{equation}
\delta E_{{\rm He}} =
0.
\end{equation}
These shifts in the one-electron levels yield the following
contribution
to the repulsive energy:
\begin{equation}
\delta V_{{\rm repulsive}}^{{\rm one-electron}} =
n_{M}
S^{2} ( E_{M} -E_{{\rm He}} ),
\label{eq:13}
\end{equation}
where $n_{M}$ is the number of electrons in the metal
orbital.
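The algebra leading from Eqs. (\ref{eq:10a})--(\ref{eq:11b}) to Eq. (\ref{eq:11c}) and to $\delta E_{{\rm He}} = 0$ is easily checked symbolically. A minimal sketch (the Python symbol names are ours):

```python
import sympy as sp

S, EM, EHe = sp.symbols('S E_M E_He', real=True)

# Hopping fixed by the orthogonalization, T = -(1/2) S (E_M - E_He)
T = -sp.Rational(1, 2) * S * (EM - EHe)

# Kinetic (overlap) shifts, Eqs. (10a)-(10b)
dEM1  = sp.Rational(1, 4) * S**2 * (EM - EHe) - S * T
dEHe1 = -sp.Rational(1, 4) * S**2 * (EM - EHe) - S * T

# Hybridization shifts, Eqs. (11a)-(11b)
dEM2  = T**2 / (EM - EHe)
dEHe2 = -T**2 / (EM - EHe)

print(sp.simplify(dEM1 + dEM2 - S**2 * (EM - EHe)))  # delta E_M = S^2 (E_M - E_He)
print(sp.simplify(dEHe1 + dEHe2))                    # delta E_He = 0
```

Both differences simplify to zero, confirming the shifts quoted in the text.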
Many-body contributions have also been discussed in Ref.
\cite{gol:prb39:89}. These terms
can be written in a way similar to Eq. (\ref{eq:13});
in Ref. \cite{gol:prb39:89} it was found
that the total repulsive energy between the metal atom and
He is given by
\begin{equation}
V_{{\rm repulsive}} =
n_{M}
S^{2} ( E_{M} -E_{{\rm He}} )
+ n_{M}
( -J_{x}^{0} + S^{2} J_{0} )
+ V_{{\rm electrostatic}} ,
\label{eq:14}
\end{equation}
where $J_{x}^{0}$ is the exchange integral between the metal
and the
He orbitals, $J_{0}$ the Coulomb interaction between the same
orbitals, and $V_{{\rm electrostatic}}$ the electrostatic interaction
between the
total charges of the two atoms. For a He orbital of the form
$ (\frac{\beta ^{3} }{\pi})^{1/2}
e^{-\beta r }$,
we find that
\begin{equation}
-J_{x}^{0} +
V_{{\rm electrostatic}}
=
- \frac{3}{8} \beta S^{2} .
\end{equation}
This shows that the repulsive potential can be written as
follows:
\begin{equation}
V_{{\rm repulsive}} =
n_{M}
S^{2} ( E_{M} -E_{{\rm He}}
- \frac{3}{8} \beta + J_{0} ).
\label{eq:15b}
\end{equation}
In our actual problem we are interested in calculating
$\langle {\bf k}' \mid \hat{H}_{{\rm eff}} ( {\bf R} ) \mid {\bf k}
\rangle$, the matrix element
of the total Hamiltonian between the one-electron states
$\mid {\bf k} \rangle$. We will show how Eq. (\ref{eq:15b}) can be related to
$\langle {\bf k}' \mid \hat{H}_{{\rm eff}} ( {\bf R} ) \mid {\bf k}
\rangle$. To this end, we
start by discussing the solution of the total Hamiltonian
(crystal plus the external
atom)
within a one-electron approximation. The solution of this
Hamiltonian $\hat{H}$ is given by
Hamiltonian $\hat{H}$ is given by
\begin{equation}
\psi =
\sum_{k}
c_{k} \psi_{k} +
c_{{\rm He}} \psi_{{\rm He}} ,
\label{eq:16}
\end{equation}
where
$\psi_{k}$
are the eigenfunctions of the crystal Hamiltonian, $\hat{H}_{0}
$, and
$\psi_{{\rm He}}$
the 1$s$ orbital of He. In writing Eq. (\ref{eq:16}),
we assume that the total
Hamiltonian (in our one-electron approximation) is given by
$\hat{H} = \hat{H}_{0} + \hat{V}_{{\rm He}}$,
where $\hat{V}_{{\rm He}}$
defines the one-electron potential created by the atom.
The eigenvalues and
the eigenfunctions of $\hat{H}$ are given by the secular
equation
\begin{equation}
{\rm det} \mid \langle \psi_{i} \mid -E + \hat{H} \mid \psi_{j}
\rangle
\mid = 0.
\end{equation}
Now, we follow Ref. \cite{gol:prb39:89} and introduce the
orthonormalized
wave functions [as done in Eq. (\ref{eq:-1}) for the basis
$\psi_{ \nu }$]
\begin{equation}
\phi_{i} =
\sum_{i'}
( S^{-1/2})_{ii'}
\psi_{i'} ,
\label{eq:18a}
\end{equation}
with
\begin{equation}
S_{k{\rm He}} = \langle \psi_{k} \mid \psi_{{\rm He}} \rangle,
\end{equation}
and
\begin{equation}
S_{kk'} = \langle \psi_{k} \mid \psi_{k'} \rangle = 0 \quad (k \neq k') .
\end{equation}
Using Eq. (\ref{eq:18a}), we define the following effective
Hamiltonian
\begin{equation}
\hat{H}_{{\rm eff}} =
S^{-1/2}
\hat{H}
S^{-1/2}.
\end{equation}
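Numerically, working with $\hat{H}_{{\rm eff}} = S^{-1/2} \hat{H} S^{-1/2}$ is equivalent to solving the generalized eigenproblem $\hat{H} c = E S c$ in the original non-orthogonal basis. A sketch with illustrative (not fitted) $2\times 2$ matrices:

```python
import numpy as np
from scipy.linalg import eigh, fractional_matrix_power

# Hypothetical 2x2 example: a metal level and a deep He level, with overlap
H = np.array([[-4.0, -0.8],
              [-0.8, -24.6]])     # illustrative numbers only
S = np.array([[1.0, 0.15],
              [0.15, 1.0]])

Sm = fractional_matrix_power(S, -0.5).real
Heff = Sm @ H @ Sm                # H_eff = S^{-1/2} H S^{-1/2}

# Ordinary eigenvalues of H_eff coincide with the generalized ones of (H, S)
print(np.allclose(np.linalg.eigvalsh(Heff),
                  eigh(H, S, eigvals_only=True)))
```

Both routines return the levels in ascending order, so the comparison is direct.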
In Ref. \cite{gol:prb39:89}, the diagonal terms of the effective
Hamiltonian were calculated up to second order in the overlap, a
small parameter used for calculating $S^{-1/2}$ in a
series expansion, while
the off-diagonal terms were only obtained up to first order.
In our actual problem we need to calculate
$(\hat{H}_{{\rm eff}})_{kk'}$ up to second order in the overlap, the
smallest surviving term of the expansion.
Proceeding in this way, we obtain the following results:
\begin{equation}
(\hat{H}_{{\rm eff}})_{k{\rm He}}
=
T_{k{\rm He}} =
- \frac{1}{2} S_{k{\rm He}} ( E_{k}^{0} -E_{{\rm He}}^{0} ) ,
\label{eq:20a}
\end{equation}
\begin{eqnarray}
(\hat{H}_{{\rm eff}})_{kk'}=
T_{kk'} & = &
(V_{{\rm He}})_{kk'}
- \frac{1}{2}
( T_{k{\rm He}} S_{{\rm He}k'} +
T_{k'{\rm He}} S_{{\rm He}k} ) \nonumber \\
& &
+ \frac{1}{4}
\left( \frac{E_{k}^{0}+E_{k'}^{0}}{2} - E_{{\rm He}}^{0} \right)
S_{{\rm He}k'}
S_{{\rm He}k} ,
\label{eq:20b}
\end{eqnarray}
where $E_{k}^{0}$ and $E_{{\rm He}}^{0}$ are the $k$-state and
atomic levels, respectively.
Equation (\ref{eq:20a}) was already discussed in
Ref. \cite{gol:prb39:89},
and found to be valid for a very localized
wave function like the He 1$s$ level. Equation (\ref{eq:20b})
is the new
equation we are looking for; here
$ (V_{{\rm He}})_{kk'} $
is associated with the direct perturbation introduced by the He atom
on the metal. This perturbation is basically due to the atomic
Hartree
potential, and to the exchange perturbation created by the He
1$s$ level.
Equations (\ref{eq:20a}) and (\ref{eq:20b}) can be further
approximated by taking
$E_{k}^{0}$, the one-electron $k$-state levels, equal to
$E_{M}^{0}$, a mean level of the metal band (notice that
the He level $E_{{\rm He}}^{0}$ is very deep, so that replacing
$E_{k}^{0}$ by $E_{M}^{0}$ is a good approximation). Then
Eqs. (\ref{eq:20a}) and (\ref{eq:20b}) read
\begin{equation}
T_{k{\rm He}} =
- \frac{1}{2} S_{k{\rm He}} ( E_{M} - E_{{\rm He}} ) ,
\label{eq:21a}
\end{equation}
\begin{eqnarray}
T_{kk'} & = &
(V_{{\rm He}})_{kk'}
- \frac{1}{2}
( T_{k{\rm He}} S_{{\rm He}k'} +
T_{k'{\rm He}} S_{{\rm He}k} )
+ \frac{1}{4}
( E_{M}- E_{{\rm He}} )
S_{{\rm He}k'}
S_{{\rm He}k} \nonumber \\
& = &
(V_{{\rm He}})_{kk'}
+ \frac{3}{4}
( E_{M}- E_{{\rm He}} )
S_{{\rm He}k'}
S_{{\rm He}k} . \label{eq:21b}
\end{eqnarray}
The terms appearing in Eq. (\ref{eq:21b}) that depend
on $T_{k{\rm He}}$
and $S_{k{\rm He}}$ are equivalent to the ones going like
$(-ST)$ in Eq. (\ref{eq:10a}), if $T$ is replaced here by
$-\frac{1}{2} S ( E_{M} - E_{{\rm He}} )$;
this shows how the one-electron correction to the metal level,
$\frac{3}{4} S^{2} ( E_{M} - E_{{\rm He}} )$,
coincides with the one-electron contribution to the off-diagonal
term
$T_{kk'}$ if $S^{2}$ is replaced by
$S_{k{\rm He}} S_{{\rm He}k'}$.
Returning to Eq. (\ref{eq:21a}), we should comment that
$T_{k{\rm He}}$ is a first-order term in the overlap
$S_{k{\rm He}}$,
while $T_{kk'}$
is of second order [$(V_{{\rm He}})_{kk'}$ included]. The first-order
term $T_{k{\rm He}}$ introduces an
effective second order contribution to
$T_{kk'}$ given by
\begin{equation}
\frac{T_{k{\rm He}}
T_{{\rm He}k'}}
{E_{M}^{0}-E_{{\rm He}}^{0}} .
\label{eq:22}
\end{equation}
Combining Eqs. (\ref{eq:21a}) and (\ref{eq:21b}) with Eq.
(\ref{eq:22}) we get the following effective interaction:
\begin{equation}
T_{kk'} =
(V_{{\rm He}})_{kk'}
+ (E_{M}^{0}-E_{{\rm He}}^{0})
S_{k{\rm He}} S_{{\rm He}k'} .
\label{eq:22a}
\end{equation}
This is the one-electron contribution to the effective hopping
between the crystal wave functions $\mid {\bf k} \rangle$ and
$\mid {\bf k}' \rangle$, as induced by
the external atom. When the crystal wavefunctions
$\mid {\bf k} \rangle$
are developed in a local basis
\begin{equation}
\mid {\bf k} \rangle =
\sum_{i} c_{i}( {\bf k}) \phi_{i} ,
\end{equation}
$\phi_{i}$ being the orthonormalized wave functions associated
with the metal atom,
Eq. (\ref{eq:22a}) reads as follows:
\begin{equation}
T_{ii'} =
(V_{{\rm He}})_{ii'}
+ (E_{M}^{0}-E_{{\rm He}}^{0})
S_{i{\rm He}} S_{{\rm He}i'} .
\label{eq:25}
\end{equation}
Equation (\ref{eq:25}) is the fundamental equation making contact between
the repulsive potential given by Eq. (\ref{eq:11c}) and
$T_{ii'}$. Many-body contributions are partially taken into
account in Eq. (\ref{eq:25}) by means of the term
$(V_{{\rm He}})_{ii'}$, which includes the bare Hartree and bare exchange
contributions,
equivalent
to $V_{{\rm electrostatic}}$ and $-J_{x}^{0} $ in Eq. (\ref{eq:14}).
The extra
term $S^{2}J_{0}$ appearing in Eq. (\ref{eq:14}) is due to the
effect of the overlap between
the $\mid {\bf k} \rangle$
and He orbitals on the total exchange interaction.
This discussion and the results of Eq. (\ref{eq:25}) suggest
introducing
the following effective interaction between the $i$ and $i'$ orbitals:
\begin{equation}
( T_{{\rm eff}})_{ii'} =
S_{i{\rm He}} S_{{\rm He}i'}
(E_{M}^{0}-E_{{\rm He}}^{0}
-\frac{3}{8} \beta
+ \langle J^{0} \rangle ) .
\label{eq:26a}
\end{equation}
This equation should be compared with Eq. (\ref{eq:15b})
that yields the total
repulsive potential between He and the metal atoms.
In this equation, $\langle J^{0} \rangle$ is associated with the effect of the
overlap between the He 1$s$ orbital and the atomic
wave functions of the
metal in the exchange interaction created by the He-orbital. In
Eq. (\ref{eq:15b}) ,
$J^{0}$ is the Coulomb interaction between the atomic wavefunction and
the He
1$s$ orbital; in Eq. (\ref{eq:26a}) we have introduced
$\langle J^{0} \rangle$,
the mean value of this Coulomb interaction in the crystal unit cell
(the variation of $J^{0}$ across the unit cell is small, less than
10\%).
Equation (\ref{eq:26a}) is the main equation giving the effective matrix
elements
that create excitations between the $i$ and $i'$ orbitals or, in the
crystal Bloch basis, between the
wave functions $\mid {\bf k} \rangle$ and $\mid {\bf k}' \rangle$:
\begin{equation}
( T_{{\rm eff}})_{kk'} =
S_{k{\rm He}} S_{{\rm He}k'}
(E_{M}^{0}-E_{{\rm He}}^{0}
-\frac{3}{8} \beta
+ \langle J^{0} \rangle ) .
\label{eq:26b}
\end{equation}
\section{Dynamic interaction of He with a metal}
Once we have obtained the static interaction of He with the
metal, and the effective matrix elements, we will discuss
how to combine this
result with the general Eq. (\ref{eq:7}) to calculate the
stopping power for He.
First of all, let us mention that we shall use Eq.
(\ref{eq:7}) by
assuming that $\mid {\bf k} \rangle$ and $\mid {\bf k}'
\rangle$
are well described, for the He case, by
the unperturbed crystal wave functions. This is a good
approximation in our current case due to the small overlap
between the
He 1$s$ and the localized metal wave functions.
Then, the starting point is the equation
\begin{equation}
[\hat{T}_{{\rm eff}}( {\bf R})]_{kk'} =
V_{0} S_{k{\rm He}} S_{{\rm He}k'} ,
\end{equation}
where
\begin{equation}
V_{0}
=
( E_{M} - E_{{\rm He}}
- \frac{3}{8} \beta
+ \langle J^{0} \rangle ) .
\end{equation}
The overlap between the 1$s$ He state and the $\mid {\bf k} \rangle$
wave functions is written in the following way:
\begin{equation}
\langle {\bf k} \mid \psi_{{\rm He}} \rangle =
\int d {\bf r} \
\psi_{k}^{*}( {\bf r}) \psi_{{\rm He}}( {\bf r})
\simeq
\psi_{k}^{*}( {\bf R}_{{\rm He}})
\int d {\bf r} \
\psi_{{\rm He}}( {\bf r}) ,
\label{eq:28}
\end{equation}
where we replace
$
\psi_{k}^{*}( {\bf r}) $
by
$
\psi_{k}^{*}( {\bf R}_{{\rm He}}) $ ,
assuming the He 1$s$ level to be very localized. This allows us to
write:
\begin{eqnarray}
\langle {\bf k} \mid \hat{H}_{{\rm eff}}( {\bf R}) \mid
{\bf k}' \rangle & = &
V_{0}
\psi_{k}^{*}( {\bf R}_{{\rm He}})
\psi_{k'}( {\bf R}_{{\rm He}})
\left[ \int d {\bf r} \
\psi_{{\rm He}}( {\bf r}) \right] ^{2} \nonumber \\
& = &
V_{0}'\psi_{k}^{*} ( {\bf R}_{{\rm He}}) \psi_{k'}
( {\bf R}_{{\rm He}}) .
\label{eq:29}
\end{eqnarray}
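The quality of the localized-orbital replacement in Eq. (\ref{eq:28}) can be gauged for a plane-wave $\psi_{k}$ and a 1$s$ orbital $e^{-\beta r}$, for which the radial integral is elementary. The sketch below (the value of $\beta$ is illustrative, not the one used in our calculations) compares the exact overlap integral with its fully localized $k = 0$ limit:

```python
import numpy as np
from scipy.integrate import quad

beta = 1.93   # illustrative 1s Slater exponent (a.u.)

def overlap_exact(k):
    """<k|psi_He> for a plane wave e^{i k.r} and a 1s orbital e^{-beta r},
    up to a common normalization: 4 pi int_0^inf r^2 e^{-beta r} sin(kr)/(kr) dr."""
    # np.sinc(x) = sin(pi x)/(pi x), hence the /pi in the argument
    integrand = lambda r: r**2 * np.exp(-beta * r) * np.sinc(k * r / np.pi)
    return 4 * np.pi * quad(integrand, 0.0, 50.0)[0]

# Localized-orbital approximation of Eq. (28): psi_k^*(R_He) * int psi_He,
# which is the k -> 0 limit of the exact integral (psi_k(R_He) = 1 here)
approx = overlap_exact(0.0)

for k in (0.1, 0.5, 1.0):
    print(k, overlap_exact(k) / approx)   # ratio -> 1 as k/beta -> 0
```

Analytically the ratio is $\beta^{4}/(\beta^{2}+k^{2})^{2}$, so the approximation is controlled whenever $k \ll \beta$.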
with $V_{0}' \equiv V_{0} [ \int d {\bf r} \ \psi_{{\rm He}}( {\bf r}) ]^{2}$.
This yields [see Eqs. (\ref{eq:7}) and (\ref{eq:7a})]
\begin{eqnarray}
\langle {\bf k} \mid \hat{H}_{{\rm eff}}( {\bf q}) \mid {\bf k}' \rangle & = &
V_{0}'
\int d {\bf R}
e^{i {\bf q} \cdot {\bf R} }
\psi_{k}^{*} ( {\bf R}) \psi_{k'}( {\bf R}) \nonumber \\
& = &
V_{0}'
I_{kk'}( {\bf q})
\label{eq:30}
\end{eqnarray}
and
\begin{eqnarray}
\frac{dE}{dt} & = &
4 \pi
\sum_{k < k_{F},\, k' > k_{F} }
( V_{0}' )^{2}
\int \frac{d {\bf q} }{(2\pi)^{3}}
\int \frac{d {\bf q}' }{(2\pi)^{3}} \
\frac{ ( {\bf q} \cdot {\bf v} )
( {\bf q'} \cdot {\bf v} )}
{ w_{kk'} } \nonumber \\
& & \times
I_{kk'}( {\bf q})
I_{k'k}( {\bf q'})
e^{i ( {\bf q} - {\bf q'} ) \cdot {\bf R} }
\delta ( w_{kk'} + {\bf q} \cdot {\bf v} ) .
\label{eq:31}
\end{eqnarray}
This is the general equation giving the power loss at a given
point ${\bf R}$. Notice that
due to the crystal symmetry,
\hbox {${\bf k}' = {\bf k} - {\bf q}$} and
\hbox {${\bf q}' = {\bf q} - {\bf G}$},
${\bf G}$ being a reciprocal-lattice vector.
If we are only interested in the mean power loss and
neglect the ${\bf R}$ dependence, we should concentrate on the
${\bf q}= {\bf q'}$
contribution. Then Eq. (\ref{eq:31}) yields
\begin{eqnarray}
\frac{dE}{dt} & = &
4 \pi
( V_{0}' ) ^{2}
\int_{-\infty}^{\infty} \frac{d {\bf k} }{(2\pi)^{3}}
\int_{-\infty}^{\infty} \frac{d {\bf q} }{(2\pi)^{3}} \
( {\bf q} \cdot {\bf v} )
\Theta ( k_{F} - k) \Theta (k' -k_{F} ) \nonumber \\
& & \times
I_{kk'}( {\bf q}) \
I_{k'k}( {\bf q}) \
\delta ( w_{kk'} + {\bf q} \cdot {\bf v} ) ,
\label{eq:32}
\end{eqnarray}
with
\hbox{${\bf k'} = {\bf k} - {\bf q}$},
$\Theta$ being the step function.
Equation (\ref{eq:32}) depends on the velocity direction of the
projectile. As we shall only consider He moving in alkali
metals, whose crystals have a very small anisotropy, we calculate
the stopping power by taking an average over all
\hbox{${\bf v}$ directions},
which enables us to compare our results with other works
\cite{ech:ssc37:81}. Then
\begin{equation}
\frac{1}{v}
\frac{dE}{dx} =
\frac{ \int_{-1}^{1} d\cos\theta_{v} \, ( dE/dt ) }
{ 2 v^{2} },
\label{eq:33}
\end{equation}
which defines the direction-averaged $dE/dx$.
Equation (\ref{eq:32}) is our fundamental equation for calculating the
stopping power for He, in the
low-velocity limit. This equation can be written in a local basis by
expanding the ${\bf k}$ states in the atomic orbitals of the crystal.
In general, we shall assume that the metal wavefunctions are given
by an effective one-electron Hamiltonian
$\hat{H}_{0}$, such that
\begin{equation}
\hat{H}_{0} \mid {\bf k} \rangle = E( {\bf k}) \mid {\bf k}
\rangle .
\end{equation}
Then, the solution of this Hamiltonian yields
\begin{equation}
\mid {\bf k} \rangle =
\sum_{i, \alpha}
c_{\alpha} ( k )
e^{i {\bf k} \cdot {\bf R}_{i} }
\phi_{i \alpha} ( {\bf r} - {\bf R}_{i} ) ,
\label{eq:34}
\end{equation}
where
$\phi_{i \alpha} ( {\bf r}- {\bf R}_{i})$
are the orthonormalized wave functions associated with the $i$
site
($\alpha$ labels the orbitals at each site). On the
other hand,
$\phi_{i \alpha} ( {\bf r}- {\bf R}_{i})$
should be expressed as a function of the localized atomic
orbitals
$\psi_{i \alpha} ( {\bf r}- {\bf R}_{i})$
using Eq. (\ref{eq:-1}). By substituting Eqs. (\ref{eq:34}) and
(\ref{eq:-1}) into Eq. (\ref{eq:32}), we find the
following result
\begin{eqnarray}
\frac{1}{v}
\frac{dE}{dx} & = &
2 \pi
( V_{0}' ) ^{2}
\int_{-1}^{1} d\cos\theta_{v}
\int_{-\infty}^{\infty} \frac{d {\bf k} }{(2\pi)^{3}}
\int_{-\infty}^{\infty} \frac{d {\bf q} }{(2\pi)^{3}} \
\frac{( {\bf q} \cdot {\bf v} )}{v^{2}}
\Theta ( k_{F} - k)
\Theta ( k' - k_{F} ) \nonumber \\
& & \times
\sum_{ {\bf R}_{1}, {\bf R}_{2} }
\sum_{\alpha , \beta , \gamma , \delta }
c_{ \alpha}^{*} ( {\bf k}) c_{ \gamma} ( {\bf k})
c_{ \beta}^{*} ( {\bf k'}) c_{ \delta} ( {\bf k'})
\nonumber \\
& & \times
\sum_{ \alpha ' , \beta '}
(S( {\bf k})^{-1/2})_{\alpha \alpha ' }
I_{ \alpha ' \beta ' }^{ {\bf R}_{1} } ( {\bf q} )
(S( {\bf k})^{-1/2})_{\beta \beta ' } \nonumber \\
& & \times
\sum_{ \gamma ' , \delta '}
(S( {\bf k'})^{-1/2})_{\gamma \gamma ' }
I_{ \gamma ' \delta ' }^{ {\bf R}_{2} } ( {\bf q} )
(S( {\bf k'})^{-1/2})_{\delta \delta ' } \nonumber \\
& & \times
e^{i( {\bf k} - {\bf q}) \cdot ({\bf R}_{1}-{\bf R}_{2})}
\delta ( w_{kk'} + {\bf q} \cdot {\bf v} ) ,
\label{eq:35}
\end{eqnarray}
where
\begin{equation}
I_{\beta \gamma}^{ {\bf R}_{1}} ( {\bf q}) =
\int d {\bf r } \
e^{i {\bf q} \cdot {\bf r} }
\psi_{\beta}( {\bf r})
\psi_{\gamma}( {\bf r}- {\bf R}_{1}) ,
\label{eq:35b}
\end{equation}
\begin{equation}
(S( {\bf k})^{-1/2})_{\alpha \beta} =
\sum_{ {\bf R}}
e^{i {\bf k} \cdot {\bf R} }
( S^{-1/2}( {\bf R}))_{ \alpha \beta} ,
\end{equation}
\begin{equation}
S( {\bf k})_{\alpha \beta} =
\sum_{ {\bf R}}
e^{i {\bf k} \cdot {\bf R} }
\int d {\bf r} \
\psi_{\alpha}( {\bf r})
\psi_{\beta}( {\bf r}- {\bf R}) .
\end{equation}
Finally, we relate
$c_{\beta}^{*} ( {\bf k}) c_{\alpha}( {\bf k})$
to the metal Green functions
$G_{\beta \alpha} ( {\bf k},w)$ by the equations
\begin{equation}
\Theta(k_{F}-k)
c_{\beta}( {\bf k}) c_{\alpha}^{*}( {\bf k})
=
\frac{1}{\pi}
\int_{-\infty}^{E_{F}} dw \
{\rm Im} G_{\beta \alpha} ( {\bf k},w)
\end{equation}
\begin{equation}
\Theta(k' - k_{F})
c_{\delta}^{*} ( {\bf k}) c_{\gamma}( {\bf k})
=
- \frac{1}{\pi}
\int_{E_{F}}^{\infty} dw \
{\rm Im} G_{\delta \gamma} ( {\bf k},w).
\end{equation}
This yields
\begin{eqnarray}
\frac{1}{v}
\frac{dE}{dx} & = &
2 \pi
( V_{0}' ) ^{2}
\int_{-1}^{1} d\cos\theta_{v}
\int_{-\infty}^{\infty} \frac{d {\bf k} }{(2\pi)^{3}}
\int_{-\infty}^{\infty} \frac{d {\bf q} }{(2\pi)^{3}} \
\frac{( {\bf q} \cdot {\bf v} )}{v^{2}}
\nonumber \\
& & \times
\sum_{ {\bf R}_{1} , {\bf R}_{2} }
\sum_{\alpha , \beta , \gamma , \delta }
{\rm Im} [ G_{\alpha \gamma} ( {\bf k} ) ]
{\rm Im} [ \stackrel{-}{G}_{\beta \delta} ( {\bf k}' ) ]
\nonumber \\
& & \times
\sum_{ \alpha ' , \beta '}
(S( {\bf k})^{-1/2})_{\alpha \alpha ' }
I_{ \alpha ' \beta ' }^{ {\bf R}_{1} } ( {\bf q} )
(S( {\bf k})^{-1/2})_{\beta \beta ' } \nonumber \\
& & \times
\sum_{ \gamma ' , \delta '}
(S( {\bf k'})^{-1/2})_{\gamma \gamma ' }
I_{ \gamma ' \delta ' }^{ {\bf R}_{2} } ( {\bf q} )
(S( {\bf k'})^{-1/2})_{\delta \delta ' } \nonumber \\
& & \times
e^{i( {\bf k} - {\bf q}) \cdot ( {\bf R}_{1} - {\bf R}_{2} ) }
\delta ( w_{kk'} + {\bf q} \cdot {\bf v} ) ,
\label{eq:38}
\end{eqnarray}
where
\begin{equation}
G_{ \beta \alpha } ( {\bf k} ) =
\int_{-\infty}^{E_{F}}
\frac{dw}{\pi}
G_{ \beta \alpha } ( {\bf k}, w)
\end{equation}
and
\begin{equation}
\stackrel{-}{G}_{ \delta \gamma } ( {\bf k'} ) =
\int_{\infty}^{E_{F}}
\frac{dw'}{\pi}
G_{ \delta \gamma } ( {\bf k'}, w') .
\end{equation}
Equation (\ref{eq:38}) allows us to calculate
the stopping power for He in
metals, as a function of the Green-function components
$G_{ \alpha \beta } ( {\bf k} )$ of the metal
(calculated in the
orthonormalized basis), using a one-electron Hamiltonian
$\hat{H}_{0} ( {\bf k} )$ and the overlap matrix
$S^{-1/2} _{\alpha \beta } ( {\bf k} )$ associated with the
atomic wave functions $\psi_{\alpha}$ and $\psi_{\beta}$
. Moreover,
$ \frac{1}{v} \frac{dE}{dx} $ also depends on
$ I_{\beta \gamma} ^{ {\bf R}} ( {\bf q} ) $, the
Fourier transform of
the overlap between the atomic orbitals
$ \psi_{\beta} ( {\bf r} ) $ and $ \psi_{\gamma} ( {\bf r} -
{\bf R} ) $ as given by Eq. (\ref{eq:35b}) .
On the other hand, in order to analyze the stopping power
as a function
of $ {\bf R} $, we take in Eq. (\ref{eq:31})
\hbox{${\bf q'} = {\bf q} - {\bf G}$}, and keep only the
${\bf G}$ vectors perpendicular to the ${\bf v}$ direction.
This yields for the
${\bf G}$ component of \hbox{$\frac{1}{v} \frac{dE}{dx}$},
\begin{eqnarray}
S_{ {\bf G}} \equiv
\left( \frac{1}{v} \frac{dE}{dx} \right)_{ {\bf G}}
& = &
4 \pi
( V_{0}' ) ^{2}
\int_{-\infty}^{\infty} \frac{d {\bf k} }{(2\pi)^{3}}
\int_{-\infty}^{\infty} \frac{d {\bf q} }{(2\pi)^{3}} \
\frac{( {\bf q} \cdot {\bf v} )}{v^{2}}
\Theta ( k_{F} - k) \Theta (k' -k_{F} ) \nonumber \\
& \ & \times
I_{kk'}( {\bf q})
I_{k'k}( {\bf q} - {\bf G})
e^{ - i {\bf G} \cdot {\bf R} }
\delta ( w_{kk'} + {\bf q} \cdot {\bf v} ) ,
\label{eq:39}
\end{eqnarray}
and remember that \hbox{ $ {\bf k}' = {\bf k} - {\bf q} $}.
This equation can be written in a way similar to Eq.
(\ref{eq:38}), as a function of \hbox{$S( {\bf k})$},
\hbox{$G_{\alpha \beta} ( {\bf k} )$}, and
\hbox{$ I_{\alpha \beta} ^{{\bf R}}$}.
For the sake of brevity, we
shall only mention here that in general the stopping power
\hbox{$S = \frac{1}{v} \frac{dE}{dx}$}
can be written as follows:
\begin{equation}
S_{ {\bf R} } = S_{0} +
\sum_{ {\bf G}}
S_{ {\bf G}} e^{i {\bf G} \cdot {\bf R} } ,
\label{eq:40:II}
\end{equation}
as a function of ${\bf R}$,
where $S_{0}$ is the mean stopping power given by Eq.
(\ref{eq:38}), and $S_{ {\bf G}}$ the ${\bf G}$ component
of Eq.
(\ref{eq:39}).
Once we have chosen the ${\bf G}$ vectors perpendicular to
${\bf v}$, we have calculated $S_{ {\bf G}}$ by taking
an average on the angle between ${\bf v}$ and ${\bf q}$
as in Eq. (\ref{eq:33}).
\section{Results and discussions}
We have applied the previous formalism to the calculation of the
stopping
power for He in alkali metals. For simplicity, the band is assumed
to be
well described by means of a single $s$ orbital. Then, Eq.
(\ref{eq:35}) can be further
simplified into the following equation:
\begin{eqnarray}
\frac{1}{v}
\frac{dE}{dx} & = &
2 \pi
( V_{0}')^{2}
\int_{-1}^{1} d\cos\theta_{v}
\int_{-\infty}^{\infty} \frac{d {\bf k} }{(2\pi)^{3}}
\int_{-\infty}^{\infty} \frac{d {\bf q} }{(2\pi)^{3}} \
\frac{( {\bf q} \cdot {\bf v} )}{v^{2}}
\Theta ( k_{F} - k)
\Theta ( k' - k_{F} ) \nonumber \\
& & \times
\sum_{ {\bf R}_{1} , {\bf R}_{2} }
(S( {\bf k})^{-1/2})
I^{ {\bf R}_{1}}( {\bf q})
(S( {\bf k})^{-1/2})
(S( {\bf k'})^{-1/2})
I^{ {\bf R}_{2}}( {\bf q})
(S( {\bf k'})^{-1/2}) \nonumber \\
& & \times
e^{i( {\bf k} - {\bf q}) \cdot ( {\bf R}_{1} - {\bf R}_{2} ) }
\delta ( w_{kk'} + {\bf q} \cdot {\bf v} ) ,
\label{eq:40:IV}
\end{eqnarray}
or
\begin{eqnarray}
\frac{1}{v}
\frac{dE}{dx} & = &
2 \pi
( V_{0}' )^{2}
\int_{-1}^{1} d\cos\theta_{v}
\int_{-\infty}^{\infty} \frac{d {\bf k} }{(2\pi)^{3}}
\int_{-\infty}^{\infty} \frac{d {\bf q} }{(2\pi)^{3}} \
\frac{({\bf q} \cdot {\bf v} )}{v^{2}}
\Theta ( k_{F} - k)
\Theta ( k' - k_{F} ) \nonumber \\
& & \times
S( {\bf k})^{-1}
S( {\bf k'})^{-1}
\left( \sum_{ {\bf R}_{1}}
e^{i ({\bf k}-{\bf q}) \cdot {\bf R}_{1}}
I^{{\bf R}_{1}}( {\bf q})
\right)
\left( \sum_{ {\bf R}_{2}}
e^{-i ( {\bf k}-{\bf q}) \cdot {\bf R}_{2}}
I^{{\bf R}_{2}}({\bf q})
\right) \nonumber \\
& & \times
\delta ( w_{kk'} + {\bf q} \cdot {\bf v} ) ,
\label{eq:40}
\end{eqnarray}
where
\begin{equation}
I^{ {\bf R}_{1}} ({\bf q}) =
\int d {\bf r} \
e^{i {\bf q} \cdot {\bf r} }
\psi( {\bf r})
\psi( {\bf r}-{\bf R}_{1})
\label{eq:41a}
\end{equation}
and
\begin{equation}
S( {\bf k}) =
\sum_{ {\bf R}}
e^{i {\bf k} \cdot {\bf R} }
\int d {\bf r} \
\psi( {\bf r})
\psi( {\bf r}- {\bf R}) .
\end{equation}
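For the single-$s$-band model, $S({\bf k})$ is a rapidly convergent real-space lattice sum. A sketch for a bcc lattice, using the standard closed form for the two-center overlap of equal-exponent 1$s$ Slater orbitals ($\beta$ and $a$ below are illustrative numbers, not fitted to any particular alkali metal):

```python
import numpy as np
from itertools import product

def overlap_1s(beta, R):
    """Two-center overlap of equal-exponent 1s Slater orbitals
    (standard closed form): S = e^{-rho} (1 + rho + rho^2/3), rho = beta R."""
    rho = beta * R
    return np.exp(-rho) * (1.0 + rho + rho**2 / 3.0)

def S_of_k(k, beta, a, nmax=3):
    """Lattice sum S(k) = sum_R e^{i k.R} S(R) over a bcc lattice of constant a;
    nmax truncates the (rapidly convergent) real-space sum."""
    total = 0.0
    for n in product(range(-nmax, nmax + 1), repeat=3):
        for basis in ((0.0, 0.0, 0.0), (0.5, 0.5, 0.5)):   # bcc: corner + center
            R = a * (np.array(n, float) + np.array(basis))
            total += np.cos(np.dot(k, R)) * overlap_1s(beta, np.linalg.norm(R))
    return total

# Illustrative parameters (a.u.)
beta, a = 0.6, 8.0
print(S_of_k(np.zeros(3), beta, a))   # > 1: the R = 0 term plus positive tails
```

At ${\bf k}=0$ every term adds coherently, so $S(0)$ exceeds the on-site value of 1.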
Equation (\ref{eq:41a}) can be written in a more
symmetric way as follows. Take
\begin{eqnarray}
{I}^{ {\bf R}_{1}} ({\bf q}) & = &
e^{i {\bf q} \cdot {\bf R}_{1}/2 }
\int d {\bf r} \
e^{i {\bf q} \cdot ( {\bf r}-{\bf R}_{1}/2 ) }
\psi({\bf r})
\psi({\bf r}-{\bf R}_{1}) \nonumber \\
& = &
e^{i {\bf q} \cdot {\bf R}_{1} /2 }
\stackrel{-}{I} ^{{\bf R}_{1}} ({\bf q}),
\end{eqnarray}
then
\begin{equation}
\sum_{{\bf R}_{1}}
e^{i ({\bf k}-{\bf q}) \cdot {\bf R}_{1}}
I^{{\bf R}_{1}}({\bf q}) =
\sum_{{\bf R}_{1}}
e^{i ({\bf k}-{\bf q}/2) \cdot {\bf R}_{1}}
\stackrel{-}{I} \ ^{{\bf R}_{1}}({\bf q})
\label{eq:43a}
\end{equation}
and
\begin{equation}
\sum_{{\bf R}_{2}}
e^{-i ({\bf k}-{\bf q}) \cdot {\bf R}_{2}}
I^{{\bf R}_{2}}({\bf q}) =
\sum_{{\bf R}_{2}}
e^{-i ({\bf k}-{\bf q}/2) \cdot {\bf R}_{2}}
\stackrel{-}{I} \ ^{{\bf R}_{2}}({\bf q}) .
\label{eq:43b}
\end{equation}
Equations (\ref{eq:40:IV}) and
(\ref{eq:40})
yield the stopping power for He as a function
of $S({\bf k})$ and
$\stackrel{-}{I} \ ^{{\bf R}}({\bf q})$.
$S({\bf k})$ has been calculated using the atomic
wave functions given in Ref. \cite{cle:andt14:74}.
The calculation of
$ \stackrel{-}{I} \ ^{{\bf R}}({\bf q}) $
is more complicated, since the Fourier transforms of the
atomic
wave functions centered on different sites are needed.
This is the well-known problem of
multicenter integrals. Several solutions have been tried in
the literature
\cite{wor:ss258:91} such as expanding the
Slater-type basis in a
Gaussian one \cite{mor:nim:92,boy:prsa200:50} .
We have used, however,
an adaptive algorithm
of Monte Carlo integration \cite{pet:jcp27:78} to
evaluate \hbox{$\stackrel{-}{I} \ ^{{\bf R}}$}.
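The Monte Carlo route can be sketched in a few lines. The snippet below is an illustration only: it assumes hydrogenic 1s orbitals $\psi(r) = (\alpha^{3}/\pi)^{1/2} e^{-\alpha r}$ (a stand-in, not the atomic wave functions actually used here), and estimates $\stackrel{-}{I}^{{\bf R}}({\bf q})$ by importance sampling ${\bf r}$ from $\psi^{2}$, which turns the integral into the average of $e^{i{\bf q}\cdot({\bf r}-{\bf R}/2)}\,\psi(|{\bf r}-{\bf R}|)/\psi(r)$.

```python
import numpy as np

def i_bar(q_vec, r_cap, alpha=1.0, n=200_000, seed=0):
    """Monte Carlo estimate of Ibar^R(q) = ∫ dr e^{i q·(r - R/2)} ψ(r) ψ(|r - R|)
    for illustrative 1s orbitals ψ(r) = sqrt(α³/π) e^{-α r}.
    Sampling r from ψ² turns the integrand into the average of
    e^{i q·(r - R/2)} ψ(|r - R|)/ψ(r)."""
    rng = np.random.default_rng(seed)
    # Radial density of ψ² is ∝ r² e^{-2αr}: a Gamma(shape=3, scale=1/(2α)).
    radii = rng.gamma(shape=3.0, scale=1.0 / (2 * alpha), size=n)
    # Isotropic random directions on the unit sphere.
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    r = radii[:, None] * v
    # Importance weight ψ(|r - R|)/ψ(|r|) = e^{-α (|r - R| - |r|)}.
    w = np.exp(-alpha * (np.linalg.norm(r - np.asarray(r_cap), axis=1) - radii))
    phase = np.exp(1j * ((r - np.asarray(r_cap) / 2) @ np.asarray(q_vec)))
    return (phase * w).mean()

# Normalization check: R = 0, q = 0 gives exactly 1.
print(i_bar([0.0, 0.0, 0.0], [0.0, 0.0, 0.0]))   # → (1+0j)
# q = 0 reduces Ibar^R to the overlap S(R); for equal 1s orbitals
# the closed form is S(R) = e^{-αR} (1 + αR + (αR)²/3).
print(i_bar([0.0, 0.0, 0.0], [1.0, 0.0, 0.0]).real)
```

The same sampling scheme carries over to any orbital whose density can be sampled; only the importance weight changes.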
An approximate solution, which yields good results for $S_{0}$,
is
obtained by replacing
\begin{equation}
\stackrel{-}{I} \ ^{ {\bf R} }( {\bf q})
\simeq
S( {\bf R} ) \, I( {\bf q} ) ,
\label{eq:44}
\end{equation}
where
\begin{equation}
S( {\bf R} ) =
\int d {\bf r} \
\psi( {\bf r})
\psi( {\bf r}- {\bf R})
\end{equation}
and
\begin{equation}
I( {\bf q} ) =
\int d {\bf r} \
e^{i {\bf q} \cdot {\bf r} }
\psi({\bf r})
\psi({\bf r}) .
\end{equation}
Equation (\ref{eq:44}) is exact in the limit
${\bf R}_{i} = 0$ or ${\bf q} \rightarrow 0$.
In general, we expect Eq. (\ref{eq:44}) to give
a good approximation
to \hbox{$I^{{\bf R}_{i} } ({\bf q})$}
if $ {\bf q} \cdot {\bf R}_{i} /2$ is small.
Introducing Eq. (\ref{eq:44}) into Eq.
(\ref{eq:40}) yields:
\begin{eqnarray}
\frac{1}{v}
\frac{dE}{dx} & = &
2 \pi
( V_{0}' )^{2}
\int_{-1}^{1} d\cos\theta_{v}
\int_{-\infty}^{\infty} \frac{d {\bf k} }{(2\pi)^{3}}
\int_{-\infty}^{\infty} \frac{d {\bf q} }{(2\pi)^{3}} \
\frac{({\bf q} \cdot {\bf v} )}{v^{2}}
\Theta ( k_{F} - k)
\Theta ( k' - k_{F} ) \nonumber \\
& \times &
\frac{I(q)}{S( {\bf k})}
\frac{I(q)}{S({\bf k'})}
\mid
\sum_{{\bf R}}
S({\bf R})
e^{i ({\bf k}-{\bf q}/2) \cdot {\bf R} }
\mid ^{2} \
\delta ( w_{kk'} + {\bf q} \cdot {\bf v} ) .
\label{eq:45a}
\end{eqnarray}
Equation (\ref{eq:45a}) is the basis of our approximation
to Eqs.
(\ref{eq:40:IV}) and
(\ref{eq:40}). We
should also mention that $ E_{k} $ (the electron energy band of
the
alkali metal) has been assumed to follow a free electron
dispersion law.
Before discussing the numerical results given by
Eq. (\ref{eq:45a}), it is
worth considering the results obtained by neglecting all
the overlaps
between the alkali atom wavefunctions. Then we write
\begin{eqnarray}
S( {\bf R}) =
\left \{
\begin{array}{cc}
1, & \ \ {\bf R} = 0 \\
0, & \ \ {\bf R} \neq 0 \\
\end{array}
\right.
\end{eqnarray}
and
\begin{equation}
S({\bf k}) = 1
\end{equation}
and replace Eq. (\ref{eq:45a}) by the following equation:
\begin{eqnarray}
\frac{1}{v}
\frac{dE}{dx} & = &
2 \pi
( V_{0}' )^{2}
\int_{-1}^{1} d\cos\theta_{v}
\int_{-\infty}^{\infty} \frac{d {\bf k} }{(2\pi)^{3}}
\int_{-\infty}^{\infty} \frac{d {\bf q} }{(2\pi)^{3}} \
\frac{({\bf q} \cdot {\bf v} )}{v^{2}}
\Theta ( k_{F} - k)
\Theta ( k' - k_{F} ) \nonumber \\
& \times &
\left|
\int d {\bf r} \
\psi^{2}({\bf r})
e^{i {\bf q} \cdot {\bf r} }
\right| ^{2}
\delta ( w_{kk'} + {\bf q} \cdot {\bf v} ) .
\label{eq:46}
\end{eqnarray}
It is also convenient to discuss at this point the stopping power
given
by the following simple model: a uniform electron gas interacting
with a slowly moving He atom by means of the contact
potential
\begin{equation}
\hat{H} _{{\rm pert}} =
V_{0}'
\delta ( {\bf r} - {\bf v} t ) .
\label{eq:47}
\end{equation}
Here $V_{0}'$ is assumed to be the same local potential
introduced in
Eq. (\ref{eq:29}). It
is an easy task to develop this model following the same steps
as discussed above for the LCAO approach and find the following
expression
for the stopping power
\begin{equation}
\frac{1}{v}
\frac{dE}{dx} =
4 \pi
( V_{0}')^{2}
\int_{-\infty}^{\infty} \frac{d {\bf k} }{(2\pi)^{3}}
\int_{-\infty}^{\infty} \frac{d {\bf q} }{(2\pi)^{3}} \
\frac{({\bf q} \cdot {\bf v} )}{v^{2}}
\Theta ( k_{F} - k)
\Theta ( k' - k_{F} )
\delta ( w_{kk'} + {\bf q} \cdot {\bf v} ).
\label{eq:48}
\end{equation}
The integral over $\cos\theta_{v}$ equals 2, because in the latter
expression
$ \frac{1}{v}
\frac{dE}{dx} $
depends only on
$ \mid {\bf v} \mid $ .
Comparing Eqs. (\ref{eq:46}) and (\ref{eq:48}),
we see that their only difference
is associated with the term
\hbox {$
\mid I(q) \mid ^{2} =
\mid
\int d {\bf r} \
\psi ^{2} ( {\bf r})
e^{i {\bf q} \cdot {\bf r} }
\mid ^{2} $
}, which
gives the form factor of the metal orbital.
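For a spherically symmetric orbital this form factor reduces to a one-dimensional radial integral, $I(q) = (4\pi/q)\int r\,\psi^{2}(r)\sin(qr)\,dr$. As a hedged numerical check (again with a hydrogenic 1s density standing in for the actual metal orbital), the sketch below compares that quadrature with the known closed form $[1+(q/2\alpha)^{2}]^{-2}$:

```python
import numpy as np

def form_factor_numeric(q, alpha=1.0):
    """I(q) = ∫ dr ψ²(r) e^{i q·r} for the spherical 1s density
    ψ²(r) = (α³/π) e^{-2αr}, reduced to the radial integral
    (4π/q) ∫ r ψ²(r) sin(qr) dr and summed on a fine uniform grid."""
    r = np.linspace(1e-8, 40.0 / alpha, 200_000)
    dens = (alpha**3 / np.pi) * np.exp(-2.0 * alpha * r)
    integrand = 4.0 * np.pi * r * dens * np.sin(q * r) / q
    return float(np.sum(integrand) * (r[1] - r[0]))

def form_factor_exact(q, alpha=1.0):
    # Closed form for the 1s density: [1 + (q/2α)²]^-2.
    return 1.0 / (1.0 + (q / (2.0 * alpha)) ** 2) ** 2

for q in (0.5, 1.0, 2.0):
    print(q, form_factor_numeric(q), form_factor_exact(q))
```

The rapid decay of $I(q)$ with $q$ is what suppresses the LCAO stopping power relative to the contact-potential model.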
We should also comment, regarding Eq. (\ref{eq:45a}),
that in the alkali metals
\hbox {$
S({\bf k}) \sim S(k_{F}) $
}
since, in the low-velocity limit we are considering, $ {\bf k} $
and
${\bf k'}$ are located near the Fermi sphere, on which
\hbox{$ S({\bf k}) $}
is almost constant, this surface presenting only a very small
anisotropy. Then,
Eq. (\ref{eq:45a}) can be obtained from Eq.
(\ref{eq:46}) by
replacing the form factor \hbox{ $ I ({\bf q}) $} by
\begin{equation}
D( {\bf q}, {\bf k} ) =
\frac{I(q) S( {\bf k}- {\bf q} /2 ) }
{ S({\bf k}) } ,
\label{eq:49}
\end{equation}
where
\begin{equation}
S( {\bf k}- {\bf q}/2) =
\sum_{ {\bf R}}
S( {\bf R})
e^{i ( {\bf k} -{\bf q} / 2) \cdot {\bf R} } .
\end{equation}
Thus the three different cases we are considering yield the same
equation for the stopping power, but for a specific factor taking
the
values 1, $I(q)$ and $ D({\bf q}, {\bf k} )$, for the
free-electron
gas (FEG), the
LCAO model with $ S ({\bf R}) = 0 $ for $ {\bf R} \neq 0 $
(LCAO-I), and the
LCAO model with $ S ({\bf R}) \neq 0 $ (LCAO-II),
respectively. The important point to notice
in this discussion is that the free-electron-gas model
overestimates
the stopping power, while the simplest LCAO model underestimates
it. In
Table \ref{tab1}, we give the three values of the mean stopping power
\hbox{$S_{0}$} for He
in Na as calculated from these equations.
As shown in Table \ref{tab1},
the free electron gas model yields a
stopping
power three times too large, while in the LCAO-I model $
\frac{1}{v} \frac{dE}{dx}$ is about eight times too
small.
A word of caution is in order here. The FEG model discussed
here cannot be compared directly with the
LDA used to
calculate the stopping power of He in metals. The point to notice
is that in the model defined by Eq. (\ref{eq:47}),
$V_{0}'$
is the contact potential for the interaction of He with the
$s$ orbitals
of the alkali-metal atoms. The model of Eq. (\ref{eq:47}) is only
introduced here
in order to explain how the form factor of Eqs. (\ref{eq:49})
or (\ref{eq:46}) is the
main term controlling the He stopping power.
As regards the factor
$D({\bf q}, {\bf k} )$
used to calculate
$\frac{1}{v}
\frac{dE}{dx}$
in the LCAO-II approximation, notice the strong
dependence that
$D({\bf q}, {\bf k} )$
has on the number of neighbors used to calculate
\hbox{
$ S({\bf k} ) =
\sum_{{\bf R}}
S({\bf R})
e^{i {\bf k} \cdot {\bf R} } $
}
and
$ S( {\bf k}-{\bf q}/2 ) $
in Eq. (\ref{eq:49}). We have found that in order to get a
reasonable accuracy (around \hbox{5\%}) it is necessary to
add up to the fifth or sixth
neighbor, depending on the alkali metal.
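The need for five or six neighbor shells can be illustrated numerically. The sketch below groups the bcc lattice vectors into shells and accumulates $S({\bf k}) = \sum_{{\bf R}} S({\bf R}) e^{i{\bf k}\cdot{\bf R}}$ shell by shell; the 1s-type overlap $S(R) = e^{-\alpha R}(1+\alpha R + (\alpha R)^{2}/3)$ and the unit lattice constant are illustrative stand-ins, not the actual Na overlaps used in the paper.

```python
import numpy as np

def bcc_shells(a=1.0, max_shells=8, reach=4):
    """Group bcc lattice vectors R (conventional cell constant a) into
    shells of increasing |R|; returns [(distance, [R, ...]), ...],
    where shells[0] is the R = 0 term itself."""
    pts = []
    for i in range(-reach, reach + 1):
        for j in range(-reach, reach + 1):
            for k in range(-reach, reach + 1):
                for off in ((0.0, 0.0, 0.0), (0.5, 0.5, 0.5)):  # bcc basis
                    pts.append(a * np.array([i + off[0], j + off[1], k + off[2]]))
    dists = sorted({round(float(np.linalg.norm(p)), 9) for p in pts})
    return [(d, [p for p in pts if abs(np.linalg.norm(p) - d) < 1e-9])
            for d in dists[:max_shells + 1]]

def s_of_k(k_vec, shells, alpha=1.0):
    """Cumulative S(k), added shell by shell, using the illustrative 1s
    overlap S(R) = e^{-αR}(1 + αR + (αR)²/3); shells are inversion
    symmetric, so only the cosine part survives."""
    total, partials = 0.0, []
    for d, vecs in shells:
        s_r = np.exp(-alpha * d) * (1 + alpha * d + (alpha * d) ** 2 / 3)
        total += s_r * sum(np.cos(np.dot(k_vec, r)) for r in vecs)
        partials.append(total)
    return partials

shells = bcc_shells()
print(len(shells[1][1]))   # → 8  (nearest neighbors in bcc)
print(s_of_k(np.array([0.0, 0.0, 0.0]), shells))
```

Watching the partial sums converge as shells are added reproduces the behavior described in the text: several shells are needed before $S({\bf k})$ settles to percent-level accuracy.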
As mentioned above, Eq. (\ref{eq:40}) has been accurately
calculated for Na using Monte Carlo
techniques. We have found that this Monte Carlo calculation
yields \begin{equation}
\frac{1}{v}
\frac{dE}{dx} = 0.085 \ {\rm a.u.} \ ({\rm Na}),
\end{equation}
a value a little larger than the one found using our LCAO-II
approximation. By assuming the same correction factor for all the
alkali
metals, we find the results given in Table \ref{tab2}, column
(a). This Table
also
shows the theoretical figures obtained by Echenique, Nieminen,
and Ritchie
\cite{ech:ssc37:81}.
We see from Table \ref{tab2}, column (a),
that the results for K and Rb are in
excellent agreement with Ref. \cite{ech:ssc37:81},
although the stopping powers we find for Li and Na are a little
larger.
This difference can be partially attributed to the simple model we are
using, since
a single $s$ orbital per alkali-metal atom has been assumed to form
the metal conduction band. This approximation can be expected to be
a reasonable one for very electropositive atoms like K and Rb,
but less so, at least, for Li.
Thus, in the calculations of Papaconstantopoulos
\cite{pap:hbses:86} for Li, only
52\% of the occupied density of states has an $s$-like character.
If we
introduce into the results of Table \ref{tab2} a factor
\begin{equation}
n_{s}^{2} / n_{s}^{2} \ ({\rm Rb} )
\end{equation}
which normalizes the stopping power of each alkali metal to the
total
number of $s$ electrons with respect to Rb, we find the
results of Table
\ref{tab2}, column (b), in much better
agreement with the LDA calculations.
The conclusion we can draw from these results is that the method
developed in this paper is quite appropriate to calculate the
stopping
power for He moving slowly in alkali metals. We can also expect
that the
method will be useful to calculate stopping powers for atoms in
transition metals.
In a further step we have calculated, using
Monte Carlo techniques,
the stopping power dependence on the ion position
(for He moving in a
channeled direction). We have considered that
He moves in a Na crystal
along the [100] direction. We have calculated the different
${\bf G}$ reciprocal vectors contributing to
the stopping power [Eq. (\ref{eq:39})]; this implies taking
the $
{\bf G}$ vectors perpendicular to the [100] direction. In a bcc
lattice, the first reciprocal-lattice vectors to be considered are the
following: \hbox{${\bf G} \equiv \frac{2 \pi}{a} (0,1,1) $,}
\hbox{$ \frac{2 \pi}{a} (0,0,2) $}, etc. Using Eqs.
(\ref{eq:39}) and (\ref{eq:40:II}) we have obtained the
stopping power Fourier components \hbox{$ S_{G} $}
shown in Table \ref{tab4}.
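The selection of these vectors can be automated: the reciprocal lattice of bcc is fcc, i.e. ${\bf G} = \frac{2\pi}{a}(h,k,l)$ with $h+k+l$ even, and channeling along [100] keeps only those with $G_{x}=0$. The short sketch below enumerates the first such shells, in units of $2\pi/a$:

```python
from itertools import product

def perp_g_shells(n_shells=3, reach=4):
    """Reciprocal-lattice vectors of a bcc crystal, G = (2π/a)(h,k,l)
    with h + k + l even, restricted to G_x = 0 (perpendicular to [100]);
    grouped into shells by |G| and reported in units of 2π/a."""
    gs = [(h, k, l) for h, k, l in product(range(-reach, reach + 1), repeat=3)
          if (h + k + l) % 2 == 0 and h == 0 and (h, k, l) != (0, 0, 0)]
    shells = {}
    for g in gs:
        shells.setdefault(g[1] ** 2 + g[2] ** 2, []).append(g)
    return [shells[m] for m in sorted(shells)[:n_shells]]

for shell in perp_g_shells():
    print(sorted(shell))
# First shell: (0, ±1, ±1) type; second shell: (0, 0, ±2) and (0, ±2, 0).
```

This reproduces the ordering quoted in the text, with $(0,1,1)$-type vectors first and $(0,0,2)$-type vectors next.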
Figure 3 shows \hbox{$ S_{{\bf R}} $}, with ${\bf R}$
changing in a
[100] plane. The main conclusion we can draw from these
calculations is
the strong dependence that the stopping power shows as a
function of the
impact parameter: the stopping power can vary as much as
\hbox{100\%} for
different impact distances. We should comment that these
changes are not
associated with the electronic metal charge
\cite{gra:pla:92}; this charge,
as obtained in our LCAO approach
with an $s$ level per atom, appears to be almost constant in the
crystal lattice except very close to the atomic sites.
For a He atom channeled along the Na [100] direction,
one expects some kind of oscillatory motion of the atom,
with the impact parameter changing along
the He trajectory. Then, the mean stopping power for the
channeled case
would appear as an average of the different values shown
in Fig. 3 around
the minimum value of the stopping power. Each case should
be analyzed
specifically, but assuming the incoming atom to explore only
half of the
total available space, one would get around 50-60\% of $S_{0}$,
namely 0.04 a.u., about 80\% of the value calculated in the LDA.
\section{Conclusions}
The aim of this work has been to develop a first-principles,
parameter-free approach, based on a LCAO method, to calculate the
stopping
power for atoms
moving in condensed matter. In the past few years the interest
in,
generally speaking, tight-binding methods \cite{har:esps:80}
for analyzing the
electronic properties of solids has increased considerably. This
emphasis is
partially due to the interest in using a local point of view,
closely
related to the chemistry of the local environment. The work presented
in this
paper follows this general
trend and tries to apply the ideas recently developed in Refs.
\cite{gol:prb39:89,gar:prb44:91}
for analyzing the electronic properties of solids following a
LCAO method,
to the stopping power area. In the long term, this approach can
be expected to be also useful for analyzing other dynamical
processes like
the charge transfer between moving ions and the solid, sticking
mechanisms, etc.
In Sec. II, we have presented our general approach and have
related
the stopping power for atoms, in the low-velocity limit, to the
electronic properties of the crystal as described using a LCAO
method.
All the parameters appearing in Eq. (\ref{eq:7}), the general
equation giving
the stopping power, can be obtained from the local wave functions
of the
atoms forming the crystal. Equation (\ref{eq:7}) has been applied
to the case of
He moving in alkali metals. He is a simple atom, but the alkali
metals
present a stringent test of our method, as their atomic wave functions
interact
strongly with each other up to large separations.
In Sec. IV, we have presented
our results and have found that the stopping power for He is
very well described by our local LCAO approach, if the
interaction between different alkali-metal atomic orbitals
is included at least up to the fifth neighbor.
We conclude that the LCAO method discussed in this paper
offers the
possibility of calculating accurately the stopping power for ions
moving
in solids. This could be a convenient framework for analyzing
solids having
localized $d$ bands and for discussing specific geometries like the
case of
atoms moving near surfaces, or the channeled case discussed in
Sec. V.
\acknowledgments
This work has been partially funded by the Spanish CICYT under
contract
no. PB89-165. One of the authors (J.J.D.) thanks Ministerio de
Educaci\'on y Ciencia and Universidad Aut\'onoma de Madrid,
Spain,
for their financial support. F.F. acknowledges support by
Iberdrola
S.A. We thank R. Ritchie, N. Lorente, P.M. Echenique, and M.
Jakas for helpful discussions.
{"url":"https:\/\/engineeringprep.com\/problems\/437","text":"## Distance in 2D Space\n\nConsider points (10, 5) and (4, 30) exist in a two-dimensional space. What is the distance between the two points?\n\nHint\nIn a two-dimensional space, the distance between two points is\n$$d=\\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}$$$Hint 2 Since the difference between coordinates is squared, it doesn\u2019t matter if Point 1 is assigned $$(x_1, y_1)$$ or $$(x_2, y_2)$$ as long as Point 2 is assigned as the other set. In a two-dimensional space, the distance between two points is $$d=\\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}$$$\nSince the difference between coordinates is squared, it doesn\u2019t matter if Point 1 is assigned $$(x_1, y_1)$$ or $$(x_2, y_2)$$ as long as Point 2 is assigned as the other set. Let\u2019s arbitrarily set $$(10, 5)$$ as $$(x_1, y_1)$$ :\n$$d=\\sqrt{(4-10)^2+(30-5)^2}$$$$$=\\sqrt{(-6)^2+(25)^2}=\\sqrt{36+625}=\\sqrt{661}=25.7$$$\n25.7","date":"2022-08-19 10:54:58","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.7055283188819885, \"perplexity\": 2893.455333260769}, \"config\": {\"markdown_headings\": true, \"markdown_code\": false, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, 
\"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-33\/segments\/1659882573667.83\/warc\/CC-MAIN-20220819100644-20220819130644-00489.warc.gz\"}"} | null | null |
Gun Nordlund (born 31 January 1949) – Finnish athlete, high jumper.
Bronze medallist of the European Junior Games (1966).
14th-place finisher at the European Championships (1966).
She won the Finnish championship gold medal at least five times (outdoor – 1965, 1966 and 1971; indoor – 1967 and 1972).
References
Finnish high jumpers
Born in 1949
Wounds inflicted by type arrows so prepared may excite severe inflammation, or even induce fatal septicaemia; but whether the use of absolute poison is common is doubtful. The condition generally terminates only rarely does resolution take place: diabetes. By diminishing these irritants the heart and The hygienic treatment must embrace a regulation of all the treat habits of body and modes of life. Pain of the latter disease, liver and by the characteristic inspiratory retraction of the lower portions of the chest and the smaller areas of dulness. The muscular coat is hypertrophied, and chronic as thickening of the appendix may result. Is - there were pains, he said, due to the lithemic diathesis, and to traumatism. Online - the course includes microscopic study of the histopathologic findings, their interpretation and significance with special emphasis on diseases of the liver and kidney.
The life of man, we have obferved, confifts in the adlivity and exercife of his organs-, which grow up and acquire llrength during infancy, than it begins to decline (clonazepam). If necessary, the wearing of an appropriate many cases of movable kidney dogs are unaccompanied by symptoms. For injecting "valium" pathological new formations, cold-flowing masses are to be preferred. The latter brings to our recolledlion the the colour of wine and its fermentation "same" to the ferruginous particles of the grape, and to their union by inagnetifm. Epithelioma is a growth of a fibrous character, and usually of a malignant nature, but occasionally occurs in a benign form; in the earliest stages epilettiche of its growth, it is a matter of great difficulty to state definitely whether it is of a malignant or a non-malignant character. The kidney may attain an enormous size, and the condition exist "singapore" for a considerable length of time, without any sign of disease being presented. They grow in the cortex of the kidney in the form of small nodular does masses, which in some malignant. Doses - hence dyspnea is commonly observed.
Thus animals artificially infected with bacteria have shown amyloid humans change in the liver, spleen, etc. To prevent the dampness of the soil from rising in the walls by capillary attraction, the foundations must be laid in concrete and hydraulic cement and a horizontal course of slate bedded in cement should be interposed between the concrete footings and wall, and another course of slate just as the foundation walls reach the ground level, since these slate courses are liable to fracture, the last damp-proof course should consist of vitrified hollow brick, which, moreover, possess the advantage of The exterior walls of a house, whenever practicable, should be separated from the ground by an"open area," extending from the foundation upward; but where this cannot be done, a" dry area" may be formed by constructing a hollow wall to the ground level, provided with the usual damp-proof courses, and if springy, also with a subsoil drain at the bottom, at the same time protecting the wall in contact of with the ground with a coat of slate embedded in cement.
In those who have an hereditary history, the when chances as to whether the fits become arrested, improved or confirmed, are in any given case about equal.
And - villers has, since his entrance to a medical career, consecrated his activity, all his energy and all his science to the prosperity of our society, and he has not left his important functions of secretary until he Avas assured of the aid of a successor I am glad to greet at this time Dr. This epidemic, no doubt, resulted from importation, although a clear history of in its introduction was not made out at the time, and the leading physicians of the city were inclined to attribute it to local origin, as a result of unsanitary conditions in connection with an unusually high temperature. Protrusion of brainsubstance in compound fractures of the skull is not considered here, though sometimes improperly called a hernia cerebri; the correct designation is prolapsus cerebri (sleep). Cheapest - a series of lectures in General Medicine, Neurology, and Clinical Medicine are given to the entire junior class on an elective selected patients, participate in the workup of chronically ill patients at the Montebello Chronic Disease Hospital, and attend consultative rounds in cardiology, infectious diseases, gastroenterology, arthritis, radioisotopes, neurology, hematology, endocrinology, and pulmonary diseases on the wards of the University Hospital.
In some countries everything tends disease to hinder. HYPERNEPHROMA is a term suggested originally by Bircli-Hirschfeld, and since adopted by other writers, to designate tumors that result from proliferation of adrenal the adrenal effects itself or in regions where aberrant or misplaced (heterotopic) portions of adrenal tissue are known to occur.
Is also recommended; it is a powerful astringent, and if given, it should be in illegal tannic acid. There are, of course, severe and milder types: buy. Whether existing independently of, or as secondary to, intra-tympaiiic inflammation, such extreme pain attending traction on the auricle, added to the subjective symptoms of pain in the ear (especially if notablj in creased from t he start by movements of the jaw in chew if slight deafness and tinnitus, and sliidil fulness in the head, and very often Blight itching, should lead the examiner to search for prominent and tender spots near existence of one or more furuncles as tl his patient's discomfort; and it would be onlj afti eluding these thai hi- could aSSUme tie existence ol a more diffused and probably eczematous Inflammation of the Tenderness to pressure, made either in front of the tra L'u- to the insertion of the auricle bestandteile over the liicli it ii. Light watery vegetables, fruits, and cereals may be gradually added to migraines the diet-list, although milk should be mainly used. THE REPORT OF THE REGISTRAR-GENERAL OF ONTARIO The report of the take Reg'i.strar-General of Ontario for the year In the opinion of the Registrar-General, the birth rate in this Province is nnsatisf actorv; natural conditions are being interfered with, (ir supplanted by those of a preventive character and criminal in tendency.
Q: Help me format table output in Delphi

program lab2v2;
// This program compares cocktail (shaker) sort and
// binary insertion sort by the number of
// assignments
{$APPTYPE CONSOLE}
uses
System.SysUtils;
type
TArray = array [1 .. 3000] of integer;
TDateArray = array [1 .. 6] of integer;
{ TArray - type of the array being sorted
  TDateArray - type of the array holding the test sizes }
const
ArrLength: TDateArray = (100, 250, 500, 1000, 2000, 3000);
{ ArrLength - array holding the lengths of the test arrays }
// Procedure that exchanges the array elements Element1
// and Element2
{ Element1, Element2 - the two array elements being exchanged }
procedure Swap(var Element1, Element2: integer);
var
Temp: integer;
{ Temp - variable holding one of the elements during the exchange }
begin
Temp := Element1;
Element1 := Element2;
Element2 := Temp;
end;
// Cocktail (shaker) sort
{ NumberOfSwaps - number of swaps performed
  Arr - the array being sorted }
function CoctailSort(var Arr: TArray; var kol: integer): integer;
var
i, Left, Right, NumberOfSwaps: integer;
{ i - loop counter
  Left, Right - left and right bounds in the cocktail
  sort }
begin
NumberOfSwaps := 0;
Left := 1;
Right := kol;
while Left < Right do
begin
// Left-to-right pass
for i := Left to Right - 1 do
if Arr[i] > Arr[i + 1] then
begin
Swap(Arr[i], Arr[i + 1]);
Inc(NumberOfSwaps, 1);
end;
// Right-to-left pass
for i := Right downto Left + 1 do
if Arr[i] < Arr[i - 1] then
begin
Swap(Arr[i], Arr[i - 1]);
Inc(NumberOfSwaps, 1);
end;
// Narrow the bounds from both sides
Left := Left + 1;
Right := Right - 1;
end;
CoctailSort := NumberOfSwaps;
end;
// Binary insertion sort
{ NumberOfSwaps - number of assignments performed
  Arr - the array being sorted }
function BinaryInsertSort(var Arr: TArray; var kol: integer): integer;
var
i, j, Left, Right, Middle, NumberOfSwaps, Temp: integer;
{ i, j - loop counters
  Left, Right - left and right bounds in the binary
  insertion sort
  Middle - middle of the array segment examined during the
  search
  Temp - variable holding the element being inserted }
begin
NumberOfSwaps := 0;
for i := 2 to kol do
begin
Left := 1;
Right := i - 1;
Temp := Arr[i];
// Binary search for the position of the
// element being inserted
repeat
Middle := (Left + Right) div 2;
if Arr[Middle] < Temp then
Left := Middle + 1
else
Right := Middle - 1;
until Left > Right;
// Shift the elements to the right
for j := i - 1 downto Left do
begin
Arr[j + 1] := Arr[j];
Inc(NumberOfSwaps);
end;
// Insert the element
Arr[Left] := Temp;
Inc(NumberOfSwaps, 2);
end;
BinaryInsertSort := NumberOfSwaps;
end;
// Fill the array Arr with random values
{ Arr - the array being filled
  kol - number of elements; also the largest possible element value }
procedure RandomFill(var Arr: TArray; kol: integer);
var
i: integer;
{ i - loop counter }
begin
Randomize;
for i := 1 to kol do
// The array element is initialized with a random
// natural number not exceeding kol
Arr[i] := Random(kol) + 1;
end;
// Reverse the array
{ Arr - the array being reversed }
procedure Reverse(var Arr: TArray; kol: integer);
var
i: integer;
{ i - loop counter }
begin
for i := 1 to kol div 2 do
Swap(Arr[i], Arr[kol - i + 1]);
end;
// Main program
var
i, j, k, m, Swaps, N, res: integer;
Arr: TArray;
MaxElement: integer;
CoctailSwaps: array [1 .. 3, 1 .. 6] of integer;
InsertSwaps: array [1 .. 3, 1 .. 6] of integer;
ErrorString: String;
ErrorCode: integer;
ErrorFlag: Boolean;
{ i, j - loop counters
  Swaps - number of swaps
  Arr - the array being sorted
  CoctailSwaps - array holding the number of
  swaps in the cocktail sort for arrays of
  different sizes
  InsertSwaps - array holding the number of swaps
  in the binary insertion sort for arrays of
  different sizes }
begin
// Fill the arrays holding the swap counts with zeros
// FillChar fills 3 * SizeOf(TDateArray) bytes in
// CoctailSwaps with the value 0
FillChar(CoctailSwaps, 3 * SizeOf(TDateArray), 0);
FillChar(InsertSwaps, 3 * SizeOf(TDateArray), 0);
for i := 1 to 6 do
begin
N := ArrLength[i];
// Measure the number of swaps three times and take the average
for j := 1 to 2 do
begin
(* Cocktail sort *)
// Sort a random array
RandomFill(Arr, N);
res := CoctailSort(Arr, N);
Inc(CoctailSwaps[1][i], res);
(* Binary insertion *)
// Sort a random array
RandomFill(Arr, N);
res := BinaryInsertSort(Arr, N);
Inc(InsertSwaps[1][i], res);
end;
(* Cocktail sort *)
// Sort a random array
RandomFill(Arr, N);
res := CoctailSort(Arr, N);
Inc(CoctailSwaps[1][i], res);
// Sort an already sorted array
res := CoctailSort(Arr, N);
CoctailSwaps[2][i] := res;
// Sort a reversed array
Reverse(Arr, N);
res := CoctailSort(Arr, N);
CoctailSwaps[3][i] := res;
(* Binary insertion *)
// Sort a random array
RandomFill(Arr, N);
res := BinaryInsertSort(Arr, N);
Inc(InsertSwaps[1][i], res);
// Sort an already sorted array
res := BinaryInsertSort(Arr, N);
InsertSwaps[2][i] := res;
// Sort a reversed array
Reverse(Arr, N);
res := BinaryInsertSort(Arr, N);
InsertSwaps[3][i] := res;
CoctailSwaps[1][i] := CoctailSwaps[1][i] div 3;
InsertSwaps[1][i] := InsertSwaps[1][i] div 3;
end;
// Print the results
for i := 1 to 36 do
Write('_');
WriteLn;
Write('| array | array |cocktail| binary|');
WriteLn;
Write('|dimension| type | sorting| insert|');
WriteLn;
Write('| | | |sorting|');
WriteLn;
Write('|_________|_______|________|_______|');
WriteLn;
for k := 1 to 6 do
begin
Write('|N = ', ArrLength[k]);
Write(' |random| ', CoctailSwaps[1][k]:7, '|', InsertSwaps[1][k]:7, '|');
WriteLn;
Write('| |_______|________|_______|');
WriteLn;
Write('| | sorted|', CoctailSwaps[2][k]:7, '|',
InsertSwaps[2][k]:8, '|');
WriteLn;
Write('| |_______|________|_______|');
WriteLn;
Write('| |reverse|', CoctailSwaps[3][k]:7, '|',
InsertSwaps[3][k]:8, '|');
WriteLn;
Write('|_________|_______|________|_______|');
WriteLn;
end;
ReadLn;
end.
I just can't get the table output right; please help me sort out the alignment. The borders keep drifting in some parts of the table (see the picture).
A: for i := 1 to 39 do
Write('_');
WriteLn;
Writeln('| array | array |cocktail | binary |');
Writeln('|dimension | type | sorting | insert |');
Writeln('| | | |sorting |');
Writeln('|__________|________|_________|________|');
for k := 1 to 6 do
begin
Writeln('|N = ', ArrLength[k]:4,' | random | ', CoctailSwaps[1][k]:8, '|', InsertSwaps[1][k]:8, '|');
Writeln('| |________|_________|________|');
Writeln('| | sorted | ', CoctailSwaps[2][k]:8, '|', InsertSwaps[2][k]:8, '|');
Writeln('| |________|_________|________|');
Writeln('| | reverse| ', CoctailSwaps[3][k]:8, '|', InsertSwaps[3][k]:8, '|');
Writeln('|__________|________|_________|________|');
end;
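The principle behind the fix is language-independent: pick one fixed width per column and pad every cell, including the underscore separator rows, to exactly that width (in Pascal, this is what the `value:width` specifiers in the answer do). A small Python sketch of the same idea, with arbitrary sample numbers standing in for real measurements:

```python
# Column widths must match in every row, including the '____' separator rows,
# or the borders drift -- the same rule the Pascal :width specifiers enforce.
widths = [10, 8, 9, 8]

def row(cells):
    # Right-justify each cell to its column width, as Pascal's :width does.
    return "|" + "|".join(str(c).rjust(w) for c, w in zip(cells, widths)) + "|"

def separator():
    return "|" + "|".join("_" * w for w in widths) + "|"

# The numeric values below are placeholders, not actual sort measurements.
print(row(["N = 100", "random", 2437, 688]))
print(separator())
print(row(["", "sorted", 0, 198]))
print(separator())
```

Every printed line comes out the same length, so the vertical borders line up regardless of how many digits each number has.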
Archives Hub: University of Manchester Special Collections (ELGAR)
Mainwaring Court Book for the Manors of Baddiley and Peover
Bookmark: https://archiveshub.jisc.ac.uk/manchesteruniversity/data/gb133-engms701
GB 133 Eng MS 701
17th to 18th Century
Name of Author: Mainwaring family
358 x 233 mm. 1 volume (237 + i folios); Binding: original reversed calf over boards (rebacked); blind stamped panels on both boards; two brass clasps.
The contents of the volume are as follows:
folios 1-125v: records of proceedings in the view of frankpledge and court leet for Baddiley, following the usual format of court rolls, 13 November 1665 to 26 November 1728;
folios 237v-237r: list of constables for Peover, 1644-1714 (from the rear of the volume);
folios 238v-178: records of proceedings in the view of frankpledge, court leet and court baron for Peover, 26 July 1681 to 17 July 1711 (from the rear of the volume).
The last section contains the note (on f. 178): This was the Last Court of Sir Thomas Mainwaringe in Peover. Folios 126-177v are blank.
Administrative / Biographical History
The Mainwaring family held a prominent place among the Cheshire gentry for over five hundred years, although their influence rarely extended beyond the county. Through marriage alliances with other Cheshire families they developed their estates throughout the 15th and 16th centuries. In 1400 John Mainwaring had land or was drawing rents from land in Baddiley, Brindley, Burland, Chester, Eaton, Faddiley, Hulme Walfield, Lawton, Peover, Stoke, Upton and Poulton in Wirral. In 1405 the family acquired lands in Chelford and Dittington, and deeds from 1444 refer to Mainwaring interests in Aston, Baddiley, Chester, Fouleshurst, Nantwich, Newhall, Peover and Withington. The estates were centred on Peover, where the halmote court met.
Throughout the 15th century the family was active in local administration, serving as sheriffs of Cheshire, tax officials and commissioners. Sir John Mainwaring I (d. 1483) supported the Lancastrian cause, associating himself with two prominent Lancastrians, James Touchet, Lord Audley, and Humphrey Stafford, Duke of Buckingham. Mainwaring entered an agreement to serve the latter in peace and war. Sir John's great grandson, Sir John Mainwaring II (d. 1516), was one of the Cheshire gentry who granted a subsidy of one thousand marks towards the war with Scotland and was appointed sheriff of Flintshire in 1506. He was knighted for his service in the French campaign of 1513.
Sir Thomas Mainwaring (1623-1689), 1st baronet, antiquary and local politician, inherited the family estates in 1647. He was assiduous in local committee work, becoming a JP in 1649, sitting as a commissioner for assessment, the militia, and the regulation of ministers, and serving as sheriff in 1657-8. He sat for Cheshire in the Convention Parliament. After the Restoration he was reappointed to his local offices, nominated to the order of the Royal Oak, and created a baronet on 22 November 1660. Between 1675 and 1681 he served as a deputy lieutenant for Cheshire. Sir Thomas loved books and cultivated learning. Between 1673 and 1679 he and his kinsman Sir Peter Leycester exchanged insults and arguments in print over the illegitimacy of their remote common ancestress, Amicia, daughter of Hugh of Cyfeilog, earl of Chester, alleged in Leycester's Historical Antiquities (1673) and denied by Mainwaring. Eventually their arguments ranged over much of the social and political life of the twelfth century, and in so doing represented a milestone in historiography.
Sir John Mainwaring (1656-1702) was the fourth but oldest surviving son of Sir Thomas Mainwaring. He succeeded to the baronetcy and sat as a whig in six Parliaments between 1689 and 1701. He married Elizabeth (d. 1719), eldest daughter of Roger Whitley of Peel in Cheshire on 28 September 1676. They had five sons, four of whom died young or without issue, and two daughters. He died in debtors' prison on 4 November 1702.
Source: Hans Norton, 'Mainwaring, Sir Thomas, first baronet (1623-1689)', Oxford Dictionary of National Biography, Oxford University Press, 2004. By permission of Oxford University Press - http://dx.doi.org/10.1093/ref:odnb/17813.
The manuscript is available for consultation by any accredited reader.
Acquisition Information
Purchased by the John Rylands Library from the Manchester booksellers Sherratt & Hughes on 30 April 1931.
Description compiled by Henry Sullivan and Jo Humpleby, project archivists, with reference to:
Oxford Dictionary of National Biography article on Sir Thomas Mainwaring.
http://home.clara.net/craigthornber/cheshire/htmlfiles/peover.html.
Other Finding Aids
Catalogued in the Hand-List of the Collection of English Manuscripts in the John Rylands Library, 1928-35 (English MS 701).
The JRUL holds the Mainwaring Muniments (ref.: GB 133 MNW): see published handlist, Robert Fawtier, Hand-List of the Mainwaring and Jodrell Manuscripts at Present in the Custody of the John Rylands Library (Manchester, 1923).
Manorial courts Cheshire England 1644-1728
Mainwaring family 1100-1902 baronets of Over Peover
Mainwaring Thomas 1623-1689 Sir 1st Baronet
Mainwaring John 1656-1702 Sir 2nd Baronet
Mainwaring Thomas 1681-1726 Sir 3rd Baronet
Mainwaring Henry 1726-1797 Sir
Baddiley Cheshire England 1665-1728
Peover Superior Cheshire England 1644-1714
\section{Introduction}
Ultra-high energy neutrinos (UHE, $>10^{17}$\,eV) are a unique window on the distant, high energy universe.
In addition to gravitational waves, they are the only Standard Model messengers capable of traveling cosmic distances undeflected and unattenuated.
Cosmic rays have their trajectories bent by magnetic fields, and for sources more distant than ${\sim}50$\,Mpc, above ${\sim}10^{19.5}$\,eV cosmic rays are expected to be degraded in energy through interactions with the Cosmic Microwave Background (CMB) via the Greisen-Zatsepin-Kuz'min (GZK) effect~\cite{Greisen1966, Zatsepin:1966jv}. Cosmic ray nuclei are additionally degraded in-flight to earth through their natural beta and inverse-beta decay processes, as well as photo-disintegration e.g.\ the Giant Dipole Resonance~\cite{Berman:1975tt}.
High-energy gamma rays ($\gtrsim100$\,TeV) are similarly expected to pair-annihilate off the CMB and Extragalactic Background Light (EBL)~\cite{Gould1967}.
Predictions for the sources of very high energy neutrinos fall broadly into two classes. First, \textit{astrophysical neutrinos} are expected from the site of cosmic ray acceleration, for example gamma ray bursts and active galactic nuclei \cite{Waxman:1999ai,Murase:2015ndr}.
The IceCube experiment has confirmed the existence, and measured the spectrum, of TeV-PeV astrophysical neutrinos \cite{Aartsen:2015knd}, and has identified a first potential source in the blazar TXS 0506+056~\cite{IceCube:2018dnn,IceCube:2018cha}. Second, \textit{cosmogenic neutrinos} are expected from the destruction of cosmic rays through the aforementioned processes~\cite{Beresinsky:1969qj}. A more complete discussion of how the flux of cosmogenic neutrinos depends on the primary cosmic-ray composition, and the effects of various interaction and decay processes, can be found in the literature~\cite{Hooper:2004jc,Allard:2006mv,Kotera:2010yn,Kotera:2011cp,vanVliet:2017obm}.
At energies above $10^{16}$\,eV, low predicted fluxes \cite{Ahlers:2012rz, Thomas:2017dft} combined with small expected cross sections \cite{Connolly:2011vc,CooperSarkar:2011pa} lead to $\mathcal{O}(10^{-2})$\,neutrino interactions per cubic-kilometer of ice per year per energy decade.
As such, the active volumes of the instruments required to detect this UHE flux must necessarily approach the scale of $100$\,km$^3$ water equivalent. Several experiments are operating or under construction to search for this flux, including IceCube~\cite{Aartsen:2018vtx}, Pierre Auger~\cite{Aab:2019auo}, NuMoon~\cite{NuMoon}, ANITA~\cite{Gorham:2019guw}, ARIANNA~\cite{Anker:2019rzo}, GRAND~\cite{Alvarez-Muniz:2018bhp}, and ARA~\cite{Allison:2011wk}, which is the focus of this work.
The Askaryan Radio Array (ARA) is a UHE neutrino detector deployed at the South Pole. ARA searches for neutrinos by looking for the broadband (few hundred MHz to few GHz) radio impulse, or ``Askaryan emission"~\cite{Askaryan:1962hbi, Askaryan:1965}, that accompanies neutrino-nucleon interactions.
This effect, caused by a $\sim20\%$ negative charge asymmetry that develops in electromagnetic showers in media and acts as a coherently radiating current distribution, has been observed in the laboratory at accelerator facilities~\cite{Gorham:2006fy}.
The radiation has a Cherenkov-like beam pattern, with a cone thickness of a few degrees. The leading edge of the electric field pulse points toward the shower axis.
Experiments looking for Askaryan radiation are deployed in dielectric media such as ice, salt, and sand, which are expected to be sufficiently transparent to radio waves as to make the radio signal observable. In the case of ARA, the long (generally greater than 500\,m \cite{barwick2005south}) attenuation length of radio waves in South Pole ice allows naturally occurring detector volumes to be instrumented sparsely and economically.
A diagram of how a neutrino interaction might be observed in an ARA detector is given in Fig.~\ref{fig:interaction_diagram}.
\begin{figure*}[htp]
\centering
\includegraphics[width=0.75\textwidth]{interaction_figure.pdf}
\caption{A diagram showing how a high energy neutrino interaction might be observed in an ARA station. The insets show how the Askaryan emission and its polarization would be observed if seen along, and perpendicular to, the shower axis. A more detailed view of an ARA station can be found in Fig.~\ref{fig:ara5_layout}.}
\label{fig:interaction_diagram}
\end{figure*}
In this paper, we report constraints on the diffuse flux of UHE neutrinos over the energy interval $10^{16}-10^{21}$\,eV. This result is based on two complementary searches for neutrinos in four years of data from ARA stations A2 and A3 recorded between February of 2013 and December of 2016.
This paper is organized as follows.
In Sec.~\ref{sec:instrument_description}, we describe the ARA instrument.
In Sec.~\ref{sec:data_analysis_methods}, we describe the data analysis methods used in two parallel analyses, and in Sec.~\ref{sec:results} we discuss our findings. In Sec.~\ref{sec:systematics}, we discuss systematic uncertainties. Finally, in Sec.~\ref{sec:discuss}, we discuss the result and its implications, as well as prospects for the future.
We also include an appendix, App.~\ref{app:limit_calc}, where we discuss the calculation of our limit and detail the livetime of the instrument.
\section{Instrument Description}
\label{sec:instrument_description}
The Askaryan Radio Array is a UHE radio neutrino detector consisting of five stations located a few kilometers grid-west of the geographic South Pole in Antarctica, as drawn in Fig.~\ref{fig:ara5_layout}~\cite{Allison:2011wk}.
A single station consists of 16 antennas, eight for detecting horizontally-polarized (HPol) radiation and eight for detecting vertically-polarized (VPol) radiation, along with signal conditioning and Data Acquisition (DAQ) electronics.
The antennas are deployed at the bottom of holes at up to 200\,m depth on four ``measurement strings,'' forming a rectangular solid 20\,m tall and 15\,m deep and wide.
At each corner of the rectangle an HPol quad-slotted cylinder antenna sits a few meters above a VPol wire-frame bicone antenna.
Each antenna is approximately sensitive to radiation in the 150-850\,MHz band~\cite{Allison:2011wk}.
Two ``calibration strings" are deployed about 40\,m radially away from the center of the station.
Each calibration string contains a VPol and an HPol antenna, and is capable of emitting broadband RF pulses, which provide an \textit{in-situ} calibration of station geometry and timing, as well as a measurement of livetime.
Construction of ARA began in 2011, when a prototype station (Testbed) was deployed \cite{Allison:2011wk, Allison:2014kha} at 30~m depth to evaluate the RF environment and electronics performance.
The first design station (A1) was deployed in 2012, but only up to 100\,m depth due to limited drill performance.
In 2013, two deep stations (A2, A3) that are the focus of this work were deployed at up to 200\,m depth~\cite{Allison:2015eky}. Two more 200~m depth stations (A4, A5) were deployed in 2018.
\begin{figure*}[htp]
\centering
\includegraphics[width=0.49\textwidth]{a5_for_paper.pdf}
\includegraphics[width=0.49\textwidth]{ARAstation.pdf}
\caption{(Left) A top-down view of the ARA5 instrument as deployed at the South Pole, with stations color-coded by the year they were deployed. The green stations, A2 and A3, are the focus of the analysis described in this paper. (Right) A schematic of the electronics and instrumentation in an ARA station; ``FO" is a fiber-optic transmitter.}
\label{fig:ara5_layout}
\end{figure*}
\subsection{The ARA Electronics}
\label{sec:ara_elec}
A schematic drawing of the ARA instrumentation and electronics is shown in the right of Fig.~\ref{fig:ara5_layout}. After an incoming signal excites an antenna, it enters an antenna-mounted front-end signal-conditioning module; there, the signal passes through a strong ($>50$\,dB) notch filter at 450\,MHz to remove South Pole Station communications, is band-passed between 150-850\,MHz, and is boosted by approximately 40\,dB through two stages of Low-Noise Amplifiers (LNAs).
The signal is then transmitted
to the surface via RF-over-Fiber (RFoF) to reduce attenuation over the 200\,m journey to the top of the borehole.
At the surface, the optical signal is converted back to an electronic signal, amplified again by $40$\,dB, before finally being bandpass filtered once more to remove any out-of-band noise contributed by the amplifiers.
The signal is then split into two paths, one for triggering and one for digitization.
The trigger path is routed through a tunnel-diode which serves as a passive, few-nanosecond power integrator.
When the rising edge of the tunnel diode output exceeds roughly five times the ambient thermal noise level, the lowest-level single channel trigger fires.
If three same-polarization antennas register a single channel trigger within 170~ns (the light propagation time in the ice across the station's diagonal) all 16 antennas in the station are read out.
This scheme is optimized to trigger on Askaryan pulses, which should generate significant power in very short time windows
and traverse the array at the speed of radio propagation in ice (${\sim}0.16$\,m/ns).
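As an illustration of this two-level scheme, the sketch below flags a station trigger when at least three same-polarization channels register rising-edge threshold crossings within a sliding 170\,ns window. The function names and the details of the per-channel hit finding are ours, not the ATRI firmware; only the 3-of-8, 170\,ns coincidence condition comes from the text.

```python
import numpy as np

def single_channel_hits(power_envelope, threshold, times):
    """Times at which the few-ns integrated power envelope crosses threshold (rising edges)."""
    above = power_envelope > threshold
    edges = np.where(above[1:] & ~above[:-1])[0] + 1  # below -> above transitions
    return times[edges]

def station_trigger(hit_times_per_channel, coincidence_ns=170.0, n_required=3):
    """True if >= n_required distinct channels fire within any coincidence_ns window."""
    all_hits = sorted(
        (t, ch) for ch, hits in enumerate(hit_times_per_channel) for t in hits
    )
    times = [t for t, _ in all_hits]
    chans = [c for _, c in all_hits]
    for i in range(len(times)):
        # distinct channels with hits inside the window opened by hit i
        in_window = {chans[j] for j in range(i, len(times))
                     if times[j] - times[i] <= coincidence_ns}
        if len(in_window) >= n_required:
            return True
    return False
```

In the real DAQ this condition would be evaluated separately for the eight VPol and eight HPol channels; here a single list of per-channel hit times stands in for one polarization.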
The signal is recorded through the digitization path.
The signal is stored in the circular buffer of an IceRay Sampler 2 (IRS2) chip, which is a high-speed 3.2\,Gs/s digitizer ASIC~\cite{VarnerIRS2}.
To minimize power consumption, the buffers are implemented in analog as Switched Capacitor Arrays (SCA)~\cite{Varner:2007zz,Roberts:2018xyf}.
After a global trigger is issued, sampling is halted and analog-to-digital conversion commences.
Each readout records 400-600\,ns of waveform, roughly centered on the trigger.
The bundle of 16 waveforms and the associated housekeeping data (UTC timestamp, etc.) defines an \textit{event}. An example VPol calibration pulser event is shown in Fig.~\ref{fig:event_display}, where ``TVPol" notes a vertically-polarized antenna deployed at the top of a string, ``BHPol" notes a horizontally-polarized antenna deployed at the bottom of a string, and so forth.
\begin{figure}[ht]
\centering
\includegraphics[width=\columnwidth]{sixteen_graphs.png}
\caption{An event display showing the sixteen waveforms recorded in A2 for a VPol calibration pulser.}
\label{fig:event_display}
\end{figure}
Triggering is performed by four Triggering DAughter boards (TDAs), while digitization is handled by four Digitizing DAughter boards (DDAs), with four RF channels per board.
The logic and readout to storage for the eight daughter boards is managed by the ARA Triggering and Readout Interface (ATRI).
The ATRI communicates via USB with a Linux Single Board Computer (SBC) for run control and data archiving. A more detailed discussion of the ARA electronics can be found in previous work~\cite{Allison:2011wk,Allison:2015eky}.
The precise triggering threshold for a given antenna is adjusted dynamically to maintain a target single channel trigger rate for that antenna. The targeted single channel rates are chosen so that the global trigger rate, after taking into account combinatorics and trigger coincidence windows, is maintained at 5\,Hz.
The dominant source of these ``RF triggers" is fluctuations in the blackbody thermal noise background of the ice, but also includes any potential neutrino signals, as well as anthropogenic (human-made) signals such as aircraft, motor vehicles, etc.
In addition, each station collects a sample of background ``software'' internally-generated triggers as well as the calibration pulses, both at 1\,Hz, for a total of 7\,Hz global trigger rate. Every triggered event invokes approximately 1\,ms of deadtime in the electronics readout system, which has a negligible ($<1$\%) impact on the livetime.
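The servo logic that holds each channel at its target single-channel rate is not specified in the text; a minimal proportional scheme illustrating the idea (step size and multiplicative update are our own choices) could look like:

```python
def servo_threshold(threshold, measured_rate_hz, target_rate_hz, step=0.01):
    """Nudge one channel's trigger threshold toward a target single-channel rate.

    A hypothetical proportional servo sketch: raise the threshold when the
    channel fires too often, lower it when it fires too rarely.
    """
    if measured_rate_hz > target_rate_hz:
        return threshold * (1.0 + step)   # too many triggers: be less sensitive
    elif measured_rate_hz < target_rate_hz:
        return threshold * (1.0 - step)   # too few triggers: be more sensitive
    return threshold
```

Iterating this update against periodically measured rates keeps each channel near its target, and hence the global rate near 5\,Hz.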
\subsection{Detector Livetime}
\label{sec:livetime}
This analysis comprises data recorded by ARA Stations 2 and 3 (A2 and A3) between initial deployment in February 2013 and the end of December 2016. Over the course of these four years, each station accumulated roughly 1100\,days of livetime, as shown in Fig~\ref{fig:A23_uptime}, recording over 1.2 billion events total between the two stations. The two detectors were operated in several different ``configurations", representing different combinations of operating parameters such as trigger window size, etc. We summarize the five data taking configurations for each station in
Tab.~\ref{tab:configs} of App.~\ref{app:livetime}. For all configurations in A2, the bottom HPol channel of string 4 was non-operational, and it is excluded from participating in the trigger for configurations 3-5. Additionally, for configurations 3-5 of A3, the fourth string of the detector
participates in forming triggers normally, but due to technical problems in the digitization chain it does not produce useful signal for analysis.
\begin{figure*}[ht]
\centering
\includegraphics[width=\linewidth]{livetimes_a23.png}
\caption{Operational fractional livetimes for A2 (left) and A3 (right) from deployment in February 2013 through the end of the analysis period in 2016; each bin is one month wide. Of the 4\,years of deployment, 1141 days from A2 and 1021 days from A3 are good for analysis. The lost time is mostly due to intermittent downtime; quality cuts remove less than 2\% of livetime.
}
\label{fig:A23_uptime}
\end{figure*}
\subsection{Simulation}
\label{sec:simulation_description}
We generate simulated data sets with the Monte Carlo package \texttt{AraSim}, which has been previously described extensively in Allison \textit{et al.} \cite{Allison:2014kha, Allison:2015eky}.
This code models the generation of neutrino events from a diffuse flux and their interactions with Earth and Antarctica.
After simulating interactions in-ice, \texttt{AraSim} renders a time-domain parameterization of the Askaryan radiation and propagates that radiation through the ice, taking into account signal attenuation and ray bending based on a depth-dependent index of refraction model.
When the radiation arrives at a simulated station, it is convolved with a frequency-dependent model of the detector, including the antennas, signal chain, and the trigger logic. The model of the instrument includes the dispersive effect of the signal chain that induces a frequency-dependent group delay.
If the event satisfies a simulated trigger, it is stored in the same format as real data so that our analysis codes can be executed on either data or simulated
events interchangeably.
The models of the A2 and A3 stations are data-driven, and include calibrations derived from the 2012-2013 dataset as described in \cite{Allison:2015eky}.
In particular, the antenna locations, the noise temperature of the ice, and the gain of every channel are all implemented in the model based on \textit{in situ} measurements. The simulation also models the configuration-specific variations in the electronics behavior (readout length, trigger window size, trigger delay values, etc.) as detailed in App.~\ref{app:livetime}.
In Fig.~\ref{fig:aeff}, we show the aperture ($[A\Omega]_{\rm{eff}}$) of A2, averaged over configurations. The effective area is derived via Monte Carlo techniques with \texttt{AraSim} as described in App.~\ref{app:limit_calc}. For comparison, we also plot the effective area of the IceCube experiment~\cite{Aartsen:2013dsm}. As can be seen in the bottom panel, we find that A2 and A3 have comparable effective areas to within a few percent. We additionally find that triggering and readout parameters specific to each livetime configuration, as discussed in Sec.~\ref{sec:livetime}, do not result in differences in the trigger level effective area in excess of a few percent. The two detectors, A2 and A3, are simulated independently; previous studies have shown that only a small fraction of events trigger both A2 and A3 simultaneously, amounting to about 5\% of events at 1 EeV ~\cite{Allison:2015eky}.
\begin{figure}[ht]
\centering
\includegraphics[width=\columnwidth]{a23_veff.png}
\caption{(Top) The simulated trigger-level effective area-steradian ($[A\Omega]_{\textrm{eff}}$) for A2, averaged across configurations. For comparison, we also show the analysis-level sensitivity of IceCube~\cite{Aartsen:2013dsm}. (Bottom) The percent difference between the A2 and A3 effective areas.}
\label{fig:aeff}
\end{figure}
\section{Data Analysis}
\label{sec:data_analysis_methods}
Our data analysis searches for a diffuse flux of neutrinos between $10^{16}{-}10^{21}$\,eV.
The analysis is designed to remove background events, principally thermal and anthropogenic noise, while preserving sensitivity to neutrinos.
The analysis proceeds in a ``blind'' fashion, where the ARA data is divided into two subsets.
A ``burn'' sample of 10\% of the data, which is assumed to be representative of the full data sample, is set aside and used to tune cuts and understand backgrounds.
The remaining 90\% of the data is kept blinded
until cuts are finalized.
Before unblinding, it was decided that in the absence of a detection, the analysis with the best \textit{expected} limit would be used to set the limit.
\subsection{Summary of Blind Analyses}
Two parallel, complementary analyses were performed on the four-year data samples, which we refer to as Analysis A and Analysis B. In this section, we outline the strategies followed by both, with Sec.~\ref{sec:analysis_highlights} describing features specific to the two separate analyses.
Both analyses follow similar strategies. First, a set of basic data quality and livetime cuts are applied to remove detector `glitches', calibration events, and periods of livetime known to be contaminated with anthropogenic activities. Second, fast event-level filters designed to reduce the quantity of data by an order of magnitude or more are applied. Third, interferometric-based reconstruction is performed to identify the arrival direction of a recorded signal, and geometric cuts invoked to reject events that originate from above the ice surface or in the direction of known calibration pulsers. Finally, a bivariate cut is applied on the signal strength and a reconstruction quality parameter. Analysis A considers events only in the vertical polarization, while Analysis B includes events in both the horizontal and vertical polarizations. Both derive data-driven models of the background, and set their final cuts such that $\sim 0.01$ background events are expected to pass the analysis in the 1100 days of livetime, corresponding to the level at which we find the best expected limit.
Both analyses use the 10\% ``burn" sample to tune cuts and understand backgrounds. The number of neutrinos expected in the burn sample based on allowed models is $<0.02$, so the chance of excluding a neutrino candidate in the burn sample is negligible. Moreover, cuts were motivated by an optimization procedure and by examining outlying events that did not have neutrino-like properties, not by the objective to eliminate all events in the burn sample. The distribution of the background was smooth and followed a statistical distribution, and we did not readjust cuts to remove specific neutrino-like events.
\subsection{Data Quality Cuts}
Before analysis begins, we remove periods of livetime which are known to contain human and calibration activity. This includes, for example, maintenance operations on the detector during the Austral summer, and the operation of surface pulsers or pulsers deployed on two of the final IceCube strings (strings 1 and 22), known as the ``IceCube Deep Radio Pulsers." These livetime cuts remove less than 2\% of the total livetime recorded by the instrument.
Next, both analyses deploy a nearly common set of data quality cuts designed to remove instrumental glitches and remaining calibration events from the dataset. Glitches are typically present either as waveforms that are shorter than those generated during normal readout, or waveforms with unphysical discontinuities (likely due to digital errors in the readout electronics), and comprise less than 0.001\% of events.
Additionally, we remove the internally-generated ``software" triggers described above in Sec.~\ref{sec:ara_elec}, as well as ``tagged" calibration pulser events. We are able to ``tag" calibration pulsers under normal operating conditions as they are nominally triggered by the pulse-per-second (PPS) TTL inside the DAQ, so these events are readily identified by their timestamps. This has a negligible effect on the detector livetime and neutrino sensitivity. The quality cuts that are not common between analyses focus on slightly different methods for detecting unphysical discontinuities in the waveforms, as well as the identification of out-of-band power content.
\subsection{Event Filter}
Because of the large size of the ARA dataset (${>1.5\,\times\,10^{8}}$\,events/station/year), and the expectation that most triggers are upward fluctuations of the thermal noise environment, each analysis applies a computationally simple cut that rejects $>$90\% of triggered events. Analysis A utilizes an event filter based on a multiplicity condition, which requires that more than three VPol channels each have a signal strength above a threshold. Analysis B utilizes a wavefront-RMS filter, which requires that the pattern of arrival times across the array be consistent with that of a plane-wave. Both algorithms have been described elsewhere \cite{Lu:2017amt}. Analysis A tunes its filter such that 99\% of triggered events do not pass the filter, while Analysis B tunes its filter such that approximately 90\% of triggered events do not pass. In Analysis A, the signal strength threshold is tuned. In Analysis B, the signal strength threshold and tolerance parameter for deviation from plane wave-like timing is tuned.
In Analysis~A, the multiplicity trigger is approximately 70\% efficient for $10^{18}$\,eV neutrinos, where for Analysis~B the wavefront-RMS filter efficiency is approximately 90\%.
\subsection{Reconstruction and Geometric Cuts}
\label{sec:reco}
For events passing the event filter, we perform an interferometric-based reconstruction to determine the direction of the source of measured incoming radio waves. This interferometric reconstruction technique has been used in other ARA analyses~\cite{Allison:2014kha, Allison:2015eky, Allison:2015lnj, Allison:2018whu} and in the ANITA experiment~\cite{Romero-Wolf:2014pua}. The interferometric technique relies on the relationship between the location of an emitting source in space and the time delays expected for two measurement antennas with known separation.
For a given pair of antenna waveforms, the cross-correlation $C_{\textrm{i,j}}$ between the voltage waveform on the $i$-th antenna ($V_{\textrm{i}}$) and the voltage waveform on the $j$-th antenna ($V_{\textrm{j}}$) as a function of time lag $\tau$ can be expressed in Eq.~\ref{equ:crosscorr}:
\begin{equation}
C_{i,j}(\tau)=\frac{\sum\limits_{t}V_i(t)V_j(t+\tau)}{RMS_i \times RMS_j}
\label{equ:crosscorr}
\end{equation}
where the $RMS$ are the root-mean-square voltages of the waveforms in the absence of signal. The lag $\tau$ defines the time delay of one antenna waveform relative to the other and depends on the position of the source emitter relative to the array center, characterized by an elevation angle ($\theta$), an azimuthal angle ($\phi$), and a distance to the source ($R$). The array center is defined as the centroid of all sixteen measurement antennas in the station.
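A direct numpy transcription of Eq.~\ref{equ:crosscorr} might read as follows (the function name is ours; lags are returned in units of samples, with a positive lag meaning $V_j$ arrives later than $V_i$):

```python
import numpy as np

def cross_correlation(v_i, v_j, rms_i, rms_j):
    """Normalized cross-correlation C_ij(tau) = sum_t V_i(t) V_j(t+tau) / (RMS_i * RMS_j).

    rms_i, rms_j: noise RMS of each channel, measured in the absence of signal.
    Returns (taus, C) with taus in samples.
    """
    v_i = np.asarray(v_i, dtype=float)
    v_j = np.asarray(v_j, dtype=float)
    # np.correlate(v_j, v_i, 'full') evaluates sum_t v_i(t) v_j(t + tau) for each lag
    c = np.correlate(v_j, v_i, mode="full")
    taus = np.arange(-(len(v_i) - 1), len(v_j))
    return taus, c / (rms_i * rms_j)
```

The lag at which $C_{i,j}$ peaks estimates the true arrival-time difference between the two antennas, which is what the interferometric map exploits.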
The pairwise time lags $\tau$ for a given point on the sky $\theta, \phi$ are computed by calculating the path a light ray would take from a hypothesized source located at a distance $R$ to an antenna. The calculation accounts for the changing index of refraction of the Antarctic firn, which causes rays to follow curved, rather than rectilinear trajectories. With $n(z)$ the depth-dependent index-of-refraction, and $z$ the (negative) depth from the ice surface, the ray-tracing method models the changing index of refraction as:
\begin{equation}
n(z)=1.78 - 0.43 e^{0.0132z}.
\end{equation}
This index of refraction model was determined by fitting data collected by the RICE experiment in Antarctica~\cite{kravchenko_besson_meyers_2004}. We consider the index to be unity above the surface.
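The exponential profile can be coded directly. In the sketch below the coefficients follow the commonly quoted form of the RICE fit, $n(z)=1.78-0.43\,e^{0.0132z}$, which gives $n\approx1.35$ at the snow surface and asymptotes to $1.78$ in deep ice; treat the exact values as illustrative.

```python
import numpy as np

def n_ice(z):
    """Depth-dependent index of refraction; z is depth in metres (negative below surface).

    Above the surface the index is taken as unity, as in the text.
    """
    z = np.asarray(z, dtype=float)
    # exponential firn model: ~1.35 at the surface, -> 1.78 in deep ice
    return np.where(z >= 0.0, 1.0, 1.78 - 0.43 * np.exp(0.0132 * z))
```

Because $n$ increases with depth, rays launched upward from a deep source continuously bend away from the vertical, which is the origin of the curved trajectories the ray tracer must model.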
The total cross-correlation strength for a given point on the sky is given by summing over all like-polarization pairs of antennas as in Eq.~\ref{eq:skycorr2}:
\begin{equation}
C_{\rm{sky}}(\theta,\phi; R)=\frac{\sum_{i=1}^{n_{ant}-1}\sum_{j=i+1}^{n_{ant}}C_{i,j}[\tau(\theta,\phi; R)]}{n_{\rm{ant}}}
\label{eq:skycorr2}
\end{equation}
To smooth uncertainties in the ice model and other systematics (such as differences in the phase responses of the various contributing antennas), we calculate the Hilbert envelope of the cross-correlation function before summing over pairs, as is done in previous analyses. The Hilbert envelope of the cross-correlation $H(C_{\textrm{i,j}})$ is calculated according to Eq.~\ref{eq:hilbert}:
\begin{equation}
H(C_{i,j}) = \sqrt{C_{i,j}^2 + h^2({C_{i,j}})}
\label{eq:hilbert}
\end{equation}
where $h(C_{\textrm{i,j}})$ denotes the Hilbert transform.
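Eq.~\ref{eq:hilbert} can be evaluated via the FFT-based analytic signal, whose magnitude is exactly $\sqrt{C^2 + h^2(C)}$. The self-contained sketch below avoids external dependencies (equivalently, one could use \texttt{scipy.signal.hilbert}); the function name is ours.

```python
import numpy as np

def hilbert_envelope(c):
    """Envelope sqrt(C^2 + h(C)^2), with h the Hilbert transform, via the FFT."""
    c = np.asarray(c, dtype=float)
    n = len(c)
    spec = np.fft.fft(c)
    # build the analytic-signal filter: keep DC (and Nyquist for even n),
    # double positive frequencies, zero negative frequencies
    h = np.zeros(n)
    if n % 2 == 0:
        h[0] = h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[0] = 1.0
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(spec * h)  # analytic signal C + i*h(C)
    return np.abs(analytic)
```

Applying this envelope to each pairwise correlogram before summing smooths out small timing offsets from ice-model and phase-response uncertainties, at the cost of slightly broadening the correlation peak.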
The cross-correlation function for an individual pair of antennas, $C_{\textrm{i,j}}$, is expected to be maximal when the lag is equal to the true difference in the arrival times of a signal at the two different antennas. The sky map is therefore expected to have a peak at the putative source direction.
For determining source direction, Analysis~A tests radii from 40-5000\,m to locate a hypothesis radius which maximizes $C_{\textrm{sky}}$, while Analysis~B reconstructs only at 41\,m and 300\,m, corresponding to the radius of the calibration pulser and a radius taken as a plane-wave proxy. That one analysis performs a radius scan is a setup inherited from a separate investigation regarding our ability to determine the radius of curvature for signals, which we found to be limited for sources beyond a few hundred meters, given the instrumental timing resolution. After finding the best reconstruction direction (the direction which maximizes $C_{\textrm{sky}}$), both analyses impose two geometric cuts. The first is an angular cut in the direction of the calibration pulser system. The second is a cut on the reconstructed elevation ($\theta$) of the hypothetical source relative to the station center, and is used to reject events coming from above the surface.
The cut on the angular region around the calibration pulser systems is necessary to reject untagged calibration pulser events; approximately 1 in $10^{4}$ calibration pulser signals are emitted outside of the expected time window; the cause of this ``misfiring" is not well understood. Additionally, one configuration in A3 (configuration 2) did not have the calibration pulser system correctly synchronized to the PPS clock, and so a purely geometric rejection criterion is needed. To determine this geometric cut region, the angular distribution of tagged calibration pulsers is fit (either with a Gaussian or a Kernel Density Estimator), and a cut region determined such that fewer than $10^{-3}$ calibration pulser events are expected to reconstruct outside of that angular region for the entire livetime period. The angular cut region is an approximately $10^{\circ}\times10^{\circ}$ box around the true calibration pulser location. The value of $10^{-3}$ is approximately an order of magnitude less than the number of background events expected to pass all analysis cuts. Less than 3\% of neutrinos are cut by this calibration pulser geometric cut requirement.
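The Gaussian variant of this fit-and-cut procedure can be sketched as follows; the function names are ours, and a real implementation would fit each angular coordinate (and possibly use a KDE) rather than assume a perfectly Gaussian reconstruction spread.

```python
import math
import numpy as np

def erfinv(y):
    """Inverse error function on [0, 1) via bisection (sufficient for this sketch)."""
    lo, hi = 0.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if math.erf(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def cal_pulser_cut_halfwidth(angles_deg, n_total_pulses, leak_budget=1e-3):
    """Half-width (deg) of the cut region around the pulser direction such that
    the expected number of pulser events reconstructing outside it stays below
    leak_budget over the full livetime, assuming Gaussian-distributed angles."""
    sigma = float(np.std(angles_deg))
    p_tail = leak_budget / n_total_pulses        # allowed two-sided tail per pulse
    k = math.sqrt(2.0) * erfinv(1.0 - p_tail)    # P(|x - mu| > k*sigma) = p_tail
    return k * sigma
```

With $\sim10^{6}$ pulses and a $10^{-3}$ leakage budget, the allowed per-pulse tail probability is $10^{-9}$, which for a Gaussian corresponds to a cut a bit beyond $6\sigma$ from the pulser direction.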
The geometric cut at the surface is used primarily to reject anthropogenic noise, as well as potential downgoing physics signals such as cosmic rays. We make the cut on events from above the surface because we expect neutrino events to predominantly yield up-coming signals. The cut on events from above the surface proceeds similarly to the calibration pulser geometric cut. We fit the distribution of events in $\sin(\theta)$ near the air-ice boundary, and place an angular cut such that fewer than $10^{-3}$ events from the above-ice distribution are expected to reconstruct within the ice. In Analysis~A, events are only reconstructed in the vertical polarization, while in Analysis~B, an event may be classified as having an above-the-surface origin in either polarization, and if so it is rejected from consideration in the searches in either polarization. The cut on the reconstruction angle $\theta$ varies from $11$-$35^{\circ}$, and approximately 10-30\% of neutrinos are cut by the surface cut at $10^{18}$~eV, depending on the analysis, station, and configuration. For example, in Analysis A, the angular cut is $\sim30^{\circ}$ for A2, but is $\sim10^{\circ}$ for A3. The reduction in efficiency is partially because radio waves can follow curved trajectories as they traverse the varying index-of-refraction, and can appear as downgoing signals when they in fact arise from sources within the ice.
\subsection{Bivariate Cut and Background Estimate}
Both analyses implement their final separation of noise from potential neutrino signals as a bivariate cut in the peak cross-correlation ($C_{\textrm{sky}}$) vs.\ signal strength ($\Gamma$) plane. For an event to ``pass," Analysis~A imposes a box cut requiring that an event's $C_{\textrm{sky}}$ and $\Gamma$ both exceed a station- and configuration-specific threshold: $C_{\textrm{sky}}>C_{\textrm{min}}$ and $\Gamma>\Gamma_{\textrm{min}}$. In Analysis~B, an event is required to pass a linear combination of the two, such that $\Gamma > m \, C_{\textrm{sky}} + b$, where $m$ and $b$ are station- and configuration-specific analysis parameters. An example of the box cut for A3 configuration 3, in Analysis~A, is provided in Fig.~\ref{fig:LDcut}.
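The two cut shapes can be written down directly; this is a minimal sketch, with function names of our own choosing and placeholder threshold values:

```python
def passes_analysis_a(c_sky, gamma, c_min, gamma_min):
    """Box cut: both the peak cross-correlation and the signal strength
    must exceed station- and configuration-specific thresholds."""
    return c_sky > c_min and gamma > gamma_min

def passes_analysis_b(c_sky, gamma, m, b):
    """Linear cut in the (C_sky, Gamma) plane: Gamma > m * C_sky + b,
    with m and b station- and configuration-specific."""
    return gamma > m * c_sky + b
```

An event at high correlation but low signal strength fails the box cut in Analysis~A, while under the linear cut of Analysis~B its required signal strength scales with its correlation.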
For the purpose of Fig.~\ref{fig:LDcut}, we show $\Gamma$ in the way it was computed to perform cuts in Analysis~A. We call this definition of signal strength the root-power-ratio (RPR), defined as $RPR=E_{\textrm{j,max}}/\sigma_{E_{\textrm{j,noise}}}$, where $E_{\textrm{j,max}}$ is the maximum of the square root of a rolling 25\,ns integrated power average of the waveform, specifically:
\begin{equation}
\label{eq:RPR}
E_j = \sqrt{\frac{1}{n} \sum_{i=j}^{j+n} V_i^2}
\end{equation}
where $n$ is the number of samples in the 25\,ns window and $\sigma_{E_{\textrm{j,noise}}}$ is the RMS value of $E_{\textrm{j}}$ in the half of the waveform that does not contain the maximum. This RPR variable has been used in a previous ARA analysis~\cite{Allison:2015eky}, and was chosen to more closely emulate the power-integrated envelope that is used in the ARA trigger.
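The computation in Eq.~\ref{eq:RPR} can be sketched as follows; the sampling interval is a hypothetical value, not the actual ARA digitizer setting:

```python
import numpy as np

def root_power_ratio(waveform, dt_ns=0.3125, window_ns=25.0):
    """RPR = E_max / sigma_noise: the peak of the rolling 25 ns RMS envelope,
    normalized by the RMS of that envelope in the half of the waveform
    not containing the maximum.  dt_ns is an assumed sampling interval."""
    n = max(1, int(round(window_ns / dt_ns)))
    v2 = np.asarray(waveform, dtype=float) ** 2
    # E_j: square root of the rolling mean of V^2 over the 25 ns window
    kernel = np.ones(n) / n
    e = np.sqrt(np.convolve(v2, kernel, mode="valid"))
    j_max = int(np.argmax(e))
    # sigma: RMS of E_j over the half of the record without the maximum
    half = len(e) // 2
    noise = e[half:] if j_max < half else e[:half]
    sigma = np.sqrt(np.mean(noise**2))
    return e[j_max] / sigma
```

A short impulse embedded in thermal-like noise yields an RPR well above unity, while a featureless noise waveform yields a value near one.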
\begin{figure*}[ht]
\centering
\includegraphics[width=0.49\linewidth]{ARA03_type3_snrMode1_nMinusCoherenceSNR_c_snr_outOfBand_logX_distinctColorsV3_rpr.pdf}
\includegraphics[width=0.49\linewidth]{ARA03_type3_E18_snrMode1_nMinusCoherenceSNR_c_snr_outOfBand_extendedAxesRange_logX_distinctColorsV3_rpr.pdf}
\caption{An example of the bivariate cut plane, for which the final 2-D box cut is made for A3 configuration 3. (Left) The plane as observed in 10\% ``burn-sample" data, showing events clustering at low-correlation and low-root-power-ratio. (Right) The plane populated with simulated neutrinos at $10^{18}$\,eV, showing events distributed throughout. Events at low-correlation and low-root-power-ratio are cut; events at higher values define the signal region, and pass the analysis.}
\label{fig:LDcut}
\end{figure*}
Both analyses use a data-driven model of the backgrounds in order to set final cuts and estimate the expected number of background events passing all cuts. As in previous analyses, the model is constructed by fitting the distribution of events as a function of the cut parameters ($C_{\textrm{sky}}$, RPR, etc.), and setting the cut such that fewer than $\sim0.01$ background events are expected to pass all cuts, which is the level at which we find the best expected limit based on statistical uncertainties only.
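The data-driven background estimate described above, fitting the tail of a cut-parameter distribution and extrapolating past the cut, can be illustrated with an exponential tail model; the functional form, binning, and fit range here are our own illustrative choices:

```python
import numpy as np

def expected_background(values, cut, fit_range, n_bins=20):
    """Fit an exponential, dN/dx ~ A * exp(-x / tau), to the observed
    distribution of a cut parameter x within `fit_range`, then
    extrapolate the expected event count beyond `cut`."""
    lo, hi = fit_range
    x = np.asarray(values, dtype=float)
    counts, edges = np.histogram(x[(x >= lo) & (x < hi)], bins=n_bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mask = counts > 0
    # Linear fit in log space: log N = intercept + slope * x, slope = -1/tau
    slope, intercept = np.polyfit(centers[mask], np.log(counts[mask]), 1)
    tau = -1.0 / slope
    bin_w = edges[1] - edges[0]
    a = np.exp(intercept) / bin_w  # convert per-bin counts to a density
    # Integral of A * exp(-x/tau) from `cut` to infinity:
    return a * tau * np.exp(-cut / tau)
```

The final cut value would then be raised until this extrapolated expectation falls below the target of $\sim0.01$ events.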
Before examining the neutrino signal region, defined as events passing all cuts in the analysis, both analyses first reversed the requirement that events reconstruct inside the ice. That is, we examined events which failed the geometric cut by reconstructing to the surface. This is done in order to identify bursts of activity from the surface, and we exclude runs which have $\gtrsim11$ events reconstructing to the surface. At this stage, we do not exclude single, isolated events, ``singlets," as neutrinos are expected to arrive isolated in time and space. In both analyses, this ``surface-noisy" cut eliminated approximately an additional week of livetime.
\subsection{Analysis-Specific Comments}
\label{sec:analysis_highlights}
\subsubsection{Analysis A}
Analysis A uses solely signal from VPol antennas for the search. This is motivated by the fact that the majority ($\sim 70$\%) of simulated signal events contain VPol triggers. This is partly because VPol antennas are more sensitive than HPol antennas, especially at low frequencies. To define the surface geometric cut, Analysis A reconstructs the incident angle of each event with signal arrival times calculated assuming a bulk-ice model with a constant index of refraction of 1.76 and a putative source distance of 5\,km to emulate a distant source at the ice surface. This approach proved to be the most successful in reconstructing a radio emitter system installed on the rooftop of the IceCube Lab, which served as a proxy for distant surface signals. The cut is then placed on the elevation angle of the result of this reconstruction as described in Sec.~\ref{sec:reco}.
One category of background present in ARA data is continuous-wave (CW) emission. CW emission is anthropogenic in origin and presents as a strong spectral peak in the power spectral density of an event. The most common type of CW encountered in ARA is generated by the $\sim$403\,MHz radiosonde attached to NOAA weather balloons that are launched once or twice daily from the South Pole; we additionally see 125\,MHz emission from an as-yet unidentified source.
To eliminate the contamination of CW emission, Analysis~A places an out-of-band cut, where an event is considered CW-contaminated if more than three channels in either polarization demonstrate peak spectral density below 170~MHz. This frequency threshold is motivated by the edge of the pass-band filter. We discard the event entirely if such CW contamination is found. This cut represents negligible signal efficiency loss below $10^{19}$\,eV, and a $\sim 10$\% loss at $10^{21}$\,eV from off-cone signals. To reject CW contamination at higher frequencies, we observe that such events, while producing high $C_{\textrm{sky}}$ values due to their CW nature, do not produce high RPR values on the $C_{\textrm{sky}}$-RPR plane. Therefore, Analysis~A rejects these events with the 2-D box cut.
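The out-of-band cut amounts to locating the peak of each channel's power spectrum and counting how many channels peak below the pass-band edge; a minimal sketch, assuming a hypothetical sampling interval and operating on the channels of a single polarization:

```python
import numpy as np

def is_cw_contaminated(waveforms, dt_ns=0.3125, f_cut_mhz=170.0, max_ch=3):
    """Flag an event as CW-contaminated if more than `max_ch` channels
    have their peak spectral density below `f_cut_mhz` (the pass-band
    edge).  dt_ns is an assumed sampling interval."""
    n_low = 0
    for wf in waveforms:
        spec = np.abs(np.fft.rfft(wf)) ** 2
        freqs_mhz = np.fft.rfftfreq(len(wf), d=dt_ns * 1e-9) / 1e6
        peak_bin = int(np.argmax(spec[1:])) + 1  # skip the DC bin
        if freqs_mhz[peak_bin] < f_cut_mhz:
            n_low += 1
    return n_low > max_ch
```

For example, an event in which four channels are dominated by a 125\,MHz line would be flagged, while one with only two such channels would not.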
\subsubsection{Analysis B}
Analysis B features two major differences from Analysis~A. First, Analysis~B performs the neutrino search in both polarizations, VPol and HPol. Second, Analysis B filters power in events with CW contamination.
CW contamination is identified with two methods: first by looking for spectral peaks over run-specific baselines as in the prototype station analysis~\cite{Allison:2014kha} and second by looking for stability between phasors at a given frequency as is done in the LOFAR experiment~\cite{Schellart:2013bba}. Once CW has been identified at a specific frequency, this contamination is removed using a filtering technique developed and used by the ANITA collaboration, which operates in a similar frequency domain~\cite{DaileyThesis,Allison:2018cxu}. The filter notches spectral peaks in the amplitude domain, while reconstructing the phasors to represent only the signal and thermal noise contributions, with the CW contamination removed.
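The phasor-stability idea can be sketched by splitting a record into blocks and testing whether the phase advance at each frequency bin is constant from block to block, as it is for a CW tone but not for thermal noise; this is a simplified stand-in for the LOFAR-style method, with block length and thresholds chosen for illustration only:

```python
import numpy as np

def phase_stability(record, block_len):
    """For each frequency bin, measure the stability of the phase advance
    between consecutive blocks of the record.  A CW tone gives a constant
    advance (stability near 1); thermal noise averages toward zero."""
    n_blocks = len(record) // block_len
    blocks = np.asarray(record[: n_blocks * block_len]).reshape(n_blocks, block_len)
    specs = np.fft.rfft(blocks, axis=1)
    phasors = specs / np.maximum(np.abs(specs), 1e-12)  # unit phasors
    # Phase advance between consecutive blocks, one complex value per bin:
    diffs = phasors[1:] * np.conj(phasors[:-1])
    return np.abs(diffs.mean(axis=0))  # stability in [0, 1] per bin
```

Bins whose stability exceeds some threshold would then be handed to the notch filter.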
Once an event has been filtered of its contaminating CW emission, it proceeds in the analysis as above.
Development and use of techniques to mitigate CW contamination is important because the $\sim$403\,MHz emission at South Pole can contaminate up to 10\% of ARA's daily livetime. As the detectors continue to accrue livetime, and sensitivity to weak signals improves, the ability to filter events of contaminating CW emission will be important for leveraging the full livetime of the array.
\begin{figure*}[ht]
\centering
\includegraphics[width=0.49\linewidth]{A23_livetimeWeightedAvgSignalEfficiencyVsChannelInWindowSNR_xLabelSNR_E213.png}
\includegraphics[width=0.49\linewidth]{A23_signalEfficiency_vnchnl3NoMasking_tunedCuts_enrichedStat_livetimeWeighted_err.png}
\caption{Monte-Carlo estimated analysis efficiency as a function of signal-to-noise ratio (left) and neutrino energy (right) for Analysis A. For context, the trigger efficiency of an ARA station has been measured to reach 50\% at an SNR of 3.7~\cite{Allison:2018ynt}. In the left figure, we assume an unbroken power-law spectrum with a spectral index of -2.13 to weight the energies contributing to the efficiency. The efficiency decrease around SNR=14 is due to waveform saturation effects as simulated in \texttt{AraSim}.}
\label{fig:efficiency}
\end{figure*}
\section{Results}
\label{sec:results}
After rejecting data containing bursts of surface activity, both analyses examined the neutrino signal region in A2 before examining the signal region in A3. The individual unblinding results of each analysis are discussed below in Sec.~\ref{sec:results_analysisA} and Sec.~\ref{sec:results_analysisB}.
Neither analysis observes a statistically significant excess of events, observing zero events against the estimated background. In the absence of detection, in Fig.~\ref{fig:limit} we compute the 90\% confidence level (CL) upper limit on the diffuse flux of neutrinos.
Further details on the upper limit calculation, including inclusion of the systematic uncertainties discussed in Sec.~\ref{sec:systematics}, can be found in App.~\ref{app:limit_calc}. Inclusion of the systematic uncertainties in the limit has an $\mathcal{O}(5\%)$ effect.
We report the limit set by Analysis A, which had slightly superior \textit{expected} sensitivity, by up to 15\%, depending on energy. As a benchmark, the number of events expected to be observed in the analysis ranges from 0.25 for an unbroken extrapolation of the astrophysical neutrino flux as measured by IceCube with a spectral index of -2.13~\cite{Aartsen:2016xlq}, to 0.027 in the case of a cosmogenic neutrino flux where protons make up only 10\% of cosmic ray primaries~\cite{Ahlers:2012rz}.
In Fig.~\ref{fig:efficiency}, we present the analysis efficiency of Analysis~A for both A2 and A3; we plot the average signal efficiency, taking into account the variations due to different run configurations and their respective livetimes. The signal efficiency is calculated by simulating neutrinos in AraSim, and taking the ratio of the number of neutrinos passing the analysis cuts to the number of neutrinos that trigger the detector.
We show the efficiency for Analysis~A, as it is the analysis used to set our limit, though the efficiencies for Analysis~B (which was developed in a parallel and independent fashion) are comparable. On the left, we show the efficiency as a function of SNR, where SNR is computed as the third highest $V_{peak}/RMS$, where $V_{peak}$ is the highest absolute voltage peak in a waveform, and the RMS is the root-mean-square of the voltage values in that waveform. We present the figure with this definition of SNR as it more closely aligns with that commonly used for comparison purposes in the literature.
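The SNR definition used for the left panel of Fig.~\ref{fig:efficiency} can be written compactly; this is a sketch of the definition only, with hypothetical waveforms:

```python
import numpy as np

def event_snr(waveforms):
    """Event SNR as the third-highest per-channel Vpeak/RMS, where Vpeak
    is the largest absolute voltage in a waveform and RMS is the
    root-mean-square of that waveform's voltage samples."""
    ratios = []
    for wf in waveforms:
        wf = np.asarray(wf, dtype=float)
        ratios.append(np.max(np.abs(wf)) / np.sqrt(np.mean(wf**2)))
    return sorted(ratios, reverse=True)[2]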
The analysis becomes efficient near an SNR of 6, and does not fully saturate to a value between 75-90\% until it is above an SNR of 8. The saturated efficiency for A3 is $\sim10$\% lower than for A2 because A3 required a larger angular cut region to reject surface events, as discussed in Sec.~\ref{sec:reco}. On the right, we show the efficiency as a function of energy. At $10^{16}$\,eV, the analysis has a relatively low efficiency of about 5\%. The efficiency rises to $\sim$\,35\% by $10^{18}$\,eV and peaks near $10^{20}$\,eV at between 50-60\%, depending on the specific station. Efficiencies for all stations and configurations are provided in additional Figures in App.~\ref{app:livetime}.
\subsection{Analysis A Results}
\label{sec:results_analysisA}
After post-unblinding examination, Analysis~A observes 0 events on a background expectation of ${(5\pm2)\times10^{-2}}$ events per station.
At unblinding, Analysis~A observed two events in the candidate neutrino signal region in A2. While both reconstruct inside the ice using an interferometric technique which utilizes all VPol channels of the array, both only have visibly identifiable signals in the bottom row of VPol antennas. When the reconstruction is repeated utilizing only antennas
where the signal strength exceeds the event filter threshold, both events confidently reconstruct to above the surface. We consider both of these events to be backgrounds of surface origin.
At unblinding, Analysis~A observed four events in the candidate neutrino signal region in A3. Three cluster in time to within a few minutes, and are located in a run which contains a burst of surface noise, but was technically sub-threshold in the ``surface-noisy" cut as described above in Sec.~\ref{sec:results}. The fourth event is reconstructed inside the ice when all VPol channels participate in the interferometry.
Again, if only channels with signal strength above the event filter threshold are considered, the event reconstructs to above the surface. It is therefore determined to be consistent with a background of surface origin.
Since all events observed in Analysis~A can, with currently available tools, be identified to be of surface origin, or cluster in time with bursts of surface activity, we do not consider Analysis~A to have measured any events. The post-unblinding cut necessary to remove the misreconstructed surface events results in a negligible efficiency loss ($\leq 0.25\%$). As Analysis~A provided the better expected limit, we proceed to compute the limit as described in App.~\ref{app:limit_calc} with an observed number of events of zero.
\subsection{Analysis B Results}
\label{sec:results_analysisB}
After post-unblinding examination, Analysis~B observes 0 events on a background expectation of ${(1\pm0.3)\times10^{-2}}$ events per station.
At unblinding, Analysis~B observed 19 events in the candidate neutrino signal region in A2. Of these, seven were ``near-surface" events, and were addressed by more stringent, data-driven surface cuts, as described in Sec.~\ref{sec:reco}. Analysis~B had originally used a geometric argument to determine the value of the surface cut, as opposed to data-driven methods. An additional seven events were of a type not observed in the burn-sample, where an unphysical amount of power was deposited in one or two strings. These were removed with an update to the quality cuts, and the update had negligible impact on the signal efficiency. One event was a calibration pulser that ``mis-fired" during a time when it was not enabled by the software. It was misreconstructed in the 41\,m interferometry, but was correctly reconstructed in the 300\,m radius, and was removed by additionally rejecting events if they reconstructed towards the calibration pulser in either interferometric radius. This additional calibration pulser geometric rejection also had negligible impact on the analysis efficiency. To address the remaining four events, an additional hit-time based reconstruction method, which traces its lineage to the RICE experiment, was added. The method uses an integrated-power envelope (the same as described in the definition of RPR in Eq.~\ref{eq:RPR}) to identify hit times in the waveforms, and requires at least four waveforms in the event to have an RPR above a threshold of eight. The method then searches for the location of a source emitter ($\theta, \phi, R$) which minimizes the differences between the predicted and observed time delays between channels. With this additional cut, all four of the remaining events in Analysis~B are rejected---one fails to have enough hits to be reconstructed, and the remaining three reconstruct to above the surface.
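The core of a hit-time based source search, minimizing the mismatch between predicted and observed inter-channel delays, can be sketched with a grid search. Straight-ray propagation at a constant speed $c/n$ is a simplifying assumption made here for illustration; the actual reconstruction must account for the depth-dependent index of refraction, and all geometry values below are hypothetical:

```python
import numpy as np

C_ICE_M_PER_NS = 0.3 / 1.76  # signal speed in ice under the straight-ray assumption

def locate_source(ant_pos, hit_times, grid):
    """Grid-search for the candidate source position minimizing the squared
    residuals between predicted and observed inter-channel time delays.
    ant_pos: (n_ch, 3) antenna positions [m]; hit_times: (n_ch,) hits [ns];
    grid: iterable of candidate source positions [m]."""
    best, best_cost = None, np.inf
    for src in grid:
        t_pred = np.linalg.norm(ant_pos - src, axis=1) / C_ICE_M_PER_NS
        # Compare delays relative to channel 0 to cancel the unknown emission time
        res = (hit_times - hit_times[0]) - (t_pred - t_pred[0])
        cost = np.sum(res**2)
        if cost < best_cost:
            best, best_cost = src, cost
    return best, best_cost
```

A candidate whose best-fit position lies above the ice surface would then be classified as a surface background.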
At unblinding, Analysis~B observed three events in the candidate neutrino signal region in A3. Two cluster in time within a few minutes, and are located in the same run which generated the three passing events in Analysis~A. The third is the same event observed in Analysis~A, which was determined to be downgoing both in Analysis~A through the revised interferometric method described above, and also in Analysis~B independently with the hit-time based reconstruction method. Like in Analysis~A, all three events are determined to be of surface origin, or associated with a burst of surface activity.
Since all events observed in Analysis~B can, with currently available tools, be identified to be of surface origin, or cluster in time with bursts of surface activity, we do not consider Analysis~B to have measured any events. The additional post-unblinding hit-time based reconstruction cut results in no more than an additional 2\% efficiency loss.
\begin{figure}[!htb]
\centering
\includegraphics[width=\columnwidth]{limit_EFE.png}
\caption{The 90\% confidence-level upper limit on the all-flavor diffuse flux of neutrinos set by this analysis (thick black line). The limit accounts for uncertainties in the background estimate and systematic uncertainties on the neutrino sensitivity. We also plot the projected trigger-level single-event sensitivity (TL SES) for the five-station ARA5 array by 2022 as a black-dashed curve. Also shown are the latest limits and flux measurements from IceCube~\cite{Aartsen:2018vtx, Aartsen:2016xlq}, Auger~\cite{Aab:2019auo} (rescaled with decade-wide bins and for all-flavors), ANITA~\cite{Gorham:2019guw} (rescaled with decade-wide bins), and ARIANNA~\cite{Anker:2019rzo}. Shown for comparison are several benchmark cosmogenic neutrino flux models~\cite{Olinto:2011ng,Kotera:2010yn, Ahlers:2012rz}.}
\label{fig:limit}
\end{figure}
\section{Systematic uncertainties}
\label{sec:systematics}
In this section, we describe the systematic uncertainties considered in the analysis. The impact of these systematics on $[A\Omega]_{\textrm{eff}}$ are shown in Fig.~\ref{fig:rel_diff}, and a table summarizing the magnitude of their effects at $10^{18}$\,eV is provided in Tab.~\ref{tab:systematic_sizes}. We consider systematic uncertainties broadly in two classes. The first class is associated with theoretical uncertainties surrounding the neutrino-nucleon cross section and Askaryan emission, and are shown in Fig.~\ref{fig:rel_diff} as solid bands, reported at the trigger level. The second class is associated with uncertainties in our understanding of the detection medium and our instrument. The latter are taken into account in setting the final limit as described in App.~\ref{app:limit_calc}, and are shown as dashed/dotted lines in Fig.~\ref{fig:rel_diff} at the analysis level.
For the neutrino-nucleon cross section ($\sigma_{\nu-\textrm{N}}$), \texttt{AraSim} uses the model derived by Connolly, Thorne, and Waters (CTW)~\cite{Connolly:2011vc}. The upper and lower bounds for $\sigma_{\nu-N}$ are substituted for the central value in the simulation to estimate the effect of the uncertainty on the simulated $[A\Omega]_{\textrm{eff}}$ at the trigger level. In the CTW model, the uncertainties on $\sigma_{\nu-\textrm{N}}$ are large and grow as a function of energy, exceeding 100\% above $10^{21}$\,eV. At $10^{18}$\,eV the uncertainties on the trigger-level effective area due to the cross-section are estimated at -15\%/+18\%. In Fig.~\ref{fig:rel_diff}, for comparison we also show the uncertainties if we use an alternative cross-section developed by Cooper-Sarkar~\textit{et al.}~(CS)~\cite{CooperSarkar:2011pa} which has smaller uncertainties at high energies by about a factor of four.
We additionally studied $d[A\Omega]_{\textrm{eff}}/d[\sigma_{\nu-\textrm{N}}]$, and find it to be approximately linear; for example, at 1~EeV, a 10\% increase in $\sigma_{\nu-\textrm{N}}$ corresponded to a 10\% increase in $[A\Omega]_{\textrm{eff}}$.
For the Askaryan emission, \texttt{AraSim} implements a modified version of the model derived by Alvarez-Mu\~niz \textit{et al.}~\cite{AlvarezMuniz:2011ya}. A full description of modifications is provided elsewhere~\cite{Allison:2014kha}, but the primary differences arise due to the inclusion of the LPM effect by Alvarez-Mu\~niz but not by \texttt{AraSim}, and in \texttt{AraSim}'s use of functional parameterizations for the shower profile instead of directly simulated shower profiles. The relative difference between waveform amplitudes produced by \texttt{AraSim}, and those derived from a full shower Monte-Carlo are at most ${\sim}12$\%~\cite{HongThesis}. We conservatively estimate the effect of this systematic uncertainty by reducing or increasing the simulated field amplitude by $\pm12\%$ and assessing the change in $[A\Omega]_{\textrm{eff}}$ at the trigger level. The relative difference between the default parameterization and the scaled parameterization has a maximum value of about 25\% near $10^{16}$\,eV, and starts falling as energy increases. This is because at high energies the instrument acceptance becomes dominated by geometric effects (ray tracing, etc.) and not signal amplitude.
At $10^{18}$\,eV the estimated uncertainties due to the Askaryan emission model are -11\%/+13\%.
In the second category of uncertainties, we consider those arising from our detector response and from measurements of quantities such as the index of refraction in ice and the attenuation length of radio waves in ice. These systematics are included in our calculation of the final limit. We consider uncertainties associated with (1) the attenuation length ($L_{\textrm{att}}$) of South Pole Ice and (2) the depth-dependent index of refraction ($n(z)$) of South Pole ice, (3) the calibration of the ARA signal chain, and (4) the triggering efficiency of the detector.
The model for the attenuation length ($L_{\textrm{att}}$) of South Pole ice was derived from data taken with the ARA Testbed prototype~\cite{Allison:2011wk}. Confidence bands providing an upper and lower limit on $L_{\textrm{att}}$ are given in the model. To set upper/lower limits on our sensitivity, in \texttt{AraSim}, the upper and lower bounds for $L_{\textrm{att}}$ are substituted for the central value.
At $10^{18}$\,eV the uncertainty on the analysis level effective area due to uncertainties in attenuation length are -8\%/+50\%.
The model for the depth-dependent index of refraction $n(z)$ was obtained by fitting data obtained by the RICE experiment~\cite{kravchenko_besson_meyers_2004}. The data were fitted with an exponential as a function of (negative) depth $z$ of the form $n_d-(n_d-n_s)e^{z\cdot n_c}$, finding the following parameter values and their respective uncertainties: $n_d=1.788\pm 0.016$, $n_s=1.359\pm 0.022$, and $n_c =0.0132 \pm 0.0017\, \textrm{m}^{-1}$. We recalculate the sensitivity, setting all parameters to their upper and lower limits simultaneously. The lower (upper) limit generally corresponds to a slower (faster) transition from surface to deep ice, and correspondingly yields a smaller (larger) geometric acceptance for neutrinos. Additionally, since we do not change the ice-model assumption used to reconstruct the incoming direction of the RF emission as discussed in Sec.~\ref{sec:reco}, this systematic uncertainty also captures errors which may be present if the true ice model for radio wave propagation does not match that used for reconstruction. At $10^{18}$\,eV the uncertainties on the analysis level effective area due to the index of refraction model are 5\%.
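For concreteness, the fitted index-of-refraction profile can be written directly from the parameter values above (central values only; the function name and sign convention, $z \le 0$ below the surface, are our own):

```python
import math

def n_ice(z, n_d=1.788, n_s=1.359, n_c=0.0132):
    """Depth-dependent index of refraction of South Pole ice,
    n(z) = n_d - (n_d - n_s) * exp(z * n_c), with z <= 0 in meters
    (z = 0 at the surface).  Central parameter values from the RICE fit."""
    return n_d - (n_d - n_s) * math.exp(z * n_c)
```

The profile rises from $n_s=1.359$ at the surface and asymptotes to the deep-ice value $n_d=1.788$; it is this gradient that bends ray trajectories and complicates the surface/in-ice separation.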
We consider four sources of uncertainties that exist in the signal chain. They are the transmission coefficient $t$ representing the impedance mismatch between the ice and the antenna, as well as between the antenna and the coaxial cable, the ambient noise power received $N_{\textrm{ant}}$, the signal chain noise power $N_{\textrm{sc}}$, and the antenna directivity $D$. We follow the treatment used in the previous ARA result~\cite{Allison:2015eky} where we consider the system signal-to-noise ratio representing the ratio of input signal power to total system noise power in a given channel:
\begin{equation}
SNR_{\rm{sys}} = \dfrac{tDP_{\rm{sig}}}{tN_{\rm{ant}}+N_{\rm{sc}}}
\end{equation}
with $P_{\textrm{sig}}$ being the received signal power. The four sources of uncertainty translate to an uncertainty in $SNR_{\textrm{sys}}$ by standard error propagation, which is then implemented as an uncertainty in the antenna gain $G$ in code ($\Delta G = \Delta SNR_{\textrm{sys}} / P_{\textrm{sig}}$). In line with previous ARA work, here we only consider the case where the effective gain of the instrument is reduced, providing a conservative estimate of our sensitivity. This is done because we lack sufficient calibration data at this time to constrain the upper bound on the gain. The VPol antenna gain has an overall estimated uncertainty of -10\%, while the HPol antenna gain is estimated at -32\%. The modified gain values are substituted in the simulation to assess the impact of this uncertainty, and the uncertainty at $10^{18}$\,eV is found to be -3\%.
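The standard error propagation used here can be sketched numerically; the parameter values and uncertainties below are purely illustrative, not the calibrated ARA numbers:

```python
import math

def snr_sys(t, d, p_sig, n_ant, n_sc):
    """System signal-to-noise ratio: t*D*P_sig / (t*N_ant + N_sc)."""
    return t * d * p_sig / (t * n_ant + n_sc)

def snr_sys_uncertainty(params, sigmas, eps=1e-6):
    """Quadrature error propagation via numerical partial derivatives:
    sigma_SNR^2 = sum_i (dSNR/dx_i * sigma_i)^2."""
    total = 0.0
    for i, s in enumerate(sigmas):
        up = list(params); up[i] += eps
        dn = list(params); dn[i] -= eps
        deriv = (snr_sys(*up) - snr_sys(*dn)) / (2 * eps)
        total += (deriv * s) ** 2
    return math.sqrt(total)
```

The resulting $\Delta SNR_{\textrm{sys}}$ is then converted to an effective gain shift, $\Delta G = \Delta SNR_{\textrm{sys}} / P_{\textrm{sig}}$, as described above.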
\begin{figure}[!htb]
\centering
\includegraphics[width=\columnwidth]{rel_diff_all.pdf}
\caption{Uncertainties between the central values used in the simulation and upper/lower bounds for each model parameter. Theoretical systematics (shaded regions), such as the Askaryan model and the neutrino-nucleon cross section, are not accounted for when calculating the neutrino limit. Uncertainties associated with the detector and medium (dashed and solid lines) are accounted for in the calculation.}
\label{fig:rel_diff}
\end{figure}
For the systematic uncertainty associated with the trigger efficiency of the detector as a function of RPR, $\epsilon(RPR)$, we compare the simulated trigger efficiency $\epsilon_{\textrm{sim}}(RPR)$ to the measured trigger efficiency in calibration pulser data $\epsilon_{\textrm{dat}}(RPR)$: ${\Delta \epsilon = \epsilon_{\textrm{dat}}(RPR) - \epsilon_{\textrm{sim}}(RPR)}$. We measure $\epsilon_{\textrm{dat}}(RPR)$ by varying a tunable attenuator on the local calibration pulsers described in Sec.~\ref{sec:instrument_description} and counting the number of calibration pulsers recorded.
Using \texttt{AraSim} we find that the uncertainties on the trigger efficiency decrease the simulated $[A\Omega]_{\textrm{eff}}$ by between 2-5\% depending on energy; at $10^{18}$\,eV the size of the effect is -3\%.
We observed in previous calibration exercises that the stations trigger inefficiently on calibration pulsers whose direct ray-tracing solution intercepts the array at an angle steeper than $-25^{\circ}$ from horizontal; this can be seen in Ref.~\cite{Allison:2019rgg}, where there is a deficiency of triggers in A2 and A3 after the pulser is lowered below 1300\,m depth, despite the pulser being lowered to a total depth of 1700\,m. Therefore, for the calculation of $[A\Omega]_{\textrm{eff}}$ used in the limit, we conservatively exclude simulated neutrino events with the same ray-tracing conditions.
This results in a ${\sim}10-30\%$ reduction in sensitivity, depending on energy. Excluding these steeply upgoing events is a conservative approach, as more exhaustive future studies might reveal that the cause of the trigger inefficiency to the calibration pulses does not have the same effect on neutrino events.
\begin{table}
\small
\centering
\begin{tabular}{p{35mm}| p{10mm} | p{10mm}}
\hline \hline{}
Systematic Uncertainty & + (\%) & - (\%) \\
\hline
Cross-Section (CTW) & 18 & 15 \\
Askaryan Emission & 13 & 11 \\
\hline\hline
Attenuation Length & 50 & 8 \\
Index of Refraction & 5 & 5 \\
Signal Chain & & 3 \\
Triggering Efficiency & & 3 \\
Total & 50 & 11\\
\hline
\end{tabular}
\caption{A summary of the systematic uncertainties in the neutrino sensitivity at a neutrino energy of $10^{18}$\,eV.}
\label{tab:systematic_sizes}
\end{table}
\section{Discussion and Outlook}
\label{sec:discuss}
In this paper, we present constraints on the flux of UHE neutrinos between $10^{16}$ and $10^{21}$\,eV from four years of data in A2 and A3.
We have presented a description of the livetime and the instrument, and detailed the cuts used to eliminate backgrounds in two complementary, blind analyses.
The resultant limit from this search is the strongest limit set by ARA to date, and the strongest limit set by an in-ice radio neutrino detector above $10^{17}$\,eV.
The result utilizes more than quadruple the livetime of the previously published ARA analysis, and maintains reasonable efficiency to neutrinos while remaining general to signal shape and not requiring costly cuts on livetime in Austral summer or angular cuts in the direction of anthropogenic sources like South Pole Station.
We are encouraged that the two analyses, which leveraged complementary sets of reconstruction and analysis tools, have similar sensitivity and produced consistent expected limits within 15\% for all energy bins.
Post-unblinding, we were additionally able to further study our surface-related backgrounds. As discussed previously, we observed zero events, consistent with our background estimates, including our estimated $10^{-3}$ events from above the surface. If we check the data-taking runs that were excluded pre-unblinding because of the presence of large amounts of surface noise, we do observe a few events passing all cuts. We are additionally able to roughly estimate the probability of a single event of surface origin being misreconstructed as coming from within the ice. To do so, we multiply the fraction of runs in which we observe only one surface event by our estimated misreconstruction rate. We estimate the misreconstruction rate by taking the ratio of the number of events reconstructing inside the ice to those reconstructing outside the ice in the surface-noisy runs. For example, in A3, we find that there may be approximately 0.2 such ``misreconstructing singlets." We note that this estimate is biased toward larger values, because in order to measure the misreconstruction rate, we rely on the number of events reconstructing inside the ice in runs which demonstrate large amounts of surface noise and were judged pre-unblinding to be unfit for analysis. These two post-unblinding studies demonstrate the role of the surface-noisy cut in the present analysis, and represent an opportunity for growth during the development of future reconstruction techniques.
We underscore several important features of this newest result. First, it demonstrates ARA's capability to analyze its growing dataset. Compared to our previous result, which analyzed the first 10 months of data from stations A2 and A3 \cite{Allison:2015eky}, this analysis leverages data from four years of data-taking in each of the two stations. After removing intermittent periods of downtime we have about 1100 days (75\%) of livetime that was good for analysis for each station. This amounts to 2162 days of combined livetime.
This analysis is therefore the first ARA result to analyze $\mathcal{O}(10)$ station-years of data. This demonstrates the capability to analyze our growing dataset, which will be important as ARA looks to the future.
There are roughly 4080 additional days of livetime awaiting analysis in the archive, pending ongoing calibration efforts.
With the full five-station ARA array collecting data since January 2018, the data set is expected to roughly double again
by 2022 (a total of approximately 11,000 days of livetime). In Fig.~\ref{fig:limit} we additionally show the projected trigger-level single-event sensitivity that the five-station ARA5 array can achieve with data that will have been accumulated through 2022. As can be seen, ARA is poised to be the leading UHE neutrino detector above $10^{17}$~eV; the IceCube and Auger experiments will also accumulate additional livetime, amounting to about 40\% and 25\% increases over their respective published limits.
Second, the analysis maintains reasonable efficiency (${\sim}35$\% at $10^{18}$\,eV, and reaching 50\% efficiency near a voltage signal-to-noise ratio (SNR) of 6) while remaining general and not relying on quantities that are
strongly model-dependent, such as a correlation with a signal template. This is advantageous because although the Askaryan signal has been observed in the laboratory and in the atmosphere from cosmic-ray air showers~\cite{Belletoile:2015rea, Aab:2014esa, Scholten:2016gmj}, it has never before been observed in a dense medium in nature.
In line with our previous two-station result \cite{Allison:2015eky}, this analysis did not require excluding data recorded during the Austral summer, nor did it require geometric rejection regions specifically in the direction of the South Pole. In the prior analyses of the Prototype station~\cite{Allison:2014kha,Allison:2015lnj}, 31\% of livetime was lost due to anthropogenic activities during the Austral summer, as well as 9\% due to the detector's solid angle coverage in directions near the South Pole.
We note three challenges overcome in these analyses that have resulted in improvements moving forward, especially as the ARA dataset continues to grow, the diversity of the array increases, and the field looks forward to a large-scale radio array in IceCube-Gen2.
The first challenge was managing the time-dependent nature of the ARA instruments. Some of this time dependence is owed to the different data-taking configurations, as described in App.~\ref{app:livetime}, which often reflect improved understanding of the instrument and the ice. For example, an early trigger configuration led to triggering signals being off-center in the digitized waveform; this was later corrected. The change to the readout length was made after Monte Carlo studies revealed that longer readout windows increase the probability that a station records both the direct and the refracted/reflected pulse, which are possible because of the depth-dependent index of refraction. Having learned from these processes, we have reached more stable operating configurations and are working on additional streamlining. Some time dependence is owed to changing detector characteristics; for example, for some periods of time in ARA station 3, a digitization board exhibited a high amount of readout noise.
Such time-dependent detector features required adjustments to analysis algorithms and analysis thresholds. As a result of the analyses described herein, identification of such time periods has also been considerably streamlined.
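The readout-window consideration above can be sketched as a simple timing check; the window length, pre-trigger offset, and pulse delays below are hypothetical values for illustration, not ARA calibration constants:

```python
def captures_both_pulses(delay_ns, window_ns, pretrigger_ns=50.0):
    """Return True if a readout window of length window_ns, opening
    pretrigger_ns before the direct pulse, also contains the
    refracted/reflected pulse arriving delay_ns later.
    All numbers here are illustrative assumptions."""
    return pretrigger_ns + delay_ns <= window_ns

# A longer window captures direct + refracted pairs with larger delays.
for window in (256.0, 512.0):
    print(window, captures_both_pulses(delay_ns=300.0, window_ns=window))
```

This is the qualitative trade studied in the Monte Carlo: lengthening the window raises the chance of recording both arrivals at the cost of more data per event.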
The second challenge was intra-collaboration communication between the ARA operations and analysis teams. In many cases, periods of livetime contaminated by calibration activity were recorded in operations reports, but were only later accounted for in the analyses. We plan to streamline this pipeline for future ARA analyses.
The third challenge was managing anthropogenic activity from the South Pole over several Austral summers. Despite most human activity being isolated nearly two miles away, the analyses required aggressive cuts on downgoing signals, which eliminated 10--30\% of neutrino events. Improvements to reconstruction algorithms that more confidently reject downgoing events without such substantial solid-angle cuts, or that more confidently reconstruct events with low hit multiplicity, will improve analysis efficiencies in the future.
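As an illustration of how a downgoing cut maps onto sky coverage (the 10--30\% quoted above refers to neutrino events, not raw solid angle), the fraction of the full sky removed by a zenith-angle cut follows from $\Omega(\theta) = 2\pi(1-\cos\theta)$:

```python
import math

def sky_fraction_above(theta_cut_deg):
    """Fraction of the full 4*pi sky with zenith angle below theta_cut,
    from Omega(theta) = 2*pi*(1 - cos(theta))."""
    return (1.0 - math.cos(math.radians(theta_cut_deg))) / 2.0

# e.g. a hypothetical cut excluding everything within 37 degrees of vertical
print(f"{sky_fraction_above(37.0):.1%} of the sky removed")
```

Because event rates are not uniform over the sky, the event-level cost of such a cut differs from the raw solid-angle fraction, which is why improved reconstruction rather than a looser cut is the preferred path forward.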
\section{Acknowledgments}
The main authors of this manuscript were Brian Clark, Ming-Yuan Lu, and Jorge Torres, with Brian Clark and Ming-Yuan Lu leading the data analysis for this result. The ARA Collaboration designed, constructed and now operates the ARA detectors. Data processing and calibration, Monte Carlo simulations of the detector and of theoretical models and data analyses were performed by a large number of collaboration members, who also discussed and approved the scientific results presented here. We are grateful for contributions and discussions from Lucas Smith and Suren Gourapura.
We are thankful to the National Science Foundation (NSF) Office of Polar Programs and Physics Division for funding support through grants 1806923, 1404266, OPP-902483, OPP-1359535, and 1607555.
We further thank the Taiwan National Science Council's Vanguard Program NSC 92-2628-M-002-09 and the Belgian F.R.S.-FNRS Grant 4.4508.01.
We also thank the University of Wisconsin Alumni Research Foundation, the University of Maryland, and the Ohio State University for their support.
B.~A.~Clark thanks the NSF for support through the Graduate Research Fellowship Program Award DGE-1343012 and the Astronomy and Astrophysics Postdoctoral Fellowship under Award 1903885, as well as the Institute for Cyber-Enabled Research at Michigan State University.
A.~Connolly thanks the NSF for CAREER Award 1255557 and Award GRT00049285 and also the Ohio Supercomputer Center.
K.~Hoffman likewise thanks the NSF for their support through CAREER award 0847658.
S.~A.~Wissel thanks the NSF for support through CAREER Award 1752922 and the Bill and Linda Frost Fund at the California Polytechnic State University.
A.~Connolly, H.~Landsman, and D.~Besson thank the United States-Israel Binational Science Foundation for their support through Grant 2012077.
A.~Connolly, A.~Karle, and J.~Kelley thank the NSF for the support through BIGDATA Grant 1250720.
D.~Besson and A.~Novikov acknowledge support from the National Research Nuclear University MEPhI (Moscow Engineering Physics Institute).
K.~Hughes thanks the NSF for support through the Graduate Research Fellowship Program Award DGE-1746045.
A.~Vieregg thanks the Sloan Foundation and the Research Corporation for Science Advancement.
R.~Nichol thanks the Leverhulme Trust for their support.
K.~D.~de Vries is supported by the European Research Council under the European Union's Horizon 2020 research and innovation programme (Grant Agreement No.~805486).
Finally, we are thankful to the Raytheon Polar Services Corporation, Lockheed Martin, and the Antarctic Support Contractor for field support and enabling our work on the harshest continent.
\bibliographystyle{apsrev4-2}