Our Counselling MSc course provides you with a level of knowledge and skill in counselling that will allow you to develop as a counsellor, and once you have gained further experience and client hours you will be able to pursue a range of professional opportunities. If you have a first degree in Psychology that is accredited for Graduate Basis for Chartered Membership (GBC), your postgraduate training opportunities will include doctoral training on British Psychological Society (BPS) accredited courses in Counselling Psychology.
Credits can count towards a professional body accreditation with UKCP or BACP.
This course requires you to undertake a supervised placement of a minimum of 100 client hours plus supervision and a minimum 40 hours of personal therapy with an approved and registered or accredited counsellor or therapist. A student membership of the British Association for Counselling and Psychotherapy (BACP) is advised and a Disclosure and Barring Service (DBS) check will be required for entry onto this course. The course may be used as a basis for personal accreditation with BACP once further experience and an additional 350 client hours have been gained.
The course will also provide you with a range of skills that will be highly relevant to you if you are wishing to undertake further training in a range of related careers and specialisms. In addition to careers options, the degree will provide you with a range of communication skills and personal development skills that will be highly valued in a range of organisations and situations. Most of all, this course enables you to feel that you belong to a process of development that is enriching and supportive.
This course allows you to draw together different counselling theories and examine issues within a variety of contexts and situations. Course modules will concentrate upon developing your skills and therapeutic competencies and explore professional issues and organisational settings. Importantly, the course offers you the opportunity to maximise your self-awareness and reflect on your process and the way that others practice. The placement module allows you to take your competencies to organisations to work with clients.
Throughout the course you will be encouraged to develop a critical, evaluative approach to the knowledge which underpins present-day professional practice as well as building your ability to make evidence-based decisions. Current issues within counselling and therapy will be considered critically with particular emphasis on their relationship to client practice.
'Counselling Theory and Practice' involves approximately 142 formal taught hours, 100 hours preparation for assessment, and 158 independent study hours.
'Counselling Skills and Process' involves approximately 144 formal taught hours, 75 hours preparation for assessment, and 81 independent study hours.
'Self-awareness and Reflectiveness' involves approximately 74 formal taught hours, 50 hours preparation for assessment, 40 hours of personal therapy and 36 independent study hours.
'Placement and Supervision' involves approximately 135 practice and supervision hours, 75 hours preparation for assessment, and 90 independent study hours.
Formal teaching takes place on one day per week in the first year, supplemented by a three-day taught intensive session held three times per year (first year only). Students will also be expected to attend one-to-one tutorials at least twice per trimester.
There is a minimum 80% attendance requirement across each element of the course.
Assessment methods include coursework and a dissertation. The dissertation will require you to write 12,000 to 15,000 words on any aspect of counselling. Many counselling students take a qualitative approach to their research and previous dissertations have included working with autistic clients, the implications of money for therapists and a personal experience of bereavement.
Upon completion of this course you may find employment opportunities within NHS settings or schools, as well as further or higher educational counselling settings. Various opportunities are available within organisational settings and police settings, as well as in continuing professional development (CPD) work. The government agenda for Improved Access to Psychological Therapies (IAPT) offers good opportunity for trained counsellors.
You will need to hold a first- or second-class bachelor's degree in order to be eligible to apply for this course. You will need to be prepared to carry out placements where you will be working with clients. You will need a high level of critical self-awareness and a willingness to reflect on your own process throughout this course. Voluntary or professional experience of support work with adults is desirable, but not essential.
It is important that those attending this course are prepared to carry out a placement when they will be working with clients. Therefore, individuals on this course will need a high level of critical self-awareness and a willingness to reflect on their own process.
It is compulsory for students to complete at least 40 hours of personal therapy, which you will need to pay for yourself (average cost £30 to £50 per hour).
BACP student membership is required for many placements (currently £82 per year).
Supervision of the client work is compulsory. Students may need to contribute to the cost of supervision at the placement agency or, if the agency does not provide this, will need to find and pay for a private supervisor (average of £40 to £60 per hour; a minimum of two hours of individual supervision, or equivalent, is required per month from the start of the placement).
You may also need to pay for professional indemnity insurance (costs vary; around £100 per year) if your agency does not cover you for this. Some agencies also require trainees to attend their training which they may charge for.
To count towards a professional body accreditation, such as UKCP or BACP, this course requires you to undertake a supervised placement practice of a minimum of 100 hours plus supervision and a minimum 40 hours of personal therapy with an approved and registered or accredited counsellor or therapist. A student membership of the British Association for Counselling and Psychotherapy (BACP) is advised and professional indemnity insurance and a Disclosure and Barring Service (DBS) check will be required for entry onto this course.
Please note that all of the requirements listed above involve costs in addition to the course fee.
[REVIEW] 'Call Me Kuchu' Takes on Uganda's Struggle for Gay Rights
Michael Arceneaux
I typically hate the adage "it could always be worse," because I've never found it to be comforting. Why should I feel any better about someone else being considerably worse off than me? Now I'm sulking over both my situation and those suffering even more than I am.
But, after watching the haunting new documentary Call Me Kuchu, I couldn't help but breathe a sigh of relief that I'm not subjected to conditions as harsh as those my gay brothers and sisters face in other parts of the Diaspora. I won't be skipping down the block about the prejudices leveled against me as a gay Black man in this country anytime soon, but one can't help but be grateful to live in a nation further along on the road to equality.
The award-winning documentary provides a first-hand account of the homophobia pervading Uganda, along with the efforts to end it led by activist David Kato, who identifies as the first openly gay man in the country. Joining Kato are his closest friend Naome, a lesbian; Stosh, who was raped at a young age, forced to have an abortion at five months, and now lives in fear that she will be murdered for being gay; and Longjones, an LGBT counselor who initially shied away from revealing his sexual orientation but felt compelled to speak out in the wake of Kato's murder. Together, they fight for equality under SMUG, or Sexual Minorities Uganda.
Also in the fight for fairness for gays is Bishop Senyojo. A religious man not bound by Biblical literalism and holding a PhD in human sexuality, Senyojo opens a kuchu counseling center and safe house. For his efforts, he is expelled from the Anglican Church of Uganda.
We learn that "kuchu" is a synonym for "queer," and for nearly 90 minutes filmmakers Malika Zouhali-Worrall and Katherine Fairfax Wright paint a frightening picture of what life is like for any kuchu living in the East African nation. Yet we're offered a more complex view of Uganda than we're normally given.
The film begins with a jubilee for two men who have been in a relationship for nine years. As one of the documentary's participants points out, the union is not an obufumbo, or marriage, but it is a loving union worthy of recognition and celebration. Still, fearing the laws that prohibit such a love, all parties involved opted for "formal attire" out of concern for their safety.
The subject swiftly shifts from love to hate as we hear Ugandan pastors slam homosexuality as a "gloom of shame" that places the nation knee-deep in sin. Instantly, I know that I've heard this story before. It's ironic to hear these clergymen dismiss homosexuality as "sins of the West" when their disgust for gay people comes directly from the white Christian missionaries, and their Black pastoral pets, who flocked to the nation to promote their ideology.
One white evangelist is quoted as saying "America is losing its way" and that "Uganda has become our ground zero." What they mean is that, since their bastardized account of Christianity is no longer the flavor du jour stateside, they've decided to travel abroad to find new suckers. So in turn they peddle the notion that "gay people are perverted parasites trying to sucker your children into the lifestyle." And after years of traveling back and forth "promoting the love of Christ," hostility toward gay people resonates with 95% of Ugandans.
Homophobia isn't just good business for bullheaded bigots disguised as Christian missionaries. As Kato points out, it's a booming business for the Ugandan media outlet Rolling Stone. The publication, a cross between Media Take Out and Fox News, routinely sold stories of gay people as treasonous "homo generals" organizing attacks against the local government and their fellow Ugandans. Worse, the paper regularly published exposés of known homosexuals, spurring vigilantism among readers eager to do physical harm to those revealed to be gay. It is sensationalism and bias in their most primitive and despicable forms.
Kato takes Rolling Stone to court to stop the paper from publishing the names and photos of perceived gay Ugandans. Weeks after he wins his landmark case, Kato is beaten over the head with a hammer. Not even at his funeral is his life given respect by certain groups, leaving one of his friends to shout out in distress, "Enough is enough for goodness sakes!" Heartbreaking.
It's frustrating to see these people suffer, especially when you know it's Americans who are responsible for creating the hateful climate. Given that it remains unclear how homophobia will end here, there's no telling how it will in Uganda. If nothing else, Call Me Kuchu reminds us that everywhere, gays are challenging the hatred and making steps toward changing the situation.
Check out Call Me Kuchu in select theaters in LA and NY.
Michael Arceneaux is a Houston-bred, Howard-educated writer and blogger. You can read more of his work on his site, The Cynical Ones. Follow him on Twitter: @youngsinick
|
package planning.causalgraph;
import planning.util.IntSet;
public class PartialRelaxedPlan {
/* Actions added so far from the goals back. */
public IntSet actions;
/* Facts that still need support. */
public IntSet openPositiveGoals;
public IntSet openNegativeGoals;
/* Facts that have been added by an existing action and can be reused as support. */
public IntSet initialFacts;
public IntSet initialNegativeFacts;
/* Sum of costs of all added actions. */
public int totalCost;
/* Sum of costs of all open goals. */
public int openCost;
public PartialRelaxedPlan(int maxActionID, int maxFactID){
actions = new IntSet(maxActionID);
openPositiveGoals = new IntSet(maxFactID);
openNegativeGoals = new IntSet(maxFactID);
initialFacts = new IntSet(maxFactID);
initialNegativeFacts = new IntSet(maxFactID);
}
public PartialRelaxedPlan(PartialRelaxedPlan other){
actions = new IntSet(other.actions);
openPositiveGoals = new IntSet(other.openPositiveGoals);
openNegativeGoals = new IntSet(other.openNegativeGoals);
initialFacts = new IntSet(other.initialFacts);
initialNegativeFacts = new IntSet(other.initialNegativeFacts);
totalCost = other.totalCost;
openCost = other.openCost;
}
@Override
public int hashCode(){
// Hash on the action set only; this stays consistent with equals,
// since equal plans necessarily share the same action set.
return actions.hashCode();
}
/**
* Structural equality: two plans are equal when they contain the same
* actions and the same open goals.
*/
@Override
public boolean equals(Object other){
if(other instanceof PartialRelaxedPlan){
PartialRelaxedPlan op = (PartialRelaxedPlan)other;
return actions.equals(op.actions) && openNegativeGoals.equals(op.openNegativeGoals) && openPositiveGoals.equals(op.openPositiveGoals);
}
return false;
}
@Override
public String toString(){
return "Relaxed plan actions: "+actions+"\nOpen goals: "+openPositiveGoals+", "+openNegativeGoals+"\nOpen cost: "+openCost;
}
}
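The copy constructor above exists so that a search can branch: clone a parent plan, then extend the clone without disturbing the parent. A minimal sketch of that pattern, using `java.util.BitSet` as a stand-in for `planning.util.IntSet` (an assumption; IntSet's API is not shown here):

```java
import java.util.BitSet;

// Sketch of PartialRelaxedPlan's copy-on-branch semantics, with
// java.util.BitSet standing in for planning.util.IntSet.
public class PlanCopyDemo {
    BitSet actions = new BitSet();
    int totalCost;

    PlanCopyDemo() {}

    // Deep copy, mirroring PartialRelaxedPlan(PartialRelaxedPlan other):
    // the set is cloned, so mutating the copy never touches the original.
    PlanCopyDemo(PlanCopyDemo other) {
        actions = (BitSet) other.actions.clone();
        totalCost = other.totalCost;
    }

    public static void main(String[] args) {
        PlanCopyDemo base = new PlanCopyDemo();
        base.actions.set(3);
        base.totalCost = 5;

        PlanCopyDemo branch = new PlanCopyDemo(base);
        branch.actions.set(7);   // extend the copy only
        branch.totalCost += 2;

        System.out.println(base.actions.get(7));   // false: parent untouched
        System.out.println(branch.actions.get(3)); // true: copy kept prior actions
    }
}
```

Had the copy constructor merely assigned `other.actions`, both plans would share one set and branching would silently corrupt the parent.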
|
Events
43rd Olympiad: Cleon of Epidaurus, winner of the stadion race.
The governor of Coele-Syria and Phoenicia revolts against Nabopolassar, king of Babylon. After the capture of Carchemish, the king sends against the rebels his son Nebuchadnezzar II, who had been made viceroy.
Births
Deaths
Years of the 7th century BC
|
Q: Schematic to work with an SD card The application note AN10911 from NXP contains several schematics for working with SD cards, for example the schematic shown below. However, it also states:
This schematic does not include details concerning card-supply and typical power-supply
decoupling capacitors.
What's the difference between "card-supply" and "power-supply", and where should those capacitors be put?
Would the schematic be complete after adding those capacitors? Assuming the host is STM32.
A: Card supply assumes that the SD card runs at a different voltage than your main host does.
Power supply decoupling caps are just that, the typical caps needed for any power supply and ic.
So yes: add the appropriate power supply for your SD card (I'm assuming 3.3 V typical, but it could vary), a 0.1 µF decoupling cap close to the card, and whatever other caps your supply needs, and that is all the extra circuitry you would typically expect in order to interface with an SD card.
|
Nature's oldest waterpark rises from the ground like a glacial wonderland with water flowing into an endless series of gleaming white, terraced basins. Scattered about lie the ruins of an ancient civilization, the 2nd century B.C. dynasty of Attalids, forming a surreal backdrop to this seemingly arctic landscape.
But this world wonder is anything but frozen. This is Pamukkale in southwest Turkey, where the water flows from a hot spring. And the formation isn't made of ice: it's solid calcium. Not the milk kind of calcium but a mineral called calcium carbonate that's white and smooth.
An underground fault pumps out hot mineral water, and as the water reaches the surface, a chemical reaction takes place. Carbon dioxide is released into the air, leaving behind bits of calcium carbonate minerals that are deposited as the water flows down the slope.
Deposits are a bit soft at first, but cool into a shelf of a solid, white limestone called travertine. As more water flows: more travertine shelves are built. Before long, there's a whole hillside of terraced pools stacked one on top of another.
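In chemical shorthand, the process is the familiar carbonate reaction run in reverse: underground, hot water dissolves limestone as calcium bicarbonate, and at the surface the dissolved mineral comes back out of solution:

Ca(HCO3)2 → CaCO3 + CO2 + H2O

As the carbon dioxide escapes into the air, the calcium carbonate left behind hardens into travertine.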
The watery wonderland is a year-round destination. Even in winter, the hot springs maintain a soothing 65 degrees Fahrenheit (18.3 degrees Celsius), attracting hordes of people who come for an afternoon dip.
Yes, the world has been in on the secret for centuries. Ancient Romans constructed canals to harness the springs for a spa city they built nearby called Hierapolis. They believed the waters had medicinal or beautifying properties. We can't say for sure if there are any health or beauty benefits to lounging in the terraced pools of Pamukkale, but we wouldn't mind finding out.
|
\section{}
The chemical evolution of the Universe and several phases of the stellar life are regulated by minute nuclear reactions. The key point for each of these reactions is the value of cross sections at the energies at which they take place in stellar environments.
Direct cross-section measurements are mainly hampered by the very low counting rate and by cosmic background, nevertheless they have become possible by combining the best experimental techniques with the cosmic silence of an underground laboratory.
In the nineties the LUNA (Laboratory for Underground Nuclear Astrophysics) collaboration opened the era of underground nuclear astrophysics, installing first a home-made $50\ensuremath{\,\mathrm{kV}}{}$ accelerator and later a second $400\ensuremath{\,\mathrm{kV}}{}$ machine under the Gran Sasso mountain in Italy: in 25 years of experimental activity, important reactions responsible for hydrogen burning could be studied down to the relevant energies thanks to the high-current proton and helium beams provided by the machines.
Interest in the next and warmer stages of stellar evolution (\emph{i.\,e.}{} post main sequence, helium and carbon burning) drove a new project based on an ion accelerator in the MV range, called LUNA-MV, able to deliver proton, helium and carbon beams. The present contribution aims to discuss the state of the art for some selected key processes of post-main-sequence stellar phases: \ensuremath{\C{12}(\alpha,\gamma)\nuc{16}{O}}{} and \ensuremath{\C{12}+\C{12}}{}, fundamental for the helium and carbon burning phases, and \ensuremath{\C{13}(\alpha,\mathrm{n})\nuc{16}{O}}{} and \ensuremath{\nuc{22}{Ne}(\alpha,\mathrm{n})\nuc{25}{Mg}}{}, which are relevant to the synthesis of heavy elements in AGB stars. The perspectives opened by an underground MV facility will be highlighted.
\tiny
\fontsize{8}{11}\helveticabold { \section{Keywords:} Underground nuclear astrophysics, helium burning, carbon burning, gamma spectroscopy, neutron spectroscopy}
\end{abstract}
\section{Introduction}
The hypothesis that the energy which powers the Sun comes from thermonuclear reactions seems to be mainly due to \cite{Eddington:1920} and Aston. After the discovery of nuclear reactions by Rutherford in the twenties it became clear that only the enormous amount of energy stored in nuclei and released during fusion reactions could sustain the solar luminosity over a time span compatible with geological datings (\cite{Weizsacker:1938}, \cite{Bethe:1938}): in fact, in order to properly understand the chemical evolution and the stellar energy engine, it is fundamental to know precisely how light nuclei are converted into heavier ones.
According to current theories, the first nuclei were formed through a network of nuclear reactions in Big Bang nucleosynthesis (BBN), a few minutes after the Big Bang. BBN left our universe containing about 75\% hydrogen and 24\% helium by mass, with traces of lithium and deuterium. The composition of the present Universe is not very different from the primordial one, with the total mass of elements heavier than hydrogen and helium ("metals" according to the astronomers) at the level of a few percent. Stars fuse light elements into heavier ones in their cores, up to iron and nickel in the more massive stars.
The most important stellar properties that determine the evolutionary fate of a star are its mass and its composition (\cite{Rolfs1988}, \cite{Iliadis-Wiley-2015}): the larger the mass, the higher the temperature in the core. The composition influences which reactions dominate the burning processes.
When a low-mass star like the Sun runs out of hydrogen in the core, it becomes a red giant star, fusing H to He via the CNO cycle in a shell surrounding an inert He core. When the core temperature reaches 100 million K, the He nuclei in the core have sufficient kinetic energy to fuse into C (helium burning), forming \C{12} in a two-stage process. Subsequent fusion with another helium nucleus produces \nuc{16}{O} nuclei. This process, in symbols \ensuremath{\C{12}(\alpha,\gamma)\nuc{16}{O}}{}, is the main source of the carbon and oxygen found in the Universe, including that in our bodies, and represents in fact the "Holy Grail" of nuclear astrophysics
since the C/O ratio at the end of helium burning greatly affects the subsequent evolution of the star.
At some point, when the He in the core is exhausted, the stars start to burn He in a shell surrounding the inert C/O core, in addition to burning H to He in a shell surrounding the He burning region. This phase, referred to as the asymptotic giant branch (AGB), is characterised by thermal instabilities: at a given time the burning shells extinguish and the low-mass star will end its existence as a white dwarf, consisting mainly of C and O and supported by electron degeneracy pressure.
Massive stars evolve very differently from low-mass stars. After the end of a burning phase, the core contracts gravitationally, and the temperature increase can be sufficient to ignite the next and heavier nuclear fuel.
For masses larger than 11 \(M_\odot\), after undergoing He burning the core experiences further burning episodes referred to as C-, Ne-, O- and Si-burning. The duration of each subsequent nuclear burning phase decreases significantly, for two main reasons: first, each burning phase releases far less energy per unit mass than the previous one; second, an increasing fraction of the energy is radiated away by neutrinos. Therefore, while H burning may continue for many million years, C burning typically lasts hundreds of days and Si burning may be over in just one day.
After the last advanced burning stage (Si burning) the core consists mainly of iron isotopes: no more energy can be generated through fusion reactions. The core contracts and, when it exceeds the Chandrasekhar mass limit, collapses until the density of nuclear matter is reached. As a consequence of the neutron degeneracy pressure, the core rebounds and produces an outgoing shock wave. The wave heats and compresses the overlying layers of the star, consisting of successive shells of Si, O, Ne and C; thus further episodes of nucleosynthesis, referred to as explosive Si-, O-, Ne- and C-burning, take place.
The creation of elements heavier than iron occurs mainly through neutron capture processes, eventually followed by beta decays, in the so called \textit{s}~(slow)-process (\cite{Kappeler-2011-RMP}) and \textit{r}~(rapid)-process. The \textit{r}-process dominates in environments with higher free-neutron fluxes and produces heavier elements and more neutron-rich isotopes than the \textit{s}-process. Supernova explosions and neutron star mergers are potential sites for the \textit{r}-process.
The \textit{s}-process is slow in the sense that there is enough time for beta decays to occur before another neutron is captured: a network of reactions produces stable isotopes by moving along the valley of beta-decay stable isobars. This process primarily occurs within ordinary stars, particularly AGB stars, where the neutron flux is sufficient to cause neutron captures to recur every 10–100 years, much slower than for the \textit{r}-process, which requires 100 captures per second.
The key point for each of these reactions is the value of cross sections at the energies at which they take place in stellar environments.
For most stellar scenarios, the changes in the system are slower than the collision time between the ions or atoms inside the star, thus the temperature profile is well-defined: the thermonuclear reaction rate depends on the Maxwell-Boltzmann velocity distribution and on the energy dependence of the cross section $\sigma(E)$ (\cite{Rolfs1988}). Typical temperatures of low-mass main-sequence stars correspond to peak energies of the Maxwell-Boltzmann distribution of $k_B\,T \sim 0.9-90\ensuremath{\,\mathrm{keV}}{}$. For more massive stars during advanced burning stages, peak energies can be as high as a few MeV.
For charged-particle induced reactions these energies are typically well below the Coulomb barrier due to the electrostatic repulsion between nuclei, and the reactions proceed via the tunnel effect. As a consequence, the low values of the cross sections, ranging from pico- to femto-barn and even below, prevent their measurement in a laboratory at the Earth's surface, where the signal-to-background ratio is too small mainly because of cosmic rays. The observed energy dependence of the cross section at high energies is extrapolated to astrophysically relevant energies, leading to substantial uncertainties. In particular, the reaction mechanism might change, or unknown resonances might contribute and completely dominate the reaction rate at stellar energies.
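In standard notation (see e.g. \cite{Rolfs1988}), the two ingredients above combine into the thermonuclear reaction rate per particle pair,
\[
\langle \sigma v \rangle = \sqrt{\frac{8}{\pi \mu}}\,\frac{1}{(k_B T)^{3/2}} \int_0^{\infty} \sigma(E)\, E\, \exp\!\left(-\frac{E}{k_B T}\right) \mathrm{d}E ,
\]
with $\mu$ the reduced mass of the entrance channel, while the steep tunnelling dependence is factored out of the cross section by defining the astrophysical S-factor,
\[
\sigma(E) = \frac{S(E)}{E}\, e^{-2\pi\eta}, \qquad \eta = \frac{Z_1 Z_2 e^2}{\hbar v} \quad \text{(Gaussian units)},
\]
so that extrapolations to stellar energies are performed on the slowly varying $S(E)$ rather than on $\sigma(E)$ itself.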
In the nineties the LUNA collaboration proved that installing the experiments in a deep underground laboratory, the Gran Sasso National Laboratory, is a successful approach: for the first time, nuclear astrophysics measurements with very small counting rates, down to a few events per month, became a reality.
The high-current hydrogen and helium beams provided by the 50 kV (\cite{GREIFE1994327}) and, later on, by the LUNA-400 kV accelerator (\cite{Formicola03-NIMA}) allowed the investigation, for the first time at stellar energies, of the most important reactions responsible for hydrogen burning in the Sun, such as
$\nuc{3}{He}(\nuc{3}{He},2p)\nuc{4}{He}$
(\cite{Bonetti:1999yt}) and for the BBN such as the
$\nuc{2}{H}(\ensuremath{\mathrm{p}},\gamma)\nuc{3}{He}$
(\cite{Casella02-NPA,Mossa:2020qgj}).
Full descriptions of LUNA and of the several results obtained in 25 years of experimental activity can be found in recent review papers (\cite{Broggini18-PPNP,Cavanna18-IJMPA,BrogginiWP2019}).
Such achievements have motivated two proposals for similar facilities in China (\cite{juna_collaboration_progress_2016}) and in the United States (\cite{CASPAR}).
The importance of extending such precise studies to the processes relevant to the late and warmer stages of stellar evolution (post-main-sequence phases, helium and carbon burning) has motivated the LUNA collaboration to acquire a new and more powerful 3.5\,MV single-ended accelerator. The new machine will deliver ion beams of H$^+$, \He{4}$^+$, \C{12}$^+$ and \C{12}$^{++}$ in the energy range from 0.350 to 7\ensuremath{\,\mathrm{MeV}}{} with intensities of $100\,\mathrm{\mu{}A}$ to $1\,\mathrm{mA}$, depending on the ion species and on the energy.
In the following sections, first we will focus on the technical aspects which are important for an underground nuclear astrophysics experiment.
Then, the state of the art and the expected improvements from underground measurements are presented for some selected key processes of post-main-sequence stellar phases: in detail, \ensuremath{\C{13}(\alpha,\mathrm{n})\nuc{16}{O}}{} and \ensuremath{\nuc{22}{Ne}(\alpha,\mathrm{n})\nuc{25}{Mg}}{}, which are sources of neutrons for the s-process in asymptotic giant branch (AGB) stars and during the hydrostatic evolution of massive stars, and \ensuremath{\C{12}(\alpha,\gamma)\nuc{16}{O}}{} and \ensuremath{\C{12}+\C{12}}{}, key processes of helium and carbon burning, respectively.
In the conclusions, the commissioning phase of the new accelerator will be detailed, together with highlights about the exciting perspectives opened by the new facility in a larger time window scenario.
\section{The MV facility at Gran Sasso}
The MV facility will be hosted in the north side of Hall B in the Gran Sasso Laboratory and will consist of an accelerator room with concrete walls and a multistory building housing the control room and technical facilities. The concrete walls and ceiling (thickness of 80 cm) of the accelerator room will act as neutron shielding.
Nuclear astrophysics experiments require both high beam currents and a well-defined, stable beam energy: to perform reliable energy scans of the targets, the accelerator terminal voltage must be stable to $<1\ensuremath{\,\mathrm{keV}}{}$ over many hours and to $<0.1\ensuremath{\,\mathrm{keV}}{}$ over one hour. A precise energy value is mandatory because of the almost exponential energy dependence of the cross section induced by the tunnelling probability: a small fluctuation of the beam energy would cause a large uncertainty in the measured cross section. Since long data-taking times are expected for some reactions, the ion source must be able to run stably overnight without human intervention.
A 3.5\ensuremath{\,\mathrm{MV}}{} linear DC accelerator was specifically developed by High Voltage Engineering to meet the stringent requirements on beam intensity and stability (\cite{Sen2019}).
The machine will deliver ion beams into two different beam lines via a 35$^\circ$ switching analyzing magnet. Two independent target stations for solid and gas targets will be located at 2\,m distance from the analyzing magnet. The LUNA-MV accelerator is single-ended, \emph{i.\,e.}{} it has an ion source and an injector block located inside the accelerator tank in the high-voltage terminal.
The need for high-intensity proton beams, as well as for carbon ions in the 2$^{+}$ charge state, was the reason to prefer an electron cyclotron resonance (ECR) ion source for the accelerator.
The accelerator operates at a terminal voltage (TV) range of \range{300\ensuremath{\,\mathrm{kV}}{}}{3.5\ensuremath{\,\mathrm{MV}}{}}, while the ion source can operate at \range{30\ensuremath{\,\mathrm{kV}}{}}{40\ensuremath{\,\mathrm{kV}}{}}. In the present system, high-intensity beam currents must be maintained over a large dynamic range: considering a 1 mA current capability for a proton beam, the beam power can be as high as 3.5 kW.
To guarantee voltage stability over longer time periods ($>$1 h), a high-precision, low-temperature-coefficient ($<5$ ppm/\ensuremath{{}^\circ\mathrm{C}}{}) resistor chain is used to measure the terminal voltage.
Beam intensities on target for H, He and C ions are reported in Table~\ref{table_1}. Compared to previous Singletron accelerators, the LUNA-MV has improved specifications for terminal voltage stability and ripple ($10^{-5}$). Beam energy reproducibility is of the order of $10^{-4}$.
A detailed description can be found in \cite{Sen2019}.
\begin{table*}[h]
\caption{Beam intensity on target}
\begin{center}
\begin{tabular}{|c|c|}
\hline
Ion Species & Current (\textit{e}$\mu$A) \\
$ $ & \textit{TV~range}: \range{3.5}{0.5}\ensuremath{\,\mathrm{MV}}{} (\range{0.5}{0.3}\ensuremath{\,\mathrm{MV}}{}) \\
\hline
\nuc{1}{H}$^{+}$ & 1000 (500) \\
\nuc{4}{He}$^{+}$ & 500 (300) \\
\C{12}$^{+}$ & 150 (100) \\
\C{12}$^{++}$ & 100 (60) \\
\hline
\end{tabular}
\end{center}
\label{table_1}
\end{table*}
For practical reasons, targets for direct measurements of nuclear cross sections on stable nuclides are typically either in solid or in gaseous state. The basic aspects of such targets are similar for experiments underground and on the surface, but certain requirements are emphasized for experiments deep underground in order to fully exploit the advantages of the location.
In the case of a solid target, the beam energy loss occurs in a relatively small volume. The resulting power density, up to on the order of \range{10^2}{10^3}\,W/cm$^2$ at LUNA\,400{} if the beam is stopped in the target, requires the target to be cooled to avoid a temperature increase that would damage the target or accelerate beam-induced target degradation. For targets on an inert backing material, such as those produced by evaporation, sputtering or implantation, water cooling behind the target is often used to dissipate the heat. The maximum power densities attainable on target will increase with the next generation of underground accelerators, either because of higher beam energies at comparable intensities (such as the MV facility at Gran Sasso), or due to further increased beam intensities (cf.\ JUNA \cite{juna_collaboration_progress_2016}) compared to LUNA\,400{}. Efforts are underway to adopt and advance techniques from surface experiments, such as cooling for high-power targets (\cite{wolfgang_hammer_scorpion_1986}) or large-area reaction targets (\cite{chen_preparation_2020}), to overcome thermal limitations on the beam intensity in future underground experiments. Even with the best efforts in cooling, the performance of solid targets degrades under beam, which is seen for example in a reduction of target thickness or changes in the target stoichiometry. In the regime of low-energy nuclear astrophysics experiments, solid targets typically have to be replaced after an irradiation corresponding to an accumulated 10$^0$-10$^1$\,(particle)~Coulombs of beam on target. This is an important practical aspect for the use of massive shielding against environmental radiation in low-background measurements.
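The quoted power-density range follows from the beam power divided by the beam-spot area. A back-of-the-envelope sketch (the current, energy and spot size below are illustrative assumptions, not measured LUNA parameters):

```python
# Rough beam power-density estimate for a solid target fully stopping the beam.
beam_energy_eV = 400e3    # assumed 400 keV beam, fully stopped in the target
beam_current_A = 0.5e-3   # assumed 0.5 mA of singly charged ions
spot_area_cm2 = 0.5       # assumed beam-spot area on target

power_W = beam_energy_eV * beam_current_A   # eV * A = W for charge state 1
density = power_W / spot_area_cm2
print(f"beam power: {power_W:.0f} W -> {density:.0f} W/cm^2")
```

With these assumptions one obtains a few hundred W/cm$^2$, consistent with the \range{10^2}{10^3}\,W/cm$^2$ range quoted above.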
Compared to experiments on surface, where secondary cosmic radiation produced in the shielding materials leads to diminishing returns beyond a certain shielding thickness, much more massive shielding setups of lead and copper have been used at LUNA\,400{} (\cite{caciolli_ultra-sensitive_2009}), where, for experiments with solid targets, easy access to the target had to be secured (\cite{boeltzig_improved_2018}). More sophisticated, \emph{i.\,e.}{} larger and multi-layered, shielding configurations are foreseen in the future, enabled by an improved understanding of the relevant backgrounds and by the more spacious target station layout at the new MV facility. Target access requirements will remain central in future experiments that employ solid targets.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{GasTarget_2020_colors.png}
\caption{Differential pumping system schematic. The beam comes from the accelerator on the left, passes through the apertures AP3, AP2 and AP1, enters the target chamber and stops on the calorimeter.}
\label{fig:GasTarget}
\end{figure}
The effects of target degradation may be avoided, wherever possible, by choosing targets in gaseous form: a windowless gas target system offers stability over long data-taking periods, up to several weeks if needed. Another advantage is chemical purity: solid targets are rarely made of a single element, and possible changes in their stoichiometry must be continuously monitored during the running time.
The gas target system presently in use at the LUNA\,400{} accelerator is shown in Figure \ref{fig:GasTarget}. It consists of three differential pumping stages, the target chamber, the gas pipeline and a recycling system.
Three pumping stages produce a strong pressure gradient between the interaction chamber and the beamline.
A water-cooled collimator placed between adjacent pumping stages provides the correct gas flow and determines the pressure drop.
The gas target system can either recycle the gas or let it flow away.
The gas enters the interaction chamber close to the beam stop and flows into the first pumping stage, where 99.5\% of the gas is pumped away by a roots pump.
Approximately 0.5\% of the gas reaches the second pumping stage, where it is pumped by three turbo-molecular pumps.
A small remaining fraction of the gas flows into the third pumping stage and is pumped away by a turbo-molecular pump.
A roots pump collects the gas from the previous pumps and is itself connected to the roughing pump or the recycling pump, depending on the running mode.
The target volume, typically 10-40 cm long, is surrounded by the detectors and is delimited by the chamber walls, the calorimeter and the target chamber collimator.
The latter does not only collimate the beam, but also makes the pressure decrease steeply towards the first pumping stage.
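The steep pressure gradient produced by the pumping stages can be sketched with a minimal steady-state model, in which each pump removes the gas throughput admitted by the upstream aperture. The conductances and pumping speeds below are illustrative numbers, not the actual LUNA hardware values:

```python
# Minimal steady-state model of a three-stage differential pumping chain.
chamber_pressure = 1.0                  # mbar in the target chamber (assumed)
conductances = [0.5, 0.5, 0.5]          # l/s through apertures AP1, AP2, AP3 (assumed)
pump_speeds = [2000.0, 1500.0, 1000.0]  # l/s of the pumps on stages 1-3 (assumed)

p_up = chamber_pressure
for stage, (c, s) in enumerate(zip(conductances, pump_speeds), start=1):
    q = c * p_up        # gas throughput entering the stage (mbar l/s)
    p_up = q / s        # steady state: the pump removes what flows in
    print(f"stage {stage}: p = {p_up:.1e} mbar")
```

Even with these modest assumed values, the pressure drops by roughly three to four orders of magnitude per stage, which is the purpose of the design.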
The ionization of the target gas and the neutralization of the beam prevent a direct electrical reading of the beam current, so a power-compensation calorimeter with a constant temperature gradient is used to monitor the beam intensity (\cite{Ferraro-2018-EPJA}).
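The working principle of a power-compensation calorimeter can be sketched as follows: the electrical heating power needed to keep the temperature gradient constant decreases when the beam heats the calorimeter, and the difference equals the beam power. All numbers below are illustrative, not LUNA calibration values:

```python
# Beam current from a power-compensation calorimeter (illustrative numbers).
W_zero = 100.0       # W, heating power without beam (assumed)
W_run = 80.0         # W, heating power during the beam run (assumed)
E_beam_eV = 400e3    # beam energy in eV (assumed)
charge_state = 1     # singly charged ions assumed

beam_power = W_zero - W_run                      # W deposited by the beam
current_A = charge_state * beam_power / E_beam_eV
print(f"beam power {beam_power:.0f} W -> current {current_A * 1e6:.0f} uA")
```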
For the proper characterization of a windowless gas target, the density profile and the detection efficiency profile along the beam path must be known. The density profile is usually measured using a mock-up scattering chamber equipped with measurement ports for capacitive pressure gauges and thermoresistors. The efficiency profile is, in turn, measured by moving radioactive sources along the beam line. Another method is the resonance scan technique: the target system is filled with selected gases such as \nuc{14}{N} or \nuc{21}{Ne} and their narrow, strong resonances are excited with a proton beam of the proper energy. The resonance position is then moved along the target by changing the beam energy accordingly. Gas target setups usually require more massive detector shielding systems because of their larger dimensions.
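The resonance scan technique can be sketched with a one-line kinematic model: assuming a uniform effective stopping power in the gas, the beam slows down linearly with depth and a narrow resonance is excited where the local beam energy equals the resonance energy; raising the beam energy moves the resonance deeper into the target. The stopping power and resonance energy below are hypothetical numbers:

```python
# Resonance-scan sketch with assumed uniform gas density (illustrative values).
stopping = 0.5   # keV/cm, assumed effective energy loss per unit length
E_res = 278.0    # keV, hypothetical narrow-resonance energy

def resonance_position(E_beam):
    """Depth (cm) at which the beam has slowed down to E_res."""
    return (E_beam - E_res) / stopping

for E_beam in (280.0, 285.0, 290.0):
    print(f"E_beam = {E_beam:.0f} keV -> resonance at z = {resonance_position(E_beam):.0f} cm")
```

Recording the resonance yield as a function of beam energy thus maps the density and efficiency profiles along the target.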
The LUNA laboratory is protected from cosmic-ray induced effects by 1400 meters of dolomite rock. This rock overburden completely suppresses the hadronic and the soft electromagnetic components of cosmic rays. Muons are able to penetrate the mountain, but their flux is reduced by about six orders of magnitude with respect to the Earth's surface: this typically makes muon-induced radiation, such as spallation neutrons or cosmogenic unstable nuclides, negligible as well.
Long-lived radioisotopes, such as those of the natural \nuc{238}{U} and \nuc{232}{Th} decay chains or \nuc{40}{K}, are present in any laboratory and do not depend on depth, but rather on the radiopurity of rocks, buildings and detector materials. The induced gamma radiation can be mitigated by a suitable passive shielding surrounding the target and the detectors, usually consisting of selected low-background lead and freshly refined electrolytic copper. For the deep-underground setting of LUNA, a shielding of \range{15}{25}\,cm of lead with low \nuc{210}{Pb} content, lined on the inside with 5\,cm of electrolytic copper, has been found to give excellent background suppression, as shown in Figure \ref{fig:cac_shield}.
Impurities in the detector and target, on the other hand, must be minimized by proper material selection.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{BackgroundComparison.pdf}
\caption{$\gamma$-ray background spectra taken with an HPGe detector in a surface laboratory (red line), at Gran Sasso (blue line), and at Gran Sasso with a 15 cm lead shield (green line).}
\label{fig:cac_shield}
\end{figure}
From the point of view of neutron background, the underground location allows for a reduction of 3 orders of magnitude with respect to above-ground measurements (Figure~\ref{fig:back_comp}) even without any further shielding.
To further increase the sensitivity in view of the neutron-emitting reactions that are going to be studied at the MV facility, a dedicated material-selection study was performed to reduce the intrinsic background of detectors such as \He{3} counters.
We recall that a typical counter consists of a gas-filled tube with a high voltage applied between the anode and the cathode: a neutron passing through the tube will be captured by a \He{3} nucleus, producing a triton and a proton. These two particles ionize the surrounding gas atoms to create charges, which in turn ionize other gas atoms in an avalanche-like multiplication process.
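The energy released in the detection reaction, $Q = 764\ensuremath{\,\mathrm{keV}}{}$, is shared between the proton and the triton in inverse proportion to their masses (momentum conservation, with the incoming thermal-neutron momentum negligible), which fixes the characteristic peak structure of a \He{3} counter spectrum:

```python
# Energy sharing in the 3He(n,p)3H detection reaction.
Q = 764.0             # keV released in n + 3He -> p + t
m_p, m_t = 1.0, 3.0   # mass numbers of proton and triton

E_p = Q * m_t / (m_p + m_t)   # proton carries the larger share
E_t = Q * m_p / (m_p + m_t)
print(f"proton: {E_p:.0f} keV, triton: {E_t:.0f} keV (sum = full-energy peak)")
```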
\begin{figure}
\centering
\includegraphics[scale=0.7]{Comp_Neutrons.pdf}
\caption{(Colour online) Comparison of the neutron background measured with \He{3} counters: the black spectrum was measured in a surface laboratory. The red and blue spectra were measured in the LNGS underground laboratory with counters having stainless steel and aluminum cases, respectively.}
\label{fig:back_comp}
\end{figure}
Indeed, alpha-particle decays from uranium and thorium impurities in the counter cases represent the main source of intrinsic background. By selecting stainless steel cases instead of standard aluminum ones, a reduction of one order of magnitude was achieved, as shown in Fig.~\ref{fig:back_comp}: the blue and the red spectra were measured at Gran Sasso with stainless steel and aluminum counters, respectively. The black spectrum is the background in a surface laboratory with a stainless steel counter.
The new MV facility, together with the extremely low gamma and neutron background achieved by the LUNA collaboration, thus provides a unique sensitivity to assess the key processes of post-main-sequence stellar burning.
\section{Neutron sources for the s-process}
The basic idea of the s-process was born in the '50s, with the famous paper by \cite{Burbidge-1957-RMP}.
It consists of a series of ``slow'' neutron captures and $\beta$ decays proceeding on the neutron-rich side of the valley of stability, close to the stability line.
This process is responsible for the production of about half of the elemental abundances between iron and bismuth, as stated in \cite{Kappeler-2011-RMP}, the other part being produced by the rapid neutron capture process (r-process) and to a lesser extent by the proton capture processes.
The s-process takes place at low neutron fluxes, where the neutron-capture rate is lower than the $\beta$-decay rate of the resulting unstable nuclei.
Such conditions are satisfied in the helium-burning shell of low-mass thermally pulsing stars in the asymptotic giant branch (main s-process) or in the helium-burning core of massive stars in the Red Giant Branch (weak s-process).
The main s-process is mostly responsible for the production of elements with $90 \le A \le 209$ (\emph{i.\,e.}{} from zirconium to bismuth), while the weak s-process contributes to elements in the range $56 \le A \le 90$ (\emph{i.\,e.}{} from iron to zirconium).
It is well established that the \ensuremath{\C{13}(\alpha,\mathrm{n})\nuc{16}{O}}{} reaction ($Q_\mathrm{value}=2.216\ensuremath{\,\mathrm{MeV}}{}$) is the principal neutron source for the main s-process, while the major neutron source of the weak s-process is the \ensuremath{\nuc{22}{Ne}(\alpha,\mathrm{n})\nuc{25}{Mg}}{} reaction ($Q_\mathrm{value}= -0.478\ensuremath{\,\mathrm{MeV}}{}$).
The rate of both these reactions depends strongly on temperature, on the existence of excited states close to the reaction threshold, and on the initial abundances of the interacting species.
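The ``slow'' condition mentioned above can be made quantitative with typical order-of-magnitude s-process values (the neutron density, cross section and thermal energy below are generic assumed numbers, not taken from a specific model):

```python
import math

# Neutron-capture timescale for assumed typical s-process conditions:
# n_n = 1e7 cm^-3, sigma = 100 mb, kT = 30 keV.
n_n = 1e7              # neutron density, cm^-3 (assumed)
sigma = 100e-27        # capture cross section, cm^2 (100 mb, assumed)
kT_MeV = 0.030         # thermal energy (assumed)
m_n_MeV = 939.565      # neutron rest mass
c_cm_s = 2.998e10      # speed of light

v = c_cm_s * math.sqrt(2 * kT_MeV / m_n_MeV)     # thermal neutron speed, cm/s
tau_capture_yr = 1.0 / (n_n * sigma * v) / 3.15e7
print(f"neutron-capture timescale: ~{tau_capture_yr:.0f} yr")
```

Capture timescales of decades to centuries are far longer than most $\beta$-decay lifetimes along the path, so the flow stays close to the valley of stability.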
\subsection {The main s-process and the \texorpdfstring{\ensuremath{\C{13}(\alpha,\mathrm{n})\nuc{16}{O}}{}}{13C(alpha,n)16O} reaction}
\cite{Kappeler-1999-PPNP} attributed the formation of the main s-process elements to thermally pulsing stars in the asymptotic giant branch (TP-AGB) with mass $1.5\,M_{\odot} < M \leq 3\,M_{\odot}$.
More recently, \cite{Cristallo-2018-APJ} indicated a slightly broader mass interval, between $1.2$ and $4\,M_{\odot}$.
The structure of TP-AGB stars is organized in the following layers: a carbon oxygen core, a He-burning shell, a He-rich inter-shell, a H-burning shell and a H-rich envelope.
While the H-burning shell produces helium, the core contracts and heats up the base of the He-burning shell, whose energy production increases.
Eventually, the energy produced by the He-burning shell can no longer be radiated away efficiently and a thermonuclear runaway occurs, known as ``helium shell flash'' or ``thermal pulse''.
This results in an expansion of the He-rich inter-shell and the cooling of the H-burning shell, which extinguishes.
The He-burning shell is also affected by the expansion and cools down until extinction.
A new contraction then takes over, causing the re-ignition first of the H-burning shell and afterwards of the He-burning shell, until another thermal pulse occurs.
A reservoir of \C{13}, produced via the $\C{12}(\ensuremath{\mathrm{p}},\gamma)\nuc{13}{N}(\beta^+\nu)\C{13}$ reaction chain, forms the so-called \C{13} pocket at the interface between the He-rich inter-shell and the H-rich envelope.
As of today, the exact formation mechanism of such a pocket is still debated, as stated by \cite{Cristallo-2018-APJ}.
During this phase, which lasts some $10^4$ years, the \ensuremath{\C{13}(\alpha,\mathrm{n})\nuc{16}{O}}{} reaction takes place and provides neutrons for the main s-process.\\
In the paper by \cite{Cristallo-2018-APJ}, the authors claim that, in the most metal-rich stellar models with an almost solar composition, a small amount of \C{13} might survive and be engulfed into the convective zone generated by the incoming thermal pulse.
This scenario would affect several branching points along the s-process path, and excesses of \nuc{60}{Fe}, \nuc{86}{Kr}, \nuc{87}{Rb} or \nuc{96}{Zr} are expected compared to the radiative (low neutron density) \C{13} burst.
The unburned \C{13} left at the end of the interpulse and available to produce neutrons in the subsequent pulse depends on the rate of the \ensuremath{\C{13}(\alpha,\mathrm{n})\nuc{16}{O}}{} reaction.
\\
The relevant astrophysical temperature for this process is $\sim$ 0.1\,GK corresponding to a Gamow energy window between 140 and 250\ensuremath{\,\mathrm{keV}}{}. Indeed, the energy range of interest could be even larger as discussed in the paper by \cite{PhysRevC.87.058801}, since the S(E) factor is energy dependent.
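The quoted window follows from the standard narrow Gamow-peak approximation (e.g. $E_0 = 0.122\,(Z_1^2 Z_2^2 \mu T_9^2)^{1/3}$\ensuremath{\,\mathrm{MeV}}{} with the $1/e$ width $\Delta E = 0.237\,(Z_1^2 Z_2^2 \mu T_9^5)^{1/6}$\ensuremath{\,\mathrm{MeV}}{}), as the following sketch shows:

```python
# Gamow peak for 13C(alpha,n)16O at T = 0.1 GK (narrow-peak approximation).
z1, z2 = 2, 6
mu = 4.0 * 13.0 / (4.0 + 13.0)   # reduced mass in amu
t9 = 0.1                          # temperature in GK

E0 = 0.122 * (z1**2 * z2**2 * mu * t9**2) ** (1 / 3)   # peak centre, MeV
dE = 0.237 * (z1**2 * z2**2 * mu * t9**5) ** (1 / 6)   # 1/e full width, MeV
print(f"Gamow window: {1e3*(E0 - dE/2):.0f}-{1e3*(E0 + dE/2):.0f} keV "
      f"(centre {1e3*E0:.0f} keV)")
```

The result, roughly \range{150}{250}\ensuremath{\,\mathrm{keV}}{} centred near 200\ensuremath{\,\mathrm{keV}}{}, is consistent with the window quoted above.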
In Figure \ref{fig:scheme}, the level scheme of \ensuremath{\C{13}(\alpha,\mathrm{n})\nuc{16}{O}}{} nuclear reaction process is shown. The excited states of interest for AGB nucleosynthesis are highlighted in green and red.
\begin{figure}
\centering
\includegraphics[scale=0.5]{level_scheme.png}
\caption{(Colour online) Schematic diagram adapted from \cite{Cristallo-2018-APJ} of the \ensuremath{\C{13}(\alpha,\mathrm{n})\nuc{16}{O}}{} nuclear reaction process, together with the competing exit channel $\nuc{17}{O}+\gamma$. The excited states of interest for AGB nucleosynthesis are highlighted in green.}
\label{fig:scheme}
\end{figure}
In particular, the green levels are broad states which must be taken into account in any \ensuremath{\C{13}(\alpha,\mathrm{n})\nuc{16}{O}}{} cross section evaluation in the astrophysical region of interest. These are the (1/2)$^+$ near-threshold state and the (3/2)$^+$ state at $E_{x} = 7239\ensuremath{\,\mathrm{keV}}{}$.
It is important to mention that the energy of the near-threshold state is debated: \cite{Ajzenberg-Selove1986} placed this state below the threshold, at $E = -(3 \pm 8)\ensuremath{\,\mathrm{keV}}{}$ with respect to the $\alpha$ threshold, while a recent study by \cite{Faesterman15} deduced a positive value of $E = (4.7 \pm 3)\ensuremath{\,\mathrm{keV}}{}$.\\
\subsubsection {State-of-the-art}
A considerable number of measurements of the \ensuremath{\C{13}(\alpha,\mathrm{n})\nuc{16}{O}}{} cross section has been carried out over the past 45 years.
In the following we focus on the most relevant direct and indirect measurements.
Among direct measurements:
\begin{itemize}
\item \cite{Drotleff93} measured the cross section of the \ensuremath{\C{13}(\alpha,\mathrm{n})\nuc{16}{O}}{} reaction in the \range{370}{1000}\ensuremath{\,\mathrm{keV}}{} energy range with \He{3} proportional counters embedded in a moderating polyethylene matrix. This dataset still contains the lowest-energy point ever reached in a direct measurement. The low-energy points reveal an S-factor enhancement, possibly due to the $1/2^+$ sub-threshold resonance mentioned by \cite{Ajzenberg-Selove1986};
\item \cite{Brune} used \He{3} counters to measure the resonances of the \ensuremath{\C{13}(\alpha,\mathrm{n})\nuc{16}{O}}{} reaction, at $E_\alpha = 656$ and $802\ensuremath{\,\mathrm{keV}}$:
the authors concluded that the resonance strengths for these two states are too weak, compared to the non-resonant contribution, to affect the stellar reaction rates;
\item \cite{Hariss} measured the absolute cross section of the \ensuremath{\C{13}(\alpha,\mathrm{n})\nuc{16}{O}}{} reaction in the energy range $E = \range{0.8}{8}\ensuremath{\,\mathrm{MeV}}{}$ in steps of 10\ensuremath{\,\mathrm{keV}}{} with a setup similar to that of Drotleff.
The main aim of the measurement was the geoneutrino background subtraction required by neutrino experiments such as Borexino and KamLAND, as explained in the paper by \cite{Araki2005}. An overall uncertainty of 4\% was achieved;
\item \cite{Heil} promoted a new study of the \ensuremath{\C{13}(\alpha,\mathrm{n})\nuc{16}{O}}{} cross section in the energy range $E = \range{420}{900}\ensuremath{\,\mathrm{keV}}$. Heil used a different approach, employing an n-$\gamma$ converter consisting of a Cd-doped paraffin sphere surrounded by 42 $\mathrm{BaF}_2$ $\gamma$ detectors; a neutron converter was installed in the central hole. A detailed uncertainty analysis is described in the paper. The authors identified the change of target stoichiometry caused by build-up during beam irradiation as the main source of systematic error. At higher energies the overall uncertainty could be reduced to the level of 5\%;
\item Recent measurements at high energy were performed by \cite{Febbraro}, covering the same energy range as Harissopulos.
They improved the precision and accuracy by means of a setup sensitive to the neutron energies, measuring also the excited state transitions via secondary $\gamma$-ray detection. With this setup, they discriminated neutrons emitted from different energy groups and they could measure the individual partial cross sections of the \ensuremath{\C{13}(\alpha,\mathrm{n})\nuc{16}{O}}{} reaction to the ground state and second excited state of the \nuc{16}{O} final nucleus.
\end{itemize}
At low energies, the uncertainties of direct measurements are larger than 50\%: they are dominated by low counting statistics caused by an unfavorable signal-to-noise ratio.\\
Moreover, going down in energy, direct measurements face the steep drop of the cross section due to the Coulomb barrier and the growing influence of the electron-screening effect.
For this reason, complementary indirect studies have been developed to better constrain the cross section of this neutron source in the energy region relevant for astrophysics. These measurements were mostly aimed at determining the spectroscopic factor and/or the asymptotic normalization coefficient (ANC) of the near-threshold $1/2^+$ level of \nuc{17}{O}, which represents the largest source of uncertainty at low energies.
\cite{Kubono2003} evaluated a spectroscopic factor $S_\alpha = 0.01$, but data were reanalysed by \cite{Keeley} indicating a factor of 40 larger contribution.
The ANC method was applied for the first time in the work by \cite{PhysRevLett.97.192701}, which used the $\nuc{6}{Li}(\C{13},\mathrm{d})\nuc{17}{O}$ sub-Coulomb transfer reaction. These results were recently revisited in the paper by \cite{PhysRevC.91.048801_Avila}.\\
Other indirect measurements were obtained with the Trojan Horse Method (THM): in this approach projectiles (or targets) are selected and described as clusters of two particles in quasi-free kinematics. One cluster is involved in the reaction, while the other, called the spectator nucleus ``s'', is emitted without interacting with the system. For further information on the method see e.g. \cite{Tumino}. Using this technique, the $\C{13}(\nuc{6}{Li}, \mathrm{n}\,\nuc{16}{O})\mathrm{d}$ reaction was studied in quasi-free kinematic conditions (the deuteron inside the \nuc{6}{Li} beam is considered a spectator to the three-body reaction), as described in \cite{LaCognata2013}. This work covered an energy range between $-0.3$ and $1.2\ensuremath{\,\mathrm{MeV}}{}$ and allowed the study of the near-threshold resonance at $E_x = 6356\ensuremath{\,\mathrm{keV}}$. In general, THM results need to be normalized to selected direct data and their uncertainty strongly depends on the choice of the reference direct measurements: in the first THM analysis by \cite{LaCognata2013}, data were scaled to the astrophysical S-factor recommended by Heil \emph{et al.}{} in the energy region between $\sim$ 0.6 and 1.2\ensuremath{\,\mathrm{MeV}}{}. The resulting THM S-factor was in good agreement with the direct ones, but with a squared Coulomb-modified ANC of $(7.7 \pm 0.3)\ensuremath{\,\mathrm{fm}}^{-1}$, not in agreement with independent assessments of the ANC, whose weighted average is $(3.9 \pm 0.5)\ensuremath{\,\mathrm{fm}}^{-1}$.
After the new evaluation of the near-threshold resonance energy by \cite{Faesterman15}, which set its center at $4.7\ensuremath{\,\mathrm{keV}}{}$ above the \C{13}–$\alpha$ threshold, the THM data were re-analyzed by \cite{Trippella}, normalizing the experimental data to the ANC of the threshold resonance obtained by \cite{PhysRevC.91.048801_Avila}. Trippella obtained an ANC value of $(3.6 \pm 0.7)\ensuremath{\,\mathrm{fm}}{}^{-1}$, in agreement with the literature.\\
Data from the most recent works (direct and indirect methods) are shown in Figure \ref{fig:SOA_13C}.
\begin{figure}
\centering
\includegraphics[scale=0.75]{State_of_art_Frontiers.pdf}
\caption{(Colour online) Selection of the most recent \ensuremath{\C{13}(\alpha,\mathrm{n})\nuc{16}{O}}{} measurements. Among the direct measurements, the Drotleff and Heil data are shown, indicated by black and red triangles, respectively. The solid red curve indicates the R-matrix extrapolation by Heil. The most recent indirect THM measurement by Trippella et al.\ is indicated by the green shaded area, with the central value given by the green curve. The Gamow windows for two different stellar scenarios are also drawn.}
\label{fig:SOA_13C}
\end{figure}
Both direct measurements (high uncertainties at low energy and a large scatter in absolute values among datasets) and indirect measurements (e.g. the discrepancy in the spectroscopic factor evaluation and the uncertain normalization of the THM) clearly indicate that more direct data with an overall uncertainty of about 10$\%$ are mandatory, both at low and at high energy.
\subsubsection{The LUNA direct measurement}
Taking advantage of the low environmental background of LNGS and of the intense and stable alpha beam provided by the LUNA\,400{} accelerator, the LUNA collaboration has recently devoted a major effort to the measurement of the \ensuremath{\C{13}(\alpha,\mathrm{n})\nuc{16}{O}}{} cross section at low energy, with the goal of reaching an overall uncertainty near 10\%.
A detector based on 18 \He{3} counters arranged in a polyethylene moderator has been developed in order to maximise the detection efficiency.
The \C{13} targets used during the measurement at LUNA were produced by evaporating 99\% isotopically enriched \C{13} powder onto tantalum backings, using the evaporator installed at the Atomki nuclear research institute (Debrecen, Hungary). In the following, the key points of the LUNA experiment are summarized.
As already mentioned, the installation of the accelerator in the LNGS underground laboratory allows a neutron background reduction of 3 orders of magnitude with respect to above-ground measurements. Moreover, special attention was paid to reducing the intrinsic $\alpha$-particle background of the detectors.\\
A further background reduction was achieved by acquiring the raw preamplifier signals of the detectors with CAEN V1724 digitizers and rejecting alpha signals with the pulse-shape discrimination analysis described in the paper by \cite{Balibrea}. This allowed an overall background of about 1 count/h in the whole detector to be reached, 2 orders of magnitude lower than in previous experiments performed in surface laboratories, as described in the paper by \cite{Csedreki_det}.\\
Possible beam-induced background sources were investigated by directing the alpha beam onto blank tantalum backings. The neutron detection rate was compatible with the background measurement, showing that the in-beam background is negligible.\\
Monitoring target degradation under the intense alpha beam is crucial during the cross section measurement performed at LUNA. The well-known NRRA (Nuclear Resonant Reaction Analysis) technique is not applicable, due to the lack of resonances in the dynamic energy range of the accelerator.
For this reason a new method of analysis was developed. \\
Data taking at LUNA consisted of long $\alpha$-beam runs with accumulated charges of $\approx 1\ensuremath{\,\mathrm{C}}$ per run, interspersed with short proton-beam runs with the moderator opened and the HPGe detector in close geometry, with typical accumulated charges of at most $0.2\ensuremath{\,\mathrm{C}}{}$. During these proton runs, the target degradation can be checked by performing a gamma-ray shape analysis on the peak from direct-capture de-excitation to the ground state of the \ensuremath{\C{13}(\p,\gamma)\nuc{14}{N}}{} reaction with the HPGe detector.\\
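The idea behind the shape analysis can be sketched as follows: the counts per energy bin of the direct-capture peak follow $\sigma(E)/\varepsilon(E)$ as the beam slows down inside the target, so a thinner target shortens the peak and reduces the integrated yield. The sketch below assumes a constant S-factor and constant stopping power, with illustrative target thicknesses:

```python
import math

def sigma(e_keV, z1=1, z2=6, mu=1.0 * 13.0 / 14.0):
    """p + 13C cross section up to a constant S-factor (Gamow factor only)."""
    two_pi_eta = 31.29 * z1 * z2 * math.sqrt(mu / e_keV)
    return math.exp(-two_pi_eta) / e_keV

def yield_integral(e_beam, thickness_keV, steps=1000):
    """Integrate sigma(E) over the energy interval probed inside the target
    (constant stopping power folded into the overall normalization)."""
    de = thickness_keV / steps
    return sum(sigma(e_beam - (i + 0.5) * de) * de for i in range(steps))

fresh = yield_integral(300.0, 20.0)   # fresh target: assumed 20 keV thick
worn = yield_integral(300.0, 12.0)    # degraded target: assumed 12 keV thick
print(f"yield ratio degraded/fresh: {worn / fresh:.2f}")
```

Comparing the measured peak shape and yield between successive proton runs thus tracks the target thickness without relying on a resonance.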
\begin{comment}
It is based on the fit of the primary peak of the direct capture de-excitation to the ground state of the \ensuremath{\C{13}(\p,\gamma)\nuc{14}{N}}{} reaction using the HPGe detector.The shape of this peak is due to the slowdown of beam projectiles going deeper and deeper in the target: the number of $\gamma-$rays emitted per unit of charge is proportional to the cross section $\sigma(E)$ and to the inverse of stopping power $\epsilon(E)$. These quantities are both energy dependent. Consequently, in each bin of the acquired spectra, the number of counts detected, mimic the energy dependence of the $\sigma(E)$ and $\epsilon(E)$.
\end{comment}
Further information and details can be found in the paper by \cite{Ciani}.\\
Thanks to the unprecedented background reduction for this kind of direct measurement and to the novel approach to monitor target degradation, it was possible to measure the experimental yield of the \ensuremath{\C{13}(\alpha,\mathrm{n})\nuc{16}{O}}{} reaction in the laboratory energy range from 400\ensuremath{\,\mathrm{keV}}{} down to 305\ensuremath{\,\mathrm{keV}}{}, 40\ensuremath{\,\mathrm{keV}}{} lower than existing data in the literature: for the first time the LUNA collaboration measured the cross section with a direct technique inside the Gamow window, reaching an unprecedented overall uncertainty ($< 20\%$).
The final results and their astrophysical implications will be published by the end of 2020.\\
The LUNA collaboration is planning to extend the measurement of the \ensuremath{\C{13}(\alpha,\mathrm{n})\nuc{16}{O}}{} reaction to higher energies at the new MV facility in the LNGS laboratory. This will give the unique possibility to provide a complete dataset over a wide energy range and to avoid re-normalization to other datasets with unknown systematic uncertainties.\\
\subsection {The weak s-process and the \texorpdfstring{\ensuremath{\nuc{22}{Ne}(\alpha,\mathrm{n})\nuc{25}{Mg}}{}}{22Ne(alpha,n)25Mg} reaction}
About half of the elements between iron and yttrium ($56 \lesssim A \lesssim 90$) are produced via the weak s-process in massive stars with initial mass $M > 8 M_{\odot}$ (\cite{Kappeler-RMP-2011}).
In such stars, \nuc{22}{Ne} is a by-product of He-burning starting from preexisting CNO isotopes.
The reaction \ensuremath{\nuc{22}{Ne}(\alpha,\mathrm{n})\nuc{25}{Mg}}{} has a negative Q-value of $-478\ensuremath{\,\mathrm{keV}}$, and requires relatively high temperatures to be ignited.
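The negative Q-value fixes the laboratory threshold energy through simple nonrelativistic two-body kinematics, $E_\mathrm{th} = -Q\,(m_\alpha + m_\mathrm{Ne})/m_\mathrm{Ne}$. A back-of-the-envelope sketch using tabulated atomic masses:

```python
# Laboratory threshold energy for the endothermic 22Ne(alpha,n)25Mg reaction
# from nonrelativistic two-body kinematics (relativistic corrections are
# negligible at this level of precision).
Q = -478.0                           # keV
m_alpha, m_ne22 = 4.0026, 21.9914    # atomic masses in amu

E_th = -Q * (m_alpha + m_ne22) / m_ne22
print(f"E_alpha(lab) threshold ~ {E_th:.0f} keV")
```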
At the base of the convective envelope around the He core of massive stars, the temperature is sufficiently high ($>$ 0.25\,GK) to make this reaction a relevant source of neutrons for the weak s-process until core He-burning extinguishes (\cite{Peters-APJ-1968,Couch-APJ-1974,Lamb-APJ-1977,Prantzos-AA-1990,Raiteri-APJ-1991b}).
Its effectiveness as a neutron source, though, depends also on the cross section of the competing reaction, the \ensuremath{\nuc{22}{Ne}(\alpha,\gamma)\nuc{26}{Mg}}{}.
When core He-burning comes to an end, \nuc{22}{Ne} is still rather abundant (about 1\% by mass, as claimed in the paper by \cite{Pignatari-APJ-2010}) and the \ensuremath{\nuc{22}{Ne}(\alpha,\mathrm{n})\nuc{25}{Mg}}{} reaction is reactivated during shell C-burning (\cite{Raiteri-APJ-1991a}) at a temperature of about 1\,GK.
At this stage, the \ensuremath{\C{12}(\C{12},\alpha)\nuc{20}{Ne}}{} process yields $\alpha$ particles (\cite{Arnett-APJ-1969}) and even larger neutron fluxes are provided as a consequence of the higher temperature.
Besides the broad interest as the main neutron source of the weak s-process, it is worth mentioning some contribution also to the main s-process in low-mass ($M<3M_{\odot}$) AGB stars, during thermal pulses (\cite{Gallino-APJ-1988,Hollowell-APJ-1988}), and in intermediate-mass ($4M_\odot<M<8M_{\odot}$) AGB stars (\cite{Bisterzo-APJ-2014,Bisterzo-MNRAS-2015}), although the predicted abundances of \nuc{86}{Kr}, \nuc{87}{Rb} and \nuc{96}{Zr} are at variance with observations (\cite{Lugaro-APJ-2003,GarciaHernandez-SCI-2006,GarciaHernandez-AA-2007,GarciaHernandez-APJ-2009,VanRaai-AA-2012}).
\subsubsection{State-of-the-art}
Considering the weak s-process during core He-burning, the low-energy part of the Gamow window of the \ensuremath{\nuc{22}{Ne}(\alpha,\mathrm{n})\nuc{25}{Mg}}{} reaction extends down to the boundary of the $(\alpha,\mathrm{n})$ threshold, located at $E_{\alpha,\mathrm{lab}}=575\ensuremath{\,\mathrm{keV}}$.
At such low energies, measurements have so far suffered from low signal rates and high background, mainly because of the small cross section.
For this reason, different groups succeeded in directly studying the resonances only down to $E_{\alpha,\mathrm{lab}} = 830\ensuremath{\,\mathrm{keV}}$.
Attempts to study the lower-energy resonances by means of indirect methods often yielded inconsistent results. In the following we summarize the most relevant direct studies of this reaction.
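For orientation, the energy window discussed here can be reproduced with the standard non-resonant Gamow-peak formulas; the short Python sketch below uses the usual analytic approximation (numerical constants are the textbook values, results are illustrative only):

```python
def gamow_peak(z1, z2, mu, t9):
    """Gamow peak energy E0 and 1/e width Delta (both in MeV) for a
    non-resonant charged-particle reaction at temperature t9 (in GK),
    using the standard analytic approximation."""
    e0 = 0.1220 * (z1**2 * z2**2 * mu * t9**2) ** (1.0 / 3.0)
    delta = 0.2368 * (z1**2 * z2**2 * mu * t9**5) ** (1.0 / 6.0)
    return e0, delta

# 22Ne + alpha: Z1 = 10, Z2 = 2, reduced mass in amu
mu = 4.0 * 22.0 / 26.0
e0, delta = gamow_peak(10, 2, mu, 0.25)   # core He burning, T ~ 0.25 GK
e0_lab = e0 * 26.0 / 22.0                 # center-of-mass -> alpha lab energy
print(f"E0 ~ {e0*1e3:.0f} keV (cm), ~{e0_lab*1e3:.0f} keV (lab), Delta ~ {delta*1e3:.0f} keV")
```

The low-energy edge of this window, roughly $E_0 - \Delta/2$, indeed falls near the $(\alpha,\mathrm{n})$ threshold region quoted above.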
\begin{figure}
\centering
\includegraphics[width=\textwidth]{X4R9771_x4.pdf}
\caption{A subset of the previous measurements of the \ensuremath{\nuc{22}{Ne}(\alpha,\mathrm{n})\nuc{25}{Mg}}{} reaction cross section. Data retrieved from the EXFOR database version of October 8, 2020. Blue circles are upper limits.}
\label{fig:22Ne(A,N)_XS}
\end{figure}
Back in the '60s, \cite{Ashery-NPA-1969} discovered that the reaction proceeds through many resonances in the compound nucleus.
Other experimental studies at about 1\ensuremath{\,\mathrm{MeV}}{} and above are due to \cite{Haas-PRC-1973}, \cite{Mak-NPA-1974} and \cite{Wolke-1989-ZPA}.
\cite{Harms-PRC-1991} investigated the energy range between $E_{\alpha,\mathrm{lab}}=0.73$ and 2.10\ensuremath{\,\mathrm{MeV}}{} with a windowless, recirculating gas target system and two \nuc{3}{He} ionization chambers in close geometry.
The resonance at $E_{\alpha,\mathrm{lab}}=830\ensuremath{\,\mathrm{keV}}$ was clearly detected but
it was not possible to show the existence of resonances at lower energies.
Soon after, \cite{Drotleff-APJ-1993} explored a lower energy range using the same gas target and an improved 4$\pi$ detector including two concentric circles of eight \nuc{3}{He} counters in a polyethylene moderator.
Despite the improved sensitivity, no new low-energy resonances were observed in this experiment.
\cite{Giesen-NPA-1993} performed a direct measurement with implanted \nuc{22}{Ne} targets to search for low-energy resonances.
The background from \ensuremath{\nuc{11}{B}(\alpha,\mathrm{n})\nuc{14}{N}}{}, however, limited the sensitivity at lower energies.
At the same time they investigated the excited levels with natural parity in \nuc{26}{Mg} thanks to an indirect technique, the $\alpha$-transfer.
Later, \cite{Jaeger-PRL-2001} developed a new detector with twelve \nuc{3}{He} counters arranged in an optimized geometry.
This upgrade made it possible to achieve a sensitivity of $\sim$ 10\,pb and to constrain the strength of the $E_{\alpha,\mathrm{lab}}= 830\ensuremath{\,\mathrm{keV}}$ resonance to $\omega \gamma = (118 \pm 11)\ensuremath{\,\mathrm{\mu{}eV}}$. The upper limit on the tentative resonance at $E_{\alpha,\mathrm{lab}} = 633\ensuremath{\,\mathrm{keV}}$ was significantly lowered.
Based on these results, \cite{Jaeger-PRL-2001} calculated the reaction rate under the assumption that the strength of the hypothetical resonance at $E_{\alpha,\mathrm{lab}} = 633\ensuremath{\,\mathrm{keV}}$ was at 10\% of its observed upper limit.
However, the occurrence of such a resonance
was ruled out by \cite{Longland-PRC-2009}, who demonstrated that the corresponding excited state at $E_x=11150\ensuremath{\,\mathrm{keV}}$ in \nuc{26}{Mg}
has unnatural parity.
At that time it was clear that only a very low-background setup in an underground laboratory could make a direct investigation of the lower-energy resonances possible.
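The link between a measured resonance strength and the stellar rate is the standard narrow-resonance formula; the Python sketch below (illustrative only, using the Jaeger et al. strength quoted above as input) shows how small the resulting rate is at He-burning temperatures:

```python
import math

def narrow_resonance_rate(mu, t9, er_mev, wg_mev):
    """N_A<sigma v> (cm^3 s^-1 mol^-1) for a single isolated narrow
    resonance: standard formula with resonance energy er_mev and
    strength wg_mev, both in MeV and in the center-of-mass frame."""
    return 1.5399e11 / (mu * t9) ** 1.5 * wg_mev * math.exp(-11.605 * er_mev / t9)

mu = 4.0 * 22.0 / 26.0        # reduced mass of alpha + 22Ne in amu
er_cm = 0.830 * 22.0 / 26.0   # E_lab = 830 keV -> ~702 keV center of mass
wg = 118e-12                  # (118 +- 11) ueV expressed in MeV
rate = narrow_resonance_rate(mu, 0.25, er_cm, wg)
print(f"rate at T = 0.25 GK: {rate:.2e} cm^3 s^-1 mol^-1")
```

The exponential factor makes the rate extremely sensitive to the resonance energy, which is why undetected resonances below 830\ensuremath{\,\mathrm{keV}}{} could change the rate substantially.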
The focus then moved to the evaluation of the reaction rate and its implications, mostly using direct cross-section measurements at relatively high energy and indirect data.
\cite{Longland-PRC-2012} used a sophisticated statistical approach to calculate the \ensuremath{\nuc{22}{Ne}(\alpha,\mathrm{n})\nuc{25}{Mg}}{} reaction rate, including a careful treatment of the uncertainties. This led to a reduction of the uncertainties on calculated rates and raised the need for new, more precise and more sensitive measurements.
\cite{Bisterzo-MNRAS-2015} estimated the impact of the \ensuremath{\nuc{22}{Ne}(\alpha,\mathrm{n})\nuc{25}{Mg}}{} uncertainty on the isotopic abundances close to and within the branching of the s-path for main s-process.
They provided a new evaluation of the reaction rate that was a factor of 2 higher than that of \cite{Longland-PRC-2012}. Even though this new rate was still able to reproduce the contribution of the s-only isotopes from the main s-process within the solar uncertainties, \cite{Bisterzo-MNRAS-2015} underlined that low-energy resonances could cause a sizeable change.
In the following years several indirect studies attempted to improve the knowledge of this reaction. A new experimental investigation by \cite{Talwar-PRC-2016} used $\alpha$ inelastic scattering to identify the important resonances and the $\alpha$ transfer technique to indirectly measure their width.
The resulting \ensuremath{\nuc{22}{Ne}(\alpha,\mathrm{n})\nuc{25}{Mg}}{} reaction rate was close to the rate in \cite{Longland-PRC-2012}.
Soon after, \cite{Massimi-PLB-2017} studied neutron-capture reactions on \nuc{25}{Mg}, observing several excited states of \nuc{26}{Mg}, in particular one at $E_{x}=11.112\ensuremath{\,\mathrm{MeV}}$. In the same paper, an R-matrix analysis was performed to assign unambiguous spin and parity values to the excited states in \nuc{26}{Mg} and to calculate upper limits on the rates of the \ensuremath{\nuc{22}{Ne}(\alpha,\mathrm{n})\nuc{25}{Mg}}{} and \ensuremath{\nuc{22}{Ne}(\alpha,\gamma)\nuc{26}{Mg}}{} reactions. The authors also studied the impact of these new rates on the evolution of stars with initial mass $M$ between 2 and $25 M_{\odot}$.
It was observed that for a $25 M_{\odot}$ star, the uncertainty of the \ensuremath{\nuc{22}{Ne}(\alpha,\mathrm{n})\nuc{25}{Mg}}~ reaction rate was responsible for large differences in the weak s-process abundances, up to a factor of 50 in the Sr region. Noticeable changes were also found in intermediate-mass AGB models (IMS-AGBs, $3 < M/M_{\odot} < 7$) with an increase of $\sim 50\%$ in the abundances of Y and La.
The continued interest in this reaction is demonstrated by two very recent experimental studies by \cite{Ota-PLB-2020} and \cite{Jayatissa-PLB-2020} with $\alpha$-transfer reactions:
\cite{Ota-PLB-2020} studied the $\nuc{22}{Ne}(\nuc{6}{Li},\mathrm{d})\nuc{26}{Mg}$ in inverse kinematics, detecting outgoing deuterons and \nuc{25,26}{Mg} in coincidence.
In addition \cite{Jayatissa-PLB-2020} studied the $\nuc{22}{Ne}(\nuc{7}{Li},\mathrm{t})\nuc{26}{Mg}$ reaction.
The new evaluation of the reaction rate, based on spin-parity assignments by \cite{Jayatissa-PLB-2020} combined with data from \cite{Ota-PLB-2020}, resulted in lower rates than previous evaluations, especially at low temperatures (see Figure~\ref{fig:22ne_an_25mg_relative_rates}). The lower rate is also the result of excluding an excited state at $E_x=11.112\ensuremath{\,\mathrm{MeV}}$, corresponding to $E_{\alpha,\mathrm{lab}}=598\ensuremath{\,\mathrm{keV}}$, observed by \cite{Massimi-PLB-2017} and not observed in these studies.
In conclusion, the thermonuclear rate of this reaction is still largely uncertain: several evaluations, based on theoretical considerations and on direct and indirect measurements, are present in the literature (see Figure \ref{fig:22ne_an_25mg_relative_rates}), differing by up to a factor of 5 in the temperature range relevant to the s-process in core He burning.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{22ne_an_25mg_relative_rate_labels.pdf}
\caption{A subset of the \ensuremath{\nuc{22}{Ne}(\alpha,\mathrm{n})\nuc{25}{Mg}}~ reaction rate evaluations, relative to \cite{Longland-PRC-2012}. Solid lines refer to the evaluations reported in the JINA REACLIB database (\cite{Cyburt-AJSS-2010}). In particular: cf88=\cite{Caughlan-ADNDT-1988}, nacr=\cite{Angulo-NPA-1999}, rath=\cite{Rauscher-ADNDT-2000}, ths8=\cite{Cyburt-AJSS-2010}, il10=\cite{Iliadis-NPA-2010}, trc8=REFIT:\cite{Cyburt-AJSS-2010}, li12=\cite{Longland-PRC-2012}.
Dashed lines: jae01=\cite{Jaeger-PRL-2001}, ota20=\cite{Ota-PLB-2020}.}
\label{fig:22ne_an_25mg_relative_rates}
\end{figure}
Low-energy resonances in the \ensuremath{\nuc{22}{Ne}(\alpha,\mathrm{n})\nuc{25}{Mg}}~ reaction below $E_{\alpha,\mathrm{lab}} = 830\ensuremath{\,\mathrm{keV}}$ are expected, based on known levels in \nuc{26}{Mg}, but no such resonances have been directly observed yet. Nevertheless, they might contribute significantly to the reaction rate around 0.2\,GK and cause sizeable changes in the prediction of weak s-process abundances.
The direct measurement of the \ensuremath{\nuc{22}{Ne}(\alpha,\mathrm{n})\nuc{25}{Mg}}~ reaction cross section will be carried out at the new MV facility at LNGS (\cite{Guglielmetti-EPJWC-2014,Prati-NICXV-2019}), using a windowless gas target (see Fig.~\ref{fig:GasTarget}) of enriched \nuc{22}{Ne}.
Such an experiment could provide precise and accurate cross-section measurements down to $E_{\alpha,\mathrm{lab}} \sim 600\ensuremath{\,\mathrm{keV}}$.
Most of the background is expected to come from the \ensuremath{\nuc{11}{B}(\alpha,\mathrm{n})\nuc{14}{N}}{} reaction, as already reported by past experiments; a proper reduction of the contaminants, combined with the development of an optimized detector setup, therefore poses a crucial challenge.
SHADES (Scintillator-He3 Array for Deep underground Experiments on the S-process) is an ERC starting grant (Grant agreement ID: 852016), recently awarded to realize a new setup for the measurement of the \ensuremath{\nuc{22}{Ne}(\alpha,\mathrm{n})\nuc{25}{Mg}}~ reaction at energies of astrophysical interest.
SHADES includes the development of a novel neutron detector and a gas target to be used at LUNA.
The detector combines an array of high-efficiency \nuc{3}{He} counters with liquid scintillators, which act as moderators for the reaction neutrons while at the same time providing information on the neutron energy.
The combination of \nuc{3}{He} tubes and scintillators, together with recently studied signal-processing techniques (see \cite{Balibrea}), will limit the background from external and internal sources, as well as the beam-induced background, to acceptable levels.
The new detector will increase the sensitivity by at least two orders of magnitude, allowing for the first time a measurement of the reaction cross section in the energy range relevant to the s-process in core He burning.
\section{The \texorpdfstring{\ensuremath{\C{12}(\alpha,\gamma)\nuc{16}{O}}{}}{12C(alpha,gamma)16O} reaction}
The reaction \ensuremath{\C{12}(\alpha,\gamma)\nuc{16}{O}}{} competes with the so-called triple-$\alpha$ process ($\He{4} + \He{4} \to \nuc{8}{Be}$ followed by $\He{4} + \nuc{8}{Be} \to \C{12}$) during stellar helium burning (\cite{Burbidge-1957-RMP}). The astrophysical rates of both reactions influence the ratio of \C{12}/\nuc{16}{O} produced during the helium burning phase, which in turn determines the following steps of stellar evolution. Due to the central role of these nuclides, understanding their ratio in helium burning has been identified as a problem of ``paramount importance'' (\cite{fowler_quest_1984}) for nuclear astrophysics. Compared to the triple-$\alpha$ process, the cross section of \ensuremath{\C{12}(\alpha,\gamma)\nuc{16}{O}}{} is significantly less well known and, in spite of extensive experimental efforts, a better understanding of this reaction remains desirable. A recent comprehensive review on the state of understanding of \ensuremath{\C{12}(\alpha,\gamma)\nuc{16}{O}}{} can be found in \cite{deboer_12c16o_2017}.
Owing to the sharp drop of charged-particle reaction cross sections towards the energy region relevant for astrophysics, direct measurements in the energy region of interest are not available, making extrapolations necessary. Such extrapolations are challenging due to the nuclear structure of the compound nucleus \nuc{16}{O}: the cross section in the energy range of interest is characterized by the presence of broad resonances (including sub-threshold states). It is crucial to study the interference between states of the same $J^{\pi}$, but also to account for angular effects from the interference of processes with different $J^{\pi}$ (as outlined in \cite{deboer_12c16o_2017}). In particular, the E1 and E2 components of capture to the ground state are of comparable strength in the energy range of interest, and the extrapolated cross section is very sensitive to the interference of these two components.
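The role of angular information can be made concrete with a toy numerical check: amplitudes of different multipolarity interfere in the differential cross section, but the E1--E2 cross term integrates to zero over angle, so only multi-angle data can separate the two components. A minimal Python sketch (the $\sin\theta$ and $\sin\theta\cos\theta$ shapes are the leading ground-state E1 and E2 angular dependences; the relative amplitude and phase are arbitrary):

```python
import math

def angular_yield(theta, a_e1=1.0, a_e2=0.6, phase=0.8):
    """Toy differential yield: coherent sum of an E1 (~sin theta) and
    an E2 (~sin theta * cos theta) amplitude with a relative phase."""
    e1 = a_e1 * math.sin(theta)
    e2 = a_e2 * math.sin(theta) * math.cos(theta)
    return e1**2 + e2**2 + 2.0 * e1 * e2 * math.cos(phase)

def integrate(f, n=20000):
    """Angle-integrated yield: midpoint rule for int f(t) sin(t) dt on [0, pi]."""
    h = math.pi / n
    return sum(f(i * h + h / 2) * math.sin(i * h + h / 2) for i in range(n)) * h

with_interf = integrate(lambda t: angular_yield(t))
no_interf = integrate(lambda t: angular_yield(t, phase=math.pi / 2))  # cross term off
# The E1-E2 cross term vanishes after angle integration: both totals agree.
print(with_interf, no_interf, abs(with_interf - no_interf))
```

This is why a total-absorption measurement alone fixes the total cross section but cannot disentangle E1 from E2.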
Different experimental approaches have been taken to directly study the \ensuremath{\C{12}(\alpha,\gamma)\nuc{16}{O}}{} reaction: in normal kinematics, a fixed \C{12} target (solid or gaseous) is bombarded by $\alpha$ particles, detecting the $\gamma$-rays from the reaction; inverse kinematics employs a \C{12} beam impinging on a helium target. Inverse-kinematics experiments have been performed as measurements of the $\gamma$-rays from the reaction, or detecting the \nuc{16}{O} nuclei in a recoil separator (\cite{kremer_coincidence_1988,schurmann_first_2005,matei_measurement_2006,schurmann_study_2011}). Studies of the inverse reaction $\nuc{16}{O}(\gamma_0,\ensuremath{\mathrm{p}})\C{12}$ at high-intensity $\gamma$-ray facilities allow one to infer information on the ground-state transitions. Other reactions probing the nuclear structure of \nuc{16}{O} are used to constrain extrapolations of the \ensuremath{\C{12}(\alpha,\gamma)\nuc{16}{O}}{} reaction in frameworks such as \emph{R}-matrix theory (\cite{deboer_12c16o_2017}).
When measuring the $\gamma$-rays from the reaction, angular-distribution measurements at multiple detector angles yield the information needed to disentangle the E1 and E2 components, whilst a total-absorption spectroscopy setup, detecting the total $\gamma$-ray energy, yields the total cross section with a large detection efficiency. Figure \ref{fig:12Cag} summarizes the current situation, showing recent direct measurements of this reaction at low energies.
These measurements extend down to about 0.9\,MeV center of mass energy, but are characterized by increasing uncertainties when approaching these low energies. As the cross section drops rapidly towards these energies, backgrounds -- environmental and beam-induced -- become increasingly relevant. For example, experiments in normal kinematics are affected by background from the reaction \ensuremath{\C{13}(\alpha,\mathrm{n})\nuc{16}{O}}{}, whose cross section is of the order of $10^6$ times that of \ensuremath{\C{12}(\alpha,\gamma)\nuc{16}{O}}{}. Neutrons can produce background signals directly in the detector, or through secondary radiation in the environment of the detector. This background can be reduced by using \C{12} targets depleted in \C{13}, or with the help of bunched beams that allow disentangling the prompt $\gamma$-ray signal from neutron-induced backgrounds by time of flight.
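The quoted $\sim 10^6$ cross-section ratio makes the depletion requirement easy to estimate: natural carbon contains about 1.07\% \nuc{13}{C}, so without depletion the neutron-producing background would exceed the signal by roughly four orders of magnitude. A back-of-the-envelope Python sketch (order of magnitude only):

```python
def background_to_signal(f_13c, ratio=1e6):
    """Ratio of 13C(a,n)16O background reactions to 12C(a,g)16O signal
    for a carbon target with 13C atom fraction f_13c, assuming the
    quoted ~1e6 cross-section ratio (order of magnitude only)."""
    return f_13c / (1.0 - f_13c) * ratio

print(background_to_signal(0.0107))   # natural carbon, 1.07% 13C
print(background_to_signal(1e-6))     # strongly depleted target
```

Reaching a background-to-signal ratio of order unity thus requires \C{13} fractions at the ppm level, which is why depleted targets (or time-of-flight discrimination) are essential.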
\begin{figure}[htb]
\centering
\includegraphics[width=0.8\textwidth]{12Cag_OverviewPlot.pdf}
\caption{Overview of recent experimental data (\cite{kunz_12c16o_2001,fey_im_2004, assuncao_e1_2006, makii_e1_2009,plag_12c16o_2012}) for the ground state capture in \ensuremath{\C{12}(\alpha,\gamma)\nuc{16}{O}}{}, with the results of R-matrix fits from \cite{deboer_12c16o_2017} for comparison. All data are unscaled. The location of the Gamow window for a stellar temperature of 0.3\,GK is shown for reference.}
\label{fig:12Cag}
\end{figure}
Additional data at lower energies are desirable to better constrain the energy dependence of the extrapolation, and further experiments will aim to shed light on it in the future. Direct measurements are expected to contribute to this effort by pushing the lower limit of the available cross-section data further below 1\,MeV center of mass energy. This includes promising measurements with a recoil mass separator \cite{fujita_direct_2015}. On the side of the new underground accelerator facilities, exciting opportunities for the study of this reaction will become available shortly. Measurements of \ensuremath{\C{12}(\alpha,\gamma)\nuc{16}{O}}{} are among the scientific goals of the new MV facility at LNGS and of the Felsenkeller shallow-underground accelerator laboratory for nuclear astrophysics \cite{bemmerer_felsenkeller_2018}. Both accelerators will provide not only beams of $\alpha$ particles but also of carbon ions, allowing for underground measurements of this reaction in inverse kinematics. The scientific program of JUNA at CJPL, as outlined in \cite{liu_underground_2017}, includes the study of \ensuremath{\C{12}(\alpha,\gamma)\nuc{16}{O}}{} as well. To take full advantage of the high-intensity $\alpha$ beam and the deep underground location of JUNA, the minimization of beam-induced backgrounds, such as those created on \C{13}, has been identified as crucial.
\section{The \texorpdfstring{\ensuremath{\C{12}+\C{12}}{}}{12C+12C} reaction}
At the end of core helium burning, the central part of the star becomes more massive, contracts and heats up. The contraction, and the possible consequent collapse, can be halted
by the ignition of carbon burning or by the pressure of degenerate electrons. Several factors can prevent the ignition temperature for carbon burning from being reached before electron degeneracy sets in. For instance, plasma neutrinos produced near the center of the star carry away energy and thereby lower the central temperature. In addition, in the case of intermediate-mass stars, the second dredge-up further reduces the temperature of the core, with the convective envelope penetrating into the H-exhausted shell. Depending on its mass, the star may attain the physical conditions for C burning or become a carbon-oxygen white dwarf. The minimum initial mass of a star able to experience a C-burning phase is called $M_{\rm up}$. The value of $M_{\rm up}$ was proposed for the first time by \cite{Becker-80-ApJ}, who found $M_{\rm up} = 9 M_\odot$ for a star with nearly solar composition. However, there are many uncertainties: those affecting the \ensuremath{\C{12}+\C{12}}{} and $\C{12}+\alpha$ rates are the most important nuclear ones. As a matter of fact, the value of $M_{\rm up}$ separates the progenitors of C-O white dwarfs, novae and type Ia supernovae from those of core-collapse supernovae, neutron stars and stellar-mass black holes. Finally, if the star mass is slightly higher than $M_{\rm up}$, an off-center carbon ignition takes place in degenerate conditions and the star may end its life as an O-Ne white dwarf.
Stellar models predict that carbon burning, triggered by the \ensuremath{\C{12}+\C{12}}{} reaction, occurs for center of mass energies between 0.9 and 3.4\ensuremath{\,\mathrm{MeV}}{}. The reaction can proceed through different channels, corresponding to the emission of a photon, a neutron, a proton, one or two $\alpha$ particles or a \nuc{8}{Be} nucleus. Among these channels, the two most relevant are the \ensuremath{\C{12}(\C{12},\p{})\nuc{23}{Na}}{} and \ensuremath{\C{12}(\C{12},\alpha)\nuc{20}{Ne}}{}; alpha particles can produce neutrons through the \ensuremath{\C{13}(\alpha,\mathrm{n})\nuc{16}{O}}{} and \ensuremath{\nuc{22}{Ne}(\alpha,\mathrm{n})\nuc{25}{Mg}}{} reactions. These neutrons are fundamental for the synthesis of elements heavier than Fe through the s-process.
The \ensuremath{\C{12}+\C{12}}{} reaction rate at center of mass energies $\approx 1.5\ensuremath{\,\mathrm{MeV}}{}$ also affects the physical conditions of the SNIa explosion. In particular, carbon burning can be ignited in explosive conditions when material is accreted on the surface of a white dwarf in a close binary system \cite{Bravo-11}. A variation in the rate would modify the extension of the convective core prior to the explosion, the degree of neutronization and the temperature at the beginning of the thermonuclear runaway. The knowledge of SNIa is fundamental in cosmology, since these systems allow the measurement of distances and of the expansion rate of high-redshift galaxies \cite{Tutusaus-19}.
Unfortunately, the Gamow window of the \ensuremath{\C{12}+\C{12}}{} reaction, $\range{0.7}{3.4}\ensuremath{\,\mathrm{MeV}}{}$ depending on the astrophysical scenario, lies far below the height of the Coulomb barrier, approximately $6.7\ensuremath{\,\mathrm{MeV}}{}$, making the direct measurement of the cross section extremely difficult.
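The suppression implied by the Coulomb barrier can be quantified with the Sommerfeld parameter $\eta$: the cross section scales roughly as $\exp(-2\pi\eta)$, which drops by many orders of magnitude across the quoted energy range. A short Python sketch (standard formula; numbers for orientation only):

```python
import math

def gamow_factor(z1, z2, mu, e_cm_mev):
    """exp(-2*pi*eta): s-wave Coulomb-barrier penetration scale, with
    eta = 0.1575 * Z1*Z2 * sqrt(mu/E), E in MeV and mu in amu."""
    eta = 0.1575 * z1 * z2 * math.sqrt(mu / e_cm_mev)
    return math.exp(-2.0 * math.pi * eta)

mu = 12.0 * 12.0 / 24.0     # reduced mass of 12C + 12C in amu
for e in (3.4, 2.0, 0.9):   # representative energies in the astrophysical range
    print(f"E = {e:.1f} MeV: exp(-2 pi eta) = {gamow_factor(6, 6, mu, e):.2e}")
```

This exponential collapse of the penetrability is the reason why the counting rate falls so steeply at low energy and any residual background becomes critical.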
\subsection{State-of-the-art}
The two most relevant channels of the \ensuremath{\C{12}+\C{12}}{} reaction are the emission of protons and $\alpha$ particles, with Q-values of $2.24\ensuremath{\,\mathrm{MeV}}{}$ and $4.62\ensuremath{\,\mathrm{MeV}}{}$, respectively. The proton and alpha channels can be measured either by detecting the charged particles or the $\gamma$ decay. In particular, the largest branching is the de-excitation of the first excited state to the ground state of \nuc{23}{Na} or \nuc{20}{Ne}. Above 2\ensuremath{\,\mathrm{MeV}}{}, this transition accounts for approximately 50\% of the total cross section and produces photons of 440\ensuremath{\,\mathrm{keV}}{} and 1634\ensuremath{\,\mathrm{keV}}{} in the case of proton or alpha emission, respectively.
The challenge in obtaining a reliable measurement of the \ensuremath{\C{12}+\C{12}}{} cross section at low energies is related to its exponentially falling behaviour, which produces a very low counting rate; in this scenario, any natural or beam-induced background can seriously affect the measurement. The latter is due to impurities in the carbon target, mainly hydrogen and deuterium, because they can form bonds with carbon. The main background in the $\gamma$-ray measurements comes from the $\nuc{2}{H}(\C{12},\mathrm{p}_1\gamma)\C{13}$ and $\nuc{1}{H}(\C{12},\gamma)\nuc{13}{N}$ reactions, as detailed in the experimental work by Spillane \emph{et al.}{}. The Compton background of the primary peaks can completely dominate the carbon fusion $\gamma$-ray peaks \cite{Spillane-08-PRL}. As far as the particle measurements are concerned, it is kinematically impossible to find protons in the carbon-fusion region of interest if the particle detectors are placed at backward angles.
In the following the most recent papers focused on the \ensuremath{\C{12}+\C{12}}{} cross section measurement at low energies are summarized.
\cite{Jiang-18-PRC} have recently measured the \ensuremath{\C{12}+\C{12}}{} fusion cross section in the energy range $\range{2.5}{5}\ensuremath{\,\mathrm{MeV}}{}$. The authors studied the two main channels: \ensuremath{\C{12}(\C{12},\p{})\nuc{23}{Na}}{} and \ensuremath{\C{12}(\C{12},\alpha)\nuc{20}{Ne}}{} at Argonne National Laboratory using a Gammasphere array of 100 Compton-suppressed Ge spectrometers in coincidence with silicon detectors. The measurement was pushed down to $2.84\ensuremath{\,\mathrm{MeV}}{}$ and $2.96\ensuremath{\,\mathrm{MeV}}{}$ for the p and $\alpha$ channels, respectively; the results are in good agreement with other measurements using $\gamma$ \cite{Spillane-08-PRL} and charged particle detection \cite{Zickefoose-18-PRC}, but with smaller uncertainties.
\cite{Tumino-18-Nature} measured the cross section of the \ensuremath{\C{12}(\C{12},\p{})\nuc{23}{Na}}{} and \ensuremath{\C{12}(\C{12},\alpha)\nuc{20}{Ne}}{} reactions through the indirect Trojan Horse Method (THM). A $30\ensuremath{\,\mathrm{MeV}}{}$ beam was delivered on a natural carbon target; charged particles were detected through $\Delta$E-E position sensitive silicon detectors. The THM results for $\alpha$ and p channels are in good agreement with direct data except for the $2.14\ensuremath{\,\mathrm{MeV}}{}$ region, where the claim of a strong resonance by previous works \cite{Spillane-08-PRL} is not confirmed. Instead the indirect data show a resonance at $2.095\ensuremath{\,\mathrm{MeV}}{}$, one order of magnitude less intense with respect to the $2.14\ensuremath{\,\mathrm{MeV}}{}$ resonance found by Spillane in the $\nuc{20}{Ne} + \alpha$ channel and of similar intensity in the $\nuc{23}{Na} + \ensuremath{\mathrm{p}}{}$ one. In addition, several low-energy resonances are evident below $1.5\ensuremath{\,\mathrm{MeV}}{}$, never detected before in a direct measurement. The results of the THM raised some criticism \cite{Mukhamedzhanov-19-PRC} mainly because of the neglected Coulomb interaction between \nuc{2}{H}, the spectator nucleus in the THM, and $\nuc{24}{Mg}$.
The \ensuremath{\C{12}(\C{12},\p{})\nuc{23}{Na}}{} reaction has also been measured by \cite{Zickefoose-18-PRC} in the $\range{2}{4}\ensuremath{\,\mathrm{MeV}}{}$ energy range by particle spectroscopy. The beam, provided by the tandem accelerator of the Center for Isotopic Research on Cultural and Environmental heritage (CIRCE), was sent onto highly ordered pyrolytic graphite targets; protons were detected with $\Delta$E-E silicon detectors. The total S-factor, including also the contribution of the $\alpha$ channel, was obtained using the ratio between the p-channel and total S-factor provided by \cite{Becker1981}. Due to the poor statistics and beam-induced background problems, a further experimental effort is needed to improve the knowledge of the total S-factor in the relevant energy range. For this reason the experimental campaign continued with a new study devoted to the reduction of light contaminant species, especially \nuc{1}{H} and \nuc{2}{H}, in the carbon targets \cite{Morales-18-EPJA}. Measurements were performed with natural graphite and highly ordered pyrolytic graphite targets. The \nuc{1}{H} and \nuc{2}{H} contents were reduced by 70--85\% by means of diffusion at high temperatures (above $1000\ensuremath{{}^\circ\mathrm{C}}{}$). A further reduction by a factor of 2.5 was obtained by enclosing the scattering chamber in dry nitrogen, to minimize leaks into the rest gas within the chamber. The bulk contamination finally achieved by the authors is 0.3\,ppm. Further measurements are planned with the new experimental setup.
An upper limit on the \ensuremath{\C{12}+\C{12}}{} S-factor has recently been suggested from the measurement of the $\C{12}+\C{13}$ reaction by \cite{Zhang-20-PLB}; in fact, it has been observed that the $\C{12}+\C{13}$ and $\C{13}+\C{13}$ cross sections at energies below and above the Coulomb barrier are upper bounds of the non-resonant contribution of the \ensuremath{\C{12}+\C{12}}{} cross section. The measurement of the $\C{12}+\C{13}$ reaction was performed by studying the $\C{12}(\C{13},\ensuremath{\mathrm{p}}{})\nuc{24}{Na}$ channel; \nuc{24}{Na} has a half-life of 15.0~hours, allowing an activation measurement. The resulting upper limit on the \ensuremath{\C{12}+\C{12}}{} S-factor agrees nicely with the available direct experimental data down to $\approx 2.5\ensuremath{\,\mathrm{MeV}}{}$, while at lower energies the THM results are significantly higher than the Zhang upper limit. However, this result should be taken with caution, considering that the obtained upper limit is only valid for the non-resonant component of the \ensuremath{\C{12}+\C{12}}{} cross section.
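The 15-hour half-life of \nuc{24}{Na} is what makes the activation technique practical: one irradiates, then counts the decays offline. A minimal Python sketch of the usual activation bookkeeping (the production rate below is a hypothetical placeholder, not a measured value):

```python
import math

T_HALF_H = 15.0                  # 24Na half-life in hours
LAM = math.log(2) / T_HALF_H     # decay constant (1/h)

def activity(rate_per_h, t_irr_h, t_wait_h=0.0):
    """Activity (decays/h) after irradiating at a constant production
    rate for t_irr_h hours, then waiting t_wait_h before counting."""
    n_end = rate_per_h / LAM * (1.0 - math.exp(-LAM * t_irr_h))  # nuclei at end of run
    return LAM * n_end * math.exp(-LAM * t_wait_h)

# Hypothetical production rate of 100 24Na nuclei per hour:
a0 = activity(100.0, t_irr_h=15.0)   # one half-life of irradiation
print(a0)                            # half of the saturation activity
```

After one half-life of irradiation the sample reaches half of its saturation activity, so run lengths of order the half-life give a good yield without diminishing returns.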
Recent theoretical calculations of the \ensuremath{\C{12}+\C{12}}{} sub-barrier fusion cross section highlighting the role of resonances can be found in \cite{bonasera2020calculation}.
Another step forward in the knowledge of the \ensuremath{\C{12}+\C{12}}{} rate has recently been made by \cite{Fruet-20-PRL}. They performed a direct measurement down to $\approx 2.2\ensuremath{\,\mathrm{MeV}}{}$ using the particle-gamma coincidence technique. The experiment was performed at the Andromede accelerator facility at IPN Orsay, France, with a \C{12} beam (maximum beam current of 2\,p$\mu$A at astrophysically relevant energies) impinging on a natural carbon target. Charged particles were detected with three annular silicon strip detectors covering 30\% of the total solid angle. For $\gamma$-ray detection, an array of LaBr$_3$(Ce) scintillator detectors was employed. The results are in good agreement with the data reported by \cite{Jiang-18-PRC} and with \cite{Tumino-18-Nature}. However, a more prominent resonance has been observed around $3.8\ensuremath{\,\mathrm{MeV}}{}$ compared to other measurements (\cite{Spillane-08-PRL,Zickefoose-18-PRC}).
The most recent measurement of the \ensuremath{\C{12}+\C{12}}{} cross section has been performed by \cite{Tan-20-PRL} at the University of Notre Dame. The simultaneous detection of protons and alphas, through a silicon detector array, and of $\gamma$-rays, with a 109\% HPGe detector, allowed for the particle-$\gamma$ coincidence technique. The S-factor upper limits at $2.2\ensuremath{\,\mathrm{MeV}}{}$ for the proton (p$_1$) and alpha ($\alpha_1$) channels are lower than the THM data. We note that the upper limit for the proton channel disagrees significantly with the recent measurement of \cite{Fruet-20-PRL}. The discrepancy is less evident, but still present, for the alpha channel. In the energy region between $2.5$ and $3\ensuremath{\,\mathrm{MeV}}{}$, there is some tension between the results of \cite{Tan-20-PRL} and previous measurements \cite{Jiang-18-PRC}, both for the proton and the alpha channel. The S-factor results at center of mass energies above $4\ensuremath{\,\mathrm{MeV}}{}$ agree nicely with other data.
A comparison between the total S-factor values obtained by \cite{Spillane-08-PRL}, \cite{Jiang-18-PRC}, \cite{Tumino-18-Nature}, \cite{Fruet-20-PRL} and \cite{Tan-20-PRL} is shown in Figure \ref{fig:12c_summary}. It should be underlined that the \cite{Tumino-18-Nature} data are normalized to direct measurements, so a difference in the absolute value of the S-factor can also be attributed to systematic errors affecting the direct data. Significant discrepancies between the results of the reported experiments are evident in the whole energy range and, for this reason, a further experimental effort is needed.
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{Frontiers_tot_new.pdf}
\caption{S-factor values obtained by \cite{Spillane-08-PRL}, \cite{Jiang-18-PRC}, \cite{Tumino-18-Nature}, \cite{Fruet-20-PRL} and \cite{Tan-20-PRL}.}
\label{fig:12c_summary}
\end{figure}
\begin{comment}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth, angle=90]{Frontiers_p1.pdf}
\caption{Comparison of the S-factor obtained by \cite{Tumino-18-Nature} and \cite{Tan-20-PRL} for the p$_1$ channel. The S-factor values from \cite{Tumino-18-Nature} have been obtained from the plot provided in the paper since the authors don't provide their results in tabular form.}
\label{fig:12c_p1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth, angle=90]{Frontiers_a1.pdf}
\caption{Comparison of the S-factor obtained by \cite{Tumino-18-Nature} and \cite{Tan-20-PRL} for the $\alpha_1$ channel. The S-factor values from \cite{Tumino-18-Nature} have been obtained from the plot provided in the paper since the authors don't provide their results in tabular form.}
\label{fig:12c_a1}
\end{figure}
\end{comment}
\subsection{The measurement in an underground laboratory}
An underground location such as that of the LUNA experiment is the perfect environment in which to measure the \ensuremath{\C{12}+\C{12}}{} cross section by detecting the $\gamma$-rays emitted in the decay of the \nuc{23}{Na} and \nuc{20}{Ne} excited states. A high-efficiency, ultra-low intrinsic background germanium detector (HPGe) is suitable for the measurement, in combination with a massive lead shielding to suppress the contribution of the low-energy $\gamma$-rays coming from the decay of the \nuc{238}{U} and \nuc{232}{Th} chains. In Figure \ref{fig:12c_rate} the counting rate, expressed in counts per day, is reported as a function of the interaction energy. To calculate the rate, the S-factor provided by \cite{Spillane-08-PRL} has been adopted, considering that the decay of the first excited state to the ground state accounts for $\approx$ 50\% of the total cross section and produces photons of 440\ensuremath{\,\mathrm{keV}}{} and 1634\ensuremath{\,\mathrm{keV}}{} in the case of proton or alpha emission, respectively. It is evident that, if the trend of the S-factor observed by \cite{Tumino-18-Nature} is confirmed, the reaction rates can be higher by 1--3 orders of magnitude. The two horizontal lines represent a typical rate of $\gamma$ background measured at LNGS with a shielded setup \cite{caciolli_ultra-sensitive_2009} (blue and red lines for 440\ensuremath{\,\mathrm{keV}}{} and 1636\ensuremath{\,\mathrm{keV}}{} $\gamma$ energies, respectively). \textcolor{red}{In particular for the proton channel, crucial issues are the choice of the materials, to limit the intrinsic contaminants, and a proper shielding of the detectors. In addition, constant nitrogen flushing around the setup could help to further reduce the background by avoiding radon contamination. The $\gamma$-detection efficiency adopted in the calculation is just a standard value; new high-efficiency setups will be developed for the future measurements.
From a rough estimate based on the data provided by \cite{Spillane-08-PRL} and the setup described in Figure \ref{fig:12c_rate}, we can say that the dominant contribution to the background for the proton channel will come from environmental radioactivity, provided a 0.3 ppm hydrogen contamination level is achieved in the targets (\cite{Morales-18-EPJA}), making the induced background not an issue at least down to $\sim$ 2 MeV. The limitation in the alpha channel is, conversely, related to the low rate. }
To provide the total cross section, the measurement of the charged-particle channels is also needed. In this case the advantage of the underground location is less evident but still present; in fact, secondary particles produced by the passage of cosmic rays through the detectors could contribute to the background, and they are effectively reduced at LNGS (\cite{Bruno2015}).
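As a rough illustration of how a counting-rate curve of this kind can be obtained, the sketch below converts an S-factor into a cross section through the Gamow factor for \ensuremath{\C{12}+\C{12}}{} and folds in beam current, target thickness, detection efficiency and branching. This is our own minimal sketch, not the calculation behind Figure \ref{fig:12c_rate}: the constant S-factor scale, beam current and target thickness are illustrative assumptions; only the 6\% efficiency and the $\approx$50\% branching are taken from the text and caption.

```python
import math

# Illustrative (assumed) inputs -- not the values actually used in the paper:
S_FACTOR_MEV_B = 1.0e16    # assumed constant S-factor scale [MeV barn]
TARGET_ATOMS_CM2 = 1.0e19  # assumed effective target thickness [atoms/cm^2]
CURRENT_A = 100e-6         # middle of the quoted 50-150 uA beam-current range
EFFICIENCY = 0.06          # gamma-detection efficiency at 440 keV (from the caption)
BRANCHING = 0.5            # first-excited-state decay fraction quoted in the text

def sommerfeld_exponent(e_cm_mev):
    """2*pi*eta for 12C+12C: 31.29 * Z1*Z2 * sqrt(mu/E[keV]) with Z1 = Z2 = 6
    and reduced mass mu = 6 amu, i.e. roughly 87.2 / sqrt(E[MeV])."""
    return 31.29 * 36.0 * math.sqrt(6.0 / (1000.0 * e_cm_mev))

def cross_section_barn(e_cm_mev):
    """sigma(E) = S(E)/E * exp(-2*pi*eta), with the constant S-factor above."""
    return S_FACTOR_MEV_B / e_cm_mev * math.exp(-sommerfeld_exponent(e_cm_mev))

def counts_per_day(e_cm_mev):
    """Detected counts/day for a singly charged 12C beam on a thin target."""
    projectiles_per_s = CURRENT_A / 1.602e-19          # ions per second
    sigma_cm2 = cross_section_barn(e_cm_mev) * 1e-24   # 1 barn = 1e-24 cm^2
    rate = (projectiles_per_s * TARGET_ATOMS_CM2 * sigma_cm2
            * EFFICIENCY * BRANCHING)
    return rate * 86400.0
```

The steep energy dependence of the exponential factor is what drives the rate down by many orders of magnitude between $3$ and $2\ensuremath{\,\mathrm{MeV}}{}$, which is why the background lines in Figure \ref{fig:12c_rate} become the limiting factor at low energy.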
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{c12rate_Spillane_Frontiers.pdf}
\caption{Counting rate, in counts per day, obtained considering data provided by \cite{Spillane-08-PRL}, a $\gamma$ detection efficiency of 6\% and 2\% for 440\ensuremath{\,\mathrm{keV}}{} and 1636\ensuremath{\,\mathrm{keV}}{} $\gamma$ energies, respectively, and a beam current of 50--150 $\mu$A. The two horizontal lines represent a typical rate of $\gamma$ background measured at LNGS with a shielded setup \cite{caciolli_ultra-sensitive_2009} (blue and red lines for 440\ensuremath{\,\mathrm{keV}}{} and 1636\ensuremath{\,\mathrm{keV}}{} $\gamma$ energies, respectively).}
\label{fig:12c_rate}
\end{figure}
\section{Conclusions}
The enhancement in sensitivity provided by the strong background reduction in an underground laboratory, together with the best experimental techniques, has made it possible, during twenty-five years of LUNA activity, to take clear steps forward in the knowledge of nuclear processes relevant to astrophysical scenarios.
The installation of a new MV accelerator in the Gran Sasso laboratory will allow these studies to be extended, over a broad time window of at least twenty years, to key processes of the helium, carbon and neon burning phases.
Even if already more extensively studied, other important H-burning processes will also be better constrained thanks to the new facility. An example is the \ensuremath{\nuc{14}{N}(\p,\gamma)\nuc{15}{O}}{} reaction, presently known only at energies well above the Gamow peak. By combining the existing LUNA 400\,kV machine with the new LUNA-MV facility it will be possible to cover the necessary energy range with a sufficient overlap and without any gap between 200\ensuremath{\,\mathrm{keV}}{} and 1.5\ensuremath{\,\mathrm{MeV}}{}, allowing the systematics of the extrapolations to be reduced down to the 5\% level. The \ensuremath{\nuc{14}{N}(\p,\gamma)\nuc{15}{O}}{} reaction will also be suitable for the commissioning and tuning of the LUNA-MV accelerator.
As already mentioned, the success of the LUNA approach has motivated similar facilities, already in operation in the United States or under construction in China. This worldwide effort will make it possible, in the next decades, to take important steps forward in the field of nuclear astrophysics.
\bibliographystyle{frontiersinHLTH&FPHY}
\section{ Introduction }
The analysis of algorithms which allow a rule to be learned from random examples
is an active and fascinating topic in the area of statistical mechanics.
For an overview see e.g. \cite{Seu91,Waal93,OpKi}.
Many models, where examples are {\em correctly} classified
by ideal experts (often called teachers) seem to be well understood.
Now, there is a great deal of interest in nonideal, but more realistic
models, which incorporate the influence of different types of noise
in learning.
In this paper, we study a model where not all examples
carry information about the unknown rule, but where a nonzero
fraction of them are just outliers.
Naively learning {\em all} examples may considerably
deteriorate the ability to infer the rule in such a case.
Similar to learning with noisy data, some knowledge about the stochastic
data generating mechanism can be helpful. Based on such a stochastic
model, a good algorithm could try to select the informative examples
and discard the remaining ones. Since, however, only partial information is
available, such a selection can only be performed approximately, and
it is natural to try a {\em soft}, probabilistic selection.
Our model leads naturally to such a selection method. It consists of
a classification problem, where data which come from two
distributions (classes) centered at different points
are mixed at random with outliers.
A Bayesian approach, which aims at calculating the most probable
values for the class centers by minimizing a specific {\em training
energy} is combined with the
so-called EM algorithm of Dempster et al \cite{DeLaRu}, which nicely deals
with the problem of hidden parameters (the knowledge which of the data
are informative) in data mixtures.
This procedure leads to an algorithm which iteratively computes
the probability that an example is informative and
weights each example in predicting the unknown
class centers of the data generating distributions.
Our model may also be considered as a simple version of the
{\em mixtures of experts} models
\cite{JaJoHi} which are frequently studied
in the neural network literature. In these models, a complicated
task is learnt by a division of labor among several simple learning
machines (experts), where each
expert learns from different subsets of examples. Our model would
correspond to two experts where only one is able to extract
information from the examples.
The paper is organized as follows: After an introduction of the learning
problem, two learning strategies are defined in section two.
Section three gives the statistical mechanics
formulation of the problem, which, based on a replica calculation,
leads to a computation of the learning performance in the
thermodynamic limit. In section four the algorithmic implementation
of the learning methods using the EM algorithm is explained.
Section five presents the results of the statistical mechanics
calculations and of numerical simulations and concludes with a discussion.
Details of the replica calculations are given in the appendices.
\section{The Learning Problem}
We assume that the examples $\{\vek{\xi}^\mu,S^\mu\}$
($\vek{\xi}^\mu \in {\rm I\kern -0.13em R}^N, S^\mu \in \{\pm1\}$),
$\mu=1,\ldots,\alpha N$, are generated by one of two
different processes. For the first process, the
input $\vek{\xi}^\mu$
is selected at random from one of two gaussian
clusters (labelled by the outputs $S^\mu=\pm 1$) which are chosen
with equal probability. The clusters are centered
at $\pm\vek{B}$ and have equal variance $1/\gamma$.
$\vek{B}$ is an $N$ dimensional vector with $\vek{B}^2/N=1$.
The joint probability for inputs and outputs corresponding to this
process can be written as
$$
\wkt{\vek{\xi}^\mu,S^\mu |\vek{B}} \propto
\exp \left[ -\frac{\gamma}{2} \sum_j
\left( \xi_j^\mu - \frac{1}{\sqrt{N}} S^\mu B_j \right)^2 \right].\\
$$
The data from this process represent classified examples in a
noisy (because the Gaussian clusters overlap) two-class problem.
In the second process, the inputs come from a single gaussian centered
at zero with the same variance and the output
(chosen $\pm 1$ with equal probability) is
completely independent from the input.
For this case, we make the ansatz
$$
\wkt{\vek{\xi}^\mu,S^\mu |\vek{B}} \propto
\exp \left[ -\frac{\gamma}{2} \sum_j \left( \xi_j^\mu \right)^2 \right].
$$
The data from the second process may be understood as representing
outliers which do not contain any information about the two spatially
structured classes of inputs; they come from a ``garbage'' class
and are classified purely by random guessing.
In order to distinguish the two processes, we introduce decision
variables $V^\mu\in\{0,1\}$, where $V^\mu=1$ stands
for the first process and $V^\mu=0$ for the outliers. The joint set of
decision variables is denoted by $\{V^\mu\}_\mu$. Conditioning on
these variables, we can write the probability distribution for the
joint set of $\alpha N$ data
$ {\rm I\kern -0.13em D} := \{ \vek{\xi}^\mu,S^\mu \}_\mu $,
$ \mu = 1,\ldots,p=\alpha N $ within the single equation
\begin{eqnarray}\label{distexam}
\wkt{{\rm I\kern -0.13em D} \mid \{V^\mu\}_\mu, \vek{B}} = \\ \nonumber
\frac{1}{2^{\alpha N}} \left( \frac{\gamma}{2\pi} \right) ^{\alpha N^2 /2}
\prod_{\mu,j} \exp \left[ -\frac{\gamma}{2} (\xi_j^\mu)^2
+ \frac{\gamma}{\sqrt{N}} V^\mu \xi_j^\mu S^\mu B_j
- \frac{\gamma}{2N} V^\mu {B_j}^2 \right].
\end{eqnarray}
In order to model the fact that outliers occur at random with a fixed rate,
we will assume that both processes (structure, outliers)
are chosen independently at random. The probability for having
the value $V^\mu$ is written as
\begin{eqnarray}\label{distv}
\wkt{V^\mu} &= & \frac{ \exp[-\eta V^\mu] }{ 1 + \exp[-\eta] }.
\end{eqnarray}
Using the ``chemical potential'' $\eta$, we can adjust the average fraction
of structured data
\begin{eqnarray*}
\overline{V^\mu} & = & \frac{ 1 }{ \exp[\eta] + 1 }.
\end{eqnarray*}
For $\eta=-\infty$ all examples have $V^\mu=1$, but
with increasing $\eta$, fewer examples carry information.
For $\eta=0$, only half of the examples come from the structure
and for $\eta=\infty$ all examples are outliers.
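As a concrete illustration of this generative model, the short sketch below (our own, in Python; the function name, argument defaults and use of NumPy are illustrative choices, not part of the paper) draws $p=\alpha N$ examples according to (\ref{distexam}) and (\ref{distv}):

```python
import numpy as np

def generate_examples(B, alpha, gamma=1.0, eta=0.0, rng=None):
    """Draw p = alpha*N examples from the two-process mixture defined above.

    With probability 1/(1+exp(eta)) an example is informative (V=1): its input
    is Gaussian with variance 1/gamma centred at S*B/sqrt(N). Otherwise (V=0)
    the input is centred at zero and the label S is a pure coin flip.
    """
    rng = np.random.default_rng(rng)
    N = B.size
    p = int(alpha * N)
    S = rng.choice([-1.0, 1.0], size=p)              # labels, +-1 with equal prob.
    V = rng.random(p) < 1.0 / (1.0 + np.exp(eta))    # decision variables
    noise = rng.normal(scale=1.0 / np.sqrt(gamma), size=(p, N))
    xi = V[:, None] * S[:, None] * B / np.sqrt(N) + noise
    return xi, S, V
```

Note that for the informative examples the signal per input component is only of order $1/\sqrt{N}$, so single examples are almost uninformative and the interesting regime is $p$ proportional to $N$, as assumed throughout the paper.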
A learner tries to infer the vector $\vek{B}$
from the $\alpha N$ examples and makes an estimate
$\vek{J}$ for $\vek{B}$.
We will assume that the fraction of outliers
is known to the learner. Although in our final results we will
mostly deal with the case that also the
parameter $\gamma$ is known precisely, we will be more general in the
basic definitions and assume that the learner
uses $\tilde{\gam}$ instead, with $\gamma \neq \tilde{\gam}$.
Hence, if the $\{V^\mu\}_\mu$ were known, the likelihood of the data
based on the estimate $\vek{J}$ would be given by
\begin{eqnarray*}
\wkt{{\rm I\kern -0.13em D} \mid \{V^\mu\}_\mu, \vek{J}} = \\
\frac{1}{2^{\alpha N}} \left( \frac{\tilde{\gam}}{2\pi} \right) ^{\alpha
N^2 /2}
\prod_{\mu,j} \exp \left[ -\frac{\tilde{\gam}}{2} (\xi_j^\mu)^2
+ \frac{\tilde{\gam}}{\sqrt{N}} V^\mu \xi_j^\mu S^\mu J_j
- \frac{\tilde{\gam}}{2N} V^\mu {J_j}^2 \right].
\end{eqnarray*}
In general, however, the learner does not know which of the examples
contain information and which are outliers. Hence, to the learner the
$\{V^\mu\}_\mu$ are
{\em hidden variables} which are not observed but need to be averaged over.
Hence, the actual ansatz for the distribution of data will be given by
the {\em mixture distribution}
\begin{equation}\label{mix}
\wkt{{\rm I\kern -0.13em D} \mid \vek{J}} = \sum_{\{V^\mu\}_\mu}
\wkt{{\rm I\kern -0.13em D}, \{V^\mu\}_\mu \mid \vek{J}},
\end{equation}
where
\begin{eqnarray}\nonumber
\wkt{{\rm I\kern -0.13em D}, \{V^\mu\}_\mu \mid \vek{J}}
= \wkt{{\rm I\kern -0.13em D} \mid \{V^\mu\}_\mu, \vek{J}} \wkt{\{V^\mu\}_\mu} \\
= \frac{1}{2^{\alpha N}} \left( \frac{\tilde{\gam}}{2\pi} \right) ^{\alpha
N^2 /2}
\frac{ 1 }{ ( 1 + \exp[-\eta] )^{\alpha N} }
\exp \left[ -\frac{\tilde{\gam}}{2} \sum_{\mu,j} (\xi_j^\mu)^2
- \sum_\mu V^\mu f_\mu(\vek{J}) \right]
\end{eqnarray}
and where we have defined
$$
f_\mu(\vek{J})
:= -\frac{\tilde{\gam}}{\sqrt{N}} \sum_j \xi_j^\mu S^\mu J_j
+ \frac{\tilde{\gam}}{2N} \sum_j {J_j}^2 + \eta.
$$
One possible way of getting an estimate for the unknown vector
$\vek{B}$, would be the {\em maximum likelihood} method, i.e., one would
use the vector $\vek{J}$ which maximizes the likelihood (\ref{mix}).
A second possibility is given by a Bayesian approach, where the learner
supplies some {\em prior knowledge} about reasonable estimates
$\vek{J}$ within a {\em prior distribution}. We will use a distribution
which on average gives the correct length of the unknown vector
but does not favour any spatial direction
\begin{equation}\label{prior}
\wkt{\vek{J}} = \left( \frac{ 1 }{ 2\pi } \right)^{N/2}
\exp \left[ -\frac{1}{2} \sum_j {J_j}^2 \right].
\end{equation}
Based on the prior and the likelihood of the data, the learner can construct
the posterior distribution using Bayes' rule
\begin{equation}\label{poster}
\wkt{\vek{J} \mid {\rm I\kern -0.13em D}} =
\frac{\wkt{{\rm I\kern -0.13em D} \mid \vek{J}} \wkt{\vek{J}}}{\wkt{{\rm I\kern -0.13em D}}}.
\end{equation}
There are several ways of using the information contained in the posterior
(\ref{poster}). E.g., simply taking the {\em posterior mean} as the estimate
for $\vek{B}$ will minimize the expected average
(with respect to the posterior)
squared error. Unfortunately, for a high dimensional space, such
expectations will not be easy to calculate exactly, and one has to resort
to Monte Carlo sampling. A simpler estimate, which should not perform too
poorly, is given by the vector $\vek{J}$, which has maximal a posteriori
probability (MAP), i.e., the one which maximizes (\ref{poster}).
Actually, if enough data are available, one can expect that
the posterior will be close to a gaussian,
and both estimates will nearly coincide.
In order to maximize the posterior
$\wkt{\vek{J} \mid {\rm I\kern -0.13em D}}$ with respect to $\vek{J}$, we can equivalently
minimize the ``training'' energy function
\begin{eqnarray}\label{energy}
{\cal H}(\vek{J})=-\ln \wkt{{\rm I\kern -0.13em D}, \vek{J}}
& = &
-\ln \sum_{\{V^\mu\}_\mu} \wkt{{\rm I\kern -0.13em D}, \{V^\mu\}_\mu \mid \vek{J}} \wkt{\vek{J}}.
\end{eqnarray}
As we will see in section four, there is a simple algorithm to calculate the
MAP; it is based on a recursive estimation
of the (posterior) expected decision variables $\{V^\mu\}_\mu$.
Since examples will be weighted by their probability of being informative
rather than being kept or discarded from the training set, we call this
method a {\em soft selection} of examples.
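A minimal numerical sketch of this soft selection (our own illustration, not the authors' implementation) makes the two steps explicit. The E-step evaluates the responsibilities $\langle V^\mu\rangle = 1/(1+\exp[f_\mu(\vek{J})])$, and the M-step sets $\vek{J}$ to the stationary point of the training energy at fixed responsibilities, $J_j = (\gamma/\sqrt{N})\sum_\mu \langle V^\mu\rangle S^\mu \xi_j^\mu \big/ \big(1+\frac{\gamma}{N}\sum_\mu \langle V^\mu\rangle\big)$, which follows from setting the gradient of (\ref{energy}) to zero:

```python
import numpy as np

def em_soft_selection(xi, S, gamma=1.0, eta=0.0, n_iter=50):
    """EM-style iteration for the MAP estimate of B (a sketch of the soft
    selection described in the text; update rules derived from the energy
    H(J) at fixed responsibilities)."""
    p, N = xi.shape
    J = np.zeros(N)  # start from the prior mean
    for _ in range(n_iter):
        # E-step: f_mu(J) as defined in section two, then <V^mu> = sigmoid(-f_mu)
        f = -(gamma / np.sqrt(N)) * S * (xi @ J) + gamma * (J @ J) / (2 * N) + eta
        resp = 1.0 / (1.0 + np.exp(f))   # soft selection weights in [0, 1]
        # M-step: responsibility-weighted, prior-regularised estimate of B
        J = (gamma / np.sqrt(N)) * ((resp * S) @ xi) \
            / (1.0 + gamma * resp.sum() / N)
    return J, resp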
As an alternative to the MAP approach for $\vek{J}$,
we will discuss also an algorithm which calculates the MAP for the
hidden variables $\{V^\mu\}_\mu$. Since these variables take the
values $0$ and $1$ only, the result will be a {\em hard} selection of
informative examples, rather than a soft weighting.
We look for the values of $\{V^\mu\}_\mu$
which maximize
\begin{equation}
\wkt{\{V^\mu\}_\mu \mid {\rm I\kern -0.13em D}} = \frac{\wkt{{\rm I\kern -0.13em D} , \{V^\mu\}_\mu}}
{\wkt{{\rm I\kern -0.13em D}}}.
\end{equation}
Equivalently, we can maximize the numerator of this expression, which can
be written as a mixture probability
\begin{equation}\label{mix:two}
\wkt{{\rm I\kern -0.13em D} , \{V^\mu\}_\mu} = \int d \vek{J}\, \wkt{{\rm I\kern -0.13em D} , \{V^\mu\}_\mu, \vek{J}}
\end{equation}
resulting in a training energy
\begin{equation}\label{henergy}
{\cal H}_h(\{V^\mu\}_\mu) = - \ln \int d\vek{J} \, \wkt{{\rm I\kern -0.13em D},\vek{J},\{V^\mu\}_\mu}.
\end{equation}
Finally, after minimization, we can use the expectations
\begin{equation}
{\left\langle J_j \right\rangle}_{\smvek{J}}
= \frac{\int d\vek{J} \, J_j \, \wkt{{\rm I\kern -0.13em D},\vek{J},\{V^\mu\}_\mu}}
{\int d\vek{J} \, \wkt{{\rm I\kern -0.13em D},\vek{J},\{V^\mu\}_\mu}}
\end{equation}
as an estimate for the unknown $B_j$.
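For the hard selection, the Gaussian integral over $\vek{J}$ in (\ref{henergy}) can be done in closed form. Up to $\{V^\mu\}_\mu$-independent constants one finds ${\cal H}_h = \eta\sum_\mu V^\mu + \frac{N}{2}\ln a - |\vek{c}|^2/(2a)$, with $c_j=(\gamma/\sqrt{N})\sum_\mu V^\mu S^\mu\xi_j^\mu$, $a = 1+\frac{\gamma}{N}\sum_\mu V^\mu$, and posterior mean $\langle J_j\rangle = c_j/a$. The sketch below (our own illustration, not the authors' procedure) minimises this energy by a simple greedy single-flip descent, which is only a zero-temperature heuristic, not an exact minimiser:

```python
import numpy as np

def hard_energy(Vmask, xi, S, gamma=1.0, eta=0.0):
    """H_h({V}) up to V-independent constants, after integrating out J:
    eta*sum(V) + (N/2)*ln(a) - |c|^2/(2a)."""
    p, N = xi.shape
    c = (gamma / np.sqrt(N)) * ((Vmask * S) @ xi)
    a = 1.0 + gamma * Vmask.sum() / N
    return eta * Vmask.sum() + 0.5 * N * np.log(a) - (c @ c) / (2.0 * a)

def greedy_hard_selection(xi, S, gamma=1.0, eta=0.0, sweeps=5):
    """Greedy single-example flips on H_h; returns the selected set {V}
    and the corresponding posterior-mean estimate <J> = c/a."""
    p, N = xi.shape
    V = np.ones(p)  # start by accepting every example
    E = hard_energy(V, xi, S, gamma, eta)
    for _ in range(sweeps):
        for mu in range(p):
            V[mu] = 1.0 - V[mu]                  # trial flip of one example
            E_new = hard_energy(V, xi, S, gamma, eta)
            if E_new < E:
                E = E_new                        # keep the flip
            else:
                V[mu] = 1.0 - V[mu]              # undo it
    c = (gamma / np.sqrt(N)) * ((V * S) @ xi)
    return V, c / (1.0 + gamma * V.sum() / N)
```

Note that $\langle J_j\rangle = c_j/a$ has the same functional form as the soft-selection M-step, with the binary $V^\mu$ replacing the responsibilities, which makes the soft/hard distinction of the two algorithms transparent.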
\section{ Analysis by Statistical Mechanics }
In this section, we study the performance of both
MAP estimates analytically in the thermodynamic limit $N\to\infty$
using a statistical mechanics framework.
We begin first with the soft selection.
There are different ways of measuring, how good the learner, equipped
with the MAP estimate, has learnt the structured distribution.
An obvious idea is to measure the quadratic deviation between
the true vector $\vek{B}$ and the MAP:
\begin{equation}
\Delta = \frac{1}{N}
\left\langle \left( \vek{J}-\vek{B} \right)^2 \right\rangle = Q - 2 R + 1
\end{equation}
where we have defined the order parameters
\begin{eqnarray}\nonumber
R = \frac{1}{N} \left\langle \vek{J} \cdot \vek{B} \right\rangle \\
Q = \frac{1}{N} \left\langle \vek{J} \right\rangle ^2.
\end{eqnarray}
It is also useful to calculate the
angle $\Phi=\angle (\vek{J},\vek{B})$ between estimate and $\vek{B}$.
This angle $\Phi$, normalized by $1/\pi$, is given in terms of the
order parameters by
\begin{eqnarray}\label{phi}
\Phi & = & \frac{1}{\pi} \arccos \frac{\vek{J}\cdot\vek{B}}
{||\vek{J}||\ ||\vek{B} ||}\\
& = & \frac{1}{\pi} \arccos \frac{R}{\sqrt{Q}}
\end{eqnarray}
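These relations are easy to check numerically. The snippet below (our own illustration; for a single estimate we use $R=\vek{J}\cdot\vek{B}/N$ and $Q=\vek{J}^2/N$) verifies that $\Delta = Q-2R+1$ coincides with $(\vek{J}-\vek{B})^2/N$ and that the normalized angle lies in $[0,1]$:

```python
import numpy as np

def error_measures(J, B):
    """Quadratic error Delta and normalised angle Phi from the order
    parameters R = J.B/N and Q = J.J/N (B is normalised so that B.B/N = 1)."""
    N = B.size
    R = J @ B / N
    Q = J @ J / N
    delta = Q - 2.0 * R + 1.0               # equals |J - B|^2 / N
    phi = np.arccos(R / np.sqrt(Q)) / np.pi # normalised angle in [0, 1]
    return delta, phi
```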
The order parameters for the soft selection MAP algorithm
can be derived from a partition function
$Z$ where the corresponding hamiltonian is given by
${\cal H}(\vek{J})$ from (\ref{energy}).
Assuming that the inverse temperature $\beta$ is an integer, we define
\begin{eqnarray}\nonumber
Z & = & \int d\vek{J} \, \exp \left[ -\beta {\cal H}(\vek{J}) \right] \\
\nonumber
& = & \int d\vek{J} \, \exp \left[ \beta \ln \wkt{{\rm I\kern -0.13em D}, \vek{J}} \right] \\
\nonumber
& = & \int d\vek{J} \, \left(\wkt{{\rm I\kern -0.13em D}, \vek{J}}\right)^{\beta}\\
& = & \int d\vek{J} \, \left\{ \sum_{\{V^\mu\}_\mu}
\wkt{{\rm I\kern -0.13em D}, \{V^\mu\}_\mu,\vek{J}} \right\}^{\beta}\\
& = & \int d\vek{J} \,
\sum_{\{V_{b}^\mu\}_\mu}
\prod_{b=1}^{\beta}\wkt{{\rm I\kern -0.13em D}, \{V_b^\mu\}_\mu,\vek{J}}. \nonumber
\end{eqnarray}
The MAP, which is the minimum of
the energy ${\cal H}(\vek{J})$, is derived from the limit $\beta\to\infty$.
The case $\beta=1$ would correspond to
Gibbs learning, where a vector $\vek{J}$ is drawn at
random from the posterior.
As usual, order parameters are found from an average of the
free energy $ f = -\frac{1}{\beta N} \ln Z $ over the distribution
of the examples. To perform the average, we utilize the replica trick
\begin{eqnarray}\nonumber
\left\langle f \right\rangle & = & -\frac{1}{\beta N} \left\langle \ln Z
\right\rangle \\
& = & -\frac{1}{\beta N} \lim_{n\to0}
\frac{\partial}{\partial n} \ln \left\langle Z^n \right\rangle
\end{eqnarray}
where $\langle \ldots \rangle$ denotes the average over
the distribution (see (\ref{distexam}) and (\ref{distv}))
$$
\wkt{\xi_j^\mu, S^\mu \mid \vek{B}} = \frac{1}{2} \left( \frac{\gamma}{2\pi}
\right) ^{1/2}
\frac{1}{1+e^{-\eta}}
\sum_{V^\mu} \exp \left[ -\frac{\gamma}{2} \left( \xi_j^\mu -
\frac{1}{\sqrt{N}} V^\mu S^\mu
B_j \right) ^2 - \eta V^\mu \right].
$$
The replicated partition function is now written as
\begin{equation}
Z^n=\sum_{\{V_{ab}^\mu\}_\mu} \int \prod_{a} \, d\vek{J}^a \,
\prod_{a,b} \wkt{{\rm I\kern -0.13em D}, \{V_{ab}^\mu\}_\mu,\vek{J}^a}.
\end{equation}
where the decision variables contain {\em two replica} indices.
Here, the index $a$ runs from $1$ to $n$, whereas $b$ runs from $1$
to $\beta$. For the subsequent calculations we have
assumed the correct parameters $\gamma=\tilde{\gam}$ and have
made a {\em replica symmetric ansatz} with respect to the indices $a$.
We think that this should be at least a good approximation,
because our model is an example of a
{\em teacher--student} learning scenario, where student and teacher match
in the sense that the student uses the right statistical model for the data.
For the Gibbs learning scenario ($\beta=1$), where the symmetry of
student and teacher becomes perfect in the replica calculation
(this can be seen by introducing a further
average over $\vek{B}$, using the prior (\ref{prior})),
replica symmetry is usually considered to be exact
(however, no general proof has been given so far).
Hence, assuming that the effects of replica symmetry breaking are
small, we have refrained
from performing a replica stability analysis.
The treatment of the replica indices $b$ is much simpler, because the
order parameters (see Appendix A) do not depend on them.
Hence, as long as $\beta$ is an integer, no further symmetry
assumptions are required for the $b$'s.
Although we do not have a proof that the continuation to
noninteger $\beta$ is unique, we expect that
the limit $\beta\to\infty$ exists and can be safely calculated
using a sequence of integers.
The {\em hard selection problem} of decision variables is treated similarly
using the (zero temperature) free energy which is defined from the
partition function
\begin{equation}\label{zhard}
Z_h= \sum_{\{V^\mu\}_\mu} e^{-\beta {\cal H}_h(\{V^\mu\}_\mu)}
\end{equation}
with the energy (\ref{henergy}).
The averages which are necessary for the calculation of error measures, e.g.
\begin{equation}
\Phi = \frac{1}{\pi} \arccos
\frac{ \sum_j {\left\langle J_j \right\rangle}_{\smvek{J}} B_j }
{ \sqrt{ \sum_j {\left\langle {J_j}^2 \right\rangle}_{\smvek{J}} }
\sqrt{N}}
\end{equation}
can be found in a standard way from derivatives of the free energy with respect
to appropriate external fields, e.g.
\begin{eqnarray}\label{ordhard}
\sum_j {\left\langle J_j \right\rangle}_{\smvek{J}} B_j =
-\lim_{\lambda\to 0}\frac{\partial}{\partial\lambda}
\lim_{\beta\to\infty}\frac{1}{\beta}
\ln \sum_{\{V^\mu\}_\mu} e^{-\beta {\cal H}_h(\{V^\mu\}_\mu,\lambda)}
\end{eqnarray}
where
\begin{eqnarray*}
{\cal H}_h(\{V^\mu\}_\mu,\lambda)=
-\ln \int d\vek{J} \, \wkt{{\rm I\kern -0.13em D},\vek{J},\{V^\mu\}_\mu}
\exp \left[ - \lambda \sum_j J_j B_j \right].
\end{eqnarray*}
Explicit calculations of the free energies and order parameters
for both cases are given in the appendices.
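For finite-size simulations, the angle $\Phi$ defined above can also be evaluated directly from an estimate and the teacher vector. A minimal Python sketch (an illustration, not part of the analytical treatment; it assumes the spherical normalization $|\vek{B}|^2=N$, so that the factor $\sqrt{N}$ equals $|\vek{B}|$):

```python
import numpy as np

def angle_error(J, B):
    """Phi = (1/pi) * arccos of the normalized overlap between J and B."""
    overlap = J @ B / (np.linalg.norm(J) * np.linalg.norm(B))
    return np.arccos(overlap) / np.pi
```

For $\vek{J}$ parallel to $\vek{B}$ this gives $\Phi=0$; for orthogonal vectors it gives $\Phi=1/2$, the value produced by a trivial (zero-overlap) solution.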
\section{ The EM-Algorithm }
Unfortunately, the maximization of the posterior
distributions cannot be carried out in closed form
and must be done numerically. Usually, nonlinear optimization problems
are solved by gradient descent algorithms which require a tuning of the
step sizes. However, for the type of (generalized) maximum likelihood problem
for mixture distributions such as (\ref{mix}) and (\ref{mix:two}),
there is a simpler
and well-known algorithm which has been developed by Dempster et al.
\cite{DeLaRu}.
This so-called {\em expectation maximization (EM) algorithm}
guarantees that the (generalized) likelihood is nondecreasing for every
iteration step and converges to a local maximum.
To explain the idea for the soft selection problem, let us assume for
the moment that the hidden variables
$\{V^\mu\}_\mu$ were actually known. Then the corresponding log-likelihood
$\ln \left[\wkt{{\rm I\kern -0.13em D}, \{V^\mu\}_\mu \mid \vek{J}} \wkt{\vek{J}}\right]$
could be maximized in closed form. In the EM algorithm, the true values of
the hidden variables are replaced iteratively by suitable averages.
At iteration $i$, in the {\em expectation step},
the function
\begin{equation}\label{expec}
A(\vek{J},\vek{J}^{(i)})
:= { \left\langle \ln \left[ \wkt{{\rm I\kern -0.13em D}, \{V^\mu\}_\mu \mid \vek{J}}
\wkt{\vek{J}} \right] \right\rangle }_{ \wkt{\{V^\mu\}_\mu \mid {\rm I\kern -0.13em D} ,
\vek{\scriptstyle J}^{(i)}} }
\end{equation}
is calculated, which is the log-likelihood of observed and
hidden data averaged over the
posterior distribution of the hidden data, given the old estimate
$\vek{J}^{(i)}$. In the {\em maximization step}, (\ref{expec})
is maximized with respect to $\vek{J}$ in order to obtain the
new iteration $\vek{J}^{(i+1)}$.
We will not give the proof of convergence here,
as it is relatively simple and can be found in many
textbooks (see e.g. \cite{Honer}).
However, we can easily see that a fixed point of the algorithm is
also a local extremum of (\ref{energy}). At the maximum of (\ref{expec}),
we have
\begin{eqnarray*}
0=\frac{\partial}{\partial J_k} A(\vek{J},\vek{J}^{(i)})
& = & \frac{\partial}{\partial J_k} {
\left\langle \ln \left[ \wkt{{\rm I\kern -0.13em D}, \{V^\mu\}_\mu \mid \vek{J}}
\wkt{\vek{J}} \right] \right\rangle }_{ \wkt{\{V^\mu\}_\mu \mid {\rm I\kern -0.13em D} ,
\vek{\scriptstyle J}^{(i)}}} \\
& = & \sum_{\{V^\mu\}_\mu}\frac{\frac{\partial}{\partial J_k}
\wkt{{\rm I\kern -0.13em D}, \{V^\mu\}_\mu, \vek{J}}
\wkt{{\rm I\kern -0.13em D}, \{V^\mu\}_\mu , \vek{J}^{(i)}}}
{\wkt{{\rm I\kern -0.13em D}, \{V^\mu\}_\mu, \vek{J}}
\wkt{{\rm I\kern -0.13em D}, \vek{J}^{(i)}}}.
\end{eqnarray*}
Hence, at the fixed point, where $\vek{J}^{(i)}=\vek{J}$, we also have
$\frac{\partial}{\partial J_k}\ln \wkt{{\rm I\kern -0.13em D}, \vek{J}} =0$.
For the explicit calculation, we need the
conditional
distribution of the hidden variables, given the data and $\vek{J}$
\begin{eqnarray}\label{postv}\nonumber
\wkt{\{V^\mu\}_\mu \mid {\rm I\kern -0.13em D} , \vek{J}}
& = &
\frac{\wkt{{\rm I\kern -0.13em D}, \{V^\mu\}_\mu , \vek{J}}}
{\wkt{{\rm I\kern -0.13em D}, \vek{J}}} \\
& = &
\prod_{\mu} \frac{\exp[- V^\mu f_\mu(\vek{J})]}
{1+ \exp[- f_\mu(\vek{J})]}.
\end{eqnarray}
Using the distribution (\ref{distv}), we get
\begin{eqnarray*}
\frac{\partial}{\partial J_k} A(\vek{J},\vek{J}^{(i)})
& = & - \tilde{\gam} \sum_\mu \langle V^\mu \rangle \left( -\frac{1}{\sqrt{N}}
\xi_k^\mu S^\mu + \frac{1}{N} J_k \right) - J_k \\
& \stackrel{!}{=} & 0
\end{eqnarray*}
which gives
\begin{equation}
\vek{J} = \frac{ \sqrt{N} \sum_\mu \langle V^\mu \rangle \vek{\xi}^\mu
S^\mu }
{ \sum_\mu \langle V^\mu \rangle + N/\tilde{\gam} },
\end{equation}
where
\begin{eqnarray}\nonumber
\langle V^\mu \rangle & = & \sum_{V^\mu=0,1} V^\mu
\wkt{V^\mu \mid {\rm I\kern -0.13em D},\vek{J}^{(i)}} \\
& = & \frac{1}{ \exp \left[ f_\mu (\vek{J}^{(i)}) \right] + 1 }.
\end{eqnarray}
Hence, the estimate $\vek{J}$ for $\vek{B}$
has the form of a {\em weighted Hebbian} sum, where each example
carries a weight proportional to the estimated probability
$\langle V^\mu \rangle$ that the example is not an outlier.
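The resulting iteration is straightforward to implement. The following Python sketch is an illustration under our conventions (examples stored as rows of a $P\times N$ array); it assumes the explicit form $f_\mu(\vek{J}) = \eta + \tilde{\gamma}|\vek{J}|^2/(2N) - \tilde{\gamma}\,\vek{\xi}^\mu\!\cdot\!\vek{J}\,S^\mu/\sqrt{N}$, which is the form consistent with the derivative of $A$ given above:

```python
import numpy as np

def em_soft_selection(xi, S, eta, gamma_t, n_iter=100):
    """EM for soft selection: E-step computes <V^mu>, M-step is the
    weighted Hebbian update for J.  xi has shape (P, N)."""
    P, N = xi.shape
    J = np.zeros(N)
    for _ in range(n_iter):
        # E-step: <V^mu> = 1 / (exp(f_mu(J)) + 1), with the assumed form
        # f_mu(J) = eta + gamma_t*|J|^2/(2N) - gamma_t*(xi^mu . J)*S^mu/sqrt(N)
        f = eta + gamma_t * (J @ J) / (2 * N) - gamma_t * (xi @ J) * S / np.sqrt(N)
        V = 1.0 / (np.exp(f) + 1.0)
        # M-step: weighted Hebbian sum
        J = np.sqrt(N) * ((V * S) @ xi) / (V.sum() + N / gamma_t)
    return J, V
```

Each example enters with weight $\langle V^\mu\rangle$, the estimated probability of not being an outlier; for $\eta\to-\infty$ all weights tend to one and a single step reproduces the Hebbian vector.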
It is interesting to look at the limiting case $\eta\to -\infty$, i.e. where
all examples are from the double cluster and where no
outliers are present. In this case, the EM iteration stops after one step,
and we get
\begin{eqnarray}\nonumber
\langle V^\mu \rangle & = & 1 \mbox{ for all } \mu \\ \label{Hebb}
\vek{J} & = & \frac{1}{\sqrt{N}}
\frac{ \sum_\mu \vek{\xi}^\mu S^\mu }{ \alpha + 1/\tilde{\gam} }
\end{eqnarray}
which is the usual Hebbian vector.
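As a sanity check in simulations, this limiting estimate is a one-liner (illustrative Python, with the $P=\alpha N$ examples again stored as rows of `xi`):

```python
import numpy as np

def hebbian_estimate(xi, S, gamma_t):
    """Unweighted Hebbian vector:
    J = (1/sqrt(N)) * sum_mu xi^mu S^mu / (alpha + 1/gamma_t)."""
    P, N = xi.shape
    alpha = P / N
    return (S @ xi) / np.sqrt(N) / (alpha + 1.0 / gamma_t)
```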
Similarly, to apply the EM algorithm to
the hard selection problem with the mixture distribution (\ref{mix:two}),
we take $\vek{J}$ as the hidden quantity. In each iteration step,
we have to maximize
\begin{eqnarray}\nonumber
\hat{A}(\{V^\mu\}_\mu,\{V^\mu\}_\mu^{(i)}) & :
= & { \left\langle \ln \wkt{{\rm I\kern -0.13em D}, \{V^\mu\}_\mu, \vek{J}} \right\rangle }
_{ \wkt{\vek{\scriptstyle J} \mid {\rm I\kern -0.13em D} , \{V^\mu\}_\mu^{(i)}} } \\ \nonumber
& = & - \frac{\tilde{\gam}}{2} \sum_{\mu,j} (\xi_j^\mu)^2
+ \frac{\tilde{\gam}}{\sqrt{N}} \sum_{\mu,j} V^\mu \xi_j^\mu S^\mu
{\left\langle J_j \right\rangle} \\
& & - \frac{\tilde{\gam}}{2N} \sum_{\mu,j} V^\mu {\left\langle {J_j}^2
\right\rangle}
- \eta \sum_\mu V^\mu
- \frac{1}{2} \sum_j {\left\langle {J_j}^2 \right\rangle}
\end{eqnarray}
with respect to $\{V^\mu\}_\mu$.
Defining
\begin{eqnarray}\nonumber
a & := & \frac{\tilde{\gam}}{N} \sum_\mu V^{\mu\,(i)} + 1 \\
b_j & := & \frac{\tilde{\gam}}{\sqrt{N}} \sum_\mu V^{\mu\,(i)} \xi_j^\mu S^\mu
\end{eqnarray}
we obtain for the expectations at step $i$
\begin{eqnarray}\nonumber
{\left\langle J_j \right\rangle} & = & \frac{b_j}{a} \\
& = & \frac{ \sqrt{N} \sum_\mu V^{\mu\,(i)} \xi_j^\mu S^\mu }{ \sum_\mu
V^{\mu\,(i)} + N/\tilde{\gam} } \\ \nonumber
{\left\langle {J_j}^2 \right\rangle} & = & \frac{{b_j}^2}{a^2} + \frac{1}{a}.
\end{eqnarray}
Finally, after convergence, we use ${\left\langle J_j \right\rangle}$ as
an estimate for $B_j$.
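Putting the two steps together gives the following iteration (again an illustrative Python sketch under the same conventions; since $\hat{A}$ is linear in each $V^\mu$, the maximization step simply selects every example whose coefficient in $\hat{A}$ is positive):

```python
import numpy as np

def em_hard_selection(xi, S, eta, gamma_t, n_iter=50):
    """EM for hard selection: Gaussian averages <J_j>, <J_j^2> given the
    current binary V, then maximize A-hat over V in {0,1}^P."""
    P, N = xi.shape
    V = np.ones(P)                                  # start with all examples selected
    for _ in range(n_iter):
        a = gamma_t * V.sum() / N + 1.0
        b = gamma_t * ((V * S) @ xi) / np.sqrt(N)   # b_j
        J_mean = b / a                              # <J_j>
        J2_mean = (b / a) ** 2 + 1.0 / a            # <J_j^2>
        # coefficient of V^mu in A-hat; keep example mu iff it increases A-hat
        c = (gamma_t / np.sqrt(N)) * (xi @ J_mean) * S \
            - (gamma_t / (2 * N)) * J2_mean.sum() - eta
        V_new = (c > 0.0).astype(float)
        if np.array_equal(V_new, V):                # fixed point reached
            break
        V = V_new
    return J_mean, V
```

After convergence, `J_mean` plays the role of the estimate $\langle J_j\rangle$ for $B_j$.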
\section{Results and Discussion}
\subsection{Soft Selection}
Solving for the order parameters and assuming that $\tilde{\gam}=\gamma$,
we find that for fixed $\eta$, as expected, both error measures $\Phi$
and $\Delta$ decrease
towards $0$ with an increasing number $\alpha N$ of examples, showing that
the algorithm is able to find the true structure vector $\vek{B}$.
Since for the EM algorithm both error measures show qualitatively the same
behaviour, we will concentrate mainly on the angle $\Phi$.
Fig.~1 shows $\Delta(\alpha)$ for $\eta=0$. The second curve
gives the performance of the Hebbian rule (\ref{Hebb}). It
demonstrates the importance of selecting informative examples.
If all examples are weighted equally (and $\eta \neq -\infty$), then
the true vector $\vek{B}$ cannot be recovered for $\alpha\to\infty$.
In Fig.~2, $\Phi(\alpha)$ (EM algorithm) is shown for
$\eta=0$ and $\eta=4$.
Since simulations are harder to perform for $\eta=4$,
where only about $1.8\%$ of the examples are informative, we show
simulation results only for $\eta=0$.
Asymptotically one finds a decrease of the error like
\begin{equation}
\Phi
\stackrel{\alpha\to\infty}{\simeq} \frac{1}{\pi R_{\infty}}
\sqrt{\frac{c}{\alpha}},
\end{equation}
where $R_{\infty}$ is the asymptotic value of the order parameter $R$
and both $R_{\infty}$ and $c$ depend on $\eta$.
As expected, for fixed $\alpha$, the error increases
with $\eta$, i.e. with a growing number of outliers. More interesting is
the nonsmooth behaviour of the second curve, which shows a sudden drop
of the error as $\eta$ is varied. This phase transition can be
observed in more detail in the relief plot of the order parameters
$R$ and $Q$ in Figs.~3a and 3b.
In regions of large $\eta$ or large $\alpha$, the saddlepoint
equations have three solutions. Taking the solution with the smallest
free energy leads to a jump of the order parameters.
It is easier to investigate the transition by simulations as a
function of $\eta$, for fixed $\alpha$. This is shown in Fig.~4,
together with the predictions of the theory.
We have simulated the EM-algorithm starting from random initial
conditions and averaged the order parameters over many samples
of random inputs. Fixing $\alpha$,
the simulations show a good agreement with the
theory for small and large values of $\eta$, but
discrepancies show up close to the predicted transition.
Since the average fraction $\bar{V}$ of informative data points
decreases exponentially with $\eta$, finite size effects
play a crucial role in the simulations.
E.g.~for $\eta=4$, fewer than 2 examples
out of $N=100$ are informative on average, whereas the replica
theory is based on infinitely
many examples from the structured clusters.
Hence, we have performed a finite size scaling to determine
the critical value
$\bar{V}_0$, where the transition sets in. Since for small
$\eta$ (large $\bar{V}$),
the simulations show rather small statistical fluctuations around
a value of $R$ close to 1,
we have (for each $N$) defined $\bar{V}_0$ as the point, where
the distribution of the observed values for $R$ significantly broadens,
indicating the onset of transitions to different values of $R$.
A simple linear extrapolation to $N=\infty$ as shown in the inset of
Fig.~4 gives a value for $\bar{V}_0$ which is
in good agreement with the predicted value for the phase transition.
The large error bar at $\eta=6.8$ is explained by the fact that
the values for $\Phi$ (eq.~(\ref{phi})) have been obtained by
using the sample averages of $R$ and $Q$ which (for finite $N$)
show a transition at slightly different values of $\eta$.
\subsection{Hard Selection}
Solving the order parameter equations for the free energy (\ref{freehard})
at zero temperature, we find similar first order transitions as
for the method of soft selection.
For $\eta$ small enough, there is only one solution
which has a nonzero overlap to the teacher vector $\vek{B}$.
Increasing $\eta$ (and thereby the
number of outliers) beyond a value $\eta_0$, another solution with
${\hat{R}}={\hat{Q}}={\hat{z}}=0$ (see eq.~(\ref{hardord}))
appears, i.e.~where all $V^{\mu}=0$
and all data are considered to be outliers. Here
\begin{equation}
\eta_0 = -\frac{\tilde{\gam}}{2} + \frac{\tilde{\gam}^2}{4\gamma}.
\end{equation}
Between $\eta_0$ and a second parameter value $\eta_c$, however, this trivial
solution has a higher free energy $f_h=0$ than the nontrivial one.
Finally, for $\eta>\eta_c$, the trivial solution with zero
order parameters, giving rise to $\Phi=1/2$,
is the one with lowest free energy.
Fig.~5 shows this critical $\eta$ as a function of $\alpha$.
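For numerical exploration, the threshold $\eta_0$ above has a direct transcription (illustrative Python; for $\gamma=\tilde{\gamma}$ it reduces to $\eta_0=-\tilde{\gamma}/4$):

```python
def eta_0(gamma, gamma_t):
    """Threshold above which the trivial all-outlier solution appears:
    eta_0 = -gamma_t/2 + gamma_t**2/(4*gamma)."""
    return -gamma_t / 2.0 + gamma_t ** 2 / (4.0 * gamma)
```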
So, unlike in the soft selection case, we have, for a large range of $\eta$,
two solutions of the order parameter equations. This is reflected in the
simulations, where individual runs clearly tend towards one of these two optima.
Effects of metastability (which would be a sign of a rugged energy
landscape and indicate strong effects of replica symmetry breaking)
could not be observed. However, a finite size scaling
for the transition point did not lead to a satisfactory agreement with the
theory. We think that the observed discrepancy is a dynamical effect,
where the EM algorithm, starting from a random initial condition,
is unable to reach the global minimum and converges only to the local one,
thus shifting the phase transition to smaller values of $\eta$.
We have compensated for this effect to some
extent by keeping only those simulations (as long as such runs
occur) where the EM algorithm converges to the solution with
nonzero overlap to the vector $\vek{B}$.
Fig.~6 shows the performance of the hard selection for $\alpha=20$.
Comparison to Fig.~4 suggests that the soft selection should be preferred.
The difference between the performance of the two algorithms becomes
more drastic for $\alpha\to\infty$:
The soft selection algorithm is able to tolerate an {\em arbitrary fraction}
of outliers as long as enough data are available. Eventually, it will always
find the true teacher vector $\vek{B}$. On the other hand, for hard selection,
the explicit solution of the order parameter equations for
$\alpha\to\infty$ shows that there is always a critical fraction of outliers
(corresponding to a parameter $\eta_c$ (\ref{etacinf})),
where learning is no longer possible even
if infinitely many examples are available.
It is also interesting to investigate the influence of the overlap
of the two gaussian clouds in the structured input distribution
on the transition parameter $\eta_c$.
Fig.~7 shows $\eta_c$ for $\alpha=\infty$ as a function of
$\gamma$, which gives the inverse squared width of each gaussian
and thus measures the distinguishability of the clouds.
If $\gamma$ is below $0.278$, somewhat surprisingly, the critical $\eta$
jumps discontinuously to zero, i.e. if the overlap of the two clouds
exceeds a certain value, only $50\%$ outliers can be tolerated.
Phase transitions in the performance of learning algorithms have been observed
frequently in the statistical mechanics of neural networks. Since
such effects do not occur in asymptotic (in the sense of large $\alpha$)
expansions or in the exact bounds known in statistics, they seem to be
one of the major contributions of statistical mechanics to the field of
computational learning theory. Phase transitions occur in
multilayer networks, where they can be related to
the breaking of symmetries associated with the network
architecture \cite{SH,Op94}.
Other examples include models with a so-called student teacher
mismatch \cite{Gyoer2},
models with discrete adjustable parameters \cite{GarDer89,Gyoer90}
and models of unsupervised learning \cite{BiMi93,Bark}.
For the present supervised learning model, where the basic
adjustable parameters are continuous variables and where the learner
matches with the distribution of the data, the phase transition was
unexpected. It will be interesting to apply recently developed
combinations of statistical mechanics techniques and methods of
information theory \cite{OpHa}
to establish the existence of phase transitions in mixture models
in more general circumstances.
\begin{appendix}
\section{Free energy and order parameters for soft selection}
Upon averaging, we obtain
\begin{eqnarray*}
\left\langle Z^n \right\rangle = \\
\sum_{\{V_{ab}^\mu\}_\mu}
\int \prod_{a,j} \,dJ_j^a
\exp \left[ -\sum_{a,b} \left( \frac{\tilde{\gam}}{2N}
\sum_\mu V_{ab}^\mu
+ \frac{1}{2} \right) \sum_j (J_j^a)^2 -
\eta \sum_{a,b} \sum_\mu
V_{ab}^\mu \right] \\ \nonumber
\times\left\langle \exp
\left[ -\frac{\tilde{\gam} n \beta}{2} \sum_{\mu,j} (\xi_j^\mu)^2
+ \frac{\tilde{\gam}}{\sqrt{N}}
\sum_{a,b} \sum_{\mu,j} V_{ab}^\mu \xi_j^\mu S^\mu
J_j^a \right] \right\rangle.
\end{eqnarray*}
Within replica symmetry, the introduction of the order parameters
\begin{eqnarray*}
R = \frac{1}{N} \left\langle \vek{J} \cdot \vek{B} \right\rangle & =
& \frac{1}{N} \sum_j J_j^a B_j \\
q = \frac{1}{N} \left\langle \vek{J} \right\rangle ^2 & =
& \frac{1}{N} \sum_j J_j^a J_j^{\tilde{a}} \\
Q = \frac{1}{N} \left\langle \vek{J}^2 \right\rangle & =
& \frac{1}{N} \sum_j (J_j^a)^2
together with their conjugates yields
\begin{eqnarray*}
\left\langle Z^n \right\rangle & \propto &
\int \prod_{a,j} \,dJ_j^a
\exp \left[ iN\Phi \left( \frac{1}{N} \sum_{j,a} J_j^a B_j - n R
\right) \right] \\
& & \exp \left[ iN\omega \left( \frac{1}{N} \sum_{j,a,\tilde{a}
\neq a} J_j^a J_j^{\tilde{a}} - n(n-1)q \right) \right] \\
& & \exp \left[ iN\Omega \left( \frac{1}{N} \sum_{j,a} (J_j^a)^2 - nQ
\right) \right] \\
& & \sum_{\{V_{ab}^\mu\}_\mu}\left( \prod_{a,b} \exp \left[ - \left(
\frac{\tilde{\gam}}{2} \sum_\mu V_{ab}^\mu + \frac{1}{2} N \right) Q - \eta \sum_\mu
V_{ab}^\mu \right] \right) \\
& & \left( \prod_\mu \exp \left[ \frac{1}{1+n\beta\tilde{\gam}/\gamma} \left(
-\frac{1}{2} \tilde{\gam} n \beta (V^\mu)^2 + \tilde{\gam} \sum_{a,b} V_{ab}^\mu
V^\mu R \right.\right.\right. \\
& & \left.\left.\left. + \frac{\tilde{\gam}^2}{2\gamma} \sum_{a,\tilde{a}\neq a}
\sum_{b,\tilde{b}} V_{ab}^\mu V_{\tilde{a}\tilde{b}}^\mu q
+ \frac{\tilde{\gam}^2}{2\gamma} \sum_{a} \sum_{b,\tilde{b}} V_{ab}^\mu
V_{a\tilde{b}}^\mu Q \right) - \eta V^\mu \right] \right)
\end{eqnarray*}
In this expression (and in the following one) the order parameters have to
be taken at their saddle point values.
After a lengthy calculation, we arrive at an expression for the free energy
\begin{equation}\label{freebeta}
f = \frac{1}{\beta} \frac{R^2-Q}{2(Q-q)} - \frac{1}{2\beta} \ln (Q-q)
+ \frac{1}{2} Q - \frac{\alpha}{\beta} M(R,q,Q) + \mbox{const.}
\end{equation}
with
\begin{eqnarray*}
M(R,q,Q) = \frac{1}{1+e^{-\eta}} \int Dx \left\{ \ln \left( \int Dy
\left( 1 + \exp \left[ -\frac{\tilde{\gam}}{2} Q - \eta + \tilde{\gam}
\sqrt{\frac{q}{\gamma}}x \right.\right.\right.\right. \\
\left.\left.\left. + \tilde{\gam} \sqrt{\frac{Q-q}{\gamma}}y \right] \right) ^\beta
\right)
-\frac{1}{2} e^{-\eta} \tilde{\gam}\rho^2\beta \\
\left. + e^{-\eta} \ln \left( \int Dy \left( 1 + \exp \left[ -\frac{\tilde{\gam}}{2}
Q - \eta + \tilde{\gam} R + \tilde{\gam} \sqrt{\frac{q}{\gamma}}x
+ \tilde{\gam} \sqrt{\frac{Q-q}{\gamma}}y \right] \right) ^\beta \right) \right\}.
\end{eqnarray*}
For $\beta\to\infty$ we have to take the limit $q\to Q$.
With the ansatz $(Q-q)\beta =: z = \ord{1}$, we get in the limit
\begin{equation}
f = \frac{R^2-Q}{2z} + \frac{1}{2} Q
- \frac{\alpha}{1+e^{-\eta}}
\left( \hat{I}_5 + \frac{b}{2} \hat{I}_1 \right)
- \frac{\alpha e^{-\eta}}{1+e^{-\eta}} \left( I_5 + \frac{b}{2} I_1
\right) + \mbox{const.}
\end{equation}
This yields the saddlepoint equations
\begin{eqnarray*}
0 \stackrel{!}{=} \frac{\partial f}{\partial R} & = &
\frac{R}{z} - \frac{\alpha e^{-\eta}}{1+e^{-\eta}}
\left( I_6 + \frac{b}{2} I_2 \right) \tilde{\gam} \\
0 \stackrel{!}{=} \frac{\partial f}{\partial z} & = &
\frac{Q-R^2}{2z^2} - \frac{\alpha}{1+e^{-\eta}} \hat{I}_4
\frac{\tilde{\gam}^2}{2\gamma}
- \frac{\alpha e^{-\eta}}{1+e^{-\eta}} I_4 \frac{\tilde{\gam}^2}{2\gamma} \\
0 \stackrel{!}{=} \frac{\partial f}{\partial Q} & = &
-\frac{1}{2z} + \frac{1}{2} - \frac{\alpha}{1+e^{-\eta}}
\left( \frac{\tilde{\gam}}{2} \left( \hat{I}_6 + \frac{b}{2} \hat{I}_2 \right)
+ \frac{\tilde{\gam}}{2\sqrt{\gamma Q}} \left( \hat{I}_7 + \frac{b}{2} \hat{I}_3
\right) \right) \\
& & - \frac{\alpha e^{-\eta}}{1+e^{-\eta}} \left( \frac{\tilde{\gam}}{2}
\left( I_6 + \frac{b}{2} I_2 \right) + \frac{\tilde{\gam}}{2\sqrt{\gamma Q}}
\left( I_7 + \frac{b}{2} I_3 \right) \right)
\end{eqnarray*}
where
\begin{eqnarray*}
I_1 & := & \int Dx \frac{1}{e^{-2a}+1+(2-b)e^{-a}} \\
I_2 & := & \int Dx \frac{2e^{-2a}+(2-b)e^{-a}}{\left( e^{-2a}+1+(2-b)e^{-a}
\right)^2} \\
I_3 & := & \int Dx \frac{2e^{-2a}+(2-b)e^{-a}}{\left( e^{-2a}+1+(2-b)e^{-a}
\right)^2} x \\
I_4 & := & \int Dx \frac{e^{-2a}+1+2e^{-a}}{\left( e^{-2a}+1+(2-b)e^{-a}
\right)^2} \\
I_5 & := & \int Dx \ln \left( 1+e^a \right) \\
I_6 & := & \int Dx \frac{1}{e^{-a}+1} \\
I_7 & := & \int Dx \frac{x}{e^{-a}+1}
\end{eqnarray*}
For the $\hat{I}_j$, $a$ has to be replaced by $\hat{a}$, where
\begin{eqnarray*}
a & := & -\frac{\tilde{\gam}}{2} Q - \eta + \tilde{\gam} R + \tilde{\gam} \sqrt{\frac{Q}{\gamma}}x \\
\hat{a} & := & -\frac{\tilde{\gam}}{2} Q - \eta + \tilde{\gam} \sqrt{\frac{Q}{\gamma}}x \\
b & := & \frac{\tilde{\gam}^2}{\gamma} z.
\end{eqnarray*}
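Numerically, the integrals $I_j = \int Dx\,(\cdot)$ over the Gaussian measure $Dx$ are conveniently evaluated by Gauss--Hermite quadrature. A short Python sketch (probabilists' convention, weight $e^{-x^2/2}$; the normalization $1/\sqrt{2\pi}$ turns the quadrature sum into an average over $Dx$):

```python
import numpy as np

def gauss_measure(f, n=80):
    """Approximate int Dx f(x), with Dx the standard Gaussian measure."""
    x, w = np.polynomial.hermite_e.hermegauss(n)   # weight exp(-x^2/2)
    return (w @ f(x)) / np.sqrt(2.0 * np.pi)
```

E.g.\ $I_6$ follows by inserting $f(x)=1/(e^{-a(x)}+1)$ with $a(x)$ as defined above.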
\section{ Free energy and order parameters for hard selection }
The Hamiltonian (\ref{henergy}) is explicitly given by
\begin{eqnarray*}
{\cal H}_h(\{V^\mu\}_\mu) & := & - \ln \int d\vek{J} \, \wkt{{\rm I\kern -0.13em D},\vek{J},\{V^\mu\}_\mu} \\
& = & -\left[ \frac{\tilde{\gam}^2}{2 N (\tilde{\gam} \hat{Q} + 1)}
\sum_{\mu,\nu} V^\mu V^\nu \sum_j\xi_j^\mu \xi_j^\nu S^\mu
S^\nu
\right. \\
& & \left.
- \frac{\tilde{\gam}}{2} \sum_{\mu,j} (\xi_j^\mu)^2 -
\eta \sum_\mu V^\mu \right]
+ (N/2)\ln(\tilde{\gam}\hat{Q} + 1) -\ln C
\end{eqnarray*}
where
\begin{eqnarray*}
C := \frac{1}{2^{\alpha N}} \left( \frac{\tilde{\gam}}{2\pi} \right) ^{\alpha N^2 /2}
\frac{ 1 }{ ( 1 + \exp[-\eta] )^{\alpha N} }
\left( \frac{1}{ 2\pi } \right)^{N/2}
\end{eqnarray*}
with the order parameters
\begin{eqnarray*}\label{hardord}
{\hat{R}} & := & \frac{1}{N} \sum_\mu V_a^\mu V^\mu \\
{\hat{q}} & := & \frac{1}{N} \sum_\mu V_a^\mu V_{\tilde{a}}^\mu \\
{\hat{Q}} & := & \frac{1}{N} \sum_\mu (V_a^\mu)^2 = \frac{1}{N} \sum_\mu
V_a^\mu .
\end{eqnarray*}
Averaging the partition function (\ref{zhard}) yields
\begin{eqnarray*}
\left\langle Z_h^n \right\rangle & =
& \left( \frac{1}{1+e^{-\eta}} \right) ^{\alpha N}
\left( \frac{1}{1+n \beta \tilde{\gam}/\gamma} \right) ^{\alpha
N^2 / 2}
\sum_{\{V_a^\mu,V^\mu\}_\mu} \int \prod_{a,j} Dy_j^a
\\
& & \exp \! \left[ -\eta \sum_\mu V^\mu
+ \frac{1}{2(1+n \beta \tilde{\gam}/\gamma)} \left(
- n \beta \tilde{\gam} \sum_\mu V^\mu
\right. \right. \\
& & \left. \left.
+ \frac{2 \tilde{\gam} \sqrt{\beta}}{\sqrt{\tilde{\gam} \hat{Q} + 1}}
\sum_j \sum_a y_j^a B_j \hat{R}
+ \frac{\tilde{\gam}^2 \beta}{\gamma (\tilde{\gam} \hat{Q} + 1)}
\sum_j \sum_{a,\tilde{a}\neq a} y_j^a y_j^{\tilde{a}} \hat{q}
\right. \right. \\
& & \left. \left.
+ \frac{\tilde{\gam}^2 \beta}{\gamma (\tilde{\gam}\hat{Q} + 1)}
\sum_j \sum_a (y_j^a)^2 \hat{Q} \right)
- n N \beta \eta \hat{Q} \right]
\\
& & (\tilde{\gam} \hat{Q} + 1) ^{-n N \beta/2} C^{n \beta}
\end{eqnarray*}
The free energy $f_h$ simplifies in the limit $\beta\to\infty$,
where the scaling
$
\beta ({\hat{q}}-{\hat{Q}}) =: {\hat{z}} = \ord{1}
$
is used.
We finally obtain $f_h$ as a function of the actual order parameters at the
saddle point:
\begin{equation}\label{freehard}
f_h= \eta {\hat{Q}} + \frac{1}{2} \ln ({\hat{Q}}\tilde{\gam}+1)
- \frac{({\hat{Q}}+2{\hat{R}}^2\gamma\rho^2)({\hat{Q}}\tilde{\gam}+1)
\gamma\tilde{\gam}^2}
{4({\hat{Q}}\gamma\tilde{\gam}+{\hat{z}}\tilde{\gam}^2+\gamma)^2}
\end{equation}
A similar calculation using (\ref{ordhard}) yields the averages
\begin{eqnarray}\nonumber
\sum_j {\left\langle J_j \right\rangle}_{\smvek{J}} B_j
& = & N \frac{ {\hat{R}} \tilde{\gam} }{ {\hat{Q}} \tilde{\gam} +1 }
\left( 1 - \frac{ {\hat{z}}\tilde{\gam}^2 }
{ {\hat{Q}}\gamma\tilde{\gam}+{\hat{z}}\tilde{\gam}^2+\gamma } \right) \\
\sum_j {\left\langle {J_j}^2 \right\rangle}_{\smvek{J}}
& = & N \frac{ \gamma\tilde{\gam}^2 ({\hat{Q}}+2{\hat{R}}^2\gamma) }
{ 2({\hat{Q}}\gamma\tilde{\gam}+{\hat{z}}\tilde{\gam}^2+\gamma)^2 } +
N \frac{ 1 }{ {\hat{Q}}\tilde{\gam}+1 }.\label{ordhard:two}
\end{eqnarray}
In the limit $\alpha\to\infty$, the resulting order parameter equations
can be further simplified by making the scaling ans\"atze
${\hat{R}} = \alpha {\hat{R}}_0$,
${\hat{Q}} = \alpha {\hat{Q}}_0$,
${\hat{z}} = -\alpha {\hat{z}}_0$,
where ${\hat{R}}_0,{\hat{Q}}_0,{\hat{z}}_0$ are independent of
$\alpha$ as $\alpha\to\infty$.
For $\gamma=\tilde{\gam}$, the critical parameter $\eta_c$, beyond which
the trivial solution with zero order parameters
attains the global minimum of the free energy, is determined from
\begin{eqnarray}\label{etacinf}
0 & = & \eta - 2 \gamma \pi \eta \exp[\gamma+2\eta]
\Phi^2[\sqrt{\gamma}-\sqrt{2\eta}] /
\left\{ \exp[\sqrt{2\gamma\eta}] + \exp[\gamma/2+\eta]
\right. \\ \nonumber
& & \left.
+ \sqrt{\pi\eta} \exp[\gamma/2+\eta]
\left( -2\Phi[\sqrt{\gamma}-\sqrt{2\eta}] - 2 e^\eta +
2 e^\eta \Phi[\sqrt{2\eta}] \right) \right\} ^2.
\end{eqnarray}
\end{appendix}
\newpage
President Donald Trump announced today his intention to pull out of the Joint Comprehensive Plan of Action (JCPOA), more commonly referred to as the "Iran Deal" or the "Iran Nuclear Deal", thus reversing an agreement made by the Obama Administration. Mr. Trump proclaimed that Iran is the "leading State sponsor of terror... a murderous regime."
The President went on to accuse Iran of backing terrorist organizations and proxies around the world. Mr. Trump blamed the previous administration for constructing a poor deal when the U.S. had far better leverage and reiterated his belief the JCPOA will not prevent Iran from acquiring nuclear weapons.
The entire concept was based on the premise that the agreement would prevent Iran from gaining the ability to produce nuclear weapons. Opponents say the deal only delays the inevitable and does not prohibit Iran from conducting ballistic missile tests, nor does it prohibit Iran from continuing the country's commercial nuclear program. They also say Iran has been emboldened since the deal and point to Iran's expanded meddling in Syria and Lebanon.
Supporters of remaining in the agreement say that while it is not perfect, it would be worse to withdraw. The JCPOA is supported by nearly every other U.S. ally, as evidenced by visits from German Chancellor Angela Merkel and French President Macron over the last week. President Macron urged Mr. Trump to remain in the deal while also pressing for additional modifications, which have, thus far, been dismissed by Iran.
The United Kingdom also supports remaining in the deal, and that position was expressed by Prime Minister Theresa May to the president via telephone. May also sent Foreign Secretary Boris Johnson, who expressed concern about a new arms race in the Middle East. For his part, French President Emmanuel Macron said there was no "Plan-B."
Supporters of remaining in the deal also say that America's withdrawal would send the wrong signal, namely that the U.S. is an unreliable partner - poor timing, they say, in light of ongoing nuclear negotiations with North Korea. President Trump pointed to withdrawal as further proof to North Korea that when he makes a promise, he keeps it. Still others said the highly controversial Iran deal was never presented to the Senate for ratification and is therefore only an agreement, which does not formally bind the country as a treaty would have done.
The President's decision will have significant implications for businesses. Sanctions will automatically "snap back," essentially giving businesses 90 or 180 days, depending upon their exact categories, to unwind business relationships in the country. As the JCPOA was initially constructed, negotiators were concerned the UN Security Council would never agree to reimpose sanctions, which is why the language was constructed in such a way that reimposition of sanctions is automatic. The Security Council would have to affirmatively vote not to continue lifting the sanctions. It is not clear, however, what would occur if another country were to walk away from the deal, because negotiators always assumed Iran would be the only one to breach the agreement.
One thing is certain, however: absent a new agreement, the U.S. will reinstitute sanctions, which gives the administration significant leverage. Reimposition of sanctions prohibits any U.S. company from doing business with Iran. More importantly, these sanctions cut off access to U.S. markets for third parties that do business with Iran. Those who attempt to do so absent a waiver could not only face severe Treasury fines, but their assets in the U.S. may be frozen or seized. Some analysts predict this decision will result in short-term increases in global crude oil prices; others feel the decision will not significantly impact oil markets, as the threat of U.S. withdrawal from the JCPOA was already built into the price.
It is also worth noting the impacts of President Trump's decision are not completely limited to U.S. companies. As alluded to above, these sanctions can potentially ban any company from conducting business in the U.S. or using the U.S. banking system if they or their European bank of choice conducts business with Iran.
According to Senator Chuck Schumer (D-NY), these "secondary sanctions" would apply to other countries, including U.S. allies. It may very well be that secondary sanctions will severely punish any company that decides to continue doing business with Iran. Under this scenario, any company could lose the ability to do business in the U.S., essentially cutting off U.S. markets to foreign companies.
While it may take days or weeks to fully understand the implications of the president's decision, only time will tell if his decision will bring parties to the negotiating table, or drive them further away.
Q: Inverse of a matrix

I'm trying to show that
$$P^H ( I_M + PBP^H)^{-1} P = \big( (P^H P)^{-1} + B \big)^{-1},$$
where $P$ is an $M$-by-$N$ matrix, $I_M$ is the $M$-by-$M$ identity matrix, $B$ is an $N$-by-$N$ matrix, and $P^H P$ is invertible.

I've used various versions of matrix inversion lemmas, but I'm stuck. How can the above equality be shown?

A: Let $X = P^H(I + PBP^H)^{-1}P$ and $Y = (P^HP)^{-1} + B$. Since $X$ and $Y$ are both square matrices of the same size ($N$-by-$N$), it's enough to show that $XY = I$.
Now $Y = (I + BP^H P)(P^HP)^{-1}$ and $P(BP^HP) = (PBP^H)P$. So
$$XY = P^H ( I + PBP^H)^{-1}P (I + BP^HP)(P^HP)^{-1} = P^H(I + PBP^H)^{-1}(I + PBP^H)P (P^HP)^{-1} = P^HP (P^HP)^{-1} = I.$$
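As a quick numerical sanity check of the identity above (not part of the original answer), the two sides can be compared for random matrices with NumPy. The sizes and the random draws below are arbitrary; the check assumes the draws produce invertible matrices, which holds generically.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 5, 3  # arbitrary sizes; P^H P is N-by-N and generically invertible

# Random complex P (M-by-N) and B (N-by-N)
P = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
B = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))

PH = P.conj().T  # conjugate transpose P^H

# Left- and right-hand sides of the identity
lhs = PH @ np.linalg.inv(np.eye(M) + P @ B @ PH) @ P
rhs = np.linalg.inv(np.linalg.inv(PH @ P) + B)

assert np.allclose(lhs, rhs)  # the two N-by-N matrices agree up to rounding
```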
Lemma 31.13.10. Let $X$ be a scheme. Let $D, D' \subset X$ be effective Cartier divisors such that the scheme theoretic intersection $D \cap D'$ is an effective Cartier divisor on $D'$. Then $D + D'$ is the scheme theoretic union of $D$ and $D'$.

Proof. See Morphisms, Definition 29.4.4 for the definition of scheme theoretic intersection and union. To prove the lemma working locally (using Lemma 31.13.2) we obtain the following algebra problem: given a ring $A$ and nonzerodivisors $f_1, f_2 \in A$ such that $f_1$ maps to a nonzerodivisor in $A/f_2A$, show that $f_1A \cap f_2A = f_1f_2A$. We omit the straightforward argument. $\square$
Q: socket.io emit function server side only?

I'm trying to develop a real time canvas game. I have some code with the basic functionality I need, but it doesn't use Node or socket.io.
how would I emit this function to all clients?
function init()
{
    numShapes = 10;
    shapes = [];
    ctx = canvas.getContext('2d');  // get the context before any drawing
    makeShapes();                   // create the shapes before the first draw
    drawScreen();
    setInterval(draw, 10);
}
https://jsfiddle.net/a9b3rm5u/5/
I am now trying to add a real time element but cannot understand socket.emit. How would I emit the balls from the server to all the clients? then use client side code for the click events?
A: It depends on what you want to emit.
I first suggest you look at the chat example on their website, http://socket.io/get-started/chat/, to see how to set up socket.io in your code.
There's info on how to set up Node.js and Socket.io, both of which you'll need in order to use socket.emit.
Your concern about client- and server-side interaction isn't an issue: you can emit from the client, the server, or both.
You might have something that looks like:
io.on('connection', function(socket){
socket.emit('coordinates', {x: /* your shape's x data */, y: /* your shape's y data */});
});
A: On your server, if the server has the ball then you can get that ball and emit it to each connected client by:
io.emit('ballMove', {x: /*x coordinate of ball*/, y: /*y coordinate of ball*/});
On your client socket, you must listen for 'ballMove' which is emitted by the server.
something like:
socket.on('ballMove', function(data){
//get the x and y passed by server from the data argument
//update the clients local ball with the x & y coordinate passed by the server
ball.x = data.x;
ball.y = data.y;
});
if you want sockets to broadcast their local ball's coordinates to other clients except on the socket that emitted it then you can:
socket.broadcast.emit('ballMove', { x: ball.x, y:ball.y});
All connected clients listening for 'ballMove' will receive the data, except the socket that emitted the event.
\section{Introduction}
Let $\mathbf{x}$ be a $J$-variate random variable following a Gaussian mixture model (GMM) with density
\begin{equation}
\label{md}
h(\mathbf{x};\theta)=\sum_{g=1}^G p_g \phi(\mathbf{x};\boldsymbol{\mu}_g,\boldsymbol{\Sigma}_g),
\end{equation}
where the $p_g$'s are the mixing proportions, with $p_g > 0$ $\forall g$ and $\sum_g p_g=1$, and
the component $\phi(\mathbf{x};\boldsymbol{\mu}_g,\boldsymbol{\Sigma}_g)$
represents the density of a $J$-variate normal distribution
with mean vector $\boldsymbol{\mu}_g$ and covariance matrix $\boldsymbol{\Sigma}_g$; furthermore, let us indicate the set of model parameters with $\theta=\left\{p_g, \boldsymbol{\mu}_g,\boldsymbol{\Sigma}_g \right\}_G=\left\{p_1,\dots p_G, \boldsymbol{\mu}_1,\dots,\boldsymbol{\mu}_G,\boldsymbol{\Sigma}_1,\dots,\boldsymbol{\Sigma}_G \right\} \in \mathbb{R}^{G(1+J+J^{2})},$
and the parameter space with
\begin{equation}
\label{ps}
\Theta=\left\{\theta \in \mathbb{R}^{G(1+J+J^{2})} :\sum_g p_g=1, p_g > 0, \boldsymbol{\Sigma}_g \succ \boldsymbol{0},
g=1, \dots, G\right\},
\end{equation}
where the symbol $\succ$ refers to L\"{o}wner ordering on symmetric matrices and, in this case, is equivalent to requiring that $\boldsymbol{\Sigma}_g$ be positive definite.
The GMM is frequently used to classify a sample of observations. The idea is to consider the sample as drawn from a heterogeneous population where each sub-population is described by one component of the mixture. In other terms, each observation is assumed to come from one of the $G$ different groups characterized by the mixture components.
The observations are classified into the groups by computing the posterior probabilities
\begin{equation}
p(g|\mathbf{x}) = \frac{p_g \phi(\mathbf{x};\boldsymbol{\mu}_g,\boldsymbol{\Sigma}_g)}
{\sum_h p_h \phi(\mathbf{x};\boldsymbol{\mu}_h,\boldsymbol{\Sigma}_h)},
\end{equation}
and assigning each observation to the group with the largest posterior probability.
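The classification rule just described can be sketched in a few lines of Python; this is an illustrative implementation with made-up parameter values, not code from the paper, and it assumes SciPy for the Gaussian density.

```python
import numpy as np
from scipy.stats import multivariate_normal

def posterior_probs(x, p, mu, Sigma):
    """p(g|x) for a Gaussian mixture with weights p, means mu, covariances Sigma."""
    dens = np.array([p[g] * multivariate_normal.pdf(x, mean=mu[g], cov=Sigma[g])
                     for g in range(len(p))])
    return dens / dens.sum()

# Hypothetical two-component bivariate mixture
p = [0.6, 0.4]
mu = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]
Sigma = [np.eye(2), 2.0 * np.eye(2)]

x = np.array([2.5, 2.8])
post = posterior_probs(x, p, mu, Sigma)
g_hat = int(np.argmax(post))  # assign x to the group with the largest posterior
```

Here the point lies close to the second component's mean, so `g_hat` picks the second group despite its smaller mixing proportion.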
The parameters of the GMM are generally unknown and estimated from the data. Given a sample of i.i.d. observations $\left\lbrace \mathbf{x}_i \right\rbrace_n= \left\{ \mathbf{x}_1, \mathbf{x}_2, \dots \mathbf{x}_n \right\}$, the estimation is usually done by maximizing the likelihood
\begin{equation}
\label{eq:like}
L\left(\theta; \left\lbrace \mathbf{x}_i \right\rbrace_n \right)=\prod_{i=1}^n \left(\sum_{g=1}^G p_g \phi(\mathbf{x}_i;\boldsymbol{\mu}_g,\boldsymbol{\Sigma}_g)\right).
\end{equation}
The likelihood in Equation \eqref{eq:like} is known to be unbounded and it is cursed by the presence of several local maxima. As a consequence, the EM algorithm may fail to converge, drifting towards degenerate solutions. To face degeneracy, several methods have been proposed in the literature in which constraints or penalties are added to the log-likelihood. Their main objective is to keep the eigenvalues of the class conditional covariance matrices bounded away from zero.
This paper considers the sufficient condition formulated by Ingrassia (2004) such that Hathaway's (1985) constraints hold: we propose a generalization that enforces the equivariance with respect to linear affine transformations of the data. The idea is to shrink the class conditional covariance matrices towards a pre-specified matrix $\boldsymbol{\Psi}.$ We investigate possible data-driven methods for choosing the matrix $\boldsymbol{\Psi},$ when {\em a priori} information on the group-specific covariance structure is not available, and we let the data determine the optimal amount of shrinkage. The equivariance property of the method is a key feature for two reasons. First, it means that irrespective of the kind of standardization performed on the data, the final clustering will be the same, provided that $\boldsymbol{\Psi}$ is transformed accordingly. Second, whatever the scale of the data as they come in, there will be no \emph{best} pre-processing of the data ensuring a \emph{better} result, as the final clustering is not affected by changes in scale.
The plan of the paper is the following. Section \ref{degen} gives insights on the notion of degeneracy for multivariate GMM, and Section \ref{remdegen} reviews some of the workarounds proposed by the existing literature. In Section \ref{invar} we state the property of equivariance of GMM and we show, in Section \ref{constr1}, that the property holds in the constrained approach of Hathaway (1985), whereas it does not hold in the sufficient condition provided by Ingrassia (2004). In Section \ref{constr2} we illustrate how these constraints can be generalized, by introducing the matrix $\boldsymbol{\Psi},$ to become equivariant under linear affine transformations of the data, provided that $\boldsymbol{\Psi}$ is transformed accordingly, and how their configuration can be tuned from the data (Section \ref{crossvalid}). Section \ref{alg} summarizes the algorithm. The proposal is evaluated through a simulation study (Section \ref{simulation}) and an empirical application (Section \ref{wineapp}). Section \ref{concl} concludes with a final discussion.
\section{The issue of degeneracy in GMM}\label{degen}
In the univariate case, the likelihood function increases without bound if some variances tend to zero and the corresponding component's mean coincides with a sample observation (Kiefer and Wolfowitz, 1956; Day, 1969). Biernacki and Chr\'etien (2003) showed that if mixture parameters are close to a degenerate solution, then the EM is attracted by it and the divergence is extremely fast. Although Kiefer (1978) proved that maximum likelihood does not fail, as there exists a local maximizer strongly consistent and asymptotically efficient, several local maximizers can exist for a given sample. That is, some local maximizers are spurious, i.e. with a high likelihood but of little practical use because they are highly biased. They are characterized by some component variances and mixing proportions very small relative to the others (Day, 1969; McLachlan and Peel, 2000). Detecting the desired solution, among the many available, can therefore be a complicated task.
The same problems hold in the multivariate case (as an example, see Ingrassia and Rocci, 2011, for an extension of Biernacki and Chr\'etien, 2003), with additional complications. To notice how unboundedness is caused, first of all let us express the density of the $i$-th observation on the $g$-th component as follows
\begin{equation}\label{Lispec}
\phi\left(\mathbf{x}_i;\boldsymbol{\mu}_g, \boldsymbol{\Sigma}_g \right) = \frac{1}{\sqrt{(2\pi)^{J} \prod_{j=1}^{J}\lambda_{jg}}} \exp\left\{ -\frac{1}{2} (\mathbf{x}_{i} - \boldsymbol{\mu}_{g})'\mathbf{Q}_{g}\mathbf{L}_{g}^{-1}\mathbf{Q}_{g}'(\mathbf{x}_{i} - \boldsymbol{\mu}_{g}) \right\},
\end{equation}
where $\mathbf{Q}_{g}$ is the square $J \times J$ matrix whose $j$-th column is the eigenvector $\mathbf{q}_{jg}$ of $\boldsymbol{\Sigma}_{g},$ and $\mathbf{L}_{g}$ is the diagonal matrix whose diagonal elements are the corresponding eigenvalues $\{\lambda_{jg}\}_{J},$ ordered such that $\lambda_{1g} \geq \dots \geq \lambda_{Jg}.$ Equation \eqref{Lispec} can be rewritten as
\begin{align*}
\phi\left(\mathbf{x}_i;\boldsymbol{\mu}_g, \boldsymbol{\Sigma}_g \right) &= \frac{1}{\sqrt{(2\pi)^{J} \prod_{j=1}^{J}\lambda_{jg}}} \exp\left\{ -\frac{1}{2} (\mathbf{x}_{i} - \boldsymbol{\mu}_{g})'\left(\sum_{j=1}^{J} \lambda_{jg}^{-1} \mathbf{q}_{jg} \mathbf{q}_{jg}'\right)(\mathbf{x}_{i} - \boldsymbol{\mu}_{g}) \right\} \\
&= \frac{1}{\sqrt{(2\pi)^{J} \prod_{j=1}^{J}\lambda_{jg}}} \exp\left\{ -\frac{1}{2} \sum_{j=1}^{J}\lambda_{jg}^{-1}[(\mathbf{x}_{i} - \boldsymbol{\mu}_{g})'\mathbf{q}_{jg}][\mathbf{q}_{jg}'(\mathbf{x}_{i} - \boldsymbol{\mu}_{g})] \right\} \\
&= \frac{1}{\sqrt{(2\pi)^{J} \prod_{j=1}^{J}\lambda_{jg}}} \exp\left\{ -\frac{1}{2} \sum_{j=1}^{J}\lambda_{jg}^{-1}[(\mathbf{x}_{i} - \boldsymbol{\mu}_{g})'\mathbf{q}_{jg}]^{2} \right\}. \numberthis \label{eq:Li2parts}
\end{align*}
As Policiello (1981) argued, the likelihood in Equation \eqref{eq:like} can be written as the sum of non negative terms.
Among them, it is possible to isolate the product of the density of the $i$-th observation on
the $g$-th component (Equation \eqref{eq:Li2parts}) and the densities of the other observations on the other components and the corresponding mixing proportions. If observation $i$ is such that $\mathbf{x}_{i}'\mathbf{q}_{Jg} - \boldsymbol{\mu}_{g}'\mathbf{q}_{Jg} = 0,$ then, as $\lambda_{Jg} \rightarrow 0,$ there would be no exponential term involving $\lambda_{Jg}$ that can attenuate the effect of $\frac{1}{\sqrt{(2\pi)^{J} \prod_{j=1}^{J}\lambda_{jg}}} \rightarrow \infty.$ In words, the sample likelihood diverges when, in one component, the covariance matrix is close to singularity and the projection of the component's mean on the eigenvector corresponding to the smallest eigenvalue coincides with the projection of one of the observations on the same eigenvector.
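The divergence mechanism is easy to see numerically even in the univariate case. The following sketch (illustrative, with made-up data, not code from the paper) pins one component's mean on an observation and shrinks its variance; the log-likelihood of the two-component mixture grows without bound.

```python
import numpy as np

def gmm_loglik(x, p, mu, sigma2):
    """Log-likelihood of a univariate two-component Gaussian mixture."""
    dens = sum(p_g * np.exp(-(x - m) ** 2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)
               for p_g, m, s2 in zip(p, mu, sigma2))
    return np.log(dens).sum()

x = np.array([-1.2, 0.3, 0.9, 1.7, 2.4])   # made-up sample
p = [0.5, 0.5]

# Pin the first component's mean on an observation and shrink its variance;
# the second component (variance 1) keeps the other densities bounded away from 0
lls = [gmm_loglik(x, p, mu=[x[0], 1.0], sigma2=[s2, 1.0])
       for s2 in (1e-1, 1e-4, 1e-8)]
assert lls[0] < lls[1] < lls[2]  # the likelihood increases as the variance shrinks
```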
\section{Remedies to degeneracy}\label{remdegen}
The easiest way to handle degeneracy is to initialize the EM algorithm from several starting points until a local maximum is found (Biernacki and Chr\'etien, 2003). McLachlan and Peel (2000) proposed monitoring the local maximizers by inspecting the relative size of the estimated mixing proportions and component variances. This leads, in practice, to performing maximum likelihood estimation by looking for the correct local maximum and discarding those that seem to be spurious.
Further methods exploit constraints on the covariance matrices. This approach is based on the seminal work of Hathaway (1985), where he studied how to avoid the divergence of the likelihood in the univariate case by imposing a lower bound, say $c$, to the ratios of the scale parameters. In this way the variances cannot be arbitrarily different. Hathaway proved the boundedness of the likelihood and the consistency of the ML estimator under such constraints. In the multivariate case, the lower bound is imposed on the generalized eigenvalues of each pair of covariance matrices and the ML estimator results to be equivariant under linear affine transformations of the data. This implies that, as in the unconstrained case, if the data are linearly transformed, the estimated posterior probabilities do not change and the clustering remains unaltered (see Sections \ref{invar} and \ref{constr1}).
An important issue is the choice of the constant $c,$ which controls the strength of the constraints. In the context of univariate mixtures of Gaussians or linear regression models, some authors have shown that the maximum likelihood constrained estimator is consistent if $c$ decreases to zero at a certain rate as the sample size increases to infinity (e.g. Tanaka and Takemura (2006), Tan et al. (2007), Xu et al. (2010)). Nevertheless, finite-sample sensible choice of $c$ is still an open issue.
Hathaway's constraints are very difficult to apply within iterative procedures like the EM algorithm. To solve this problem, Ingrassia (2004) proposed to simplify the constraints by putting bounds on the eigenvalues of the covariance matrices. Although putting lower bounds on the group conditional covariance matrices was already common practice, Ingrassia (2004) found a way to reconcile Hathaway's contribution with the common practice: his bounds on the eigenvalues give a sufficient condition such that Hathaway's constraints are satisfied. The simplification is such that the constraints can be easily implemented within the EM algorithm, preserving its monotonicity property (as shown in Ingrassia and Rocci, 2007).
Several authors extended the constrained setup of Ingrassia (2004). Greselin and Ingrassia (2013) applied this setup to mixtures of factor analyzers. They proposed a tuning procedure for selecting the bounds for the eigenvalues of the covariance matrices, based on the final likelihood over a set of runs. Ingrassia and Rocci (2011) modified the constrained algorithm, allowing for stringent constraints which are lifted during the iterations. Browne et al. (2013) combined the ideas in Ingrassia and Rocci (2007, 2011), constraining dynamically the smallest eigenvalue, the largest eigenvalue and both the smallest and the largest ones. All of these proposals share the drawback of not being affine equivariant.
Gallegos and Ritter (2009a; 2009b), and Ritter (2014) applied Hathaway's constraints to robust clustering. They proposed to obtain all local maxima of the trimmed likelihood and, for each solution, investigate the value of $c$ such that it fulfills the constraints. The idea is to choose, \emph{a posteriori}, the solution with the highest trade-off between scale balance ($c$) and fit (log-likelihood). This approach can be viewed as a refined version of what was proposed in McLachlan and Peel (2000). Garcia-Escudero et al. (2008), from the same strand of literature, introduced the TCLUST algorithm, based on controlling the relative sizes of the eigenvalues of the cluster scatter matrices. The TCLUST algorithm implies solving several complex optimization problems. Fritz et al. (2013) and Garcia-Escudero et al. (2014) proposed further improvements to the algorithm in order to make it more efficient. The constraints considered therein are not affine equivariant.
Seo and Kim (2012) pointed out that singular and spurious solutions overfit random localized patterns composed of few observations in the dataset. Such observations have a strong influence on the formation of the likelihood-based solutions. Their proposal was to take out such, say, $k$ observations with the highest likelihood (likelihood-based $k$-deleted method), or with the highest value for a score-based statistic (score-based $k$-deleted method). In this way the likelihood of the reduced samples is evaluated at each local maximizer previously found: the root they suggested to select is the one with the highest $k$-deleted likelihood. Kim and Seo (2014) show that their score-based method can be fairly well approximated with the computationally more efficient gradient-based version of the $k$-deleted method.
The degeneracy problem may also be addressed by adding a penalty to the log-likelihood (penalized approach). Ciuperca et al. (2003) have shown the consistency of the penalized likelihood estimators proposed in Ridolfi and Idier (1999, 2000) for univariate GMM. Chen and Tan (2009) extended the consistency result for the multivariate case. In this framework, the penalty term on the component covariance is added to the log-likelihood (Snoussi and Djafari, 2001; Chen et al, 2008). This penalty can be interpreted as the log of the prior distribution in a Maximum-A-Posteriori estimation setup. Yet, the penalized methods are not affine equivariant, unless the prior's hyperparameters are suitably transformed. MAP estimation, with an \emph{a priori} distribution for the covariance matrices, is what Fraley and Raftery (2007) suggested to use, instead of Maximum-Likelihood, to circumvent the issues of degeneracy and spurious solutions.
\section{Equivariance in the Gaussian Mixture model}\label{invar}
The maximum likelihood estimators (MLE) of Equation (\ref{md}) are equivariant with respect to linear affine transformations of the data. That is, if the data are linearly transformed, the MLE are transformed accordingly. This property is particularly important
in classification because it implies that linear affine transformations of the data
do not change the posterior estimates (Kleinberg, 2002; Ritter, 2014).
The equivariance property can be shown in the following way.
Let us define a linear affine transformation $\mathbf{x}^*=\mathbf{Ax}+\mathbf{b}$, where $\mathbf{A}$ is non singular.
It is well known that
\begin{align*}
\phi(\mathbf{x};\boldsymbol{\mu},\boldsymbol{\Sigma}) & = |\mathbf{A}|\phi(\mathbf{Ax}+\mathbf{b};
\mathbf{A}\boldsymbol{\mu}+\mathbf{b},\mathbf{A}\boldsymbol{\Sigma}\mathbf{A}')\\
& = |\mathbf{A}|\phi(\mathbf{x}^*;\boldsymbol{\mu}^*,
\boldsymbol{\Sigma}^*) \numberthis \label{eq:equiGauss}
\end{align*}
where $\boldsymbol{\mu}^*=\mathbf{A}\boldsymbol{\mu}+\mathbf{b}$ and $\boldsymbol{\Sigma}^*=\mathbf{A}\boldsymbol{\Sigma}\mathbf{A}'$.
This implies that, denoting the likelihood of the original data with $\mathcal{F}$ and the likelihood of the transformed data with $\mathcal{F}^*$, we have, with obvious notation
\begin{align*}
\mathcal{F} & = L\left(\left\lbrace p_g,\boldsymbol{\mu}_g, \boldsymbol{\Sigma}_g \right\rbrace_G; \left\lbrace \mathbf{x}_i \right\rbrace_n \right) = \prod_{i=1}^n \sum_{g=1}^G p_g \phi(\mathbf{x}_i;\boldsymbol{\mu}_g,\boldsymbol{\Sigma}_g) \\
& = \prod_{i=1}^n \sum_{g=1}^G p_g |\mathbf{A}| \phi(\mathbf{x}_i^*;\boldsymbol{\mu}_g^*,\boldsymbol{\Sigma}_g^*) =
|\mathbf{A}|^n L\left(\left\lbrace p_g,\boldsymbol{\mu}_g^*, \boldsymbol{\Sigma}_g^* \right\rbrace_G; \left\lbrace \mathbf{x}_i^* \right\rbrace_n \right)\\
& = |\mathbf{A}|^n \mathcal{F}^*. \numberthis \label{eq:equiGaussmix}
\end{align*}
It follows that there exists a one to one correspondence among the local maxima of $\mathcal{F}$ and $\mathcal{F}^*$.
In particular, if
$\left\{\hat{p}_g,\hat{\boldsymbol{\mu}}_g, \hat{\boldsymbol{\Sigma}}_g \right\}_G$ is a local maximizer for $\mathcal{F},$ then $\left\{\hat{p}_g,\mathbf{A}\hat{\boldsymbol{\mu}}_g+\mathbf{b}, \mathbf{A}\hat{\boldsymbol{\Sigma}}_g \mathbf{A}' \right\}_G$ will be a local maximizer for $\mathcal{F}^*$. Analogously, if $\left\{\hat{p}_g^*,\hat{\boldsymbol{\mu}}_g^*, \hat{\boldsymbol{\Sigma}}_g^* \right\}_G$ is a local maximizer for $\mathcal{F}^*$, then
$\left\{\hat{p}_g^*,\mathbf{A}^{-1}(\hat{\boldsymbol{\mu}}_g^*-\mathbf{b}), \mathbf{A}^{-1}\hat{\boldsymbol{\Sigma}}_g^* (\mathbf{A}')^{-1} \right\}_G$
will be a local maximizer for $\mathcal{F}$.
It is interesting to note that every pair of local maximizers produces the same estimates of the posterior probabilities, that is
\begin{eqnarray*}
\hat{p}(g|\mathbf{x}_i) & = &\frac{\hat{p}_g \phi(\mathbf{x}_i;\hat{\boldsymbol{\mu}}_g,\hat{\boldsymbol{\Sigma}}_g)}{\sum_h \hat{p}_h \phi(\mathbf{x}_i;\hat{\boldsymbol{\mu}}_h,\hat{\boldsymbol{\Sigma}}_h)}\\
& = & \frac{\hat{p}_g |\mathbf{A}| \phi(\mathbf{x}_i^*;\hat{\boldsymbol{\mu}}_g^*,\hat{\boldsymbol{\Sigma}}_g^*)}{\sum_h \hat{p}_h |\mathbf{A}| \phi(\mathbf{x}_i^*;\hat{\boldsymbol{\mu}}_h^*,\hat{\boldsymbol{\Sigma}}_h^*)}=
\hat{p}^*(g|\mathbf{x}^*_i).
\end{eqnarray*}
The above equality proves that the classification obtained via the GMM is invariant under the group of linear affine transformations of the data $\left\lbrace \mathbf{x}_i\right\rbrace_n$.
This property is crucial in practical applications, as it implies that the clustering does not depend on the choice of a particular method of data standardization, which could otherwise affect the inference.
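The invariance of the posterior probabilities can be checked numerically. The sketch below (with arbitrary illustrative parameters, not values from the paper) applies the same affine map to the data and to the parameters and verifies that the posteriors coincide; it assumes SciPy for the Gaussian density.

```python
import numpy as np
from scipy.stats import multivariate_normal

def posteriors(X, p, mu, Sigma):
    """Matrix of posterior probabilities p(g|x_i) for a Gaussian mixture."""
    dens = np.column_stack([p[g] * multivariate_normal.pdf(X, mu[g], Sigma[g])
                            for g in range(len(p))])
    return dens / dens.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
X = rng.standard_normal((10, 2))               # made-up data

p = [0.3, 0.7]
mu = [np.zeros(2), np.ones(2)]
Sigma = [np.eye(2), np.array([[2.0, 0.5], [0.5, 1.0]])]

A = np.array([[2.0, 1.0], [0.0, 3.0]])         # non-singular transformation
b = np.array([1.0, -2.0])

Xs = X @ A.T + b                               # x* = A x + b
mus = [A @ m + b for m in mu]                  # mu* = A mu + b
Sigmas = [A @ S @ A.T for S in Sigma]          # Sigma* = A Sigma A'

# The posteriors of the transformed model on the transformed data coincide
assert np.allclose(posteriors(X, p, mu, Sigma), posteriors(Xs, p, mus, Sigmas))
```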
\section{Constraints on covariance eigenvalues}\label{constr1}
Hathaway (1985) proposed to impose the following restrictions on the covariance matrices
\begin{equation}
\label{ch}
\lambda_j (\boldsymbol{\Sigma}_g\boldsymbol{\Sigma}_h^{-1}) \geq c,
\hspace{1cm}j=1,\dots,J;\hspace{0.2cm} g,h=1 \dots G
\end{equation}
where $\lambda_j(\mathbf{A})$ is the $j$-th eigenvalue of $\mathbf{A}$ and $0<c\leq 1$.
This prevents the likelihood from diverging and reduces the number of spurious maximizers. However, the method is difficult to implement and a correct choice of $c$ is not simple in practice. A value of $c$ close to $1$ could exclude the correct solution, whereas a value too close to $0$ is likely to increase the chance of converging to a spurious maximizer.
Ingrassia (2004) simplified Hathaway's constraints as
\begin{equation}
\label{nc}
\sqrt{c} \leq \lambda_j (\boldsymbol{\Sigma}_g)\leq \frac{1}{\sqrt{c}}, \hspace{1cm}j=1,\dots,J;\hspace{0.2cm} g=1 \dots G.
\end{equation}
It is easy to show that (\ref{nc}) implies Hathaway's constraints (\ref{ch}), while the reverse is not necessarily true (Ingrassia, 2004). This ensures a bounded likelihood and a reduction in the number of spurious maximizers. The constraints are easy to implement, as shown in Ingrassia and Rocci (2007); however, choosing an optimal $c$ is still an issue.
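One common way to enforce bounds of the form (\ref{nc}) is to clip the eigenvalues of each estimated covariance matrix into $[\sqrt{c}, 1/\sqrt{c}]$ and rebuild the matrix; the sketch below illustrates this projection for a single matrix (an illustrative implementation, not the authors' code).

```python
import numpy as np

def constrain_covariance(Sigma, c):
    """Project Sigma so that all its eigenvalues lie in [sqrt(c), 1/sqrt(c)]."""
    lam, Q = np.linalg.eigh(Sigma)                    # Sigma = Q diag(lam) Q'
    lam = np.clip(lam, np.sqrt(c), 1.0 / np.sqrt(c))  # clip the spectrum
    return Q @ np.diag(lam) @ Q.T                     # rebuild a valid covariance

c = 0.04                                              # eigenvalue bounds [0.2, 5.0]
Sigma = np.array([[10.0, 0.0], [0.0, 0.01]])          # badly scaled covariance
S = constrain_covariance(Sigma, c)

lam = np.linalg.eigvalsh(S)
assert lam.min() >= np.sqrt(c) - 1e-12
assert lam.max() <= 1.0 / np.sqrt(c) + 1e-12
```

As the counterexample in the next section shows, this projection is tied to the scale of the data: rescaling the data moves the eigenvalues relative to the fixed interval, which is precisely why (\ref{nc}) is not affine equivariant.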
It is important to check if the above constrained approaches offer equivariant estimators under linear affine transformations.
The property can be shown to hold for Hathaway's approach as follows.
Let $\left\lbrace \mathbf{x}_i \right\rbrace_n= \left\{ \mathbf{x}_1, \mathbf{x}_2, \dots \mathbf{x}_n \right\}$ be a sample of i.i.d. observations. The estimates are computed as the solution of the optimization problem
\begin{equation}\label{eq:maxcon1}
\begin{cases}
\max\limits_{\left\lbrace p_g,\boldsymbol{\mu}_g, \boldsymbol{\Sigma}_g \right\rbrace_G}& L\left(\left\lbrace p_g,\boldsymbol{\mu}_g, \boldsymbol{\Sigma}_g \right\rbrace_G; \left\lbrace \mathbf{x}_i \right\rbrace_n \right) \\
\qquad \text{s.t.} & \lambda_j (\boldsymbol{\Sigma}_g\boldsymbol{\Sigma}_h^{-1}) \geq c,\hspace{1cm}j=1,\dots,J;\hspace{0.2cm} g,h=1 \dots G.
\end{cases}
\end{equation}
Given the transformation $\mathbf{x}^*=\mathbf{Ax}+\mathbf{b},$ the maximand in Equation (\ref{eq:maxcon1}) can be rewritten (see Section \ref{invar}) as
\[
L\left(\left\lbrace p_g,\boldsymbol{\mu}_g, \boldsymbol{\Sigma}_g \right\rbrace_G; \left\lbrace \mathbf{x}_i \right\rbrace_n \right)
=|\mathbf{A}|^n L\left(\left\lbrace p_g,\boldsymbol{\mu}_g^*, \boldsymbol{\Sigma}_g^* \right\rbrace_G; \left\lbrace \mathbf{x}_i^* \right\rbrace_n \right),
\]
where $\boldsymbol{\mu}^*=\mathbf{A}\boldsymbol{\mu}+\mathbf{b}$ and $\boldsymbol{\Sigma}^*=\mathbf{A}\boldsymbol{\Sigma}\mathbf{A}'$.
Noting that
\begin{eqnarray*}
\lambda_j (\boldsymbol{\Sigma}_g \boldsymbol{\Sigma}_h^{-1})
&=&
\lambda_j(\mathbf{A}^{-1}\mathbf{A}\boldsymbol{\Sigma}_g \mathbf{A}'(\mathbf{A}')^{-1} \boldsymbol{\Sigma}_h^{-1})\\
&=&
\lambda_j(\mathbf{A}\boldsymbol{\Sigma}_g \mathbf{A}'(\mathbf{A}')^{-1} \boldsymbol{\Sigma}_h^{-1}\mathbf{A}^{-1})\\
&=&
\lambda_j (\boldsymbol{\Sigma}_g^* (\boldsymbol{\Sigma}_h^*)^{-1}),
\end{eqnarray*}
we can equivalently write the optimization problem in Equation (\ref{eq:maxcon1}) as
\begin{equation}\label{eq:maxcon12}
\begin{cases}
\max\limits_{\left\lbrace p_g,\boldsymbol{\mu}_g^*, \boldsymbol{\Sigma}_g^* \right\rbrace_G}& L\left(\left\lbrace p_g,\boldsymbol{\mu}_g^*, \boldsymbol{\Sigma}_g^* \right\rbrace_G; \left\lbrace \mathbf{x}_i^* \right\rbrace_n \right) \\
\qquad \text{s.t.} & \lambda_j (\boldsymbol{\Sigma}_g^* (\boldsymbol{\Sigma}_h^*)^{-1}) \geq c,\hspace{1cm}j=1,\dots,J;\hspace{0.2cm} g,h=1 \dots G.
\end{cases}
\end{equation}
It follows that if $\left\{\hat{p}_g,\hat{\boldsymbol{\mu}}_g, \hat{\boldsymbol{\Sigma}}_g \right\}_G$ is a maximizer for (\ref{eq:maxcon1}), then $\left\{\hat{p}_g,\mathbf{A}\hat{\boldsymbol{\mu}}_g+\mathbf{b}, \mathbf{A}\hat{\boldsymbol{\Sigma}}_g \mathbf{A}' \right\}_G = \left\{\hat{p}_g,\hat{\boldsymbol{\mu}}^*_g, \hat{\boldsymbol{\Sigma}}^*_g \right\}_G$ is a maximizer for (\ref{eq:maxcon12}) and vice-versa, and the two maximization problems are equivalent. As in the unconstrained case, every pair of local maximizers produces the same estimates of the posterior probabilities.
This property does not hold for the constraints given in (\ref{nc}). That is, if $\left\lbrace \hat{\boldsymbol{\Sigma}}_g \right\rbrace_G$ is a constrained local maximizer for $\mathcal{F}$ subject to (\ref{nc}), $\left\lbrace\mathbf{A}\hat{\boldsymbol{\Sigma}}_g \mathbf{A}'\right\rbrace_{G}$ does not necessarily satisfy (\ref{nc}). As an example, let us suppose that $s_{max}(\mathbf{A})< \sqrt{c},$ where $s_{max}(\mathbf{A})$ is the largest singular value of $\mathbf{A}$. In this case, for a given $g,$
\begin{eqnarray*}
\lambda_j(\mathbf{A}\hat{\boldsymbol{\Sigma}}_g \mathbf{A}') &\leq & \lambda_{max}(\mathbf{A}\hat{\boldsymbol{\Sigma}}_g \mathbf{A}')
\leq s_{max}(\mathbf{A})^2 \lambda_{max}(\hat{\boldsymbol{\Sigma}}_g)\\
& \leq &s_{max}(\mathbf{A})^2 \frac{1}{\sqrt{c}} < c \frac{1}{\sqrt{c}}=\sqrt{c}.
\end{eqnarray*}
We conclude that $\mathbf{A}\hat{\boldsymbol{\Sigma}}_g \mathbf{A}'$ does not satisfy the constraints in (\ref{nc}) because $\lambda_j(\mathbf{A}\hat{\boldsymbol{\Sigma}}_g \mathbf{A}') < \sqrt{c}$, and thus it cannot be a constrained local maximizer for $\mathcal{F}^*$.
The constraints in (\ref{nc}) are such that there is no one-to-one correspondence between the sets of local maximizers of $\mathcal{F}$ and $\mathcal{F}^*$.
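The counterexample above is easy to check numerically. The following sketch (ours, with illustrative values, not part of the paper's implementation) takes a covariance matrix satisfying the constraints and a scaling matrix $\mathbf{A}$ with $s_{max}(\mathbf{A})<\sqrt{c}$, and verifies that every eigenvalue of $\mathbf{A}\hat{\boldsymbol{\Sigma}}_g\mathbf{A}'$ drops below the lower bound $\sqrt{c}$:

```python
import numpy as np

# Illustrative values (ours): c = 0.25, so the constraints require
# eigenvalues in [sqrt(c), 1/sqrt(c)] = [0.5, 2].
c = 0.25
Sigma_hat = np.diag([0.5, 2.0])   # satisfies the constraints
A = 0.4 * np.eye(2)               # s_max(A) = 0.4 < sqrt(c) = 0.5

eigvals = np.linalg.eigvalsh(A @ Sigma_hat @ A.T)

# Every eigenvalue of A Sigma A' falls below sqrt(c) = 0.5, so the
# transformed matrix violates the lower bound of the constraints.
assert eigvals.max() < np.sqrt(c)
```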
Thus, the method suffers the disadvantage that the clustering depends on the choice of the matrix $\mathbf{A}$. Data standardization is not a satisfactory fix, for two main reasons: first, standardization itself requires a choice of the matrix $\mathbf{A}$ and, second, there is no single best approach to data standardization (Milligan and Cooper, 1988; Doherty \textit{et al}, 2007).
It is now clear that affine equivariance is not just a desirable property: it is a basic requirement of any clustering method, which should not be sensitive to changes in the units of measurement of the data. In the next section, our goal is to derive a new set of constraints that are affine equivariant. With an affine equivariant clustering method, researchers and practitioners no longer need to worry about how to standardize their data.
\section{Equivariant constraints} \label{constr2}
Our proposal is to generalize the constraints (\ref{nc}) by
\begin{equation}
\label{eq:gnc}
\sqrt{c} \leq \lambda_j (\boldsymbol{\Sigma}_g\boldsymbol{\Psi}^{-1})\leq \frac{1}{\sqrt{c}},
\hspace{1cm}j=1,\dots,J;\hspace{0.2cm} g=1 \dots G
\end{equation}
where $\boldsymbol{\Psi}$ is a symmetric positive definite matrix representing our \textit{prior} information about the covariance structure. Clearly, \eqref{eq:gnc} is equal to (\ref{nc}) when $\boldsymbol{\Psi}=\mathbf{I}$.
It can be shown that the above constraints imply Hathaway's constraints.
It is known that (Anderson and Gupta, 1963)
\begin{equation}
\lambda_{min}(\mathbf{AB}^{-1})\geq \lambda_{min}(\mathbf{AC}^{-1})
\lambda_{min}(\mathbf{CB}^{-1}),
\end{equation}
where $\mathbf{A}$ is a positive semi-definite matrix and $\mathbf{B}$ and $\mathbf{C}$ are positive definite matrices.
Now, if \eqref{eq:gnc} holds, then
\begin{equation}
\label{re}
\lambda_{min}(\boldsymbol{\Sigma}_g \boldsymbol{\Sigma}_h^{-1}) \geq
\lambda_{min}(\boldsymbol{\Sigma}_g\boldsymbol{\Psi}^{-1})
\lambda_{min}(\boldsymbol{\Psi} \boldsymbol{\Sigma}_h^{-1})=
\frac{\lambda_{min}(\boldsymbol{\Sigma}_g\boldsymbol{\Psi}^{-1})}
{\lambda_{max}(\boldsymbol{\Sigma}_h \boldsymbol{\Psi}^{-1})}\geq \frac{\sqrt{c}}{\frac{1}{\sqrt{c}}}= c.
\end{equation}
Thus, \eqref{eq:gnc} implies (\ref{ch}).
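The Anderson and Gupta bound used in the derivation above can be checked numerically. A minimal sketch of ours, testing the inequality $\lambda_{min}(\mathbf{AB}^{-1})\geq \lambda_{min}(\mathbf{AC}^{-1})\lambda_{min}(\mathbf{CB}^{-1})$ on random positive definite triples:

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_spd(J):
    # random symmetric positive definite matrix
    M = rng.standard_normal((J, J))
    return M @ M.T + np.eye(J)

def lmin(X, Y):
    # smallest eigenvalue of X Y^{-1} (real for positive definite X, Y)
    return np.linalg.eigvals(X @ np.linalg.inv(Y)).real.min()

# Check lambda_min(A B^{-1}) >= lambda_min(A C^{-1}) lambda_min(C B^{-1})
# on 200 random positive definite triples (tolerance for rounding).
for _ in range(200):
    A, B, C = rand_spd(4), rand_spd(4), rand_spd(4)
    assert lmin(A, B) >= lmin(A, C) * lmin(C, B) - 1e-10
```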
Furthermore, it can be shown that \eqref{eq:gnc} is invariant under linear and affine transformations provided that $\boldsymbol{\Psi}$ is transformed accordingly, i.e. it is replaced by $\boldsymbol{\Psi}^*=\mathbf{A}\boldsymbol{\Psi} \mathbf{A}'$.
If $\left\lbrace \hat{\boldsymbol{\Sigma}}_g \right\rbrace_G$ is a constrained local maximizer for $\mathcal{F}$ subject to \eqref{eq:gnc}, then $\left\lbrace \hat{\boldsymbol{\Sigma}}^*_g = \mathbf{A}\hat{\boldsymbol{\Sigma}}_g \mathbf{A}' \right\rbrace_G$ is a local maximizer for $\mathcal{F}^*$ subject to \eqref{eq:gnc} with $\boldsymbol{\Psi}$ replaced by $\boldsymbol{\Psi}^*$, for $g=1,\dots,G$. Indeed, we have that
\begin{eqnarray*}
\label{eiglt}
\lambda_j (\hat{\boldsymbol{\Sigma}}_g \boldsymbol{\Psi}^{-1}) &=&
\lambda_j (\hat{\boldsymbol{\Sigma}}_g\mathbf{A}'(\mathbf{A}')^{-1} \boldsymbol{\Psi}^{-1}\mathbf{A}^{-1}\mathbf{A})\\
&=&\lambda_j(\mathbf{A}\hat{\boldsymbol{\Sigma}}_g \mathbf{A}'(\mathbf{A}')^{-1} \boldsymbol{\Psi}^{-1}\mathbf{A}^{-1}) \\
&=& \lambda_j (\hat{\boldsymbol{\Sigma}}^*_g \boldsymbol{(\Psi^*)}^{-1}).
\end{eqnarray*}
In words, if an affine transformation is applied to the data, $\boldsymbol{\Psi}$ must be transformed accordingly. This scheme of transforming $\boldsymbol{\Psi}$ ensures the equivariance of the method; by contrast, holding $\boldsymbol{\Psi}$ fixed breaks the equivariance property.
The constraints \eqref{eq:gnc} have the effect of shrinking the covariance matrices to $\boldsymbol{\Psi},$ and the level of shrinkage is given by the value of $c.$ Note that for
$c=1$, $\hat{\boldsymbol{\Sigma}}_g = \boldsymbol{\Psi},$ whereas for $c\rightarrow 0,$ $\hat{\boldsymbol{\Sigma}}_g$ equals the unconstrained ML estimate. Furthermore, we can show that Stein's discrepancy, known as Stein's loss (James and Stein, 1961), between the matrices $\hat{\boldsymbol{\Sigma}}_g$ and $\boldsymbol{\Psi}$ goes to zero as $c$ approaches one. Stein's discrepancy between $\hat{\boldsymbol{\Sigma}}_g$ and $\boldsymbol{\Psi}$ is
\begin{equation}\label{eq:Lstein1}
\text{L}(\hat{\boldsymbol{\Sigma}}_g,\boldsymbol{\Psi}) = \text{tr}(\hat{\boldsymbol{\Sigma}}_g\boldsymbol{\Psi}^{-1}) - \log |\hat{\boldsymbol{\Sigma}}_g\boldsymbol{\Psi}^{-1}| - J \geq 0.
\end{equation}
Let us rewrite Equation \eqref{eq:Lstein1} as follows.
\begin{equation}\label{eq:Lstein2}
\text{L}(\hat{\boldsymbol{\Sigma}}_g,\boldsymbol{\Psi}) = \sum_{j=1}^{J} \lambda_{j} (\hat{\boldsymbol{\Sigma}}_g\boldsymbol{\Psi}^{-1}) - \sum_{j=1}^{J} \log(\lambda_{j} (\hat{\boldsymbol{\Sigma}}_g\boldsymbol{\Psi}^{-1})) - J
\end{equation}
Using the constraints in \eqref{eq:gnc}, we can derive the following majorizing function
\begin{equation}\label{eq:minorz}
\text{L}(\hat{\boldsymbol{\Sigma}}_g,\boldsymbol{\Psi}) \leq \frac{J}{\sqrt{c}} - J\log(\sqrt{c}) - J,
\end{equation}
which is decreasing in $c.$ This can be shown by noting that the first derivative of the right-hand side of \eqref{eq:minorz} with respect to $c$ is equal to $-\frac{J}{2c\sqrt{c}} - \frac{J}{2c},$ and is negative when $0<c\leq 1$. This implies that the function is decreasing when $c$ increases within the interval $(0,1]$.
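The behavior of the bound in \eqref{eq:minorz} can be illustrated with a few lines of code (a sketch of ours, not from the paper): the upper bound $\frac{J}{\sqrt{c}} - J\log(\sqrt{c}) - J$ on Stein's loss decreases monotonically in $c$ and vanishes at $c=1$.

```python
import numpy as np

# Upper bound on Stein's loss implied by the constraints:
# J/sqrt(c) - J*log(sqrt(c)) - J, evaluated on a grid of c values.
def stein_bound(c, J):
    return J / np.sqrt(c) - J * np.log(np.sqrt(c)) - J

J = 5
cs = np.linspace(0.05, 1.0, 20)
bounds = stein_bound(cs, J)

assert np.all(np.diff(bounds) < 0)        # strictly decreasing in c
assert abs(stein_bound(1.0, J)) < 1e-12   # bound is zero at c = 1
```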
Intuitively, the constraints \eqref{eq:gnc} provide a way to obtain a model in between the overly restrictive homoscedastic model and the potentially ill-conditioned heteroscedastic model.
\section{Data-driven choice of $\boldsymbol{\Psi}$ and $c$} \label{crossvalid}
Issues arise when \emph{a priori} information about the structure of the class conditional covariance matrices is not available. In that case, $\boldsymbol{\Psi}$ and $c$ have to be selected from the data. From the previous discussion, for a given $c,$ every $\hat{\boldsymbol{\Sigma}}_g$ cannot be too far from $\boldsymbol{\Psi}$ in terms of Stein's discrepancy. Thus $\boldsymbol{\Psi}$ can be seen as the barycenter of the cloud of the $\hat{\boldsymbol{\Sigma}}_g$'s: the \emph{average} conditional covariance matrix. Therefore, the most natural choice is to estimate such \emph{average} as the within covariance matrix of the homoscedastic Gaussian model. How close the final clustering will be to the homoscedastic model will depend on the value of the tuning constant: for values of $c$ close to $0$, the resulting clustering will be close to that of the heteroscedastic mixture model, whereas $c\rightarrow 1$ implies a clustering close to that of the homoscedastic mixture model.
Other possible choices of $\boldsymbol{\Psi},$ which guarantee the equivariance of the constraints, are available: the sample covariance matrix, which is computationally faster and is frequently used as hyperparameter in Bayesian Gaussian mixtures (for instance, see Fraley and Raftery, 2007), or the within covariance matrix of a homoscedastic mixture of Student-\textit{t}. To motivate this, let us recall that a random vector conditionally distributed as a multivariate Gaussian, given Wishart inverse covariance matrix, has a multivariate Student-$t$ distribution (Dawid, 1981; Dickey, 1967). Using similar arguments as in Peel and McLachlan (2000), if $\mathbf{x}| \boldsymbol{\Sigma}_1, \dots, \boldsymbol{\Sigma}_G$ is a GMM, and $\boldsymbol{\Sigma}_1^{-1}, \dots, \boldsymbol{\Sigma}_G^{-1}$ are i.i.d. Wishart random variables, the marginal distribution of $\mathbf{x}$ is a homoscedastic mixture of Student-$t$'s.
The choice of $c$ is crucial. Too large a value of $c$ could exclude the right solution, whereas too small a value increases the chance of converging to spurious local maxima: such solutions overfit random localized patterns composed of a few nearly co-planar data points (Ritter, 2014; Seo and Kim, 2012). Hence, selecting $c$ jointly with the mixture parameters by maximizing the likelihood on the entire sample would trivially yield a value of $c$ approaching zero.
A practical alternative would be to split the data into a training set, where model parameters are estimated, and a test set, where the log-likelihood is evaluated for a given value of $c.$ The optimal tuning parameter $c$ would then be selected such that the test set log-likelihood is maximized.
The use of the test set log-likelihood as a model selection tool is advocated by Smyth (1996; 2000) in the context of estimating the number of mixture components. The motivation behind its use is that it can be shown to be an unbiased estimator (within a constant) of the Kullback-Leibler divergence between the \emph{truth} and the model under consideration (Smyth, 2000). This means that, even under a misspecified model, the procedure yields a $c$ such that the Kullback-Leibler divergence is minimized.
In spite of the usual unavailability of large independent test sets, a valid alternative is to use the \emph{cross-validated} log-likelihood in order to estimate the test set log-likelihood. This consists in repeatedly partitioning the data into training and test sets and, for a given $c,$ estimating the mixture parameters on the training sets. The model fit is then measured by summing the log-likelihoods of the test sets evaluated at the parameters computed on the training sets, yielding the so-called \emph{cross-validated} log-likelihood. The constant $c$ is chosen such that the \emph{cross-validated} log-likelihood is maximized. The latter can be viewed as a function of $c$ only (Smyth, 1996), and the independence of training and test sets solves the overfitting issue (Arlot and Celisse, 2010).
In detail, let us partition the full data set $\left\lbrace \mathbf{x}_i; i \in N \right\rbrace_n$ $K$ times into two parts, a training set $\mathbf{x}_{S} = \left\lbrace \mathbf{x}_i ;i \in S \right\rbrace_{n_S},$ and a test set $\mathbf{x}_{\bar{S}} = \left\lbrace \mathbf{x}_i ; i \in \bar{S} \right\rbrace_{n_{\bar{S}}}$ with $S \cup \bar{S} = N$ and $n_S+n_{\bar{S}}=n.$ For the $k$-th partition, let $\hat{\theta}(c,S_{k})$ be the constrained maximum likelihood estimator based on the training set $\mathbf{x}_{S_{k}}.$ Furthermore, let $l_{\bar{S}_{k}} [\hat{\theta}(c,S_k)]$ be the log-likelihood function evaluated at the test set $\mathbf{x}_{\bar{S}_k}.$ The \emph{cross-validated} log-likelihood is defined as the sum of the contributions of each test set to the log-likelihood
\begin{equation}
\text{CV}(c)=\sum_{k=1}^K l_{\bar{S}_k} [\hat{\theta}(c,S_k)].
\end{equation}
The best $c$ is chosen as the maximizer of $\text{CV}(c)$.
Further details on the choice of the number of random partitions $K$ and of the sizes of training and test sets are given in Section \ref{simulation}.
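The selection scheme above can be sketched in a few lines. In the sketch below (ours), the constrained mixture fit is replaced by a hypothetical stand-in estimator, a single Gaussian whose variance is shrunk toward 1 by a factor $c$, purely to keep the example self-contained; the paper's actual estimator is the constrained EM algorithm of the next section.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=200)   # toy data (ours)

def fit(train, c):
    # stand-in for the constrained ML estimator: shrinkage controlled by c
    mu = train.mean()
    var = (1 - c) * train.var() + c * 1.0
    return mu, var

def loglik(test, mu, var):
    # Gaussian log-likelihood of the test set at the training-set estimates
    return np.sum(-0.5 * np.log(2 * np.pi * var) - (test - mu) ** 2 / (2 * var))

def cv_loglik(x, c, K=25, test_frac=0.1):
    # cross-validated log-likelihood CV(c): sum of test-set contributions
    n = len(x)
    n_test = int(n * test_frac)
    total = 0.0
    for _ in range(K):
        idx = rng.permutation(n)
        test, train = x[idx[:n_test]], x[idx[n_test:]]
        mu, var = fit(train, c)
        total += loglik(test, mu, var)
    return total

grid = np.linspace(0.01, 1.0, 6)   # line search with six function evaluations
c_best = max(grid, key=lambda c: cv_loglik(x, c))
```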
\section{Algorithm} \label{alg}
The objective is to maximize \eqref{eq:like} under the constraints \eqref{eq:gnc}.
Thanks to the equivariance property of the constraints, we can apply any affine transformation to the data. This is useful since it suffices to transform the data so that $\boldsymbol{\Psi}=\mathbf{I}_J$, and the existing algorithm of Ingrassia and Rocci (2007) can then be applied to the transformed data.
The transformation is $\mathbf{x}^*=\mathbf{L}^{-\frac{1}{2}}\mathbf{Q}^{'}\mathbf{x},$ where $\boldsymbol{\Psi}=\mathbf{QLQ}'$ is the spectral decomposition of $\boldsymbol{\Psi}.$ This leads to $\boldsymbol{\Psi}^*=\mathbf{L}^{-\frac{1}{2}}\mathbf{Q}' \boldsymbol{\Psi}\mathbf{QL}^{-\frac{1}{2}}=\mathbf{I}_J$.
For the sake of completeness, we briefly recall the updates of the algorithm proposed by Ingrassia and Rocci (2007).\\
\\
\textbf{Update} $u_{ig}$, $p_g,$ $\boldsymbol{\mu}_g$ \\
As in the case of a normal mixture, the updates are
\begin{equation}
u_{ig}=\frac{p_g \phi(\mathbf{x}_i;\boldsymbol{\mu}_g,\boldsymbol{\Sigma}_g)}{\sum_{h=1}^G p_h \phi(\mathbf{x}_i;\boldsymbol{\mu}_h,\boldsymbol{\Sigma}_h)};
\end{equation}
\begin{equation}
p_g=\frac{1}{n}\sum_{i=1}^n u_{ig};
\end{equation}
\begin{equation}
\boldsymbol{\mu}_g=\frac{\sum_i u_{ig}\mathbf{x}_i}{\sum_i u_{ig}}.
\end{equation}
\textbf{Update} $\boldsymbol{\Sigma}_g$\\
Compute
\begin{equation}
\mathbf{S}_g=\frac{1}{\sum_i u_{ig}}\sum_i u_{ig}(\mathbf{x}_i-\boldsymbol{\mu}_g)(\mathbf{x}_i-\boldsymbol{\mu}_g)',
\end{equation}
and set
\begin{equation}
\lambda_{qg}=\min\left( \frac{1}{\sqrt{c}},\max\left(\sqrt{c},l_{qg} \right) \right),
\end{equation}
where $\mathbf{L}_g=\textnormal{diag}\left(l_{1g},\dots, l_{Jg} \right)$ is the diagonal matrix of the eigenvalues of $\mathbf{S}_g$ in non-decreasing order, and $\mathbf{S}_g=\mathbf{Q}_g \mathbf{L}_g \mathbf{Q}_g'$ its spectral decomposition.
Letting $\boldsymbol{\Lambda}_g=\textnormal{diag}\left(\lambda_{1g},\dots, \lambda_{Jg} \right)$, the update of $\boldsymbol{\Sigma}_g$ is given by
\begin{equation}
\boldsymbol{\Sigma}_g=\mathbf{Q}_g\boldsymbol{\Lambda}_g \mathbf{Q}_g'.
\end{equation}
\section{Simulation study}\label{simulation}
\subsection{Design}
In this section we perform a simulation experiment in order to compare the performance of the proposed methods with some existing approaches in the literature. In particular, we consider the following six algorithms:
\begin{enumerate}
\item Unconstrained
\begin{enumerate}
\item homoscedastic normal (homN), within covariance matrix $\boldsymbol{\Sigma};$
\item heteroscedastic normal (hetN), $10^{-7} \leq \lambda_j(\boldsymbol{\Sigma}_g) \leq 10^{7}$ to prevent degeneracy and numerical instability;
\item homoscedastic Student \textit{t} (hom\textit{t}), scale matrix $\boldsymbol{\Xi},$ $\beta=4$ (McLachlan and Peel, 1998).
\end{enumerate}
\item Constrained
\begin{enumerate}
\item sample covariance (con$\mathbf{S}$), $\boldsymbol{\Psi}=\mathbf{S};$
\item normal (conN), $\boldsymbol{\Psi}=\boldsymbol{\Sigma};$
\item Student \textit{t} (con\textit{t}), $\boldsymbol{\Psi}=\frac{\beta \boldsymbol{\Xi}}{(\beta -2)}.$
\end{enumerate}
\end{enumerate}
For each sample, we randomly split the data $K = 25$ times into a training set $\mathbf{x}_{S}$ and a test set $\mathbf{x}_{\bar{S}}.$ Choosing how many times to partition the full data set is a trade-off between variability of the estimates and computational burden. As Smyth (2000) argues in the context of model selection for probabilistic clustering using cross-validation, the larger the value of $K,$ the smaller the variability in the log-likelihood estimates. In practice, the author argues, values of $K$ between 20 and 50 appear adequate for most applications.
The size of the test set must be chosen such that all components are represented in the training set. If one component is not represented in the test set, but the parameters are correctly estimated using the training set, the test set log-likelihood will still correctly display the fit of the model. By contrast, if one component is not represented in the training set, although the other components' parameters may be correctly estimated, the fit displayed by the test set log-likelihood will be poor. Van der Laan, Dudoit, and Keles (2004) found, in their simulation study, that the likelihood-based cross-validation procedure performs equally well for any relative size of the test set between 0.1 and 0.5. As argued in Kearns (1996), the importance of choosing an optimal size for the training set increases as the target function becomes more complex relative to the sample size. Bearing this in mind, we choose a training set of size $n_S=n - \frac{n}{10}$ and a test set $\mathbf{x}_{\bar{S}}$ of size $n_{\bar{S}}=\frac{n}{10}.$
Then the cross-validation scheme, as described in Section \ref{crossvalid}, is applied and the optimal $c$ is chosen by using a line search with six function evaluations.
The sample data have been generated from $G$-class mixtures of heteroscedastic $J$-variate normal distributions with:
\begin{itemize}
\item $n=50,$ $100,$ $200;$
\item $J=5,$ $8;$
\item prior membership probabilities $\mathbf{p}=(0.2, 0.3, 0.5)',$ $(0.1, 0.4, 0.5)',$ $(0.1, 0.1, 0.2, 0.3, 0.3)'.$
\end{itemize}
This yields a total of $2 \times 3 \times 3 = 18$ simulation conditions.
For each simulation condition, we generate 250 data sets, each with different means and covariance matrices, where:
\begin{itemize}
\item component means $\mu_{jg} \sim N(0,1.5^2)$, independent;
\item eigenvalues of the covariance matrices $\lambda_{jg} \sim U(0,\frac{g}{\text{sep}})$, independent with $\text{sep}=2$;
\item eigenvectors of the covariance matrices generated by orthonormalizing matrices generated independently from a standard normal distribution.
\end{itemize}
It is well known that the EM algorithm for GMMs is sensitive to its initial position, especially in the multivariate context (among others, McLachlan and Peel, 2000). We adopt the standard \emph{multiple random starts} strategy. That is, for each data set, 10 random initial partitions are generated: these are used as starting values for the M-step of all six algorithms under analysis. For conN, conS, and cont, a constrained algorithm with arbitrary lower and upper bounds of respectively $0.5$ and $2$ is run in order to exclude degenerate (and some spurious) solutions, and the estimated clustering is used to initialize the cross-validation scheme. The alternative of directly generating 10 different starts for each training set - within the cross-validation scheme - would have added little in terms of accuracy of the final estimates.
Concerning the root selection criterion, for the unconstrained algorithm, we select the roots yielding the highest likelihood, whereas for the constrained algorithms we select the roots based on the cross-validated likelihood.
The performance of the different techniques has been analyzed in terms of:
\begin{itemize}
\item MAD (Mean absolute deviation): $\sum_{g=1}^G \sum_{i=1}^n |p(g|x_i)-\hat{p}(g|x_i)|;$
\item ARand (Adjusted Rand index; Hubert and Arabie, 1985);
\item computational time needed to analyze a single data set;
\item the value of the calibrated constant $c$ (for the constrained approach only).
\end{itemize}
The MAD is computed evaluating the above expression for all possible permutations of the estimated classes. The final MAD reported refers to the permutation which yields the lowest difference, and measures inaccuracy of estimated \emph{fuzzy} classification - whereas ARand measures accuracy of estimated \emph{crisp} classification.
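The permutation search underlying the MAD can be sketched as follows (ours, with a two-class toy example; the paper's computation is the same idea over $G$ classes):

```python
import numpy as np
from itertools import permutations

def mad(P_true, P_hat):
    # compare posterior matrices over all column (label) permutations
    # and keep the smallest total absolute deviation
    G = P_true.shape[1]
    return min(np.abs(P_true - P_hat[:, list(p)]).sum()
               for p in permutations(range(G)))

P_true = np.array([[0.9, 0.1], [0.2, 0.8]])
P_hat  = np.array([[0.1, 0.9], [0.8, 0.2]])   # same clustering, labels swapped
assert mad(P_true, P_hat) == 0.0              # label switching does not count
```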
In addition, we tested the robustness of the results with respect to changes in 1) the cross-validation settings, and 2) the level of class separation. In order to test robustness with respect to cross-validation settings, we considered a subset of the above simulation conditions as follows. 250 samples, of 50, 100, and 200 observations, were generated from a 3-group 8-variate heteroscedastic Gaussian mixture model, with prior class membership probabilities of 0.1, 0.4, and 0.5.
The same setting was used in order to count how many different local maxima each algorithm converged to over the 10 random initializations considered. This serves the purpose of providing some information on the likelihood surface.
Class separation has been manipulated by controlling the dispersion of the group conditional covariance matrices' eigenvalues (through the sep value above): higher dispersion levels correspond to greater overlap between the classes. Considering the above full simulation as corresponding to fixed moderate separation ($\text{sep}=2$), this final setup compares results for low, moderate, and high separation levels - respectively $\text{sep}=1,$ $\text{sep}=2,$ and $\text{sep}=3$. The subset of simulation conditions considered is as follows. 250 samples, of 50 observations each, were generated from a 3-group and 5-group heteroscedastic Gaussian mixture model, with prior class membership probabilities of respectively 0.2, 0.3, and 0.5; 0.1, 0.4, and 0.5; 0.1, 0.1, 0.2, 0.3, and 0.3. Table \ref{tabsimcon} summarizes the conditions explored in all testing setups.
\FloatBarrier
\begin{table}[h!]
\centering
\begin{tabular}{lcccccccc}
\hline \hline
& Full simulation & Cross-val settings & Class-sep & N. local max \\
& & & & \\
$J=5$, $p=(0.2,0.3,0.5)'$, $n=50$ & \checkmark & $\times$ & \checkmark & $\times$ \\
$J=5$, $p=(0.2,0.3,0.5)'$, $n=100$ & \checkmark & $\times$ & $\times$ & $\times$ \\
$J=5$, $p=(0.2,0.3,0.5)'$, $n=200$ & \checkmark & $\times$ & $\times$ & $\times$ \\
$J=5$, $p=(0.1,0.4,0.5)'$, $n=50$ & \checkmark & $\times$ & \checkmark & $\times$ \\
$J=5$, $p=(0.1,0.4,0.5)'$, $n=100$ & \checkmark & $\times$ & $\times$ & $\times$ \\
$J=5$, $p=(0.1,0.4,0.5)'$, $n=200$ & \checkmark & $\times$ & $\times$ & $\times$ \\
$J=5$, $p=(0.1,0.1,0.2,0.3,0.3)'$, $n=50$ & \checkmark & $\times$ & \checkmark & $\times$ \\
$J=5$, $p=(0.1,0.1,0.2,0.3,0.3)'$, $n=100$ & \checkmark & $\times$ & $\times$ & $\times$ \\
$J=5$, $p=(0.1,0.1,0.2,0.3,0.3)'$, $n=200$ & \checkmark & $\times$ & $\times$ & $\times$ \\
$J=8$, $p=(0.2,0.3,0.5)'$, $n=50$ & \checkmark & $\times$ & \checkmark & $\times$ \\
$J=8$, $p=(0.2,0.3,0.5)'$, $n=100$ & \checkmark & $\times$ & $\times$ & $\times$ \\
$J=8$, $p=(0.2,0.3,0.5)'$, $n=200$ & \checkmark & $\times$ & $\times$ & $\times$ \\
$J=8$, $p=(0.1,0.4,0.5)'$, $n=50$ & \checkmark & \checkmark & \checkmark & \checkmark \\
$J=8$, $p=(0.1,0.4,0.5)'$, $n=100$ & \checkmark & \checkmark & $\times$ & \checkmark \\
$J=8$, $p=(0.1,0.4,0.5)'$, $n=200$ & \checkmark & \checkmark & $\times$ & \checkmark \\
$J=8$, $p=(0.1,0.1,0.2,0.3,0.3)'$, $n=50$ & \checkmark & $\times$ & \checkmark & $\times$ \\
$J=8$, $p=(0.1,0.1,0.2,0.3,0.3)'$, $n=100$ & \checkmark & $\times$ & $\times$ & $\times$ \\
$J=8$, $p=(0.1,0.1,0.2,0.3,0.3)'$, $n=200$ & \checkmark & $\times$ & $\times$ & $\times$ \\
\hline \hline
\end{tabular}
\caption{Cross-table of simulation condition and simulation type.}
\label{tabsimcon}
\end{table}
\FloatBarrier
\subsection{Simulation results}
Tables \ref{tabfullj5} and \ref{tabfullj8} present the results obtained with, respectively, $J=5$ and $J=8.$
Among the unconstrained approaches, as expected, for small samples ($n = 50$), the heteroscedastic
normal (hetN) performs poorly, while the homoscedastic Student-t (homt) works well. However, the constrained approach (cont) is able to cope with such a small sample size and to improve on the performance of the unconstrained approach. The heteroscedastic normal performs poorly, especially with higher model complexity in terms of number of components and variables relative to the sample size. A similar pattern is, indeed, observed for $n=100$ when $J=5$ with $G=5$ and when $J=8$.
As the sample size gets larger ($n = 200$), the homoscedastic models are the worst, while the unconstrained heteroscedastic improves in classification quality. Interestingly, even for a large sample size, conS, conN and cont yield estimates of equal or higher quality compared to the unconstrained approach. However, on average, cont seems to be the best, especially for small sample sizes. When $G=3$, the gains observed for the constrained approaches, in terms of cluster recovery, are more pronounced in the presence of a component with a small weight ($p=(0.1,0.4,0.5)'$). In general, we observe that, whereas increasing the sample size improves the performance of all methods, an increasing number of components lowers the quality of the clustering results.
The results point out that hetN and conS suffer from higher values of $J.$ This is not surprising, as a growing number of variables causes, all else equal, parameter proliferation and a consequent loss in the accuracy of the estimation of the class conditional covariance matrices. Although parameter proliferation is limited with conS, this method is constructed based on the choice of the sample covariance matrix as target, which is also very sensitive to an increasing $J$ (holding $n$ fixed). In addition, the sample covariance matrix is the sum of the within and the between variance: as such, it does not seem to be the best choice for $\boldsymbol{\Psi}.$ On the other hand, homN, homt, conN and cont process the increase in $J$ from 5 to 8 as additional information that, all else equal, improves the quality of the estimation (see Table \ref{tabfullj8}, and Figures \ref{fig235}, \ref{fig145}, and \ref{fig11233}). This is typically the case in finite mixtures with discrete variables (among others, Vermunt, 2010; Di Mari, Oberski, and Vermunt, 2016).
Overall, the results show that the calibrated constant $c$ decreases for all constrained approaches as the sample size increases, consistently with the results of Xu et al. (2010) in the univariate case.
In terms of computational time, even though the sample covariance matrix is faster to compute, the simulation results show that conS converges more slowly than conN and cont.
\FloatBarrier
\begin{table}[h!]
\centering
\begin{tabular}{cclcccccc}
\hline \hline
& & & homN & hetN & homt & conS & conN & cont \\ \hline
& & & & & & & & \\
p=(0.2,0.3,0.5)' & n=50 & MAD & 0.11 & 0.25 & 0.11 & 0.16 & 0.11 & 0.09 \\
& & ARand & 0.78 & 0.52 & 0.79 & 0.68 & 0.79 & 0.82 \\
& & time & 0.10 & 0.08 & 0.09 & 1.17 & 0.67 & 0.73 \\
& & c & & & & 0.32 & 0.93 & 0.79 \\
& & & & & & & & \\
& n=100 & MAD & 0.08 & 0.07 & 0.06 & 0.06 & 0.06 & 0.05 \\
& & ARand & 0.84 & 0.86 & 0.87 & 0.87 & 0.88 & 0.91 \\
& & time & 0.19 & 0.14 & 0.14 & 1.22 & 0.81 & 0.85 \\
& & c & & & & 0.12 & 0.77 & 0.53 \\
& & & & & & & & \\
& n=200 & MAD & 0.06 & 0.02 & 0.05 & 0.02 & 0.02 & 0.02 \\
& & ARand & 0.88 & 0.96 & 0.90 & 0.96 & 0.96 & 0.96 \\
& & time & 0.42 & 0.21 & 0.21 & 1.59 & 1.11 & 1.16 \\
& & c & & & & 0.07 & 0.39 & 0.28 \\ \hline
& & & & & & & & \\
p=(0.1,0.4,0.5)' & n=50 & MAD & 0.12 & 0.23 & 0.17 & 0.19 & 0.12 & 0.12 \\
& & ARand & 0.77 & 0.54 & 0.72 & 0.65 & 0.78 & 0.79 \\
& & time & 0.11 & 0.08 & 0.10 & 1.2 & 0.70 & 0.77 \\
& & c & & & & 0.34 & 0.93 & 0.78 \\
& & & & & & & & \\
& n=100 & MAD & 0.09 & 0.09 & 0.15 & 0.09 & 0.07 & 0.06 \\
& & ARand & 0.82 & 0.83 & 0.76 & 0.83 & 0.86 & 0.89 \\
& & time & 0.19 & 0.15 & 0.17 & 1.28 & 0.84 & 0.91 \\
& & c & & & & 0.14 & 0.79 & 0.52 \\
& & & & & & & & \\
& n=200 & MAD & 0.07 & 0.03 & 0.11 & 0.03 & 0.03 & 0.03 \\
& & ARand & 0.87 & 0.95 & 0.82 & 0.93 & 0.94 & 0.94 \\
& & time & 0.47 & 0.25 & 0.35 & 1.78 & 1.28 & 1.38 \\
& & c & & & & 0.07 & 0.45 & 0.31 \\ \hline
& & & & & & & & \\
p=(0.1,0.1,0.2,0.3,0.3)' & n=50 & MAD & 0.27 & 0.46 & 0.28 & 0.31 & 0.27 & 0.26 \\
& & ARand & 0.58 & 0.29 & 0.57 & 0.52 & 0.58 & 0.58 \\
& & time & 0.12 & 0.09 & 0.15 & 1.49 & 0.99 & 1.05 \\
& & c & & & & 0.42 & 0.95 & 0.92 \\
& & & & & & & & \\
& n=100 & MAD & 0.23 & 0.35 & 0.21 & 0.26 & 0.21 & 0.19 \\
& & ARand & 0.65 & 0.48 & 0.68 & 0.60 & 0.67 & 0.70 \\
& & time & 0.26 & 0.24 & 0.28 & 2.03 & 1.49 & 1.64 \\
& & c & & & & 0.27 & 0.90 & 0.66 \\
& & & & & & & & \\
& n=200 & MAD & 0.21 & 0.17 & 0.18 & 0.16 & 0.13 & 0.14 \\
& & ARand & 0.68 & 0.73 & 0.72 & 0.74 & 0.79 & 0.78 \\
& & time & 0.71 & 0.62 & 0.65 & 3.87 & 2.75 & 2.8 \\
& & c & & & & 0.13 & 0.53 & 0.40 \\ \hline \hline
\end{tabular}
\caption{Average results over 250 simulated data sets of five-variate ($J=5$) GMM estimation, with three and five components. Initialization from ten random starts.}
\label{tabfullj5}
\end{table}
\FloatBarrier
\begin{table}[h!]
\centering
\begin{tabular}{cclccccccc}
\hline \hline
& & & homN & hetN & homt & conS & conN & cont \\ \hline
& & & & & & & & \\
p=(0.2,0.3,0.5)' & n=50 & MAD & 0.13 & 0.39 & 0.05 & 0.25 & 0.13 & 0.04 \\
& & ARand & 0.74 & 0.27 & 0.91 & 0.50 & 0.75 & 0.91 \\
& & time & 0.08 & 0.06 & 0.09 & 1.51 & 0.66 & 0.70 \\
& & c & & & & 0.58 & 0.94 & 0.84 \\
& & & & & & & & \\
& n=100 & MAD & 0.03 & 0.12 & 0.02 & 0.07 & 0.02 & 0.01 \\
& & ARand & 0.95 & 0.75 & 0.96 & 0.84 & 0.96 & 0.97 \\
& & time & 0.17 & 0.14 & 0.13 & 1.51 & 0.81 & 0.85 \\
& & c & & & & 0.20 & 0.94 & 0.60 \\
& & & & & & & & \\
& n=200 & MAD & 0.01 & 0.00 & 0.01 & 0.00 & 0.00 & 0.00 \\
& & ARand & 0.97 & 0.99 & 0.98 & 0.99 & 0.99 & 0.99 \\
& & time & 0.41 & 0.21 & 0.24 & 1.71 & 1.14 & 1.18 \\
& & c & & & & 0.07 & 0.50 & 0.34 \\ \hline
& & & & & & & & \\
p=(0.1,0.4,0.5)' & n=50 & MAD & 0.15 & 0.37 & 0.14 & 0.27 & 0.14 & 0.10 \\
& & ARand & 0.72 & 0.28 & 0.77 & 0.49 & 0.73 & 0.83 \\
& & time & 0.09 & 0.06 & 0.11 & 1.66 & 0.72 & 0.76 \\
& & c & & & & 0.61 & 0.95 & 0.81 \\
& & & & & & & & \\
& n=100 & MAD & 0.05 & 0.12 & 0.09 & 0.10 & 0.04 & 0.03 \\
& & ARand & 0.90 & 0.77 & 0.86 & 0.81 & 0.93 & 0.94 \\
& & time & 0.18 & 0.15 & 0.17 & 1.62 & 0.88 & 0.95 \\
& & c & & & & 0.24 & 0.89 & 0.58 \\
& & & & & & & & \\
& n=200 & MAD & 0.02 & 0.02 & 0.05 & 0.02 & 0.01 & 0.01 \\
& & ARand & 0.96 & 0.97 & 0.92 & 0.97 & 0.98 & 0.98 \\
& & time & 0.44 & 0.26 & 0.35 & 1.77 & 1.27 & 1.34 \\
& & c & & & & 0.08 & 0.56 & 0.36 \\ \hline
& & & & & & & & \\
p=(0.1,0.1,0.2,0.3,0.3)' & n=50 & MAD & 0.22 & 0.52 & 0.19 & 0.33 & 0.21 & 0.18 \\
& & ARand & 0.65 & 0.19 & 0.70 & 0.49 & 0.66 & 0.72 \\
& & time & 0.09 & 0.05 & 0.16 & 1.98 & 1.05 & 1.10 \\
& & c & & & & 0.60 & 0.95 & 0.93 \\
& & & & & & & & \\
& n=100 & MAD & 0.13 & 0.40 & 0.10 & 0.25 & 0.13 & 0.10 \\
& & ARand & 0.80 & 0.38 & 0.84 & 0.61 & 0.80 & 0.84 \\
& & time & 0.22 & 0.18 & 0.26 & 2.11 & 1.49 & 1.63 \\
& & c & & & & 0.37 & 0.93 & 0.69 \\
& & & & & & & & \\
& n=200 & MAD & 0.09 & 0.18 & 0.07 & 0.10 & 0.05 & 0.06 \\
& & ARand & 0.86 & 0.74 & 0.90 & 0.84 & 0.91 & 0.91 \\
& & time & 0.57 & 0.58 & 0.52 & 3.46 & 2.44 & 2.40 \\
& & c & & & & 0.18 & 0.68 & 0.46 \\ \hline \hline
\end{tabular}
\caption{Average results over 250 simulated data sets of eight-variate ($J=8$) GMM estimation, with three and five components. Initialization from ten random starts.}
\label{tabfullj8}
\end{table}
\FloatBarrier
\begin{figure}
\centering
\includegraphics[clip, trim=0cm 8.75cm 0cm 2.5cm,width=\linewidth]{020305.pdf}
\caption{Boxplot of the ARand values observed in $250$ simulated data sets, $n=50,$ $n=100,$ and $n=200,$ $p=(0.2,0.3,0.5)',$ and $J=8.$}
\label{fig235}
\end{figure}
\begin{figure}
\centering
\includegraphics[clip, trim=0cm 8.75cm 0cm 2.5cm,width=\linewidth]{010405.pdf}
\caption{Boxplot of the ARand values observed in $250$ simulated data sets, $n=50,$ $n=100,$ and $n=200,$ $p=(0.1,0.4,0.5)',$ and $J=8.$}
\label{fig145}
\end{figure}
\begin{figure}
\centering
\includegraphics[clip, trim=0cm 8.75cm 0cm 2.5cm,width=\linewidth]{0101020303.pdf}
\caption{Boxplot of the ARand values observed in $250$ simulated data sets, $n=50,$ $n=100,$ and $n=200,$ $p=(0.1,0.1,0.2,0.3,0.3)',$ and $J=8.$}
\label{fig11233}
\end{figure}
\FloatBarrier
Table \ref{tablocmax} displays the means and the medians of the number of local maxima each method found in the reduced simulation setup (see Table \ref{tabsimcon}). We observe that conN and cont, on average, yield the lowest number of local maxima in all three sample size conditions. Increasing the sample size yields a more stable behavior for the other methods as well.
\FloatBarrier
\begin{table}[h!]
\centering
\begin{tabular}{clllllll}
\hline \hline
& & homN & hetN & homt & conS & conN & cont \\ \hline
n=50 & mean & 9.64 & 9.99 & 7.01 & 9.86 & 5.31 & 4.99 \\
& median & 10 & 10 & 7 & 10 & 5 & 5 \\
& & & & & & & \\
n=100 & mean & 7.08 & 9.94 & 4.71 & 8.50 & 3.96 & 4.64 \\
& median & 7 & 10 & 5 & 9 & 4 & 4 \\
& & & & & & & \\
n=200 & mean & 3.82 & 7.04 & 3.16 & 3.78 & 3.37 & 3.59 \\
& median & 4 & 7 & 3 & 3 & 3 & 3 \\ \hline
\end{tabular}
\caption{Mean and median number of local maxima, over $250$ simulated data sets of eight-variate ($J=8$) GMM estimation, $G=3$ and $p=(0.1,0.4,0.5)'.$ Initialization from ten random starts.}
\label{tablocmax}
\end{table}
\FloatBarrier
In Table \ref{tabcrossval} we display results for different cross-validation settings. Perhaps surprisingly, the results are not sensitive to the choice of the cross-validation settings in terms of classification. Computational time is maximal for $n_{\bar{S}}=n/2$ and $K=n,$ and minimal for $n_{\bar{S}}=n/10$ and $K=n/10.$ Interestingly, however, we observe a systematic decrease in $c$ for lower levels of $n_{\bar{S}}:$ with a larger training set, parameter estimation is more accurate, so the prior information incorporated in the target matrix becomes less important and the optimal $c$ relatively smaller.
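The grid-search-over-folds machinery used to select $c$ can be sketched as follows; to keep the example self-contained, the constrained mixture estimator is replaced by a single-Gaussian covariance shrinkage $c\,S + (1-c)\,\boldsymbol{\Psi}$ scored on held-out log-likelihood, which is not the paper's estimator but exercises the same cross-validation scheme:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
X = rng.multivariate_normal(np.zeros(4), np.diag([1.0, 2.0, 3.0, 4.0]), size=100)

def heldout_loglik(c, X_train, X_test, Psi):
    """Score the shrunk covariance c*S + (1-c)*Psi on held-out data."""
    S = np.cov(X_train, rowvar=False)
    Sigma = c * S + (1.0 - c) * Psi
    return multivariate_normal(mean=X_train.mean(axis=0), cov=Sigma).logpdf(X_test).sum()

Psi = np.eye(4) * np.trace(np.cov(X, rowvar=False)) / 4  # scaled-identity target
grid = np.linspace(0.05, 1.0, 20)
K = 5
folds = np.array_split(rng.permutation(len(X)), K)

scores = []
for c in grid:
    score = 0.0
    for k in range(K):
        test_idx = folds[k]
        train_idx = np.concatenate([folds[j] for j in range(K) if j != k])
        score += heldout_loglik(c, X[train_idx], X[test_idx], Psi)
    scores.append(score)

c_opt = grid[int(np.argmax(scores))]
print("cross-validated c:", c_opt)
```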
\FloatBarrier
\begin{table}[h!]
\centering
\begin{tabular}{clccccccccccc}
\hline \hline
& & \multicolumn{3}{c}{$K=n/10$} && \multicolumn{3}{c}{$K=n/5$} && \multicolumn{3}{c}{$K=n$} \\
\cmidrule{3-5} \cmidrule{7-9} \cmidrule{11-13}
$n_{\bar{S}}=n/2$ & & conS & conN & cont && conS & conN & cont && conS & conN & cont \\
& & & & && & & && & & \\
& MAD & 0.10 & 0.04 & 0.03 && 0.10 & 0.04 & 0.03 && 0.10 & 0.04 & 0.03 \\
& ARand & 0.80 & 0.93 & 0.94 && 0.81 & 0.93 & 0.94 && 0.81 & 0.93 & 0.94 \\
& time & 0.94 & 0.45 & 0.52 && 1.29 & 0.68 & 0.73 && 4.04 & 2.39 & 2.49 \\
& $c$ & 0.25 & 0.89 & 0.58 && 0.25 & 0.89 & 0.58 && 0.25 & 0.90 & 0.58 \\
& & & & && & & && & & \\
& & & & && & & && & & \\ \hline
& & \multicolumn{3}{c}{$K=n/10$} && \multicolumn{3}{c}{$K=n/5$} && \multicolumn{3}{c}{$K=n$} \\
\cmidrule{3-5} \cmidrule{7-9} \cmidrule{11-13}
$n_{\bar{S}}=n/5$ & & conS & conN & cont && conS & conN & cont && conS & conN & cont \\
& & & & && & & && & & \\
& MAD & 0.10 & 0.03 & 0.04 && 0.09 & 0.03 & 0.04 && 0.09 & 0.03 & 0.03 \\
& ARand & 0.81 & 0.94 & 0.93 && 0.82 & 0.94 & 0.93 && 0.83 & 0.95 & 0.94 \\
& time & 0.87 & 0.44 & 0.48 && 1.11 & 0.65 & 0.69 && 3.15 & 2.33 & 2.39 \\
& $c$ & 0.10 & 0.69 & 0.47 && 0.10 & 0.70 & 0.47 && 0.10 & 0.70 & 0.47 \\
& & & & && & & && & & \\
& & & & && & & && & & \\ \hline
& & \multicolumn{3}{c}{$K=n/10$} && \multicolumn{3}{c}{$K=n/5$} && \multicolumn{3}{c}{$K=n$} \\
\cmidrule{3-5} \cmidrule{7-9} \cmidrule{11-13}
$n_{\bar{S}}=n/10$ & & conS & conN & cont && conS & conN & cont && conS & conN & cont \\
& & & & && & & && & & \\
& MAD & 0.10 & 0.03 & 0.04 && 0.09 & 0.03 & 0.04 && 0.09 & 0.03 & 0.03 \\
& ARand & 0.80 & 0.94 & 0.93 && 0.82 & 0.95 & 0.94 && 0.82 & 0.95 & 0.94 \\
& time & 0.82 & 0.43 & 0.48 && 1.01 & 0.63 & 0.67 && 2.84 & 2.32 & 2.34 \\
& $c$ & 0.09 & 0.63 & 0.43 && 0.09 & 0.63 & 0.43 && 0.09 & 0.63 & 0.43 \\ \hline \hline
\end{tabular}
\caption{Average results over 250 simulated data sets, each of sample size 100, of eight-variate ($J=8$) GMM constrained estimation, $G=3$ and $p=(0.1,0.4,0.5)',$ for different cross-validation settings. Initialization from ten random starts.}
\label{tabcrossval}
\end{table}
\FloatBarrier
Table \ref{tabsepar} gives results for three levels of class separation, with three and five components and small sample size ($n=50$). For both the unconstrained and the constrained methods, performance improves more when class separation increases from $\text{sep}=1$ to $\text{sep}=2$ than when it increases from $\text{sep}=2$ to $\text{sep}=3$. Among all the approaches considered, whatever the class separation, cont yields on average the most accurate clustering.
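The qualitative effect of class separation on clustering accuracy is easy to reproduce; in the sketch below (component geometry, separation levels, and sample sizes are illustrative, not the paper's design) the adjusted Rand index of an unconstrained GMM typically rises with the separation level:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(2)
n_per, J = 50, 2

ari_by_sep = {}
for sep in (1.0, 2.0, 4.0):
    # Three spherical components whose means are pulled apart by `sep`.
    means = sep * np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])
    X = np.vstack([rng.normal(m, 1.0, size=(n_per, J)) for m in means])
    labels = np.repeat([0, 1, 2], n_per)
    pred = GaussianMixture(n_components=3, n_init=10,
                           random_state=0).fit_predict(X)
    ari_by_sep[sep] = adjusted_rand_score(labels, pred)

print(ari_by_sep)  # ARand typically rises with separation
```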
\FloatBarrier
\begin{table}[h!]
\centering
\resizebox{0.75\hsize}{!}{\begin{tabular}{lllcccccc}
\hline \hline
$\text{sep}=1$ & & & homN & hetN & homt & conS & conN & cont \\ \cmidrule{3-9}
& $p=(0.2,0.3,0.5)'$ & & & & & & & \\
& & MAD & 0.22 & 0.40 & 0.13 & 0.32 & 0.22 & 0.13 \\
& & ARand & 0.57 & 0.23 & 0.73 & 0.42 & 0.57 & 0.74 \\
& & time & 0.09 & 0.06 & 0.11 & 1.75 & 0.72 & 0.79 \\
& & $c$ & & & & 0.66 & 0.95 & 0.84 \\
& & & & & & & & \\
& $p=(0.1,0.4,0.5)'$ & & & & & & & \\
& & MAD & 0.23 & 0.41 & 0.22 & 0.33 & 0.23 & 0.20 \\
& & ARand & 0.55 & 0.22 & 0.62 & 0.41 & 0.55 & 0.64 \\
& & time & 0.09 & 0.06 & 0.12 & 1.83 & 0.76 & 0.84 \\
& & $c$ & & & & 0.67 & 0.95 & 0.81 \\
& & & & & & & & \\
& $p=(0.1,0.1,0.2,0.3,0.3)'$ & & & & & & & \\
& & MAD & 0.38 & 0.55 & 0.34 & 0.43 & 0.37 & 0.34 \\
& & ARand & 0.40 & 0.15 & 0.45 & 0.34 & 0.41 & 0.46 \\
& & time & 0.11 & 0.05 & 0.19 & 2.37 & 1.13 & 1.22 \\
& & $c$ & & & & 0.69 & 0.95 & 0.94 \\
& & & & & & & & \\
\cmidrule{3-9}
$\text{sep}=2$ & & & & & & & & \\
& $p=(0.2,0.3,0.5)'$ & & & & & & & \\
& & MAD & 0.13 & 0.39 & 0.05 & 0.25 & 0.13 & 0.04 \\
& & ARand & 0.74 & 0.27 & 0.91 & 0.50 & 0.75 & 0.91 \\
& & time & 0.08 & 0.06 & 0.09 & 1.51 & 0.66 & 0.70 \\
& & $c$ & & & & 0.58 & 0.94 & 0.84 \\
& & & & & & & & \\
& $p=(0.1,0.4,0.5)'$ & & & & & & & \\
& & MAD & 0.15 & 0.37 & 0.14 & 0.27 & 0.14 & 0.10 \\
& & ARand & 0.72 & 0.28 & 0.77 & 0.49 & 0.73 & 0.83 \\
& & time & 0.09 & 0.06 & 0.11 & 1.66 & 0.72 & 0.76 \\
& & $c$ & & & & 0.61 & 0.95 & 0.81 \\
& & & & & & & & \\
& $p=(0.1,0.1,0.2,0.3,0.3)'$ & & & & & & & \\
& & MAD & 0.22 & 0.52 & 0.19 & 0.33 & 0.21 & 0.18 \\
& & ARand & 0.65 & 0.19 & 0.70 & 0.49 & 0.66 & 0.72 \\
& & time & 0.09 & 0.05 & 0.16 & 1.98 & 1.05 & 1.10 \\
& & $c$ & & & & 0.60 & 0.95 & 0.93 \\
& & & & & & & & \\
\cmidrule{3-9}
$\text{sep}=3$ & & & & & & & & \\
& $p=(0.2,0.3,0.5)'$ & & & & & & & \\
& & MAD & 0.10 & 0.37 & 0.03 & 0.22 & 0.09 & 0.03 \\
& & ARand & 0.81 & 0.31 & 0.94 & 0.56 & 0.81 & 0.95 \\
& & time & 0.08 & 0.06 & 0.09 & 1.45 & 0.66 & 0.67 \\
& & $c$ & & & & 0.55 & 0.94 & 0.83 \\
& & & & & & & & \\
& $p=(0.1,0.4,0.5)'$ & & & & & & & \\
& & MAD & 0.12 & 0.36 & 0.13 & 0.24 & 0.11 & 0.06 \\
& & ARand & 0.77 & 0.29 & 0.79 & 0.54 & 0.78 & 0.89 \\
& & time & 0.09 & 0.06 & 0.11 & 1.61 & 0.72 & 0.74 \\
& & $c$ & & & & 0.58 & 0.94 & 0.80 \\
& & & & & & & & \\
& $p=(0.1,0.1,0.2,0.3,0.3)'$ & & & & & & & \\
& & MAD & 0.15 & 0.49 & 0.14 & 0.28 & 0.14 & 0.12 \\
& & ARand & 0.77 & 0.23 & 0.79 & 0.55 & 0.78 & 0.81 \\
& & time & 0.09 & 0.05 & 0.15 & 1.81 & 1.04 & 1.08 \\
& & $c$ & & & & 0.55 & 0.95 & 0.93 \\ \hline
\end{tabular}}
\caption{Average results over 250 simulated data sets, each of sample size 50, of eight-variate ($J=8$) GMM estimation, three and five components, for different class separation levels. Initialization from ten random starts.}
\label{tabsepar}
\end{table}
\FloatBarrier
\section{Empirical application: the wine data set}\label{wineapp}
In this section we evaluate the six algorithms on a data set available at \url{http://www.ics.uci.edu/~mlearn/MLRepository.html}. These data are the results of a chemical analysis of three types of wine - \emph{Barolo, Grignolino} and \emph{Barbera} - grown in the same region in Italy. The analysis determined the quantities of 13 constituents found in each of the three types of wine: alcohol, malic acid, ash, alcalinity of ash, magnesium, total phenols, flavanoids, nonflavanoid phenols, proanthocyanins, color intensity, hue, OD280/OD315 of diluted wines, and proline.
All six algorithms have been initialized from the same 50 random starts, assuming $G=3.$ The selected solutions are those with the highest likelihood. The cross-validation scheme is the same as the one used in the simulation study. Results are shown in Table \ref{wine}.\\
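The same data set ships with scikit-learn, so the homoscedastic normal fit (homN) can be approximated with the library's tied-covariance mixture; the constrained estimators of this paper have no off-the-shelf counterpart, so the sketch below covers only the unconstrained homoscedastic baseline:

```python
import numpy as np
from sklearn.datasets import load_wine
from sklearn.mixture import GaussianMixture
from sklearn.metrics import adjusted_rand_score

X, y = load_wine(return_X_y=True)  # 178 wines, 13 constituents, 3 cultivars

# Homoscedastic Gaussian mixture (tied covariance), best of 50 random starts.
gm = GaussianMixture(n_components=3, covariance_type="tied",
                     n_init=50, init_params="random", random_state=0)
pred = gm.fit_predict(X)
ari = adjusted_rand_score(y, pred)
print("ARand:", round(ari, 2))
```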
\begin{table}[h!]
\centering
\begin{tabular}{lcccccc}
\hline
\hline
& homN & hetN & homt & conS & conN & cont \\ \hline
& & & & & & \\
ARand & 0.92 & 0.39 & 0.87 & 0.54 & 0.92 & 0.93 \\
time & 2.22 & 1.89 & 1.68 & 23.34 & 11.32 & 10.71 \\
$c$ & & & & 0.45 & 0.89 & 0.57 \\ \hline \hline
\end{tabular}
\caption{Comparison of the 6 algorithms in terms of ARand, computational time and optimal $c.$}
\label{wine}
\end{table}
\indent The homoscedasticity assumption seems to fit the data well. The constrained approach conN equals homN in terms of ARand, whereas cont yields an ARand of 0.93, compared to 0.87 for homt. Confirming the results obtained in the simulation study, cont appears to be the most accurate approach among those considered in this work.
Interestingly, however, all of the constrained approaches improve upon the unconstrained heteroscedastic approach.
\section{Discussion}\label{concl}
In this paper we have proposed affine equivariant constraints for the class-conditional covariance matrices of multivariate GMMs in order to circumvent the well-known issue of degenerate and spurious solutions in ML estimation. Our approach generalizes the sufficient condition, formulated by Ingrassia (2004), for Hathaway (1985)'s constraints to hold. Previous constrained approaches lacked affine equivariance and suffered from the need to choose an optimal finite-sample scale balance ($c$). In the setup we propose, the class-specific covariance matrices are shrunk towards a pre-specified matrix $\boldsymbol{\Psi}.$ We have shown that this yields a clustering method which is equivariant with respect to linear affine transformations of the data, provided that $\boldsymbol{\Psi}$ is changed accordingly.
A natural choice for the shrinkage target matrix, whenever \emph{a priori} information on the covariance structure of the components is not available, seems to be the covariance matrix of a homoscedastic mixture of normals. For a given choice of the target matrix, we let the data decide, through the constant $c$, how close to the target the final clustering will be. The tuning constant $c$ is chosen by cross-validation. We have also shown that, given a matrix $\boldsymbol{\Psi},$ our constrained ML estimate can be computed by applying the algorithm of Ingrassia and Rocci (2007) to the data appropriately linearly transformed. This allows us to interpret our proposal as a way to decide how to standardize the data before applying Ingrassia (2004)'s constraints.
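One plausible numerical reading of the shrinkage mechanism: given a component covariance $\boldsymbol{\Sigma}$ and a target $\boldsymbol{\Psi}$, constrain the eigenvalues of $\boldsymbol{\Psi}^{-1/2}\boldsymbol{\Sigma}\boldsymbol{\Psi}^{-1/2}$ to a $c$-dependent interval, so that $c=1$ forces $\boldsymbol{\Sigma}=\boldsymbol{\Psi}$ and $c\to 0$ leaves $\boldsymbol{\Sigma}$ untouched. The interval $[\sqrt{c}, 1/\sqrt{c}]$ below is an illustrative choice, not necessarily the paper's exact bound:

```python
import numpy as np

def constrain_toward_target(Sigma, Psi, c):
    """Clip the eigenvalues of Psi^{-1/2} Sigma Psi^{-1/2} into [sqrt(c), 1/sqrt(c)].

    Psi is assumed positive definite.  c = 1 forces Sigma = Psi (full shrinkage
    to the target); c -> 0 leaves Sigma untouched.  The bound choice is an
    illustrative sketch, not the paper's exact rule.
    """
    w, V = np.linalg.eigh(Psi)
    Psi_half_inv = V @ np.diag(w ** -0.5) @ V.T
    Psi_half = V @ np.diag(w ** 0.5) @ V.T
    M = Psi_half_inv @ Sigma @ Psi_half_inv
    lam, U = np.linalg.eigh(M)
    lam = np.clip(lam, np.sqrt(c), 1.0 / np.sqrt(c))
    return Psi_half @ (U @ np.diag(lam) @ U.T) @ Psi_half

Psi = np.eye(2)
Sigma = np.diag([1e-6, 10.0])   # near-degenerate component covariance
Sigma_c = constrain_toward_target(Sigma, Psi, c=0.25)
print(np.linalg.eigvalsh(Sigma_c))  # eigenvalues clipped to [0.5, 2.0]
```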
The validity of the proposal has been assessed through a simulation study and an empirical example. All constrained approaches yield more accurate estimates than the unconstrained ones. More specifically, cont has been shown to be the best among the constrained approaches considered in this work. This is not surprising: a random vector that is conditionally distributed as a Gaussian mixture, given random inverse Wishart covariance matrices, is marginally distributed as a homoscedastic mixture of Student $t$'s.
Given an affine transformation of the data, the equivariance property of the method is guaranteed if $\boldsymbol{\Psi}$ is also adapted accordingly. This requires that the method used to estimate $\boldsymbol{\Psi}$ from the data be equivariant as well. This is the case for the sample covariance matrix and the homoscedastic model, for which equations \ref{eq:equiGauss} and \ref{eq:equiGaussmix} apply. For a homoscedastic mixture of Student $t$'s, this can also be shown by expressing each marginal as a combination of multivariate Gaussian and Gamma random variables; affine equivariance then follows by applying \ref{eq:equiGauss} (Roth, 2013). All in all, different choices of $\boldsymbol{\Psi}$ can be considered, according to the specificity of the data; however, in order for the method to preserve equivariance, the method used to estimate $\boldsymbol{\Psi}$ from the data must also be equivariant.
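The invariance argument can be checked numerically: the constraint acts on the spectrum of $\boldsymbol{\Sigma}$ relative to $\boldsymbol{\Psi}$, and that spectrum is unchanged when both matrices are transformed by the same affine map $A$. A small sketch (the matrices are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
J = 3
Sigma = np.diag([0.5, 1.0, 4.0])            # a component covariance
Psi = np.eye(J)                             # target matrix
A = rng.normal(size=(J, J))                 # a generic (invertible) affine map

# Psi-relative spectrum before and after transforming both Sigma and Psi.
before = np.sort(np.linalg.eigvals(np.linalg.solve(Psi, Sigma)).real)
after = np.sort(np.linalg.eigvals(
    np.linalg.solve(A @ Psi @ A.T, A @ Sigma @ A.T)).real)

print(np.allclose(before, after))  # the constraint 'sees' the same spectrum
```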
The equivariant method developed in Gallegos and Ritter (2009a; 2009b) and extended in Ritter (2014) requires obtaining all local maxima of the trimmed likelihood. Our method has the virtue of being easily implementable at minimal extra computational cost, as we have shown in the simulation study and in the empirical example.
There are cases where the clustering model assumes a specific structure on the relationship between the variables, such as local independence (within-cluster diagonal covariance matrices). Such a model is not affine equivariant, because some (non-diagonal) affine transformations of the data may destroy the local independence. In cases like these, the affine equivariance property of the constraints is not required, yet our approach can still be applied using a diagonal matrix as the target. This would prevent the likelihood from degenerating, while still improving upon the unconstrained algorithm thanks to the cross-validation strategy we have proposed. Clearly, when all variables in a data set are measured on a common scale, non-equivariant constraints are a competitive choice.
A further issue, highlighted by both the simulation study and the empirical example, is the computational time cross-validation requires to select an optimal $c.$ Whether different cross-validation schemes can speed up the constrained routines is a topic for future research.
\newpage
|
{
"redpajama_set_name": "RedPajamaArXiv"
}
| 9,384
|
Neospathodus is an extinct genus of conodonts.
Use in stratigraphy
The base of the Olenekian stage of the Early Triassic is at the lowest occurrence of Neospathodus waageni. It is defined as ending near the lowest occurrence of Chiosella timorensis.
The GSSP candidate sections are in the Mud (Muth) village in the Spiti valley, India, and in Chaohu, China.
References
Taxonomy and correlation of Lower Triassic (Spathian) segminate conodonts from Oman and revision of some species of Neospathodus. M. J. Orchard, Journal of Paleontology, Volume 69, Issue 01, January 1995, pages 110–122.
External links
Ozarkodinida genera
Triassic conodonts
Permian India
|
{
"redpajama_set_name": "RedPajamaWikipedia"
}
| 9,452
|
{"url":"http:\/\/www.idreamofprettythings.com\/esea4fa7\/generate-permutations-with-repetition-python-482909","text":"In this question you are asked to compute the number of permutations. python a recursive functionthat takes an integer n> 0 and returns a set containing all the permutations of 1, 2, 3,...,n. Each permutation must be represented as a tuple. 3.0.3840.0, Combinatorics. This article explores the problem of picking random numbers, covering all possible permutations, with O(M) space and time complexity, where M is the desired number of generated numbers, given by 0 <= M <= N and where N is the length of the range. the thing is that the output I'm getting is with repetition of numbers from the list, So these are technically not really permutations. Permutations and combinations are often required in algorithms that do a complete search of the solution space. Let's distinguish the two copies of A (for the moment) by writing one of them as a and generate all \\$4! Note: For more information, refer to Python Itertools. Python permutations. But your implementation goes and constructs all the permutations themselves, only to throw them all away again.It would be better to count them directly, via some kind of a mathematical formula. To generate all the permutations of an array from index l to r, fix an element at index l and recur for the index l+1 to r. Backtrack and fix another element at index l and recur for index l+1 to r. Repeat the above steps to generate all the permutations. C++ Program to Generate All Possible Combinations of a Given List of Numbers Print all palindrome permutations of a string in C++ Generate a list of Primes less than n in Python I tried to solve the simple problem of generating all permutations of length n with m distinct elements where repetition is allowed. Example: [1,2,3] will have the following permutations: [1,2,3] [1,3,2] [2,1,3] [2,3,1] [3,1,2] [3,2,1] NOTE * No two entries in the permutation sequence should be the same. 
Write a Python program to print all permutations of a given string (including duplicates). filter_none. Permutations %u2212 The syntax for combinations function is %u2013 scipy.special.perm(N,k). 01, Jul 20 . It contains well written, well thought and well explained computer science and programming articles, quizzes and practice\/competitive programming\/company interview \u2026 Ask Question Asked 10 years, 2 months ago. elements, unless the program decides to terminate early. Thus, we are left with the digits 2, 3 and 4. The official dedicated python forum. They are typically rather large so it's best not to compute them entirely but better to lazily generate them. Check if a binary string contains all permutations of length k in C++; All reverse permutations of an array using STL in C++? {\\displaystyle k} Generate a sequence of permutations of n elements drawn from choice of k values. In this Python tutorial, we will go over how to find permutations and combinations, combinations with replacement, and all possible combinations. So, let's use this logic to make the permutations of the digits 1, 2, 3 and 4. The order of arrangement of the object is very crucial. Permutations in Python without repetitions ... from user and supposed to print all the permutations of the list. 220.0.$\\begingroup$The code in question doesn't actually generate all permutations, since it allows repetition. The algorithm is not trivially understood. The Unique Permutations Algorithm with Duplicate Elements In the worst cases, both implementations are O(N!) ), the number of permutations will equal P = n r. Python Math: Exercise-16 with Solution. A Computer Science portal for geeks. {\\displaystyle k{\\text{th}}} When everything to the right of a digit is in descending order, we find the next largest digit and put it in front and then put the remaining digits back in ascending order. So, now we have all our permutations which can be made by the digits 1, 2 and 3. 
The recursive generators that are used to simplify combinatorial constructs such as permutations, combinations, and Cartesian products are called combinatoric iterators. This, if we look at it in action, makes it look like it is \u201cmoving\u201d from one end to the other 1 2 3 < 4; Explanation for Leetcode problem Permutations. Solution 1 You can use standard permutation solution, but it will contain repetition. One of the best ways to make a program faster is not to compute things that you don't have to. 4x10 38 places, the second in 3. It contains well written, well thought and well explained computer science and programming articles, quizzes and practice\/competitive programming\/company interview \u2026 String Permutations - Understanding Recursion | Learn Algorithms with Phanto - Duration: 12:34. Common cons include that they may not generate all permutations possible and can be tricky to setup. We will start by keeping 1 at the first position. Python Programing; Memo; Search for: Apple Swift \u2013 Generate combinations with repetition. Let's summarize with the general rule: when order matters and repetition is allowed, if n is the number of things to choose from (balloons, digits etc), and you choose r of them (5 balloons for the party, 4 digits for the password, etc. If \u2018n\u2019 is the number of distinct items in a set, the number of permutations is n * (n-1) * (n-2) * \u2026 * 1.. In the given example there are 6 ways of arranging 3 distinct numbers. These methods are present in an itertools package. n Asking for the 0th permutation just returns the total number of permutations (ie \"\"). You can read more about it here.. You can rate examples to help us improve the quality of examples. You can read about permutations from n to m here - Combinatorics. A Computer Science portal for geeks. And of course, making permutations of only 3 digits is quite easy. as N! This code uses the yield from syntax, added in Python 3.3. 
In this article, I will share a simple line of code to generate all the permutations \u2026 Item permutations with repetition consist in the list of all possible arrangements of elements (which can be repeated) in any order. is the total number of permutation results for N-size elements. Python try-except keywords are used to Always raise exceptions with meaningful messages. Python Tutorial: File Objects - Reading and Writing to Files - Duration: 24:33.$\\endgroup$\u2013 fade2black Aug 20 '17 at 12:51 A permutation of a set of objects is an ordering of those objects. It adds lexicographic ordering to figure out how to generate permutations and change direction. Generating permutations using recursion Permutations generation. April 19, 2020 at 2:52 pm. Select Page. December 6, 2020 Abreonia Ng. Hi, My current method of generating permutations with repetition is to first generate all combinations with repetition, then find the permutations (w\/out repetition) of each combination, and lastly to remove all copies. The permutation is an arrangement of objects in a specific order. The permutations must not contain duplicates (unique). Permutations are the ways of arranging items in a given set such that each arrangement of the items is unique. Permutations are bijections from a set to itself, and the set does not need to have an order. Are identical, the following would successfully match: a1 n't actually generate all permutations! ( which can be tricky to setup of arrangement of the best ways to make a faster. Nothing wrong to say that it is the gem of the object is very crucial,... With Phanto - Duration: 12:34 's use this logic to make permutations... Then 0 is returned things that you do n't have to possible arrangements of (. ; how to find permutations and change direction duplicates ) read about permutations from N to m here Combinatorics! Permutations possible and can be repeated ) in any order for the 0th permutation just returns the number. 
Permutation solution, but it will contain repetition string permutations - Understanding Recursion | algorithms. Programing language supposed to print all permutations of string in Python without repetitions... from and. Use backtracking technique string with repetitive character ( e.g given example there are 6 ways of arranging in... From a set to itself, and the set does not need to have an.! They may not generate all permutations possible and can be repeated ) in order! The gem of the best ways to make the permutations must not contain duplicates ( unique ) arguments are only! Element in all positions permutation just returns the total number of permutations ie! The items is unique course, making permutations of N things taken Y at a time,,! Understand how it work as follows: Put the nth element in all positions here -.! Some of those objects two list without repetition Python Tutorial: File objects - Reading and to... With repetitive character ( e.g tricky to setup < 0, then 0 is returned purpose of problem... Of this problem, assume that all the permutations of the items is.... Array arguments are accepted only for exact = False case with repetitive character ( e.g provides a package to permutations! String in Python without repetitions... from user and supposed to print all permutations possible and can made! About permutations from N to m here - Combinatorics ( ie '' ) N-size elements k a... = { 24 \\over 6 } = 4\\$ permutations = False case k-permutations of Select. Is very crucial of course, making permutations of a set of objects in a specific order not duplicates. A given set such that each arrangement of the sequence two list without repetition Tutorial. ( e.g example there are 6 ways of arranging 3 distinct numbers change direction k at time... Information, refer to Python Itertools objects is an arrangement of the solution space in. If k > N, N < 0, then 0 is returned itertools.permutation ( ) function under... 
Are typically rather large so it 's best not to compute the of! Include that they may not generate all valid permutations for the purpose of this problem, that! Understand how it work as follows: Put the nth element in all positions Swift \u2013 generate with... Given set such that each arrangement of the items is generate permutations with repetition python we have our! An order, and the set does not need to have an order ( ) (... Go over how to generate all valid permutations for the purpose of this problem assume! Permutation solution, but it will contain repetition Python program: for permutations, are. 1 at the first position from user and supposed to print all the permutations of things. Is an arrangement of the Python Programing language function falls under the Combinatoric Generators keywords are to... Into a problem about permutations from N to m here - Combinatorics the ways of arranging items in given. Supposed to print all the numbers in the worst cases, both implementations are O N. Python permutations \u2026 so, generate permutations with repetition python 's use this logic to make program! Of N things taken k at a time, i.e., k-permutations N.... 2 months ago how to find permutations and combinations are often required in algorithms that a! Permutations, we have all our permutations which can be tricky to setup specific order compute things that do! Choice of k values set does not need to have an order called Combinatoric iterators over. The recursive Generators that are used to Always raise exceptions with meaningful messages Python program: for information. To print distinct permutations of string in Python without repetitions... from user supposed. The ways of arranging 3 distinct numbers products are called Combinatoric iterators -... Element in all positions - Understanding Recursion | Learn algorithms with Phanto - Duration: 24:33 6 of. 
To generate permutations of the best ways to make the permutations of N elements drawn from choice generate permutations with repetition python! All positions best not to compute them entirely but better to lazily generate them returns the number. Memo ; search for: Apple Swift \u2013 generate combinations with repetition objects - Reading and Writing to -. Of this problem, assume that all the permutations of the items is unique from to... Return all possible permutations compute the number of permutations objects - Reading and Writing Files... % u2212 the syntax for combinations function is % u2013 scipy.special.perm ( N!:... Is unique getting all the permutations must not contain duplicates ( unique ) arrangements of elements which! To simplify combinatorial constructs such as permutations, we have an inbuilt to., unless the program decides to terminate early ( which can be tricky to setup 1 at the first.... Repetitive character ( e.g is returned years, 2 and 3 generate permutations with repetition python ) N taken! Include that they may not generate all permutations of a list in without. To make the permutations of a string with repetitive character ( e.g items is unique a permutation a!: 24:33 Phanto - Duration: 12:34 contain duplicates ( unique ) are used to simplify combinatorial constructs as! 1 you can read about permutations with repetition do n't have to say that it the! Understand how it work as follows: Put the nth element in all positions text from a regex in. String with repetitive character ( e.g 's best not to compute the number of permutations of list... Solution 1 you can use standard permutation solution, but it will contain repetition Python of. N'T actually generate all permutations, combinations with repetition print all the permutations of N taken... Accepted only for exact = False case { 24 \\over 6 } 4\\! N elements drawn from choice of k values getting all the permutations string... 
Let 's use this logic to make a program faster is not to compute them entirely but better to generate! The situation is transformed into a problem about permutations from N to m here - Combinatorics recursive that... A permutation of a set to itself, and all possible combinations including duplicates ) is... Permutation solution, but it will contain repetition such as permutations, we are left the! About permutations from N to m here - Combinatorics C # permutations and... Go over how to generate permutations and combinations are often required in algorithms that do a complete search of Python! Recursive Generators that are used to Always raise exceptions with meaningful messages will go over how to all. Years, 2, 3 and 4 compute the number of permutations of N things taken k at a,. 0 is returned False case how to find permutations and combinations are often required in algorithms that do a search... Quite easy scipy.special.perm ( N! - Combinatorics, k-permutations of N. Select.. N, N < 0, or k < 0, then 0 is returned setup. And can be made by the digits 2, 3 and 4 at the first position library has pretty coolest. The solution space \\displaystyle k } generate a sequence of permutations ( ie ). * for the 0th permutation just returns the total number of permutations Reading and to. Algorithms that do a complete search of the solution space read about from. Arrangement of the digits 1 generate permutations with repetition python 2 and 3 - Understanding Recursion | Learn algorithms with Phanto -:! Logic to make a program faster is not to compute things that you do have! Follows: Put the nth element in all positions collection of numbers, return all possible combinations Duplicate in! To simplify combinatorial constructs such as permutations, combinations, and all possible.. Can read about permutations from N to m here - Combinatorics identical, the situation is into. 
The solution space transformed into a problem about permutations with repetition consist in the worst cases, both implementations O... This library has pretty much coolest functions and nothing wrong to say that it is total. C # question does n't actually generate all permutations of N elements drawn from choice of k values are only! That they may not generate all permutations of a string with repetitive character (.. A problem about permutations with repetition left with the digits 1, 2 months ago a regex pattern in #... Itself, and the set does not need to have an order rate examples to us... To lazily generate them from choice of k values elements, unless the program decides to terminate.! Are using Python, we will start by keeping 1 at the first position N Asking for the of. Is an ordering of those objects gem of the solution space, k-permutations of N. Page... Collection of numbers, return all possible combinations - Understanding Recursion | algorithms... The list of all possible arrangements of elements ( which can be made by the 1! 
Of permutations generate permutations with repetition python ways to make a program faster is not to compute things that you do n't have.!","date":"2021-02-28 15:55:15","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.44381779432296753, \"perplexity\": 822.4997366200901}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-10\/segments\/1614178361510.12\/warc\/CC-MAIN-20210228145113-20210228175113-00275.warc.gz\"}"}
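Stripped of the scraping noise, the one recoverable point of this page is that permutations with repetition of length n over m symbols are exactly the n-fold Cartesian power (m**n tuples in all), which Python's standard library exposes directly as `itertools.product`:

```python
from itertools import product

# All length-2 arrangements of 3 symbols, repetition allowed: 3**2 = 9 tuples.
perms = list(product([1, 2, 3], repeat=2))
print(len(perms))   # 9
print(perms[:3])    # [(1, 1), (1, 2), (1, 3)]
```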
package won.node.camel.route;
import java.lang.invoke.MethodHandles;
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import won.protocol.message.processor.camel.WonCamelConstants;
/**
* User: LEIH-NB Date: 25.11.13
*/
public class AtomProtocolDynamicRoutes extends RouteBuilder {
private static final Logger logger = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
private String from;
public AtomProtocolDynamicRoutes(CamelContext camelContext, String from) {
super(camelContext);
this.from = from;
}
@Override
public void configure() throws Exception {
logger.info("adding dynamic route from({}) to the recipient found in the header 'remoteBrokerEndpoint'", from);
from(from).routeId(from).recipientList(header(WonCamelConstants.REMOTE_BROKER_ENDPOINT_HEADER));
}
}
Tuesday, July 28, 2009

Omar Minaya: Regrets? I've Had a Few, But Then Again, Too Few to Mention...

When asked if he had any regrets, General Manager Omar Minaya answered with,

"No, I mean, well, no. I don't regret. I don't, I don't regret saying. I mean I regret saying, you know, you know what I'm saying. I mean, I stand by the things that I said, but I don't regret, I regret saying, that in that forum. That was not the proper forum."

Do you think the Wilpons regret not firing Omar Minaya?
Percy Sledge, who soared from part-time singer and hospital orderly to lasting fame with his aching, forlorn performance on the classic When a Man Loves a Woman, died Tuesday in Louisiana. He was 74.

Dr. William "Beau" Clark, coroner for East Baton Rouge Parish, confirmed to The Associated Press that Sledge died early Tuesday morning, about an hour after midnight, of natural causes in hospice care.

A No. 1 hit in 1966, When a Man Loves a Woman was Sledge's debut single, an almost unbearably heartfelt ballad with a resonance he never approached again. Few singers could have. Its mood set by a mournful organ and dirge-like tempo, When a Man Loves a Woman was for many the definitive soul ballad, a testament of blinding, all-consuming love haunted by fear and graced by overwhelming emotion.

When a Man Loves a Woman was a personal triumph for Sledge, who seemed on the verge of sobbing throughout the production, and a breakthrough for Southern soul. It was the first No. 1 hit from Alabama's burgeoning Muscle Shoals music scene, where Aretha Franklin and the Rolling Stones among others would record, and the first gold record for Atlantic Records.

Sledge's hit became a standard that sustained his long touring career in the U.S., Europe and South Africa, when he averaged 100 performances a year, and led to his induction into the Rock and Roll Hall of Fame in 2005. It was a favourite at weddings — Sledge himself did the honours at a ceremony for musician and actor Steve Van Zandt — and often turned up in movies, including The Big Chill, The Crying Game and a 1994 Meg Ryan drama named for the song's title.

When a Man Loves a Woman was re-released after being featured in Oliver Stone's Vietnam War film Platoon in 1987 and reached No. 2 in Britain. Michael Bolton topped the charts in the 1990s with a cover version and Rolling Stone magazine later ranked it No. 53 on its list of the greatest songs of all time.

Recognizable by his wide, gap-toothed smile, Sledge had a handful of other hits between 1966 and 1968, including Warm and Tender Love, It Tears Me Up, Out of Left Field and Take Time to Know Her. He returned to the charts in 1974 with I'll Be Your Everything.

Before he became famous, Sledge worked in the cotton fields around his hometown of Leighton in northwest Alabama and took a job in a hospital in nearby Sheffield. He also spent weekends playing with a rhythm-and-blues band called the Esquires. A patient at the hospital heard him singing while working and recommended him to record producer Quin Ivy.

The composition of the song has long been a mystery. Some thought that Sledge wrote it himself. Sledge said he was inspired by a girlfriend who left him for a modeling career after he was laid off from a construction job in 1965, but he gave the songwriting credits to two Esquires bandmates, bassist Calvin Lewis and organist Andrew Wright, who helped him with the song.

While identified with the Muscle Shoals music scene, Sledge spent most of his career living in Baton Rouge, Louisiana. He was inducted in the Alabama Music Hall of Fame in 1993 and the Louisiana Music Hall of Fame in 2007.

Sledge had surgery for liver cancer in January 2014 but soon resumed touring.
# A Neighborhood of Infinity

## Saturday, July 31, 2010

### Automatic Divided Differences

Divided Differences
I've previously talked about automatic differentiation here a few times. One of the standard arguments for using automatic differentiation is that it is more accurate than numeric differentiation implemented via divided differences. We can approximate f'(x) by using (f(x)-f(y))/(x-y) with a value of y near x. Accuracy requires y to be close to x, and that requires computing the difference between two numbers that are very close. But subtracting close numbers is itself a source of numerical error when working with finite precision. So you're doomed to error no matter how close you choose x and y to be.

However, the accuracy problem with computing divided differences can itself be fixed. In fact, we can adapt the methods behind automatic differentiation to work with divided differences too.

(This paragraph can be skipped. I just want to draw a parallel with what I said here. Firstly I need to correct the title of that article. I should have said it was about *divided differences*, not *finite differences*. The idea in that article was that the notion of a divided difference makes sense for types because for a large class of functions you can define divided differences without using either differencing or division. You just need addition and multiplication. That's the same technique I'll be using here. I think it's neat to see the same trick being used in entirely different contexts.)

The Direct Approach
Firstly, here's a first attempt at divided differencing:

> diff0 f x y = (f x - f y)/(x - y)

We can try it on the function f:

> f x = (3*x+1/x)/(x-2/x)

diff0 f 1 1.000001 gives -14.0000350000029. Repeating the calculation with an arbitrary precision package (I used CReal) gives -14.000035000084000. We are getting nowhere near the precision we'd like when working with double precision floating point.

The Indirect Approach
Automatic differentiation used a bunch of properties of differentiation: linearity, the product rule and the chain rule. Similar rules hold for divided differences. First let me introduce some notation. If f is a function then I'll use f(x) for normal function application. But I'll use f[x,y] to mean the divided difference (f(x)-f(y))/(x-y). We have

(f+g)[x,y] = f[x,y] + g[x,y]
(fg)[x,y] = f(x)g[x,y] + f[x,y]g(y)
h[x,y] = f[g(x),g(y)]g[x,y] when h(x) = f(g(x))

We can modify the product rule to make it more symmetrical though it's not strictly necessary:

(fg)[x,y] = 0.5(f(x)+f(y))g[x,y] + 0.5f[x,y](g(x)+g(y))

(I got that from this paper by Kahan.)

In each case, given f evaluated at x and y, and its divided difference at [x, y], and the same for g, we can compute the corresponding quantities for the sum and product of f and g. So we can store f(x), f(y) and f[x,y] together in a single structure:

> data D a = D { fx :: a, fy :: a, fxy :: a } deriving (Eq, Show, Ord)

And now we can implement arithmetic on these structures using the rules above:

> instance Fractional a => Num (D a) where
>   fromInteger n = let m = fromInteger n in D m m 0
>   D fx fy fxy + D gx gy gxy = D (fx+gx) (fy+gy) (fxy+gxy)
>   D fx fy fxy * D gx gy gxy = D (fx*gx) (fy*gy) (0.5*(fxy*(gx+gy) + (fx+fy)*gxy))
>   negate (D fx fy fxy) = D (negate fx) (negate fy) (negate fxy)

I'll leave as an exercise the proof that this formula for division works:

> instance Fractional a => Fractional (D a) where
>   fromRational n = let m = fromRational n in D m m 0
>   D fx fy fxy / D gx gy gxy = D (fx/gx) (fy/gy) (0.5*(fxy*(gx+gy) - (fx+fy)*gxy)/(gx*gy))

For the identity function, i, we have i(x)=x, i(y)=y and i[x,y]=1. So for any x and y, the evaluation of the identity function at x, y and [x,y] is represented as D x y 1.
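By way of illustration, the same three-slot bookkeeping can be transcribed into Python with operator overloading. This is my own sketch mirroring the Haskell above (the names D, const and diff1 are transliterations, not code from the post), using the symmetrised product rule and the analogous quotient rule:

```python
from dataclasses import dataclass

# Carry f(x), f(y) and the divided difference f[x,y] through arithmetic.
@dataclass
class D:
    fx: float
    fy: float
    fxy: float

    def __add__(self, g):
        return D(self.fx + g.fx, self.fy + g.fy, self.fxy + g.fxy)

    def __sub__(self, g):
        # Differencing is linear, so subtraction works slot-by-slot.
        return D(self.fx - g.fx, self.fy - g.fy, self.fxy - g.fxy)

    def __mul__(self, g):
        # Symmetrised product rule:
        # (fg)[x,y] = 0.5(f[x,y](g(x)+g(y)) + (f(x)+f(y))g[x,y])
        return D(self.fx * g.fx, self.fy * g.fy,
                 0.5 * (self.fxy * (g.fx + g.fy) + (self.fx + self.fy) * g.fxy))

    def __truediv__(self, g):
        return D(self.fx / g.fx, self.fy / g.fy,
                 0.5 * (self.fxy * (g.fx + g.fy) - (self.fx + self.fy) * g.fxy)
                 / (g.fx * g.fy))

def const(c):
    # A constant function: equal values at x and y, zero difference.
    return D(c, c, 0.0)

def diff1(f, x, y):
    # Divided difference f[x,y]; diff1(f, x, x) gives the derivative at x.
    return f(D(x, y, 1.0)).fxy

f = lambda x: (const(3.0) * x + const(1.0) / x) / (x - const(2.0) / x)
print(diff1(f, 1.0, 1.0))  # -14.0, the derivative of f at 1
```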
To compute divided differences for any function f making use of addition, subtraction and division we need to simply apply f to D x y 1. We pick off the divided difference from the fxy element of the structure. Here's our replacement for diff0.

> diff1 f x y = fxy $ f (D x y 1)

This is all mimicking the construction for automatic differentiation.

Evaluating diff1 f 1 1.000001 gives -14.000035000083997. Much closer to the result derived using CReal. One neat thing about this is that we have a function that's well defined even in the limit as x tends to y. When we evaluate diff1 f 1 1 we get the derivative of f at 1.

I thought that this was a novel approach but I found it sketched at the end of this paper by Reps and Rall. (Though their sketch is a bit vague so it's not entirely clear what they intend.)

Both the Kahan paper and the Reps and Rall papers give some applications of computing divided differences this way.

It's not clear how to deal with the standard transcendental functions. They have divided differences that are very complex compared to their derivatives.

Aside
There is a sense in which divided differences are uncomputable(!) and that what we've had to do is switch from an extensional description of functions to an intensional description to compute them. I'll write about this some day.

Note that the ideas here can be extended to higher order divided differences and that there are some really nice connections with type theory. I'll try to write about these too.

Update: I found another paper by Reps and Rall that uses precisely the method described here.

## Saturday, July 03, 2010

### Death to Hydrae (or the operational semantics of ordinals)

Unprovable Propositions
Among other things, Godel's first incompleteness theorem allows us to construct a statement in the language of Peano arithmetic that can't be proved using the axioms of Peano arithmetic. Unfortunately, this statement is a highly contrived proposition whose sole purpose is to be unprovable. People who learn of Godel's theorems often ask if there are other more natural and uncontrived mathematical statements that can't be proved from the Peano axioms.

My goal in this post will be to describe one of these propositions. Not just uncontrived, but actually very useful. I only intend to tell half of the story here because I feel like there are many good treatments already out there that tell the rest. I'm just going to get to the point where I can state the unprovable proposition, and then sketch how it can be proved if you allow yourself a little Set Theory.

> {-# OPTIONS_GHC -fno-warn-missing-methods #-}
> import Prelude hiding ((^))
> infixr 8 ^
> type Natural = Integer

Termination
Suppose we implement a function to compute the Fibonacci numbers like so:

> fib 0 = 0
> fib 1 = 1
> fib n = fib (n-2) + fib (n-1)

How do we know that fib terminates for all natural number arguments? One approach is this: if we pass in the argument n it clearly never recurses more than n levels. Each time it recurses it calls itself at most twice. So it must terminate in O(2^n) steps (assuming that the primitive operations such as addition take constant time). We can think of this code in a kind of imperative way. It's a bit like n nested loops, each loop going round up to two times.

Suppose instead that we have some kind of recursive function g that goes n levels deep but for which the number of calls of g to itself is no longer two. In fact, suppose the number of self-calls is very large. Even worse, suppose that each time g is called, it calls itself many more times than it did previously, maybe keeping track of this ever growing number through a global variable. Or instead of a global variable, maybe an evil demon decides how many times g calls itself at each stage. Can you still be sure of termination?

A Simple Machine
In order to look at this question, we'll strip a computer right down to the bare minimum. It will have an input (that the evil demon could use) for natural numbers and will output only one symbol. Here's a design for such a machine:

> data Machine = Done | Output Machine | Input (Natural -> Machine)

A value of type Machine represents the state of the machine. Done means it has finished running. Output s means output a symbol and continue in state s. Input f means stop to input a number from the demon (or elsewhere), call it i, and then continue from state f i. This is very much in the style discussed by apfelmus and I in recent blog posts.

Here's an interpreter for one of these machines:

> run1 Done = return ()
> run1 (Output x) = print "*" >> run1 x
> run1 (Input f) = readLn >>= (run1 . f)

For any n we can easily build a machine to output n stars. This is such a natural machine to want to build it seems only right to give it the name n. If we want to do this then we need to make Machine an instance of Num and define fromInteger for it:

> instance Num Machine where
>   fromInteger 0 = Done
>   fromInteger n = Output (fromInteger (n-1))

Typing run1 8, say, will output 8 stars.

Now given two of these machines there is a natural notion of adding them. a + b is the machine that does everything b does followed by everything a does. (Remember, that's b then a.) To do this we need to dig into b and replace every occurrence of Done in it with a. That way, instead of finishing like b, it leads directly into a. In the case of a + Input f, for each number i we need to dig into f i replacing each Done with a:

> a + Done = a
> a + Output b = Output (a + b)
> a + Input f = Input (\i -> a + f i)

There's a natural way to multiply these machines too. The idea is that in a * b we run machine b. But each time the Output command is run, instead of printing a star it executes a. You can think of this as a control structure. If n is a natural number then a * n means running machine a n times. In the case of a * Input f, instead of multiplying by a fixed natural number, we get an input from the user and multiply by f i instead:

> _ * Done = Done
> a * Output b = a*b + a
> a * Input f = Input (\i -> a * f i)

We can make a machine to input a number and then output that many stars. Here it is:

> w = Input fromInteger

Try running run1 w.

Can you guess what the machine w * w does? Your first guess might be that it inputs two numbers and outputs as many stars as the product of the two numbers. Try it. What actually happens is that we're computing w * Input fromInteger. Immediately from the definition of * we get Input (\i -> w*i). In other words, the first input gives us an input i, and then w is run i times. So if we initially input i, we are then asked for i more inputs and after each input, the corresponding number of stars is output. Although the original expression contains just two occurrences of w, we are required to enter i+1 numbers.

Given the definitions of + and * it seems natural to define the power operation too:

> (^) :: Machine -> Machine -> Machine
> a ^ Done = Output Done
> a ^ Output b = a^b * a
> a ^ Input f = Input (\i -> a ^ f i)

The power operation corresponds to the nesting of loops. So, for example, w ^ n can be thought of as loops nested n deep.

Try working out what w ^ w does when executed with run1.

Consider the set M of all machines built using just a finite number of applications of the three operators +, * and ^ to w and the non-zero naturals. (The non-zero condition means we exclude machines like 0*w that accept an input and do nothing with it.) Any such expression can be written as f w, where the definition of f makes no mention of w.

Suppose we use run1 and we always enter the same natural n. Then each occurrence of w acts like n. So if we start with some expression in w, say f w, then always inputting n results in f n stars. We could test this with run1 (w^w^w^w), always entering 2, but it would require a lot of typing. Instead we can write another interpreter that consumes its inputs from a list rather than from the user (or demon). And instead of printing stars it simply prints out the total number of stars at the end:

> run2 Done _ = 0
> run2 (Output x) as = 1 + run2 x as
> run2 (Input f) (a:as) = run2 (f a) as

Now you can try run2 (w^w^w^w) [2,2..] and see that we (eventually) get 2^2^2^2.

Termination Again
If we run a machine in M there's a pattern that occurs again and again. We input a number, and then as a result we go into a loop requesting more numbers. These inputs may in turn request more inputs. Like the mythological hydra, every input we give may spawn many more requests for inputs. As the number of inputs required may depend on our previous inputs, and we may input numbers as large as we like, these machines may run for a long time. Suppose our machine terminates after requesting n inputs. Then there must be some highest number that we entered. Call it m. Then if the original machine was f w (with f defined in terms of the 3 operators and non-zero naturals), the machine must have terminated outputting no more than f m stars. So if our machine terminates, we can bound how many steps it took.

But do our machines always terminate? The input we give to the machine might not be bounded. If we run run2 (w^w) [4,5..], say, the inputs grow and grow. If these inputs grow faster than we can chop off the heads of our hydra, we might never reach termination.

Consider a program to input n and then output fib n. It accepts an input, recurses to a depth of at most n, and calls itself at most twice in each recursion. Compare with the machine 2 ^ w. This accepts an input n, recurses to a depth n, calling itself exactly twice each time. So if 2 ^ w terminates, so does fib. The more complex example above where I introduced the evil demon will terminate if w ^ w does, as long as the demon doesn't stop inputting numbers. So if we can show in one proof that every machine of type Machine terminates, then there are many programs whose termination we could easily prove.

Let's consider an example like run1 (w ^ w) with inputs 2, 3, 4, ...

We start with w ^ w. Examining the definition of the operator ^ we see that this proceeds by requesting an input. The first input is 2. Now we're left with w ^ 2. This is w * w. Again it accepts an input. This time 3. Now we go to state w * 3. This is w*2 + w. Again we accept an input. This time 4. We are led to w*2 + 4. This now outputs 4 stars and we are left with w * 2 which is w + w. We accept an input 5, output 5 stars and are left with w. After a further input of 6, it outputs 6 stars and terminates. Or we could just run run2 (w ^ w) [2,3..] and get 15 (= 4+5+6) as output.

The transitions are:

w ^ w
-> w ^ 2
-> w * w
-> w * 3
-> w*2 + w
-> w*2 + 4 -> ... -> w * 2
-> w + w
-> w + 5 -> ... -> w
-> 6 -> ... -> 0.

Now for some Set Theory. Rewrite the above sequence using the transfinite ordinal ω instead of w. The sequence becomes a sequence of ordinals. Any time we accept an input, the rightmost ω becomes a finite ordinal. So we have a descending sequence of ordinals. This is true whatever ordinal we start with. The execution of either Input or Output always strictly decreases our ordinal, and any descending sequence of ordinals must eventually terminate. Therefore every machine in M eventually terminates.

But here's the important fact: to show termination we used the ordinal ω, and this required the axiom of infinity and some Set Theory. Instead we could encode the termination question, via Godel numbering, as a proposition of Peano arithmetic. If we do this, then we hit against an amazing fact. It can't be proved using the axioms of Peano arithmetic. So we have here a useful fact, not a contrived self-referential one, that can't be proved with Peano arithmetic.

Why can't it be proved using just the Peano axioms?
A few years back, Jim Apple made a post about constructing (some) countable ordinals in Haskell. His construction nicely reflects the definitions a set theorist might make, but the code doesn't actually do anything. Later I learned from Hyland and Power how you can interpret algebraic structures as computational effects. apfelmus illustrates nicely how an abstract datatype can be made to do things with the help of an interpreter. Roughly speaking, doing this is what is known as operational semantics. So I thought, why not apply this approach to the algebraic rules for defining and combining ordinals. The results are the interpreters run1 and run2 above.

run1 gives an example of a Hydra game. In fact, it's precisely the hydra game described in this paper because it always chops off the rightmost head. The Kirby-Paris theorem tells us we can't prove this game terminates using just the Peano axioms. A web search on Goodstein's theorem will reveal many great articles with the details.

A well-ordered quantity that you can keep decreasing as a program runs, and that can be used to prove termination, is an example of a loop variant. Loop variants are often natural numbers but the above shows that transfinite ordinals make fine loop variants. But in the interest of being fair and balanced, here's a dissenting view. The author has a point. If you are forced to use transfinite ordinals to show your program terminates, the age of the universe will probably be but the briefest flicker compared to your program's execution. On the other hand, if you don't want an actual bound on the execution time, ordinals can provide very short proofs of termination for useful programs.

> instance Show Machine
> instance Eq Machine
> instance Ord Machine
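For readers without a Haskell toolchain, the Machine type and the run2 interpreter can be transcribed into Python to check the w ^ w trace mechanically. This is a sketch mirroring the definitions above; the class and function names (nat, add, mul, power) are my transliterations, not the post's:

```python
import itertools
from dataclasses import dataclass
from typing import Callable

# Machine = Done | Output Machine | Input (Natural -> Machine)
class Done:
    pass

DONE = Done()

@dataclass
class Output:
    rest: object  # the machine to continue with after emitting one star

@dataclass
class Input:
    f: Callable[[int], object]  # continuation taking the number read

def nat(n):
    # fromInteger: a machine that outputs n stars and stops.
    m = DONE
    for _ in range(n):
        m = Output(m)
    return m

def add(a, b):
    # a + b: run b, then a (every Done in b is replaced by a).
    if isinstance(b, Done):
        return a
    if isinstance(b, Output):
        return Output(add(a, b.rest))
    return Input(lambda i: add(a, b.f(i)))

def mul(a, b):
    # a * b: run a once for every star b would output.
    if isinstance(b, Done):
        return DONE
    if isinstance(b, Output):
        return add(mul(a, b.rest), a)
    return Input(lambda i: mul(a, b.f(i)))

def power(a, b):
    # a ^ b: loops nested b deep.
    if isinstance(b, Done):
        return Output(DONE)
    if isinstance(b, Output):
        return mul(power(a, b.rest), a)
    return Input(lambda i: power(a, b.f(i)))

w = Input(nat)

def run2(m, inputs):
    # Count stars, feeding inputs from an iterator instead of the user.
    stars = 0
    while not isinstance(m, Done):
        if isinstance(m, Output):
            stars += 1
            m = m.rest
        else:
            m = m.f(next(inputs))
    return stars

print(run2(power(w, w), itertools.count(2)))  # the trace above: 4+5+6 = 15
```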
Asked continuously and repeatedly by friends, family and a surprising amount of strangers, I think it's time to address it head on.
It's a fair question. Marco, native Italian and my husband, didn't speak much English when we met and I, native American and English speaker, was just learning Italian. It's a fair question, but one that can't be answered simply with "English." Or "Italian." The true answer requires more of an explanation than that.
So here it is, the unabridged answer to the famous question: What language do you guys talk in?!
When Marco and I first met I was the mute American girl, desperately trying to understand the rapid-fire Italian that he and the friends around us were speaking. On vacation in the mountains I was completely lost for two weeks. I wasn't lost on the mountain paths, but in trying to decipher the beautiful, but way-more-difficult-than-expected language. Marco realized my struggle, and with one helpful translation when I needed it ("both"), I truly noticed him for the first time.
During the months I studied in Italy we met often, even traveling together and passing long dinners stumbling through conversations. During that time I spoke Italian and he spoke English. We were both complete beginners, using the most basic verb tenses and words, but using our second languages slowed our speech down, and in this way we could understand each other better.
Eventually my Italian improved and we began to talk much more in Italian. Through emails and Skype we'd still mix the languages, but for some time the Italian was slightly more dominant. Talks and visits often produced a strange hybrid not unlike the Spanish-English "Spanglish" mix. We'd start a sentence in one language, only to realize halfway through that we didn't know a certain word. Inserting that word in the original language, our sentences made sense only to us – and often scared people at nearby tables.
Only when I moved to Italy last February did our language balance back out. Now, though our daily life is predominantly Italian, we have a nice mix of both languages. Sometimes we don't realize what language we're talking in, or we'll have entire dinners where he talks in Italian and I respond in English.
When I want Marco to understand 100%, I speak in Italian. This often applies for when I'm mad also, though if I really get going I slip back into English. "No Marco perchè ti stavo dicendo TO NOT DO THAT!" Marco does the same, starting off slow and calm in English, but slipping back into Italian when reasoning doesn't work.
We both talk to each other on the phone in just Italian because it works better that way. And we follow the unspoken bilingual rule to speak in the language that the people around you know. This means no English in his parents' home, no Italian when we have American guests.
The truth is, we don't just talk in one language – it would be impossible now that we both know both languages. A different language implies different things, different emotions. I completely agree with the study that a second language brings with it a new personality. We understand each other perfectly, and often switch based on mood, tiredness or a lack of vocabulary.
Our language is a mix, a trade and a compromise, and though it might seem like speaking different languages can make it harder to understand, actually having two languages helps us to understand each other even better.
3 Responses to So Like, What Language Do You Talk To Each Other In Anyway?
Behind Xiaoyu's outstanding exam results (A* in each of Maths, Further Maths, Physics and Chemistry) is a virtually flawless track record in class work. Her natural ability, focus, discipline, and organization make her the best mathematics student in the school. She is courteous, conscientious and unwavering in her commitment. Often she will take the lead in showing others how to solve problems, and she frequently asks questions that go way beyond the syllabus. She well deserves her Cambridge University place to read Mathematics.
Soala is a totally committed and dedicated student. She readily grasps the theoretical framework that provides context for the study of art and architecture. In her artwork she shows strong organisational skills and understands how contextual research helps the development of her ideas. She is very much 'hands-on', open to trying new ways to extend her visual vocabulary, and she enjoys combining materials in a richly tactile way – employing paint, collage, inks and mixed media to produce a cohesive whole. Her ability to integrate study of architecture into her own vision of architectural style and function was well demonstrated in her excellent EPQ project. She gained Art A*, History of Art A*, Mathematics A*, Physics A* at A level, and will flourish reading Architecture at Cambridge.
package berlin.reiche.securitas.util;
import android.app.AlertDialog;
import android.content.Context;
/**
* Class with factory methods for creating a notification dialog.
*
* @author Konrad Reiche
*
*/
public class NotificationDialog {
/**
* Factory method to create a notification dialog based on a string message.
* This alert dialog has only one button.
*
* @param context
* the context where the notification should be displayed.
* @param message
* the message to be displayed.
* @return the notification dialog object.
*/
public static AlertDialog create(Context context, String message) {
AlertDialog.Builder builder = new AlertDialog.Builder(context);
builder.setMessage(message).setCancelable(false)
.setPositiveButton("OK", null);
return builder.create();
}
/**
* Factory method to create a notification dialog based on a message id.
* This alert dialog has only one button.
*
* @param context
* the context where the notification should be displayed.
* @param messageId
* the message to be displayed.
* @return the notification dialog object.
*/
public static AlertDialog create(Context context, int messageId) {
AlertDialog.Builder builder = new AlertDialog.Builder(context);
builder.setMessage(messageId).setCancelable(false)
.setPositiveButton("OK", null);
return builder.create();
}
}
# Ratios

Intros
1. Introduction to Ratios
2. What are equivalent ratios?
3. Ratios, Rates, and Proportions: what are they, and how are they different from each other?

Examples
1. Write the following ratios using ratio notation in lowest terms.
   1. 5 km in 30 minutes.
   2. 4 cases for $500.
   3. 5 cupcakes need 325 g of flour.
   4. 130 mm of precipitation in 2 days.
2. Write the following ratios in fraction form in lowest terms.
   1. 3 hours in 1 week.
   2. 35 g sugar in a 335 ml can of pop.
   3. 4 servers to serve 6 tables with 4 guests each.
   4. 22 sheep and 30 cows. What is the ratio of sheep to the total number of animals?
3. The following table shows the sugar and flour needed for different kinds of bakery.

   Bakery   Sugar (g)   Flour (g)
   Cake     40          120
   Cookie   20          60
   Bread    30          450

   1. Which 2 kinds of bakery have the same sugar-flour ratio? Show your work.
   2. What is the ratio of sugar needed for cookie to the total weight of sugar needed for all 3 kinds of bakery?
4. Ratios With Decimals. Simplify the following ratios.
   1. 1.2 : 4
   2. 1.4 : 1.6
5. Ratios With Fractions. Simplify the following ratios.
   1. 2/5 : 4
   2. 2/3 : 7/8
6. Ratios in Different Units. Simplify the following ratios.
   1. 5 cm : 30 km
   2. 4 hr 15 min : 3 hr 30 min
7. Application of Ratios.
   1. Jack and Jill are to share the candies in the ratio 3:4. If there are 21 candies in total, how many candies does each of them get?
   2. Harry and Sally got some pocket money from their mother and they were to split it in the ratio 3:5. If Sally got $30, how much money in total did their mother give them?

Topic Notes

What are two-term and three-term ratios? After watching this lesson, you will be able to answer this question by writing ratios in the lowest terms as well as in fraction and percent form. Practice your understanding by doing some real-life word questions too.

In this lesson, we will learn:

- Ratios With Decimals
- Ratios With Fractions
- Ratios in Different Units
- Application of Ratios

Notes:
- A ratio should always be in its simplest/most reduced form (no common factors).
- The values in a ratio should always be integers if possible.
- A ratio can be scaled up/down by multiplying/dividing the ratio numbers by the same value. This process is called scaling, and the value used for scaling is called the scale factor.
| null | null |
Like a Prayer is the fourth studio album by American singer and songwriter Madonna.
Track listing
Like a Prayer
Express Yourself
Love Song
Till Death Do Us Part
Promise to Try
Cherish
Dear Jessie
Oh Father
Keep It Together
Pray for Spanish Eyes
Act of Contrition
\section{Introduction}
Operational predictions of severe weather hazards (i.e., tornadoes, hail, and wind) are under the purview of the National Oceanic and Atmospheric Administration (NOAA) Storm Prediction Center (SPC), which is responsible for ``timely and accurate forecasts and watches for severe thunderstorms and tornadoes over the contiguous United States'' \citep{spcaboutus}. The SPC uses forecast guidance from numerical weather prediction (NWP) models, including post-processed products, diagnostic parameters (e.g., storm relative helicity), as well as current observations, to issue outlooks 1--8 days in advance of the threat of severe weather; outlooks issued in the 1 and 2-day timeframe delineate threats for specific hazards whereas day 3--8 outlooks highlight risk areas of any severe hazard. Because of the tremendous societal and financial impacts of severe weather events, including 10 severe-weather attributed billion-dollar disasters in 2021 alone \citep{NCEI2022}, it is imperative that SPC forecasters receive reliable and valuable forecast information to inform their operational products and provide sufficient lead time to stakeholders of the threat of severe weather.
Deterministic and ensemble NWP model predictions of severe weather and associated hazards have improved substantially over the last decade as dynamical models have leveraged increased computing power to decrease grid spacing and increase effective resolution. Increases in model resolution have seemingly benefited short-term forecasts the most, as real-time high-resolution models are now capable of explicitly resolving parent convective storms \citep[e.g.,][]{Done2004,Kain2008}; these prediction systems are commonly referred to as convection-allowing models (CAMs). These advances provided opportunities to explicitly forecast weather hazards (e.g., tornadoes) by using proxies \citep[e.g., updraft helicity;][]{Sobashetal2011,Sobashetal2016,Sobashetal2016b,Hilletal2021} in NWP model output, or generating calibrated guidance products \citep[e.g.,][]{Galloetal2018,Harrison2022,Jahnetal2022} that probabilistically depict severe weather threats like tornadoes and lightning. However, few of these methods have carried over into longer-range forecasting because of their dependence on CAMs, which are limited to the near-term ($<$ 4 days) due to computational constraints and undesirable, rapid increases in forecast spread from small-scale errors \citep{Zhangetal2003,Zhangetal2007}. As a result, SPC forecasters must rely on global prediction systems to issue day 4--8 outlooks (hereafter referred to as the medium range), including both deterministic and ensemble systems that parameterize convective processes and, as a result, have limited value for severe weather forecasting.
Efforts have been made to leverage the large global ensemble datasets to generate post-processed and calibrated forecast products for medium-range forecast events as well. Post-processed fields become increasingly valuable at these extended lead times, offering a simplistic depiction of the threat of severe weather \citep[e.g., calibrated severe thunderstorm probabilities;][]{BrightandGrams2009} that may not be contained in highly-variable deterministic model output fields or ensemble output diagnostics (e.g., ensemble mean, variance). The U.S. National Blend of Models \citep{Hamilletal2017} uses quantile mapping to generate post-processed precipitation forecasts in the medium-range. Additionally, analog forecasting \citep{Lorenz1969} has been used to forecast high-impact weather environments \citep[e.g.,][]{HamillandWhitaker2006}, and typically involves training a regression model (e.g., logistic regression) on past environmental states that were coincident with severe weather reports and learning how similar environments relate to severe weather frequency (e.g., CIPS Analog-Based Severe Guidance\footnote{Cooperative Institute for Precipitating Systems (CIPS) Analog-Based Probability of Severe Guidance available at https://www.eas.slu.edu/CIPS/SVRprob/SPG\_Guidance\_whitepaper.pdf}). Using these developed statistical relationships, the current atmospheric patterns are interrogated to determine comparable analogs on which to base a new forecast. To the authors' knowledge, no analog techniques have been published addressing severe weather forecasting in the medium range, and a need still exists for techniques and tools to generate skillful medium-range severe weather forecast guidance.
More recently, advanced machine learning (ML) models have emerged in the meteorology domain as a complementary and alternative option to forecast severe weather hazards. Whereas dynamical models cannot explicitly forecast hazards below their effective resolution, ML models can be trained to forecast any hazard given a sufficiently accurate observational dataset (i.e., labels) and related environmental features (i.e., predictors). ML has become a widely used technique to post-process NWP model output, for example, to generate forecasts mimicking operational center products \citep[e.g.,][]{herman2018money,Lokenetal2019,Hilletal2020,Lokenetal2020,Sobashetal2020,HillandSchumacher2021,Schumacheretal2021} and forecast severe weather or hazards more generally \citep[e.g.,][]{Gagne2014machine,gagne2017storm,Jergensenetal2020,McGovernetal2017,Burkeetal2020,Flora2021,Lokenetal2022}. Others have developed ML prediction systems using observations or reanalysis datasets \citep[e.g.,][]{Gensinietal2021,ShieldandHouston2022}, demonstrating ML-based prediction systems capable of highlighting conducive severe weather environments, for example. \citet{Hilletal2020} and \citet{Lokenetal2020} employed random forests \citep[RFs;][]{breiman2001random} to generate forecasts analogous to SPC outlooks, effectively creating post-processed and probabilistic first-guess forecasts of severe weather that could be used by forecasters when generating their human-based outlooks. \citet{Lokenetal2020} used CAM output to train an RF and derive day-1 hazard forecast probabilities whereas \citet{Hilletal2020} used global ensemble output to generate day 1--3 forecasts.
Both studies demonstrated that RFs could produce skillful and reliable forecasts of severe weather at short lead times, and \citet{Hilletal2020} further highlighted that incorporating the statistical product information into the human-generated SPC forecast (i.e., through a statistical weighting procedure) yielded a better forecast at day 1 than either individual component forecast; the statistical models outperformed SPC forecasters at days 2 and 3. Therefore, it is reasonable to hypothesize that a similar prediction system devoted to forecasting severe hazards beyond day 3 would benefit SPC forecasters issuing medium-range forecasts.
Building upon the work of \citet{Hilletal2020}, this study trains and develops RFs to forecast severe thunderstorm hazards in the medium range (i.e., days 4--8). The RF infrastructure of the Colorado State University Machine Learning Probabilities (CSU-MLP) prediction system \citep[e.g.,][]{Hilletal2020,Schumacheretal2021} is used herein to explore medium-range severe weather predictions and determine their utility in relation to operational forecasts. Feature engineering \citep[e.g.,][]{Lokenetal2022} is also explored (i.e., how to organize predictors from the dynamical model output used to train the RF models) to determine if medium-range ML-based forecast skill is impacted by the way predictors are gathered and trained with observations. All RF-based forecasts are evaluated alongside corresponding SPC outlooks to illustrate the relative value of incorporating the statistical guidance into forecast operations.
\section{Methods}
\subsection{Random Forests}
RFs combine the simplicity of the decision tree architecture with the robustness of ensemble tree methods. Individual decision trees are constructed beginning with the root node (i.e., top of the tree) where a subset of training examples (i.e., instances of severe weather or no severe weather observation) is extracted from the full training set via bootstrapping and a feature is selected that best splits the examples, i.e., the feature best describes separation of severe weather events from non-events. Successive nodes are similarly constructed along the branches of the tree until a maximum tree depth is reached or a minimum number of training samples needed to split a node is breached, ending that particular branch in a terminal or ``leaf'' node. The leaf node either contains all the same observation types (e.g., all examples are severe weather events) or a mixture. As new inputs are supplied to a decision tree, e.g., from realtime forecast output, the tree can be traversed to a leaf node using the inputs, producing a deterministic or probabilistic prediction of severe weather from the single tree. The aggregation of all decision tree predictions from a forest of decision trees provides a probabilistic prediction for the threat of severe weather.
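The tree-aggregation step described above can be sketched with scikit-learn's `RandomForestClassifier`; this is an illustrative example on synthetic data (the feature values, labels, and forest size are placeholders, not the study's configuration):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Synthetic training set: 500 examples with 10 environmental features and
# labels 0 (no severe), 1 (severe), 2 (significant severe).
X_train = rng.normal(size=(500, 10))
y_train = rng.integers(0, 3, size=500)

# Each tree in the forest is grown on a bootstrap sample and votes on new
# inputs; the aggregated vote fractions form the probabilistic forecast.
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X_train, y_train)

# Probabilistic prediction for one new forecast input: one probability per
# class, summing to 1 across the three classes.
X_new = rng.normal(size=(1, 10))
probs = rf.predict_proba(X_new)[0]
```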
RF-based forecasts in this context are analogous to the SPC outlooks at days 4--8, i.e., the probability of severe weather occurrence within 40 km (or 25 mi) of a point over the daily 24-hour period defined by 1200--1200 UTC, and forecast products are constructed to mimic those produced by SPC (e.g., Fig. \ref{EXAMPLE}). For RF training, severe weather observations are encoded onto a grid by interpolating severe storm reports from the SPC Severe Weather Database \citep{spc2022} to NCEP grid 4 (0.5 degree grid spacing); the same grid is used to define RF features, further discussed in subsection 2.a.1. Any severe weather report during the 24-hour forecast window is interpolated to the 0.5 degree grid using a 40 km neighborhood, resulting in at least one grid point encoded for each severe weather report; the 0.5 degree grid has approximately 55 km grid spacing. Severe weather reports are defined as tornadoes of any EF rating, hail exceeding 1 in. (2.54 cm) in diameter, and convective wind speeds exceeding 58 mph (93 km/h). Training examples are encoded as 0 (no severe), 1 (non-significant severe), or 2 (significant severe) across the CONUS; the significant severe designation is a specific class of tornadoes eclipsing F2 or EF2 strength, hail exceeding 2 in. (5.08 cm) in diameter, and convective wind gusts exceeding 74 mph (119 km/h). Thus, the RFs are tasked with a three-class prediction. However, because the current version of SPC day 4--8 outlooks does not include significant severe probability contours, CSU-MLP RF significant severe forecasts will not be formally evaluated in this work -- readers are referred to the work of \citet{Hilletal2020} who go in-depth on significant severe forecast skill for day-1 forecasts.
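The report-encoding step can be sketched as a minimal pure-NumPy example, assuming great-circle (haversine) distances; the function names and grid extent are illustrative, not the study's code:

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km; inputs in degrees, lat2/lon2 may be arrays."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dphi = np.radians(lat2 - lat1)
    dlam = np.radians(lon2 - lon1)
    a = np.sin(dphi / 2.0) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlam / 2.0) ** 2
    return 2.0 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

def encode_reports(report_lats, report_lons, grid_lats, grid_lons, radius_km=40.0):
    """Label every grid point within radius_km of any severe weather report."""
    glat, glon = np.meshgrid(grid_lats, grid_lons, indexing="ij")
    labels = np.zeros(glat.shape, dtype=int)
    for rlat, rlon in zip(report_lats, report_lons):
        labels[haversine_km(rlat, rlon, glat, glon) <= radius_km] = 1
    return labels

# One report at 40N, 95W on an illustrative 0.5-degree grid; with ~55 km
# grid spacing, a 40 km neighborhood encodes at least the nearest point.
grid_lats = np.arange(35.0, 45.01, 0.5)
grid_lons = np.arange(-100.0, -89.99, 0.5)
labels = encode_reports([40.0], [-95.0], grid_lats, grid_lons)
```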
\begin{figure*}[t]\centering
\noindent\includegraphics[width=39pc,angle=0]{example.pdf}\\
\caption{(a) CSU-MLP day 4 forecast initialized 27 March 2022, valid for period ending 1200 UTC 31 March 2022, and corresponding (b) day 4 SPC outlook. Forecast probabilities of any severe hazard are shaded, and the circle icons are SPC local storm reports of tornadoes (red), hail (green), and wind (blue). Forecast BSS and report coverage are provided in the lower right and left corners of each panel, respectively.}\label{EXAMPLE}
\end{figure*}
\subsubsection{Predictors}\label{subs1}
Meteorological features surrounding the encoded observations used for training are obtained from the Global Ensemble Forecast System version 12 (GEFSv12) Reforecast dataset \citep[hereafter GEFS/R;][]{Hamilletal2022,Zhouetal2022}, a 5-member daily 0000-UTC initialized ensemble system that utilizes the finite volume cubed sphere (FV3) dynamical core. Reforecasts date back to 1 January 2000 and extend forward to the end of 2019. Variables with known or presumed relationships to severe weather are extracted from the GEFS/R, including convective available potential energy (CAPE), precipitable water (PWAT), and bulk vertical wind shear (e.g., SHEAR500); a full list of variables is provided in Table \ref{t1}. Due to inconsistent resolution in the GEFS/R dataset, e.g., near-surface variables have higher resolution (0.25 degree grid) than upper-tropospheric variables (0.5 degree grid), each relevant meteorological output field is interpolated to the 0.5 degree grid, which also aligns the simulated meteorological environments with the encoded observations. Additionally, latitude, longitude, and Julian day are used as static features, coincident with the encoded severe weather report location.
\begin{table*}[t]
\caption{Meteorological features used for RF training and forecasts}
\begin{center}
\begin{tabular}{ccc}
\hline\hline
Symbol & Variable Description & Variable Type\\
\hline
APCP & 3-hourly accumulated precipitation & thermodynamic \\
CAPE & Convective available potential energy & thermodynamic \\
CIN & Convective inhibition & thermodynamic \\
PWAT & Precipitable water & thermodynamic \\
T2M & 2-m temperature & thermodynamic \\
Q2M & 2-m specific humidity & thermodynamic \\
U10 & 10-m latitudinal horizontal wind speed & kinematic\\
V10 & 10-m longitudinal horizontal wind speed & kinematic\\
MSLP & Mean sea level pressure & kinematic\\
UV10 & 10-m wind speed & kinematic\\
SHEAR500 & Surface to 500 hPa bulk vertical wind difference & kinematic \\
SHEAR850 & Surface to 850 hPa bulk vertical wind difference & kinematic\\
\label{t1}
\end{tabular}
\end{center}
\end{table*}
Meteorological features are assembled in a forecast-point relative framework in which raw variables are gathered both at the observation training example point and over a pre-defined radius around the point. The pre-defined radius is set to 3 grid points for all models trained in this manner. Additionally, the GEFS/R has 3-hourly temporal resolution across the 24 h forecast windows, allowing for both spatial and temporal sampling of the environment surrounding a severe weather report that occurred within the window; a depiction of the assembly method is provided in Fig. \ref{ASSEMBLY}. The raw variables accumulated in space and time represent the ensemble median for that given point; previous work demonstrated the superiority of using the ensemble median as opposed to the mean or outliers of the ensemble distribution \citep{herman2018money}. In other words, a 2-dimensional time series is constructed for each meteorological variable at every grid point across the forecast window.
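The forecast-point relative assembly can be sketched as follows, assuming the ensemble-median fields are already on the common 0.5-degree grid; array shapes and the function name are illustrative:

```python
import numpy as np

def assemble_point_features(fields, j, i, r=3):
    """fields: (n_vars, n_times, ny, nx) ensemble-median forecast fields.
    Returns the flattened forecast-point relative feature vector for grid
    point (j, i): every variable at every 3-hourly time over the
    (2r+1) x (2r+1) box centered on the point."""
    patch = fields[:, :, j - r:j + r + 1, i - r:i + r + 1]
    return patch.reshape(-1)

# 12 variables x 9 times on an illustrative 20 x 30 grid.
fields = np.random.default_rng(0).normal(size=(12, 9, 20, 30))
vec = assemble_point_features(fields, j=10, i=15, r=3)  # 12*9*49 = 5292 values
```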
\begin{figure*}[t]\centering
\noindent\includegraphics[width=39pc,angle=0]{feature_assembly.pdf}\\
\caption{Schematic of feature and label assembly discussed in the text.}\label{ASSEMBLY}
\end{figure*}
Two alternative feature assembly methods are also applied. Previous research employing the CSU-MLP prediction system has nearly universally applied the forecast-point relative framework to assemble features \citep[e.g.,][]{herman2018money,HermanandSchumacher2018b,Hilletal2020,Schumacheretal2021}. However, \citet{HillandSchumacher2021} showed that spatially averaging the features (e.g., 1-D time series) yielded improved RF-based excessive rainfall forecasts, and they hypothesized that characterizing the environment at a particular time with a single mean value reduced noise during model training. Similarly, \citet{Lokenetal2022} demonstrated that improved RF forecast skill could be achieved by using the ensemble mean at each spatial point rather than all individual members at each point because it reduced training noise. While \citet{HillandSchumacher2021} and \citet{Lokenetal2022} both used CAM inputs to train RFs, which may have inherently more noise than a global model, the same spatial averaging procedure of \citet{HillandSchumacher2021} is employed here (e.g., Fig. \ref{ASSEMBLY_EXPERIMENT}) to explore if the medium-range RF predictions generated here suffer from noisy inputs and whether skill could be improved by removing that noise.
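The spatially averaged alternative collapses each variable's neighborhood to one mean value per time, yielding a 1-D time series per variable; a minimal sketch with illustrative array shapes (12 variables, 9 times):

```python
import numpy as np

def assemble_spatial_mean_features(fields, j, i, r=3):
    """Average each variable's (2r+1) x (2r+1) neighborhood at every time,
    yielding one 1-D time series per variable instead of the full patch."""
    patch = fields[:, :, j - r:j + r + 1, i - r:i + r + 1]
    return patch.mean(axis=(2, 3)).reshape(-1)

fields = np.random.default_rng(1).normal(size=(12, 9, 20, 30))
vec = assemble_spatial_mean_features(fields, j=10, i=15)  # 12*9 = 108 values
```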
\begin{figure*}[t]\centering
\noindent\includegraphics[width=39pc,angle=0]{feature_assembly_experiment.pdf}\\
\caption{Schematic of feature assembly for the p1 model discussed in the text.}\label{ASSEMBLY_EXPERIMENT}
\end{figure*}
Another unique opportunity presents itself when spatially-averaging predictors: the total number of features is significantly reduced, decreasing the computational time needed to train the RFs. For example, the original CSU-MLP feature assembly method (forecast-point relative method described above) contains $N=mp(2r + 1)^2$ features, where m is the number of meteorological variables ($m=12$), p is the number of forecast hours in the window ($p=9$), and r is the number of grid points used surrounding each training example ($r=3$); N is 5292 for the traditional CSU-MLP methods and 108 for the spatial-averaging procedure. The reduced number of features motivates an additional exploration into how preceding environments (i.e., the day(s) prior leading up to an event) may contribute to RF predictions and subsequent skill. For instance, it is well understood that moisture return from the Gulf of Mexico into the Great Plains can often precede a severe weather event, providing deeper boundary layer moisture and ample CAPE to support deep convection and severe weather \citep{JohnsandDoswell1992} -- can the RFs learn to better predict severe weather by considering how the atmosphere becomes `primed' before cyclogenesis in the lee of the Rockies? Furthermore, ensemble spread increases as a function of lead time, so considering GEFS/R ensemble median meteorological predictors on prior days for a day 6 forecast, for example, may yield additional confidence in a forecast outcome than considering only predictors spanning the day 6 forecast window. Therefore, three experiments are conducted that use the preceding 1, 2, and 3 days of features -- hereafter referred to as p1, p2, and p3, respectively. For simplicity, only the surrounding environment near the training example is sampled, rather than employing a trajectory analysis to sample the upstream environment; such a method could be explored in future work. 
Using four total days of features (i.e., 3 preceding days and 1 valid over the forecast window) only requires 396 features, an order of magnitude less than the traditional CSU-MLP method. For brevity, and since this method evaluation is exploratory in nature, the spatial averaging methodology, using 0--3 preceding days (i.e., p0, p1, p2, and p3), is only applied for day-6 model training and corresponding forecast evaluation.
Time-lagging methods have also been shown to be effective at creating forecast spread, yielding improved forecasts over their deterministic components, and are competitive with multi-model and initial-condition perturbation ensembles at convective scales \citep[e.g.,][]{Jiraketal2018,WadeandJirak2022}. One specific benefit of time-lagging is that no additional computation is required since the forecasts are already created and can be combined relatively easily to compute ensemble statistics. In this work, time-lagging is used to artificially increase the GEFS/R ensemble size, ideally yielding a more representative ensemble median when assembling features. Both 10- and 15-member time-lagged ensembles are experimented with, which use the previous and 2 previous reforecast initializations, respectively, with features from each initialization valid over the same forecast window. As a result, the 10- and 15-member time-lagged RF models use 1 and 2 fewer days for training, discussed in the next subsection.
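The time-lagged ensemble construction can be sketched as a minimal illustration with random placeholder fields; the function name is hypothetical:

```python
import numpy as np

def time_lagged_median(init_today, lagged_inits):
    """Stack today's members with members from earlier initializations valid
    over the same forecast window, then take the ensemble median used when
    assembling features. Each array has shape (n_members, ...)."""
    members = np.concatenate([init_today] + list(lagged_inits), axis=0)
    return members.shape[0], np.median(members, axis=0)

rng = np.random.default_rng(2)
today = rng.normal(size=(5, 4, 4))   # 5 GEFS/R members on a toy 4x4 grid
lag1 = rng.normal(size=(5, 4, 4))    # previous initialization
lag2 = rng.normal(size=(5, 4, 4))    # two initializations prior
n10, med10 = time_lagged_median(today, [lag1])        # 10-member ensemble
n15, med15 = time_lagged_median(today, [lag1, lag2])  # 15-member ensemble
```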
\subsubsection{Training, Validation, and Testing}
While the GEFS/R has 20 years of daily forecasts, only $\sim$9 years are used for training the medium-range forecast models. This decision was made largely to facilitate a comparison between the day 1--3 models developed by \citet{Hilletal2020} and companion models trained with the GEFS/R, which are not explored in this work. Daily initializations of the GEFS/R from 12 April 2003--11 April 2012 are used to assemble predictors, and severe weather reports are aggregated and encoded for this same period. It should be noted that since the first initialization is 12 April 2003, the first training examples are valid for 1200--1200 UTC 15--16 April 2003. 4-fold cross validation is employed over the 9-year period to select an optimal model as well as avoid overfitting the RFs. Testing of the trained RFs is conducted 2 October 2020--1 April 2022; the operational GEFSv12 was implemented in early October 2020.
RFs are trained for distinct regions of the country that represent somewhat unique regional climatologies of severe weather (Fig. \ref{REGION}). For each region, an RF is cross validated and the optimal model is selected based on minimizing the Brier Score across the four folds. After a fold is chosen, hyperparameters are varied to tune the most skillful RF; skill of the trained RFs is more sensitive to the cross validation fold than to the hyperparameters varied. Hyperparameters varied included the number of trees in the forest and the minimum number of samples needed to split a node. Entropy was set as the splitting criterion and a random selection of the features was evaluated at each node, equal to the square root of the total number of features. RFs trained with the alternative predictor assembly methods undergo the same cross-validation and hyperparameter tuning procedure.
\begin{figure*}[t]\centering
\noindent\includegraphics[width=19pc,angle=0]{region.pdf}\\
\caption{Region delineation for training the random forests. The west region is bounded by 20°–49°N, 240°–255°E; the central region by 25°–36.5°N, 255°–265.4°E and 36.5°–49°N, 255°–279.5°E; and the east region by 25°–29°N, 277°–280.2°E; 29°–36.5°N, 265.4°–285°E; and 36.5°–49°N, 279.5°–294°E.}\label{REGION}
\end{figure*}
Forecasts across the testing set are made using the GEFSv12 operational prediction system that consists of 21 ensemble members and 0.5 degree resolution. Forecasts are made with each regional, optimized RF and the severe weather probabilities are stitched together with a smoothing function to limit discontinuities at regional borders and create a CONUS-wide forecast. As with training, only the 21-member ensemble median is used to generate real-time features, which feed into all RF versions to generate predictions. It should be noted that the GEFS/R dataset could have been used for testing between 2012 and 2019, but realtime forecasts are generated with the operational GEFSv12 and the authors felt the most appropriate evaluation should use products that SPC forecasters would be using in the future.
\subsection{Verification}
Traditional methods used to quantify probabilistic prediction skill -- e.g., Brier Skill Score (BSS), area under the receiver operating characteristic curve (AUROC), and reliability diagrams -- are employed to evaluate the CSU-MLP and SPC forecasts\footnote{SPC forecasters began looking at CSU-MLP forecasts in realtime beginning early 2022. As a result, the independence of RF forecasts and human-based outlooks cannot be guaranteed. Due to an already limited verification period, forecast dependence is ignored in the verification statistics.}. Observations of severe weather are obtained from the SPC archive of National Weather Service local storm reports since the SPC severe weather database was not updated through 2021 at the time the analysis was conducted. For a more direct comparison between continuous RF-generated forecasts and discrete SPC probabilities, the RF probabilities are discretized to resemble the outlooks issued by SPC forecasters. Discretization converts all RF probabilities within a probabilistic bin to the midpoint of the bin \citep[e.g.,][]{Schumacheretal2021} -- i.e., all probabilities below the 15\% minimum SPC probability contour are set to 7.5\%, probabilities between 15\% and 30\% to 22.5\%, and probabilities above 30\% are set to 65\%. The discretization procedure is also applied to the SPC contours. Additional discretized contours (e.g., a 5\% contour from the RF forecasts) are introduced in the verification section to elucidate factors influencing RF forecast skill. While the continuous RF probabilities could be evaluated alongside interpolated SPC contours \citep[e.g.,][]{Hilletal2020}, which have been shown to be more skillful than discrete contours \citep{Hermanetal2018c}, the limited number of possible SPC contours at days 4--8 reduces the utility of interpolation; the discrete 15\% and 30\% SPC contours are retained for verification.
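The discretization can be sketched as follows (a minimal example; the treatment of values exactly at a bin edge is an assumption):

```python
import numpy as np

def discretize_probs(p):
    """Map continuous RF probabilities (percent) onto the discrete bins in
    the text: below 15% -> 7.5%, 15-30% -> 22.5%, above 30% -> 65%."""
    p = np.asarray(p, dtype=float)
    out = np.full(p.shape, 7.5)
    out[p >= 15.0] = 22.5
    out[p >= 30.0] = 65.0
    return out

binned = discretize_probs([5.0, 20.0, 45.0])
```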
All medium-range SPC outlooks evaluated herein were issued at $\sim$0900 UTC each day; the shapefiles are converted to a gridded domain with ArcGIS as in \citet{Hermanetal2018c} and then upscaled to NCEP Grid 4 for comparison to the RF-based forecasts.
The Brier Score (BS) is a measure of the mean squared error between binary events and probabilistic forecasts. The BS can be converted to a skill score, the BSS, by comparing the BS of a forecast to a reference climatology BS. BSS ranges from -$\infty$ to 1, where a score of 0 indicates forecast skill indistinguishable from the reference climatology ($BS_{ref}$) and positive scores indicate skill exceeding it:
\begin{equation}
BSS = 1 - \frac{BS_{fcst}}{BS_{ref}}.
\end{equation}
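The BSS computation above can be sketched directly (probabilities expressed in [0, 1]; the arrays are illustrative):

```python
import numpy as np

def brier_skill_score(p_fcst, p_ref, obs):
    """BSS = 1 - BS_fcst / BS_ref; probabilities in [0, 1], obs binary."""
    bs_fcst = np.mean((p_fcst - obs) ** 2)
    bs_ref = np.mean((p_ref - obs) ** 2)
    return 1.0 - bs_fcst / bs_ref

obs = np.array([1.0, 0.0, 1.0, 0.0, 0.0])
climo = np.full(5, 0.4)                             # stand-in reference climatology
perfect = brier_skill_score(obs, climo, obs)        # perfect forecast -> BSS = 1
zero_skill = brier_skill_score(climo, climo, obs)   # matches reference -> BSS = 0
```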
The reference climatology used is a spatially and temporally smoothed long-term climatology of any severe weather reports across the CONUS. SPC severe weather database reports from 1990 to 2019 are gridded and Gaussian smoothers with 15-day temporal and 120-km spatial filters are used to create daily climatologies (e.g., Fig. \ref{CLIMO_FREQ}a) for the 30-year period \citep[e.g.,][]{Brooksetal2003,SobashandKain2017,Schumacheretal2021}. BSSs are computed in aggregate (i.e., considering all forecast points for all days) and spatially (i.e., considering all forecasts for a particular point in space) to characterize SPC and RF forecast skill.
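The smoothed-climatology construction can be sketched with a separable Gaussian filter; this pure-NumPy example uses an arbitrary sigma in grid-point units (the study quotes 15-day and 120-km filter scales, and the conversion of those scales to a sigma is not specified here):

```python
import numpy as np

def gaussian_kernel(sigma, truncate=4.0):
    """Normalized 1-D Gaussian weights out to truncate*sigma grid points."""
    r = int(truncate * sigma + 0.5)
    x = np.arange(-r, r + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def smooth2d(field, sigma):
    """Separable 2-D Gaussian smoothing (sigma in grid points)."""
    k = gaussian_kernel(sigma)
    out = np.apply_along_axis(lambda col: np.convolve(col, k, mode="same"), 0, field)
    return np.apply_along_axis(lambda row: np.convolve(row, k, mode="same"), 1, out)

# One gridded report smeared into a smooth daily climatological frequency field.
counts = np.zeros((50, 50))
counts[25, 25] = 1.0
climo = smooth2d(counts, sigma=2.0)
```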
AUROC \citep[e.g.,][]{Marzban2004} measures forecast resolution and, in this work, the prediction system's ability to discriminate severe weather from non-severe weather at various probabilistic thresholds. AUROC values of 0.5 suggest no discrimination, 0.7--0.8 is considered good, 0.8--0.9 great, and values $>$ 0.9 exceptional \citep{MANDREKAR2010}. Reliability diagrams are also used to characterize the relative forecast probability against the observed frequency of events, highlighting forecast calibration at the various continuous probability contours and the discrete 15\% and 30\% SPC contours. Finally, the percent of forecast area covered by observations is computed to evaluate consistent biases in contour size. If a 15\% contour frequently contains more than 30\% fractional coverage of observations (i.e., the next contour interval), that issued contour is considered too small. Alternatively, if the fractional coverage is less than 15\%, the contour is too large. Fractional coverage has been used extensively to evaluate probabilistic ML-based forecasts against human-generated outlooks \citep[e.g.,][]{Ericksonetal2019,Hilletal2020,Ericksonetal2021,Hilletal2021,Schumacheretal2021}.
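The fractional-coverage check can be sketched with boolean masks on the verification grid (a minimal example; the names are illustrative):

```python
import numpy as np

def fractional_coverage(fcst_mask, obs_mask):
    """Percent of the forecast-contour area containing observed severe
    weather grid points; NaN when no contour was issued."""
    area = fcst_mask.sum()
    if area == 0:
        return float("nan")
    return 100.0 * (fcst_mask & obs_mask).sum() / area

fcst = np.zeros((4, 4), dtype=bool)
fcst[:2, :2] = True                   # a 4-point forecast contour
obs = np.zeros((4, 4), dtype=bool)
obs[0, 0] = True                      # one observed severe weather point
cov = fractional_coverage(fcst, obs)  # 1 of 4 contour points covered -> 25.0
```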
Finally, the statistical relationships identified by the RFs are inspected to glean additional insights about the features that the models rely on to make predictions and how those relationships align with our physical understanding of severe weather forecasting. Feature importances (FIs) are computed and evaluated to assess the use of predictor information in each tree of the forest. Specifically, the ``Gini importance'' metric \citep[e.g.,][]{pedregosa2011scikit, HermanandSchumacher2018b, Whan2018, Hilletal2020} is used to quantify the FIs. Each feature is assigned an importance value based on the number of times it is used to split a decision tree node, weighted by the number of training examples at the split \citep{Friedman2001greedy}. The importances are then summed over all splits and trees in the forest and can be aggregated to characterize temporal or spatial importance patterns \citep[e.g.,][]{Hilletal2020}. The higher the importance, the more value the RFs place on that predictor to make predictions. While other FI techniques exist \citep[e.g.,][]{McGovern2019}, the Gini importance metric is used here as an initial glance and not a holistic interrogation of FIs of the developed RFs; a follow-on manuscript is being prepared to fully interrogate model FIs with sufficient breadth and depth.
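In scikit-learn, impurity-based (Gini) importances of this kind are exposed directly on a fitted forest; a minimal sketch on synthetic data where the first feature is constructed to be informative:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
X = rng.normal(size=(400, 5))
y = (X[:, 0] > 0).astype(int)   # only feature 0 carries signal

rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Normalized impurity-based ("Gini") importances, one per feature; the
# informative feature should receive the largest value.
fi = rf.feature_importances_
```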
\section{Verification Period Overview}
\subsection{Frequency of Severe Weather}
Tornado event frequency highlights an active 1.5 years across the southeast U.S. (Fig. \ref{HAZ_FREQ}a), likely attributable to two fall-winter seasons in the dataset; the climatological frequency of tornado activity across the southeast US has two peaks, one in the early fall and the other in late winter/early spring \citep[e.g.,][]{Horgan2007,Guyer2010,Dixon2011,GensiniandAshley2011,Cintineo2012hail,smith2012torclimo,Gensinietal2020}. With only one full spring season of severe weather in 2021, limited tornadoes were reported across the Great Plains (e.g., Fig. \ref{HAZ_FREQ}b). Unsurprisingly, hail reports are primarily confined to the Great Plains, and wind reports are distributed more uniformly across the U.S. east of the Rocky Mountains (Fig. \ref{HAZ_FREQ}c) compared to the other two hazards. Across the mid-Atlantic up into New England, multiple high-impact weather events produced numerous wind reports (Fig. \ref{HAZ_FREQ}c,d). These reports were anomalous compared to the long-term severe weather climatology (Fig. \ref{CLIMO_FREQ}b).
\begin{figure*}[t]\centering
\noindent\includegraphics[width=39pc,angle=0]{climo.pdf}\\
\caption{(a) Daily climatological probability (\%) of any severe hazard centered on 21 May and (b) mean climatological probability (\%) of any severe weather hazard.}\label{CLIMO_FREQ}
\end{figure*}
\begin{figure*}[t]\centering
\noindent\includegraphics[width=39pc,angle=0]{hazard_frequency.pdf}\\
\caption{Frequency of (a) tornado, (b) hail, (c) wind, and (d) all severe hazard reports across the verification period.}\label{HAZ_FREQ}
\end{figure*}
\subsection{Frequency of Forecasts over the Extended Period (Days 4--8)}
Day 4--7 15\% forecasts from the CSU-MLP system and outlooks from SPC were issued across areas that experienced frequent severe weather, including the southeast U.S. and to some extent the southern Great Plains out to day 7 (Fig. \ref{FCST_FREQ}); 30\% forecast contours are qualitatively similar, but omitted for brevity. The RF-based forecasts were issued much more frequently, however, compared to SPC. At day 7, where SPC issued only a handful of 15\% contours across the CONUS (Fig. \ref{FCST_FREQ}g), the RF issued forecasts in some point locations as many as 10 times (2\% of the days; Fig. \ref{FCST_FREQ}h). Moreover, the RFs forecast areas of severe weather across larger areas of the CONUS, covering nearly all states east of the Rockies at day 5 (Fig. \ref{FCST_FREQ}d) when SPC limited their day 5 outlooks primarily south of 37$^{\circ}$N (i.e., the northern border of Oklahoma).
Despite the relatively short verification period, and only one spring forecast season in the verification dataset, there is clearly seasonality to the issuance of SPC medium-range outlooks and CSU-MLP forecasts. SPC issued more 15\% contours in the spring and fall seasons than in summer and winter (Fig. \ref{MON_FREQ}a). This pattern is not necessarily surprising given the climatological ``double peak'' of severe weather across the CONUS in the spring and fall \citep[e.g.,][]{smith2012torclimo}. In contrast, the CSU-MLP had a nearly uniform distribution of 15\% contours across the months for days 4--6 (Fig. \ref{MON_FREQ}b), with perhaps a slight peak in frequency across the summer months -- 15\% contours at days 7 and 8 were more common between January and August. SPC issued only a handful of 30\% contours in the months of March and October (Fig. \ref{MON_FREQ}c), whereas the CSU-MLP issued a number of higher probability contours through the spring months, primarily at days 4 and 5 (Fig. \ref{MON_FREQ}d), highlighting the RF-based system's confidence in forecasting both predictable and less predictable (i.e., warm season) severe weather regimes.
\begin{figure*}[t]\centering
\noindent\includegraphics[height=45pc,angle=0]{forecast_fracs_15.pdf}\\
\caption{Fraction of day 4--7 (left) SPC and (right) CSU-MLP forecasts (\% of verification days) at or above the 15\% probability level. }\label{FCST_FREQ}
\end{figure*}
\begin{figure*}[t]\centering
\noindent\includegraphics[width=39pc,angle=0]{monthly_frequencies.pdf}\\
\caption{Frequency of day 4--8 (a),(c) SPC and (b),(d) CSU-MLP forecasts by month for (top) 15\% and (bottom) 30\% probabilistic contours. Forecast days are color coded per the legend and frequencies are stacked.}\label{MON_FREQ}
\end{figure*}
\section{Forecast Verification}
RF and SPC forecast skill is first evaluated in aggregate across the entire verification period. Control RF (i.e., CSU-MLP) forecast skill decreases with increasing lead time, demonstrating significantly better skill than SPC outlooks at days 4 and 5 and comparable, near-climatological skill beyond day 5 (Fig. \ref{SKILL}a). As lead time increases, the RFs are confronted with learning how GEFS/R environments (specifically, the median environments) associate with severe weather as forecast variability and ensemble variance increase. The limited number of GEFS/R ensemble members likely prohibits a proper depiction of all future atmospheric states and forces the RFs to ``learn'' relationships between severe weather events and simulated environments that may not be conducive to severe weather. As a result, the ability of the RFs to discriminate events from non-events similarly decreases with increasing lead time, with AUROC falling to $\sim$0.5 by day 8 (Fig. \ref{SKILL}b); the AUROC is highest at day 4 at 0.62. However, none of the AUROC values eclipse the 0.7 mark denoting good resolution \citep{MANDREKAR2010}. On the other hand, when 5\% probability contours are included in the RF-forecast discretization process and retained in the skill calculations, the BSS increases substantially to $>$0.07 at day 4 and remains significantly larger than 0 (i.e., climatology) at day 8. Meanwhile, the AUROC surpasses 0.8 at day 4 and resolution is nearly `good' at day 8. While the higher-probability 15 and 30\% contours do not have significant skill beyond day 5 (or good resolution at any forecast lead time), the adjusted skill metrics resulting from including 5\% probability contours demonstrate that low-probability forecast contours may have tremendous value for forecasting severe weather out to day 8.
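The aggregate BSS, AUROC, and bootstrapped confidence intervals used in this evaluation can be sketched as follows; the synthetic forecast--observation pairs and 15\% base rate below are assumptions for illustration, not the verification data:

```python
import numpy as np
from sklearn.metrics import brier_score_loss, roc_auc_score

# Synthetic probabilistic forecasts and binary observations (~15% base rate).
rng = np.random.default_rng(0)
obs = (rng.random(1000) < 0.15).astype(int)
probs = np.clip(0.15 + 0.5 * (obs - 0.15) + rng.normal(0.0, 0.1, 1000), 0, 1)

# Brier Skill Score relative to the climatological base rate: BSS > 0 means
# the forecasts beat a constant-climatology forecast.
climo = obs.mean()
bs = brier_score_loss(obs, probs)
bs_ref = brier_score_loss(obs, np.full_like(probs, climo))
bss = 1.0 - bs / bs_ref

# Area under the ROC curve: discrimination of events from non-events
# (0.5 = no resolution; >0.7 is often labeled "good").
auroc = roc_auc_score(obs, probs)

# Bootstrap resampling for a 95% confidence interval on the BSS.
idx = rng.integers(0, len(obs), size=(100, len(obs)))
boot_bss = np.array([
    1.0 - brier_score_loss(obs[i], probs[i])
        / brier_score_loss(obs[i], np.full(len(i), obs[i].mean()))
    for i in idx])
ci_low, ci_high = np.percentile(boot_bss, [2.5, 97.5])
```

The bootstrap step mirrors the error bars in the skill figures: resampling forecast--observation pairs with replacement and recomputing the metric yields an empirical sampling distribution.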
\begin{figure*}[t]\centering
\noindent\includegraphics[width=39pc,angle=0]{skill_resolution.pdf}\\
\caption{Aggregate (a) Brier Skill Score and (b) Area under the ROC (AUROC) curve for CSU-MLP RF forecasts and SPC outlooks. Included as dark blue is BSS and AUROC for CSU-MLP forecasts that include the 5\% contour. Error bars are computed from 100 bootstrap resamples of the forecast distributions and represent the 95\% confidence interval.}\label{SKILL}
\end{figure*}
The skill and resolution of forecasts derived from alternatively trained RFs are also assessed to determine whether medium-range prediction skill can be improved by learning to associate severe weather events with features in the days leading up to events (p0, p1, p2, and p3 experiments), by reducing the impact of noisy predictors, and by increasing the representative sample of atmospheric states in the underlying GEFS/R ensemble (tl10 experiment) used to assemble associated features. Aggregated BSSs are computed over the same verification period for the p0, p1, p2, p3, and tl10 experimental forecasts at the day 6 lead time (Fig. \ref{SKILL_EXP}a) with the 5\% contour included for comparison against the best control CSU-MLP forecasts (e.g., Fig. \ref{SKILL}a). The forecast skill for all predictor-averaged models is statistically indistinguishable from the control CSU-MLP RF model (Fig. \ref{SKILL_EXP}a). These BSSs alone imply that reasonably skillful day-6 forecasts can be derived by simply considering how the atmosphere evolves locally before a high-impact severe weather event. Moreover, the relatively equal skill among forecasts suggests that the raw GEFS/R predictors used in the control model (e.g., at each point in space) do not add significant value in training and perhaps exhibit less noise than their CAM-model counterparts \citep[e.g.,][]{Hilletal2021,Lokenetal2022}. The time-lagged model forecasts exhibit significantly less skill than the control and predictor-averaged model forecasts, but their resolution is significantly better (Fig. \ref{SKILL_EXP}b), suggesting that the time-lagged RF model issues probability contours that are larger than the corresponding predictor-averaged model forecast contours (i.e., improved resolution) but sacrifice skill (not shown). Furthermore, the predictor-averaged model forecasts all improve upon the resolution of the control system, with p0 exhibiting the highest resolution (Fig. \ref{SKILL_EXP}b).
\begin{figure*}[t]\centering
\noindent\includegraphics[width=39pc,angle=0]{experiment_skill.pdf}\\
\caption{Aggregate (a) Brier Skill Score and (b) Area under the ROC (AUROC) curve for CSU-MLP and experimental (p0, p1, p2, p3, and tl10) RF forecasts. Error bars are computed from 100 bootstrap resamples of the forecast distributions and represent the 95\% confidence interval. BSS and AUROC are computed using 5, 15, and 30\% probability contours. }\label{SKILL_EXP}
\end{figure*}
An assessment of spatial forecast skill underscores the frequency biases of the RF-based and SPC forecasts. The SPC outlooks feature patchy areas of positive skill associated with instances of severe weather that were correctly highlighted days in advance (Figs. \ref{SKILL_SPACE}a,b); in other words, when SPC forecasters do issue outlooks, they do so quite skillfully, particularly at day 4. However, areas of slightly negative skill in the southern Great Plains and mid-Atlantic suggest there were missed opportunities to forecast high-impact events 4 and 6 days in advance. Fewer SPC forecasts at day 6 (Fig. \ref{FCST_FREQ}e) further limit the extent of positive forecast skill (Fig. \ref{SKILL_SPACE}b) compared to day 4. It is possible that forecasters had too little confidence in the forecast evolution to warrant a 15\% or 30\% outlook contour at these lead times for various high-impact weather events, but as lead time decreased (i.e., days 1--3), confidence increased and forecasts were issued at multiple probabilistic thresholds. Furthermore, the limited verification dataset likely creates localized areas of high/low skill; increasing the length of verification to multiple years would help clarify SPC forecast skill in the medium range.
\begin{figure*}[t]\centering
\noindent\includegraphics[height=40pc,angle=0]{spatial_skill.pdf}\\
\caption{Aggregate BSS across the CONUS for (a),(c) day 4 and (b),(d) day 6 (top row) SPC and (middle row) CSU-MLP RF forecasts. CSU-MLP minus SPC BSS is shaded in the bottom row. Browns indicate where SPC forecasts are more skillful, and greens where CSU-MLP forecasts are more skillful.}\label{SKILL_SPACE}
\end{figure*}
The spatial skill of the control RFs similarly emphasizes the high-frequency forecast bias, with more expansive and smoothed areas of positive and negative BSSs (Fig. \ref{SKILL_SPACE}c,d). For brevity, and because the predictor-averaged models display similar skill, only the control forecast spatial skill is considered here. Prominent areas of negative forecast skill in the control RF forecasts across the Great Lakes are a symptom of the ``forecast anywhere'' nature of the RFs regardless of the climatological report frequency. BSS differences between SPC outlooks and RF forecasts (Fig. \ref{SKILL_SPACE}e,f) further illustrate the complexities of verifying these forecasts over a short period, but also clearly demonstrate that issuing more probabilistic contours in the medium range yields better skill. This facet is perhaps most notable across the upper Midwest and mid-Atlantic, where the RF forecasts at days 4 and 6 were notably better than SPC's as a result of SPC not issuing many outlooks (Figs. \ref{FCST_FREQ}a,e). The CSU-MLP spatial forecast skill at days 4 and 6 is improved when the 5\% probability contour is included (Fig. \ref{SKILL_SPACE_INCLUDE}), which amplifies areas of positive and negative skill. In some instances, including the 5\% forecast contour reverses areas of negative skill (cf. Figs. \ref{SKILL_SPACE}d and \ref{SKILL_SPACE_INCLUDE}b in Georgia) or replaces neutral skill with positive skill as in the Great Plains (cf. Figs. \ref{SKILL_SPACE}d and \ref{SKILL_SPACE_INCLUDE}b in North and South Dakota).
\begin{figure*}[t]\centering
\noindent\includegraphics[width=19pc,angle=0]{spatial_skill_include5.pdf}\\
\caption{As in Figs. \ref{SKILL_SPACE}c,d, but BSS is computed with a discretized 5\% contour.}\label{SKILL_SPACE_INCLUDE}
\end{figure*}
Forecast skill is also assessed by computing the fractional coverage of observations within the forecast contours. While the SPC contours are defined as single probabilistic levels and not a discrete range (e.g., 15--30\%), it is reasonable to suggest that the fractional coverage of observations for a probabilistic contour should not exceed the next probabilistic value; otherwise, a higher contour would be warranted. Fig. \ref{COVERAGE} shows the fractional coverage of observations, in which the objective is to fall between the green and red horizontal bars for a particular probabilistic threshold. When below the green horizontal line, forecast contours are, on average, too large; when above the red line, forecast areas are too small. SPC and CSU-MLP control forecasts are well calibrated at the 15\% threshold at almost all forecast days; SPC forecasts are potentially too small at day 8, but a small sample size limits a complete analysis at that lead time (Fig. \ref{COVERAGE}a). On the other hand, the day 4 and 5 30\% outlooks from SPC are typically smaller than the CSU-MLP control forecasts, which appear generally well calibrated prior to day 7 (Fig. \ref{COVERAGE}b). Unsurprisingly, the CSU-MLP 5\% probability contours are also well calibrated at all forecast lead times (not shown).
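The fractional-coverage diagnostic amounts to a conditional observed frequency: among grid points whose forecast probability falls within a contour bin, what fraction verified? A minimal sketch, using a synthetic and (by construction) calibrated forecast field as an assumption:

```python
import numpy as np

def fractional_coverage(probs, obs, lower, upper=None):
    """Observed event frequency at points where the forecast probability lies
    in [lower, upper); upper=None treats `lower` as the highest contour."""
    mask = probs >= lower if upper is None else (probs >= lower) & (probs < upper)
    return obs[mask].mean() if mask.any() else float("nan")

# Synthetic gridded forecast probabilities and matching binary observations.
rng = np.random.default_rng(1)
probs = rng.random(5000) * 0.45
obs = (rng.random(5000) < probs).astype(int)   # calibrated by construction

# A well-calibrated 15% contour should verify between the 15% and 30% levels
# (the green and red bars in the coverage figure).
cov15 = fractional_coverage(probs, obs, 0.15, 0.30)
```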
\begin{figure*}[t]\centering
\noindent\includegraphics[width=39pc,angle=0]{frac_coverage.pdf}\\
\caption{Spatial coverage of local storm reports in SPC and CSU-MLP forecasts for all forecast days and the (a) 15\% and (b) 30\% probability contours. Green horizontal lines denote the bottom probability of the forecast ``bin'', whereas red horizontal lines denote the top of the bin, i.e., the next probability contour value.}\label{COVERAGE}
\end{figure*}
For a complete depiction of calibration, reliability diagrams are constructed for all SPC and control RF forecasts (Fig. \ref{REL_DIAG}). Reliability curves that fall above or below the perfect-reliability line (dashed black lines in both panels of Fig. \ref{REL_DIAG}) indicate under-forecasting or over-forecasting of severe weather events, respectively. All SPC outlooks are generally well calibrated prior to day 8, whereas RF forecasts generally have an under-forecast bias above the 15\% probability threshold but achieve reliability at lower thresholds. Day 7 and 8 control RF forecasts lose reliability quickly above 15\% (Fig. \ref{REL_DIAG}b), plummeting effectively to no skill above 30\% and no resolution (i.e., below the red dashed line) at the highest probabilities considered; SPC maintains skill at day 7 but under-forecasts severe weather events at day 8 (Fig. \ref{REL_DIAG}b). While reliability for day 4--6 forecasts above 30\% is considerably more variable (Fig. \ref{REL_DIAG}a), owing to smaller sample sizes (e.g., inset in Fig. \ref{REL_DIAG}), the curves still hover near perfect reliability. Analysis of reliability for the alternative predictor assembly methods reveals that predictor averaging maintains reliability relative to the control forecasts up to slightly higher probabilistic thresholds (e.g., 25\%) for the day 6 forecasts, whereas the day 6 tl10 experiments over-forecast observed events (not shown), aligning with the aggregate BSS statistics (Fig. \ref{SKILL_EXP}a). This analysis underscores the utility of the continuous RF probabilities as a forecast tool at medium-range lead times, and also stresses the difficulty of accurately capturing severe weather threat areas multiple days in advance (i.e., most RF-based forecasts under-forecast the observed event probabilities), even for skillful statistical models.
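Reliability curves of this kind can be computed with scikit-learn's calibration utilities; the synthetic, calibrated-by-construction forecasts below are an illustrative assumption:

```python
import numpy as np
from sklearn.calibration import calibration_curve

# Synthetic forecasts spanning 0-60% probability, with observations drawn so
# the forecasts are calibrated by construction.
rng = np.random.default_rng(2)
probs = rng.random(20000) * 0.6
obs = (rng.random(20000) < probs).astype(int)

# Observed frequency vs. mean forecast probability per bin; points above the
# 1:1 line indicate under-forecasting, points below it over-forecasting.
# Empty probability bins are dropped automatically.
obs_freq, mean_pred = calibration_curve(obs, probs, n_bins=10,
                                        strategy="uniform")
```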
\begin{figure*}[t]\centering
\noindent\includegraphics[width=39pc,angle=0]{reliability.pdf}\\
\caption{Reliability diagrams for CSU-MLP RF and SPC outlooks at (a) day 4--6 and (b) day 7--8 lead times and as noted in the legends. SPC reliability is plotted as the observed frequency across the forecast probability bins (e.g., 15--30\%). Inset are the forecast frequencies as a function of mean forecast probabilities. Perfect reliability, no skill, and no resolution are denoted by the black, blue, and green (or orange) dashed lines, respectively. }\label{REL_DIAG}
\end{figure*}
\subsection{Example Forecasts}
Example forecasts are provided that display some of the skill and resolution attributes of the RF outlooks described previously (Fig. \ref{GOOD_CASE}). All day 4--8 example forecasts are valid for the 24-hour period ending 1200 UTC 16 December 2021. This particular event featured a compact shortwave trough with strong low- and mid-level wind fields. Robust low-level warm-air advection with dewpoints in the low 60s ($^{\circ}$F) contributed to an atmosphere primed for severe thunderstorms and a highly anomalous severe weather event for mid-December; a robust discussion of the meteorological parameter space for this event is provided in the day-1 SPC forecast discussion archive \citep{SPC2022b}. Medium-range forecasts from NWP models depicted the shortwave trough days in advance but did not have a good handle on the instability parameter space even at day 4. SPC forecasters decided to issue a 5\% severe hazard risk at day 3, noting the impressive kinematic support for damaging wind gusts\footnote{Day 3 discussion available at https://www.spc.noaa.gov/products/outlook/archive/2021/day3otlk\_20211213\_0830.html}. Increasing forecaster confidence in a high-impact severe weather event with decreasing lead time resulted in a moderate categorical risk being issued at day 1, with a 45\% probability of severe wind and a large area of significant severe wind delineated; a 10\% probability of tornadoes also accompanied the SPC day-1 forecast.
The CSU-MLP control forecasts (Fig. \ref{GOOD_CASE}) depicted a severe weather threat area across the upper Mississippi valley eight days in advance (Fig. \ref{GOOD_CASE}e). By day 6, a 15\% probability contour was introduced in the forecasts with a 30\% contour added in subsequent, shorter lead-time forecasts (Fig. \ref{GOOD_CASE}a-c) that mostly encircled severe weather reports for the event. Forecast skill scores (BSS) gradually increased from 0.04 at day 8 to 0.19 at day 5 with a slight decrease back to 0.15 at day 4. In short, the probabilistic RF-based guidance showed substantial skill out to day 8 for this particular case, and the progression of forecasts from day 8 to 4 showcases the utility of the forecast system to highlight areas that may experience severe weather\footnote{SPC forecasters used the CSU-MLP forecasts during this event, highlighting their value in upgrading SPC outlooks as the event neared. See note from SPC forecaster Andrew Lyons: \url{https://twitter.com/TwisterKidMedia/status/1471585397440487433?s=20&t=cSYwf08xjtvvuwIYr2TgHQ}}.
\begin{figure*}[t]\centering
\noindent\includegraphics[width=39pc,angle=0]{121521.pdf}\\
\caption{Day 4--8 CSU-MLP forecasts for any severe hazard probability valid 1200 UTC 15 December 2021 -- 1200 UTC 16 December 2021. NWS local storm reports for wind, hail, and tornadoes are included as blue, green, and red circles, respectively. Observation coverage and BSS are included in lower left and right corners of each panel, respectively.}\label{GOOD_CASE}
\end{figure*}
Another example is provided to reinforce the similarities between the experimental forecast systems for one particular case. On this day, the 24-hour period ending 1200 UTC 14 August 2021, numerous wind reports were recorded across Ohio, Pennsylvania, West Virginia, and several other mid-Atlantic states (Fig. \ref{EXP_EX}). All RF-based forecasts have a broad 5\% contour across this region, extending erroneously westward into Indiana, Illinois, and Missouri. None of the forecasts suggest greater than a 15\% probability of severe weather, despite dense sets of wind reports in two corridors across the northeast. The 15\% contour in the control system seems subjectively well positioned (BSS=0.1076), but the p0-model forecast inaccurately extends the 15\% contour westward (BSS=0.1064). The p1-model forecast contracts the 15\% contour back eastward, but also eliminates an area in New York and Pennsylvania that experienced wind reports (BSS falls to 0.0979). The p2- and p3-model forecasts further contract the 15\% contour (BSSs of 0.059 and 0.0441, respectively), leaving the 5\% area relatively unchanged; the time-lagged model is nearly identical to the p3-model forecast. This example illustrates the rather subtle differences between the experiments that render the control system and flow-dependent RF models objectively similar. A more comprehensive case-study evaluation would be needed to characterize these subtle forecast differences over the entire forecast period, which is beyond the scope of this experimental exploration but is an active area of research. Additionally, the computational savings of the spatial-averaging models, particularly in training the RFs, may support a continued investigation alongside the CSU-MLP control forecast system into their utility as an operational tool.
\begin{figure*}[t]\centering
\noindent\includegraphics[width=39pc,angle=0]{experiment_ex.pdf}\\
\caption{RF-based forecasts of any severe hazard at day 6 lead time from (a) trad, (b) p0, (c) p1, (d) p2, (e) p3, and (f) tl10. All forecasts are initialized 0000 UTC 8 August 2021 and valid 1200 UTC 13 August 2021 -- 1200 UTC 14 August 2021. NWS local storm reports for wind, hail, and tornadoes are included as blue, green, and red circles, respectively. Observation coverage and BSS are included in lower left and right corners of each panel, respectively.}\label{EXP_EX}
\end{figure*}
\subsection{Feature Importances}
To better understand what the RFs have learned about severe weather prediction from the training process, FIs are aggregated by meteorological variable and region for the day 4, 6, and 8 CSU-MLP control models (Fig. \ref{MR_GINI}). Consistent with the day 1--3 models developed by \citet{Hilletal2020}, CAPE, CIN, MSLP, SHEAR500, and SHEAR850 are the most important predictors for the day 4 models (Fig. \ref{MR_GINI}a); CAPE is also less important in the East region as MSLP and SHEAR850 increase in importance, consistent with high-shear, low-CAPE environments that are more prevalent in the southeast U.S. \citep{Sherburn2014hslc}. As lead time increases, CAPE becomes less important in the Central region (Fig. \ref{MR_GINI}b,c), being replaced by Q2M. Q2M also becomes more important in the West region, but CIN replaces CAPE as the most important predictor (e.g., Fig. \ref{MR_GINI}c). In the East, CIN also replaces CAPE as the most important predictor.
\begin{figure*}[t]\centering
\noindent\includegraphics[width=39pc,angle=0]{mr_gini.pdf}\\
\caption{FIs for day (a)-(c) 4, 6, and 8 traditional CSU-MLP models grouped by meteorological variable and color coded by training model region as depicted in the legend.}\label{MR_GINI}
\end{figure*}
FIs of the p1 and p2 models are explored to further understand how the RFs leverage features in the day(s) leading up to severe weather events. FIs for both models peak during the day 6 forecast window (i.e., forecast hours 132--156), but they also ramp up over the days leading up to the event (Fig. \ref{m1_m2}), suggesting the local meteorological environment is being used by the RFs in predictions. Slight differences in FIs exist by regional model as well. For example, the West region p1 and p2 models have secondary peaks near forecast hours 120--123 (Fig. \ref{m1_m2}a), approximately 0000--0300 UTC the day before the forecast window, and p2 has a tertiary peak near forecast hour 99 (Fig. \ref{m1_m2}b). Not only are the prior days' variables being used, but there is a notable cyclical nature that matches the diurnal climatology of severe weather \citep[e.g.,][]{Hilletal2020}. In contrast, FIs in the Central and East regions do not have the same cyclic pattern, but rather exhibit a nearly constant ramp-up (e.g., orange and yellow bars in Fig. \ref{m1_m2}a). However, since these FIs are a summation over all meteorological predictors, it is not clear what aspects of the environment prior to a severe weather event are being learned by the RFs to make day 6 predictions.
\begin{figure*}[t]\centering
\noindent\includegraphics[width=39pc,angle=0]{m1_m2_imps.pdf}\\
\caption{FIs for day 6 (a) p1 and (b) p2 models at each forecast hour and color coded by training region. FIs at each hour are aggregated across all meteorological variables.}\label{m1_m2}
\end{figure*}
To further clarify the day-prior FIs, features and FIs are separated into thermodynamic and kinematic subgroups (refer to Table \ref{t1}) for the p1 models (Fig. \ref{p1_REG}). In the West region models, which exhibited a strong cyclic FI pattern, the thermodynamic variables (e.g., CAPE, Q2M) are the primary contributors to the day-prior FI secondary peak, with a sharp increase at forecast hour 123 (Fig. \ref{p1_REG}a) -- the kinematic variable FIs in the West p1 model have a more subtle cyclic pattern, but still peak during the day 6 forecast window. In the Central region model, the FIs have a broad and uniform peak from forecast hours 135--147 and a smaller contribution compared to the West region in the day-prior period (Fig. \ref{p1_REG}b). The East region models lean on day-prior thermodynamic predictors slightly more than the Central region models, with a longer ramp-up of FIs from forecast hours 123--141 (Fig. \ref{p1_REG}c), but the kinematic FIs for both the Central and East models are markedly smaller than the thermodynamic variable contributions. A full explanation for these FI patterns is reserved for future work, but as an initial assessment, the FIs highlight unique regional and meteorological relationships learned by the experimental models that were exploited by the RFs to make severe weather predictions. Changes to the local environment that preceded severe weather events clearly influenced the RFs during training, but to what extent those variables contributed to forecast probabilities is not discernible from Gini importances alone.
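The thermodynamic/kinematic breakdown by forecast hour amounts to a grouped sum over the importance array; the variable list, subgroup membership, and random importances below are assumptions for illustration, not the study's actual feature set:

```python
import numpy as np

# Hypothetical (variable, forecast hour) importance grid matching the feature
# ordering of a training matrix; day-prior through day-6 window, 3-hourly.
variables = ["CAPE", "CIN", "Q2M", "MSLP", "SHEAR500", "SHEAR850", "U10", "V10"]
thermo = {"CAPE", "CIN", "Q2M", "MSLP"}          # assumed subgroup split
hours = np.arange(120, 168, 3)

rng = np.random.default_rng(3)
fi = rng.random((len(variables), len(hours)))
fi /= fi.sum()                                   # normalized like Gini FIs

# Sum importances at each forecast hour within each subgroup.
is_thermo = np.array([v in thermo for v in variables])
thermo_by_hour = fi[is_thermo].sum(axis=0)
kinematic_by_hour = fi[~is_thermo].sum(axis=0)
```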
\begin{figure*}[t]\centering
\noindent\includegraphics[width=39pc,angle=0]{m1_imps_byregion.pdf}\\
\caption{FIs for day 6 p1 models in the (a) WEST, (b) CENTRAL, and (c) EAST regions color coded by feature variable type as defined in Table \ref{t1}. Importances are summed at each forecast hour for the thermodynamic and kinematic feature subsets.}\label{p1_REG}
\end{figure*}
\section{Summary and Discussion}
Nine years of reforecasts from the GEFSv12 reforecast dataset are used along with historical records of severe weather to construct a novel RF prediction system capable of explicitly and probabilistically predicting severe weather at 4--8-day lead times, i.e., the medium range. Human forecasts issued by the SPC are evaluated alongside the RF-based predictions to assess the operational utility of the ML forecasts. A handful of experiments are also conducted to explore whether forecasts could be improved through feature engineering and expanding the GEFSv12 ensemble size. The main conclusions are as follows:
\begin{enumerate}
\item RF forecasts have more skill and higher resolution than the human-based outlooks, partly a reflection of the continuous probabilities of the RFs and their ability to issue lower-probability contours more frequently, which adds considerable skill and resolution to the forecast system.
\item The CSU-MLP forecasts tend to underforecast the occurrence of severe weather in the medium range at probabilistic contours above 15\% whereas SPC forecasts are calibrated prior to day 8.
\item Using spatially-averaged GEFS/R features yielded similarly skillful forecasts as the traditional CSU-MLP method while also allowing for prior-day meteorological information to inform the forecasts; the models learned to associate the buildup of thermodynamically- and kinematically-favorable environments with next-day severe weather events. Additionally, the similar skill amongst models suggests that spatiotemporally-displaced GEFS/R predictors are not particularly noisy but do not provide tremendous value to the RFs.
\item Time-lagging the GEFSv12 reforecasts to produce larger initial ensembles for RF training degraded forecast skill but increased forecast resolution by generating larger areas of low probability forecasts.
\item Feature importances revealed relationships known to be important for severe weather forecasting, providing confidence in the RF forecasts.
\item The performance of the RF-based forecasts alongside the human-generated outlooks demonstrates their utility and potential value as a guidance tool in the human forecast process.
\end{enumerate}
The comparisons between RF-based predictions and human forecasts provided in this work have some important caveats to consider. SPC forecasters have often employed specific philosophies in generating day 4--8 outlooks (Steven Weiss, personal communication) that likely limit the number of forecasts issued and hamper more skillful human-based medium-range forecasts. First, SPC forecasters are tasked with forecasting the probability of ``organized severe thunderstorms'', not necessarily severe weather reports, so they will never outline a high-CAPE, low-shear event in the medium range despite a high likelihood of thunderstorms. Second, SPC forecasters are very concerned with continuity and ramping up to an event. For example, forecasters may opt to introduce a forecast area in a day 3 or 2 outlook rather than days 4--8 because they may not want to highlight a threat area that has to be shifted, enlarged, or removed altogether in subsequent outlooks; forecasters are hesitant to add outlook areas when confidence is too low, and an area can instead be added on a subsequent forecast shift. Relatedly, SPC forecasters perceive that NWS Weather Forecast Offices do not like when SPC removes or reduces severe weather probabilities because doing so affects public messaging of severe weather. As a result, it is common for SPC day 4--8 outlooks to be relatively small (e.g., Fig. \ref{COVERAGE}) and infrequent, particularly when atmospheric predictability for severe weather wanes in the warm season (e.g., Fig. \ref{MON_FREQ}). As confidence increases in a severe weather threat area, the probabilities can be increased and areas expanded in day 1--3 outlooks. These and other internal constraints, along with the relative dearth of useful NWP model guidance, naturally restrict SPC forecast skill at longer lead times.
On the other hand, the ML guidance generated from the CSU-MLP could significantly aid SPC by increasing forecast confidence and consistency, providing operational partners and end users more lead time ahead of severe weather threats.
The results presented highlight some ML success against baseline forecasts and also a number of unique avenues that could be explored moving forward to enhance and improve both ML-based guidance and the SPC human-based forecasts, as well as increase interpretability of the ML `black box'. While the feature assembly experiments (e.g., p2, tl10) did not yield forecasts that surpassed the skill of the traditional CSU-MLP system, the simplification of features could be exploited to include other ensemble diagnostic or summary metrics (e.g., mean, high or low member values) that characterize ensemble spread into the medium range. The meteorological predictors could also be varied in any of the ML configurations to explore which predictors add the most value, or objective methods (e.g., permutation importance) could be used to reduce feature redundancy and select a more optimal subset of features. It will also be vitally important that alternative interpretability metrics (e.g., tree interpreter, \citealp{Saabas2016}; Shapley additive explanations, \citealp[SHAP;][]{Shapley2016}; accumulated local effects, \citealp[ALE;][]{ApleyandZhu2020}) are employed to interrogate how the RFs make predictions; this exploration is underway and will be the focus of a follow-on manuscript. Additionally, the added benefit of the ML system relative to the underlying GEFS model could be quantified more explicitly. Traditionally, ML-based forecasts have been measured against the very model that generates the ML predictors, with demonstrated success improving upon the raw dynamical models \citep[e.g.,][]{herman2018money}.
In this instance, with notable 2-m dry and low-instability biases in the GEFSv12 system\footnote{Internal SPC surveys have suggested these biases exist and are reducing forecaster confidence in deterministic Global Forecast System and GEFSv12 forecasts} \citep{Manikinetal2020}, it would be informative to quantify the value added by the ML system to correct for these biases when making severe weather predictions.
Equipped with calibrated statistical products and expert human knowledge, SPC forecasters may be able to increase medium-range outlook skill by using the CSU-MLP RF forecasts and delineating lower-probability threat areas. Furthermore, by incorporating these types of robust, skillful statistical guidance products into the forecast process, SPC forecaster confidence in a forecast outcome may increase. Unveiling how and why the ML models issue probabilities in a certain area will give forecasters additional confidence to rely on the products as a forecast tool. We expect that the usefulness of the CSU-MLP prediction system and others like it is not necessarily limited to the medium range, and its applicability to subseasonal and seasonal predictions of severe weather is planned for future investigation. Additionally, with continued effort from the meteorological community to make AI/ML methodologies comprehensible (e.g., Chase et al. 2022) and more common in academic settings, there will be more opportunities to pursue new and improved forecast methodologies. Finally, it is crucial that constant communication exist between ML developers and SPC forecasters to generate products that SPC operations find useful and valuable. One such avenue is continued participation and development of these ML products in the Hazardous Weather Testbed Spring Forecast Experiment \citep{Clark2021}.
\clearpage
\acknowledgments
This work is supported by the Joint Technology Transfer Initiative and NOAA Grant NA20OAR4590350. We would like to thank SPC forecasters for their invaluable perspectives on these forecast products and continued collaboration to develop cutting-edge medium-range guidance products.
\datastatement
All RF-based forecasts are available upon request from the Colorado State University researchers, and will be made available in the near future in an online repository. SPC outlooks are available via a public archive at https://www.spc.noaa.gov/. The GEFS/R dataset and GEFSv12 forecasts are publicly available from Amazon AWS at https://registry.opendata.aws/noaa-gefs/.
\bibliographystyle{ametsocV6}
Volume 301 - 35th International Cosmic Ray Conference (ICRC2017) - Rapporteur Talks
Status of ground based gamma-ray observations
N. Park
Full text: pdf
Pre-published on: 2017 October 23

Abstract: This is a proceeding of a rapporteur talk on ground-based gamma-ray astronomy given at the 35th International Cosmic-Ray Conference (ICRC), held in 2017 in Busan, Republic of Korea. A total of ~300 contributions were presented during the ICRC over 17 gamma-ray sessions. Here, I summarize the contributions, focusing mainly on the source observations performed by ground-based gamma-ray instruments and on the connection between gamma rays and cosmic rays. Any such summary must necessarily be incomplete; however, I have attempted to provide a glance at the recent progress made in using ground-based gamma-ray observations to understand the nature of high-energy particles in our Universe.

Open Access
# Math Help Forum - Bounded growth

1. ## Bounded growth

I don't even know where to begin. Thanks for the help.

2. Originally Posted by konvos:
> I don't even know where to begin. Thanks for the help.

What happens to the rate of growth as $P(t) \to 500$?
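The reply's hint can be made concrete with the standard bounded-growth model. This is a sketch assuming the problem's cap (carrying capacity) is 500 and $k > 0$ is a rate constant; the original problem statement is not reproduced in the thread:

```latex
% Bounded (limited) growth toward a cap of 500:
\frac{dP}{dt} = k\bigl(500 - P(t)\bigr), \qquad
P(t) = 500 - \bigl(500 - P(0)\bigr)e^{-kt}.
% As P(t) \to 500, the factor (500 - P(t)) \to 0, so the growth rate dP/dt \to 0.
```

In words: the closer the population gets to 500, the smaller its growth rate, so $P(t)$ levels off at 500 rather than growing without bound.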
"Those Damned Blue-Collar Tweekers" is a song by the American rock band Primus. It was released as the third single from their 1991 album Sailing the Seas of Cheese. Unlike its preceding singles "Jerry Was a Race Car Driver" and "Tommy the Cat", "Tweekers" did not feature an accompanying video. The song opens with Larry LaLonde on guitar and a reserved bassline from Les Claypool, from there alternating between his trademark slap bass and a quiet section for the vocals.
The song's narrative describes several different trades that the town's blue-collar tweekers engage in but, like many of the other storytelling songs in Primus's catalogue, it lacks any single, clear meaning and leaves plenty of ambiguity in its lyrics. Broadly, the song is about truck drivers and other blue-collar workers using methamphetamine.
Live
When performing live, Claypool changes a particular word in the lyrics: in the third verse, instead of "my eyes are growing weary as I finalize this song," he sings "my eyes are growing weary as I sodomize this song."
The band's Woodstock 1994 performance of the song was particularly notable: Claypool began a bass rendition of "The Star-Spangled Banner" in homage to Jimi Hendrix's guitar performance of the national anthem decades before, eventually apologizing to the crowd with "Sorry, I had to do it" and returning to the song.
As of 2015, it is Primus's second most-performed song live. A live version of the song (performed at Primus' show at the Brixton Academy, London, England on July 13, 2011) also appears as an iTunes exclusive bonus track on the band's 2011 album, Green Naugahyde.
Primus often use animated clips from the online animated series Salad Fingers.
Q: How to remove a chip in Angular Material. I am trying to remove selected chips (Angular Material) but cannot remove them from the chip values. After selecting chip values from the multiple checkboxes, I try to remove a selected chip by clicking the button. The value is removed only from the selected values, not from the chips (red, with a cancel box). I do not know how to remove it. If anyone knows, please help me find the solution.
app.component.ts:
public cardValue: any = {
  options: [],
};

selectOptions: Array<string> = ['Bus', 'Car', 'Motor', 'Wheel'];

selectChange = (event: any) => {
  const key: string = event.key;
  this.cardValue[key] = [...event.data];
};

removeChipFn(val) {
  alert(val);
  const index = this.cardValue.options.indexOf(val);
  if (index > -1) {
    this.cardValue.options.splice(index, 1);
  }
}
Demo: https://stackblitz.com/edit/angular-ivy-ietub7?file=src%2Fapp%2Fapp.component.ts
Q: ESRI Field Maps Data Collection Best Practices I am trying to determine the best workflow to collect data using ESRI's Field Maps without introducing any unnecessary error. I understand that anytime you complete a datum transformation you are introducing varying degrees of error to your data depending on the specific transformation you choose.
When using Field Maps, the default coordinate system is WGS 84 (Auxiliary Sphere), but my external GPS receiver is set to collect in State Plane coordinates. When I collect data with this configuration, the data displays properly in the correct location. So, my question is: what is happening under the hood in Field Maps that allows this to happen?
My main concern is that there is an on-the-fly transformation happening that could be adding a small unnecessary error/shift to my data. According to the source listed below, this transformation would introduce roughly a 0.1 meter shift/error to the data.
/*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package io.trino.execution;
import com.google.common.util.concurrent.ListenableFuture;
import io.trino.connector.CatalogName;
import io.trino.execution.warnings.WarningCollector;
import io.trino.metadata.Metadata;
import io.trino.security.AccessControl;
import io.trino.sql.tree.Expression;
import io.trino.sql.tree.ResetSession;
import io.trino.transaction.TransactionManager;
import java.util.List;
import static com.google.common.util.concurrent.Futures.immediateFuture;
import static io.trino.spi.StandardErrorCode.CATALOG_NOT_FOUND;
import static io.trino.spi.StandardErrorCode.INVALID_SESSION_PROPERTY;
import static io.trino.sql.analyzer.SemanticExceptions.semanticException;
public class ResetSessionTask
        implements DataDefinitionTask<ResetSession>
{
    @Override
    public String getName()
    {
        return "RESET SESSION";
    }

    @Override
    public ListenableFuture<?> execute(
            ResetSession statement,
            TransactionManager transactionManager,
            Metadata metadata,
            AccessControl accessControl,
            QueryStateMachine stateMachine,
            List<Expression> parameters,
            WarningCollector warningCollector)
    {
        List<String> parts = statement.getName().getParts();
        if (parts.size() > 2) {
            throw semanticException(INVALID_SESSION_PROPERTY, statement, "Invalid session property '%s'", statement.getName());
        }

        // validate the property name
        if (parts.size() == 1) {
            if (metadata.getSessionPropertyManager().getSystemSessionPropertyMetadata(parts.get(0)).isEmpty()) {
                throw semanticException(INVALID_SESSION_PROPERTY, statement, "Session property '%s' does not exist", statement.getName());
            }
        }
        else {
            CatalogName catalogName = metadata.getCatalogHandle(stateMachine.getSession(), parts.get(0))
                    .orElseThrow(() -> semanticException(CATALOG_NOT_FOUND, statement, "Catalog '%s' does not exist", parts.get(0)));
            if (metadata.getSessionPropertyManager().getConnectorSessionPropertyMetadata(catalogName, parts.get(1)).isEmpty()) {
                throw semanticException(INVALID_SESSION_PROPERTY, statement, "Session property '%s' does not exist", statement.getName());
            }
        }

        stateMachine.addResetSessionProperties(statement.getName().toString());
        return immediateFuture(null);
    }
}
Forest Service to use AT&T backbone
By Michael Hardy
AT&T Government Solutions won a two-year contract to provide a high-speed backbone network for the Agriculture Department's Forest Service.
Forest Service officials are trying to increase their network capabilities while reducing costs, according to AT&T officials. The new network will link 20 sites in the continental United States, Alaska and Puerto Rico.
Terms of the contract were not disclosed.
Company officials also announced a $7.5 million contract to upgrade the National Gallery of Art's Collections Management System, which museum officials use to retrieve data about the collection.
Hellin Kay, Contributor
Photographer, Stylist & Director
Documentaries Old and New; Sascha Rice, Sundance and Nanook
01/04/2012 04:39 pm ET Updated Mar 05, 2012
I saw Sascha Rice's new documentary The Legacy of Pat Brown, about her grandfather, a few weeks ago at MOCA and fell in love. I am admittedly a sucker for nostalgia of any kind, but it has to be done right (see Mad Men vs. The Playboy Club), and Sascha hits it on the nose. She takes just the right amount of family drama (her uncle is current California governor Jerry Brown, Pat's son), combines it with California's rich political and environmental history, and weaves it gently through a certain period of American lore, creating a beautiful, honest portrait not only of someone close to her own heart but of the heart of a great state. Plus, it's one of those documentaries that tells a really fascinating story without relying on talking heads.
Documentary film is one of the most undervalued (I think) of all art forms. But thanks I guess partly to "reality TV" people seem to have rediscovered it in the mainstream and are a bit more interested in it as a form of entertainment. Every year there are a few documentaries that manage theatrical releases (Restrepo, Page One, Bill Cunningham, Tabloid to name a few) and this year at Sundance there are several new ones that I am dying to see; About Face on supermodels by photographer Timothy Greenfield-Sanders, Bones Brigade by old school skater Stacy Peralta, Amy Berg's West Of Memphis, Something From Nothing: The Art of Rap directed by Ice T and Matthew Akers' Marina Abramović: The Artist Is Present. There is also a younger generation of documentary filmmakers like GatlingPictures which earlier this year released a documentary on musician Mark Sandman (Cure For The Pain) and this past year have been knee deep in the heart of the political triangle between Taiwan, the U.S. and China for their new one, Tsua Lel Dan.
When I was studying film at Bard College with Adolfas Mekas and John Pruitt, we were asked to read a book on documentary film that traces its history and of course tells the story of the one that started it all, Nanook of the North, which Robert J. Flaherty made in 1922. Recently I've started thinking that Flaherty not only created the first documentary film but also the first reality TV show. Remember how he accidentally burned his first negative of Nanook of the North while editing and smoking a cigarette, and then had to go back to the Arctic region of northern Quebec to re-shoot all the footage that had taken him a year to document? He and Nanook had to recreate and stage a lot of what was previously shot. Sound familiar?
Sascha Rice's California State of Mind: The Legacy of Pat Brown will be screening at the Palm Springs Film Festival this Friday January 13th at 6pm and Sunday January 15th at 5:30pm at the Palm Springs Regal 9.
Understanding the Danger Area Around a Tropical Cyclone
Nikos Mazarakis Client Relations and General Office Manager - StormGeo Greece
With an average 100 tropical storms every year across the globe's oceans, it's understandable that seafarers consider them to be one of the most dangerous meteorological phenomena they can encounter. Unfortunately, the oceanic areas most affected by tropical cyclones also see some of the highest levels of marine traffic (take the Northwestern Pacific, for example). This highlights the enormous impact tropical storms have on the commercial shipping industry.
For example, maritime transport in the South China Sea was greatly disrupted by Typhoon Mangkhut in September 2018. In order to avoid the "Danger Area," the crucial question for every master is how and where a tropical cyclone will move. What do we call the "Danger Area," and how close can a vessel safely come to a tropical cyclone?
These images taken from the StormGeo Operational Center shows Typhoon Mangkhut first over the Philippine Sea, allowing for an uninterrupted flow of traffic through the South China Sea. Two days later, the cyclone has moved west, causing all traffic to stop.
For every vessel at sea, avoiding the 34 KT wind area of a tropical cyclone is paramount. Any ship in the vicinity of a tropical cyclone should remain clear of the maximum radius of analyzed or forecast 34 KT winds associated with the cyclone. Wind speeds decrease steadily from the storm's eyewall (the area surrounding the eye of a tropical cyclone) out to the storm's outer bands.
For example, if the maximum wind inside a cyclone is 110 knots and winds 120 nautical miles from the eye are 34 knots, the danger area has a radius of 120 nm. This means that the ship should never come 120 nm or less from the tropical cyclone. This danger area around tropical cyclones is rarely symmetrical and can vary within semi-circles or quadrants.
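The arithmetic in the example above can be sketched directly. This is a minimal illustration assuming a hypothetical advisory that reports 34 KT wind radii per quadrant in nautical miles; the quadrant names and values below are made up for demonstration, not taken from a real advisory format.

```python
# Sketch: decide whether a ship is inside a cyclone's danger area, given
# per-quadrant 34-kt wind radii. Because the danger area is rarely symmetrical,
# the conservative danger radius is the maximum radius over all quadrants.

def danger_radius_nm(wind_radii_34kt):
    """Danger-area radius in nm: the largest 34-kt wind radius of any quadrant."""
    return max(wind_radii_34kt.values())

def is_in_danger_area(distance_to_eye_nm, wind_radii_34kt):
    """True if the ship is at or inside the danger radius."""
    return distance_to_eye_nm <= danger_radius_nm(wind_radii_34kt)

# Illustrative advisory values: NE quadrant is the widest at 120 nm.
radii = {"NE": 120, "SE": 90, "SW": 70, "NW": 100}
print(danger_radius_nm(radii))         # 120
print(is_in_danger_area(150, radii))   # False: ship stays clear
print(is_in_danger_area(110, radii))   # True: inside the 34-kt wind area
```

With these numbers, a master should keep the vessel more than 120 nm from the eye, matching the rule of thumb in the text.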
Official meteorological services can provide forecasts and advisories where the danger area is clearly defined, allowing the master to direct the vessel accordingly. However, this information is usually provided in a textual format rather than graphical, leaving the master to draw both the track and the danger area around a cyclone—not always an easy task for an overloaded master.
Typhoon Mangkhut shown in Classic BVS. The danger area (wind > 34 KT) is marked in purple; its radius is 215 nm.
Fortunately, modern tools are available today to provide this graphical representation, such as the BonVoyage System (Classic BVS) and StormGeo's recently launched NaviPlanner BVS. The BVS family provides the only onboard application in the shipping market that graphically displays the danger area around a tropical cyclone based on official meteorological advisories. This means that official information is incorporated into the map with the other meteorological parameters and the vessel's track, within one hour of the official release at the latest.
If the Master has questions or specific requirements for a passage, speaking to a person rather than a computer is often preferred. That's where StormGeo's Route Advisory Services come into play: a shore-based routing service that gives the Master access to global experts who plan more than 5,500 routes every month.
StormGeo Greece organizes BVS workshops on a regular basis in which extra time is devoted to tropical cyclones. Check out dates and availability here.
# selecting pivot_residue in Backrub for ensemble generation

Hello,

I need to generate an ensemble of proteins to capture backbone flexibility in docking. I ran relax with the thorough flag, but the resulting decoys are not that different (RMSD ranges from 0.2 to 0.7). Then I ran Backrub without defining pivot_residue and without a resfile. The resulting Backrub decoys are also similar. Does anyone have suggestions for defining pivot_residue? Or how can I sample conformational space enough to account for backbone flexibility?

(rohi, Fri 2020-08-07)

Without specifying a resfile, the sidechains aren't repacked ("NATRO"), whereas you'd probably want them to change conformation ("NATAA") if you want to assess your protein's plasticity. You might also try pumping the kT up. The default is 0.6 kcal/mol (what you'd get at 25 C in a real system); try 1 kcal/mol (37 C) or higher. It is a statistical-potential value, so the temperature is far from real and you can go much higher.

Backrub is not really an MD run; it is meant for protein design with a flexible backbone, and the end result will be a protein in an energy minimum. A better alternative is the CartesianMD mover: https://www.rosettacommons.org/docs/latest/scripting_documentation/RosettaScripts/Movers/movers_pages/CartesianMD

* It's scripts/pyrosetta only, but has decent documentation.
* It does what you'd expect: it rattles your protein at a given kT, making it drift like a normal MD simulation, except in implicit solvent.
* It is okay with ligands.
* It is Cartesian, not internal-coordinate mode, though.

For pyrosetta here is an example:

pose = pyrosetta.Pose()
params_paths = pyrosetta.rosetta.utility.vector1_string()
params_paths.extend([params_filename])
pyrosetta.generate_nonstandard_residue_set(pose, params_paths)
pyrosetta.rosetta.core.import_pose.pose_from_file(pose, pdb_filename)
md = pyrosetta.rosetta.protocols.md.CartesianMD(pose, pyrosetta.create_score_function('ref2015_cart'))
md.set_ncyc_premin(0)
md.set_ncyc_postmin(0)
md.set_nstep(500)
md.set_temperature(310)
md.use_rattle(True)
md.set_store_trj(True)
md.apply(pose)
ps = md.dump_poses(pose)
pyrosetta.rosetta.core.scoring.CA_rmsd(ps[1], ps[3])
print({'poses': len(md.dump_poses(pose)),
       'kinetic_energy': md.kinetic_energy(),
       'potential_energy': md.potential_energy(),
       'score': pyrosetta.create_score_function('ref2015_cart')(pose),
       'dofs': md.n_dof(),
       'RMSD': pyrosetta.rosetta.core.scoring.CA_rmsd(ps[1], ps[3])})

(matteoferla, Sat 2020-08-08)

Thank you for your help. Just one question: I was trying to run the code above, but I got an error saying "params_filename" is undefined. What is "params_filename"?

(rohi, Sun 2020-08-09)

params_filename is just a string holding the path of a params file (topology file, "ligand.params") required to load the pose. If you have several, add them all to that list; if you have none, skip the second, third and fourth lines. It is equivalent to "-extra_res_fa". Likewise, the string pdb_filename is the filename of your PDB, equivalent to -s.

The code was meant as an example snippet, not a complete script. It lacks the import and init of pyrosetta:

import pyrosetta
pyrosetta.init(extra_options='-no_optH false -mute all -ignore_unrecognized_res true -load_PDB_components false')

And after the run you may want to dump any pose to PDB:

pose.dump_pdb('whatever.pdb')

Additionally you may want to change the settings, say:

md.set_ncyc_premin(200)
md.set_ncyc_postmin(0)
md.set_nstep(50_000)

In this example, as each step is 2 fs, that would be 100 ps. The list ps holds roughly one pose every 100 steps, I think, so you'd have 500 poses, but it is worth checking that detail with a short run.

(matteoferla, Mon 2020-08-10)

I appreciate your help. Based on your response, I wrote the code below:

import pyrosetta
pyrosetta.init(extra_options='-no_optH false -mute all -ignore_unrecognized_res true -load_PDB_components false')
pose = pyrosetta.Pose()
pyrosetta.rosetta.core.import_pose.pose_from_file(pose, 'input.pdb')
md = pyrosetta.rosetta.protocols.md.CartesianMD(pose, pyrosetta.create_score_function('ref2015_cart'))
md.set_ncyc_premin(200)
md.set_ncyc_postmin(0)
md.set_nstep(200)
md.set_temperature(310)
md.use_rattle(True)
md.set_store_trj(True)
md.apply(pose)
ps = md.dump_poses(pose)
pyrosetta.rosetta.core.scoring.CA_rmsd(ps[1], ps[3])
print({'poses': len(md.dump_poses(pose)),
       'kinetic_energy': md.kinetic_energy(),
       'potential_energy': md.potential_energy(),
       'score': pyrosetta.create_score_function('ref2015_cart')(pose),
       'dofs': md.n_dof(),
       'RMSD': pyrosetta.rosetta.core.scoring.CA_rmsd(ps[1], ps[3])})
pose.dump_pdb('cartesianMD_output.pdb')

I have some other questions:

1. I got an error with the line "pyrosetta.rosetta.core.scoring.CA_rmsd(ps[1], ps[3])" saying "IndexError". Could you talk me through this line? I know Python, but I am only a little familiar with PyRosetta.
2. About dumping a pose to PDB, is it fine to add pose.dump_pdb('cartesianMD_output.pdb') at the end of the code?
3. Apart from the link provided above, is there any paper that explains CartesianMD in more detail?

(rohi, Mon 2020-08-10)

The author, as stated with the code, is Hahnbeom Park, and I believe https://www.pnas.org/content/115/12/3054 is the reference: an MD run done for homology modelling, so not a typical MD run as you'd do with Gromacs, but it basically produces an MD trajectory like Gromacs would, with implicit solvent.

Sorry for being cryptic: ps = md.dump_poses(pose) should have been named poses. It is a list-like vector of poses; list(poses) gives a normal list. PyRosetta objects use "vectors", which notably start from 1. Each entry is a pose, which can be saved with .dump_pdb, so pose.dump_pdb('cartesianMD_output.pdb') saves the last, while poses[1].dump_pdb('cartesianMD_output.pdb') saves the first, say.

The RMSD call was just to see how much the structure deviated; you can do whatever combination. The code I copied from must have returned at least 3 poses, while your run returned fewer. For example, you could analyse the trajectory however you fancy:

import pandas as pd

pd.DataFrame({f'#{i}': {f'#{j}': pyrosetta.rosetta.core.scoring.CA_rmsd(poses[i], poses[j])
                        for j in range(1, 1 + len(poses))}
              for i in range(1, 1 + len(poses))})

If you are in need of a good control, I'd recommend 1L2Y, the Trp-cage miniprotein, which is 20 amino acids but is an NMR ensemble, which needs to be read in a particular way:

original = pyrosetta.toolbox.rcsb.pose_from_rcsb('1L2Y')  # this is a mess of 720...
# usable test bed
pose = pyrosetta.rosetta.protocols.grafting.return_region(original, 1, 20)
# ensemble
poses = pyrosetta.rosetta.utility.vector1_core_pose_Pose()
n = [pyrosetta.rosetta.protocols.grafting.return_region(original, 1 + i, 20 + i)
     for i in range(0, original.total_residue(), 20)]
poses.extend(n)

(matteoferla, Tue 2020-08-11)

Hello, I also tried to use Backrub with high mc_kt (mc_kt=1) and NATAA for all residues (to repack all residues), but the resulting decoys are still too similar (RMSD=0).

(rohi)
The regularity of Special Legendrian Integral Cycles
Bellettini, Costante; Rivière, Tristan (ETH Zürich, Departement Mathematik)
Annali della Scuola Normale Superiore di Pisa - Classe di Scienze, Serie 5, Volume 11 (2012) no. 1, pp. 61-142.
Classification: 49Q15, 32Q25
http://www.numdam.org/item/ASNSP_2012_5_11_1_61_0/

Special Legendrian Integral Cycles in $S^5$ are the links of the tangent cones to Special Lagrangian integer multiplicity rectifiable currents in Calabi-Yau 3-folds. We show that Special Legendrian Cycles are smooth except possibly at isolated points.
\"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-49\/segments\/1669446710813.48\/warc\/CC-MAIN-20221201121601-20221201151601-00501.warc.gz\"}"}
Q: Exception in thread "main" scala.ScalaReflectionException: is not a term I am trying to write a Seq[(String, Double)] data from Spark to Cassandra DB, e.g., Seq(("re", 1.0), ("im", 2.0)) to Cassandra. But there is an exception as follows:
Exception in thread "main" scala.ScalaReflectionException: <none> is not a term
at scala.reflect.api.Symbols$SymbolApi$class.asTerm(Symbols.scala:199)
at scala.reflect.internal.Symbols$SymbolContextApiImpl.asTerm(Symbols.scala:84)
.....
The Spark code is as follows:
def main(args: Array[String]) {
// omit some code
val rawTRLStream = KafkaUtils.createDirectStream[String, Array[Byte], StringDecoder, DefaultDecoder](ssc, kafkaParams, topics)
val parsedTRLStream = rawTRLStream.map {
case (_, inputStreamData) =>
//--- do something next
//....
val seq : Seq[(String, Double)]= Seq (("re", 1.0), ("im", 2.0))
seq
}
implicit val rowWriter = SqlRowWriter.Factory // This is a suggestion from the web, but it does not help with this problem.
parsedTRLStream.saveToCassandra("simple_avro_data", "simple_avro_data")
//Kick off
ssc.start()
ssc.awaitTermination()
ssc.stop()
}
The Cassandra schema is as follows:
CREATE TABLE simple_avro_data (
re double,
im double,
PRIMARY KEY ((re), im)
) WITH CLUSTERING ORDER BY (im DESC);
I also tried the next suggestion from scala.ScalaReflectionException: <none> is not a term
val seq = (("re", 1.0), ("im", 2.0))
This removes the exception ".... is not a term", but it introduces another exception:
com.datastax.spark.connector.types.TypeConversionException: Cannot convert object (re,1.0) of type class scala.Tuple2 to java.lang.Double.
Does anyone know how to solve the problem?
Thanks,
A: Ensure you are setting default values if your expected values are missing or null.
For example:
We can see the problem a little better if we use SomeColumns and if we look at code that works and then code that will always throw an exception with bad input data.
The following code works safely by setting the request with data or else with error codes for 3 columns.
val lines = ssc.socketTextStream(host , port)
// create requests from socket stream
val requests = lines.map(x => {
val matcher:Matcher = pattern.matcher(x) // using a regex matcher on the values
if (matcher.matches()) { // have matches
val ip = matcher.group(1)
val request = matcher.group(5)
val status = matcher.group(6).toInt
(ip, request, status) // create the request
} else {
("error", "error", 0) // no matches then create an error requests
}
})
requests.foreachRDD((rdd, time) => {
rdd.cache()
rdd.saveToCassandra(keyspace, table, SomeColumns("ip", "request", "status")) // save what we put into the request
})
When I added a column to my requests, the database, and the saveToCassandra method, but did not add it to the else branch, I got the exception "scala.ScalaReflectionException: <none> is not a term" whenever the data in my stream wasn't what I expected and the else request was created:
val lines = ssc.socketTextStream(host , port)
// create requests from socket stream
val requests = lines.map(x => {
val matcher:Matcher = pattern.matcher(x) // using a regex matcher on the values
if (matcher.matches()) { // have matches
val ip = matcher.group(1)
val request = matcher.group(5)
val status = matcher.group(6).toInt
(ip, request, status, agent) // create the request + ADDITIONAL agent column
} else {
("error", "error", 0) // Will get an exception if this is created without 4 columns
}
})
requests.foreachRDD((rdd, time) => {
rdd.cache()
rdd.saveToCassandra(keyspace, table, SomeColumns("ip", "request", "status", "agent")) // save what we put into the request
})
I needed to add a default value for the new column agent to ensure I was always sending data for 4 columns
else {
("error", "error", 0, "error")
}
Ensure your parsedTRLStream is always populated when mapping the stream. The rdd will complain if it's trying to save something from nothing.
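To see the shape mismatch from the original question in plain Scala (this sketch assumes the table columns are re and im; byName and row are illustrative names, not connector API):

```scala
// The table has columns (re double, im double), so each stream element
// should be one row-shaped value such as a (Double, Double) tuple --
// not a Seq of ("name", value) pairs, which the connector then tries
// to interpret element by element.
val named: Seq[(String, Double)] = Seq(("re", 1.0), ("im", 2.0))
val byName: Map[String, Double] = named.toMap
val row: (Double, Double) = (byName("re"), byName("im")) // (1.0, 2.0)
```

A DStream of such tuples can then be saved with saveToCassandra(keyspace, table, SomeColumns("re", "im")).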
I hope this helps explain the exception a little better.
\section{Introduction} \label{sec:intro}
For a continuous map $f\co X\to Y$ between topological spaces,
we define the \textit{multiplicity} of $f$ as \ $\max_{y\in Y}
|f^{-1}(y)|$, and the \textit{minimal multiplicity} of $f$ as the
minimal multiplicity of maps homotopic to $f$, that is
$$
\mathrm {MMR}[f]\,:=\,\min_{g\simeq f} \max_{y\in Y} |g^{-1}(y)|.
$$
From now on, $\simeq$ means that the mappings are homotopic. The
problem of determining $\mathrm {MMR}[f]$ arises. This problem is closely
related to the \textit{self-intersection problem} of determining the
\textit{minimal self-intersection number} (see Bogatyi, Kudryavtseva and Zieschang \cite{BKZ2,BKZ3})
$$
\mathrm {MI}[f]\,:=\, \min_{g\simeq f} |\int(g)| , \quad
\int(g)\,:=\,\{(x,y)\in X\times X\,|\, x\ne y,\ g(x)=g(y)\} /
\Sigma_2
$$
(here $\Sigma_2$ is the symmetric group in two symbols, which acts
on $X\times X$ by permutations of the coordinates), and to the
problem of determining the \textit{minimal (unordered) $\mu$--tuple
self-intersection number}
$$
\mathrm {MI}_\mu[f]\,:=\, \min_{g\simeq f} |\int_\mu(g)| , \quad
\int_\mu(g)\,:=\,\{I\subset X \,|\, |I|=\mu,\ |g(I)|=1\}, \quad
\mu\ge 2.
$$
Clearly, $\mathrm {MI}[f]=\mathrm {MI}_2[f]$, and one easily shows\footnote{(Indeed,
take a map $g\simeq f$ such that $\mathrm {MI}_\mu[f]=|\int_\mu(g)|=:\ell$.
We can assume that $\ell<\infty$. Then
$\smash{\ell=\sum_{\smash{i\ge\mu}}\sum_{\smash{y\in Y,\ |g^{-1}(y)|=i}} {i\choose\mu}}$.
Hence, for every nonvanishing summand in this sum, one has
$\smash{{i\choose\mu}\le\ell}$ and
$$\textstyle\smash{{i\choose\mu+1}={i\choose\mu}\frac{i-\mu}{\mu+1}
<{i\choose\mu}\frac{i}{\mu} \le {i\choose\mu}^{2}\le\ell{i\choose\mu}}.$$
Therefore $\textstyle\mathrm {MI}_{\mu+1}[f]\le|\int_{\mu+1}(g)|
=\smash{\sum_{i>\mu}\sum_{y\in Y,\ |g^{-1}(y)|=i} {i\choose\mu+1}}$,
which is at most $\ell
\smash{\sum_{i>\mu} \sum_{y\in Y,\ |g^{-1}(y)|=i} {i\choose\mu}\le
\ell^2}$.)} that $\mathrm {MI}_{\mu+1}[f]\le(\mathrm {MI}_\mu[f])^2$, $\mu\ge2$.
The
connection between $\mathrm {MMR}[f]$ and $\mathrm {MI}_\mu[f]$ is illustrated by the
following properties:
$$
\mathrm {MI}_\mu[f]=0 \ \iff \ \mathrm {MMR}[f]<\mu \qquad \mbox{and} \qquad
\mathrm {MI}_\mu[f]>0 \ \iff \ \mathrm {MMR}[f]\ge\mu.
$$
In particular, $\mathrm {MI}[f]=0$ if and only if $\mathrm {MMR}[f]=1$. The numbers
$\mathrm {MMR}[f]$, $\mathrm {MI}[f]$, and $\mathrm {MI}_\mu[f]$, measure, in a sense,
``complexity'' of the self-intersection set $\int(f)$.
It is natural to consider the above problem for maps
$f\co M^m\to \mathbb {N}^n$ between closed connected (nonempty)
smooth manifolds, where $m=\dim M$, $n=\dim \mathbb {N}$. The problem is
nontrivial for $0<m\le n\le 2m$.
Hurewicz~\cite{Hur} proved that, if $X$ is an $m$--dimensional
compact metric space and $m+1\le n \le 2m$, then any continuous
map $f\co X\to\mathbb R^n$ can be deformed, by means of an arbitrary
small perturbation, to a map $g\co X\to\mathbb R^n$ of multiplicity
$\le [\frac{n}{n-m}]$. A similar assertion is also valid if the
Euclidean space $\mathbb R^n$ is replaced by an arbitrary smooth manifold
$\mathbb {N}^n$. Thus, for $m<n\le2m$, we have
\begin{equation}
\refstepcounter{Thm}
\label {eq:codim>0}
\mathrm {MMR}[f]\le \left[ \frac{n}{n-m} \right].
\tag{\hbox{\bf\theThm}}
\end{equation}
This inequality follows by observing that, for a ``generic'' map
$g\co M\to \mathbb {N}$, the set $\int_{\mu+1}(g)\subset M$ has
dimension $(\mu+1)m-\mu n$, which is negative (and, thus,
$\mathrm {MMR}[f]\le\mu$) if $\mu>\frac{m}{n-m}$.
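For instance, for a ``generic'' map of a surface into a $3$--manifold
($m=2$, $n=3$), the triple-point set $\int_3(g)$ has dimension
$3\cdot2-2\cdot3=0$ while the quadruple-point set $\int_4(g)$ has
dimension $4\cdot2-3\cdot3<0$, so that
$$
\mathrm {MMR}[f]\le \left[ \frac{3}{3-2} \right]=3.
$$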
The special case $n=2m$ is the classical self-intersection problem
which goes back to Whitney's work~\cite{Wh}. Here the
estimation~\eqref{eq:codim>0} gives $\mathrm {MMR}[f]\in\{1,2\}$, and
computing $\mathrm {MMR}[f]$ is equivalent to deciding whether $\mathrm {MI}[f]=0$,
ie\ whether the map $f$ is homotopic to an embedding. Namely, we
have $\mathrm {MMR}[f]=1$ if $\mathrm {MI}[f]=0$, and $\mathrm {MMR}[f]=2$ if $\mathrm {MI}[f]>0$. A
useful tool for deciding whether $\mathrm {MI}[f]=0$ is the \textit{Nielsen
self-intersection number} $\mathrm {NI}[f]$ of $f$~\cite{BKZ2,BKZ3}.
One can show by using the Whitney trick~\cite {Wh} that
$\mathrm {MI}[f]=\mathrm {NI}[f]$ if $m\ge3$. But, if $m\le2$, one has only the
inequality $\mathrm {MI}[f]\ge\mathrm {NI}[f]$ (see~our papers with Zieschang \cite{BKZ2,BKZ3} for
$m=1$). For $m=1$, there are several combinatorial and geometric
methods for deciding whether a closed curve on a surface is
homotopic to a simple closed curve (see, for example,~Gon\c calves, Kudryavtseva and Zieschang \cite {GKZ3}
and references therein). An answer in terms of the Nielsen
self-intersection number is given in \fullref{thm:curve}. In
the remaining case $m=2$, we only know
that $\mathrm {NI}[f]>0$ implies $\mathrm {MI}[f]>0$ (and thus $\mathrm {MMR}[f]=2$), but the question
whether $\mathrm {NI}[f]=0$ implies $\mathrm {MI}[f]=0$ is still open.
The present paper studies the number $\mathrm {MMR}[f]$ mainly in the case
$m=n\le2$. Here $\mathrm {MMR}[f]$ is closely related to the \textit{absolute
degree} $A(f)$ (as defined in~Hopf~\cite{H} or~Epstein~\cite{Ep}; see
also Kneser~\cite{K}, Olum~\cite{Ol} and Skora~\cite{Sk}) of the map $f$. A definition of
the absolute degree is also given in Definition 3.7 of the paper by Gon\c calves, Kudryavtseva and Zieschang~\cite{gkz} in this
volume. \fullref{thm:circle}
computes the number $\mathrm {MMR}[f]$ for a self-mapping $f$ of a circle
($m=n=1$). In the case $m=n=2$ (mappings between closed
surfaces), the following results are obtained. We calculate
$\mathrm {MMR}[f]$ in terms of $A(f)$,
$\ell(f)\,:=\,[\pi_1(\mathbb {N}):f_\#\pi_1(M)]$, and the Euler
characteristics of the surfaces, for any map $f\co M\to \mathbb {N}$
with $A(f)>0$ (\mbox{\fullref{thm:cover}} and \mbox{\fullref {thm:pinch}}). We
also estimate $\mathrm {MMR}[f]$ for any map $f$ with $A(f)=0$
(\mbox{\fullref{thm:A=0}}). In particular, we prove that
\begin{align*}
\mathrm {MMR}[f]&\in\{A(f),A(f)+2\} &\mbox{if} \quad A(f)>0,
\\
\mathrm {MMR}[f]&\in\{2,3,4\} &\mbox{if} \quad A(f)=0.
\end{align*}
The authors do not know whether $\mathrm {MMR}[f]\ge A(f)$ if $m=n\ge3$.
\subsubsection*{Acknowledgements} This work was partially completed
during the visit of the first and the third authors in June--July
2005 at the Fakult\"at f\"ur Mathematik, Universit\"at Siegen,
Deutschland. The visits have been supported by the FIGS-project
``Niedrigdimensionale Topologie und geometrisch-topologische
Methoden in der Gruppentheorie'' (1st author), by the Grant of the
President RF, project NSh--4578.2006.1, and by a PIMS Fellowship
(3rd author).
\section[Computing MMR for mappings of a circle]{Computing $\mathrm {MMR}[f]$ for mappings of a circle} \label{sec:1}
Any map $f\co S^1\to \mathbb {N}$ with $\dim \mathbb {N}\ge3$ is homotopic to an
embedding, thus $\mathrm {MMR}[f]=1$. Consider the cases $\dim \mathbb {N}=1,2$.
\begin{Thm} \label{thm:circle}
For any self-map $f\co S^1\to S^1$,
$$
\mathrm {MMR}[f] = \left\{ \begin{array}{ll} |\deg f|, & \deg f\ne0, \\
2, & \deg f=0. \end{array} \right.
$$
\end{Thm}
\Proof We will identify the circle $S^1$ with the unit circle in the
complex plane $\mathbb C$. Consider the projection $p\co \mathbb R\to S^1$,
$p(r)=e^{2\pi ir}$, $r\in\mathbb R$, of the universal covering $\mathbb R$ of
$S^1$ to $S^1$.
Suppose that \ $\deg f\ne0$. Then $f$ is homotopic to the map
sending $z\mapsto z^{\deg f}$, $z\in S^1$. Thus all points have
exactly $|\deg f|$ preimages, hence $\mathrm {MMR}[f]\le|\deg f|$. Let us
show that the number of preimages can not be reduced. Since \ $\deg
f\ne0$, for every point $s\in S^1$ there exists a point $t\in S^1$
such that $f(t)=s$. Let $r_0\in\mathbb R$ be a point such that $p(r_0)=t$.
Consider a lifting $\tilde f\co \mathbb R\to\mathbb R$ of $f\co S^1\to
S^1$. Then $\tilde f(r_0+1)=\tilde f(r_0)+\deg f$, so by the
Intermediate Value Theorem, there exist points $r_1,\dots,r_{|\deg
f|-1}\in(r_0,r_0+1)$ such that $\tilde f(r_i)=\tilde
f(r_0)+j\,\mathrm {sgn}(\deg f)$, $1\le j\le |\deg f|-1$. Thus
$p(r_0),p(r_1),\dots,p(r_{|\deg f|-1})$ are different preimages of
$s$ under the mapping $f$. This shows $\mathrm {MMR}[f]\ge|\deg f|$.
Suppose that \ $\deg f=0$. Let us show that there exists $g\simeq f$
with $|g^{-1}(s)|\le2$ for any $s\in S^1$. Indeed, take $g$ to be
the map given by the following rule: $g(z)=z$ if $\Im z\ge0$,
$g(z)=\bar z$ if $\Im z\le0$. It remains to show that for any
$f\co S^1\to S^1$, $\deg f=0$, there exists a point $s\in S^1$
with $|f^{-1}(s)|\ge2$. Such a map $f$ lifts to a map $\bar
f\co S^1\to\mathbb R$, thus it is enough to show that $\bar f$ is not
an embedding. This can be easily deduced by taking two points
$s_0,s_1\in S^1$ with $\bar f(s_0)=\min_{s\in S^1}\bar f(s)$,
$\bar f(s_1)=\max_{s\in S^1}\bar f(s)$, and applying the Intermediate
Value Theorem to the restriction of $\bar f$ to two segments in
$S^1$ having endpoints at $s_0,s_1$. \end{proof}
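For example, for the map $f(z)=z^3$ every point $s\in S^1$ has exactly
the three preimages $f^{-1}(s)=\{\zeta\in S^1 \,|\, \zeta^3=s\}$, so the
multiplicity $3=|\deg f|$ is attained by this representative and, by the
theorem, cannot be lowered within the homotopy class.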
Consider a closed curve $f\co S^1\to \mathbb {N}^2$ on a closed surface
$\mathbb {N}^2$. Then computing $\mathrm {MMR}[f]$ is equivalent to deciding whether
the homotopy class $[f]$ of the curve $f$ contains a simple closed
curve. Namely, $\mathrm {MMR}[f]=1$ if $[f]$ contains a simple curve, and
$\mathrm {MMR}[f]=2$ otherwise.
\begin{Thm}{\rm \cite{BKZ2,BKZ3}}\qua \label{thm:curve}
A closed curve $f\co S^1\to \mathbb {N}^2$ on a closed surface $\mathbb {N}^2$ is
homotopic to a simple closed curve if and only if $\mathrm {NI}[f]=0$ and one
of the following conditions is fulfilled: the curve $f$ is not
homotopic to a proper power of any closed curve on $\mathbb {N}$, or $f\simeq
g^2$ for some orientation-reversing closed curve $g\co S^1\to
\mathbb {N}$.\hfill
\qedsymbol
\end{Thm}
An analogue of \fullref{thm:curve} was proved by Turaev and
Viro~\cite[Corollary~II]{TV}, in terms of the intersection index
introduced therein.
\section {$\mathrm {MMR}(f)$ for maps of positive degree between surfaces}
\label {sec:A>0}
In the following, $M=M^2$ and $\mathbb {N}=\mathbb {N}^2$ are arbitrary connected
closed surfaces, ie\ $2$--dimensional manifolds. By $\chi(M)$, we
denote the Euler characteristic of $M$. For a continuous mapping
$f\co M\to \mathbb {N}$, $A(f)$ denotes its \textit{absolute degree}
(see Hopf~\cite{H}, Epstein \cite{Ep}, Kneser \cite{K}, Olum \cite{Ol}, Skora \cite{Sk} or
Gon\c calves, Kudryavtseva and Zieschang \cite{GKZ1}). Denote the index of the image of the fundamental group
of $M$ in the fundamental group of $\mathbb {N}$ by
$\ell(f)\,:=\,[\pi_1(\mathbb {N},f(x_0)):f_\#(\pi_1(M,x_0))]$ for some
$x_0\in M$. Actually the number $\ell(f)$ does not depend on the
choice of the point $x_0$.
The following consequence of Kneser's inequality will be central in
the proof of our main result.
\begin{Pro} \label{pro:Kneser}
If $f\co M\to \mathbb {N}$ has absolute degree $d=A(f)>0$ then there are
at most $d\cdot\chi(\mathbb {N})-\chi(M)$ points in $\mathbb {N}$ whose preimages
have cardinality $\le d-1$. Moreover, if pairwise different points
$y_1,\dots,y_r$ of $\mathbb {N}$ have $\mu_1,\dots,\mu_r$ preimages,
respectively, then
$$
d\cdot\chi(\mathbb {N})\ge\chi(M)+\sum_{i=1}^r (d-\mu_i).
$$
\end{Pro}
\Proof In the case when $r=1$ and $f$ is orientation-true, the
latter inequality was proved in Theorem~2.5~(a) of~\cite{GKZ1}. In
the general case, the inequality can be proved using the techniques
in~\cite{BGKZ,GKZ1,GZ,BGZ}, as follows.
If $f$ is not orientation-true and $d=A(f)>0$ then $d=\ell(f)$, due
to the result of Kneser~\cite{K1928,K}. On the other hand,
one has $\mu_i\ge\ell(f)$, $1\le i\le r$, since the map $f$ admits a
lifting $\hat f\co M\to\hat N$ such that $f=p\circ\hat f$, where
$p\co \hat N\to N$ is an $\ell(f)$--fold covering corresponding
to the subgroup $f_\#(\pi_1(M} %{N,x_0))$ of $\pi_1(\mathbb {N},f(x_0))$, and
$A(\hat f)=1$, hence $\hat f$ is surjective. Therefore
$\sum_{i=1}^r(d-\mu_i)\le0$. This, together with the Kneser
inequality~\cite {K}, $d\cdot\chi(\mathbb {N})\ge\chi(M)$, implies the
desired inequality.
If $f$ is orientation-true, one proceeds as in the proof of
Proposition~2.5~(a) of~\cite {GKZ1}, where one replaces the single
point $y_0\in\mathbb {N}$ by the set of $r$ points $y_1,\dots,y_r$. More
specifically, by applying a suitable deformation, one can assume
that there are small pairwise disjoint disks $D_i,D_{ij}$, $1\le
i\le r$, $1\le j\le \mu_i$, around the points $y_i$ of $\mathbb {N}$ and the
points of $f^{-1}(y_i)$ such that $f^{-1}(\mathring{D}_i) =
\bigcup_{j=1}^{\mu_i} \mathring{D}_{ij}$, and $f|_{D_{ij}}$ is a branched
covering of type $\smash{z\mapsto z^{d_{ij}}}$ for some positive integer
$d_{ij}$. Therefore the complement of these open disks are two
compact surfaces $F\subset M$, $G\subset \mathbb {N}$ such that the
restriction of $f$ induces a proper map carrying the boundary into
the boundary, $f|_{F}\co (F, \partial F) \to (G, \partial G)$.
By Proposition~1.6 of~\cite{GKZ1} (or by a more general Theorem~4.1
of~\cite{Sk}), $\chi(F) \leq A(f)\cdot \chi(G)$. This, together with
$\chi(F)=\chi(M)-\sum_{i=1}^r\mu_i$, $\chi(G)=\chi(\mathbb {N})-r$, gives
the desired inequality. \end{proof}
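As an illustration, let $M$ be the orientable surface of genus $2$
(so $\chi(M)=-2$), let $\mathbb {N}$ be the torus ($\chi(\mathbb {N})=0$), and let
$d=A(f)=2$. Then at most $d\cdot\chi(\mathbb {N})-\chi(M)=2$ points of $\mathbb {N}$
have preimages of cardinality $\le1$, and the second inequality reads
$$
0 \;\ge\; -2+\sum_{i=1}^r (2-\mu_i), \qquad\mbox{ie}\qquad
\sum_{i=1}^r (2-\mu_i)\le 2 .
$$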
\begin{Thm} \label{thm:cover}
Suppose that $f\co M\to \mathbb {N}$ has absolute degree \ $d=A(f)>0$.
If \ $\ell(f)\ne d$, or \ $\ell(f)=d$ \ and \
$d\cdot\chi(\mathbb {N})=\chi(M)$, then \ $\mathrm {MMR}[f]=d$.
\end{Thm}
\Proof The inequality $\mathrm {MMR}[f]\ge A(f)$ follows from the first part
of \fullref {pro:Kneser}.
Let us show the converse inequality, $\mathrm {MMR}[f]\le A(f)$. It follows
from~\cite {Ed,Sk,K}, respectively, that the
mapping $f$ is homotopic to a $d$--fold covering which is branched in
the first case and unbranched in the second case. Thus, we found a
mapping which is homotopic to $f$, and the preimage of any point of
$N$ has cardinality $\le d$. \end{proof}
\begin{Thm} \label{thm:pinch}
Suppose that $f\co M\to \mathbb {N}$ has absolute degree \ $d=A(f)>0$.
If \ $\ell(f)=d$ \ and \ $d\cdot\chi(\mathbb {N})\ne\chi(M)$, then \
$\mathrm {MMR}[f]=d+2$.
\end{Thm}
\Proof \textbf{Case 1}\qua Suppose that $d=A(f)=1$. It follows from~\cite {Ed,Sk}
that the mapping $f$ is homotopic to a pinching map where
the pinched subsurface $M'\subset M$, $\partial M'\simeq S^1$, is
different from the $2$--disk $D^2$ (here the natural projection $M\to
M/M'$ is called a pinching map).
Let us show that such a pinching map is homotopic to a map $g$ of
multiplicity $\le3$. For this, we construct a proper continuous map
$g'\co (M',\partial M')\to (D^2,\partial D^2)$ whose
restriction to $\partial M'$ is a homeomorphism, and whose
multiplicity equals $3$. Such a map $g'$ is shown in \fullref{figure1}. We may
identify $\mathbb {N}$ with the surface which is obtained by gluing
$M\setminus \mathring{M'}$ and $D^2$ by means of the aforementioned
homeomorphism of the boundary circles, where $\mathring{M'}$ denotes the
interior of $M'$. Define $g\co M\to \mathbb {N}$ as $g|_{M\setminus
M'}=\mathrm {id}_{M\setminus M'}$ and $g|_{M'}=g'$. Clearly, $f\simeq g$,
since $g'$ is homotopic relative boundary to a pinching map. In
Case~2 below, we will use the following property of the constructed
map $g$: its restriction to the preimage of the complement
$\mathbb {N}\setminus D^2$ of a disk is injective.
\begin{figure}[ht!]
\setlength{\unitlength}{9pt}
\begin{center}
\begin{picture} (10,11.5)(-5,-1.5)
\small
\put(-8,-2){
\put(0,-5.4){ \thicklines \qbezier[80](-3.7,8.9)(-3.3,9)(-2.85,9.09)
\qbezier[300](-1.55,9.25)(0,9.45)(1.55,9.25)
\qbezier[80](3.7,8.9)(3.3,9)(2.85,9.09)
\qbezier[400](-3.7,8.9)(-6.3,8)(-3.7,7.1)
\qbezier[400](3.7,8.9)(6.3,8)(3.7,7.1)
\qbezier[400](-3.7,7.1)(0,6.2)(3.7,7.1) }
\thinlines \put(-2.2,1){ \qbezier[200](-1,4)(-.3,2.7)(-.88,1.8)
\qbezier[200](-1,4)(-2.32,7)(-1.28,10)
\qbezier[200](-1.28,10)(0,12)(1.28,10)
\qbezier[200](1.28,10)(2.32,7)(1,4)
\qbezier[200](1,4)(.3,2.7)(.88,1.8) }
\put(-2.2,3.5){ \qbezier[50](-.8,7)(0,6.2)(.8,7)
\qbezier[50](-.5,6.8)(0,7.2)(.5,6.8) } \put(-2.2,1.6){
\qbezier[50](-.8,7)(0,6.2)(.8,7)
\qbezier[50](-.5,6.8)(0,7.2)(.5,6.8) } \put(-2.2,-.3){
\qbezier[50](-.8,7)(0,6.2)(.8,7)
\qbezier[50](-.5,6.8)(0,7.2)(.5,6.8) }
\put(2.2,1){ \qbezier[200](-1,4)(-.3,2.7)(-.88,1.8)
\qbezier[200](-1,4)(-2.32,7)(0,9) \qbezier[200](0,9)(2.32,7)(1,4)
\qbezier[200](1,4)(.3,2.7)(.88,1.8)
\put(0,9) {\line(0,-1){2.9}} \put(0,6.1){
\qbezier[30](0,0)(-.4,-.35)(-.8,-.4)
\qbezier[30](-.8,-.4)(-1.4,-.4)(-1.48,0)
\qbezier[7](0,0)(-.5,.35)(-.8,.4)
\qbezier[8](-.8,.4)(-1.4,.4)(-1.48,0)
\qbezier[30](0,0)(.4,-.35)(.8,-.4)
\qbezier[30](.8,-.4)(1.4,-.4)(1.48,0)
\qbezier[7](0,0)(.5,.35)(.8,.4) \qbezier[8](.8,.4)(1.4,.4)(1.48,0) }
\put(0,7.5){
\qbezier[30](0,0)(-.3,-.25)(-.6,-.3)
\qbezier[30](-.6,-.3)(-1.06,-.3)(-1.14,0)
\qbezier[5](0,0)(-.3,.25)(-.6,.3)
\qbezier[6](-.6,.3)(-1.06,.3)(-1.14,0)
\qbezier[30](0,0)(.3,-.25)(.6,-.3)
\qbezier[30](.6,-.3)(1.06,-.3)(1.14,0)
\qbezier[5](0,0)(.3,.25)(.6,.3) \qbezier[6](.6,.3)(1.06,.3)(1.14,0)
} } } \put(-1,4){$\longrightarrow$}
\put(8,4)
{ \thicklines \qbezier[400](-3.1,3.1)(0,5.7)(3.1,3.1)
\qbezier[300](-3.1,3.1)(-5.7,0)(-3.1,-3.1)
\qbezier[300](3.1,3.1)(5.7,0)(3.1,-3.1)
\qbezier[400](-3.1,-3.1)(0,-5.7)(3.1,-3.1) \put(.3,-5.2){
\thinlines \put(-1.7,1){ \qbezier[200](-.6,2.7)(-.2,1.8)(-.4,1.2)
\qbezier[200](-.6,2.7)(-1.54,4.7)(-.84,6.66)
\qbezier[200](-.84,6.66)(0,8)(.84,6.66)
\qbezier[200](.84,6.66)(1.54,4.7)(.6,2.7)
\qbezier[200](.6,2.7)(.2,1.8)(.4,1.2)
\qbezier[5](-.4,1.2)(0,1.45)(.4,1.2) }
\put(-1.7,7.3){\circle{.6}} \put(-1.7,6){\circle{.6}}
\put(-1.7,4.7){\circle{.6}}
\put(1.7,1.5){ \qbezier[200](-.66,2.7)(-.2,1.8)(-.4,1.2)
\qbezier[200](-.66,2.7)(-1.54,4.7)(0,6)
\qbezier[200](0,6)(1.54,4.7)(.66,2.7)
\qbezier[200](.66,2.7)(.2,1.8)(.4,1.2)
\qbezier[5](-.4,1.2)(0,1.4)(.4,1.2)
\put(0,6) {\line(0,-1){2}} } } }
\end{picture}
\end{center}
\caption{A proper map \ $g'\co M'\to D^2$ \ of
multiplicity 3}
\label{figure1}
\end{figure}
It follows from the inequality of Euler characteristics of $M$ and
$\mathbb {N}$ that $f$ is not homotopic to an embedding. (Indeed, otherwise
such an embedding $g$ is a homeomorphism onto $g(M)$; it follows from Brouwer's
Theorem on Invariance of Domain~\cite{Brouwer} that $g$ is
surjective and, therefore, it is a homeomorphism.) Suppose that $f$
is homotopic to a map $g\co M\to \mathbb {N}$ of multiplicity 2; we will
show that this leads to a contradiction. Let $y\in \mathbb {N}$ be a point
with $g^{-1}(y)=\{x_1,x_2\}$. Then the local degree of $g$ at each
of the points $x_1$ and $x_2$ is defined modulo~2, and
$$
\deg(g,x_1)+\deg(g,x_2)\equiv A(g)\equiv A(f)\equiv1 \mod 2.
$$
Without loss of generality, we may assume that $\deg(g,x_1)\ne0$.
This implies that the image of any neighbourhood of $x_1$ contains a
neighbourhood of $y=g(x_1)$, since otherwise one could construct a
map $F\co D^2\to S^1$ with $\deg(F|_{\partial
D^2})=\deg(g,x_1)\ne0$. Therefore the restriction of $g$ to an
appropriate neighbourhood of $x_2$ is injective and, thus (by
Brouwer's Theorem on Invariance of Domain~\cite{Brouwer}), is a
homeomorphism onto a neighbourhood of $y$. This implies that
$\deg(g,x_2)=\pm1$. Similar arguments show that $\deg(g,x_1)=\pm1$,
a contradiction.
\textbf{Case 2}\qua Suppose that $d=A(f)=\ell(f)\ge2$. Let us construct a map
$g$ which is homotopic to $f$ and has multiplicity $A(f)+2$.
Consider a covering $p\co \tilde{\mathbb {N}}\to \mathbb {N}$ which corresponds to
the subgroup $f_\#(\pi_1(M,x_0))$ of $\pi_1(\mathbb {N},f(x_0))$. So, this
is an $\ell(f)$--fold covering. Let $y\in \mathbb {N}$ be an arbitrary point
and $D$ a small closed neighbourhood which is homeomorphic to the
disk $D^2$. Let $D_1,\dots,D_d$ be the connected components of
$p^{-1}(D)$.
Let $\tilde f\co M\to\tilde{\mathbb {N}}$ be a lifting of $f$. Then
$A(\tilde f)=\ell(\tilde f)=1$. By Case~1, there exists a map
$\tilde g\co M\to\tilde{\mathbb {N}}$ which is homotopic to $\tilde f$
and has multiplicity $\le3$. Then the map $g\,:=\,p\circ\tilde g$ is
homotopic to $f=p\circ\tilde f$. By Case~1, we may also assume that
$\tilde g$ is injective on $\tilde g^{-1}(\tilde{\mathbb {N}} \setminus D_1)$.
Therefore the map $g$ has multiplicity $\ell(f)+2=A(f)+2$.
Let us show that the multiplicity of $f$ is $\ge\ell(f)+2$. Let
$\tilde f\co M\to\tilde{\mathbb {N}}$ be a lifting of $f$ to this
$\ell(f)$--fold covering, thus $A(\tilde f)=\ell(\tilde f)=1$. By
Case~1, there exists a point $\tilde y\in\tilde{\mathbb {N}}$ whose preimage
under $\tilde f$ has cardinality $\ge3$. Since $A(\tilde f)>0$,
every point of $p^{-1}(p(\tilde y))$ has a nonempty preimage under
$\tilde f$. Therefore $f^{-1}(p(\tilde y))$ has cardinality
at least $\ell(f)+2=A(f)+2$. \end{proof}
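For instance, pinching a handle of the orientable surface $M$ of genus
$2$ gives a map $f\co M\to \mathbb {N}$ onto the torus with
$A(f)=\ell(f)=1$ and $1\cdot\chi(\mathbb {N})=0\ne-2=\chi(M)$, so the theorem
yields $\mathrm {MMR}[f]=1+2=3$.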
\section[Estimates for MMR(f) if A(f)=0]{Estimates for $\mathrm {MMR}(f)$ if $A(f)=0$} \label{sec:A=0}
Suppose that $M$ is a connected orientable closed surface of genus
$g\ge0$. Consider the standard presentation of the closed surface
$M} %{N$ as the boundary of a solid surface in $\mathbb R^3$ which is obtained
from a closed 3-ball by attaching $g$ solid handles; see \mbox{\fullref{figure2}~(a)}.
Choose a base point $x_0\in M} %{N$ and consider a system of simple
closed curves $\alpha_1,\beta_1,\dots,\alpha_g,\beta_g$ on $M$
based at $x_0$ which form a \textit{canonical system of cuts}; see
\fullref{figure2}~(a). Then the fundamental group $\pi_1(M,x_0)$ admits a
canonical presentation
$$
\pi_1(M,x_0)=\bigg\langle a_1,b_1,\dots,a_g,b_g\ \bigg|\
\prod_{j=1}^g[a_j,b_j] \bigg\rangle ,
$$
where $a_j,b_j$ are the homotopy classes of the based loops
$\alpha_j,\beta_j$, respectively. Denote by $V_g$ the bouquet of $g$
circles $\alpha_1\cup\ldots\cup\alpha_g$ if $g\ge1$, $V_0:=\{x_0\}$
if $g=0$, and by $\rho$ a retraction $\rho\co M\to V_g$ which
maps all loops $\beta_j$ to the point $x_0$. We can assume that the
curves $\alpha_1,\dots,\alpha_g$ are contained in the plane
$\Pi\subset\mathbb R^3$ which is tangent to $M$ at $x_0$. (In \fullref{figure2}, the
plane $\Pi$ is parallel to the plane of the picture.)
Let $i\co M\to\mathbb R^3$ denote the inclusion, and
$p_\Pi\co \mathbb R^3\to\Pi$ the orthogonal projection. The following
properties of the map $p=p_\Pi\circ i\co M\to\Pi$ can be
assumed without loss of generality, and will be used later:
(p1)\qua The restriction of $p$ to a neighbourhood $U$ of the
base point $x_0\in M$ is a homeomorphism onto a neighbourhood of
the point $p(x_0)$ in $\Pi$. Moreover, $p|_{V_g}\co V_g\to\Pi$
is an embedding, and all curves $p|_{\alpha_j}\co \alpha_j\to
\Pi$ are regular;
(p2)\qua All curves $p|_{\beta_j}$ are contractible in
$p(M)$;
(p3)\qua $p(M)$ is a regular neighbourhood of the graph
$p(V_g)$ in $\Pi$;
(p4)\qua The map $p$ has multiplicity $2$.
\begin{figure}[ht!]
\setlength{\unitlength}{10pt}
\begin{center}
\begin{picture} (10,17)(-5,-12.5)
\small
\put(-1,0){
\qbezier[300](-4.44,2.7)(0,5.4)(4.44,2.7)
\qbezier[200](-4.44,2.7)(-7.56,0)(-4.44,-2.7)
\qbezier[200](4.44,2.7)(7.56,0)(4.44,-2.7)
\qbezier[300](-4.44,-2.7)(0,-5.4)(4.44,-2.7)
\put(-3.4,-.2){\circle{1.2}}
\put(0,1.8){\circle{1.2}} \put(3.4,-.2){\circle{1.2}}
\put(0,-1.5){\put(-.18,-.18){\small$\bullet$}}
\put(-.6,-2.2){\small$x_0$} \thicklines
\put(0,-1.5){
\qbezier[200](0,0)(-2.2,-.2)(-3.7,.2)
\put(-3.8,-.5){\small$\alpha_1$}
\qbezier[200](-3.7,.2)(-4.4,.4)(-4.65,1)
\put(-4.2,.45){\vector(-2,1){.2}}
\qbezier[200](-4.65,1)(-4.9,2.2)(-3.7,2.5)
\qbezier[200](-3.7,2.5)(-3.2,2.6)(-2.6,2.2)
\qbezier[200](0,0)(-1.4,1.3)(-2.6,2.2) }
\put(0,-1.5){
\qbezier[100](0,0)(-2.2,3.3)(-3.8,4.5)
\put(-4.4,4.9){\small$\beta_1$}
\qbezier[8](-3.8,4.5)(-3.6,3)(-2.9,1.6)
\put(-2.37,3.2){\vector(-1,1){.2}}
\qbezier[100](0,0)(-1.6,.5)(-2.9,1.6) }
\put(0,-1.5){
\qbezier[200](0,0)(-.8,1.5)(-1.1,3.1)
\put(-.9,2.45){\vector(-1,4){.2}}
\qbezier[200](-1.1,3.1)(-1.2,3.7)(-.8,4.2)
\put(-1.2,4.8){\small$\alpha_2$}
\qbezier[200](-.8,4.2)(0,4.9)(.8,4.2)
\qbezier[200](1.1,3.1)(1.2,3.7)(.8,4.2)
\qbezier[200](0,0)(.8,1.5)(1.1,3.1) }
\put(0,-1.5){
\qbezier[100](0,0)(1.8,2.5)(2.8,4.95) \put(3,5.2){\small$\beta_2$}
\qbezier[8](2.8,4.95)(1.3,4.3)(.5,3) \put(1.7,2.6){\vector(1,2){.2}}
\qbezier[100](0,0)(.2,1.6)(.5,3) }
\put(0,-1.5){
\qbezier[200](0,0)(2.2,-.2)(3.7,.2)
\qbezier[200](3.7,.2)(4.4,.4)(4.65,1) \put(4.3,2.5){\small$\alpha_3$}
\qbezier[200](4.65,1)(4.9,2.2)(3.7,2.5)
\qbezier[200](3.7,2.5)(3.2,2.6)(2.6,2.2)
\qbezier[200](0,0)(1.4,1.3)(2.6,2.2)
\put(1.85,1.57){\vector(1,1){.2}} }
\put(0,-1.5){
\qbezier[100](0,0)(1.7,.7)(3.3,.65)
\qbezier[8](3.3,.65)(3.8,-.4)(3.65,-1.6)
\put(1.8,-.87){\vector(2,-1){.2}}
\qbezier[100](0,0)(1.8,-1)(3.65,-1.6) \put(3.6,-2.3){\small$\beta_3$}
}
\put(7,2){\small (a) $M$ orientable, $g=3$} }
\put(-12,-6){
\qbezier[300](-4.44,2.7)(-.6,4.95)(2.8,3.45)
\qbezier[200](-4.44,2.7)(-7.56,0)(-4.44,-2.7)
\qbezier[300](-4.44,-2.7)(0,-5.4)(4.44,-2.7)
\put(-3.4,-.2){\circle{1.2}}
\put(0,1.8){\circle{1.2}}
\qbezier[100](3.3,-.85)(3.8,-.85)(4.25,-.2)
\qbezier[6](4.25,-.2)(4.7,.2)(5.1,.4)
\qbezier[100](3.3,-.85)(2.75,-.75)(2.7,-.2)
\qbezier[100](3.3,.45)(2.75,.35)(2.7,-.2)
\qbezier[100](3.3,.45)(4,.5)(4.5,-.5)
\qbezier[10](4.5,-.5)(5,-1.2)(6,-1.2)
\qbezier[100](6.2,0)(5.9,-.6)(6.2,-1.2)
\qbezier[50](5.1,.4)(5.9,.76)(6.4,.4)
\qbezier[50](6.4,.4)(7,-.6)(6.4,-1.6)
\qbezier[100](5,.6)(4.7,2.3)(2.8,3.45)
\qbezier[50](5,.6)(5.1,.2)(5.3,0)
\qbezier[5](5.3,0)(5.65,-.3)(6,0)
\qbezier[50](5,-2.25)(4.7,-2.5)(4.44,-2.7)
\qbezier[50](5,-2.25)(5.4,-1.9)(5.8,-1.9)
\qbezier[50](6.4,-1.6)(6.1,-1.9)(5.8,-1.9)
\qbezier[50](4.5,-.5)(5.1,-.5)(5.3,0)
\qbezier[6](4.5,-.5)(4.7,0)(5.3,0)
\put(0,-1.5){\put(-.18,-.18){\small$\bullet$}}
\put(-.6,-2.2){\small$x_0$} \thicklines
\put(0,-1.5){
\qbezier[200](0,0)(-2.2,-.2)(-3.7,.2)
\put(-3.8,-.5){\small$\alpha_1$}
\qbezier[200](-3.7,.2)(-4.4,.4)(-4.65,1)
\put(-4.2,.45){\vector(-2,1){.2}}
\qbezier[200](-4.65,1)(-4.9,2.2)(-3.7,2.5)
\qbezier[200](-3.7,2.5)(-3.2,2.6)(-2.6,2.2)
\qbezier[200](0,0)(-1.4,1.3)(-2.6,2.2) }
\put(0,-1.5){
\qbezier[100](0,0)(-2.2,3.3)(-3.8,4.5)
\put(-4.4,4.9){\small$\beta_1$}
\qbezier[8](-3.8,4.5)(-3.6,3)(-2.9,1.6)
\put(-2.37,3.2){\vector(-1,1){.2}}
\qbezier[100](0,0)(-1.6,.5)(-2.9,1.6) }
\put(0,-1.5){
\qbezier[200](0,0)(-.8,1.5)(-1.1,3.1)
\put(-.9,2.45){\vector(-1,4){.2}}
\qbezier[200](-1.1,3.1)(-1.2,3.7)(-.8,4.2)
\put(-1.2,4.8){\small$\alpha_2$}
\qbezier[200](-.8,4.2)(0,4.9)(.8,4.2)
\qbezier[200](1.1,3.1)(1.2,3.7)(.8,4.2)
\qbezier[200](0,0)(.8,1.5)(1.1,3.1) }
\put(0,-1.5){
\qbezier[100](0,0)(1.8,2.5)(2.8,4.95) \put(3,5.2){\small$\beta_2$}
\qbezier[8](2.8,4.95)(1.3,4.3)(.5,3) \put(1.7,2.6){\vector(1,2){.2}}
\qbezier[100](0,0)(.2,1.6)(.5,3) }
\put(0,-1.5){
\qbezier[200](0,0)(4.2,-.5)(6,.8) \put(3.5,2.65){\small$\alpha_3$}
\qbezier[100](4.95,1.15)(4.4,2.3)(3.3,2.4)
\qbezier[4](6,.8)(5.3,.8)(4.95,1.15)
\qbezier[200](3.3,2.4)(3.1,2.4)(2.6,2.2)
\qbezier[200](0,0)(1.4,1.3)(2.6,2.2)
\put(1.85,1.57){\vector(1,1){.2}} }
\put(0,-1.5){
\qbezier[100](0,0)(1.7,.7)(3.3,.65)
\qbezier[8](3.3,.65)(3.8,-.4)(3.65,-1.6)
\put(1.8,-.87){\vector(-2,1){.2}}
\qbezier[100](0,0)(1.8,-1)(3.65,-1.6) \put(3.6,-2.3){\small$\beta_3$}
} \put(-6,-6){\small (b) $M$ nonorientable, $g=6$} }
\put(11,-6){
\qbezier[300](-4.44,2.7)(0,5.4)(3.44,1.7)
\qbezier[200](-4.44,2.7)(-7.56,0)(-4.44,-2.7)
\qbezier[200](3.44,1.7)(7.56,-.5)(3.44,-2.7)
\qbezier[300](-4.44,-2.7)(-.3,-5.2)(3.44,-2.7)
\put(-3.4,-.2){\circle{1.2}}
\put(0,1.8){\circle{1.2}}
\put(3.44,-.5){\line(1,0){2.1}} \put(5.53,-.5){\circle{.06}}
\put(5.8,-.6){\small$z_2$} \put(3.47,-.5){\circle{.06}}
\put(2.4,-.6){\small$z_1$}
\put(3.44,-.5){
\qbezier[30](0,0)(-.2,.75)(-.3,1.5)
\qbezier[30](-.3,1.5)(-.3,2.1)(0,2.2)
\qbezier[7](0,0)(.2,.75)(.3,1.5) \qbezier[8](.3,1.5)(.3,2.1)(0,2.2)
\qbezier[30](0,0)(-.2,-.75)(-.3,-1.5)
\qbezier[30](-.3,-1.5)(-.3,-2.1)(0,-2.2)
\qbezier[7](0,0)(.2,-.75)(.3,-1.5)
\qbezier[5](.3,-1.5)(.3,-2.1)(0,-2.2) }
\put(4.84,-.5){
\qbezier[20](0,0)(-.1,.425)(-.17,.85)
\qbezier[10](-.17,.85)(-.17,1.15)(0,1.2)
\qbezier[5](0,0)(.1,.425)(.17,.85)
\qbezier[3](.17,.85)(.17,1.15)(0,1.2)
\qbezier[20](0,0)(-.1,-.425)(-.17,-.85)
\qbezier[10](-.17,-.85)(-.17,-1.15)(0,-1.2)
\qbezier[5](0,0)(.1,-.425)(.17,-.85)
\qbezier[3](.17,-.85)(.17,-1.15)(0,-1.2) }
\put(0,-1.5){\put(-.18,-.18){\small$\bullet$}}
\put(-.6,-2.2){\small$x_0$}
\thicklines
\put(0,-1.5){
\qbezier[200](0,0)(-2.2,-.2)(-3.7,.2)
\put(-3.8,-.5){\small$\alpha_1$}
\qbezier[200](-3.7,.2)(-4.4,.4)(-4.65,1)
\put(-4.2,.45){\vector(-2,1){.2}}
\qbezier[200](-4.65,1)(-4.9,2.2)(-3.7,2.5)
\qbezier[200](-3.7,2.5)(-3.2,2.6)(-2.6,2.2)
\qbezier[200](0,0)(-1.4,1.3)(-2.6,2.2) }
\put(0,-1.5){
\qbezier[100](0,0)(-2.2,3.3)(-3.8,4.5)
\put(-4.4,4.9){\small$\beta_1$}
\qbezier[8](-3.8,4.5)(-3.6,3)(-2.9,1.6)
\put(-2.37,3.2){\vector(-1,1){.2}}
\qbezier[100](0,0)(-1.6,.5)(-2.9,1.6) }
\put(0,-1.5){
\qbezier[200](0,0)(-.8,1.5)(-1.1,3.1)
\put(-.9,2.45){\vector(-1,4){.2}}
\qbezier[200](-1.1,3.1)(-1.2,3.7)(-.8,4.2)
\put(-1.2,4.8){\small$\alpha_2$}
\qbezier[200](-.8,4.2)(0,4.9)(.8,4.2)
\qbezier[200](1.1,3.1)(1.2,3.7)(.8,4.2)
\qbezier[200](0,0)(.8,1.5)(1.1,3.1) }
\put(0,-1.5){
\qbezier[100](0,0)(1.8,2.5)(2.35,4.15) \put(2.3,4.5){\small$\beta_2$}
\qbezier[6](2.35,4.15)(1.2,4.3)(.5,3)
\put(1.7,2.6){\vector(1,2){.2}} \qbezier[100](0,0)(.2,1.6)(.5,3) }
\put(0,-1.5){
\qbezier[200](0,0)(2.2,-.2)(3.3,.15)
\qbezier[80](3.3,.15)(3.9,.4)(4.1,1) \put(4.35,2.95){\small$\beta_0$}
\qbezier[5](4.1,1)(4.4,2)(4.2,2.7)
\qbezier[200](0,0)(3,2.5)(4.2,2.7) \put(2.2,1.65){\vector(3,2){.2}}
} \put(-6,-6){\small (c) $M$ nonorientable, $g=5$} }
\end{picture}
\end{center}
\caption{A canonical system of cuts on a closed surface
$M$}\label{figure2}
\end{figure}
\eject
Suppose that $M$ is a connected nonorientable closed surface of
genus $g\ge1$. Choose a base point $x_0\in M$. Then the fundamental
group of $M$ admits the following canonical presentation:
\begin{align*}
\pi_1(M,x_0)&=\bigg\langle a_1,b_1,\dots,a_{\unfrac g2},b_{\unfrac g2}\
\bigg|\ \bigg(\prod_{j=1}^{\unfrac g2-1}[a_j,b_j]\bigg) \cdot
[a_{\unfrac g2},b_{\unfrac g2}]_- \bigg\rangle
&&\mbox{if $g$ is even,}
\\
\pi_1(M,x_0)&=\bigg\langle a_1,b_1,\dots,a_{[\unfrac g2]},b_{[\unfrac g2]},b_0\ \bigg|\ \bigg(\prod_{j=1}^{\upnfrac {g-1}2}[a_j,b_j]\bigg)
\cdot b_0^2 \bigg\rangle
&&\mbox{if $g$ is odd,}
\end{align*}
where we use the notation
$$
[x,y]\,=\,xyx^{-1}y^{-1}, \qquad [x,y]_-\,=\,xyx^{-1}y .
$$
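As a quick consistency check on the two presentations above (a remark added here, not part of the original argument), one can abelianize: every commutator $[a_j,b_j]$ vanishes, while $[a_{\unfrac g2},b_{\unfrac g2}]_-$ and $b_0^2$ both abelianize to twice a single generator, so in either parity one recovers the expected first homology of a closed nonorientable surface of genus $g$:

```latex
% Sanity check by abelianization: each [a_j,b_j] = a_j b_j a_j^{-1} b_j^{-1}
% dies, while [a_{g/2}, b_{g/2}]_- = a b a^{-1} b abelianizes to 2 b_{g/2}
% (g even) and b_0^2 abelianizes to 2 b_0 (g odd).  In both cases the g
% generators satisfy the single relation 2y = 0 for one generator y, whence
\[
  H_1(M;\mathbb Z)\;\cong\;\mathbb Z^{\,g-1}\oplus\mathbb Z/2 ,
\]
% as expected for a closed nonorientable surface of genus g.
```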
This presentation of the group $\pi_1(M,x_0)$ corresponds to a
system of simple closed curves
$\alpha_1,\beta_1,\dots,\alpha_{[g/2]},\beta_{[g/2]},\beta_0$ on
$M$ based at $x_0$, which form a \textit{canonical system of cuts};
see \fullref{figure2}~(b),~(c). Here the curve $\beta_0$ appears only if $g$ is
odd. Denote by $V_r$ the bouquet of $r=[g/2]$ circles
$\alpha_1\cup\ldots\cup\alpha_{[g/2]}$ for $g\ge2$, $V_0=\{x_0\}$
for $g=1$, and by $\rho$ a retraction $\rho\co M\to V_r$ which
maps all loops $\beta_j$ to the point $x_0$. We consider a
realization of $M$ in $\mathbb R^3$ via a map $i\co M\to\mathbb R^3$ which
is an immersion if $g$ is even (see \fullref{figure2}~(b)), while, for $g$ odd,
the restriction $i|_{M\setminus\{z_1,z_2\}}$ to the complement of
the set of two points $z_1,z_2\in M\setminus\{x_0\}$ is an
immersion; see \fullref{figure2}~(c). We can assume that $i|_{V_r}$ is an
embedding with $i(V_r)\subset\Pi$, moreover $\Pi$ coincides with the
tangent plane to $i(M)$ at $i(x_0)$.
Let $p_\Pi\co \mathbb R^3\to\Pi$ denote the orthogonal projection.
Without loss of generality, we may assume that the map $p=p_\Pi\circ
i\co M\to\Pi$ has the properties~(p1), (p2), (p3) from above.
Moreover,~(p4) holds if $g$ is odd, while the following property
holds if $g$ is even:
(p$4'$)\qua The map $p$ has multiplicity $4$. Moreover, the
set of all points of $p(M)$, whose preimage under $p$ contains
more than 2 points, lies in a
regular neighbourhood $T$ in $p(M)$ of a simple arc $\tau\subset
p(M)$, where the endpoints of $\tau$ lie on the boundary of
$p(M)$, $\tau$ intersects the graph $p(V_r)$ at the unique point
$p(t)$, for some $t\in\alpha_r\setminus\{x_0\}$, and the
intersection of $\tau$ and $p(\alpha_r)$ at the point $p(t)$ is
transverse; see \fullref{figure3}~(a).
\begin{figure}[ht!]
\setlength{\unitlength}{10pt}
\begin{center}
\begin{picture} (10,11)(-5,-6.5)
\small
\put(-12.5,0){
\qbezier[300](-4.44,2.7)(0,5.4)(4.44,2.7)
\qbezier[200](-4.44,2.7)(-7.56,0)(-4.44,-2.7)
\qbezier[300](-4.44,-2.7)(0,-5.4)(4.44,-2.7)
\put(3.8,-3.8){$p(M)$}
\put(-3.4,-.2){\circle{1.2}}
\put(0,1.8){\circle{1.2}}
\qbezier[100](3.3,-.85)(3.8,-.85)(4.25,-.2)
\qbezier[100](3.3,-.85)(2.75,-.75)(2.7,-.2)
\qbezier[100](3.3,.45)(2.75,.35)(2.7,-.2)
\qbezier[100](3.3,.45)(3.8,.45)(4.25,-.2)
\qbezier[50](5.7,1.2)(6,.025)(5.7,-1.15)
\qbezier[50](5.7,1.2)(5.3,2.13)(4.44,2.7)
\qbezier[50](5.7,-1.15)(5.3,-2.105)(4.44,-2.7)
\put(0,-1.5){\put(-.18,-.18){\tiny$\bullet$}}
\put(-.9,-2.4){$p(x_0)$}
\put(4.32,-.2) { \put(-.4,.4){\line(2,1){1.85}}
\put(-.4,-.4){\line(4,-1){1.85}} \put(2.05,1.325){\line(0,-1){2.2}}
\put(2.3,.1){$T$} \put(2.05,1.3){\line(-1,0){.2}}
\put(2.05,-.86){\line(-1,0){.2}} } \thicklines \put(4.25,-.2)
{\line(5,1){1.6}} \put(4.8,-.05){\circle{.06}}
\put(3.15,-2.2){$p(t)$}\thinlines\put(4.9,-.05){\line(0,-1){1.5}}
\put(4.8,-.05){\put(-.18,-.18){\tiny$\bullet$}}
\put(5.25,.2){$\tau$}
\thicklines\put(0,-1.5){
\qbezier[200](0,0)(-2.2,-.2)(-3.7,.2) \put(-3.8,3){$p(V_r)$}
\qbezier[200](-3.7,.2)(-4.4,.4)(-4.65,1)
\qbezier[200](-4.65,1)(-4.9,2.2)(-3.7,2.5)
\qbezier[200](-3.7,2.5)(-3.2,2.6)(-2.6,2.2)
\qbezier[200](0,0)(-1.4,1.3)(-2.6,2.2) }
\put(0,-1.5){
\qbezier[200](0,0)(-.8,1.5)(-1.1,3.1)
\qbezier[200](-1.1,3.1)(-1.2,3.7)(-.8,4.2)
\qbezier[200](-.8,4.2)(0,4.9)(.8,4.2)
\qbezier[200](1.1,3.1)(1.2,3.7)(.8,4.2)
\qbezier[200](0,0)(.8,1.5)(1.1,3.1) }
\put(0,-1.5){
\qbezier[200](0,0)(2.2,-.2)(3.7,.2)
\qbezier[50](3.7,.2)(4.25,.45)(4.55,.8)
\put(2.5,2.8){$p(\alpha_r)$}
\qbezier[100](4.8,1.45)(4.5,2.3)(3.7,2.5)
\qbezier[20](4.8,1.45)(4.68,1.125)(4.55,.8)
\qbezier[200](3.7,2.5)(3.2,2.6)(2.6,2.2)
\qbezier[200](0,0)(1.4,1.3)(2.6,2.2)
} \put(-2.5,-6){\small(a) $T\subset p(M)$} }
\put(2,0){
\qbezier[300](-2.2,3.2)(0,5.4)(2.2,3.2)
\qbezier[300](2.2,3.2)(2.8,2)(2.4,0)
\qbezier[300](2.4,0)(2,-1.4)(1.4,-2.3)
\qbezier[300](-2.2,3.2)(-2.8,2)(-2.4,0)
\qbezier[300](-2.4,0)(-2,-1.4)(-1.4,-2.3)
\qbezier[300](-1.4,-2.3)(0,-4.6)(1.4,-2.3)
\put(-3,-2){$U_j$}
\put(0,1.8){ \qbezier[80](-.45,.45)(0,.87)(.45,.45)
\qbezier[80](.45,.45)(.7,.16)(.5,-.45)
\qbezier[80](-.45,.45)(-.7,.16)(-.5,-.45)
\qbezier[80](.5,-.45)(.4,-.9)(.2,-1.4)
\qbezier[80](-.5,-.45)(-.4,-.9)(-.2,-1.4)
\qbezier[80](.2,-1.4)(0,-1.8)(-.2,-1.4)
\put(.6,.16){\line(1,0){1.95}} \thicklines
\put(.55,-.45){\line(3,-1){1.95}} \thinlines
\put(.4,-1){\line(3,-2){1.88}}
\put(1.05,-.57){\put(-.17,-.18){\tiny$\bullet$}}\put(1.05,-.57){\line(3,4){1.5}}\put(2.50,1.63){$p(t_j\!)$}
\put(1.5,-1.65){$\tau_j$}
\put(3.2,.16){\line(0,-1){2.5}} \put(3.5,-1.5){$T_j$}
\put(3.2,.135){\line(-1,0){.2}} \put(3.2,-2.34){\line(-1,0){.2}} }
\put(0,-1.5){\put(-.18,-.18){\tiny$\bullet$}}
\put(-1.2,-2.3){$p(x_0)$} \thicklines
\put(0,-1.5){
\qbezier[200](0,0)(-.8,1.5)(-1.1,3.1)
\qbezier[200](-1.1,3.1)(-1.2,3.7)(-.8,4.2)
\put(-1.3,4.9){$p(\alpha_j)$}
\qbezier[200](-.8,4.2)(0,4.9)(.8,4.2)
\qbezier[200](1.1,3.1)(1.2,3.7)(.8,4.2)
\qbezier[200](0,0)(.8,1.5)(1.1,3.1) } \put(-2.5,-6){\small(b)
$T_j\subset U_j$} }
\put(13.5,0){
\thinlines \put(-2,1){\line(0,1){2}}
\put(-2,-1){\line(0,-1){2}}
\thicklines \put(2,1){\line(0,-1){2}}
\put(2.3,.4){$\Gamma_j(\tau_j)$}
\qbezier[300](2,0)(.5,-1.5)(-2,-2)
\put(-4.5,-2.4){$\gamma(\alpha_j)$}
\qbezier[3](2,0)(1.7,.3)(1.4,.5) \qbezier[300](1.4,.5)(0,1.6)(-2,2)
\thinlines \qbezier[300](2,1)(.5,-.5)(-2,-1)
\qbezier[300](2,-1)(.5,-2.5)(-2,-3) \qbezier[300](2,1)(0,2.5)(-2,3)
\qbezier[10](2,-1)(1.3,-.4)(.72,0)
\qbezier[200](.72,0)(-.4,.7)(-2,1) \put(-3,-6){\small(c)
$\Gamma_j(T_j)\subset\mathbb {N}$} }
\end{picture}
\end{center}
\caption{The strips $T$, $T_j$ and ``folding'' of
$T_j$ via $\Gamma_j$}\label{figure3}
\end{figure}
\begin{Pro} \label{pro:Zieschang}
Suppose that $M$ is an (orientable or nonorientable) closed
surface of genus $g$, and $f\co M\to \mathbb {N}$ has absolute degree
$A(f)=0$. Then there exists a self-homeomorphism $\varphi$ of $M$
and a map $\gamma\co V_r\to \mathbb {N}$ such that $f\simeq \gamma\circ
\rho\circ\varphi$. Here $r=g$ if $M$ is orientable, $r=[\unfrac g2]$
if $M$ is nonorientable, and $\rho\co M\to V_r$ is the
retraction defined above.
\end{Pro}
\Proof Since $A(f)=0$, it follows from~\cite{K} or~\cite{Ep} that
$f$ is homotopic to a map $h$ which is not surjective; thus
$h(M)\subset \mathbb {N}^*=\mathbb {N}\setminus \smash{\mskip4mu\mathring{\mskip-4mu\vrule width0pt height7pt depth0pt\smash{D}}}^2$ for an appropriate disk
$D^2\subset \mathbb {N}$. Since the fundamental group of $\mathbb {N}^*$ is a free
group, we obtain a homomorphism $h_\#\co \pi_1(M) \to
\pi_1(\mathbb {N}^*)$ to the free group $\pi_1(\mathbb {N}^*)$.
Suppose that $M$ is orientable. It has been proved in Satz~2
of Zieschang \cite{Z} using the Nielsen method (see also Zieschang, Vogt and Coldewey~\cite{ZVC}, or
Proposition~1.2 of~Grigorchuk, Kurchanov and Zieschang \cite{GriKurZie}) that there is a sequence of
``elementary moves'' of the system of generators
$a_1,b_1,\dots,a_g,b_g$ and the corresponding sequence of
``elementary moves'' of the system of cuts
$\alpha_1,\beta_1,\dots,\alpha_g,\beta_g$ on $M$ (see above), such
that the resulting system of cuts
$\tilde\alpha_1,\tilde\beta_1,\dots,\tilde\alpha_g,\tilde\beta_g$ is
also canonical (this means there exists a self-homeomorphism
$\varphi$ of $M$ such that $\alpha_j=\varphi(\tilde\alpha_j)$,
$\beta_j=\varphi(\tilde\beta_j)$), and the loops
$\smash{h|_{\tilde\beta_j}}\co \tilde\beta_j \to \mathbb {N}^*$ are contractible in $\mathbb {N}^*$. From this, using
the fact that $\pi_2(\mathbb {N}^*)=0$, one can prove that $h\simeq
\gamma\circ \rho\circ\varphi$ where $\gamma\,:=\,h|_{V_g}$.
Suppose that $M$ is nonorientable. The method to prove Satz~2
of~\cite{Z} can be successfully applied to construct a canonical
system of cuts $\tilde\alpha_1,\tilde\beta_1,\dots,
\tilde\alpha_{[\unfrac g2]},\tilde\beta_{[\unfrac g2]}$, $\tilde\beta_0$
on $M$ (this means there exists a homeomorphism $\varphi$ of $M$
with $\alpha_j=\varphi(\tilde\alpha_j)$,
$\beta_j=\varphi(\tilde\beta_j)$) such that the loops
$\smash{h|_{\tilde\beta_j}}\co \tilde\beta_j \to \mathbb {N}^*$ are contractible in $\mathbb {N}^*$;
see Ol'shanski{\u\i}~\cite{Ol'shanski} or Proposition~1.5 of~\cite{GriKurZie}.
(Again the curve $\beta_0$ is considered only if $g$ is odd.)
Similarly to the orientable case, this implies that $h\simeq
\gamma\circ \rho\circ\varphi$ where $\gamma\,:=\,h|_{V_r}$. \end{proof}
\begin{Thm} \label{thm:A=0}
Suppose that $f\co M\to \mathbb {N}$ has absolute degree $A(f)=0$. Then
$2\le \mathrm {MMR}[f]\le 4$.
\end{Thm}
\Proof Suppose that $h$ is homotopic to $f$ and has multiplicity
$1$. Then $h$ is a homeomorphism onto $h(M)$. It follows from
Brouwer's Theorem on Invariance of Domain~\cite{Brouwer} that $h$ is
surjective and, therefore, it is a homeomorphism. Then $A(h)=1$, a
contradiction. Therefore $\mathrm {MMR}[f]\ge2$.
Let us prove the second inequality. Since $A(f)=0$, by
\fullref{pro:Zieschang}, $f\simeq\gamma\circ
\rho\circ\varphi$ for a self-homeomorphism $\varphi$ of $M$, the
retraction $\rho\co M\to V_r$, and a map $\gamma\co V_r\to
\mathbb {N}$, where $r=g$ if $M$ is an orientable surface of genus $g$,
$r=[\unfrac g2]$ if $M$ is a nonorientable surface of genus $g$.
Without loss of generality, we may assume that $\gamma$ has the
following properties:
$(\gamma1)$\qua There exists a homeomorphism $\psi$ of the
neighbourhood $U$ of $x_0$ in $M$ onto a neighbourhood of
$\gamma(x_0)$ in $\mathbb {N}$ such that $\gamma|_{V_r\cap U}=\psi|_{V_r\cap
U}$. In other words, $\gamma|_{V_r\cap U}$ extends to an embedding
$\psi\co U\to \mathbb {N}$;
$(\gamma2)$\qua The restriction of $\gamma$ onto each curve
$\alpha_1,\dots,\alpha_r$ is an immersion $S^1\to\mathbb {N}$. Moreover,
$\gamma$ has multiplicity $\le2$, and it has only finitely many
double points (ie\ pairs of distinct points of $V_r$ having the
same image).
\textbf{Case 1}\qua Suppose that the surface $M$ is either orientable (thus
$r=g$), or nonorientable with $g$ odd (thus $r=\upnfrac{g-1}2$). In
both cases, the map $p=p_\Pi\circ i\co M\to\Pi=\mathbb R^2$ of $M$
to the plane $\Pi$ has the properties (p1), (p2), (p3), (p4); see
above.
\textbf{Subcase 1}\qua Suppose that $\mathbb {N}$ is orientable. Since every closed curve
$\gamma|_{\alpha_j}$ is orientation-preserving, it follows from the
properties $(\gamma1)$, $(\gamma2)$, (p1), (p3) that the map
$\hat\gamma=\gamma\circ p^{-1}\co p(V_r)\to \mathbb {N}$ can be extended
to an immersion $\Gamma\co p(M)\to \mathbb {N}$ of the regular
neighbourhood $p(M)$ of $p(V_r)$ in the plane $\Pi$ to $\mathbb {N}$, such
that $\Gamma$ has multiplicity $\le2$.
Consider the composition $\hat\rho=p\circ\rho\co M\to\Pi$.
Observe that the maps $\hat\rho$ and $p$ are homotopic as maps
$M\to p(M)\subset\Pi$ with the target $p(M)$, due to
$\hat\rho|_{V_r}=p|_{V_r}$,~(p2), and $\pi_2(p(M))=0$. From this
and $\gamma=\Gamma\circ p|_{V_r}$, we have
\begin{equation}\refstepcounter {Thm} \label{eq}
f \simeq \gamma\circ \rho\circ\varphi=\Gamma\circ p\circ
\rho\circ\varphi \simeq \Gamma\circ p\circ\varphi. \tag{\hbox{\bf\theThm}}
\end{equation}
Since $\varphi$ is bijective and each of $\Gamma$ and $p$ has
multiplicity $\le2$ (see~(p4)), the multiplicity of the composition
$\Gamma\circ p\circ\varphi$ is $\le2\cdot2\cdot1=4$.
\textbf{Subcase 2}\qua Suppose that $\mathbb {N}$ is nonorientable. So, in general,
the immersion $\hat\gamma\co p(V_r)\to \mathbb {N}$ cannot be extended
to an immersion of the regular neighbourhood $p(M)$ of $p(V_r)$ in
$\Pi=\mathbb R^2$. However, due to $(\gamma1)$, $(\gamma2)$, and (p1), we
can extend $\hat\gamma$ to an immersion $\tilde\Gamma\co p(D\cup
V_r)\to \mathbb {N}$, where $D\subset U$ is a small disk centred at $x_0$.
Now, for each curve $\alpha_j$, we will extend the immersion
$\tilde\Gamma_j=\tilde\Gamma|_{p(D\cup\alpha_j)}\co p(D\cup\alpha_j)\to\mathbb {N}$
to a regular neighbourhood $U_j\supset p(D)$ of $p(\alpha_j)$ in
$\Pi$ as follows. If the curve $\smash{\gamma|_{\alpha_j}}$ is
orientation-preserving then, similarly to Case~1, the immersion
$\tilde\Gamma_j\co p(D\cup\alpha_j)\to\mathbb {N}$ can be extended to an
immersion $\Gamma_j\co U_j\to\mathbb {N}$. If the curve
$\smash{\gamma|_{\alpha_j}}$ is orientation-reversing, let us choose a point
$t_j\in \alpha_j\setminus D$ such that $t_j$ is the only preimage
of the point $\gamma(t_j)$ under $\gamma$. Consider a simple arc
$\tau_j\subset U_j\setminus p(D)$, which transversally intersects
$p(\alpha_j)$ at the only point $p(t_j)$, and whose endpoints lie on
the boundary of $U_j$. Let $T_j$ be a regular neighbourhood of the
arc $\tau_j$ in $U_j\setminus p(D)$, thus $T_j$ is a ``strip'' in
the annulus $U_j$; see \fullref{figure3}~(b). Outside the interior of the strip
$T_j$, we extend $\tilde\Gamma_j$ to an immersion
$\bar{\Gamma}_j\co (U_j\setminus T_j)\cup
p(\alpha_j)\to \mathbb {N}$ similarly to above. Now we extend the obtained
immersion $\bar{\Gamma}_j$ to the whole annulus $U_j$,
giving a map $\Gamma_j\co U_j\to\mathbb {N}$ which coincides with
$\bar{\Gamma}_j$ outside $T_j\setminus p(\alpha_j)$ and has
a ``folding'' along the arc $\tau_j\subset T_j$, as shown in
\fullref{figure3}~(c).
Without loss of generality, we may assume that $U_j\subset p(M)$,
and any two annuli $U_j,U_k$ have only the disk $p(D)$ in common.
Since the constructed mappings $\Gamma_j\co U_j\to\mathbb {N}$ agree on
the common part $p(D)$, they determine an extension
$\bar{\Gamma}\co U\to\mathbb {N}$ of the map $\tilde\Gamma$,
where $U=U_1\cup\ldots\cup U_r$ is a regular neighbourhood of
$p(V_r)$ in $\Pi=\mathbb R^2$. The above construction can be performed in
such a way that the map $\bar{\Gamma}$ has multiplicity
$\le2$, due to~$(\gamma2)$ and the choice of the points
$t_j\in\alpha_j$. Obviously, the map $\bar{\Gamma}$ can be
extended to the regular neighbourhood $p(M)$ of $p(D\cup V_r)$
(see~(p3)) and the extended map $\Gamma\co p(M)\to\mathbb {N}$ also has
multiplicity $\le2$.
Similarly to Subcase~1, the composition $\Gamma\circ p\circ\varphi$
has multiplicity $\le2\cdot2\cdot1=4$, and \eqref{eq} holds. This
completes the proof in Case~1.
\textbf{Case 2}\qua Suppose that $M$ is a nonorientable closed surface of even
genus $g$, thus $r=\unfrac g2$, and the map $p=p_\Pi\circ
i\co M\to\Pi=\mathbb R^2$ of $M$ to the plane $\Pi$ has the
properties (p1), (p2), (p3), (p$4'$); see above. We may assume,
without loss of generality, that the map $\gamma\co V_r\to\mathbb {N}$
has the following additional property:
$(\gamma3)$\qua The point $t\in\alpha_r$ considered in (p$4'$)
is the only preimage of $\gamma(t)$ under $\gamma$, and the
analogous property holds for any point $\tilde t\in \alpha_r\cap
p^{-1}(T)$.
\textbf{Subcase 1}\qua Suppose that $\mathbb {N}$ is orientable. Similarly to Subcase~1
of Case~1, one shows using~$(\gamma1)$, $(\gamma2)$, (p1), (p3) that
the immersion $\hat\gamma=\gamma\circ p^{-1}\co p(V_r)\to \mathbb {N}$
extends to an immersion $\Gamma\co p(M)\to \mathbb {N}$ of multiplicity
$2$, and using~(p2) that~\eqref{eq} holds. Taking into
account~(p$4'$) and $(\gamma3)$, one can show that the multiplicity
of $\Gamma\circ p\circ\varphi$ is $\le4$.
\textbf{Subcase 2}\qua Suppose that $\mathbb {N}$ is nonorientable. We proceed as in
Subcase~2 of Case~1. Namely, for those curves $\alpha_j$ whose image
under $\gamma$ is orientation-preserving, we extend the immersion
$\tilde\Gamma_j\co p(D\cup\alpha_j)\to \mathbb {N}$ to $U_j$, as in
Case~1. For each of the remaining curves $\alpha_j$, we choose a
point $t_j\in\alpha_j\setminus D$ which is the only preimage of
$\gamma(t_j)$ under $\gamma$, and we extend the corresponding
immersion $\tilde\Gamma_j$ to a map
$\bar{\Gamma}_j\co U_j\to \mathbb {N}$ having a ``folding'' along
an arc $\tau_j\subset T_j\subset U_j$, which transversally
intersects $p(V_r)$ at the unique point $p(t_j)$; see Case~1. As
above, this allows one to construct a map $\Gamma\co p(M)\to\mathbb {N}$
of multiplicity $\le2$ which is an extension of $\hat\gamma$, and to
show that~\eqref{eq} holds. Observe now that, if the curve
$\gamma|_{\alpha_r}$ is orientation-reversing, we can choose the
point $t_r\in\alpha_r$ in such a way that it is ``far enough'' from
the point $t\in\alpha_r$ considered in~(p$4'$). This, together with
$(\gamma3)$, shows that the above construction can be performed in
such a way that the composition $\Gamma\circ p\circ\varphi$ has
multiplicity $\le4$. This completes the proof of
\fullref{thm:A=0}. \end{proof}
\bibliographystyle{gtart}
# zbMATH — the first resource for mathematics

Width and related invariants of Riemannian manifolds. (English) Zbl 0684.53036
On the geometry of differentiable manifolds, Workshop, Rome/Italy 1986, Astérisque 163-164, 93-109 (1988).
[For the entire collection see Zbl 0666.00013.]

The author poses and studies the question of how to measure the size of a metric space, in particular of a Riemannian manifold. The main notion (due to Urysohn) is that of intermediate diameters $\mathrm{Diam}_k$ for $k=0,\dots,n-1$, where $n$ is the dimension of the space. $\mathrm{Diam}_0$ is the usual diameter, and $\mathrm{Diam}_k$ measures the $(k+1)$-dimensional spread of the space.

In the first part, some properties of this notion for convex and compact subsets (in particular rectangular solids) of $\mathbb{R}^n$, which follow from classical results, are discussed: (i) a lemma of Lebesgue which relates $\mathrm{Diam}_k$ of a compact convex subset of $\mathbb{R}^n$ to $\mathrm{Wid}_k$, where $\mathrm{Wid}_k$ (the $k$-dimensional width) measures how close a subset is to a $k$-dimensional affine subspace; (ii) the isoperimetric inequality of Federer-Fleming, which gives a sharp upper bound for $\mathrm{Diam}_{k-1}$ in terms of the Hausdorff volume $\mathrm{Vol}_k$ for compact subsets of $\mathbb{R}^n$.

In the second part, the problem of obtaining upper bounds for the intermediate diameters of subsets of a Riemannian manifold under curvature constraints is studied. For example, the Federer-Fleming isoperimetric inequality is generalized to compact subsets of any Riemannian manifold with nonnegative Ricci curvature, and $\mathrm{Diam}_{n-2}$ is conjectured to be bounded from above in terms of a positive lower bound for the scalar curvature. Many of the ideas in this paper are closely related to the author's previous work, especially his paper in J. Differ. Geom. 18, 1-147 (1983; Zbl 0515.53037).

Reviewer: M. Min-Oo

##### MSC:
53C20 Global Riemannian geometry, including pinching
package Rex::Virtualization::LibVirt::destroy;
use strict;
use warnings;
our $VERSION = '0.56.1'; # VERSION
use Rex::Logger;
use Rex::Helper::Run;
sub execute {
  my ( $class, $arg1, %opt ) = @_;

  my $virt_settings = Rex::Config->get("virtualization");
  chomp( my $uri =
      ref($virt_settings) ? $virt_settings->{connect} : i_run "virsh uri" );

  # the caller must pass the name of the domain to destroy
  unless ($arg1) {
    die("You have to define the vm name!");
  }

  my $dom = $arg1;
  Rex::Logger::debug("destroying domain: $dom");

  i_run "virsh -c $uri destroy '$dom'";
  if ( $? != 0 ) {
    die("Error destroying vm $dom");
  }
}
1;
Abstract Admissions: Hey Gurl! This ones for ya!!
Hey Gurl! This ones for ya!!
My greetings and gratitude to all my dear friends who have put up with me so far! But this year I would like to especially remember all my girl friends. Generally speaking, for some reason (I am guessing it's coz guys don't get offended as easily as girls?) I get along much better with guys than girls – surprising actually when you think about it coz I struck up my first friendship with a guy only when I was about 18 (which was my dear friend Pravin). Till then I didn't know any guy whom I could call a friend – there were brothers and cousins and acquaintances of course, but no friend as such.
I do like hanging out with all my guy friends-it is fun and enjoyable. But I cannot imagine calling up ANY of the guys and asking them something like this "Hey do you think I should wear the pink dress for today's party? You know the one I wore for Swetha's wedding?" Can you imagine the response I might get?
Possible reply No.1: Uh huh….
Possible reply No.2: Nope. Don't remember.
Possible reply No.3: Which Swetha? That hot girl with Angelina-Jolie lips??
Possible Reply No.4: Swetha got married??!! When did that happen?!!!
'Am I looking sloshed in this photo or can I send it to mom?' etc etc etc – nothing can beat gal pals!!
Despite the fact that I do have many guy friends and that I am blessed enough to have a loving family, gurl friends play a very special and important part in my life.
As R.L. Stevenson rightly said "No Man is useless while he has a friend" (I take the liberty here to apply this to Women as well). So here's to the gals – for all the never-ending talks and chats, for the long shopping sprees, for all the good and bad advice, for being with me thru good times and not-so-good ones, for helping me cope when I was down in the dumps and for rejoicing with me when I made it, for joining in the laughter as well as the tears, for tolerating my weaknesses and accepting my short-comings, never judging and for listening patiently to all my woeful tales of misery-imagined or otherwise. A zillion cheers!! May you always find Love and Laughter wherever you are and whatever you do! Muaahhhh!!
No pop-up window after document modifications.
I met a problem today: when I modified the contents of a *.f90 file and closed the Notepad++ window, there was no pop-up window asking me if I wanted to save the changes.
I have checked that there is no auto-save plugin in my installation. So what is going on, and what should I do? I think this behaviour is prone to error. Have you ever encountered the same issue?
Hey, I have found the answer. In the Chinese UI: "设置"–>"首选项"–>"备份", then untick the "记住…" checkbox; in English: "Settings"–>"Preferences"–>"Backup", then untick the "Remember…" checkbox. BTW, I am on version 7.2. It may differ a little in other versions.
package configurationv4
import (
"encoding/binary"
"encoding/gob"
"errors"
"github.com/gozwave/gozw/cc"
)
const CommandInfoGet cc.CommandID = 0x0C
func init() {
gob.Register(InfoGet{})
cc.Register(cc.CommandIdentifier{
CommandClass: cc.CommandClassID(0x70),
Command: cc.CommandID(0x0C),
Version: 4,
}, NewInfoGet)
}
func NewInfoGet() cc.Command {
return &InfoGet{}
}
// <no value>
type InfoGet struct {
ParameterNumber uint16
}
func (cmd InfoGet) CommandClassID() cc.CommandClassID {
return 0x70
}
func (cmd InfoGet) CommandID() cc.CommandID {
return CommandInfoGet
}
func (cmd InfoGet) CommandIDString() string {
return "CONFIGURATION_INFO_GET"
}
func (cmd *InfoGet) UnmarshalBinary(data []byte) error {
// According to the docs, we must copy data if we wish to retain it after returning
payload := make([]byte, len(data))
copy(payload, data)
if len(payload) < 2 {
return errors.New("Payload length underflow")
}
i := 2
if len(payload) <= i {
return errors.New("slice index out of bounds")
}
cmd.ParameterNumber = binary.BigEndian.Uint16(payload[i : i+2])
i += 2
return nil
}
func (cmd *InfoGet) MarshalBinary() (payload []byte, err error) {
payload = make([]byte, 2)
payload[0] = byte(cmd.CommandClassID())
payload[1] = byte(cmd.CommandID())
{
buf := make([]byte, 2)
binary.BigEndian.PutUint16(buf, cmd.ParameterNumber)
payload = append(payload, buf...)
}
return
}
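The wire layout produced by `MarshalBinary` above is two header bytes (command class, command ID) followed by a big-endian `uint16` parameter number. The standalone sketch below mirrors that layout using only the standard library; `encodeInfoGet` and `decodeInfoGet` are hypothetical helper names for illustration, not part of the generated package, and the decode side shows the bounds check needed to read two bytes at offset 2.

```go
package main

import (
	"encoding/binary"
	"errors"
	"fmt"
)

// encodeInfoGet mirrors MarshalBinary: [class, command, paramHi, paramLo].
func encodeInfoGet(param uint16) []byte {
	payload := make([]byte, 2, 4)
	payload[0] = 0x70 // CommandClassID (Configuration)
	payload[1] = 0x0C // CommandID (INFO_GET)
	buf := make([]byte, 2)
	binary.BigEndian.PutUint16(buf, param)
	return append(payload, buf...)
}

// decodeInfoGet mirrors UnmarshalBinary, guaranteeing that two bytes
// are readable at offset 2 before slicing.
func decodeInfoGet(payload []byte) (uint16, error) {
	const i = 2
	if len(payload) < i+2 {
		return 0, errors.New("payload length underflow")
	}
	return binary.BigEndian.Uint16(payload[i : i+2]), nil
}

func main() {
	p := encodeInfoGet(0x1234)
	fmt.Printf("%% X -> % X\n", p) // bytes: 70 0C 12 34
	v, err := decodeInfoGet(p)
	fmt.Println(v, err)
}
```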
Trump will probably visit the southern border over the next few weeks
In the interview, Trump said that he will probably visit the southern border over the next few weeks.
Trump called Biden's termination of the "Remain in Mexico" policy, or the Migrant Protection Protocols, a "terrible decision."
The Trump administration set up and expanded MPP during the 2019 migrant crisis as part of a broader agreement with Mexico, and hailed it as a key plank in its efforts to end "catch-and-release" — under which migrants were released into the interior of the U.S.
Under MPP, migrants remain in Mexico while they await their asylum hearings. Proponents say the policy removed a key pull factor that drew migrants north, while critics say it is cruel and puts migrants in danger by leaving them in Mexico.
"Many thousands of people are coming up right now as we speak. And you are going to have millions of people flooding our country. And it is going to destroy our country," Trump said.
Trump said the first thing Biden should do is reinstate the "Remain in Mexico" policy and complete the border wall.
Trump said that Mexico is now "very angry at us." "We're not getting along with Mexico any more. You have a great president of Mexico who was fabulous to me," Trump said.
President Biden has said that Mexico is refusing to take back all of the families that the U.S. attempts to remove after they try to cross the southern border. Biden said that his administration is in negotiations with Mexican President Andrés Manuel López Obrador on the matter.
"Some families are not going back because Mexico is refusing to take them back. Some, not all," the president said at his first news conference on Thursday.
"We're in negotiations with Mexico. That will change. They should all be going back," he continued.
Last week Trump accused Biden of causing "death and human tragedy" by unwinding some of his border policies, which Biden had called inhumane.
"We proudly handed the Biden Administration the most secure border in history," the statement reads. "All they had to do was keep this smooth-running system on autopilot. Instead, in the span of just a few weeks, the Biden Administration has turned a national triumph into a national disaster. They are in way over their heads and taking on water fast."
Republicans have been urging President Biden to visit the border to see firsthand the overwhelmed facilities amid the border surge. After visiting the border, Sen. Mike Braun, R-Ind., wrote a letter to the president urging him to make a trip south.
"The crisis surrounding this surge makes it a moral imperative for you to see firsthand what is happening—and not the sanitized version of the border visit taken by some of my congressional colleagues," Braun wrote. "Having personally gone this week, I can attest that this is a callous, unsustainable and dangerous situation."
Customs and Border Protection (CBP) facilities have been operating far beyond capacity amid an alarming surge in border crossings. The agency announced that it had encountered more than 100,000 migrants at the border in February, while the number of child migrants in custody has also risen dramatically.
\section{Introduction}
The Blume-Capel (BC) model is a classical spin-1 model, originally
introduced to study phase transitions in specific magnetic materials
with a possible admixture of non-magnetic states \cite{blume66,capel66}.
Its modification was also used to qualitatively explain the phase
transition in a mixture of He$^3$-He$^4$ adsorbed on a two-dimensional
(2D) surface \cite{BeG71}. Below a concentration of 67\% in He$^3$, the
mixture undergoes a so-called $\lambda$ transition: the two components
separate through a first order phase transition and only He$^4$ is
superfluid. On a 2D lattice representing a helium film, He atoms are
modelled by a spin-like variable, according to the following rule: an
He$^3$ atom is associated to the value 0, whereas a He$^4$ is represented
by a classical Ising spin taking the values $\pm 1$. Within this framework,
all the lattice sites are occupied either by an He$^3$ or He$^4$ atom
\cite{BeG71}. The 2D Blume-Capel model describes the
behaviour of this ensemble of spins $\{S_{mn}^{}=0,\pm1\}$. In addition to
the usual nearest-neighbour interaction, its energy includes the term
$\Delta_0 \sum_{mn}S_{mn}^2$, with $S_{mn}^{2}=0,1$, to take into account
a possible change in the number of vacancies. $\Delta_0$ can be thought of as
a chemical potential for vacancies, or as a crystal-field parameter in a
magnetic interpretation. A simple analysis of the 2D BC Hamiltonian
already shows that this model presents a rather complex phase diagram in
the plane $(T,\Delta_0)$, where $T$ is the temperature in the canonical
ensemble \cite{cardy96}. In the limit $\Delta_0\rightarrow -\infty$, the
values $S_{mn}=0$ are effectively excluded and the standard 2D Ising model
is recovered, with its well-known second-order critical point at
$(T,\Delta_0)=(T_\mathsf{c}=2/\ln(1+\sqrt{2})\simeq
2.269185,-\infty)$, with the parameters taken in units of the Ising
exchange energy $J$. At zero temperature, on the other hand, a simple
energy argument shows that the ground state is the Ising-like ordered state
with $|S_{mn}|=1$ if $\Delta_0<2$, and $S_{mn}=0$ otherwise. There is
therefore a first order phase transition at $(T,\Delta_0)=(0,2)$,
suggesting a change in the order of the transition at some tricritical
point at the critical line at finite temperatures. Mean field theory
confirms this behaviour, and provides a second order transition line in the
plane $(T,\Delta_0)$ in the region extending from negative to moderate
positive values of $\Delta_0$ \cite{blume66,capel66,BeG71, cardy96,
hoston91}. Beyond the tricritical point, as dilution increases, the
transition becomes first order. Precise numerical simulations have been
performed to study the phase diagram and to locate the tricritical point
of the 2D Blume-Capel model \cite{beale86, xalap98, liusch02, dasilva02,
silva06}. On the theoretical side, several approximation schemes have been
used as well, such as mean field theory \cite{blume66, capel66, BeG71,
cardy96, hoston91}, renormalization group analysis \cite{burk76,berk76},
and high temperature expansions \cite{saul74}. Using correlation identities
and Griffith's and Newman's inequalities, rigorous upper bounds for the
critical temperature have been obtained by Braga \textit{et al}
\cite{braga94}. It was also conjectured that exactly at the tricritical
point the 2D BC model falls into the conformal field theory (CFT) scheme of
classification of the critical theories in two dimensions
\cite{belavin84,friedan84,senechal99}. This is the case with $m=4$ and
$c=7/10$, where $c$ is the central charge \cite{friedan84, senechal99,
chenkel93}. The CFT analysis also implies a specific symmetry called
supersymmetry in the 2D BC model at the tricritical point \cite{friedan84,
senechal99,chenkel93}.\\
The two-dimensional BC model is directly related
as well to percolation theory \cite{denguo05} and dilute Potts model
\cite{qideng05}, where tricritical point properties are observed for
percolating clusters of vacancies. We also mention quantitative results
that match the universality class at the tricritical point of the BC model
with the one of a 2D spin fluid model representing a magnetic gas-fluid
coexistence transition \cite{wilding96}, and similarities between BC phase
diagram and Monte-Carlo results on the extended Hubbard model on a square
lattice \cite{paw06}. The advanced theoretical methods like bootstrap
approach and perturbed conformal analysis, in combination with the
integrable quantum field theory and numerical methods, have been applied to
study the scaling region and the RG flows in the 2D BC universality class
\cite{demusi95,fimusi20, fimusi201}.
The aim of this article is to present a different analytical method for the
BC model in two dimensions with the use of the anticommuting Grassmann
variables, originally proposed for the classical 2D Ising model in the case
of free fermions \cite{ple85dok, ple85tmp} and since then used to treat
various problems around the 2D Ising model, such as finite size effects and
boundary conditions \cite{liaw99,wuhu02}, quenched disorder
\cite{ple98,ple98fl}, and boundary magnetic field \cite{clusel05,clusel06}.
In contrast with the use of more traditional combinatorial and
transfer-matrix considerations \cite{ber69, samuel80, itz82, nojima98,
dritz89}, this method is rather based on a direct introduction of Grassmann
variables (fermionic fields) into the partition function $Z$ in order to
decouple the spin degrees of freedom in the local bond Boltzmann weights in
$Z$. A purely fermionic integral for $Z$ then follows by eliminating spin
variables in the resulting mixed spin-fermion representation for $Z$. The
method turns out to be particularly efficient to deal with models with
nearest-neighbour interactions in the 2D plane \cite{ple85dok,ple85tmp}.
For the 2D Ising model, the fermionic integral for $Z$ appears to be a
Gaussian integral over Grassmann variables, with the quadratic fermionic
form (typically called action) in the exponential \cite{ple98,dritz89}.
Respectively, the model is exactly solvable by Fourier transformation to
the momentum space for Grassmann variables in the action. In physical
language, this corresponds to the case of free fermions
\cite{dritz89,ple95amm}.
As the additional crystal-field term in the BC Hamiltonian is local, we
expected the method to be applicable in this context as well. We will
see in the following that, though it is not possible to compute exactly the
partition function and thermodynamic quantities of the BC model directly,
since the resulting fermionic action for BC is non-Gaussian, our approach
allows one to derive in a controlled way physical consequences from the
underlying fermionic lattice field theory with interaction. In the
continuum limit, a simplified effective quantum field theory can be
constructed and analyzed in the low energy sector, leading to the exact
equation for the critical line, that follows from the condition of
vanishing mass, and to the effective interaction between fermions
responsible for the existence of a tricritical point. The effects of the
interaction are then analyzed in the momentum-space
representation. An approximation scheme such as the Hartree-Fock-Bogoliubov (HFB)
method can be used to locate the tricritical point. There are also some,
albeit formal, analogies in this respect with the approaches typically used
in the BCS theory of ordinary superconductivity. In general, it is
interesting to note that in 2D a phase diagram of the BC model with first
order transition and tricritical point can be described not only within a
bosonic $\Phi^6$ Ginzburg-Landau theory \cite{lawrie84,zj04}, where the
order parameter is a simple scalar, but also with the use of fermionic
variables.
The article is organized as follows. After presenting the BC Hamiltonian
and the related partition function in standard spin-1 interpretation, we
apply the fermionization procedure leading to the exact fermionic action on
the lattice. Then, from this result, we derive the effective action in
the continuum limit and extract the exact mass. The condition of zero mass
already gives the equation for the critical line in the $(T,\Delta_0)$
plane. The effective action also includes four-fermion interaction due to
admixture of the $S^{2} =0$ states (vacancies) in the system, with coupling
constant $g_0\propto \exp(-\Delta)$, where $-\Delta =\Delta_0/T$, and
$\Delta_0$ is the parameter of the crystal field in the Hamiltonian. We
then give a physical interpretation for the existence of a tricritical
point in the BC phase diagram by studying the fermionic stability of the
BC spectrum at the critical line at order ${\bf k}^2$ in momentum and
compare our results with recent numerical Monte Carlo simulations
\cite{beale86,xalap98,liusch02,dasilva02,silva06}.
\section{The 2D Blume-Capel model}
\subsection{Hamiltonian and partition function}
The 2D BC model is defined, on a square
lattice of linear size $L$, via the following Hamiltonian:
\bb
\fl
H = -\sum_{m=1}^{L}\sum_{n=1}^{L}\Big[J_1 S_{mn}S_{m+1n}
+J_2 S_{mn}S_{mn+1}\Big] +\Delta_0\sum_{m=1}^{L}
\sum_{n=1}^{L}S_{mn}^{2}\, . \;\;\;
\label{ham1a}
\ee
In the above expression, $S_{mn}=0,\pm1$ is the BC spin-1 variable
associated with the $mn$ lattice site, with $m,n=1,2,3,\ldots,L$, where
$m,n$ are running in the horizontal and vertical directions, respectively.
The total number of sites and spins on the lattice is $L^2$, with
$L^2\to\infty$ at the final stages. The spins interact along the lattice bonds,
$J_{1,2}>0$ are the exchange energies. Notice that positive $J_{1,2}>0$
correspond to the ferromagnetic case. In addition to the Ising states with
$S_{mn}=\pm1$, there are as well the non-magnetic atomic levels with
$S_{mn}=0$, which we shall also refer to as vacancies. The crystal field
parameter $\Delta_0$ plays the role of a chemical potential, being
responsible for the level splitting between states $S_{mn}=0$ and
$S_{mn}=\pm 1$. The Hamiltonian that appears in the Gibbs exponential
may be written in the form:
\bb
\fl
-\beta H =\sum_{m=1}^{L}\sum_{n=1}^{L}\Big[K_1 S_{mn}S_{m+1n}
+K_2 S_{mn}S_{mn+1}\Big] +\Delta\sum_{m=1}^{L}
\sum_{n=1}^{L}S_{mn}^{2}\, ,\;\;\;
\label{ham1ab}
\ee
where $K_{1,2} =\beta J_{1,2}$ are now the temperature dependent coupling
parameters, $\beta =1/T$ is inverse temperature in the energy units,
and $\Delta=-\beta\Delta_0$. In what follows we will assume, in
general, the ferromagnetic case, with positive $J_{1,2}>0$ and $K_{1,2}>0$,
though the fermionization procedure by itself is valid irrespective of
the signs of interactions. \footnote{ \
In what follows, when presenting numerical results, we shall typically
assume the isotropic case for the interactions in the above Hamiltonians, with
$J_1=J_2=J,\; K_1=K_2=K$, and $K=\beta J$. We will also use, in some
cases, the dimensionless parameters normalized by the exchange energy $J$
for temperature $T$ and chemical potential $\Delta_0$.}\, The positive
$\Delta$ (negative $\Delta_0$) is favourable for the appearance of the
Ising states with $S_{mn}^{2}=1$ in the system, with the ordered phase
below the critical line in the $(T,\Delta_0)$ plane, at low temperatures,
while negative $\Delta$ (positive $\Delta_0$) will suppress Ising states,
being favourable for vacancies. In the limit $\Delta\to\infty$, or
$\Delta_0\to-\infty$, the states with $S_{mn}^{2}=0$ are effectively
suppressed and the model reduces to the 2D Ising model, with the critical
temperature being defined by the condition $\sinh 2K_1\sinh 2K_2 =1$. As
$\Delta_0$ increases to finite values, there will be a line of phase
transitions in the $(T,\Delta_0)$ plane. The increasing $\Delta_0$ admits
the appearance of the vacancy $S_{mn}^{2}=0$ states. Respectively, the
critical line goes lower as $\Delta_0$ increases from negative to positive
values and terminates at $\Delta_0=J_1+J_2$ at zero temperature, so that
all sites are empty at larger positive values of $\Delta_0$ at $T=0$. A
remarkable feature of the BC model is that there is also a tricritical
point on the critical line at finite temperatures somewhere slightly to the
left from $\Delta_0=J_1+J_2$, where the transition changes from second to
first order.\\
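The zero-temperature endpoint of the critical line quoted above follows from a simple per-site energy balance between the fully ordered Ising state and the empty (vacancy) state:
\bb
E_{\rm ord}/L^2 =-(J_1+J_2)+\Delta_0\,,\;\;\;\;
E_{\rm vac}/L^2 =0\,,\;\;
\ee
so that the two ground states exchange stability at $\Delta_0=J_1+J_2$, in agreement with the termination point of the critical line at $T=0$.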
The partition function $Z$ of the BC model is obtained by summing over all
possible spin configurations provided by $\{S_{mn}=0,\pm1\}$ at each site,
$Z= \sum_{S=0,\pm1} e^{-\beta H} =\sTR_{\{ S \}} e^{-\beta
H}$. Using the property $\{S_{mn}=0,\pm1\}$, it is easy to develop each
Boltzmann factor appearing in the above trace formula in a polynomial form:
\bb
\exp \left( K_{\,i} S S'\right)
=1+\lambda_{\,i} S S'
+\lambda_{\,i}' S^2 S'^2, \;\;\;\;
\,i=1,2\,,\;\;
\label{defpol}
\ee
with
\bb
\lambda_{\,i}=\sinh K_{\,i}\,,\;\;\;\;
\lambda_{\,i}'=\cosh K_{\,i} -1\,. \;\;\;\; \,i=1,2\,.\;\;
\label{deflambda}
\ee
The partition function is then given by the product of the above
spin-polynomial Boltzmann weights under the averaging:
\bb
\fl
Z=\sTR_{\{S_{mn}=0,\pm1\}}
\Big\{\prod_{m=1}^{L}\prod_{n=1}^{L} e^{\,\Delta S_{mn}^{2}}\,
\Big[
(1+ \lambda_1\, S_{mn}S_{m+1n}
+\lambda_{1}'S_{mn}^{2}S_{m+1n}^{2}) \nonumber \\
\times\,(1+ \lambda_2\, S_{mn}S_{mn+1}
+\lambda_{2}'\, S_{mn}^{2}S_{mn+1}^{2}) \Big]\Big\}.\;\;
\label{PFspin}
\ee
This expression will be the starting point of the fermionization procedure
for $Z$ using Grassmann variables we develop in Section 3. At first stage,
we introduce new Grassmann variables to decouple the spins in the local
polynomial factors of expression (\ref{PFspin}). At next stage, we
sum over spin states in the resulting mixed spin-fermion representation for
$Z$ to obtain a purely fermionic theory for $Z$.
\subsection{Local spin decomposition}
In what follows, we shall need to average partially fermionized $Z$ over
the spin states at each site.
This averaging will be performed in two steps: first we average over the
Ising degrees of freedom, $S_{mn}=\pm1$, and then add the
contribution of vacancies, $S_{mn}=0$.
distinguished in terms of variable $S_{mn}^{2}=0,1$. In this subsection, we
shortly comment on the formalization of this two-step averaging. Provided
we have any function of the BC spin-1 variable $f(S_{mn})$, with
$S_{mn}=0,\pm1$, the averaging rule is simple:
\bb
\sum_{S_{mn}=0,\pm1} \,f(S_{mn}) =f(0)+f(+1)+f(-1)\,.\;\;
\label{fuy1}
\ee
In the forthcoming procedures, we shall first average over the states
$S_{mn}=\pm1$ at each site, provided $S_{mn}^{2}=1$, and then sum
over the choices $S_{mn}^{2}=0,1$ at the next stage. In principle, since $S_{mn}
=\makebox{sign}\{ S_{mn}\}\,|S_{mn}|$, with $\makebox{sign}\{S_{mn}\}=\pm1$ and $|S_{mn}|
=S_{mn}^{2} =0,1$, we can try simply to write $S_{mn} =y_{mn}\sigma_{mn}$,
where $y_{mn}=0,1$, and $\sigma_{mn}=\pm1$, and to average over the
component states $y_{mn}=0,1$ and $\sigma_{mn} =\pm1$ as independent
variables. This gives:
\bb\fl
\sum_{y_{mn}=0,1;\, \sigma_{mn}=\pm1} \,f(y_{mn}\sigma_{mn})
=f(+0) +f(-0)+f(+1)+f(-1)\,.\;\;
\label{fuy2}
\ee
We see that the zero state is counted twice, in contradiction to
(\ref{fuy1}). This may be corrected by introducing in the definition of
the averaging the weight factor $\frac{1}{2}$ at $y_{mn}=0$. Equivalently,
this may be formalized by adding $2^{\,-1+y_{mn}}$ under the sum. This
yields the sum of three terms, in agreement with (\ref{fuy1}):
\bb\fl
\sum_{y_{mn}=0,1;\,\sigma_{mn}=\pm1}\,2^{-1+y_{mn}}\,
f(\sigma_{mn}y_{mn}) =f(0)+f(+1)+f(-1)\,.\;\;
\label{fuy3}
\ee
In fact, this decomposition scheme with $S_{mn} =\sigma_{mn}\,y_{mn}$ and
independently varying $\sigma_{mn}=\pm1$ and $y_{mn}=0,1$ is somewhat
closer to the situation for the two-dimensional Ising model with
quenched site dilution \cite{ple98,ple98fl}. In that case
$\sigma_{mn}=\pm1$ is simply the Ising spin, while the variable
$y_{mn}=0,1$ is the quenched dilution parameter, counting whether the given
site is occupied or dilute, and both averaging rules (\ref{fuy2}) and
(\ref{fuy3}) can be interpreted physically. The case (\ref{fuy2}) means
in fact that a spin $\sigma_{mn}=\pm 1$ is present even at a site with
$y_{mn}=0$, where it does not interact with its nearest neighbours. Such an
empty, or rather disconnected, site, by flipping between the two states
$\pm1$ under thermal fluctuations, will nevertheless contribute to the
entropy, $\ln 2$ per empty site. The case (\ref{fuy3}) means that the site
with $y_{mn}=0$ is genuinely dilute, or empty, with no spin degree of
freedom at it, even a disconnected one.
For the quenched dilute 2D Ising model, the
quenched averaging over some fixed temperature-independent distribution
$y_{mn} =0,1$ is physically distinct from the $\sigma_{mn}=\pm1$ averaging,
and is assumed to be performed rather on $-\beta F =\ln Z$, but not on $Z$
itself. The situation is different for the BC model, which is in essence
the annealed case of the site dilute Ising model, with the averaging
simultaneously over all states $S_{mn}^{}=0,\pm1$ at each site for $Z$
itself. In this case the averaging is to be performed strictly according
to the rules like (\ref{fuy1}) and (\ref{fuy3}), but not (\ref{fuy2}).
There is still another way to formalize the averaging over the
possibilities of $S_{mn}=\pm1$ before we actually fix $S_{mn}^{2}=0,1$.
It is based on the observation that the result of the averaging
(\ref{fuy1}) will not be changed if we replace $S_{mn}\to \sigma_{mn}
S_{mn}$, with $\sigma_{mn}=\pm1$, since the sum includes $S_{mn} =\pm1$
anyhow:
\bb
\fl
\sum_{S_{mn}=0,\pm1}\,f(S_{mn})
=\sum_{S_{mn}=0,\pm1} \,f(\sigma_{mn}S_{mn})
=f(0)+f(+1)+f(-1)\,,\;\; \sigma_{mn}=\pm1\,.
\label{fuy4}
\ee
Though the above equation already holds for any fixed value of
$\sigma_{mn}=\pm1$, we can as well average it over the states
$\sigma_{mn}=\pm1$, introducing factor $\frac{1}{2}$ for normalization.
The averaging of $f(\sigma_{mn}S_{mn})$ itself gives:
\bb
\fl
\frac{1}{2}\sum\limits_{\sigma_{mn}=\pm1}^{} f(\sigma_{mn}S_{mn})
=\frac{1}{2}[f(S_{mn})+f(-S_{mn})] =g(S_{mn}^{2})\,,\;\;\;\;
S_{mn}^{2}=0,1\,.\;\;
\label{fuy5}
\ee
The result of the averaging will be a function $g$ which only depends on
$|S_{mn}|=0,1$, alias $S_{mn}^{2}=0,1$, but not on the $\makebox{sign}\{S_{mn}\}$
of $S_{mn} =\makebox{sign}\{S_{mn}\}|S_{mn}|$. In terms of $g(S_{mn}^{2})$,
equation (\ref{fuy4}) gives:
\bb
\fl
\sum_{S_{mn}=0,\pm1}\,f(S_{mn})
=\sum_{S_{mn}=0,\pm1}\Big\{
\frac{1}{2}\sum_{\sigma_{mn}=\pm1}
f(\sigma_{mn}S_{mn}) \Big\}
=g(0) +2g(1)\,.\;\;
\label{fuy6}
\ee
In this form the two-step averaging will be realized in the procedure
of elimination of spin variables by constructing the fermionic integral
for $Z$ in the forthcoming discussion.
\section{Fermionization and lattice fermionic field theory}
The expression of the BC partition function $Z$ as a product of spin
polynomials under the averaging as given in (\ref{PFspin}) will be the
starting point of the fermionization procedure for $Z$. This procedure
has first been introduced in the context of the 2D pure Ising model
\cite{ple85dok, ple85tmp}. It relies on interpreting each spin polynomial
Boltzmann weight in (\ref{PFspin}) as the result of integration over a set
of two Grassmann variables, which decouples the spins under the integral.
Before going into details, we recall in the following subsection a few
essential features of Grassmann variables and the rules of integration.
\subsection{Grassmann variables}
Mathematically, Grassmann variables may be viewed as formal purely
anticommuting fermionic numbers \cite{ber66}. In physical terms, they are
the images of quantum fermions in the path integral \cite{ber66}. We recall
here a few basic features of Grassmann variables that are needed in the
rest of the paper. More details can be found in \cite{nakahara99,zinn05}.
A Grassmann algebra $\mathcal{A}$ of size $N$ is generated by a set of $N$
anti-commuting objects $\{a_1,a_2,\ldots, a_N$\} satisfying:
\bb
a_ia_j +a_ja_i =0\,,\;\;\; a_{i}^{2}=0\,,\;\;\;\;
i,j =1,2,\ldots,N\,.\;\;
\label{grass1}
\ee
This also implies $a_ia_j =-a_ja_i$, including the case $i=j$.
Unlike quantum fermions, Grassmann variables are totally anticommuting.
Note that any linear superpositions of the original variables
(\ref{grass1}) are again purely anti-commuting with each other and with the
original variables, and their squares are zeroes. Functions defined on such
an algebra are particularly simple, they are always polynomials with a
finite degree (since $a_{i}^{2}=0$). It is possible to define the notion
of integration \cite{zj04, ber66,nakahara99,zinn05} in algebra of such
polynomials with the following rules. For one variable, the rules are:
\bb
\int \dd a_i\cdot a_i=1\,,\;\;\;\;\;
\int \dd a_i\cdot 1=0\,.\;\;
\ee
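As an elementary illustration of these rules, any function of a single variable is linear, $f(a_i)=c_0+c_1a_i$, and
\bb
\int \dd a_i\,(c_0+c_1 a_i)=c_1\,,\;\;
\ee
so that Grassmann integration extracts the coefficient of the top monomial. With the conventions above, one has for two variables $\int \dd a_2\,\dd a_1\, a_1a_2=1$, while $\int \dd a_2\,\dd a_1\, a_2a_1=-1$.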
The integral with many variables is considered as a multilinear functional
with respect to each of the variables involved in the integration
(integral of a sum is the sum of the integrals etc). In multiple integrals,
the fermionic differentials are assumed again anti-commuting with each
other and with the variables themselves. The integration of any polynomial
function of Grassmann variables like $f(a) =f(a_1,a_2,\ldots,a_N)$ then
reduces, in principle, to a repeating use of the above rules.
The rules of change of variables in Grassmann variable (fermionic)
integrals under a linear substitution are similar to the analogous rules of
common (commuting) analysis. The only difference is that the Jacobian of
the transformation will enter now in the inverse power, as compared to the
commuting (bosonic) case \cite{zj04, ber66, nakahara99, zinn05}.\\
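For instance, under the linear substitution $a_i=\sum_{j}M_{ij}\,b_j$, with the measures ordered consistently, the integration measure transforms with the inverse Jacobian,
\bb
\int \prod_{i=1}^{N}\dd a_i\, f(a)
=\frac{1}{\det M}\int \prod_{i=1}^{N}\dd b_i\, f(Mb)\,,\;\;
\ee
as is readily verified for $N=1$, where $\int \dd a\, a=1$ forces $\dd a=\dd b/M$ for $a=Mb$.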
With the above definitions, the Gaussian integrals over Grassmann variables
are all expressed by the equations relating them to determinants and
Pfaffians. The basic equation for the determinantal integral of first kind
reads:
\bb
\int \prod_{i=1}^{N}\dd a_i^* \dd a_i \exp\left(\sum_{i,j=1}^{N}
a_iA_{ij} a^*_j\right)=\det A\,,\;\;
\label{deta1}
\ee
where the integration is over the doubled set of totally anti-commuting
variables $\{a,\,a^*\}$. The (square) matrix $A$ in the exponential is
arbitrary. In applications, the quadratic fermionic form in the exponential
like in (\ref{deta1}) is typically called {\em action}. Since the action is
quadratic, the integral is Gaussian. The exponential in (\ref{deta1}) is
assumed in the sense of its series expansion. Due to nilpotent properties of
fermions, the exponential series necessarily terminates at some stage,
thus resulting in a finite polynomial in the variables involved under the
integral. [With respect to the action $S=aAa^*$ taken as a whole, the last
nonzero term will be with $S^N\neq 0$, while $S^{N+1}=0$. Alternatively,
the same polynomial for the exponential from (\ref{deta1}) will follow by
multiplying elementary factors like $\exp(a_iA_{ij}a_{j}^{*})=
1+a_iA_{ij}a_{j}^{*}$]. In physical interpretation, the integral of the
first kind (\ref{deta1}) with complex-conjugate fields rather corresponds
to Dirac theories.\\
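The simplest instance of (\ref{deta1}) is the one-fermion case $N=1$, where the expansion of the exponential terminates after the linear term:
\bb
\int \dd a^{*}\dd a\;e^{\,aAa^{*}}
=\int \dd a^{*}\dd a\,(1+A\,aa^{*})
=A\,,\;\;
\ee
which is indeed $\det A$ for a $1\times1$ matrix.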
The Majorana theories with real fermionic fields are presented by the
Gaussian integrals of the second kind related to the Pfaffian. The basic
identity for the fermionic integral of the second kind reads:
\bb
\int \prodg{i=1}{N} \dd a_i \exp\left(\sum_{i,j=1}^{N}
\frac{1}{2}\,a_iA_{ij}a_j\right)
=\mbox{Pf}\,A\,.\;\;
\label{pfaff1}
\ee
The integration is over the set of even number $N$ of Grassmann
variables, the arrow in the measure indicates the direction of ordering
of anti-commuting differentials. The matrix in the exponential is now
assumed skew-symmetric, $A_{ij}+A_{ji}=0,\;\; A_{ii}=0$, which
property is complimentary to fermionic anticommutativity. The result of
the integration is the Pfaffian associated with the
skew-symmetric matrix $A$ from the exponential, otherwise, one can
associate the Pfaffian on the right-hand side of (\ref{pfaff1}) with the
above-diagonal triangular array of elements of that matrix,
$\{A_{ij}\,|\, 1\leq i< j \leq N\}$. In mathematics, the Pfaffian is known
as a certain skew-symmetric polynomial in elements of a triangular array
of the above kind. In physics, the combinatorics of the Pfaffian
is also known under the name of the (fermionic) Wick theorem.
Note that the identity (\ref{pfaff1}) can by itself be taken as
the definition of the Pfaffian.\\
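The simplest nontrivial case of (\ref{pfaff1}) is $N=2$, where skew-symmetry leaves a single independent element and $\frac{1}{2}\sum_{ij}a_iA_{ij}a_j=A_{12}\,a_1a_2$. With the differentials ordered so that $\int\dd a_2\,\dd a_1\,a_1a_2=1$,
\bb
\int \dd a_2\,\dd a_1\;
e^{\,A_{12}\,a_1a_2}
=\int \dd a_2\,\dd a_1\,(1+A_{12}\,a_1a_2)
=A_{12}=\mbox{Pf}\,A\,,\;\;
\ee
consistent with $(\mbox{Pf}\,A)^2=\det A=A_{12}^{2}$ for the $2\times2$ skew-symmetric matrix.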
In a combinatorial sense, the determinant is rather a particular case of
the Pfaffian. Respectively, the integral (\ref{deta1}) is a subcase of the
integral (\ref{pfaff1}). It can be shown, on the other hand, that
$(\mbox{Pf} \, A)^2 =\det A$ for any skew-symmetric matrix $A$. This
implies that, in principle, an integral of the second kind (\ref{pfaff1})
can always be reduced to an integral of first kind (\ref{deta1}) by
doubling the number of fermions in (\ref{pfaff1}). In applications like in
the Ising and BC models, where the original integrals in the real lattice
space rather appear in the Pfaffian-like form of (\ref{pfaff1}), this
reduction to the determinantal case occurs automatically after the
transformation to the momentum space, where the fermionic variables are
typically combined into groups of variables with opposite momenta $(\bm{k},
-\bm{k})$, which play the role of the conjugated variables like in
(\ref{deta1}). In practice, for low-dimensional integrals, most of
calculations can be performed simply from the definition of the integral,
by expanding the integrand functions into polynomials.
\subsection{Fermionization procedures}
In the same spirit as for the 2D Ising model \cite{ple85tmp}, we introduce
two pairs of Grassmann variables $(a_{mn}, \bar{a}_{mn})$ and
$(b_{mn}, \bar{b}_{mn})$ to factorize the polynomials appearing in
(\ref{PFspin}). Namely we use the relations:
\bb
\fl \nonumber
1+\lambda_1S_{mn}S_{m+1n} +\lambda_1'
S_{mn}^{2}S_{m+1n}^{2}=\int \dd \bar{a}_{mn} \dd a_{mn} \,
e^{(1+\lambda_1'S_{mn}^{2}S_{m+1n}^{2})\, a_{mn}^{}\bar{a}_{mn}}
\\ \nonumber
\times
(1+a_{mn}S_{mn})\,(1+\lambda_{1}\,\bar{a}_{mn}S_{m+1n}),
\ee
\bb
\fl \nonumber
1+\lambda_2S_{mn}S_{mn+1} +\lambda_2'
S_{mn}^{2}S_{mn+1}^{2}=\int \dd \bar{b}_{mn} \dd b_{mn}\,
e^{(1+\lambda_2'S_{mn}^{2}S_{mn+1}^{2})\, b_{mn}^{}\bar{b}_{mn}}
\\
\times
(1+b_{mn}S_{mn})\,(1+\lambda_{2}\,\bar{b}_{mn}S_{mn+1}).
\label{fact1}
\ee
For the sake of simplicity in notation we introduce the following
link factors:
\bb\nonumber
& A_{mn}=1+a_{mn} S_{mn}\,,\;\;
& \bar{A}_{m+1n}=1+\lambda_1 \bar{a}_{mn}
S_{m+1n}\,,\;\;\\
& B_{mn}=1+b_{mn} S_{mn}\,,\;\;
& \bar{B}_{mn+1}=1+\lambda_2 \bar{b}_{mn}S_{mn+1}\,. \;\;
\label{link1}
\ee
We also define the Grassmann local trace operators which associate to any
function $f(\ldots)$ on the Grassmann algebra as follows:
\bb
\nonumber
\TR_{(a_{mn})} \big[ f(a_{mn},\bar{a}_{mn})\big]
=\int \dd \bar{a}_{mn} \dd a_{mn} \,
e^{(1+\lambda_1'S^2_{mn}S^2_{m+1n})\, a_{mn}^{}\bar{a}_{mn}^{}}
f(a_{mn},\bar{a}_{mn}), \\
\TR_{(b_{mn})} \big[ f(b_{mn},\bar{b}_{mn}) \big]
=\int \dd \bar{b}_{mn} \dd b_{mn}\,
e^{(1+\lambda_2'S^2_{mn}S^2_{mn+1})\,
b_{mn}^{}\bar{b}_{mn}^{}}\, f(b_{mn},\bar{b}_{mn}).
\label{fact2}
\ee
The factorized Boltzmann weights from (\ref{fact1}) now read:
\bb
\nonumber
1+\lambda_1S_{mn}S_{m+1n}+\lambda_1'
S_{mn}^2S_{m+1n}^2=\TR_{(a_{mn})} \big[ A_{mn}
\bar{A}_{m+1n}\big],\\
1+\lambda_2S_{mn}S_{mn+1}+\lambda_2'
S^2_{mn}S^2_{mn+1}=\TR_{(b_{mn})} \big[B_{mn} \bar{B}_{mn+1}
\big].
\label{fact3}
\ee
Introducing the above Boltzmann weights into the original expression
(\ref{PFspin}) for $Z$, we obtain a mixed representation containing both
spins and Grassmann variables for $Z$. Notice that as the separable link
factors like $A_{mn}, \bar{A}_{mn}, B_{mn}, \bar{B}_{mn}$ are
neither commuting nor anti-commuting with each other, the order in which
they appear in the product may be important. The factorized bond weights,
however, presented in (\ref{fact3}) by doubled link factors under the trace
operators, are totally commuting, if taken as a whole, with any element of
the algebra under the averaging. For the whole lattice, following the rules
(\ref{fact2}), we define the global trace operator as follows:
\bb
\label{fact4} \fl
\TR_{(a,b)}\, \big[ f \big]
=\int \prod_{m=1}^L \prod_{n=1}^L \dd \bar{a}_{mn}
\dd a_{mn}\dd \bar {b}_{mn} \dd b_{mn} e^{\Delta S^2_{mn}}
f(a_{mn},\bar{a}_{mn},b_{mn},\bar{b}_{mn})\\ \nonumber \times \exp \left\{
\sum_{m=1}^L \sum_{n=1}^L \left[ (1+\lambda'_1 S^2_{mn}
S^2_{m+1n}) a_{mn} \bar{a}_{mn} +(1+\lambda'_2 S^2_{mn}
S^2_{mn+1}) b_{mn} \bar{b}_{mn}\right] \right\}.
\ee
All even-power terms in the spin variables are now incorporated
into the generalized Gaussian averaging measure of (\ref{fact4}),
including the term with chemical potential. The partition function is
then given by
\bb
\label{PFmix}
Z=\sTR_{\{S\}}
\TR_{(a,b)} \left[ \prodd{n=1}{L} \left( \prodd{m=1}{L}
\left( (A_{mn}\bar{A}_{m+1n}) (B_{mn} \bar{B}_{mn+1})
\right) \right)
\right].
\ee
At this stage the factorized partition function appears as a double trace,
over the spin degrees of freedom, with $\sTR_{\{S\}}$, and over the
Grassmann variables, with $\TR_{(a,b)}\,$. The idea of the next step is to
perform the spin summation in (\ref{PFmix}) to obtain a purely fermionic
integral for $Z$. At the first stage, we eliminate only the Ising degrees
$\pm1$ of the spin variables in (\ref{PFmix}); the averaging over
$S_{mn}^{2}=0,1$ will be performed at a later stage.
\subsection{The ordering of factors}
Up to now we have only added extra fermionic (Grassmann) variables to obtain
the mixed expression (\ref{PFmix}), where the spin variables are actually
decoupled into separable link factors like (\ref{link1}). Further algebraic
manipulations are necessary to simplify this expression so that the spin
averaging becomes possible in each group of factors with the same spin. For any
given $mn$, there are four such factors, $A_{mn}, B_{mn}, \bar{A}_{mn},
\bar{B}_{mn}$, which all include the same BC spin $S_{mn}=0, \pm1$. What we
need is to bring together the above four factors carrying the same spin,
at least at the moment of the spin averaging. We apply the mirror-ordering
procedure, introduced originally for the two-dimensional Ising model, to
move together, whenever possible, the different link factors containing the
same spin. Although the separable link factors like (\ref{link1})
in general neither commute nor anticommute, it is still possible to
make use of the property that the doubled combinations like $A_{mn}
\bar{A}_{m+1n}$ and $B_{mn} \bar{B}_{mn+1}$ are effectively commuting, if
taken as a whole, with any element of the algebra under the sign of the
Gaussian fermionic averaging in (\ref{PFmix}). Using the notation for the
ordered products similar to that of \cite{ple85dok,ple85tmp}, this leads
to:
\bb
\nonumber
Z&=& \sTR_{\{S\}} \TR_{(a,b)}\Big\{\prod\limits_{m=1}^{L}
\prod\limits_{n=1}^{L}
\Big[ (A_{mn}\bar{A}_{m+1n})(B_{mn}\bar{B}_{mn+1})
\Big]\Big\}
\\ \nonumber
&=& \sTR_{\{S\}} \TR_{(a,b)}
\Big\{ \prodd{n=1}{L}\, \Big[\, \prodd{m=1}{L}
\bar{B}_{mn}A_{mn}\bar{A}_{m+1n}\cdot \prodg{m=1}{L}B_{mn}\, \Big] \Big\}
\;\;\;\;
\\
\label{PFmixmo}
&=&
\sTR_{\{S\}} \TR_{(a,b)} \left[
\prodd{n=1}{L} \left( \left( \prodd{m=1}{L}
\bar{A}_{mn} \bar{B}_{mn} A_{mn} \right)\cdot \left(\prodg{m=1}{L} B_{mn}
\right) \right) \right].
\ee
In the above transformations, we use the mirror-ordering decoupling for the
factors in the vertical direction, $B_{mn}\bar{B}_{mn+1}$, with respect to
$n$, then insert the commuting factorized horizontal weights,
$A_{mn}\bar{A}_{m+1n}$, and reread the resulting products in a few
subsequent transformations (cf. \cite{ple85dok,ple85tmp}). The boundary
terms are also to be treated
properly as we pass from (\ref{PFspin}) to (\ref{PFmixmo}). The simplest
case is provided by the free boundary conditions. The free boundary
conditions for spin variables, $S_{L+1n} =S_{mL+1}=0$, in (\ref{PFspin})
correspond to the free boundary conditions for fermions, $\bar{a}_{0n}
=\bar{b}_{m0} =0$, in (\ref{PFmixmo}). For free boundary conditions, the
transformation from (\ref{PFspin}) to (\ref{PFmixmo}) is exact. In what
follows, however, we will typically assume the periodic boundary conditions
for fermions in representations like (\ref{PFmixmo}). These are most
suitable closing conditions when passing to the Fourier space for
anticommuting (Grassmann) fields. The change of the boundary conditions of
this kind is inessential in the limit of infinite lattice as $L^2\to
\infty$. In principle, one can pay more attention to the effects of the
boundary terms in the periodic case, which can actually be treated
rigorously also for finite lattices \cite{ple85tmp, liaw99, wuhu02,
clusel05, clusel06}.\\
In the case of the 2D Ising
model, with $S_{mn}^{2}=1$, we can explicitly perform the trace over the
Ising spin degrees of freedom $S_{mn}=\pm1$ recursively at the junction of
two $m$-ordered products in the final line of (\ref{PFmixmo}). The
situation is slightly different in the BC case, since $S_{mn}^{2}=0,1$,
instead. Also, the trace operator (\ref{fact4}) contains terms with
$S_{mn}^2=0,1$ which are coupled at neighboring sites. Therefore it is not
possible to trace over the whole set of states $S_{mn} =0,\pm1$ directly in
(\ref{PFmixmo}) in the BC case; we can only first eliminate the Ising
degrees $\makebox{sign}\{S_{mn}\}=\pm1$. The BC variables $S_{mn}^{2}=0,1$ will
still remain as parameters and will be eliminated at later stages. We will
realize the elimination of the Ising degrees by the symmetrization
transformation, $S_{mn}\to \sigma_{mn}S_{mn}$, with averaging over
$\sigma_{mn} =\pm1$, following the procedures explained in
(\ref{fuy4})-(\ref{fuy6}) above. The details of the $\sigma_{mn}=\pm1$
averaging are discussed in the next subsection. Thus, the ordering
procedure on the link variables (\ref{link1}) allows us to eliminate
at least one part of the spin degrees of freedom in the factorized
expression for $Z$ in the final line of (\ref{PFmixmo}).
\subsection{Spin summation}
At the junction of the two ordered products in (\ref{PFmixmo}), with
$S_{mn}\to \sigma_{mn}S_{mn}$, we perform the trace over $\sigma_{mn}
=\pm1$ recursively, for $m=L,L{-}1,\ldots,2,1$, for given fixed $n$,
starting with $m=L$. The procedure will then be repeated for the other
values of $n=1,2,3,\ldots,L$. The four relevant factors $\bar{A}_{mn},
\bar{B}_{mn}, A_{mn},B_{mn}$ with the same spin that meet at the junction
of the two $m$-products in (\ref{PFmixmo}), for given $n$, are to be
specified from (\ref{link1}). There we assume $S_{mn}\to \sigma_{mn}
S_{mn}$. Then we multiply the above four factors, taking into account that
$\sigma_{mn}^{2}=1$, so that $S_{mn}^{2} \to \sigma_{mn}^{2}S_{mn}^{2} \to
S_{mn}^{2}$, and sum over the states $\sigma_{mn}=\pm1$. This will
eliminate all odd terms in the polynomial so obtained. The averaging thus
yields:
\bb
\fl \nonumber
\frac{1}{2}\sum_{\sigma_{mn}=\pm 1}
\bar{A}_{mn} \bar{B}_{mn} A_{mn} B_{mn}
\\
\fl\nonumber
=1+S^2_{mn}a_{mn}b_{mn}
+S^2_{mn}(\lambda_1\,\bar{a}_{m-1n}
+\lambda_2\,\bar{b}_{mn-1})(a_{mn}+b_{mn})
+S^2_{mn}\lambda_1\lambda_2\,\bar{a}_{m-1n} \bar{b}_{mn-1}\\
\nonumber
+\,S_{mn}^{4}\lambda_1\lambda_2a_{mn}b_{mn}
\bar{a}_{m-1n} \bar{b}_{mn-1}\\
\fl
=\exp\Big[\,S^2_{mn}\Big(a_{mn}b_{mn}
+(\lambda_1 \bar{a}_{m-1n}+\lambda_2 \bar{b}_{mn-1})(a_{mn}+b_{mn})
+\lambda_1\lambda_2 \bar{a}_{m-1n} \bar{b}_{mn-1}
\Big)\Big]\,.\;\;
\label{avab1a}
\ee
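The $\sigma_{mn}=\pm1$ averaging above can be checked mechanically. The sketch below uses a Jordan-Wigner matrix representation of the four Grassmann generators (our own construction, not part of the paper's formalism; the couplings $\lambda_1=0.3$, $\lambda_2=0.45$ are sample values), which realizes the anticommutation and nilpotency relations exactly:

```python
import numpy as np
from functools import reduce

# Jordan-Wigner matrices: theta_i^2 = 0 and theta_i theta_j = -theta_j theta_i.
N = 4                                  # abar_{m-1,n}, bbar_{m,n-1}, a_{mn}, b_{mn}
SP = np.array([[0., 1.], [0., 0.]])    # nilpotent sigma^+
Z, I2 = np.diag([1., -1.]), np.eye(2)
def theta(i):
    return reduce(np.kron, [Z] * i + [SP] + [I2] * (N - 1 - i))
abar_p, bbar_p, a, b = (theta(i) for i in range(N))
E = np.eye(2 ** N)

def gexp(X):                           # exp of a nilpotent matrix
    out, term = E.copy(), E.copy()
    for k in range(1, N + 1):          # series terminates: X^{N+1} = 0
        term = term @ X / k
        out = out + term
    return out

lam1, lam2 = 0.3, 0.45                 # sample couplings
for S in (-1, 0, 1):                   # S_{mn}, entering only through sigma*S
    lhs = sum((E + lam1 * s * S * abar_p) @ (E + lam2 * s * S * bbar_p)
              @ (E + s * S * a) @ (E + s * S * b)
              for s in (+1, -1)) / 2    # average over sigma = +-1
    rhs = gexp(S**2 * (a @ b + (lam1 * abar_p + lam2 * bbar_p) @ (a + b)
                       + lam1 * lam2 * abar_p @ bbar_p))
    assert np.allclose(lhs, rhs)
print("junction average equals the Gaussian exponential for S = -1, 0, 1")
```

Since $S^4=S^2$ for $S=0,\pm1$, the quartic term of the polynomial is generated automatically by the square of the exponent, as in (\ref{avab1a}).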
The even fermionic polynomial resulting under the averaging can be written
as a Gaussian exponential, as shown in the final line. This term commutes
with all other elements of the algebra and can be moved outside the
junction. The BC spins still remain in the form of
$S_{mn}^{2}=0,1$ in (\ref{avab1a}), but the Ising degrees, $\makebox{sign}\{S_{mn}\}
=\pm1$, are already effectively eliminated. After completing the above
averaging procedure at the junction at $m=L$, for given $n$, we repeat the
calculation for $m=L-1,\, \ldots,\,2,1$, and then for other values of
$n=1,2,\,\ldots,\,L$. Adding the diagonal terms from the definition of the
fermionic averaging (\ref{fact4}), the partially traced partition function
finally reads:
\bb
\fl\nonumber
Z&=&2^{L^2}\sTR_{\{S^2=0,1\}}
\int \prod_{m,n=1}^L \dd \bar {a}_{mn} \dd a_{mn}\dd \bar {b}_{mn} \dd
b_{mn}
\exp \left[\Delta S^2_{mn}
+(1+\lambda'_1\,S^2_{mn}S^2_{m+1n})a_{mn}\bar{a}_{mn}
\right.
\\
\fl\nonumber
& &
+(1+\lambda'_2\, S^2_{mn}S^2_{mn+1}) b_{mn}\bar{b}_{mn}
+S^2_{mn}\,(\lambda_1 \bar{a}_{m-1n}
+\lambda_2 \bar{b}_{mn-1} )(a_{mn}+b_{mn})
\\[5pt]
\fl
& &\left.
+\,S^2_{mn}\,a_{mn} b_{mn}
+\,S^2_{mn}\,\lambda_1 \lambda_2 \bar{a}_{m-1n} \bar{b}_{mn-1}
\right]\,.\;\;
\label{abss1a}
\ee
The resulting integral for $Z$ in (\ref{abss1a}) is a Gaussian integral,
which still includes the variables $S_{mn}^{2}=0,1$ as parameters. At this
stage, it is easy to recognize that the 2D Ising model is solvable, since in
this case $S_{mn}^{2}=1$ at all sites. The partition function $Z$ is then
given by a Gaussian fermionic integral, which can be readily evaluated by
passing to the momentum space for fermions \cite{ple85dok,ple85tmp}. This
yields Onsager's expressions for $Z$ and $-\beta F=\ln Z$. In the BC
model case, it still remains to eliminate the $S_{mn}^{2}=0,1$ degrees of
freedom in the above expression (\ref{abss1a}) for $Z$.\\
The trace over $S_{mn}^{2}=0,1$ can be performed in (\ref{abss1a}) after we
manage to decouple the variables in terms including
$S_{mn}^{2}S_{m+1n}^{2}$ and $S_{mn}^{2}S_{mn+1}^{2}$. Several methods are
possible. One way is to introduce another auxiliary set of Grassmann link
variables, similarly to what we previously did to decouple the factors
$S_{mn}S_{m+1n}$ and $S_{mn} S_{mn+1}$ in (\ref{fact1}).
It is possible, however, to avoid the introduction of new fields by
using instead the following rescaling of the fermionic variables
under the integral: $a_{mn} \to a_{mn}/S_{mn}^{2},\,b_{mn}\to
b_{mn}/S_{mn}^{2}$. Correspondingly, to keep the integral invariant, one
has to rescale the differentials: $da_{mn}\to S_{mn}^{2}da_{mn},\,
db_{mn}\to S_{mn}^{2}db_{mn}$. This may be viewed, in principle, as a kind
of change of variables in a fermionic integral, which leaves the integral
invariant, as follows from the basic rules of integration.
$S_{mn}^{2}$ then disappears in some places inside the exponential and
appears in the others, the terms with $S_{mn}^{2}S_{m+1n}^{2}$ and
$S_{mn}^{2} S_{mn+1}^{2}$ being decoupled. Also, the resulting
seemingly singular expressions like $S_{mn}^{2} \exp(a_{mn} \bar a_{mn}/
S_{mn}^{2})$ are to be understood as $S_{mn}^{2} \exp(a_{mn} \bar a_{mn}/
S_{mn}^{2}) =S_{mn}^{2}(1+a_{mn}\bar a_{mn}/ S_{mn}^{2})=S_{mn}^{2}
+a_{mn}\bar a_{mn}$. Finally, after shifting some indices in the sums,
we obtain:
\bb
\nonumber\fl
Z &=& 2^{L^2}\sTR_{\{S^2=0,1\}}
\int \prod_{m,n=1}^L \dd \bar {a}_{mn} \dd a_{mn}\dd \bar {b}_{mn} \dd
b_{mn} (S^2_{mn}+a_{mn}\bar{a}_{mn})(S^2_{mn}+b_{mn}\bar{b}_{mn})
\\
\nonumber\fl
& \times& \exp \left[ \Delta S^2_{mn}
+ S^2_{mn}\left(\lambda'_1\, a_{m-1n}\,\bar{a}_{m-1n}
+\lambda'_2\, b_{mn-1} \bar{b}_{mn-1}
+\lambda_1 \lambda_2\,\bar{a}_{m-1n}\bar{b}_{mn-1}\right)
\right]
\\*[5pt]\fl
&\times& \exp \left[a_{mn}b_{mn}+(\lambda_1 \bar{a}_{m-1n}+\lambda_2
\bar{b}_{mn-1})(a_{mn}+b_{mn})\right]\,.\;\;
\label{bcint1}
\ee
In this expression, we can already locally perform the sum over
$S_{mn}^{2}=0,1$ at each site. The rules like (\ref{fuy3})-(\ref{fuy6})
are to be taken into account in order not to count twice the contribution
of $S_{mn}^{2}=0$ states. By averaging the part of the product explicitly
depending on $S_{mn}^{2}=0,1$, we obtain:
\bb
\nonumber\fl
\sum_{ \{ S^2_{mn}=0,1 \} }
\Big\{\,2^{\,S^2_{mn}}
\Big[(S^2_{mn}+a_{mn}\bar{a}_{mn})(S^2_{mn}+b_{mn}\bar{b}_{mn})
\Big]\,
\\ \nonumber
\times \exp \Big[S^2_{mn}\Big(\Delta+\lambda'_1a_{m-1n} \bar{a}_{m-1n}
+\lambda'_2 b_{mn-1} \bar{b}_{mn-1}
+\lambda_1 \lambda_2 \bar{a}_{m-1n}\bar{b}_{mn-1}\Big)
\Big]\Big\}
\\
=a_{mn} \bar{a}_{mn} b_{mn} \bar{b}_{mn}
+ 2 e^\Delta e^{G_{mn}}, \;\;
\label{Trrho}
\ee
where $G_{mn}$ in the exponential in the final line stands for the local
part of the action at an Ising site with $S_{mn}^{2} =1$. The first term in
the final line is the one produced at a dilute site with $S_{mn}^{2}=0$.
The explicit expression for $G_{mn}$ reads:
\bb\fl
G_{mn}=
a_{mn}\bar{a}_{mn}+b_{mn}\bar{b}_{mn}+\lambda_1 \lambda_2\,
\bar{a}_{m-1n} \bar{b}_{mn-1}
+\lambda'_1\, a_{m-1n} \bar{a}_{m-1n}+\lambda'_2\,
b_{mn-1} \bar{b}_{mn-1}.\;\;
\label{bcint3}
\ee
The result of the averaging in (\ref{Trrho}) can also be written as
a single exponential, taking into account the nilpotency of the
fermions:
\bb
\fl\nonumber
a_{mn} \bar{a}_{mn} b_{mn} \bar{b}_{mn}
+2\,e^\Delta e^{G_{mn}}
=2 e^\Delta e^{G_{mn}}
\left(1+\frac{1}{2}\,a_{mn} \bar{a}_{mn} b_{mn}
\bar{b}_{mn}\, e^{-\Delta-G_{mn}} \right)
\\ \nonumber
=2 e^\Delta \exp \left(G_{mn}
+\frac{1}{2} a_{mn} \bar{a}_{mn} b_{mn}\bar{b}_{mn}
e^{-\Delta-G_{mn}} \right)
\\
=2 e^\Delta \exp \left(G_{mn}
+\frac{1}{2}e^{-\Delta} a_{mn} \bar{a}_{mn} b_{mn}\bar{b}_{mn}
e^{-G'_{mn}}\right)\,.\;\;
\label{bcint4}
\ee
In the final expression, the local action $G_{mn}$ is replaced
by its reduced version $G_{mn}'$, since the prefactor $a_{mn}\bar{a}_{mn}
b_{mn} \bar{b}_{mn}$ annihilates the first two terms of $G_{mn}$ in the
exponential. The reduced action reads:
\bb
\fl
G'_{mn} =\lambda'_1\, a_{m-1n} \bar{a}_{m-1n}
+\lambda'_2\,b_{mn-1} \bar{b}_{mn-1} +\lambda_1 \lambda_2\,
\bar{a}_{m-1n} \bar{b}_{mn-1}\,.\;\;
\label{bcint4a}
\ee
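Both the spin sum (\ref{Trrho}) and the re-exponentiation (\ref{bcint4}) can be checked mechanically. The following sketch (our own Jordan-Wigner matrix model of the eight Grassmann generators involved; all coupling values are sample numbers) verifies the two identities as exact matrix identities:

```python
import numpy as np
from functools import reduce

N = 8   # a, abar, b, bbar at site mn; a, abar at (m-1,n); b, bbar at (m,n-1)
SP = np.array([[0., 1.], [0., 0.]])    # nilpotent sigma^+
Z, I2 = np.diag([1., -1.]), np.eye(2)
def theta(i):
    return reduce(np.kron, [Z] * i + [SP] + [I2] * (N - 1 - i))
a, abar, b, bbar, ap, abar_p, bp, bbar_p = (theta(i) for i in range(N))
E = np.eye(2 ** N)

def gexp(X):                           # exp of a nilpotent matrix
    out, term = E.copy(), E.copy()
    for k in range(1, N + 1):          # series terminates exactly
        term = term @ X / k
        out = out + term
    return out

l1, l2, l1p, l2p, Delta = 0.3, 0.45, 0.2, 0.15, -0.7   # sample values
Gp = l1p * ap @ abar_p + l2p * bp @ bbar_p + l1 * l2 * abar_p @ bbar_p
G = a @ abar + b @ bbar + Gp           # G of (bcint3); G' of (bcint4a)
x = a @ abar @ b @ bbar                # vacancy prefactor

# spin sum (Trrho): sum over S^2 = 0, 1 with the 2^{S^2} weight
trrho = sum((2 ** s) * (s * E + a @ abar) @ (s * E + b @ bbar)
            @ gexp(s * Gp) * np.exp(s * Delta) for s in (0, 1))
assert np.allclose(trrho, x + 2 * np.exp(Delta) * gexp(G))

# re-exponentiation (bcint4): the vacancy term joins the exponential
rhs = 2 * np.exp(Delta) * gexp(G + 0.5 * np.exp(-Delta) * x @ gexp(-Gp))
assert np.allclose(x + 2 * np.exp(Delta) * gexp(G), rhs)
print("(Trrho) and (bcint4) hold as matrix identities")
```

The check exploits that all bilinears in $G_{mn}$ are even and therefore commute, so the matrix exponential factorizes exactly, as in the text.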
Substituting this result into (\ref{bcint1}) and shifting the $mn$ index
in some of the diagonal terms of the resulting combined action,
we obtain:
\bb
\nonumber\fl
Z=2^{L^2} e^{L^{2}\Delta}
\int \prod\limits_{m=1}^{L}\prod\limits_{n=1}^{L}
d\bar{a}_{mn}da_{mn}d\bar{b}_{mn}db_{mn}
\exp\Big\{\sum\limits_{m=1}^{L}
\sum\limits_{n=1}^{L}
\Big[\,(1+\lambda_1')\,a_{mn}\bar{a}_{mn}
\\ \nonumber \fl
+(1+\lambda_2')\,b_{mn}\bar{b}_{mn}
+a_{mn}b_{mn} +(\lambda_1\bar{a}_{m-1n} +\lambda_2\bar{b}_{mn-1})
(a_{mn} +b_{mn})
\\ \nonumber \fl
+\lambda_1\lambda_2\,\bar{a}_{m-1n}\bar{b}_{mn-1}
+\bar{g}_0 \;a_{mn}\bar{a}_{mn}b_{mn}\bar{b}_{mn}\,
\exp\,(-\lambda_1'\,a_{m-1n}\bar{a}_{m-1n}
-\lambda_2'\, b_{mn-1}\bar{b}_{mn-1}
\\ \fl
-\lambda_1\lambda_2\,
\bar{a}_{m-1n}\bar{b}_{mn-1})\Big]\Big\}\,, \;\;\;\;\;
\bar g_0=e^{-\Delta}/2\,,\;\;
\label{bcint5}
\ee
with $-\Delta =+\beta\Delta_0$. This is already a purely fermionic integral
for $Z$, the spin degrees of freedom being completely eliminated. The
interaction part of the action is introduced with coupling constant
$\bar{g}_0\propto e^{\beta\Delta_0}$, which depends on $\Delta_0$. To
simplify the comparison with the 2D Ising model, and for other needs, we
now rescale some of the Grassmann variables under the integral using
the following transformation:
\bb
(1+\lambda'_1)\,\bar{a}_{mn} \rightarrow \bar{a}_{mn},\;\;\;\;
(1+\lambda'_2)\,\bar{b}_{mn} \rightarrow \bar{b}_{mn}.
\;\;\;\label{bcint5b}
\ee
The corresponding differentials are to be rescaled with inverse factors.
In this way, we obtain the final result of this subsection:
\bb
\fl\nonumber
Z=(2e^{\Delta}\cosh K_1 \cosh K_2)^{L^2}
\int \prod\limits_{m=1}^{L}\prod\limits_{n=1}^{L}
d\bar{a}_{mn}da_{mn}d\bar{b}_{mn}db_{mn}\exp\Big\{\sum\limits_{m=1}^{L}
\sum\limits_{n=1}^{L}
\\ \nonumber \fl
\Big[\,a_{mn}\bar{a}_{mn} +b_{mn}\bar{b}_{mn}
+a_{mn}b_{mn} +(t_1\bar{a}_{m-1n} +t_2\bar{b}_{mn-1})
(a_{mn} +b_{mn})
+t_1t_2\,\bar{a}_{m-1n}\bar{b}_{mn-1}
\\ \fl
+\;g_0\;a_{mn}\bar{a}_{mn}b_{mn}\bar{b}_{mn}\,
\exp\,(-\gamma_1a_{m-1n}\bar{a}_{m-1n}
-\gamma_2b_{mn-1}\bar{b}_{mn-1}
-t_1t_2\,\bar{a}_{m-1n}\bar{b}_{mn-1})\Big]\Big\}\,,
\label{PFfinal}
\ee
where $t_i =\tanh K_i$ and we have introduced the following constants:
\bb
\fl
g_0=\frac{e^{-\Delta}}{2 \cosh K_1 \cosh K_2},\;\;\;
\gamma_{i}=1-\frac{1}{\cosh K_{i}}=1-\sqrt{1-t^2_{i}}.\;\;\;
\;\;\;\label{bcint8}
\ee
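The consistency of (\ref{bcint5b})-(\ref{bcint8}) can be checked numerically. The sketch below assumes the hyperbolic parametrization $\lambda_i = \sinh K_i$, $\lambda_i' = \cosh K_i - 1$ for the couplings introduced earlier (our hypothesis, consistent with the prefactor $(2e^{\Delta}\cosh K_1 \cosh K_2)^{L^2}$, i.e. $1+\lambda_i'=\cosh K_i$), and works in the isotropic case $K_1=K_2=K$:

```python
import math

# Hypothesis: lam = sinh K, lam' = cosh K - 1, so that 1 + lam' = cosh K.
for K in (0.2, 0.7, 1.3):
    t = math.tanh(K)
    lam, lamp = math.sinh(K), math.cosh(K) - 1.0
    # the two forms of gamma in (bcint8) agree
    assert abs((1 - 1 / math.cosh(K)) - (1 - math.sqrt(1 - t * t))) < 1e-12
    # rescaling (bcint5b) turns lam/(1 + lam') into t = tanh K ...
    assert abs(lam / (1 + lamp) - t) < 1e-12
    # ... and lam'/(1 + lam') into gamma ...
    assert abs(lamp / (1 + lamp) - (1 - 1 / math.cosh(K))) < 1e-12
    # ... and gbar_0 = e^{-Delta}/2 into g_0 (isotropic K1 = K2 = K)
    Delta = -0.4                       # sample value of Delta
    gbar0 = math.exp(-Delta) / 2
    g0 = math.exp(-Delta) / (2 * math.cosh(K) ** 2)
    assert abs(gbar0 / (1 + lamp) ** 2 - g0) < 1e-12
print("couplings map onto t, gamma and g_0 under the rescaling (bcint5b)")
```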
In a compact form, the equation (\ref{PFfinal}) reads:
\bb
\fl
Z =(2e^{\Delta}\cosh K_1 \cosh K_2)^{L^2}\int D\bar a
DaD\bar b Db \;\;\exp(\action_{\mathrm{Ising}} +\action_{\mathrm{int}})\,.\;\;\;
\label{bcint8a}
\ee
Note that the fermionic integrals (\ref{bcint5}) and (\ref{PFfinal}) for
the Blume-Capel partition function $Z$ are still exact expressions.
They are equivalent to each other and to (\ref{PFspin}). The above
correspondence is exact even for finite lattices, provided we assume free
boundary conditions both for spins and fermions. \footnote{ \ The exact
fermionic representation for $Z$ for a finite lattice with periodic
boundary conditions for spin variables in both directions can also be
derived. The result will be the sum of four fermionic integrals like
(\ref{bcint5}) and (\ref{PFfinal}), with periodic-aperiodic boundary
conditions for fermions, in analogy to the case of the 2D Ising model on
a torus (cf. \cite{ple85tmp,liaw99,wuhu02}).} We can recognize in
(\ref{PFfinal}) and (\ref{bcint8a}) the Ising action, which is simply
the Gaussian part of the total action (cf. \cite{ple85dok,ple85tmp}):
\bb
\fl\nonumber \action_{\mathrm{Ising}} = \sum_{m,n=1}^L a_{mn}\bar{a}_{mn}
+b_{mn}\bar{b}_{mn}+a_{mn}b_{mn}\\ \label{Sising} +(t_1\bar{a}_{m-1n}
+t_2\bar{b}_{mn-1})(a_{mn}+b_{mn}) +t_1t_2 \bar{a}_{m-1n}
\bar{b}_{mn-1},\;\; \label{SIsing1}
\ee
and the non-Gaussian interaction part of the total action, which is
a polynomial of degree 8 in Grassmann variables (after expanding the
exponential):
\bb
\fl
\label{Sint} \action_{\mathrm{int}} =g_0 \sum_{m,n=1}^L
a_{mn}\bar{a}_{mn}b_{mn}\bar{b}_{mn} e^{
-\gamma_1a_{m-1n}\bar{a}_{m-1n}-\gamma_2b_{mn-1}
\bar{b}_{mn-1}-t_1t_2\,\bar{a}_{m-1n}\bar{b}_{mn-1}}.\;\;
\label{Sint1}
\ee
The BC model differs from the Ising model by the interaction term
(\ref{Sint1}) in the total action, which is not quadratic. Therefore the
BC model is not solvable in the sense of free fermions, as distinct from
the pure 2D Ising model.\\
It may be still of interest to try to recognize the structure of the phase
diagram of the BC model directly from the fermionic integrals
(\ref{bcint5})-(\ref{Sint1}) before actual calculation. The interaction is
introduced in the above BC integral (\ref{PFfinal}) for $Z$ with the
coupling constant $g_0\propto \exp[\beta(\Delta_0-J_1 -J_2)]$. In the
limit $\Delta \rightarrow \infty$ (or $\Delta_0\to -\infty$), which
corresponds to $g_0=0$, the gap between the two degenerate states $S=\pm 1$
and the singlet state $S=0$ becomes infinitely large and the model reduces
effectively to the 2D Ising model. For $\Delta_0$ finite, the coupling
constant $g_0$ is finite and the presence of vacancy states becomes
possible. The coupling constant $g_0$ increases with increasing
$\Delta_0$, as does the number of vacancies in a typical configuration of
the system. At zero temperature, on the other
hand, as $\beta\to+\infty$, we find $g_0=0$ for $\Delta_0<J_1+J_2$, which
corresponds, again, to the Ising ground state, while for $\Delta_0>J_1+J_2$
we have $g_0\to+\infty$, which means that the ground state is empty (all
sites are occupied by vacancies). These features at $T=0$ can be
readily guessed already from the form of the original Hamiltonian
(\ref{ham1a}). A more sophisticated analysis of the integral
(\ref{PFfinal}) will be needed to define the precise form of the critical
line and to locate the tricritical point at that line with increasing
dilution.\\ In the following, for simplification, we will only consider the
isotropic coupling case, with $K_1=K_2=K$, $t_1=t_2=t$ and
$\gamma_1=\gamma_2 =\gamma$.
\subsection{Partial bosonization}
The previous action contains two pairs of Grassmann variables per site.
This cannot be reduced to one pair (the minimal action), unlike the Ising
model, where half of the variables are irrelevant, in the sense that they
do not contribute to the critical behaviour and can be integrated out
already at the lattice level. The point is that the reduced Ising action
with two variables per site readily admits a QFT interpretation and
simplifies the analysis in momentum space \cite{ple98,ple98fl, dritz89,
ple95amm}. In
the BC case, the two pairs of fermions are coupled together by Eq.
(\ref{Sint}), preventing a direct integration over extra variables like
$a_{mn},b_{mn}$. However, as we will see in the following, it is still
possible to recover the minimal Ising like action with a one pair of
fermions per site using auxiliary bosonic variables. In the interaction
part of the action (\ref{Sint}), it is indeed tempting to replace the
products $a_{mn}\bar a_{mn}$ and $b_{mn}\bar b_{mn}$, which formally look
similar to occupation number operators, or local densities, by new
commuting variables as follows:
\bb
\eta_{mn}=a_{mn}\bar a_{mn}\,,\;\;\;\; \tau_{mn} =b_{mn}\bar
b_{mn}\,,\;\;\;\;\; \eta_{mn}^2=\tau_{mn}^{2}=0\,.\;\;
\label{eta1a}
\ee
These new variables $\eta_{mn},\tau_{mn}$ are nilpotent (like Grassmann
variables) but purely commuting: that is why we will, with some abuse of
terminology, call them (hard core) ``bosons". In the following, we will
also add one more pair of commuting nilpotent fields $\bar{\eta}_{mn},
\bar{\tau}_{mn}$, to put the integrals into a more symmetric form.
Identities like (\ref{eta1a}) are rather to be understood in the sense of
correspondence, to be realized by properly introduced delta functions
(Dirac distributions). This eventually allows us to reduce the degree of
the polynomials in Grassmann variables by a factor of 2 each time a
replacement like (\ref{eta1a}) is performed, even if terms like
$\bar{a}_{m-1n}\bar{b}_{mn-1}$ in (\ref{Sint}) cannot be replaced. We will
see below that, in principle, we can write down an action containing one
pair of Grassmann variables and one pair of bosonic ones per site. To do
so, we introduce the following Dirac distribution for any polynomial
function $f$ of nilpotent variables like $a_{mn}\bar a_{mn}$ or $b_{mn}
\bar b_{mn}$:
\bb
\nonumber\fl
f(a_{mn}\bar a_{mn})=\int \dd \eta_{mn}\dd \bar\eta_{mn}
f(\eta_{mn})\exp \left [
\bar\eta_{mn}(\eta_{mn}+a_{mn}\bar a_{mn})\right ],\;\;
\\ \fl
f(b_{mn}\bar b_{mn})=\int \dd \tau_{mn} \dd \bar\tau_{mn}
f(\tau_{mn})\exp \left [
\bar\tau_{mn}(\tau_{mn}+b_{mn}\bar b_{mn})\right ].\;\;
\label{dirac1}
\ee
We assume a natural definition of the integral for commuting nilpotent
variables with the following rules of integration (similar rules are
assumed for $\bar{\eta}_{mn}, \bar{\tau}_{mn}$):
\bb
\int d\eta_{mn}\,(1, \eta_{mn}) =(0,1)\,,\;\;\;
\int d\tau_{mn}\,(1, \tau_{mn}) =(0,1)\,. \;\;\;
\label{etaint1}
\ee
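The delta-function property of (\ref{dirac1}) under the rules (\ref{etaint1}) can be checked mechanically. In the sketch below (our own construction), the even bilinear $a_{mn}\bar a_{mn}$ is modelled by a single abstract commuting nilpotent symbol `X`, and a generic polynomial $f(u)=f_0+f_1 u$ is recovered at $u=X$:

```python
from itertools import product

def mul(p, q):
    """Multiply polynomials in commuting nilpotent generators. A polynomial
    is a dict {frozenset_of_generators: coeff}; a repeated generator kills
    the term (eta^2 = 0), and no signs appear."""
    out = {}
    for (k1, v1), (k2, v2) in product(p.items(), q.items()):
        if k1 & k2:
            continue
        key = k1 | k2
        out[key] = out.get(key, 0.0) + v1 * v2
    return out

def add(p, q):
    out = dict(p)
    for k, v in q.items():
        out[k] = out.get(k, 0.0) + v
    return out

def integrate(p, g):
    """Rule (etaint1): keep terms containing g and strip g off."""
    out = {}
    for k, v in p.items():
        if g in k:
            key = k - {g}
            out[key] = out.get(key, 0.0) + v
    return out

ONE = {frozenset(): 1.0}
eta, etabar = {frozenset({'eta'}): 1.0}, {frozenset({'etabar'}): 1.0}
X = {frozenset({'X'}): 1.0}            # stands for the bilinear a_mn abar_mn

f0, f1 = 0.8, -1.7                     # sample coefficients
def f(u):
    """Sample nilpotent-variable function f(u) = f0 + f1*u."""
    return add({frozenset(): f0}, mul({frozenset(): f1}, u))

# exp[etabar (eta + X)] = 1 + etabar (eta + X), since etabar^2 = 0
weight = add(ONE, mul(etabar, add(eta, X)))
lhs = integrate(integrate(mul(f(eta), weight), 'etabar'), 'eta')
assert lhs == f(X)
print("the pair (eta, etabar) acts as a delta function: f recovered at X")
```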
For applications of rules like (\ref{etaint1}) in the QFT context, see
also \cite{palumbo97}. Applying (\ref{dirac1}) directly in (\ref{PFfinal}),
we obtain the integral with the following expression for the
action:
\bb
\nonumber\fl
\mathcal{S}=\sum_{m,n}
\Big [
a_{mn}\bar a_{mn}+b_{mn}\bar b_{mn}+t^2\bar a_{m-1n}\bar b_{mn-1}
+ a_{mn}b_{mn}+t(\bar a_{m-1n}+\bar b_{mn-1})(a_{mn}+b_{mn})
\\ \nonumber
+\,g_0\,\eta_{mn}\tau_{mn}
[1-\gamma(\eta_{m-1n}+\tau_{mn-1})
+\gamma^2\eta_{m-1n}\tau_{mn-1}
-t^2\bar a_{m-1n}\bar b_{mn-1}]
\\ \,\label{act1aa}
+\bar\eta_{mn}(\eta_{mn}+a_{mn}\bar a_{mn})+
\bar\tau_{mn}(\tau_{mn}+b_{mn}\bar b_{mn})\Big]\,.\;\;
\ee
We can now integrate over the $a_{mn}$'s and $b_{mn}$'s, and replace
formally, for convenience, the variables $\bar a_{mn}$ by $c_{mn}$ and
$\bar b_{mn}$ by $-\bar c_{mn}$ in the remaining integral.
We obtain:
\bb
\nonumber\fl
\mathcal{S}=\sum_{mn=1}^L
\Big\{ c_{mn}\bar c_{mn}(1+\bar\tau_{mn})(1+\bar\eta_{mn})
+\bar\eta_{mn}\eta_{mn}+\bar\tau_{mn}\tau_{mn}
\\ \nonumber\fl
+[c_{mn}(1+\bar\eta_{mn})-\bar c_{mn}(1+\bar \tau_{mn})]
t(c_{m-1n}+\bar c_{mn-1})
-t^2c_{m-1n}\bar c_{mn-1}
\\ \fl
+g_0\,\eta_{mn}\tau_{mn}
\Big [1-\gamma(\eta_{m-1n}+\tau_{mn-1})
+\gamma^2\eta_{m-1n}\tau_{mn-1}
-t^2c_{m-1n}\bar c_{mn-1}\Big]\Big\}\,.
\label{act1bb}
\ee
The advantage is that now there are only two fermionic variables per
site, which is suitable for the QFT interpretation \cite{ple98,ple95amm}.
Note that the integral associated with the action (\ref{act1bb}) is still
an exact expression for $Z$. With the number of fermionic variables
reduced, the next step is to integrate out, whenever possible, the
auxiliary bosonic fields from the action (\ref{act1bb}). In fact,
we can further integrate over one pair of bosonic variables, for example
$\tau_{mn},\,\bar\tau_{mn}$, using integration rules like
(\ref{dirac1}), since
\bb
\nonumber\fl
\int d\tau_{mn} d\bar\tau_{mn}\,f(\tau_{mn})\,
\exp[\bar\tau_{mn}\big(\tau_{mn}-t(c_{m-1n}-\bar c_{mn-1})\bar c_{mn}
+c_{mn}\bar c_{mn}(1+\bar \eta_{mn})\big)]
\\
=\,f\,[-t(c_{m-1n}-\bar c_{mn-1})\bar c_{mn}
+c_{mn}\bar c_{mn}(1+\bar \eta_{mn})]\,.\;\;
\label{act1ac}
\ee
Here $f(\tau_{mn})$ may be any function of the nilpotent variable $\tau_{mn}$.
We could also have chosen to integrate over the $\eta_{mn}, \bar \eta_{mn}$
instead. Integrating over $\tau_{mn},\bar \tau_{mn}$ according
to (\ref{act1ac}), we finally obtain the reduced integral with the local
action:
\bb
\nonumber\fl
\mathcal{S} =c_{mn}\bar c_{mn}+t(c_{mn}+\bar c_{mn})(c_{m-1n}-\bar c_{mn-1})
-t^2c_{m-1n}\bar c_{mn-1}
\\ \nonumber
+\bar\eta_{mn}\eta_{mn}
+\bar\eta_{mn}\Big [\bar c_{mn}-t(c_{m-1n}-\bar c_{mn-1})
\Big] c_{mn}
\\ \nonumber
+g_0\,\eta_{mn}Q_{mn}\Big [1-\gamma(\eta_{m-1n}+Q_{mn-1})
\\
+\gamma^2\eta_{m-1n}Q_{mn-1}
+t^2c_{m-1n}\bar c_{mn-1}\Big],\;\;
\label{act1fin}
\ee
with
\bb
Q_{mn}=[c_{mn}(1+\bar\eta_{mn})-t(c_{m-1n}-\bar c_{mn-1})]
\bar c_{mn}\,.\;\;
\label{act1aq}
\ee
It is easy to recognize in the first line of (\ref{act1fin}) the minimal
local action for the pure Ising model \cite{ple98,ple95amm} with one pair
of Grassmann variables per site:
\bb
\fl
\action_{\mathrm{Ising}}=c_{mn}\bar c_{mn}
+t(c_{mn}+\bar c_{mn})(c_{m-1n}-\bar c_{mn-1})
-t^2c_{m-1n}\bar c_{mn-1}.\;\;
\label{SIsing2}
\ee
This is the same action that follows by integrating out $a_{mn},b_{mn}$
from (\ref{SIsing1}). The rest of the action describes the interaction between
fermions and bosons:
\bb
\fl \nonumber
\label{SIint1}
\action_{\mathrm{int}}=\bar\eta_{mn}\eta_{mn}+\bar\eta_{mn} c_{mn}\Big [
\bar c_{mn} +t(c_{m-1n}-\bar c_{mn-1})
\Big ]
\\ \nonumber
+\,g_0\,\eta_{mn}Q_{mn}\Big [1-\gamma(\eta_{m-1n}+Q_{mn-1})
\\
+\,\gamma^2\eta_{m-1n}Q_{mn-1}+t^2c_{m-1n}\bar c_{mn-1}\Big ].
\ee
It is easy to check that at $g_0=0$ the boson variables can be integrated
out of the action (\ref{act1fin}). This is a less simple task for finite
values of the coupling constant $g_0\neq 0$. In the next section, we will apply
approximations in order to eliminate completely the auxiliary commuting
nilpotent fields from the action, and will make use of a more symmetric
form of the integration over the bosonic fields, first over
$\bar{\eta}_{mn},\bar{\tau}_{mn}$, then
over $\eta_{mn},\tau_{mn}$.
We would like to end this section by commenting on the previous exact results.
We finally obtained a lattice field theory with action (\ref{act1fin})
containing the same number of ``fermions" ($c$, $\bar c$) and ``bosons"
($\eta$,~$\bar\eta$). Physically, this means that it is indeed possible to
describe the system with fermionic variables for the states $S=\pm 1$ and
bosonic ones for the third state $S=0$. In the limit $\Delta_0 \rightarrow
-\infty$, the system is completely described in terms of fermions. As
$\Delta_0$ increases to finite values, an interaction between
fermions and bosons is added. Beyond a value $\Delta_{0 t}$,
fermions form bosonic pairs: in the limit $\Delta_0 \rightarrow + \infty$,
all fermions condense into bosons, leading to a purely bosonic system.
In this interpretation, the tricritical point may be expected to be seen
as a particular point on the critical line where the interaction is such
that an additional symmetry between fermions and bosons appears. This might
correspond to supersymmetry appearing in the conformal field theory
describing the tricritical Ising model. To our knowledge there is no
evidence of supersymmetry derived directly from a lattice model: the exact
lattice action (\ref{act1fin}) could be a good way to see how
supersymmetry may emerge from a lattice model. Of course, all we have said
so far is only speculative: we are currently studying it in more detail,
to confirm or refute this hypothesis.
\section{Effective action in the continuum limit}
In the Ising model case, the fermionic action on the lattice is quadratic
and the corresponding Grassmann integral can actually be computed exactly
by transformation into the momentum space for fermions by means of the
Fourier substitution. The situation for the 2D BC model is less simple, as
there is a non-Gaussian interaction part in action (\ref{act1bb}), alias
(\ref{act1fin}), which contains terms of order up to 8th in fermions. The
Grassmann integral leading to the partition function can no longer be
computed directly, as in the Ising model case, by a simple Fourier
substitution. In this sense the 2D BC model is not integrable. However, it
is still possible to extract physical information by taking the continuum
limit of a BC lattice action like (\ref{PFfinal}), or (\ref{act1bb}),
and analyzing it using tools from quantum field theory.
\subsection{Effective 2nd order fermionic field theory}
We would like to obtain an effective purely fermionic theory for the BC
model up to order 2 in momentum ${\bm{k}}\,$ from the previous calculations,
with two variables per site, to analyze the critical behavior of
the model. In the Ising case, the critical behavior is given, in the
continuum limit, by a massless Majorana theory that follows from the
two-variable action. In the following, we will see how to compute the mass
of the BC model in its effective Gaussian part. The condition of zero
effective mass will already give the critical line in the $(T,\Delta_0)$
plane for the BC model. For the location of a tricritical point on that
line one needs a more complicated analysis, taking into account the
stability of the kinetic part of the action, which is in turn affected by
the presence
of the interaction. In the infrared limit, the spectrum is given by
expanding the effective action, or rather the corresponding partial
integral $Z_{{\bm{k}}}$ in $Z=\prod_{{\bm{k}}}Z_{{\bm{k}}}$, up to second order in the
momentum ${\bm{k}}$. The coefficient $\lambda$ in front of the term
$\lambda{\bm{k}}^2$ in the basic factor $Z_{{\bm{k}}}$ of $Z$ is what we call the
{\it stiffness} parameter of the model. It dominates all contributions from
the kinetic part of the action. In the Ising model case, the stiffness
coefficient is always strictly positive. In this case, the
only singularity in the spectrum follows from the condition of vanishing
mass, yielding the Ising critical point. Here in the BC model, we will
show that the effective stiffness coefficient can also vanish at a special
point on the critical line in the $(T,\Delta_0)$ plane, rendering the
spectrum unstable and changing the nature of the singularity. This happens
for large enough $g_0$, as $\Delta_0$ increases. We intend to identify the
above singular point as evidence for the appearance of a tricritical
point, together with a segment of the first-order transition line, in the
BC phase diagram at sufficiently strong dilution. In order to be able to
perform the QFT analysis of the above kind, we ought to eliminate the
bosonic nilpotent fields from the action, being interested merely in the
low-momentum (small ${\bm{k}}$) sector of the theory, and making reasonable
approximations whenever necessary.\\
This program also implies a more symmetric way of integration over the
nilpotent fields. Instead of integrating over the variables $\tau_{mn}$
and $\bar\tau_{mn}$ as in Eq. (\ref{act1fin}), we now proceed by
integrating first over $\bar\eta_{mn}$ and $\bar\tau_{mn}$ in
Eq.~(\ref{act1bb}), making use of the definition of the integral. This
yields the reduced integral with a new action:
\bb
\nonumber\fl
Z_{}=(2e^{\Delta}\cosh^2K)^{L^2}\int \prod_{m,n} \dd \bar c_{mn} \dd c_{mn}
\dd \eta_{mn} \dd \tau_{mn} \left [ c_{mn}\bar
c_{mn}+\eta_{mn}q_{mn}+\tau_{mn}\bar q_{mn} +\eta_{mn}\tau_{mn} \right ]\\
\times \exp(\action_{\mathrm{Ising}}+\action_{\mathrm{int}})\,,\;\;
\label{aci1ii}
\ee
where $\action_{\mathrm{Ising}}$ is given in (\ref{SIsing2}), while
\bb
\fl
\action_{\mathrm{int}}=g_0\sum_{m,n}\eta_{mn}\tau_{mn}\left
[(1-\gamma\eta_{m-1n})(1-\gamma\tau_{mn-1})
+t^2 c_{m-1n}\bar c_{mn-1}\right ],
\label{aci2ii}
\ee
and
\bb\fl\nonumber
\bar q_{mn}=c_{mn}\bar c_{mn} +tc_{mn}(c_{m-1n}-\bar c_{mn-1})
=c_{mn}[\bar c_{mn} +t(c_{m-1n}-\bar c_{mn-1})]\,,
\\
\fl
q_{mn}=c_{mn}\bar c_{mn} +t\bar c_{mn}(c_{m-1n}-\bar c_{mn-1})
=[c_{mn} -t(c_{m-1n}-\bar c_{mn-1})]\bar{c}_{mn}\,.\;\;
\label{aqq1}\;\;
\ee
It is also useful to note that $q_{mn}^{2} =\bar{q}_{mn}^{2}=0$, and
$q_{mn}\bar q_{mn}=0$. The free-fermion Ising part of the action
$\action_{\mathrm{Ising}}$ in (\ref{aci1ii}) at this stage remains unchanged and is given by
the standard expression (\ref{SIsing2}). The above integral (\ref{aci1ii})
includes as well the product of quadratic polynomial terms like $c_{mn}
\bar c_{mn}+\eta_{mn}q_{mn}+\tau_{mn}\bar q_{mn} +\eta_{mn} \tau_{mn}$,
which cannot be written as a single exponential. However, when
integrating over the remaining variables $\eta_{mn}$ and $\tau_{mn}$, it is
easy to realize that these polynomial terms \textit{roughly} impose the
following substitution rules in the action $\action_{\mathrm{int}}$:
\bb
\eta_{mn}\tau_{mn}\rightarrow c_{mn} \bar c_{mn}\,,\;\;\;
\eta_{mn}\rightarrow \bar q_{mn}\,,\;\;\;
\tau_{mn}\rightarrow q_{mn}\,.\;\;\;
\label{eta1aa}
\ee
In a sense, the above rules can be considered as an operation of
approximate Dirac delta functions on the variables $\eta_{mn}$ and
$\tau_{mn}$, replacing them by fermions. These correspondence rules,
however, are {\em not} exact: when the exponential of
$\action_{\mathrm{int}}$ is expanded into a series, terms appear that
couple to each other to give $c_{mn}\bar c_{mn}$ rather than
$q_{mn}\bar q_{mn}=0$, as the above substitution rules would suggest.
For example, terms such as
\bb
(g_0\eta_{m+1n}\tau_{m+1n}\gamma\eta_{mn})
\times(g_0\eta_{mn+1}\tau_{mn+1}\gamma\tau_{mn}),
\label{eta1bb}
\ee
instead of vanishing, lead to a contribution in the effective action
equal to
\bb
g_0^2\gamma^2c_{mn}\bar c_{mn}c_{m+1n}\bar c_{m+1n}
c_{mn+1}\bar c_{mn+1}.
\label{example}
\ee
Therefore there are more terms in the final effective action
$\mathcal{S}_{\mathrm{eff}} (c,\bar{c})$ than in the one resulting from the
above substitution rules. However, since we will apply approximations
to the interaction term, the higher-order corrections of the above
kind can be neglected within this scheme in any case.
From an effective action that follows from (\ref{aci1ii}), we intend to
obtain the basic momentum-space factor $Z_{{\bm{k}}}$ of $Z$ up to order 2 in
momentum ${\bm{k}}$, in order to study the stability of the free fermion
spectrum. In the pure Ising model at criticality, the factor
$Z_{{\bm{k}}}$ gives basically a $(\underline{m}^2 +{\bm{k}}^2)$ contribution to the
partition function and free energy at small momenta, with mass
$\underline{m}^2=0$ at the critical point \cite{ple98,dritz89}. In fact,
there is also the {\em stiffness} coefficient $\lambda$ in front of ${\bm{k}}^2$
in this term, ${\bm{k}}^2\to \lambda {\bm{k}}^2$. In the Ising case, this stiffness
coefficient is non-singular at the critical point and can be fixed simply
by its finite value at the critical temperature. In the BC case, however,
we have a line of critical points as $\Delta_0$ varies from negative to
positive values. Respectively, the stiffness coefficient $\lambda
=\lambda( \Delta_0)$ also varies with a variation of the chemical potential
$\Delta_0$ along the critical line. The point is that in the BC case the
effective stiffness coefficient vanishes at some position at the critical
line, for a sufficiently strong dilution, which may eventually be
identified as the tricritical point of the BC model. In what follows, we
apply the Hartree-Fock-Bogoliubov (HFB) approximating scheme
\cite{thouless72, mattuck92, bogoliubov07} in the momentum space in order
to obtain the modification of the above Ising-like behaviour produced by
the presence of the interaction in the BC case. In essence, the HFB
decouples the four-fermion interaction into a few Gaussian terms added to the basic
action.
\footnote{ \ Let us recall that the interaction terms in the BC model appear
solely due to the presence of the dilute (vacancy) sites. Respectively,
the strength of interaction (the coupling parameter $g_0$) increases with
increasing rate of dilution, with variation of the chemical potential
$\Delta_0$. The corrections with $g_0$ may thus appear in the mass term and
the stiffness coefficients of the BC effective action within mean-field HFB
analysis. In fact, as we shall see below, the relevant $g_0$ correction to
the mass at the Gaussian (free-fermion) level already follows when we
extract the effective action from (\ref{aci1ii}) and (\ref{eta1aa}), see
(\ref{Seff1}), while the kinetic corrections, that at lattice level may be
attributed to the correlations of the Ising degrees and vacancies at the
same and neighbouring sites, are to be extracted self-consistently within
HFB scheme from the residual interaction in the effective action.} This
also assumes a self-consistent calculation of the corrections which modify
the parameters in the mass term and the kinetic part of the action, and
eventually modify the stiffness coefficient, due to the HFB decoupling of
the interaction.\\
Among the terms that contribute to $Z_{{\bm{k}}}$ at second order in
momentum are, in any case, those coming from the kinetic part of the
free-fermion quadratic piece of the action, cf. Eq.~(\ref{SIsing2}).
In the
continuous limit, with $c_{m-1n}\to c-\partial_x c,\; \bar{c}_{mn-1}\to
\bar{c} -\partial_y \bar{c}$, these terms are combinations of
$c\,\partial_x c$ or $\bar c\,\partial_x c,\; \bar c\,\partial_y \bar c$ or
$c\,\partial_y\bar c$. From the above rules (\ref{eta1aa}), we expect that
the effective action will contain as well quartic contributions such as $c
\bar c\, \partial_{i}c\, \partial_j \bar c$, with $i,j=x,y$, at the lowest
order. This term is degree 4 in Grassmann variables and 2 in derivatives.
The expansion of the exponential of such terms will give corrective
coefficients to the ${\bm{k}}^2$ behaviour, and may thus change the order of the
transition if the renormalized stiffness vanishes. We also have to consider
not only the direct substitution of the variables with the rules given
above, but also the possible correction terms like (\ref{example}) that may
contribute to the stiffness. We should also drop terms in which the
ratio of the number of derivatives to the number of Grassmann variables is
strictly higher than 1/2, as their effect is expected to provide next-order
corrections within the basic approximation scheme outlined above. After
some algebra, the following terms contribute to the effective action:
\bb
\nonumber\fl
\mathcal{S}_{\mathrm{effective}}
=\action_{\mathrm{Ising}}+g_0\sum_{m,n}c_{mn}\bar c_{mn}\left[
(1-\gamma\bar q_{m-1n})(1-\gamma q_{mn-1})
+t^2 c_{m-1n}\bar c_{mn-1}\right ]
\\ \label{Seff1}
+g_0^2\gamma^2\sum_{m,n} c_{mn}\bar c_{mn}c_{m+1n}\bar c_{m+1n}
c_{mn+1}\bar c_{mn+1} +\ldots\;. \;\;
\ee
The above effective action defines a lattice fermionic theory with
interaction. We shall analyze it further in momentum
space at low momenta, which corresponds to the continuum-limit
interpretation of the model.
\subsection{Continuum limit}
In the continuous-limit interpretation of the above action, we replace
$c_{mn}$ by $c=c(x,y)$ and $\bar c_{mn}$ by $\bar c=\bar c(x,y)$, assuming
as well the substitution rules like $c_{m-1n}=c-\partial_x c$ and $c_{mn-1}
=c-\partial_y c$. After a Fourier transformation of the fields, this
corresponds to the low-momenta sector of the exact lattice theory around
the origin ${{\bm{k}}}=0$. In particular, we put $q_{mn}\rightarrow q=c \bar
c\,(1-t) +t(\partial_x c -\partial_y \bar c)\,\bar{c}$ and $\bar q_{mn}
\rightarrow \bar q=c\, \bar c (1-t)-tc\,(\partial_x c-\partial_y \bar c)$.
The free Ising part $\action_{\mathrm{Ising}}$ from (\ref{Seff1}) gives simply
\bb
\label{SIsingCont} \fl
\action_{\mathrm{Ising}}=\int \dd x \dd y
\left [
(1-2t -t^2)\, c\bar c -t(t+1)\,\bar{c}\partial_x c
+t(t+1)c\partial_y\bar c -tc\partial_x c+t\bar c
\partial_y\bar c \right].\;\;
\ee
In the above action, one
can readily distinguish the mass term and the kinetic
part, provided one assumes the QFT interpretation of the associated
integrals \footnote{ \ The Ising mass is easily seen from
(\ref{SIsingCont}) to be $\underline{m}_{\mathrm{Ising}}=1-2t-t^2$, which
must vanish at the critical point. Indeed, the condition of vanishing mass
$1-2t -t^2=0$ gives $t_c =\sqrt{2}-1$, that is, $K_c =J/T_c
=\frac{1}{2}\ln(1+\sqrt{2})$, in agreement with the exact solution of this
model on a lattice. The ordered phase corresponds to negative mass, with
$t\to 1$ as $T\to 0$. The structure of the action (\ref{SIsingCont}) rather
implies the interpretation of the pure 2D Ising model in terms of the
Majorana fermions \cite{ple98,dritz89,ple95amm}. Respectively, one may pass
to the Dirac interpretation by doubling the number of fermions in the
action. Notice also that $\bar{c}\partial_{i}c =c\partial_{i}\bar{c}$
under the integral (\ref{SIsingCont}) since $\partial_i$ is a
skew-symmetric operator.}. Notice that the next-order momentum term with
the product $\partial_x \partial_y$ is neglected in the above action
(\ref{SIsingCont}) against the first-order $\partial_x,\,
\partial_y$ terms \footnote{ \ Although these terms with $\partial_x$ and
$\partial_y$ are linear in momentum in the {\em action}, they contribute as
${\bm{k}}^2$ to the spectrum factor $Z_{{\bm{k}}}$ of $Z$, while their product may
only contribute at the level of next-order corrections to $Z_{{\bm{k}}}$, as
$\underline{m}\to 0$. }. In the continuous limit, the last term of the BC
action (\ref{Seff1}) gives $c\bar c \partial_x c\partial_x \bar c\partial_y
c\partial_y \bar c$, which is 4$^{\mathrm{th}}$ order in derivatives and
6$^{\mathrm{th}}$ degree in Grassmann variables. The ratio of these
numbers is 2/3 which is higher than 1/2, and therefore this term can be
discarded, as explained above. The term in factor of $g_0$ in (\ref{Seff1})
contains $\bar q_{m-1n}$ and $q_{mn-1}$ which need to be expanded up to the
order 2 in derivatives, with $\partial_{xx}\bar q=2(1-t)\partial_x
c\partial_x \bar c$, and $\partial_{yy}q =2(1-t)\partial_y c\partial_y\bar
c$. The effective action finally can be written in the continuous limit as
\bb\fl
\mathcal{S}_{\mathrm{eff}}
=\action_{\mathrm{Ising}}
+\int \dd x \dd y \Big\{g_0c\bar c +g_0c\bar c
\big[ t(t+2\gamma)\partial_y c\partial_x\bar c
-\gamma(1-t)(\partial_x c\partial_x\bar c
+\partial_y c\partial_y\bar c) \big] \Big\}\;.\;\;
\label{SeffCont}
\ee
In the following, we shall use this effective action to obtain information
on the phase diagram of the BC model.
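As a quick numerical check of the Ising-mass statements in the footnote
above, the short sketch below (plain Python; the identification
$t=\tanh K$ is taken from the footnote, and variable names are ours)
verifies that $t_c=\sqrt{2}-1$ solves $1-2t-t^2=0$ and that the
corresponding coupling coincides with the Onsager value
$\frac{1}{2}\ln(1+\sqrt{2})$:

```python
import math

# Zero of the Ising mass m_Ising = 1 - 2t - t^2 on 0 < t < 1:
t_c = math.sqrt(2.0) - 1.0

# With the identification t = tanh(K) used in the footnote, the critical
# coupling K_c = atanh(t_c) coincides with (1/2) ln(1 + sqrt(2)).
K_c = math.atanh(t_c)
K_c_exact = 0.5 * math.log(1.0 + math.sqrt(2.0))

# The corresponding temperature (in units of J, i.e. K = J/T with J = 1):
T_c = 1.0 / K_c
```

The value $T_c=1/K_c\simeq 2.269185$ is the Ising limit recovered later
from the BC critical line as $\Delta_0\to-\infty$.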
\section{Spectrum analysis and phase diagram}
In this section we analyze the critical properties of the effective action
(\ref{SeffCont}) and the low energy spectrum $Z_{{\bm{k}}}$ of $Z$ in the
momentum-space representation. In particular we develop a physical argument
for the existence of a tricritical point on the phase diagram from the
above fermionic action. The critical line follows already from the
condition of the zero mass. At the tricritical point, we assume that
the effective stiffness coefficient in factor $Z_{{\bm{k}}}$ also vanishes. The
Hartree-Fock-Bogoliubov (HFB) approximation scheme will be used to count
properly the effects of the interaction \cite{thouless72, mattuck92,
bogoliubov07}.
\subsection{Phase diagram}
The BC model effective action of (\ref{SeffCont}) includes the
free-fermion Gaussian part and the quartic interaction. The quadratic
(Gaussian) part of the whole action is merely formed from $\action_{\mathrm{Ising}}$, but
the remaining interaction term in the effective action (\ref{SeffCont})
also includes the quadratic term $g_0\,c\bar{c}$, which is to be added to the
Ising part of the action and will modify the Ising mass. The pure Ising
model action is shown in (\ref{SIsingCont}) above. In the continuum
limit, see (\ref{SIsingCont}), this action includes the mass term
$\underline{m}_{\mathrm{Ising}}\,c\bar{c}$, with
$\underline{m}_{\mathrm{Ising}} =1-2t -t^2$, and the kinetic part. The
condition for the critical point in the pure Ising case is then given by
$\underline{m}_{\mathrm{Ising}}=0$ \cite{ple98,dritz89,ple95amm}. In the
BC case, the presence of the Gaussian correction $g_0\,c\bar{c}$ modifies
the mass term in the effective BC action: $\underline{m}_{\mathrm{Ising}}
\to \underline{m}_{\mathrm{BC}} =1+g_0 -2t -t^2$, which we assume to
vanish at the critical
line.\footnote{ \ The additive corrections that may contribute to the mass
term from the non-Gaussian part of the action (\ref{SeffCont}) are ${\bm{k}}^2$
dependent and vanish as ${\bm{k}}^2\to 0$. They may be neglected. In the
effective action (\ref{SeffCont}), the principal modification of the mass
term due to vacancies is realized already at the Gaussian level by the
$g_0\,c\bar{c}$ term, as commented above. The effect of the
non-Gaussian part in (\ref{SeffCont}) is merely that it produces
corrections to the kinetic terms, after the HFB decoupling of the
interaction.}\\
To tackle the remaining quartic part of the BC action (\ref{SeffCont}),
we apply approximations in which, in the different possible ways, two of
the four fermions are replaced by variational parameters, or effective
binary averages, which are then specified self-consistently from the
resulting Gaussian action. This may be viewed as a kind of HFB-like
approximation method, which has proved effective in systems of
interacting quantum fermions, such as the BCS theory of ordinary
superconductivity. This also implies that calculations are to be
performed in momentum space, rather than on the real lattice or its
continuum real-space version, and the corresponding symmetries are to be
taken into account properly. The application of the HFB scheme also implies
that the interaction need not necessarily be weak.\\
From the explicit form of the
quartic part of the interaction in (\ref{SeffCont}), it can be seen that
the decoupling of the quartic part of $\action_{\mathrm{int}}$ produces
terms which only modify the kinetic terms in the effective action, at
least in first approximation, with calculations carried out up to order
${\bm{k}}^2$ in the $Z_{{\bm{k}}}$ factor. This modification might be
significant at strong dilution, leading to the appearance of the
tricritical point and changing the nature of the phase transition from
second to first order. These effects are discussed in the second part of
this section. In the next subsection, we
consider in more detail the BC critical line in the $(T,\Delta_0)$ plane,
with dimensionless temperature $T$ and chemical potential $\Delta_0$
normalized by the exchange energy $J$.
\subsection{Critical line}
The equation for the BC critical line we consider in this section is the
one that follows from the condition of vanishing mass,
$\underline{m}_{\mathrm{BC}} =1+g_0 -2t -t^2=0$. In a detailed form, this
equation reads:
\bb
\label{criticalline}
\tanh^2\left(\frac{1}{T}\right)+2\tanh\left(\frac{1}{T}\right)
-1=\frac{e^{\frac{\Delta_0}{T}}}{2 \cosh^2 \left(\frac{1}{T}\right)}.
\ee
This equation may be written as well in the form:
\bb
\sinh\left(\frac{2}{T}\right) =1+ \frac{1}{2}\,\mathrm{e}^{\frac{\Delta_0}{T}}\,,\;\;
\label{crit2}
\ee
which in turn admits the explicit solution for $\Delta_0$ as function of
$T$ in the form:
\bb
\Delta_{0} =T\,\ln\left[2\sinh\left(\frac{2}{T}\right) -2\right]\,.\;\;
\label{crit3}
\ee
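The closed form (\ref{crit3}) and the numerical inversion of
(\ref{crit2}) are easy to check. The sketch below (plain Python; function
names and the bisection bracket are our choices) evaluates
$\Delta_0(T)$ and inverts the zero-mass condition for $T_c(\Delta_0)$;
it reproduces the Ising limit $T_c\simeq 2.269185$ as
$\Delta_0\to-\infty$, the value $T_c(0)\simeq 1.673972$, and the
end-point $\Delta_0\to 2$ as $T\to 0$:

```python
import math


def delta0_of_T(T):
    """Chemical potential on the critical line, Eq. (crit3):
    Delta_0 = T ln[2 sinh(2/T) - 2], valid for T below the Ising T_c."""
    return T * math.log(2.0 * math.sinh(2.0 / T) - 2.0)


def Tc_of_delta0(delta0, lo=0.05, hi=2.2692, tol=1e-12):
    """Invert the zero-mass condition sinh(2/T) = 1 + exp(Delta_0/T)/2
    for T by bisection: f is positive at low T and negative at high T."""
    f = lambda T: math.sinh(2.0 / T) - 1.0 - 0.5 * math.exp(delta0 / T)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

The same routine reproduces the Eq.~(\ref{criticalline}) column of
Table 1 to the quoted precision.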
The inverse dependence for $T$ as function of $\Delta_0$ can be evaluated
numerically by solving any of the above equations, which are all equivalent
to the condition of the zero mass in the theory with action
(\ref{SeffCont}). This results in the critical line for the BC model shown
in Fig.~\ref{plot}. In the limit $\Delta_0\to-\infty$, from either of the equations
(\ref{criticalline}) and (\ref{crit2}), we recover the Ising case, with
$T_c =2.269185$. For finite $\Delta_0$, as vacancies are added, we
obtain a slowly decreasing (for moderate values of $\Delta_0$) function
$T_c =T_c(\Delta_0)$, which terminates at the end-point
$(T_c=0,\Delta_0=2)$ at zero temperature, as can be deduced from
(\ref{criticalline}). Following the critical line from left to right, at
first, for weak dilution, from physical considerations and the {\em
universality} argument, we expect the transition to be of the second
order, as in the pure 2D Ising model. This behavior may be destroyed for
sufficiently strong dilution, as $\Delta_0$ increases and passes to
positive values, where the corresponding term in the BC Hamiltonian
suppresses the Ising states and favors vacancy
states. This happens at a singular point, which we are going to identify
from the condition of stability of the kinetic coefficient in the fermionic
spectrum of the action (\ref{SeffCont}); this is discussed in the
next subsection. At the critical line, with zero mass, only derivative
contributions remain in the action Eq. (\ref{SeffCont}). These include the
free fermion kinetic terms, and those representing the residual
interaction between the singlet level and the Ising doublet at the quartic
level in fermions. The HFB decoupling of the interaction will modify the
kinetic part of the action.
\begin{figure}[tt!]
\begin{center}
\includegraphics[width=0.8\linewidth]{Critical-Line.eps}
\unitlength=1cm
\caption{\label{plot} (color online) Comparison between critical line
Eq. (\ref{criticalline}) (plain red line) and numerical results from Monte
Carlo simulations. The black filled dots are from Fig.~1, da Silva
\textit{et al.} \cite{dasilva02} (Wang-Landau method). The cross symbol
indicates the tricritical point identified by the same authors. The blue
diamond symbols are from Ref. \cite{silva06}, the magenta triangles from
Ref. \cite{xalap98}, and the green squares from Ref. \cite{beale86}
(see also Table 1 for explicit numerical values).}
\end{center}
\end{figure}
\noindent
The critical line that follows from the condition of zero mass as given
by Eq. (\ref{criticalline}) is plotted in Fig.~\ref{plot} and compared with
recent Monte Carlo simulations \cite{beale86,
xalap98, liusch02, dasilva02, silva06}. The agreement between the
numerical simulations and our results is very good, suggesting that the
mass condition (\ref{criticalline}) is essentially exact, at least in the
transition region. The agreement is within 1\% over the whole range of variation of
$\Delta_0$ at the critical line $T_c =T_c (\Delta_0)$, provided we use the
Monte-Carlo data for $T_c$ as input and evaluate theoretically $\Delta_0$
from (\ref{crit3}) for comparison. The numerical data in the inverse
interpretation, for $T_c=T_c (\Delta_0)$ as a function of $\Delta_0$, are
given in Table 1. Note that our results are also compatible with exact
upper bound for $T_c(\Delta_0)$ obtained by Braga {\em et al.}
\cite{braga94}. Also notice that the value of $T_c(\Delta_0)$ can be easily
evaluated analytically at the point $\Delta_0=0$, where $\sinh(2/T_c)
=3/2$, with the solution $T_c(0) =1.673\,971\,856$. This is to be
compared with the Monte-Carlo results $T_c =1.6955\pm 0.0010$, $T_c
=1.681(5)$ and $T_c =1.714(2)$ \cite{beale86,xalap98,silva06}; the
agreement is again good.
\begin{table}[!tb]
\centering
\begin{tabular}{ c | c c c c }
\hline
$\Delta_0$ & & Temperature $T_c(\Delta_0)$ & &
\\
& Ref. \cite{beale86} & Ref. \cite{xalap98} & Ref. \cite{silva06}
(Wang-Landau method)
& Eq. (\ref{criticalline})
\\
\hline
\\
-0.5 & & 1.794(7) & 1.816(2) & 1.7781
\\
0. & 1.695 & 1.681(5) & 1.714(2) & 1.6740
\\
0.5 & 1.567 & & 1.584(1) & 1.5427
\\
1.0 & 1.398 & & 1.413(1) & 1.3695
\\
1.5 & 1.150 & & 1.155(1) & 1.1162
\\
1.87 & 0.800 & & 0.800(3) & 0.7712
\\
1.9 & & 0.764(7) & 0.755(3) & 0.7221
\\
1.92 & 0.700 & & 0.713(2) & 0.6841
\\
1.95 & 0.650 & & 0.651(2) & 0.6135
\\
1.962 & 0.620 & & 0.619(1) & 0.5776
\\
1.969 & 0.600 & & 0.596(5) & 0.5531
\\
1.99 & 0.550 & & 0.555(2) & 0.4441
\\
1.992 & 0.500 & & 0.499(3) & 0.4270
\\
\hline
\end{tabular}
\caption{
Numerical values of the critical points $(T_c(\Delta_0),\Delta_0)$ in the
BC model: comparison of different numerical simulations and equation
(\ref{criticalline}). Note that a small variation of $\Delta_0$ causes more
significant changes in $T_{c}(\Delta_0)$ in the region near $\Delta_0=2$,
as is to be expected from (\ref{criticalline}).}
\end{table}
\subsection{Tricritical point: Hartree-Fock-Bogoliubov analysis}
The main physical feature of the 2D BC model is the existence of a
tricritical point at the critical line. Below this point, the phase
transition goes from second order to first order: the tricritical point is
characterized by a change in the nature of the singularity. This change
should be seen in the BC spectrum from (\ref{SeffCont}).
In this section, we analyze the effect of the quartic terms in the
action on the stability of the free fermion spectrum at zero mass, along
the critical line $g_0=t^2+2t-1$, by considering the effect of the
interaction part of the action onto the kinetic part within the HFB like
approximating scheme \cite{thouless72,mattuck92,bogoliubov07}. The Ising
part can be easily written in the momentum space representation,
which we will also refer to as Fourier space, after having defined the
following transformations:
\bb
c({\bm{r}})=\frac{1}{L}\sum_{{{\bm{k}}}}c_{{\bm{k}}}\exp(i{\bm{k}}\cdot{\bm{r}})\,,\;\;\;\;
\bar c({\bm{r}})=\frac{1}{L}\sum_{{{\bm{k}}}}\bar c_{{\bm{k}}}\exp(-i{\bm{k}}\cdot{\bm{r}})\,.\;\;
\label{Four1}
\ee
Using these transformations, the Ising part of the action gains
block-diagonal form,
\bb
\label{SIsingK} \fl
\action_{\mathrm{Ising}}=\sum_{{\bm{k}}\in S} it(t+1)(k_x-k_y)(c_{{\bm{k}}} \bar c_{{\bm{k}}}
-c_{-{\bm{k}}} \bar c_{-{\bm{k}}}) +2itk_xc_{{\bm{k}}}c_{-{\bm{k}}}
+2itk_y\bar c_{{\bm{k}}}\bar c_{-{\bm{k}}},
\ee
where $S$ is the set of Fourier modes that correspond to half of the
Brillouin zone: if ${\bm{k}}$ is already included in $S$ then $-{\bm{k}}$ is not to
be included in $S$ and vice versa (to avoid repetition of modes in the
different sums above), so that pairs of modes $({\bm{k}},-{\bm{k}})$ fill up the
Brillouin zone exactly once. In fact, terms with ${\bm{k}}$ and $-{\bm{k}}$ are
already combined together in (\ref{SIsingK}). The mass term is dropped in
(\ref{SIsingK}) since we are on the critical line. The quartic term can be
written in the Fourier space as
\bb
\action_{\mathrm{int}}=\frac{1}{L^2}\sum_{{\bm{k}}_1+{\bm{k}}_2
={\bm{k}}_3+{\bm{k}}_4}V({\bm{k}}_2,{\bm{k}}_4)c_{{\bm{k}}_1}
c_{{\bm{k}}_2}\bar c_{{\bm{k}}_3}\bar c_{{\bm{k}}_4},
\label{FIint1}
\ee
with the potential
\bb
\nonumber
V({\bm{k}}_2,{\bm{k}}_4)=-\alpha k_2^xk_4^y+\alpha' (k_2^xk_4^x+k_2^yk_4^y),\\
\alpha=g_0\,t(t+2\gamma)\,,\;\;\;\; \alpha'=g_0\,\gamma(1-t)\,.
\label{FPint1}
\ee
So far we have only expressed the action in Fourier space, or in the
momentum-space representation, without further approximations. In order to
see if the second order line is stable, we use a mean-field like
approximation in momentum space, similar to the quantum HFB method.
To do so,
we decompose the fourth order interacting terms into sums of quadratic
terms with coefficients to be determined self-consistently. These
coefficients are actually two-point correlation functions for fermions in
the momentum space. The interaction can be decoupled in different ways.
For example, considering the terms contributing to the Ising action, we may
take account of the averages $\langle c_{{\bm{k}}} \bar c_{{\bm{k}}}\rangle $,
$\langle c_{-{\bm{k}}} \bar c_{-{\bm{k}}}\rangle $, $\langle c_{{\bm{k}}}c_{-{\bm{k}}}\rangle$
and $\langle \bar c_{{\bm{k}}}\bar c_{-{\bm{k}}}\rangle $. There are also three
different ways to decouple the interacting term, since $c_{{\bm{k}}_1}$ can be
paired with either of $c_{{\bm{k}}_2}$, $\bar c_{{\bm{k}}_3}$, or $\bar c_{{\bm{k}}_4}$.
For example,
\bb
c_{{\bm{k}}_1}c_{{\bm{k}}_2}=\langle c_{{\bm{k}}_1}c_{{\bm{k}}_2}\rangle
+(c_{{\bm{k}}_1}c_{{\bm{k}}_2}-\langle c_{{\bm{k}}_1}c_{{\bm{k}}_2}\rangle )
\equiv\langle c_{{\bm{k}}_1}c_{{\bm{k}}_2}\rangle +\delta_{c_1c_2},
\ee
where $\delta_{c_1c_2}$ is assumed to be a small fluctuation. In this case,
from Eq. (\ref{SIsingK}), the average is nonzero only for
${\bm{k}}_1 = -{\bm{k}}_2 ={\bm{k}}$ or $-{\bm{k}}$, with ${\bm{k}}\in S$. We can pair the other
terms by writing the action in the $g_S=3$ different possible ways that are
compatible with the symmetries of Eq. (\ref{SIsingK}), and by using the
fermionic rules, we write:
\bb
\fl
\action_{\mathrm{int}}=\frac{1}{L^2g_S}\sum_{{\bm{k}}_1+{\bm{k}}_2={\bm{k}}_3+{\bm{k}}_4}V({\bm{k}}_2,{\bm{k}}_4)
\Big [
(\langle c_{{\bm{k}}_1}c_{{\bm{k}}_2}\rangle +\delta_{c_1c_2})(\langle \bar
c_{{\bm{k}}_3}\bar c_{{\bm{k}}_4}\rangle +\delta_{\bar c_3\bar c_4})
\;\;\;
\\ \nonumber
-(\langle c_{{\bm{k}}_1}\bar c_{{\bm{k}}_3}\rangle +\delta_{c_1\bar c_3})(\langle
c_{{\bm{k}}_2}\bar c_{{\bm{k}}_4}\rangle +\delta_{c_2\bar c_4})
+(\langle c_{{\bm{k}}_1}\bar c_{{\bm{k}}_4}\rangle +\delta_{c_1\bar c_4})(\langle
c_{{\bm{k}}_2}\bar c_{{\bm{k}}_3}\rangle +\delta_{c_2\bar c_3})
\Big ].
\label{THIS63}
\ee
The next step is to discard terms that are proportional to the squares of
fluctuations $\delta^2$, and keep the others. After some algebra, we obtain
the mean-field quadratic operator for the interaction term as follows:
\bb
\nonumber \fl
\action_{\mathrm{int}} =\frac{1}{L^2g_S}
\sum_{{\bm{k}},{\bm{k}}' \in S}
4c_{{\bm{k}}}c_{-{\bm{k}}}\langle \bar c_{{\bm{k}}'}\bar c_{-{\bm{k}}'}\rangle V({\bm{k}},{\bm{k}}')+
4\bar c_{{\bm{k}}}\bar c_{-{\bm{k}}}\langle c_{{\bm{k}}'}c_{-{\bm{k}}'}\rangle V({\bm{k}}',{\bm{k}})
\\ \nonumber
+c_{{\bm{k}}}\bar c_{{\bm{k}}}
\Big(\langle c_{{\bm{k}}'}\bar c_{{\bm{k}}'}\rangle v({\bm{k}},{\bm{k}}')
+\langle c_{-{\bm{k}}'}\bar c_{-{\bm{k}}'}\rangle v({\bm{k}},-{\bm{k}}')
\Big)
\\ \label{MFA}
+c_{-{\bm{k}}}\bar c_{-{\bm{k}}}
\Big(
\langle c_{{\bm{k}}'}\bar c_{{\bm{k}}'}\rangle v(-{\bm{k}},{\bm{k}}')
+\langle c_{-{\bm{k}}'}\bar c_{-{\bm{k}}'}\rangle v(-{\bm{k}},-{\bm{k}}')
\Big),
\ee
where we have defined the potential
\bb
\label{potential}
v({\bm{k}},{\bm{k}}')=-V({\bm{k}},{\bm{k}})-V({\bm{k}}',{\bm{k}}')+V({\bm{k}},{\bm{k}}')+V({\bm{k}}',{\bm{k}}).
\ee
In the above expressions, there are three different kinds of quantities
that contribute to the action, associated with sums like
$\sum_{{\bm{k}}}c_{{\bm{k}}} \bar c_{{\bm{k}}}$, $\sum_{{\bm{k}}}c_{{\bm{k}}}\bar c_{{\bm{k}}}k_i$, or
$\sum_{{\bm{k}}}c_{{\bm{k}}} \bar c_{{\bm{k}}}k_ik_j$, with $i,j=x,y$. The first term
gives a contribution to the total mass, the second one corresponds to
current operators, and the third one can be thought of as a dispersion energy
tensor. Considering the symmetries of the Ising part, and the fact that the
action must be invariant under dilation at criticality, we may only
take into account the current operators. Respectively, we can drop the
first two terms in the potential $v({\bm{k}},{\bm{k}}')$ defined in Eq.
(\ref{potential}). We define therefore the following unknown parameters,
for the diagonal and nondiagonal couplings of fermions ($i=x,y$):
\bb
\nonumber
t_i&=&\frac{i}{2L^2}\sum_{{\bm{k}}\in S}(\langle c_{{\bm{k}}}\bar c_{{\bm{k}}}\rangle
-\langle c_{-{\bm{k}}}\bar c_{-{\bm{k}}}\rangle )k_i,
\\ \label{parameters}
u_i&=&\frac{i}{L^2}\sum_{{\bm{k}}\in S}\langle c_{{\bm{k}}}c_{-{\bm{k}}}\rangle k_i,
\;\;
\bar u_i=\frac{i}{L^2}\sum_{{\bm{k}}\in S}\langle \bar c_{{\bm{k}}}\bar
c_{-{\bm{k}}}\rangle k_i.
\ee
With only the currents kept as parameters along the critical line, the
first two terms in the potential $v({\bm{k}},{\bm{k}}')$ of Eq.
(\ref{potential}) drop out, and the reduced potential satisfies
$v({\bm{k}},{\bm{k}}') =-v(-{\bm{k}},{\bm{k}}') =-v({\bm{k}},-{\bm{k}}')$. The
effective mean-field action of (\ref{MFA}) can then be rewritten as:
\bb
\nonumber \fl
\action_{\mathrm{int}}=\frac{1}{g_S}
\sum_{{\bm{k}}\in S}
4ic_{{\bm{k}}}c_{-{\bm{k}}}
[(\alpha\bar u_y-\alpha'\bar u_x)k_x-\alpha'\bar u_y k_y]
+
4i\bar c_{{\bm{k}}}\bar c_{-{\bm{k}}}
[-\alpha' u_x k_x +(\alpha u_x-\alpha' u_y)k_y]
\\
+2i(c_{{\bm{k}}}\bar c_{{\bm{k}}}-c_{-{\bm{k}}}\bar c_{-{\bm{k}}})
[(\alpha t_y-2\alpha' t_x)k_x+(\alpha t_x-2\alpha' t_y)k_y].
\;\;\;
\ee
We then make the further assumption that, by symmetry in
momentum space, there exists a solution satisfying $\bar u_y=u_x$,
$\bar u_x=u_y$ and $t_x=-t_y$, so that:
\bb
\nonumber \fl
\action_{\mathrm{int}} = \frac{1}{g_S}
\sum_{{\bm{k}}\in S}
4ic_{{\bm{k}}}c_{-{\bm{k}}}
[(\alpha u_x-\alpha' u_y)k_x-\alpha' u_x k_y]
+
4i\bar c_{{\bm{k}}}\bar c_{-{\bm{k}}}
[-\alpha' u_x k_x+(\alpha u_x-\alpha' u_y)k_y]
\\
- 2i(c_{{\bm{k}}}\bar c_{{\bm{k}}}-c_{-{\bm{k}}}\bar c_{-{\bm{k}}})
(k_x-k_y)(\alpha+2\alpha')t_x.
\;\;\;
\ee
The total effective action (with zero mass) can finally be written as
\bb
\fl \nonumber
\mathcal{S}_{\mathrm{eff}}=\sum_{{\bm{k}}\in S}
i\left[ t(t+1)-\frac{2}{g_S}(\alpha+2\alpha')t_x\right](k_x-k_y)
(c_{{\bm{k}}}\bar c_{{\bm{k}}}-c_{-{\bm{k}}}\bar c_{-{\bm{k}}})
\\ \fl \nonumber
+
i\frac{4}{g_S}\left[ \left(\frac{g_S}{2}t+(\alpha u_x-\alpha'
u_y)\right)k_x-\alpha' u_x k_y\right]c_{{\bm{k}}}c_{-{\bm{k}}}
\\ \fl
+
i\frac{4}{g_S}\left[-\alpha' u_x k_x+\left(\frac{g_S}{2}t+(\alpha u_x-\alpha'
u_y)\right)k_y\right]\bar c_{{\bm{k}}}\bar c_{-{\bm{k}}},
\ee
or in a more compact form as
\bb
\fl\nonumber
\mathcal{S}_{\mathrm{eff}}
=\sum_{{\bm{k}}\in S}
ic(k_x-k_y)(c_{{\bm{k}}}\bar c_{{\bm{k}}}-c_{-{\bm{k}}}\bar c_{-{\bm{k}}})
+
2i(ak_x-bk_y)c_{{\bm{k}}}c_{-{\bm{k}}}
\\
+
2i(-bk_x+ak_y)\bar c_{{\bm{k}}}\bar c_{-{\bm{k}}},
\label{SEFF70}
\ee
with the following coefficients
\bb\fl
a=t+2\frac{\alpha u_x-\alpha' u_y}{g_S}, \;\;\;\;\;
b=2\alpha' \frac{u_x}{g_S}, \;\;\;\;\;
c=t(t+1)-2t_x\frac{\alpha+2\alpha'}{g_S}.
\ee
The partition function can then be written as a product over the Fourier
modes $Z=\prod_{{\bm{k}}\in S}Z_{{\bm{k}}}$, with
\bb
\label{fpFourier}
Z_{{\bm{k}}}=k^2[A+B\sin 2\theta_k],
\ee
$\theta_k$ being the angle of the vector ${\bm{k}}$, and
\bb
A=c^2-4ab,\:\:
B=-c^2+2(a^2+b^2).
\ee
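The form (\ref{fpFourier}) follows from the Grassmann integration of
$\exp(\mathcal{S}_{\mathrm{eff}})$ over the quadruple $(c_{{\bm{k}}},
c_{-{\bm{k}}}, \bar c_{{\bm{k}}}, \bar c_{-{\bm{k}}})$: only the product
of the two $c\bar c$ couplings and, with a reordering sign, the product
of the $cc$ and $\bar c\bar c$ couplings survive. A short symbolic check
(sympy; symbol names are ours) confirms that this reproduces
$k^2[A+B\sin 2\theta_k]$ with $A$ and $B$ as above:

```python
import sympy as sp

a, b, c, k, th = sp.symbols('a b c k theta', real=True)
kx = k * sp.cos(th)
ky = k * sp.sin(th)

# Pairwise couplings read off from the quadratic action (SEFF70):
# A1 multiplies c_k cbar_k, A2 multiplies c_{-k} cbar_{-k},
# A3 multiplies c_k c_{-k}, A4 multiplies cbar_k cbar_{-k}.
A1 = sp.I * c * (kx - ky)
A2 = -sp.I * c * (kx - ky)
A3 = 2 * sp.I * (a * kx - b * ky)
A4 = 2 * sp.I * (-b * kx + a * ky)

# Grassmann integration of exp(S) over the four variables keeps only the
# term A1*A2 and, with a minus sign from reordering, the term A3*A4:
Zk = sp.expand(A1 * A2 - A3 * A4)

A = c**2 - 4 * a * b
B = -c**2 + 2 * (a**2 + b**2)
target = k**2 * (A + B * sp.sin(2 * th))
```

Here `sp.simplify(sp.expand_trig(Zk - target))` vanishes identically,
confirming the coefficients $A$ and $B$.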
We assume that $|A|$ is larger than $|B|$ on the second order critical
line, until a singular point is reached, where eventually $A^2=B^2$.
Indeed, the expression (\ref{fpFourier}) is valid only if the elements
$A+B\sin 2\theta_k$ are all strictly positive, which is the case only if
$A>0$ and $A^2>B^2$. This will be checked using numerical analysis. Beyond this
point, the effective action is unstable and has to be modified to
incorporate further corrections. In a bosonic $\Phi^6$ Ginzburg-Landau
theory describing a first order transition, the tricritical point is
usually defined as the point where both coefficients of $\Phi^2$ and
$\Phi^4$ terms vanish \cite{lawrie84,zj04}. By analogy, in the present
fermionic theory, it is tempting to associate the above singular
point with the effective tricritical point.\\
The parameters $t_x$, $u_x$ and $u_y$ are to be determined
self-consistently from the definitions Eqs. (\ref{parameters}). In the
continuous limit, these reduce to
\bb
\nonumber
t_x&=&\frac{c}{4\pi}\int_0^{\pi}\dd \theta\frac{1-\sin 2\theta}{A+B\sin
2\theta},
\\
u_x&=&\frac{1}{2\pi}\int_0^{\pi}\dd \theta\frac{a\sin 2\theta-b}{A+B\sin
2\theta},
\;\;
u_y=\frac{1}{2\pi}\int_0^{\pi}\dd \theta\frac{a-b\sin 2\theta}{A+B\sin
2\theta}.
\ee
After computing the trigonometric integrals, we obtain the relations
\bb\nonumber
t_x=\frac{c}{4B}\left(
-1+(A+B)\frac{{\rm sign}(A)}{\sqrt{A^2-B^2}}
\right),
\\ \nonumber
u_x=\frac{1}{2B}\left(
a-(aA+bB)\frac{{\rm sign}(A)}{\sqrt{A^2-B^2}}
\right),
\\ \label{relations}
u_y=\frac{1}{2B}\left(
-b+(bA+aB)\frac{{\rm sign}(A)}{\sqrt{A^2-B^2}}
\right).
\ee
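The closed forms (\ref{relations}) can be checked against direct
numerical quadrature of the angular integrals. A sketch (plain Python;
the midpoint rule converges rapidly here since the integrands are
$\pi$-periodic, and the test parameters are arbitrary inputs with
$A>|B|$):

```python
import math


def closed_forms(a, b, c):
    """Closed-form t_x, u_x, u_y of Eqs. (relations), valid for A^2 > B^2."""
    A = c * c - 4.0 * a * b
    B = -c * c + 2.0 * (a * a + b * b)
    s = math.copysign(1.0, A) / math.sqrt(A * A - B * B)
    tx = (c / (4.0 * B)) * (-1.0 + (A + B) * s)
    ux = (1.0 / (2.0 * B)) * (a - (a * A + b * B) * s)
    uy = (1.0 / (2.0 * B)) * (-b + (b * A + a * B) * s)
    return tx, ux, uy


def numeric_forms(a, b, c, n=4000):
    """Midpoint-rule evaluation of the angular integrals for t_x, u_x, u_y."""
    A = c * c - 4.0 * a * b
    B = -c * c + 2.0 * (a * a + b * b)
    tx = ux = uy = 0.0
    h = math.pi / n
    for i in range(n):
        th = (i + 0.5) * h
        s2 = math.sin(2.0 * th)
        d = A + B * s2
        tx += (1.0 - s2) / d
        ux += (a * s2 - b) / d
        uy += (a - b * s2) / d
    return (c * h * tx / (4.0 * math.pi),
            h * ux / (2.0 * math.pi),
            h * uy / (2.0 * math.pi))
```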
Numerically, we proceed in the following way. Starting from
$T$ slightly below $T_c(-\infty)$, we solve the consistency equations for
$t_x$, $u_x$ and $u_y$, with the value of $\Delta_0$ given by the critical
line (\ref{criticalline}) at the given temperature. The solutions are then
plugged into the coefficients $A(T)$ and $B(T)$, and we plot $A(T)^2-B(T)^2$
as a function of $T$, as shown in figure~\ref{tric1}. We repeat the
process by decreasing the temperature until we reach the point where this
quantity vanishes.\\
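The scan-and-locate procedure just described can be sketched generically. Here \texttt{stiffness(T)} is a hypothetical callable standing in for one full solution of the self-consistency equations at temperature $T$, returning $A(T)^2-B(T)^2$; it is assumed positive at the starting temperature:

```python
def locate_vanishing_stiffness(stiffness, t_start, t_step=1e-3, tol=1e-8):
    """Scan T downward from t_start until A(T)^2 - B(T)^2 changes sign,
    then bisect to locate the singular point T_t*.

    `stiffness` is a hypothetical placeholder for solving the
    self-consistency equations at temperature T; stiffness(t_start) > 0.
    """
    t_hi = t_start
    t_lo = t_hi - t_step
    while stiffness(t_lo) > 0.0:          # still in the stable region
        t_hi, t_lo = t_lo, t_lo - t_step
    # sign change bracketed in [t_lo, t_hi]; refine by bisection
    while t_hi - t_lo > tol:
        t_mid = 0.5 * (t_lo + t_hi)
        if stiffness(t_mid) > 0.0:
            t_hi = t_mid
        else:
            t_lo = t_mid
    return 0.5 * (t_lo + t_hi)
```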
By doing so we find a singular point approximately located at
$(T_{\mathrm{t}}^{*},\Delta_{0,\mathrm{t}}^{*})\simeq(0.42158, 1.9926)$.
This is close to the tricritical point $T_{\mathrm{t}}$ given by Monte
Carlo simulations: $(T_{\mathrm{t}},\Delta_{0,\mathrm{t}})\simeq(0.610,
1.9655)$ \cite{dasilva02}, and $(T_\mathrm{t},\Delta_{0,\mathrm{t}})
\simeq(0.609(3), 1.966(2))$ \cite{silva06}. If we assume that
$T_{\mathrm{t}}^{*}$ represents the tricritical point, the mean-field-like
treatment of the underlying field theory underestimates the fluctuations,
rendering the second order critical line more stable at lower temperatures,
as compared to Monte-Carlo results, as we approach $(T_c=0,\Delta_0=2)$
along the critical line. Stronger fluctuations can be simulated by
lowering the value of $g_S$, which increases the value of
$T_{\mathrm{t}}^{*}$ and lowers that of $\Delta_{0,\mathrm{t}}^{*}$. Instead
of $g_S=3$, taking $g_S=2.5$, for example, leads to
$T_{\mathrm{t}}^{*}\simeq 0.48$, closer to the Monte Carlo results. This
can be achieved precisely by incorporating more diagrams in the computation
of the effective free energy \cite{mattuck92}. Also, due to the fact that
we are in a region near $(T_c=0,\Delta_0=2)$, where the change in
temperature is large compared to the change of $\Delta_0$ (the slope is
vertical at this point as is seen in figure~\ref{plot}), it is more
difficult to obtain a precise value of $T_{\mathrm{t}}^{*}$ within a
mean-field treatment.\\
It is important that the BC fermionic action (\ref{SeffCont}) finally
predicts the existence of a special (tricritical) point on the critical
line, close (in $\Delta_0$) to the termination point of that line at
$(T_c=0,\Delta_0=2)$. Within this interpretation, the tricritical point is
defined as the point where the effective fermionic spectrum of the action
loses its stability, due to the modifications introduced into the kinetic
part by a sufficiently strong dilution of the system by vacancy states,
which corresponds to a large enough coupling constant $g_0$, as commented
above.
\footnote{It may also be noted that the Monte-Carlo values for
$(T_\mathrm{t}, \Delta_{0,\mathrm{t}})$ lie practically on the
theoretical curve for the critical line (\ref{criticalline})-(\ref{crit3}).
For instance, taking as input value $T_\mathrm{t}\simeq 0.609(3)$
\cite{silva06}, from (\ref{crit3}) we find $\Delta_{0,\mathrm{t}}\simeq
1.952$, which is sufficiently close to the M-C value $\Delta_{0,\mathrm{t}}
\simeq 1.966(2)$ from this set \cite{silva06}, the deviation being
probably less than 1\%.}
\begin{figure}
\begin{center}
\includegraphics[width=0.8\linewidth]{Tricritical-Point.eps}
\unitlength=1cm
\caption{\label{tric1} (color online) Stiffness of the spectrum: solution
of HFB self-consistent Eqs. (\ref{relations}) for the
coefficient $A(T)^2-B(T)^2$ as function of $T$. The temperature where
$A(T)^2-B(T)^2=0$ gives the location of the singular
point $T_{\mathrm{t}}^{*}$.}
\end{center}
\end{figure}
\section{Conclusions}
In this paper, we have considered the physics of the BC model as a
fermionic field theory. Using Grassmann algebra, we have shown that the
model can be transformed into quantum field theoretical language in terms
of fermions, alias Grassmann variables. This fermionic theory for the BC
model is described by an exact fermionic action with an interaction on a
discrete
lattice. This action can be reduced, after some transformations, in the
continuum limit and low energy sector, to an effective continuum field
theory which includes a modified Ising action, which is quadratic in
fermions, and a quartic interaction. From there we have extracted the exact
mass of the model and analyzed the effect of the quartic term on the
stability of the free fermion spectrum in the kinetic part. The condition
of the zero BC mass gives the critical line of phase transition
points in the $(T,\Delta_0)$ plane, which is found to be in a very good
agreement with the results of Monte-Carlo simulations over the whole range
of variation of concentration of the non-magnetic sites governed by
$\Delta_0$. The location of the tricritical point needs additional analysis
of the excitation spectrum of integral factors $Z_{{\bm{k}}}$ of $Z$ around the
origin in the momentum space. In particular, the \textit{stiffness} of the
excitation spectrum (the coefficient in front of ${\bm{k}}^2$ term in
factors $Z_{{\bm{k}}}$ as we expand the dispersion relation for $Z$ in momentum
variables) vanishes at a singular point $T_{\mathrm{t}}^{*}$, which we
identify with the tricritical point $T_{\mathrm t}$. A
Hartree-Fock-Bogoliubov analysis gives an approximate location for this
point on the phase diagram (critical line) which can be compared to
the numerical results of Monte Carlo simulations. The more precise location
of the instability point could be achieved by taking into account more
diagrams contributing to the effective free energy. In any case, we have
shown the existence of a singular point at the critical line by studying
the stability of the kinetic spectrum of the action at this line, where the
nature of the transition is to be changed due to strong dilution. The main
result of this paper is the possibility to study precisely first-order
transition driven systems from a fermionic point of view using Grassmann
algebra. The method we have applied may be useful as well for other systems
where the effective field theory is given by an action similar to that of
Eq. (\ref{SeffCont}). In essence, this is one of the simplest forms of an
action with a 4-fermion interaction that can be written out from a single
pair of Grassmann variables at each point of real space in two
dimensions. Application of the same method to other extensions of the BC
Hamiltonian, such as the Blume-Emery-Griffiths model \cite{BeG71}, is also
possible. Finally, at intermediate stages, a partial bosonization of the
system leads to a \emph{mixed} representation of the model not only in
terms of fermions but also in terms of hard core \emph{bosons}, as written
explicitly
in the lattice action of Eq. (\ref{act1fin}). The representations of this
kind could be useful also to look for a possible interpretation of the
tricritical point in the BC model as a special point in the phase diagram
where an additional hidden symmetry between fermions and bosons may appear.
\section*{References}
.class Lcom/samsung/android/settings/navigationbar/NavigationBarPressureGaugePreference$ButtonViewHolder;
.super Ljava/lang/Object;
.source "NavigationBarPressureGaugePreference.java"
# annotations
.annotation system Ldalvik/annotation/EnclosingClass;
value = Lcom/samsung/android/settings/navigationbar/NavigationBarPressureGaugePreference;
.end annotation
.annotation system Ldalvik/annotation/InnerClass;
accessFlags = 0xa
name = "ButtonViewHolder"
.end annotation
# instance fields
.field private mBackgroundCircle:[Landroid/widget/ImageView;
.field private mButtonImage:[Landroid/widget/ImageView;
.field private mPressure_Gauge:[Landroid/widget/ImageView;
.field private mRippleView:[Landroid/widget/ImageView;
# direct methods
.method static synthetic -get0(Lcom/samsung/android/settings/navigationbar/NavigationBarPressureGaugePreference$ButtonViewHolder;)[Landroid/widget/ImageView;
.locals 1
iget-object v0, p0, Lcom/samsung/android/settings/navigationbar/NavigationBarPressureGaugePreference$ButtonViewHolder;->mBackgroundCircle:[Landroid/widget/ImageView;
return-object v0
.end method
.method static synthetic -get2(Lcom/samsung/android/settings/navigationbar/NavigationBarPressureGaugePreference$ButtonViewHolder;)[Landroid/widget/ImageView;
.locals 1
iget-object v0, p0, Lcom/samsung/android/settings/navigationbar/NavigationBarPressureGaugePreference$ButtonViewHolder;->mPressure_Gauge:[Landroid/widget/ImageView;
return-object v0
.end method
.method static synthetic -get3(Lcom/samsung/android/settings/navigationbar/NavigationBarPressureGaugePreference$ButtonViewHolder;)[Landroid/widget/ImageView;
.locals 1
iget-object v0, p0, Lcom/samsung/android/settings/navigationbar/NavigationBarPressureGaugePreference$ButtonViewHolder;->mRippleView:[Landroid/widget/ImageView;
return-object v0
.end method
.method public constructor <init>(Landroid/support/v7/preference/PreferenceViewHolder;)V
.locals 3
const/4 v1, 0x1
const/4 v2, 0x0
invoke-direct {p0}, Ljava/lang/Object;-><init>()V
new-array v0, v1, [Landroid/widget/ImageView;
iput-object v0, p0, Lcom/samsung/android/settings/navigationbar/NavigationBarPressureGaugePreference$ButtonViewHolder;->mBackgroundCircle:[Landroid/widget/ImageView;
new-array v0, v1, [Landroid/widget/ImageView;
iput-object v0, p0, Lcom/samsung/android/settings/navigationbar/NavigationBarPressureGaugePreference$ButtonViewHolder;->mPressure_Gauge:[Landroid/widget/ImageView;
new-array v0, v1, [Landroid/widget/ImageView;
iput-object v0, p0, Lcom/samsung/android/settings/navigationbar/NavigationBarPressureGaugePreference$ButtonViewHolder;->mButtonImage:[Landroid/widget/ImageView;
new-array v0, v1, [Landroid/widget/ImageView;
iput-object v0, p0, Lcom/samsung/android/settings/navigationbar/NavigationBarPressureGaugePreference$ButtonViewHolder;->mRippleView:[Landroid/widget/ImageView;
iget-object v1, p0, Lcom/samsung/android/settings/navigationbar/NavigationBarPressureGaugePreference$ButtonViewHolder;->mRippleView:[Landroid/widget/ImageView;
const v0, 0x7f0a03e4
invoke-virtual {p1, v0}, Landroid/support/v7/preference/PreferenceViewHolder;->findViewById(I)Landroid/view/View;
move-result-object v0
check-cast v0, Landroid/widget/ImageView;
aput-object v0, v1, v2
iget-object v1, p0, Lcom/samsung/android/settings/navigationbar/NavigationBarPressureGaugePreference$ButtonViewHolder;->mBackgroundCircle:[Landroid/widget/ImageView;
const v0, 0x7f0a03dd
invoke-virtual {p1, v0}, Landroid/support/v7/preference/PreferenceViewHolder;->findViewById(I)Landroid/view/View;
move-result-object v0
check-cast v0, Landroid/widget/ImageView;
aput-object v0, v1, v2
iget-object v1, p0, Lcom/samsung/android/settings/navigationbar/NavigationBarPressureGaugePreference$ButtonViewHolder;->mButtonImage:[Landroid/widget/ImageView;
const v0, 0x7f0a03de
invoke-virtual {p1, v0}, Landroid/support/v7/preference/PreferenceViewHolder;->findViewById(I)Landroid/view/View;
move-result-object v0
check-cast v0, Landroid/widget/ImageView;
aput-object v0, v1, v2
iget-object v1, p0, Lcom/samsung/android/settings/navigationbar/NavigationBarPressureGaugePreference$ButtonViewHolder;->mPressure_Gauge:[Landroid/widget/ImageView;
const v0, 0x7f0a03e2
invoke-virtual {p1, v0}, Landroid/support/v7/preference/PreferenceViewHolder;->findViewById(I)Landroid/view/View;
move-result-object v0
check-cast v0, Landroid/widget/ImageView;
aput-object v0, v1, v2
return-void
.end method
Convert complex spherical harmonics to real form.
# Usage
`rcilm` = pyshtools.SHctor (`ccilm`, [`lmax`, `convention`, `switchcs`])
# Returns
`rcilm` : float, dimension (2, `lmax`+1, `lmax`+1)
: The output real spherical harmonic coefficients. `rcilm[0,:,:]` and `rcilm[1,:,:]` correspond to the cosine and sine terms, respectively.
# Parameters
`ccilm` : float, dimension (2, `lmaxin`+1, `lmaxin`+1)
: The input complex spherical harmonic coefficients. `ccilm[0,:,:]` and `ccilm[1,:,:]` correspond to the real and imaginary parts of the coefficients, respectively. Only the positive angular orders are input; the negative orders are assumed to satisfy the relation `C_{l-m}=(-1)^m C_{lm}^*`.
`lmax` : optional, integer, default = `lmaxin`
: The maximum degree of the output coefficients.
`convention` : optional, integer, default = 1
: If 1 (default), the input and output coefficients will have the same normalization. If 2, orthonormalized coefficients will be converted to real geodesy 4-pi form.
`switchcs` : optional, integer, default = 0
: If 0 (default), the input and output coefficients will possess the same Condon-Shortley phase convention. If 1, the input coefficients will first be multiplied by (-1)^m.
# Description
`SHctor` will convert the complex spherical harmonics of a real function to real form. By default, the normalization of the input and output coefficients is the same, but if the optional argument `convention` is set to 2, this routine will convert from geodesy 4-pi normalized coefficients to orthonormalized coefficients. The Condon-Shortley phase convention between the input and output coefficients can be modified by the optional argument `switchcs`.
# See also
[shrtoc](pyshrtoc.html)
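For illustration, here is a minimal pure-Python sketch of the default branch (`convention` = 1, `switchcs` = 0). The conversion relations used (`r_{l0} = Re(c_{l0})`; cosine term `sqrt(2)*Re(c_{lm})` and sine term `-sqrt(2)*Im(c_{lm})` for m > 0) are the standard ones implied by `C_{l-m}=(-1)^m C_{lm}^*`; this is a sketch, not pyshtools' actual implementation:

```python
import math

def sh_ctor(ccilm, lmax):
    """Convert complex SH coefficients of a real function to real form.

    ccilm[i][l][m]: i=0 real part, i=1 imaginary part (positive orders only).
    Returns rcilm[i][l][m]: i=0 cosine terms, i=1 sine terms.
    Assumed relations (same normalization, same Condon-Shortley phase).
    """
    rt2 = math.sqrt(2.0)
    rcilm = [[[0.0] * (lmax + 1) for _ in range(lmax + 1)] for _ in range(2)]
    for l in range(lmax + 1):
        rcilm[0][l][0] = ccilm[0][l][0]          # m = 0: purely real
        for m in range(1, l + 1):
            rcilm[0][l][m] = rt2 * ccilm[0][l][m]
            rcilm[1][l][m] = -rt2 * ccilm[1][l][m]
    return rcilm

def sh_rtoc(rcilm, lmax):
    """Inverse conversion, used here only to verify the round trip."""
    rt2 = math.sqrt(2.0)
    ccilm = [[[0.0] * (lmax + 1) for _ in range(lmax + 1)] for _ in range(2)]
    for l in range(lmax + 1):
        ccilm[0][l][0] = rcilm[0][l][0]
        for m in range(1, l + 1):
            ccilm[0][l][m] = rcilm[0][l][m] / rt2
            ccilm[1][l][m] = -rcilm[1][l][m] / rt2
    return ccilm
```

A round trip through both functions recovers the input coefficients exactly.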
What's better than spoiling and pampering your mum? Getting spoilt and pampered together! For this lucky mother and daughter, they had the luxury of a pamper package to the value of $950 this Mother's Day after winning our Mother's Day competition.
It all began with both having a makeover here at Makeup on the Boulevarde. We wanted them to feel beautiful and oh-so glamourous for mum's special day!
The next step was relaxing while mother and daughter got their hair done. The hair specialist team at Georges Salon kept the duo relaxed and feeling fabulous once again with a fresh wash and style.
Finally, what's another way of saying "I love you and I am forever thankful for everything you do"? Flowers! Yes, this lucky mother received an amazing bouquet of flowers from the brilliant team at El Khair Florist, which surely lit up an extraordinary smile. To top it off, she also received a pamper package filled with goodies from us here at Makeup on the Boulevarde as well as from the team at Georges Salon.
What a wonderful feeling it is to see the ones we love happy, and to show our gratitude and love. We felt privileged to be able to show these wonderful women a Mother's Day to remember!
\section*{Abstract}
Low-Power Wide Area Networking (LPWAN) technology offers long-range communication, which enables new types of services.
Several solutions exist; \LoRaWAN is arguably the most adopted.
It promises ubiquitous connectivity in outdoor IoT applications, while keeping network structures, and management, simple.
This technology has received a lot of attention in recent months from network operators and solution providers.
Yet, the technology has limitations that need to be clearly understood to avoid inflated expectations and disillusionment.
This article provides an impartial and fair overview of what the capabilities and the limitations of \LoRaWAN are.
We discuss those in the context of use cases, and list open research and development questions.
\section{Introduction}
\label{sec:introduction}
Network operators are starting to deploy horizontal M2M solutions to cover a wide set of large scale verticals, using Low Power Wide Area Networking (LPWAN) technologies~\cite{linklabs16comprehensive,goursaud16dedicated}.
Application domains include smart city, metering, on-street lighting control or precision agriculture.
LPWAN technologies combine low data rate and robust modulation to achieve multi-km communication range.
This enables simple star network topologies that simplify network deployment and maintenance~\cite{xiong15low}.
While the benefits of these technologies are known and are often considered as the key enablers for some applications, their limitations are still not well understood~\cite{sanchez16state, margelis15low}.
In this article we aim to provide an impartial overview of the limitations of \LoRaWAN~\cite{sornin16lora}, one of the most successful technologies in the LPWAN space.
\LoRaWAN is a network stack rooted in the \LoRa physical layer.
\LoRaWAN features a raw maximum data rate of 27~kbps (50 kbps when using FSK instead of LoRa), and claims that a single gateway can collect data from thousands of nodes deployed kilometers away.
These capabilities have really resonated with some solution providers and network operators, who have created a large momentum behind \LoRaWAN to the point that it is sometimes touted as the connectivity enabler for any IoT use case~\cite{ducrot16lora}.
The goal of this article is to bring some sanity to these statements, by providing a comprehensive, fair and independent analysis of what the capabilities and limitations of \LoRaWAN are.
We adopt a pragmatic approach, and identify in which use cases the technology works, and in which use cases it doesn't work.
Section~\ref{sec:overview} provides an overview of LPWAN technologies, including cellular.
Section~\ref{sec:description} describes \LoRaWAN technology in details.
Section~\ref{sec:capacity} analyzes the network capacity and scale limitations of the technology.
Section~\ref{sec:usecases} discusses the use cases where \LoRaWAN works/doesn't work.
Section~\ref{sec:research} lists open research and development challenges for the technology.
Section~\ref{sec:conclusion} concludes.
\section{Overview of LPWAN and Cellular technologies for IoT}
\label{sec:overview}
\subsection{Low-Power Wide-Area Alternatives}
Although \LoRaWAN is one of the most adopted technologies for IoT, there is a wide range of LPWAN technologies in the market, such as Ingenu, Weightless-W, -N and -P, or SigFox~\cite{draft-minaburo-lpwan-gap-analysis}.
Ingenu developed a proprietary LPWAN technology in the 2.4~GHz band, based on Random Phase Multiple Access (RPMA) to provide M2M industry solutions and private networks.
The main asset of Ingenu in comparison with alternative solutions is its high data rate: up to 624~kbps in the uplink and 156~kbps in the downlink. On the other hand, the energy consumption is higher and the range is shorter (around 5-6 km) due to the higher frequency band used.
The Weightless Special Interest Group developed a set of three open standards for LPWAN: Weightless-W, Weightless-N and Weightless-P.
Weightless-W was developed as a bidirectional (uplink/downlink) solution to operate in TV whitespaces (470-790~MHz).
It is based on narrowband FDMA channels with Time Division Duplex between uplink and downlink; data rate ranges from 1~kbps to 1~Mbps and battery lifetime is around 3-5 years.
Weightless-N was designed to expand the range of Weightless-W and reduce the power consumption (a battery lifetime up to 10 years) at the expense of data rate decrease (from up to 1~Mbps in Weightless-W to 100~kbps in Weightless-N).
Unlike Weightless-W, Weightless-N is based on the Ultra Narrow Band (UNB) technology and operates in the UHF 800-900~MHz band; it provides only uplink communication.
Finally, Weightless-P is proposed as a high-performance two-way communication solution that can operate over 169, 433, 470, 780, 868, 915 and 923~MHz bands.
However, cost of the terminals and power consumption are higher than in Weightless-N, with a battery lifetime of 3-8 years.
Together with \LoRaWAN, SigFox is one of the most adopted LPWAN solutions.
It is a proprietary UNB solution that operates in the 869~MHz (Europe) and 915~MHz (North America) bands.
Its signal is extremely narrowband (100~Hz bandwidth).
It is based on Random Frequency and Time Division Multiple Access (RFTDMA) and achieves a data rate around 100~bps in the uplink, with a maximum packet payload of 12~Bytes, and a number of packets per device that cannot exceed 14~packets/day.
These tough restrictions, together with a business model where SigFox owns the network, have somewhat shifted the interest to~\LoRaWAN, which is considered more flexible and open.
\subsection{Cellular solutions for IoT}
The 3\textsuperscript{rd} Generation Partnership Project (3GPP) standardized a set of low cost and low complexity devices targeting Machine-Type-Communications (MTC) in Release 13.
In particular, 3GPP addresses the IoT market from a three-fold approach by standardizing the enhanced Machine Type Communications (eMTC), the Narrow Band IoT (NB-IoT) and the EC-GSM-IoT~\cite{nokia16LTEevolution}.
eMTC is an evolution of the work developed in Release 12 that can reach up to 1~Mbps in the uplink and downlink, and operates in LTE bands with a 1.08~MHz bandwidth.
NB-IoT is an alternative that, thanks to the reduced complexity, has a lower cost at the expense of decreasing data rate (up to 250~kbps in both directions).
Finally, EC-GSM-IoT is an evolution of EGPRS towards IoT, with data rate between 70 and 240~kbps.
Although the approaches proposed by 3GPP reduce the energy consumption and the cost of the devices, they have not yet caught up with their non-3GPP counterparts. For instance, module cost for \LoRaWAN and SigFox is around \$2-5, while for eMTC it is still around \$8-12. Despite the expected broad adoption of cellular IoT solutions supported by 3GPP, \LoRaWAN presents some assets to prevail against these technologies in specific market niches. Current assets are: i) the number of \LoRaWAN network deployments is increasing continuously, while only a few initial NB-IoT networks have been deployed; ii) \LoRaWAN operates in the ISM band whereas cellular IoT operates in licensed bands; this fact favours the deployment of private \LoRaWAN networks without the involvement of mobile operators; iii) \LoRaWAN has backing from industry, e.g. CISCO, IBM or HP, among others. In the future, both technologies will probably coexist, once 3GPP solutions are backed by large volumes.
\section{Overview of \LoRaWAN}
\label{sec:description}
LoRa is the physical layer used in \LoRaWAN.
It features low power operation (around 10 years of battery lifetime), low data rate (27 kbps with spreading factor 7 and 500 kHz channel or 50 kbps with FSK) and long communication range (2-5 km in urban areas and 15 km in suburban areas).
It was developed by Cycleo (a French company acquired by Semtech).
\LoRaWAN networks are organized in a star-of-stars topology, in which gateway nodes relay messages between end-devices and a central network server.
End-devices send data to gateways over a single wireless hop and gateways are connected to the network server through a non-\LoRaWAN network (e.g.~IP over Cellular or Ethernet). Communication is bi-directional, although uplink communication from end-devices to the network server is strongly favoured, as will be explained below~\cite{sornin16lora}.
\LoRaWAN defines three types of devices (\textit{Class A}, \textit{B} and \textit{C}) with different capabilities~\cite{sornin16lora}.
\textit{Class A} devices use pure ALOHA access for the uplink.
After sending a frame, a \textit{Class A} device listens for a response during two downlink receive windows. Each receive window is defined by its duration, an offset time and a data rate. Although the offset time can be configured, the recommended values for the first and second receive windows are 1 sec and 2 sec after the end of the uplink, respectively.
Downlink transmission is only allowed after a successful uplink transmission. The data rate used in the first downlink window is calculated as a function of the uplink data rate and the receive window offset. In the second window the data rate is fixed to the minimum, 0.3 kbps.
Therefore, downlink traffic cannot be transmitted until a successful uplink transmission is decoded by the gateway.
The second receive window is disabled when downlink traffic is received by the end-device in the first window.
\textit{Class A} is the class of \LoRaWAN devices with the lowest power consumption.
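The Class A receive-window timing described above can be sketched as follows. The 1 sec/2 sec offsets are the recommended defaults; the RX1 data-rate offset rule is region-specific, so a configurable offset (default 0) is assumed here:

```python
def class_a_rx_windows(uplink_end_s, uplink_dr, rx1_offset_s=1.0, rx1_dr_offset=0):
    """Schedule the two Class A downlink receive windows after an uplink.

    RX1 opens rx1_offset_s after the end of the uplink and reuses the
    uplink data rate shifted by a region-specific offset; RX2 opens one
    second later at a fixed low data rate (DR0, i.e. SF12 in EU868).
    """
    rx1 = {"opens_at_s": uplink_end_s + rx1_offset_s,
           "data_rate": max(uplink_dr - rx1_dr_offset, 0)}
    rx2 = {"opens_at_s": uplink_end_s + rx1_offset_s + 1.0,
           "data_rate": 0}                 # fixed, lowest data rate
    return rx1, rx2
```

RX2 is used only if no downlink is decoded in RX1, mirroring the behaviour described above.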
\textit{Class B} devices are designed for applications with additional downlink traffic needs.
These devices are synchronized using periodic beacons sent by the gateway to allow the schedule of additional receive windows for downlink traffic without prior successful uplink transmissions.
Obviously, a trade-off between downlink traffic and power consumption arises.
Finally, \textit{Class C} devices are always listening to the channel except when they are transmitting.
Only \textit{Class A} must be implemented in all end-devices, and the other classes must remain compatible with \textit{Class A}. In turn, \textit{Class C} devices cannot implement \textit{Class B}. The three classes can coexist in the same network and devices can switch from one class to another. However, there is no specific message defined by \LoRaWAN to inform the gateway about the class of a device; this is up to the application.
The underlying PHY of the three classes is the same.
Communication between end-devices and gateways starts with a \textit{Join procedure} that can occur on multiple frequency channels (e.g.~in the EU863-870 ISM band there are 3 channels of 125 kHz that must be supported by all end-devices and 3 additional 125 kHz channels) by implementing pseudo-random channel hopping.
Each frame is transmitted with a specific Spreading Factor (SF), defined as $SF= \log_2 {(R_c / R_s)}$, where $R_s$ is the symbol rate and $R_c$ is the chip rate.
Accordingly, there is a trade-off between SF and communication range.
The higher the SF (i.e.~the slower the transmission), the longer the communication range.
The codes used in the different SFs are orthogonal.
This means that multiple frames can be exchanged in the network at the same time, as long as each one is sent with one of the six different SFs (from SF=7 to SF=12).
Depending on the SF in use, \LoRaWAN data rate ranges from 0.3~kbps to 27~kbps.
The maximum duty-cycle, defined as the maximum percentage of time during which an end-device can occupy a channel, is a key constraint for networks operating in unlicensed bands.
Therefore, the selection of the channel must implement pseudo-random channel hopping at each transmission and be compliant with the maximum duty-cycle. For instance, the duty-cycle is 1\% in EU 868 for end-devices.
The LoRa physical layer uses Chirp Spread Spectrum (CSS) modulation, a spread spectrum technique where the signal is modulated by chirp pulses (frequency varying sinusoidal pulses) hence improving resilience and robustness against interference, Doppler effect and multipath.
Packets contain a preamble (typically with 8 symbols), a header (mandatory in explicit mode), the payload (with a maximum size between 51~Bytes and 222~Bytes, depending on the SF) and a Cyclic Redundancy Check (CRC) field; forward error correction provides a coding rate from 4/5 to 4/8.
Typical bandwidth (BW) values are 125, 250 and 500~kHz in the HF ISM 868 and 915 MHz band, while they are 7.8, 10.4, 15.6, 20.8, 31.2, 41.7 and 62.5 kHz in the LF 160 and 480 MHz bands.
The raw data rate varies according to the SF and the bandwidth, and ranges between 22~bps (BW = 7.8~kHz and SF = 12) to 27~kbps (BW = 500~kHz and SF = 7)~\cite{goursaud16dedicated}.
Frequency hopping is exploited at each transmission in order to mitigate external interference~\cite{watteyne09reliability}.
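The raw data rates quoted above follow from the fact that a LoRa symbol carries SF bits and lasts $2^{SF}/BW$ seconds, i.e. $R_b = SF \cdot BW / 2^{SF}$ before coding overhead. A minimal calculator:

```python
def lora_raw_bitrate(sf, bw_hz):
    """Raw LoRa bit rate in bps: SF bits per symbol, symbol rate BW / 2^SF.

    Coding overhead (rates 4/5 to 4/8) further reduces the useful rate.
    """
    return sf * bw_hz / 2 ** sf
```

For example, SF = 7 with BW = 500 kHz gives about 27.3 kbps and SF = 12 with BW = 7.8 kHz about 22.9 bps, consistent with the 27 kbps and 22 bps figures quoted above.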
\section{Capacity and Network Size Limitations}
\label{sec:capacity}
In this section we study the \LoRaWAN network scale with respect to data rate, duty-cycle regulations, etc.
\subsection{Network size limited by duty-cycle}
\label{DutyCycle}
Although the performance of \LoRaWAN is determined by PHY/MAC overviewed in Section~\ref{sec:description}, the duty-cycle regulations in the ISM bands~\cite{electronic12erc,federal15fcc} arise as a key limiting factor. If the maximum duty-cycle in a sub-band is denoted by $d$ and the packet transmission time, known as Time On Air, is denoted by $T_a$, each device must be silent in the sub-band for a minimum off-period $T_s= T_a(\frac{1}{d}-1)$. For instance, the maximum duty-cycle of the EU 868 ISM band is 1\% and it results in a maximum transmission time of 36 sec/hour in each sub-band for each end-device. Fig.~\ref{fig:TimeOnAir} shows the Time on Air of a packet transmission with coding rate 4/5 over a 125 kHz bandwidth channel. It is known that large SFs allow longer communication range. However, as observed in Fig.~\ref{fig:TimeOnAir}, large SFs also increase the time on air and, consequently, the off-period duration. This problem is exacerbated by the fact that large SFs are used more often than small SFs. For instance, considering a simple scenario with end-devices distributed uniformly within a round-shaped area centred at the gateway, and a path loss calculated with the Okumura-Hata model for urban cells \cite{Okumura}, the probability that an end-device uses a SF $i$, $p_i$, would be $p_{12}=0.28$, $p_{11}=0.20$, $p_{10}=0.14$, $p_{9}=0.10$, $p_{8}=0.08$ and $p_{7}=0.19$.
\begin{figure}
\centering
\includegraphics[width=1.00\columnwidth]{TimeOnAir.pdf}
\caption{Time on Air of \LoRaWAN with code rate 4/5 and a 125 kHz bandwidth.}
\label{fig:TimeOnAir}
\end{figure}
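The Time on Air values in figure~\ref{fig:TimeOnAir} and the off-period $T_s = T_a(\frac{1}{d}-1)$ can be reproduced with the standard Semtech time-on-air formula. The sketch below assumes explicit header, CRC enabled and low-data-rate optimization for SF11/SF12 at 125 kHz; it is an illustration, not vendor code:

```python
import math

def lora_time_on_air(payload_bytes, sf, bw_hz=125e3, cr=1, preamble=8,
                     explicit_header=True, crc=True, low_dr_opt=None):
    """Packet time on air (seconds) using the standard LoRa formula.

    cr=1..4 encodes coding rates 4/5..4/8; low-data-rate optimization is
    enabled by default for SF11/SF12 at 125 kHz, as is customary.
    """
    if low_dr_opt is None:
        low_dr_opt = sf >= 11 and bw_hz == 125e3
    t_sym = (2 ** sf) / bw_hz
    de, ih = int(low_dr_opt), int(not explicit_header)
    num = 8 * payload_bytes - 4 * sf + 28 + 16 * int(crc) - 20 * ih
    n_payload = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble + 4.25 + n_payload) * t_sym

def min_off_period(t_air, duty_cycle=0.01):
    """Minimum silent time in the sub-band after a transmission."""
    return t_air * (1.0 / duty_cycle - 1.0)
```

For a 10-Byte payload at SF7 this gives about 41 ms on air, hence a minimum off-period of about 4.1 s under the 1\% duty-cycle.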
Although Listen Before Talk is not precluded in \LoRaWAN, only ALOHA access is mandatory. Accordingly, the \LoRaWAN capacity can be calculated roughly as the superposition of independent ALOHA-based networks (one independent network for each channel and for each SF, since simultaneous transmissions only cause a collision if they both select the same SF and channel; no capture effect is considered).
However, and in contrast to pure ALOHA, a \LoRaWAN device using SF $i$ cannot exceed a transmitted packet rate given by $nd/T_{a_i}$, where $n$ is the number of channels, $d$ is the duty-cycle and $T_{a_i}$ is the Time On Air with SF $i$.
In the simple scenario described above, if all end-devices transmit packets at the maximum packet rate $nd/T_{a_i}$, the number of packets successfully received by the gateway decreases as shown in Fig.~\ref{fig:PacksDC}, where a network with $n=3$ channels has been analyzed.
The number of received packets drops due to the effect of collisions.
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth]{packetsDutyCycle.pdf}
\caption{Number of packets received per hour when end-devices attempt transmission at $nd/T_{a_i}$ packets/sec with coding rate 4/5 and $n=3$ channels with 125 kHz bandwidth.}
\label{fig:PacksDC}
\end{figure}
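The superposition-of-ALOHA view can be made concrete. Treating each (channel, SF) pair as an independent pure-ALOHA channel with success probability $e^{-2G}$ at normalized load $G$, and capping the per-device rate at $nd/T_{a_i}$, a rough single-SF sketch (no capture effect, as assumed above) is:

```python
import math

def aloha_received_per_hour(n_devices, pkts_per_dev_hour, t_air_s, n_channels=3):
    """Expected successfully received packets/hour on one SF, modelling
    each channel as an independent pure-ALOHA channel (success e^{-2G})."""
    per_channel_rate = n_devices * pkts_per_dev_hour / n_channels / 3600.0  # pkt/s
    g = per_channel_rate * t_air_s           # normalized offered load G
    return n_devices * pkts_per_dev_hour * math.exp(-2.0 * g)

def max_pkts_per_dev_hour(t_air_s, n_channels=3, duty_cycle=0.01):
    """Duty-cycle cap n*d/T_a on the per-device transmission rate."""
    return 3600.0 * n_channels * duty_cycle / t_air_s
```

With a 10-Byte SF7 packet ($T_a \approx 41$ ms), the duty-cycle cap over $n=3$ channels is about 2620 packets/hour per device; the full picture in Table~\ref{t:throughput} additionally averages over the SF distribution.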
In Fig.~\ref{fig:comparison} the number of packets received successfully per hour and end-device is shown for deployments with $\{ 250, 500, 1000, 5000\}$ end-devices and $n=3$ channels.
For low transmission rate values (in packets/hour), throughput is limited by collisions; for high values, the maximum duty-cycle prevents end-devices from increasing the packet transmission rate and stabilizes the throughput.
For deployments with a ``small'' number of end-devices, the duty-cycle constraint limits the maximum throughput.
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth]{Comparison.pdf}
\caption{Number of 10 Bytes payload packets received per hour and node for $\{ 250, 500, 1000, 5000\}$ end-devices and $n=3$ channels as a function of the packet generation.}
\label{fig:comparison}
\end{figure}
Table~\ref{t:throughput} summarizes the maximum throughput per end-device and the probability of successful reception for a set of different deployments.
The maximum throughput falls as the number of end-devices grows.
\begin{table*}
\centering
\caption{Maximum throughput and probability of successful transmission for different deployments (with $n$=3 channels and 1\% duty-cycle)}
\label{t:throughput}
\begin{tabular}{l|ccc|ccc|ccc|ccc|}
\cline{2-13}
& \multicolumn{3}{c|}{250 end-devices} & \multicolumn{3}{c|}{500 end-devices} & \multicolumn{3}{c|}{1000 end-devices} & \multicolumn{3}{c|}{5000 end-devices} \\
\hline
\multicolumn{1}{|l|}{Payload (Bytes)} & \multicolumn{1}{c|}{10} & \multicolumn{1}{c|}{30} & 50 & \multicolumn{1}{c|}{10} & \multicolumn{1}{c|}{30} & 50 & \multicolumn{1}{c|}{10} & \multicolumn{1}{c|}{30} & 50 & \multicolumn{1}{c|}{10} & \multicolumn{1}{c|}{30} & 50 \\
\hline
\multicolumn{1}{|l|}{Max. throughput per node (Packets/hour)} & 367 & 217 & 157 & 198 & 117 & 84 & 89 & 53 & 38 & 18 & 10 & 7.3 \\
\multicolumn{1}{|l|}{Max. throughput per node (Bytes/hour)} & 3670 & 6510 & 7850 & 1980 & 3510 & 4200 & 890 & 1590 & 1900 & 180 & 300 & 365 \\
\multicolumn{1}{|l|}{$\lambda$ of the max. throughput (Packets/hour)} & 2620 & 1500 & 1090 & 1500 & 870 & 620 & 670 & 390 & 280 & 130 & 70 & 50 \\
\multicolumn{1}{|l|}{Prob. of successful transmission (\%)} & 14.01 & 14.47 & 10.73 & 13.20 & 13.45 & 13.55 & 13.28 & 13.59 & 13.57 & 13.85 & 14.29 & 14.60 \\
\hline
\end{tabular}
\end{table*}
\subsection{Reliability and Densification Drain Network Capacity}
In \LoRaWAN, reliability is achieved through the acknowledgment of frames in the downlink.
For class A end-devices, the acknowledgment can be transmitted in one of the two available receive windows; for class B end-devices, it is transmitted in one of the two receive windows or in an additional time-synchronized window; for class C end-devices, it can be transmitted at any time.
In \LoRaWAN the capacity of the network is reduced not only due to transmissions in the downlink, but also due to the off-period time following those transmissions (gateways must be compliant with duty-cycle regulation).
Therefore, the design of the network and the applications that run on it must minimize the number of acknowledged frames to avoid the capacity drain.
This side-effect calls into question the feasibility of deploying ultra-reliable services over large-scale \LoRaWAN networks.
At this point of development of the technology, \LoRaWAN faces deployment trends that can result in future inefficiencies.
Specifically, \LoRaWAN networks are being deployed following the cellular network model, that is, network operators provide connectivity as a service.
This model is turning gateways into base stations covering large areas. The increase in the number of end-devices running applications from different vendors over the same shared infrastructure poses new challenges for coordinating the applications. In particular, each application has specific constraints in terms of reliability, maximum latency, transmission pattern, etc. Coordinating these diverse requirements over a single shared infrastructure using ALOHA-based access is one of the main future challenges for the technology. Therefore, fair spectrum sharing is required beyond the existing duty-cycle regulations.
Finally, the unplanned and uncoordinated deployment of \LoRaWAN gateways in urban regions, along with the deployment of alternative LPWAN solutions (e.g. SigFox), could cause a decrease of the capacity due to collisions and due to the use of larger SFs (to cope with higher interference levels).
\section{Use Cases}
\label{sec:usecases}
Several application use cases are considered in order to analyze the suitability of \LoRaWAN and complement the understanding of the advantages and limitations of the technology when applied to different types of data transmission patterns, latency requirements, scale and geographic dispersion among others.
\subsection{Real Time Monitoring}
Agriculture, leak detection or environment control are applications with a reduced number of periodic/aperiodic messages and relaxed delay constraints. In contrast, the communication range must be long enough to cope with the dispersed location of end-devices. \LoRaWAN has been designed to handle the traffic generated by this type of application and meets its requirements as long as the gateway deployment is sufficient to cover all end-devices.
On the other hand, industrial automation, critical infrastructure monitoring and actuation require some sort of real time operation.
Real time is generally understood as low latency and bounded jitter, with exact requirements depending on the specific application.
\LoRaWAN technology cannot claim to be a candidate solution for industrial automation, considering for example that industrial control loops may require response times around $1$ ms to $100$ ms and that, even for small packets of 10 Bytes, the time on air with SF=7 is around 40 ms.
As presented in the previous section, due to the MAC nature of \LoRaWAN, deterministic operation cannot be guaranteed regardless of application-specific periodicity, as ALOHA access is subject to contention, which impacts network jitter.
Despite that, small \LoRaWAN networks can deliver proper service to applications that require, for instance, sampling data every second.
To do that, two main design considerations should be taken into account:
\begin{itemize}
\item The spreading factor should be as small as possible to limit both the time on air and the subsequent off-period.
In other words, the gateway must be close enough to the end-devices.
\item The number of channels must be carefully designed and must be enough to i) minimize the probability of collisions (tightly coupled with the number of end-devices) and ii) offer quick alternative channels for nodes to retransmit collided packets thereby diminishing the impact of the duty-cycle.
\end{itemize}
Despite the two aforementioned aspects, latency will not be deterministic.
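The roughly 40 ms figure quoted above for a 10-byte payload at SF7 can be checked with the standard LoRa time-on-air formula from the Semtech SX127x documentation; a minimal sketch (default values assume an 8-symbol preamble, explicit header and CRC, as in typical \LoRaWAN uplinks):

```python
import math

def lora_time_on_air(payload_bytes, sf=7, bw_hz=125_000, cr=1,
                     preamble_syms=8, explicit_header=True,
                     crc=True, low_dr_opt=False):
    """Time on air (seconds) of a LoRa frame (Semtech SX127x formula).
    cr=1 means coding rate 4/5; low_dr_opt applies to SF11/12 at 125 kHz."""
    t_sym = (2 ** sf) / bw_hz                 # symbol duration
    ih = 0 if explicit_header else 1
    de = 1 if low_dr_opt else 0
    payload_syms = 8 + max(
        math.ceil((8 * payload_bytes - 4 * sf + 28 + 16 * int(crc) - 20 * ih)
                  / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble_syms + 4.25 + payload_syms) * t_sym
```

A 10-byte payload at SF7/125 kHz gives about 41 ms, while the same payload at SF12 (with low data-rate optimization) takes close to one second, which is why large SFs also inflate the subsequent off-period.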
\subsection{Metering}
The \LoRa Alliance is working on standard encapsulation profiles for popular M2M and metering protocols.
Keeping an existing application layer leaves most of the firmware and ecosystem intact, facilitating migration to LPWAN.
These protocols include Wireless M-Bus for water or gas metering, KNX for building automation, and ModBus for industrial automation.
It is important to understand that those scenarios range from time sensitive operation to best effort monitoring.
Therefore, it is key to identify in such a diverse ecosystem what the requirements of each application are and if \LoRaWAN is the appropriate technology to address them.
\subsection{Smart City Applications}
\LoRaWAN has shown key success stories with smart lighting, smart parking and smart waste collection thanks to their scale and the nature of the data generated by those applications.
These encompass periodic messaging with certain delay tolerance.
For example, smart parking applications report the status of the parking spots when a change is detected~\cite{martinez15lean}.
Parking events are slow and therefore network signaling is limited to few tens of messages per day.
Analogously, smart waste collection systems and smart lighting actuate or report information in response to measures with long variation periods.
Although latency and jitter are not major issues in these applications, in some of them the triggering factor is simultaneous for a huge number of end-devices.
For instance, sunset and dawn trigger the lighting elements across the whole city, thereby causing an avalanche of messages.
\LoRaWAN is an appropriate technology for this use case since it handles the wide coverage area and the significant number of users, at the expense of an increased number of collisions, latency and jitter.
\subsection{Smart Transportation and Logistics}
Transportation and logistics are seen as two major pillars of the expected IoT growth over the next few years thanks to their impact on the global economy. Most applications are targeting efficiency in areas such as public transportation or transport of goods. However, some applications are tolerant to delay, jitter or unreliability and some others are not.
Different standards have been developed in the 5.9 GHz band for Intelligent Transportation Systems (ITS) based on the IEEE 802.11p standard. The delay constraints are diverse across applications, but \LoRaWAN, being an LPWAN solution, is not suitable for them. On the contrary, solutions such as fleet control and management can be supported by \LoRaWAN. Roaming is one of the developments under definition within the \LoRa Alliance to enhance mobility. Specifically, the future roaming solution is expected to support back-end to back-end secure connections, clearing and billing between operators, location of end-devices (pointed out as an open research challenge in Section \ref{sec:research}) and transparent device provisioning across networks.
\subsection{Video Surveillance}
The most common digital video formats for IP-based video systems are MJPEG, MPEG-4 and H.264. The bit rate recommended for IP surveillance cameras ranges from 130 kbps with low quality MJPEG coding to 4 Mbps for 1920x1080 resolution and 30 fps MPEG-4/H.264 coding. Given that \LoRaWAN data rate ranges from 0.3~kbps to 50~kbps per channel, \LoRaWAN will not support these applications.
\section{Open Research Challenges}
\label{sec:research}
The effect of the duty-cycle stated in Section~\ref{sec:capacity} jeopardizes the actual capacity of large-scale deployments.
This has been initially addressed by TheThingsNetwork~\cite{giezeman16things}, an interesting global, open, crowd-sourced initiative to create an Internet of Things data network over \LoRaWAN technology.
The proposed solution defines an access policy, known as the TTN Fair Access Policy, that limits the Time on Air of each end-device to a maximum of 30 sec per day.
This policy is simple to implement and guarantees pre-defined end-device requirements for a large-scale network (more than 1000 end-devices per gateway).
However, it fails to provide the network with enough flexibility to adapt to environment and network conditions (i.e.~link budget of each end-device, number of end-devices, number of gateways, etc), as well as to applications with tight latency or capacity requirements.
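For reference, the 30 s/day air-time budget translates into a per-SF daily packet allowance. The sketch below uses approximate time-on-air values for a 10-byte payload at 125 kHz computed from the standard LoRa formula; the helper name and the table of values are ours, not TTN's:

```python
# Approximate time on air (s) of a 10-byte payload at 125 kHz bandwidth,
# per spreading factor (computed with the Semtech SX127x formula).
TOA_10B = {7: 0.041, 8: 0.072, 9: 0.144, 10: 0.288, 11: 0.578, 12: 0.991}

def ttn_daily_budget(sf, budget_s=30.0):
    """Uplinks per day allowed by the TTN Fair Access Policy (30 s/day)."""
    return int(budget_s // TOA_10B[sf])
```

The allowance collapses from several hundred packets per day at SF7 to a few tens at SF12, illustrating why a fixed air-time policy cannot serve applications with tight latency or capacity requirements.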
At this stage, the optimization of the capacity of the \LoRaWAN network, as well as the possibility to perform traffic slicing for guaranteeing specific requirements in a service basis, remain as open research issues.
From the authors' point of view, the research community will have to address the following open research challenges during the next years:
\begin{itemize}
\item \textbf{Explore new channel hopping methods:}
A pseudo-random channel hopping method is natively used in \LoRaWAN to distribute transmissions over the pool of available channels, thereby reducing the collision probability. However, this method cannot meet traffic requirements when there are latency, jitter or reliability constraints (i.e. downlink ACKs for all packets), and it cannot adapt to the noise level of each channel.
The design of pre-defined and adaptive hopping sequences arises as an open research issue. From the authors' point of view, the proposed channel hopping sequences should be able to reserve a set of channels for retransmissions of critical packets, both in the uplink and in the downlink (ACK).
The design of feasible feedback mechanisms between gateways and end-devices must be a key part of the approach in a system where uplink traffic is strongly favoured.
\item \textbf{Time Division Multiple Access (TDMA) over \LoRaWAN:}
The random nature of ALOHA-based access is not optimal to serve deterministic traffic, which is gaining importance in the IoT ecosystem. Building a complete or hybrid TDMA access on top of \LoRaWAN opens up new use cases for this technology and provides additional flexibility.
The TDMA scheduler should be able to allocate resources for ALOHA-based access and schedule deterministic traffic along time and over the set of available channels.
The proposed schedulers should manage the trade-off between resources devoted for deterministic and non-deterministic traffic, meet the regional duty-cycle constraints and guarantee fairness with co-existing \LoRaWAN networks.
\item \textbf{Geolocation of end-devices:}
The location of end-devices is a mandatory requirement for specific use cases, particularly in industry 4.0. However, GPS-based solutions are not feasible due to cost, and CPU and energy consumption. Currently, interesting works have been initiated to develop TDOA-based (Time Difference Of Arrival) triangulation techniques for \LoRaWAN. It has been shown that this approach benefits from large SFs and dense gateway deployments.
\item \textbf{Cognitive Radio:}
As pointed out in Section \ref{DutyCycle}, regulation in ISM bands concerning maximum duty-cycle has a significant impact on the capacity of the network. One of the most promising future directions could be the inclusion of cognitive radio into the \LoRaWAN standard. In contrast to Weightless-W, \LoRaWAN has not been designed to operate in TV whitespaces. In the future, the inclusion of cognitive radio into the \LoRaWAN standard would be subject to a significant reduction of the energy consumption associated with cognitive radio techniques.
\item \textbf{Power reduction for multi-hop solutions:}
\LoRaWAN is organized with a single-hop star topology for simplicity. As discussed in Section \ref{sec:capacity}, the impact of high SFs on the capacity of the network is two-fold, since it increases both the Time on Air and the off-period. A two-hop strategy for \LoRaWAN networks should be investigated to figure out its potential.
Proposals in this direction should consider the reduction of transmitted power and the decrease of the SFs. On the other hand, also negative effects such as complexity, synchronization, and increasing power consumption of relays should be analyzed to thoroughly characterize the trade-off.
\item \textbf{Densification of \LoRaWAN networks:}
The proliferation of LPWAN technologies, and particularly \LoRaWAN, poses co-existence challenges as gateways populate urban areas. Given the random-based access in unlicensed bands of \LoRaWAN and its inherently unplanned deployment, the performance achieved in isolated networks is put into question in scenarios with co-existing gateways and a limited number of available channels.
It is essential to devise coordination mechanisms between gateways from the same or different operators to limit interference and collisions. The co-existence mechanisms encompass coordination and reconfiguration protocols for gateways and end-devices.
\end{itemize}
\section{Conclusions}
\label{sec:conclusion}
This article aims to clarify the scope of \LoRaWAN by exploring the limits of the technology, matching them to application use cases and stating the open research challenges.
In the fragmented low-power M2M connectivity space there is no single solution for all possible connectivity needs, and \LoRaWAN is no exception.
A \LoRaWAN gateway, covering a range of tens of kilometers and able to serve up to thousands of end-devices, must be carefully dimensioned to meet the requirements of each use case.
Thus, the combination of the number of end-devices, the selected SFs and the number of channels will determine if the \LoRaWAN ALOHA based access and the maximum duty-cycle regulation fit each use case.
For instance, we have seen that deterministic monitoring and real time operation cannot be guaranteed with the current \LoRaWAN state of the art.
\section*{Acknowledgment}
This work is partially supported by the Spanish Ministry of Economy and the FEDER regional development fund under SINERGIA project (TEC2015-71303-R), and by the European Commission through projects H2020~F-Interop and H2020~ARMOUR.
\newpage
\bibliographystyle{IEEEtran}
\section*{Abstract}
In day-ahead electricity markets based on uniform marginal pricing, small variations in the offering and bidding curves may substantially modify the resulting market outcomes. In this work, we deal with the problem of finding the optimal offering curve for a risk-averse profit-maximizing generating company (GENCO) in a data-driven context. In particular, a large GENCO's market share may imply that her offering strategy can alter the marginal price formation, which can be used to increase profit. We tackle this problem from a novel perspective. First, we propose an optimization-based methodology to summarize each GENCO's step-wise supply curves into a subset of representative price-energy blocks. Then, the relationship between the market price and the resulting energy block offering prices is modeled through a Bayesian linear regression approach, which also allows us to generate stochastic scenarios for the sensitivity of the market towards the GENCO's strategy, represented by the regression coefficient probabilistic distributions. Finally, this predictive model is embedded in the stochastic optimization model by employing a constraint learning approach. Results show how allowing the GENCO to deviate from her true marginal costs renders significant changes in her profits and the market marginal price. Furthermore, these results have also been tested in an out-of-sample validation setting, showing how this optimal offering strategy is also effective in a real-world market context.
\vspace{5mm}
\noindent \textbf{Keywords:} Stochastic programming, Constraint learning, Data-driven optimization, Electricity market, Optimal pricing strategy
\section{Introduction}
\label{sec:intro}
As digitization and automation processes advance in the so-called fourth technological revolution, the treatment and use of large amounts of data for decision-making is of great importance. From pandemic management to financial investment or electricity consumption planning, data play a key role in making informed and optimal decisions, in most cases under high levels of uncertainty. Indeed, this data-driven perspective is inspiring the development of state-of-the-art optimization modeling techniques and efficient solution algorithms.
The assumption that the input parameters are known with complete certainty has been the fundamental hypothesis of deterministic optimization techniques tackling complex decision-making problems, based on linear, nonlinear, integer formulations, or a combination of these \parencite{murty1994operations}. However, this assumption is barely fulfilled in real contexts and hence, uncertainty must be considered within the optimization process. One of the most employed approaches in this regard is stochastic programming, where the model incorporates the estimated probability distribution of the uncertain parameters \parencite{birge2011introduction}. In particular, stochastic programming considers the following problem:
\begin{equation}
\label{eq:sto_pro}
\underset{x \in \mathcal{X}}{\min} \; \mathbb{E}\left[ c(x;Y) \right]
\end{equation}
\noindent where $x \in \mathcal{X} \subset \mathbb{R}^{d_x}$ represents the decision variables, $Y \in \mathcal{Y} \subset \mathbb{R}^{d_y}$ are the parameters that characterize the problem, $c(x;Y) : \mathbb{R}^{d_x} \times \mathbb{R}^{d_y} \rightarrow \mathbb{R}$ is the cost function, and $\mathbb{E}\left[\cdot \right]$ represents the expected value over the distribution of $Y$.
We want to further extend this setting and exploit the case where auxiliary information (covariates $\theta \in \Phi \subset \mathbb{R}^{d_{\theta}}$) is available to help model the complex response of $Y$. Thus, we set a decision problem in a data-driven context. In particular, let us assume again that $x$ is our decision variable, while $Y$ is a response from a complex system (e.g., market price) that conditions our objective function $c(x;Y|f,\theta)$. Notice how the function $f$ is now used to relate the response $Y$ with the contextual information $\theta$. Furthermore, let us assume that we have access to historical observations of the type $\mathcal{D}=\left\{ (x_1,y_1,\theta_1),\dots,(x_N,y_N,\theta_N)\right\}$, where each of the $N$ samples includes the decision variable $x$, the response $y$, and the covariates $\theta$.
The problem to be treated can be generalized as:
\begin{equation}\label{eq:general}
x\left(f,\theta\right)\in \argmin_{x \in \mathcal{X}} \mathbb{E} [c(x;Y|f,\theta)]
\end{equation}
Different perspectives have tried to tackle the use of contextual information for decision-making in an optimal way. One of the most common approaches in the literature is to follow a Predict and Optimize strategy. That is, we learn the relationship between $Y$ and $\theta$ through a predictive model $f$ by employing a dataset $\mathcal{D}$ of past observations. Then, when a new value $\theta$ is given, $Y = f(\theta)$ is computed and used within the optimization problem to set optimal decisions $x$. However, this strategy has some key drawbacks: the use of the point prediction fails to capture the associated uncertainty level, and the function $f$ is not aware of the optimization model's behavior.
For this second issue, an integrated approach to find functions $f$ that also lead to good prescriptions is addressed in \textcite{elmachtoub2022smart}, but only for linear objective and prediction functions. In \textcite{ban2019big}, the newsvendor problem is tackled by using machine learning models to predict optimal decisions as a direct function of the observed $\theta$. One disadvantage of the proposed strategy is that it may yield infeasible decisions in a test dataset.
Recently, \textcite{bertsimas2020predictive} introduced the so-called Predictive to Prescriptive two-step approach, where the first step focuses on training machine learning models to predict $Y$ from a given $\theta$. In the second step, a Sample Average Approximation (SAA) is solved with the weights dictated by the prediction model for that particular observation. For instance, using $kNN$ as the prediction function $f$, for any $\theta$, the $k$ nearest neighbors are computed in the training set, and an SAA is solved only with these $k$ neighbors to find the optimal decisions $x(\theta)$. An in-depth review of these approaches can be found in \textcite{mundru2019predictive}.
Furthermore, in \textcite{munoz2022bilevel} a bi-level framework is proposed to fit a parametric model to those data that are specifically tailored to maximize the decision value, while accounting for possible feasibility constraints. In \textcite{bertsimas2019dynamic, esteban2021distributionally}, parametric approaches are left behind to focus on estimating a complete conditional distribution of the side information to make robust decisions over $x$. Whereas in \textcite{bertsimas2019dynamic}, the approach is conceived as a two-step procedure, in \textcite{esteban2021distributionally}, a single-step method is derived.
However, even if the above works deal with a data-driven approach for decision making, there is one specific setting that needs to be specially addressed: when decisions $x$ have a direct influence on the response $Y$, which also conditions the cost function (and hence, can be treated as another decision variable), jointly with the rest of contextual information and associated uncertainty. Thus, we want to embed these types of interactions, related to an extended predictive model $y=f^{\mathcal{D}}(x,\theta)$, within our optimization problem to capture the relationship between a complex response, our optimal decisions, and the contextual information. This process is referred to as Constraint Learning \parencite{fajemisin2021optimization}, a topic that has recently gained attention in the literature. Some works have studied how to embed linearizable machine learning models within the optimization problem. For instance, in \textcite{paulus2021comboptnet, yang2021optimization}, neural networks are employed to learn the constraints, while in \textcite{mivsic2020optimization, maragno2021mixed} tree-based methods were studied.
Nevertheless, although these ``black-box'' methods offer high prediction accuracy, they lack the explainability that a simpler method, such as a classical linear regression, can provide in real applications. Besides, a proper uncertainty characterization around the point prediction is not considered. For these reasons, we extend this setting to explicitly account for uncertainty and risk aversion in the decision-making process.
In particular, the work by \textcite{perez2022optimal} addresses uncertainty and explainability under a constraint learning approach, but in a stylized, simulation-based application. In contrast, we consider a fully data-driven context: we deal with a real-world application with large amounts of data, employ a Bayesian linear regression approach to model uncertainty through coefficient distributions while trading off explainability and prediction power, and perform an out-of-sample validation of the stochastic optimal solutions.
Particularly, considering a two-stage stochastic programming framework, we address the following model:
\begin{equation}\label{eq:sto_pro_pred}
\underset{x, y \in \mathcal{S}(\theta;\xi)}{\min} \; \mathbb{E}\left[ c(x,y|\theta;\xi) \right]
\end{equation}
where $c(\cdot|\theta;\xi): \mathbb{R}^{d_x} \times \mathbb{R}^{d_y} \times \mathbb{R}^{d_{\theta}} \times \mathbb{R}^{d_{\xi}} \rightarrow\mathbb{R}$, $x\in \mathbb{R}^{d_x}$ can be considered first-stage and $y\in \mathbb{R}^{d_y}$ second-stage decision variables, $\theta \in \mathbb{R}^{d_{\theta}}$ accounts for contextual information known when the first-stage decisions take place, and $\xi \in \mathbb{R}^{d_{\xi}}$ gathers the exogenous uncertain data (random vector) characterizing our problem. $\mathbb{E}[\cdot]$ may stand for the expectation, but we can also consider any other risk measure; in particular, we focus on the Conditional Value-at-Risk (CVaR). Without loss of generality, we assume that the feasible region for $x$ and $y$, i.e., $\mathcal{S}(\theta;\xi)$, depends on the contextual information and on the uncertain data as follows:
\begin{equation}\label{eq:sto_pro_pred_feasible}
\mathcal{S}(\theta;\xi) = \left\{
\begin{array}{l}
g(x,y,\theta;\xi)\leq 0 \\
y = f^{\mathcal{P}}(x,\theta;\xi)
\end{array}
\right.
\end{equation}
where $g(\cdot): \mathbb{R}^{d_x} \times \mathbb{R}^{d_y} \times \mathbb{R}^{d_{\theta}} \times \mathbb{R}^{d_{\xi}} \rightarrow\mathbb{R}^g$ is a constraint mapping. In particular, $f^{\mathcal{P}}(x,\theta;\xi): \mathbb{R}^{d_x} \times \mathbb{R}^{d_{\theta}} \times \mathbb{R}^{d_{\xi}} \rightarrow \mathbb{R}^{d_y}$ represents the predictive model that links the first- and second-stage decision variables. Note that it depends on the realization of the contextual information, together with the uncertainty associated with its training. Indeed, we are interested in predictive models $f^{\mathcal{P}}(\cdot)$ with an accurate probabilistic characterization of their parameters.
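The risk measure mentioned above, CVaR, admits a simple discrete form over scenarios: the probability-weighted average of the worst $(1-\alpha)$ tail of the loss distribution (the Rockafellar-Uryasev tail average). A minimal numpy sketch, not part of the paper's formulation:

```python
import numpy as np

def cvar(losses, probs, alpha=0.95):
    """Discrete CVaR_alpha of a loss distribution given by scenarios:
    the expected loss over the worst (1 - alpha) probability mass."""
    losses, probs = np.asarray(losses, float), np.asarray(probs, float)
    order = np.argsort(losses)[::-1]          # worst losses first
    l, p = losses[order], probs[order]
    tail = 1.0 - alpha                        # tail mass to average over
    cum_before = np.concatenate(([0.0], np.cumsum(p)[:-1]))
    # Fractional weight of each scenario inside the tail.
    w = np.minimum(p, np.maximum(tail - cum_before, 0.0))
    return float((w * l).sum() / tail)
```

For four equiprobable losses $\{1,2,3,4\}$ and $\alpha=0.5$, the CVaR is the mean of the worst half, i.e., $3.5$.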
To tackle (\ref{eq:sto_pro_pred}) in practice, we can employ an SAA-based approach:
\begin{subequations}\label{eq:sto_pro_pred_SAA}
\begin{align}
&\underset{x, y_{\omega}}{\min} \; \sum_{\omega=1}^{\Omega}\pi_{\omega} c_{\omega}(x,y_{\omega}|\theta;\xi_{\omega}) \\
& \mbox{s.t.}\notag\\
& \quad g_{\omega}(x,y_{\omega},\theta;\xi_{\omega})\leq 0 \quad \forall \omega\\
& \quad y_{\omega} = \hat{f}^{\mathcal{P}}_{\omega}(x,\theta;\xi_{\omega}) \quad \forall \omega
\end{align}
\end{subequations}
where the uncertainty is approximated by a discrete set of scenarios $\omega=1,\dots,\Omega$ with assigned probabilities $\pi_{\omega}$ (we may also account for the risk-averse case via risk-adjusted probabilities). Each scenario represents a possible realization of the uncertain vector $\xi$, i.e., $\xi_\omega$. Similarly, the second-stage decision variables $y$ can be considered scenario dependent, i.e., $y_{\omega}$. Moreover, the estimation of the predictive function is also scenario dependent, i.e., $\hat{f}^{\mathcal{P}}_\omega(\cdot)$, as it is conditioned by the particular realization of $\xi_{\omega}$, which affects its own parameters.
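To illustrate problem (\ref{eq:sto_pro_pred_SAA}), consider a toy instance with a scalar first-stage decision $x$ and a learned linear response $y_\omega = a_\omega x + b_\omega$ per scenario. All numbers below are hypothetical (standing in for posterior draws of regression coefficients), and the crude grid search merely stands in for a proper solver:

```python
import numpy as np

rng = np.random.default_rng(0)

# Scenario-wise coefficients of a learned linear price response
# y_w = a_w * x + b_w (hypothetical values, not fitted to real data).
a = rng.normal(-0.5, 0.05, size=200)   # price sensitivity to the offer x
b = rng.normal(50.0, 5.0, size=200)    # baseline price per scenario
probs = np.full(200, 1.0 / 200)

def expected_cost(x):
    """Negative expected revenue when selling quantity x at price y_w."""
    y = a * x + b                      # second-stage response per scenario
    return -float((probs * y * x).sum())

# Crude stand-in for a solver: grid search over the first-stage decision.
grid = np.linspace(0.0, 100.0, 1001)
x_opt = float(grid[np.argmin([expected_cost(x) for x in grid])])
```

The example reproduces the core structure of the SAA: the learned constraint ties the second-stage variable to the first-stage decision in every scenario, and the objective averages the scenario costs.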
The assumption of $y_{\omega}$ being a second-stage decision variable dependent on the first-stage decisions $x$, external factors and uncertain data is suitable for markets where participants may have some degree of market power. For instance, in electricity markets based on uniform marginal pricing, suppliers and consumers submit their offers and bids (first-stage decisions) to the market operator. The intersection of the aggregated offering and bidding curves generates an hourly marginal price for the market (which can be considered a second-stage decision for a player with sufficient market power). As we can see, the first-stage decisions made by the market agents have a direct impact on the marginal price formation and, therefore, on the profit they can obtain.
In this work, we focus on this setting and study the optimal offering strategy from the perspective of a risk-averse large producer (or generating company, GENCO) participating in a day-ahead electricity market. We address this problem from a completely data-driven approach where large amounts of data are available from historical supply and purchase offers and the resulting market outcomes. For that purpose, first, we propose a novel optimization-based technique to obtain an adequate subset of blocks summarizing the step-wise offer curves of each GENCO. Then, a statistical prediction model is employed to capture the relationship between supply offering prices and the resulting marginal market price. This statistical model will allow us to generate meaningful scenarios for the marginal price response to the GENCO's offers. Finally, the resulting model will be embedded in the decision-making process, yielding a computationally efficient stochastic optimization model. Optimal offering prices will be tested through an out-of-sample methodology.
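The Bayesian linear regression step can be sketched with the conjugate Gaussian posterior under a known noise variance $\sigma^2$ and prior $w \sim \mathcal{N}(0, \tau^2 I)$; each posterior draw then provides one scenario of regression coefficients. This is a generic textbook sketch, not the exact model of the paper:

```python
import numpy as np

def bayes_linreg_samples(X, y, n_samples=1000, sigma2=1.0, tau2=100.0,
                         seed=0):
    """Posterior coefficient samples for y ~ N(X w, sigma2 I) with
    prior w ~ N(0, tau2 I) (conjugate Gaussian posterior)."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    prec = X.T @ X / sigma2 + np.eye(X.shape[1]) / tau2  # posterior precision
    cov = np.linalg.inv(prec)
    mean = cov @ X.T @ y / sigma2
    rng = np.random.default_rng(seed)
    return rng.multivariate_normal(mean, cov, size=n_samples)
```

Each row of the returned array is one coefficient scenario $\xi_\omega$, so the sample set can be plugged directly into a scenario-based stochastic program.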
In the case of power producers, the problem of setting an optimal offering strategy has been extensively discussed in the literature. Multilevel models have been used as a standard approach to set optimal strategies that maximize GENCOs' profits. For instance, in \textcite{ruiz2009pool}, the solution for the problem is based on a bi-level equilibrium approach. \textcite{kardakos2015optimal} deals with the problem of setting an offering strategy for a virtual power plant with a stochastic bi-level approach, whereas in \textcite{pandvzic2013offering}, a two-stage stochastic approach was employed to set the optimal strategy of a virtual power plant selling and purchasing energy from the day-ahead and the balancing markets seeking to maximize its profit.
More recently, \textcite{xiao2021optimal} tackles the problem with a single-level mixed-integer linear programming approach, where the Conditional Value-at-Risk (CVaR) is used for risk management and ARIMA models are used to generate scenarios in the optimization problem. In \textcite{han2018offering}, the optimal strategy for a photovoltaic power plant is set by a bi-level stochastic program, dealing with the uncertainty of the competitors and its photovoltaic output. Finally, in \textcite{chen2019trading}, an Extreme Learning Machine is employed to find the relationship between the prosumer strategy and the obtained profits and costs in distribution grids. This is, indeed, an example of applying constraint learning in an energy market context. However, these approaches do not consider data-driven uncertainty, especially that inherent in the parameters of the forecasting model. Moreover, to the authors' knowledge, none of these approaches have been tested under out-of-sample validation schemes like the one presented in this work.
In summary, the main contributions of this work with respect to the state of the art on these topics are the following:
\begin{itemize}
\item[--] to develop a complete data-driven optimization model for a risk-averse GENCO's offering strategy, making use of massive real-world datasets.
\item[--] to propose an optimization-based reduction technique to summarize past realizations of the market participants' hourly offering curves.
\item[--] to extend the standard constraint learning methodology by including a Bayesian linear regression approach to take the inherent uncertainty of the predictive model into account.
\item[--] to show the validity of the obtained optimal pricing strategy in a real-world out-of-sample application based on the Spanish electricity market.
\end{itemize}
The structure of this article is as follows. Section \ref{sec:methodology} will describe in detail the proposed methodology, from the optimal discretization of supply curves to the marginal price formation and the Bayesian regression employed to characterize the relationship between the marginal price and the GENCO's strategy. Section \ref{sec:stoch_model} will introduce the stochastic optimization problem for the GENCO. Then, a case study will be shown in Section \ref{sec:case}, including out-of-sample testing of the optimal strategy. Finally, Section \ref{sec:conclu} will draw the main conclusions of this work.
\section{Marginal price and supply curves characterization}
\label{sec:methodology}
As mentioned before, in many electricity markets the hourly marginal price is formed by the intersection of the demand and supply curves bid and offered by the consumers and generators, respectively. In this work, we aim to characterize the optimal offering strategy of a profit-maximizing large generating company (GENCO). In particular, we will study, from a data-driven perspective, the potential ability of the GENCO to alter the formation of market-clearing hourly prices. For that reason, we need to analyze and characterize in depth her historical offering curves and the resulting market outcomes.
\subsection{Optimal discretization of supply curves}
\label{sec:supply_disc}
In general, in day-ahead electricity markets based on uniform marginal pricing, data from the hourly supply curve of each GENCO is composed of production units' power blocks with their corresponding offering prices, i.e., the prices that each unit is willing to accept to produce that amount of power for one hour. In a fair, transparent, and audited market, this price must reflect the marginal generating cost of that production unit.
To build each GENCO's aggregated supply curve for each hour, production blocks are ordered by their price, from lowest to highest. Regarding the energy quantity, a cumulative sum is performed so that each price is associated with the cumulative energy quantity of all blocks with lower prices. This renders an increasing step-wise curve. An illustrative example can be seen in Figure \ref{fig:curve_discr}, where, for a given hour and a particular GENCO, red points represent her production units with their price and cumulative quantity within the supply curve.
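As an illustration, this cumulative construction can be sketched in a few lines of Python; the function name and the sample data below are hypothetical:

```python
# Hypothetical sketch of building an hourly aggregated supply curve:
# production blocks (price EUR/MWh, quantity MWh) are sorted by price and
# their quantities cumulatively summed, yielding an increasing step-wise curve.
def aggregate_supply_curve(blocks):
    """blocks: list of (price, quantity) tuples for one hour."""
    ordered = sorted(blocks, key=lambda b: b[0])  # lowest price first
    curve, cum_q = [], 0.0
    for price, qty in ordered:
        cum_q += qty
        curve.append((price, cum_q))  # price vs. cumulative quantity
    return curve

blocks = [(50.0, 300), (0.0, 1200), (35.5, 500)]
print(aggregate_supply_curve(blocks))
# [(0.0, 1200.0), (35.5, 1700.0), (50.0, 2000.0)]
```

Each point of the returned list is one step of the curve, matching the red dots of Figure \ref{fig:curve_discr}.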
For tractability of the subsequent predictive model, we propose first to summarize each aggregated supply curve into a smaller number of blocks by using an optimization approach (\ref{eq:discr_model}). The aim is to obtain a minimum-error discretization of the GENCO's hourly supply curve.
\begin{subequations}\label{eq:discr_model}
\begin{align}
\underset{C_{b},\delta_b}{\min} & \quad \sum_{b=1}^{B}|C_b-P_b^{R}| q_b^R \label{eq:discr_model_OF}\\
\text{s.t.:}&\notag\\
&0 \leq C_b-C_{b-1} \leq \delta_b M^P \quad b=2,\dots,B \label{eq:discr_model_cons1}\\
&0 \leq C_b-C_{0} \leq \delta_1 M^P \quad b=1 \label{eq:discr_model_cons2}\\
&\sum_{b=1}^B \delta_b=|I|-1 \label{eq:discr_model_cons3}\\
&\delta_{b} \in \{0,1\} \quad b=1,\dots,B \label{eq:discr_model_cons4}
\end{align}
\end{subequations}
In this problem, we seek to obtain $|I|$ grouped power blocks from a total of $B$ original ones. $C_b$ is the optimized energy price, $P_b^R$ is the real observed price, $q_b^R$ is the amount of energy of each block $b$, and $M^P$ is a sufficiently large constant. The objective function (\ref{eq:discr_model_OF}) aims to minimize the absolute distance between the real prices $P_b^{R}$ and the optimized ones $C_b$, weighted by the energy quantity of each block. Constraints (\ref{eq:discr_model_cons1}) and (\ref{eq:discr_model_cons2}) allow us to assign the same price $C_b$ to all the values that belong to the same grouped block. The sum of $\delta_b$ in constraint (\ref{eq:discr_model_cons3}) ensures that $|I|-1$ cuts are obtained, and therefore $|I|$ grouped blocks, since $\delta_b$ is defined as a binary variable (\ref{eq:discr_model_cons4}).
The solution to this problem gives the prices $C_b$ and the positions $\delta_b$ where a step in the curve can be set to obtain an optimal cut and, therefore, an optimal grouped block. As an example, let us assume we want to obtain $|I|=7$ grouped blocks from the curve (red dots) presented in Figure \ref{fig:curve_discr}. The optimization problem renders the solid black step-wise curve as a solution. Dashed vertical lines represent the different cuts, and solid horizontal lines the energy price $C_b$ associated with each grouped block. Notice that, for simplicity, an extra vertical line has been added at the last production unit to close the last block.
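For illustration only, the same minimum weighted-absolute-error grouping of an already price-sorted curve can also be computed by dynamic programming over the cut positions, exploiting the fact that the weighted-$L_1$-optimal constant price of a group is its weighted median. This is a self-contained alternative sketch, not the MILP (\ref{eq:discr_model}) itself, and all names and data are made up:

```python
# Dynamic-programming alternative to the discretization MILP: split a
# price-sorted curve into contiguous groups, each priced at its weighted
# median, minimizing the total weighted absolute error.
def group_cost(prices, weights):
    # weighted-L1-optimal constant for a group = its weighted median
    pairs = sorted(zip(prices, weights))
    half, acc = sum(weights) / 2.0, 0.0
    for p, w in pairs:
        acc += w
        if acc >= half:
            med = p
            break
    return sum(abs(p - med) * w for p, w in zip(prices, weights))

def discretize(prices, weights, n_groups):
    """Split the sorted price curve into n_groups contiguous blocks,
    minimizing weighted absolute error; returns (error, cut indices)."""
    B = len(prices)
    INF = float("inf")
    # dp[k][j] = best error splitting the first j points into k groups
    dp = [[INF] * (B + 1) for _ in range(n_groups + 1)]
    back = [[0] * (B + 1) for _ in range(n_groups + 1)]
    dp[0][0] = 0.0
    for k in range(1, n_groups + 1):
        for j in range(k, B + 1):
            for i in range(k - 1, j):
                c = dp[k - 1][i] + group_cost(prices[i:j], weights[i:j])
                if c < dp[k][j]:
                    dp[k][j], back[k][j] = c, i
    # backtrack the cut positions
    cuts, j = [], B
    for k in range(n_groups, 0, -1):
        j = back[k][j]
        cuts.append(j)
    return dp[n_groups][B], sorted(cuts)[1:]  # drop the leading 0

prices = [1, 1, 2, 10, 11, 30]
weights = [5, 5, 5, 3, 3, 1]
err, cuts = discretize(prices, weights, 3)
# err == 8.0, cuts == [3, 5]: groups {1,1,2}, {10,11}, {30}
```

Since the input prices are sorted, the weighted medians of consecutive groups are automatically non-decreasing, so the monotonicity constraints of (\ref{eq:discr_model}) are satisfied implicitly.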
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{figures/curve_discr.png}
\caption{Supply curve discretization example for June 3rd 2017 at 15:00. Source: OMIE}
\label{fig:curve_discr}
\end{figure}
As can be seen, this solution appropriately summarizes the information within the original supply curve, as it results in a finer discretization for those parts of the curve with higher price variations.
\subsection{Marginal price formation from discretized curves}
\label{sec:discr_market}
Once we obtain the optimal discretization of the GENCO supply curve, the same procedure can be applied to the rest of the offers in the market, that is, the supply curves from the competitors. Considering both the GENCO's and competitors' supply curves, and ordering all the blocks by their price, the cumulative energy quantity can be computed to build the aggregated discretized market supply curve.
Note that it is also possible to obtain a summarized curve by directly discretizing the original aggregated supply curve. However, splitting the GENCO's units and those of the competitors allows us to know the weight of the producer within the market and to gain insights into how her pricing strategy can modify the market outcomes.
Nevertheless, to set a marginal price, a demand curve is also needed. For the sake of simplicity, we will assume an inelastic demand curve, which cuts the supply curve and establishes the price producers will be paid for each MWh of energy offered below this price.
An example of this procedure is shown in Figure \ref{fig:disc_market}, where the increasing step-wise curve represents the aggregated market supply curve, the vertical line is the inelastic demand, and the green and red colors identify the blocks offered by the GENCO (main producer) and the rest of the competitors, respectively. We can see a total dispatched energy of 24000 MWh at a price of 42 \euro/MWh. Notice how the marginal price is set by a block from the main producer in this case. This motivates the study of how small variations in the offered price can directly change the marginal market price.
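A minimal sketch of this clearing step, assuming an inelastic demand and a curve stored as (price, cumulative quantity) pairs; all names and numbers are illustrative:

```python
# Hypothetical market-clearing sketch: with inelastic demand, the marginal
# price is the price of the first block whose cumulative quantity covers it.
def clear_market(curve, demand):
    """curve: list of (price, cumulative_quantity), sorted by price."""
    for price, cum_q in curve:
        if cum_q >= demand:
            return price
    return None  # demand exceeds total offered supply

curve = [(0.0, 12000.0), (30.0, 20000.0), (42.0, 26000.0), (60.0, 30000.0)]
print(clear_market(curve, 24000.0))  # → 42.0
```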
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{figures/discr_market.png}
\caption{Example of marginal price formation on the discretized market}
\label{fig:disc_market}
\end{figure}
However, in real day-ahead electricity markets, the price resulting from the cut of the submitted demand and supply curves is not necessarily the resulting market marginal price (e.g., in Spain and Portugal). In fact, it is common to observe that the final supply curve from the market undergoes changes (withdrawal of some production units) due to the system operator's ex-post verification of technical constraints. We can take as an example the market curves presented in Figure \ref{fig:omie_curves}. There, two different hourly marginal price formation procedures are shown within the same day in the Iberian Electricity Market \parencite{MIBEL}. Differences between sale offers and matching (dispatched) sale offers can be easily noticed at hour 15.
\begin{figure}[ht]
\centering
\begin{subfigure}{0.75\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/curva_5_05_03_2019_05_03_2019.jpeg}
\end{subfigure}
\hfill
\begin{subfigure}{0.75\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/curva_15_05_03_2019_05_03_2019.jpeg}
\end{subfigure}
\caption{Marginal price formation in two different hours in the MIBEL}
\label{fig:omie_curves}
\end{figure}
Hence, when analyzing historical market data, it is important to consider this fact by statistically characterizing the differences between the matching supply curve and the offered one. This allows us to validate the optimal strategy in real life (out-of-sample). That is, we can set the optimal blocks with real data and use competitors' blocks, jointly with the demand and the approximated displacement of the supply curve, to study the differences in profit according to the producer's risk aversion and other relevant features.
\subsection{Marginal price characterization model}
\label{sec:bayes_lin}
To strategically derive the GENCO's optimal offered prices within her supply curve (one for each block), it is necessary to find a function accurate enough to predict the response of the marginal electricity price. That is, assuming that the strategic GENCO can alter the market marginal price, we will characterize it as a function of her block price offers and other external covariates, such as different marginal price lags, the expected demand, or levels of renewable energy production.
Let us assume we have a dataset of $N$ observations of the type $\mathcal{D} = \{(y_1, \mathbf{X}_1, \bm{\theta}_1), \dots,$ $(y_N, \mathbf{X}_N, \bm{\theta}_N) \}$, where $y \in \mathbb{R}$ represents the marginal price, $\mathbf{X} = \{X^1, \dots, X^{d_x}\}$ are the block price offers made by the GENCO, and $\bm{\theta} = \{\theta^1, \dots, \theta^{d_{\theta}}\}$ are the values of the rest of the auxiliary variables (covariates) the marginal price depends on, known at the time of the decision making.
In this way, it is possible to characterize the marginal price $y$ as a linear function of $\mathbf{X}$ and $\bm{\theta}$, as shown in (\ref{eq:lin_model}).
\begin{equation}
\label{eq:lin_model}
y = \beta_0 + \sum_{i=1}^I \beta_i X^i + \sum_{j=1}^{J} \beta_{I+j} \theta^j
\end{equation}
\noindent where the linear regression coefficients $\boldsymbol{\beta} = \{\beta_0, \beta_1, \dots, \beta_{I+J}\}$ can be estimated from $\mathcal{D}$ through the classical Ordinary Least Squares approach. This approach is simple and would allow us to characterize the uncertainty over the coefficients by assuming their asymptotic distribution from the Hájek-Šidák CLT. However, in a data-driven context it seems more convenient to directly fit the distribution of the coefficients using existing empirical methods in the literature. For that reason, a Bayesian approach can be used to model the marginal price through Bayesian linear regression. In this way:
\begin{equation*}
y \sim N(\bm{\beta} (\mathbf{X},\bm{\theta}), \bm{\sigma} I)
\end{equation*}
Under the Bayesian approach, the distribution of the regression coefficients is estimated from their posterior, which is proportional to the product of the prior distribution and the likelihood function. In practice, samples from this posterior are obtained with a Markov Chain Monte Carlo (MCMC) algorithm. Gibbs sampling is one of the most commonly used MCMC algorithms, and several packages have been developed to automate this process, such as the \emph{PyMC3} library in Python \parencite{salvatier2016probabilistic} or the \emph{rstanarm} package in R \parencite{rstanarm}.
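As a self-contained illustration of the idea (not of the MCMC machinery used in the paper), with a Gaussian prior on the coefficients and a known noise variance the posterior of a Bayesian linear regression is available in closed form. The sketch below uses this conjugate shortcut with synthetic data; all names and numbers are assumptions:

```python
import numpy as np

# Conjugate Bayesian linear regression: with prior beta ~ N(0, tau2 * I) and
# known noise variance sigma2, the posterior over the coefficients is Gaussian
# with precision S_inv and mean m computed below.
def bayes_linreg_posterior(X, y, sigma2=1.0, tau2=100.0):
    """Return posterior mean and covariance of the regression coefficients."""
    d = X.shape[1]
    S_inv = np.eye(d) / tau2 + X.T @ X / sigma2   # posterior precision
    S = np.linalg.inv(S_inv)                      # posterior covariance
    m = S @ (X.T @ y) / sigma2                    # posterior mean
    return m, S

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = X @ np.array([1.5, -0.7]) + rng.normal(scale=0.5, size=500)
m, S = bayes_linreg_posterior(X, y, sigma2=0.25)
```

The posterior mean `m` recovers the generating coefficients, and the diagonal of `S` quantifies the coefficient uncertainty that the paper instead fits empirically via MCMC.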
\subsection{Scenario generation through coefficient distribution}
\label{sec:sce_gen}
Regarding the stochastic optimization problem, we seek to model the uncertainty associated with the coefficients of the regression model. In this case, we acknowledge that the marginal price sensitivity $\beta_i$ to the block prices offered by the GENCO is not deterministic, but characterized by an associated probability distribution.
This fact means that, for example, there will be some scenarios where the price of one block will make the marginal price increase more than in others. It is even possible that those $\beta_i$s with a high uncertainty level can be either positive or negative within the scenarios.
In this context, the data-driven Bayesian linear regression approach exposed in Section \ref{sec:bayes_lin} allows us to estimate the parameters of a normal distribution for each of the regression coefficients in the linear model. Focusing on the coefficients related to block prices, each of the $I$ coefficients will follow a normal distribution:
\begin{equation*}
\hat{\beta}_i \sim N(\mathbb{E}[\hat{\beta}_i], \hat{\sigma}_{\beta_i}), \quad \forall i \in I
\end{equation*}
Random realizations from these distributions can be sampled to generate scenarios for the stochastic optimization model. For the rest of the covariates, a point estimate of the coefficients can be assumed, and their expected value will be directly embedded in the optimization problem.
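A hedged sketch of this sampling step, with placeholder means and standard deviations rather than the fitted ones:

```python
import numpy as np

# Scenario generation sketch: for each scenario, draw one realization of every
# block-price coefficient from its fitted normal distribution. The means and
# standard deviations below are placeholders, not the fitted values.
def sample_beta_scenarios(means, stds, n_scenarios, seed=0):
    rng = np.random.default_rng(seed)
    # result[omega, i] = sampled coefficient of block i in scenario omega
    return rng.normal(loc=means, scale=stds, size=(n_scenarios, len(means)))

means = np.array([-0.005, 0.112, 0.050])
stds = np.array([0.007, 0.016, 0.014])
scenarios = sample_beta_scenarios(means, stds, n_scenarios=200)
```

Each row of `scenarios` provides the coefficients $\beta^i_{t,\omega}$ of one scenario $\omega$ to be plugged into the learned price constraint.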
\section{Stochastic Optimization Problem}
\label{sec:stoch_model}
In this section, we describe the optimization problem that is employed by the GENCO to derive her optimal offering strategy. The problem is formulated using a risk-averse two-stage stochastic approach in which the marginal price of the market is explicitly characterized through constraint learning.
\subsection{Notation}
The notation employed to formulate the stochastic problem is described in this subsection for quick reference.
\medskip
\noindent Indices and sets:
\begin{itemize}
\item[--] $I$: Set of energy blocks, indexed by $i$.
\item[--] $T$: Set of hourly periods within a day, indexed by $t$.
\item[--] $\Omega$: Set of stochastic scenarios, indexed by $\omega$.
\end{itemize}
\medskip
\noindent Variables:
\begin{itemize}
\item[--] $P^i_{t}$: Price offered for block $i$ at time $t$.
\item[--] $Q_{t,\omega}^{i}$: Quantity of energy produced from block $i$ at time $t$ and scenario $\omega$.
\item[--] $u_{t,\omega}^i$: Binary decision variable indicating whether the price of block $i$ at time $t$ and scenario $\omega$ is below the marginal electricity price or not.
\item[--] $s_{\omega}$, $\eta$: Auxiliary variables for CVaR formulation.
\item[--] $\lambda_{t,\omega} :$ Marginal market price at time $t$ and scenario $\omega$.
\item[--] $Q^{ren}_{t} :$ Offered quantity at price zero from renewable resources at time $t$.
\end{itemize}
\medskip
\noindent Parameters:
\begin{itemize}
\item[--] $\beta^i_{t,\omega}$: Regression coefficient of the $i$-th block price predictor at time $t$ and scenario $\omega$, sampled from their corresponding distribution.
\item[--] $\hat{D}_t$: Sum of regression coefficients multiplied by the rest of exogenous regressors within the linear model (demand, wind and solar energy forecasting with their respective lags, marginal price lags, and calendar variables), at time $t$.
\item[--] $C_t^i:$ Marginal generating cost for block $i$ at time $t$.
\item[--] $\sigma_t^i$: Allowed variability for the $i$-th block price offer with respect to its generating cost at time $t$.
\item[--] $\pi_{\omega}:$ Probability assigned to each scenario $\omega$.
\item[--] $\alpha$: Fraction of the profit distribution to be used in the CVaR calculation.
\item[--] $\chi$: Weight assigned to the CVaR against the expected profit.
\end{itemize}
\subsection{Formulation}
The GENCO knows, before submitting her offers, the maximum energy quantity $Q_t^{\text{Max }i}$ she can produce for each time period and energy block, and its associated cost $C_t^i$. In an efficient market, the producer would directly offer these true marginal costs of production, which can be imposed in the current model by fixing $\sigma_t^i = 0$. However, positive values of the parameter $\sigma_t^i$ let us adjust the degree to which producers with market power can modify their offers and deviate from perfect competition.
As described in Section \ref{sec:bayes_lin}, the behavior of the marginal price with respect to her offers is learned through constraint learning \parencite{maragno2021mixed} from the historical relationship between these variables and the rest of the covariates. In particular, a Bayesian linear regression model lets us generate the scenarios needed for the risk-averse two-stage stochastic programming formulation (\ref{eq:stochastic_model_full}). Uncertainty is related to the market price sensitivity towards the main producer's offers, as exposed in Section \ref{sec:sce_gen}. Sampling from the coefficient distributions of the production blocks $\beta^i_{t,\omega}$ drives uncertainty into the marginal price $\lambda_{t,\omega}$ and the quantity of energy finally produced, $Q_{t,\omega}^{i}$. Hence, the GENCO's aim is to make an optimal decision over the price she offers for each block ($P_t^i$) at the first stage of the problem, with a direct impact on her expected profit and risk.
The stochastic model is formulated as follows:
\begin{subequations}\label{eq:stochastic_model_full}
\begin{align}
\underset{\Theta}{\max} \quad &(1-\chi) \sum_{\omega \in \Omega} \pi_{\omega} \sum_{t \in T} \left[\lambda_{t,\omega} Q_t^{ren} + \sum_{i\in I} (\lambda_{t,\omega} Q_{t,\omega}^{i} - C_t^i Q_{t,\omega}^{i}) \right] + \chi \left(\eta - \frac{1}{\alpha} \sum_{\omega \in \Omega} \pi_{\omega} s_{\omega} \right)\label{eq:stoch_of}\\
\text{s.t.}&\notag \\
&Q_t^{ren} = Q_t^{\text{Max ren}} \quad \forall t \label{eq:sto_cons1}\\
&u_{t,\omega}^i \in \{0,1\} \quad \forall i,t,\omega \label{eq:sto_cons2}\\
&\lambda_{t,\omega} - P_t^i \leq u_{t,\omega}^iM \quad \forall i,t,\omega \label{eq:sto_cons3}\\
&P_t^i - \lambda_{t,\omega} \leq (1-u_{t,\omega}^i)M \quad \forall i,t,\omega \label{eq:sto_cons4} \\
&Q_{t,\omega}^{i} = u_{t,\omega}^i Q_t^{\text{Max }i} \quad \forall i,t,\omega \label{eq:sto_cons5} \\
&\lambda_{t,\omega} = \beta_0 + \beta^{ren} Q_t^{ren} + \sum_i \beta^i_{t,\omega} P_t^i + \hat{D}_t \quad \forall t,\omega \label{eq:sto_cons6} \\
&C_t^i - \sigma_t^i \leq P_t^i \leq C_t^i + \sigma_t^i \quad \forall i,t \label{eq:sto_cons7} \\
&0 < P_t^i \leq P_t^{i+1} \quad \forall i,t \label{eq:sto_cons8} \\
&\eta - \sum_t \left[\lambda_{t,\omega} Q_t^{ren} + \sum_i (\lambda_{t,\omega} Q_{t,\omega}^{i} - C_t^i Q_{t,\omega}^{i}) \right] \leq s_{\omega} \quad \forall \omega \label{eq:sto_cons9}\\
&0 \leq s_{\omega} \quad \forall \omega \label{eq:sto_cons10}
\end{align}
\end{subequations}
\noindent where $\Theta = \{P_t^i, Q_{t,\omega}^{i}, u_{t,\omega}^i, \eta, s_{\omega}\}$ is the set of optimization variables.
The objective function (\ref{eq:stoch_of}) represents the weighted sum of the GENCO's expected profit and the CVaR of her profit. CVaR will be employed to measure the risk taken by the producer, and equals the expected value of the $100\alpha\%$ of scenarios with the lowest profit. In particular, we use the linear formulation of the CVaR proposed by \textcite{rockafellar2000optimization}. The parameter $\chi \in [0,1]$ is used to model the risk aversion level of the producer. Thus, when $\chi$ is equal to zero, the producer acts as a risk-neutral decision-maker. On the other hand, when $\chi$ is equal to one, the producer can be considered risk-averse, and her decisions will focus on improving the left tail of the profit distribution. The expected profit is computed as the sum of the profits over the set of scenarios multiplied by their respective probabilities $\pi_{\omega}$. The profit is composed of the revenues of the dispatched energy blocks, paid at a marginal price $\lambda_{t,\omega}$, minus their production costs.
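For intuition, the CVaR targeted in (\ref{eq:stoch_of}) can be evaluated ex post on a set of equiprobable scenario profits as the mean of the worst $100\alpha\%$ outcomes. The sketch below does exactly that on made-up numbers; it illustrates the quantity being optimized, not the in-problem linear formulation:

```python
import numpy as np

# Ex-post CVaR of equiprobable scenario profits: the mean of the worst
# ceil(alpha * n) outcomes (the left tail of the profit distribution).
def cvar(profits, alpha=0.05):
    profits = np.sort(np.asarray(profits, dtype=float))
    k = max(1, int(np.ceil(alpha * len(profits))))
    return profits[:k].mean()  # expected profit of the worst scenarios

profits = [100, 80, 120, 5, 90, 110, 60, 95, 105, 85]
print(cvar(profits, alpha=0.2))  # mean of the two worst profits: 32.5
```

A risk-averse producer ($\chi$ close to one) trades some expected profit to raise this tail average.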
Constraint (\ref{eq:sto_cons1}) establishes that the renewable energy produced at time $t$, $Q_t^{ren}$, equals the estimated quantity $Q_t^{\text{Max ren}}$ offered to the pool. Constraints (\ref{eq:sto_cons2})-(\ref{eq:sto_cons4}) model whether the price requested for one block, $P_t^i$, is lower than the marginal price $\lambda_{t,\omega}$ or not. In particular, (\ref{eq:sto_cons2}) assigns variable $u_{t,\omega}^i$ a binary domain, while (\ref{eq:sto_cons3}) and (\ref{eq:sto_cons4}) employ the big-M method to assign $u_{t,\omega}^i$ a value of one if the price of block $i$ is under the marginal price, and zero otherwise.
With this assigned value of $u_{t,\omega}^i$, equation (\ref{eq:sto_cons5}) establishes the dispatched energy. That is, $Q_{t,\omega}^{i}$ will be equal to the estimated maximum energy $Q_t^{\text{Max }i}$ of block $i$ if the price requested for that block is under the marginal price. On the other hand, $Q_{t,\omega}^{i}$ will be zero if the price of that block is above the marginal price, i.e., the block is not dispatched.
Equation (\ref{eq:sto_cons6}) represents the embedded linear model: the learned constraint. In this case, as in a classical linear model, the marginal price is estimated as the sum of the intercept $\beta_0$, plus the quantity of renewable energy multiplied by its coefficient, $\beta^{ren} Q_t^{ren}$, plus each block price (decision variable) multiplied by its stochastic coefficient (generated scenarios), i.e., $\beta^i_{t,\omega} P_t^i$. Finally, the term $\hat{D}_t$, which gathers the rest of the covariates multiplied by their coefficients, is included.
Constraint (\ref{eq:sto_cons7}) determines the producer's flexibility to set a block price $P_t^i$ above or below its true marginal cost $C_t^i$, as a function of the parameter $\sigma_t^i$. In particular, the impact of $\sigma_t^i$ will be used to study how the GENCO can increase her profit by modifying the offering block prices. Equation (\ref{eq:sto_cons8}) ensures an increasing offering curve, a condition that is required in most electricity markets (see Figure \ref{fig:curve_discr}). Finally, constraints (\ref{eq:sto_cons9}) and (\ref{eq:sto_cons10}) are employed to characterize the CVaR. In particular, the value of $\eta$ will be equal to the Value at Risk at the optimal solution to problem (\ref{eq:stochastic_model_full}).
One of the advantages of this formulation is that it can be easily transformed into a mixed-integer linear problem, as the only non-linear term is the product between two variables, $\lambda_{t,\omega} Q_{t,\omega}^{i}$, which according to (\ref{eq:sto_cons5}) is equivalent to $\lambda_{t,\omega}u_{t,\omega}^i Q_t^{\text{Max }i}$. The product $\lambda_{t,\omega}u_{t,\omega}^i$ involves a continuous variable and a binary one, which can be linearized without approximation (big-M approach).
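A standard way to perform this linearization, assuming $0 \leq \lambda_{t,\omega} \leq M$, is to introduce the auxiliary continuous variable $z_{t,\omega}^i = \lambda_{t,\omega}u_{t,\omega}^i$ together with the constraints
\begin{align*}
&z_{t,\omega}^i \leq \lambda_{t,\omega}, \qquad z_{t,\omega}^i \leq u_{t,\omega}^i M, \\
&z_{t,\omega}^i \geq \lambda_{t,\omega} - (1-u_{t,\omega}^i) M, \qquad z_{t,\omega}^i \geq 0,
\end{align*}
which enforce $z_{t,\omega}^i = \lambda_{t,\omega}$ when $u_{t,\omega}^i = 1$ and $z_{t,\omega}^i = 0$ otherwise.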
\section{Case study}
\label{sec:case}
The main goal of this case study is to analyze the optimal bidding strategy of a large GENCO in a data-driven context. For this purpose, real-world data will be employed during the study, jointly with an out-of-sample validation, to show how price variations in GENCO's offered energy blocks can modify the marginal market price and make her profit increase.
\subsection{The dataset}
In this case study, we will focus on a large GENCO within the Spanish electricity market. This GENCO produces around 25\% of the energy in the Spanish system. As exposed in Sections \ref{sec:methodology} and \ref{sec:stoch_model}, we will employ different types of data to characterize the problem.
Firstly, in order to obtain discretized energy blocks and compute a marginal price with the inelastic demand, the hourly supply curves of each GENCO in the day-ahead market are necessary. This information is open-access and provided by the designated electricity market operator for the Iberian Peninsula, OMIE \parencite{OMIE}. Two years of data have been collected at an hourly frequency, from June 2017 to June 2019. This data includes day-ahead hourly supply curves (similar to Figure \ref{fig:curve_discr}) for the production units of the large GENCO as well as of the competitors. In total, more than 30 million observations were processed. Hourly data from June 1st, 2017 to May 31st, 2019 will mainly be employed to train our linear model to predict the day-ahead marginal price. June 2019 will be used as the test set, applying an out-of-sample validation approach.
The selection of this time range is due to two main reasons: there were no atypical external factors that made the electricity price deviate from the average price of the decade, and there were no significant changes in the installed renewable power capacity that could bias the analysis.
Regarding the covariates employed in the predictive model (price lags, demand, solar, and wind energy forecasts with their respective lags), they are obtained through the information system of Red El\'ectrica Española (the Spanish system operator), ESIOS \parencite{ESIOS}. All this information is assumed to be known in advance at the time of the decision-making.
\subsection{Applied methodology}
\subsubsection{Market discretization}
The first step in the methodology is to optimally discretize the offering supply curves. As exposed in Section \ref{sec:methodology}, we consider a specific supply curve discretization for the GENCO and another one for the rest of the competitors. This allows us to better characterize the GENCO's curve and to gain technological insights, which will be useful in designing the marginal price prediction model.
All the hourly supply curves are discretized into 7 different blocks, both for the main producer and for the competitors. For each block, we assume its energy quantity and price represent how much can be produced and at what cost. We consider 7 a reasonable number of blocks to capture the main functional properties of the supply curves without excessively increasing the dimensionality of the mixed-integer linear problem (\ref{eq:stochastic_model_full}). The first block will always represent the amount of estimated renewable production (at zero cost), and the last two blocks high-cost generating technologies that, as the historical data reflect, are never marginal. An example of this characterization was shown in Figure \ref{fig:curve_discr}.
Once the discretized market supply curve is obtained, we approximate the hourly aggregated demand curve by an inelastic one. We get this value from the estimated hourly demand provided by ESIOS. This inelastic demand, jointly with the aggregated supply curve will render an intersection point, that sets a first marginal price with its corresponding total dispatched energy (Figure \ref{fig:disc_market}).
However, we know that this first intersection can be far from the resulting market price value, as technical restrictions come into play. These change the shape of the supply curve, creating a gap between the intersection of the curves and the resulting hourly marginal price (Figure \ref{fig:omie_curves}).
This is a phenomenon that we cannot precisely quantify without a detailed physical description of the electricity system, but it must be considered somehow by the GENCO, as it may modify the resulting market outcomes. Nevertheless, using historical data, we have statistically characterized this difference and used it to replicate the resulting supply curve including these technical corrections. The difference in energy quantities allows us to displace our discretized supply curve and obtain a more accurate estimation of the market marginal price. After this displacement is applied, we compute the new intersection, obtaining the final market marginal price and the dispatched energy.
For a fair out-of-sample model validation, the real hourly displacement due to the technical restrictions cannot be known in advance by the GENCO. Thus, as a proxy, we will employ the mean displacement of the last two months, grouped by hourly periods. We will see that this is an effective strategy to characterize this phenomenon.
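This hour-by-hour averaging of past displacements can be sketched as follows; the record format and the numbers are hypothetical:

```python
from collections import defaultdict

# Hypothetical displacement proxy: for each hour of the day, average the
# historical quantity displacement (matched minus offered supply) observed
# over the last two months, and use it to shift the offered curve.
def hourly_mean_displacement(records):
    """records: list of (hour_of_day, displacement_MWh) over ~2 months."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for hour, disp in records:
        sums[hour] += disp
        counts[hour] += 1
    return {h: sums[h] / counts[h] for h in sums}

records = [(15, -900.0), (15, -1100.0), (16, -400.0)]
print(hourly_mean_displacement(records))  # {15: -1000.0, 16: -400.0}
```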
\subsubsection{Marginal price predictive model}
Now that hourly marginal prices from the discretized supply curves have been computed, we seek an adequate predictive model (\ref{eq:sto_cons6}), i.e., one that estimates the marginal price as a function of the offering prices and several covariates, to be embedded within the stochastic optimization problem (\ref{eq:stochastic_model_full}). There are two main approaches regarding model selection. On the one hand, we could have chosen linearizable machine learning models (tree-based methods, KNNs, etc.) to obtain the minimum possible prediction error. However, with these models we would lose explainability and an appropriate probabilistic characterization of the uncertain parameters. Hence, we have chosen a linear regression model fitted through a Bayesian approach. In this way, the model is interpretable, and random scenarios can be obtained through the probability distributions of the price block coefficients.
For this model, the following predictors have been employed:
\begin{itemize}
\item[--] Quantity of renewable energy offered by the large GENCO power producer.
\item[--] Block prices (6 decision variables) offered by the GENCO. These decision variables include all the block prices except the one related to the renewable energy, that is offered at zero price.
\item[--] Demand estimation for the respective hour, and 7 lags (24, 48, ..., 168 hours), 24 hour rolling mean, maximum and minimum.
\item[--] Wind power estimation for the respective hour, and 7 lags (24, 48, ..., 168 hours), 24 hour rolling mean, maximum and minimum.
\item[--] Solar power estimation for the respective hour, and 7 lags (24, 48, ..., 168 hours), 24 hour rolling mean, maximum and minimum.
\item[--] 7 marginal price lags (24, 48, ..., 168 hours).
\item[--] Calendar covariates: dummies for day of the week and month, and binary variable for holidays.
\end{itemize}
Therefore, the dataset is formed by a total of 70 different predictors for the hourly market marginal price model, trained on the two-year hourly period stated above.
Table \ref{tab:linear_performance} summarizes the model performance on the training set. We show the Mean Absolute Error (MAE) and the Root Mean Squared Error (RMSE) as performance metrics for both the full model and the model without decision variables (offering block prices) as predictors. MAE is computed as the average absolute deviation of the predictions from the real price values, and RMSE as the square root of the mean squared prediction error. Furthermore, a cross-validation approach has been included using 6 folds (4 months per fold). Results have been computed using the mean of the coefficient probability distributions (as in classical linear regression models). They show an acceptable error value and a slight improvement in performance when the GENCO's offering block prices are added as features (``Full model''). Furthermore, the model is not excessively overfitted, as the cross-validation error is similar to the one obtained on the complete training set.
\begin{table}[ht]
\caption{Full linear model and model without decision variables performance metrics.}
\centering
\label{tab:linear_performance}
\resizebox{\textwidth}{!}{%
\begin{tabular}{lcccc}
& Training MAE & Training RMSE & Cross-val MAE & Cross-val RMSE \\ \hline
Full model & 8.22 & 10.69 & 8.65 & 11.04 \\ \hline
Model w/o decision vars. & 8.32 & 10.81 & 8.88 & 11.23 \\ \hline
\end{tabular}
}
\end{table}
As a final note, we have also analyzed the impact of the standardized coefficients of the decision-variable predictors relative to the rest of the standardized predictors. That is, we have computed $\left( \lvert \beta^{ren} \rvert + \sum_i \lvert \beta^i \rvert \right)/ \sum_j \lvert \beta^j \rvert \times 100$, where $\beta^i$ represents the coefficients related to the block prices (decision variables), and the set of $\beta^j \; \forall j$ depicts the complete set of regression coefficients for all the covariates. Notice that, for this analysis, the point estimate of the coefficients was employed, that is, the expected value of the distribution fitted via Bayesian linear regression. This computation yields a value of $4.84\%$.
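The reported ratio corresponds to the following simple computation, shown here with made-up coefficient values (the fitted values are not reproduced):

```python
import numpy as np

# Share of total absolute standardized coefficient mass attributable to the
# decision-variable predictors (renewable quantity and block prices).
# All coefficient values below are made up for illustration.
def decision_var_share(beta_decision, beta_all):
    return 100.0 * np.abs(beta_decision).sum() / np.abs(beta_all).sum()

beta_decision = np.array([0.02, 0.11, 0.05, 0.004, -0.01, 0.03])
beta_other = np.array([1.2, -0.8, 0.5, 0.3, -2.1])
beta_all = np.concatenate([beta_decision, beta_other])
share = decision_var_share(beta_decision, beta_all)  # ≈ 4.37 (%)
```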
With this result, jointly with the model performance, we assert that the linear model, beyond being simple and interpretable, is accurate enough to predict the marginal price of the market making use of the considered decision variables (price offered per block) and external covariates. For these reasons, we will assume the selected producer is a large GENCO, whose decisions do affect the marginal price, and hence, her profits.
\subsubsection{Scenario generation}
The last step is to generate the scenarios $\omega \in \Omega$ for the stochastic optimization problem. As stated in Section \ref{sec:sce_gen}, the scenario uncertainty comes from the coefficient distributions of the Bayesian linear regression model. That is, the effect of one block price on the marginal price is not fixed: it is conceived as stochastic.
A total of 200 scenarios have been generated by sampling from the estimated normal distributions of the block coefficients (Table \ref{tab:normal_dist}).
\begin{table}[ht]
\caption{Mean and standard deviation estimated parameters for normally distributed price block coefficients.}
\centering
\label{tab:normal_dist}
\begin{tabular}{c|rrrrrr}
 & $P_t^1$ & $P_t^2$ & $P_t^3$ & $P_t^4$ & $P_t^5$ & $P_t^6$ \\\hline
$\mathbb{E}[\hat{\beta}_i]$ & -0.00537 & 0.11155 & 0.05026 & 0.00455 & -0.01415 & 0.03217 \\
$\hat{\sigma}_{\beta_i}$ & 0.00694 & 0.01620 & 0.01413 & 0.01181 & 0.00620 & 0.01614
\end{tabular}
\end{table}
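The sampling step can be sketched as follows, using the posterior means and standard deviations from Table \ref{tab:normal_dist}; the fixed seed and the use of independent normals per coefficient are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Posterior means and standard deviations of the block-price
# coefficients (values taken from the table above)
mu = np.array([-0.00537, 0.11155, 0.05026, 0.00455, -0.01415, 0.03217])
sd = np.array([0.00694, 0.01620, 0.01413, 0.01181, 0.00620, 0.01614])

n_scenarios = 200
# Each row is one scenario omega: one joint draw of the six coefficients
scenarios = rng.normal(loc=mu, scale=sd, size=(n_scenarios, len(mu)))
print(scenarios.shape)
```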
As can be seen from the distributions, the prices offered for the second block have the biggest influence on increasing the marginal price of the market. On the other hand, the influence of the other blocks is more uncertain. One potential reason is that the first non-zero cost block is matched in most of the cases, so its price is not determinant in the market marginal price. Similarly, the most expensive energy blocks are rarely marginal in the day-ahead market.
\subsection{Stochastic results}
We solve the stochastic optimization problem (\ref{eq:stochastic_model_full}) using the scenarios generated as described in the previous section. This optimization problem is solved for each day of June 2019. In the Spanish market, producers submit their offers at 12:00 for every hour of the following day. For that reason, the quantity offered per block ($Q_t^{\text{Max }i}$) and its true estimated cost ($C_t^i$) are determined by solving model (\ref{eq:discr_model}) to obtain the respective blocks for the GENCO.
The proposed optimal offering problem has been solved through a Python 3.9.12 implementation, using Pyomo 6.3 \parencite{hart2017pyomo}. The mathematical solver used for all computations was Gurobi \parencite{gurobi}, version 9.5. The computer employed an Intel Core i7 10700 CPU, 64 GB of RAM, and an NVIDIA GeForce GTX 2060 graphics card. The computation time depends on the particular scenario set and the assigned parameters, especially $\sigma_t^i$: a larger value enlarges the decision space and, therefore, the computation time. In general, none of the following experiments took more than 30 minutes to reach global optimality.
As indicated, we use $\sigma_t^i$ to quantify possible deviations with respect to $C_t^i$. For example, if the production cost of the first block at time $t$ is 25\euro{} and we set a variability of 10\%, $\sigma_t^i$ takes the value 2.5\euro. From now on, we will refer directly to $\sigma_t^i$ as this percentage. This variability has been applied to the first four non-zero cost blocks. For the remaining two blocks, $\sigma_t^i$ was set to zero, as their cost is so high that their price never intervenes in the marginal price formation. Moreover, the $\alpha$ value of the CVaR formulation is set to 10\%; that is, in the risk-averse case, we focus on improving the expected value of the scenarios below the 10th percentile of the profit distribution.
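As a hedged illustration of this risk measure, the empirical CVaR of a profit sample can be taken as the mean of the scenarios at or below the $\alpha$-quantile; the profit values below are toy numbers, not results from the model:

```python
import numpy as np

def empirical_cvar(profits, alpha=0.10):
    """Mean of the scenarios at or below the alpha-quantile of profits."""
    profits = np.sort(np.asarray(profits, dtype=float))
    var = np.quantile(profits, alpha)   # value-at-risk threshold
    tail = profits[profits <= var]      # worst alpha-fraction of scenarios
    return tail.mean()

# Toy daily profit scenarios (in euros); purely illustrative values
profits = [100, 120, 80, 95, 130, 60, 110, 90, 105, 70]
print(empirical_cvar(profits, alpha=0.10))
```

Improving the CVaR thus means lifting the average of the worst-case tail, which is exactly what the risk-averse objective targets.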
Regarding the rest of the covariates included in $D_t$ that affect the constrained linear regression model (\ref{eq:sto_cons6}), they are also known at the time of the decision making, as the platform ESIOS offers open-access estimations of hourly demand, renewable production, etc. Thus, we consider this problem as a realistic data-driven approach.
Concerning the risk aversion level, we solve the optimization problem for the cases where the producer is risk-neutral ($\chi = 0$) and risk-averse ($\chi = 1$). In the computation process, we used $\chi$ values of 0.001 and 0.999 instead of 0 and 1 for numerical stability. A code example for a $\sigma_t^i$ level of 10\% and $\chi = 0$ is openly available in a GitHub repository \parencite{Alcantara_opti-genco-offers_2022}.
Firstly, we present a summary of the stochastic results for $\chi$ values (risk aversion level) of zero and one, and $\sigma_t^i$ values of $0\%$, $5\%$, $10\%$ and $15\%$, in Table \ref{tab:stoch_results}. The idea of limiting $\sigma_t^i$ to relatively small values comes from the employment of a Bayesian linear regression as the model to predict the day-ahead market price. We believe that, as we approximate a complex system (the electricity market) with a simple linear model, limiting GENCO price flexibility to a small value will make the linear approximation more effective. This hypothesis is later corroborated by the numerical results in the out-of-sample validation.
The first and second columns of Table \ref{tab:stoch_results} represent the price flexibility over the cost of production and the risk aversion level of the GENCO, respectively. The expected profit is computed as the mean daily profit over the 30 days of testing; the same testing period is used to derive the expected CVaR. The fifth column represents the expected learned marginal price during June 2019, whereas the sixth one reports the expected dispatched energy, that is, the energy to be produced as its offered price is under the market marginal price. Finally, the last four columns present the mean price offered for each of the first four blocks (decision variables).
\begin{table}[ht]
\caption{Result summary of the stochastic optimization problem.}
\centering
\label{tab:stoch_results}
\resizebox{\textwidth}{!}{%
\begin{tabular}{cc|cccccccc}
\multicolumn{1}{p{1.5cm}}{\centering Price \\ flexibility} & \multicolumn{1}{p{1.5cm}|}{\centering Risk \\ aversion} & \multicolumn{1}{p{2cm}}{\centering $\mathbb{E}[\text{Profit}]$ \\ (\euro)} & \multicolumn{1}{p{2cm}}{\centering $\mathbb{E}[\text{CVaR}]$ \\ (\euro)} & \multicolumn{1}{p{1.5cm}}{\centering $\mathbb{E}[\lambda_{t,\omega}]$ \\ (\euro/MWh) } & \multicolumn{1}{p{2.75cm}}{\centering $\mathbb{E}[Q^{ren}_t + \sum Q_{t,\omega}^{i}]$ \\ (MWh) } & \multicolumn{1}{p{1.5cm}}{\centering $\mathbb{E}[P_t^1]$ \\ (\euro/MWh) } & \multicolumn{1}{p{1.5cm}}{\centering $\mathbb{E}[P_t^2]$ \\ (\euro/MWh) } & \multicolumn{1}{p{1.5cm}}{\centering $\mathbb{E}[P_t^3]$ \\ (\euro/MWh) } & \multicolumn{1}{p{1.5cm}}{\centering $\mathbb{E}[P_t^4]$ \\ (\euro/MWh) } \\ \hline
\multirow{2}{*}{$\sigma_t^i = 0\%$} & $\chi = 0$ & 3478823.44 & 3039206.15 & 43.19 & 4045.88 & 37.23 & 49.69 & 60.84 & 77.51 \\
& $\chi = 1$ & 3478823.44 & 3039206.15 & 43.19 & 4045.88 & 37.23 & 49.69 & 60.84 & 77.51 \\ \hline
\multirow{2}{*}{$\sigma_t^i = 5\%$} & $\chi = 0$ & 3518082.54 & 3070042.51 & 43.63 & 4001.96 & 35.75 & 52.18 & 63.86 & 80.96 \\
& $\chi = 1$ & 3515066.28 & 3075152.73 & 43.61 & 4023.92 & 35.51 & 52.16 & 63.87 & 73.78 \\ \hline
\multirow{2}{*}{$\sigma_t^i = 10\%$} & $\chi = 0$ & 3553546.47 & 3098084.94 & 44.03 & 3985.29 & 34.66 & 54.44 & 66.69 & 83.78 \\
& $\chi = 1$ & 3546894.81 & 3109359.34 & 44.01 & 4016.29 & 33.95 & 54.49 & 66.85 & 71.11 \\ \hline
\multirow{2}{*}{$\sigma_t^i = 15\%$} & $\chi = 0$ & 3588804.76 & 3127173.84 & 44.42 & 3972.68 & 33.31 & 56.84 & 69.16 & 85.68 \\
& $\chi = 1$ & 3578339.65 & 3141192.84 & 44.39 & 4012.44 & 32.48 & 56.89 & 69.43 & 71.38 \\ \hline
\end{tabular}
}
\end{table}
Comparing levels of price flexibility, we can see that the expected profit increases with the flexibility. This is due to the offering price adjustments that the GENCO employs to modify the marginal price of the market. Starting from a marginal price of 43.19\euro/MWh when the GENCO makes offers at production cost, the marginal price rises to 44.42\euro/MWh when the flexibility is 15\% of the production costs.
There are also differences regarding the risk aversion level. When the producer is risk-averse ($\chi = 1$), expected profits are slightly lower but the CVaR improves, as the marginal price does not increase as much as in the case where the producer is risk-neutral ($\chi = 0$). The opposite occurs with the dispatched energy: the risk-averse GENCO tries to secure dispatch even at a lower marginal price. Therefore, the expected dispatched energy for the risk-neutral GENCO is lower than for the risk-averse one.
In relation to the block prices offered, significant differences appear when varying the price flexibility and risk aversion levels. For example, as the flexibility increases, the mean price offered for the first block decreases, while the price for the second block increases. This may reflect the producer's need to ensure that one block (the first) is dispatched while raising the price of the second block to leave the competition behind and increase the marginal price. The same behavior can be seen for the third block price. Regarding the differences between risk aversion levels, most of the time the risk-averse and risk-neutral GENCOs follow the same strategy, but the risk-averse GENCO does not increase (or decrease) the block prices as much as the risk-neutral one. However, the opposite behavior is observed for the price of the fourth block.
In what follows, we graphically present the stochastic results for the complete testing period, studying the profit distribution and optimal block prices for different levels of risk aversion and price flexibility.
Firstly, in Figure \ref{fig:risk0_1_sigma_0}, we show the expected daily profit distribution (left) and hourly block prices (right) when no strategic offering is allowed. That is, the price flexibility is 0\% and the block prices are the production costs. With this level of price flexibility, the block prices show the real production cost during June 2019. These costs are obtained following the methodology exposed in Section \ref{sec:supply_disc}. That is, we discretized the original supply curve of the large GENCO and assumed the obtained values for each block to be the true costs. Regarding the expected profit distribution, we can see how it fluctuates between 2 and 6 million euros per day. Most of the atypical expected profits are in the right tail of the distribution, with fewer of them in the left tail.
\begin{figure}[ht]
\centering
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/prof_chi0_sig0.png}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/price_chi0_sig0.png}
\end{subfigure}
\caption{Expected daily profit distribution (left) and hourly block prices (production costs, right) for $\chi = 0,1$ and $\sigma_t^i = 0\%$}
\label{fig:risk0_1_sigma_0}
\end{figure}
Next, we increase the price flexibility level up to 5\% and break down the results by risk aversion level in Figure \ref{fig:risk0_1_sigma_5}. We represent the risk-neutral GENCO in Figures \ref{fig:risk0_1_sigma_5}(a) and (b), and the risk-averse one in Figures \ref{fig:risk0_1_sigma_5}(c) and (d). Figures \ref{fig:risk0_1_sigma_5}(a) and (c) show the distribution of daily profit increments compared to the base case in which the GENCO offers her energy at production costs. This means that, for the same scenario, we compute the difference between the expected profit at a price flexibility level of 5\% and at a level of 0\%. On the other hand, Figures \ref{fig:risk0_1_sigma_5}(b) and (d) represent the difference between the optimal prices offered for each block at a price flexibility level of 5\% and in the base case (the production cost). The same descriptive setting is applied to the rest of the analyzed cases with different flexibility levels in Figures \ref{fig:risk0_1_sigma_10} and \ref{fig:risk0_1_sigma_15}.
\begin{figure}[ht]
\centering
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/prof_chi0_sig05.png}
\caption{Distribution of daily profit increment, $\chi=0$, $\sigma_t^i = 5\%$}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/price_chi0_sig05.png}
\caption{Block prices vs. production costs, $\chi=0$, $\sigma_t^i = 5\%$}
\end{subfigure}
\centering
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/prof_chi1_sig05.png}
\caption{Distribution of daily profit increment, $\chi=1$, $\sigma_t^i = 5\%$}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/price_chi1_sig05.png}
\caption{Block prices vs. production costs, $\chi=1$, $\sigma_t^i = 5\%$}
\end{subfigure}
\caption{Expected profit increment distribution and block prices for cases $\chi = 0,1$ and $\sigma_t^i = 5\%$}
\label{fig:risk0_1_sigma_5}
\end{figure}
In Figure \ref{fig:risk0_1_sigma_5} we can see large differences in the profit increment distribution by risk aversion level. Although the means of the distributions are similar (slightly higher for the risk-neutral GENCO), the distribution shapes change appreciably. For instance, the distribution for the risk-neutral GENCO has a higher variance than in the risk-averse case. Besides, we can notice how the risk-averse GENCO improves the lower tail of the distribution, turning worst-case profit increments into atypical values. Concerning the offering strategy, the main differences appear in the price offered for the fourth block, where the risk-neutral GENCO tries to set the maximum possible price while the risk-averse GENCO tries to minimize it. Furthermore, the risk-neutral GENCO shows a higher variance in the prices of the first block compared to the risk-averse case.
\begin{figure}[t]
\centering
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/prof_chi0_sig1.png}
\caption{Distribution of daily profit increment, $\chi=0$, $\sigma_t^i = 10\%$}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/price_chi0_sig1.png}
\caption{Block prices vs. production costs, $\chi=0$, $\sigma_t^i = 10\%$}
\end{subfigure}
\centering
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/prof_chi1_sig1.png}
\caption{Distribution of daily profit increment, $\chi=1$, $\sigma_t^i = 10\%$}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/price_chi1_sig1.png}
\caption{Block prices vs. production costs, $\chi=1$, $\sigma_t^i = 10\%$}
\end{subfigure}
\caption{Expected profit increment distribution and block prices for cases $\chi = 0,1$ and $\sigma_t^i = 10\%$}
\label{fig:risk0_1_sigma_10}
\end{figure}
Following the increase of flexibility, we repeat our analysis for the case $\sigma_t^i = 10\%$ in Figure \ref{fig:risk0_1_sigma_10}. We observe the same behavior regarding the distributions of the daily profit increment. In general, the mean expected profit increment grows from 40000\euro{} in the former case to more than 60000\euro{}. Besides, the variance of the distribution increases for both the risk-neutral and the risk-averse GENCO. The latter continues to improve the lower tail of the profit increment distribution. In relation to the offered prices, both GENCOs take advantage of the price flexibility by modifying their offers over a larger range around the production costs. The behavior regarding the fourth block remains opposite between risk aversion levels, and the risk-neutral GENCO still adds more variability to the prices of the first block, trying to extract maximum profit even from the most securely dispatched energy block.
\begin{figure}[t]
\centering
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/prof_chi0_sig15.png}
\caption{Distribution of daily profit increment, $\chi=0$, $\sigma_t^i = 15\%$}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/price_chi0_sig15.png}
\caption{Block prices vs. production costs, $\chi=0$, $\sigma_t^i = 15\%$}
\end{subfigure}
\centering
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/prof_chi1_sig15.png}
\caption{Distribution of daily profit increment, $\chi=1$, $\sigma_t^i = 15\%$}
\end{subfigure}
\hfill
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/price_chi1_sig15.png}
\caption{Block prices vs. production costs, $\chi=1$, $\sigma_t^i = 15\%$}
\end{subfigure}
\caption{Expected profit increment distribution and block prices for cases $\chi = 0,1$ and $\sigma_t^i = 15\%$}
\label{fig:risk0_1_sigma_15}
\end{figure}
We finish this analysis with the 15\% price flexibility case, whose results are shown in Figure \ref{fig:risk0_1_sigma_15}. The largest differences appear here. Regarding the expected profit increment distributions, mean values increase to more than 100000\euro{} per day. The interquartile range continues to grow, in the risk-neutral case also up to 100000\euro{} per day. The risk-averse GENCO is able to raise the first quartile to a value over 60000\euro{}, with a smaller variance of the profit increment distribution than the risk-neutral GENCO. In relation to the block prices, the risk-averse GENCO tries to minimize the price of the fourth block and maximize the price of the third one, suggesting an attempt to bring the prices of both blocks together. On the other hand, the risk-neutral GENCO keeps her block prices apart at high values. This behavior might reflect the risk the GENCO faces if a competitor's block falls below her blocks and excludes her from dispatch. In contrast, the risk-averse GENCO tries to secure the first block while keeping the rest of the blocks at medium prices.
To conclude this section of results, we show the differences in mean expected daily profit and mean daily CVaR across different levels of the GENCO's risk aversion. For this example, we set a price flexibility level of 10\% and solve our stochastic optimization problems for $\chi = 0, 0.1, 0.2, \dots, 1$. A graphical illustration of the efficient frontier can be seen in Figure \ref{fig:cvar_front}.
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{figures/cvar_front.png}
\caption{Expected profit versus CVaR for different values of $\chi$. Price flexibility is set to 10\%}
\label{fig:cvar_front}
\end{figure}
As can be noticed, we first find a group of low risk aversion levels ($\chi = 0, 0.1, 0.2$) for which the GENCO obtains similar expected profits and CVaR values. With $\chi = 0.3$ and $\chi = 0.4$, the mean expected profit begins to decrease linearly (and the CVaR to increase linearly). Finally, when $\chi$ exceeds $0.5$, the expected profit drops substantially while only small improvements in the CVaR are made.
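The shape of such a frontier can be illustrated with a toy sweep over $\chi$. The three candidate strategies, their profit distributions, and the weighted objective $(1-\chi)\,\mathbb{E}[\text{Profit}] + \chi\,\text{CVaR}$ are all assumptions made for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)

def cvar(profits, alpha=0.10):
    """Mean of the worst alpha-fraction of the sorted profit sample."""
    p = np.sort(profits)
    k = max(1, int(np.ceil(alpha * len(p))))
    return p[:k].mean()

# Toy candidate strategies: higher-mean strategies are also more
# volatile (hypothetical distributions, 500 scenarios each).
strategies = {
    "aggressive": rng.normal(120, 40, 500),
    "moderate": rng.normal(110, 20, 500),
    "conservative": rng.normal(100, 8, 500),
}

# For each risk-aversion level chi, pick the strategy maximizing the
# assumed weighted objective (1 - chi) * E[profit] + chi * CVaR
for chi in [0.0, 0.5, 1.0]:
    best = max(strategies, key=lambda s: (1 - chi) * strategies[s].mean()
               + chi * cvar(strategies[s]))
    print(chi, best)
```

The sweep reproduces the qualitative pattern above: low $\chi$ favors the high-mean strategy, high $\chi$ the one with the best lower tail.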
\subsection{Out-of-sample model validation}
As we have seen, the stochastic optimization model illustrates how allowing strategic price offering for a GENCO can increase the day-ahead market marginal price. Furthermore, we see differences in offering prices, marginal prices, and expected profits in relation to the risk aversion level of the GENCO.
This analysis was based on the assumption that the linear model (\ref{eq:sto_cons6}) is an accurate representation of the market price response to the GENCO's strategic offers. However, we aim to test whether this stochastic optimal strategy would actually work in a real market. That is, we want to know what would occur if the GENCO sent her optimized supply curves to the Spanish day-ahead market. For this reason, we perform an out-of-sample validation of the derived offering strategy, using data from June 2019.
For that purpose, the methodology is as follows:
\begin{enumerate}
\item Once the optimal strategy by the GENCO is derived from (\ref{eq:stochastic_model_full}), send the resulting supply curve to the market. Block quantities will be $Q_t^{\text{Max }i}$ with prices $P_t^i$, jointly with $Q_t^{ren}$ at zero cost.
\item Aggregate the GENCO supply curve with the one collected by the market operator from the rest of the market competitors (also discretized in 7 blocks, see Section \ref{sec:discr_market}), and use the inelastic demand forecast from ESIOS to obtain the initial marginal price.
\item Displace the market supply curve employing the mean displacement of the last two months at its corresponding hour, to reproduce technical adjustments performed by the market operator.
\end{enumerate}
After these steps, we obtain an estimated marginal price. Profits for the GENCO can then be computed as the dispatched quantities (those blocks below the resulting marginal price) times the marginal price, minus the production costs of this energy.
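The clearing step behind this computation can be sketched as a uniform-price merit-order dispatch; every number and block below is hypothetical:

```python
def clear_market(offers, demand):
    """offers: list of (price, quantity, owner); demand is inelastic, in MWh.
    Returns (marginal_price, dispatched energy per owner)."""
    dispatched = {}
    served = 0.0
    marginal_price = 0.0
    # Merit order: accept blocks from cheapest upwards until demand is met
    for price, qty, owner in sorted(offers, key=lambda o: o[0]):
        if served >= demand:
            break
        take = min(qty, demand - served)
        dispatched[owner] = dispatched.get(owner, 0.0) + take
        served += take
        marginal_price = price  # last accepted block sets the uniform price
    return marginal_price, dispatched

# Hypothetical aggregated offers (price in euro/MWh, quantity in MWh)
offers = [
    (0.0, 1000, "genco"),   # renewable block at zero cost
    (35.0, 500, "genco"),   # first non-zero cost block
    (40.0, 800, "rivals"),
    (52.0, 600, "genco"),
    (55.0, 900, "rivals"),
]
price, disp = clear_market(offers, demand=2500)
print(price, disp)
```

The GENCO's profit then follows as the dispatched quantity of each of her blocks times the marginal price minus that block's production cost.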
These out-of-sample results have been computed for different levels of risk aversion ($\chi$) and price flexibility ($\sigma_t^i$). Table \ref{tab:oos_results} summarizes the main insights obtained. The third column of the table shows the average daily profit for the GENCO in June 2019. The fourth and fifth columns indicate the first and third quartiles of the profit distribution, and the sixth column its variance. Finally, the last column reports the mean hourly marginal price.
\begin{table}[ht]
\caption{Result summary of the out-of-sample model validation.}
\centering
\label{tab:oos_results}
\resizebox{\textwidth}{!}{%
\begin{tabular}{cc|ccccc}
\multicolumn{1}{p{1.5cm}}{\centering Price \\ flexibility} & \multicolumn{1}{p{1.5cm}|}{\centering Risk \\ aversion} & \multicolumn{1}{p{2cm}}{\centering $\overline{\text{Profit}}$ \\ ($\text{\euro}$)} & \multicolumn{1}{p{2cm}}{\centering $Q_1$ \\ ($\text{\euro}$)} & \multicolumn{1}{p{2cm}}{\centering $Q_3$ \\ ($\text{\euro}$)} & \multicolumn{1}{p{2cm}}{\centering $Var[\text{Profit}]$ \\ ($\text{\euro}^2$)} & \multicolumn{1}{p{2cm}}{\centering $\overline{\lambda_t}$ \\ ($\text{\euro}$/MWh)} \\ \hline
\multirow{2}{*}{$\sigma_t^i = 0\%$} & $\chi=0$ & 3565205.49 & 3303414.68 & 4060772.16 & 645124.68 & 43.82 \\
& $\chi=1$ & 3565205.49 & 3303414.68 & 4060772.16 & 645124.68 & 43.82 \\ \hline
\multirow{2}{*}{$\sigma_t^i = 5\%$} & $\chi=0$ & 3572792.23 & 3305175.26 & 4056156.26 & 659907.01 & 43.91 \\
& $\chi=1$ & 3570222.41 & 3305175.26 & 4047973.11 & 662875.81 & 43.88 \\ \hline
\multirow{2}{*}{$\sigma_t^i = 10\%$} & $\chi=0$ & 3574425.62 & 3302773.32 & 4047537.69 & 661838.42 & 43.94 \\
& $\chi=1$ & 3567050.32 & 3300685.63 & 4032516.38 & 672635.64 & 43.88 \\ \hline
\multirow{2}{*}{$\sigma_t^i = 15\%$} & $\chi=0$ & 3574699.76 & 3298282.72 & 4051551.90 & 666520.89 & 43.97 \\
& $\chi=1$ & 3559860.77 & 3294270.89 & 4028966.40 & 677172.50 & 43.88 \\\hline
\end{tabular}
}
\end{table}
The first and one of the most important results that we can confirm out-of-sample is that allowing the large GENCO's offers to deviate from marginal costs causes the market marginal price to increase. Besides, the risk-neutral GENCO increases this marginal price more than the risk-averse GENCO.
For a risk-neutral GENCO, the maximum mean profit is obtained for a price flexibility of 15\%, whereas for the risk-averse GENCO the maximum profit occurs at a price flexibility level of 5\%. Furthermore, we can see how the average profit for the risk-neutral GENCO is always above the case where no price flexibility is allowed. On the other hand, for a risk-averse GENCO, the mean profit does not improve at a flexibility level of 15\%. In general, we can assume there is some level of flexibility beyond which profits stop increasing; this is where the offering curves from the competitors may start determining the marginal price.
Regarding profit quartiles and variance, no significant differences are appreciated. Only the quartiles seem to decrease as the price flexibility increases, suggesting that more atypical values appear in the right tail of the profit distribution, which generally increases the mean profit.
To finish this section, we graphically show the daily GENCO profit increment over the test period for different levels of price flexibility and risk aversion, relative to the true-generating-cost offering strategy. Figure \ref{fig:oos_day_prof} shows the daily out-of-sample profit increment for the risk-neutral (up) and risk-averse (down) GENCO for different levels of price flexibility. Dashed lines represent the mean for each pricing strategy.
As can be seen, in most of the cases the profit increment is slightly higher when the price flexibility differs from zero, for both levels of risk aversion. Besides, on some specific days, such as June 13th, 17th, 27th, or 28th, this profit difference grows considerably. It is interesting to notice that, for the risk-neutral case, the mean profit increment is almost the same for price flexibility levels of 10\% and 15\%. However, for the risk-averse case, a large price flexibility can even underperform offering at the true generating cost. In this case, a low price flexibility, i.e., 5\%, achieves the best results for the risk-averse large GENCO.
Furthermore, more differences can be appreciated when we show the average profit increment on an hourly basis (Figure \ref{fig:oos_hour_prof}). As above, we represent the profit increment for several levels of price flexibility and risk aversion: risk-neutral (up) and risk-averse (down). We can see how the largest profit increments occur from 8:00 to 10:00, and, in a non-homogeneous way, in some specific hours like 3:00, 13:00 to 15:00, 17:00, 18:00, 20:00, 21:00, and 23:00.
For the risk-neutral GENCO, we can notice how, on average and in most hours of the day, a profit increment can be obtained by allowing price flexibility in her offering strategy. Besides, when the profit increases, it does so on a larger scale when the maximum price flexibility is set. However, when the profit decreases with respect to offering at production costs, it decreases more when a price flexibility of up to 15\% is allowed.
On the other hand, the behavior of the hourly profit increments is similar for a risk-averse GENCO, although more hours appear in which the profit does not increase. When the profit does increase, it does so similarly across the different levels of price flexibility. Nevertheless, a 5\% price flexibility allows the risk-averse GENCO to soften the profit decreases.
These results show that the stochastic optimization model is optimistic regarding the profit increment. The profit increments do occur, but on a smaller scale than the stochastic model reported. This might be due to the local limitations of a linear predictive model, as large variations in the competitors' block offers, which could change the market outcomes, cannot be taken into account. However, the hourly behavior of the profit increment provides valuable insights and suggests that an hourly price flexibility strategy could further increase the GENCO's profits.
To sum up, this out-of-sample validation has allowed us to confirm in a realistic context that allowing a GENCO to deviate from her true generating costs increases her expected profits. Nevertheless, we have seen that an excessive price flexibility may diminish this effect. Finally, we have learned that the highest profit increments can be achieved at some specific hours of the day.
\begin{figure}[H]
\centering
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{figures/day_prof_chi0_oos.png}
\end{subfigure}
\hfill
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{figures/day_prof_chi1_oos.png}
\end{subfigure}
\caption{Daily out-of-sample profit increment for risk neutral (up) and risk averse (down) GENCO at different levels of price flexibility}
\label{fig:oos_day_prof}
\end{figure}
\begin{figure}[H]
\centering
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{figures/hour_prof_chi0_oos.png}
\end{subfigure}
\hfill
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{figures/hour_prof_chi1_oos.png}
\end{subfigure}
\caption{Hourly out-of-sample mean profit increment for risk neutral (up) and risk averse (down) GENCO at different levels of price flexibility}
\label{fig:oos_hour_prof}
\end{figure}
\section{Conclusions}
\label{sec:conclu}
In this work, we have dealt with the problem of finding an optimal offering strategy for a large GENCO: knowing the amount of energy that the GENCO can produce and at what cost, setting the prices of the offered energy blocks to maximize profit.
We present a data-driven methodology where the GENCO's supply curves and those of the rest of the competitors are optimally discretized. This discretization allows us to extract important insights from their offering strategies, reproduce the market clearing to compute the resulting market price, and ease an out-of-sample validation of the proposed stochastic optimization model.
The relationship between the hourly market marginal price and the block prices offered by the GENCO is modeled through a Bayesian linear regression approach. The advantage of this approach is to obtain an interpretable, fully linear model, from which different scenarios for the price coefficients (sensitivity of the marginal price to the GENCO supply curve) can be sampled from their posterior distribution. This linear model is embedded into a two-stage stochastic optimization model which accounts for risk aversion to derive the optimal supply curve to submit to the day-ahead market.
After an in-depth analysis, the stochastic results have shown that allowing the GENCO's offers to deviate from her marginal costs results in a marginal market price increase, which also increases her profits. Besides, the pricing behavior of the GENCO differs depending on her risk aversion level. In general, a risk-neutral GENCO manages to increase the marginal price to a greater extent than the risk-averse GENCO, but at the cost of worsening the worst-case profit scenarios.
One of the main novelties of this work is that the optimal offering strategy is tested out-of-sample. That is, we simulate the actual functioning of the market by combining the demand with the offers from the GENCO and her competitors. We show how the proposed optimization model achieves a marginal price increment and a maximum profit for the risk-neutral GENCO at price flexibility levels of around 10\% of the production costs, and at 5\% for the risk-averse one. Besides, an hourly price flexibility strategy could be even more profitable for the GENCO. Furthermore, these results warn us about the importance of performing effective audits in markets with uniform pricing based on marginal technologies. In this type of market, offers should be made at marginal generating costs. However, as we have shown, minor increments in the block prices may significantly increase the marginal price, which will be transferred to the consumers.
Future work will focus on studying not only the price strategy for the GENCO but also allowing flexibility in the offered energy quantity. This would add complexity to both the optimization problem and the prediction model. Thus, state-of-the-art linearizable machine learning models should be tested, with the aim of not losing an adequate uncertainty characterization.
\section*{Acknowledgements}
The authors gratefully acknowledge the financial support from MCIN/AEI/10.13039/ 501100011033, project PID2020-116694GB-I00 and from the FPU grant (FPU20/00916).
\printbibliography
\end{document}
package org.apache.kafka.common.requests;

import org.apache.kafka.common.message.DeleteGroupsResponseData;
import org.apache.kafka.common.message.DeleteGroupsResponseData.DeletableGroupResult;
import org.apache.kafka.common.protocol.ApiKeys;
import org.apache.kafka.common.protocol.Errors;
import org.apache.kafka.common.protocol.types.Struct;

import java.nio.ByteBuffer;
import java.util.HashMap;
import java.util.Map;

/**
 * Possible error codes:
 *
 * COORDINATOR_LOAD_IN_PROGRESS (14)
 * COORDINATOR_NOT_AVAILABLE (15)
 * NOT_COORDINATOR (16)
 * INVALID_GROUP_ID (24)
 * GROUP_AUTHORIZATION_FAILED (30)
 * NON_EMPTY_GROUP (68)
 * GROUP_ID_NOT_FOUND (69)
 */
public class DeleteGroupsResponse extends AbstractResponse {

    public final DeleteGroupsResponseData data;

    public DeleteGroupsResponse(DeleteGroupsResponseData data) {
        this.data = data;
    }

    public DeleteGroupsResponse(Struct struct) {
        short latestVersion = (short) (DeleteGroupsResponseData.SCHEMAS.length - 1);
        this.data = new DeleteGroupsResponseData(struct, latestVersion);
    }

    public DeleteGroupsResponse(Struct struct, short version) {
        this.data = new DeleteGroupsResponseData(struct, version);
    }

    @Override
    protected Struct toStruct(short version) {
        return data.toStruct(version);
    }

    public Map<String, Errors> errors() {
        Map<String, Errors> errorMap = new HashMap<>();
        for (DeletableGroupResult result : data.results()) {
            errorMap.put(result.groupId(), Errors.forCode(result.errorCode()));
        }
        return errorMap;
    }

    public Errors get(String group) throws IllegalArgumentException {
        DeletableGroupResult result = data.results().find(group);
        if (result == null) {
            throw new IllegalArgumentException("could not find group " + group + " in the delete group response");
        }
        return Errors.forCode(result.errorCode());
    }

    @Override
    public Map<Errors, Integer> errorCounts() {
        Map<Errors, Integer> counts = new HashMap<>();
        for (DeletableGroupResult result : data.results()) {
            Errors error = Errors.forCode(result.errorCode());
            counts.put(error, counts.getOrDefault(error, 0) + 1);
        }
        return counts;
    }

    public static DeleteGroupsResponse parse(ByteBuffer buffer, short version) {
        return new DeleteGroupsResponse(ApiKeys.DELETE_GROUPS.parseResponse(version, buffer), version);
    }

    @Override
    public int throttleTimeMs() {
        return data.throttleTimeMs();
    }

    @Override
    public boolean shouldClientThrottle(short version) {
        return version >= 1;
    }
}
{"url":"https:\/\/brilliant.org\/problems\/vercongence\/","text":"# Vercongence?\n\nCalculus Level 3\n\nConsider the following strange definition:\n\nWe say a sequence $$(x_n)$$ verconges to $$x$$ if there exist an $$\\epsilon>0$$ such that for all $$N\\in \\Bbb{N}$$, $$n\\ge N \\implies |x_n-x|<\\epsilon$$\n\nWhich of the following conclusions is wrong?\n\n\u00d7","date":"2016-10-27 05:04:33","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8241218328475952, \"perplexity\": 509.7292143465028}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2016-44\/segments\/1476988721141.89\/warc\/CC-MAIN-20161020183841-00475-ip-10-171-6-4.ec2.internal.warc.gz\"}"}
| null | null |
The block square (Dutch: blokhaak or blokwinkelhaak) is a tool for checking and marking out right angles.
A block square consists of a stock and a blade (or beam) that form a right angle (90°) with each other. Unlike the try square (schrijfhaak), the stock is never bevelled at 45 degrees at the point where it is joined to the blade. Because there is no measuring scale, the tool cannot be used to measure lengths. In metalworking, all-metal block squares are used. The squareness of a corner is checked by the gap of light between the blade and the material. In metal block squares, the inner corner between the stock and the blade often has a small recess, which makes it possible to check the squareness even of corners that carry a burr. In woodworking, block squares with a wooden stock are also used.
Laminates can also be gauged with a block square.
Variants
With a mitre square (verstekhaak), the blade is set at an angle, roughly in the middle of the stock; it forms angles of 45° and 135° with the stock.
With a sliding bevel (zweihaak), the blade is pivoted, so that variable angles can be set.
Tools
I strongly condemn the recent personal attacks by the Daily Mail on the three judges who ruled on the Article 50 case last Thursday (see front page below).
We have a free press and it has an important & critical role in our society, but in this case the Daily Mail has attacked one of the cornerstones of a free society, the integrity of the judicial process, by attacking the judges as people and not just their decision. The next time a judge makes a controversial decision, they should not have to fear journalists raking over their past.
If the Daily Mail had objected to the ruling and explained why they considered it wrong, they would have been doing their job, but to personally attack the judges, and in particular to make references to the sexuality of one of them (even though it was later removed), is unacceptable. It demeans & damages British society. I also believe the government should have been quicker to respond; although I welcome Liz Truss's statement, I think they should have done more, which is one reason for writing this.
Since I believe in a free press I do not want to censor the Daily Mail, but this is another reason why I would never buy it.
I am half-German, and my German grandparents, while not Nazis, came from that part of German society that would have supported them and Hitler in the 1930s. It always mystified me why they did it, but in the last few years I have come to understand better how the press can be used to create fear & division.
The judges are not the enemies of the people but the defenders of the law which protects us all.
Wolf Isaac Blitzer, born 22 March 1948 in Augsburg, Germany, is an American journalist and author. He is best known as a news anchor on the American TV channel CNN.
Blitzer came to the United States as a child with his parents, Jewish immigrants from Germany, and grew up in Buffalo, New York. He began his journalistic career in 1972 as a Tel Aviv reporter for Reuters. In 1973 he became Washington correspondent for the English-language edition of The Jerusalem Post, a job he held until he joined CNN as a reporter in 1990. There he hosts the programs The Situation Room, broadcast every weekday, and CNN Newsroom. Blitzer has also been CNN's lead anchor during its coverage of the American presidential elections, a responsibility he has held since 2004. Known for his Middle East expertise, he was a field reporter during the war between Israel and Hezbollah in 2006, and he returned to the region in 2012 with his CNN colleague Anderson Cooper. Blitzer played himself in the Bond film Skyfall.
Bibliography
Between Washington and Jerusalem: A Reporter's Notebook (Oxford University Press, 1985)
Territory of Lies (Harper and Row, 1989)
Sources
American journalists
People from Buffalo, New York
Born 1948
Men
Living people
CNN
University at Buffalo alumni
Johns Hopkins University alumni
This 50s Lauren Retro Stretch Belt in White by Banned is thé ideal accessory to finish off any vintage outfit!
Made from wide white elastic with a silver-toned buckle to cinch in the waist. Pair it with our lovely swing dresses and define your waist for a stunning silhouette!
Worried about identity theft? Here are a few ways shoppers can protect themselves
By: Jordan Betts
KANSAS CITY, Mo. — During the three days between Thanksgiving and Cyber Monday, the National Retail Federation expected 164 million U.S. consumers to shop in some way, shape or form.
With every swipe of a credit or debit card, all those people also risk having personal information stolen by identity thieves, but experts said there are some things consumers can do to protect sensitive information.
When you are shopping online, use only trusted vendors.
"I would say absolutely use secure networks," said Frankie Bellucci, a smart technology expert.
Bellucci also said it's important to double check that you have the correct website when shopping online, because some hackers have scam websites that look similar to the big retail stores' websites.
"Look out for fake ads," Bellucci said. "I have a lot of clients that come to me and may have clicked on something that infected their machine."
When it comes to paying for items, Bellucci said, "Choose cash over credit and debit."
That may be the toughest adjustment for shoppers.
"I usually use my debit card or my Discover card," said Robin Silverman, who was shopping in Brookside on Small Business Saturday.
Some people said cash can be a hassle.
"I usually use my credit card," said Debbie Prior, who was shopping with her daughter in Brookside. "I'm not organized enough to get cash ahead of time."
Still, experts said it's the safest option, "because there is less opportunity for someone to drop a card or get data stolen from a credit card," Bellucci said.
Shoppers also should keep a close eye on all statements and bank account records.
"I look at my Discover statement and make sure there is nothing on there that I didn't buy and my debit card I immediately record," Silverman said.
The last piece of advice is simply to be mindful and careful when shopping in public.
"Slow down and pay attention," Bellucci said. "Don't drop your wallet, keep your wallet in your front pocket, keep your purse closed. People are snatching things out of people hands and bags and pockets."
\section{Introduction}
The morphology of galaxies in the Local Universe is well constrained by observations, but is still largely unexplained. Indeed, large-volume cosmological simulations fail to reproduce realistic galaxies. For instance, the disks formed are often too concentrated: this is the ``angular momentum problem'', well known since the early work of \cite{Navarro1991}. It is still unclear whether this is an intrinsic problem of the $\Lambda$CDM paradigm or if something (i.e. resolution, physical processes...) is missing in these simulations.
Another puzzle is the question of disk survival till z=0 (\cite{Koda2007}). For instance, \cite{Kautsch2006} study a large sample of edge-on spiral galaxies in the SDSS and find that a significant fraction of them (i.e. roughly one third) are bulgeless or ``superthin''. This is still unexplained by cosmological models. Indeed, $\Lambda$CDM predicts that galaxy interactions are frequent (see e.g. the recent work by \cite{Stewart2007}). More exactly, major mergers, that are well known to destroy disks to form ellipticals (\cite{Barnes1991}) are rather rare, but minor mergers are much more common. These minor mergers can thicken disks, and if frequent enough could even form elliptical galaxies (\cite{Bournaud2007}). The problem is then to find whether $\Lambda$CDM predicts too many mergers, or if the satellites have properties and orbital parameters such that they have little influence on the galactic disks. Also, gas accretion along filaments could fuel a thin disk and counteract the effect of mergers (\cite{Dekel2005}, \cite{Keres2005}, \cite{Ocvirk2008}).
To study the properties of galaxies at low and high redshift, it thus seems necessary to take the full cosmological context into account. Large scale cosmological simulations could of course achieve this goal and give a statistical view on galaxies at each redshift, but for now they mainly lack resolution at the galactic scale. On the contrary, small volume cosmological simulations like the one performed by \cite{Naab2007} can resolve galactic scales in detail but are so time-consuming that obtaining a statistical sample is for now a challenge.
A first method to solve these problems is to use semi-analytical models, i.e. extracting merger trees from cosmological simulations and using different recipes to infer the physical properties of galaxies (\cite{Somerville2001}, \cite{Hatton2003}, \cite{Khochfar2005}). The drawback is that approximations are necessary.
Another possibility has been explored by \cite{Kazantzidis2007}, \cite{Read2007} and \cite{Villalobos2008} : they extract merger histories from cosmological simulations and re-simulate these histories at higher resolution. Nevertheless, they perform collisionless simulations with no gas component, neither in the main galaxy, nor in satellites, nor in filaments.
We here present a new approach where we re-simulate at high resolution a history given by a cosmological simulation, using self-consistent, realistic galaxies (the main galaxy and the satellites have a gas disk, a stellar disk and a dark matter halo), and we also take into account gas accretion from cosmic filaments. Our goal is to obtain a statistical sample of merger and accretion histories in a $\Lambda$CDM context, to simulate the resulting galaxies, and to compare our results to observations at various redshifts.
After a description of the technique used, we will present our first results and emphasize the importance of gas accretion along filaments to understand galaxy evolution.
\section{Method}
\subsection{Analysis of the cosmological simulation}
Merger histories and accretion data are extracted from a dark matter only cosmological simulation performed with the AMR code RAMSES (\cite{Teyssier2002}). This simulation has an effective resolution of 512$^3$ and a comoving box length of 20 h$^{-1}$ Mpc. The mass resolution is 6.9$\times$10$^6$ M$_{\odot}$, so that a Milky Way type halo is made of a few 10$^{5}$ particles. The cosmology is set to $\Lambda$CDM with $\Omega_m$=0.3, $\Omega_{\Lambda}$=0.7, H$_0$=70 km.s$^{-1}$.Mpc$^{-1}$ and $\sigma_8$=0.9.
In this simulation, halos are detected with the HOP algorithm (\cite{Eisenstein1998}), with $\delta_{\rm peak}$=240, $\delta_{\rm saddle}$=200 and $\delta_{\rm outer}$=80 (the minimal number of particles per halo is fixed to 10). In the following, we also take into account particles that do not belong to a halo, and we consider them as diffuse accretion.
The halo of which we want to build the merger and accretion history is then chosen in the final snapshot of the simulation (at z = 0) and is traced back to higher redshift (typically z $\simeq$ 2) : we will call it the main halo. From z $\simeq$ 2 to z = 0, each halo or particle (in the case of diffuse accretion) entering a sphere around the main halo (the radius of this sphere is the virial radius of the main halo at z=0) is recorded, with its mass, position, velocity and spin (spin is of course omitted for diffuse accretion).
\subsection{High resolution re-simulation}
\subsubsection{The PM code}
The history that has been extracted from the cosmological simulation is re-simulated with a particle-mesh code (\cite{BC02}).
Gas dynamics is modeled with a sticky-particle scheme with $\beta_r$=0.8 and $\beta_t$=0.7, and star formation is computed according to a Kennicutt law with an exponent 1.5.
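The star formation recipe above follows a Schmidt-Kennicutt form; as a sketch (the normalization $A$ is an assumption, not specified in the text), the local star formation rate surface density scales as
\begin{equation}
\dot{\Sigma}_{\star} = A\, \Sigma_{\rm gas}^{1.5},
\end{equation}
so that doubling the local gas surface density raises the star formation rate by a factor of $2^{1.5} \simeq 2.8$.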
The maximum spatial resolution is 130 pc. For the two simulations shown hereafter, the mass resolution varies from 1.2$\times$10$^4$ M$_{\odot}$ to 2.1$\times$10$^4$ M$_{\odot}$ for gas particles, from 6$\times$10$^4$ M$_{\odot}$ to 1.4$\times$10$^5$ M$_{\odot}$ for stellar particles and from 1.2$\times$10$^5$ M$_{\odot}$ to 4.4$\times$10$^5$ M$_{\odot}$ for dark matter particles. This allows to have a total number of particles of the order of 15$\times$10$^6$ at the end of both simulations.
\subsubsection{Model galaxies}
Each halo of the cosmological simulation (i.e. the main halo as well as all the interacting satellites) is replaced with a realistic galaxy, having a disk, a bulge and of course a dark matter halo. The total mass of the galaxy is divided into 20\% of baryons and 80\% of dark matter (the mass of dark matter being given by the cosmological simulation). The dark matter halo follows a Burkert profile extended to its virial radius, with a core radius chosen to follow the scaling relations given in \cite{Salucci2000}. The disk radius of each galaxy is proportional to the square root of its mass so that the surface density is constant from one galaxy to another.
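The scaling adopted for the disk radius can be made explicit. Assuming, as a sketch, that the relevant quantity is the mean disk surface density, the choice
\begin{equation}
R_d \propto M_{\rm disk}^{1/2} \quad \Longrightarrow \quad \Sigma \simeq \frac{M_{\rm disk}}{\pi R_d^{2}} = {\rm const}
\end{equation}
indeed gives the same surface density for all model galaxies, independent of their mass.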
The gas fraction in the disk is 30\% for galaxies that have a halo mass lower than 10$^{11}$~M$_{\odot}$. For galaxies that have a greater halo mass, the gas fraction is set to 30\% at high redshift (z$>$0.8) and 15\% at low redshift.
Figure \ref{init} (left side) shows for example the initial distribution of gas and stars in the main galaxy.
\begin{figure}
\begin{center}
\includegraphics[width=6.5cm]{init_color.eps}
\includegraphics[width=6.5cm]{out010.gas_yz_color.eps}
\end{center}
\caption{Left : Initial distribution of stars (top panel) and gas (bottom panel) for the main galaxy, seen face-on and edge-on (each panel is 40 kpc x 40 kpc in size). Right : large scale view of the gas distribution in a simulation box (the panel is 440 kpc x 440 kpc in size).}\label{init}
\end{figure}
\subsubsection{Diffuse accretion}
Each dark matter particle that is considered as diffuse accretion in the cosmological simulation is replaced with a small blob of particles, containing in mass 20\% of gas and 80\% of dark matter.
The right side of figure \ref{init} shows an example of simulation where the main galaxy (edge-on) is surrounded by accreted gas (clearly in a filament) and a few satellite galaxies.
\subsection{Two examples}
We present here the first results concerning two simulations, that have been chosen to have a mass at z=0 of the order of magnitude of the mass of the Milky Way. They have very different histories.
In the first one, the mass growth of the galaxy is dominated by diffuse accretion (at a mean rate of $\sim$ 5 M$_{\odot}$/yr). Only some very minor mergers take place, the most important of these mergers having a mass ratio of 12:1 (see on the left panel of figure \ref{history} the mass evolution as a function of time). We will call this simulation \textit{``the calm case''}.
The other simulation also contains diffuse accretion, but is mainly dominated by mergers. There is a first period of repeated minor and major mergers (mass ratios 8:1, 10:1, 3:1 and 4:1) at the very beginning of the simulation, then a calm phase and finally a major merger (mass ratio 1.5:1) at low redshift (see right panel of figure \ref{history}). We will call it \textit{``the violent case''}.
\begin{figure}
\begin{center}
\includegraphics[width=6.2cm]{evol_masse_70.eps}
\includegraphics[width=6.2cm]{evol_masse_35.eps}
\end{center}
\caption{Evolution of the total mass of dark matter in the simulation box as a function of time for the two simulations studied here : in the left case, the mass growth is dominated by accretion, and in the right one by mergers}\label{history}
\end{figure}
\section{Results}
\subsection{The calm case}
The evolution of the distribution of gas and stars is shown in figure \ref{evol_70}. Gas is smoothly accreted around the galaxy and falls onto the disk. Minor mergers are not strong and frequent enough to destroy the stellar disk. They only slightly heat it, and a thin stellar disk is rebuilt thanks to gas from diffuse accretion along the filaments.
The thin disk is mainly formed from stars younger than 4 Gyr, and has a well-defined structure with two spiral arms.
\begin{figure}
\begin{center}
\includegraphics[width=12cm]{evol_70_color.eps}
\end{center}
\caption{Evolution of the distribution of gas (top panels) and stars (bottom panels) for the calm case. Snapshots are taken every Gyr and each panel is 40 kpc x 40 kpc in size.}\label{evol_70}
\end{figure}
\subsection{The violent case}
In this case, the evolution of the morphology of the galaxy is totally different (see figure \ref{evol_35}). The disk is destroyed early by the first series of mergers. In fact, after the first of these mergers (which has a mass ratio of 8:1) the disk is already very perturbed, and the following mergers contribute to the transformation of the galaxy into an elliptical.
Nevertheless, thanks to gas accretion that takes place along a filament, a gas disk is gradually re-built into the elliptical galaxy (this would not happen if only mergers were taken into account in the simulation). New stars form in this disk, forming a young stellar disk inside the old spheroid (see figure \ref{disk}), this disk being in a perpendicular plane with respect to the initial disk. Finally, the last major merger (with a mass ratio of 1.5:1) destroys this disk and the galaxy becomes elliptical again.
\begin{figure}
\begin{center}
\includegraphics[width=12.8cm]{evol_35_color.eps}
\end{center}
\caption{Evolution of the distribution of gas (top panels) and stars (bottom panels) for the violent case. Snapshots are taken every Gyr and each panel is 40 kpc x 40 kpc in size.}\label{evol_35}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=6cm]{disque.eps}
\end{center}
\caption{Projected stellar mass density at z = 0.2 for the violent case.}\label{disk}
\end{figure}
\section{Conclusion}
In order to study galaxy evolution in cosmological context, we have successfully developed a technique that allows us to perform high resolution simulations taking into account realistic merger and gas accretion histories.
The first two simulations shown here do not allow us to draw any general conclusion on galaxy evolution in a $\Lambda$CDM context. Nevertheless, we can already confirm that even low-mass satellites can thicken disks and that ellipticals form both through repeated minor mergers and through major mergers. We also emphasize that gas accretion from filaments can rebuild a thin disk in a galaxy, which proves the absolute necessity of taking this accretion into account to understand galaxy evolution.
Bedroom Remodeling Ideas On A Budget. This collection of the best images about Bedroom Remodeling Ideas On A Budget is available to download. We obtained these pictures from the internet and chose the top ones for you. The Bedroom Remodeling Ideas On A Budget images and pictures published here were carefully selected by BeArdnac as the best among the others.
So we present this list of wonderful pictures for your ideas and information regarding Bedroom Remodeling Ideas On A Budget, as part of the boscoberlin.com exclusive updates collection. Take your time to find the best Bedroom Remodeling Ideas On A Budget images posted here that suit your needs, and use them for your own collection and personal use.
Thank you for visiting our website. We are pleased to announce that we have found a very interesting topic to be reviewed, namely Bedroom Remodeling Ideas On A Budget. Many people are looking for information about Bedroom Remodeling Ideas On A Budget, and surely one of them is you, is it not?
Narrating the Holodomor: The Social and Cultural History of Collectivization and Famine in Soviet Ukraine
Organized by the Holodomor Research and Education Consortium, Canadian Institute of Ukrainian Studies, University of Alberta
And how I remember the many corpses found everywhere because it was spring: in the forest and in the fields, on the streets, people had just collapsed from hunger, and they died. […] I remember once I was grazing the cow, and in a field by the forest, a boy, Sirozha, died. We shepherds dug a pit in the meadow, gathered grass and tall grasses, laid the body in the pit, and covered it with grass and buried it. There wasn't even anyone to bury the corpses.
Bilash [first name unknown] was one of thousands of Holodomor survivors who in 1989 responded to a call from journalist Volodymyr Maniak to provide accounts of the famine of 1932-33. The new Soviet policy of "openness" had meant that victims and their families were able to tell their stories after fifty years of near total silence. Several years later, the dissolution of the Soviet Union allowed for access to previously restricted archives, making possible discovery and publication of documents and research based on these sources. This "archival revolution" also opened new opportunities for assessment and public discussion of the legacy of Stalinism.
Although many scholarly works on collectivization and the Famine have been published over the last three decades, the social and cultural history of the Holodomor remains understudied. The aim of this conference is to provide a forum for examining practices of state violence and policies in the Soviet Union in the 1930s and to promote exploration of little-researched topics in social and cultural history. We especially encourage the examination and integration of ego-documents produced by victims, witnesses, and perpetrators. We seek to recover the voices of those who lived through the events, integrating their personal experiences into micro-level histories. Thus, we encourage comprehensive engagement with survivor memoirs and testimonies and thus are looking for papers that incorporate and analyze both official government sources and ego-documents.
Potential topics include, but are not limited to, the analysis of
the categories of victim, perpetrator, and bystander and their relevance in the context of the Holodomor;
issues in producing, gathering, and analyzing testimonies and memoirs;
everyday experiences and practices in rural and urban areas during and in the aftermath of collectivization and the Famine, including gendered experience, the spectrum of violence, resistance, survival strategies, mobility patterns, and changes in social and cultural norms;
second-generation Holodomor representations.
Please send an abstract of no more than 500 words and a CV to Dr. Oksana Vynnyk vynnyk@ualberta.ca and hrec@ualberta.ca by June 11, 2021.
Q: AngularJS Chaining $http calls

I am still quite new to AngularJS and struggling to figure the following out.
I have quite a few web-services that I need to use, and quite a few of them rely on the data from another to successfully make the next call.
For example, the first web-service will retrieve a list of Profiles.
ip.controller("ProfilesCtrl", function($scope, $http) {
    $http.post("Profile_List.asp").success(function(data) {
        $scope.profiles = data;
    }).error(function() {
        alert("An unexpected error occurred while loading profiles!");
    });
});
Profiles returns a JSON object.
Data returned:
{
    "Success": true,
    "ErrorMessage": "",
    "Objects": [{
        "GUID": "208FF69D-A4EB-4760-B2ED-414C900F4AAC",
        "Name": "John Doe",
        "Status": false
    }, {
        "GUID": "BC5C53FD-5CA7-4DBE-8594-D26AD88B758B",
        "Name": "Jane Doe",
        "Status": true
    }, {
        "GUID": "2FCD677B-DA36-4014-823A-9BDD1A72AD66",
        "Name": "Anonymous",
        "Status": true
    }]
}
Ok, so after I have made the initial call, I need to send the GUID of each Profile Object to another web-service. This service will use the GUID to determine the ID of that specific Profile.
The data from the second web-service will only return the ID for the GUID of the first call.
How can I chain these $http calls? Would it be better to create a new json object and use data from there?
I have done this before using ajax.
Another question regarding my controller code: is it fine like this, or would it be better to do the $http calls as a Service, Provider or Factory? How can I go about doing this?
Any help/links with getting the above to AngularJS code would be appreciated.
Please ask if anything is unclear.
A: Simply execute the next call in your "success" handler.
$http.post("Profile_List.asp").success(function(data) {
$scope.profiles = data;
//first call succeeded, and we have the data. call method 2
executeStep2($scope.profiles);
})
function executeStep2(profiles)
{
$http.post("second_method") // etc. (you can just send profiles as post data here)
}
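Since $http returns promises, the nested success callbacks can also be flattened into a chain. Below is a hedged plain-JavaScript sketch of the pattern: getProfiles, getIdForGuid and the "ID-..." format are made-up stand-ins for your two web-services, not real endpoints. In AngularJS you would return $http promises from a service or factory and chain them the same way, which also answers the second question (keep the controller thin and move the calls into a factory).

```javascript
// Hedged sketch of the two-step flow with chained promises instead of
// nested callbacks. getProfiles() and getIdForGuid() stand in for the
// two $http.post calls.
function getProfiles() {
  return Promise.resolve({
    Success: true,
    Objects: [
      { GUID: "208FF69D-A4EB-4760-B2ED-414C900F4AAC", Name: "John Doe" },
      { GUID: "BC5C53FD-5CA7-4DBE-8594-D26AD88B758B", Name: "Jane Doe" }
    ]
  });
}

function getIdForGuid(guid) {
  // Stand-in for the second web-service: maps a GUID to an ID.
  return Promise.resolve("ID-" + guid.slice(0, 8));
}

function loadProfilesWithIds() {
  return getProfiles().then(function (data) {
    // Fire the second call once per profile and wait for all of them.
    return Promise.all(data.Objects.map(function (profile) {
      return getIdForGuid(profile.GUID).then(function (id) {
        profile.ID = id;
        return profile;
      });
    }));
  });
}
```

Each step returns the promise for the next one, so errors from either call propagate to a single `.catch` at the end of the chain instead of needing per-call error handlers.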
module OmniAuth
  module Strategies
    class Hanami
      include OmniAuth::Strategy

      option :auth_key, ->(params) { params.fetch('user', {})['email'] }
      option :password_key, ->(params) { params.fetch('user', {})['password'] }
      option :encryption, :bcrypt
      option :interactor

      uid do
        identity
      end

      def callback_phase
        return fail!(:invalid_credentials) unless identity
        super
      end

      private

      def identity
        return @identity if @identity

        login = options.fetch(:auth_key).(request.params)
        password = options.fetch(:password_key).(request.params)
        result = interactor.new(login, password).call

        @identity = result.success? ? result.user : nil
      end

      def interactor
        options.fetch(:interactor)
      end

      def model
        options.fetch(:model)
      end
    end
  end
end
Source: https://vega.github.io/vega-lite-v3/tutorials/figures.html

This website is for Vega-Lite v3. Go to the main Vega-Lite homepage for the latest release.

Create Figures for Papers

In this tutorial you will learn how to use Vega-Lite to create charts for figures and embed them in a paper written with LaTeX. The overall workflow includes (1) opening a chart in the online editor, (2) exporting the chart as SVG, (3) converting the SVG to PDF, (4) cropping the PDF to remove excess whitespace, and (5) embedding the chart in a LaTeX paper.

Create a Chart in Vega-Lite

First, you need a chart. To export charts as figures, you can start with specifications from the online editor, use examples from the Vega-Lite website, copy charts created in Altair, or use charts from Observable. To export the chart, copy the Vega-Lite specification into the Vega-Lite editor.

For this tutorial, we will use an example chart that you can view in the editor. Note that the complete specification is stored in the URL, so you can easily share modifications with your collaborators.

If you need to customize the figure beyond what is supported in Vega-Lite, you can open the compiled Vega in the editor, edit it as desired, and then continue with the tutorial below.

Export as SVG

Now that you have loaded the chart in the editor, you can export it with "Export" and then "Open SVG". We export the chart as an SVG, a vector graphics format, because it is infinitely scalable: your figure will stay crisp even when zoomed in.

Unfortunately, LaTeX cannot import SVGs, so we must first convert the file to PDF. There are many ways to convert SVGs to PDF, including using Illustrator or another image editor, using command line scripts, or printing to PDF in the browser. For this tutorial, we are going to use the printing feature of your browser: from the newly opened SVG image, select "File" and then "Print…" (or use Cmd+P) in your browser (e.g. Chrome or Firefox).

Now make sure that the destination is set to "Save as PDF". Then save the PDF in the directory of your LaTeX paper.

Crop the PDF

You will notice a lot of white space that we want to remove, because the SVG image is saved as a single printer page. If your figure is too large, scale it so that the output fits on a single page. To crop the file, open the PDF in the macOS Preview application, select the "Rectangular Selection" tool, and draw a box around the chart. You can adjust the box until it tightly fits the chart, then click "Crop" (or use Cmd+K). Make sure to save the newly modified file!

Embed the Chart

Lastly, embed the chart in the paper as a figure. In this case we named the PDF benchmark.pdf; adjust the command below accordingly.

\begin{figure}
\centering
\includegraphics[width=\columnwidth]{benchmark}
\caption{\label{fig:benchmark} A title that describes the figure.}
\end{figure}

That's it. Do you have feedback on this tutorial or suggestions? Please create an issue on GitHub.
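The tutorial mentions command line scripts as one alternative for the SVG-to-PDF conversion and cropping steps. A minimal sketch of such a pipeline is below, assuming rsvg-convert (from librsvg) and pdfcrop (from TeX Live) are installed; the file names chart.svg and chart.pdf are placeholders for your own exported chart.

```shell
# Hypothetical file names; adjust to match your exported chart.
svg="chart.svg"
pdf="chart.pdf"

# Step 1: convert SVG -> PDF, if the input and converter are available.
if [ -f "$svg" ] && command -v rsvg-convert >/dev/null 2>&1; then
  rsvg-convert -f pdf -o "$pdf" "$svg"
else
  echo "skipping conversion: need $svg and rsvg-convert on PATH"
fi

# Step 2: trim the surrounding whitespace, if pdfcrop is available.
if [ -f "$pdf" ] && command -v pdfcrop >/dev/null 2>&1; then
  pdfcrop "$pdf" "$pdf"
else
  echo "skipping crop: need $pdf and pdfcrop on PATH"
fi
```

This replaces both the browser print step and the manual crop in Preview, which is handy when regenerating figures from a build script.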
Q: ComplexHeatmap using column_split
I want to plot a heatmap in R using library(ComplexHeatmap).
The reason for using ComplexHeatmap is its column_split argument to Heatmap(), which lets me create four sections along the x-axis of the heatmap based on the value in one of the columns.
Entries with the same value should be grouped into one section, and likewise for the other values.
I'm attaching an example of what I'm intending to visualize:
The example has two sections along the x-axis; I want four.
A sample of my dataset is below (sectioning should be performed based on the entry in the X1 column):
dput(tdfdarkmagenta[,c(60:67)])
structure(list(TNFRSF14 = c("6.763211", "5.284519", "7.490921",
"4.609269", "5.269974", "4.647631", "6.179634", "5.441948", "4.829410",
"5.030580", "6.438149", "4.845201", "4.637916", "4.906468", "5.100337",
"4.880591", "4.561752", "4.552504", "4.553884", "5.307149", "5.006392",
"4.517924", "4.607045", "4.595832", "4.989570", "4.538372", "5.533871",
"4.950450", "5.013243", "4.520570", "5.274152", "4.666649", "4.400845",
"4.928714", "4.673502", "4.448475", "4.722818", "4.740990", "4.610013",
"5.116222", "4.489558", "4.393089", "4.478270", "4.522442", "4.648611",
"4.780437", "4.554242", "4.319169", "4.390447", "5.377440", "4.389846",
"4.807811", "4.513020", "5.489868", "4.905822", "4.859534", "5.645562",
"5.346741", "5.612692", "5.260830", "5.039774", "4.691940", "5.090038",
"5.175798", "4.944519", "4.844526", "4.681809", "4.792616", "4.986805",
"4.821405", "5.350937", "5.168791", "4.752665", "5.054333", "4.918840",
"4.708671", "5.269936", "4.859859", "4.690761", "4.607971", "6.197512",
"5.535270", "5.109438", "5.202073", "6.846271", "4.521108", "5.427523",
"4.896707", "4.881706", "4.898868", "5.553587", "4.761078", "5.387781",
"5.033667", "5.186906", "5.219224", "5.289800", "5.108414", "4.810671",
"4.975923", "5.000025", "5.497612", "5.085484", "5.747220", "4.821348",
"4.552635", "5.108517", "4.372822", "4.886677", "4.550540", "4.535185",
"4.571301", "5.135246", "4.721852", "5.315297", "5.344703", "4.732211",
"5.636453", "5.726499", "5.492068", "6.608274", "4.586360", "5.434929",
"5.550500", "6.364833", "5.023511", "5.741130", "5.279884", "4.697330",
"5.351020", "5.455380", "5.356322", "6.314431", "6.054811", "5.034309",
"5.413860", "5.335178", "5.102029", "6.000984", "5.932897", "5.689009",
"5.391170", "5.951435", "5.043789", "4.817887", "5.691450", "4.634035",
"4.596461", "5.293566", "5.137780", "5.673469", "5.681756", "5.422228",
"5.586516", "5.534513", "5.627834", "5.014984", "5.604038", "5.676470",
"4.594406", "5.257321", "4.842386", "5.576247", "5.195238", "5.239197",
"5.464640", "5.142982", "5.824495", "5.390776", "5.440580", "5.244292"
), TRIM21 = c("6.431994", "5.042253", "7.222424", "4.828634",
"5.948891", "5.123265", "6.642031", "5.904441", "5.475596", "5.353339",
"6.738790", "5.117833", "5.301989", "5.252409", "5.173978", "5.142840",
"4.936253", "5.161623", "5.070000", "5.901228", "5.454423", "5.879939",
"5.602029", "5.516002", "5.522428", "5.775431", "6.118189", "5.915588",
"6.163597", "5.296870", "5.695514", "5.823336", "5.542973", "5.212203",
"5.361452", "5.374471", "5.842928", "5.192644", "4.835399", "6.006584",
"5.229373", "5.456365", "5.252248", "5.401239", "5.140290", "5.452533",
"5.803037", "5.572374", "4.951891", "5.207188", "5.298013", "5.338679",
"4.564718", "6.732028", "6.111744", "7.474183", "6.661202", "6.403443",
"6.545940", "5.888248", "4.997507", "7.016605", "4.935572", "6.126647",
"5.677001", "5.945327", "6.589629", "6.031521", "5.866332", "5.788022",
"6.111872", "6.087375", "5.808597", "6.178624", "5.713949", "5.942519",
"5.637996", "5.424581", "5.599873", "5.284653", "6.609202", "5.435754",
"5.544703", "6.009451", "7.202513", "5.386335", "6.621233", "5.594111",
"6.312540", "5.485936", "5.419595", "6.150265", "5.899882", "5.058617",
"5.659748", "5.437870", "6.509740", "6.433295", "5.310995", "5.498675",
"5.414997", "6.637328", "5.677507", "6.835608", "5.686684", "5.897316",
"6.756414", "5.453264", "5.800830", "5.561556", "4.749356", "5.704908",
"6.355550", "5.415819", "5.515227", "6.149568", "5.638447", "6.283533",
"6.215459", "5.822403", "5.923719", "7.099936", "5.843381", "5.550354",
"5.903016", "5.778041", "7.081189", "5.768080", "5.901516", "6.312023",
"6.633226", "5.521853", "7.176372", "6.286262", "6.375185", "5.486260",
"6.130937", "7.210972", "6.227496", "7.215501", "6.709982", "6.009789",
"7.490369", "6.343237", "5.865556", "6.000012", "6.421068", "6.164297",
"5.938543", "6.017242", "5.973925", "6.084213", "6.213892", "6.936647",
"5.923585", "6.074540", "5.998629", "6.330441", "5.810274", "6.269363",
"6.010916", "6.208866", "6.034612", "5.810491", "7.147822", "6.265278",
"6.126955", "6.750558", "5.901326", "5.473538", "5.564613"),
TRIM5 = c("5.822737", "6.222604", "7.563662", "4.086133",
"6.349595", "5.150263", "5.881708", "5.091480", "6.354464",
"6.116619", "6.843245", "5.452570", "5.382441", "4.725757",
"5.534072", "5.395174", "4.324291", "4.415694", "5.203558",
"6.048923", "5.316767", "5.224266", "5.066764", "4.736292",
"6.042426", "5.373095", "6.645612", "5.533974", "5.672580",
"4.617023", "6.108689", "5.591934", "5.657896", "5.125809",
"4.438850", "5.272771", "5.736807", "4.714584", "4.534244",
"6.058367", "5.177258", "5.877033", "4.202193", "5.724201",
"5.118208", "5.401561", "5.772003", "5.051045", "5.503369",
"5.329664", "4.494426", "5.497274", "4.960003", "6.501349",
"5.650884", "6.528032", "6.182357", "5.462596", "6.706200",
"6.332626", "4.731002", "5.851416", "4.344378", "6.538150",
"6.229104", "5.635625", "6.488791", "6.223015", "6.602510",
"5.836704", "6.691600", "5.369458", "5.291462", "5.941188",
"4.132055", "5.708936", "5.616086", "6.466687", "5.597564",
"5.148402", "6.323588", "6.397155", "5.669944", "5.992870",
"6.851263", "4.895652", "6.447487", "6.193882", "6.497912",
"6.088267", "5.990293", "5.924586", "6.226032", "5.204277",
"6.660849", "5.652528", "6.479109", "6.302167", "6.004851",
"6.195296", "5.109325", "6.352197", "5.672728", "7.059679",
"5.923266", "6.404251", "6.602810", "6.258576", "5.919479",
"5.757714", "5.825573", "5.627942", "6.553201", "5.082112",
"5.894984", "6.323995", "6.144249", "6.898566", "5.889947",
"5.671488", "5.802962", "6.639332", "5.640718", "5.174362",
"5.871434", "5.267289", "5.707974", "5.866471", "5.563334",
"5.553383", "6.389321", "5.926533", "6.543673", "5.936929",
"5.545558", "5.767185", "5.950059", "6.745602", "6.031510",
"6.617051", "5.894231", "5.973619", "6.213449", "5.936016",
"5.073035", "6.029362", "5.904277", "5.537748", "5.253370",
"5.884172", "6.505674", "6.222989", "5.987814", "6.576203",
"6.096379", "6.457036", "5.855024", "6.353923", "5.861205",
"5.971539", "6.049779", "6.087083", "6.038771", "5.251336",
"5.491827", "5.842702", "5.693123", "6.314864", "5.706161",
"5.482341", "5.768043"), TRIM6.TRIM34 = c("5.937611", "5.275868",
"7.353534", "4.622495", "5.361770", "4.988066", "5.897324",
"5.285014", "5.447992", "5.810304", "6.786004", "4.703763",
"5.568188", "4.792179", "5.615872", "5.127782", "4.634241",
"4.618630", "4.463709", "5.662091", "5.081924", "4.953603",
"4.644899", "4.769084", "5.982892", "5.102814", "6.187155",
"5.885339", "6.319735", "5.172348", "5.555390", "5.268740",
"4.959861", "4.972108", "4.841870", "4.970467", "5.420196",
"4.663467", "4.732499", "6.289717", "4.963199", "5.549709",
"5.731115", "5.600103", "4.892902", "5.290226", "5.230376",
"5.383842", "4.936702", "4.556545", "4.631922", "4.863811",
"4.428710", "6.852612", "6.168029", "7.286804", "6.648190",
"5.934927", "6.919225", "6.138611", "4.925052", "7.138950",
"5.066954", "5.944666", "5.793288", "6.403477", "6.796306",
"5.687106", "6.513611", "5.444718", "6.539260", "6.040825",
"5.881115", "6.168548", "5.199162", "6.246036", "6.270044",
"6.218608", "5.681608", "5.328573", "6.097843", "5.467480",
"6.475331", "5.929094", "7.047796", "4.719675", "6.922534",
"5.383393", "5.964221", "4.714213", "5.262174", "5.279408",
"4.957118", "4.849338", "5.763230", "4.493443", "6.405372",
"5.613615", "5.209032", "5.536535", "4.857154", "6.090186",
"5.347670", "6.471024", "5.546441", "5.841984", "7.123798",
"5.616489", "5.810949", "5.142709", "4.597169", "4.683310",
"5.750587", "5.294417", "5.190489", "6.222733", "5.495578",
"6.871896", "5.589133", "6.662429", "5.352042", "6.384046",
"5.383347", "5.525282", "5.671085", "5.460051", "6.395001",
"5.907310", "5.467635", "6.104199", "6.492492", "5.920352",
"6.184735", "7.269542", "5.486369", "5.295680", "5.775933",
"5.957229", "5.837581", "7.105300", "7.495025", "5.566982",
"6.186198", "5.663092", "5.084674", "5.236772", "5.962017",
"5.167454", "4.593162", "5.992850", "5.726368", "5.688865",
"6.202907", "6.341310", "5.873099", "5.816448", "5.829305",
"6.236659", "5.513989", "5.765652", "6.056901", "5.421313",
"6.418777", "5.676975", "5.469386", "6.062755", "6.048360",
"6.200852", "5.727262", "5.469546", "5.748829"), USP18 = c("5.718693",
"6.682403", "6.357125", "5.679496", "4.106625", "4.414581",
"5.064882", "4.957291", "5.548682", "6.278062", "5.276683",
"4.422013", "4.309918", "5.111803", "4.572404", "4.542438",
"5.131603", "5.379931", "5.028311", "4.796386", "4.596530",
"5.155584", "5.456809", "4.761863", "5.198166", "5.441030",
"4.921847", "5.113289", "7.009444", "4.439582", "4.768433",
"4.833512", "4.660273", "4.808394", "5.555523", "4.531055",
"5.582966", "4.526851", "4.649801", "5.198746", "4.810157",
"5.778164", "4.847919", "5.455516", "5.113708", "5.719224",
"4.810561", "5.406566", "4.338842", "6.350501", "4.948599",
"5.231380", "4.335305", "5.381577", "5.311190", "7.202314",
"6.005184", "4.434290", "5.784484", "5.264276", "4.705270",
"5.120882", "4.668959", "4.922306", "4.675179", "4.626882",
"7.446586", "4.729425", "6.223997", "5.281221", "5.392587",
"4.811235", "4.825120", "5.207328", "5.197467", "5.460064",
"4.728236", "5.575803", "4.586449", "5.538034", "4.982738",
"4.920202", "6.770434", "5.961021", "5.766194", "5.137988",
"6.130135", "5.241656", "4.946761", "5.028543", "5.104713",
"5.036523", "4.974109", "4.772592", "4.999752", "4.324340",
"6.213396", "5.517294", "4.692500", "4.742078", "4.593844",
"5.795447", "4.638634", "5.984280", "4.755189", "5.667815",
"6.674394", "5.058963", "5.437060", "4.559434", "4.893805",
"4.797785", "5.374581", "4.495744", "5.057857", "5.600783",
"5.107624", "6.849587", "4.523906", "5.792761", "4.598304",
"5.727816", "5.180632", "5.094581", "6.094168", "4.898104",
"4.862862", "4.776479", "5.155643", "4.943359", "4.734378",
"5.096641", "5.702587", "4.918131", "4.773704", "3.890195",
"4.838250", "5.977761", "4.329212", "6.860318", "4.447631",
"4.712405", "5.392524", "6.063264", "4.936670", "4.882573",
"4.647908", "4.431911", "5.103765", "4.921930", "4.536905",
"4.856455", "4.667513", "4.759260", "4.850679", "4.633649",
"4.686094", "5.126613", "6.473523", "4.649449", "5.036461",
"5.462401", "4.640192", "5.158638", "4.810653", "5.129531",
"4.116244", "4.860353", "4.754640", "5.079082", "4.486192"
), WARS = c(" 9.741085", " 7.705491", "10.358481", " 7.590207",
" 9.337360", " 7.651537", " 9.658838", " 7.700906", " 9.144902",
" 7.704850", " 9.600405", " 6.170229", " 7.413422", " 6.514774",
" 7.886173", " 7.360641", " 5.806572", " 8.378613", " 7.494757",
" 7.876629", " 6.862292", " 8.070094", " 7.309813", " 5.811267",
" 7.574229", " 8.506426", " 8.846700", " 7.531968", " 8.280646",
" 6.830683", " 7.132248", " 7.842201", " 7.805041", " 7.525088",
" 7.210482", " 7.037882", " 7.802182", " 8.031236", " 7.705962",
" 8.336039", " 7.728969", " 7.379578", " 7.024331", " 8.732033",
" 7.894324", " 7.850012", " 9.187318", " 7.530023", " 6.954335",
" 8.453595", " 7.571864", " 7.459042", " 7.719722", " 8.787680",
" 6.740085", " 8.728310", " 8.523625", " 8.393748", " 8.661276",
" 6.579634", " 7.395377", " 7.935308", " 7.322103", " 7.212903",
" 8.356472", " 8.390128", " 7.632168", " 8.357761", " 8.271615",
" 7.471954", " 8.106076", " 7.073513", " 7.578518", " 8.297072",
" 6.849216", " 8.158288", " 7.206605", " 7.681974", " 6.778407",
" 8.272037", " 8.182982", " 7.378438", " 7.716711", " 8.512143",
"10.716291", " 8.820795", " 8.121732", " 8.436596", " 9.147759",
" 6.893998", " 7.259598", " 8.330251", " 8.315300", " 5.920632",
" 7.070069", " 7.281612", " 8.554079", " 9.142981", " 7.950271",
" 7.562227", " 6.715376", "10.280994", " 7.605400", " 9.554562",
" 7.978580", " 8.346153", " 9.568928", " 8.010549", " 8.742179",
" 7.982264", " 6.089002", " 8.265322", " 9.395761", " 7.916445",
" 7.760482", " 8.051640", " 7.734232", " 8.644975", " 7.554951",
" 6.861567", " 7.968219", " 8.652426", " 7.602107", " 7.395093",
" 9.027995", " 8.386802", "10.027226", " 7.902295", " 9.087707",
" 8.789210", " 7.984577", " 8.224228", " 8.709374", " 8.580686",
" 8.745083", " 6.777630", " 7.978246", "10.020118", " 8.364781",
" 8.539831", " 8.263803", " 8.107275", " 9.916640", " 8.512989",
" 7.057656", " 8.755297", " 8.717764", " 8.466065", " 7.787823",
" 8.103300", " 7.461842", " 8.445302", " 7.790692", " 8.475600",
" 7.720987", " 8.306191", " 9.288390", " 7.711786", " 7.908223",
" 8.632697", " 7.570594", " 8.941366", " 8.272476", " 8.846527",
" 7.762084", " 8.358732", " 8.008650", " 8.841305", " 7.768422",
" 7.979987", " 7.296068"), XAF1 = c(" 7.204336", " 4.676853",
" 8.461538", " 4.970523", " 4.757121", " 5.646469", " 6.941820",
" 5.332072", " 4.894905", " 9.288142", " 7.185457", " 5.648606",
" 5.498739", " 4.955205", " 6.115092", " 5.048723", " 5.473210",
" 5.189232", " 5.009301", " 4.940164", " 6.189979", " 5.492820",
" 6.522928", " 4.583334", " 4.768236", " 5.385628", " 6.153866",
" 6.110116", " 8.693712", " 5.400515", " 6.456129", " 5.645278",
" 4.918446", " 6.186246", " 6.612541", " 5.076613", " 5.149972",
" 5.243527", " 4.802256", " 8.295490", " 6.441114", " 7.321974",
" 5.441317", " 5.674706", " 5.010786", " 6.008704", " 7.002941",
" 5.785526", " 5.013941", " 5.039298", " 4.768318", " 6.526325",
" 5.238632", " 8.058764", " 8.498692", " 8.999449", " 7.495898",
" 6.122763", " 7.863722", " 5.514492", " 5.440231", " 6.260331",
" 5.845690", " 5.690694", " 4.753152", " 6.636006", " 8.046649",
" 5.578944", " 6.356472", " 7.718760", " 7.648011", " 6.766714",
" 5.531841", " 6.713646", " 7.022129", " 7.703428", " 5.565680",
" 5.796008", " 4.993859", " 4.810499", " 6.990522", " 9.967345",
" 4.348337", " 9.314201", " 7.574084", " 4.747644", " 7.997477",
" 9.112394", " 6.699651", " 7.482104", " 8.548686", " 6.483866",
" 7.194462", " 5.999078", " 7.251500", " 8.250832", " 7.763383",
" 7.552806", " 8.274743", " 8.746934", " 5.248179", " 7.968312",
" 5.104672", " 7.761367", " 5.763023", " 7.112653", " 9.984765",
" 5.396954", " 8.274371", " 5.353193", " 5.648747", " 6.129513",
" 7.872605", " 4.797765", " 7.341686", " 7.192397", " 5.095878",
" 8.178023", " 7.110888", " 6.506158", " 5.233231", " 9.526941",
" 6.608939", " 5.997255", " 7.838331", " 6.833276", " 9.318884",
" 6.816009", " 5.778280", " 6.535346", " 5.960834", " 6.577319",
" 7.445455", " 8.068162", " 6.985303", " 6.037019", " 6.048727",
"10.166330", " 8.883953", " 9.440612", " 7.105944", " 7.351310",
" 7.450230", " 8.073638", " 7.250787", " 6.771193", " 5.873439",
" 5.426779", " 5.337160", " 6.399303", " 5.838342", " 7.407744",
" 6.558704", " 6.943017", " 6.376653", " 5.095157", " 6.691384",
" 7.734144", " 5.813734", " 7.271785", " 8.273823", " 9.423574",
" 5.930291", " 7.297977", " 4.916875", " 7.328354", " 6.662784",
" 7.581749", " 6.125870", " 7.011328", " 6.461728"), X1 = structure(c(1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L,
2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L,
2L, 2L, 2L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L,
3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L,
3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 4L, 4L, 4L, 4L,
4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L,
4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L,
4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L, 4L,
4L, 4L, 4L, 4L, 4L), .Label = c("Proneural", "Neural", "Classical",
"Mesenchymal"), class = "factor")), class = "data.frame", row.names = c("TCGA.02.0003.01",
"TCGA.02.0010.01", "TCGA.02.0011.01", "TCGA.02.0014.01", "TCGA.02.0024.01",
"TCGA.02.0026.01", "TCGA.02.0028.01", "TCGA.02.0046.01", "TCGA.02.0047.01",
"TCGA.02.0048.01", "TCGA.02.0060.01", "TCGA.02.0069.01", "TCGA.02.0074.01",
"TCGA.02.0080.01", "TCGA.02.0084.01", "TCGA.02.0087.01", "TCGA.02.0104.01",
"TCGA.02.0114.01", "TCGA.02.0281.01", "TCGA.02.0321.01", "TCGA.02.0325.01",
"TCGA.02.0338.01", "TCGA.02.0339.01", "TCGA.02.0432.01", "TCGA.02.0439.01",
"TCGA.02.0440.01", "TCGA.02.0446.01", "TCGA.06.0128.01", "TCGA.06.0129.01",
"TCGA.06.0146.01", "TCGA.06.0156.01", "TCGA.06.0166.01", "TCGA.06.0174.01",
"TCGA.06.0177.01", "TCGA.06.0238.01", "TCGA.06.0241.01", "TCGA.06.0410.01",
"TCGA.06.0413.01", "TCGA.06.0414.01", "TCGA.06.0646.01", "TCGA.06.0648.01",
"TCGA.08.0245.01", "TCGA.08.0344.01", "TCGA.08.0347.01", "TCGA.08.0348.01",
"TCGA.08.0350.01", "TCGA.08.0353.01", "TCGA.08.0359.01", "TCGA.08.0385.01",
"TCGA.08.0517.01", "TCGA.08.0524.01", "TCGA.12.0616.01", "TCGA.12.0618.01",
"TCGA.02.0089.01", "TCGA.02.0113.01", "TCGA.02.0115.01", "TCGA.02.0451.01",
"TCGA.06.0132.01", "TCGA.06.0133.01", "TCGA.06.0138.01", "TCGA.06.0160.01",
"TCGA.06.0162.01", "TCGA.06.0167.01", "TCGA.06.0171.01", "TCGA.06.0173.01",
"TCGA.06.0179.01", "TCGA.06.0182.01", "TCGA.06.0185.01", "TCGA.06.0195.01",
"TCGA.06.0208.01", "TCGA.06.0214.01", "TCGA.06.0219.01", "TCGA.06.0221.01",
"TCGA.06.0237.01", "TCGA.06.0240.01", "TCGA.08.0349.01", "TCGA.08.0380.01",
"TCGA.08.0386.01", "TCGA.08.0520.01", "TCGA.02.0007.01", "TCGA.02.0009.01",
"TCGA.02.0016.01", "TCGA.02.0021.01", "TCGA.02.0023.01", "TCGA.02.0027.01",
"TCGA.02.0038.01", "TCGA.02.0043.01", "TCGA.02.0070.01", "TCGA.02.0102.01",
"TCGA.02.0260.01", "TCGA.02.0269.01", "TCGA.02.0285.01", "TCGA.02.0289.01",
"TCGA.02.0290.01", "TCGA.02.0317.01", "TCGA.02.0333.01", "TCGA.02.0422.01",
"TCGA.02.0430.01", "TCGA.06.0125.01", "TCGA.06.0126.01", "TCGA.06.0137.01",
"TCGA.06.0145.01", "TCGA.06.0148.01", "TCGA.06.0187.01", "TCGA.06.0211.01",
"TCGA.06.0402.01", "TCGA.08.0246.01", "TCGA.08.0354.01", "TCGA.08.0355.01",
"TCGA.08.0357.01", "TCGA.08.0358.01", "TCGA.08.0375.01", "TCGA.08.0511.01",
"TCGA.08.0514.01", "TCGA.08.0518.01", "TCGA.08.0529.01", "TCGA.08.0531.01",
"TCGA.02.0004.01", "TCGA.02.0025.01", "TCGA.02.0033.01", "TCGA.02.0034.01",
"TCGA.02.0039.01", "TCGA.02.0051.01", "TCGA.02.0054.01", "TCGA.02.0057.01",
"TCGA.02.0059.01", "TCGA.02.0064.01", "TCGA.02.0075.01", "TCGA.02.0079.01",
"TCGA.02.0085.01", "TCGA.02.0086.01", "TCGA.02.0099.01", "TCGA.02.0106.01",
"TCGA.02.0107.01", "TCGA.02.0111.01", "TCGA.02.0326.01", "TCGA.02.0337.01",
"TCGA.06.0122.01", "TCGA.06.0124.01", "TCGA.06.0130.01", "TCGA.06.0139.01",
"TCGA.06.0143.01", "TCGA.06.0147.01", "TCGA.06.0149.01", "TCGA.06.0152.01",
"TCGA.06.0154.01", "TCGA.06.0164.01", "TCGA.06.0175.01", "TCGA.06.0176.01",
"TCGA.06.0184.01", "TCGA.06.0189.01", "TCGA.06.0190.01", "TCGA.06.0194.01",
"TCGA.06.0197.01", "TCGA.06.0210.01", "TCGA.06.0397.01", "TCGA.06.0409.01",
"TCGA.06.0412.01", "TCGA.06.0644.01", "TCGA.06.0645.01", "TCGA.08.0346.01",
"TCGA.08.0352.01", "TCGA.08.0360.01", "TCGA.08.0390.01", "TCGA.08.0392.01",
"TCGA.08.0509.01", "TCGA.08.0510.01", "TCGA.08.0512.01", "TCGA.08.0522.01",
"TCGA.12.0619.01", "TCGA.12.0620.01"))
My attempt has been:
Heatmap(data.matrix(tdfdarkgrey), column_split =tdfdarkgrey, show_row_names = FALSE, show_row_dend = FALSE, show_column_dend = FALSE, show_column_names = FALSE,
show_parent_dend_line = FALSE, cluster_rows = FALSE, cluster_columns = FALSE, column_title = NULL,
heatmap_legend_param = list(title= c("Scale")))
Any suggestions would be helpful.
A: Given your data provided by dput (in the example below, this is called tdfdarkgrey), you need to transpose your matrix to get a column gap. I provided the column_split vector separately from the matrix to be drawn and exaggerated the column_gap for better visibility.
Example below:
library(ComplexHeatmap)
Heatmap(t(data.matrix(tdfdarkgrey[, grep("^X1$", colnames(tdfdarkgrey), invert = TRUE)])),
        column_split = tdfdarkgrey$X1,
        show_row_names = FALSE, show_row_dend = FALSE,
        show_column_dend = FALSE, show_column_names = FALSE,
        show_parent_dend_line = FALSE,
        cluster_rows = FALSE, cluster_columns = FALSE,
        column_title = NULL,
        heatmap_legend_param = list(title = "Scale"),
        column_gap = unit(0.05, "npc"))
Created on 2022-06-20 by the reprex package (v2.0.1)
Need a quick, easy and delicious dinner idea that the whole family will enjoy and can even be used for Game Day? This Easy Enchilada Pasta recipe is an all time favorite of ours. You can make the entire meal in one pot, and it's a great way to spice up your game day!
The folks at Safeway and Old El Paso™ asked me to share how I spice up game day for my family, and this recipe instantly came to mind. This Easy Enchilada Pasta recipe is one of those recipes that you can quickly throw together on a hectic weeknight, yet it's so good and such a crowd pleaser that I love to use it when we entertain as well. I have two super picky eaters, yet this is one of those recipes that they both love and request often. Thanks to Safeway and Old El Paso for sponsoring this post.
While you see me serving this in a casserole dish, it is actually made all in one pot, and can be served right from that pot. I love not dirtying all my dishes or having three burners going on my stove. The other thing I like about this recipe is that it's both flexible and forgiving. You can play around with the ingredients and the amounts, and it's still going to turn out great. When I'm entertaining, I love recipes that don't take a super sharp focus to get right. It allows me to both cook and visit, which is fantastic.
For this recipe you can use any of the Old El Paso red enchilada sauces that you prefer. I tend to go mild as my little one doesn't like things too spicy, but if your family likes a bit more of a kick, by all means, go for it! You can also completely change up this recipe and go with one of the green chile sauces. When I do that I like to use ground turkey in place of ground beef, and a Monterey Jack cheese. It will give you an entirely different flavor, we've had weeks where I've made both!
Your ingredients for this are pretty simple. Dried pasta, ground beef, chicken (or beef) broth, Old El Paso Enchilada Sauce, cream cheese, corn, cheese and some toppings such as tomatoes, black olives, green onions, whatever you like.
You start by browning your meat in a pan that will be large enough to hold the entire recipe. I use approximately a pound of ground beef, but it can be a little over or under.
Once it has cooked, you can drain off any fat, and then add in your Old El Paso Enchilada Sauce as well as your broth. I use chicken broth because I almost always have it on hand, but you can absolutely go with a beef broth as well.
And then you add your pasta. I cover my pan and let the pasta, beef, enchilada sauce and broth cook for about 10 minutes. Every pasta cooks a bit differently, so you'll want to check your pasta starting around the seven minute mark. Once it starts to get close to being the texture you like (we prefer ours al dente), you'll add in your cream cheese that has been cut into cubes and stir it well.
After that has cooked for a minute or so, it's time to add the corn. I use frozen corn and I don't pre-cook it. I don't however want to add in frozen corn and cool down my entire meal, so I run the frozen corn under warm water in a strainer prior to adding it to the pot. Basically the corn is now thawed, and it just needs to heat up in the mixture.
After the corn goes in I cook it for another minute or two, then top with cheese and toppings, and it's time to serve. Seriously SO easy and so delicious. Such a great meal to share with friends and family for game day.
I hope that your friends and family love this as much as we do!
In a large pot or skillet (large enough to hold the entire recipe), cook the onions and garlic over medium heat until the onions are translucent. Add the ground beef and cook until thoroughly browned.
Once the meat is browned, add the enchilada sauce, broth, and dried pasta, cover, and bring to a simmer. Cook for approximately 7-8 minutes, then start checking the progress of the pasta. When the pasta starts to reach al dente, add in the cream cheese and stir well. Cover and allow to cook for 1-2 more minutes. Stir in the corn, and cook for 1-2 minutes.
When pasta is cooked how you like it, add in cheese, which can be stirred in or allowed to melt on the top. Sprinkle with any additional ingredients such as tomatoes or olives. Serve immediately.
Love recipes packed full of enchilada flavor? I've got some more recipes you'll want to check out!
Chelsea's Messy Apron has a great Easy Crockpot Creamy Chicken Chili recipe.
Taste & Tell has an amazing (and super creative!) Enchilada Sloppy Joe recipe.
The Cookie Rookie has a fun Chicken Enchilada Pasta Salad recipe.
Wine and Glue has a delicious Slow Cooker Chicken Enchilada Soup recipe.
Shugary Sweets has some tasty Enchilada Beef Rollups.
And you won't want to miss this 5-Minute Blender Enchilada Sauce from Crunchy Creamy Sweet.
Disclosure: This post was sponsored by Safeway and Old El Paso. All opinions however are mine and mine alone.
I am adding this to our menu for next week! My Hubby loves spicy pasta so this is going to be a hit!
When I say I make this almost weekly, I'm not exaggerating at all!
When is the pasta added exactly? It's mentioned to check its progress - but when do you put it in?
Sarah, after browning your ground beef, you add your enchilada sauce, chicken stock/broth and the pasta, which cooks in those liquids. Enjoy, we have this at least 2-3 times a month!
Tolkien Trivia
General questions and answers regarding the upcoming film trilogy are posted here. If you have a question to contribute to the FAQ, please send it to me.
1. How many movies have been made?
2. Who made these movies?
3. When will the movies be released?
4. What are the movies going to be rated?
5. Is there an official web site for the trilogy?
6. What about film versions of The Hobbit or The Silmarillion?
7. What was the budget for these films?
8. Was Christopher Tolkien or the Tolkien Estate involved in these films?
9. What kind of Computer Generated Imaging will be used in the films?
10. Where were the movies filmed?
11. Will the Hobbits be played by midgets?
12. Who has been cast in the movies?
13. When will the movies be released in my country?
14. Have the writers changed the story for the movies?
15. Will there be different languages in the movies?
16. What will the war scenes be like?
17. When will there be theatrical previews for the movies?
18. How long will the films be?
19. Who composed the music for the films?
20. Who directed the 1978 animated version of The Lord of the Rings?
21. When will The Fellowship of the Ring be released on DVD or VHS?
A total of 3 films have been made, one for each book in J.R.R. Tolkien's trilogy. The titles of the films will also coincide with the titles of the books. Filming wrapped in December 2000 on all three movies, which were filmed concurrently... a first in cinema history!
Peter Jackson, director of The Frighteners and Heavenly Creatures, directed and produced the trilogy, in cooperation with Barrie Osborne. Assistant directors included Carolynne Cunningham and Dave Norris. The studio was New Line Cinema and the production company was Peter Jackson's WingNut Films.
The first part of the trilogy, The Fellowship of the Ring, is slated for release on December 19, 2001. The Two Towers will follow December 14, 2002 and The Return of the King is expected to be released on December 14, 2003.
Peter Jackson is contractually obligated to deliver a PG-13 rated film. However, he has stated many times that he is shooting for a very "hard" PG-13 -- something that will push the envelope of the rating. He feels this is the only way to translate the "realness" of Middle-earth into the films.
Yes, the official web site can be visited at: http://www.lordoftherings.net
There you can find teaser trailers and official pictures of the film.
6. If the trilogy is a success, will Jackson consider producing a film version of The Hobbit or The Silmarillion?
Thus far Jackson has no plans to do this. You never know, though! New Line owns the film rights to BOTH of those Tolkien titles. Jackson has stated that New Line would almost definitely film The Hobbit if The Lord of the Rings is a success -- whether Jackson would be involved is yet to be seen!
The total budget for all 3 films was $270 million. The New Zealand exchange rate, as well as cheaper services within the small country, literally turned the $270 million into $810 million. So Jackson actually had $270 million per film!
8. Was Christopher Tolkien or the Tolkien Estate involved in the films?
No. They didn't want to get involved because their association with the film might be seen as an endorsement, making the film "official" somehow. That was a situation that Christopher Tolkien and the Tolkien Estate did not want.
Gollum will be entirely CG. Weta (Jackson's special effects company) has also developed some software called MASSIVE, which will allow the generation and execution of large-scale battles -- those who have seen it have declared it "revolutionary." Hobbits and Dwarves will be shrunk using computer technology. It is also rumored that the features of the Elves will be digitally captured and altered in the computer to get a unique look, however this may have changed.
The movies were shot on location in numerous areas around New Zealand. Set shooting was done at Camperdown Studios in Miramar.
No, in order to reduce the height of Hobbits and Dwarves, they will be digitally shrunk by a computer in post production. Various other techniques will be employed as well to make sure that the Hobbits look 3 to 4 feet tall, including camera trickery such as "forced perspective" and even some film doubles.
The cast varies from relatively unknown actors to Shakespearean players to Hollywood stars. Our complete list of cast members is available here.
New Line has committed to releasing the Lord of the Rings films around the world at the same time. This means that fans from Argentina to Poland will all be able to see the films on the first day of their respective releases.
This question has led many Tolkien fans to be rather cynical. In short: Peter Jackson has edited the story. Some major changes that we are aware of include the omission of Tom Bombadil and the expansion of the role of Arwen. Most likely there are additional minor changes. While these changes perhaps are disheartening, it's important to remember that overall, Jackson is going to be very faithful to Tolkien's works -- he's a huge fan. Moreover, the cast by and large have read the books, and the crew refer to the books continually.
Yes! The extent to which they will be used is not known, however we do know that parts of the dialogue will be spoken in Elvish, with English subtitles. Peter Jackson has hired several Tolkien language experts to ensure that the actors get the pronunciation just right.
Magnificent, we hope! The battles will not have Braveheart-style violence, as New Line is aiming for a PG-13 rating. What we've seen and heard so far is encouraging, though. Extras (Orcs, Elves, Rohirrim, etc.) will be played in part by soldiers in the New Zealand army. Jackson has 200 full time horses, as well as many more temporary horses to use as steeds in the battles. The costumes and armor are being painstakingly designed, forged at Weta like the real stuff! All Orc masks are detailed individually and all of the chain mail is hand-made! The folks at Weta will also be using some software called Massive, which can populate battlefields with thousands of computer-generated soldiers.
We don't know for sure. The first theatrical teaser was released in January of 2001 and can be downloaded at the official site. Rumors are flying, but we know that there will be a few additional teasers (most likely in July and Sep. of 2001). The first theatrical trailer will probably come out around Oct. 2001.
Jackson has indicated that the target length for all three films will be approximately 7 to 8 hours total running time. He has also indicated that The Fellowship of the Ring will probably be the longest of the trilogy.
Howard Shore composed the score for all three films. He relied heavily upon darker choral music as well as medieval instruments in his work.
Ralph Bakshi directed the animated version of The Lord of the Rings that was released in 1978. The film was widely considered to be a disaster, although there are many who feel that it was a good film in its own right. The film was recently re-released on DVD and VHS and is available at Amazon.com.
The standard version of Fellowship is scheduled to be released on DVD and VHS in August of 2002. The expanded version, with bonus footage and commentary, will be released on DVD only in November of 2002. According to New Line, this "Special Edition" version will be rated R for battle scenes.
According to the most recent ZIP code 41231 Kentucky demographics data available from the 2022 Census Bureau released in the American Community Survey in November of 2022, Figure 1 ZIP code 41231 depicts it has a Population 2021 of 1,019 which is less than most other zip codes in the local area. The zip code with the highest population in the area is 41224 which depicts a population of 3,775 (approximately 3.7 times bigger). Figure 3 uses the ZIP code 41231 population data for a comparison of the population growth/population change estimates from the years 2010 to 2021 and 41231 Kentucky shows an increase of 17 (2%).
The total ZIP code 41231 Kentucky greater area population percent change for all areas for the years from 2010 to 2021 is shown in Figure 4 and for 41231 depicts it has a Population Change of 1.7% which is in the mid point range of other zip codes in the greater region.
Looking at the ZIP code 41231 population density (measured as people per square mile) and providing comparisons to both the national and state average population density in Figure 5, 41231 Kentucky depicts it has 177 people per square mile which is the second most people per square mile of all the zip codes in the greater ZIP code 41231 region. The next lower population density is 41203, about 11.5% smaller with a population density of 158. The zip code with the highest population density in the area is 41267 which depicts a people per square mile of 198 (12.3% larger). Figure 6 provides ZIP code 41231 demographics for the overall median age for all people in the region and 41231 depicts it has a Median Age of 52.1 which is the second most of all the zip codes in the greater region. The zip code with the highest overall median age of all people in the area is 41250 which shows an age of 62.5 (20.0% larger). The ZIP code 41231 Kentucky population data for median age broken out by gender (men versus women) is shown in Figure 7. 41231 Kentucky depicts a median age of men approximately three-fourths that of women.
The next demographic analysis (Figure 8) looks at large generational ZIP code 41231 population groups (useful for employment-related research or for identifying areas with retirees). Key findings in this chart include that 41231 Kentucky has one of the largest proportions of people less than 20 years of age at 21.7% of the total and is ranked #3. Only #2 41267 (23.0%), and #1 25669 (24.3%) are larger. Second, it has the smallest proportion of people between 30 and 39 years old at 6.4% of the total. Third, it has the largest proportion of people between 40 and 49 years old at 23.7% of the total and is ranked #1. Figure 11 compares the ratio of the number of men to the number of women and shows the total male population 29.5% larger than the total female population.
Figure 12 shows the detailed marriage characteristics broken down by %residents who are married, never married, single, divorced, and widowed. ZIP code 41231 has the largest proportion of never married percent at 21% of the total and is ranked #1. Figure 14 compares the average household size using the average number of people in a family for ZIP code 41231 households. 41231 depicts it has a Family Size of 3.1 which is less than most other zip codes in the metro area. The zip code with the highest average family size in the area is 41267 which depicts an average family size of 6.2 (about twice as large).
Figure 15 shows the overall ratio of ZIP code 41231 households for families to the total number of ZIP code 41231 households and that 41231 depicts it has a Families of 66% which is less than most other zip codes in the metropolitan area. The zip code with the highest percent of people who are in a family in the area is 41250 which shows a families percent of 89% (35.8% larger).
Looking at ZIP code 41231 households that are headed by a husband and wife as a percent of all families in Figure 16, 41231 depicts it has a Married-couple family of 79% which is the second smallest when ranked by percent of people in a husband and wife family of all the other zip codes in the greater region. The zip code with the highest percent of people in a husband and wife family in the area is 41203 which depicts a husband and wife family percent of 92% (15.9% larger). Figure 17 shows ZIP code 41231 demographics for the head of household for each place using a breakdown of married-couple, male-headed alone, and female-headed alone. 41231 Kentucky has the largest proportion of households at 21.0% of the total and is ranked #1.
The next section of charts provide a detailed look at mothers and baby births that occurred over the past 12 months. Figure 18 shows the rate of women aged 15 to 50 years old who have given birth. 41231 depicts it has a Birth Rate of 25.2% which is the highest of all zip codes in the local area. Figure 19 shows the breakdown of the mother's age for all baby births that occurred in the last 12 months and it has the largest proportion of percent of births to mothers aged 20 to 24 at 100% of the total and is ranked #1.
The next set of demographic data looks at the marital status for ZIP code 41231 households which is useful for internet research. Figure 28 compares the total number of single people in each area. 41231 depicts it has a Total Single People of 46% which is the second most percent single people of all the zip codes in the greater ZIP code 41231 region. The zip code with the highest percent of people who are single for any reason in the area is 25669 which depicts a percent single of 49% (only about 4.9% larger). Comparing percent of people who are single for any reason to the United States average of 50%, ZIP code 41231 is about 8.4% smaller. Also, measured against the state of Kentucky, percent of people who are single for any reason of 49%, ZIP code 41231 is about 6.3% smaller.
Figure 30 compares the single people in each area broken down by never married, divorced, and widowed. 41231 Kentucky has the largest proportion of percent never married at 21% of the total and is ranked #1. Second, it has one of the largest proportions of percent divorced at 5% of the total and is ranked #3. Only #2 41224 (8%), and #1 25674 (11%) are larger. Figure 31 shows the demographics for the number of single men adults in each area broken out by never married, divorced and widowed. 41231 Kentucky has one of the largest proportions of men who have never been married at 18% of the total and is ranked #2. The only larger zip code being 25674 with 18%. Figure 32 shows the number of single women adults in each area broken out by never married, divorced and widowed. 41231 Kentucky has the largest proportion of women who have never been married at 25% of the total and is ranked #1. Second, it has one of the largest proportions of women who are divorced at 10% of the total and is ranked #3. Only #2 41224 (13%), and #1 25674 (15%) are larger.
Figure 33 shows the ZIP code 41231 demographics for the ratio of the number of single men between the age of 18 and 65, in each area, broken down by age group for the ZIP code 41231 metro area. 41231 Kentucky has one of the largest proportions of single men 18 to 24 at 49% of the total and is ranked #3. Only #2 25669 (55%), and #1 41267 (80%) are larger. Second, it has the largest proportion of single men 25 to 29 at 41% of the total and is ranked #1. Third, it has the largest proportion of single men 40 to 44 at 10% of the total and is ranked #1.
Figure 34 shows the ZIP code 41231 population data for the ratio of the number of single women between the age of 18 and 65, in each area, broken down by age group for the ZIP code 41231 metro area. 41231 Kentucky has the largest proportion of single women 50 to 60 at 100% of the total and is ranked #1.
Zip code 41231, Kentucky Demographics Data
Figure 1: 41231, KY and Area 2021 Population Data
Figure 2: Map of 41231, KY and Area
Figure 3: 41231, KY Population Change 2010 to 2021
Figure 4: 41231, KY 2010 to 2021 Population Percent Change
Figure 5: 41231, KY Population Density
Figure 6: Median Age in 41231, KY
Figure 7: Median Age by Gender in 41231, KY
Figure 8: 41231, KY and Area Age by Generation
Figure 9: 41231, KY and Area Ethnicity Makeup
Figure 10: 41231, KY Hispanic Population
Figure 12: 41231, KY Marriage Status
Figure 14: 41231, KY Average Family Size in Household
Figure 17: 41231, KY Head of Household
Figure 18: 41231, KY Birth Rate (Last 12 months)
Figure 20: 41231, KY Teenager Birth Rate
Figure 21: 41231, KY Unwed Mothers as % of All Births
Figure 22: 41231, KY Unwed and On Public Assistance
Figure 24: 41231, KY Unwed Mother Births By Age Group
Figure 25: 41231, KY Unwed Mother Birth Rate By Race
Figure 26: 41231, KY Unwed Mother Births By Poverty Level
Figure 30: 41231, KY Single People Broken Down By Reason
Figure 31: 41231, KY Single Men in Area
Figure 32: 41231, KY Single Women in Area
Figure 35: 41231, KY Citizenship Status
Figure 36: Citizen Place of Birth for 41231, KY
Figure 37: 41231, KY Percent of Population Foreign Born
Figure 39: 41231, KY Non Citizen Age Breakout
Figure 43: 41231, KY Foreign Born People are From What Region
Figure 44: 41231, KY Foreign-Born World Region of Birth
Figure 45: 41231, KY Foreign-Born Sub-Region of Birth
Figure 46: Foreign-Born Country of People Living in 41231, KY
Tag Info (from https://dsp.stackexchange.com/tags/statistics/new)

The pdf $f_Z(z)$ of the sum $Z=X+Y$ of any two jointly continuous random variables $X$ and $Y$ with joint pdf $f_{X,Y}(x,y)$ is as follows:

$$\text{For all } z, -\infty < z < \infty, ~~ f_Z(z) = \int_{-\infty}^\infty f_{X,Y}(x,z-x) \, \mathrm dx.\tag{1}$$

For the special case when $X$ and $Y$ are nonnegative random variables (including as a special ...
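As a quick numerical sanity check of the convolution formula (1) — a sketch; the choice of independent Uniform(0, 1) variables, NumPy, and the Riemann-sum grid are my own illustration, not part of the original answer. For independent X, Y ~ Uniform(0, 1) the joint pdf factors as f_X(x) f_Y(z − x), and the sum Z has the triangular density min(z, 2 − z) on [0, 2]:

```python
import numpy as np

# Riemann-sum evaluation of f_Z(z) = ∫ f_X(x) f_Y(z - x) dx for
# independent X, Y ~ Uniform(0, 1); the exact density is min(z, 2 - z) on [0, 2].
def f_uniform(x):
    return np.where((x >= 0.0) & (x <= 1.0), 1.0, 0.0)

def f_sum(z, n=30000):
    x = np.linspace(-1.0, 2.0, n)
    dx = x[1] - x[0]
    return float(np.sum(f_uniform(x) * f_uniform(z - x)) * dx)

for z in (0.5, 1.0, 1.5):
    print(z, round(f_sum(z), 2))
```

The printed values match min(z, 2 − z) to within the grid resolution.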
Q: What to do when daylight savings results in duplicate data rows? I have a fact table for energy consumption as follows:
f_meter_data:
utc_calendar_id
local_calendar_id
meter_id
reading
timestamp
The calendar table is structured as per the Kimball recommendations, and it's the recommendations in the Data Warehouse Toolkit that are why I have the two calendar IDs so users can query on local and UTC time.
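To make the dual-key design concrete, here is a sketch of deriving both calendar keys from one reading timestamp (the yyyymmddhhmm surrogate-key format, the time zone, and the example timestamp are my own illustration, not part of the schema above):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def calendar_ids(reading_utc, tz):
    """Derive the UTC and local calendar surrogate keys for one reading."""
    local = reading_utc.astimezone(tz)
    fmt = "%Y%m%d%H%M"  # hypothetical yyyymmddhhmm surrogate-key format
    return int(reading_utc.strftime(fmt)), int(local.strftime(fmt))

utc_id, local_id = calendar_ids(
    datetime(2023, 11, 5, 6, 0, tzinfo=timezone.utc), ZoneInfo("America/New_York"))
print(utc_id, local_id)  # 202311050600 202311050100
```

Note that the local key alone is ambiguous during the fall-back hour — which is exactly the problem discussed below — while the UTC key never is.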
This is all well and good but the problems arise when daylight savings kicks in.
As the granularity is half-hour periods, there will be duplicate fact records when the clocks change.
And when the clocks change in the other direction there will be a gap in the data.
How can I handle this situation?
Should I average the duplicate values and store that instead?
And for when it's a gap in the data, should I use an average of the point immediately before and the point immediately after the gap?
A: I have a feeling this question may end up getting closed as "primarily opinion based", but my particular opinion is that the system should be set up to deal with the fact that not every day has exactly 24 hours. There may be 23, 24 or 25. (Or, if you're on Lord Howe Island, 23.5, 24 or 24.5).
Depending on when your extra hour falls (which will be different for each time zone), you may have something like:
00 01a 01b 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23
Or you might consider coupling the hour with the local UTC offset, like:
00-04:00 01-04:00 01-05:00 02-05:00 03-05:00 etc...
Or if you're doing half-hour buckets:
00:00-04:00 00:30-04:00 01:00-04:00 01:30-04:00 01:00-05:00 01:30-05:00 ...
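For instance — a sketch assuming Python 3.9+ with the standard zoneinfo module; the US Eastern zone and the 2023 fall-back date are just an illustration — offset-qualified keys keep the repeated half-hours distinct:

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

def bucket_key(utc_dt, tz):
    """Label a half-hour bucket with local wall time plus its UTC offset."""
    return utc_dt.astimezone(tz).strftime("%H:%M%z")

tz = ZoneInfo("America/New_York")
# 2023-11-05: clocks fall back, so 01:00-01:59 local occurs twice.
start = datetime(2023, 11, 5, 5, 0, tzinfo=timezone.utc)  # 01:00 EDT
keys = [bucket_key(start + timedelta(minutes=30 * i), tz) for i in range(4)]
print(keys)  # ['01:00-0400', '01:30-0400', '01:00-0500', '01:30-0500']
```

The four consecutive UTC half-hours map to four distinct keys, even though the local wall clock repeats.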
It probably wouldn't be appropriate to do any averaging to align to 24 hours. If you did, then totals would be off.
You also should consider how people will be using the data. Will they be trying to figure out trends across a given hour of the day? If so, then how will they compensate for a spike or dip caused by the DST transition? It may be as simple as putting an asterisk and footnote on the output report. Or it may be much more involved than that, depending on the usage.
Also, you said you're working with 30-minute intervals. Be aware that there are some time zones that are 45-minute offset (Nepal, Chatham Islands, and a small region in Australia). So if you're trying to cover the entire world then you would need 15-minute interval buckets.
And, as Whichert pointed out in comments, if you're using UTC then there is no daylight saving time. It's only when you group by local-time that you'll have this concern.
You may also find the graphs in the DST tag wiki useful.
A: I think you should simplify this with your business. Meaning, when the clock is turned back, you also turn back your records: push the old records out into a warning or error table and put the new ones in for the same interval.
As suggested by Matt, anyways reports would not tell the true story, if run by local time. Then, why give wrong data in the reports.
Or, to follow up on Matt's advice, change your interval records. You should then not bind the time interval to the local_id. Instead use an Interval_seq_id that runs in intervals of 30 minutes, which might give 46 records (1-46), 48 records (1-48) or 50 records (1-50) for a given day depending on your region. This technically removes your duplicate problems on Local_Int_starttime and Time_interval_Endtime, since the key is no longer dependent on or bound to the time intervals.
This though moves the issue to your reports/query tools to solve how they now want to display time in the graphs that have duplicates on local time.Especially, if you want to do some analytics based on local time and meter reading. Though, this way the database design now differentiates the records through Interval_Seq_id and not using the time interval.
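A rough way to derive those per-day record counts — a sketch assuming Python's standard zoneinfo module and a one-hour DST shift, which yields 46, 48 or 50 half-hour slots per local day; the zone and dates are illustrative:

```python
from datetime import date, datetime, timedelta, timezone
from zoneinfo import ZoneInfo

def half_hour_slots(day, tz):
    """Count the 30-minute interval records in one local calendar day."""
    start = datetime(day.year, day.month, day.day, tzinfo=tz)
    nxt = day + timedelta(days=1)
    end = datetime(nxt.year, nxt.month, nxt.day, tzinfo=tz)
    # Measure the elapsed duration in UTC so DST transitions are reflected.
    elapsed = end.astimezone(timezone.utc) - start.astimezone(timezone.utc)
    return int(elapsed / timedelta(minutes=30))

tz = ZoneInfo("America/New_York")
print(half_hour_slots(date(2023, 3, 12), tz))  # 46: spring-forward, 23-hour day
print(half_hour_slots(date(2023, 7, 1), tz))   # 48: normal 24-hour day
print(half_hour_slots(date(2023, 11, 5), tz))  # 50: fall-back, 25-hour day
```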
A: There is a similar thread about daylight savings problems in C# here.
The answer goes into deep details about daylight savings. I believe the problem is somewhat similar.
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 761
|
This is a placeholder page for April Garcia, which means this person is not currently on this site. We do suggest using the tools below to find April Garcia.
You are visiting the placeholder page for April Garcia. This page is here because someone used our placeholder utility to look for April Garcia. We created this page automatically in hopes April Garcia would find it. If you are not April Garcia, but are an alumnus or alumna of Littlefield High School in Littlefield, TX, register on this site for free now.
Q: Including package to create MATLAB labels using LaTeX

When using LaTeX as the interpreter to typeset labels in MATLAB it would be great to be able to include packages to customize appearance. Is this possible? As of now I know no method to create non-italic Greek letters in the legends.

(Comment: Depending on what you need, you might be interested in matlab2tikz. – G. Poore, Feb 18 '13)

A: There is no easy way to do this, but there are 3 ways to hack it. To understand how to control the MATLAB LaTeX interpreter, you need to understand how it works. When MATLAB processes a LaTeX string it calls tex.m. The MATLAB tex function prepends

\nofiles \documentclass{mwarticle} \begin{document}\setbox0=\hbox{

and appends

}\copy0\special{bounds: \the\wd0 \the\ht0 \the\dp0}\end{document}

to the string. MATLAB then calls a closed-source compiled function texmex which appears to call a tex binary. MATLAB comes with a very limited TeX installation, so even if you could load a package file, you would need to tell MATLAB where to look.

MATLAB has a built-in feature that lets you control where to look:

setappdata(0, 'TeXPath', PackagePath);

where the variable PackagePath contains the full path to the package. You can do this from the MATLAB command line or in a script or function.

There are 3 ways that you can get MATLAB to load a package. Two require you to modify/overload MATLAB functions and are complete solutions. The final one doesn't require any modifications of MATLAB functions, but won't work in all cases.

The first way is to load a package via \input instead of \usepackage, because you cannot use \usepackage after \begin{document}. This may not work, and I don't know if you can use \input inside an \hbox.

The second way is to modify mwarticle.cls to load the packages you want. This will load the package for all MATLAB LaTeX strings.

The third way is to modify tex.m to conditionally load packages through a mechanism like

getappdata(0, 'TeXPackages', Packagelist);

This requires knowing MATLAB to make the changes and could theoretically break something. If you go this way, you may also want to modify tex.m to call your local latex binary and not the potentially out-of-date MATLAB one.
using System;
using System.Threading;
using System.Threading.Tasks;
using Bricks;
using MediatR;
using MyApp.Domain;

namespace MyApp.Application.Bootstrapper
{
    /// <summary>
    /// MediatR pipeline behavior implementing the unit-of-work pattern:
    /// after a command handler runs, pending changes are committed in one save.
    /// </summary>
    public class UnitOfWorkBehavior<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse>
    {
        private readonly MyAppContext _context;

        public UnitOfWorkBehavior(MyAppContext context)
        {
            _context = context ?? throw new ArgumentNullException(nameof(context));
        }

        public async Task<TResponse> Handle(TRequest request, CancellationToken cancellationToken, RequestHandlerDelegate<TResponse> next)
        {
            // Let the rest of the pipeline (and ultimately the handler) run first.
            var response = await next();

            // Only commands mutate state, so queries skip the save entirely.
            // IsCommand() is an extension method from the Bricks package.
            if (request.GetType().IsCommand())
                await _context.SaveChangesAsync(cancellationToken);

            return response;
        }
    }
}
\section{Introduction}
Over the past forty years, many popular algorithms have been developed for the multi-target tracking (MTT) problem, e.g., joint probabilistic data association (JPDA) \cite{JPDA,BS-JPDA}, multiple hypothesis tracking (MHT) \cite{Reid,Mallick2013,Forty-MHT}, and random finite set (RFS) based methods \cite{PHD,LMB}, where the targets are generally assumed to have independent motion. Recently, the group target tracking (GTT) problem has aroused tremendous interest in many applications, e.g., aircraft formations \cite{UAV,poore2}, vehicle convoys \cite{overview1}, and groups of robots \cite{robot1}. In these scenarios, the targets within groups are usually closely spaced and have coordinated motion, the groups can split and merge, and there may be large numbers of individual targets within groups. Compared with MTT, tracking groups not only suffers from difficulties such as missed detections, clutter, and measurement origin uncertainty \cite{Mallick2013,zhu,DMHE}, but also encounters group structure uncertainty caused by group merging or splitting \cite{Graph1}. Due to these difficulties, directly using the popular MTT methods to track group targets may suffer from severe data association ambiguity, frequent track crossings and high computational load.
Recent works on GTT mainly include \cite{poore2,Graph1,CPHD,Gordon1,Godsill1,Lf,Leadership,Blackman1999,overview1}. Specifically, a multiple frame clustering tracking method within the MHT framework is proposed in \cite{poore2}, which uses clustering methods to partition the targets or measurements into groups and computes the measurement likelihoods by using cluster centroids. In \cite{Graph1}, the authors mainly introduce an evolving graph network to describe the group structure dynamics, and combine the sequential Monte Carlo method to tackle the GTT problem, where the data association is realized by JPDA. Within the RFS framework, a variant of the cardinalized probability hypothesis density filter \cite{CPHD} is proposed to deal with the GTT problem. Furthermore, some works on group state dynamics modeling are developed in \cite{Gordon1,Godsill1,Lf,Leadership}, and some other works on GTT can be seen in \cite{Blackman1999,overview1}. Additionally, there are some other researches that focus on a similar problem to GTT, namely the extended target tracking (ETT) problem \cite{RFS1,GP,RM2}, where the target may occupy multiple sensor resolution cells and thus can generate multiple measurements per time step. The two problems have certain similarities, but are different in some aspects. These studies on ETT primarily focus on estimating the kinematic states and the extent parameters of the targets of interest, while the tracking of the targets within groups is rarely involved.
Lately, a state-of-the-art belief propagation (BP) method, also known as message passing or the sum-product algorithm \cite{FG-BP}, has drawn a lot of attention in the field of target tracking. BP aims to compute the approximations of the marginal posterior probability density functions (pdfs) or probability mass functions (pmfs) for the variables of interest \cite{Max-Sum}. Due to the advantages of BP in estimation accuracy, computational complexity and implementation flexibility, it promotes the development of scalable target tracking algorithms \cite{BP-MTT1,BP-MTT2,BP-SLM,BP-Tuning,BP-MMTT1,BP-MMTT2,BP-Registration,BP-ETT1,BP-ETT2,BP-GTT}. On the whole, most of the studies on BP are developed from different perspectives in the context of MTT, e.g., scalable MTT with unknown number of targets \cite{BP-MTT1,BP-MTT2}, decentralized simultaneous cooperative self-localization and MTT \cite{BP-SLM}, maneuvering MTT \cite{BP-Tuning,BP-MMTT1,BP-MMTT2} and sensor registration \cite{BP-Registration}. Additionally, a scalable ETT algorithm is proposed in \cite{BP-ETT1}, which extends BP for tracking the targets that may generate multiple measurements. Later, a scalable detection and tracking algorithm for geometric ETT is developed in \cite{BP-ETT2}, which is able to jointly infer the geometric shapes of targets. For the GTT problem, a group expectation maximization belief propagation method is proposed to track a single coordinated group with a known number of targets \cite{BP-GTT}. This method is not suitable for tracking an unknown number of group targets, where groups may split and merge.
In this paper, we consider the GTT problem involving group splitting and merging, track initiation, data association and filtering. Our main contributions are summarized as follows:
\begin{itemize}
\item We present a factor graph formulation for the GTT problem, and propose a scalable GTBP method by jointly inferring target existence variables, group structure, data association and target states. The group structure variable enables the description and capture of the group structure changes, e.g., group splitting and merging.
\item The evolution of targets is modeled as the co-action of the group or single-target motions specified by possible group structures and corresponding probabilities. This flexible modeling makes it possible to track multiple group targets and ungrouped targets\footnote{To facilitate the distinction from grouped targets, we refer to multiple targets that have independent motion as ungrouped targets.} seamlessly and simultaneously.
\item GTBP has excellent scalability and low computational complexity that only scales linearly in the number of preserved group partitions, linearly in the number of sensor measurements, and quadratically in the number of targets.
\end{itemize}
Numerical results verify that GTBP not only has excellent scalability but also obtains better tracking performance in GTT. Thus, it is applicable for tracking a large number of group targets.
The rest of this paper is organized as follows. Section 2 briefly reviews the factor graph and BP, and then presents the problem formulation. Section 3 develops the GTBP method. Subsequently, a detailed particle-based implementation of GTBP is presented in Section 4. Numerical experiments and comparison results are given in Section 5. Lastly, we conclude this paper in Section 6.
\textit{Notation:} we use capital calligraphic letters and boldface lower-case characters to denote finite sets (e.g., $\mathcal{V}$) and vectors (e.g., $\mathbf{x}$), respectively. $\mathrm{I}(\cdot)$ denotes the indicator function that $\mathrm{I}(i)=1$ if $i=0$ and otherwise 0. For any set $\mathcal{V}$, $\mathcal{V}\backslash i$ is short for $\{i^{\prime}\in\mathcal{V}| i^{\prime}\neq i\}$, and $|\mathcal{V}|$ denotes the cardinality. Throughout this paper, we use $p(\cdot)$ and $p(\cdot|\cdot)$ as generic symbols for unconditional and conditional pdfs or pmfs or their mixtures. We denote $\int p(\mathbf{x})\mathrm{d}{(\mathbf{x} \backslash \mathbf{x}^{(i)})}$ as the summation or integration over $\mathbf{x}$ except $\mathbf{x}^{(i)}$ (i.e., for discrete or continuous random variables).
\section{Problem Formulation}
In this section, we briefly review factor graphs and the BP framework. Next, some basic assumptions are given and then we state the GTT problem to be solved.
\subsection{Factor Graphs and BP}
A factor graph is a graphical model that describes the factorization of pdfs \cite{FG-BP}. We denote by $\mathcal{V}$ and $\mathcal{F}$ the sets of variable nodes $i$ and factor nodes $\phi$ in a factor graph, corresponding to the random variables $\mathbf{x}^{(i)}$ and the factors $p_{\phi}$, respectively. In a factor graph, the variable node $i$ and the factor node $\phi$ are connected by an edge if and only if $\mathbf{x}^{(i)}$ is an argument of $p_{\phi}(\cdot)$. Let $\mathcal{F}_{i}$ and $\mathcal{V}_{\phi}$ denote the sets of the factor nodes connected with the variable node $i$ and the variable nodes connected with the factor node $\phi$, respectively. Consider that a posterior pdf $p(\mathbf{x}|\mathbf{z})$ can be factorized as \cite{FG-BP}
\begin{align*}
p(\mathbf{x}|\mathbf{z})\propto\prod_{\phi \in \mathcal{F}} p_{\phi}\left(\mathbf{x}_{\phi}\right),
\end{align*}
where $\mathbf{x}$ and $\mathbf{x}_{\phi}$ are the stacked vectors of $\mathbf{x}^{(i)}$ for $i\in\mathcal{V}$ and $i\in\mathcal{V}_{\phi}$, respectively. According to the factorization, BP provides an efficient way of approximating the marginal distributions, which computes the message of each node in the factor graph and passes the node's message to the connected nodes \cite{FG-BP}. Specifically, if the variable node $i$ is connected with the factor node $\phi$, we denote $\varphi_{\phi \rightarrow i}(\mathbf{x}^{(i)})$ and $\upsilon_{i \rightarrow \phi}(\mathbf{x}^{(i)})$ as the message passed from the factor node $\phi$ to the variable node $i$ and the message passed from the variable node $i$ to the factor node $\phi$, respectively, which are given by
\begin{align}
\label{ftv}
\begin{split}
\varphi_{\phi \rightarrow i}(\mathbf{x}^{(i)})=& \int p_{\phi}(\mathbf{x}_{\phi}) \prod_{i^{\prime} \in \mathcal{V}_{\phi} \backslash i} \upsilon_{i^{\prime} \rightarrow \phi}(\mathbf{x}^{(i^{\prime})})\mathrm{d}{(\mathbf{x}_{\phi} \backslash \mathbf{x}^{(i)})},
\\
\upsilon_{i \rightarrow \phi}(\mathbf{x}^{(i)})=&\prod_{\phi^{\prime} \in \mathcal{F}_{i} \backslash \phi} \varphi_{\phi^{\prime} \rightarrow i}(\mathbf{x}^{(i)}),
\end{split}
\end{align}
where the symbol ``$\rightarrow$'' indicates the flow of the message. Eventually, for each variable node $i$, a belief $\widetilde{p}(\mathbf{x}^{(i)})$ is obtained as the product of all incoming messages, normalized such that $\int\widetilde{p}(\mathbf{x}^{(i)})\mathrm{d}{\mathbf{x}^{(i)}}=1$, which provides an approximation of the marginal posterior pdf $p(\mathbf{x}^{(i)}|\mathbf{z})$.
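To make the message-passing recursion (\ref{ftv}) concrete, the following minimal numerical sketch (illustrative only, not part of the formal development) runs BP on a toy tree-structured factor graph with two binary variables; the factor tables \texttt{phi1} and \texttt{phi2} are arbitrary assumptions. On a tree, the resulting beliefs coincide with the exact marginals.

```python
import numpy as np

# Toy tree-structured factor graph with two binary variables x1, x2:
#   p(x1, x2) proportional to phi1(x1) * phi2(x1, x2).
phi1 = np.array([0.7, 0.3])                  # unary factor on x1 (assumed values)
phi2 = np.array([[0.9, 0.1], [0.2, 0.8]])    # pairwise factor, rows: x1, cols: x2

# Message from the leaf factor phi1 to x1 (no other incoming messages).
msg_phi1_to_x1 = phi1
# Variable-to-factor message: product of the other incoming factor messages.
msg_x1_to_phi2 = msg_phi1_to_x1
# Factor-to-variable message: marginalize the factor times incoming messages.
msg_phi2_to_x2 = phi2.T @ msg_x1_to_phi2

# Belief = normalized product of incoming messages (here a single message).
belief_x2 = msg_phi2_to_x2 / msg_phi2_to_x2.sum()

# Brute-force marginal for comparison (exact on a tree).
joint = phi1[:, None] * phi2
marginal_x2 = joint.sum(axis=0) / joint.sum()
assert np.allclose(belief_x2, marginal_x2)
print(belief_x2)   # [0.69 0.31]
```

Since the graph is a tree, the assertion confirms that the belief equals the brute-force marginal; on loopy graphs, BP yields only an approximation.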
\subsection{System Model and Joint Posterior pdf}
\subsubsection{System Model}
At any time $k$, each potential target (PT) is either a legacy PT (i.e., a PT that survived from time $k-1$ to time $k$) or a new PT (i.e., a target newly detected at time $k$). That is, the PTs can be divided into two categories at each time instant, namely the legacy PTs and the new PTs. Let $\underline{\mathbf{x}}_{k}^{(i)}$ be the state vector of the legacy PT $i$ at time $k$, consisting of the target position and possibly further parameters (e.g., velocity and acceleration), where $i\in\{1,\ldots,n_{k}\}$ and $n_{k}$ is the number of legacy PTs at time $k$. The detection of the legacy PTs is modeled by the binary existence variables $\underline{r}_{k}^{(i)}\in\{0,1\}$, i.e., legacy PT $i$ exists at time $k$ if and only if $\underline{r}_{k}^{(i)}=1$. We denote $\underline{\mathbf{x}}_{k}$ and $\underline{\mathbf{r}}_{k}$ as the joint state vector and existence vector of the legacy PTs at time $k$, respectively,
\begin{align*}
\underline{\mathbf{x}}_{k}&:=\left[\underline{\mathbf{x}}_{k}^{(1)\mathrm{T}},\ldots,\underline{\mathbf{x}}_{k}^{(n_{k})\mathrm{T}}\right]^{\mathrm{T}},
\\
\underline{\mathbf{r}}_{k}&:=\left[\underline{r}_{k}^{(1)},\ldots,\underline{r}_{k}^{(n_{k})}\right]^{\mathrm{T}}.
\end{align*}
Assume that at time $k$, the sensor generates $m_{k}$ measurements and the joint measurement vector is
\begin{align*}
\mathbf{z}_{k}&:=\left[\mathbf{z}_{k}^{(1)\mathrm{T}},\ldots,\mathbf{z}_{k}^{(m_{k})\mathrm{T}}\right]^{\mathrm{T}},
\end{align*}
where each measurement $\mathbf{z}_k^{(m)}$ either originates from a PT or is clutter. It is assumed that at any time $k$, a target can generate at most one measurement, and a measurement originates from at most one target. To incorporate the targets newly detected at time $k$, $m_{k}$ new PT states $\overline{\mathbf{x}}_{k}^{(m)}$, $m=1,\ldots,m_{k}$, are introduced, where each $\overline{\mathbf{x}}_{k}^{(m)}$ corresponds to the measurement $\mathbf{z}_{k}^{(m)}$. The detection of the new PTs is also modeled by the binary existence variables $\overline{r}_{k}^{(m)}\in\{0,1\}$, i.e., the measurement $\mathbf{z}_{k}^{(m)}$ is generated by a new PT $\overline{\mathbf{x}}_{k}^{(m)}$ if and only if $\overline{r}_{k}^{(m)}=1$. We denote $\overline{\mathbf{x}}_{k}$ and $\overline{\mathbf{r}}_{k}$ as the joint state vector and existence vector of the new PTs, respectively,
\begin{align*}
\overline{\mathbf{x}}_{k}&:=\left[\overline{\mathbf{x}}_{k}^{(1)\mathrm{T}},\ldots,\overline{\mathbf{x}}_{k}^{(m_{k})\mathrm{T}}\right]^{\mathrm{T}},
\\
\overline{\mathbf{r}}_{k}&:=\left[\overline{r}_{k}^{(1)},\ldots,\overline{r}_{k}^{(m_{k})}\right]^{\mathrm{T}}.
\end{align*}
Notably, the new PTs at time $k$ become legacy PTs at time $k+1$ once new measurements are received, which means that the number of legacy PTs is updated as $n_{k+1}=n_{k}+m_{k}$. Since the number of PTs would otherwise grow with the accumulation of sensor measurements, we consider at most $N_{\text{max}}$ PTs at any time and perform a pruning step at each time step to remove unlikely PTs. That is, $N_{\text{max}}$ is the maximum possible number of PTs, and the number of actual targets is not larger than $N_{\text{max}}$.
In GTT, the group structure describes the connection between targets, which is a premise of modeling the evolution of targets. In this paper, we adopt the convention that at any time $k$, only the group structure of the confirmed legacy PTs (those that have been declared to exist at the current time) is considered, and each confirmed legacy PT is partitioned into exactly one group in a possible group structure. Concretely, we use a group partition vector $\underline{\mathbf{g}}_{k}:=\left[\underline{g}_{k}^{(1)},\ldots,\underline{g}_{k}^{(n_{k})}\right]^{\mathrm{T}}$ to represent the group structure of all legacy PTs at time $k$, and let $N(\underline{\mathbf{g}}_{k}):=\max(\underline{\mathbf{g}}_{k})$ denote the number of groups partitioned by $\underline{\mathbf{g}}_{k}$. For instance, $\underline{\mathbf{g}}_{k}:=\left[1\ 1\ 2\ 3\ 3\ 0\right]^{\mathrm{T}}$ (see Fig. \ref{figure1}) represents that the confirmed legacy PTs $\underline{\mathbf{x}}_{k}^{(1)}$ and $\underline{\mathbf{x}}_{k}^{(2)}$ are partitioned into group 1, $\underline{\mathbf{x}}_{k}^{(3)}$ is an ungrouped target (i.e., a single target), $\underline{\mathbf{x}}_{k}^{(4)}$ and $\underline{\mathbf{x}}_{k}^{(5)}$ are partitioned into group 3, and $\underline{\mathbf{x}}_{k}^{(6)}$ is an unconfirmed legacy PT. Furthermore, the group structure of the new PTs at time $k$ is represented by an all-zero vector $\overline{\mathbf{g}}_{k}$.
\begin{figure}[hbtp]
\centering
\includegraphics[width=0.6\linewidth]{fig1.eps}
\caption{An example of $\underline{\mathbf{g}}_{k}$ for 5 confirmed legacy PTs (partitioned into two groups and an ungrouped target) and 1 unconfirmed legacy PT.}
\label{figure1}
\end{figure}
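As an illustrative sketch of the bookkeeping behind the group partition vector (the helper name \texttt{partition\_index\_sets} is ours, not from the text), the example of Fig. \ref{figure1} can be processed as follows:

```python
# Group partition vector from Fig. 1: entry i gives the group label of
# legacy PT i, with label 0 marking unconfirmed PTs.
g_k = [1, 1, 2, 3, 3, 0]

def partition_index_sets(g):
    """Return {label: list of PT indices (1-based)} for each label in g."""
    groups = {}
    for i, label in enumerate(g, start=1):
        groups.setdefault(label, []).append(i)
    return groups

n_groups = max(g_k)                # N(g_k) in the text
groups = partition_index_sets(g_k)
print(n_groups)                    # 3
print(groups)                      # {1: [1, 2], 2: [3], 3: [4, 5], 0: [6]}
```

Group 2 contains a single PT (an ungrouped target), and the entry under label 0 collects the unconfirmed legacy PTs.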
The unknown association between legacy PTs and measurements at time $k$ can be described by a target-oriented association vector $\mathbf{a}_{k}:=\left[a_{k}^{(1)},\ldots,a_{k}^{(n_{k})}\right]^{\mathrm{T}}$ with
\begin{align*}
a_{k}^{(i)}:=
\begin{cases}
m \in\left\{1, \ldots, m_{k}\right\}, &\text{if at time } k \text{, the PT } \underline{\mathbf{x}}_k^{(i)} \text{ generates the measurement } \mathbf{z}_k^{(m)},
\\
0, &\text{if at time } k \text{, the PT } \underline{\mathbf{x}}_k^{(i)} \text{ does not generate a measurement.}
\end{cases}
\end{align*}
\subsubsection{Joint Posterior pdf}
We denote the joint vectors of all the PT state, the existence variable and the group partition at time $k$ as $\mathbf{x}_k:=\left[\underline{\mathbf{x}}_{k}^{\mathrm{T}}, \overline{\mathbf{x}}_{k}^{\mathrm{T}}\right]^{\mathrm{T}}$, $\mathbf{r}_k:=\left[\underline{\mathbf{r}}_{k}^{\mathrm{T}}, \overline{\mathbf{r}}_{k}^{\mathrm{T}}\right]^{\mathrm{T}}$ and $\mathbf{g}_k:=\left[\underline{\mathbf{g}}_{k}^{\mathrm{T}}, \overline{\mathbf{g}}_{k}^{\mathrm{T}}\right]^{\mathrm{T}}$, respectively. Let $\mathcal{R}_{k}$, $\underline{\mathcal{R}}_{k}$ and $\underline{\mathcal{G}}_{k}$ be the sets of all possible $\mathbf{r}_{k}$, $\underline{\mathbf{r}}_{k}$ and $\underline{\mathbf{g}}_{k}$, respectively. For notational convenience, we define the augmented state vectors for the legacy PTs and the new PTs as
\begin{align*}
\underline{\mathbf{y}}_{k}^{(i)}&:=\left[\underline{\mathbf{x}}_{k}^{(i)\mathrm{T}},\underline{r}_{k}^{(i)}\right]^{\mathrm{T}},
\\
\overline{\mathbf{y}}_{k}^{(m)}&:=\left[\overline{\mathbf{x}}_{k}^{(m)\mathrm{T}},\overline{r}_{k}^{(m)}\right]^{\mathrm{T}},
\end{align*}
and the joint augmented state vector at time $k$ is given by $\mathbf{y}_{k}:=\left[\underline{\mathbf{y}}_{k}^{\mathrm{T}}, \overline{\mathbf{y}}_{k}^{\mathrm{T}}\right]^{\mathrm{T}}$.
Let $\mathbf{y}_{1:k}$, $\mathbf{g}_{1:k}$, $\mathbf{a}_{1:k}$, $\mathbf{z}_{1:k}$ and $\mathbf{m}_{1:k}$ denote the stacked vectors of joint augmented states, group partition vectors, target-oriented association vectors, measurements and numbers of measurements up to time $k$, respectively. Assume that given all the PT states $\mathbf{x}_{k}$, the measurements $\mathbf{z}_{k}$
are conditionally independent of all
the past and future measurements $\mathbf{z}_{k^{\prime}}$ and PT states $\mathbf{x}_{k^{\prime}}$, $k^{\prime}\neq k$. By the chain rule and the conditional independence assumption, the posterior pdf $p(\mathbf{y}_{1:k}, \mathbf{g}_{1:k}, \mathbf{a}_{1:k}|\mathbf{z}_{1:k})$ can be obtained by
\begin{align}\label{pdf}
\begin{split}
p(\mathbf{y}_{1:k}, \mathbf{g}_{1:k}, \mathbf{a}_{1:k}|\mathbf{z}_{1:k})
&=p(\mathbf{y}_{1:k}, \mathbf{g}_{1:k}, \mathbf{a}_{1:k}|\mathbf{z}_{1:k},\mathbf{m}_{1:k})
\\
&\propto p(\mathbf{z}_{1:k}, \mathbf{a}_{1:k},\mathbf{m}_{1:k}, \mathbf{y}_{1:k}, \mathbf{g}_{1:k})
\\
&=\prod_{k^{\prime}=1}^{k}p(\mathbf{z}_{k^{\prime}}, \mathbf{a}_{k^{\prime}},m_{k^{\prime}}, \mathbf{y}_{k^{\prime}}, \mathbf{g}_{k^{\prime}}|\mathbf{y}_{k^{\prime}-1}, \mathbf{g}_{k^{\prime}-1}),
\end{split}
\end{align}
where
\begin{align}\label{condition-pdf}
\begin{split}
p(\mathbf{z}_{k}, \mathbf{a}_{k},m_{k}, \mathbf{y}_{k},\mathbf{g}_{k}|\mathbf{y}_{k-1},\mathbf{g}_{k-1})=p(\mathbf{z}_{k}, \mathbf{a}_{k},m_{k}, \overline{\mathbf{y}}_{k},\overline{\mathbf{g}}_{k} |\underline{\mathbf{y}}_{k},\underline{\mathbf{g}}_{k}) p( \underline{\mathbf{y}}_{k},\underline{\mathbf{g}}_{k}|\mathbf{y}_{k-1},\mathbf{g}_{k-1}).
\end{split}
\end{align}
Subsequently, we derive this joint posterior pdf under some standard assumptions.
\subsection{Augmented State and Group Structure Transition pdf}
The augmented state and group structure transition pdf $p( \underline{\mathbf{y}}_{k},\underline{\mathbf{g}}_{k}|\mathbf{y}_{k-1},\mathbf{g}_{k-1})$ in (\ref{condition-pdf}) can be written as
\begin{align}\label{ps}
\begin{split}
p( \underline{\mathbf{y}}_{k},\underline{\mathbf{g}}_{k}|\mathbf{y}_{k-1},\mathbf{g}_{k-1})=p( \underline{\mathbf{y}}_{k}|\underline{\mathbf{g}}_{k}, \mathbf{y}_{k-1},\mathbf{g}_{k-1})p(\underline{\mathbf{g}}_{k}|\mathbf{y}_{k-1},\mathbf{g}_{k-1}).
\end{split}
\end{align}
We assume that there is no PT at time $k=0$, i.e., $\mathbf{y}_{0}$, $\underline{\mathbf{y}}_{1}$ and $\underline{\mathbf{g}}_{1}$ are empty. For future use, we adopt the conventions $p( \underline{\mathbf{y}}_{1}|\underline{\mathbf{g}}_{1}, \mathbf{y}_{0}):=1$ and $p(\underline{\mathbf{g}}_{1}|\mathbf{y}_{0}):=1$. Let $\Lambda_{\underline{\mathbf{g}}_{k}}(j)$ denote the index set of the PTs belonging to group $j$ in the group partition $\underline{\mathbf{g}}_{k}$, i.e.,
\begin{align*}
\Lambda_{\underline{\mathbf{g}}_{k}}(j):=\{i: \underline{g}_{k}^{(i)}=j, i=1,\ldots,n_{k}\}.
\end{align*}
Note that $\Lambda_{\underline{\mathbf{g}}_{k}}(0)$ denotes the index set of the unconfirmed legacy PTs at time $k$. We use $\underline{\mathbf{y}}_{k,\Lambda_{\underline{\mathbf{g}}_{k}}(j)}$ to represent the joint augmented state of the PTs $i\in\Lambda_{\underline{\mathbf{g}}_{k}}(j)$. Assuming that each group or unconfirmed legacy PT evolves independently of the other groups and unconfirmed legacy PTs, we have
\begin{align}\label{evolve}
\begin{split}
p(\underline{\mathbf{y}}_{k}|\underline{\mathbf{g}}_{k}, \mathbf{y}_{k-1},\mathbf{g}_{k-1})=
\prod_{j=0}^{N(\underline{\mathbf{g}}_{k})}p(\underline{\mathbf{y}}_{k,\Lambda_{\underline{\mathbf{g}}_{k}}(j)}|\mathbf{y}_{k-1,\Lambda_{\underline{\mathbf{g}}_{k}}(j)}),
\end{split}
\end{align}
where the augmented state transition density of the unconfirmed legacy PTs $p(\underline{\mathbf{y}}_{k,\Lambda_{\underline{\mathbf{g}}_{k}}(0)}|\mathbf{y}_{k-1,\Lambda_{\underline{\mathbf{g}}_{k}}(0)})$ is given by
\begin{align*}
p(\underline{\mathbf{y}}_{k,\Lambda_{\underline{\mathbf{g}}_{k}}(0)}|\mathbf{y}_{k-1,\Lambda_{\underline{\mathbf{g}}_{k}}(0)})=\prod_{i\in\Lambda_{\underline{\mathbf{g}}_{k}}(0)}p(\underline{\mathbf{y}}_{k}^{(i)}|\mathbf{y}_{k-1}^{(i)}),
\end{align*}
with $p(\underline{\mathbf{y}}_{k}^{(i)}|\mathbf{y}_{k-1}^{(i)})=p(\underline{\mathbf{x}}_{k}^{(i)},\underline{r}_{k}^{(i)}|\mathbf{x}_{k-1}^{(i)}, r_{k-1}^{(i)})$ and
\begin{align}\label{sg}
\begin{split}
p(\underline{\mathbf{x}}_{k}^{(i)},\underline{r}_{k}^{(i)}|\mathbf{x}_{k-1}^{(i)}, r_{k-1}^{(i)}):=
\begin{cases}
f_{\mathrm{d}}(\underline{\mathbf{x}}_{k}^{(i)}), &\begin{array}{l} r_{k-1}^{(i)}=0,\ \underline{r}_{k}^{(i)}=0
\end{array}
\\
0, &\begin{array}{l}r_{k-1}^{(i)}=0,\ \underline{r}_{k}^{(i)}=1\end{array}
\\
(1-p_{\mathrm{s}}(\mathbf{x}_{k-1}^{(i)}))f_{\mathrm{d}}(\underline{\mathbf{x}}_{k}^{(i)}), &\begin{array}{l}r_{k-1}^{(i)}=1,\ \underline{r}_{k}^{(i)}=0\end{array}
\\
p_{\mathrm{s}}(\mathbf{x}_{k-1}^{(i)})p(\underline{\mathbf{x}}_{k}^{(i)}|\mathbf{x}_{k-1}^{(i)}), &\begin{array}{l}r_{k-1}^{(i)}=1,\ \underline{r}_{k}^{(i)}=1,\end{array}
\end{cases}
\end{split}
\end{align}
where $f_{\mathrm{d}}(\underline{\mathbf{x}}_{k}^{(i)})$ denotes a dummy pdf, $p(\underline{\mathbf{x}}_{k}^{(i)}|\mathbf{x}_{k-1}^{(i)})$ is the single-target state transition density, and $p_{\mathrm{s}}(\mathbf{x}_{k-1}^{(i)})$ is the survival probability of the PT $\mathbf{x}_{k-1}^{(i)}$, i.e., the probability that a PT with $r_{k-1}^{(i)}=1$ also has $\underline{r}_{k}^{(i)}=1$. Note that if a PT $\mathbf{x}_{k-1}^{(i)}$, $i\in\Lambda_{\underline{\mathbf{g}}_{k}}(j)$, has $r_{k-1}^{(i)}=0$, then it cannot exist at time $k$, i.e., $\underline{r}_{k}^{(i)}=0$.
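A minimal sketch of drawing from the augmented-state transition (\ref{sg}) for a single unconfirmed legacy PT; the scalar state, the constant survival probability \texttt{P\_SURVIVE}, and the Gaussian random-walk dynamics are placeholder assumptions, and a nonexistent PT's state (formally drawn from the dummy pdf $f_{\mathrm{d}}$) is simply represented by \texttt{None}:

```python
import random

# Placeholder survival probability standing in for p_s(x_{k-1}).
P_SURVIVE = 0.95

def sample_transition(x_prev, r_prev, rng):
    """Draw (x_k, r_k) for one unconfirmed legacy PT given (x_{k-1}, r_{k-1})."""
    if r_prev == 0:
        return None, 0                 # nonexistent PTs stay nonexistent
    if rng.random() > P_SURVIVE:
        return None, 0                 # PT dies; state follows the dummy pdf
    # PT survives: propagate with a toy random-walk transition density.
    x_next = x_prev + rng.gauss(0.0, 0.1)
    return x_next, 1

rng = random.Random(0)
x, r = sample_transition(1.0, 1, rng)
assert r in (0, 1)
dead = sample_transition(1.0, 0, rng)
assert dead == (None, 0)               # first case of (sg): r stays 0
```

The four branches of (\ref{sg}) correspond to the two early returns (death or prior nonexistence) and the propagation line; in an implementation the survival probability would be state-dependent.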
Furthermore, the pdf $p(\underline{\mathbf{y}}_{k,\Lambda_{\underline{\mathbf{g}}_{k}}(j)}|\mathbf{y}_{k-1,\Lambda_{\underline{\mathbf{g}}_{k}}(j)})$, $j\neq0$ describing the augmented state transition density of the group $j$ in the group partition $\underline{\mathbf{g}}_{k}$ can be factorized as
\begin{align}\label{GT-pdf2}
\begin{split}
p(\underline{\mathbf{y}}_{k,\Lambda_{\underline{\mathbf{g}}_{k}}(j)}|\mathbf{y}_{k-1,\Lambda_{\underline{\mathbf{g}}_{k}}(j)})=\big(\prod_{i\in\Lambda_{\underline{\mathbf{g}}_{k}}(j)\backslash \tilde{\Lambda}_{\underline{\mathbf{g}}_{k}}(j)}p(\underline{\mathbf{y}}_{k}^{(i)}|\mathbf{y}_{k-1}^{(i)})\big)\big(\prod_{i\in \tilde{\Lambda}_{\underline{\mathbf{g}}_{k}}(j)}p_{\mathrm{s}}(\mathbf{x}_{k-1}^{(i)})\big)p(\underline{\mathbf{x}}_{k,\tilde{\Lambda}_{\underline{\mathbf{g}}_{k}}(j)}|\mathbf{x}_{k-1,\tilde{\Lambda}_{\underline{\mathbf{g}}_{k}}(j)}),
\end{split}
\end{align}
where $\tilde{\Lambda}_{\underline{\mathbf{g}}_{k}}(j):=\{i: r_{k-1}^{(i)}=\underline{r}_k^{(i)}=1, i\in\Lambda_{\underline{\mathbf{g}}_{k}}(j)\}$ is the index set of the surviving PTs in group $j$, and $\underline{\mathbf{x}}_{k,\Lambda_{\underline{\mathbf{g}}_{k}}(j)}$ denotes the joint state of group $j$. The group transition density $p(\underline{\mathbf{x}}_{k,\tilde{\Lambda}_{\underline{\mathbf{g}}_{k}}(j)}|\mathbf{x}_{k-1,\tilde{\Lambda}_{\underline{\mathbf{g}}_{k}}(j)})$ describes the evolution of group $j$ and reduces to the single-target state transition density if there is only one PT in the group.
\begin{remark}
The group structure can be viewed as an implicit attribute of the targets in GTT, which determines the partition of targets into group targets and ungrouped targets. In this paper, we model the state transition using group or single-target motion models according to the given group structure (\ref{evolve})-(\ref{GT-pdf2}), which enables seamless and simultaneous tracking of multiple group targets and ungrouped targets.
\end{remark}
Since the modeling of group dynamics is not the focus of this paper, we apply the virtual leader-follower model \cite{Gordon1,RFS1} to describe the evolution of group targets; other models can be found in \cite{Godsill1,Lf,Leadership}. The model \cite{Gordon1,RFS1} assumes that the deterministic state of any target is a translational offset of the average state (i.e., the virtual leader) of the group. More specifically, let $\triangle\mathbf{x}_{k-1}^{(i)}$ denote the offset from the PT $\mathbf{x}_{k-1}^{(i)}\in\mathbf{x}_{k-1,\tilde{\Lambda}_{\underline{\mathbf{g}}_{k}}(j)}$ to the
virtual leader of the group $j$, i.e.,
\begin{align}\label{offset}
\triangle\mathbf{x}_{k-1}^{(i)}:=\mathbf{x}_{k-1}^{(i)}-\breve{\mathbf{x}}_{k-1,\tilde{\Lambda}_{\underline{\mathbf{g}}_{k}}(j)},
\end{align}
where $\breve{\mathbf{x}}_{k-1,\tilde{\Lambda}_{\underline{\mathbf{g}}_{k}}(j)}$ is the virtual leader of the group $j$, i.e.,
\begin{align}\label{vl}
\breve{\mathbf{x}}_{k-1,\tilde{\Lambda}_{\underline{\mathbf{g}}_{k}}(j)}&:=\frac{1}{|\tilde{\Lambda}_{\underline{\mathbf{g}}_{k}}(j)|}\sum_{i\in\tilde{\Lambda}_{\underline{\mathbf{g}}_{k}}(j)}\mathbf{x}_{k-1}^{(i)},
\end{align}
Then, the state transition model for the PT $i$ is
\begin{align}\label{model}
\underline{\mathbf{x}}_{k}^{(i)}=f_{\mathrm{t}}(\breve{\mathbf{x}}_{k-1,\tilde{\Lambda}_{\underline{\mathbf{g}}_{k}}(j)})+\triangle\mathbf{x}_{k-1}^{(i)}+\mathbf{v}_{k}^{(i)},
\end{align}
where $f_{\mathrm{t}}(\cdot)$ is the state transition function of the virtual leader, and the noises $\mathbf{v}_{k}^{(i)}$ are independent and identically distributed random vectors with a known pdf. Thus, the group transition density in (\ref{GT-pdf2}) can be written as
\begin{align}\label{pdf-model}
\begin{split}
p(\underline{\mathbf{x}}_{k,\tilde{\Lambda}_{\underline{\mathbf{g}}_{k}}(j)}|\mathbf{x}_{k-1,\tilde{\Lambda}_{\underline{\mathbf{g}}_{k}}(j)})=\prod_{i\in\tilde{\Lambda}_{\underline{\mathbf{g}}_{k}}(j)}p(\underline{\mathbf{x}}_{k}^{(i)}|\breve{\mathbf{x}}_{k-1,\tilde{\Lambda}_{\underline{\mathbf{g}}_{k}}(j)},\triangle\mathbf{x}_{k-1}^{(i)}),
\end{split}
\end{align}
where $p(\underline{\mathbf{x}}_{k}^{(i)}|\breve{\mathbf{x}}_{k-1,\tilde{\Lambda}_{\underline{\mathbf{g}}_{k}}(j)},\triangle\mathbf{x}_{k-1}^{(i)})$ is described by the system model (\ref{model}).
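The virtual leader-follower transition (\ref{offset})-(\ref{model}) can be sketched numerically as follows; the two-PT group, the constant-velocity leader transition standing in for $f_{\mathrm{t}}$, and the suppressed noise are illustrative assumptions:

```python
import numpy as np

# State per PT: [position, velocity]; two PTs form one group.
group = np.array([[0.0, 1.0],
                  [2.0, 1.0]])          # states x_{k-1}^{(i)} of the group

leader = group.mean(axis=0)             # virtual leader (average state): [1., 1.]
offsets = group - leader                # per-PT offsets from the leader

dt = 1.0
F = np.array([[1.0, dt],
              [0.0, 1.0]])              # f_t: constant-velocity leader transition

noise = np.zeros_like(group)            # noise suppressed for a deterministic demo
group_next = (F @ leader) + offsets + noise   # leader-follower transition model
print(group_next)                       # [[1. 1.] [3. 1.]]
```

Each PT is propagated by moving the virtual leader and re-attaching its preserved offset; with nonzero noise the offsets would be perturbed independently per PT.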
The group structure transition pmf $p(\underline{\mathbf{g}}_{k}|\mathbf{y}_{k-1},\mathbf{g}_{k-1})$ in (\ref{ps}) determines how the information from $\mathbf{y}_{k-1}$ and $\mathbf{g}_{k-1}$ at time $k-1$ is used to guide the group structure changes. In this paper, we adopt a state-dependent model $p(\underline{\mathbf{g}}_{k}|\mathbf{y}_{k-1},\mathbf{g}_{k-1}):=p(\underline{\mathbf{g}}_{k}|\mathbf{x}_{k-1},\mathbf{r}_{k-1})$ for the group structure, i.e., $\underline{\mathbf{g}}_{k}$ is independent of $\underline{\mathbf{g}}_{k-1}$ given $\mathbf{y}_{k-1}$; similar models can be found in \cite{Godsill1,Graph1}. Usually, the actual PT states are unknown, and all we can obtain are the estimated PT states and the corresponding covariance information. As one of the most commonly used distance metrics, the (squared) Mahalanobis distance provides an efficient way to incorporate the confidence about the PT state estimates, and is given by
\begin{align*}
d_{k}^{i,i^{\prime}}:=(\mathbf{x}_{k}^{(i)}-\mathbf{x}_{k}^{(i^{\prime})})^{\mathrm{T}}(\mathbf{P}_{k}^{(i)}+\mathbf{P}_{k}^{(i^{\prime})})^{-1}(\mathbf{x}_{k}^{(i)}-\mathbf{x}_{k}^{(i^{\prime})}),
\end{align*}
where $\mathbf{P}_{k}^{(i)}$ is the covariance of $\mathbf{x}_{k}^{(i)}$. Let
\begin{align*}
\breve{\mathbf{P}}_{k-1,\tilde{\Lambda}_{\underline{\mathbf{g}}_{k}}(j)}&:=\frac{1}{|\tilde{\Lambda}_{\underline{\mathbf{g}}_{k}}(j)|}\sum_{i\in\tilde{\Lambda}_{\underline{\mathbf{g}}_{k}}(j)}\mathbf{P}_{k-1}^{(i)},
\end{align*}
denote the average covariance of the surviving PTs in group $j$. Then, we use the (squared) Mahalanobis distance $d_{k-1}^{i,j}$ between a PT state $\mathbf{x}_{k-1}^{(i)}$ and the virtual leader $\breve{\mathbf{x}}_{k-1,\tilde{\Lambda}_{\underline{\mathbf{g}}_{k}}(j)}$, computed with the covariance $\mathbf{P}_{k-1}^{(i)}+\breve{\mathbf{P}}_{k-1,\tilde{\Lambda}_{\underline{\mathbf{g}}_{k}}(j)}$, to define a quantity $P_{i,j}\in\left[0,1\right]$ as follows:
\begin{align}\label{Pij}
P_{i,j}:=
\begin{cases}
P_{0}, &\begin{array}{l}\text{if}\ r_{k-1}^{(i)}=0
\end{array} \\
\exp(-\frac{d_{k-1}^{i,j}}{2}), &\begin{array}{l}\text{otherwise},
\end{array}
\end{cases}
\end{align}
where $P_{0}\in\left[0, 1\right]$ is a small constant, reflecting that assigning a nonexistent PT to a group has a small probability. We define a scoring function for evaluating the group partition $\underline{\mathbf{g}}_{k}$ given $\mathbf{x}_{k-1}$ and $\mathbf{r}_{k-1}$ as
\begin{align*}
\begin{split}
s(\underline{\mathbf{g}}_{k}|\mathbf{x}_{k-1},\mathbf{r}_{k-1}):=\prod_{i\in\Lambda_{\underline{\mathbf{g}}_{k}}}\Big(P_{i,\underline{g}_{k}^{(i)}}\prod_{j\in \{1,\ldots,N(\underline{\mathbf{g}}_{k})\}\backslash\underline{g}_{k}^{(i)}}(1-P_{i,j})\Big),
\end{split}
\end{align*}
where $\Lambda_{\underline{\mathbf{g}}_{k}}:=\{1,\ldots,n_k\}\backslash\Lambda_{\underline{\mathbf{g}}_{k}}(0)$ is the index set of all confirmed legacy PTs at time $k$. Thus, we can define a pseudo group structure transition pmf as
\begin{align}\label{GS-pdf}
p(\underline{\mathbf{g}}_{k}|\mathbf{x}_{k-1},\mathbf{r}_{k-1}):= \frac{s(\underline{\mathbf{g}}_{k}|\mathbf{x}_{k-1},\mathbf{r}_{k-1})}{\sum_{\underline{\mathbf{g}}_{k}^{\prime}\in\underline{\mathcal{G}}_k}s(\underline{\mathbf{g}}_{k}^{\prime}|\mathbf{x}_{k-1},\mathbf{r}_{k-1})}.
\end{align}
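The following sketch evaluates the scoring function and the pseudo group structure transition pmf (\ref{GS-pdf}) for two hypothetical candidate partitions of three confirmed PTs; the distance table \texttt{d}, the constant \texttt{P0}, and the candidate list are illustrative assumptions (an actual implementation would enumerate or sample the partitions in $\underline{\mathcal{G}}_k$):

```python
import math

P0 = 0.01   # small probability of grouping a nonexistent PT

def p_ij(d, r_prev):
    """Quantity P_{i,j}: small constant if the PT did not exist, else exp(-d/2)."""
    return P0 if r_prev == 0 else math.exp(-d / 2.0)

def score(partition, d, r_prev):
    """s(g | x, r): each confirmed PT favors its own group, penalizes others."""
    n_groups = max(partition)
    s = 1.0
    for i, g_i in enumerate(partition):
        if g_i == 0:
            continue                          # unconfirmed PTs are not scored
        s *= p_ij(d[i][g_i - 1], r_prev[i])
        for j in range(1, n_groups + 1):
            if j != g_i:
                s *= 1.0 - p_ij(d[i][j - 1], r_prev[i])
    return s

# Toy squared Mahalanobis distances d[i][j-1] to each group's virtual leader,
# and existence flags from the previous time step.
d = [[0.1, 9.0], [0.2, 8.0], [9.0, 0.1]]
r_prev = [1, 1, 1]
candidates = [[1, 1, 2], [1, 2, 2]]
scores = [score(g, d, r_prev) for g in candidates]
total = sum(scores)
pmf = [s / total for s in scores]
assert pmf[0] > pmf[1]   # grouping nearby PTs together scores far higher
```

Normalizing the scores over the candidate set mirrors the denominator of (\ref{GS-pdf}); here the partition placing the two close PTs in one group receives almost all of the probability mass.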
\subsection{Conditional pdf $p(\mathbf{z}_{k}, \mathbf{a}_{k},m_{k}, \overline{\mathbf{y}}_{k},\overline{\mathbf{g}}_{k} |\underline{\mathbf{y}}_{k},\underline{\mathbf{g}}_{k})$}
Next, we introduce the calculation of the conditional pdf $p(\mathbf{z}_{k}, \mathbf{a}_{k},m_{k}, \overline{\mathbf{y}}_{k},\overline{\mathbf{g}}_{k} |\underline{\mathbf{y}}_{k},\underline{\mathbf{g}}_{k})$ in (\ref{condition-pdf}). It is commonly assumed that given $\mathbf{y}_k$ and $\mathbf{a}_k$, the measurement vector $\mathbf{z}_{k}$ is independent of $\mathbf{g}_k$. By definition, $\overline{\mathbf{g}}_{k}$ is a deterministic all-zero vector representing the group structure of the new PTs. We further assume that given $\underline{\mathbf{y}}_k$, the association vector $\mathbf{a}_{k}$ and the augmented new PT states $\overline{\mathbf{y}}_{k}$ are independent of $\mathbf{g}_k$. By the chain rule and these conditional independence assumptions, we have
\begin{align}\label{p24}
\begin{split}
p(\mathbf{z}_{k}, \mathbf{a}_{k},m_{k}, \overline{\mathbf{y}}_{k},\overline{\mathbf{g}}_{k} |\underline{\mathbf{y}}_{k},\underline{\mathbf{g}}_{k})&=p(\mathbf{z}_{k}, \mathbf{a}_{k},m_{k}, \overline{\mathbf{x}}_{k},\overline{\mathbf{r}}_{k},\overline{\mathbf{g}}_{k} |\underline{\mathbf{x}}_{k},\underline{\mathbf{r}}_{k},\underline{\mathbf{g}}_{k})
\\
&=p(\mathbf{z}_{k}| \mathbf{a}_{k},m_{k}, \overline{\mathbf{x}}_{k},\overline{\mathbf{r}}_{k},\overline{\mathbf{g}}_{k},\underline{\mathbf{x}}_{k},\underline{\mathbf{r}}_{k},\underline{\mathbf{g}}_{k})p( \mathbf{a}_{k},m_{k}, \overline{\mathbf{x}}_{k},\overline{\mathbf{r}}_{k},\overline{\mathbf{g}}_{k}|\underline{\mathbf{x}}_{k},\underline{\mathbf{r}}_{k},\underline{\mathbf{g}}_{k})
\\
&=p(\mathbf{z}_{k}| \mathbf{a}_{k},m_{k}, \overline{\mathbf{x}}_{k},\overline{\mathbf{r}}_{k},\underline{\mathbf{x}}_{k},\underline{\mathbf{r}}_{k})p( \mathbf{a}_{k},m_{k}, \overline{\mathbf{x}}_{k},\overline{\mathbf{r}}_{k}|\underline{\mathbf{x}}_{k},\underline{\mathbf{r}}_{k}).
\end{split}
\end{align}
Let $f(\mathbf{z}_{k}^{(m)}|\underline{\mathbf{x}}_{k}^{(i)})$ denote the pdf of the measurement $\mathbf{z}_{k}^{(m)}$ conditioned on the legacy PT state $\underline{\mathbf{x}}_{k}^{(i)}$. The target-originated measurements are assumed conditionally independent of each other and of all clutter measurements. Moreover, the number of clutter measurements is assumed to be Poisson distributed with mean $\mu_{\mathrm{c}}$, and the clutter measurements are assumed independent and identically distributed with pdf $f_{\mathrm{c}}(\mathbf{z}_{k}^{(m)})$ \cite{BP-MTT1,BP-MTT2}. Then, the pdf $p(\mathbf{z}_{k}| \mathbf{a}_{k},m_{k}, \overline{\mathbf{x}}_{k},\overline{\mathbf{r}}_{k},\underline{\mathbf{x}}_{k},\underline{\mathbf{r}}_{k})$ is given by
\begin{align}\label{pz0}
\begin{split}
p(\mathbf{z}_{k}| \mathbf{a}_{k},m_{k}, \overline{\mathbf{x}}_{k},\overline{\mathbf{r}}_{k},\underline{\mathbf{x}}_{k},\underline{\mathbf{r}}_{k})=\big(\prod_{m=1}^{m_{k}} f_{\mathrm{c}}(\mathbf{z}_{k}^{(m)})\big)\big(\prod_{i \in \mathcal{D}_{\mathbf{a}_{k}}}\frac{f(\mathbf{z}_{k}^{(a_{k}^{(i)})} | \underline{\mathbf{x}}_{k}^{(i)})}{f_{\mathrm{c}}(\mathbf{z}_{k}^{(a_{k}^{(i)})})}\big)\big(\prod_{m^{\prime} \in \mathcal{I}_{\overline{\mathbf{r}}_{k}}}\frac{f(\mathbf{z}_{k}^{(m^{\prime})} | \overline{\mathbf{x}}_{k}^{(m^{\prime})})}{f_{\mathrm{c}}(\mathbf{z}_{k}^{(m^{\prime})})}\big),
\end{split}
\end{align}
where $\mathcal{D}_{\mathbf{a}_{k}}$ and $\mathcal{I}_{\overline{\mathbf{r}}_{k}}$ are the index sets of detected legacy PTs and new PTs at time $k$, respectively,
\begin{align*}
&\mathcal{D}_{\mathbf{a}_{k}}:=\{i\in \{1,\ldots,n_{k}\}: \underline{r}_k^{(i)}=1, a_{k}^{(i)}\neq 0\},
\\
&\mathcal{I}_{\overline{\mathbf{r}}_{k}}:=\{m\in \{1,\ldots,m_{k}\}: \overline{r}_k^{(m)}=1\}.
\end{align*}
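The index sets $\mathcal{D}_{\mathbf{a}_{k}}$ and $\mathcal{I}_{\overline{\mathbf{r}}_{k}}$ can be sketched directly from their definitions (the function names and the toy association and existence values below are ours):

```python
# Index sets of detected legacy PTs and detected new PTs (1-based indices).
def detected_legacy(a_k, r_k):
    """D: indices i with existence flag r_k^(i) = 1 and association a_k^(i) != 0."""
    return {i for i, (a, r) in enumerate(zip(a_k, r_k), start=1)
            if r == 1 and a != 0}

def detected_new(r_bar_k):
    """I: measurement indices m whose new-PT existence flag is 1."""
    return {m for m, r in enumerate(r_bar_k, start=1) if r == 1}

a_k = [2, 0, 1]        # PT 1 -> z2; PT 2 undetected; PT 3 associated but...
r_k = [1, 1, 0]        # ...PT 3 does not exist, so it is excluded from D
r_bar_k = [0, 0, 1]    # measurement 3 initiates a new PT
print(detected_legacy(a_k, r_k))   # {1}
print(detected_new(r_bar_k))       # {3}
```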
Since the new PTs and the legacy PTs at time $k$ are grouped at the next time instance, we assume that the new PTs are independent of the legacy PTs at time $k$. Then, the pdf $p( \mathbf{a}_{k},m_{k}, \overline{\mathbf{x}}_{k},\overline{\mathbf{r}}_{k}|\underline{\mathbf{x}}_{k},\underline{\mathbf{r}}_{k})$ is obtained as
\begin{align}\label{pz1}
\begin{split}
p( \mathbf{a}_{k},m_{k}, \overline{\mathbf{x}}_{k},\overline{\mathbf{r}}_{k}|\underline{\mathbf{x}}_{k},\underline{\mathbf{r}}_{k})&=p(\overline{\mathbf{x}}_{k}|\mathbf{a}_{k},\overline{\mathbf{r}}_{k},m_{k},\underline{\mathbf{x}}_{k},\underline{\mathbf{r}}_{k})p( \mathbf{a}_{k},\overline{\mathbf{r}}_{k},m_{k}|\underline{\mathbf{x}}_{k},\underline{\mathbf{r}}_{k})
\\
&=p(\overline{\mathbf{x}}_{k}|\overline{\mathbf{r}}_{k},m_{k})p( \mathbf{a}_{k},\overline{\mathbf{r}}_{k},m_{k}|\underline{\mathbf{x}}_{k},\underline{\mathbf{r}}_{k}),
\end{split}
\end{align}
with
\begin{align*}
p(\overline{\mathbf{x}}_{k}|\overline{\mathbf{r}}_{k},m_{k})=\big(\prod_{m\in\mathcal{I}_{\overline{\mathbf{r}}_{k}}} f_{\mathrm{b}}(\overline{\mathbf{x}}_{k}^{(m)})\big)\prod_{m^{\prime}\notin\mathcal{I}_{\overline{\mathbf{r}}_{k}}\atop m^{\prime}\in \{1,\ldots,m_{k}\}} f_{\mathrm{d}}(\overline{\mathbf{x}}_{k}^{(m^{\prime})}),
\end{align*}
where $f_{\mathrm{b}}(\overline{\mathbf{x}}_{k}^{(m)})$ is a prior pdf for the new PTs. The number of new PTs at time $k$ is assumed to be Poisson distributed with mean $\mu_{\mathrm{b}}$, independent of the number of legacy PTs and of the number of clutter measurements. Let $p_{\mathrm{d}}(\underline{\mathbf{x}}_{k}^{(i)})$ denote the probability that the legacy PT $\underline{\mathbf{x}}_{k}^{(i)}$ is detected by the sensor at time $k$; accordingly, the misdetection probability is $1-p_{\mathrm{d}}(\underline{\mathbf{x}}_{k}^{(i)})$. The prior pmf of $\mathbf{a}_{k}$, $\overline{\mathbf{r}}_{k}$ and $m_{k}$ conditioned on $\underline{\mathbf{x}}_{k}$, $\underline{\mathbf{r}}_{k}$ is given by
\begin{align}\label{pz3}
\begin{split}
p( \mathbf{a}_{k},\overline{\mathbf{r}}_{k},m_{k}|\underline{\mathbf{x}}_{k},\underline{\mathbf{r}}_{k})&=\frac{1}{m_{k} !}e^{-\mu_{\mathrm{b}}}(\mu_{\mathrm{b}})^{|\mathcal{I}_{\overline{\mathbf{r}}_{k}}|}e^{-\mu_{\mathrm{c}}}(\mu_{\mathrm{c}})^{m_{k}-|\mathcal{D}_{\mathbf{a}_{k}}|-|\mathcal{I}_{\overline{\mathbf{r}}_{k}}|}\psi(\mathbf{a}_{k})\big(\prod_{m\in\mathcal{I}_{\overline{\mathbf{r}}_{k}}}\Gamma_{\mathbf{a}_{k}}^{(m)}\big)
\\
&\quad\times \big(\prod_{i\in\mathcal{D}_{\mathbf{a}_{k}}}\underline{r}_k^{(i)}p_{\mathrm{d}}(\underline{\mathbf{x}}_{k}^{(i)})\big)\prod_{i^{\prime} \notin \mathcal{D}_{\mathbf{a}_{k}}\atop i^{\prime}\in \{1,\ldots,n_{k}\}}(1-\underline{r}_k^{(i^{\prime})}p_{\mathrm{d}}(\underline{\mathbf{x}}_{k}^{(i^{\prime})})),
\end{split}
\end{align}
with
\begin{align*}
\psi(\mathbf{a}_{k})&:=
\begin{cases}
0, &\begin{array}{l}\exists\ i,\ i^{\prime}\in \{1,\ldots,n_{k}\}, \text { such that } i\neq i^{\prime} \text { and } a_{k}^{(i)}=a_{k}^{(i^{\prime})}\neq0
\end{array} \\
1, &\begin{array}{l}\text{otherwise, }\end{array}
\end{cases}
\\
\Gamma_{\mathbf{a}_{k}}^{(m)}&:=
\begin{cases}
0, &\begin{array}{l}\exists\ i \in \{1,\ldots,n_{k}\}, \text { such that } a_{k}^{(i)}=m
\end{array} \\
1, &\begin{array}{l}\text{otherwise, }\end{array}
\end{cases}
\end{align*}
where the indicator functions $\psi(\mathbf{a}_{k})$ and $\Gamma_{\mathbf{a}_{k}}^{(m)}$ ensure that each measurement can only be associated once, either with a PT or with clutter. According to (\ref{pz0})-(\ref{pz3}), the pdf in (\ref{p24}) can be written as
\begin{align}\label{pl}
\begin{split}
p(\mathbf{z}_{k}, \mathbf{a}_{k},m_{k}, \overline{\mathbf{y}}_{k},\overline{\mathbf{g}}_{k} |\underline{\mathbf{y}}_{k},\underline{\mathbf{g}}_{k})&\propto\psi(\mathbf{a}_{k})\big(\prod_{i=1}^{n_{k}}q(\underline{\mathbf{x}}_{k}^{(i)}, \underline{r}_{k}^{(i)}, a_{k}^{(i)}; \mathbf{z}_k)\big)\prod_{m=1}^{m_{k}}v_1(\overline{\mathbf{x}}_{k}^{(m)}, \overline{r}_{k}^{(m)}, \mathbf{a}_{k})v_2(\overline{\mathbf{x}}_{k}^{(m)}, \overline{r}_{k}^{(m)}; \mathbf{z}_{k}),
\end{split}
\end{align}
where $q(\underline{\mathbf{x}}_{k}^{(i)}, \underline{r}_{k}^{(i)}, a_{k}^{(i)}; \mathbf{z}_k)$ is defined as
\begin{align*}
\begin{split}
q(\underline{\mathbf{x}}_{k}^{(i)}, 1, a_{k}^{(i)}; \mathbf{z}_k):=
\begin{cases}
\frac{p_{\mathrm{d}}(\underline{\mathbf{x}}_{k}^{(i)})f(\mathbf{z}_{k}^{(a_{k}^{(i)})}| \underline{\mathbf{x}}_{k}^{(i)})}{\mu_{\mathrm{c}}f_{\mathrm{c}}(\mathbf{z}_{k}^{(a_{k}^{(i)})})}, &\begin{array}{l}a_{k}^{(i)}\neq0
\end{array} \\
1-p_{\mathrm{d}}(\underline{\mathbf{x}}_{k}^{(i)}), &\begin{array}{l}a_{k}^{(i)}=0,\end{array}
\end{cases}
\end{split}
\end{align*}
with $q(\underline{\mathbf{x}}_{k}^{(i)}, 0, a_{k}^{(i)}; \mathbf{z}_k):=\mathrm{I}(a_{k}^{(i)})$, and $v_1(\overline{\mathbf{x}}_{k}^{(m)}, \overline{r}_{k}^{(m)}, \mathbf{a}_{k})$ is defined as
\begin{align}\label{v1}
\begin{split}
v_1(\overline{\mathbf{x}}_{k}^{(m)}, 1, \mathbf{a}_{k}):=
\begin{cases}
0, &\begin{array}{l}\exists\ i \in \{1,\ldots,n_{k}\},\ \text {such that } a_{k}^{(i)}=m
\end{array} \\
\frac{\mu_{\mathrm{b}}}{\mu_{\mathrm{c}}}f_{\mathrm{b}}(\overline{\mathbf{x}}_{k}^{(m)}), &\begin{array}{l}\text{otherwise,}\end{array}
\end{cases}
\end{split}
\end{align}
with
$v_1(\overline{\mathbf{x}}_{k}^{(m)}, 0, \mathbf{a}_{k}):=f_{\mathrm{d}}(\overline{\mathbf{x}}_{k}^{(m)})$, and
\begin{align}\label{v2}
\begin{split}
v_2(\overline{\mathbf{x}}_{k}^{(m)}, \overline{r}_{k}^{(m)}; \mathbf{z}_{k}):=
\begin{cases}
\frac{f(\mathbf{z}_{k}^{(m)}| \overline{\mathbf{x}}_{k}^{(m)})}{f_{\mathrm{c}}(\mathbf{z}_{k}^{(m)})}, &\begin{array}{l}\overline{r}_{k}^{(m)}=1
\end{array} \\
1, &\begin{array}{l}\overline{r}_{k}^{(m)}=0.\end{array}
\end{cases}
\end{split}
\end{align}
Consequently, substituting (\ref{ps}) and (\ref{pl}) into (\ref{pdf}), the joint posterior pdf $p(\mathbf{y}_{1:k}, \mathbf{g}_{1:k}, \mathbf{a}_{1:k}|\mathbf{z}_{1:k})$ is factorized as
\begin{align}\label{p1}
\begin{split}
p(\mathbf{y}_{1:k}, \mathbf{g}_{1:k}, \mathbf{a}_{1:k}|\mathbf{z}_{1:k})&\propto\prod_{k^{\prime}=1}^{k}p(\underline{\mathbf{y}}_{k^{\prime}},\underline{\mathbf{g}}_{k^{\prime}}|\mathbf{y}_{k^{\prime}-1},\mathbf{g}_{k^{\prime}-1})\big(\prod_{i=1}^{n_{k^{\prime}}}q(\underline{\mathbf{x}}_{k^{\prime}}^{(i)}, \underline{r}_{k^{\prime}}^{(i)}, a_{k^{\prime}}^{(i)}; \mathbf{z}_{k^{\prime}})\big)
\\
&\quad\times\psi(\mathbf{a}_{k^{\prime}})\prod_{m=1}^{m_{k^{\prime}}}v_1(\overline{\mathbf{x}}_{k^{\prime}}^{(m)}, \overline{r}_{k^{\prime}}^{(m)}, \mathbf{a}_{k^{\prime}})v_2(\overline{\mathbf{x}}_{k^{\prime}}^{(m)}, \overline{r}_{k^{\prime}}^{(m)}; \mathbf{z}_{k^{\prime}}).
\end{split}
\end{align}
\begin{remark}
As shown in the factorization (\ref{p1}) of the joint posterior pdf, the group structure not only helps to model the evolution of targets, but also has an important impact on the data association (i.e., the likelihood calculation). Therefore, it is crucial to consider the group structure in the GTT problem.
\end{remark}
\section{The Proposed GTBP Method}
In this section, we derive a further factorization of the joint posterior pdf $p(\mathbf{y}_{1:k},\mathbf{g}_{1:k}, \mathbf{a}_{1:k}|\mathbf{z}_{1:k})$ by stretching the factor $\psi(\mathbf{a}_{k})$, and then propose the GTBP method.
\subsection{Factor Stretching and Joint Posterior pdf}
Note that in (\ref{p1}), we factorize the joint posterior pdf $p(\mathbf{y}_{1:k}, \mathbf{g}_{1:k}, \mathbf{a}_{1:k}|\mathbf{z}_{1:k})$ into a product of factors. However, the factor $\psi(\mathbf{a}_{k})$ couples all entries of the target-oriented association vector $\mathbf{a}_{k}$, which may incur high-dimensional discrete marginalizations when computing the messages via BP. To avoid this, the stretching principle in factor graphs can be applied. Following \cite{FG-BP,BP-MTT1,BP-MTT2}, we introduce the measurement-oriented association vector $\mathbf{b}_{k}:=\left[b_{k}^{(1)},\ldots,b_{k}^{(m_{k})}\right]^{\mathrm{T}}$ with
\begin{align*}
b_{k}^{(m)}:=
\begin{cases}
i \in\left\{1, \ldots, n_{k}\right\}, &\begin{array}{l}\text { if the measurement } \mathbf{z}_k^{(m)} \text { is } \text{generated by } \underline{\mathbf{x}}_k^{(i)}
\end{array} \\
0, &\begin{array}{l}\text { if } \mathbf{z}_k^{(m)} \text { is not generated } \text { by a legacy PT}.
\end{array} \\
\end{cases}
\end{align*}
Notably, the measurement-oriented association vector $\mathbf{b}_{k}$ is redundant with $\mathbf{a}_{k}$; that is, once one of the two association vectors is determined, the other is determined as well. By introducing $\mathbf{b}_{k}$, the factor $\psi(\mathbf{a}_{k})$ can be stretched and equivalently replaced by
\begin{align*}
\psi(\mathbf{a}_{k},\mathbf{b}_{k}):=\prod_{i=1}^{n_{k}}\prod_{m=1}^{m_{k}}\Psi_{k}^{i,m}(a_{k}^{(i)},b_{k}^{(m)}),
\end{align*}
where
\begin{align*}
\Psi_{k}^{i,m}(a_{k}^{(i)},b_{k}^{(m)}):=
\begin{cases}
0, &\begin{array}{l}a_{k}^{(i)}=m,\ b_{k}^{(m)}\neq i \text{ or } b_{k}^{(m)}=i,\ a_{k}^{(i)}\neq m
\end{array} \\
1, &\begin{array}{l}\text{otherwise. }\end{array}
\end{cases}
\end{align*}
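To make the exclusion constraint concrete, the product of all factors $\Psi_{k}^{i,m}$ can be sketched as a consistency check between the two association vectors. This is a minimal illustrative sketch, not part of the algorithm; it uses 0-based Python lists with the value 0 meaning "unassociated":

```python
def psi_consistent(a, b):
    """Return 1 iff the target-oriented vector a (a[i] in {0..m_k})
    and the measurement-oriented vector b (b[m] in {0..n_k}) encode
    the same association, i.e. a[i] == m+1 exactly when b[m] == i+1.
    This equals the product of all pairwise factors Psi_k^{i,m}."""
    for i, ai in enumerate(a):
        for m, bm in enumerate(b):
            # Psi_k^{i,m} = 0 when exactly one side claims the pairing
            if (ai == m + 1) != (bm == i + 1):
                return 0
    return 1
```

A joint configuration of $(\mathbf{a}_{k},\mathbf{b}_{k})$ contributes to the posterior only when this product is 1.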
Hereafter, we abbreviate $\Psi_{k}^{i,m}(a_{k}^{(i)},b_{k}^{(m)})$ as $\Psi_{k}^{i,m}$ for notational convenience.
Note that in (\ref{v1}), the condition that there exists $i \in \{1,\ldots,n_{k}\}$ such that $a_{k}^{(i)}=m$ is equivalent to the condition that $b_{k}^{(m)}\in\{1,\ldots,n_{k}\}$. According to the definitions (\ref{v1})-(\ref{v2}) of $v_1(\overline{\mathbf{x}}_{k}^{(m)}, \overline{r}_{k}^{(m)}, \mathbf{a}_{k})$ and $v_2(\overline{\mathbf{x}}_{k}^{(m)}, \overline{r}_{k}^{(m)}; \mathbf{z}_{k})$, one can easily verify that their product can be replaced by $v(\overline{\mathbf{x}}_{k}^{(m)}, \overline{r}_{k}^{(m)}, b_{k}^{(m)}; \mathbf{z}_{k}^{(m)})$,
\begin{align}\label{def-v}
\begin{split}
v(\overline{\mathbf{x}}_{k}^{(m)}, 1, b_{k}^{(m)}; \mathbf{z}_{k}^{(m)}):=\begin{cases}
0, &\begin{array}{l}b_{k}^{(m)}\in\{1,\ldots,n_{k}\}
\end{array} \\
\frac{\mu_{\mathrm{b}}f_{\mathrm{b}}(\overline{\mathbf{x}}_{k}^{(m)})f(\mathbf{z}_{k}^{(m)}| \overline{\mathbf{x}}_{k}^{(m)})}{\mu_{\mathrm{c}}f_{\mathrm{c}}(\mathbf{z}_{k}^{(m)})}, &\begin{array}{l}b_{k}^{(m)}=0,\end{array}
\end{cases}
\end{split}
\end{align}
with
$v(\overline{\mathbf{x}}_{k}^{(m)}, 0, b_{k}^{(m)}; \mathbf{z}_{k}^{(m)}):=f_{\mathrm{d}}(\overline{\mathbf{x}}_{k}^{(m)})$. Let $\mathbf{b}_{1:k}$ denote the stacked measurement-oriented association vector from time 1 to time $k$. Thus, we can further factorize the joint posterior pdf $p(\mathbf{y}_{1:k}, \mathbf{g}_{1:k}, \mathbf{a}_{1:k}, \mathbf{b}_{1:k}|\mathbf{z}_{1:k})$ as follows:
\begin{align}\label{p2}
\begin{split}
p(\mathbf{y}_{1:k},\mathbf{g}_{1:k}, \mathbf{a}_{1:k}, \mathbf{b}_{1:k}|\mathbf{z}_{1:k})&\propto\prod_{k^{\prime}=1}^{k}\bigg(p(\underline{\mathbf{y}}_{k^{\prime}},\underline{\mathbf{g}}_{k^{\prime}}|\mathbf{y}_{k^{\prime}-1},\mathbf{g}_{k^{\prime}-1})\big(\prod_{i=1}^{n_{k^{\prime}}}q(\underline{\mathbf{x}}_{k^{\prime}}^{(i)}, \underline{r}_{k^{\prime}}^{(i)}, a_{k^{\prime}}^{(i)}; \mathbf{z}_{k^{\prime}})\prod_{m=1}^{m_{k^{\prime}}}\Psi_{k^{\prime}}^{i,m}\big)
\\
&\quad \times\prod_{m^{\prime}=1}^{m_{k^{\prime}}}v(\overline{\mathbf{x}}_{k^{\prime}}^{(m^{\prime})}, \overline{r}_{k^{\prime}}^{(m^{\prime})}, b_{k^{\prime}}^{(m^{\prime})}; \mathbf{z}_{k^{\prime}}^{(m^{\prime})})\bigg).
\end{split}
\end{align}
A factor graph representation of this factorization, depicted mainly for time $k$, is shown in Fig. \ref{figure2}. Herein, factor nodes and variable nodes are drawn as squares and circles, respectively.
\begin{figure}[hbtp]
\centering
\includegraphics[width=0.8\linewidth]{fig2.eps}
\caption{The factor graph description of the factorization (\ref{p2}) for GTT, shown for the time $k$. Some abbreviations are used: $p_{\underline{\mathbf{g}}|\mathbf{y}}:=p(\underline{\mathbf{g}}_{k}|\mathbf{x}_{k-1}, \mathbf{r}_{k-1})$, $p_{\underline{\mathbf{y}}|\underline{\mathbf{g}},\mathbf{y}}:=p(\underline{\mathbf{x}}_{k}, \underline{\mathbf{r}}_{k}|\underline{\mathbf{g}}_{k}, \mathbf{x}_{k-1}, \mathbf{r}_{k-1})$, $q^{(i)}:=q(\underline{\mathbf{x}}_{k}^{(i)}, \underline{r}_{k}^{(i)}, a_{k}^{(i)}; \mathbf{z}_{k})$, $v^{(m)}:=v(\overline{\mathbf{x}}_{k}^{(m)}, \overline{r}_{k}^{(m)}, b_{k}^{(m)}; \mathbf{z}_{k}^{(m)})$.
}
\label{figure2}
\end{figure}
\subsection{The GTBP Method}
Based on the factorization (\ref{p2}) and the devised factor graph in Fig. \ref{figure2}, we calculate the beliefs in detail via the message passing scheme (\ref{ftv}), and then obtain the desired marginal posterior pdfs and pmfs.
\subsubsection{Prediction of Group Structure and Target State}
First, the message $\alpha_k(\underline{\mathbf{g}}_{k})$ passed from the factor node $p(\underline{\mathbf{g}}_{k}|\mathbf{x}_{k-1},\mathbf{r}_{k-1})$ to the variable node $\underline{\mathbf{g}}_{k}$ is calculated as
\begin{align}\label{pre-gs}
\begin{split}
\alpha_k(\underline{\mathbf{g}}_{k})=\sum_{\mathbf{r}_{k-1}\in\mathcal{R}_{k-1}}\int p(\underline{\mathbf{g}}_{k}|\mathbf{x}_{k-1}, \mathbf{r}_{k-1})\widetilde{p}(\mathbf{x}_{k-1},\mathbf{r}_{k-1})\mathrm{d}{\mathbf{x}_{k-1}},
\end{split}
\end{align}
where $\widetilde{p}(\mathbf{x}_{k-1},\mathbf{r}_{k-1})$ is the approximation of the marginal posterior pdf $p(\mathbf{x}_{k-1},\mathbf{r}_{k-1}|\mathbf{z}_{1:k-1})$ obtained at time $k-1$. Then, the message $\alpha_k(\underline{\mathbf{y}}_{k})=\alpha_k(\underline{\mathbf{x}}_{k}, \underline{\mathbf{r}}_{k})$ passed from the factor node $p(\underline{\mathbf{y}}_{k}|\underline{\mathbf{g}}_{k},\mathbf{y}_{k-1})$ to the variable node $\underline{\mathbf{y}}_{k}$ is calculated as
\begin{align}\label{alpha-gs}
\begin{split}
\alpha_k(\underline{\mathbf{x}}_{k}, \underline{\mathbf{r}}_{k})&=\sum_{\underline{\mathbf{g}}_{k}\in\underline{\mathcal{G}}_{k}}\sum_{\mathbf{r}_{k-1}\in\mathcal{R}_{k-1}}\int\alpha_k(\underline{\mathbf{g}}_{k})\widetilde{p}(\mathbf{x}_{k-1},\mathbf{r}_{k-1}) p(\underline{\mathbf{x}}_{k}, \underline{\mathbf{r}}_{k}|\underline{\mathbf{g}}_{k}, \mathbf{x}_{k-1}, \mathbf{r}_{k-1})
\mathrm{d}{\mathbf{x}_{k-1}}
\\
&=\sum_{\underline{\mathbf{g}}_{k}\in\underline{\mathcal{G}}_{k}}\alpha_k(\underline{\mathbf{g}}_{k})\alpha_k(\underline{\mathbf{x}}_{k}, \underline{\mathbf{r}}_{k}|\underline{\mathbf{g}}_{k}),
\end{split}
\end{align}
where
\begin{align}\label{alpha1}
\begin{split}
\alpha_k(\underline{\mathbf{x}}_{k}, \underline{\mathbf{r}}_{k}|\underline{\mathbf{g}}_{k})=\sum_{\mathbf{r}_{k-1}\in\mathcal{R}_{k-1}}\int \widetilde{p}(\mathbf{x}_{k-1},\mathbf{r}_{k-1})p(\underline{\mathbf{x}}_{k}, \underline{\mathbf{r}}_{k}|\underline{\mathbf{g}}_{k}, \mathbf{x}_{k-1}, \mathbf{r}_{k-1})\mathrm{d}{\mathbf{x}_{k-1}}.
\end{split}
\end{align}
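Structurally, (\ref{alpha-gs}) is a mixture over group partitions: each conditional prediction $\alpha_k(\underline{\mathbf{x}}_{k}, \underline{\mathbf{r}}_{k}|\underline{\mathbf{g}}_{k})$ is weighted by the partition probability $\alpha_k(\underline{\mathbf{g}}_{k})$. A toy sketch of this mixture evaluation (function names are ours; the conditional densities are assumed to be given as callables):

```python
def alpha_mixture(alpha_g, cond_alphas, x):
    """Evaluate the predicted message alpha_k(x) as the mixture
    sum_g alpha_k(g) * alpha_k(x | g) over group partitions.

    alpha_g     : iterable of partition probabilities alpha_k(g)
    cond_alphas : iterable of callables; cond_alphas[g](x) = alpha_k(x | g)
    """
    return sum(a_g * f(x) for a_g, f in zip(alpha_g, cond_alphas))
```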
\begin{remark}
Note that in (\ref{alpha-gs}), the message $\alpha_k(\underline{\mathbf{x}}_{k}, \underline{\mathbf{r}}_{k})$ involves the weighted summation of $\alpha_k(\underline{\mathbf{x}}_{k}, \underline{\mathbf{r}}_{k}|\underline{\mathbf{g}}_{k})$ over possible group structures. That is, the evolution of targets is modeled as the co-action of the group or single-target motions under different group structures. Herein, the group structure probabilities $\alpha_k(\underline{\mathbf{g}}_{k})$, namely the target state transition mode weights, will be updated by the data association step presented subsequently. In particular, if the group structure is deterministic and each target constitutes its own group, then GTBP degrades to the BP method for MTT \cite{BP-MTT2}.
\end{remark}
\subsubsection{Measurement Evaluation and Iterative Data Association}
The messages $\beta_k(a_{k}^{(i)})$ passed from the factor nodes $q(\underline{\mathbf{x}}_{k}^{(i)}, \underline{r}_{k}^{(i)}, a_{k}^{(i)}; \mathbf{z}_{k})$ to the variable nodes $a_{k}^{(i)}$ are computed by
\begin{align}\label{beta}
\begin{split}
\beta_k(a_{k}^{(i)})&=\sum_{\underline{\mathbf{r}}_{k}\in\underline{\mathcal{R}}_{k}}\int q(\underline{\mathbf{x}}_{k}^{(i)}, \underline{r}_{k}^{(i)}, a_{k}^{(i)}; \mathbf{z}_{k}) \alpha_k(\underline{\mathbf{x}}_{k}, \underline{\mathbf{r}}_{k})\mathrm{d}{\underline{\mathbf{x}}_{k}}.
\end{split}
\end{align}
For the new PTs, the messages $\xi_k(b_{k}^{(m)})$ passed from the factor nodes $v(\overline{\mathbf{x}}_{k}^{(m)}, \overline{r}_{k}^{(m)}, b_{k}^{(m)}; \mathbf{z}_{k}^{(m)})$ to the variable nodes $b_{k}^{(m)}$ are computed by
\begin{align}\label{xi}
\begin{split}
\xi_k(b_{k}^{(m)})&=\sum_{\overline{r}_{k}^{(m)}\in\{0,1\} }\int v(\overline{\mathbf{x}}_{k}^{(m)}, \overline{r}_{k}^{(m)}, b_{k}^{(m)}; \mathbf{z}_{k}^{(m)})\mathrm{d}{\overline{\mathbf{x}}_{k}^{(m)}}
\\
&=\int v(\overline{\mathbf{x}}_{k}^{(m)}, 1, b_{k}^{(m)}; \mathbf{z}_{k}^{(m)})\mathrm{d}{\overline{\mathbf{x}}_{k}^{(m)}}+1.
\end{split}
\end{align}
Once the incoming messages $\beta_k(a_{k}^{(i)})$ to the data association part have been calculated, the iterative message calculation between all the nodes $a_{k}^{(i)}$, $b_{k}^{(m)}$ and $\Psi_{k}^{i,m}$ is performed. In each iteration, the messages $\varphi_{\Psi_{k}^{i,m} \rightarrow b_{k}^{(m)}}^{\left[\ell\right]}(b_{k}^{(m)})$ and $\varphi_{\Psi_{k}^{i,m} \rightarrow a_{k}^{(i)}}^{\left[\ell\right]}(a_{k}^{(i)})$ are updated by
\begin{align}
\begin{split}\label{message-beta}
&\varphi_{\Psi_{k}^{i,m} \rightarrow b_{k}^{(m)}}^{\left[\ell\right]}(b_{k}^{(m)})=\sum_{a_{k}^{(i)}=0}^{m_{k}}\beta_k(a_{k}^{(i)})\Psi_{k}^{i,m}\prod_{{m^{\prime}=1}\atop{m^{\prime}\neq m}}^{m_{k}}\varphi_{\Psi_{k}^{i,m^{\prime}} \rightarrow a_{k}^{(i)}}^{\left[\ell\right]}(a_{k}^{(i)}),
\end{split}
\\
\begin{split}\label{message-xi}
&\varphi_{\Psi_{k}^{i,m} \rightarrow a_{k}^{(i)}}^{\left[\ell\right]}(a_{k}^{(i)})=\sum_{b_{k}^{(m)}=0}^{n_{k}}\xi_k(b_{k}^{(m)}) \Psi_{k}^{i,m}\prod_{{i^{\prime}=1}\atop{i^{\prime}\neq i}}^{n_{k}}\varphi_{\Psi_{k}^{i^{\prime},m} \rightarrow b_{k}^{(m)}}^{\left[\ell-1\right]}(b_{k}^{(m)}),
\end{split}
\end{align}
where the superscript $\ell$ denotes the iteration index, $i=1,\ldots,n_{k}$ and $m=1,\ldots,m_{k}$. The iterative loop of (\ref{message-beta})-(\ref{message-xi}) is initialized by setting
\begin{align}\label{ini-beta}
\varphi_{\Psi_{k}^{i,m} \rightarrow b_{k}^{(m)}}^{\left[0\right]}(b_{k}^{(m)})=&\sum_{a_{k}^{(i)}=0}^{m_{k}}\beta_k(a_{k}^{(i)}) \Psi_{k}^{i,m}.
\end{align}
An efficient Matlab implementation of this iteration is provided in \cite{Williams3}. The iteration of (\ref{message-beta})-(\ref{message-xi}) terminates when the maximum number of iterations is reached or when the Frobenius norm of the difference between the beliefs of two consecutive iterations falls below a certain threshold. Denoting by $\ell_k$ the number of iterations at which the stopping criterion is met, the messages passed from $a_{k}^{(i)}$ to $q(\underline{\mathbf{x}}_{k}^{(i)}, \underline{r}_{k}^{(i)}, a_{k}^{(i)}; \mathbf{z}_{k})$ are obtained as
\begin{align}\label{kappa}
\kappa_k(a_{k}^{(i)})= \prod_{m=1}^{m_{k}}\varphi_{\Psi_{k}^{i,m} \rightarrow a_{k}^{(i)}}^{\left[\ell_k\right]}(a_{k}^{(i)}),
\end{align}
for $i=1,\ldots,n_{k}$, and the messages passed from $b_{k}^{(m)}$ to $v(\overline{\mathbf{x}}_{k}^{(m)}, \overline{r}_{k}^{(m)}, b_{k}^{(m)}; \mathbf{z}_{k}^{(m)})$ are obtained as
\begin{align}\label{iota}
\iota_k(b_{k}^{(m)})= \prod_{i=1}^{n_{k}}\varphi_{\Psi_{k}^{i,m} \rightarrow b_{k}^{(m)}}^{\left[\ell_k\right]}(b_{k}^{(m)}).
\end{align}
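For illustration, the iterative loop (\ref{message-beta})-(\ref{ini-beta}) admits a well-known simplified scalar-message form when $\xi_k(b_{k}^{(m)})=1$ for $b_{k}^{(m)}\neq0$, which follows from (\ref{def-v}). The following NumPy sketch implements that simplified form (variable names and the convergence check are our own conventions, not the paper's):

```python
import numpy as np

def bp_data_association(beta, xi0, n_iter=100, tol=1e-9):
    """Loopy-BP association probabilities (simplified scalar-message form).

    beta : (n, m+1) array; beta[i, a] = beta_k(a_k^(i) = a), with
           column 0 the "no measurement" hypothesis.
    xi0  : (m,) array; xi_k(b_k^(m) = 0).  xi_k(b) = 1 for b != 0.
    Returns an (n, m+1) array of approximate marginals p(a_k^(i) = a).
    """
    n, m = beta.shape[0], beta.shape[1] - 1
    nu = np.ones((m, n))                      # measurement -> target messages
    for _ in range(n_iter):
        s = (beta[:, 1:] * nu.T).sum(axis=1)  # sum_m' beta_i(m') nu[m'][i]
        # target -> measurement messages (cf. (message-beta))
        phi = beta[:, 1:] / (beta[:, [0]] + s[:, None] - beta[:, 1:] * nu.T)
        t = phi.sum(axis=0)                   # sum_i phi[i][m]
        # measurement -> target messages (cf. (message-xi))
        nu_new = 1.0 / (xi0[:, None] + t[:, None] - phi.T)
        converged = np.abs(nu_new - nu).max() < tol
        nu = nu_new
        if converged:
            break
    post = beta.copy()
    post[:, 1:] *= nu.T                       # weight beta by the kappa messages
    return post / post.sum(axis=1, keepdims=True)
```

On the symmetric example of two targets contending for one measurement with uniform weights, this loop recovers the exact association marginal $1/3$.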
\subsubsection{Measurement Update and Belief Calculation}\label{cite_sub}
When the messages $\kappa_k(a_{k}^{(i)})$ are obtained, we can calculate the messages $\gamma_k^{(i)}(\underline{\mathbf{x}}_k^{(i)}, \underline{r}_k^{(i)})$ for the legacy PTs, which are passed from $q(\underline{\mathbf{x}}_{k}^{(i)}, \underline{r}_{k}^{(i)}, a_{k}^{(i)}; \mathbf{z}_{k})$ to $\underline{\mathbf{y}}_k^{(i)}$,
\begin{align}\label{gamma_k}
\begin{split}
\gamma_k^{(i)}(\underline{\mathbf{x}}_k^{(i)}, 1)=\sum_{a_{k}^{(i)}=0}^{m_{k}} q(\underline{\mathbf{x}}_{k}^{(i)}, 1, a_{k}^{(i)}; \mathbf{z}_{k}) \kappa_k(a_{k}^{(i)}),
\end{split}
\end{align}
with $\gamma_k^{(i)}(\underline{\mathbf{x}}_k^{(i)}, 0)=\kappa_k(0)$. As a consequence, the belief $\widetilde{p}(\underline{\mathbf{y}}_{k})=\widetilde{p}(\underline{\mathbf{x}}_{k}, \underline{\mathbf{r}}_{k})$ approximating the marginal posterior pdf $p(\underline{\mathbf{x}}_{k}, \underline{\mathbf{r}}_{k}|\mathbf{z}_{1:k})$ is calculated as
\begin{align}\label{app-pdf}
\begin{split}
\widetilde{p}(\underline{\mathbf{x}}_{k}, \underline{\mathbf{r}}_{k})&=\frac{1}{C(\underline{\mathbf{x}}_{k})} \alpha_k(\underline{\mathbf{x}}_{k},\underline{\mathbf{r}}_{k})\prod_{i=1}^{n_{k}}\gamma_k^{(i)}(\underline{\mathbf{x}}_k^{(i)}, \underline{r}_k^{(i)})
\\
&=\frac{1}{C(\underline{\mathbf{x}}_{k})}\sum_{\underline{\mathbf{g}}_{k}\in\underline{\mathcal{G}}_{k}}\big(\alpha_k(\underline{\mathbf{g}}_{k})\alpha_k(\underline{\mathbf{x}}_{k}, \underline{\mathbf{r}}_{k}|\underline{\mathbf{g}}_{k})\prod_{i=1}^{n_{k}}\gamma_k^{(i)}(\underline{\mathbf{x}}_k^{(i)}, \underline{r}_k^{(i)})\big),
\end{split}
\end{align}
where $C(\underline{\mathbf{x}}_{k})$ is a normalization constant such that $\sum_{\underline{\mathbf{r}}_{k}\in\underline{\mathcal{R}}_{k}}\int\widetilde{p}(\underline{\mathbf{x}}_{k},\underline{\mathbf{r}}_{k})\mathrm{d}{\underline{\mathbf{x}}_{k}}=1$.
For the new PTs, the messages $\varsigma_k^{(m)}(\overline{\mathbf{x}}_k^{(m)}, \overline{r}_k^{(m)})$ passed from $v(\overline{\mathbf{x}}_{k}^{(m)}, \overline{r}_{k}^{(m)}, b_{k}^{(m)}; \mathbf{z}_{k}^{(m)})$ to $\overline{\mathbf{y}}_k^{(m)}$ are given by
\begin{align}\label{varsigma}
\begin{split}
\varsigma_k^{(m)}(\overline{\mathbf{x}}_k^{(m)}, 1)&=\sum_{b_{k}^{(m)}=0}^{n_{k}} v(\overline{\mathbf{x}}_{k}^{(m)}, 1, b_{k}^{(m)}; \mathbf{z}_{k}^{(m)}) \iota_k(b_{k}^{(m)})
\\
&=\frac{\mu_{\mathrm{b}}f_{\mathrm{b}}(\overline{\mathbf{x}}_{k}^{(m)})f(\mathbf{z}_{k}^{(m)}| \overline{\mathbf{x}}_{k}^{(m)})}{\mu_{\mathrm{c}}f_{\mathrm{c}}(\mathbf{z}_{k}^{(m)})}\iota_k(0),
\end{split}
\end{align}
with $\varsigma_k^{(m)}(\overline{\mathbf{x}}_k^{(m)}, 0)=\sum_{b_{k}^{(m)}=0}^{n_{k}} \iota_k(b_{k}^{(m)})f_{\mathrm{d}}(\overline{\mathbf{x}}_{k}^{(m)})$. Then, the belief $\widetilde{p}(\overline{\mathbf{y}}_{k}^{(m)})=\widetilde{p}(\overline{\mathbf{x}}_{k}^{(m)}, \overline{r}_{k}^{(m)})$ approximating the marginal posterior pdf $p(\overline{\mathbf{x}}_{k}^{(m)}, \overline{r}_{k}^{(m)}|\mathbf{z}_{1:k})$ is calculated by
\begin{align}\label{new-pt}
\widetilde{p}(\overline{\mathbf{x}}_{k}^{(m)},\overline{r}_{k}^{(m)})&=\frac{1}{C(\overline{\mathbf{x}}_k^{(m)})} \varsigma_k^{(m)}(\overline{\mathbf{x}}_k^{(m)}, \overline{r}_{k}^{(m)}),
\end{align}
where $C(\overline{\mathbf{x}}_k^{(m)})$ is a normalization constant such that $\sum_{\overline{r}_{k}^{(m)}\in\{0,1\}}\int\widetilde{p}(\overline{\mathbf{x}}_{k}^{(m)},\overline{r}_{k}^{(m)})\mathrm{d}{\overline{\mathbf{x}}_{k}^{(m)}}=1$.
Furthermore, the beliefs $\widetilde{p}( \underline{\mathbf{g}}_{k})$ approximating the marginal
posterior pmfs $p(\underline{\mathbf{g}}_{k}|\mathbf{z}_{1:k})$ are obtained as
\begin{align}\label{pdf-g}
\begin{split}
\widetilde{p}( \underline{\mathbf{g}}_{k})&=\frac{1}{C(\underline{\mathbf{x}}_{k})}\alpha_k(\underline{\mathbf{g}}_{k})\sum_{\underline{\mathbf{r}}_{k}\in\underline{\mathcal{R}}_k}\int\alpha_k(\underline{\mathbf{x}}_{k}, \underline{\mathbf{r}}_{k}|\underline{\mathbf{g}}_{k})\prod_{i=1}^{n_{k}}\gamma_k^{(i)}(\underline{\mathbf{x}}_k^{(i)}, \underline{r}_k^{(i)})\mathrm{d}{\underline{\mathbf{x}}_{k}}.
\end{split}
\end{align}
\subsubsection{Target Declaration, State Estimation and Pruning}
The obtained belief $\widetilde{p}(\underline{\mathbf{x}}_{k}, \underline{\mathbf{r}}_{k})$ can be used for target declaration, state estimation and pruning. Concretely, the beliefs $\widetilde{p}(\underline{\mathbf{x}}_{k}^{(i)},\underline{r}_{k}^{(i)})$ and $\widetilde{p}( \underline{r}_{k}^{(i)})$ approximating the pdf $p(\underline{\mathbf{x}}_{k}^{(i)},\underline{r}_{k}^{(i)}|\mathbf{z}_{1:k})$ and the pmf $p(\underline{r}_{k}^{(i)}|\mathbf{z}_{1:k})$ are derived by
\begin{align*}
\widetilde{p}(\underline{\mathbf{x}}_{k}^{(i)}, \underline{r}_{k}^{(i)})&=\sum_{\underline{\mathbf{r}}_{k}\backslash r_{k}^{(i)}}\int\widetilde{p}(\underline{\mathbf{x}}_{k}, \underline{\mathbf{r}}_{k})\mathrm{d}{(\underline{\mathbf{x}}_{k}\backslash\underline{\mathbf{x}}_{k}^{(i)})},
\\
\widetilde{p}( \underline{r}_{k}^{(i)})&=\sum_{\underline{\mathbf{r}}_{k}\backslash r_{k}^{(i)}}\int\widetilde{p}(\underline{\mathbf{x}}_{k}, \underline{\mathbf{r}}_{k})\mathrm{d}{\underline{\mathbf{x}}_{k}}.
\end{align*}
Then, target declaration can be performed by comparing the existence probability with a given threshold $P_{\mathrm{e}}$, i.e., the legacy PT $\underline{\mathbf{x}}_{k}^{(i)}$ is confirmed at time $k$ if $\widetilde{p}( \underline{r}_{k}^{(i)}=1)>P_{\mathrm{e}}$. By means of the
minimum mean square error (MMSE) estimator, the state estimates for these PTs are obtained as
\begin{align}\label{mse}
\widehat{\underline{\mathbf{x}}}_{k}^{(i)}=\int\underline{\mathbf{x}}_{k}^{(i)}\frac{\widetilde{p}(\underline{\mathbf{x}}_{k}^{(i)}, \underline{r}_{k}^{(i)})}{\widetilde{p}( \underline{r}_{k}^{(i)})}\mathrm{d}{\underline{\mathbf{x}}}_{k}^{(i)}.
\end{align}
Analogously, target declaration and state estimation for the new PTs are implemented in the same way as for the legacy PTs. Finally, a pruning step is performed to remove unlikely PTs. Specifically, let $P_{\mathrm{pr}}$ be the pruning threshold; then the PTs with existence beliefs smaller than $P_{\mathrm{pr}}$ are removed, i.e., the legacy PTs with $\widetilde{p}( \underline{r}_{k}^{(i)}=1)<P_{\mathrm{pr}}$ and the new PTs with $\widetilde{p}(\overline{r}_{k}^{(m)}=1)<P_{\mathrm{pr}}$.
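The declaration and pruning logic above amounts to two threshold comparisons per PT; a trivial sketch (the default threshold values are illustrative, not from the paper):

```python
def declare_and_prune(existence, P_e=0.5, P_pr=1e-3):
    """existence[i] is the existence belief p~(r_k^(i) = 1).
    Returns indices of confirmed PTs and of PTs surviving pruning."""
    confirmed = [i for i, p in enumerate(existence) if p > P_e]   # declaration
    kept = [i for i, p in enumerate(existence) if p >= P_pr]      # pruning
    return confirmed, kept
```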
\subsection{Computational Complexity and Scalability}\label{sec-scalability}
As a highly efficient and flexible algorithm, BP provides a scalable solution to the data association problem. By exploiting the scalability of BP, we propose the GTBP method for GTT. Under the assumption of a fixed number of BP iterations, we analyze the computational complexity of the proposed GTBP method as follows. Specifically, for the prediction of the group structure, (\ref{alpha-gs}) must be calculated $|\underline{\mathcal{G}}_{k}|$ times; that is, its computational complexity scales as $\mathcal{O}(|\underline{\mathcal{G}}_{k}|)$. The computational complexity of (\ref{app-pdf}) is likewise linear in the number of group partitions. Furthermore, the computational complexity of (\ref{beta})-(\ref{gamma_k}) scales as $\mathcal{O}(n_km_k)$, where the number of measurements $m_k$ increases linearly with the numbers of legacy PTs, new PTs and false alarms. Notably, in the worst case, the number of PTs increases up to the maximum possible number of PTs $N_{\text{max}}$. Consequently, the overall computational complexity of GTBP scales linearly in the number of group partitions and quadratically in the number of legacy PTs.
It is worth noting that the computational complexity can be further reduced in different ways, e.g., by gating preprocessing of targets and measurements \cite{Mallick2013}, censoring of messages \cite{BP-ETT1}, and preserving only the $M$-best group partitions. Specifically, gating can be used to keep the number of considered group partitions $|\underline{\mathcal{G}}_{k}|$ and the size of the iterative data association at a tractable level. Message censoring discards the messages related to new PTs that are unlikely to correspond to an actual target. Preserving the $M$-best group partitions at each time step reduces the computational complexity of calculating the messages that involve a summation over possible group partitions (e.g., (\ref{alpha-gs}), (\ref{beta}) and (\ref{app-pdf})).
\section{Particle-based GTBP Implementation}
For general nonlinear and non-Gaussian dynamic systems, it is not possible to obtain analytical expressions for the integrals in the aforementioned messages and beliefs. In this section, we therefore consider an approximate particle-based implementation of the proposed GTBP method. Assume that the belief $\widetilde{p}(\mathbf{x}_{k-1},\mathbf{r}_{k-1})$ at time $k-1$ is approximated by a set of weighted particles $\{\{(\mathbf{x}_{k-1}^{(i,l)}, w_{k-1}^{(i,l)})\}_{i=1}^{n_k}\}_{l=1}^{L}$, where $L$ is the number of particles. Note that the summation $\sum_{l=1}^{L}w_{k-1}^{(i,l)}$ provides an approximation of the marginal posterior pmf $p(r_{k-1}^{(i)}=1|\mathbf{z}_{1:k-1})$. Specific particle-based calculations of the above messages and beliefs are given as follows.
\subsection{Prediction}
For each possible group partition $\underline{\mathbf{g}}_{k}\in\underline{\mathcal{G}}_{k}$, an approximation $ \widetilde{\alpha}_k(\underline{\mathbf{g}}_{k})$ of the message $\alpha_k(\underline{\mathbf{g}}_{k})$ (\ref{pre-gs}) is calculated via the weighted particles,
\begin{align}\label{alpha_g}
\begin{split}
\widetilde{\alpha}_k(\underline{\mathbf{g}}_{k})&=\widetilde{C}\prod_{i\in\Lambda_{\underline{\mathbf{g}}_{k}}}\big(P_0(1-P_0)^{N(\underline{\mathbf{g}}_{k})-1}(1-\sum_{l=1}^{L}w_{k-1}^{(i,l)})+\sum_{l=1}^{L}w_{k-1}^{(i,l)}P_{i,\underline{g}_{k}^{(i)}}^{(l)}\prod_{j\in \{1,\ldots,N(\underline{\mathbf{g}}_{k})\}\backslash\underline{g}_{k}^{(i)}}(1-P_{i,j}^{(l)})\big),
\end{split}
\end{align}
where $\widetilde{C}$ is a normalization constant such that $\sum_{\underline{\mathbf{g}}_{k}\in\underline{\mathcal{G}}_{k}}\widetilde{\alpha}_k(\underline{\mathbf{g}}_{k})=1$, and $1-\sum_{l=1}^{L}w_{k-1}^{(i,l)}$ provides an approximation
of $\int\widetilde{p}(\mathbf{x}_{k-1}^{(i)},r_{k-1}^{(i)}=0)\mathrm{d}{\mathbf{x}_{k-1}^{(i)}}$. The quantities $P_{i,\underline{g}_{k}^{(i)}}^{(l)}$ are calculated according to (\ref{Pij}) by using the particles $\mathbf{x}_{k-1}^{(i,l)}$, $i\in\Lambda_{\underline{\mathbf{g}}_{k}}$. Notably, the computational cost of the summation in (\ref{alpha_g}) increases with the number of particles. As an alternative, one may approximately compute (\ref{alpha_g}) by using the state estimates instead, i.e.,
\begin{align}\label{alpha_g_appro}
\begin{split}
\widetilde{\alpha}_k(\underline{\mathbf{g}}_{k})&\approx \widetilde{C}\prod_{i\in\Lambda_{\underline{\mathbf{g}}_{k}}}\big(P_0(1-P_0)^{N(\underline{\mathbf{g}}_{k})-1}(1-\sum_{l=1}^{L}w_{k-1}^{(i,l)})+(\sum_{l=1}^{L}w_{k-1}^{(i,l)})\widehat{P}_{i,\underline{g}_{k}^{(i)}}\prod_{j\in \{1,\ldots,N(\underline{\mathbf{g}}_{k})\}\backslash\underline{g}_{k}^{(i)}}(1-\widehat{P}_{i,j})\big),
\end{split}
\end{align}
where $\widehat{P}_{i,j}$ are computed by using the state estimates. As described in Subsection \ref{sec-scalability}, we can also apply the $M$-best strategy to further reduce the computational complexity, i.e., preserving the $M$ most likely group partitions and renormalizing the preserved messages $\widetilde{\alpha}_k(\underline{\mathbf{g}}_{k})$. To simplify notation, we redefine $\underline{\mathcal{G}}_{k}:=\{1,\ldots,M\}$ as an index set of the preserved group partitions at time $k$, which can be easily implemented by associating each preserved $\underline{\mathbf{g}}_{k}$ with a unique index $g\in\underline{\mathcal{G}}_{k}$. By replacing $\widetilde{p}(\mathbf{x}_{k-1},\mathbf{r}_{k-1})$ in (\ref{alpha1}) with particles, the message $\alpha_k(\underline{\mathbf{x}}_{k}, \underline{\mathbf{r}}_{k}|g)$ under a given group partition $g\in\underline{\mathcal{G}}_{k}$ is approximated by
\begin{align*}
\alpha_k(\underline{\mathbf{x}}_{k}, \underline{\mathbf{r}}_{k}|g)\approx\prod_{i=1}^{n_k}\widetilde{\alpha}_k(\underline{\mathbf{x}}_{k}^{(i)}, \underline{r}_{k}^{(i)}|g),
\end{align*}
where $\widetilde{\alpha}_k(\underline{\mathbf{x}}_{k}^{(i)}, 1|g)$ is represented by a set of weighted particles $\{\underline{\mathbf{x}}_{k}^{(i,l,g)},\underline{w}_{k,*}^{(i,l,g)}\}_{l=1}^{L}$. More specifically, for the PTs belonging to the groups $j\neq0$ in the group partition $g$, the particles $\underline{\mathbf{x}}_{k}^{(i,l,g)}$ are drawn from the group transition density (\ref{pdf-model}), where the offsets and the virtual leaders are calculated by using corresponding particles at time $k-1$ according to (\ref{offset})-(\ref{vl}). Otherwise, the particles $\underline{\mathbf{x}}_{k}^{(i,l,g)}$ are drawn from the single-target state transition density in (\ref{sg}). Furthermore, the weight $\underline{w}_{k,*}^{(i,l,g)}$ is updated by
\begin{align}\label{w1}
\underline{w}_{k,*}^{(i,l,g)}=p_{\mathrm{s}}(\mathbf{x}_{k-1}^{(i,l)})w_{k-1}^{(i,l)}.
\end{align}
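The per-partition prediction step can be sketched as follows (hypothetical helper names: `transition_sample` stands for drawing from the group or single-target transition density, and `p_s` for the survival probability):

```python
def predict_particles(x_prev, w_prev, transition_sample, p_s):
    """Propagate the particles of one legacy PT under a fixed
    partition g: draw x_k^(i,l,g) from the transition density and
    update the weights via (w1): w_{k,*} = p_s(x_{k-1}) * w_{k-1}."""
    x_pred = [transition_sample(x) for x in x_prev]
    w_star = [p_s(x) * w for x, w in zip(x_prev, w_prev)]
    return x_pred, w_star
```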
\subsection{Measurement Evaluation, Update and Belief Calculation}
Next, an approximation $\widetilde{\beta}_k(a_{k}^{(i)})$ of the message $\beta_k(a_{k}^{(i)})$ in (\ref{beta}) can be calculated from the weighted particles $\{\{\underline{\mathbf{x}}_{k}^{(i,l,g)},\underline{w}_{k,*}^{(i,l,g)}\}_{l=1}^{L}\}_{g=1}^{M}$,
\begin{align}\label{app-beta}
\begin{split}
\widetilde{\beta}_k(a_{k}^{(i)})&= \sum_{g=1}^{M}\widetilde{\alpha}_k(g)\sum_{l=1}^{L}q(\underline{\mathbf{x}}_{k}^{(i,l,g)}, 1, a_{k}^{(i)}; \mathbf{z}_{k}) \underline{w}_{k,*}^{(i,l,g)}+\mathrm{I}(a_{k}^{(i)})\sum_{g=1}^{M}\widetilde{\alpha}_k(g)(1-\sum_{l=1}^{L}\underline{w}_{k,*}^{(i,l,g)}).
\end{split}
\end{align}
According to (\ref{def-v}) and (\ref{xi}), we have $\xi_k(b_{k}^{(m)})=1$ for $b_{k}^{(m)}\neq0$, and
\begin{align*}
\begin{split}
\xi_k(0)&= \int\frac{\mu_{\mathrm{b}}f_{\mathrm{b}}(\overline{\mathbf{x}}_{k}^{(m)})f(\mathbf{z}_{k}^{(m)}| \overline{\mathbf{x}}_{k}^{(m)})}{\mu_{\mathrm{c}}f_{\mathrm{c}}(\mathbf{z}_{k}^{(m)})}\mathrm{d}{\overline{\mathbf{x}}_{k}^{(m)}}+1,
\end{split}
\end{align*}
which can be approximated by the particles $\{\overline{\mathbf{x}}_{k}^{(m,l)}\}_{l=1}^{L}$ sampled from the prior distribution $f_{\mathrm{b}}(\overline{\mathbf{x}}_{k}^{(m)})$ with weights $\overline{w}_{k,*}^{(m,l)}$, i.e.,
\begin{align}\label{app-xi}
\begin{split}
\widetilde{\xi}_k(0)&= \frac{\mu_{\mathrm{b}}}{\mu_{\mathrm{c}}f_{\mathrm{c}}(\mathbf{z}_{k}^{(m)})}\sum_{l=1}^{L} \overline{w}_{k,*}^{(m,l)}f(\mathbf{z}_{k}^{(m)}| \overline{\mathbf{x}}_{k}^{(m,l)})+1.
\end{split}
\end{align}
The approximate messages $\widetilde{\beta}_k(a_{k}^{(i)})$ and $\widetilde{\xi}_k(b_{k}^{(m)})$ obtained above are substituted
for the corresponding messages in the iterative data association step (\ref{message-beta})-(\ref{ini-beta}). After the iterations terminate, the approximate messages $\widetilde{\kappa}_k(a_{k}^{(i)})$ and $\widetilde{\iota}_k(b_{k}^{(m)})$ of (\ref{kappa}) and (\ref{iota}) are derived. Then, the approximate messages $\widetilde{\gamma}_k^{(i)}(\underline{\mathbf{x}}_k^{(i)}, \underline{r}_k^{(i)})$ are obtained as
\begin{align}\label{app-gamma}
\begin{split}
\widetilde{\gamma}_k^{(i)}(\underline{\mathbf{x}}_k^{(i)}, 1)&=\sum_{a_{k}^{(i)}=0}^{m_{k}} q(\underline{\mathbf{x}}_{k}^{(i)}, 1, a_{k}^{(i)}; \mathbf{z}_{k}) \widetilde{\kappa}_k(a_{k}^{(i)}),
\end{split}
\end{align}
with $\widetilde{\gamma}_k^{(i)}(\underline{\mathbf{x}}_k^{(i)}, 0)=\widetilde{\kappa}_k(0)$. Then, an approximation of the belief $\widetilde{p}(\underline{\mathbf{x}}_{k}, \underline{\mathbf{r}}_{k})$ is obtained as
\begin{align*}
\widetilde{p}(\underline{\mathbf{x}}_{k}, \underline{\mathbf{r}}_{k})&\propto \sum_{g=1}^{M}\widetilde{\alpha}_k(g)\prod_{i=1}^{n_{k}}\widetilde{p}(\underline{\mathbf{x}}_{k}^{(i)}, \underline{r}_{k}^{(i)}|g),
\end{align*}
with
\begin{align*}
\widetilde{p}(\underline{\mathbf{x}}_{k}^{(i)}, \underline{r}_{k}^{(i)}|g):=\widetilde{\alpha}_k(\underline{\mathbf{x}}_{k}^{(i)}, \underline{r}_{k}^{(i)}|g)\widetilde{\gamma}_k^{(i)}(\underline{\mathbf{x}}_k^{(i)}, \underline{r}_k^{(i)}),
\end{align*}
where $\widetilde{p}(\underline{\mathbf{x}}_{k}^{(i)}, 1|g)$ can be represented by a set of particles $\{\underline{\mathbf{x}}_{k}^{(i,l,g)}\}_{l=1}^{L}$ with the following nonnormalized weights
\begin{align}\label{wa1}
\begin{split}
\underline{w}_{k,*}^{A(i,l,g)}&=\underline{w}_{k,*}^{(i,l,g)}
\widetilde{\gamma}_k^{(i)}(\underline{\mathbf{x}}_{k}^{(i,l,g)}, \underline{r}_k^{(i)})
\\
&=\underline{w}_{k,*}^{(i,l,g)}\sum_{a_{k}^{(i)}=0}^{m_{k}} q(\underline{\mathbf{x}}_{k}^{(i,l,g)}, 1, a_{k}^{(i)}; \mathbf{z}_{k}) \widetilde{\kappa}_k(a_{k}^{(i)}).
\end{split}
\end{align}
Similarly, the nonnormalized
weights corresponding to $\widetilde{p}(\underline{\mathbf{x}}_{k}^{(i)}, 0|g)$ are obtained as
\begin{align}\label{wb1}
\underline{w}_{k,*}^{B(i,g)}=(1-\sum_{l=1}^{L}\underline{w}_{k,*}^{(i,l,g)}) \widetilde{\kappa}_k(0),
\end{align}
where $1-\sum_{l=1}^{L}\underline{w}_{k,*}^{(i,l,g)}$ provides an approximation of $\int\alpha_k(\underline{\mathbf{x}}_{k}^{(i)}, 0|g)\mathrm{d}{\underline{\mathbf{x}}_{k}^{(i)}}$. Hence, $\widetilde{p}(\underline{\mathbf{x}}_{k}^{(i)}, 1|g)$ can be represented by a set of weighted particles $\{\underline{\mathbf{x}}_{k}^{(i,l,g)},\underline{w}_{k}^{(i,l,g)}\}_{l=1}^{L}$, where
\begin{align}\label{w1-update}
\underline{w}_{k}^{(i,l,g)}=\frac{\underline{w}_{k,*}^{A(i,l,g)}}{\sum_{l=1}^{L}\underline{w}_{k,*}^{A(i,l,g)}+\underline{w}_{k,*}^{B(i,g)}}.
\end{align}
Thus, the particle-based approximations of
the posterior pmfs $p(\underline{r}_{k}^{(i)}=1|\mathbf{z}_{1:k})$ are obtained as
\begin{align}\label{ex-legacy}
\widetilde{p}( \underline{r}_{k}^{(i)})\approx\sum_{g=1}^{M}\widetilde{\alpha}_k(g)\sum_{l=1}^{L}\underline{w}_{k}^{(i,l,g)}.
\end{align}
According to (\ref{mse}), the state estimates for the legacy PTs $\underline{\mathbf{x}}_{k}^{(i)}$ are obtained as
\begin{align}\label{est-legacy}
\widehat{\underline{\mathbf{x}}}_{k}^{(i)}=\sum_{g=1}^{M}\sum_{l=1}^{L}\frac{\widetilde{\alpha}_k(g)\underline{w}_{k}^{(i,l,g)}\underline{\mathbf{x}}_{k}^{(i,l,g)}}{\sum_{g=1}^{M}\widetilde{\alpha}_k(g)\sum_{l=1}^{L}\underline{w}_{k}^{(i,l,g)}}.
\end{align}
Furthermore, the beliefs $\widetilde{p}(g)$ approximating the marginal posterior pmfs $p(g|\mathbf{z}_{1:k})$ in (\ref{pdf-g}) are obtained as
\begin{align*}
\widetilde{p}(g)=\frac{\widetilde{\alpha}_k(g)\prod\limits_{i=1}^{n_{k}}\sum\limits_{\underline{r}_{k}^{(i)}\in\{0,1\}}\int\widetilde{p}(\underline{\mathbf{x}}_{k}^{(i)},\underline{r}_{k}^{(i)}|g)\mathrm{d}{\underline{\mathbf{x}}_{k}^{(i)}}}{\sum\limits_{g=1}^{M}\widetilde{\alpha}_k(g)\prod\limits_{i=1}^{n_{k}}\sum\limits_{\underline{r}_{k}^{(i)}\in\{0,1\}}\int\widetilde{p}(\underline{\mathbf{x}}_{k}^{(i)},\underline{r}_{k}^{(i)}|g)\mathrm{d}{\underline{\mathbf{x}}_{k}^{(i)}}},
\end{align*}
where the particle-based approximation is given by
\begin{align}\label{est-g}
\widetilde{p}(g)=\frac{\widetilde{\alpha}_k(g)\prod\limits_{i=1}^{n_{k}}(\sum_{l=1}^{L}\underline{w}_{k,*}^{A(i,l,g)}+\underline{w}_{k,*}^{B(i,g)})}{\sum\limits_{g=1}^{M}\widetilde{\alpha}_k(g)\prod\limits_{i=1}^{n_{k}}(\sum_{l=1}^{L}\underline{w}_{k,*}^{A(i,l,g)}+\underline{w}_{k,*}^{B(i,g)})}.
\end{align}
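Numerically, (\ref{est-g}) multiplies, per partition, the per-target normalizers and the prior weight $\widetilde{\alpha}_k(g)$. A sketch with assumed array shapes (the shapes are our own convention):

```python
import numpy as np

def partition_beliefs(alpha_g, wA, wB):
    """alpha_g: (M,) partition probabilities alpha~_k(g);
    wA: (n, L, M) nonnormalized weights from (wa1);
    wB: (n, M) weights from (wb1).
    Returns the normalized partition beliefs p~(g) of (est-g)."""
    per_target = wA.sum(axis=1) + wB            # (n, M): sum_l wA + wB
    unnorm = alpha_g * per_target.prod(axis=0)  # numerator of (est-g)
    return unnorm / unnorm.sum()
```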
Note that for each PT $i$ at time $k-1$, $L$ particles $\{\mathbf{x}_{k-1}^{(i,l)}\}_{l=1}^{L}$ are propagated to $L\times M$ particles $\{\{\underline{\mathbf{x}}_{k}^{(i,l,g)}\}_{l=1}^{L}\}_{g=1}^{M}$ based on the $M$-best group partitions. To reduce the $L\times M$ particles to $L$ particles, a resampling step is performed according to the beliefs $\widetilde{p}(g)$.
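The reduction from $L\times M$ to $L$ particles can be realized by a two-stage draw, first over partitions and then over particle indices (one of several valid resampling schemes; the names and array shapes are our own):

```python
import numpy as np

def resample_partitions(particles, weights, p_g, L, rng=None):
    """particles: (Lp, M, d) array of x_k^(i,l,g);
    weights: (Lp, M) per-partition normalized particle weights;
    p_g: (M,) partition beliefs p~(g).
    Draws L particles: g ~ p_g, then l ~ weights[:, g]."""
    rng = np.random.default_rng(0) if rng is None else rng
    out = []
    for _ in range(L):
        g = rng.choice(len(p_g), p=p_g)
        l = rng.choice(weights.shape[0], p=weights[:, g])
        out.append(particles[l, g])
    return np.array(out)
```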
For the new PTs, a particle-based approximation $\widetilde{\varsigma}_k^{(m)}(\overline{\mathbf{x}}_k^{(m)}, 1)$ of the messages $\varsigma_k^{(m)}(\overline{\mathbf{x}}_k^{(m)}, 1)$ in (\ref{varsigma}) is given by $\{\overline{\mathbf{x}}_{k}^{(m,l)}\}_{l=1}^{L}$ with nonnormalized weights
\begin{align}\label{wa2}
\overline{w}_{k,*}^{A(m,l)}=\overline{w}_{k,*}^{(m,l)}\times\frac{\mu_{\mathrm{b}}f(\mathbf{z}_{k}^{(m)}| \overline{\mathbf{x}}_{k}^{(m,l)})\widetilde{\iota}_k(0)}{\mu_{\mathrm{c}}f_{\mathrm{c}}(\mathbf{z}_{k}^{(m)})},
\end{align}
and the nonnormalized weights corresponding to $\widetilde{\varsigma}_k^{(m)}(\overline{\mathbf{x}}_k^{(m)}, 0)$ are given by
\begin{align}\label{wb2}
\overline{w}_{k,*}^{B(m,l)}=\sum_{b_{k}^{(m)}=0}^{n_k}\widetilde{\iota}_k(b_k^{(m)}).
\end{align}
Thus, the beliefs $\widetilde{p}(\overline{\mathbf{x}}_{k}^{(m)}, 1)$ in (\ref{new-pt}) are approximated by the particles $\{(\overline{\mathbf{x}}_{k}^{(m,l)}, \overline{w}_{k}^{(m,l)})\}_{l=1}^{L}$, where
\begin{align}\label{w2-update}
\overline{w}_{k}^{(m,l)}&=\frac{\overline{w}_{k,*}^{A(m,l)}}{\sum_{l=1}^{L}\overline{w}_{k,*}^{A(m,l)}+\overline{w}_{k,*}^{B(m,l)}}.
\end{align}
Then, the marginal posterior pmfs $p(\overline{r}_{k}^{(m)}=1|\mathbf{z}_{1:k})$ are approximated by
\begin{align}\label{ex-new}
\widetilde{p}( \overline{r}_{k}^{(m)}=1)\approx\sum_{l=1}^{L}\overline{w}_{k}^{(m,l)},
\end{align}
and the state estimate of the new PT $\overline{\mathbf{x}}_{k}^{(m)}$ is obtained as
\begin{align}\label{est-new}
\widehat{\overline{\mathbf{x}}}_{k}^{(m)}=\sum_{l=1}^{L}\frac{\overline{w}_{k}^{(m,l)}\overline{\mathbf{x}}_{k}^{(m,l)}}{\sum_{l=1}^{L}\overline{w}_{k}^{(m,l)}}.
\end{align}
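Equations (\ref{w2-update})-(\ref{est-new}) amount to normalizing the detection weights against the nonexistence mass and forming a weighted mean; a minimal sketch (function and array names are assumptions):

```python
import numpy as np

def new_pt_update(wA, wB, x):
    """Normalize the nonnormalized weights wA (L,) against the
    'nonexistent' mass wB (scalar), then form the existence
    probability and the weighted state estimate.
    x: (L, D) particle states of the new PT.
    """
    w = wA / (wA.sum() + wB)                   # normalized weights
    p_exist = w.sum()                          # existence probability
    x_hat = (w[:, None] * x).sum(0) / w.sum()  # weighted state estimate
    return w, p_exist, x_hat
```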
A pseudo-code description of the particle-based implementation of $M$-best GTBP, which preserves the $M$-best group partitions at each time step, is summarized in Algorithm \ref{A1}. For notational convenience, we ignore the changes in the indices of the legacy PT $i$ and the new PT $m$ before and after pruning in Algorithm \ref{A1}.
\begin{algorithm}[ht]
\caption{Particle-based Implementation of the $M$-best GTBP Algorithm}
\label{A1}
\begin{algorithmic}[0]
\renewcommand{\algorithmicrequire}{\textbf{Initialize:}}
\Require \\
Set $\mathbf{y}_0$ and $\mathbf{g}_0$ as empty vectors;
\renewcommand{\algorithmicrequire}{\textbf{Input at time $k$:}}
\Require \\
Weighted particles $\{\{(\mathbf{x}_{k-1}^{(i,l)}, w_{k-1}^{(i,l)})\}_{i=1}^{n_k}\}_{l=1}^{L}$, and measurements $\mathbf{z}_{k}$;
\renewcommand{\algorithmicrequire}{\textbf{Output at time $k$:}}
\Require \\
Legacy PTs: state estimates $\widehat{\underline{\mathbf{x}}}_{k}^{(i)}$, beliefs $\widetilde{p}( \underline{r}_{k}^{(i)})$ and weighted particles $\{\{(\underline{\mathbf{x}}_{k}^{(i,l)}, \underline{w}_{k}^{(i,l)})\}_{i=1}^{n_{k}}\}_{l=1}^{L}$;\\
New PTs: state estimates $\widehat{\overline{\mathbf{x}}}_{k}^{(m)}$, beliefs $\widetilde{p}( \overline{r}_{k}^{(m)})$ and weighted particles $\{\{(\overline{\mathbf{x}}_{k}^{(m,l)},\overline{w}_{k}^{(m,l)})\}_{m=1}^{m_k}\}_{l=1}^{L}$;\\
Group structure: the preserved $M$-best group structures and corresponding probabilities $\widetilde{p}(g)$;
\renewcommand{\algorithmicrequire}{\textbf{Run:}}
\Require
\State {\bf Step 1:} compute $\widetilde{\alpha}_k(g)$, $g\in\underline{\mathcal{G}}_k$ via (\ref{alpha_g_appro}), preserve the $M$ most likely group partitions and renormalize $\widetilde{\alpha}_k(g)$;
\State {\bf Step 2:} for each group partition $g$, draw the particles $\underline{\mathbf{x}}_{k}^{(i,l,g)}, l=1,\ldots,L$ from the group transition density $(\ref{pdf-model})$ or the single-target state transition density in (\ref{sg}), and compute corresponding weights $\underline{w}_{k,*}^{(i,l,g)}$ via (\ref{w1});
\State {\bf Step 3:} compute $\widetilde{\beta}_k(a_{k}^{(i)})$, $a_{k}^{(i)}=0,\ldots,m_k$ via (\ref{app-beta}), draw particles with equal weights from the prior pdf $f_{\mathrm{b}}(\overline{\mathbf{x}}_{k}^{(m)})$, i.e., $\{(\overline{\mathbf{x}}_{k}^{(m,l)}, \overline{w}_{k,*}^{(m,l)}=\frac{1}{L})\}_{l=1}^{L}$, and compute $\widetilde{\xi}_k(b_{k}^{(m)}=0)$ via (\ref{app-xi});
\State {\bf Step 4:} run the iterative data association (\ref{message-beta})-(\ref{ini-beta}), and compute $\widetilde{\kappa}_k(a_{k}^{(i)})$ and $\widetilde{\iota}_k(b_{k}^{(m)})$ via (\ref{kappa})-(\ref{iota});
\State {\bf Step 5:} for the legacy PTs $i\in\{1,\ldots,n_k\}$, calculate the weights $\underline{w}_{k}^{(i,l,g)}$ via (\ref{wa1})-(\ref{w1-update}), and then obtain $\widetilde{p}( \underline{r}_{k}^{(i)})$, $\widehat{\underline{\mathbf{x}}}_{k}^{(i)}$, $\widetilde{p}(g)$ via (\ref{ex-legacy})-(\ref{est-g}), respectively;
\State {\bf Step 6:} for the new PTs $m\in\{1,\ldots,m_k\}$, calculate the weights $\overline{w}_{k}^{(m,l)}$ via (\ref{wa2})-(\ref{w2-update}), and then obtain $\widetilde{p}( \overline{r}_{k}^{(m)})$, $\widehat{\overline{\mathbf{x}}}_{k}^{(m)}$ via (\ref{ex-new}) and (\ref{est-new}), respectively;
\State {\bf Step 7:} prune the legacy PTs and new PTs with existence probabilities less than the threshold $P_{\mathrm{pr}}$;
\State {\bf Step 8:} according to the probabilities $\widetilde{p}(g)$, a resampling step for each preserved legacy PT is performed to reduce the $L\times M$ particles $\{\{\underline{\mathbf{x}}_{k}^{(i,l,g)}\}_{g=1}^{M}\}_{l=1}^{L}$ to $L$ particles $\{\underline{\mathbf{x}}_{k}^{(i,l)}\}_{l=1}^{L}$ with equal weights $ \underline{w}_{k}^{(i,l)}=\frac{\widetilde{p}(\underline{r}_{k}^{(i)})}{L}$;
\\
\Return $\{\{(\underline{\mathbf{x}}_{k}^{(i,l)}, \underline{w}_{k}^{(i,l)})\}_{l=1}^{L},\ \widetilde{p}( \underline{r}_{k}^{(i)}),\ \widehat{\underline{\mathbf{x}}}_{k}^{(i)}\}_{i=1}^{n_{k}}$, $\widetilde{p}(g)$, $\{\{(\overline{\mathbf{x}}_{k}^{(m,l)},\overline{w}_{k}^{(m,l)})\}_{l=1}^{L},\ \widetilde{p}( \overline{r}_{k}^{(m)}),\ \widehat{\overline{\mathbf{x}}}_{k}^{(m)}\}_{m=1}^{m_k}$;
\end{algorithmic}
\end{algorithm}
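Steps 7 and 8 of Algorithm \ref{A1} end each recursion with a threshold prune and an equal-weight reset; a minimal sketch of this bookkeeping, where the list-of-dicts representation of the PTs is our own assumption:

```python
def prune_pts(pts, P_pr):
    """Step 7: drop PTs whose existence probability 'r' is below P_pr."""
    return [pt for pt in pts if pt['r'] >= P_pr]

def reset_weights(p_r, L):
    """Step 8 (weights only): after resampling, every surviving legacy-PT
    particle carries the equal weight p~(r=1) / L."""
    return [p_r / L] * L
```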
\section{Simulation}
In this section, we simulate two typical GTT scenarios to demonstrate
the performance of the proposed GTBP method. The simulation setting and performance comparison results are presented as follows.
\subsection{Simulation Setting}
In scenario 1, we consider tracking an unknown number of group targets, involving group splitting and merging. A total of 100 time steps with sampling interval $\Delta T = 2\,s$ is simulated, and four targets appear in the scene. The kinematics of each individual target are described by the state vector $\mathbf{x}_k^{(i)}=\left[x_k^{(i)},\dot{x}_k^{(i)},y_k^{(i)},\dot{y}_k^{(i)}\right]^{\mathrm{T}}$ of planar position and velocity. Here, we use one constant velocity (CV) model and two constant turn (CT) models \cite{Mallick2013} without process noise to generate the true trajectories of the four targets, where the state transition matrices of the corresponding models are
\begin{align*}
F_{\mathrm{CV}}:=\left[\begin{array}{cccc}
1 & \Delta T & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & \Delta T \\
0 & 0 & 0 & 1
\end{array}\right],\quad F_{\mathrm{CT}}^{(j)}:=\left[\begin{array}{cccc}
1 & \frac{\sin \omega^{(j)} \Delta T}{\omega^{(j)}} & 0 & -\frac{1-\cos \omega^{(j)} \Delta T}{\omega^{(j)}} \\
0 & \cos \omega^{(j)} \Delta T & 0 & -\sin \omega^{(j)} \Delta T \\
0 & \frac{1-\cos \omega^{(j)} \Delta T}{\omega^{(j)}} & 1 & \frac{\sin \omega^{(j)} \Delta T}{\omega^{(j)}} \\
0 & \sin \omega^{(j)} \Delta T & 0 & \cos \omega^{(j)} \Delta T
\end{array}\right],
\end{align*}
respectively, where $j\in\{1,2\}$, the turn rates $\omega^{(1)}=2.25^{\circ}/s$ and $\omega^{(2)}=-2.25^{\circ}/s$.
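For reference, the two transition matrices can be constructed directly from $\Delta T$ and $\omega$; a small NumPy sketch ($\omega$ in rad/s, so the turn rates above must first be converted from degrees per second):

```python
import numpy as np

def F_cv(dT):
    """Constant-velocity transition matrix for state [x, vx, y, vy]."""
    return np.array([[1, dT, 0, 0],
                     [0, 1,  0, 0],
                     [0, 0,  1, dT],
                     [0, 0,  0, 1.0]])

def F_ct(dT, w):
    """Constant-turn transition matrix with turn rate w (rad/s)."""
    s, c = np.sin(w * dT), np.cos(w * dT)
    return np.array([[1, s / w,       0, -(1 - c) / w],
                     [0, c,           0, -s],
                     [0, (1 - c) / w, 1, s / w],
                     [0, s,           0, c]])
```

As a sanity check, $F_{\mathrm{CT}}^{(j)}$ approaches $F_{\mathrm{CV}}$ as $\omega^{(j)}\to 0$.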
\begin{figure}[H]
\centering
\includegraphics[width=0.6\linewidth]{fig3.eps}
\caption{The ground truths simulated in scenario 1. Starting and stopping positions are marked with $\circ$ and $\square$, respectively.}
\label{figure3}
\end{figure}
\begin{table}[H]
\centering
\caption{The lifespans (time steps) and initial states of the targets.}
\label{table1}
\begin{tabular}{ccc}
\toprule
Target indices & Lifespan & Initial states
\\
\midrule
1 &$\left[1,80\right]$ & $\left[800, 10, 3255, -10\right]^{\mathrm{T}}$
\\
2 &$\left[1,80\right]$ &$\left[740, 10\sqrt{2}, 3000, 0\right]^{\mathrm{T}}$
\\
3 &$\left[1,80\right]$ &$\left[800, 10, 2745, 10\right]^{\mathrm{T}}$
\\
4 &$\left[21,100\right]$ &$\left[1010, 8, 2500, -8\right]^{\mathrm{T}}$
\\
\bottomrule
\end{tabular}
\end{table}
The simulated ground truths, lifespans and initial state vectors are shown in Fig. \ref{figure3} and Table \ref{table1}, respectively. As shown in Fig. \ref{figure3}, targets 1, 2 and 3 first move along their initial velocity directions according to the CV model, and then execute coordinated turns according to the CT models, which brings the three targets closer until they merge into one group. The group target then moves in a triangular formation according to the CV model, accompanied by the birth of target 4. After the group splits, targets 1, 2 and 3 gradually move away from each other. Finally, the simulated scenario ends with the termination of target 4.
The sensor is located at the origin, and the ranges of radius and azimuth are 0-5000\,$m$ and 0-$2\pi$\,$rad$, respectively. The measurement likelihood is given by $f(\mathbf{z}_{k}^{(m)}|\underline{\mathbf{x}}_{k}^{(i)})=\mathcal{N}(\mathbf{z}_{k}^{(m)};H\mathbf{x}_{k}^{(i)},\sigma_{\mathbf{w}}^2I_2)$, where
\begin{align*}
H:=\left[\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0
\end{array}\right],\quad I_2:=\left[\begin{array}{cc}
1 & 0 \\
0 & 1
\end{array}\right],
\end{align*}
and the standard deviation (Std) of the measurement noise is $\sigma_{\mathbf{w}}=10\,m$. The clutter pdf $f_{\mathrm{c}}(\mathbf{z}_{k}^{(m)})$ is assumed uniform over the surveillance region, and the Poisson mean number of clutter measurements is $\mu_{\mathrm{c}}=10$ unless noted otherwise.
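A measurement simulator for this setting can be sketched as follows; the function and variable names are our own, the detection probability is omitted for brevity, and clutter is drawn uniformly (in Cartesian coordinates) from the disc of radius 5000 m around the sensor:

```python
import numpy as np

H = np.array([[1.0, 0, 0, 0],
              [0, 0, 1.0, 0]])  # position-only observation matrix

def simulate_measurements(x, sigma_w=10.0, mu_c=10, r_max=5000.0, rng=None):
    """Target-originated measurements z = Hx + noise, plus Poisson clutter.

    x: (n, 4) target states [x, vx, y, vy].
    """
    rng = rng or np.random.default_rng(0)
    z_targets = x @ H.T + sigma_w * rng.standard_normal((x.shape[0], 2))
    n_c = rng.poisson(mu_c)
    r = r_max * np.sqrt(rng.random(n_c))   # uniform over the disc
    theta = 2 * np.pi * rng.random(n_c)
    clutter = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)
    return np.vstack([z_targets, clutter])
```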
\subsection{Simulated Methods and Performance Metric}
We compare the recently developed BP method \cite{BP-MTT2} with the proposed GTBP method. Moreover, the performance of GTBP when preserving different numbers of group partitions is also tested. For notational convenience, we abbreviate the method implemented by Algorithm \ref{A1} with $M=2$ as GTBP-2best. To evaluate the tracking algorithms exclusively, we employ the same birth pdf $f_{\mathrm{b}}(\overline{\mathbf{x}}_{k}^{(m)})$ for all tested methods, constructed from the measurements at the previous time step \cite{birth-pdf}. Unless noted otherwise, we set the Poisson mean number of new PTs, the maximum possible number of PTs, the number of particles (for representing each legacy or new PT state), the detection probability and the survival probability to $\mu_{\mathrm{b}}=10^{-5}\times\mu_{\mathrm{c}}$, $N_{\text{max}} = 8$, $L = 3000$, $p_{\mathrm{d}}(\underline{\mathbf{x}}_{k}^{(i)})=0.995$ and $p_{\mathrm{s}}(\underline{\mathbf{x}}_{k}^{(i)})=0.9999$, respectively. The grouping constant $P_{0}$ in (\ref{Pij}) for incorporating nonexistent PTs into groups is set to $0.001$. The iterative data association is stopped once the Frobenius norm of the difference between the beliefs of two consecutive iterations falls below $10^{-5}$, or after a maximum of 100 iterations. All tested methods perform a message censoring step \cite{BP-ETT1} with a threshold of 0.9. The thresholds for target declaration and pruning are $P_{\mathrm{e}}=0.8$ and $P_{\mathrm{pr}}=10^{-5}\times\mu_{\mathrm{c}}$, respectively. Furthermore, we use the CV model with process noise covariance $Q_{k}=\sigma_{\mathbf{v}}^2GG^{\mathrm{T}}$ for tracking, where $\sigma_{\mathbf{v}}=10\,m/s^2$ is the Std of the process noise and
\begin{align*}
G:=\left[\begin{array}{cccc}
\frac{\Delta T^{2}}{2} & \Delta T &0 &0\\
0 & 0 & \frac{\Delta T^{2}}{2} &\Delta T
\end{array}\right]^{\mathrm{T}}.
\end{align*}
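The process-noise covariance $Q_{k}=\sigma_{\mathbf{v}}^2GG^{\mathrm{T}}$ used for tracking can be formed directly; a short NumPy sketch:

```python
import numpy as np

def process_noise_cov(dT, sigma_v):
    """Q = sigma_v^2 * G G^T for the CV model, with G as defined above."""
    G = np.array([[dT**2 / 2, dT, 0, 0],
                  [0, 0, dT**2 / 2, dT]]).T   # shape (4, 2)
    return sigma_v**2 * (G @ G.T)
```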
To evaluate the tracking performance, we use the OSPA$^{(2)}$ distance $\check{d}_{p, q}^{(c)}(X, Y ; w)$ as the performance metric, which is able to capture different kinds of tracking errors such as track switching and fragmentation \cite{ospa2}. Unless noted otherwise, the cutoff parameter, the order parameters and the window length are set to $c=50$, $p=1$, $q=2$ and $w=10$ (with uniform weights), respectively.
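While OSPA$^{(2)}$ operates on windows of whole tracks, its per-time-step building block is the classical OSPA distance between two point sets. The following brute-force sketch (suitable only for small sets, and not the full OSPA$^{(2)}$ metric) illustrates the roles of the cutoff $c$ and order $p$:

```python
import itertools
import numpy as np

def ospa(X, Y, c=50.0, p=1):
    """OSPA distance between point sets X (m, d) and Y (n, d),
    cutoff c and order p, via brute-force assignment (small sets only)."""
    m, n = len(X), len(Y)
    if m == 0 and n == 0:
        return 0.0
    if m > n:                        # ensure m <= n
        X, Y, m, n = Y, X, n, m
    best = min(
        sum(min(np.linalg.norm(X[i] - Y[perm[i]]), c) ** p for i in range(m))
        for perm in itertools.permutations(range(n), m)
    ) if m > 0 else 0.0
    return ((best + c ** p * (n - m)) / n) ** (1 / p)
```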
\subsection{Simulation Results of Scenario 1}
Fig. \ref{figure4} plots the average total OSPA$^{(2)}$ of BP, GTBP-2best and GTBP-4best over 100 Monte Carlo runs versus the time step. Specific results and reasons are given as follows:
\begin{itemize}
\item Before time step 10, the three methods perform similarly, since they use the same track initialization settings and targets 1, 2 and 3 still move as ungrouped targets.
\item Between time steps 10 and 80 (i.e., the GTT stage containing the group merging and splitting), GTBP-2best and GTBP-4best outperform BP because they estimate the uncertainty of the group structure. In addition, GTBP-4best outperforms GTBP-2best as a result of preserving more group partitions at each time step.
\item After time step 80, the average total OSPA$^{(2)}$ of the three methods gradually converges, since only the ungrouped target 4 remains in the scene and the proposed GTBP method thus degenerates to the BP method.
\end{itemize}
Furthermore, the two spikes around time steps $k=20$ and $k=80$ are caused by the windowing effects together with the track initiation and termination delays.
\begin{figure}[ht]
\centering
\includegraphics[width=0.6\linewidth]{fig4.eps}
\caption{The average total OSPA$^{(2)}$.}
\label{figure4}
\end{figure}
In more detail, we plot the average OSPA$^{(2)}$ for the group target (comprising targets 1, 2 and 3) and the ungrouped target 4 in Figs. \ref{figure5}-\ref{figure6}, respectively. Fig. \ref{figure5} shows that GTBP-2best and GTBP-4best outperform BP when tracking the group target, for the same reasons as for the results in Fig. \ref{figure4}. Furthermore, Fig. \ref{figure6} shows that the three methods have almost the same performance when tracking the ungrouped target 4, which validates that GTBP degenerates to the classical BP method in the case of tracking ungrouped targets.
\begin{figure}[H]
\centering
\includegraphics[width=0.6\linewidth]{fig5.eps}
\caption{The average OSPA$^{(2)}$ of the group target, including the targets 1, 2, 3.}
\label{figure5}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.6\linewidth]{fig6.eps}
\caption{The average OSPA$^{(2)}$ of the target 4.}
\label{figure6}
\end{figure}
\subsection{Simulation Results of Scenario 2}
To further evaluate the performance of the proposed GTBP method, we simulate a coordinated GTT scenario and compare the average total OSPA$^{(2)}$ and the average runtimes in different cases. In this scenario, we perform a fixed number of 20 BP iterations and use $L = 1000$ particles for all tested methods. Unless noted otherwise, the group consists of five targets generated by the CV model and the CT models with an initial speed of 10\,m$/$s, where the ground truths are shown in Fig. \ref{figure7}. The initial position of target 1 is fixed at 800 m and 3000 m along the $x$-axis and $y$-axis, respectively, and adjacent targets within the group are separated by 50 m along the $y$-axis.
\begin{figure}[hbtp]
\centering
\includegraphics[width=0.6\linewidth]{fig7.eps}
\caption{The ground truths simulated in scenario 2. Starting and stopping positions are marked with $\circ$ and $\square$, respectively.}
\label{figure7}
\end{figure}
Figs. \ref{figure8}a-\ref{figure8}c plot the average total OSPA$^{(2)}$ (using window length $w=20$ with uniform weights) over 100 time steps and 100 Monte Carlo runs versus the Std of the measurement noise $\sigma_{\mathbf{w}}$, the Poisson mean number of clutters $\mu_{\mathrm{c}}$ and the number of preserved group partitions $M$, respectively. More specifically,
\begin{itemize}
\item Fig. \ref{figure8}a shows the average total OSPA$^{(2)}$ of BP and GTBP versus $\sigma_{\mathbf{w}}$ for $\mu_{\mathrm{c}}=10$ and $M=2$. The errors increase with $\sigma_{\mathbf{w}}$, since the target spacing is fixed and the group becomes increasingly indistinguishable as $\sigma_{\mathbf{w}}$ grows. Furthermore, GTBP outperforms BP because it jointly infers the group structure uncertainty.
\item Fig. \ref{figure8}b shows the average total OSPA$^{(2)}$ of BP and GTBP versus $\mu_{\mathrm{c}}$ for $\sigma_{\mathbf{w}}=10$ and $M=2$, which increases slightly as $\mu_{\mathrm{c}}$ grows, likely because the number of false tracks initialized by clutter increases. Furthermore, GTBP obtains better tracking performance than BP for the same reasons as in Fig. \ref{figure8}a.
\item Fig. \ref{figure8}c shows the average total OSPA$^{(2)}$ of GTBP versus $M$ for $\sigma_{\mathbf{w}}=10$ and $\mu_{\mathrm{c}}=10$. The average total OSPA$^{(2)}$ decreases as the number of preserved group partitions $M$ increases, since preserving more group partitions results in a more accurate approximation of the joint posterior pdf and thus leads to further performance improvement.
\end{itemize}
\begin{figure}[H]
\centering
\includegraphics[width=0.6\linewidth]{fig8.eps}
\caption{The average total OSPA$^{(2)}$ over 100 time steps and 100 Monte Carlo runs. (a): versus $\sigma_{\mathbf{w}}$ for 5 actual targets, $\mu_{\mathrm{c}}=10$ and $M=2$; (b): versus $\mu_{\mathrm{c}}$ for 5 actual targets, $\sigma_{\mathbf{w}}=10$ and $M=2$; (c): versus $M$ for 5 actual targets, $\sigma_{\mathbf{w}}=10$ and $\mu_{\mathrm{c}}=10$.}
\label{figure8}
\end{figure}
Furthermore, to demonstrate the excellent scalability and low complexity of the proposed GTBP method, we investigate how the runtime of GTBP scales with the number of preserved group partitions $M$, the Poisson mean number of clutter measurements $\mu_{\mathrm{c}}$ and the number of actual targets within a group. The simulation is run on a laptop with an Intel(R) Core(TM) i5-10300H 2.50 GHz processor and 8 GB of RAM. Figs. \ref{figure9}a-\ref{figure9}c plot the average runtimes over 100 time steps and 100 Monte Carlo runs versus $M$, $\mu_{\mathrm{c}}$ and the number of actual targets within a group, respectively. The results indicate that the average runtime scales linearly with the number of preserved group partitions, linearly with the number of sensor measurements (which grows linearly with $\mu_{\mathrm{c}}$), and quadratically with the number of actual targets. Thus, GTBP scales well for GTT. Notably, the average runtime of GTBP is less than 0.1\,s for 5 actual targets, $M=2$ and $\mu_{\mathrm{c}}=50$, and is nearly 1\,s for 50 actual targets, $M=2$ and $\mu_{\mathrm{c}}=10$, which confirms that GTBP has low complexity and is applicable to tracking scenarios with large numbers of clutter measurements and group targets.
\begin{figure}[H]
\centering
\includegraphics[width=0.6\linewidth]{fig9.eps}
\caption{The average runtimes per time step of GTBP. (a): versus $M$ for 5 actual targets and $\mu_{\mathrm{c}}=10$; (b): versus $\mu_{\mathrm{c}}$ for 5 actual targets and $M=2$; (c): versus the number of actual targets for $M=2$, $\mu_{\mathrm{c}}=10$ and the maximum possible number of PTs set to $N_{\text{max}}=20, 30, \cdots, 60$.}
\label{figure9}
\end{figure}
\section{Conclusion}
In this paper, we focused on the GTT problem, where the targets within groups are closely spaced and the groups may split and merge. We proposed a scalable GTBP method within the BP framework, which jointly infers the target existence variables, group structure, data association and target states. By considering the group structure uncertainty, GTBP can capture group structure changes such as group splitting and merging. Moreover, the introduction of group structure variables enables seamless and simultaneous tracking of multiple group targets and ungrouped targets. Specifically, the evolution of the targets is modeled as the co-action of the group or single-target motions under different group structures. In particular, GTBP has excellent scalability and low complexity, scaling only linearly with the numbers of preserved group partitions and sensor measurements, and quadratically with the number of targets. Numerical results verify that GTBP achieves better tracking performance and scales well in GTT. Future research directions may include the generalization of GTBP to multisensor fusion.
\bibliographystyle{ieeetr}
X-Moto is a free, open-source 2D motocross computer game developed for Linux, FreeBSD, macOS and Microsoft Windows. The game has also been ported to AmigaOS 4. Riding physics play an important role in X-Moto. The gameplay is modeled on that of Elasto Mania, but differs from it in several respects. The game is released under the GNU General Public License (GPL).
Owing to the popularity of the game and the Inksmoto level editor, which is likewise available for all platforms, volunteers have created thousands of additional levels and made them available on the game's website.
There is a public X-Moto server, through which the game can be played with others over the Internet.
Gameplay
The goal of the game is to ride a motocross bike through a course of obstacles, loops, rock ledges and the like, collecting strawberries along the way and reaching the flower at the end of the level. To do so, the player meters throttle and brake and lifts or pushes down the machine via the handlebars. When chasing records, the record run can additionally be displayed as a ghost alongside one's own ride.
Development
The game uses the scripting language Lua. The physics are computed with the 2D physics engine Chipmunk. The game is furthermore based on the Open Dynamics Engine, on SDL and on OpenGL.
The first official version, 0.1.0 (alpha), was released on May 29, 2005; since then the game has been continuously developed and improved. Since alpha version 0.1.14, high scores, and with them the track records, can be downloaded from within the game. Since alpha version 0.1.8, the game has supported recording and replaying races.
Distribution
The game was part of the c't software collection 6/2009, is offered by numerous download portals and is available in the repositories of common Linux distributions.
Reception
Reviewers found that the game makes a good impression and has quickly addictive gameplay, that its development is slow, and that there are over 1000 levels, with the number still growing.
Weblinks
Project website
References
2005 video game
Free video game
Windows game
macOS game
Linux game
Racing simulation
Apac in the World
Apac District :: Home of the Langi :: Northern Uganda :: Africa
Re-vamping Apac in the World
July 15, 2011 by Line E. Gissel
After many months of hibernation, this blog is now ready to face the world again. If you live in Apac and you like writing, and would like to reflect publicly about all things Apac, Langi or Ugandan, why not become a writer for the blog???
Just write an email to apac.blog@gmail.com, and we will fashion you with a username so you can become a contributor to the blog.
With many good wishes,
Posted in People, Uganda | Leave a Comment »
On local power politics: messy, shady and more or less unchecked
May 22, 2009 by Line E. Gissel
Back stabbing. Character assassinations. Plots to undermine fellow elected politicians or fellow civil servants. And, crucially, thinking 20 steps ahead. These are the ingredients of Apac Politics. This district in Uganda, where 0.2 per cent of the households have access to electricity and 0.1 per cent of the adult population holds a diploma or a degree, provides ample study for a contemporary Machiavelli, while Kasparov could learn a thing or two from those at the epicentre of the eternal conflicts.
The district council is comprised of councillors from (the governing) NRM party and, overwhelmingly, (the opposition party) UPC. (Some say this explains the total lack of interest in the district by the central government.) The executive committee is headed by District Chairman, Hon. Nicholas Opio Bunga, a retired teacher. He selected his fellow resident of Inomo sub-division as his Vice-Chairman. His council includes two councillors who have stood out in the past year: Apac's very own 'Jack the Zipper', Hon. Malakwang, who attacked two women with a scissor for wearing trousers (see previous blog posts), and the councillor for Ibuje, who smashed the glass front of the district notice board because his private construction firm failed to secure a public contract. Neither culprit was disciplined by the Hon. Chairman, 'father of the district'. These are the least of the council's antics.
A month ago, staff at Apac Hospital went on strike. Peaceful collective action in Uganda is rarer than Hummers on the streets of Kampala, so eyebrows were raised. Doctors, nurses and assistants protested the 'disappearance' of their 30 per cent salary top-up, paid by the WHO, and designed to combat the rampant desertion of essential health workers from northern Uganda. According to the council executive, the money had appeared on an account, but nobody had 'remembered' what the money was for, and so had been 'disappeared'. (The district receives 27 billion shillings annually from the central government, and so should be used to keeping track of bank statements…). It is widely believed – and not disproved – that the money was 'eaten', 'privatised'. Neither the Chief Finance Officer, nor the Secretary for Finance or the Chairman offered to explain the matter. Nobody offered to pay back. In the end, the top-ups were partly paid.
Who checks the checkers? The powers and excesses of the district civil service are, in theory, checked by the council. But what happens when councillors have an interest in not checking key civil servants? The answer, according to the democracy school, is for the population to register their dismay through the ballot or by revoking the powers of their representatives. Both are provided for in the Ugandan constitution. But before the population can act, they need to know about the abuses of power. In this sense, knowledge is power. The police and the inspectorate of government can of course investigate on the basis of suspicion, but rarely do; the radio can broadcast any events and discoveries, but is owned by a sub-county politician; and the civil society can demand for accountability. All these actors are either under-capacitated or compromised, and often both.
The Chief Finance Officer answers to the Chief Administrative Officer (CAO), the head of the district civil service. But in a coincidence of perfect timing, Apac has been without a CAO for the past months. The Deputy CAO – upon the refusal by the CAO of another district (Kotido district) to accept his transfer to Apac! – was appointed as Acting CAO by the Ministry of Local Government. But the executive committee of the council wrote to the Ministry to oppose this appointment and the Ministry has hesitated to identify the head of the civil service, the implementing arm of the local government.
In Uganda, every district has a Resident District Commissioner, whose responsibility is to monitor the implementation of central and local government services. In Apac, the RDC had to step in to sort out the hospital crisis. As he reports directly to the President of the country, and in the absence of an angry electorate, he was one of the only people with sufficient powers to put pressure on the council and the civil service to find the missing resources so the hospital could call off the strike.
In this patriarchal and old-fashioned society, the scapegoat of the hospital strike has been a young doctor, one of only two doctors (the hospital is nominated to have seven). Since the collective action took place, he has been at the receiving end of intimidation and character assassination. Perhaps the voters will register their disappointment with the district leaders. Until then, old men in positions of power and authority have a great time doing entirely as they please.
Posted in Conflict, Politics | Tagged corruption, local politics, the democracy school | 3 Comments »
…And word has it that George Bush is also around…
March 3, 2009 by Line E. Gissel
Apparently – and this is difficult to understand – the warlord Joseph Kony, leader of the Lord's Resistance Army, the rebel movement which is killing thousands of civilians in the DR Congo and South Sudan and has turned his war against President Museveni into a regional conflict, has a son… called George Bush! Perhaps Kony named his offspring in honour of a fellow strongman whose name he heard all the time on his satellite radio in Garamba Forest. Or, he felt inspired by the fact that his rebel army was listed on the list of terrorist movements globally, which George W. Bush initiated. Or, he shares the Acholi love for grand history-making names, as described below. Or? You tell me.
Posted in Conflict, People | Tagged Globalisation, LRA | Leave a Comment »
Meet Livingstone, Chairman Mao, Mus(s)olini and Ronald Reagan
February 23, 2009 by Line E. Gissel
Ugandans have an affinity for grand names, whether of the famous or the infamous kind. High-profile members of the public are Livingstone Okello Okello, a Member of Parliament (Chua County/Kitgum District), Chairman Mao, the chairperson of Gulu District, Ethan Musolini, a motivational speaker and CEO of Success Africa, and Ronald Reagan Ukumo, also Member of Parliament (Aswa County/Gulu District). Imagine that Mao has a meeting with Reagan and Livingstone in Parliament, it must happen quite often as they are all three Acholi political leaders, Mao at the district level and Reagan and Livingstone at the national levels. Or that Musolini gives business tips to Mao…!
We are sure to see a lot of Barack and Michelle coming up soon. The other day I met a man, who had just become a father for the first time. His daughter was to be Sasha, after Obama's second-born.
Other things are already named Obama. Across the country there are numerous Obama Supermarkets and Obama Hotels. And Apac has its own Obama Mudslide on the daily Apac-Kampala bus:
The new mudslide on the Felista bus that ferries people between Apac and Kampala
Posted in People, Politics, Uganda | Tagged Globalisation | 3 Comments »
20 days of 'Lightning Thunder' against the LRA and over 400 civilians killed in retaliation: What will 2009 bring?
January 5, 2009 by Line E. Gissel
The Juba Peace Talks look unmistakably failed. For the past 20 days the government has renewed its military offensive against the Lord's Resistance Army, together – it claims – with the Congolese and South Sudanese militaries. According to the government, the attacks were aimed at forcing Kony back to the negotiating table, after he had failed to sign the peace agreement five times. Well, Operation Lightning Thunder did not compel Kony back to the Peace Talks; I am not sure anyone believed it ever would.
The UPDF, the national army, have hit various LRA camps in the heavily forested Garamba National Park in north eastern DR Congo, but somehow Joseph Kony and his fellow insurgents seem to leave these camps in good time. Rather than divine intervention, it is, of course, likely that the LRA is assisted by a source of insider information about any forthcoming attacks. The UPDF says today that they have killed 13 insurgents in total. The media has not been allowed access to the sites, so there has been no independent verification of events.
Independently verified has, however, been LRA's retaliatory attacks on civilians. Which is probably the most worrying aspect of the renewed war between the Government of Uganda and the Lord's Resistance Army. The latter has attacked a number of villages in South Sudan, DR Congo and in the area bordering the Central African Republic; last week 45 people were massacred in a church 10 kms from the town of Doruma in DRC.
It is difficult to get a clear overview of the figures involved. Aid agencies estimate that over 400 civilians have been killed, Caritas quotes a figure of 486. The tabloid paper The Red Pepper reported that 65,000 people have been internally displaced since the attacks began almost three weeks ago.
On Friday morning they attacked trucks in Tori and Yei, South Sudan; and Friday night they were back in the forest, attacking the chief station of the Garamba park rangers. The Red Pepper claimed to know that they were heading south towards Uganda.
People in Apac remember that the LRA, after the government's Operation Iron Fist against its bases in Sudan in 2002, re-invaded parts of Northern Uganda and came as far south as Lira, Apac and Soroti! Their reach into these districts signalled their strength: Lira, Apac and Soroti are hundreds of kilometres from the Sudanese border; the most southern of the three, Apac, is situated almost in the middle of Uganda!
If they could do that in 2002, the question remains: will they be able to do so again? Access to information – independently verified – seems as important as ever. In this region, where governments certainly appear unable to protect their own citizens, information is the most important means of protection.
The fact that part of the LRA consists of abductees makes the issue exceedingly complex. Over the past two decades, the Ugandan and South Sudanese governments failed to protect their villages and to prevent the abduction of children and young people; now these same governments want to kill the LRA insurgents, including the victims-turned-soldiers whose abduction they failed to prevent in the first place. But if they do not attack – and eradicate – the LRA, the government claims, there never will be peace in Northern Uganda.
Right now, people who happen to live at the intersection of the Central African Republic, South Sudan and north eastern DR Congo seem to be most at risk. It is a tragedy that these are three failed states. Although the prolonged existence of the LRA has always had regional aspects (it was funded by Sudan to destabilise Uganda), it has now become a regionally destabilising force itself, finding its victims at the margins of three (or four?) basket cases of African governance.
Only the gods will know what 2009 has in store for this region…
Posted in Conflict, Politics, Uganda | Tagged access to information, LRA | 1 Comment »
Uganda on Obama: Through the lens of ethnicity and post-colonial political trauma, Obama's ascent to power signals a Luo revival to be celebrated or feared…
November 11, 2008 by Line E. Gissel
In Apac and with few exceptions, male surnames begin with O and those of females begin with A. The names are Luo. The word, Lwo, has entered the vocabulary of many non-Africans in 2008. The year began with 'ethnic riots' between the Luo and the Kikuyu of Kenya, and ended with a certain Barack Obama, partly of Lwo lineage, winning the US elections.
In the immigrant country of the USA, it is virtually impossible to judge a person based on her surname. Is a Rice white or black, poor or rich? But in Uganda, the ethnic make-up of somebody is instantly determined on the basis of his surname: anyone with an O-name is from a northern tribe, those with K-names are likely to be Baganda, and those with M-, N- or T-names are probably from western Uganda. The political history of colonial and post-colonial Uganda has contributed to the charged nature of surnames beginning with O, Luo names.
The British recruited Luos and other northern tribes into the army, and favoured the southern tribes in the education system and the civil service. The country's first president (1966-71), Milton Obote, was a Langi from Apac, whose politics alienated many non-Luo people, particularly the Baganda. When Idi Amin took control of the state (1971-79), he eliminated many Luos in the army to prevent a comeback for Obote. Obote did come back (1980-85), but was toppled by Tito Okello, who lost (or ceded, depending on your persuasion) power to Yoweri Museveni, who remains president to this day. His rule has been challenged twice in insurgencies by Luo militants, led by Alice 'Lakwena' Auma and Joseph Kony. The willingness of Luos of different tribes to mobilise behind Obote, Okello, Auma and Kony has given rise to the perception that these tribes are inherently militaristic, easy to mobilise, fearless, strong and – dangerous…
The New Vision newspaper reported today that Ugandan MPs had celebrated the election of Obama: "Conspicuously, names of most MPs in attendance, started with the letter O. From opposition leader Ogenga Latigo, [to] Odonga Otto, Okupa Alijah, Otafiire Kahinda, they were all there. Others adopted the letter O, to suit the occasion. Deputy speaker Rebecca Kadaga became 'O'daga, Igeme Nabeeta became 'O'beta."
It appears that there is such a thing as the 'Lwo factor' in Ugandan politics; and in the political sphere, perceptions matter. Here in Apac, many feel that the national army could have eliminated the Lord's Resistance Army if it had wanted to, and furthermore that it served the government to keep the Luo in check through their 'own' insurgency. (The counter-claim is that the LRA received financial support from Luo abroad.) The exiled Lwo Olara Otunnu claimed in 2006 that the IDP camps in northern Uganda were so badly protected and serviced that the aim was to eliminate the 1.5 million camp dwellers. President Museveni was among the first three heads of state to congratulate Mwai Kibaki upon winning the (disputed) Kenyan elections, defeating the Lwo opponent, Raila Odinga. And when the media earlier this year focused on the regional distribution of high-level state jobs, it emerged that 'northerners' occupy seven per cent of positions of power in the state despite constituting 19 per cent of the population of Uganda.
This narrative of deliberate marginalisation or silent persecution is alive today in the north. Such feelings are often felt most strongly, and articulated most frequently, by those in the diaspora. Yesterday a letter from Canada to the editor of New Vision thus argued that "Over the years if you were of Luo background in Uganda and Kenya you were likely to face this silent hatred, cynicism and even ridicule because of your Luoness. After the overthrow of Obote I, some people had to change their Luo names to make them look non-Luo. For example from Okobel the name was changed to Kobel to remove the 'O' to protect such a person from easy identification… In East Africa, the election of Barack Obama brings home a revolution to not only all citizens, but particularly to those who are Luo who had felt despised for no apparent reason, except that they are Luo. Barack Obama's election should be significant and therapeutic to all, especially the Luo in Uganda and Kenya who had been suffering from the trauma of being invisible and isolated."
Obama's ascendancy brings hope, to some, of a Luo revival. While the election of Obama was made possible by a sense of nationhood in the US, in East Africa the event is interpreted through the lens of ethnic or tribal differences.
Posted in Globalisation, People, Politics, Uganda | Tagged Globalisation, Luo | Leave a Comment »
On 'fake accountability'
October 17, 2008 by Line E. Gissel
You would think that the phrase 'fake accountability' was an oxymoron: how can something provide accountability and fake it at the same time? Well, it is a very well-known concept here in Apac. In fact, yesterday I was asked to contribute to it! Fake accountability is accountability that is doctored, made up. It is mainly 'paper accountability' – receipts, attendance lists, quotations – that is falsely made up; 'physical accountability' is more difficult to doctor, as it concerns real things on the ground: whether the contract is performed, whether the purchased item physically exists, whether something is available.
Yesterday I attended a dialogue meeting organised by an NGO. It was a fruitful exchange of ideas (and blame) between CSOs and Lower Local Government officials, both elected and appointed, with the aim of ensuring that Apac district will perform well in the upcoming Local Government Assessment. Every year, Apac fails the assessment and thereby loses 20 per cent of its funding from the Ministry of Local Government, which means 20 per cent less spending on public services. The underlying reason for this constant struggle to pass the assessment's minimum criteria is the nature of governance in Apac.
Following the end of the Cold War, and particularly the way in which it ended, the donor community and Western governments thought it wise to democratise Africa from below: they invested heavily in civil society, which was supposed to keep the state in check. As civil society thereby stands conceptually opposite the state, it is often supposed to be substantially opposite too – but it is not… Often, civil society mirrors the public sector, perhaps because the underlying reasons for public conduct are societal.
Yesterday, as I signed the attendance list for the dialogue meeting – a list that would constitute part of the evidence that the event actually took place – I was handed another attendance list. That of another meeting, which never had and never would take place. But for which money was already spent. The list would be presented to the donors as proof of expenditure on transport refunds, lunch and sitting allowances for the participants. Four people had already signed the document, with or without noticing the discrepancy in meeting titles.
The list was never circulated further, and the first sheet was thrown away. Wisely or unwisely. You see, the organiser of the fictitious event is a relatively powerful person in Apac. Paradoxically, the meeting's theme was good governance in the civil society sector.
Posted in Uncategorized | Tagged corruption, workshops | Leave a Comment »
The million-dollar question is whether ownership of a development process can be transferred
October 1, 2008 by Line E. Gissel
The no. 1 ingredient that almost all donor-recipient relationships contain is the need for the recipient to feel a sense of ownership. Ownership of the project or programme. Of the development process or initiative. Of a change process which has only become possible because of a factor from without: the idea, the rationale, the money, the equipment.
The million-dollar question is: is it possible? Is it possible to define the (often narrow) parameters of funding, select the recipients, fund the programme – and at the same time transfer ownership of the objectives, activities and entire process to the recipients?
Ownership means that the (local) recipients own the project and determine a range of factors, from recruitment of project employees to budget allocations. It assumes that you take better care of things you own, that you become more dedicated because you own it.
But the donor often needs to satisfy her own donors, whether governments, larger organisations or the general public in the West, and therefore does not feel that she can let everything be determined by the (new) owners. Because, what if the recipients take decisions with which the donor disagrees? Should the latter step in and 'remind' the recipients of the 'right' path, the objectives of the partnership, or should she stick it out, risking that the project takes on unforeseen or undesirable dimensions, becomes subject to non-liberal local dynamics, or is used for private rather than public gains? If the answer lies somewhere in between these two options, the question remains whether a path between donor control and local ownership exists at all. And, if so, what amount of donor control would disable the sense of ownership?
The jury is still out, I'm afraid.
Today is International Day of Democracy.
September 15, 2008 by Line E. Gissel
This morning, the radio read a statement which highlighted Democracy Day, a recent addition to the long list of international days. In 2007 the UN General Assembly apparently adopted the day, defining democracy as a
"universal value based on the freely-expressed will of people to determine their own political, economic, social and cultural systems, and their full participation in all aspects of life."
The statement was translated into Lwo by the newsreaders, who were apparently struggling with the vernacular terms for some of these words. The influx of new concepts from without seems to have taken place too quickly for leb Lango (the tongue of the Lango) to absorb. So the news piece was about elections, a small part of the wider notion of democracy, rather than about an opening up of the political space and the participation of citizens in local decision-making. These things happen. The word for Treasurer in Lango, for instance, translates as 'keeper of the money'… a rather misleading term, particularly when one considers the fact that there is so much corruption here.
Across the globe, meanwhile, American keepers of the money either filed for bankruptcy or flagged their warning signs. But the global village was not so global that this news travelled all the way to Apac, where life went on as usual and people called the radio with comments and lamentations. It will be interesting to see whether the shocks of a global financial crisis are felt here in this seemingly isolated part of the world.
Posted in People | Leave a Comment »
To workshop or not to workshop: the political economy of the civil society sector
August 3, 2008 by Line E. Gissel
Is it because Uganda has been a donor darling since the early 1990s? Or because civil society is marked by a poverty of ideas? Or perhaps because everybody wants development and nobody wants change (see previous post)?
Whatever the reason, the civil society sector in Uganda can be summarised by one word: workshopping. Or 'workshop hopping'. Civil society activists hop from workshop to workshop, at their regional capitals or, mostly, in Kampala:
To be consulted on a particular issue, such as the new NGO Amendment Bill or the indictment of the LRA leadership by the International Criminal Court.
To be sensitised on a value that is deemed important, such as rights-based approaches to development or gender equality.
To discuss issues that confront their own sector, such as NGO accountability.
To have their capacity built in, say, decentralisation policies or stakeholder analysis methodologies.
To be briefed about a new funding opportunity, such as an EU development programme.
To engage with the local or central government in 'dialogue meetings'.
This culture of workshopping has generated three challenges:
The need to translate workshop knowledge and ideas into real work, at desks and in fields across the country.
The need to follow up the resolutions and ways forward generated at the workshops; to see how far things go once the participants leave the hotels, conference centres and community halls.
The need to de-monetarise knowledge and skills. At the moment, workshop participants get, expect and rely on transport refunds, per diems, out-of-pocket facilitation, allowances for accommodation and dinner… you name it.
The other day, a Head of Department at the Apac District administration lamented that his department cannot get community members to attend his meetings, sensitisations and workshops because he does not have a budget for the various forms of 'facilitation' which they expect. Farmers leave the meeting on, say, new farming technologies or value addition once they hear that there will be 'no facilitation' such as a transport refund or an allowance. It is a real problem.
During most workshops, the first session will concern 'Expectations and Fears'. It is common to hear participants list 'transport refund' as an expectation and 'not enough facilitation' as a fear, after which the workshop organisers will have to explain which levels of 'facilitation' their budget allows.
One day, I gave a lift to Kampala to four workshop participants. We reached the conference centre earlier than planned; the invitation had simply told up-country participants to register in the evening, so as to be ready for the morning session the following day. They complained that the workshop organisers had probably only booked dinner for them, and not lunch, and that they would now have to meet the cost of lunch themselves. Perhaps they forgot that if they had been in Apac, they would have had to buy lunch for themselves; or that my lift had saved them the transport cost, since they would get a transport refund at the end of the workshop.
Their thinking, it seems, indicates that in Uganda there exists a culture of workshopping: a particular set of seemingly self-evident practices and interpretations of life and the world. It is so central to the whole NGO set-up of this equatorial country that this blog will explore its many aspects over the coming months.
Posted in People | 1 Comment »
The Public in Apac…
"[F]or while the public realm may be great, it cannot be charming precisely because it is unable to harbor the irrelevant." - Hannah Arendt
Vegetables Currently at Apac Market
Cabbages, green leaves, green peppers, red onions, tomatoes. That's it.
//---------------------------------------------------------------------------//
//---------------------------------------------------------------------------//
/*!
* \file MCLS_SolverFactory.hpp
* \author Stuart R. Slattery
* \brief Linear solver factory declaration.
*/
//---------------------------------------------------------------------------//
#ifndef MCLS_SOLVERFACTORY_HPP
#define MCLS_SOLVERFACTORY_HPP
#include <string>
#include "MCLS_SolverManager.hpp"
#include <Teuchos_RCP.hpp>
#include <Teuchos_ParameterList.hpp>
#include <Teuchos_Describable.hpp>
#include <unordered_map>
namespace MCLS
{
//---------------------------------------------------------------------------//
/*!
* \class SolverFactory
* \brief Factory class for generating solver managers.
*/
template<class Vector, class Matrix>
class SolverFactory : public virtual Teuchos::Describable
{
public:
//@{
//! Typedefs.
typedef Vector vector_type;
typedef Matrix matrix_type;
typedef SolverManager<Vector,Matrix> Solver;
typedef std::unordered_map<std::string,int> MapType;
//@}
//! Constructor.
SolverFactory();
// Creation method.
Teuchos::RCP<Solver>
create( const std::string& solver_name,
const Teuchos::RCP<Teuchos::ParameterList>& solver_parameters );
private:
// Solver enum.
enum MCLSSolverType {
ADJOINT_MC,
FORWARD_MC,
ADJOINT_MCSA,
FORWARD_MCSA,
ADJOINT_ANDERSON,
FORWARD_ANDERSON,
FIXED_POINT
};
// String name to enum/integer map.
MapType d_name_map;
};
//---------------------------------------------------------------------------//
} // end namespace MCLS
//---------------------------------------------------------------------------//
// Template includes.
//---------------------------------------------------------------------------//
#include "MCLS_SolverFactory_impl.hpp"
//---------------------------------------------------------------------------//
#endif // end MCLS_SOLVERFACTORY_HPP
//---------------------------------------------------------------------------//
// end MCLS_SolverFactory.hpp
//---------------------------------------------------------------------------//
A Fellow of Caius College from 1631 and subsequently of All Souls College at the proposal of William Laud, he soon became chaplain to Charles I of England and supported him in the war against Oliver Cromwell.
He was the author of works such as Sermons (1653), Holy Living (1650), Holy Dying (1651) and The Great Exemplar (1649), books of daily prayers that culminated in The Golden Grove (1655).
After Charles's death he was assigned (1657) a lectureship at Lismore, the diocese of Down and Connor and the supervision of the diocese of Dromore.
|
{
"redpajama_set_name": "RedPajamaWikipedia"
}
| 5,234
|
2015 WCWS Game 4 Quotes and Notes UCLA 7, Oregon 1
UCLA 7, Oregon 1
An Interview With: COACH KELLY INOUYE-PEREZ, GABRIELLE MAURICE, ALLY CARDA, STEPHANY LAROSA
THE MODERATOR: We're joined by UCLA head coach Kelly Inouye-Perez. To her right we have student-athlete Ally Carda, student-athlete Gabrielle Maurice and student-athlete Stephany LaRosa.
COACH INOUYE-PEREZ: First, we'd like to say it's a great opponent we just played in Oregon. Very talented. We had some dog fights against them. So we knew today was going to be a great game. But I'm so proud of my Bruins. They just made a little history for themselves here. First World Series, to be able to get out there and get the W in the fashion they did playing their game. I'm just very proud of them. We're not done yet. But day one is the big one, and I'm really proud of where we are right now.
Q. Coach, if you can, talk about the barrage, that five-run inning?
COACH INOUYE-PEREZ: You know, I think what's most impressive about this team, I couldn't even tell you any one individual that stood out tonight. Some great at-bats. For Steph (Stephany LaRosa) to be able to come out and strike and throw that first punch was outstanding in the first couple of runs that we scored. And then Gabi (Gabrielle Maurice). But definitely that one inning — that's something that we do well. When we get going we can definitely score in bunches. That's a product of them having quality two-strike, two-out at-bats. They do a great job of being able to compete down to the last pitch. So, very unselfish team and quality at-bats across the board, and definitely some big runs to get Ally (Carda) some assurance so she didn't have to be so perfect at the end.
Q. Stephany and Ally, curious what role the two of you have played for each other over the years and how has that partnership worked over the years?
STEPHANY LAROSA: Me being the catcher, I'm kind of new to the position, still it's only my second year. But Ally (Carda) has done a great job for me in a sense where she's definitely instilled that confidence in me that I could get behind the plate regardless of having no experience at all. So much credit to her for kind of instilling that in me and it makes me come out here and be as strong as I am behind the plate because of her.
ALLY CARDA: I'll give the credit right back to her. I think she instills the confidence in me as a pitcher. Especially playing Oregon tonight, they're a tough team and unfortunately earlier in the year we lost two. So for her to be behind the plate with me tonight and to reassure me that we're in a good place and make sure my stuff's moving. It gives me great confidence and makes me nice and calm on the mound, and she makes me feel I can do anything out there. So it's been great.
Q. Ally, early on, you know, you had many of your pitches come in late, you got it back. Talk about being in the circle and just having command and being able to dominate your opponent?
ALLY CARDA: I think we've really prepared for the team tonight and for this tournament. We've been doing it all year. For me on the mound, I think I've been taking it, we talk a lot about it, but one pitch at a time. I need to focus every pitch. We can't really take any breaks because all these teams are really good. They're great hitters. So once I take one break, that's where we're in trouble. So everything I've been focusing on is just one pitch at a time and making good pitches move and putting them where I need to.
Q. A little bit off topic, but we've seen some creative stuff in dugouts over the course of the year. And especially here this weekend. LSU bought a goldfish. You guys had fun stuff going on, Stephany and Gabrielle, if you could both touch on this. Any symbolism to what you guys do in the dugout or any fun stories connected to it?
STEPHANY LAROSA: Absolutely we have a theme established. We like to produce runs. But when you get to this point in the season your dugout is a huge part. They're kind of where your energy comes from. So we kind of take anything and we run with it. Just something to buy into. So being able to buy into lime drives and producing runs, it's big for the team, and it's definitely contagious, and I think it plays a big part, especially out here tonight.
GABRIELLE MAURICE: Going off of that, I think it just keeps everything simple in the dugout, not making anything much bigger than it is, and it keeps everyone loose and goofy so we can play relaxed and I think it really benefits us.
Q. Stephany, to have to wait four years to get here, were you able to savor where you were out there, were you too locked into the actual game to kind of savor it?
STEPHANY LAROSA: No, absolutely. I think it's a big thing. Try not to make things as big as they are but we talk a lot about living in the moment, taking one pitch at a time and enjoying it because it is our senior year, it's our last hurrah, and to be out here with this group of girls it's going to be quite a memorable experience.
An Interview with: COACH MIKE WHITE, CHERIDAN HAWKINS, JANIE TAKEDA
THE MODERATOR: We're joined by Oregon head coach Mike White, student-athletes Cheridan Hawkins and Janie Takeda. Coach, general comments about the game.
COACH WHITE: Obviously congratulations to UCLA. It's good to see at least one Pac-12 school go through right now. Obviously not the way we kind of wrote it up. But UCLA were opportunistic and they had opportunities, hit the ball hard the second innings there, we kind of settled down, got into a game where it could have gone either way for quite a while. I don't think the score, 7-1, really tells the story of the game. It was a lot closer than that game. They just happened to get that big break in, I think, it was the sixth innings and kind of blow it up a little bit. What I'm really proud about our ladies, they never quit. They keep trying, keep trying. And just the game comes down to inches, as I told the team before. Just didn't go our way today. So the big thing for us now is bouncing back and getting ready to play Alabama.
Q. Coach, talk about being in the moment and then moving on to the next game.
COACH WHITE: Well, yeah, at this stage, it really comes down to again a little bit of luck. And you have the talent there. We had the talent. We didn't get the luck tonight. UCLA played very well. Excellent defense. Made some great plays out there. Ally Carda made some great pitches. But that's the story of the game. That's what makes this game great; you don't know what's going to happen. Now we have to come out, rebound, get after Alabama. And you have to play for our life, so to speak. We'll make sure that we're not going home Saturday. And we look to fight another day.
Q. Coach, how do you keep everybody kind of focused and locked in during a weather delay like that?
COACH WHITE: Well, in Oregon it rains every now and then. So we're kind of used to that. That wasn't a problem. In fact, it was good for our team. I thought the team came back and it was a 0-0 game for the next four and a half innings. I thought we bounced back pretty well after that. As I said, we hit the ball hard. We really hit the ball hard. I don't know if you noticed the radar gun, (Ally) Carda was hitting 71 there a couple times. She was pumped. But I thought we did a really good job. Like I tell you, a few inches either way, we could have had some runs up on the board.
Q. Janie, the play in the first where you get thrown out, you hit a ball really hard with runners on I think your next time up. Any one of those opportunities stick out the most where you think, hey, if we could have just got that one to go our way, it might have gone a different way?
JANIE TAKEDA: Yeah, I mean, you can go over the game thousands of times in your head and think about what could have happened, what you could have done differently. I think the most important thing is to take the positives away, obviously reflect on any mistakes I made and fix them the next time I step on the field. Yeah.
Q. Cheridan, first can you talk about the two pitches in the second inning that went out of the park; and then secondly, knowing your hitters and how well they can get you back in the game, what was your mindset from that point forward?
CHERIDAN HAWKINS: I think with the home runs, I felt really aggressive. And they're good hitters and they make adjustments and I think that I felt confident in those pitches I threw and I felt like I attacked the zone well. They hit the ball. They're a good hitting team. I think for offense, I didn't ever lose hope that we wouldn't hit. They've had my back so many times and sometimes it just doesn't go our way. And like Coach White said, we did hit the ball really hard quite a few times and it just didn't fall for us. And that's all right. And we'll get it next time.
Q. Cheridan, you have been tough to beat two times in a row; it's been a couple of years since that happened. Why do you think that is? And does it give you confidence going forward that you can bounce back from this?
CHERIDAN HAWKINS: I think we're pretty determined, and we're just — I think a moment today we didn't make every pitch count. And I think we do a good job of learning from our mistakes. And the most we can do is compete on Saturday. And obviously no one likes losing. So, we're going to hopefully come out aggressive and just attack and play Oregon softball and see what happens.
In the fourth game of the 2015 Women's College World Series, No. 7 seed UCLA defeated No. 2 seed Oregon, 7-1. The Bruins improved to 51-10 on the year, while the Ducks fell to 51-7 overall.
With the win, UCLA advances to face No. 3 seed Michigan on Friday at 8:30 p.m. CT. Oregon will play No. 6 seed Alabama at 1:30 p.m. CT on Saturday.
In its 25th WCWS appearance (first since 2010), UCLA moved to 91-29 overall at the tournament. Oregon is now 4-7 overall in its fourth WCWS appearance (1989, 2012, '14, '15).
UCLA senior pitcher Ally Carda moved to 32-6 on the season. In a full game's work, she allowed just one run on six hits. Carda struck out three batters and walked one.
Two second-inning home runs gave UCLA a 2-0 advantage. Senior Stephany LaRosa sent a solo shot to center field, while sophomore Gabrielle Maurice hit one to left field, her 10th of the year, with two outs.
LaRosa's home run was her 20th of the season and brought her RBI total to 70. She has hit home runs in seven of her last eight games and is riding a 16-game hitting streak.
The Bruins added to their lead in the sixth inning, scoring five runs in the frame. Junior Mysha Sataraka hit a two-run double to right field, senior Gracie Goulder added another run with a single and freshman Kylee Perez sent a double to right field to score two runs.
As a pinch hitter, freshman Lauren Lindvall put the Ducks on the board with an RBI single to right field, scoring junior Koral Costa from second base.
With a single in the fourth inning, Oregon junior Hailey Decker increased her hitting streak to 14 games.
Oregon junior pitcher Cheridan Hawkins fell to 30-4 on the season with the loss. In 5.2 innings, Hawkins gave up seven runs on eight hits. She walked one batter and struck out six. In relief, senior Karissa Hovinga pitched 1.1 innings, allowing one hit and striking out a batter.
The contest was in a lightning delay for 49 minutes in the top of the third inning.
Attendance for Session 2 was 8,360.
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 2,478
|
Q: django.db.utils.OperationalError when running makemigrations. I am trying to connect Django to PostgreSQL (which I installed on a Windows 10 virtual machine), but every time I run makemigrations it throws the error in the title (django.db.utils.OperationalError). The VM has its firewall disabled (otherwise it throws an error), bridged networking, network discovery enabled, ... I can ping the machine, so there is connectivity between the two... I don't know where the failure is... If you could lend me a hand, I would really appreciate it.
La versión de Django es la 3.0.5 y la de Python 3.8.2, aunque he probado con la 3.7.7 también y el error sigue saliendo.
models.py:
from django.db import models

class Clientes(models.Model):
    nombre = models.CharField(max_length=30)
    direccion = models.CharField(max_length=50)
    email = models.EmailField()
    telefono = models.CharField(max_length=7)

class Articulos(models.Model):
    nombre = models.CharField(max_length=30)
    seccion = models.CharField(max_length=20)
    precio = models.IntegerField()

class Pedidos(models.Model):
    numero = models.IntegerField()
    fecha = models.DateField()
    entregado = models.BooleanField()
settings.py:
# Application definition
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'gestionPedidos',        # added
    'django.contrib.sites',  # added
]

MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
]

ROOT_URLCONF = 'TiendaOnline.urls'

TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
            ],
        },
    },
]

WSGI_APPLICATION = 'TiendaOnline.wsgi.application'

# Database
# https://docs.djangoproject.com/en/3.0/ref/settings/#databases
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'articulosclientes',
        'USER': 'postgres',
        'PASSWORD': 'root',
        'HOST': '192.168.1.49',
        'DATABASE_PORT': '5432',
    }
}

# Password validation
# https://docs.djangoproject.com/en/3.0/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
    {
        'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
    },
]

# Internationalization
# https://docs.djangoproject.com/en/3.0/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True

# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/3.0/howto/static-files/
STATIC_URL = '/static/'
A: It could be a typo in the DATABASES section of your settings.py file, specifically in the host. When working with a local PostgreSQL instance, the host should be 127.0.0.1.
Written out, it would look like this:
'HOST': '127.0.0.1',
Or you could write it as 'localhost', like this:
'HOST': 'localhost',
Afterwards, you can run the following line in the terminal:
python manage.py makemigrations
And finally, to see the results in PostgreSQL, run in the terminal:
python manage.py migrate
A: 'ENGINE':'django.db.backends.postgresql'
is wrong; it should be
'ENGINE':'django.db.backends.postgresql_psycopg2'
as well as setting
'HOST':'localhost'
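One further thing worth checking, independent of the answers above: Django's per-database settings use the key `'PORT'`, not `'DATABASE_PORT'`, so the port in the question's settings.py is silently ignored. (Also note that `'django.db.backends.postgresql'` is a valid engine name in Django 1.9 and later; `postgresql_psycopg2` is merely the older alias.) A corrected `DATABASES` block, keeping the remote VM host from the question, might look like this:

```python
# Sketch of a corrected DATABASES setting; the host and credentials are
# the ones from the question and must match your own environment.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'articulosclientes',
        'USER': 'postgres',
        'PASSWORD': 'root',
        'HOST': '192.168.1.49',  # the VM's address; 'localhost' only if the DB is local
        'PORT': '5432',          # the key is 'PORT', not 'DATABASE_PORT'
    }
}
```

For a remote host, PostgreSQL itself must also accept the connection: `listen_addresses` in postgresql.conf has to cover the interface, and pg_hba.conf needs a matching `host` rule for your client's address; otherwise psycopg2 raises the same OperationalError.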
# Tileable fBM Noise?

Hi guys, I'm trying to make 2D fBM tileable. I can make the underlying Perlin noise tileable by adding an integer period to the implementation, but the fBM built from it does not come out tileable. Any info would be appreciated.

**Reply:** You should reduce x and y modulo px and py once, at the beginning; then you wouldn't need per-octave tricks. Multiplying x and y by the scale factor is seriously wrong; the correlation is very likely to be visible. Instead, you could reduce px and py for finer scales and perturb the noise function (does it include a "seed"?) to make completely unrelated noise at each octave. Another missing detail: the amplitude of each octave, definitely different if you want fBM noise.

**Reply (irreversible):** The simplest way to make 2D noise tileable is to make it periodic using some periodic function(s). E.g.:

```
fScaleX  = x / fPeriodSize;     // 0 <= x <= fPeriodSize
fPeriodX = fScaleX * 2 * PI;    // 0 <= fPeriodX <= 2*PI
fRadius  = 1.f;                 // modify this to scale noise
// apply the same to y
```

**Reply:** You can also use some form of interpolation noise (value noise or gradient noise) where the values assigned on the grid are periodic.

**Original poster:** I am indeed using a 2D noise function; it's Perlin's improved noise, shown below. It makes some pretty nice tileable textures, but maybe they aren't as nice as they can be? Some of your posts are hard to understand; maybe now that you see my code you can be more specific? Thanks.

```
float pnoise2( float x, float y, int px, int py )
{
    int ix0, iy0, ix1, iy1;
    float fx0, fy0, fx1, fy1;
    float s, t, nx0, nx1, n0, n1;

    ix0 = FASTFLOOR( x );   // Integer part of x
    iy0 = FASTFLOOR( y );   // Integer part of y
    fx0 = x - ix0;          // Fractional part of x
    fy0 = y - iy0;          // Fractional part of y
    fx1 = fx0 - 1.0f;
    fy1 = fy0 - 1.0f;
    ix1 = (( ix0 + 1 ) % px) & 0xff; // Wrap to 0..px-1 and then to 0..255
    iy1 = (( iy0 + 1 ) % py) & 0xff; // Wrap to 0..py-1 and then to 0..255
    ix0 = ( ix0 % px ) & 0xff;
    iy0 = ( iy0 % py ) & 0xff;

    t = FADE( fy0 ); // fade curves; without these two assignments,
    s = FADE( fx0 ); // s and t are read uninitialized below

    nx0 = grad2(perm[ix0 + perm[iy0]], fx0, fy0);
    nx1 = grad2(perm[ix0 + perm[iy1]], fx0, fy1);
    n0 = LERP( t, nx0, nx1 );

    nx0 = grad2(perm[ix1 + perm[iy0]], fx1, fy0);
    nx1 = grad2(perm[ix1 + perm[iy1]], fx1, fy1);
    n1 = LERP( t, nx0, nx1 );

    return 0.507f * LERP( s, n0, n1 );
}
```

**Reply (JTippetts):** It probably does work okay, and if it works for your needs then awesome; but a potential issue with tiling by constraining each octave to a repeating lattice pattern can be demonstrated by building a ridged fractal up in layers (1 octave, then 2, 3, and 4). If you look at such an image closely, you will see horizontal and vertical lines that grow more pronounced as more octaves are added. This is a consequence of generating noise on a lattice grid. Even though Perlin's gradient and simplex noise variants use wavelet functions to throw the peaks and valleys off of the grid points, the underlying grid structure is still there, and it can appear as artifacts in the final result. Some fractal variants demonstrate the artifacts more strongly than others, but they are always going to be there. You can use non-integral lacunarity values, which helps to mitigate the problem somewhat, since successive grid lines then don't exactly line up with one another. (Of course, this prohibits your method of seamless tiling, due to the fractional sizes of successive layer grids.) In order to make a noise fractal tile with your method, you have to make the grid pattern of each layer repeat with some period, so you're going to have to live with the artifacts.

However, one common trick to reduce or eliminate the grid artifacts is to apply a randomized rotation around an arbitrary axis for each successive noise layer. That is, for each layer you generate a random axis/angle and use it to rotate the input coordinate before sampling the noise function for that layer. In a build-up of a ridged four-layer fractal with random domain rotations, the horizontal and vertical lines of the previous image disappear: each layer is rotated on a different axis, and the rotation removes the noise pattern from the grid axis (or at least causes you to "view" the grid from something other than straight-on). Successive layers' grid artifacts no longer line up with one another, so no grid artifacts are manifest in the final result. The problem with this approach is that you can't implement tiling using the periodic grid-based method you use, since that relies upon the edges of the periodically repeating grid pattern lining up with the edges of the output image, and the rotation throws that off completely.

Additionally, constraining your tiling algorithm to act upon the underlying grid of each layer assumes that the noise is grid-based in the first place. If you want to use something that isn't grid-based, such as sparse convolution noise (where you take a scattering of non-grid-based randomized points and convolve them to produce the final signal), then you are stuck, since there is no underlying grid to manipulate to create periodicity. In that case, you would need some other type of seamless tiling algorithm (such as the blending or the 4D mapping I mentioned) to make it tile seamlessly. It might not be an issue for you, though, depending on your application. If your algorithm works for you, then you probably don't need to mess with it. Just be aware that there can be artifacts that you might have to deal with.

**Original poster:** JTippetts, thanks a lot for explaining all that to me; I really understand it now. I can see you've done a lot of work with noise functions. Right now I'm pretty happy with my tileable 2D noise because I'm using it only for textures. One day I'd like to spend more time with this stuff; I find it fascinating. Thanks again.
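The periodic-lattice approach discussed in this thread (make each octave's grid values repeat, and keep an integer lacunarity so every octave's period remains a multiple of the tile size) can be sketched with value noise. This is an illustrative example rather than code from the thread; the tuple-hash seeding of the lattice values is an arbitrary choice:

```python
import math
import random

def _fade(t):
    # Perlin's quintic fade curve for smooth interpolation.
    return t * t * t * (t * (t * 6 - 15) + 10)

def periodic_value_noise(x, y, period, seed=0):
    """Value noise whose lattice values repeat every `period` cells."""
    ix, iy = math.floor(x), math.floor(y)
    fx, fy = x - ix, y - iy
    u, v = _fade(fx), _fade(fy)

    def lattice(i, j):
        # Deterministic pseudo-random value in [-1, 1]; reducing the
        # coordinates modulo `period` is what makes the noise tile.
        return random.Random(hash((i % period, j % period, seed))).uniform(-1.0, 1.0)

    n00, n10 = lattice(ix, iy), lattice(ix + 1, iy)
    n01, n11 = lattice(ix, iy + 1), lattice(ix + 1, iy + 1)
    nx0 = n00 + u * (n10 - n00)
    nx1 = n01 + u * (n11 - n01)
    return nx0 + v * (nx1 - nx0)

def tileable_fbm(x, y, period, octaves=4, seed=0):
    """fBM that tiles with period `period` in both x and y."""
    total, amp, freq, norm = 0.0, 1.0, 1, 0.0
    for o in range(octaves):
        # Integer lacunarity (2) keeps each octave's period an exact
        # multiple of the tile size, so the weighted sum still tiles.
        total += amp * periodic_value_noise(x * freq, y * freq, period * freq, seed + o)
        norm += amp
        amp *= 0.5
        freq *= 2
    return total / norm
```

Using a different `seed` per octave plays the role of the "completely unrelated noise at each octave" suggested above. It does not remove the grid-artifact issue raised in the thread, for which a random domain rotation per octave is the usual fix (at the cost of this simple tiling scheme).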
\section{Introduction}\label{sec1}
Resonance is a nonlinear phenomenon that often occurs when the wave numbers or frequencies of two or more waves satisfy an appropriate resonance condition \cite{ml}. Such a unique physical phenomenon is widely observed in both linear and nonlinear dynamical systems. Among many examples, a classical one is the long-wave short-wave resonance interaction (LSRI) model, which finds applications in fluid dynamics \cite{benny,grimshaw1}, plasma physics \cite{zakharov,oikawa}, nonlinear optics \cite{kivol,lsrinim,ablowitz2}, Bose-Einstein condensation \cite{frantz,frantz1}, and biophysics \cite{boiti}. The LSRI takes place when a high-frequency short wave (SW) and a low-frequency long wave (LW) obey the Zakharov-Benney condition: the group velocity of the SW ($v_g=d\omega(k)/dk$) must exactly or almost match the phase velocity of the LW ($v_p=\omega/k$), that is, $v_g=v_p$. The LSRI literature originates from the theoretical investigation of Langmuir waves in plasma, where the generalized Zakharov equations were derived \cite{zakharov}. After this pioneering work by Zakharov, there have been several experimental and theoretical research activities based on the LSRI phenomenon in different contexts, ranging from
lower dimensions \cite{rede,ma,rede1} to higher dimensions \cite{boyd,funakoshi, ohta1,radha}, with single component \cite{kanna1} to multi-component \cite{kanna2,kanna3,kanna4,chen1,sazonov,jetp2009,myrza86}. These studies also report the existence of several types of nonlinear localized wave structures \cite{kanna1,kanna2, kanna3, chen3,wazwaz,alrazi1,alrazi2,alrazi3}, namely bright soliton with a single-hump structure \cite{kanna1,kanna2, kanna3, chen3} and bright soliton with a double-hump structure \cite{stalin-lsri}, dark soliton \cite{chen1,kanna3,kanna4}, breathers \cite{chen5}, and rogue-waves \cite{chow1, crespo1,chow2,crespo2,chen4},
and their novel properties have also been exhibited there. The main focus of this paper is to present the soliton solutions, both bright and dark, and the breather solution of the recently introduced generalized long-wave short-wave resonance interaction (LSRI) system
\begin{eqnarray}
&& iS_t+S_{xx}+(i\alpha L_x+\alpha^2L^2-\beta L- 2 \alpha \lvert S\rvert^2)S=0,\nonumber \\
&& L_t=2(\lvert S\rvert^2)_x. \label{1}
\end{eqnarray}
The system (\ref{1}) has been introduced in \cite{bib1}, where the authors established its integrability by providing its ($3\times 3$) Lax pair. In system (\ref{1}), $S$ $(\equiv S(x,t))$ describes the short wave and $L$ $(\equiv L(x,t))$ represents the long wave, the subscripts $x$ and $t$ denote partial derivatives with respect to the spatial and temporal coordinates, respectively, and the nonlinearity coefficients $\alpha$ and $\beta$ are real parameters. The nonlinearities in Eq. (\ref{1}) arise from the self-interaction of the short-wave packet, as in the NLS equation, and from the interaction between the LW and the SW. The formation of a soliton in the SW component is essentially due to the balance of its dispersion by the nonlinear LW-SW interaction and the self-interaction of the SW. The self-interaction of the SW determines the formation and evolution of the soliton in the LW component.
We wish to point out that the generalized LSRI system (\ref{1}) reduces to two well known LSRI models. For example, the system (\ref{1}) becomes the following Yajima-Oikawa (YO for short) system for $\beta=\pm 1$ and $\alpha=0$ \cite{oikawa},
\begin{eqnarray}
iS_t+S_{xx}\pm LS=0, ~~~
L_t=2(\lvert S\rvert^2)_x, \label{2.a}
\end{eqnarray}
and it turns into the Newell LSRI system,
\begin{eqnarray}
iS_t+S_{xx}+(iL_x+L^2- 2 \lvert S\rvert^2)S=0,~~~
L_t=2(\lvert S\rvert^2)_x, \label{2.b}
\end{eqnarray}
for $\beta=0$ and $\alpha=1$ \cite{newell,ling,bib3}. In Ref. \cite{oikawa}, the formation and interaction of solitons are studied within the framework of the YO system (\ref{2.a}) by the inverse scattering technique (IST), for Langmuir waves coupled with ion-acoustic waves propagating in one direction. An alternative long-wave short-wave model (\ref{2.b}) was proposed, and the nature of its solitons analyzed using the IST, by Newell in Ref. \cite{newell} to describe Benney's theory of the nonlinear interaction of long and short waves. The present LSRI system (\ref{1}) proposed in \cite{bib1} can be treated as a general model for the interaction of long and short waves.
From the literature, we find that the nature of the solitons, their underlying analytical forms, and their interaction properties have not been unravelled so far for Eq. (\ref{1}). This is what we intend to report in this paper. By applying the Hirota bilinear method, multi-bright- and multi-dark-soliton solutions of the system (\ref{1}) are constructed along with the breather solution. An important fact is that these multi-soliton solutions are written in a compact way using Gram determinants. By doing so, we find that the fundamental bright soliton of the present LSRI system behaves like the KdV soliton, since it possesses the amplitude-dependent velocity property. When a special condition is imposed on the system parameter $\beta$ and the soliton velocity, it also acts like the NLS soliton. The simultaneous existence of these properties in the present generalized LSRI system (\ref{1}) is not possible in the other single- and multi-component YO LSRI systems \cite{oikawa,bib3,kanna1}, nor in the derivative YO or Newell LSRI system \cite{newell,bib3}. Further, very interestingly, the bright solitons undergo V- and Y-type resonance interactions upon tuning the phase-shift regime. Such a possibility has not been observed earlier in the YO system (\ref{2.a}). In addition, an interesting feature we observe in the present LSRI system is the appearance of a standing breather in the breather patterns. We also obtain a soliton on a periodic-wave background by tuning the background wave field. Apart from these, by fixing the velocity resonance condition appropriately, various types of bright and dark bound states are brought out.
In general, to solve integrable nonlinear partial differential equations (PDEs), the following analytical methods have been widely used in the soliton literature \cite{ml,istbook,dtbook}: (i) the inverse scattering transform, (ii) the Darboux transformation method, (iii) the B\"{a}cklund transformation method, (iv) the Hirota bilinear method, and (v) Lie-symmetry analysis. The first four methods have been used to derive rather general soliton solutions, whereas Lie-symmetry analysis yields only a limited class of solitary wave/similarity solutions by reducing the given nonlinear PDE to an ordinary differential equation. Each of these methods has its own advantages and demerits. One can derive all possible soliton solutions, including breathers, rogue waves, and bright and dark solitons, using the first four methods. However, it is not possible to derive such solutions using Lie-symmetry analysis, which only provides information about solitary wave solutions, not general soliton solutions. We briefly summarize these aspects in Table \ref{tab1}.
\begin{table}[h]
\begin{minipage}{174pt}
\caption{Advantages and disadvantages of the various analytical methods}\label{tab1}%
\begin{tabular}{@{}lll@{}}
\toprule
Method & Advantage & Disadvantage\\
\midrule
Inverse Scattering Transform & \multirow{3}{9em}{Multi-soliton solutions can be obtained and the Cauchy initial value problem can be solved completely } & Too technical \vspace{2.0cm} \\
Darboux transformation & \multirow{3}{9em}{Multi-soliton solutions can be obtained }& \multirow{3}{9em}{Cauchy initial value problem cannot be solved fully} \vspace{0.8cm}\\
B\"{a}cklund transformation & \multirow{3}{9em}{Multi-soliton solutions can be obtained } & \multirow{3}{9em}{Cauchy initial value problem cannot be solved fully} \vspace{0.6cm}\\
\\
Hirota bilinear method &\multirow{3}{9em}{Multi-soliton solutions can be obtained }& \multirow{3}{9em}{Cauchy initial value problem cannot be solved fully} \vspace{0.6cm} \\
\\
Lie-symmetry analysis & \multirow{3}{9em}{Solitary wave solutions/similarity solutions can be obtained} & \multirow{3}{9em}{Only particular solutions can be obtained} \vspace{1.0cm}\\
\botrule
\end{tabular}\end{minipage}
\end{table}
The rest of the paper is organized as follows. In Sect. 2, the fundamental as well as the higher-order bright soliton solutions are derived, and the various interaction dynamics associated with the bright solitons are explained in Sect. 3 through an appropriate asymptotic analysis. The one- and two-dark-soliton solutions are given in Sect. 4, and the various possible collision dynamics of two dark solitons are explained in Sect. 5. In Sect. 6, we demonstrate the breather solution of the system (\ref{1}) and its characteristics with suitable graphical illustrations. In Sect. 7, the obtained results are summarized. For completeness, the $N$-bright- and $N$-dark-soliton solutions are presented in Appendices A and B, respectively.
\section{Bright soliton solutions}
To derive the soliton and breather solutions of the system (\ref{1}), the Hirota bilinear method, in which one has to introduce an appropriate bilinearizing transformation in order to obtain the bilinear forms of a given nonlinear partial differential equation, is adopted. Following Hirota \cite{hirota}, to get the bilinear forms of Eq. (\ref{1}), we introduce the bilinearizing transformations
\begin{eqnarray}
S(x,t)=\frac{g}{f},~ ~L(x,t)=i\frac{\partial}{\partial x}\log\frac{f^*}{f}, ~~g\equiv g(x,t),~~f\equiv f(x,t),\label{2}
\end{eqnarray}
In the above, both $g$ and $f$ are complex functions. In carrying out the bilinearization of Eq. (\ref{1}), we choose $\alpha=1$ without loss of generality. Substituting (\ref{2}) into Eq. (\ref{1}) yields the corresponding bilinear forms:
\begin{eqnarray}
(iD_t+D_x^2)g\cdot f=0, ~~ i(D_t+\beta D_x)f\cdot f^*=D_x^2f\cdot f^*,~~iD_t f\cdot f^*=-2gg^*, \label{3}
\end{eqnarray}
where the Hirota's bilinear operators $D_x$ and $D_t$ are defined in \cite{hirota}.
Substituting the standard expansions for the unknown functions $g$ and $f$,
\begin{eqnarray}
g=\epsilon g_1+\epsilon^3 g_3+...,~~~~
f=1+\epsilon^2 f_2+\epsilon^4 f_4+...,
\label{4}
\end{eqnarray}
in Eqs. (\ref{3}), one obtains a system of linear PDEs. This set of linear PDEs arises after collecting the coefficients of like powers of the formal expansion parameter $\epsilon$ and equating the terms at each power of $\epsilon$ individually to zero. By solving these linear PDEs recursively (up to the appropriate order of $\epsilon$), we obtain the explicit forms of $g$ and $f$. These explicit forms constitute the bright soliton solutions of the underlying generalized LSRI system (\ref{1}).
\begin{figure}[]
\centering
\includegraphics[width=0.85\linewidth]{fundamental-bright.eps}
\caption{Fundamental bright soliton of the generalized LSRI system (\ref{1}), illustrated in Fig. (a) for $k_1=1+0.75i$, $\gamma_1=1$, and $\beta=1$. The corresponding soliton-compression graph is depicted in Fig. (b) for the system parameter $\beta=-1$. }
\label{f1}
\end{figure}
\subsection{One-soliton solution}
The fundamental bright soliton solution of the system (\ref{1}) can be obtained by solving the following set of equations
\begin{subequations}
\begin{eqnarray}
&&D_1g_1\cdot 1=0, ~~~D_2(1\cdot f_2^*+f_2\cdot 1)=D_x^2(1\cdot f_2^*+f_2\cdot 1),\nonumber\\
&&iD_t(1\cdot f_2^*+f_2\cdot 1)=-2g_1g_1^*,\nonumber
\end{eqnarray}
\end{subequations}
along with the initial seed solution, $g_1=\gamma_1e^{\eta_1}$, $\eta_1=k_1x+ik_1^2t$.
Here, $D_1$ and $D_2$ are defined as $D_1\equiv iD_t+D_x^2$, $D_2\equiv i(\beta D_x+D_t)$, respectively. The explicit forms of $g_1$ and $f_2$ give rise to the fundamental bright soliton solution of the system (\ref{1}). It reads as
\begin{subequations}\begin{eqnarray}
&&S(x,t)=\frac{\epsilon g_1 }{1+\epsilon^2 f_2}=\frac{\gamma_1e^{\eta_1}}{1+e^{\eta_1+\eta_1^*+\delta}}, \label{5a}\\
&&L(x,t)=i\frac{\partial }{\partial x}\log\frac{1+\epsilon^2 f_2^*}{1+\epsilon^2 f_2}=i\frac{\partial }{\partial x} \log\frac{1+e^{\eta_1+\eta_1^*+\delta^*}}{1+e^{\eta_1+\eta_1^*+\delta}},\label{5b}
\end{eqnarray}\end{subequations}
where
$\displaystyle{e^{\delta}= \frac{\lvert\gamma_1\rvert^2(i\beta+2k_1^*)}{(k_1+k_1^*)^2(k_1-k_1^*)}}$. The small parameter $\epsilon$ does not contribute to the structure of the soliton, so one can set it to $1$ without loss of generality (or subsume it as an additional constant in the wave variable $\eta_1$). The profile structures of the SW and LW are described by the two complex constants $k_1$ and $\gamma_1$ and the system parameter $\beta$. We note that the bright soliton solution (\ref{5a})-(\ref{5b}) exactly coincides with the previously reported fundamental bright soliton solution of the derivative LSRI system \cite{bib3} when $\beta=0$. Therefore, the fundamental bright soliton solution derived here for model (\ref{1}) can be considered more general. To understand the properties of the obtained solution (\ref{5a})-(\ref{5b}) further, we rewrite it in hyperbolic form. It turns out to be
\begin{subequations}\begin{eqnarray}
&&S(x,t)=A_Se^{i\eta_{1I}}\mbox{sech}(\eta_{1R}+\frac{\delta}{2}),~~~A_S=k_{1R}\bigg(\frac{2\gamma_1k_{1I}}{\gamma_1^*(\beta-2ik_1^*)}\bigg)^{\frac{1}{2}},\label{6a}\\ &&L(x,t)=\frac{A_L}{\frac{(\beta-2k_{1I})}{\lvert 2k_1-i\beta\rvert}+\cosh(2\eta_{1R}+\frac{\delta+\delta^*}{2})}, ~~~A_L=-\frac{4k_{1R}^2}{\lvert 2k_1-i\beta\rvert},\label{6b}
\end{eqnarray}\end{subequations}
where $\eta_{1R}=k_{1R}(x-2k_{1I}t)$ and $\eta_{1I}=k_{1I}x+(k_{1R}^2-k_{1I}^2)t$. Here, $\eta_{1R}$, $k_{1R}$ and $\eta_{1I}$, $k_{1I}$ are the real and imaginary parts of $\eta_1$ and $k_1$, respectively. In the above, $A_S$ and $A_L$ represent the amplitudes of the soliton in the SW and LW components, respectively, and the soliton propagates in the $+x$ direction with velocity $v=2k_{1I}$. Note that $\frac{\delta}{2}=\frac{1}{2}\log \frac{\lvert\gamma_1\rvert^2(i\beta+2k_1^*)}{(k_1+k_1^*)^2(k_1-k_1^*)}$ is complex. The central position of the SW and LW is obtained as $-\frac{\delta+\delta^*}{4k_{1R}}=-\frac{1}{4k_{1R}}\log\frac{\lvert\gamma_1\rvert^4(i\beta+2k_1^*)(i\beta-2k_1)}{(k_1+k_1^*)^4(k_1-k_1^*)^2}$. A typical profile of the fundamental bright soliton solution of the system (\ref{1}) is displayed in Fig. \ref{f1}(a). We then plot the solution (\ref{6a})-(\ref{6b}) in Fig. \ref{f1}(b) with $\beta<0$. The graph clearly demonstrates that the soliton profiles in both the SW and LW components are significantly compressed. This kind of simultaneous amplification and compression of optical pulses has indeed been observed experimentally \cite{agrawal} and is useful in nonlinear optics for generating picosecond or femtosecond pulses.
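The one-soliton solution can also be checked independently of the algebra. The following numerical sanity check (our own sketch, not part of the paper's presentation) evaluates $S$ and $L$ of Eqs. (\ref{5a})-(\ref{5b}) for arbitrary test values of $k_1$, $\gamma_1$, $\beta$ with $\alpha=1$, as in the bilinearization, and confirms by finite differences that both equations of system (\ref{1}) are satisfied to numerical accuracy:

```python
import cmath

# Arbitrary test parameters; alpha = 1 as chosen in the bilinearization.
beta = 1.0
k1 = 0.9 + 0.6j          # k_1
g1 = 1.0 + 0.5j          # gamma_1
ed = (abs(g1)**2 * (1j*beta + 2*k1.conjugate())) / \
     ((k1 + k1.conjugate())**2 * (k1 - k1.conjugate()))   # e^{delta}

def S(x, t):
    eta = k1*x + 1j*k1*k1*t
    return g1 * cmath.exp(eta) / (1 + ed * cmath.exp(eta + eta.conjugate()))

def L(x, t):
    # L = i d/dx log(f*/f) with f = 1 + e^{delta} E, evaluated in closed form.
    eta = k1*x + 1j*k1*k1*t
    E = cmath.exp(eta + eta.conjugate())   # eta + eta* is real
    thx = k1 + k1.conjugate()              # d(eta + eta*)/dx
    return 1j*thx*(ed.conjugate()*E/(1 + ed.conjugate()*E) - ed*E/(1 + ed*E))

# Central finite differences.
def d_t(F, x, t, h=1e-5):  return (F(x, t+h) - F(x, t-h)) / (2*h)
def d_x(F, x, t, h=1e-5):  return (F(x+h, t) - F(x-h, t)) / (2*h)
def d_xx(F, x, t, h=1e-4): return (F(x+h, t) - 2*F(x, t) + F(x-h, t)) / h**2

x0, t0 = 0.3, 0.2
Sv, Lv = S(x0, t0), L(x0, t0)
# Short-wave equation: i S_t + S_xx + (i L_x + L^2 - beta L - 2|S|^2) S = 0
r1 = 1j*d_t(S, x0, t0) + d_xx(S, x0, t0) \
     + (1j*d_x(L, x0, t0) + Lv**2 - beta*Lv - 2*abs(Sv)**2)*Sv
# Long-wave equation: L_t = 2 (|S|^2)_x
r2 = d_t(L, x0, t0) - 2*d_x(lambda x, t: abs(S(x, t))**2, x0, t0)
print(abs(r1), abs(r2))  # both residuals limited only by the finite-difference step
```

The same template, with the two-soliton tau functions inserted, can be used to verify the higher-order solutions below.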
\begin{figure}[]
\centering
\includegraphics[width=0.4\linewidth]{amp-1.eps}~ \includegraphics[width=0.4\linewidth]{amp-2.eps}\\
\includegraphics[width=0.4\linewidth]{amp-3.eps}~ \includegraphics[width=0.4\linewidth]{amp-4.eps}
\caption{Amplitude-velocity relation of the fundamental bright soliton in the present LSRI system (\ref{1}) and in the other LSRI models (\ref{2.a}) and (\ref{2.b}). Fig. \ref{f2}(a) is drawn for $k_{1R}=0.5$, $\beta=1$, and $\gamma_{1}=1$. Figs. \ref{f2}(b) and \ref{f2}(c) correspond to $\beta=-1$ and $\beta=0$, respectively, with the other values as before. Fig. \ref{f2}(d) illustrates the amplitude-velocity relation for the YO system with $k_{1R}=0.5$ and $\gamma_{1}=1$. }
\label{f2}
\end{figure}
Another interesting property associated with the fundamental bright soliton solution (\ref{6a})-(\ref{6b}) of the system (\ref{1}) is the explicit appearance of the soliton velocity in the amplitudes of both the SW and LW components. As a result, a taller soliton travels faster not only in the SW component but also in the LW component. This special property is akin to that of KdV solitons \cite{ml}. We remark that it is distinct from the corresponding property of the fundamental bright soliton of the YO system \cite{oikawa}, where the velocity appears only in the SW component. This amplitude-dependent velocity property is illustrated in Fig. \ref{f2} for the present generalized LSRI system (\ref{1}) and for the other single-component LSRI systems (\ref{2.a}) and (\ref{2.b}) \cite{bib3,oikawa}. For instance, in the present LSRI system (\ref{1}) with $\beta>0$, the amplitude $A_L$ decreases with $v$ whereas the amplitude $A_S$ increases, as illustrated in Fig. \ref{f2}(a). We observe a similar scenario for $\beta<0$, illustrated in Fig. \ref{f2}(b). Further, for completeness, we draw the amplitude-velocity graphs for the other cases, the derivative YO system ($\beta=0$) \cite{bib3} and the YO system ($\alpha=0$, $\beta=-1$) \cite{oikawa,ma}, in Figs. \ref{f2}(c) and (d), respectively.
It is very important to point out that the bright soliton of the generalized system (\ref{1}) also acts like the NLS bright soliton for the choice $\beta=2k_{1I}$. That is, the soliton of the underlying system (\ref{1}) no longer possesses the amplitude-dependent velocity property. In this situation, the solution (\ref{6a})-(\ref{6b}) reduces to
\begin{subequations}\begin{eqnarray}
&& S(x,t)=(\frac{i\gamma_1}{\gamma_1^*})^{1/2}k_{1R}e^{i\eta_{1I}}\mbox{sech}(\eta_{1R}+\frac{\delta}{2}),~~~\frac{\delta}{2}=\frac{1}{2}\log\frac{\lvert\gamma_1\rvert^2}{\sqrt{4ik_{1R}^2k_{1I}}},~\label{nls-sol1}\\
&&L(x,t)=-2k_{1R}\mbox{sech}(2\eta_{1R}+\frac{\delta+\delta^*}{2}),~~~\frac{\delta+\delta^*}{2}=\frac{1}{2}\log\frac{\lvert\gamma_1\rvert^4}{16k_{1R}^4k_{1I}^2}.\label{nls-sol2}
\end{eqnarray}\end{subequations}
The latter expressions clearly indicate that the amplitude of the soliton does not depend on its velocity, and the bright soliton of the form (\ref{nls-sol1})-(\ref{nls-sol2}) propagates like the NLS bright soliton with velocity $2k_{1I}$. This interesting property is not available in the other single- and multi-component LSRI systems \cite{oikawa,bib3,kanna1}.
\subsection{Two-soliton solution}
Next, for the two-bright-soliton solution of the system (\ref{1}), the two series in Eq. (\ref{4}) terminate as $g=\epsilon g_1+\epsilon^3 g_3$ and $f=1+\epsilon^2 f_2+\epsilon^4 f_4$. The resultant forms constitute the two-soliton solution, which turns out to be
\begin{subequations}
\begin{eqnarray}
&&\hspace{-0.5cm}S=\frac{1}{f}\bigg(\gamma_1e^{\eta_1}+\gamma_2e^{\eta_2}+\Delta_{121^*}e^{\eta_1+\eta_2+\eta_1^*}+\Delta_{122^*}e^{\eta_1+\eta_2+\eta_2^*}\bigg),\label{7a}\\
&&\hspace{-0.5cm}L=i\frac{\partial}{\partial x}\log\frac{f^*}{f},\\
&&\hspace{-0.5cm}f=1+\delta_{11^*}e^{\eta_1+\eta_1^*}+\delta_{12^*}e^{\eta_1+\eta_2^*}+\delta_{21^*}e^{\eta_2+\eta_1^*}+\delta_{22^*}e^{\eta_2+\eta_2^*}\nonumber \\&&\hspace{0.2cm} +\delta_{121^*2^*}e^{\eta_1+\eta_1^*+\eta_2+\eta_2^*},\label{7c}\\
&&\hspace{-0.5cm}\delta_{ij^*}=\frac{\gamma_{i}\gamma_{j}^*(i\beta+2k_j^*)}{(k_i+k_j^*)^2(k_i-k_j^*)},~\Delta_{12i^*}=(k_2-k_1)\big(\frac{\gamma_2\delta_{1i^*}}{k_2+k_i^*}-\frac{\gamma_1\delta_{2i^*}}{k_1+k_i^*}\big),\nonumber\\
&&\hspace{-0.5cm}\delta_{121^*2^*}=\lvert k_1-k_2\rvert^2\bigg[\frac{\delta_{11^*}\delta_{22^*}}{(k_1+k_2^*)(k_2+k_1^*)}-\frac{\delta_{12^*}\delta_{21^*}}{(k_1+k_1^*)(k_2+k_2^*)}\bigg],~i,j=1,2,\nonumber\label{7d}
\end{eqnarray}\end{subequations}
where $\eta_j=k_jx+ik_j^2t$, $j=1,2$. The above two-soliton solution is characterized by four arbitrary complex parameters, $k_j$ and $\gamma_j$, $j=1,2$, and the system parameter $\beta$. These parameters contribute non-trivially to the collision properties of the two bright solitons, as we explain below. The explicit form of the $N$-bright-soliton solution of the generalized LSRI system is given in Appendix \ref{secA1}.
\section{Collision dynamics of bright solitons}
The interesting aspect of the generalized LSRI system (\ref{1}) is that its bright solitons undergo different types of interactions apart from the standard elastic collision. For example, they exhibit (i) resonance interactions and (ii) soliton bound states (soliton molecules) for appropriate choices of the wave parameters. First, we perform an asymptotic analysis in order to confirm the elastic nature of the collision between two bright solitons; then we analyse the resonance interactions and soliton bound states in detail.
\subsection{Elastic collision: Asymptotic analysis }
To study the interaction dynamics of the solitons completely, we perform a detailed asymptotic analysis of the two-soliton solution (\ref{7a})-(\ref{7c}) and deduce the explicit forms of the individual solitons at the limits $t\rightarrow \pm\infty$. To investigate this, we consider
$k_{jR}>0$, $j=1,2$, $k_{1I}>k_{2I}$, which corresponds to either a head-on collision or an overtaking collision between the two solitons (depending on the signs of the $k_{jI}$'s). Here, we consider the head-on collision between the two bright solitons. In this situation the two fundamental solitons are well separated, and the asymptotic forms of the individual solitons can be deduced from the solution (\ref{7a})-(\ref{7c}) by incorporating the asymptotic behaviour of the wave variables $\eta_{jR}=k_{jR}(x-2k_{jI}t)$, $j=1,2$. The wave variables $\eta_{jR}$ behave asymptotically as (i) Soliton 1: $\eta_{1R}\simeq 0$, $\eta_{2R}\rightarrow\pm \infty$ as $t\rightarrow \pm\infty$ and (ii) Soliton 2: $\eta_{2R}\simeq 0$, $\eta_{1R}\rightarrow\mp \infty$ as $t\rightarrow\pm\infty$. These results lead to the following asymptotic forms of the individual bright solitons.\\
(a) Before collision: $t\rightarrow -\infty$\\
Soliton 1: In this limit, the asymptotic forms of both the SW and LW are deduced from the two-soliton solution (\ref{7a})-(\ref{7c}) for soliton 1 as given below:
\begin{subequations}\begin{eqnarray}
&&\hspace{-0.5cm}S(x,t)\simeq A_S^{1-}e^{i\eta_{1I}}\mbox{sech}(\eta_{1R}+\phi_S^{1-}),~~~A_S^{1-}=k_{1R}\bigg(\frac{2\gamma_1k_{1I}}{\gamma_1^*(\beta-2ik_1^*)}\bigg)^\frac{1}{2},\label{8a}\\
&&\hspace{-0.5cm}L(x,t)\simeq \frac{A_L^{1-}}{\frac{(\beta-2k_{1I})}{\lvert2k_1-i\beta\rvert}+\cosh(2\eta_{1R}+\phi_L^{1-})},~~~ A_L^{1-}=-\frac{4k_{1R}^2}{\lvert2k_1-i\beta\rvert},\label{8b}
\end{eqnarray}\end{subequations}
where the phase terms are given by
\begin{eqnarray}
\phi_S^{1-}=\frac{1}{2}\log\frac{\lvert\gamma_1\rvert^2(i\beta+2k_1^*)}{(k_1+k_1^*)^2(k_1-k_1^*)},~~~\phi_L^{1-}=\frac{1}{2}\log\frac{-\lvert\gamma_1\rvert^4\lvert 2k_1-i\beta\rvert^2}{(k_1+k_1^*)^4(k_1-k_1^*)^2}.\nonumber
\end{eqnarray}
In the latter, the superscript ($1-$) represents soliton $1$ before collision and the subscripts $S$ and $L$ denote the SW and LW components, respectively.\\
Soliton 2: The following asymptotic forms of the soliton 2 are deduced from the solution (\ref{7a})-(\ref{7c}). They read as
\begin{subequations}\begin{eqnarray}
&&\hspace{-0.8cm}S(x,t)\simeq A_S^{2-}e^{i(\eta_{2I}+\theta_2)}\mbox{sech}(\eta_{2R}+\phi_S^{2-}),~A_S^{2-}=k_{2R}\bigg(\frac{2\gamma_2k_{2I}}{\gamma_2^*(\beta-2ik_2^*)}\bigg)^\frac{1}{2},\label{9a}\\ &&\hspace{-0.8cm}L(x,t)\simeq\frac{A_L^{2-}}{\frac{(\beta-2k_{2I})}{\lvert2k_2-i\beta\rvert}+\cosh(2\eta_{2R}+\phi_L^{2-})},~ A_L^{2-}=-\frac{4k_{2R}^2}{\lvert 2k_2-i\beta\rvert},\label{9b}\\
&&\hspace{-0.8cm}e^{i\theta_2}=\frac{(k_1-k_2)(k_1+k_2^*)(k_1+k_2)^{\frac{1}{2}}(k_2^*-k_1)^{\frac{1}{2}}}{(k_1^*-k_2^*)(k_1^*+k_2)(k_1^*+k_2^*)^{\frac{1}{2}}(k_2-k_1^*)^{\frac{1}{2}}}.\nonumber
\end{eqnarray}\end{subequations}
Here, the phase terms are defined as
\begin{eqnarray} &&\phi_S^{2-}=\frac{1}{2}\log\frac{\lvert\gamma_2\rvert^2(i\beta+2k_2^*)\lvert k_1-k_2\rvert^4\lvert k_1+k_2\rvert^2}{\lvert k_1-k_2^*\rvert ^2\lvert k_1+k_2^*\rvert^4(k_2-k_2^*)(k_2+k_2^*)^2},\nonumber \\\text{and}&&
\phi_L^{2-}=\frac{1}{2}\log\frac{\lvert \gamma_2\rvert^4(i\beta+2k_2^*)(i\beta-2k_2)\lvert k_1-k_2\rvert^8\lvert k_1+k_2\rvert^4}{\lvert k_1-k_2^*\rvert^4\lvert k_1+k_2^*\rvert^8(k_2-k_2^*)^2(k_2+k_2^*)^4}.\nonumber\end{eqnarray}
In the latter, superscript ($2-$) represents the soliton $2$ before collision. \\
(b) After collision: $t\rightarrow +\infty$\\
Soliton 1: Similarly, in this long time limit, the asymptotic forms of both the SW and LW are obtained as
\begin{subequations}\begin{eqnarray}
&&\hspace{-0.8cm}S(x,t)\simeq A_S^{1+}e^{i(\eta_{1I}+\theta_1)}\mbox{sech}(\eta_{1R}+\phi_S^{1+}),~A_S^{1+}=k_{1R}\bigg(\frac{2\gamma_1k_{1I}}{\gamma_1^*(\beta-2ik_1^*)}\bigg)^\frac{1}{2},\label{10a}\\ &&\hspace{-0.8cm}L(x,t)\simeq\frac{A_L^{1+}}{\frac{(\beta-2k_{1I})}{\lvert 2k_1-i\beta\rvert}+\cosh(2\eta_{1R}+\phi_L^{1+})},~~ A_L^{1+}=-\frac{4k_{1R}^2}{\lvert2k_1-i\beta\rvert},\label{10b}\\
&&\hspace{-0.8cm}e^{i\theta_1}=\frac{(k_1-k_2)(k_1+k_2)^{\frac{1}{2}}(k_1^*-k_2)^{\frac{1}{2}}}{(k_1^*-k_2^*)(k_1^*+k_2^*)^{\frac{1}{2}}(k_1-k_2^*)^{\frac{1}{2}}}. \nonumber
\end{eqnarray}\end{subequations}
The corresponding phase terms are calculated as
\begin{eqnarray}
&&\phi_S^{1+}=\frac{1}{2}\log\frac{\lvert\gamma_1\rvert^2\lvert k_1-k_2\rvert^4\lvert k_1+k_2\rvert^2(2k_1^*+i\beta)}{(k_1-k_1^*)(k_1+k_1^*)^2\lvert k_1-k_2^*\rvert^2\lvert k_1+k_2^*\rvert^4},\nonumber \\
\text{and}&& \phi_L^{1+}=\frac{1}{2}\log\frac{\lvert\gamma_1\rvert^4\lvert k_1-k_2\rvert^8\lvert k_1+k_2\rvert^4(2k_1^*+i\beta)(-2k_1+i\beta)}{(k_1-k_1^*)^2(k_1+k_1^*)^4\lvert k_1-k_2^*\rvert^4\lvert k_1+k_2^*\rvert^8}.\nonumber \end{eqnarray}
In the latter, the superscript ($1+$) represents soliton $1$ after collision.\\
Soliton 2: For the soliton 2, the asymptotic expressions turn out to be
\begin{subequations}\begin{eqnarray}
&&S(x,t)\simeq A_S^{2+}e^{i\eta_{2I}}\mbox{sech}(\eta_{2R}+\phi_S^{2+}),~A_S^{2+}=k_{2R}\bigg(\frac{2\gamma_2k_{2I}}{\gamma_2^*(\beta-2ik_2^*)}\bigg)^\frac{1}{2},\label{}\\ &&L(x,t)\simeq\frac{A_L^{2+}}{\frac{(\beta-2k_{2I})}{\lvert2k_2-i\beta\rvert }+\cosh(2\eta_{2R}+\phi_L^{2+})},~ A_L^{2+}=-\frac{4k_{2R}^2}{\lvert2k_2-i\beta\rvert},\label{}
\end{eqnarray}\end{subequations}
where
\begin{eqnarray} &&\phi_S^{2+}=\frac{1}{2}\log\frac{\lvert\gamma_2\rvert^2(i\beta+2k_2^*)}{(k_2+k_2^*)^2(k_2-k_2^*)}, \nonumber\\
\text{and} && \phi_L^{2+}=\frac{1}{2}\log\frac{\lvert\gamma_2\rvert^4(i\beta+2k_2^*)(i\beta-2k_2)}{(k_2+k_2^*)^4(k_2-k_2^*)^2}.\nonumber \end{eqnarray}
\begin{figure*}[]
\centering
\includegraphics[width=0.5\linewidth]{two-soliton-1.eps}~ \includegraphics[width=0.5\linewidth]{two-soliton-2.eps}
\caption{Elastic collision among the two bright solitons of the system (\ref{1}). The parameter values are $k_1=1+i$, $k_2=0.5-0.5i$, $\gamma_1=0.8$, $\gamma_2=0.45$ and $\beta=1$. }
\label{f3}
\end{figure*}
\begin{figure*}[]
\centering
\includegraphics[width=0.8\linewidth]{figure5.eps}
\caption{In the top panel, the resonance interaction between the two bright solitons is demonstrated and the corresponding space-time plots are given in the bottom panel. The plots show that the two bright solitons take a finite time to interact in both the SW and LW components. }
\label{f4}
\end{figure*}
\begin{figure*}[]
\centering
\includegraphics[width=0.8\linewidth]{figure-6.eps}
\caption{In the top panel, the resonance interaction between the two bright solitons is illustrated and the corresponding space-time plot is demonstrated in the bottom panel. Here, the resonance interaction persists for a longer time period than the one in Fig. \ref{f4}. This is achieved by tuning the phase-shift regime further. }
\label{f5}
\end{figure*}
\begin{figure*}[]
\centering
\includegraphics[width=0.4\linewidth]{two-soliton-vy-type-collision-1.eps}~ \includegraphics[width=0.4\linewidth]{vy-type-collision-2.eps}
\caption{V- and Y-type resonance interactions between the two bright solitons. They arise by letting the phase shifts $\Delta \Phi_S=\Delta \Phi_L\rightarrow \infty$, which is achieved by setting the condition $k_{2R}=-k_{1R}$ and $k_{2I}=-k_{1I}$.}
\label{f6}
\end{figure*}
\begin{figure*}[]
\centering
\includegraphics[width=0.4\linewidth]{bound-soliton-parallel-1.eps}~ \includegraphics[width=0.4\linewidth]{bound-soliton-parallel-2.eps}\\
\includegraphics[width=0.4\linewidth]{bound-one-oscillation-1.eps}~ \includegraphics[width=0.4\linewidth]{bound-soliton-one-oscillation-2.eps}\\
\includegraphics[width=0.4\linewidth]{bound-soliton-two-oscillation-1.eps}~ \includegraphics[width=0.4\linewidth]{bound-soliton-two-oscillation-2.eps}\\
\caption{The top panel shows the parallel-propagation bound soliton state. The middle panel represents a breathing-type bound state where oscillation occurs in one of the solitons, and the bottom panel illustrates a breathing-type bound state where oscillations occur in both solitons. }
\label{f7}
\end{figure*}
The above asymptotic analysis shows that the amplitudes of the solitons remain the same before and after collisions. Consequently, the transition intensities are always unimodular. That is, \begin{eqnarray} \lvert T_S^{j}\rvert^2=\frac{\lvert A_S^{j+}\rvert^2}{\lvert A_S^{j-}\rvert^2}=1, ~\text{and}~
\lvert T_L^{j}\rvert^2=\frac{\lvert A_L^{j+}\rvert^2}{\lvert A_L^{j-}\rvert^2}=1, ~j=1,2.\end{eqnarray}
This implies that the bright solitons of the generalized LSRI system always undergo a shape-preserving collision, with a finite phase shift, thereby confirming the elastic nature of the collision. Correspondingly, the energy of each of the solitons is conserved. Such an elastic collision is displayed in Fig. \ref{f3}, where the dark-like profile appears in the LW component essentially because of the negative sign that arises in the amplitude part. The phase shifts suffered by the solitons in both the SW and LW components are obtained as
\begin{equation}
\Delta\Phi_S^1=\frac{1}{2}\log\frac{\lvert k_1-k_2\rvert ^4\lvert k_1+k_2\rvert^2}{\lvert k_1+k_2^*\rvert ^4\lvert k_1-k_2^*\rvert^2}=-\Delta\Phi_S^2,~\Delta\Phi_L^1=-\Delta\Phi_L^2=2\Delta\Phi_S^1.\label{13}
\end{equation}
The above implies that after the collision the two bright solitons are displaced in exactly opposite directions, and their positions are mainly influenced by the wave numbers $k_j$, $j=1,2$.
\subsection{Resonance interactions}
The bright solitons of the system (\ref{1}) exhibit interesting resonance interaction patterns for appropriately chosen wave parameters. These patterns appear in the interaction regime during the soliton collision and can be viewed as an intermediate state. Such a state essentially arises when the phase shifts due to the collision become very large or even infinite. A typical example of a resonance interaction pattern is depicted in Fig. \ref{f4} for the parameter values $k_1=0.5+0.5i$, $k_2=0.45-0.5i$, $\gamma_1=0.9$, $\gamma_2=0.45$. The figure shows that the interaction regime gets extended and the two solitons take a longer time to interact. This is clearly distinct from the standard collision demonstrated in Fig. \ref{f3}, where the interaction happens without much delay. However, in Fig. \ref{f4} the interaction period is finite, and after that the two bright solitons split and travel with their own velocities. One can interpret the intermediate state in the SW component as a zero-amplitude soliton, as discussed in the case of the higher-dimensional LSRI system \cite{kanna3}. In contrast, a standing breather-like pattern appears in the LW component. Such a pattern exists only for a short duration and is clearly different from the one that has been widely discussed in rogue wave theory. We wish to note that one can tune the interaction regime further by setting the condition $k_{2R}\approx k_{1R}$ along with the choice $\beta=-1$. This is illustrated in Fig. \ref{f5}.
Apart from the above pattern, we also observe another interesting interaction pattern when we fix the conditions $k_{2R}=-k_{1R}$ and $k_{2I}=-k_{1I}$. We call such a pattern a V-Y type resonance interaction pattern; it is displayed in Fig. \ref{f6} for the parameter values $k_1=2+1.05i$, $k_2=-2-1.05i$, $\gamma_1=0.25$, $\gamma_2=0.5$, and $\beta=1$. From this figure, we observe that the interaction regime becomes infinite. This is because the phase shifts $\Delta \Phi_S$ and $\Delta \Phi_L$ (Eq. (\ref{13})) diverge for $k_2=-k_1$. Consequently, in the SW component, the two bright solitons approach each other only asymptotically and form a zero-amplitude resonant soliton, whereas in the LW component they form a standing breather pattern which extends over an infinite interaction regime.
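The divergence responsible for this infinite interaction regime can be read off directly from Eq. (\ref{13}). A short NumPy sketch (ours; the offsets $\epsilon$ are arbitrary illustrative values) confirms that the magnitude of the phase shift grows without bound as $k_2\rightarrow -k_1$:

```python
import numpy as np

def dphi_S1(k1, k2):
    """Phase shift Delta Phi_S^1 of soliton 1 in the SW component, Eq. (13)."""
    num = abs(k1 - k2) ** 4 * abs(k1 + k2) ** 2
    den = abs(k1 + np.conj(k2)) ** 4 * abs(k1 - np.conj(k2)) ** 2
    return 0.5 * np.log(num / den)

k1 = 2 + 1.05j                                   # value used for Fig. f6
# Approach the V-Y resonance condition k2 = -k1 through a sequence of offsets:
shifts = [dphi_S1(k1, -k1 + eps) for eps in (1e-1, 1e-2, 1e-3)]
```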
Next, we demonstrate the existence of different types of bound soliton states, or soliton molecules, which have recently become a topic of intense interest in soliton theory and have potential applications in optical telecommunications. This novel structure essentially arises when the two solitons propagate with equal or nearly equal velocities, and it can be considered as a special case of the standard two-soliton solution (an interacting soliton state). Depending on the choice of the central position, there exist two types of such soliton states: (i) parallel propagation, and (ii) breather. We find that these bound soliton structures also exist in the generalized LSRI system (\ref{1}). To explore the bound soliton state in Eq. (\ref{f1}), we fix the velocity resonance condition $v_1(=2k_{1I})\approx v_2(=2k_{2I})$ together with $k_{1R}=k_{2R}$, so that the two bright solitons propagate with almost the same velocity and form a soliton molecule structure. A typical parallel-propagating bound soliton state is displayed in Fig. \ref{f7}(a1)-(a2). To obtain this soliton state we fix the parameter values as $\beta=1$, $k_1=0.65+i$, $k_2=0.65+0.99i$, $\gamma_1=0.5$ and $\gamma_2=0.35$. Then, to get the breathing soliton molecule, we consider the same velocity resonance condition but with $k_{1R}\neq k_{2R}$. The outcome is depicted in Fig. \ref{f7}(c1)-(c2), where the two bright solitons exhibit oscillatory behaviour. By fixing the parameter values as $\beta=1$, $k_1=2+i$, $k_2=0.5+0.995i$, $\gamma_1=1$ and $\gamma_2=1.35$, we bring out this soliton molecule structure. The breathing soliton molecule can be easily identified by rewriting the two-soliton solution (\ref{7a})-(\ref{7c}) in hyperbolic form. The resultant expressions contain the trigonometric functions $\cos(\eta_{1I}-\eta_{2I})$ and $\sin(\eta_{1I}-\eta_{2I})$ in the denominators of both $S(x,t)$ and $L(x,t)$. Due to this fact, breathing behaviour emerges in the bright-soliton bound states.
Note that one can tune the oscillatory behaviour in any one of the solitons by tuning the values of the $\gamma_j$'s. For example, we control the oscillation that occurs in the second soliton by fixing $\gamma_2=0.35$ and keeping all the other parameters the same as those used in Fig. \ref{f7}(c1)-(c2). A typical graph of such a bound soliton state is illustrated in Fig. \ref{f7}(b1)-(b2). It clearly indicates that the oscillation is completely suppressed in the second soliton while it still persists in the first soliton.
\section{Dark soliton solutions}
To derive the dark-soliton solutions, we consider the following transformations \cite{bib3}
\begin{eqnarray}
S(x,t)=\tau e^{i\theta}\frac{g(x,t)}{f(x,t)},~ L(x,t)=i\frac{\partial}{\partial x}\log\frac{f^*}{f},~ \theta=lx-(l^2+2\lvert\tau\rvert^2)t. \label{14}
\end{eqnarray}
While deriving the dark-soliton solutions, one has to consider the non-vanishing boundary conditions $S\rightarrow \tau e^{i\theta}$ and $L\rightarrow 0$ as $\lvert x\rvert \rightarrow \infty$, which are built into the above transformations.
Here $\tau$ is a complex constant and $l$ is a real constant. Substituting Eq. (\ref{14}) into Eq. (\ref{1}), we arrive at the bilinear forms of Eq. (\ref{1}). They read as\begin{subequations}
\begin{eqnarray}
&&(iD_t+2ilD_x+D_x^2)g\cdot f=0, ~~ i(D_t+\beta D_x)f\cdot f^*=D_x^2f\cdot f^*,\label{15a}\\
&&iD_t f\cdot f^*=2\lvert\tau\rvert^2(\lvert f\rvert^2-\lvert g\rvert^2). \label{15b}
\end{eqnarray}\end{subequations}
By solving these bilinear equations together with the series expansions,
\begin{equation}
g(x,t)=1+\epsilon g_1+\epsilon^2 g_2+\epsilon^3 g_3+...,~~ f(x,t)=1+\epsilon f_1+\epsilon^2 f_2+\epsilon^3 f_3+...,
\end{equation}
we obtain the fundamental as well as multi-dark soliton solutions as given below.
\subsection{One-dark soliton solution}
The fundamental dark soliton solution of the system (\ref{1}) is obtained as
\begin{subequations}\begin{eqnarray}
&&S(x,t)=\tau e^{i\theta}\frac{1+\epsilon g_1}{1+\epsilon f_1}=\tau e^{i\theta}\frac{1+z_1e^{\eta_1+\eta_1^*}}{1+y_1e^{\eta_1+\eta_1^*}},~z_1=-\frac{p_1-il}{p_1^*+il}y_1,\\
&&L(x,t)=i\frac{\partial}{\partial x}\log\frac{1+y_1^*e^{\eta_1+\eta_1^*}}{1+y_1e^{\eta_1+\eta_1^*}},~y_1=-i\frac{i\beta+2p_1^*}{p_1+p_1^*},
\end{eqnarray}\end{subequations}
along with a constraint condition \begin{equation}
p_{1R}=\pm \bigg[\frac{\lvert\tau\rvert^2(2l-\beta)}{2p_{1I}}-(p_{1I}-l)^2\bigg]^{\frac{1}{2}}.\label{18}
\end{equation}
Here, $\eta_1=p_1x+ip_1^2t+\eta_{1}^{(0)}$, where $p_1$ and $\eta_1^{(0)}$ are complex constants.
The above fundamental dark soliton solution can be rewritten as
\begin{subequations}
\begin{eqnarray}
&&S(x,t)=\frac{\tau}{2}e^{i\theta}\bigg[(1+\kappa)-(1-\kappa)\tanh(\eta_{1R}+\frac{\delta}{2})\bigg],\label{19a}\\
&&L(x,t)=- \frac{4p_{1R}^2}{(\beta-2p_{1I})+\lvert 2p_{1R}+i(\beta-2p_{1I})\rvert \cosh(2\eta_{1R}+\frac{\delta+\delta^*}{2})},\label{19b}~
\end{eqnarray}
\end{subequations}
where $\kappa=-\frac{p_1-il}{p_1^*+il}$, $e^{\delta}=-i\frac{i\beta+2p_1^*}{p_1+p_1^*}$, and $\eta_{1R}=p_{1R}(x-2p_{1I}t+\frac{\eta_{1R}^{(0)}}{p_{1R}})$. The dark-soliton solution (\ref{19a})-(\ref{19b}) is described by three complex constants, $\tau$, $p_1$ and $\eta_1^{(0)}$, and two real constants, $l$ and $\beta$. The dark soliton propagates in both the SW and LW components with the velocity $v=2p_{1I}$. The solution (\ref{19a}) admits an anti-dark soliton on a constant background $\lvert \tau\rvert^2$ in the SW component when $p_{1R}>0$, whereas it admits a dark (or grey) soliton for $p_{1R}<0$. However, the solution (\ref{19b}) always exhibits a bright-soliton nature in the LW component. These possibilities are demonstrated in Fig. \ref{f8}. For example, in Fig. \ref{f8}(a1), we display anti-dark (SW) and bright-soliton (LW) profiles for $p_1=1+0.5i$, $\tau=0.5+0.5i$, $l=1$ and $\beta=1$. From this figure, one can observe that an anti-dark soliton is clearly distinct from the usual bright soliton because it appears on a non-vanishing background field. Then, we illustrate a grey-soliton profile in Fig. \ref{f8}(a2), where the intensity of the soliton is lower than the constant background and does not reach zero anywhere along the $x$-axis. We bring out such a grey-soliton profile by fixing the value of $p_{1R}$ as $-0.5$, with the other parameter values the same as those in Fig. \ref{f8}(a1). We also display a dark or black-soliton profile with minimum intensity (the intensity reaches zero) in Fig. \ref{f8}(a3) for $p_{1I}=0.5$. In Figs. \ref{f8}(b1)-(b3), we depict the corresponding shape-compression plots for $\beta=-1$. We wish to remark that the dark soliton of the generalized LSRI system (\ref{1}) also possesses the amplitude-dependent velocity property, as has been clearly explained for the dark soliton of the derivative YO system \cite{bib3}.
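The boundary behaviour behind these profiles is easy to confirm numerically: $\lvert\kappa\rvert=1$ exactly (since $p_1^*+il=(p_1-il)^*$), so $\lvert S\rvert\rightarrow\lvert\tau\rvert$ on both tails while $L\rightarrow 0$. The sketch below (ours) transcribes Eqs. (\ref{19a})-(\ref{19b}); the parameter values are hypothetical ones, chosen only so that the constraint (\ref{18}) yields a real $p_{1R}$.

```python
import numpy as np

tau, l, beta, p1I = 0.5 + 0.5j, 1.0, 1.0, 0.5          # hypothetical illustrative values
# The constraint (18) then fixes p1R (taking the + sign):
p1R = np.sqrt(abs(tau) ** 2 * (2 * l - beta) / (2 * p1I) - (p1I - l) ** 2)
p1 = p1R + 1j * p1I

kappa = -(p1 - 1j * l) / (np.conj(p1) + 1j * l)        # unimodular: p1* + il = (p1 - il)*
delta = np.log(-1j * (1j * beta + 2 * np.conj(p1)) / (p1 + np.conj(p1)))

def S(x, t):                                           # Eq. (19a)
    theta = l * x - (l ** 2 + 2 * abs(tau) ** 2) * t
    eta1R = p1R * (x - 2 * p1I * t)
    return 0.5 * tau * np.exp(1j * theta) * ((1 + kappa) - (1 - kappa) * np.tanh(eta1R + delta / 2))

def L(x, t):                                           # Eq. (19b)
    eta1R = p1R * (x - 2 * p1I * t)
    m = abs(2 * p1R + 1j * (beta - 2 * p1I))
    return -4 * p1R ** 2 / ((beta - 2 * p1I) + m * np.cosh(2 * eta1R + delta.real))
```

Evaluating `S` and `L` far from the soliton core recovers the non-vanishing background in the SW component and the decaying tail in the LW component.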
\begin{figure*}[]
\centering
\includegraphics[width=0.85\linewidth]{one-dark.eps}
\caption{Various fundamental dark-soliton profiles of the system (\ref{1}) are shown. In Fig. (a1) we depict an anti-dark soliton profile whereas a grey soliton profile is displayed in Fig. (a2). A complete black or dark soliton profile is illustrated in Fig. (a3). In all these figures the corresponding bright soliton profile is drawn in the LW component. The bottom panel (b1)-(b3) displays their corresponding shape compression plots for $\beta=-1$. }
\label{f8}
\end{figure*}
Further, interestingly, we also observe that the dark-soliton solution (\ref{19a})-(\ref{19b}) turns into a periodic solution for lower values of $l$. In this situation, the wave number $p_1$ turns out to be purely imaginary, so that the hyperbolic form of the dark-soliton solution becomes a periodic function. Such a possibility is illustrated in Fig. \ref{f9} for different $l$ values and $\beta>0$. For $l=0.6$, $p_1=1.33i$, $\tau=0.5+0.5i$, and $\beta=1$, we find that in-phase periodic waves appear in both the SW and LW components, whereas anti-phase periodic waves occur for $l=0.5$, $p_1=1.5i$ (the other parameter values are the same as those mentioned above). These examples are displayed in Figs. \ref{f9}(a) and (b), respectively. A doubly-periodic wave arises in the LW component for the choice $l=0.3$ and $p_1=1.76i$. An interesting fact that can be observed from Figs. \ref{f9}(b) and (c) is that in the LW component the intensities of the periodic waves are higher than the background field. This feature is in striking contrast to the soliton profiles drawn in Fig. \ref{f8}, where all the soliton profiles in the LW component appear on a zero background. However, we also observe a zero-background periodic wave in the LW component; this is demonstrated in Fig. \ref{f9}(a). Note that one can also observe a similar kind of periodic waves in the case of $\beta<0$.
\begin{figure*}[]
\centering
\includegraphics[width=0.85\linewidth]{periodic.eps}
\caption{Periodic solution of the generalized LSRI system (\ref{1}). In Fig. (a), we display in-phase periodic waves whereas anti-phase periodic waves are demonstrated in Fig. (b). A doubly periodic wave is brought out in Fig. (c). }
\label{f9}
\end{figure*}
\subsection{Two-dark soliton solution}
The two-dark soliton solution of the generalized LSRI system (\ref{f1}) is derived and it reads as
\begin{subequations}\begin{eqnarray}
&&S(x,t)=\tau e^{i\theta}\frac{1+\epsilon g_1+\epsilon^2 g_2}{1+\epsilon f_1+\epsilon^2 f_2}\nonumber\\
&&\hspace{1.0cm}=\tau e^{i\theta}\frac{1+z_1e^{\eta_1+\eta_1^*}+z_2e^{\eta_2+\eta_2^*}+z_{12}e^{\eta_1+\eta_1^*+\eta_2+\eta_2^*}}{1+y_1e^{\eta_1+\eta_1^*}+y_2e^{\eta_2+\eta_2^*}+y_{12}e^{\eta_1+\eta_1^*+\eta_2+\eta_2^*}},\label{20a}\\
&&L(x,t)=i\frac{\partial}{\partial x}\log\frac{1+\epsilon f_1^*+\epsilon^2 f_2^*}{1+\epsilon f_1+\epsilon^2 f_2}\nonumber\\
&&\hspace{1.0cm}=i\frac{\partial}{\partial x}\log\frac{1+y_1^*e^{\eta_1+\eta_1^*}+y_2^*e^{\eta_2+\eta_2^*}+y_{12}^*e^{\eta_1+\eta_1^*+\eta_2+\eta_2^*}}{1+y_1e^{\eta_1+\eta_1^*}+y_2e^{\eta_2+\eta_2^*}+y_{12}e^{\eta_1+\eta_1^*+\eta_2+\eta_2^*}},\label{20b}
\end{eqnarray}\end{subequations}
where $\eta_j=p_jx+ip_j^2t+\eta_{j}^{(0)}$, $z_j=-\frac{(p_j-il)}{(p_j^*+il)}y_j$,~ $y_j=-i\frac{(i\beta+2p_j^*)}{(p_j+p_j^*)}$,~$p_{jR}=\pm \bigg[\frac{\lvert\tau\rvert^2(2l-\beta)}{2p_{jI}}-(p_{jI}-l)^2\bigg]^{\frac{1}{2}}$, ~$j=1,2$,~ $z_{12}=z_1z_2\Omega_{12}$, $y_{12}=y_1y_2\Omega_{12}$, $\Omega_{12}=\frac{\lvert p_1-p_2\rvert^2}{\lvert p_1+p_2\rvert^2}$. The two-dark soliton solution (\ref{20a})-(\ref{20b}) is characterized by five complex constants $p_j$, $\eta_{j}^{(0)}$, $j=1,2$, $\tau$ and two real constants $l$ and $\beta$. These parameters control the dynamics as well as the structures of two dark solitons and they also provide the possibility of obtaining three permissible collision scenarios, namely (i) anti-dark - anti-dark solitons collision, (ii) anti-dark - dark solitons collision, and (iii) dark-dark solitons collision. These collision scenarios are analyzed in the subsequent section. We have also obtained $N$-dark soliton solution of the system (\ref{1}), which is given in Appendix B.
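As in the one-soliton case, $\lvert z_j/y_j\rvert=1$, so the modulus of (\ref{20a}) approaches $\lvert\tau\rvert$ in both tails $x\rightarrow\pm\infty$ (the constant term dominates one tail and the $e^{\eta_1+\eta_1^*+\eta_2+\eta_2^*}$ term the other). A minimal NumPy sketch of ours, with hypothetical parameter values chosen so that both $p_{jR}$ obtained from the constraint are real and positive:

```python
import numpy as np

tau, l, beta = 0.5 + 0.5j, 1.0, 1.0                    # hypothetical illustrative values

def p_from_constraint(pI):
    """p_j from the constraint below Eqs. (20), taking the + sign for p_{jR}."""
    return np.sqrt(abs(tau) ** 2 * (2 * l - beta) / (2 * pI) - (pI - l) ** 2) + 1j * pI

p1, p2 = p_from_constraint(0.5), p_from_constraint(0.25)

def coeffs(p):
    y = -1j * (1j * beta + 2 * np.conj(p)) / (p + np.conj(p))
    z = -(p - 1j * l) / (np.conj(p) + 1j * l) * y      # |z/y| = 1
    return y, z

y1, z1 = coeffs(p1)
y2, z2 = coeffs(p2)
Om12 = abs(p1 - p2) ** 2 / abs(p1 + p2) ** 2
y12, z12 = y1 * y2 * Om12, z1 * z2 * Om12

def S(x, t):                                           # Eq. (20a) with eta_j^{(0)} = 0
    theta = l * x - (l ** 2 + 2 * abs(tau) ** 2) * t
    E1 = np.exp(2 * p1.real * (x - 2 * p1.imag * t))   # e^{eta_1 + eta_1^*}
    E2 = np.exp(2 * p2.real * (x - 2 * p2.imag * t))   # e^{eta_2 + eta_2^*}
    num = 1 + z1 * E1 + z2 * E2 + z12 * E1 * E2
    den = 1 + y1 * E1 + y2 * E2 + y12 * E1 * E2
    return tau * np.exp(1j * theta) * num / den
```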
\section{Collision dynamics of dark solitons: Asymptotic analysis}
As mentioned above, we encounter three types of collision scenarios among the dark solitons. To characterize each of them, we have performed the appropriate asymptotic analysis, from which we deduce the explicit forms of the individual dark solitons in the asymptotic time limits $t\rightarrow \pm \infty$. However, here we present the asymptotic analysis corresponding to the head-on collision between the two anti-dark solitons only. To perform it, we consider the parametric choice $p_{1R}<p_{2R}$, $p_{1I}>p_{2I}$. Following the procedure described for the collision between the bright solitons, we deduce the following asymptotic forms of the anti-dark solitons.\\
(a) Before collision: $t\rightarrow -\infty$\\
Soliton 1: $\eta_{1R}\simeq 0$, $\eta_{2R}\rightarrow -\infty$
\begin{subequations}
\begin{eqnarray}
&&S(x,t)=\frac{\tau}{2}e^{i\theta}\bigg[(1+\kappa_1)-(1-\kappa_1)\tanh(\eta_{1R}+\phi_{S}^{1-})\bigg],\label{24a}\\
&&L(x,t)=- \frac{4p_{1R}^2}{(\beta-2p_{1I})+\lvert 2p_{1R}+i(\beta-2p_{1I})\rvert \cosh(2\eta_{1R}+\phi_{L}^{1-})},\label{24b}~
\end{eqnarray}
\end{subequations}
where $\kappa_1=-\frac{(p_1-il)}{(p_1^*+il)}$, $\phi_S^{1-}=\frac{1}{2}\log\frac{-i(i\beta+2p_1^*)}{(p_1+p_1^*)}$ and $\phi_L^{1-}=\frac{1}{2}\log\frac{\lvert i\beta+2p_1^*\rvert^2}{(p_1+p_1^*)^2}$. In the latter, superscript $(1-)$ denotes the soliton 1 before collision and subscripts $S$ and $L$ represent the SW and LW, respectively. \\
Soliton 2: $\eta_{2R}\simeq 0$, $\eta_{1R}\rightarrow +\infty$
\begin{subequations}
\begin{eqnarray}
&&S(x,t)=\frac{\tau}{2}e^{i\theta+\Theta_1}\bigg[(1+\kappa_2)-(1-\kappa_2)\tanh(\eta_{2R}+\phi_{S}^{2-})\bigg],\label{25a}\\
&&L(x,t)=- \frac{4p_{2R}^2}{(\beta-2p_{2I})+\lvert 2p_{2R}+i(\beta-2p_{2I})\rvert \cosh(2\eta_{2R}+\phi_{L}^{2-})}.\label{25b}~
\end{eqnarray}
\end{subequations}
Here,
\begin{eqnarray}
&&\kappa_2=-\frac{(p_2-il)}{(p_2^*+il)},~~~~~\Theta_1=\log\frac{-(p_1-il)}{p_1^*+il},\nonumber\\&&\phi_S^{2-}=\frac{1}{2}\log\frac{-i\lvert p_1-p_2\rvert^2(i\beta+2p_2^*)}{\lvert p_1+p_2^*\rvert^2(p_2+p_2^*)},~ \text{and}~~
\phi_L^{2-}=\frac{1}{2}\log\frac{\lvert p_1-p_2\rvert^4\lvert i\beta+2p_2^*\rvert^2}{(p_2+p_2^*)^2\lvert p_1+p_2^*\rvert^4}.\nonumber
\end{eqnarray}
In the above, the superscript $(2-)$ denotes the soliton 2 before collision. \\
(b) After collision: $t\rightarrow +\infty$\\
Soliton 1: $\eta_{1R}\simeq 0$, $\eta_{2R}\rightarrow +\infty$
\begin{subequations}
\begin{eqnarray}
&&S(x,t)=\frac{\tau}{2}e^{i\theta+\Theta_2}\bigg[(1+\kappa_1)-(1-\kappa_1)\tanh(\eta_{1R}+\phi_{S}^{1+})\bigg],\label{26a}\\
&&L(x,t)=- \frac{4p_{1R}^2}{(\beta-2p_{1I})+\lvert 2p_{1R}+i(\beta-2p_{1I})\rvert \cosh(2\eta_{1R}+\phi_{L}^{1+})},\label{26b}~
\end{eqnarray}
\end{subequations}
where
\begin{eqnarray}
&&\Theta_2=\log\frac{-(p_2-il)}{p_2^*+il},~~~ \phi_S^{1+}=\frac{1}{2}\log\frac{-i\lvert p_1-p_2\rvert^2(i\beta+2p_1^*)}{\lvert p_1+p_2^*\rvert^2(p_1+p_1^*)},\nonumber\\ \text{and} &&\phi_L^{1+}=\frac{1}{2}\log\frac{\lvert p_1-p_2\rvert^4\lvert i\beta+2p_1^*\rvert^2}{(p_1+p_1^*)^2\lvert p_1+p_2^*\rvert^4}.
\end{eqnarray}
In the above, the superscript $(1+)$ denotes the soliton 1 after collision.\\
Soliton 2: $\eta_{2R}\simeq 0$, $\eta_{1R}\rightarrow -\infty$
\begin{subequations}
\begin{eqnarray}
&&S(x,t)=\frac{\tau}{2}e^{i\theta}\bigg[(1+\kappa_2)-(1-\kappa_2)\tanh(\eta_{2R}+\phi_{S}^{2+})\bigg],\label{27a}\\
&&L(x,t)=- \frac{4p_{2R}^2}{(\beta-2p_{2I})+\lvert 2p_{2R}+i(\beta-2p_{2I})\rvert \cosh(2\eta_{2R}+\phi_{L}^{2+})}.\label{27b}~
\end{eqnarray}
\end{subequations}
In the above, $\phi_S^{2+}=\frac{1}{2}\log\frac{-i(i\beta+2p_2^*)}{p_2+p_2^*}$, $\phi_L^{2+}=\frac{1}{2}\log\frac{\lvert i\beta+2p_2^*\rvert^2}{(p_2+p_2^*)^2}$. Here, the superscript $(2+)$ represents the soliton 2 after collision.
\begin{figure*}[]
\centering
\includegraphics[width=0.75\linewidth]{collision-two-anti-darks.eps}
\caption{Elastic collision dynamics of the two anti-dark solitons is displayed with the parameter values $p_1=0.51+0.75i$, $p_2=0.66+0.25i$, $l=1$, $\tau=0.5+0.5i$, and $\beta=1$. }
\label{f10}
\end{figure*}
\begin{figure*}[]
\centering
\includegraphics[width=0.75\linewidth]{collision-anti-dark-dark.eps}
\caption{Elastic collision dynamics between a dark soliton and an anti-dark soliton. }
\label{f11}
\end{figure*}
The above asymptotic analysis clearly shows that the two anti-dark solitons retain their shapes during the collision, except for a finite phase shift, thereby confirming the elastic nature of the collision. A typical elastic collision between the two anti-dark solitons is depicted in Fig. \ref{f10}. Then, in Fig. \ref{f11} we display the collision between a dark soliton and an anti-dark soliton. To bring out this figure, we fix the parameter values as $p_1=1+0.65i$, $p_2=-1.5+0.25i$, $l=0.85$, $\tau=1+i$, and $\beta=1$. From Fig. \ref{f11}, it is evident that the dark and anti-dark solitons are well separated initially and their structures are invariant under collision. A similar situation is also observed during the interaction between the two dark solitons, and this scenario is depicted in Fig. \ref{f12}. We have calculated the phase shifts suffered by the two anti-dark solitons during the collision process, and they turn out to be
\begin{equation}
\Delta\Phi_{SW}^1=\frac{1}{2}\log\frac{\lvert p_1-p_2\rvert ^2}{\lvert p_1+p_2^*\rvert ^2}=-\Delta\Phi_{SW}^2,~\Delta\Phi_{LW}^1=-\Delta\Phi_{LW}^2=2\Delta\Phi_{SW}^1.\label{28}
\end{equation}
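For concreteness, Eq. (\ref{28}) can be evaluated at the parameter values of Fig. \ref{f10}; a short sketch of ours, reading the last relation as $\Delta\Phi_{LW}^1=2\Delta\Phi_{SW}^1$:

```python
import numpy as np

p1, p2 = 0.51 + 0.75j, 0.66 + 0.25j        # parameter values of Fig. f10
dphi_SW1 = 0.5 * np.log(abs(p1 - p2) ** 2 / abs(p1 + np.conj(p2)) ** 2)
dphi_SW2 = -dphi_SW1                        # soliton 2 is shifted in the opposite direction
dphi_LW1 = 2 * dphi_SW1
```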
\begin{figure*}[]
\centering
\includegraphics[width=0.75\linewidth]{collision-dark-dark.eps}
\caption{Collision dynamics of two dark solitons is drawn with the values $p_1=-0.51+0.8i$, $p_2=-0.66+0.25i$, $l=0.85$, $\tau=1+i$, and $\beta=1$.}
\label{f12}
\end{figure*}
\begin{figure*}[]
\centering
\includegraphics[width=0.8\linewidth]{breather-periodic.eps}
\caption{Dark solitons behaving like breathers on a periodic background field. In Figs. (a1) and (c1) we display the bright breather-like behaviour of the anti-dark soliton in the SW component, whereas in Fig. (b1) a dark breather-like pattern is observed in the SW component. In contrast, in all the figures (a2), (b2) and (c2), a bright breather-like pattern is observed in the LW component. }
\label{f13}
\end{figure*}
\begin{figure*}[]
\centering
\includegraphics[width=0.85\linewidth]{dak-soliton-bound-states.eps}
\caption{Different types of dark-soliton bound states. In Fig. (a1), we display an anti-dark soliton bound-state structure, whereas in Fig. (b1) we illustrate the existence of an anti-dark and dark soliton bound state. Then, a two-dark-soliton bound-state structure is depicted in Fig. (c1). In addition to these bound-state structures in the SW component, a parallel-propagating bright-soliton bound structure always appears in the LW component, as illustrated in Figs. (a2), (b2) and (c2). The parameter values are: (i) (a1)-(a2): $p_1=0.5+0.45i$, $p_2=0.5+0.4445i$, $l=1$, $\tau=0.5+0.5i$, and $\beta=1$. (ii) (b1)-(b2): $p_1=-0.36+0.515i$, $p_2=0.36+0.5i$, $l=0.65$, $\tau=0.5+0.5i$, and $\beta=1$. (iii) (c1)-(c2): $p_1=-0.5+0.45i$, $p_2=-0.5+0.4445i$, $l=1$, $\tau=0.5+0.5i$, and $\beta=1$. }
\label{f14}
\end{figure*}
As we pointed out in the one-dark-soliton case, the two-dark-soliton solution also exhibits periodic behaviour for low values of the wave number $l$ of the background wave $\tau e^{i\theta}$. Such a possibility is illustrated in Fig. \ref{f13}. From this figure, one can identify that the two dark solitons do not completely change into periodic waves. Instead, one of the dark/anti-dark solitons behaves like a breather on a periodic background wave field. From Figs. \ref{f13}(a1)-(b1), we observe that an anti-dark soliton (or a dark soliton) in the SW component turns into a bright-breather-like (or dark-breather-like) structure on the periodic wave background. From Figs. \ref{f13}(a2)-(b2), we also observe a bright-breather-like pattern in the LW component. In addition, a breathing pattern is observed in both the SW and LW components, as demonstrated in Figs. \ref{f13}(c1)-(c2). The presence of a dark soliton on a periodic background will be useful in connection with the recent literature on the theory of rogue waves on periodic background wave fields \cite{psky1,psky2}. Further, in Fig. \ref{f14}, we display the three types of parallel-propagating dark-soliton bound states. We note that the resonance soliton and the breathing-type bound state do not exist in the dark-soliton case.
\begin{figure*}[]
\centering
\includegraphics[width=0.85\linewidth]{breather-type-1.eps}
\caption{Breather solution of the generalized LSRI system (\ref{1}) is illustrated for $\beta>0$. The parameter values are fixed as follows: (a1)-(a2): $\phi_1=0.5i$, $\phi_2=\phi_1^*+\pi$, $l=0.5$, $\tau=0.5$, and $\beta=1$. (b1)-(b2): $\phi_1=0.75$, $\phi_2=\phi_1^*+\pi$, $l=0.5$, $\tau=0.5$, and $\beta=0.25$. (c1)-(c2): $\phi_1=0.25+0.25i$, $\phi_2=\phi_1^*+\pi$, $l=0.5$, $\tau=0.5$, and $\beta=0.25$. }
\label{f15}
\end{figure*}
\begin{figure*}[]
\centering
\includegraphics[width=0.85\linewidth]{breather-type-2.eps}
\caption{Breather solution of the generalized LSRI system (\ref{1}) is illustrated for $\beta<0$. A singular breather, periodic in both $x$ and $t$, is demonstrated in Figs. (a1)-(a2) for $\phi_1=0.5$, $\phi_2=\phi_1^*+\pi$, $l=0.5$, $\tau=0.5$, and $\beta=-1$. Two interacting breathers are illustrated in Figs. (b1)-(b2) for $\phi_1=0.35$, $\phi_2=\phi_1^*+\pi$, $l=0.5$, $\tau=0.5$, and $\beta=-0.25$. In Fig. (c1), we find that a stationary breather and a moving breather emerge in the SW component. In contrast, in the LW component, a stationary breather along with two interacting breathers is observed. To display Figs. (c1)-(c2), we set the parameter values as $\phi_1=0.35+0.35i$, $\phi_2=\phi_1^*+\pi$, $l=0.5$, $\tau=0.5$, and $\beta=-0.25$. }
\label{f16}
\end{figure*}
\section{Breather solution}
To get the breather solution, one has to consider the same bilinear transformation (Eq. (\ref{14})) that has been used to derive the dark-soliton solution. By doing so, we obtain the following
functions $g$ and $f$ corresponding to the breather solution of the generalized LSRI system (\ref{1}):
\begin{subequations}\begin{eqnarray}
&&\hspace{-1.0cm}g=1+e^{\eta_1+2i\phi_1}+e^{\eta_2+2i\phi_2}+A_{12}e^{\eta_1+\eta_2+2i(\phi_1+\phi_2)},\label{30a}\\
&&\hspace{-1.0cm}f=1+e^{\eta_1}+e^{\eta_2}+A_{12}e^{\eta_1+\eta_2},\label{30b}\\
&&\hspace{-1.0cm}\text{where}\nonumber\\&&\hspace{-1.0cm} A_{12}=\frac{1}{D}\bigg(p_1^2\sin(\phi_1-\phi_2)[\sin(\phi_1+\phi_2)-\sin(\phi_1-\phi_2)]-p_2^2\sin(\phi_1-\phi_2)\nonumber\\
&&\times[\sin(\phi_1+\phi_2)+\sin(\phi_1-\phi_2)]-(p_1-p_2)^2\cos(\phi_1-\phi_2)\nonumber\\
&&\times[\cos(\phi_1-\phi_2)-\cos(\phi_1+\phi_2)]\bigg),\nonumber\\
&&\hspace{-1.0cm}D=-p_1^2\sin(\phi_1+\phi_2)[\sin(\phi_1+\phi_2)-\sin(\phi_1-\phi_2)]-p_2^2\sin(\phi_1+\phi_2)\nonumber\\
&&\times[\sin(\phi_1+\phi_2)+\sin(\phi_1-\phi_2)]+(p_1+p_2)^2\cos(\phi_1+\phi_2)\nonumber\\
&&\times[\cos(\phi_1-\phi_2)-\cos(\phi_1+\phi_2)],\nonumber
\end{eqnarray}\end{subequations}
where $\eta_j=p_jx-\Omega_jt$, $\Omega_j=2lp_j-p_j^2\cot\phi_j$, $p_1=\frac{1}{2}\bigg(i\beta-\sqrt{-\beta^2+16\lvert\tau\rvert^2\sin^2\phi_1+8i\lvert \tau\rvert^2\sin2\phi_1}\bigg)$, and $p_2=\frac{1}{2}\bigg(i\beta+\sqrt{-\beta^2+16\lvert\tau\rvert^2\sin^2\phi_2+8i\lvert \tau\rvert^2\sin2\phi_2}\bigg)$. Here, $p_j$, $\Omega_j$ and $\phi_j$, $j=1,2,$ are complex constants. A typical singular time-periodic breather is displayed in Figs. \ref{f15}(a1)-(a2) for the parameter values $\phi_1=0.5i$, $\phi_2=\phi_1^*+\pi$, $l=0.5$, $\tau=0.5$, and $\beta=1$. This figure shows that the breather obtained here is similar to the Kuznetsov-Ma soliton \cite{kma}, which has been widely discussed in the context of rogue waves. For $\beta=0.25$, the solution (\ref{30a})-(\ref{30b}) admits two interacting breathers in both components, where one of the breathers is stationary along $x=0$. This is illustrated in Figs. \ref{f15}(b1)-(b2). Further, as demonstrated in Figs. \ref{f15}(c1)-(c2), we come across another breather pattern by considering the phase $\phi_1$ as complex and choosing a low positive value of $\beta$. In this pattern, in the SW component, the two breathers propagate in opposite directions and collide with each other; the only outcome of the collision is a change in their positions. In contrast, in the LW component, in addition to the two interacting breathers moving in opposite directions, a stationary breather appears along $x=0$. Furthermore, one also obtains similar breather patterns for $\beta<0$. Such a possibility is illustrated in Fig. \ref{f16}.
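As a quick numerical consistency check (our own sketch, not part of the derivation), the expressions for $p_1$ and $p_2$ above can be verified to satisfy the quadratic relation $p^2-i\beta p=4\lvert\tau\rvert^2\sin^2\phi+2i\lvert\tau\rvert^2\sin 2\phi$, obtained by squaring $2p-i\beta$:

```python
import numpy as np

# Verify that the roots p_1 (lower sign) and p_2 (upper sign) satisfy
# p^2 - i*beta*p = 4|tau|^2 sin^2(phi) + 2i|tau|^2 sin(2*phi).
# Parameter values follow Fig. 15(a1)-(a2).
tau, beta = 0.5, 1.0
phi1 = 0.5j
phi2 = np.conj(phi1) + np.pi

disc = lambda phi: -beta**2 + 16*abs(tau)**2*np.sin(phi)**2 \
                   + 8j*abs(tau)**2*np.sin(2*phi)
p1 = 0.5*(1j*beta - np.sqrt(disc(phi1)))
p2 = 0.5*(1j*beta + np.sqrt(disc(phi2)))

rhs = lambda phi: 4*abs(tau)**2*np.sin(phi)**2 + 2j*abs(tau)**2*np.sin(2*phi)
for p, phi in [(p1, phi1), (p2, phi2)]:
    assert abs(p**2 - 1j*beta*p - rhs(phi)) < 1e-10
```

Both branches pass the check for complex phases as well, since the relation holds identically in $\phi$.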
\section{Conclusion}\label{sec13}
In this paper, we have first derived the $N$-bright and $N$-dark soliton solutions of the generalized LSRI system (\ref{1}) through the Hirota bilinear method. Then, by considering the fundamental bright and dark soliton solutions as well as their higher-order forms, we have discussed their various propagation and collision properties in detail. An interesting aspect of the present generalized LSRI system is that the bright soliton, in general, behaves like a KdV soliton; however, under a special condition, it acts like an NLS soliton. We also found that the dark-soliton solution admits three types of profiles. Further, the asymptotic analysis confirmed that both the bright and dark solitons always undergo elastic collisions only. In addition, we demonstrated the existence of resonant interactions between two bright solitons, as well as soliton molecules. Finally, by deriving the breather solution, we have illustrated the various breather patterns graphically by tuning the phase values and the system parameter $\beta$. The present study will be useful in fluid dynamics, plasma physics, nonlinear optics, and other closely related disciplines of physics.
\backmatter
\bmhead{Acknowledgments}
The work of Mokhtar Kirane and Stalin Seenimuthu is supported by Khalifa University of Science and Technology, Abu Dhabi, UAE, under Project Grant No. 8474000355. Lakshmanan Muthusamy thanks DST-SERB for the award of a DST-SERB National Science Chair (NSC/2020/000029).
\section*{Declarations}
\begin{itemize}
\item
The authors declare that they have no conflict of interest.
\item All data generated or analyzed during this
study are included in the article
\end{itemize}
\begin{appendices}
\section{$N$-bright soliton solution}\label{secA1}
The explicit form of the $N$-bright soliton solution of Eq. (\ref{1}) can be expressed using Gram determinants in the following way:
\begin{eqnarray}
g=\begin{vmatrix}
A & I & \phi^T \\
-I &B & {\bf 0}^T \\
{\bf 0} & -C & 0
\end{vmatrix},~~f=\begin{vmatrix}
A & I \\
-I &B \\
\end{vmatrix},~~f^*=\begin{vmatrix}
A' & I \\
-I &B^* \\
\end{vmatrix},\label{A.1a}
\end{eqnarray}
The various elements of the matrices $A$, $A'$, and $B$ are given by
\begin{eqnarray}
A_{ij}=\frac{k_j^*}{(k_i+k_{j}^*)}e^{\eta_i+\eta_{j}^*},~ A_{ij}'=-\frac{k_i}{(k_i+k_{j}^*)}e^{\eta_i+\eta_{j}^*},~B_{ij}=-\frac{\gamma_i^*\gamma_j(i\beta+2k_{j}^*)}{(k_i^{*2}-k_j^{2})},\nonumber
\end{eqnarray}
where $\eta_j=k_jx+ik_j^2t$ and $i, j=1,2,\ldots,N$.
The row matrices in Eq. (\ref{A.1a}) are defined below: \\
$\phi=\begin{pmatrix}
e^{\eta_1} & e^{\eta_2} & . & . & . & e^{\eta_N}
\end{pmatrix}$, $C=\begin{pmatrix}
\gamma_1 & \gamma_2 & . & . & . &\gamma_N
\end{pmatrix}$, ${\bf 0}$ is an $N$-component zero row matrix, and $I$ is the $(N\times N)$ identity matrix. The above $N$-soliton solution is characterized by $2N$ arbitrary complex parameters, $k_j$ and $\gamma_j$, $j=1,2,\ldots,N$, and one system parameter $\beta$.
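As an illustration (a sketch with hypothetical parameter values, not taken from the paper), the block determinants above can be assembled numerically for $N=1$; in that case they reduce to $f=1+A_{11}B_{11}$ and $g=\gamma_1 e^{\eta_1}$, which the following snippet confirms:

```python
import numpy as np

# Assemble the Gram determinants of the appendix for N = 1.
# k1, gamma1, beta, x, t are hypothetical sample values.
k1, gamma1, beta = 0.5 + 0.3j, 1.0 + 0.2j, 1.0
x, t = 0.4, 0.1

eta1 = k1*x + 1j*k1**2*t
A11 = np.conj(k1)/(k1 + np.conj(k1))*np.exp(eta1 + np.conj(eta1))
B11 = -np.conj(gamma1)*gamma1*(1j*beta + 2*np.conj(k1))/(np.conj(k1)**2 - k1**2)

f = np.linalg.det(np.array([[A11, 1.0], [-1.0, B11]]))
g = np.linalg.det(np.array([[A11, 1.0, np.exp(eta1)],
                            [-1.0, B11, 0.0],
                            [0.0, -gamma1, 0.0]]))

assert np.isclose(f, 1 + A11*B11)
assert np.isclose(g, gamma1*np.exp(eta1))
```

For general $N$ the same layout applies, with $A$, $B$ as $N\times N$ blocks and $\phi$, $C$ as row vectors.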
\section{$N$-dark soliton solution}\label{secA2}
The $N$-dark soliton solution of the system (\ref{1})
is given by
\begin{subequations}\begin{eqnarray}
&&g=\begin{vmatrix}\displaystyle{
\delta_{jk}+i\bigg(\frac{i\beta+2p_k^*}{p_j+p_k^*}}\bigg)\bigg(\frac{p_j-il}{p_j^*+il}\bigg)e^{\eta_j+\eta_k^*}
\end{vmatrix}_{N\times N},\\
&&f=\begin{vmatrix}\displaystyle{
\delta_{jk}-i\bigg(\frac{i\beta+2p_k^*}{p_j+p_k^*}}\bigg)e^{\eta_j+\eta_k^*}
\end{vmatrix}_{N\times N},
\end{eqnarray}\end{subequations}
where $\eta_j=p_jx+ip_j^2t+\eta_{j}^{(0)}$, $j=1,2,\ldots,N$, and the $p_j$'s and $\eta_j^{(0)}$'s are complex constants. The constraint conditions turn out to be
$p_{jR}=\pm \bigg[\frac{\lvert\tau\rvert^2(2l-\beta)}{2p_{jI}}-(p_{jI}-l)^2\bigg]^{\frac{1}{2}}$, ~$j=1,2,\ldots,N$. Here, $p_{jR}$ and $p_{jI}$ are the real and imaginary parts of $p_j$. The imaginary parts of the $p_j$'s govern the velocities of the solitons and the $\eta_{j}^{(0)}$'s define the phases of the solitons.
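As a numerical illustration (a sketch; the helper name is ours), the constraint can be evaluated for the parameter values of Fig.~\ref{f14}(a1): with $\tau=0.5+0.5i$, $l=1$, $\beta=1$, and $p_{1I}=0.45$, the positive branch gives $p_{1R}\approx 0.503$, close to the rounded value $p_1=0.5+0.45i$ quoted in the caption:

```python
import numpy as np

def p_real(p_im, tau, l, beta):
    """Positive branch of the dark-soliton constraint for p_{jR}."""
    radicand = abs(tau)**2*(2*l - beta)/(2*p_im) - (p_im - l)**2
    if radicand < 0:
        raise ValueError("no real p_R: parameters violate the constraint")
    return np.sqrt(radicand)

# Parameter values of Fig. 14(a1): tau = 0.5+0.5i, l = 1, beta = 1, p_I = 0.45
pR = p_real(0.45, 0.5 + 0.5j, 1.0, 1.0)
print(round(pR, 3))  # -> 0.503
```

The negative radicand case corresponds to parameter choices for which no propagating dark soliton exists.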
\end{appendices}
\section{Introduction}{\label{s:introduction}}
The solar atmosphere provides a favourable environment for the generation and
propagation of internal gravity waves (or internal waves). Turbulent convection
from subsurface regions, penetrating locally into the stably stratified medium
above, is thought to excite internal waves along with acoustic waves. These
waves couple the lower atmosphere with the higher layers by transporting energy,
and presumably contributing to the heating of the upper solar atmosphere.
However, the short radiative timescales and the presence of strong magnetic
fields in these regions influence the internal waves. The effects that magnetic
fields may have on the generation and propagation of these waves are still
unknown.
Internal waves are a natural response of a gravitationally stratified medium to
any disturbance of its equilibrium state, with buoyancy acting as the
equilibrium restoring force. Internal waves are ubiquitous in the Earth's
atmosphere and have been extensively studied for their role in the circulation
patterns in the oceans and the terrestrial atmosphere. They form an essential
component in the general circulation models (GCM) that provide accurate global
weather predictions. The downward propagating, east-west oscillatory patterns
known as Quasi-Biennial Oscillations (QBO) observed in the Earth's atmosphere
below 35\,km in tropical latitudes are due to momentum transport by internal
waves. Tsunamis in open oceans excite internal waves that propagate up to
ionospheric heights causing traveling ionospheric disturbances
\citep{2005GeoJI.160..840A}.
Studies of internal waves in the solar atmosphere began with
\citet{1963ApJ...137..914W}
following a suggestion by
\citet{1960CaJPh..38.1441H},
a pioneer in the field of terrestrial atmospheric physics, that internal waves
could play an important role in coronal heating. Later work invoked internal
waves to explain the then elusive 5-min oscillations of the solar atmosphere
\citep{1960IAUS...12..321L,
1962ApJ...135..474L}.
The theoretical framework put forth by
\citet{1964ApJ...139...48M}
tried to explain these oscillations in terms of frequencies below the acoustic
cut-off value, a regime where internal gravity waves exist. Later, a number
of works explored the existence of trapped internal gravity waves due to a
temperature dip
\citep{1965ApJ...142..335U,
1967ApJ...147..181U}
or due to ionization effects
\citep{1971SoPh...16...51T}
and related those to the observed solar oscillations. These studies later gave
way to trapped acoustic waves in the solar interior as the sole agent
responsible for the oscillations
\citep{1970ApJ...162..993U,
1971ApL.....7..191L}.
Despite the fact that they did not play a role in the observed oscillations,
studies of internal waves continued in view of explaining the heating of the
upper atmosphere.
\citet{1967IAUS...28..429L}
suggested that internal waves are efficiently generated by ``tongues of
turbulence'' that reach up into the photosphere where they contribute to
atmospheric heating.
\citet{1967SoPh....2..385S}
discussed the generation of internal waves by turbulence in an isothermal,
stratified atmosphere. However, the short radiative relaxation times in the
photosphere raised questions about the mere existence of internal waves in these
regions
\citep{1967ARA&A...5...67S,
1969SSRv....9..713K,
1970A&A.....4..189S,
1973SoPh...30..319C,
1980SSRv...27..301L}.
Analytical studies of the complete magneto-acoustic-gravity (MAG) spectrum in a
simple stratified atmosphere have been carried out by a number of authors
starting with
\citet{1958ApJ...127..459F},
\citet{1982A&A...112...16Z},
\citet{1982A&A...112...84L},
\citet{1984A&A...132...45Z},
\citet{1992ApJ...396..311H},
\citet{1997RSPSA.453..943B},
\citet{2001ApJ...548..473C},
\citet{2015GApFD.109..168C},
to cite a few.
Some of the first observational evidences suggesting the existence of internal
waves in the solar atmosphere were presented by
\citet{1976SoPh...47..435S}
and
\citet{1978A&A....70..345C}.
An extensive study of internal waves in the solar atmosphere focusing on the
energy dissipation and their possible signatures on spectral lines was carried
out by
\citet{1981ApJ...249..349M,
1982ApJ...263..386M}.
They concluded that the energy dissipation of internal waves due to non-linear
wave-breaking is dominant in the mid-chromosphere and that they deposit all of
their energy at these heights, hardly ever reaching the corona. While the
detection of internal waves in the solar atmosphere has been questioned
\citep{1968ApJ...152..557F,
1979ApJ...231..570L},
a series of observations reported evidence of internal waves in the solar
atmosphere
\citep{1981A&A....95..221D,
1984MmSAI..55..147S,
1987A&A...175..263S,
1989A&A...213..423D,
1991A&A...242..271M,
1991A&A...244..492B,
1991A&A...252..827K,
1993A&A...274..584K,
1997A&A...324..704S,
2001A&A...379.1052K,
2003A&A...407..735R}.
Using high spatial and temporal resolution spectroscopic observations in
multiple lines with ground and space-based telescopes and with the help of 3D
numerical simulations,
\citet{2008ApJ...681L.125S}
reported the first ``unambiguous'' detection of propagating internal waves in a
magnetically quiet region of the solar atmosphere. They claimed that the energy
flux of internal waves was sufficient for balancing the radiative losses of the
chromosphere. They also observed that internal waves are suppressed in strong
magnetic field regions as a result of reflection and conversion to other wave
modes. Soon after,
\citet{2008MNRAS.390L..83S}
found signatures of internal waves in temperature fluctuations derived from the
Fe\,{\textsc i} ($\lambda$\textnormal{=}532.418\,nm) spectral line, a
temperature sensitive line formed at photospheric heights, raising questions
about their presence at these heights despite strong radiative damping.
\citet{2011A&A...532A.111K}
have reported the presence of internal waves and estimated their energy flux
using observations in the lines of Fe\,{\textsc i}
($\lambda$\textnormal{=}557.6\,nm, 543.4\,nm) that form at an average height of
380\,km and 570\,km, respectively. Recent work by
\citet{2014SoPh..289.3457N}
also shows signatures of internal waves in the SDO/HMI Dopplergrams. However, the
numerical models in their work fail to show a clear signature of internal waves.
This discrepancy may be due to the extent of the simulated domain, or the
radiative damping in the model, or the upper boundary conditions. Despite recent
observational confirmation of the existence of internal waves in the solar
atmosphere, little research has been done towards understanding the
power suppression of these waves in magnetic-field regions.
Many different waves co-exist and interact with each other in the solar
atmosphere. The surface-gravity waves ($f$-mode) and the evanescent tails of the
solar $p$-modes exist in the atmosphere. In magnetic flux tubes,
magneto-acoustic waves are generated as a result of continuous buffeting by
granules
\citep{1999ApJ...519..899H}
and by strong inter-granular downdrafts
\citep{2011ApJ...730L..24K},
which propagate upwards and partially escape the flux tube to propagate as
acoustic waves in the medium outside
\citep{2009A&A...508..951V}.
The magneto-acoustic waves that propagate up along the flux tubes undergo
transmission and conversion at the equipartition level, the height where the
ratio of sound speed ($c_{S}$) to Alfv\'{e}n speed ($v_{A}$) drops below 1. The
resulting fast magneto-acoustic waves get partially refracted travelling
downwards in the atmosphere and partially convert to Alfv\'{e}n waves near the
apex of the refractive wave path
\citep{2012ApJ...746...68K}.
Internal waves can also couple to magneto-acoustic and Alfv\'{e}n waves as shown
by
\citet{2010MNRAS.402..386N,
2011MNRAS.417.1162N}.
The whole sequence of wave production and coupling, starting from the solar
surface up to heights where Alfv\'{e}n waves are produced, has to be clearly
understood in order to account for the energy distribution among various wave
modes at different heights. Radiative damping in the low-photosphere and
non-linear effects leading to wave-breaking above the mid-chromosphere,
spatially restrict the propagation of internal waves in the Sun's atmosphere,
making their observation difficult.
In this paper, we use realistic numerical simulations of the solar atmosphere to
study the properties of the acoustic-gravity wave spectrum in the presence of
magnetic fields. This work is a substantial extension of the linear analysis
carried out by
\citet{1981ApJ...249..349M,
1982ApJ...263..386M},
which neglected the effects of magnetic fields. Realistic simulations that
take into account essential physics like non-local radiative transfer and an
equation of state that adequately describes the solar plasma are needed to
explain the observed properties of internal waves in the solar atmosphere.
Theoretical work on MAG waves has been carried out by a number of authors, but
atmospheric internal gravity waves in the presence of spatially intermittent and
temporally evolving magnetic fields is a less explored field. It is still not
clear whether the presence of a magnetic field modifies the background
properties and thereby indirectly affects the propagation of internal waves, or
whether the changes in the plasma $\beta$ and the magnetic field orientation
restrict the occurrence of internal waves to an even smaller region, or perhaps
suppress them completely. This paper addresses some of these aspects with state-of-the-art
numerical simulations and attempts to fill some gaps in our understanding of
atmospheric internal gravity waves.
The paper is structured as follows: In Section~\ref{s:num_sim}, we discuss the
numerical setup, the construction of the model, and give a detailed description
of the properties of the non-magnetic and magnetic model in the context of
internal waves. In Section~\ref{s:spectral_analysis}, we carry out a spectral
analysis of the 3D simulation, where the emergent phase and energy flux spectra
are presented, highlighting the differences between the two models. In
Section~\ref{s:discussion}, we present a detailed discussion on the various
effects that can explain the differences between the two models. The summary and
conclusion of the paper are provided in Section~\ref{s:conclusion}.
\section{Numerical models}{\label{s:num_sim}}
The numerical simulations of solar convection presented in this paper were
carried out using the {CO$^{\rm 5}$BOLD} code
\citep{2012JCoPh.231..919F}.
The code solves the equations of (magneto-)hydrodynamics for a fully
compressible gas with a realistic equation of state, taking non-local radiative
transfer into account. Here, we use five opacity groups, adapted from the MARCS
stellar atmosphere package
\citep{2008A&A...486..951G}.
We take a 3D snapshot from an earlier model of relaxed convection, computed
using {CO$^{\rm 5}$BOLD}, and extend the domain by tiling it in the horizontal
directions. The new computational domain has a size of
{38.4\,Mm}\,$\times$\,{38.4\,Mm}\,$\times$\,{2.8\,Mm}, with a horizontal cell
size of 80\,km and a vertical cell size varying from 50\,km in the lower part of
the computational domain down to 20\,km in the upper atmosphere, discretized on
480\,$\times$\,480\,$\times$\,120 grid cells. The domain reaches $\sim$1.5\,Mm
below the level of average Rosseland optical depth $\tau_{R}\textnormal{=}1$
(where we define the $z$ axis such that $\langle z(\tau_{R}\textnormal{=}1)\rangle=0$) and $\sim$1.3\,Mm above it. A constant gravity of g=275\,m\,s$^{-2}$
acts in the box. The tiling results in a periodic pattern due to the previous
periodic boundary condition. This pattern is eliminated by superimposing a random velocity
pattern, with rms value of 0.5v$_{x,y}$ (v$_{x}$ and v$_{y}$ are the horizontal components of the velocity), on the model between $z$=$-100$ and 0\,km
(below the average $\tau_{R}\textnormal{=}1$ surface) over the entire
horizontal extent and advancing the solution over several turnover timescales
(approx.\,190\,min). Taking this solution as the initial model, a hydrodynamic
(HD) and a magneto-hydrodynamic (MHD) simulation run is carried out. For the
entire HD run, starting with the small domain, we use the Roe solver with
VanLeer reconstruction
\citep[see][for the details
on the computational methods]{2012JCoPh.231..919F}.
The HLL-MHD solver with PP reconstruction is used for the MHD run
\citep{2013MSAIS..24..100S}.
For creating the MHD model, a uniform vertical field of 50\,G is embedded in the
entire domain of the extended initial HD model, which is then advanced over a
magnetic field redistribution timescale of approximately 600\,s. During this
time the uniformly distributed fields are swept towards the inter-granular lanes
by granular flow, forming localised flux concentrations with magnetic field
strengths surpassing 1.5\,kG at z=0\,km. This model serves as a representation
of an internetwork region of the quiet-Sun. The HD solution is advanced for the
same duration to match with that of the MHD run. Both the hydrodynamic
(``non-magnetic'') and magneto-hydrodynamic (``magnetic'') solutions are then
advanced for 8 hours of physical time, with snapshots taken every 30 seconds. A
summary of the numerical setup and physical properties of the two simulated
models is given in Table~\ref{tab:model_summary}.
\begin{table}[h!]
\centering
\caption{Numerical setup and physical properties of the two
simulated models.}\label{tab:model_summary}
\begin{tabular}{lcc}
\hline
\hline
& Non-magnetic & Magnetic \\
\hline
Snapshot cadence & \multicolumn{2}{c}{30\,s}\\
Duration of simulation & \multicolumn{2}{c}{8\,hrs}\\
Computational grid & \multicolumn{2}{c}{480$\times$480$\times$120}\\
Domain size &
\multicolumn{2}{c}{38.4$\times$38.4$\times$2.8\,Mm$^{3}$}\\
Computational cell size &
\multicolumn{2}{c}{80$\times$80$\times$(50-20)\footnote{The vertical
cell size varies from
50\,km in the lower part of the computational domain
down to 20\,km in the upper atmosphere.} km$^{3}$}\\
Numerical scheme & Roe & HLL-MHD \\
Reconstruction & VanLeer & PP/VanLeer\\
Temperature, $T_{\rm eff}$ & 5798$\pm$3\,{\rm K} &
5773$\pm$4\,{\rm K}\\
Intensity contrast, $\delta I_{\rm rms}$
&15.57$\pm$0.13\,\% &15.32$\pm$0.11\,\%\\
\hline
\end{tabular}
\end{table}
Periodic boundary conditions are used for the side boundaries in both models.
The velocity field, radiation, and the magnetic field components are periodic in the
lateral directions, which results in the inhibition of waves with horizontal
wavelengths larger than the width of the box. The top boundary is open for fluid
flow and outward radiation, with the density decreasing exponentially
in the boundary cells outside
the domain. The vertical component of the magnetic field is constant across the
boundary and the transverse component drops to zero at the boundary. In both
models, the bottom boundary is set up in such a way that the in-flowing material
carries a constant specific entropy of ${\rm 1.773\times10^{9}\,erg\,g^{-1}\,K^{-1}}$
resulting in a radiative flux corresponding to an effective temperature
($T_{\rm eff}$) of $\sim$5770\,K. The bottom boundary conditions for the magnetic
fields are the same as for the top boundary.
The spatially and temporally averaged temperature profiles of the two models are
shown in Figure~\ref{fig:temperature}. Also shown in the background is the
temperature distribution from a single snapshot of the non-magnetic model taken
at t=4h after the start of the simulation. Although the average temperature in
the upper layers becomes constant, there are instances when the temperature
increases locally, hinting at a weakly shock-heated chromosphere. The two models
show exactly the same average temperature profile, but the granular sizes show slight
differences.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{vigeesh_f1.pdf}
\caption{
Average temperature as a function of height in the non-magnetic
(dashed) and the magnetic (solid) model. The gray background shows the
temperature distribution for a single snapshot of the non-magnetic run
taken at t=4h.}
\label{fig:temperature}
\end{figure}
In Figure~\ref{fig:bolometric_intensity}, we show the emergent bolometric
intensity from the two models 4~hours after the start of the simulation. It is
to be noted that, while the granule-size distribution in the non-magnetic model
peaks at 2\,Mm, the granules in the magnetic model are on average larger. This is
due to the more diffusive nature of the HLL-MHD numerical solver, compared to
the Roe solver. However, this difference between the non-magnetic and magnetic
model does not seem to influence the overall spectra of the generated internal
gravity waves, as will be further explained in Sect.~\ref{s:conclusion}. The average rms bolometric intensity
contrasts, $\delta I_{\rm rms}$, of the non-magnetic and magnetic models are
15.57\,\% and 15.32\,\%, respectively (see Table \ref{tab:model_summary}).
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{vigeesh_f2a.pdf}\\
\includegraphics[width=\columnwidth]{vigeesh_f2b.pdf}\\
\caption{
Emergent bolometric intensity from: a) the non-magnetic and b)
the magnetic model at t=4h.}
\label{fig:bolometric_intensity}
\end{figure}
The large spatial and temporal coverage of the two models gives us the
opportunity to study the different wave phenomena in Fourier space. All the
physical variables are decomposed into their Fourier components along the
horizontal directions and in time. In the following, we present the properties
of the model in frequency-space for a better understanding of the different wave
phenomena present in the simulation. The rest of the paper is based on this
decomposition and hence we attempt a detailed presentation.
\subsection{The dispersion relation ($k_{h}\textnormal{-}\omega$
diagram)}\label{ss:kwdiagram}
In an infinite, homogeneous, compressible medium in the absence of an external
force field, any small-amplitude perturbation propagates as an acoustic wave owing
only to the compressibility of the medium. The propagation is isotropic and
non-dispersive with all the frequencies travelling at the characteristic sound
speed ($c_{\rm s}$) in all directions. In the presence of an external force like
gravity, the propagation becomes anisotropic and acoustic waves are modified,
with waves below a certain frequency becoming vertically non-propagative.
Acoustic waves propagating horizontally, also called Lamb waves, are unaffected
and therefore are non-dispersive. A continuously stratified fluid supplies a
restoring force, in the form of buoyancy, resulting in the propagation of
internal gravity waves. The coupling of the two waves in a compressible
stratified medium, like that of the solar atmosphere, results in their
separation into gravity-modified acoustic and compressibility-modified gravity
waves. The two types of waves occupy distinct branches in the frequency-wave
number domain ($k_{h}\textnormal{-}\omega$ space) with a band of evanescent
disturbance, separating the two branches. While, stratification results in a
cut-off frequency for the acoustic waves, the effect of compressibility
modifies the internal wave spectra at small horizontal
wavenumbers ($k_{h}$\textless 1/(2$H_{\varrho})$, where $H_{\varrho}$ is
the density scale height) from propagating. A detailed exposition on these
waves is provided by
\citet{2001wafl.book.....L}.
The addition of magnetic fields to such a medium introduces waves due to the
magnetic tension and pressure forces, that couple to the other waves already
present in the medium, resulting in a spectrum of magneto-acoustic-gravity
waves. Linearizing the full MHD equations about a uniformly stratified
background state and assuming a wave-like solution, one obtains the dispersion
relation for the magneto-acoustic gravity waves. Further assuming that the presence of a
magnetic field just modifies the background atmosphere, the coupling to the
magnetohydrodynamic waves can be neglected. The dispersion relation of the waves
then reduces to
\citep[see][for a derivation]{2014masu.book.....P}
\begin{equation}
k_{z}^2 = \frac{(\omega^2 - \omega_{\rm ac}^2)}{c_{\rm s}^2} -
\frac{(\omega^2 - N^2) k_{h}^2}{\omega^2 },
\label{eq:disp_relation}
\end{equation}
where $\omega$ is the frequency, $k_{h}$ is the horizontal wavenumber ($k_{h}^2
\textnormal{=} k_{x}^2 + k_{y}^2$), $c_{\rm s}$ is the adiabatic sound speed,
$\omega_{\rm ac}$ is the acoustic cut-off frequency, and $N$ is the
Brunt-V\"{a}is\"{a}l\"{a} frequency, explained later in
Equations~(\ref{eq:acutoff}) and (\ref{eq:bruntfreq}).
The \textit{local dispersion relation}, given by
Equation~(\ref{eq:disp_relation}), separates the wave-behaviour in the
$k_{h}\textnormal{-}\omega$ diagram also known as the diagnostic diagram. The
two regions of propagation in the $k_{h}\textnormal{-}\omega$ diagram are
obtained by setting $k_{z}^2=0$ in Equation~(\ref{eq:disp_relation}) \cite[see
e.g.,][]{1981NASSP.450..263L}, with the $k_{z}^2>0$ domain isolating
the vertically propagating solution from the evanescent region
($k_{z}^2<0$). A schematic of such a diagnostic diagram for a
compressible, gravitationally stratified medium for a given height in the
atmosphere is shown in Figure~\ref{fig:kwschematic}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{vigeesh_f3.pdf}
\caption{
Schematic diagram showing different regimes of wave propagation in a
compressible, gravitationally stratified medium for a given height in
the atmosphere. The shaded area marks regions of vertical propagation
of the acoustic and gravity waves. The propagation boundaries separate
the vertically propagating ($k_{z}^2 > 0$) from the
evanescent ($k_{z}^2 < 0$) solutions. The solid curve
represents the propagation boundaries obtained from the non-isothermal
cut-off frequencies defined in Equations~(\ref{eq:acutoff}) and
(\ref{eq:bruntfreq}). The dashed curves are obtained when we use the
isothermal approximation for $\omega_{\rm ac}$. The dispersion
relation for the surface-gravity wave is shown in gray. The long
dashed gray line corresponds to the dispersion relation for the Lamb
waves.}
\label{fig:kwschematic}
\end{figure}
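The propagation regimes sketched above can be probed numerically. The following snippet (a sketch of ours; the photospheric values of $c_{\rm s}$, $\omega_{\rm ac}$ and $N$ are illustrative assumptions, not simulation output) evaluates Equation~(\ref{eq:disp_relation}) and checks the sign of $k_z^2$ for a high-frequency acoustic point, a low-frequency large-$k_h$ gravity point, and an intermediate evanescent point:

```python
import numpy as np

def kz2(omega, kh, cs, om_ac, N):
    """k_z^2 from the local dispersion relation."""
    return (omega**2 - om_ac**2)/cs**2 - (omega**2 - N**2)*kh**2/omega**2

# Illustrative photospheric values (assumptions, not from the simulation):
cs = 7.0e3                   # sound speed [m/s]
om_ac = 2*np.pi*5.2e-3       # acoustic cut-off [rad/s] (~5.2 mHz)
N = 2*np.pi*4.9e-3           # Brunt-Vaisala frequency [rad/s] (~4.9 mHz)

acoustic   = kz2(2*np.pi*8.0e-3, 1e-6, cs, om_ac, N)  # above om_ac, small k_h
gravity    = kz2(2*np.pi*2.0e-3, 3e-6, cs, om_ac, N)  # below N, larger k_h
evanescent = kz2(2*np.pi*5.0e-3, 1e-6, cs, om_ac, N)  # between N and om_ac

assert acoustic > 0 and gravity > 0 and evanescent < 0
```

The sign of $k_z^2$ directly reproduces the shaded/unshaded regions of the schematic diagnostic diagram.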
For small $k_{h}$ ($k_{h} < 1/(2 H_{\varrho})$), the lowest frequency with
which a gravity-modified acoustic wave can propagate upward is limited by the
acoustic cut-off frequency
($\omega_{\rm ac} \textnormal{=} c_{\rm s}/(2H_{\varrho})$),
which, for an isothermal atmosphere, is a function of the
sound speed and the density scale height ($H_{\varrho}$), referred to as the
Lamb frequency. However, in the non-isothermal case like that of the solar
atmosphere, the gradients in temperature modify the cut-off frequency. While
there are different expressions for the cut-off frequency, depending on
different representations of the wave equation
\citep{1995A&A...293..586M,
1998A&A...337..487S},
in this paper, we adopt the one due to
\citet{1984ARA&A..22..593D},
viz.,
\begin{equation}
\omega_{\rm ac}^2 = \frac{c_{\rm s}^2}{4 H_{\varrho}^2} \left(1-2\frac{{\rm d}
H_{\varrho}}{{\rm d} z}\right),
\label{eq:acutoff}
\end{equation}
which is obtained when the wave equation is cast in terms of
$\varrho^{1/2}c_{\rm s}^{2}\,\nabla\cdot\boldsymbol{v}$ as the oscillating function. The
difference in the diagnostic diagram between the isothermal and the
non-isothermal case for a particular height in the atmosphere is also shown in
Figure~\ref{fig:kwschematic}.
Internal waves exist below the acoustic cut-off frequency and have horizontal
phase velocity less than the sound speed in the medium. The maximum frequency of
propagation for internal waves is set by the Brunt-V\"{a}is\"{a}l\"{a} frequency
($N$), also called the stratification or buoyancy frequency. For a
non-isothermal atmosphere, it is defined as,
\begin{equation}
N^{2} = g\left(\frac{1}{H_{\varrho}} - \frac{1}{\gamma H_{\rm p}}\right),
\label{eq:bruntfreq}
\end{equation}
where, $\gamma$ is the ratio of the specific heats ($c_{P}/c_{V}$). Recalling
that the pressure scale height ($H_{\rm p}$) is equivalent to the density scale
height ($H_{\varrho}$) in an isothermal atmosphere, the expression for the
Brunt-V\"{a}is\"{a}l\"{a} frequency in an isothermal case can be recovered. In
the presence of a magnetic field, the Brunt-V\"{a}is\"{a}l\"{a} frequency can be
further modified, but we do not consider this effect here.
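Explicitly, setting $H_{\rm p} = H_{\varrho} = c_{\rm s}^{2}/(\gamma g)$ in Equation~(\ref{eq:bruntfreq}) recovers the isothermal limit,
\begin{equation*}
N^{2} = \frac{g}{H_{\varrho}}\left(1 - \frac{1}{\gamma}\right)
      = \frac{(\gamma - 1)\,g^{2}}{c_{\rm s}^{2}},
\end{equation*}
which, together with $\omega_{\rm ac} = c_{\rm s}/(2H_{\varrho}) = \gamma g/(2c_{\rm s})$, gives $N^{2}/\omega_{\rm ac}^{2} = 4(\gamma-1)/\gamma^{2} \leq 1$, so that $N$ always lies below $\omega_{\rm ac}$ in an isothermal atmosphere.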
A fluid element vertically displaced from its equilibrium position will
oscillate and emit gravity waves provided the background atmosphere satisfies
the Schwarzschild criterion for stability ($N^2\textgreater 0$). If there are
local departures from the stability criterion due to overshooting material in a
stably stratified surrounding, the fluid element becomes unstable: it rises,
cools by radiating, and falls back, completing the convective cycle. In
observations, the frequency range covering the internal waves is dominated by
convective noise, but the propagation properties of internal waves have been
studied by carrying out a phase-spectrum analysis of these waves.
We have presented the diagnostic diagram and the significance of distinguishing
the two types of wave behaviour in such a diagram. In the following section, we will look
at the analysis of the simulation data based on this diagnostic diagram.
\section{Spectral Analysis}{\label{s:spectral_analysis}}
The complex cross-spectrum of two real-valued processes, $f(\boldsymbol{x}, t)$ and
$g(\boldsymbol{x}, t)$, is defined as
\begin{mathletters}
\begin{eqnarray}
\mathcal{S}_{f,g} (\boldsymbol{k}, \omega) & \equiv & \mathcal{C}_{f,g} (\boldsymbol{k}, \omega)
+ i \mathcal{Q}_{f,g} (\boldsymbol{k}, \omega), \nonumber \\
& = & \mathcal{F}(\boldsymbol{k}, \omega)~\overline{\mathcal{G}(\boldsymbol{k}, \omega)}.
\label{eq:cross_spectra}
\end{eqnarray}
\end{mathletters}
$\mathcal{F}(\boldsymbol{k}, \omega)$ and $\mathcal{G}(\boldsymbol{k}, \omega)$ are the Fourier
transforms of the two processes, with the overbar representing the complex
conjugate. The real part of $\mathcal{S}$ is known as the co-spectrum
($\mathcal{C}$), and gives the correlation of the in-phase/anti-phase Fourier
components ($\boldsymbol{k}, \omega$) of the two processes. The imaginary part of
$\mathcal{S}$ is known as the quadrature spectrum ($\mathcal{Q}$) and represents
the correlation of the out-of-phase Fourier components between the two processes
\citep{hayashi1982}. These quantities will be further explored in the context of
energy fluxes of the internal waves discussed in
Section~\ref{ss:energy_flux_spectra}.
Using the cross-spectrum, the phase lag or the phase difference between the two
processes is formally given as,
\begin{equation}
\phi_{f,g} (\boldsymbol{k}, \omega) = \tan^{-1} \left[\frac{\mathcal{Q}_{f,g} (\boldsymbol{k},
\omega)}{\mathcal{C}_{f,g} (\boldsymbol{k}, \omega)}\right],
\label{eq:phase}
\end{equation}
where, $\phi(\boldsymbol{k}, \omega)$ is known as the phase difference spectrum, or
simply the phase spectrum. However, Equation~(\ref{eq:phase}) gives reliable
phases only if the two processes are linearly dependent for a given Fourier
component. The linear dependence of the two processes is measured by the
coherence spectrum ($\mathcal{K}$), defined as,
\begin{equation}
\mathcal{K}_{f,g}^{2}(\boldsymbol{k}, \omega) = \frac{\mathcal{C}_{f,g}^2 (\boldsymbol{k},
\omega) + \mathcal{Q}_{f,g}^2 (\boldsymbol{k}, \omega)}{\mathcal{S}_{f,f} (\boldsymbol{k},
\omega)~\mathcal{S}_{g,g} (\boldsymbol{k}, \omega)},
\end{equation}
with $\mathcal{S}_{f,f}$ representing the auto-spectrum of process $f$ and
$\mathcal{S}_{g,g}$ representing the auto-spectrum of process $g$, according to
Equation~(\ref{eq:cross_spectra}). The phase spectrum, together with the
coherence spectrum, gives an estimate of the phase difference between the two
processes, with $\mathcal{K}$=1 when the two processes are linearly related
and $\mathcal{K}$=0 when no linear dependence exists for the given Fourier
component.
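The quantities defined above can be illustrated with a minimal numerical sketch (not the analysis code used for the simulations; the signal parameters are arbitrary). Note that a meaningful coherence estimate requires averaging, e.g., over segments, since a single realization yields $\mathcal{K}$=1 identically:

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_spectra(f, g, nseg=8):
    """Segment-averaged cross-spectrum of two 1-D series.

    Averaging over segments is essential: for a single realization the
    coherence |S_fg|^2 / (S_ff S_gg) is identically 1."""
    S_fg = S_ff = S_gg = 0.0
    for a, b in zip(np.array_split(f, nseg), np.array_split(g, nseg)):
        A, B = np.fft.rfft(a), np.fft.rfft(b)
        S_fg = S_fg + A * np.conj(B)          # F * conj(G)
        S_ff = S_ff + np.abs(A)**2            # auto-spectra
        S_gg = S_gg + np.abs(B)**2
    phi = np.arctan2(S_fg.imag, S_fg.real)    # phase-difference spectrum
    coh2 = np.abs(S_fg)**2 / (S_ff * S_gg + 1e-30)
    return phi, coh2

# Two noisy series with a known 30-degree lag at f0 = 16/512 cycles/sample,
# so the signal falls exactly on Fourier bin 16 of each 512-sample segment.
n, f0 = 4096, 16.0 / 512.0
t = np.arange(n)
lag = np.deg2rad(30.0)
f = np.sin(2 * np.pi * f0 * t) + 0.05 * rng.standard_normal(n)
g = np.sin(2 * np.pi * f0 * t - lag) + 0.05 * rng.standard_normal(n)

phi, coh2 = cross_spectra(f, g)
print(np.rad2deg(phi[16]), coh2[16])          # ~30 deg (f leads g), coherence near 1
```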
In our analysis, the components of velocity and various other thermodynamic
quantities are extracted from the two models for the entire duration of the
simulation. We then carry out the analysis in the three-dimensional Fourier
space by transforming the data cube of the derived quantities consisting of two
horizontal spatial ($x,y$) and one temporal ($t$) direction, using Fast Fourier
Transform (FFT). This is done for each horizontal plane of the vertical
coordinate grid (the $z$ axis) to obtain a four-dimensional data set of the
relevant quantities on a ($k_{x},k_{y},\omega,z$) grid. The derived quantities
are then represented on a $k_{h}\textnormal{-}\omega$ diagram for each height
level by azimuthally averaging over the $k_{x}\textnormal{-}k_{y}$ plane. With
the domain spanning 38.4~Mm in the horizontal directions and 8 hours in
duration, we have a spectral resolution of 0.164~Mm$^{-1}$ in horizontal
wavenumber and 138~$\mu$Hz in frequency. The grid resolution of 80~km results
in a Nyquist wavenumber ($k_{\rm Ny} \textnormal{=} \pi/\delta x$) of
39.25~Mm$^{-1}$, although we are only interested in horizontal wavenumbers
below 8~Mm$^{-1}$, where the bulk of the IGWs occur.
Vertical and horizontal grid constants of 20~km and 80~km, respectively, are
sufficient to capture the range of the internal wave spectrum in the models, as
will be discussed in Sect.~\ref{s:discussion}.
Snapshots from the simulations were taken at 30\,s intervals, resulting in a
Nyquist frequency ($\nu_{\rm Ny}$) of 16.66\,mHz. Since the
Brunt-V\"{a}is\"{a}l\"{a} frequency in the atmosphere is typically below 5\,mHz,
we show in the following only the analysis up to a frequency of 8\,mHz.
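The quoted sampling parameters follow directly from the domain size, grid spacing, and cadence; a quick check (using the values given above):

```python
import numpy as np

L_h = 38.4e6      # horizontal extent [m]
dx = 80.0e3       # horizontal grid spacing [m]
dt = 30.0         # snapshot cadence [s]

dk = 2 * np.pi / L_h      # wavenumber resolution -> 0.164 Mm^-1
k_ny = np.pi / dx         # Nyquist wavenumber    -> ~39.3 Mm^-1
nu_ny = 1.0 / (2 * dt)    # Nyquist frequency     -> 16.67 mHz

print(dk * 1e6, k_ny * 1e6, nu_ny * 1e3)
# The frequency resolution depends in addition on the length of the
# transform window, and so is not fixed by the cadence alone.
```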
\subsection{Phase and coherence
spectra}{\label{ss:phase_diff}}
Acoustic waves and internal waves have different polarization properties and
therefore show different behaviour in their phase spectra. Unlike for acoustic
waves, the velocity fluctuations of internal waves, and therefore the energy
transport (ray path) of the wave, are perpendicular to the wave vector $\boldsymbol{k}$.
Moreover, the wave vector is always directed towards the plane of the source of
perturbation that excited the wave
\citep[see e.g.,][]{sutherland2010}.
Hence, an internal wave transporting energy
at an angle to the vertical, with an upward component, will have a downward
propagating phase component, which shows up as a negative phase lag between two
geometrical heights. This behaviour can be clearly identified by computing the
phase spectra obtained from velocity measurements at two different heights. The
diagnostic potential of the phase and coherence diagram was explored in a series
of papers by
\citet{1989A&A...213..423D},
\citet{1989A&A...224..245F},
\citet{1990A&A...228..506D},
\citet{1990A&A...236..509D},
and
\citet{1992A&A...266..560D}.
These have been used to separate the internal wave signature from the
low-frequency convective noise.
In the following, we look at the velocity-velocity ($v$-$v$) phase spectra,
which show the phase lag between the velocities measured at two different
heights. The $v_{z}$-$v_{z}$ phase spectra are determined from the vertical
component of the velocity for a pair of heights as described in the beginning of
Section~\ref{s:spectral_analysis} and represented in the form of the diagnostic
diagrams. While phase spectra determined from observations of the solar
atmosphere rely on spectral lines formed over a particular height range, in this
work we focus only on phase spectra obtained from pairs of plane parallel,
geometrical height levels. Figure~\ref{fig:phase_diff_3heights} shows the
$v_{z}$-$v_{z}$ phase spectra for pairs of heights for the non-magnetic (left
panels) and for the magnetic (right panels) model of
Table~\ref{tab:model_summary}. In order to better understand the effect of
magnetic fields on the propagation of internal waves, we study the phase spectra
obtained from three carefully selected pairs of heights. These heights are
chosen in such a way that they probe three regions of interest in the magnetic
case. The colors represent the phase differences ($\phi$) and the shading
represents the coherency ($\mathcal{K}$), with corresponding colorbars shown on
the right of the plots. Positive phases (upward) are represented with a
progressively yellow to red color-scale and the negative phases (downward) are
shown with a green to blue color-scale. The shading scale for the coherency is
shown on the top of the colorbar. The gray curve in each plot shows the
dispersion relation of the surface gravity waves. The dashed and solid curves
correspond to the propagating boundaries of the two wave branches at the lower
and the upper height, respectively.
The first pair of heights, $z\textnormal{=}100$\,km and
$z\textnormal{=}240$\,km, lies close to the surface, where the internal waves
are thought to be excited by overshooting convection. In the magnetic model,
this height range probes a gas-dominated part of the atmosphere
($\beta$\textgreater 1, where $\beta$ is the ratio of the gas pressure to the
magnetic pressure). The diagnostic diagram of these two heights is shown in
Figure~\ref{fig:phase_diff_3heights}a, where we see that both models have
generated significant amounts of internal waves, which show up as downward
phases in the internal gravity wave regime of the diagnostic diagram (the green
area below the lower dashed curve, showing a phase difference of around
$-10^{\circ}$ over a height difference of 140\,km). Although the generation of
internal waves and the influence of magnetic fields on it are of great interest,
we defer such a study to a later paper. Here, we focus only on the propagation
properties of these waves in the presence of magnetic fields. As can be seen,
the downward phases are restricted to the region below the dashed curve,
suggesting that the excited internal waves are propagating only below the
boundary determined by the lowest Brunt-V\"{a}is\"{a}l\"{a}
frequency (in this case, the $N$ of the lower
height).
The two spectra of the excited internal waves in
Figure~\ref{fig:phase_diff_3heights}a are qualitatively the same regardless of
whether they are generated in the convective or in the magneto-convective
model. Note, however, that the magnetic model inhibits surface gravity waves,
whose spectrum is clearly seen as a green ridge extending along the gray curve
in the non-magnetic model. This could be due to the fact that the magnetic
fields in the simulation box are predominantly vertical, so that the propagation
of the nearly horizontal surface gravity waves is hindered by their presence.
Now we turn to Figure~\ref{fig:phase_diff_3heights}b, the second pair of heights
($z\textnormal{=}140$\,km and 600\,km), which are still within predominantly
gas-dominated regions ($\beta\textgreater 1$). In the atmosphere that these
heights probe, however, the surfaces of constant plasma-$\beta$ are corrugated,
with occasional strong magnetic fields pushing the plasma-$\beta$ surfaces
downward. The non-magnetic model shows the signature of internal waves in the
downward phases, with phase differences of around $-90^{\circ}$ over a height
difference of 460\,km. In the magnetic model they are significantly reduced, suggesting
that the magnetic fields have a major influence on the internal waves as they
propagate upwards. Here again, the negative phase differences, and therefore the
propagating region in the diagnostic diagram, lie mainly below the boundary set
by the $N$ of the lower height. Note also that the coherence is reduced, as
evident from the increased shading at larger wavenumbers, since we are probing
heights separated by a larger distance. The surface gravity waves (the ridge
along the gray curve), on the other hand, are still present in the non-magnetic
model, but they are completely absent in the magnetic model.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{vigeesh_f4a.pdf}
\end{center}
\begin{center}
\includegraphics[width=\columnwidth]{vigeesh_f4b.pdf}
\end{center}
\begin{center}
\includegraphics[width=\columnwidth]{vigeesh_f4c.pdf}
\end{center}
\caption{
$v_{z}\textnormal{-}v_{z}$ phase spectra estimated between: a)
$z\textnormal{=}100$\,km and $z\textnormal{=}240$\,km; b)
$z\textnormal{=}140$\,km and $z\textnormal{=}600$\,km; and c)
$z\textnormal{=}560$\,km and $z\textnormal{=}900$\,km, for the
non-magnetic model (left) and the magnetic models (right). The dashed
black curves represent the propagation boundaries obtained from the
non-isothermal cut-off frequencies defined in
Equations~(\ref{eq:acutoff}) and (\ref{eq:bruntfreq}) for the lower
height and the solid curves correspond to the upper height. The gray
curve is the dispersion relation of the surface-gravity waves. The
colors represent the phase differences ($\phi$) and the shading shows
the coherency ($\mathcal{K}$).}
\label{fig:phase_diff_3heights}
\end{figure}
Figure~\ref{fig:phase_diff_3heights}c refers to the third pair of heights
($z\textnormal{=}560$\,km and 900\,km), where the first height is in a gas
dominated region ($\beta\textgreater 1$) and the second height is in the
magnetic field dominated region ($\beta\textless 1$). We see that most of the
internal waves are absent in the magnetic model (phase difference of
$0^{\circ}$ over a height difference of 340\,km). Some regions of the diagnostic
diagram in the internal wave regime of the magnetic model also show positive
phase differences (upward propagating phases) of around $10^{\circ}$. According
to their polarization properties, this suggests that the wave energy is
propagating downwards in the atmosphere.
In summary, the non-magnetic case shows a strong negative phase difference in
the internal wave region in all three pairs of heights, while the magnetic case
shows a clear signature of upward propagating internal waves for the pair of
heights in the lower atmosphere and mostly zero to positive phase differences in
the upper atmosphere. From the above analysis, we observe that the presence of
nearly vertical magnetic fields influences internal waves, resulting in their
suppression or partial reflection in the atmosphere. There are several
mechanisms that can produce this behaviour, and we explore some of them in
Section~\ref{s:discussion} to understand what we see in our simulations.
\subsection{Energy flux spectra}{\label{ss:energy_flux_spectra}}
The phase spectrum analysis shows that, in the magnetic model, the internal
waves are absent in the higher layers or even show a positive phase difference
because they propagate downward. This means that they are either dissipated or
reflected back, transporting their energy downwards, unlike the acoustic waves,
which mainly transport their energy upwards in the atmosphere. An estimate of
the energy flux spectra can shed some light on the actual energy transport by
internal waves in the presence of magnetic fields.
A propagating wave transports energy to the far field when pressure and
velocity oscillate in phase. In order to estimate the vertical component of the
linearized mechanical energy flux of these waves, we look at the co-spectrum of
the pressure fluctuations, $\Delta p$, and the vertical component of the
velocity, $v_{z}$
\citep{2001wafl.book.....L},
averaged over one wavelength (hence the factor of 1/2). As described at the
beginning of Sect.~\ref{s:spectral_analysis}, the co-spectrum gives us the
in-phase part of the cross-spectrum, which in this case is the active
mechanical energy flux transported by the waves,
\begin{mathletters}
\begin{eqnarray}
F_M (\boldsymbol{k}, \omega)
& = & \frac{1}{2} \mathcal{C}_{\Delta p, v} (\boldsymbol{k}, \omega), \nonumber \\
& = & \frac{1}{2} {\rm Re} [\Delta p(\boldsymbol{k}, \omega)\,\overline{v(\boldsymbol{k}, \omega)}].
\label{eq:energy_flux}
\end{eqnarray}
\end{mathletters}
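As a minimal illustration of Equation~(\ref{eq:energy_flux}) on synthetic one-dimensional signals (not simulation data): pressure oscillating in phase with the velocity yields a positive flux, while a quadrature pair contributes none:

```python
import numpy as np

n = 1024
t = np.arange(n)
w = 2 * np.pi * 8.0 / n                 # 8 cycles over the record -> bin 8

dp_up = np.cos(w * t)                   # pressure in phase with v_z
dp_quad = np.sin(w * t)                 # pressure 90 deg out of phase
vz = np.cos(w * t)

def flux(dp, v):
    # F_M = 1/2 Re[ Dp(k,w) conj(V(k,w)) ], evaluated per Fourier bin
    return 0.5 * (np.fft.rfft(dp) * np.conj(np.fft.rfft(v))).real

print(flux(dp_up, vz)[8], flux(dp_quad, vz)[8])   # positive, ~zero
```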
The energy flux, $F_{M}$, calculated using Equation~(\ref{eq:energy_flux}) in
the $k_{x}$-$k_{y}$ plane is then azimuthally averaged and represented on the
diagnostic diagram. Figures~\ref{fig:energy_2heights}a and
\ref{fig:energy_2heights}b show the energy flux spectra computed at heights of
$z\textnormal{=}360$\,km and $z\textnormal{=}700$\,km, respectively. Positive
values correspond to upward flux and negative values correspond to downward
flux. The energy flux spectra computed for $z\textnormal{=}360$\,km (see
Figure~\ref{fig:energy_2heights}a) show that both the acoustic and the internal
waves transport their energy upwards in both the magnetic and non-magnetic
model. When we look at the energy flux spectra at $z\textnormal{=}700$\,km (see
Figure~\ref{fig:energy_2heights}b), it is clear that the internal waves in the
non-magnetic model still carry a positive flux, which means they are propagating
and transporting energy predominantly upwards. However, the magnetic model shows
a mixture of positive and negative energy flux in the gravity wave regime (in
locations where there is a negative phase difference in the right panel of
Figure~\ref{fig:phase_diff_3heights}b), suggesting that, at this height, the
waves propagate both up and down and thus transport energy in both directions.
The upward propagating waves are probably the ones generated in the lower
atmosphere, and the downward propagating waves are the ones reflected from the
top layer of the atmosphere.
In this work, we have not attempted to compute the Poynting flux from the
magnetic model, as we cannot do a comparative study with the non-magnetic model.
Future work will explore the emergent Poynting flux by comparing different
magnetic models.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{vigeesh_f5a.pdf}
\end{center}
\begin{center}
\includegraphics[width=\columnwidth]{vigeesh_f5b.pdf}
\end{center}
\caption{
Energy flux spectra at heights of a) $z\textnormal{=}360$\,km and b)
700\,km of the non-magnetic model (left) and the magnetic model
(right). The solid black curves represent the propagation boundaries
obtained from the non-isothermal cut-off frequencies defined in
Equations~(\ref{eq:acutoff}) and (\ref{eq:bruntfreq}). The gray curve
is the dispersion relation of the surface-gravity waves.}
\label{fig:energy_2heights}
\end{figure}
\section{Discussion}{\label{s:discussion}}
We now focus our attention on explaining the behaviour of internal waves that
are seen in the numerical models, particularly the absence or the downward
propagation in the magnetic model, which is also partially evident from the
energy flux spectra. We explore different factors that may affect the
propagation of internal waves in a realistic atmosphere. All the factors
considered below can restrict the possible height range over which internal
waves can occur in the solar and, generally, in stellar atmospheres. We start by
looking at the differences in the height dependence of the diagnostic diagram in
both models and how this affects the propagation of internal waves, followed by
the influence of radiative damping and non-linear effects and finally the
presence of magnetic fields. We will see that, while the lower and upper
limiting boundaries of the internal wave cavity are determined by the radiative
damping effects and flow parameters, respectively, the propagation within the
allowed domain is strongly influenced by magnetic fields.
We note that the effect of numerical diffusion becomes important only at scales
of a couple of grid cells. The artificial diffusion in {CO$^{\rm 5}$BOLD} is invoked at
shock fronts or for waves with large amplitudes, where strong gradients of
velocity exist. Since gravity waves do not shock and do not steepen very much,
they are not affected by artificial numerical diffusion; it influences only
waves of short wavelengths, which, however, are irrelevant in this study since
we see the effects of magnetic fields mainly at long wavelengths. Also, current
observations of IGWs do not have the spatial resolution to detect power at such
short wavelengths.
On the other hand, since in our models the horizontal wave number of the propagating IGWs
is smaller than 7 Mm$^{-1}$ (see Fig.~\ref{fig:phase_diff_3heights}a), which corresponds to wavelengths larger than
$\approx 1000$\,km, they are well resolved with the horizontal grid spacing of 80\,km.
Likewise, in the vertical direction, Fig.~\ref{fig:phase_diff_3heights}a together with Eq.~(\ref{eq:disp_relation}) tells us that
$k_z < 40$\,Mm$^{-1}$ corresponding to wavelengths larger than $\approx 160$\,km,
which are well resolved with the present vertical grid spacing of 20\,km.
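These resolution estimates are easily checked from $\lambda = 2\pi/k$ (numbers as quoted above):

```python
import numpy as np

lam_h = 2 * np.pi / 7e-6     # k_h < 7 Mm^-1  -> lambda_h > ~900 km
lam_z = 2 * np.pi / 40e-6    # k_z < 40 Mm^-1 -> lambda_z > ~160 km

cells_h = lam_h / 80e3       # horizontal grid spacing 80 km
cells_z = lam_z / 20e3       # vertical grid spacing 20 km
print(cells_h, cells_z)      # roughly 11 and 8 cells per shortest wavelength
```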
\subsection{Variation of the diagnostic diagram with height}{\label{ss:heightvariation}}
In the case of a convectively stable, uniformly stratified atmosphere, $N^2$ is
positive and constant and an internal wave can freely propagate throughout the
atmosphere. However, in a more realistic atmosphere like the one we simulate,
$N$ varies with height. Variations or discontinuities in $N$ result in partial
reflection or trapping (ducting) of internal waves within the domain. As we have
seen in Section~\ref{ss:kwdiagram}, a spectral band of evanescent disturbances
(white region in Figure~\ref{fig:kwschematic}) separates the gravity-modified
acoustic waves from the internal gravity waves (gray region in
Figure~\ref{fig:kwschematic}). Waves with a specific ($k_{h}$, $\omega$) that
falls in either of the two gray regions of the diagnostic diagram for a certain
height have oscillatory solutions at that particular height and propagate as
waves with their characteristic nature. All other combinations of ($k_{h}$,
$\omega$) are evanescent in the atmosphere. The parameters that set these limits
are mainly $\omega_{\rm ac}$ and $N$, which vary as a function of height in the
real solar atmosphere, leading to changing wave behaviour, i.e., a changing
diagnostic diagram with height.
Figure~\ref{fig:acutoffs} shows the time-averaged $\omega_{\rm ac}$ as a
function of height in the two simulations that are presented in this paper
(black curves). The variation of $\omega_{\rm ac}$ is the result of the changing
temperature and stratification. The $\omega_{\rm ac}$ for the iso-thermal case
is shown in gray which takes into account only the local sound speed and density
scale height.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{vigeesh_f6.pdf}
\caption{
Temporally and horizontally averaged isothermal (gray curves) and the non-isothermal (black curves)
acoustic cutoff ($\omega_{\rm ac}$) frequency as a function of
height in the non-magnetic (dashed) and the magnetic (solid) model
above $z\textnormal{=}0$\,km. The gray and red scatter indicate the
temporal variation of the non-isothermal acoustic cut-off for the
non-magnetic and the magnetic simulations, respectively.}
\label{fig:acutoffs}
\end{figure}
Figure~\ref{fig:nbrunts} shows the time-averaged $N$ as a function of height in
the two simulations. The time-averaged $N$ for the isothermal case is shown in
gray. In both figures, the gray and red scatter show the temporal variation of
the non-isothermal value for the non-magnetic and magnetic models, respectively.
\begin{figure}[ht]
\centering
\includegraphics[width=\columnwidth]{vigeesh_f7.pdf}
\caption{
Temporally and horizontally averaged isothermal (gray curves) and the
non-isothermal (black curves) Brunt-V\"{a}is\"{a}l\"{a} frequency as a
function of height in the non-magnetic (dashed) and magnetic (solid)
models above $z\textnormal{=}0$\,km. The gray and red scatter indicate
the temporal variation of the non-isothermal Brunt-V\"{a}is\"{a}l\"{a}
frequency for the non-magnetic and the magnetic simulations,
respectively.}
\label{fig:nbrunts}
\end{figure}
In order to fully understand the propagation and transport of energy by the two
types of waves, it is important to know the local diagnostic diagram, and thus
the critical frequencies, as a function of height, as
shown in Figures~\ref{fig:acutoffs} and \ref{fig:nbrunts}. Oscillating solutions
to the wave equation for a particular ($k_{h}$, $\omega$) may exist over the
entire domain or only for a particular range of heights. A wave at a particular
height with a frequency that falls in the white region in
Figure~\ref{fig:kwschematic} is partially \textit{reflected} at the respective
limits, as discussed in connection with Equation~(\ref{eq:disp_relation}),
beyond which it becomes \textit{evanescent}. If such a limit exists at another
height for the same wave, and a propagating wave solution exist for the region
between these two heights, then the wave is said to be \textit{trapped}. On the
other hand, if oscillatory solutions exists on either side, then the waves can
\textit{tunnel} through this barrier. Following the above criterion for the
range of wavelengths present in our simulation, the diagnostic diagram can be
separated into different regions for each branch of the acoustic-gravity
spectrum.
According to Figure~\ref{fig:nbrunts}, it is clear that the propagating branch of
internal waves occupies nearly the same region of the $k_{h}\textnormal{-}\omega$
diagram in both models, because $N$ as a function of height is almost identical.
A wave that propagates into a region where it has no oscillatory solution is
partially reflected back towards the propagating region, the rest becoming
evanescent on the opposite side. In our models, these reflecting surfaces for
the internal waves occur in the low photosphere where $N$ sharply drops with
depth\footnote{
\citet{1981ApJ...249..349M}
considered a 1D atmosphere with effects of ionization and external forcing due
to ``turbulent pressure'' which causes a decrease in $N$ with height having the
consequence that the bottom of the chromosphere acts as a reflecting layer for
waves propagating upwards.} (see Figure~\ref{fig:nbrunts}). Trapped internal
waves in our model occupy a very small region in the $k_{h}\textnormal{-}\omega$
diagram with frequencies close to the maximum $N$ in the entire box. Since these
waves have frequencies close to $N$, their phases propagate almost horizontally,
transporting their energy upwards, which makes them important for energy
transport to the upper atmosphere. However, the range of frequencies that are
trapped is very small in both our models, lying within the concave stretch of
$N$ from $z\textnormal{=}$0.4\,Mm to 1.2\,Mm in Figure~\ref{fig:nbrunts}. This
is small compared to previous work, which considered a larger height range with
a sharp decrease in $N$ with height.
From Figure~\ref{fig:nbrunts} it is evident that the reflection that we observe
in the magnetic model cannot be due to the variation of $N$ with height,
because $N$ remains nearly constant higher up in the atmosphere. In our specific
case, only the lower part of the atmosphere acts as a reflecting layer for
internal gravity waves propagating downwards, and the non-magnetic and magnetic
models show a similar variation of $N$ with height.
\subsection{Radiative damping}{\label{ss:damping}}
Internal waves are thought to be generated by overshooting convection into the
stably stratified layer above. While the lower boundary for the waves to exist
is determined by the positivity of $N^2$ (the condition for a convectively
stable region), radiative effects play an important role in damping the waves
higher up in the atmosphere. Near the surface, the radiative relaxation time,
$\tau_{\rm{rad}}$, defined in the optically thin limit as
\citep{1957ApJ...126..202S},
\begin{equation}
\tau_{\rm{rad}} = \frac{\varrho c_{V}}{16\kappa\sigma T^3},
\end{equation}
drops sharply to values of seconds, so that temperature fluctuations are
smoothed out on comparable timescales. However, $\tau_{\rm{rad}}$ rapidly
increases with height again, so that radiative effects have no influence on the
propagation of internal waves in the layers above the mid-photosphere. Internal
waves with periods larger than $\tau_{\rm{rad}}$ are therefore strongly damped
in the near-surface layers. The effect of radiative damping on internal waves
has been extensively studied by
\citet{1982ApJ...263..386M},
who consider a simple linear height-dependent Newtonian cooling and assume
different initial energy fluxes for the waves.
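The scaling of $\tau_{\rm{rad}}$ with the local thermodynamic state can be sketched as follows; the parameter values below are purely illustrative and are not taken from the simulations:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]

def tau_rad(rho, c_v, kappa, T):
    """Optically thin radiative relaxation time (Spiegel 1957):
    tau = rho c_V / (16 kappa sigma T^3)."""
    return rho * c_v / (16.0 * kappa * SIGMA * T**3)

# Purely illustrative parameter values (NOT taken from the models):
t1 = tau_rad(rho=3e-4, c_v=1.2e4, kappa=0.03, T=5800.0)
t2 = tau_rad(rho=3e-4, c_v=1.2e4, kappa=0.03, T=2.0 * 5800.0)
print(t2 / t1)  # doubling T shortens tau_rad by a factor 2^3 = 8
```

The strong $T^{-3}$ dependence is what makes the relaxation time so short near the hot surface and lets it recover rapidly with height.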
The damping ratio, $1/(2N\gamma\tau_{\rm{rad}})$, characterises the effect of radiative
damping on internal waves. Figure~\ref{fig:radiative} shows the damping ratio as
a function of height in both our models. Also shown in gray is the approximation
used by
\citet{1982ApJ...263..386M}
for comparison. It can be clearly seen from the plot that the gravity waves
undergo heavy radiative damping below a height of 0.2\,Mm, where the damping
ratio is above 1. However, the waves are unaffected by radiative damping higher
up in the atmosphere.
\begin{figure}[ht]
\centering
\includegraphics[width=\columnwidth]{vigeesh_f8.pdf}
\caption{
Damping ratio as a function of height in the non-magnetic (dashed)
and magnetic (solid) model. The gray and red scatter indicate the
temporal variation of the damping ratio for the non-magnetic and the
magnetic simulation, respectively. The gray curve represents the
approximation used by
\citet{1982ApJ...263..386M}.}
\label{fig:radiative}
\end{figure}
In the lower atmosphere, it is clear from the phase spectra that we still see
signatures of upward propagating internal waves despite strong radiative
damping. It seems that the internal wave flux generated by convective
overshooting is strong enough that a significant amount of internal waves
survives (see Figure~\ref{fig:phase_diff_3heights}a) in regions where the
damping ratio is above 1. Non-local radiative transfer can have an inverse
effect, in the sense that the spatial temperature fluctuations are enhanced
instead of smoothed, as was conjectured by
\citet{1982ApJ...263..386M};
this needs to be further investigated.
\subsection{Non-linear interaction}{\label{ss:nonlinear}}
Internal waves dissipate their energy by breaking into turbulence. In a
large-eddy simulation like the one we carry out here, wave breaking is very
limited. Nevertheless, it is worthwhile to estimate the effect of different
processes that may lead to the breaking of internal waves into turbulence or to
the formation of critical layers. The most important among them is the effect
of a background flow, such as the presence of a strong shear flow or vorticity.
A `critical level' is the level at which the mean flow speed becomes comparable
to the horizontal phase speed of the wave. In the case of a background
plane-parallel shear flow, the height at which the horizontal phase speed
becomes comparable to the background flow speed will act as a critical layer,
resulting in the reflection of waves. The
importance of shear flows for gravity waves can be characterized by the
Richardson number (${\rm Ri}$), defined as,
\begin{equation}
{\rm Ri} = {N^2}/{\left(\frac{{\rm d} v_{h}}{{\rm d} z}\right)^2},
\end{equation}
where $v_h$ is the horizontal component of the velocity. The estimated value of
${\rm Ri}$ in our model atmosphere is everywhere larger than 0.25
\citep[see e.g.,][]{1988PApGe.126..103L},
suggesting that the atmosphere is dynamically stable and shear flows that are
strong enough to lead to dynamical instabilities do not exist.
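For reference (this is the standard Miles--Howard threshold, applied here rather than derived; our addition for clarity), linear instability of a stratified shear flow requires
\begin{equation}
{\rm Ri} < \frac{1}{4}
\end{equation}
somewhere in the flow; ${\rm Ri}>1/4$ everywhere is therefore a sufficient condition for stability, which is the criterion used above.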
Another stability condition considered by
\citet{1981ApJ...249..349M}
is the ratio of the wave vorticity, $\zeta$, and $N$.
Figure~\ref{fig:non-linear} shows the ratio of the average fluid vorticity and
$N$ as a function of height. We find that the ratio, $\zeta/N$, is small in both
models above 0.1\,Mm, suggesting that instabilities do not develop as a result
of the flow vorticity in our models. Note, however, that $\zeta/N$ is larger in
the magnetic model than in the non-magnetic one and increases with height,
probably because of the generation of vorticity by the magnetic field in the
low-$\beta$ regime
\citep{2011A&A...526A...5S,
2012ASPC..456....3S,
2012Natur.486..505W}.
We also observe that the vortices in the non-magnetic model near the surface are
larger than in the magnetic model, as also reported in observations by
\citet{2016ApJ...824..120S}.
\begin{figure}[ht]
\centering
\includegraphics[width=\columnwidth]{vigeesh_f9.pdf}
\caption{
Non-linearity parameter ($\zeta/N$) as a function of height in the
non-magnetic (dashed) and magnetic (solid) models. The gray and red
scatter indicate the temporal variation of the ratio $\zeta/N$ for the
non-magnetic and the magnetic simulation, respectively.}
\label{fig:non-linear}
\end{figure}
We recall that the simulations presented in this paper were carried out on a
coarse grid of 80\,km cell size in the horizontal directions which only
marginally captures the development of strong vortical flows. A higher spatial
resolution would likely result in vortical flows having a stronger effect on the
internal waves. In fact, high-resolution simulations with a smaller box size
than the one presented here show that $\zeta/N$ rises above 1 in the top layers
where magnetic fields are present. As can be seen in
Figure~\ref{fig:non-linear}, there are instances when $\zeta/N$ exceeds 1
close to the top boundary in the magnetic model. This implies that vortical
motions must be considered a possible reason why internal waves are absent in
the magnetic model.
\subsection{Linear mode coupling}{\label{ss:coupling}}
The presence of magnetic fields itself may play a significant role in modifying
the nature of internal waves in places where they exist.
\citet{2010MNRAS.402..386N,
2011MNRAS.417.1162N}
considered internal wave propagation in a VAL-C solar reference atmosphere,
containing a uniform magnetic field with different field inclinations. Using
generalized ray theory and with the help of linear simulations, they show that
the internal waves are reflected within the region where plasma
$\beta$\textgreater1, and convert to downwardly propagating slow waves
(predominantly magnetic in nature). The presence of strongly inclined fields
(with an inclination of 80$^\circ$ or more) in these regions can modify the
waves and convert them to acoustic (in case of 2D) or Alfv\'{e}n waves (in case
of 3D) and guide them along the field lines with radiative damping playing only
a minor role
\citep{2011MNRAS.417.1162N}.
In more realistic simulations like the one we consider in this paper, it is
difficult to specify an average height of the plasma $\beta\textnormal{=}1$
surface or a characteristic inclination of the magnetic fields. The magnetic
fields are continuously shuffled and reformed in the inter-granular lanes
forming a complex structure as shown in Figure~\ref{fig:mag_field_snapshot}. In
order to show how the plasma $\beta\textnormal{=}1$ surface or the magnetic
field inclination vary, we compute the average values of $\beta$, the sound
speed, $c_{\rm s}$, the magnitude of the Alfv\'{e}n velocity, $v_{\rm A}$, the
vertical component, $B_{\rm v}$, and the horizontal component of magnetic field,
$B_{\rm h}$, given as $B_{\rm h}^2\textnormal{=}B_{\rm x}^2+B_{\rm y}^2$ over
the entire simulation run as a function of height. Figure~\ref{fig:beta_csva}
shows the plasma $\beta$ (dashed) and the ratio of $c_{\rm s}$ to $v_{\rm A}$
(solid), as a function of height, averaged over horizontal planes and in time
over the entire simulation. Also shown is the temporal scatter of
$c_{\rm s}$/$v_{\rm A}$ (light gray) and of plasma $\beta$ (red). From
Figure~\ref{fig:beta_csva}, it is evident that the domain below
$z\textnormal{=}0.8$\,Mm is gas dominated, although there are localized regions
of strong magnetic field that locally dip the $\beta\textnormal{=}1$ surface down to $z\textless 0$.
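For clarity (this is the standard definition, assumed rather than stated explicitly in the text), the plasma $\beta$ used here is the ratio of gas to magnetic pressure,
\begin{equation}
\beta=\frac{p_{\rm gas}}{p_{\rm mag}}=\frac{8\pi\,p_{\rm gas}}{B^{2}},
\end{equation}
so that the $\beta\textnormal{=}1$ layer approximately coincides with the $c_{\rm s}\textnormal{=}v_{\rm A}$ surface, since $\beta\propto(c_{\rm s}/v_{\rm A})^2$.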
According to
\citet{2010MNRAS.402..386N},
internal waves in our model are less likely to be present above
$z\textnormal{=}0.7$\,Mm as most of them will undergo conversion to slow
(predominantly magnetic) waves and reflect back before reaching this height.
Our simulation also shows a significant horizontal component of the magnetic
field at photospheric heights, in agreement with recent observations of the
solar atmosphere
\citep{2008ApJ...672.1237L,
2012ApJ...751....2O}.
Figure~\ref{fig:b_incl} shows the average horizontal (solid curve) and vertical
component (dashed) of the magnetic field along with the average field
inclination (dotted) and its temporal scatter shown in gray. The vertical
component of the magnetic field dominates in the entire domain mainly due to the
relatively strong (50\,G) uniform vertical field of the initial configuration.
However, the fields tend to be inclined around 0.5\,Mm, with a maximum average
inclination of $40^{\circ}$, which can act as a portal for internal
waves to escape into the layers above and convert to acoustic and Alfv\'{e}n
waves. This conversion is highly dependent on the field angle, and our model
does not contain strongly inclined fields (inclinations above 80$^\circ$) that
would facilitate this pathway. From the phase spectrum analysis, we do not see
a strong transmission of the internal waves into the upper atmosphere
(see Figure~\ref{fig:phase_diff_3heights}b,c, right panels). We conclude that
most of the waves sense the $c_{\rm s}\textnormal{=}v_{\rm A}$ surface and are
reflected back within the high-$\beta$ region, but we cannot say whether this is
due to mode coupling or non-linear shear flow interaction.
\begin{figure}[ht]
\centering
\includegraphics[width=\columnwidth]{vigeesh_f10.pdf}\\
\caption{
Snapshot of the absolute magnetic field strength, $|B|$, at
$t\textnormal{=}4$\,h in the simulation, taken at
$z\textnormal{=}0$\,km.}
\label{fig:mag_field_snapshot}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=\columnwidth]{vigeesh_f11.pdf}\\
\caption{
Temporally and horizontally averaged ratio $c_{\rm s}/v_{\rm A}$ (solid curve)
and plasma $\beta$ (dashed curve) in the magnetic simulation. The red and gray scatter show the temporal variation of
$c_{\rm s}/v_{\rm A}$ and plasma $\beta$, respectively.}
\label{fig:beta_csva}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=\columnwidth]{vigeesh_f12.pdf}\\
\caption{
Temporally and horizontally averaged components of the magnetic
field in the magnetic simulation. The vertical field is shown as a
dashed curve and the horizontal field is shown as a solid curve. The
dotted curve shows the average inclination of the field from the
vertical, and the gray curves show the temporal scatter of the average
inclination.}
\label{fig:b_incl}
\end{figure}
\section{Summary and Conclusion}{\label{s:conclusion}}
Internal gravity waves in the solar atmosphere are thought to be generated
mainly by the overshooting of convective matter into the stably stratified
atmosphere lying above. Strong radiative cooling in the immediate vicinity of
the solar surface causes these waves to quickly damp, but they are believed to
be present higher up in the atmosphere where the radiative timescales are large.
Theoretical studies show that the flow field higher up in the atmosphere may
lead to the breaking of internal waves into turbulence, resulting in a complete
dissipation of their energy in the mid-chromosphere before they even reach
coronal heights. Additional complications are brought about by the presence of
magnetic fields in this region, questioning their ability to transport energy in
the solar atmosphere at all. A clear understanding of the gravity-wave phenomena
occurring in the lower solar atmosphere requires a comprehensive treatment in
three dimensions, including the effects of magnetic fields, non-local radiative
transfer, and a realistic equation of state.
In this paper, we have presented a study of the acoustic-gravity wave spectrum
emerging from a realistic simulation of solar convection. A purely hydrodynamic
and an MHD simulation were carried out to highlight the effect of the magnetic
fields on the propagation of internal waves. The generated internal waves in
both models are studied in the spectral-domain by looking at the emergent phase
spectra between two heights in the atmosphere and estimating the energy flux
spectra. These studies were carried out in the light of the observations by
\citet{2008ApJ...681L.125S}
that the gravity waves are suppressed at locations of magnetic flux. These
authors assumed that the suppression is a result of mode conversion of internal
waves to Alfv\'{e}n waves.
Our analysis shows that the internal waves are generated in both models and
overcome the strong radiative damping in the lower photosphere to propagate into
the higher layers. The radiative damping is strong below $z\textnormal{=}0.2$\,Mm, but the phase
difference spectra show signatures of these waves even below this height,
suggesting that the generation mechanism imparts enough energy to the waves to
overcome the strong radiative damping. However, the magnetic fields
affect these waves as they propagate higher up in the atmosphere as evident from
the differences between the phase difference spectra of the non-magnetic and the
magnetic model. We explore different causes that may lead to the observed
signatures and the differences in the phase difference spectra of the waves. We
conclude that the internal waves in the quiet Sun most likely undergo mode
coupling to the slow magneto-acoustic waves as described by
\citet{2010MNRAS.402..386N,
2011MNRAS.417.1162N}
and are mostly reflected back into the atmosphere. Looking at the height
dependence of the phase spectra, we confirm that this reflection happens well
within the region where the average plasma-$\beta$ is larger than 1 (i.e. within
the gas-dominated region), consistent with the mode-coupling scenario. This is also
in agreement with the energy flux spectra, which show a mixed upward and
downward transport of energy in the internal gravity wave regime for the
magnetic case in the higher layers. Since the magnetic fields in our model are
mostly vertical, conversion to Alfv\'{e}n waves is highly unlikely; such
conversion is not facilitated unless a significantly inclined magnetic field is
present. The effect of the horizontal fields on the propagation
of internal waves will be explored in a later paper. We also note that the
strong suppression that is observed within magnetic flux concentrations
\citep{2008ApJ...681L.125S}
may be the effect of non-linear wave breaking due to the vortex flows that are
ubiquitously present in these regions. We also find that the surface-gravity
waves are strongly suppressed in the magnetic model as we go higher up in the
atmosphere, likely due to the strong vertical component of the magnetic field.
The analysis presented in this paper is based on models computed with different
numerical solvers, which resulted in a smaller size of the granules in the
non-magnetic run. However, a preliminary study using the identical MHD solver for
both runs shows that the particular propagation properties of internal waves
that are found in this paper are independent of the solver. In that case the
granules have the same size, matching those seen in the magnetic model of the
present paper.
This analysis has shown that the internal waves are strongly affected by the
magnetic fields present on the Sun. Recognizing that a considerable amount of
internal wave flux is produced in the near surface layers, and that these waves
can couple with other magneto-atmospheric waves, it is important to fully
understand the transfer of energy from these waves to other waves in the
atmosphere of the Sun. In a broader context, a clear insight into the internal
wave spectrum will help supply a missing link in our understanding of the
different wave phenomena in the solar atmosphere and their individual roles in
heating the upper atmosphere, either directly or indirectly.
\acknowledgements
This work was supported by a NASA EPSCoR award to NMSU under contract
No.\,NNX09AP76A and NSF PAARE award AST-084 9986. The research leading to these
results has received funding from the European Research Council under the
European Union's Seventh Framework Programme (FP7/2007-2013) / ERC Grant
Agreement n.\,307117 and n.\,312844.
We especially thank Stuart Jefferies for his talk at the Fifty Years of Seismology of the Sun and Stars conference in Tucson,
that stimulated our interest in studying internal gravity waves.
The authors are grateful to Bernhard Fleck for detailed comments on a draft of this paper.
GV acknowledges the helpful discussions with
Markus Roth, Nazaret Bello Gonz{\'a}lez, Patrick Gaulme, Thierry Appourchaux, and the
{CO$^{\rm 5}$BOLD} community.
We would like to thank the anonymous referee for his/her detailed comments, which helped us to improve the paper.
\software{CO$^{\rm 5}$BOLD \citep{2012JCoPh.231..919F}}
\bibliographystyle{aasjournal}
\section{Initial Comments \label{com}}
In years past we have been working on weak-interaction inverse beta
decay in the presence of various collective modes of motion in
condensed matter systems. Our considerations have been recently
criticized\cite{Ciuchi:2012}. The difference of opinion on the rate
of neutron production in hydride battery cathodes has a brief
history
starting from a talk at Roma La Sapienza by Y. Srivastava cited by
Ciuchi {\it et al}\cite{Ciuchi:2012}.
At this talk, a discussion arose where some of the authors of \cite{Ciuchi:2012} mentioned
disagreements with the results presented by YS by factors of $10^{40}$, later reduced to
a factor of $10^{20}$.
We pointed out that our estimates were based on an actual calculation of the collective process
and directed them to references \cite{WL1:2006,WL2:2007}, where inverse beta decay had
been proposed as a mechanism to activate neutron production. Subsequently, some members of
that group calculated the inverse beta decay and in an internal report concluded that we were still
$10^7$ high in our estimates of neutron production rates\cite{Polosa}.
Towards the goal of reaching full agreement, we suggested that they work in analogy
to the muon inverse decay process. As a result \cite{Ciuchi:2012}, the initial
disagreement in neutron production rates between us has presently come {\em way down from
forty to a mere two orders of magnitude}.
Our purpose in this note is to give in public the last needed corrections to the
Ciuchi {\it et al} model that would bring the results of their calculation
into line with theoretical results in collective mode studies on this subject\cite{WL1:2006,WL2:2007}
and the most recent experimental findings\cite{Cirillo:2012}. A complete discussion of the issues involved is
under preparation and will be presented shortly.
\section{Danger in the Numbers \label{DN}}
The Ciuchi {\it et al} team asserts that the factor of two or three orders
of magnitude would render the inverse beta decay unobservable.
Fortunately, they {\it are completely incorrect} in this regard. Experiments
carried out in Naples by D. Cirillo {\it et al}\cite{Cirillo:2012} have
actually observed both nuclear transmutations and actual neutrons in
metallic hydride battery cathodes. Even if our theoretical neutron
counting rates were high by a factor of \begin{math} 300 \end{math},
Cirillo {\it et al} could still, and indeed did, experimentally observe nuclear
transmutations.
Ciuchi {\it et al} use our numbers from papers dealing with other applications
but {\it not} batteries. For example, they start
from the neutron production rate given by the time-honored formula
\begin{equation}
\Gamma (e^- p^+ \to n + \nu_e )=|\psi (0)|^2 v \sigma
\label{Ciuchi1}
\end{equation}
wherein the amplitude for finding one electron at position
\begin{math} {\bf r} \end{math} and one proton at position
\begin{math} {\bf R} \end{math} is
\begin{equation}
\psi =\psi ({\bf r}-{\bf R}),
\label{Ciuchi2}
\end{equation}
\begin{math} v \end{math} is the relative velocity and
\begin{math} \sigma \end{math} is the \begin{math} e^- p^+ \end{math}
cross section.
The relative velocity value employed by Ciuchi {\it et al} is copied from
our paper on exploding wires\cite{explode}, thus arriving at a theory of
exploding batteries. Similar absurdities would arise from Ciuchi {\it et al}
taking our numbers from a paper describing neutron rates in lightning bolts.
All these papers of ours are cited and numbers are copied from them, even though
they are clearly irrelevant for describing neutron production on metal hydride
cathodes.
\section{Many Body Wave Functions \label{mbw}}
The wave function problem not properly taken into account by Ciuchi {\it et al}
is that the time honored Eqs.(\ref{Ciuchi1}) and (\ref{Ciuchi2}) hold
true if and only if there is precisely one electron and one proton in the
initial incoming quantum state. If one is trying to treat
\begin{math} N \end{math} protons and \begin{math} N \end{math} electrons
then the charge neutral wave function Eq.(\ref{Ciuchi2}) would have to be
replaced by
\begin{equation}
\Psi=\Psi({\bf r}_1,{\bf r}_2,\ldots,{\bf r}_N,{\bf R}_1,{\bf R}_2,\ldots,{\bf R}_N)
\label{mbw1}
\end{equation}
with spins and other degrees of freedom left implicit.
Thus, for (say) \begin{math} N\sim 10^{16} \end{math} electrons
participating in a surface plasmon, the probability \begin{math} |\psi(0)|^2 \end{math}
employed by Ciuchi {\it et al} does not in reality exist. The many body version of the
probability of finding an electron on top of a proton is described by the correlation
function
\begin{equation}
C= \frac{1}{N} \left<\Psi \right| \sum_{i=1}^N \sum_{j=1}^N
\delta({\bf r}_i-{\bf R}_j)\left|\Psi \right>
\label{mbw2}
\end{equation}
or the quantum field theory equivalent.
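As a consistency check (our illustration; this limit is implicit in the text above), Eq.(\ref{mbw2}) reduces to the two-body probability when only a single electron-proton pair is present:
\begin{equation}
C\Big|_{N=1}=\left<\Psi\right|\delta({\bf r}_1-{\bf R}_1)\left|\Psi\right>
\propto |\psi(0)|^2 ,
\end{equation}
up to the normalization of the center-of-mass motion; the disagreement therefore concerns only the many-body (\begin{math} N\gg 1 \end{math}) generalization and not the \begin{math} N=1 \end{math} limit itself.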
What is crucial here is that the cathode is hot: sufficiently hot to glow optically
and light up the laboratory. Thus one must employ a thermal average
\begin{equation}
C_T= \frac{1}{N} \left< \sum_{i=1}^N \sum_{j=1}^N
\delta({\bf r}_i-{\bf R}_j)\right>_T
\label{mbw3}
\end{equation}
at an optical noise temperature that we have theoretically estimated\cite{WL2:2007}
to be \begin{math} T\sim 5000\,{\rm K} \end{math}, in agreement with experiment\cite{Cirillo:2012}.
As one must, we employ
\begin{math} C_T \end{math} and {\em not} \begin{math} |\psi (0)|^2 \end{math}
for the plasma physics problem at hand. It is this truncation from the many body collective
aspect [\begin{math}C_T\end{math}] to the two body
[\begin{math}|\psi(0)|^2\end{math}] that is at the heart of the difference
between their and our estimates of the rates. The plasmon modes contributing to
Eq.(\ref{mbw3}) determine the parameter \begin{math} \beta \end{math} as shown in
our work\cite{WL1:2006,WL2:2007} on metal hydride cathodes.
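Schematically (our paraphrase of the argument above, not an equation taken from the cited papers), the collective rate is obtained by replacing the two-body contact probability with the thermal correlation function,
\begin{equation}
\Gamma_{\rm collective}\approx C_T\, v\, \sigma ,
\end{equation}
so that the enhancement over the estimate of Eq.(\ref{Ciuchi1}) is the ratio \begin{math} C_T/|\psi(0)|^2 \end{math}.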
\section{Concluding Statement \label{conc}}
No significant argument has been provided against our nuclear physics results.
The experimental evidence of neutron production and nuclear transmutations in
properly designed plasma discharge electrolytic cells\cite{Cirillo:2012} agrees with
our theoretical analysis and belies the theoretical arguments given in \cite{Ciuchi:2012}
against a hefty production of neutrons in hydride cells.
BMW Group founds company: IDEALworks GmbH to develop and distribute innovative robots and management software for logistics solutions
20.11.2020 Press Release
• Further development and distribution of in-house developed Smart Transport Robot, STR
• Next generation of STR to launch end of 2020
Munich. The BMW Group breaks new ground in the field of logistics as it founds IDEALworks GmbH – a fully-owned subsidiary headquartered in Munich. The aim is to become a leading supplier of autonomous robotics solutions in the logistics sector. The name "IDEAL" stands for Industry Driven Engineering for Autonomous Logistics.
"In founding IDEALworks, we are creating a new business segment for our logistics solutions. In recent years, our logistics innovation team has been working in depth on the digitalization and automatization of production logistics and has developed some unique solutions. The Smart Transport Robot, STR, in particular has met with great response and has seen demand from both within and outside of the BMW Group. Founding IDEALworks GmbH is now the logical next step for the BMW Group as a driver of innovation," explained Milan Nedeljković, the member of the Board of Management of BMW AG responsible for Production, to mark its foundation.
"We are entering completely new terrain with IDEALworks GmbH. Up until now, our development has focused on automotive production and its logistics," said Jimmy Nassif, CTO IDEALworks GmbH. He continued: "Our perspective is changing now. We are becoming a provider of logistics robotics beyond the automotive industry. We are preparing some innovations for the coming months."
Since 2015, the innovations team from BMW Group Logistics has been working on future-focused industry 4.0 solutions in the fields of virtual reality, augmented reality, in- and outdoor logistics robots, paperless logistics and smart devices. Many of these solutions are already in series production at BMW Group production locations. In 2019, BMW Group Logistics received the prestigious Deutscher Logistik Preis [German Logistics Award]. The Smart Transport Robot and its management software were also recognized as part of this award.
IDEALworks GmbH launches its first product with the Smart Transport Robot
The Smart Transport Robot, STR, was developed in 2015 in collaboration with the Fraunhofer Institute. The flat, autonomous and mobile robots can transport goods weighing up to one ton to their destination. They independently calculate the best route and move freely around the space using the SLAM (Simultaneous Localization and Mapping) navigation method. The SLAM algorithm does not require permanent navigation transmitters to be installed in buildings and can therefore be set up quickly in a new environment without requiring any structural adjustments. An integrated battery module from the BMW i3 is able to supply the STR with power for at least an entire shift. The next generation of the STR will be rolled out at the end of 2020. Currently, more than 130 STRs are already in series production at several different BMW Group production sites.
Successful pilot projects in the non-automotive sector
"With the Smart Transport Robot, we have launched a highly competitive product. From October onwards, we have been carrying out pilot projects at companies from a wide range of industries. These trials show just how robust and versatile the STR is," explained Markus Bauer, COO IDEALworks GmbH. "The success of the pilot project and the resulting demand for the STR were decisive in founding IDEALworks GmbH. We want to develop IDEALworks into a top player among the providers of industrial logistics robots in the long term," Bauer continued.
The new company IDEALworks GmbH is located in Munich.
In its initial phase, the team consists of around 30 experts from a wide range of fields and nationalities.
Get in touch with IDEALworks here:
www.idealworks.com
Communications at IDEALworks GmbH:
hello@idealworks.com
If you have any questions, please contact:
Hanns Huber, Communications Production Network BMW Group
Telephone: + 49 89 382-31181
E-Mail: Hanns.HA.Huber@bmw.de
Julian Friedrich, Head of Communications Production Network BMW Group
E-Mail: Julian.Friedrich@bmw.de
Internet: www.press.bmw.de
E-Mail: presse@bmw.de
With its four brands BMW, MINI, Rolls-Royce and BMW Motorrad, the BMW Group is the world's leading premium manufacturer of automobiles and motorcycles and also provides premium financial and mobility services. The BMW Group production network comprises 31 production and assembly facilities in 15 countries; the company has a global sales network in more than 140 countries.
In 2019, the BMW Group sold over 2.5 million passenger vehicles and more than 175,000 motorcycles worldwide. The profit before tax in the financial year 2019 was € 7.118 billion on revenues amounting to € 104.210 billion. As of 31 December 2019, the BMW Group had a workforce of 126,016 employees.
The success of the BMW Group has always been based on long-term thinking and responsible action. The company has therefore established ecological and social sustainability throughout the value chain, comprehensive product responsibility and a clear commitment to conserving resources as an integral part of its strategy.
LinkedIn: https://www.linkedin.com/company/bmw-group/
#pragma once
#ifndef EGLPLUS_COLOR_BUFFER_TYPE_1303292057_HPP
#define EGLPLUS_COLOR_BUFFER_TYPE_1303292057_HPP
#include <eglplus/enumerations.hpp>
namespace eglplus {
/// EGL color_buffer_type enumeration
/**
* @ingroup eglplus_enumerations
*/
EGLPLUS_ENUM_CLASS_BEGIN(ColorBufferType, EGLenum)
#include <eglplus/enums/color_buffer_type.ipp>
EGLPLUS_ENUM_CLASS_END(ColorBufferType)
#if !EGLPLUS_NO_ENUM_VALUE_NAMES
#include <eglplus/enums/color_buffer_type_names.ipp>
#endif
#if !EGLPLUS_ENUM_VALUE_RANGES
#include <eglplus/enums/color_buffer_type_range.ipp>
#endif
} // namespace eglplus
#endif // include guard
import React, { Component, PropTypes } from 'react';
import { connect } from 'react-redux';
import { push } from 'react-router-redux';
import { logout } from 'redux/modules/auth';
import { open, close, set } from 'redux/modules/drawer';
import { display as displaySnack } from 'redux/modules/snackbar';
import { updateIntl } from 'react-intl-redux';
import AppBar from 'material-ui/AppBar';
// import Badge from 'material-ui/Badge';
import IconButton from 'material-ui/IconButton';
import FlatButton from 'material-ui/FlatButton';
import Drawer from 'material-ui/Drawer';
import MenuItem from 'material-ui/MenuItem';
import Divider from 'material-ui/Divider';
import NavigationMenu from 'material-ui/svg-icons/navigation/menu';
import Person from 'material-ui/svg-icons/social/person';
import Home from 'material-ui/svg-icons/action/home';
import HelpOutline from 'material-ui/svg-icons/action/help-outline';
import ShoppingCart from 'material-ui/svg-icons/action/shopping-cart';
import Favorite from 'material-ui/svg-icons/action/favorite';
import Forum from 'material-ui/svg-icons/communication/forum';
import Business from 'material-ui/svg-icons/communication/business';
import config from '../../config';
import theme from '../../theme/mui-theme';
import itMessages from '../../i18n/it-messages';
import styles from './NavigationBar.scss';
// TODO It might be wise to further separate session handling from the navigation.
// TODO This component need a lot of splitting up
@connect(state => ({ auth: state.auth.user, isOpen: state.drawer.open }), { pushState: push, logout, open, close, set, displaySnack, updateIntl })
export default class NavigationBar extends Component {
static propTypes = {
auth: PropTypes.object,
isOpen: PropTypes.bool,
open: PropTypes.func.isRequired,
close: PropTypes.func.isRequired,
set: PropTypes.func.isRequired,
displaySnack: PropTypes.func.isRequired,
logout: PropTypes.func.isRequired,
pushState: PropTypes.func.isRequired,
updateIntl: PropTypes.func.isRequired
}
// Display Snackbar and redirect on login / logout
componentWillReceiveProps(nextProps) {
// Login in progress
if (!this.props.auth && nextProps.auth) {
this.props.displaySnack('You have signed in!', 5000);
this.props.pushState('/');
}
// Logging out
else if (this.props.auth && !nextProps.auth) {
this.props.displaySnack('You have signed out!', 5000);
this.props.pushState('/');
}
}
logout(event) {
event.preventDefault();
this.props.logout();
this.props.close();
}
goto(url) {
this.props.pushState(url);
this.props.close();
}
renderActions() {
const { auth } = this.props;
return (
<div>
<FlatButton label="EN" onTouchTap={() => this.props.updateIntl({ locale: 'en' })} />
<FlatButton label="IT" onTouchTap={() => this.props.updateIntl({ locale: 'it', messages: itMessages })} />
<IconButton onTouchTap={() => this.goto(auth ? '/account' : '/login')}>
<Person color={theme.palette.accent2Color} />
</IconButton>
{auth &&
<IconButton onTouchTap={() => this.goto('/favourites')}>
<Favorite color={theme.palette.accent2Color} />
</IconButton>
}
<IconButton onTouchTap={() => this.goto('/cart')}>
<ShoppingCart color={theme.palette.accent1Color} />
</IconButton>
</div>
);
}
render() {
const { auth, isOpen, open, set } = this.props;
return (
<div>
<AppBar
title={<span className={styles.header}>{config.app.title}</span>}
onTitleTouchTap={() => this.props.pushState('/')}
iconElementLeft={<IconButton onTouchTap={open}><NavigationMenu /></IconButton>}
iconElementRight={this.renderActions()}
/>
<Drawer docked={false} open={isOpen} onRequestChange={set}>
<MenuItem leftIcon={<Home />} onTouchTap={() => this.goto('/')} primaryText="Home" />
<Divider />
<MenuItem leftIcon={<Forum />} onTouchTap={() => this.goto('/blog')} primaryText="Blog" />
<MenuItem leftIcon={<Forum />} onTouchTap={() => this.goto('/faq')} primaryText="FAQ" />
<MenuItem leftIcon={<Business />} onTouchTap={() => this.goto('/contact')} primaryText="Contact Us" />
<MenuItem leftIcon={<HelpOutline />} onTouchTap={() => this.goto('/about')} primaryText="About Us" />
<Divider />
{!auth && <MenuItem leftIcon={<Person />} onTouchTap={() => this.goto('/login')} primaryText="Login" />}
{!auth && <MenuItem leftIcon={<Person />} onTouchTap={() => this.goto('/register')} primaryText="Register" />}
{/* If logged in */}
{auth && <MenuItem leftIcon={<Person />} onTouchTap={() => this.goto('/account')} primaryText="Account" />}
{auth && <MenuItem leftIcon={<Person />} onTouchTap={event => this.logout(event)} primaryText="Logout" />}
</Drawer>
</div>
);
}
}
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 8,222
|
Capellas: Homeland defense is teamwork
Hewlett-Packard President Michael Capellas tells attendees at a conference that the government will need to adapt to integrate technology into its homeland security efforts.
Margaret Kane
July 18, 2002 1:45 AM PDT
WASHINGTON--The challenge for the government in adapting technology to homeland defense lies not with the technology, but with the policies behind it, Hewlett-Packard President Michael Capellas told attendees at a conference here.
"This is not a technology problem," Capellas said on Thursday. "We have the technology. And it's really not a money issue. It's an integration issue. It's how do we get all this data to work together and how do we take the steps together to execute."
Capellas' comments echoed statements made by Thomas Siebel at his keynote earlier in the day at the E-Gov 2002 conference. Siebel demonstrated his company's homeland security product, built off of existing customer relationship management software. Those sorts of products may require agencies to share information that has previously been insulated.
"We have a culture in all our judicial, governmental and law enforcement agencies of keeping information private," Siebel said, "That may have to change."
Another major problem that needs to be resolved is government procurement policy, which leads to piecemeal approaches that end up being more expensive than integrated efforts, Capellas said.
"Government RFPs (request for proposals) and procurement cycles are absolutely not the right thing," said Capellas, receiving applause from the audience. They "actively encourage you not to put pieces together but to sell each piece at the cheapest possible price, even if it will cost more to put it together."
Capellas, coming off the deeply complicated HP and Compaq Computer merger, offered his sympathies to those in charge of putting together the new department of homeland defense. "I share your pain," he said, noting the problems of combining hundreds of thousands of workers with different goals and agendas.
And he acknowledged that while he was "particularly pleased" with HP's recent launch as a newly combined business, he was prepared for the "psychological lull" that could follow.
"You will always go through a bit of growing pains on that phase," Capellas said.
His advice to the new department: Encourage people to believe that they are all on the same team, integrate security procedures, and agree on common data that can be shared between teams.
"It doesn't mean you have to break down sovereignty, but you have to agree on what data to share," Capellas said.
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 2,470
|
Constancia (formerly called Constancia-Algodonal) is a Peruvian town in the Fernando Lores district, located in the province of Maynas, in the department of Loreto.
It arose from the settlements of workers who came to the area under contract to various companies during the rubber boom in the Peruvian rainforest.
Geography
The town of Constancia is located in the south of the province of Maynas, Loreto region, in northeastern Peru, within the Fernando Lores district, very close to the Tamshiyacu stream.
Most of the terrain is jungle, as it belongs to the vast territory of the Amazon.
Climate
Lying near the equator, Constancia has a tropical rainy climate, with temperatures ranging from 15 °C to 31 °C. The average annual temperature in Constancia is 23 °C, with an average relative humidity of 115%. The rainy season runs from November to May, with the river network at its highest point in May and at its lowest level in October.
Notable people
La Tigresa del Oriente
References
Localities of the department of Loreto
|
{
"redpajama_set_name": "RedPajamaWikipedia"
}
| 6,686
|
package commands;
import org.eclipse.core.commands.ExecutionEvent;
import org.eclipse.core.commands.ExecutionException;
import org.eclipse.core.commands.IHandler;
import org.eclipse.core.commands.IHandlerListener;
public class GenerateGraphviz extends UmpleSuperHandler implements IHandler {
  @Override
  public void addHandlerListener(IHandlerListener handlerListener) {
    // This handler keeps no listener state, so there is nothing to register.
  }
  @Override
  public void dispose() {
    // No resources to release.
  }
  @Override
  public Object execute(ExecutionEvent event) throws ExecutionException {
    // Compile the active Umple file into the Graphviz diagram formats.
    compileUmpleFile(event, true, "GvStateDiagram", "GvClassTraitDiagram",
        "GvClassDiagram", "GvEntityRelationshipDiagram");
    return null;
  }
  @Override
  public boolean isEnabled() {
    return true;
  }
  @Override
  public boolean isHandled() {
    return true;
  }
  @Override
  public void removeHandlerListener(IHandlerListener handlerListener) {
    // No listener state to remove.
  }
}
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 3,038
|
Giving youth a chance to explore digital space
PHIRI CAWE
Excited to be part of Innovation Hub Africa, held at the Philippi Business Hub, the new students hope to break into the business world and find employment after completing the programme.
To help bridge the gap between high school, tertiary education and employment, Hillsong Africa has launched Innovation Hub Africa, a programme to equip young people with the skills required to pursue careers in the digital sphere.
Through the initiative, now in its second year, training is offered at no cost to the students and is scheduled to run for one year.
The project aims to inspire, educate and enable young unemployed people, aged 18 to 25, from the townships.
Facility manager Brandon Vergotine said his organisation hoped to assist aspiring innovators.
"That's a great opportunity for young aspiring entrepreneurs or business people.
"With the programme there will come a lot of opportunities, of course," he said.
The training focuses on leadership, identity and digital marketing skills, as well as other soft skills. Project leader Maurisa Moloto said the students would be able to access opportunities that would unlock the potential of the next generation, ultimately connecting them to the local and global digital economy.
"We are very excited to see the second intake. One of the exciting things is to start their own businesses. With mentorship and leadership skills they will get here. All is possible," said Ms Moloto.
The students were excited to be part of the initiative.
Zimkhitha Jali said her challenge was marketing herself and her skills.
"I hope this would be the best way to reach many people out there. Digital is the way to go. People live with technology these days. So this will be helpful to reach my goals. I am interested in digital marketing for that reason.
"Hopefully I will benefit something," she said.
Former student and innovator Mthuthuzeli Jodo called on the new intake to take the programme seriously, saying they should be proud of themselves and where they come from.
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 3,777
|
@implementation XGModel
//MJCodingImplementation
// Tells MJExtension that elements of the `modelArray` property
// should be deserialized as MyModel instances.
+ (NSDictionary *)objectClassInArray
{
    return @{ @"modelArray": [MyModel class] };
}
@end
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 9,588
|
Translation of the word valentine, with American and British pronunciation, transcription, collocations, and usage examples.
The Valentines: an Australian soft-rock band active from 1966 to 1970, known as one of the early bands of the late AC/DC vocalist Bon Scott.
Valentine's Day is a day of love that is celebrated by lovers around the world. Celebrated on the 14th of February each year, this year it will be celebrated on a Friday.
What are Valentines? Come February 14, every young heart throbbing with love and romance sums up all its passion in words to create that magical thing called the "Valentine".
Valentines vectors and photos - free graphic resources. 14,387 Valentines graphics. Heart wreath valentine background, 1 year ago.
Pharmercy Valentines. "Let's keep the skies clear together and forever!" Drew my favorite couple for this year's Valentines Day!
Valentines is... What does "Valentines" mean?
Valentines is located in Vienna. Free WiFi access is available in this homestay. The accommodation will provide you with a TV and a DVD player.
St. Valentine's Day. ENGLISH LANGUAGE.
Valentine's Day cards are the most popular gifts. Some valentines are very fancy; they are decorated with ribbons, paper lace and images of cupids.
|
{
"redpajama_set_name": "RedPajamaC4"
}
| 799
|
<?php
namespace Application\Controller\Doctor;
use Zend\Mvc\Controller\AbstractActionController;
use Zend\View\Model\ViewModel;
use Application\Form\Doctor\CreatePrescriptionForm;
class RecipeController extends AbstractActionController
{
public function listAction()
{
$view = new ViewModel(array(
'message' => 'Hello world',
));
$view->setTemplate('application/doctor/index/index');
return $view;
}
public function createPrescriptionAction(){
$objectManager = $this->getServiceLocator()->get('Doctrine\ORM\EntityManager');
if($user = $this->identity()){
if ($user->getType() == 2) {
$id_doctor = $user->getId();
$form = new CreatePrescriptionForm($objectManager, $id_doctor);
if ($this->request->isPost()) {
$form->setData($this->request->getPost());
if($form->isValid()) {
$prescription = $form->getObject();
$userId =$form->get('prescription')->getValue();
$relatedUser = $objectManager->find('Application\Entity\UserProfile', $userId);
$idClient = $relatedUser->getUser();
$userWithPrescr = $objectManager->find('Application\Entity\User', $idClient);
$prescription->setUser($userWithPrescr);
$prescription->setDoctor($user);
//$prescription->setDrugs($drugs);
$objectManager->persist($prescription);
$objectManager->flush();
return $this->redirect()->toRoute('doctor/index1',
array('controller' => 'index', 'action'=> 'seePatients'));
}
}
}
else{
return $this->redirect()->toRoute('doctor/index1',
array('controller' => 'auth', 'action'=> 'login'));
}
}
else{
return $this->redirect()->toRoute('doctor/index1',
array('controller' => 'auth', 'action'=> 'login'));
}
$view = new ViewModel(array('form' => $form));
$view->setTemplate('application/doctor/prescription/prescription');
return $view;
}
public function viewPrescriptionAction(){
$objectManager = $this->getServiceLocator()->get('Doctrine\ORM\EntityManager');
if ($user = $this->identity()) {
if ($user->getType() == 2) {
$idpatient = $this->params()->fromRoute('id');
$patientfound = $objectManager->getRepository('Application\Entity\UserProfile')->findOneBy(
array('id' => $idpatient));
$idprofil = $patientfound->getUser();
$prescriptionfound = $objectManager->getRepository('Application\Entity\Prescription')->findBy(
array('user' => $idprofil));
}
else{
return $this->redirect()->toRoute('doctor/index1',
array('controller' => 'auth', 'action'=> 'login'));
}
}else
{
return $this->redirect()->toRoute('doctor/index1',
array('controller' => 'index', 'action'=> 'detail'));
}
$view= new ViewModel(array('prescriptions'=>$prescriptionfound));
$view->setTemplate('application/doctor/index/seeprescription');
return $view;
}
}
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 795
|
One of the most frustrating challenges faced by those who own dollar stores is rebuilding sales to their previous levels. It is difficult to know where to start, and hard to know what to do. But with a little data collection, a very effective strategy can be developed. With a few quick steps, dollar store owners can identify the right actions to take. In this article I present two basic steps you should take immediately if sales drop off. These steps will give you the data you need to zero in on the best actions for lifting sales back to new, higher levels.

Step #1) Record and review your dollar store's sales on a by-hour basis. If you own a dollar store, you need to monitor its performance consistently. One of the most useful pieces of data you can collect is by-hour sales. You should also review the number of transactions in each hour to gain an understanding of changes in the size of your average transaction. Collect this data for one week each month. With this information you can begin to build a picture of how sales flow into your store. If times are tight, the most obvious use of low-sales or no-sales data is to reduce store hours. Why pay payroll, utilities and all the other expenses of opening your store if there is an hour that produces no sales? If cost cutting has to happen and your lease allows the change, consider temporarily reducing store hours. Of course you will need to ratchet them back up during the holidays. If you have seen a recent, unexpected drop in sales, add notes about staffing during the hours in which you are seeing the decline.

Step #2) The second step is to actually work the slow periods yourself. Those who own dollar stores know this is one of the best ways to really get to the bottom of any change in by-hour sales. There are many benefits to your presence. First, there is the possibility of a payroll reduction during the hours you work the sales floor. Even better news: your payroll goes down, but your staffing doesn't, because you are simply standing in for employees temporarily. Dollar store owners also know that one of the best ways to get real answers to questions is to talk with shoppers, and your mission is to do exactly that. Find out every detail they are willing to share. Owners and managers of dollar stores can then use this information to determine the best next steps for rebuilding sales and cutting costs. Read more on this subject at antropologias.org!
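The by-hour sales tracking described above can be sketched in a few lines of JavaScript. This is a minimal illustration, not a prescribed tool: the { time, amount } transaction shape and the low-sales threshold are assumptions made for this example.

```javascript
// Aggregate raw transactions into hourly sales totals.
function salesByHour(transactions) {
  const totals = {};
  for (const { time, amount } of transactions) {
    const hour = new Date(time).getHours();
    totals[hour] = (totals[hour] || 0) + amount;
  }
  return totals;
}

// Hours with totals at or below the threshold are candidates for
// reduced store hours (or for working the floor yourself).
function quietHours(totals, threshold) {
  return Object.keys(totals)
    .filter(hour => totals[hour] <= threshold)
    .map(Number);
}

const sample = [
  { time: '2024-03-01T09:15:00', amount: 12.5 },
  { time: '2024-03-01T09:40:00', amount: 4.0 },
  { time: '2024-03-01T20:05:00', amount: 1.0 },
];
const totals = salesByHour(sample);
console.log(totals);                 // hourly totals: hour 9 = 16.5, hour 20 = 1
console.log(quietHours(totals, 5)); // [ 20 ]
```

Repeating this over one week each month, as suggested above, lets you compare quiet hours across days before deciding which store hours to trim.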
|
{
"redpajama_set_name": "RedPajamaC4"
}
| 3,647