text
stringlengths 14
5.77M
| meta
dict | __index_level_0__
int64 0
9.97k
⌀ |
|---|---|---|
import fcntl
import json
import os
from . import BaseReporter
class JSONFileReporter(BaseReporter):
"""
Reports to a file in a JSON format.
"""
def __init__(self, output_file=None):
"""
:param output_file: a file name to which the reports will be written.
"""
super(JSONFileReporter, self).__init__()
self.output_file = output_file
def output_values(self, counter_values):
JSONFileReporter.safe_write(counter_values, self.output_file)
@staticmethod
def _lockfile(file):
try:
fcntl.flock(file, fcntl.LOCK_EX)
return True
except IOError, exc_value:
# IOError: [Errno 11] Resource temporarily unavailable
if exc_value[0] == 11 or exc_value[0] == 35:
return False
else:
raise
@staticmethod
def _unlockfile(file):
fcntl.flock(file, fcntl.LOCK_UN)
@staticmethod
def safe_write(value, filename):
""" safely writes value in a JSON format to file
"""
fd = os.open(filename, os.O_CREAT | os.O_WRONLY)
JSONFileReporter._lockfile(fd)
try:
file = os.fdopen(fd, "w")
file.truncate()
json.dump(value, file)
finally:
JSONFileReporter._unlockfile(fd)
file.close()
# fd is now close by the with clause
@staticmethod
def safe_read(filename):
""" safely reads a value in a JSON format frome file
"""
fd = os.open(filename, os.O_RDONLY)
JSONFileReporter._lockfile(fd)
try:
file = os.fdopen(fd, "r")
return json.load(file)
finally:
JSONFileReporter._unlockfile(fd)
file.close()
# fd is now close by the with clause
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 7,701
|
Virtual and In-person Classes
JPAS Summer Camps
JPAS Jr.
JPAS Connect
Virtual and
In-person Classes
JPAS (Ages: 7-18) and JPAS Jr. (Ages:3-6) online and in-person classes and workshops are amazing! Children will learn from and work with some of the finest industry professionals. Kids and teens, get ready to be in touch with your inner artist, build your confidence, learn skills, and have fun along the way. We have a variety of engaging in-person and online classes and workshops for your child. In addition to the classes below, check out our JPAS Connect page for creative, fun, and binge-worthy activities.
Contact J Performing Arts Space Director, Alise Robinson for a complimentary one-on-one consultation to help determine which JPAS classes are perfect for your child!
Set up your appointment: arobinson@jccdallas.org or 214-239-7140.
Follow us on Facebook and Instagram for daily interactions.
Sign up today and unleash your artistic power!
Please scroll down to view all classes!
Be Your Own Producer JR. (Ages: 8+)
Broadway's Ashley Kate Adams is back for a one day Be Your Own Producer JR. Workshop!
After the smashing success of the #BYOP Jr. Summer Camp, #BYOP Jr. is back to encourage your kids to create their own materials this August! Join Broadway's Ashley Kate Adams for this exciting and interactive 1-day workshop online!
For all CREATIVE KIDS who have more than one artistic interest: writing, acting, singing, arts & crafts, playing music – you know the kind, the ones who can't sit still! Class list of activities include: Claiming your artistic identity, imagination station, storyboarding, the writer's room – how to make your own TV show, expression confession, mini-pitch, and an independent artistic project.
Instructor: Ashley Kate Adams (Netflix's UNBREAKABLE KIMMY SCHMIDT) & Producer of AKA Studio Productions (Rules of Cool, Mulligan)
Date: Sunday, August 23
Time: 10:00 – 2:00 pm
Price: Members $80 | Non Members $100
Audition Prep Masterclass (Ages: 9+)
Join Broadway's Ashley Kate Adams for a 1-day workshop on building your training towards Broadway and landing the roles you desire. We will work on audition prep, making a great first impression in a mock audition and end the workshop with a Q & A about all things Broadway.
Date: Thursday, October 15
Price: Members $50 | Non Members $65
Film Acting for the Young Artist (Ages: 9+)
Taught by TV and Movie Actress Linda Leonard with special Guests appearances from Casting Directors and Agents
Lights, Camera, Action! Learn and develop on-camera acting techniques that are used in television, commercials, and films. No matter your skill set this class will be tailored to individual students' abilities! The class offers real-world experience and helps actors to be comfortable working on-camera.
Instructor: Linda Leonard (Broadway National Tour of Cats, Numerous credits in theater, TV & Film – Dallas, New York, Los Angeles, and Abroad)
Dates: Thursdays, September 24 – November 12
Time: 4:30 – 5:30 pm
Price: Members $180 | Non Members $225
View Instructor Bio
Acting For the Stage (Ages: 9+)
Learn the art of acting in this course! Students are trained to become actors, and taught how to develop vocal, improvisation, characterization, and physical acting skills, all while having tons of fun and gaining performance experience. The course culminates in a virtual showcase during the last week of class, in which everyone gets a chance to shine and show off all their new skills!
Instructor: Avra Aron, published writer, global acting and writing teacher and Tisch School of Arts graduate
Dates: Wednesdays, Sept. 30 – November 18
Join Via Zoom
Performing Arts Games! (Ages: 8+)
Learn the art of acting in this course! Students are trained to become actors, and taught how to develop vocal, improvisation, characterization, and physical acting skills, all while having tons of fun and gaining performance experience. The course culminates in a showcase during the last week of class, in which everyone gets a chance to shine and show off all their new skills!
Instructor: Avra Aron, published writer, global acting and writing teacher, and Tisch School of Arts graduate.
Dates: Thursdays, October 1 – October 22
Cost: Members $85 | Non Members $108
Calling all Creative Kids and recent #BYOP JR. Graduates, #BYOP JR. is back with Ashley Kate Adams! Join Ashley Kate Broadway TV/ Film Actress, Producer & Children's Theatrical Director based in NYC for this highly creative class, for the one-of-a-kind kid featuring new materials and an exciting weekly course! Awesome class for kids that have more than one artistic interest: writing, acting, singing, arts & crafts, playing music – you know the kind, the ones who can't sit still! Class list of activities include: Claiming your artistic identity, imagination station, storyboarding, the writer's room – how to make your own TV show, expression confession, mini-pitch, and an independent artistic project.
Dates: Tuesdays, September 29 – November 17
Beginner Guitar 101 (Ages: 8+)
Go from Guitar Zero to Guitar Hero in an encouraging environment. Students will break down the guitar, learn to play by ear, and build a strong skillset in 4 weeks. They will learn the names of the notes, how to tune, strum, play simple chords by ear and musical internals, the foundation of ear training. By the end of the class, students will have learned basic chords and melodies from some of their favorite songs. Singing along is encouraged!
Dates: Mondays, October 5 – November 23
We offer a variety of private lessons ranging from dance to singing to many different instruments. Lessons available in packs of four, in either 30-minute or 1-hour sessions. All four lessons must be used within three months of purchase.
JPAS online classes and one-day workshops are going to be amazing! This fall children will learn from and work with some of the finest industry professionals. Kids and teens, get ready to be in touch with your inner artist, build your confidence, learn skills, and have fun along the way!
Musical Theater Dance (Ages: 8+)
Get ready to move and groove. During this class, we will focus on three Broadway shows! We will learn original Broadway choreography and some jazz choreography to music from Frozen, Moana, and Hamilton. The class is limited to 10 students.
Instructor: Skylar Duvall
Date: Mondays, October 5 – November 23
Triple Threat Theater: Broadway Revue (Ages: 8+)
Travel to shows from all over the world such as Matilda, Wicked, School of Rock, Hairspray, and Newsies! Acting, singing, dancing! Broadway workshops, a showcase, and lots of fun and learning! Work with a Choreographer, Director, and Music Director to hone your skills! The class is limited to 10 students.
Instructors: Cherish Robinson and Skylar Duvall
Dates: Wednesdays, September 16 – October 28
Staged Straight Play (Ages: 9+)
Come "play" with us! Students will perform in a contemporary, serio/comic play directed by Linda Leonard a Broadway Actress. An excellent opportunity to stretch young artists acting skills and get connected with the world of theater in today's environment. The play will deal with modern subjects and modern challenges. Learn from Ms. Linda Leonard one of the best directors in Dallas! The class is limited to 10 students.
Instructor: Linda Leonard
Dates: Sundays, October 18 – December 13
Pop Star Boot Camp (Ages: 8+)
Lead by Cherish Robinson Producer behind Elle King and Vocalist for the likes of Erika Badu. Dream of recording your own song? This class gives you first-hand artist development of recording, vocal training, and on-stage performance. Come to class with a song you would like to showcase. Learn vocal technique, microphone etiquette while working towards your big day in the studio. Second, to last Tuesday or Thursday of class performers will record their song and video will be taken for them to use on social media or share with family and friends. Singers will book ½ hour private sessions. The last day of class will be the final showcase for their fellow pop star boot camp peers. LIMITED SPACE AVAILABLE.
Kids need to come to first class prepared with a song of choice and a karaoke track (mp3) on a thumb drive.
Instructor: Cherish Robinson
Dates: Tuesday, Sept. 22 – October 13
Art Experiences with Susan
Art Experiences is an innovative choice-based art program that offers students the opportunity to respond to their own ideas and interests through artmaking. Working at their own pace, students learn problem-solving, independent thinking, cooperative learning, and persistence through creative discovery. Veteran art teacher Susan Stein guides the community of student artists, determines their needs, creates structure, and introduces a large variety of new mediums and techniques. Class size limited to 10 students.
Instructor: Susan Stein earned her Bachelor of Fine Arts degree from Washington University in St. Louis and has 20 years of teaching experience
Dates: Mondays, Sept. 21 – Dec. 14*
Times: 4:30- 5:30 pm
Price: Members $285 | $358 Non Members
Supply fee: $50
*No class on Sept 28th for Yom Kippur
Dates: Tuesdays, Sept. 22 – Dec. 8
Times: 4:30-5:30 pm
Ages: 5 – 8
AT THE J
We're ready to welcome you back to the J. Our in-person JPAS classes follow all safety protocols which include but are not limited to online pre-screening, practicing social distancing, wearing masks, continuous cleaning, and daily deep cleans. Classes have limited availability (maximum of 10 students per class) to keep engagement safe, strong, and exciting!
Artistic Staff
Skylar Duvall
Skylar Duvall is so proud to be back for her fourth year with JPAS! She is a graduate from KD Conservatory and is currently teaching dance at Dance Company of Wylie. "I feel so fortunate to get to work with, not only the amazing kids in our JPAS program, but also the incredibly talented staff. I'm so happy that I get to call my passion, my job," Skylar says.
Cherish Robinson
Cherish Robinson has over 20 years of experience as an artist. She has traveled throughout Europe and the United States as a singer, producer, composer, music director, and songwriter. with musical references spanning over genres such as classical, jazz, blues, gospel, music theater, R&B and pop. She has sung behind some of R&B's most impressive singers including Erykah Badu and The Roots (Tonight Show Band) and even appeared on The Colbert Show as a background vocalist. Cherish has musically/vocally produced and arranged albums for artists such as, Elle King and Paul Cauthen. She has been a contributing arts and entertainment writer for Elisia Magazine, the co-host of a radio show, The Midday Lockdown.BroadwayWorld.com has heralded her as "one of DFW's sharpest musical theater moments in recent memory" even mentioning her as one of DFW's Fresh Faces of 2017. She was recently seen in SISTER ACT at Savannah Theater in Savannah, GA (Deloris). MADAGASCAR at Casa Manana (Gloria the Hippo), DETROIT '67 at Jubilee Theater (Bunny), SISTER ACT at Irving MainStage (Deloris), PASSING STRANGE at Theater Three (Desi), and Grand Prairie Arts Center ALL SHOOK UP (Sylvia/Music Director). As an AEA member, she won the 2020 Column Award winner (Best Actress in a Musical) for her performance as Deloris in SISTER ACT at Watertower Theatre. She also loves teaching/coaching voice, piano, and organ to her many students!
Avra Aron
Avra Aron is a writer, actress, and teacher. A graduate from the Stella Adler Conservatory at NYU's Tisch School of the Arts, she spent the last eight years working abroad in London where she directed various plays and musicals, including one which was shown at the Royal Academy of Dramatic Arts. She also directed students in preparation for London Academy of Music and Dramatic Arts examinations. Her writing has been published in literary journals, including one which was listed as a Notable Special Issue of 2018 by 'The Best American Essays 2019.' She was a finalist for the VanderMey Nonfiction Prize.
Linda Leonard
Linda Leonard has been a professional director, choreographer, teacher, actor (AEA,SAG/AFTRA), singer, and dancer for over 45 years, here in the Metroplex and in NYC, Chicago, Los Angeles, and abroad. Linda was in the Broadway national tour of Cats. Linda has directed or choreographed for WaterTower Theatre, Stage West, Circle Theatre, Echo Theatre, CitiRep, and Theatre Three to name a few in Dallas and Fort Worth. Linda was the Education Director for WaterTower Theatre (3 years), is a professor at KD Conservatory, and her credits include award-winning roles such as Ann Richards (Critics Forum and Column Awards) and Aurora in The Kiss of The Spiderwoman (Leon Rabin, Column and Critics Forum Awards). Linda also has numerous credits in film and television, and is represented by The Horne Agency in Dallas/LA, and directed a web series about autism, "Saving Hope", which premiered in March of 2019. Linda is dedicated to the process of creativity, nurturing, and empowering young artists in every aspect of the arts.
Ashley Kate Adams
Ashley Kate Adams (AKA Studio Productions) : is an award-winning Actress & Producer who made her Broadway Debut at the age of 23 in Tony Award Winning Revival of LA CAGE AUX FOLLES. She has appeared on television in UNBREAKABLE KIMMY SCHMIDT (NETFLIX), ROYAL PAINS (USA), RULES OF COOL (FULLSCREEN), can be heard in TRUE DETECTIVE (HBO), THE RIGHTEOUS GEMSTONES (HBO) in LOGAN LUCKY, GEMINI MAN & seen in films such as PITCHING TENTS (Hulu), 1 Message (Dish Network) & stars in the upcoming short film LOVE. She recently won the Best Actress Award at the 2018 New York Theatre Festival for the play LOVE and originated the role of Carole in "A Christmas Carole: A One-Woman Show". Off-Broadway & Regional Credits include the Culture Project, Paper Mill Playhouse, NYMF, Ensemble Theatre of Cincinnati, MTC and more. Proud Motion Capture Performer for Rockstar Video games, featured in RED DEAD REDEMPTION 2 & GRAND THEFT AUTO V: ARENA WARS. She started her Production Company, AKA Studio Productions, in 2011 and their work has been selected for over 150+ film festivals. TV Properties include: MULLIGAN (2018 LA Film Festival, 1st Place Flickers RIIFF) , RULES OF COOL (FULLSCREEN), CAPITAL ADVICE (ITV Fest) & Films include: ACE (Toronto Inside Out Fest), photo op (Winner SENE Film), and the upcoming film's BLINDSIGHT (LA Shorts) & ABSENT MIND (Toronto International Shorts). She inspired and Co-Produced the feature film BEAUTY MARK which was an official selection of the LA Film Festival in 2017 and was distributed by the Orchard. Her feature film BOY HERO is slated to film in Fall of 2020 in Co-Production with Pigasus Pictures.
Eric Keyes
In Eric Keyes you not only have a great teacher but a doer. Eric has performed all over the world and has put out 6 albums of original music including his last Solo album "Back in Blue." He has recorded with guitar greats Allan Holdsworth & Clint Strong. Eric teaches by example so you can learn to trust your ears and your instincts. He has a completely original method called "The Invisible Guitar." Using this method students learn in a fun, encouraging environment that will inspire them to be the best they can be.
Susan Stein
Susan Stein has over 20 years of teaching experience. Susan believes everyone can benefit from creating art, regardless of whether they are "good at it". Her Art Experiences Classes allow students to express their individuality through their artwork and succeed to the best of their ability. As learners create, they build important life skills including problem solving, brainstorming, inventive thinking, working together, pride, persistence, intuitiveness, hand-eye coordination and so much more!Today's world requires constant innovation and problem solving. The creative thinkers of today will be the successful, happy people of tomorrow. Through artmaking, even adults can increase their brain capacity leading to clearer thinking, new ideas and fresh perspectives. Making art is proven to be therapeutic, reducing stress and improving focus.
Sakura Brunette
Sakura Brunette hails from the land of moose, mounties and maple syrup. She has been costuming for 9 years. In 2015, she graduated from McMaster University with a BA in Art History and Theatre & Film. Shortly after graduating she relocated to Texas. Her recent credits include, Noise's Off, Comedy of Tenors, Mamma Mia, Midsummer Night's Dream, Little Shop of Horrors, Southern Exposure, Move Over Mrs. Markham and Lion King, JR.
The JPAS Experience at Levine Academy
JPAS is offering some fabulous performing arts enrichment classes taught by our JPAS staff hosted at Levine Academy – classes are open to everyone. You DO NOT need to be a Levine student to attend.
Playwrights Perform
Unleash your creative power by writing and performing your own show! Students get the amazing opportunity to form an acting company and write a play through brainstorming, improvisation, and work-shopping scenes. Students rehearse together before proudly sharing their creation with the world on the last day of class, complete with costumes, music, and much dramatic flair!
Tuesdays, January 28 – April 7
Teacher: Avra Aron
Improv-sational
An introduction for anyone who wants to learn the power and fundamentals of improvisation…the basic tools, rules and philosophy through games, drills, and simple scenes. Experience the power of play, and the excitement of improvisational fun in a safe environment. A strong foundation in improvisation sharpens acting abilities and promotes creativity and personal growth.
Mondays, January 27 – April 6
No class Feb. 17
Our JPAS JR series is exclusively for our Goldberg Early Childhood Center for preschool children 3 – 5 years and is designed to get their feet moving and their imaginations flowing all while teaching skills and building confidence. Learn the basics in a fun, safe, and nurturing setting intended to unleash the little theater bug in the youngest of children.
Go on a magical musical adventure every week with Avra and her ukulele! Each class takes the form of a musical story, during which preschoolers sing, move to music, read musical notation, learn Italian musical terms, develop appreciation for classical music, and grow awareness of pitch, tempo, and dynamics – all while having so much fun, they don't even realize they are learning!
TIME: Mondays 1:00-1:45 pm (GECC Room 5)
DATES: January 11-May 10 (no class March 15 or March 29)
AGE OF PARTICIPANTS: Preschool 3s
MEMBER PRICE: $375
NON MEMBER PRICE: $470
ARTISTIC STAFF: Avra Aaron
TIME: Wednesdays, 1:00-1:45 pm (GECC Room 6)
DATES: January 13-May 12 (no class March 17)
TIME: Fridays, 1:00-1:45 pm (GECC Room 7)
DATES: January 15 – May 17 (no class March 19)
Using theater games and improv as our creative play our actors will learn the art of drama while improving their imagination, creativity and social skills.
AGE OF PARTICIPANTS: 4-5
TIME: Tuesdays, 2:00-2:45 pm (GECC Room 9)
DATES: January 12 -May 11 (no class March 16)
TIME: Fridays, 2:00-2:45 pm (GECC Room 10)
DATES: January 15 – May 7 (no class March 19)
JPAS Presents...Art Experiences with Susan
Art Experiences is an innovative choice-based art program that offers students the opportunity to respond to their own ideas and interests through artmaking. Working at their own pace, students learn problem solving, independent thinking, cooperative learning, and persistence through creative discovery. Veteran art teacher Susan Stein guides the community of student artists, determines their needs, creates structure and introduces a large variety of new mediums and techniques.
Class size limited to 10.
Winter Sessions
Mondays, January 11 – March 8 (in person)
Price: $215 members | $270 nonmembers
$50 supply fee
Tuesdays, January 12 – March 9 (in person)
Spring Sessions
Mondays, March 22 – May 17 (in person)
No class: March 29
Time: 4:30 – 5:30pm
Tuesdays, March 23 – May 11 (in person)
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 8,168
|
Financial Q&A: Tips to Pay Less Tax or Get a Bigger Refund
Laura answers tax questions from readers, listeners, followers, and group members that will help you understand how to pay less tax, defer it, or to boost your tax refund and save more money every year.
Laura Adams, MBA
Money Girl
You can't avoid taxes, but there are many ways to legally pay less so you keep more of your hard-earned money.
In this post, I'll answer some common and not-so-common questions about taxes from Money Girl readers, podcast listeners, Twitter followers, and private Facebook group members. This Q&A will help you understand how to pay less tax, defer it, or boost your tax refund every year.
Free Resource: Laura's Recommended Tools—use them to earn more, save more, and accomplish more with your money!
Question #1: Chesley says, "My W-4 says that I'm married, but I've actually been divorced for a few years. Is it important to update that—or is the status I put on my tax return what really matters?"
The filing status you enter on your tax return is what's most important because it determines how much tax you'll owe. However, if you're an employee, you should update your W-4 any time you have a life change, such as getting married or divorced, having a child, or earning more or less income. In fact, the IRS says you're supposed to make W-4 updates within ten days after a major life event.
The amount of tax your employer withholds from your pay depends on your income and the information you submit on your W-4, such as your marital status and how many allowances you have.
If your W-4 doesn't accurately reflect your situation, you could have too little or too much tax withheld. Not paying enough means you could get a big, unexpected tax bill. Or, if you overpaid during the year, you'll end up with a tax refund.
While a refund sounds good, it simply means that you gave Uncle Sam an interest-free loan on your money throughout the year, instead of using it for your own good.
So be sure to adjust your withholding if you get big tax refunds every year. That's an easy way to give yourself a raise! Save or invest the money instead of letting the government use it.
However, your withholding typically won't be correct down to the penny. That's because the worksheets don't account for every possible situation, such as having additional taxable income from interest, dividends, alimony, unemployment compensation, or self-employment income.
You can update your withholding anytime during the year by completing IRS Form W-4 and submitting it to your employer. If you need help, use the IRS Withholding Calculator or ask your human resources or payroll department.
See also: What Is the Marriage Tax Penalty?
Laura Adams received an MBA from the University of Florida. She's an award-winning personal finance author, speaker, and consumer advocate who is a trusted and frequent source for the national media. Her book, Debt-Free Blueprint: How to Get Out of Debt and Build a Financial Life You Love was an Amazon #1 New Release. Do you have a money question? Call the Money Girl listener line at 302-364-0308. Your question could be featured on the show.
More Tips from Money Girl
What Is the Self-Employment Tax?
How to Pay Less in Taxes (Part 1)
How Retirement Accounts Save Money on Taxes
What Is the Marriage Tax Penalty?
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 7,750
|
\section{Introduction}
The cousins of fractional quantum Hall (FQH) effect \cite{PhysRevLett.50.1395,PhysRevLett.48.1559} on two-dimensional (2D) lattices have been attracting great interest recently. In a Chern band possessing a nonzero Chern number as an analog of a single Landau level \cite{PhysRevLett.106.236802,PhysRevLett.106.236803,PhysRevLett.106.236804}, the interaction of particles in the fractionally filled band leads to strongly correlated states named fractional Chern insulators (FCIs) \cite{review1,review2,review3,sheng2011fractional,PhysRevX.1.021014,liu2012fractional,yao2013realizing, PhysRevLett.110.185301,wu2013bloch,andreasprl,wang2013tunable}. Compared with their FQH counterparts, a strong net external magnetic field is no longer an indispensable element, and FCIs are expected to be much more robust against high temperature \cite{PhysRevLett.106.236802}.
Among various FCIs, the most exotic members are those that support excitations, i.e., anyons, obeying non-Abelian statistics \cite{Moore1991362,PhysRevLett.66.802}. These non-Abelian anyons are essential resources in topological quantum computation \cite{Nayak}. However, the realization of non-Abelian FCIs (as well as non-Abelian FQH states) in realistic models is usually very difficult. Up to now, except in a few cases of bosons \cite{PhysRevLett.110.185301,PhysRevB.88.081106,PhysRevB.88.205101}, almost all numerically confirmed non-Abelian FCIs are stabilized by peculiar multi-particle interactions \cite{PhysRevLett.108.126805,Bernevig_PRB85,zoology,PhysRevB.87.205137,Weyl,greiter2009non,greiter2014parent} (the stabilization of bosonic non-Abelian FQH states is also usually much easier than that of the corresponding fermionic states). Considering that electronic materials and fermions in optical lattices are natural platforms for Chern bands, the discovery of non-Abelian FCIs stabilized by realistic two-body interactions in fermionic systems is highly demanded. This construction is of fundamental interest and facilitates the future implementation of topological quantum computation.
In this paper, we report progress in this direction. We choose a simple generalization of the triangular Hofstadter model. By introducing the next-nearest neighbor hopping, the lowest Bloch band can be tuned to be very flat even for a large value of flux density. This is a compelling feature that makes this model an ideal platform to search for non-Abelian FCIs. Enlightened by the positive effect of long-range interactions on the stabilization of bosonic non-Abelian FCIs \cite{PhysRevB.88.205101}, we turn on the experimentally realistic dipolar interaction \cite{DipolarExp1,DipolarExp2,DipolarExp3} between fermions, supplemented by two-body short-range attractions that might be controlled by Feshbach
resonances \cite{PhysRevLett.103.080406}. By using exact diagonalization, we study the many-body system in several aspects, such as the energy spectrum, the particle-cut entanglement spectrum \cite{PhysRevLett.101.010504,PhysRevLett.106.100405,PhysRevX.1.021014}, and the adiabatic continuity to the system with multi-particle interactions.
We obtain convincing numerical results to confirm the existence of the non-Abelian $\nu=1/2$ Moore-Read FCIs \cite{Moore1991362}. Through the analysis of two-particle energy spectrum, we show that our choice of the attraction strength is reasonable to stabilize the Moore-Read FCIs. Some encouraging evidence that supports the \emph{Z}$_3$ $\nu=3/5$ Read-Rezayi FCIs \cite{PhysRevB.59.8084} is also discovered. The stabilization of the fermionic $\nu=3/5$ Read-Rezayi FCI is very exciting because its Fibonacci anyon excitation is necessary for universal quantum computation.
\section{Single-particle model and band topology}
We consider spinless fermions on a 2D triangular lattice penetrated by an uniform magnetic field. Assuming fermions only hop between nearest-neighbor (NN) and next-nearest-neighbor (NNN) sites (Fig.~\ref{fg:lattice}), the single-particle Hamiltonian is
\begin{equation}
\label{eq:H_0}
H_0=-\sum_{\langle i,j\rangle,\langle\langle i,j \rangle\rangle}t_{ij}e^{\textrm i\phi_{ij}}c^\dagger_i c_j,
\end{equation}
where $c_i$ ($c_i^\dagger$) is the fermionic annihilation (creation) operator on site $i$, $t_{ij}=t$ for NN sites and $t_{ij}=t'$ for NNN sites, and $\phi_{ij}$ is indicated in Fig.~\ref{fg:lattice}.
Our model is actually the triangular version of the well-known Hofstadter model \cite{hof} with extra hopping between NNN sites.
\begin{figure}
\centerline{\includegraphics[width=1.0\linewidth] {shiyitu.png}}
\caption{(Color online) (a) The schematic graph of our triangular lattice model with $(\vec a_1,\vec a_2)$ and $(\vec b_1,\vec b_2)$ representing lattice vectors and reciprocal lattice vectors respectively. $(m,n)$ labels the lattice site. (b) Complex hopping amplitudes between NN sites. (c) Complex hopping amplitudes between NNN sites. One can easily verify that the magnetic flux per each plaquette (two triangles) is $\phi$ in units of the flux quantum $\phi_0$.}
\label{fg:lattice}
\end{figure}
If the magnetic flux density $\phi=p/q$, where $p$ and $q$ are coprime integers, each unit cell consists of $q$ sites in the $\vec a_1$ direction. The band structure should exhibit $q$ Bloch bands, each of which can be labeled by a Chern number \cite{tknn}. For the special case of $p=1$, all bands are separated from each other by finite gaps. Their Chern numbers can be described by a simple picture: the $(q-1)$ lower bands have unit Chern number $C_{i<q}=1$, while the $q$th (highest) band has Chern number $C_q=-q+1$, satisfying $\sum_{i=1}^qC_i=0$. This Chern number distribution is different from the one on the square lattice \cite{mk,wang2013tunable} because of the different butterfly structures of the energy spectra.
By optimizing the ratio of $t'/t$, we can tune the lowest band to be nearly flat. For example, when $\phi=1/3$ and $t'/t=0.16$, the flatness of the lowest band, defined as the ratio between the band gap $\Delta$ and the bandwidth $w$, reaches $\frac{\Delta}{w}\approx64$ [Fig.~\ref{fg:onebody}(a)]. With larger $q$, we can even obtain more than one flatband. For $\phi=1/5$ and $t'/t=0.10$, the flatness of the lowest two bands are $\frac{\Delta_1}{w_1} \approx 1449$ and $\frac{\Delta_2}{w_2} \approx 284$ respectively [Fig.~\ref{fg:onebody}(b)]. Interestingly, the fluctuation of the Berry curvature $F$ of the lowest band is much smaller than that on the square lattice with the same flux density \cite{PhysRevB.88.205101}. For $\phi=1/3$ and $t'/t=0.16$, $F$ of the lowest band is quite uniform [Fig.~\ref{fg:onebody}(c)] and close to the mean value $\overline F= \frac{\sqrt{3}q}{4\pi} \approx 0.4135$. This suggests that the lowest band of our triangular lattice model is a more suitable host for fractional Chern insulators than that of the square lattice model.
\begin{figure}
\centerline{\includegraphics[width=\linewidth]{onebody.pdf}}
\caption{(Color online) Band structure for (a) $H_0(\phi{=}\frac13)$, $t'/t=0.16$; and (b) $H_0(\phi{=}\frac15)$, $t'/t=0.10$. (c) The Berry curvature of the lowest band for $H_0(\phi{=}\frac13)$ with $t'/t=0.16$. The region within the white solid line is one third of a Brillouin zone.
\label{fg:onebody}}
\end{figure}
\section{Moore-Read FCIs at $\nu=1/2$}
Now we consider $N_{e}$ interacting fermions partially filled in the lowest flatband on the torus. We assume dipolar potential $v(\mathbf r)=1/|\mathbf r|^3$ between fermions, which is experimentally realistic for neutral fermions and can be realized by trapping ultracold polar molecules in optical lattices \cite{DipolarExp1,DipolarExp2,DipolarExp3}. We also include short-range two-body attractive Hubbard terms between $n$-th NN terms [$n=1$ corresponds to the NN interaction, $n=2$ corresponds to the NNN interaction, etc.]. It has been proposed in Ref.~\cite{PhysRevLett.103.080406} that the $n=1$ NN term due to the $s$-wave scattering between fermions in optical lattices can be controlled by Feshbach
resonances. The whole two-body interaction Hamiltonian is
\begin{equation}\label{eq:H_LR}
H_{2\textrm{b}}=\sum_{i<j}V_{\textrm{d-d}}(\mathbf r_i-\mathbf r_j)n_in_j-\sum_{m=1}^{n_{\textrm{max}}} U_m \Big(\sum_{(i,j)\in\mathcal N_m}n_in_j\Big),
\end{equation}
where
\begin{equation}
V_{\textrm{d-d}}(\mathbf r)=\!\!\!\sum^{+\infty}_{s,r=-\infty}\!\!v(\mathbf r+sN_1\vec a_1+rN_2\vec a_2)
\end{equation}
that is periodic for $N_1\times N_2$ lattice sites.
We assume that the strength of interaction is much smaller than the band gap but larger than the bandwith, so $H_0$ is quenched and we can project $H_{2\textrm{b}}$ onto the occupied lowest band. We diagonalize the projected Hamiltonian $H_{2\textrm{b}}$. Because each magnetic unit-cell contains $q$ sites, there are $\emph{\b{N}}_1\times\emph{\b{N}}_2$ unit cells with $\emph{\b{N}}_{1}=N_1/q$ and $\emph{\b{N}}_{2}=N_2$. The band filling factor $\nu$ is defined as $\nu= {N_e} /(\emph{\b{N}}_{1}\emph{\b{N}}_{2})$. Since the total translation operator commutes with both $H_0$ and $H_{2\textrm{b}}$, each energy level can be labeled by a 2D total momentum $(K_1,K_2)$ with $K_{1,2}=0\thicksim {(\emph{\b{N}}_{1,2} {-} 1)}$.
\begin{figure}
\centerline{\includegraphics[width=1.0\linewidth]{dipolarFCI.pdf}}
\caption{(Color online) Evidence of the $\nu=1/2$ MR FCIs as the ground states of $H_{2\textrm b}$ with $\phi=1/3$, $t'/t=0.16$, $n_{\textrm{max}}=1$, and $U_1=0.79$. (a) The net potential for the combination of dipolar interaction and two-body attractive NN interaction as a function of distance $r$ on the lattice. (b) The low-energy spectra at $\nu=1/2$ for $N_{e}=6,8,12$ on $N_1\times N_2=6\times6$, $6\times8$, $9\times8$ lattices, respectively. (c) The $x$-direction spectral flow for $N_{e}=8$ on the $N_1\times N_2=6\times8$ lattice. (d) The PES for $N_{e}=12$ and $N_{A}=5$ on the $N_1\times N_2=9\times8$ lattice. The number of states below the gap (indicated by the green arrow) is 30648.
\label{fg:LR_halffilling}}
\end{figure}
We first focus on $\nu{=}1/2$ to look for the non-Abelian Moore-Read (MR) FCIs. Compelling evidence, as displayed in Fig.~\ref{fg:LR_halffilling}, demonstrates that the ground states are indeed in the MR phase for flux densities as high as $\phi=1/3$. By choosing $n_{\textrm{max}}=1$, $U_1=0.79$ to weaken the repulsion between NN sites [Fig.~\ref{fg:LR_halffilling}(a)], we observe six quasidegenerate ground states for each system size that we study, and they are separated from the excited levels by an energy gap much larger than the ground-state splitting [Fig.~\ref{fg:LR_halffilling}(b)]. This degeneracy is robust against the twisted boundary conditions, i.e., the six ground states never mix with excited levels in the spectral flow [Fig.~\ref{fg:LR_halffilling}(c)]. In order to further investigate the topological order of the ground-states, we compute the commonly used particle-cut entanglement spectrum (PES) \cite{PhysRevLett.106.100405,PhysRevX.1.021014,Bernevig_PRB85} to rule out the possibility of other effects such as the charge density wave.
After dividing the whole system into two parts $A$ and $B$ with $N_{A}$ and $N_{B}$ particles respectively, and tracing out part $B$ from the density matrix $\rho=\frac1m \sum^m_{i{=}1}|\Psi^i\rangle\langle\Psi^i|$ of the ground state manifold, where $|\Psi^i\rangle$ represents the $i$-th state of $m$ degenerate ground states in the manifold, we can obtain the PES level defined as $\xi_i = -\ln\lambda_i$ with $\lambda_i$ the eigenvalues of the reduced density matrix $\rho_{ A}=\textrm{Tr}_{B}\rho$.
We find that a clear gap (which may increase for smaller $N_A$) exists in the PES, below which the number of levels matches the quasihole excitation counting of the MR state predicted by the $(2,4)-$admissible rule \cite{Bernevig_PRB85} [Fig.~\ref{fg:LR_halffilling}(d)]. All these results above conclusively confirm the existence of the $\nu=1/2$ MR FCIs in the presence of dipolar interaction and attractive NN interaction (\ref{eq:H_LR}) at high flux densities.
Similar results can also be obtained for different system sizes and flux densities. For example, we can choose a smaller flux density $\phi=1/5$ and project $H_{2\textrm{b}}$ to either the lowest or the second lowest flatband [Fig.~\ref{fg:onebody}(b)]. As both of these two flatbands have unit Chern number, we can stabilize the $\nu=1/2$ MR FCIs on each of them. Which band is fractionally filled is determined by the chemical potential.
\begin{figure}
\centerline{\includegraphics[width=1.0\linewidth]{continuty.pdf}}
\caption{(Color online) The adiabatic continuity between the ground states of $H_{3\textrm b}$ and $H_{2\textrm b}$. We choose $U_{3\textrm b}=1.16$ in $H_{3\textrm b}$ and $\phi=1/3$, $t'/t=0.16$, $n_{\textrm{max}}=1$, $U_1=0.79$ in $H_{2\textrm b}$. (a) The low-energy spectrum of $H_{\textrm{3b}}$ for $N_{ e}=12$ on the $N_1\times N_2=9\times8$ lattice. The number of ground states in each sector is labeled on the chart. (b) The evolution of the low-energy spectrum of $H_{\textrm{int}}(\lambda)$ with $\lambda$ from $0$ to $1$ for $N_{e}=12$ on the $N_1\times N_2=9\times8$ lattice. The energy gap is made equal at $\lambda=0$ and $\lambda=1$.\label{fg:continuty}}
\end{figure}
\section{Adiabatic continuity to the ground states of three-body interactions}
It is already known that the $\nu=1/2$ MR FCIs can be stabilized by NN three-body repulsive interactions \cite{zoology,Bernevig_PRB85}. In our lattice model, we consider the three-body repulsion on each triangular plaquette, i.e.,
\begin{equation}
\label{3bhamil}
H_{3\textrm{b}}\!=U_{3\textrm{b}}\sum_{\langle i,j,k\rangle\in\bigtriangleup,\bigtriangledown} n_i n_j n_k
\end{equation}
with $U_{3\textrm{b}}>0$. The low-energy spectrum of this three-body interaction for $N_{ e}=12$ fermions at $\nu=1/2$ with $\phi=1/3$ is displayed in Fig.~\ref{fg:continuty}(a). As expected, we find a ground-state manifold of six-fold degeneracy. Moreover, further analysis including the quasihole excitations and PES supports that the ground states are indeed in the MR phase.
The existence of MR FCIs for the three-body interaction enables us to investigate the adiabatic continuity between the ground states of $H_{2\textrm{b}}$ and $H_{3\textrm{b}}$. In order to achieve this, we construct a Hamiltonian interpolating between $H_{2\textrm{b}}$ and $H_{3\textrm{b}}$, i.e.,
\begin{equation}
H_{\textrm{int}}(\lambda)=(1-\lambda)H_{2\textrm{b}}+\lambda H_{3\textrm{b}},
\end{equation}
where $\lambda\in[0,1]$ is the interpolation parameter. When $\lambda$ continuously increases from $0$ to $1$, this Hamiltonian evolves from $H_{2\textrm{b}}$ to $H_{3\textrm{b}}$. We diagonalize $H_{\textrm{int}}(\lambda)$ to study how does the energy spectrum evolve with $\lambda$. The result for $N_{e}=12$ fermions is shown in Fig.~\ref{fg:continuty}(b). The parameters in $H_0$ and $H_{2\textrm b}$ are the same as those used in the previous section. During the interpolation from $\lambda=0$ to $\lambda=1$, we find that there are always six quasidegenerate ground states well separated from high excited levels. The fact that the energy gap does not close during the interpolation suggests that the ground states at $\lambda=0$ and $\lambda=1$ are adiabatically connected and in the same phase. This adiabatic continuity provides another convincing evidence that the ground states of the two-body long-range interaction $H_{2\textrm{b}}$ at $\nu=1/2$ are indeed MR FCIs.
\section{Two-particle spectrum analysis}
In the Landau level physics, any rotation (translation) invariant two-body Hamiltonian is determined by its Haldane's pseudopotential parameters $\mathcal{V}_m$ \cite{pseudopotential}, which can be calculated analytically. Although we do not have an elegant formula for Chern band like for Landau levels, we can still approximately extract pseudopotential parameters from the energy spectrum of two interacting particles \cite{andreasprl}. These pseudopotential parameters in the Chern band provide guidance for what interaction we should use to stabilize a target FCI \cite{PhysRevB.88.205101}. Therefore, in order to further understand why the Hamiltonian $H_{2\textrm{b}}$ can stabilize the $\nu=1/2$ MR FCIs, we consider the two-fermion problem.
In Fig.~\ref{fg:twoparticle}(a), we show the high-energy spectrum of two fermions interacting by $H_{2\textrm{b}}$ with the parameters used to stabilize the $\nu=1/2$ MR FCIs. The energy levels form pairs and are almost independent on $(K_1,K_2)$. We can tentatively identify the first (highest) pair as $\mathcal{V}_1$, the second pair as $\mathcal{V}_3$, and the third pair as $\mathcal{V}_5$ (note that the pseudopotential parameters of even order do not appear in the two-particle spectrum for fermions) \cite{andreasprl,PhysRevB.88.205101}. The pairing of energy levels can be easily seen in Fig.~\ref{fg:twoparticle}(b), where we choose $(K_1,K_2)=(0,0)$ sector and rescale the highest energy level as $1$. We find that $\mathcal{V}_3/\mathcal{V}_1\approx0.6$ for $H_{2\textrm{b}}$ with $U_1=0.79$, while $\mathcal{V}_3/\mathcal{V}_1$ is only roughly $0.3$ for the pure dipolar interaction. Considering a large $\mathcal{V}_3/\mathcal{V}_1$ is also crucial for the stabilization of MR states in the second Landau level \cite{PhysRevB.78.155308}, the pseudopotential parameters of our dipolar interaction supplemented by two-body NN attractions are very reasonable.
\begin{figure}
\centerline{
\includegraphics[width=1.0\linewidth]{twoparticles.pdf}}
\caption{(Color online) The two-particle spectrum analysis of $H_{2\textrm b}$ with $\phi=1/3$, $t'/t=0.16$, $n_{\textrm{max}}=1$ and $U_1=0.79$. (a) The two-particle spectrum on the $N_1\times N_2=12\times12$ lattice. Only the highest levels are plotted. The energy levels form pairs and are identified as Haldane's pseudopotential parameters $\mathcal{V}_1,\mathcal{V}_3,\mathcal{V}_5$. (b) The two-particle energy level $E_n$ (rescaled by the highest level $E_1$) in the $(K_1,K_2)=(0,0)$ sector versus $n$ on the $N_1\times N_2=12\times12$ lattice. The case of pure dipolar interaction ($U_1=0$) is also plotted for comparision.
\label{fg:twoparticle}}
\end{figure}
\begin{figure}
\centerline{
\includegraphics[width=1.0\linewidth]{RRstate.pdf}}
\caption{(Color online) Evidence of the $\nu=3/5$ RR FCIs as the ground states of $H_{2\textrm b}$. We choose (~\textbf{\large{--}}~) $n_{\textrm{max}}=1$, $U_1=1.26$, $t'/t=0.11$ for $\phi=1/4$; ({\boldmath\large$\times$}) $n_{\textrm{max}}=3$, $(U_1,U_2,U_3)=(1.31,0.2,0.02)$, $t'/t=0.10$ for $\phi=1/5$; and ({\boldmath$\triangle$}) $n_{\textrm{max}}=1$, $U_1=1.55$, $t'/t=0.08$ for $\phi=1/6$. (a) The net potential for the combination of dipolar interaction and attractive NN, NNN, and NNNN interactions as a function of distance $r$ on the lattice with $n_{\textrm{max}}=3$, $(U_1,U_2,U_3)=(1.31,0.2,0.02)$. The attractions are so strong that net potentials for NN and NNN sites become negative. (b) The low-energy spectrum for $N_{e}=12$ at $\nu=3/5$ with different flux densities $\phi=1/4,1/5$ and $1/6$ on $N_1\times N_2=8\times10$, $10\times10$, $12\times10$ lattices, respectively. (c) The $x$-direction spectral flow for $N_{\textrm e}=12$ with $\phi=1/4$ on the $N_1\times N_2=8\times10$ lattice. (d) The $N_{ A}=4$ PES for $N_{e}=12$ with $\phi=1/6$ on the $N_1\times N_2=12\times10$ lattice. The number of states below the entanglement gap (indicated by the green arrow) is $4765$.
\label{fg:RR_states}}
\end{figure}
\section{$Z_3$ Read-Rezayi FCIs at $\nu=3/5$}
Compared with the $\nu=1/2$ MR FCIs, the $\nu=3/5$ $Z_3$ Read-Rezayi (RR) FCIs are more appealing because the Fibonacci anyon excitations of these states can be used to perform universal quantum computation (the Majorana anyon excitations of the MR FCIs cannot). However, the RR FCIs are more fragile and sensitive to the interactions and sample sizes. In order to stabilize these state, we need finer tuning of the interaction than what we do in the search of MR FCIs.
The higher filling fraction and its odd denominator make the number of available lattice samples accessible by exact diagonalization at $\nu=3/5$ is much less than that at $\nu=1/2$. However, we still obtain encouraging evidence of the RR FCIs at high flux densities. By setting appropriate $n_{\textrm{max}}$ and $U_m$, we observe ten quasidegenerate ground states in the low-energy spectrum for $N_{ e}=12$ fermions with $\phi=1/4,1/5$ and $1/6$ [Fig.~\ref{fg:RR_states}(b)]. This ten-fold degeneracy is robust against the twisted boundary conditions [Fig.~\ref{fg:RR_states}(c)]. The counting of levels below the gap of the PES also matches the requirement of the $(3,5)$-admissible rule \cite{Bernevig_PRB85}[Fig.~\ref{fg:RR_states}(d)]. All these evidences support that the ground states at $\nu=3/5$ are the RR FCIs.
\section{Conclusion}
In this paper, we have demonstrated that a combination of long-range dipolar interaction and two-body short-range attractions for interacting fermions on a triangular lattice can exhibit ground states as non-Abelian fractional Chern insulators. Our single-particle model is a simple generalization of the triangular Hofstadter model by adding an extra next-nearest-neighbor hopping. This extra term is crucial for tuning the lowest band to be nearly flat. After switching on interactions in this flatband, we have observed robust $\nu=1/2$ Moore-Read FCIs for flux densities as high as $1/3$. Besides the topological degeneracy, spectral flow and entanglement spectrum, the adiabatic continuity to the ground states of the three-body interaction also proves the ground states of our two-body long-range interaction are indeed in the MR phase. We compute the two-fermion energy spectrum and extract the Haldane's pseudopotential parameters of our long-range interaction, which are reasonable compared with the known results in Landau levels. The encouraging evidence is also discovered for the more exotic $\nu=3/5$ $Z_3$ Read-Rezayi FCIs at flux densities as high as $1/4$.
The interactions discussed in our scheme are quite promising to be realized \cite{DipolarExp1,DipolarExp2,DipolarExp3,PhysRevLett.103.080406}.
Considering the recent successful experimental realizations of the Hofstadter model \cite{hofstadterexp1,hofstadterexp2}, our results may provide insights into the experimental preparation of fermionic FCIs in optical lattices.
It is promising that the fermionic non-Abelian FCIs can be similarly stabilized by appropriate two-body long-range interactions in Chern bands of other lattice models \cite{unpublish}, including those with higher Chern number bands \cite{PhysRevB.86.241111,liu2012fractional}. We highlight the physical significance of our scheme for the realistic dipolar interaction, applicable possibly in various lattice configurations and crucial role in topological quantum computation.
Z.~L. thanks E.~J.~Bergholtz and E.~Kapit for related collaborations and N.~Regnault for discussions. Z.~L. was supported by the Department of Energy, Office of Basic Energy Sciences through Grant No.~DE-SC0002140. This work was partially supported by NSFC (11175248).
|
{
"redpajama_set_name": "RedPajamaArXiv"
}
| 2,286
|
package com.thinkaurelius.titan.hadoop.formats.util.input;
import com.thinkaurelius.titan.diskstorage.keycolumnvalue.SliceQuery;
import com.thinkaurelius.titan.graphdb.database.RelationReader;
import com.thinkaurelius.titan.graphdb.types.TypeInspector;
import com.thinkaurelius.titan.hadoop.FaunusVertexQueryFilter;
/**
* @author Matthias Broecheler (me@matthiasb.com)
*/
public interface TitanHadoopSetup {
public TypeInspector getTypeInspector();
public SystemTypeInspector getSystemTypeInspector();
public RelationReader getRelationReader(long vertexid);
public VertexReader getVertexReader();
public SliceQuery inputSlice(FaunusVertexQueryFilter inputFilter);
public void close();
}
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 2,458
|
Microsoft Azure Artificial Intelligence (AI) Portfolio Review
HomeAI
By William Elcock
One of the largest players in the artificial intelligence (AI) market is Microsoft Azure.
Microsoft Azure's AI portfolio is extensive, and the brand reported $192.7 million in AI revenue for 2019, according to a 2020 report by IDC.
See below to learn about Microsoft's Azure artificial intelligence offerings, which are a key part the company's global portfolio:
Microsoft Azure AI Portfolio
Here's are some of Azure's key offerings in its AI portfolio:
Anomaly Detector: Helps developers detect changes to be able to quickly identify problems with apps
Azure Machine Learning: Gives data scientists and developers tools to build, train, and deploy machine learning models and foster team collaboration
Azure Cognitive Search: Helps users identify the most relevant content with features like using machine learning (ML) to understand user intent
Computer Vision: Helps users to extract text from images, generate image descriptions, moderate content, and understand the movement of people in physical spaces.
Language Understanding: Allows apps to understand natural language. This lets developers build models that interpret user goals and allows apps to extract key information from conversational phrases.
Microsoft Genomics: Sequences genomes in the cloud. Researchers can use the scale of Microsoft Azure to process genomics data in the cloud.
Azure Health Bot: Lets health care organizations build bots that are capable of accessing a medical database and have natural language capabilities. This allows apps to understand clinical terms.
Translator: Instantly translates to and from over 90 languages and employs machine translation
Speech-to-text: Automatically transcribes spoken audio into text. Works for over 85 languages and variants. Models can be enhanced, so speech-to-text conversions are geared towards certain fields
Speaker Recognition: Can identify speakers based on their unique voice characteristics
See more: Artificial Intelligence Market
Microsoft Azure AI Partnerships
Organizations can partner with Microsoft Azure to offer their customers AI services.
These are several of Azure's AI partnerships:
Simplifai Emailbot
Simplifai Emailbot can understand emails, classify them, and extract relevant information. This can be leveraged by customer service teams, allowing employees to focus on more important tasks. This solution was made by Acuvate Software.
IT Helpdesk Bot
This platform lets businesses automate their IT help desk with virtual agents. The bot has been pre-trained with 300 FAQs and can be added to the Microsoft Teams application. This solution was made by Acuvate Software.
SpinOne for Office 365
SpinOne, or Spinbackup, for Office 365 is an AI-driven solution that offers automated daily backups and ransomware protection. This solution was made by Spin Technology.
Microsoft Azure AI Use Cases
Technology: Lumen
Lumen Technologies provides technology infrastructure solutions. They experienced rapid growth but needed a way to integrate all of their different analytics systems. They were able to achieve this with Microsoft Azure Synapse Analytics. Prior to switching, Lumen's customer service speed was negatively impacted. After using the Microsoft Azure Synapse Analytics platform to design a solution known as Intelligent Digital Delivery, the team at Lumen was able to gather data from all across the company, so internal stakeholders are now able to view everything in the delivery funnel and sort the data according to their preferences. This new analytics solution improved productivity and helped reinforce the customer-focused nature of Lumen.
Government: NYC Department of Environmental Protection
The NYC Department of Environmental Protection is home to a large number of employees who often have IT needs. This can put a strain on IT personnel, especially after hours. After the development of a solution called Ask BIT with Azure, the department was able to implement a chatbot that operates around the clock, addressing queries and resolving tickets. This helps the department avoid the cost and time tied to responding to after hours requests.
Insurance: Munich Re
Munich Re is a global insurance company that carries out complex calculations to determine various insurance practices. This requires a large amount of compute power. Initially, this was all done on premises. This became unsustainable, as the amount of data that needed to be processed grew. Munich Re made use of Azure Data Lake to help process all of its data, along with Azure Data Science Virtual Machines for machine learning. These tools help the firm handle, process, decide on use cases at its locations, while keeping costs under control.
See more: Artificial Intelligence: Current and Future Trends
User Reviews of Microsoft Azure AI
Users are giving various products in the Microsoft Azure AI/ML portfolio favorable scores online:
Gartner Peer Insights: 4.3/5
Industry Recognition of Microsoft Azure AI
Microsoft Azure's AI technology has been recognized, for instance, by Frost & Sullivan as the leader in global AI platforms for health care as part of its 2020 Best Practices Awards.
Microsoft Azure in the AI market
Microsoft Azure holds the second largest share of the AI software market (5.6% in 2019), according to a 2020 report by IDC.
In comparison, IBM holds the largest share of the AI software market in the report (8.8%), and SAS ranks third (4.4%).
The AI software market was worth an estimated $3.5 billion in 2019, IDC says.
See more: Top Performing Artificial Intelligence Companies
AWS Releases Amazon QuickSight Q to Generate BI Visualizations From Questions
How AI is Being Used in Robotics
Oracle Opens Cloud Region in Chicago
Yahoo Selects AWS Public Cloud for Ad Division
10 Best Dashboard Software and Tools for 2023
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 1,807
|
Mike Levin (D)
CD48 CD50
US House '12
Darrell Issa* (R-Inc) 159,725 58.16%
Jerry Tetalman (D) 114,893 41.84%
Darrell Issa* (R-Inc) 98,161 60.17%
Dave Peiser (D) 64,981 39.83%
Doug Applegate (D) 154,267 49.74%
Mike Levin (D) 166,453 56.42%
Diane L. Harkey (R) 128,577 43.58%
Mitt Romney (R) 153,856 52.25%
Barack Obama* (D-Inc) 134,447 45.66%
Hillary Clinton (D) 159,081 50.66%
Donald J. Trump (R) 135,576 43.18%
Gavin Newsom (D) 153,703 51.49%
John H. Cox (R) 144,801 48.51%
Elizabeth Emken (R) 151,156 53.31%
Dianne Feinstein* (D-Inc) 132,390 46.69%
Kevin De Leon (D) 117,487 46.81%
Steve Poizner (NPP) 155,670 56.69%
Ricardo Lara (D) 118,949 43.31%
YES 157,028 53.81%
NO 134,799 46.19%
Two-county coastal district with 29% of the voters located in the Orange County portion, the largest cities being Dana Point, San Clemente and San Juan Capistrano, and 71% in the San Diego County portion, stretching from Camp Pendleton in the north to Del Mar to the south. The largest cities are Carlsbad, Oceanside, Vista and Encinitas.
Orange County (25.25%) San Diego County (74.75%)
AD73 (25.25%) AD75 (1.78%) AD76 (63.13%) AD77 (4.29%) AD78 (5.56%)
SD36 (89.68%) SD38 (4.24%) SD39 (6.09%)
CD49 (100.00%)
R +2.02%
DEM: 31.99% (133,875) -- REP: 34.01% (142,348) -- NPP: 27.79% (116,322) -- OTH: 0.32% (1,353)
MIKE LEVIN (D) is a director of the Center for Sustainable Energy, director and co-founder of Sustain OC, and director of government affairs at FuelCell Energy, a fuel cell company that designs, manufactures, sells, and services fuel cell power plants for distributed power generation. Prior to his involvement in the clean energy industry, he served as an attorney at Bryan Cave LLP, focusing on environmental and energy regulatory compliance, project development, and government advocacy. He serves on the California Hydrogen Business Council Board of Directors, and previously served on the National Finance Committee for the Hillary for America 2016 Presidential campaign. A native of Lake Forest and a CORO Fellow, he holds a bachelor's in Political Science from Stanford, where he served as student body president, and a JD from Duke University. He resides in San Juan Capistrano with his wife, Chrissy, and their 2 young children. ENDORSEMENTS: Board of Equalization Member Fiona Ma, Controller Betty Yee, Reps. Adam Schiff, Jimmy Gomez, BOLDPAC, DFA Action. Campaign Consultants: Parke Skelton (SG&A Campaigns), Fundraising: Daily Consulting
FreedomWorks 2018 0%
Gun Owners of America 2018 F-%
NumbersUSA 2018 0%
Planned Parenthood 2018 n/a%
Progressive Punch 2019 100%
Progressive Punch (Lifetime) 2019 100%
MikeLevinForCongress/
Tweets by MikeLevinCA
p12: R +14.16% DEM: 101,148 (28.62%) REP: 151,155 (42.78%) NPP: 83,211 (23.55%) TOTAL - 353,365 - TURNOUT: 34.96%
g12: R +13.16% DEM: 106,846 (28.65%) REP: 155,945 (41.81%) NPP: 90,752 (24.33%) TOTAL - 372,964 - TURNOUT: 76.84%
p16: R +9.15% DEM: 108,714 (30.67%) REP: 141,159 (39.82%) NPP: 86,699 (24.46%) TOTAL - 354,453 - TURNOUT: 50.90%
g16: R +6.60% DEM: 120,775 (31.19%) REP: 146,338 (37.79%) NPP: 99,853 (25.79%) TOTAL - 387,229 - TURNOUT: 83.12%
p18: R +5.19% DEM: 121,561 (31.16%) REP: 141,786 (36.35%) NPP: 105,111 (26.95%) TOTAL - 390,064 - TURNOUT: 47.73%
g18: R +3.76% DEM: 124,330 (31.01%) REP: 139,392 (34.77%) NPP: 115,113 (28.71%) TOTAL - 409,249 - TURNOUT: 73.94%
African-American: 18,663 (2.60%)
Mean Household Income: $106,306
Owner Occupied: 150,108 (59.70%)
Renter Occupied: 101,124 (40.30%)
Graduate Degree: 77,077 (16.10%)
About a quarter of the voters are located in the Orange County portion of the district around the communities of Dana Point and San Clemente, which are located halfway between the downtowns of Los Angeles and San Diego. Republicans outnumber Democrats two to one here.
The majority of the voters are in northern San Diego, where Republicans account for nearly 60% of the voters. Overall, Republicans outnumber Democrats by double digits, but Obama was able to win here by 1 point in his California landslide in 2008. Jerry Brown lost the district by 18 points in 2010 and 10 points in 2014. Voters along the coast are economically conservative; they tend to be moderate on social and environmental issues, but it's the economic issues that are at the top of these voters' minds.
The northern portion of the district includes Camp Pendleton, situated along what otherwise would be prime California beachfront real estate. Many locals will tell you the Marine Corps base is the only buffer that prevents the Los Angeles and San Diego Metro areas from merging.
Orange, San Diego
Carlsbad, Dana Point, Del Mar, Encinitas, Mission Viejo, Oceanside, Rancho Santa Margarita, San Clemente, San Diego, San Juan Capistrano, Solana Beach, Vista
SD36, SD38, SD39
AD73, AD75, AD76, AD77, AD78
92003, 92007, 92008, 92009, 92010, 92011, 92014, 92024, 92028, 92029, 92037, 92054, 92055, 92056, 92057, 92058, 92067, 92075, 92078, 92081, 92083, 92084, 92091, 92121, 92127, 92130, 92624, 92629, 92651, 92672, 92673, 92675, 92677, 92679, 92688, 92691, 92692, 92694
This once safely red seat has beaten a quick retreat away from the GOP, with Darrell Issa retiring after squeaking out a tight win in the 2016 election. His endorsed successor, former Board of Equalization member Diane Harkey, was abandoned by national Republican groups and trounced 56.42%/43.58% by Democrat Mike Levin, putting this seat in the blue column. Republican Brian Maryott, the Mayor of San Juan Capistrano and 8th-place finisher in the 2018 top two primary, has announced another run in 2020.
Levin raised $442,990 in the first quarter and ended with $657,118 on hand. Maryott reported recepts of $295,738 in the first quarter, with $250,000 directly contributed out of his own pocket. He ended the period with $283,939 on hand.
Two other Republicans, former San Clemente city councilman Steve Knoblock and entrepreneur Mara Fortin, filed with the FEC, although Fortin shuttered her campaign within weeks of filing and Knoblock terminated his committee in September after raising no money.
Democratic Party Pre-Endorsement Conference Endorsement (Primary): Levin, Mike
Democratic Party Primary Endorsement: Levin, Mike
Republican Party Primary Endorsement: Maryott, Brian
BEGINNING $
ENDING $
LEVIN, MIKE DEM 558,059 1,464,985 837,633 1,185,411 5,000 0 09/30/2019 0 C00634253
FORTIN, MARA REP 0 54,828 54,828 0 34,858 34,858 08/01/2019 0 C00708511
KNOBLOCK, STEVEN CRAIG REP 0 0 0 0 0 0 06/30/2019 0 C00699009
MARYOTT, BRIAN L MR REP 9,758 623,832 272,143 361,447 30,993 0 09/30/2019 0 C00666859
Mike Levin D 2019-11-12 2019-11-25 ORANGE
Nadia Smalley D 2019-09-12 SAN DIEGO
Brian Maryott R 2019-11-14 2019-11-19 ORANGE
Mike Levin D U.S. Representative, 49th District
Brian Maryott R Mayor/Financial Planner
Follow @MikeLevinCA
FEC ID# H8CA49058
Brian Maryott (R)
BRIAN MARYOTT (R), 55, is a retired financial service executive and first-term councilman for San Juan Capistrano, first winning election to the 5th Council District in 2016. Prior to his retirement this year, he was a Senior Vice President with Wells Fargo Advisors. Raised in Massachusetts, he graduated from American International College in Western Massachusetts and worked for the Massachusetts House of Representatives for several years, serving as a Legislative Aide and Staff Director. Maryott previously mounted an unsuccessful run for this seat in the 2018 primary, spending $750,000, including $700,000 loaned to his own campaign, and finishing 8th of 16 with 3.02% at a cost of around $136 per vote. He is running on a platform focused on sound fiscal stewardship.
Diane Harkey R 46468 25.52% ✔
Mike Levin D 31850 17.49% ✔
Sara Jacobs D 28778 15.80%
Doug Applegate D 23850 13.10%
Kristin Gaspar R 15467 8.49%
Rocky Chavez R 13739 7.55%
Paul Kerr D 8099 4.45%
Brian Maryott R 5496 3.02%
Michael D Schmitt R 2379 1.31%
Joshua Schoonover R 1362 0.75%
Craig Nordal R 1156 0.63%
David Medway R 1066 0.59%
Robert Pendleton NPP 905 0.50%
Danielle St. John Grn 690 0.38%
Joshua Hancock Lib 552 0.30%
Jordan Mills PAF 233 0.13%
Mike Levin D 166453 56.42% ✔
Diane Harkey R 128577 43.58%
When the lines were drawn, the 49th Congressional district had a GOP voter registration advantage of over fourteen points. Over time, that advantage has collapsed to just 3.76% and masks a moderate streak that has become difficult to reconcile with the Trump-oriented strain of Republicanism that is on the ascent. Of the 7 CA Republican-held districts that voted for Hillary Clinton, it was the only one to vote in favor of supporting the plastic bag ban in 2016, and the only one to vote in favor of the Citizens United advisory measure. Until this year, Hillary Clinton was the sole Democrat to have ever carried a state or district-wide race.
Just days after CA39 Rep Ed Royce's January announcement that he would retire at the end of the term, incumbent Republican Darrell Issa announced that he would not seek re-election to his seat. Issa's departure marked the second open California House seat in the 2018 cycle. Issa, first elected to the House in 2000, narrowly survived a bruising 2016 election against retired Marine Colonel Doug Applegate, eking out a narrow 1,621 vote victory in the closest House race that year. Issa spent $6.3 million in 2016, but had almost no outside help. Applegate reported spending $1.6 million, and received an additional $3.6 million of assistance from the DCCC, House Majority PAC, and other groups. In September 2018, President Trump nominated Issa to be Director of the United States Trade and Development Agency, and in December, Issa re-activated a federal 'Issa for Senate' account that had been dormant since the early 2000s, transferring the $645,000 balance in his congressional account.
Shortly after Issa's announcement, Board of Equalization member Diane Harkey jumped into the race, followed soon by Assemblyman Rocky Chavez and first-term San Diego County Supervisor Kristin Gaspar. With Democrats no longer having the luxury of running against Issa, the abundance of candidates became an issue, and a concerted effort to winnow the field began. Democrat Christine Prejean dropped out before the end of the filing period, and Applegate made an abrupt change in his residency in February, which would have positioned him to run for the San Diego County Board of Supervisors. His residency change came too late, however, and Applegate had no recourse but to remain in the race here.
Once the nomination period closed, the ballot was set with 16 candidates—four Democrats, eight Republicans, a Libertarian, a Green Party candidate, a Peace and Freedom candidate, and a No Party Preference candidate. All four Democrats had consistently reported solid fundraising numbers, prompting fears that the absence of a clear front-runner could produce a same-party Republican runoff in November. Most polling showed a tight field with Chavez and Harkey in close contention for the number one spot. In mid-May, the DCCC reserved its first batch of air time on ads opposing Chavez, while holding their fire on the more baggage-laden Harkey. Close to $6 million in outside spending came into play in the primary, with the EMILY's List-affiliated Women Vote! the biggest single spender, deploying over $2.3 million on ads supporting Democrat Sara Jacobs. The group's support followed a series of sizable contributions from Jacob's grandfather, billionaire philanthropist Irwin Jacobs. Outside of their spending, the biggest story was the $1.9 million in ads savaging Republican Rocky Chavez. The DCCC, Priorities USA, and the House Majority PAC unleashed volley after volley on Chavez shamelessly attacking him from the right for crossing the aisle to vote with Democrats on several key issues during his time in the legislature.
Republican Diane Harkey, who raised $474,278 and had been endorsed by Darrell Issa, was the top vote-getter in the primary, winning 25.52%. Harkey's win was the preferred outcome for the district's Republican voters, and likely for Democratic voters, as well. The second spot in the runoff went to Democrat Mike Levin, a clean energy advocate who raised $1.7 million going into the primary and finished with 17.49%. Democrat Sara Jacobs, who contributed over $2 million to her own campaign, came in 3rd with 15.8%. Fourth place went to 2016 Democratic candidate Doug Applegate, who raised $914,990 and received 13.1%. Fifth place went to Republican Kristin Gaspar, a San Diego County Supervisor who raised $356,355 and finished with 8.5%. Pummeled by Democratic attacks, Rocky Chavez finished 6th with 7.54%. Democrat Paul Kerr, a businessman who raised over $5.9 million, $5.1 million of which came from his own pocket, finished 7th with 4.45% (at a cost of around $729 per vote). Republican Brian Maryott, Mayor Pro Tem of San Juan Capistrano, raised $745,000, including $700,000 loaned to his own campaign, and finished 8th with 3.02%. Republican Mike Schmitt, a neuroaudiologist and small businessman, finished 9th with 1.31%. Republican Joshua Schoonover, a patent attorney, finished 10th with 0.75%. Republican Craig Nordal, a real estate businessman, finished 11th with 0.63%. Republican David Medway, a physician/business owner, finished 12th with 0.59%. No Party Preference candidate Robert Pendleton, a surgeon/businessman, finished 13th with 0.5%. Green Party candidate Danielle St. John, a human rights advocate, finished 14th with 0.38%. Libertarian Joshua Hancock finished 15th with 0.3%, and Peace and Freedom candidate Jordan Mills finished 16th with 0.13%.
Democratic candidates collectively received 50.84% in the primary, while Republicans received 47.86%. Republicans were quick to launch ads tying Levin to Nancy Pelosi, but that line of attack proved ineffective.
As was the case for Republicans elsewhere, Harkey's campaign was seriously outmatched, with Levin's campaign outpsending her $5.1 million to $1.6 million. Harkey's campaign spent $700,000 on TV and radio ad buys with another $200,000 on direct mail, while Levin's campaign spent close to $1.8 million on TV ads, $400,000 on digital ads, and at least $500,000 on direct mail.
Outside spending dropped after the primary, with all but $20,000 of the $4.4 million from 19 different groups spent to Levin's benefit. A particularly disastrous interview with the San Diego Union-Tribune's editorial board did little to shake the perception that Harkey's campaign was a slow-motion train wreck. The Democrats' House Majority PAC spent $1.7 million, LCV Victory spent $822,000, and former New York Mayor Mike Bloomberg's Independence USA spent $700,000. Harkey was savaged by $3.2 million in opposition spending, while Levin received supportive spending of $1.2 million.
The first votes on election night had Harkey down by 5 points, and things only got worse for her from there. The ballots counted post-election day were brutal, going against her by a 60.3%/39.7% margin. Once the dust had settled, Levin trounced her 56.4%/43.6%, winning by close to 38,000 votes.
Democratic Party Pre-Endorsement Conference Endorsement (Primary): GOES TO CAUCUS
Democratic Party Primary Endorsement: NO CONSENSUS
APPLEGATE, DOOUGLAS D 63,143 1,005,398 1,065,917 2,624 5,500 0 09/30/2018 4 C00581595
ISSA, DARRELL E R 329,152 1,795,547 1,476,039 648,660 0 0 11/26/2018 0 C00350520
LEVIN, MIKE D 0 6,050,130 5,249,346 800,783 22,591 0 11/26/2018 16,827 C00634253
KERR, PAUL D 0 8,129,176 7,935,488 193,688 250,000 2,000,000 09/30/2018 0 C00650036
JACOBS, SARA D 0 2,890,250 2,729,781 160,469 0 0 09/30/2018 0 C00660837
SCHOONOVER, JOSHUA R 0 12,323 12,323 -440 0 0 06/05/2018 0 C00664557
HARKEY, DIANE R 0 1,644,786 1,640,070 4,716 77,100 100,000 11/26/2018 0 C00665513
GASPAR, KRISTIN R 0 387,953 383,742 4,211 19,735 0 09/30/2018 0 C00666842
MARYOTT, BRIAN R 0 760,194 750,394 9,800 10,000 700,000 09/30/2018 0 C00666859
CHRISTINA, PREJEAN D 0 37,554 37,554 0 0 0 06/30/2018 0 C00667063
CHAVEZ, ROCKY R 0 425,256 425,256 0 0 127,439 08/15/2018 0 C00667006
MEDWAY, DAVID DR. REP 0 0 0 0 0 0
SCHMITT, MICHAEL D REP 0 7,961 7,811 149 7,670 0 09/30/2018 0 C00673988
NORDAL, CRAIG REP 0 11,779 11,878 57 9,994 10,644 09/30/2018 0 C00672055
PENDLETON, ROBERT OTH 0 26,510 24,688 1,821 17,000 17,000 05/16/2018 0 C00673830
MILLS, JORDAN P OTH 0 0 0 0 0 0
HANCOCK, JOSHUA L LIB 0 0 0 0 0 0
ST JOHN, DANIELLE GRE 0 0 0 0 0 0
Doug Applegate D 2018-02-26 2018-03-07 SAN DIEGO
Supriya Christopher D 2018-01-22 ORANGE
Davis Goodman D 2018-03-08 ORANGE
Sara J Jacobs D 2018-02-13 2018-03-09 SAN DIEGO
Paul Kerr D 2018-02-23 2018-03-09 SAN DIEGO
Daniel Perlman D SAN DIEGO
Christina Prejean D 2018-02-26 SAN DIEGO
Danielle T St John Grn 2018-02-14 2018-03-09 SAN DIEGO
Joshua L Hancock Lib 2018-01-26 2018-02-05 2018-03-05 2018-03-05 ORANGE
Robert Pendleton NPP 2018-02-13 2018-03-08 SAN DIEGO
Jordan Mills PAF 2018-03-05 2018-03-09 SAN DIEGO
David Arnold R 2018-01-18 ORANGE
Christina M Borgese R 2018-02-07 2018-02-07 ORANGE
Rocky Chavez R 2018-03-07 2018-03-09 SAN DIEGO
Kristin Gaspar R 2018-03-01 2018-03-13 SAN DIEGO
Diane Harkey R 2018-02-15 2018-02-26 ORANGE
Bryan Maryott R 2018-01-17 2018-02-05 2018-02-13 2018-02-27 ORANGE
David Medway R 2018-02-13 2018-03-14 SAN DIEGO
Craig A Nordal R 2018-02-12 2018-03-08 SAN DIEGO
Mike Schmitt R 2018-03-02 2018-03-06 ORANGE
Joshua Schoonover R 2018-03-05 2018-03-05 SAN DIEGO
Doug Applegate D Attorney/Father/Businessperson
Sara Jacobs D Education Nonprofit CEO
Paul G Kerr D Small Business Owner
Mike Levin D Clean Energy Advocate
Danielle St. John Grn Human Rights Advocate
Joshua L Hancock Lib No Ballot Designation
Robert Pendleton NPP Surgeon/Businessman/Artist
Jordan P Mills PAF Professor
Rocky J Chavez R Assemblymember
Kristin Gaspar R Chairwoman, San Diego County Board of Supervisors
Diane L Harkey R Taxpayer Representative/Businesswoman
Brian Maryott R Mayor Pro Tem
David Medway R Physician/Business Owner
Craig A Nordal R Real Estate Businessman
Mike Schmitt R Neuroaudiologist/Small Businessman
Johshua Schoonover R Patent Attorney
D: 133,875 (31.99%) | R: 142,348 (34.01%) | NPP: 116,322 (27.79%)
R +2.02
295,030 votes cast (2039 added)
Clean Energy Advocate DEM 166,453 +1,350 56.42% 1 1
Diane L. Harkey
Taxpayer Representative/Businesswoman REP 128,577 +689 43.58% 2 2 -37,876
Doug Applegate (D)
Follow @ApplegateCA49
DOUGLAS L. APPLEGATE is a trial lawyer and the principal of his own law firm. He is also a veteran of Operations Desert Storm and Iraqi Freedom, having retired from the Marine Corps after 32 years of service at the rank of colonel. He holds a bachelor's in Economics from Arizona State University, where he also received his law degree. He resides in San Clemente. ENDORSEMENTS: Sens. Ricardo Lara, Toni Atkins, Asm. Tom Daly, National Nurses United, VoteVets, former Rep. Loretta Sanchez, Justice Democrats; Campaign Manager: Luis Vizcaino (Luis Vizcaino Communications), Finance Director: Mara Lasko, Fundraising: Katharine Meyer Borst, Research: Point Loma Strategic Research
Campaign | Facebook | Twitter | Linkedin | Youtube
Paul Kerr (D)
Follow @KerrForCongress
PAUL KERR (D) is a real estate investor and veteran of the U.S. Navy. Raised in Arizona, his family moved to San Diego County when he was 16. He enlisted in the Navy at the age of 17 and completed a tour off the coast of Vietnam after the war ended. He enrolled at San Diego State University at the age of 29, earning a bachelor's in Economics. After graduating, he worked for seven years as a commercial real estate appraiser with the Andrew A. Smith Company. He served as an Acquisitions Officer with Fairfield Residential, and joined Davlyn Investments in 1998, where he now serves as President. He resides in San Diego.
Sara Jacobs (D)
Follow @sarajacobsca
SARA JACOBS (D) is a former Policy Advisor for the Hillary for America Presidential Campaign and as a Conflict & Stabilization Policy Officer for the State Department under the Obama Administration. Prior to that, she worked at the United Nations in a variety of capacities, serving on the Innovation Unit at UNICEF, in the UN's Department of Peacekeeping Operations, and in Peace & Development for UNDP. She most recently served as CEO of Project Connect, a non-profit focused on improving internet connectivity and improving access at schools across the globe. Born in Del Mar, she graduated from Torrey Pines HS, holds a bachelor's in political science and a master's in international relations from Columbia University, and currently resides in Encinitas. She is the granddaughter of billionaire philanthrophist and Qualcomm co-founder Irwin Jacobs.
Danielle St. John (Grn)
DANIELLE ST. JOHN (Grn) is a former Dental Office Manager at Marcos Ortega DDS and founder of Oral Benefits Solutions. A San Diego native, she went to Moon Senior HS and Carlsbad HS and resides in Carlsbad.
Joshua Hancock (Lib)
JOSHUA L HANCOCK (Lib), 39, is a Southern California Edison worker and former Marine MP who has been an Oceanside resident for over 20 years.
Robert Pendleton (NPP)
ROBERT PENDLETON (NPP) is an ophthalmologist and Medical Director for Pendleton Eye Center and North Coast Surgery Center. A native of the San Diego area, he graduated from La Jolla HS in 1977, then enrolled in UC-Davis, earning his bachelor's in Biochemistry in 1981. He earned his masters in Chemistry from UC-Davis in 1982, then enrolled in the MD/Ph.D. program at the University of Illinois, earning his MD from University of Illinois-Chicago and his PhD in biochemistry from University of Illinois at Urbana-Champaign in 1990. He completed a one-year Internal Medicine Residency at Northwestern University's School of Medicine, followed by a three-year Ophthalmology Residency. He began his private practice in Buffalo, NY in 1994. In 1996, he relocated to Minnesota and opened a practice in the Brainerd Lakes area. In 1997, he returned to his native California and established the Pendelton Eye Center in Oceanside. He has served on the Board of Trustees for the Oceanside Museum of Art, and resides in Carlsbad.
Jordan Mills (PAF)
JORDAN MILLS (PAF) is a socialist, educator, and union organizer. Raised in San Diego, he graduated from Vista HS, then studied Communications at San Diego State University. Since 2000, he has worked as a professor and debate coach at Southwestern College in San Diego County. He resides in Oceanside with his wife, Ann Johnson.
Diane Harkey (R)
DIANE HARKEY (R) (b. 6/20/51) was first elected to the Dana Point City Council in 2004. Mid-way through her first term on the Council, she made an unsuccessful run in a 2006 Special Election for SD35, losing to Tom Harman by just 236 votes. Two years later, she successfully ran for the State Assembly, winning re-election in 2010 and 2012. After being termed out of her Assembly seat, she successfully ran for the newly-redrawn Board of Equalization 4th District, defeating Democrat Nader Shahatit by a decisive 61.4%/38.6% margin. Prior to holding office, she enjoyed a 30-year career in corporate finance and banking. She holds a bachelor's in Economics from UC-Irvine, and resides in Dana Point. ENDORSEMENTS: Darrell Issa
Campaign | Facebook
Kristin Gaspar (R)
KRISTIN GASPAR (R) was elected to the San Diego County Board of Supervisors in 2016, narrowly unseating incumbent Dave Roberts by 1,272 votes in a 50.28%/49.72% contest. Roberts had been embroiled in controversy after the Board paid out $310,000 to settle claims made by former employees alleging inappropriate use of County funds, promoting a hostile work environment, retaliation against staff members, and several other issues. Prior to her election, she served on the Encinitas City Council, first winning election in 2010 as the youngest person ever elected to the council. She is the CFO for Gaspar Doctors of Physical Therapy, which opened in Encinitas in 1994 and is owned and operated by her husband, Paul, a former director of the California Physical Therapy Association. She holds a bachelor's in Journalism from Arizona State University. She and her husband reside in Encinitas. ENDORSEMENTS: San Diego Mayor Kevin Faulconer, Rep Ed Royce. Campaign Consultant: Jason Cabel Roe
Rocky Chavez (R)
ROCKY CHÁVEZ (R) (b. 5/12/51) spent more than 28 years as a U.S. Marine, rising to the rank of colonel and serving as chief of staff of the 4th Marine Division. Chávez was elected to the Oceanside City Council in 2002 where he served for seven years. In 2009, he was appointed Undersecretary of the CA Dept of Veterans' Affairs by then Gov. Arnold Schwarzenegger. He was elected to the 76th Assembly District in 2012, winning re-election in 2014 and 2016. He briefly explored running for U.S. Senate in 2016, but withdrew before the primary. He graduated from CSU Chico. He and his wife, Mary, have three children, one of whom is a physician. ENDORSEMENTS: Former Gov. Arnold Schwarzenegger
Michael D Schmitt (R)
MIKE SCHMITT (R) is a health care practicioner boasting he has more advanced degrees than any other candidate running in the race.
David Medway (R)
DR. DAVID C. MEDWAY (R), 51, is an internist specializing in weight control. He lists his ballot designation as 'Physician/Business Owner'. He earned his bachelor's in Psychology from UCLA and his MD from George Washington University. He resides in Carlsbad with his wife, Laura, an attorney.
Craig Nordal (R)
Follow @nordal4congress
CRAIG A NORDAL (R) is the principal at Nordal Appraisal, a real estate appraisal firm. Born in Hemet, he graduated from Hemet HS, then earned his bachelor's in Agriculture-Fruit Industries at California State Polytechnic University-Pomona in 1978. He earned a second bachelor's in Music in 1984 from San Diego State University, and also earned a teaching credential in Music Performance. Since 1989, he has worked as a self-employed residential real estate appraiser. A self-described 'disciple of Jesus Christ' and a strong supporter of President Trump, he is running on an anti-abortion, anti-gay marriage, pro-gun platform. He resides in Encinitas.
Joshua Schoonover (R)
Follow @JSSchoonover
JOSHUA SCHOONOVER (R) is a patent attorney at Coastal Patent Law Group. After earning his bachelor's in Chemical Physics from San Diego State University in 2005, he worked as a project engineer for GeneOhm developing antigen detecting diagnostics for use in medical applications. In 2007, he enrolled in Western Sierra Law School and began working as a mechanical engineer developing syringe delivery and storage systems for Artes Medical. He continued working as a patent agent for REVA Medical from 2008 to 2009, and finished his law degree in 2011. He served as President of the National Association of Patent Practitioners from 2011 until 2013, when he was admitted to the California Bar. A former NASCAR driver, he resides in Oceanside with his wife and daughter.
Campaign Ads
Remember Me ( + Applegate)
campaign ad From Doug Applegate
Unfit For Command ( + Applegate)
Knockout ( - Chavez)
ie ad From DCCC
Lavish ( - Harkey)
ie ad From House Majority PAC
Time and Again ( - Harkey)
ie ad From Independence USA PAC
Gas Mask ( - Harkey)
ie ad From League of Conservation Voters
Out For Herself ( - Harkey)
ie ad From Priorities USA
The Truth About Diane Harkey ( - Harkey)
campaign ad From Mike Levin
Tax Fighter ( + Harkey)
campaign ad From Diane Harkey
Different ( + Jacobs)
campaign ad From Sara Jacobs
Inspired ( + Jacobs)
My Daughter ( + Jacobs)
Something Big ( + Jacobs)
Will ( + Jacobs)
ie ad From Women Vote!
New Generation ( + Jacobs)
Tough Challenges ( + Jacobs)
Rigged ( + Kerr)
campaign ad From Paul Kerr
Medicare for All ( + Kerr)
American Dream ( + Kerr)
Line in the Sand ( + Kerr)
Reckless ( + Kerr)
Garage ( + Kerr)
Predatory ( - Levin)
Adam Schiff ( + Levin)
Immigration ( + Levin)
Clean Energy ( + Levin)
Planned Parenthood ( + Levin)
No Toll Road ( + Levin)
San Onofre ( + Levin)
Comparative ( + Levin)
Darrell Issa R-Inc 84626 50.82% ✔
Doug Applegate D 75808 45.52% ✔
Ryan Glenn Wingo NPP 6087 3.66%
Darrell Issa R-Inc 155888 50.26% ✔
Doug Applegate D 154267 49.74%
This is an ordinarily safe Republican district, although the GOP's voter registration advantage narrowed from 14% in 2012 to just 6.59% this year. Nevertheless, Republican incumbent Darrell Issa has consistently polled at or around 60% and defeated his opponents by margins of around 20% since the advent of the top two primary, allowing any concerns about the district's structural shift to be swept under the rug. This year's primary results were not to be so easily dismissed.
Issa was the top vote getter in the primary, raising $759,000 and finishing with 50.82%. Democrat Doug Applegate put together a credible campaign, raising $186,000 and taking the number two spot with 45.52%. No Party Preference candidate Ryan Glenn Wingo, a MiraCosta Community College student, reported raising no money and claimed the remaining 3.66%. With Issa receiving just 1.64% over the combined total of his two opponents, there was growing excitement among Democrats salivating at the chance to take down a Congressman they viewed as one of the leading instigators of what they considered to be a series of witch hunts against the Obama Administration during his time as chairman of the House Oversight Committee.
Following the primary, a DCCC-commissioned poll showed Issa and Applegate tied at 43% with 14% undecided. The poll also found GOP nominee Donald Trump with 60% unfavorable/34% favorable ratings and trailing Hillary Clinton 41% to 38% with 21% undecided within the district--Issa became one of the controversial GOP nominee's most fervent supporters. Issa's net worth of over $350 million ranks him as the wealthiest member of Congress by a wide margin, and he maintained a significant cash on hand advantage of $3.7 million to Applegate's $135,000. Beginning in early September, the House Majority PAC, a SuperPAC affiliated with Democratic leadership, launched the first salvos opposing Issa, with the DCCC entering the fray a month later. By the time all the receipts were tallied, nearly $3.7 million in independent expenditures had been logged. The DCCC was by far the biggest spender, deploying $2.3 million to support Applegate along with another $102,000 to oppose Issa. The House Majority PAC added another $791,000 to support Issa and $200,000 opposing Issa. The CA Labor Federation jumped in with an additional $154,000 opposing Issa. Fundraising for both Applegate and Issa rocketed into overdrive, with final campaign statements showing Issa spending a staggering $6.2 million and Applegate spending $1.6 million. Issa resorted to sending out campaign mailers featuring President Obama, thanking him for signing legislation Issa supported, a move Obama derided as 'shameless' in response.
Court records were unearthed in September that proved embarrassing for Applegate's campaign. The records revealed that in 2004 Applegate was accused of stalking his ex-wife. A judge granted two restraining orders and forced Applegate to temporarily surrender his firearms. Additional records were unearthed highlighting a 2000 DUI charge for which Applegate eventually pled guilty to reckless driving.
In late September, the Issa campaign launched www.colonedougapplegateforcongress.com, an attack website targeting Applegate on taxes, trade, and a host of other issues.
On election night, Issa was clinging to a close 3,979 vote lead. As the provisional votes were counted, the results began to narrow, prompting Issa to fire off a fundraising e-mail with the subject line 'I won, But Now the Liberals Are Trying to Steal the Election' and warning of 'thousands of illegal, unregistered voters' tilting the results to his opponent. The race remained uncalled until November 28th, when the AP finally declared Issa the winner. Issa's 1,621 vote victory was the closest House, Assembly, or Senate race in 2016, and the worst performance for any CA GOP incumbent in a district with a Republican registration advantage. Final turnout in the primary was 51.98%, while 80.15% of the district's registered voters cast a ballot for this race in November, the single highest percentage for any House, Assembly, or Senate race in 2016.
View DCCC Research Page: Darrell Issa (R-Inc) NRCC Research Page: Doug Applegate
Issa TV Spots: Our Congressman • Litmus Test
APPLEGATE, DOUGLAS LOREN DEM 0 2,082,204 1,678,302 27,495 33,544 5 12/31/2016 6,700 C00581595
WINGO , RYAN GLENN NPA 0 0 0 0 0 0
ISSA, DARRELL REP 3,750,024 2,890,441 6,311,313 329,152 48,302 0 12/31/2016 0 C00350520
Darrell Issa* (R)
Follow @darrellissa
DARRELL ISSA (pronounced EYE-sah) is of Lebanese descent. Prior to his election to Congress, he founded an electronics manufacturing firm specializing in automobile convenience, audio and security products.
Issa entered the U.S. Army during his senior year in high school and, after attending college on an ROTC scholarship, attained the rank of captain (tank platoon leader). He is a graduate of Siena Heights University in Adrien, Michigan and is currently serving on its Board of Trustees. He and his wife, Katharine, live in Vista and have one son.
Official | Facebook | Twitter | Youtube
Ryan Glenn Wingo (IND)
RYAN G. WINGO (NPP) is a student at MiraCosta Community College running on a fiscally conservative, socially liberal, pro-2nd amendment platform.
Dave Peiser D 25946 28.39% ✔
Noboru Isagawa D 8887 9.72%
Dave Peiser D 64981 39.83%
Three candidates appeared on the June Top Two Primary ballot. The top vote getter by a wide margin was incumbent Republican Darrell Issa with 62%. Democratic businessman Dave Peiser came in a distant second with 28%. Democrat Noboru Isagawa, a retired investment counselor from Laguna Niguel, came in third with 10%. Another Democrat, Johnny Moore from Oceanside, ran a write-in campaign and received 16 votes.
Peiser made a moderate effort, raising and spending about $84,000. But this is a safe Republican district and Issa was easily reelected.
ISAGAWA, NOBORU DEM 0 0 0 0 0 0
MOORE, JOHNNY DEM 0 0 0 0 0 0 0 C00555185
PEISER, DAVE DEM 0 94,235 85,321 8,273 13,000 13,000 12/31/2014 0 C00549212
ISSA, DARRELL REP 1,749,490 3,750,001 1,749,467 3,750,024 5,710 0 12/31/2014 0 C00350520
Dave Peiser (D)
Dave Peiser
Noboru Isagawa (D)
Noboru Isagawa
Jerry Tetalman D 35816 30.68% ✔
Dick Eiden NPP 7988 6.84%
Albin Novinec NPP 1626 1.39%
Jerry Tetalman D 114893 41.84%
When the Citizens Redistricting Commission approved the new district lines, the residences of GOP Reps. Darrell Issa and Brian Bilbray were both located in the district, but Bilbray unsuccessfully sought reelection in CD52, losing to Democrat Scott Peters.
Three candidates challenged Issa in the June Top Two Primary. Issa, with 61% of the vote, was easily the top vote getter. Democrat Jerry Tetalman, a real estate agent, former nurse and an anti-war activist, came in second with 31%, thus qualifying him for the November runoff against Issa. The other two, both running as No Party Preference candidates were Dick Eiden (7%), a retired longtime political activist for progressive causes; and Albin Novinec (1%), a realtor and retired Marine.
Tetalman put in a modest effort, raising and spending just under $100,000, but the effort had minimum impact on the election outcome.
TETALMAN, JERRY DEM 0 136,168 132,720 3,448 8,000 13,000 12/31/2012 0 C00500975
EIDEN, RICHARD JOHN NPA 0 30,282 30,282 0 0 0 09/30/2012 0 C00506659
NOVINEC, ALBIN DENNIS NPA 0 0 0 0 0 0 0 C00518704
ISSA, DARRELL REP 386,002 2,478,710 1,115,222 1,749,490 10,817 0 12/31/2012 0 C00350520
Jerry Tetalman (D)
Jerry Tetalman
Dick Eiden (IND)
Dick Eiden
Albin Novinec (IND)
Albin Novinec
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 5,790
|
\section*{Introduction}
The temperature anisotropies of the cosmic microwave background offer a unique window towards the physics
of the early Universe and for the understanding of the large-scale structures.
Current observations of the temperature anisotropies power spectrum, \( C_{\ell } \),
point toward the existence of a well localized first acoustic peak\cite{firstacousticpeak}.
If this result is confirmed by the next generation of CMB experiments, it supports
models of large-scale structure formation from adiabatic scalar fluctuations
at the expense of models of topological defects and more particularly of cosmic
strings \cite{Kibble}\cite{stringCl}. Furthermore, the shape and normalization
of the local matter density power spectrum, \( P(k) \), is also in bad agreement
with the CMB data for such models \cite{Albrecht}. This suggests that \emph{only}
a small fraction of the large-scale inhomogeneities might be due to topological
defects. However, recent studies have shown that in realistic models of inflation
cosmic string formation seems quite natural at the end of the inflationary period:
it is a natural outcome in Super-Symmetry inspired scenario\cite{Deffayet};
it can also be obtained during a pre- or reheating process \cite{Preheat}.
The effects of cosmic strings on the last scattering surface temperature map
have been described by Kaiser \& Stebbins \cite{KSeffect}. If a cosmic string
is moving against an homogeneous surface of uniform temperature, the energy
of the deflected photon \cite{Vilenkin} is enhanced or reduced (the photons
are then blueshifted or redshifted) depending on whether the photon is passing
behind or ahead the moving string, a mechanism through which temperature anisotropies
are generated. The aim of this paper is to explore the possibilities of having
similar effects for the polarization properties. Obviously, if the background
surface is unpolarized, the deflected photons remain unpolarized and no effects
can be observed. However if the background sky is polarized then the polarization
pattern is affected through lens effects and, in particular a geometrical deformation
can naturally induce \( B \)-type polarization out of the \( E \) component.
This mechanism has been described for the large-scale structures \cite{LSS-BPol}
and recognized as a major source of \( B \)-type small-scale CMB polarization.
We are interested here in the case of cosmic strings for which this effect can
be easily investigated and visualized. Note that, by doing so, we neglect a
possible coupling with an axionic field associated with the string that could
induce significant photon-string non-gravitational couplings to a finite distance\cite{HarveyNaculich}.
The background model of large-scale structure formation of this paper is therefore
inflation driven adiabatic fluctuations with a few cosmic strings that may have
survived from a late time phase transition although with a significant linear
energy density. Note that what we are describing here is a secondary effect
from a perturbation theory point of view in the sense that it is quadratic in
the metric perturbation: it is a coupling between the local gravitational potential
and the potential on the last-scattering surface (the Kaiser-Stebbins effect
is also a secondary effect). We do not attempt to describe the primary anisotropies
induced at the recombination time that have been examined in other studies\cite{StringCMBPol}.
In sect. 1 we examine in detail the effects induced by straight strings and
by circular loops. Sect. 2 contains the result of simulations of the effect.
Then, in sect. 3, we estimate the detectable amplitude of the \( B \) component
in the case of a straight string driven deformation.
\vspace{0.4cm}
\section{Cosmic string lens effect}
In inflationary scenario, at any given scale, scalar perturbations give birth
to a scalar polarization pattern -- that is to say, to \( E \)-type polarization
-- whereas tensor modes, that can induce both \( E \) and \( B \)-type polarization,
contribute only at very large scale \cite{noBpol}. This result accounts for
the symmetry of the fluctuations. It implies that at small scales, the pseudo-scalar
\( B \) field,
\begin{equation}
\label{eqB}
B\equiv \Delta ^{-1}[(\partial x ^{2}-\partial y ^{2})\, U-2\partial x \partial y \, Q],
\end{equation}
defined\footnote{%
Throughout the paper we work in the small angular scale limit. See \cite{AllSkyPolarization}
for a general discussion of these properties.
} from the Stokes parameters \( Q \) and \( U \) is zero. The polarization
field is then entirely defined by the scalar \( E \) field,
\begin{equation}
E\equiv \Delta ^{-1}[(\partial x ^{2}-\partial y ^{2})\, Q+2\partial x \partial y \, U].
\end{equation}
Since the polarization vector is parallel transported along the geodesics, a
gravitational lens affects the polarization simply by displacing the apparent
position of the polarized light source\cite{Schneider}. In other words, the
observed Stokes parameters \( \hat{Q} \) and \( \hat{U} \) are given in terms
of the \emph{primary} (i.e. unlensed) ones by:
\begin{equation}
\label{eqDep}
\hat{Q} (\vec{\alpha} )=Q(\vec{\alpha} +\vec{\xi} ),\qquad \hat{U} (\vec{\alpha} )=U(\vec{\alpha} +\vec{\xi} ).
\end{equation}
where \( \vec{\xi} \) is the displacement field at angular position \( \vec{\alpha} \)
(\( \vec{\alpha} \) is a 2D vector that gives the sky coordinates in the small
angle limit). The displacement, \( \vec{\xi} \), is given by the integration of
the gravitational potential along the line-of-sights. We will assume in this
paper that the only potential acting as lens is the cosmic string potential.
It obviously depends on the shape, equation of state and dynamics of the string.
Putting (\ref{eqDep}) in (\ref{eqB}), we can write a general expression of
\( \Delta \hat{B} \) in presence of lenses;
\begin{equation}
\begin{array}{l}
\Delta \hat{B} (\vec{\alpha} )=-2Q_{,ij}(\vec{\alpha} +\vec{\xi} )(\delta ^{i}_{x}+\xi _{,x}^{i})(\delta _{y}^{j}+\xi ^{j}_{,y})\\
-2Q_{,i}(\vec{\alpha} +\vec{\xi} )\xi _{,xy}^{i}\\
+U_{,ij}(\vec{\alpha} +\vec{\xi} )\\
\: \: \: \times [(\delta _{x}^{i}+\xi _{,x}^{i})(\delta _{x}^{j}+\xi _{,x}^{j})-(\delta _{y}^{i}+\xi _{,y}^{i})(\delta _{y}^{j}+\xi _{,y}^{j})]\\
+U_{,i}(\vec{\alpha} +\vec{\xi} )(\xi _{,xx}^{i}-\xi ^{i}_{,yy}).
\end{array}
\end{equation}
There are no reasons for the displacement field to preserve a non-zero \( B \)-type
cosmic microwave background polarization simply because the two scalar field composition (one being
the primary scalar perturbation, the other the line-of-sight gravitational potential)
breaks the parity invariance.
For illustration we examine here explicitl{\small y} the two special cases of
a straight cosmic string and of a circular cosmic string, both of them in a
plane orthogonal to the line-of-sight.
\subsection{The case of a straight string}
Let us assume that a straight string is aligned along the \( y \)-axis. Then
the displacement is uniform at each side of the string. The deflection angle,
\( \alpha =4\pi G\mu \) \cite{Vilenkin} (where \( G \) is the Newton constant
and \( \mu \) the string linear energy density) induces a displacement given
by,
\begin{equation}
\xi _{x}=\pm \xi _{0},\quad \xi _{0}=4\pi G\mu \frac{{\mathcal{D}}_{\textrm{CMB},\textrm{string}}}{{\mathcal{D}}_{\textrm{string}}},
\end{equation}
the sign depending on which side of the string one observes; the displacement
along \( y \) is obviously 0. \( {\mathcal{D}}_{\textrm{CMB},\textrm{string}} \)
and \( {\mathcal{D}}_{\textrm{string}} \) are the cosmological angular distances
between, respectively, the last scattering surface and the string, and the string
and the observer. In the following, we will assume that we are in the most favorable
case for detection, when the ratio of the distance is about unity, hence removing
any geometrical dependence on the cosmological parameters. Then, the string
lays at equal distance between the last scattering surface and the observer\footnote{%
this means a redshift of 3 for an Einstein-de Sitter universe.
}. For a \( G\mu \) around \( 10^{-6} \) \cite{Vilenkin}, the typical expected
displacement is about less than 10 arc seconds.
We can write the expression of the Stokes parameters,
\begin{eqnarray}
\hat{Q} (x,y) & = & Q(x-\xi _{0},y)\, \theta (x-x_{0})\nonumber \\
& + & Q(x+\xi _{0},y)\, \left( 1-\theta (x-x_{0})\right) ,
\end{eqnarray}
where \( \theta \) is the step function, \( x_{0} \) is the position of
the string. The same expression holds for \( \hat{U} \). Since the primary polarization
map is \( B \) free, the Laplacian of the observable \( B \) field is finally
given by,
\begin{eqnarray}
\Delta \hat{B} (\vec{\alpha} ) & = & \delta (x-x_{0})\left( \left| U_{,x}\right| _{\! -}^{\! +}-2\left| Q_{,y}\right| _{\! -}^{\! +}\right) \nonumber \\
& + & \delta '(x-x_{0})\left| U\right| _{\! -}^{\! +},\label{Bobsstraight}
\end{eqnarray}
where we define
\begin{equation}
\left| X\right| _{\! -}^{\! +}\equiv \hat{X}(x_{0}^{+})-\hat{X}(x_{0}^{-})=X(x_{0}-\xi _{0})-X(x_{0}+\xi _{0}).
\end{equation}
One can note in Eq. (\ref{Bobsstraight}) that the effect is entirely due to
the discontinuity induced by the string on the polarization map. Furthermore,
the \( B \) component of the polarization will only be non-zero on the string
itself. Obviously, the efficiency with which such an effect will be observed
depends on the angular precision of the detectors as we discuss later.
\subsection{The case of a circular string}
The case of a collapsing circular string loop allows to complement our analysis
with the effect of the string curvature. As shown in \cite{deLaix} the lens
effect of such a string, when facing the observer, is equivalent to the one
of a static linear mass distribution. The structure of the displacement field
is then simple. Let us consider a loop centered at the origin of our coordinate
and of radius \( \alpha _{l} \). If one observes towards a direction through
the loop, the displacement is nil. Outside the loop, the displacement decreases
as \( {\alpha _{l}}/{\alpha } \). We have
\begin{equation}
\vec{\xi} (\vec{\alpha} )=-2\xi _{0}\frac{\alpha _{l}}{\alpha ^{2}}\vec{\alpha} \: \, \, \, \, \mathrm{for}\: \, \, \, \, \alpha >\alpha _{l}.
\end{equation}
Note that in \( \xi _{0} \), \( \mu \) is an effective quantity that contains
the effects of dynamics as well. Then, \( \hat{Q} \) is
\begin{eqnarray}
\hat{Q} (\vec{\alpha} ) & = & Q\left[ \vec{\alpha} \left( 1-2\xi _{0}\frac{\alpha _{l}}{\alpha ^{2}}\right) \right] \, \theta (\alpha -\alpha _{l})\nonumber \\
& + & Q(\vec{\alpha} )\, \left[ 1-\theta (\alpha -\alpha _{l})\right] .
\end{eqnarray}
Two effects are induced in this case. The first one, also present in the straight
string case, comes from the discontinuity of the polarization field; this is
the \emph{strong lensing} effect. It is due to the existence of a critical region
in the source plane where objects can have multiple images (two in this case,
but it can be more in general \cite{deLaix}). The second one is a \emph{weak
lensing} effect simply due the deformation of the source plane; it will be small
compared to the other. This latter effect is investigated in more detail in
\cite{CestMoi}. We expect these two effects to be present for any string model.
\section{Simulated maps}
We present in Figs. \ref{FiveMinB}-\ref{OneMinE} the simulation results of
circular cosmic string (30 arc minutes radius, \( \xi _{0}=5'' \)). The cosmic microwave background polarization
realization uses \( C_{\ell } \) calculated with a standard \( \Lambda \mathrm{CDM} \)
model. Only scalar primary perturbations are used here since we do not expect
any significant tensor mode at such a small scale; without the string, there
is no signal in the \( B \) component.
The hot and cold (black and white) patches run along the string path. They come
from the \( \delta ' \) term in the eq. (\ref{Bobsstraight}). Its amplitude
is the result of a finite difference in the \( U \) field at distance \( 2\xi _{0} \)\emph{.}
We will see in last section that, at small filtering resolution, this term dominates
the amplitude of \( B \) polarization.
\begin{figure}
{\par\centering \resizebox*{8cm}{8cm}{\includegraphics{DelB.corG.eps}} \par}
\vspace{.3cm}
\caption{\label{FiveMinB} \protect\( B\protect \) field for a circular loop crossing
a \protect\( 50'\times 50'\protect \) window. The filter resolution is 5 arc
minutes. At this scale, the \protect\( B\protect \) field is less than \protect\( 1\%\protect \)
the \protect\( E\protect \) one. The very faint patches that can be noticed
above the string are \emph{weak lensing} effect signature (a few percent of
the \emph{strong lensing} effect coming from the critical region). }
\end{figure}
\begin{figure}
{\par\centering \resizebox*{8cm}{8cm}{\includegraphics{DelE.corG.eps}} \par}
\vspace{.3cm}
\caption{\label{FiveMinE} Same as Fig. \ref{FiveMinB} for the \protect\( E\protect \)
field. At this scale the string remains completely ``diluted'' in the \protect\( E\protect \)
field and cannot be seen (the effect is smaller than 1\% of the mean primary
\protect\( E\protect \) signal). }
\end{figure}
It is interesting to note here that the \emph{weak lensing} effect is negligible
at these scales. Besides, even the discontinuity effect is really small at a
5 arc minute angular scale. A 4 times better resolution enhances significantly
the signal (about 10 times). Note also that the hot and cold spots along the
strings have the same linear sizes as the typical peaks of the polarization
field. We expect that this feature and the very peculiar shape of the effect
on the cosmic microwave background polarization maps could help discriminating between this effect
and other secondary polarization sources or foregrounds (lensing from large-scale
structures, dust polarization...) The extraction of a clean cosmic microwave background polarization out
of a signal with foregrounds has been studied\cite{prunetbouchet} at larger
scale. Few is known about contamination of the \( B \) signal at the scale
we are looking at here.
\begin{figure}
{\par\centering \resizebox*{8cm}{8cm}{\includegraphics{DelB.corG2.eps}} \par}
\vspace{.3cm}
\caption{\label{OneMinB} Same as \ref{FiveMinB} with a better resolution (1.2'). The
discontinuity effect is less diluted. The \protect\( B\protect \) field is
now in amplitude about 10\% of the typical \protect\( E\protect \) fluctuations. }
\end{figure}
\begin{figure}
{\par\centering \resizebox*{8cm}{8cm}{\includegraphics{DelE.corG2.eps}} \par}
\vspace{.3cm}
\caption{\label{OneMinE} Same as \ref{FiveMinE} for the 1.2' resolution. The discontinuity
effect is now visible in \protect\( E\protect \). Lens effects induce mode
couplings that create structures at very small scales dominating over primary
structures.}
\end{figure}
\section{Amplitude of a Straight String effect }
We come back to the case of a straight cosmic string. It is easy to estimate
the amplitude of the effect which only consists of a discontinuity. At small
angles we can decompose the \( E \) field in plane wave Fourier modes,
\begin{equation}
E(\vec{\alpha} )=\int \frac{{\mathrm{d}}^{2}l}{2\pi }\tilde{E}(\vec{l} ){\mathrm{e}}^{{\mathrm{i}}\vec{\alpha} .\vec{l} }.
\end{equation}
It is straightforward to write for \textbf{\( B \)}, in Fourier space,
\begin{eqnarray}
\Delta \hat{B} (x,y) & = & 2\int \frac{{\mathrm{d}}^{2}l}{2\pi }\tilde{E}(\vec{l} ){\mathrm{e}}^{{\mathrm{i}}\vec{\alpha} .\vec{l} }\left( {\mathrm{e}}^{{\mathrm{i}}\xi _{0}l_{x}}-{\mathrm{e}}^{-{\mathrm{i}}\xi _{0}l_{x}}\right) \nonumber \\
& & \! \! \! \! \! \! \! \! \! \times \left[ \frac{l_{x}l_{y}}{l^{2}}\delta '(x-x_{0})+{\mathrm{i}}\frac{l_{y}^{3}}{l^{2}}\delta (x-x_{0})\right] .
\end{eqnarray}
This expression makes sense only if convolved with a \emph{test function};
that is to say convolved with a suitable window function. For simplicity, we
assume that our observational device window is described by a Gaussian window
function \( W \) of width \( \alpha _{w} \),
\begin{equation}
W(\vec{\alpha} )=\frac{1}{2\pi \alpha _{w}}{\mathrm{e}}^{-\frac{\alpha ^{2}}{2\alpha _{w}^{2}}},\: \tilde{W}(k_{x},k_{y})={\mathrm{e}}^{-\frac{\alpha _{w}^{2}\, (k_{x}^{2}+k_{y}^{2})}{2}}.
\end{equation}
Then we have,
\begin{eqnarray}
\Delta \hat{B} _{W}(x,y) & = & 2\int \frac{{\mathrm{d}}^{2}l}{2\pi }\frac{{\mathrm{d}}k}{2\pi }\tilde{E}(\vec{l} ){\mathrm{e}}^{{\mathrm{i}}\left[ x_{0}(l_{x}-k)+xk+yl_{y}\right] }\nonumber \\
& & \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \! \times \tilde{W}(k,l_{y})\left( {\mathrm{e}}^{{\mathrm{i}}\xi _{o}l_{x}}-{\mathrm{e}}^{-{\mathrm{i}}\xi _{o}l_{x}}\right) \left[ {\mathrm{i}}\frac{l_{x}l_{y}k}{l^{2}}+{\mathrm{i}}\frac{l_{y}^{3}}{l^{2}}\right] .
\end{eqnarray}
The r.m.s. of \( \Delta \hat{B} _{W} \) can then be expressed as a function
of the \( E \) power spectrum \( C_{E}(l) \),
\begin{eqnarray}
\left\langle \left( \Delta \hat{B} _{W}\right) ^{2}\right\rangle & = & 2{\mathrm{e}}^{-\frac{(x-x_{o})^{2}}{\alpha _{w}^{2}}}\int \frac{{\mathrm{d}}^{2}l}{\pi ^{3}}C_{E}(l)\sin ^{2}(\xi _{0}l_{x})\nonumber \label{test} \\
& & \! \! \! \! \! \! \! \! \! \! \! \! \! \times {\mathrm{e}}^{-l_{y}^{2}\alpha _{w}^{2}}\left[ \frac{l_{x}^{2}l_{y}^{2}(x-x_{0})^{2}}{l^{4}\alpha _{w}^{6}}+\frac{l_{y}^{6}}{l^{4}\alpha _{w}^{2}}\right] .\label{DBobsW}
\end{eqnarray}
\( C_{E}(l), \) has a natural cutoff due to the Silk damping scale, \( l_{\textrm{damp }} \)(\( 1/l_{{\mathrm{damp}}}\sim 10' \)),
a scale much bigger than the induced displacement. Therefore, we can replace
\( \sin ^{2}(\xi _{0}l_{x}) \) by its expansion \( \xi _{0}^{2}l_{x}^{2} \).
Then the amplitude of the effect grows like \( \xi _{0} \). Besides, if the
size of the window is smaller than the typical scale of \( \Delta E \) structures,
\( \alpha _{w}\ll \alpha _{\textrm{peaks}} \)with \( \alpha _{\textrm{peaks}}\sim 10^{-3} \),
we have \( \exp (-l_{y}^{2}\alpha _{w}^{2})\sim 1 \). And from Eq. (\ref{DBobsW})
we can then calculate:
\begin{eqnarray}
\begin{array}{l}
\frac{\left\langle \left( \Delta \hat{B} _{w}(x=x_{0}\pm \alpha _{w},y)\right) ^{2}\right\rangle ^{1/2}}{\left\langle \left( \Delta E\right) ^{2}\right\rangle ^{1/2}}=\frac{1}{4\sqrt{\pi \mathrm{e}}}\frac{\xi _{0}}{\alpha _{w}}\sqrt{5+8\frac{\alpha ^{2}_{\textrm{peaks}}}{\alpha ^{2}_{w}}}.
\end{array}\label{eqapprox}
\end{eqnarray}
The distance to the string, \( x=x_{0}\pm \alpha _{w} \), have been chosen
to give a realistic account of the effect (in the simulation this correspond
almost to the peak of the hot and cold patches). Fig.\ref{approx} shows these
results for a \( \Lambda \mathrm{CDM} \) model -- the only dependence with
the cosmological parameters appears in the position of the polarization peaks
which depends essentially on the global curvature of the Universe. Our approximation
is not exact at 5', but is fairly accurate at 1'. Numerically, we found that,
at 5' resolution, the amplitude of the effect evolves like \( \sim 325\, \xi _{0} \)
at the position \( x=x_{0}\pm \alpha _{w} \); it is \( \sim 1.5\times 10^{-2} \)
when \( \xi _{0}=10'' \). The slope at small \( \alpha _{w} \) is due to the
\( \alpha ^{2}_{\textrm{peaks}}/\alpha ^{2}_{w} \) in eq. (\ref{eqapprox})
that is to say to the \( \delta ' \) in eq. (\ref{Bobsstraight}). This is
in good agreement with the simulations that suggested an effect of this order
of magnitude dominated, at small \( \alpha _{w} \), by the finite difference
in \( U \) field at \( 2\xi _{0} \) scale.
\begin{figure}
{\par\centering \resizebox*{8cm}{!}{\includegraphics{newrapBE.eps}} \par}
\caption{\label{approx} A comparison of the exact computation of \protect{}}
\( \sqrt{\langle \Delta B^{2}\rangle /\langle \Delta E^{2}\rangle }\protect \)
(dashed line) at \protect\( x=x_{0}\pm \alpha _{w}\protect \) and \protect\( \xi _{0}=10''\protect \)
and its approximation (see eq. \ref{eqapprox}). The amplitude of the effect
is about 10\% at 1' scale, about 1.5\% at 5'. The agreement between the exact
amplitude and eq. (\ref{eqapprox}) weakens for scales above \protect\( 2\sim3' \protect \).
\end{figure}
\section*{Conclusions}
The so far the planned cosmic microwave background experiments will have, at their very best, a 5'
resolution \cite{Plancketal}. We showed that at this angular scale and assuming
that it exists a string with \( \xi _{0}=10'' \) (which is perhaps somewhat
enthusiastic), we can expect a signal in \( B \)-type polarization with an
amplitude of about 1\% the signal in \( E \)-type polarization. This signal
is too weak to be actually detected. Beside, the weak lensing is expected to
induce \( B \)-type polarization at the same scale and is likely to hide any
string effect. Improving the resolution of the detector however will dramatically
enhance the detectability of a string effect; at 1' scale, we expect indeed
to gain a factor 10 in the amplitude of the effect that should make the detection
possible. The detection of cosmic strings through this effect (or through the
Kaiser-Stebbins effect which also requires a good angular resolution) will probably
be possible only with the post Planck generation of instruments\cite{PostPlanck}.
\acknowledgements
The authors would like to thank P. Peter, L. Kofman and especially J.P. Uzan
for encouraging discussions and comments on the manuscript. We also thank A.
Riazuelo for the use of his Boltzmann CMB code.
|
{
"redpajama_set_name": "RedPajamaArXiv"
}
| 9,381
|
\section{Introduction}\label{sec:Intro}
Given $\omega \in S^1\setminus\{1\}$, the Levine-Tristram signature and nullity of a link~$L$ are given
by the signature and nullity of $(1-\omega)A+(1-\overline{\omega})A^T$, where
$A$ is any Seifert matrix for $L$~\cite{Levine69, Tristram}. For
a~$\mu$-colored link, i.e.\ an oriented link~$L$ in~$S^3$ whose components are
partitioned into~$\mu$ sublinks~$L_1, \dotsc , L_\mu$, the
Levine-Tristram signature and nullity have been generalized to multivariable functions
\[ \sigma_L, \eta_L \colon \mathbb{T}^\mu \to \mathbb{Z}, \]
where $ \mathbb{T}^\mu$ denotes the set $(S^1 \setminus \{1\})^\mu$~\cite{CimasoniFlorens}.
Apart from their $3$-dimensional definition using C-complexes~\cite{Cooper, CimasoniFlorens},
a $4$-dimensional interpretation in the smooth setting has been given by
Cimasoni-Florens using branched covers and the $G$-signature theorem
for elements of $\mathbb{T}^\mu$ of finite order~\cite[Theorem 6.1]{CimasoniFlorens}.
We focus on another interpretation by Viro~\cite{Viro09} using directly the complements of surfaces bounding the link in the $4$-ball.
We shall always work in the topological (locally flat) category.
Let $F$ be a union~$F_1 \cup \dots \cup F_\mu \subset D^4$ of properly embedded locally flat surfaces
that only intersect each other transversally in double points and whose boundary is
a colored link~$L \subset S^3$.
Since the first homology group of the exterior~$W_F$ of such a \emph{colored bounding surface} $F \subset D^4$ is free abelian,
any choice of $\omega \in \mathbb{T}^\mu$ gives rise to a coefficient system $H_1(W_F;\mathbb{Z}) \to U(1)$
and thus to a twisted signature~$\operatorname{sign}_\omega(W_F)$.
The twisted signature~$\operatorname{sign}_\omega (W_F)$ is independent of the
colored bounding surface $F$
and defines an invariant of colored links~\cite[Section 2.3]{Viro09}.
Building on \cite[Theorem 1.3]{ConwayFriedlToffoli}, we give a proof to the following statement of~\cite[Section 2.5]{Viro09} in Proposition~\ref{prop:ColorSignature}. The corresponding result for the nullity is proven in Proposition~\ref{prop:NullityNoCcomplex}.
\begin{proposition}\label{prop:ViroCimasoniFlorens}
Let $L$ be a $\mu$-colored link and let $\omega \in \mathbb{T}^\mu$. For any colored bounding surface $F$, the twisted signature $\operatorname{sign}_\omega(W_F)$ coincides with the multivariable signature $\sigma_L(\omega)$.
\end{proposition}
Cimasoni and Florens showed that the signature $\sigma_L(\omega)$ is invariant under smooth link concordance~\cite[Theorem 7.1]{CimasoniFlorens} for those $\omega=(\omega_1, \dots, \omega_\mu) \in \mathbb{T}^\mu$
that satisfy the following condition: there exists a prime $p$ such that for all $i$, the order of $\omega_i$ is a power of $p$. For the same subset of $\mathbb{T}^\mu$, they provide lower bounds on the genus and on the number of double points of smooth surfaces in $D^4$ bounded by a colored link $L$~\cite[Theorem 7.2]{CimasoniFlorens}, extending the
Murasugi-Tristram inequality~\cite{Murasugi,Tristram} to the multivariable setting.
Building on the approach used in~\cite{NagelPowell} to study
concordance invariance of the Levine-Tristram signature, we consider the subset $\mathbb{T}_!^\mu$ of $\mathbb{T}^\mu$ given by those $\omega$'s which are not roots of any polynomial $p\in \mathbb{Z}[t_1^{\pm 1},\dots ,t_\mu^{\pm 1}]$ whose evaluation on $(1,\dotsc, 1)$ is invertible. This set includes the elements considered by Cimasoni and Florens~\cite[Section 7]{CimasoniFlorens}; see Proposition~\ref{prop:TPContainedT!}.
A \emph{colored cobordism} between two $\mu$-colored links $L$ and $L'$ is a collection of properly embedded locally flat surfaces~$\Sigma = \Sigma_1 \cup \dots \cup \Sigma_\mu$
in $S^3 \times [0,1]$ which have the following properties: the surfaces only intersect each other transversally in double points, each surface~$\Sigma_i$ has boundary~$L_i \sqcup -L_i'$, and each connected component of $\Sigma_i$ has at least one boundary component in $S^3 \times \{0\}$
and one in $S^3 \times \{1\}$.
Our first main result gives bounds on the Euler characteristic and on the number of double points in such a cobordism, generalizing Powell's treatment of a genus bound for the Levine-Tristram signature~\cite{Powell}.
\begin{theorem}\label{thm:GenusIntro}
Let $\Sigma = \Sigma_1 \cup \dots \cup \Sigma_\mu$ be a colored cobordism between two $\mu$-colored links $L$ and $L'$. If $\Sigma$ has $c$ double points, then
\[|\sigma_L(\omega)-\sigma_{L'}(\omega)| + |\eta_L(\omega)-\eta_{L'}(\omega)|
\leq \sum_{i=1}^{\mu} -\chi(\Sigma_i) + c \]
for all $\omega\in \mathbb{T}_!^\mu$.
\end{theorem}
Two $\mu$-colored links~$L$ and~$L'$ are \emph{concordant} if
there exists a $\mu$-colored cobordism between $L$ and $L'$ that has no
intersection points and consists exclusively of annuli.
As an application of Theorem~\ref{thm:GenusIntro}, we extend two different results of Cimasoni and Florens to the topological setting and to a bigger set of values of the variable $\omega$. The first result relaxes the conditions under which the signature and nullity are an obstruction to colored concordance~\cite[Theorem $7.1$]{CimasoniFlorens}. See Corollary~\ref{cor:ConcordanceViaGenus} for a proof.
\begin{corollary}
\label{cor:ConcordanceIntro}
The multivariable signature and nullity are topological concordance invariants at all $\omega \in \mathbb{T}^\mu_!$
\end{corollary}
As a second consequence of Theorem~\ref{thm:GenusIntro}, we obtain a generalization of~\cite[Theorem $7.2$]{CimasoniFlorens}; the latter result being itself an extension of the Murasugi-Tristram inequality~\cite{Murasugi, Tristram}. In what follows, we denote the first Betti number of a surface~$F$ by $\beta_1(F)$. We refer the reader to
Corollary~\ref{cor:CimasoniFlorens72} for a proof of the next result and to Remark~\ref{rem:Genus}
for a comparison with a similar result obtained by Viro~\cite[Section $4$]{Viro09}.
\begin{corollary}\label{cor:CimasoniFlorens72Intro}
Let $F=F_1 \cup \cdots \cup F_\mu$ be a colored bounding surface for a $\mu$-colored link $L$
such that $F_1,\dots, F_\mu$ have a total number of $m$ connected components,
intersecting in $c$ double points.
Then, for all $\omega \in \mathbb{T}_!^\mu$, we have
\[ |\sigma_L(\omega)|+|\eta_L(\omega)-m+1| \leq \sum_{i=1}^\mu \beta_1(F_i) +c. \]
\end{corollary}
The last part of this article deals with $0.5$-solvable cobordisms.
This notion was defined by Cha~\cite{Cha} giving a relative version of the notion of Cochran-Orr-Teichner's $n$-solvability~\cite{CochranOrrTeichner}.
We refer to Section~\ref{sec:Solvable} for the precise definition of $n$-solvable cobordant links, however note that abelian link invariants are not expected to distinguish $0.5$-solvable cobordant links.
For instance, if two links are $1$-solvable cobordant,
then their first non-zero Alexander polynomials agree up to norms and their Blanchfield pairings are Witt
equivalent~\cite[Theorems $B$ and $C$]{Kim}. Our final result is the corresponding
statement for the multivariable signature and nullity.
\begin{theorem}\label{thm:SolvableNullitySignature}
If two $\mu$-colored links $L$ and $L'$ are $0.5$-solvable cobordant, then
\[\eta_{L}(\omega)=\eta_{L'}(\omega) \quad \text{and} \quad \sigma_{L}(\omega)=\sigma_{L'}(\omega)\]
for all $\omega \in \mathbb{T}^m_!$.
\end{theorem}
Since concordant links are $n$-solvable cobordant for all $n$, Theorem~\ref{thm:SolvableNullitySignature} can be viewed as a vast refinement of Corollary~\ref{cor:ConcordanceIntro}.
\begin{remark}\label{rem:WhitneyGropeFormulation}
Note that the notion of $n$-solvable cobordism is related to Whitney tower/grope concordance. See \cite{Cha}
for the definition of these notions. In particular, using~\cite[Corollary 2.17]{Cha},
Theorem~\ref{thm:SolvableNullitySignature} implies that the multivariable signature and
nullity are invariant under height~$3$ Whitney tower/grope concordance.
\end{remark}
\begin{remark}
The \emph{Alexander nullity} $\beta(L)$ of a colored link $L$ is the $\mathbb{Z}[t_1^{\pm 1},\ldots,t_\mu^{\pm 1}]$-rank of its Alexander module. Kim~\cite[Theorem C]{Kim} showed that the Alexander nullity is invariant under $1$-solvable cobordisms. In Proposition~\ref{prop:Invariance}, we improve this result by proving invariance under $0.5$-solvable cobordisms. Note also that this statement does not follow from the invariance of the nullity function $\eta_L(\omega)$ since $\beta(L)=\operatorname{min}\lbrace \eta_L(\omega) \ | \ \omega \in \mathbb{T}^\mu \rbrace$~\cite[Proposition 2.3]{CimasoniConwayZacharova}.
\end{remark}
\medbreak
This paper is organized as follows. Section~\ref{sec:Prelim} introduces the necessary
background material on twisted homology and signatures.
Section~\ref{sec:4dDef} introduces the colored signature and nullity and
proves Theorem~\ref{thm:GenusIntro} together with its applications.
Section~\ref{sec:Plumbed} introduces plumbed $3$-manifolds and proves some results
about their signature defects. These form the technical foundation for the proof of Theorem~\ref{thm:SolvableNullitySignature}, which is the subject of Section~\ref{sec:Solvable}.
\subsection*{Acknowledgments.}
We thank Christopher Davis for sharing his insights of $0.5$--solvability,
which helped us immensely in navigating through the technicalities of Section~\ref{sec:Solvable}.
The authors wish to thank Ana Lecuona, David Cimasoni, Vincent Florens, Stefan Friedl, Paul Kirk, Andrew Nicas and Mark Powell for helpful discussions.
We are indebted to the referees for their detailed and helpful suggestions.
AC thanks UQ\`AM for its hospitality and was supported by the NCCR SwissMap funded by the Swiss FNS.
MN is grateful for his stay at the University of Regensburg funded by the SFB 1085, which started the project.
ET was supported by the GK ``Curvature, Cycles and Cohomology'',
funded by the Deutsche Forschungsgemeinschaft (DFG).
MN was supported by a CIRGET postdoctoral fellowship, and by a Britton postdoctoral fellowship from McMaster University.
\section{Twisted homology, signatures and concordance roots}\label{sec:Prelim}
In Section~\ref{sub:Twisted}, we set up the conventions on twisted homology.
In Section~\ref{sub:IntersectionForm},
we review twisted intersection forms, which leads us to discuss the additivity of the signature in Section~\ref{sub:NovikovWall}.
In Section~\ref{sub:ConcordanceRoots}, we generalize the concept of Knotennullstellen~\cite{NagelPowell}.
\subsection{Twisted homology}\label{sub:Twisted}
We start by fixing some notation and conventions regarding twisted homology. After that, we review two universal coefficient spectral sequences and apply them to a particular abelian coefficient system.
\medbreak
Let $X$ be a connected CW-complex and let $Y \subset X$ be a possibly empty subcomplex.
Denote by $p \colon \widetilde{X} \to X$ the universal cover of $X$ and set
$\widetilde{Y}:=p^{-1}(Y)$, so that $C(\widetilde{X},\widetilde{Y})$ is a
left $\mathbb{Z}[\pi_1(X)]$-module. Given a ring $\mathbb{F}$ with involution, we can consider
homomorphisms~$\phi \colon \mathbb{Z}[\pi_1(X)] \to \mathbb{F}$ of rings with involutions, which
means that $\phi(g^{-1}) = \overline {\phi(g)}$ for all $g \in \pi_1(X)$. Such a homomorphism~$\phi$
turns $\mathbb{F}$ into a $(\mathbb{F},\mathbb{Z}[\pi_1(X)])$-bimodule, which we denote by~$R$. We may consider the left $\mathbb{F}$--modules
\begin{align*}
H_*(X,Y;R)&=H_* \left(R \otimes_{\mathbb{Z}[\pi_1(X)]} C(\widetilde{X},\widetilde{Y}) \right), \\
H^*(X,Y;R)&=H_*\left( \text{Hom}_{\text{right-}\mathbb{Z}[\pi_1(X)]}(C(\widetilde{X},\widetilde{Y})^\text{tr},R) \right),
\end{align*}
where the \emph{transposed module}~$M^\text{tr}$ of an $S$-module~$M$ has the same underlying abelian
group with multiplication flipped using the involution.
Our main examples of twisted homology and cohomology modules will come from the following examples.
\begin{example}\label{ex:Ccoefficients}
Let $\varphi \colon \pi_1(X) \to \mathbb{Z}^\mu=\langle t_1,\dots,t_\mu
\rangle$ be a homomorphism and let~$\omega = (\omega_1,\dots,\omega_\mu) \in \mathbb{T}^\mu \subset \mathbb{C}^\mu$.
Composing the induced map
$\mathbb{Z}[\pi_1(X)] \to \mathbb{Z}[\mathbb{Z}^\mu] $ with the
map~$\mathbb{Z}[\mathbb{Z}^\mu] \xrightarrow{\alpha} \mathbb{C}$ which evaluates $t_i$ at $\omega_i$, produces a morphism~$\phi \colon \mathbb{Z}[\pi_1(X)] \to \mathbb{C}$ of rings with involutions.
In turn, $\phi$ endows $\mathbb{C}$ with a $(\mathbb{C},\mathbb{Z}[\pi_1(X)])$-bimodule structure.
To emphasize the choice of $\omega$, we shall write $\mathbb{C}^\omega$ instead of $\mathbb{C}$.
Since $\mathbb{C}^\omega$ is a $(\mathbb{C},\mathbb{Z}[\pi_1(X)])$-bimodule, we may
consider the complex vector spaces $H_k(X,Y;\mathbb{C}^\omega)$ and
$H^k(X,Y;\mathbb{C}^\omega)$.
Consider the ring $\Lambda_S=\mathbb{Z}[t_1^{\pm 1},\dots,t_\mu^{\pm 1},(1-t_1)^{-1},\dots,(1-t_\mu)^{-1}]$
and observe that since none of the $\omega_i$ are equal to $1$, the map $\phi\colon \mathbb{Z}[\pi_1(X)] \to \mathbb{C}$ factors through a
map $\Lambda_S \to \mathbb{C}$. In particular, the homology
$\mathbb{C}$-vector space~$H_k(X,Y;\mathbb{C}^\omega)$ is the $k$--th homology of the chain complex~$\mathbb{C} \otimes_{\Lambda_S} C(X,Y;\Lambda_S)$.
\end{example}
\begin{example}
\label{ex:FieldCoefficients}
Let $\mathbb{Q}(\mathbb{Z}^\mu)$ denote the field of fractions of $\Lambda:=\mathbb{Z}[t_1^{\pm 1},\ldots,t_\mu^{\pm 1}]$. Given a homomorphism $\varphi \colon \pi_1(X) \to \mathbb{Z}^\mu=\langle t_1,\dots,t_\mu \rangle$, the canonical map $\Lambda \to \mathbb{Q}(\mathbb{Z}^\mu)$ endows $\mathbb{Q}(\mathbb{Z}^\mu)$ with a $(\mathbb{Q}(\mathbb{Z}^\mu),\mathbb{Z}[\pi_1(X)])$--bimodule structure. In particular, we may consider the $\mathbb{Q}(\mathbb{Z}^\mu)$-vector spaces $H_k(X,Y;\mathbb{Q}(\mathbb{Z}^\mu))$. Observe that since $\mathbb{Q}(\mathbb{Z}^\mu)$ is the field of fractions of both $\Lambda$ and $\Lambda_S$, we deduce that $H_k(X,Y;\mathbb{Q}(\mathbb{Z}^\mu))$ is canonically isomorphic to both $\mathbb{Q}(\mathbb{Z}^\mu) \otimes_{\Lambda} H_k(X,Y;\Lambda)$ and $\mathbb{Q}(\mathbb{Z}^\mu) \otimes_{\Lambda_S} H_k(X,Y;\Lambda_S)$.
\end{example}
Most of our main results will involve either the coefficient system $R=\mathbb{C}^\omega$ or the coefficient system $R=\mathbb{Q}(\mathbb{Z}^\mu)$. When we mention that a statement holds for both coefficients systems, it will always be understood that when $R=\mathbb{C}^\omega$ (resp.~$R=\mathbb{Q}(\mathbb{Z}^\mu)$) we take $\mathbb{F}=\mathbb{C}$ (resp. $\mathbb{F}=\mathbb{Q}(\mathbb{Z}^\mu)$).
In order to discuss the relation between homology and cohomology, we introduce
some further notation. First, using the fact that $\phi$ is a morphism of rings
with involution, one can check that
\begin{align*}
\text{Hom}_{\text{right-}\mathbb{Z}[\pi]}(C(\widetilde{X},\widetilde{Y})^\text{tr},R) & \to \text{Hom}_{\text{left-}\mathbb{F}}(R \otimes_{\mathbb{Z}[\pi]}C(\widetilde{X},\widetilde{Y}),\mathbb{F})^\text{tr} \\
f & \mapsto \left( (r \otimes \sigma) \mapsto r \overline{f(\sigma)} \right)
\end{align*}
is a well-defined isomorphism of chain complexes of left $\mathbb{F}$-modules.
The isomorphism of chain complexes induces an evaluation homomorphism
\[ \text{ev} \colon H^k(X,Y;R) \to \text{Hom}_{\text{left-}\mathbb{F}}(H_k(X,Y;R),\mathbb{F})^\text{tr}\]
of left $\mathbb{F}$--modules. This evaluation map is not an isomorphism in general.
Nevertheless, it can be studied using the universal coefficient spectral
sequence~\cite[Theorem 2.3]{Levine77}. For the sake of concreteness, instead of
giving the most general statement, we shall focus on the cases described in Examples~\ref{ex:Ccoefficients} and~\ref{ex:FieldCoefficients}.
\begin{proposition}\label{prop:UCSS}
Let $(X,Y)$ be a CW pair and let $\omega \in \mathbb{T}^\mu$. Suppose $R$ is either~$\mathbb{C}^\omega$ or $\mathbb{Q}(\mathbb{Z}^\mu)$, viewed as a $(\mathbb{F},\mathbb{Z}[\pi_1(X)])$--module. Then, for each $k$, evaluation provides the following isomorphism of left $\mathbb{F}$-vector spaces:
$$ H^k(X,Y;R) \cong \operatorname{Hom}_{\emph{left-}\mathbb{F}}(H_k(X,Y;R),\mathbb{F})^\text{tr}.$$
\end{proposition}
\begin{proof}
There is a spectral sequence with $E_2^{p,q}\cong \operatorname{Ext}_\mathbb{F}^q(H_p(X,Y;R),\mathbb{F})$, which converges to $H^*(X,Y;R)$~\cite[Theorem 2.3]{Levine77}. The result now follows: since $\mathbb{F}$ is a field, the $\operatorname{Ext}$ groups vanish for $q>0$. We also refer to~\cite[Theorem 5.4.4 and Proposition 7.5.4]{ConwayThesis} for further details.
\end{proof}
Given a pair $(X,Y)$, we denote the rank of $H_i(X,Y)$ by $\beta_i(X,Y)$ and the dimension of $H_i(X,Y;R)$ by $\beta_i^{R}(X,Y)$ when $R$ is either $\mathbb{C}^\omega$ or $\mathbb{Q}(\mathbb{Z}^\mu)$.
As an application of Proposition~\ref{prop:UCSS}, we prove the following lemma.
\begin{lemma}\label{lem:DualityUCSS}
Let $\omega \in \mathbb{T}^\mu$, let $R$ be either $\mathbb{C}^\omega$ or $\mathbb{Q}(\mathbb{Z}^\mu)$ and let $W$ be a $4$-dimensional manifold whose boundary decomposes as $\partial W=M \cup_{\partial} M'$,
where $M$ and $M'$ are (possibly empty) connected $3$-manifolds with $\partial M = \partial M'$.
If $W$ is equipped with a homomorphism~$H_1(W;\mathbb{Z}) \to \mathbb{Z}^\mu$,
then $\beta_{4-i}^{R}(W,M)=\beta_i^{R}(W,M')$ for $i=0,1$.
\end{lemma}
\begin{proof}
By duality, $H_{4-i}(W,M;R) \cong H^i(W,M';R)$. Using Proposition~\ref{prop:UCSS}, we deduce that
$H^i(W,M';R) \cong \text{Hom}_{\mathbb{F}}(H_i(W,M';R),\mathbb{F})^\text{tr}$ for $i=0,1$. The result now follows immediately.
\end{proof}
As observed in Example~\ref{ex:FieldCoefficients}, there is a canonical isomorphism of $H_k(X,Y;\mathbb{Q}(\mathbb{Z}^\mu))$ with $\mathbb{Q}(\mathbb{Z}^\mu) \otimes_{\Lambda_S} H_k(X,Y;\Lambda_S)$.
On the other hand, a particular case of the universal coefficient spectral sequence in homology is needed to deal with $\mathbb{C}^\omega$-coefficients; see e.g.~\cite[Chapter 2]{Hillman}.
\begin{proposition} \label{prop:UCSSTor}
Given a CW-pair $(X,Y)$ and $\omega \in \mathbb{T}^\mu$, there exists a spectral sequence
\begin{enumerate}
\item converging to $H_*(X,Y;\mathbb{C}^\omega)$
\item with $E^2_{p,q}\cong \text{Tor}^{\Lambda_S}_{p}(H_{q}(X,Y;\Lambda_S),\mathbb{C}^\omega)$
\item with differentials $d^r$ of degree $(-r,r-1).$
\end{enumerate}
More specifically, there is a filtration
$$ 0 \subset F_n^0 \subset F_n^1 \subset \dots \subset F_n^n=H_n(X,Y;\mathbb{C}^\omega)$$
with $F_n^p/F_n^{p-1} \cong E_{p,n-p}^\infty$.
\end{proposition}
As for cohomology, we provide an easy application of this spectral sequence, to which we shall often refer.
\begin{lemma} \label{lem:UCSSTorExample}
Let $X$ be a connected CW-complex together with a homomorphism $H_1(X;\mathbb{Z}) \to
\mathbb{Z}^\mu = \mathbb{Z}\langle e_1, \dots, e_\mu\rangle$ such that at least one
generator~$e_i$ is in the image.
If $\omega \in \mathbb{T}^\mu$, then
$H_0(X;\mathbb{C}^\omega) = 0$.
Furthermore, $H_1(X;\mathbb{C}^\omega)$ is isomorphic to $\mathbb{C}^\omega \otimes_{\Lambda_S} H_1(X;\Lambda_S)$.
\end{lemma}
\begin{proof}
Using the assumption on the map $H_1(W;\mathbb{Z}) \to \mathbb{Z}^\mu$,
the $\Lambda_S$-module~$H_0(W;\Lambda_S)$ vanishes; see e.g.~\cite[Lemma 2.2]{ConwayFriedlToffoli}).
Thus Proposition~\ref{prop:UCSSTor} immediately implies that $H_0(X;\mathbb{C}^\omega) = 0$. Next, we prove the statement involving $H_1(X;\mathbb{C}^\omega)$. Using the notations of Proposition~\ref{prop:UCSSTor}, the differential $0=\text{Tor}^{\Lambda_S}_2(H_0(X;\Lambda_S),\mathbb{C}^\omega)=E_{2,0} \to E_{0,1}$ is zero. Consequently,~$E_{1,0}^\infty=E_{1,0}^2=0$ and $E_{0,1}^\infty=E_{0,1}^2=\mathbb{C}^\omega \otimes_{\Lambda_S} H_1(X_L;\Lambda_S)$. It follows that $H_1(X_L;\mathbb{C}^\omega)=\mathbb{C}^\omega \otimes_{\Lambda_S} H_1(X_L;\Lambda_S)$, as desired.
\end{proof}
\subsection{Twisted intersection forms and signatures}\label{sub:IntersectionForm}
Here, we review twisted intersection forms. Our main example lies in
the coefficient system introduced in Example~\ref{ex:Ccoefficients}. We
conclude with a short bordism argument showing the vanishing of some signature defects.
\medbreak
Given a compact oriented $n$--dimensional manifold~$W$ and a map $\mathbb{Z}[\pi_1(W)]\to \mathbb{F}$ between rings with involutions. Again, we distinguish the ring~$\mathbb{F}$ from the~$(\mathbb{F},\mathbb{Z}[\pi_1(W)])$--bimodule~$R$. We denote the Poincar\'e duality isomorphisms by $\operatorname{PD} \colon H_k(W,\partial W;R) \cong H^{n-k}(W;R)$ and~$\operatorname{PD} \colon H_k(W;R) \cong H^{n-k}(W,\partial W;R)$. Composing the map induced by the inclusion $(W,\emptyset) \to (W,\partial W)$ with duality and evaluation produces the map
\[ \Phi \colon H_k(W;R) \to H_k(W,\partial W;R) \xrightarrow{\operatorname{PD}} H^k(W;R) \xrightarrow{\text{ev}}\text{Hom}_{\text{left-}\mathbb{F}}(H_k(W;R),\mathbb{F})^\text{tr} .\]
The main definition of this section is the following.
\begin{definition}\label{def:int}
The \emph{$R$-twisted intersection pairing}
\[ \lambda_R \colon H_i(W;R) \times H_i(W;R) \to \mathbb{F} \]
is defined by $\lambda_{R}(x,y)=\Phi(y)(x)$.
\end{definition}
The form $\lambda_R$ is hermitian, but need not be
nonsingular. In particular, the space $\operatorname{im} (H_1(\partial W;R) \to H_1(W;R)) $ is annihilated by $\lambda_R$. We conclude this section by giving a crucial example of this set-up.
\begin{example} \label{ex:DerivedSeries}
Let $W$ be a compact connected oriented $4$-manifold. Set $\pi=\pi_1(W)$ and
let $\pi^{(n)}=[\pi^{(n-1)},\pi^{(n-1})]$ denote its derived series starting at $\pi^{(0)}=\pi$.
The projection $\pi \to \pi/\pi^{(n)}$ gives rise to the
$\mathbb{Z}[\pi/\pi^{(n)}]$-modules $H_k(W;\mathbb{Z}[\pi/\pi^{(n)}])$ and we may consider the
$\mathbb{Z}[\pi/\pi^{(n)}]$-twisted intersection pairing
\[ \lambda_n \colon H_2(W;\mathbb{Z}[\pi/\pi^{(n)}]) \times H_2(W;\mathbb{Z}[\pi/\pi^{(n)}] )
\to \mathbb{Z}[\pi/\pi^{(n)}],\]
as in Definition~\ref{def:int}. Of particular interest to us is the case where~$n=1$ and $\pi/\pi^{(1)}=H_1(W; \mathbb{Z})$ is free abelian of rank $\mu$.
In this case, $\mathbb{Z}[\pi/\pi^{(1)}]$ is nothing but the commutative ring~$\Lambda=\mathbb{Z}[t_1^{\pm 1},\dots,t_\mu^{\pm 1}]$ of Laurent polynomials.
\end{example}
We now consider the twisted intersection form in the setting of Example~\ref{ex:Ccoefficients}.
Let $W$ be a $4$-dimensional manifold with (possibly empty) boundary together with a
map~$\varphi \colon \pi_1(W) \to \mathbb{Z}^\mu=\mathbb{Z}\langle t_1,\dots,t_\mu \rangle$.
Given an element~$\omega \in \mathbb{T}^\mu \subset \mathbb{C}^\mu$,
we equip the ring~$\mathbb{C}$ with the $(\mathbb{C},\mathbb{Z}[\pi_1(W)])$-module structure
described in Example~\ref{ex:Ccoefficients} and consider the
$\mathbb{C}$-vector spaces~$H_k(W;\mathbb{C}^\omega)$.
As in Definition~\ref{def:int}, we may consider the twisted intersection form
\[ \lambda_{\mathbb{C}^\omega} \colon H_2(W;\mathbb{C}^\omega) \times H_2(W;\mathbb{C}^\omega) \to\mathbb{C}.\]
We write $\operatorname{sign}_\omega W = \operatorname{sign} \lambda_{\mathbb{C}^\omega}$ and $\operatorname{sign} W$ for the untwisted
signature~$\operatorname{sign} \lambda_{\mathbb{Q}}$. We will usually be interested in the \emph{signature defect}
\[ \operatorname{dsign}_\omega W := \operatorname{sign}_\omega W-\operatorname{sign} W.\]
\begin{remark}\label{rem:SignatureRemark}
For a smooth closed manifold of even dimension,
the twisted signature coincides with the untwisted one
and hence the signature defect vanishes.
This can be seen by considering the twisted and untwisted Hirzebruch signature
formula~\cite[Theorem 4.7]{Berline92}, which agree if the bundle carries a flat
connection.
\end{remark}
We prove the corresponding result for topological closed $4$-manifolds over $\mathbb{Z}^\mu$
and give a proof, which does not use index theory.
\begin{proposition}\label{prop:bordism}
Let $Z$ be an oriented $4$-manifold with a map $\pi_1(Z) \to \mathbb{Z}^\mu$.
If $Z$ is closed, then
$\operatorname{dsign}_\omega Z =0$ for all $\omega \in \mathbb{T}^\mu$.
\end{proposition}
\begin{proof}
Given a space $X$, recall that the bordism group $\Omega_n(X)$ consists of
bordism classes of pairs $(N,\psi)$, where $N$ is an $n$-dimensional manifold
and $\psi \colon N \to X$ is a map; see~\cite{ConnerFloyd} for details.
Moreover, if $G$ is a group with classifying space $BG$, then $\Omega_n(G)$ is
defined as $\Omega_n(BG)$. Since the choice of the map $\varphi \colon
\pi_1(Z) \to \mathbb{Z}^\mu$ is equivalent to the choice of a homotopy class of
a map~$Z \to T^\mu = B\mathbb{Z}^\mu$, the pair $(Z,\varphi)$ produces an element in
$\Omega_4(\mathbb{Z}^\mu)$. As both the ordinary and the twisted signature vanish on closed oriented $4$-manifold which bound over $\mathbb{Z}^\mu$, for every $\omega\in \mathbb{T}^\mu$, the signature defect gives rise to a well-defined homomorphism
\[\operatorname{dsign}_\omega \Omega_4(\mathbb{Z}^\mu)\to \mathbb{Z}. \]
We want to prove that $\operatorname{dsign}_\omega$ is the trivial homomorphism.
By the Atiyah-Hirzebruch spectral
sequence~\cite[Chapter $1$, Section $7$]{ConnerFloyd}, we have an
isomorphism
\begin{align*}
\Omega_4(\mathbb{Z}^\mu) &\cong \Omega_4(pt) \oplus H_4(T^\mu;\mathbb{Z}) \\
[\psi \colon Z \to T^\mu] &\mapsto [Z \to \text{pt}] \oplus \psi_* [Z].
\end{align*}
It is therefore enough to show that the signature defect vanishes on the elements of $\Omega_4(\mathbb{Z}^\mu)$ corresponding through the above isomorphism to a set of generators of~$\Omega_4(pt)$ and $H_4(T^\mu;\mathbb{Z})$.
It is well known that $\Omega_4(pt)$ is generated
by the class of $\mathbb{C}P^2$. As $\mathbb{C}P^2$ is simply connected,
its twisted signature agrees with the untwisted one and consequently its signature defect also vanishes.
Let us pick a product structure~$T^\mu = (S^1)^\mu$ on the torus.
By the K\"unneth formula, the
abelian group~$H_4(T^\mu; \mathbb{Z})$ is generated
by the fundamental classes of the subtori~$T^4 =(S^1)^4\subset T^\mu$ given
by inclusions of factors. For every homology class $i_*([T^4])\subset H_4(T^\mu; \mathbb{Z})$, the corresponding element in $\Omega_4(\mathbb{Z}^\mu)$ is the cobordism class
$[i \colon T^4 \to T^\mu]$. The ordinary signature of $T^4$ is immediately seen to vanish. To compute the twisted signature, consider the coefficient system~$\mathbb{C}^\omega$ on
$T^4 = T^3 \times S^1$.
As $\omega \in \mathbb{T}^\mu$, this coefficient system is non-trivial on the $S^1$-factor.
Consequently, the twisted chain complex is acylic~\cite[Corollary App.B.B]{Viro09} and
$H_2(T^3 \times S^1; \mathbb{C}^\omega) = 0$. Thus, the twisted signature vanishes, and as a consequence the signature defect of the cobordism class $[i\colon T^4\to T^\mu]$ is $0$.
We deduce that the signature defect vanishes on all of~$\Omega_4(\mathbb{Z}^\mu)$.
\end{proof}
\begin{corollary}\label{cor:bordism}
Let $M$ be an oriented $3$-manifold with a map $H_1(M;\mathbb{Z})\to \mathbb{Z}^\mu$ and let $W$, $W'$ be two fillings of $M$ over $\mathbb{Z}^\mu$. Then,
$\operatorname{dsign}_\omega W= \operatorname{dsign}_\omega W'$ for all $\omega\in \mathbb{T}^\mu$.
\end{corollary}
\begin{proof}Define the closed oriented $4$-manifold $Z:=W\cup_{M} - W'$, and notice that the map to $\mathbb{Z}^\mu$ can be extended to $Z$. Thanks to Proposition~\ref{prop:bordism}, we have $\operatorname{sign}_\omega Z - \operatorname{sign} Z=0$, and by Novikov additivity we get
\[ 0 =\operatorname{sign}_\omega Z - \operatorname{sign} Z = (\operatorname{sign}_\omega W -\operatorname{sign} W) - (\operatorname{sign}_\omega W' -\operatorname{sign} W').\]
\end{proof}
\subsection{Novikov-Wall additivity of the signature}\label{sub:NovikovWall}
A theorem of Wall~\cite{Wall} computes the correction term to the additivity of the signature under the union of two manifolds along a common codimension $0$ submanifold of their boundaries, generalizing Novikov additivity.
We recall Wall's theorem in the case where the correction term vanishes.
\medbreak
Consider an oriented compact~$4$-manifold~$W$ together with an oriented, properly embedded $3$-manifold~$M$,
which separates~$W$ into two pieces~$W_\pm$. Put differently, $W = W_+ \cup_M (-W_-)$ is obtained by
gluing~$W_+$ to $-W_-$ along the submanifold~$M$. Note that $M$ is allowed to have nonempty
boundary~$\Sigma = \partial M \subset \partial W$ itself. This decomposition induces a decomposition
of the boundaries $\partial W_+ = N_+ \cup_\Sigma -M$ and $\partial W_- = N_-\cup_\Sigma -M $;
see Figure~\ref{fig:NovikovWall}. From this, we obtain a decomposition of the boundary~$\partial W = N_+ \cup_\Sigma (- N_-)$.
We equip then $\Sigma$ with the
orientation~$\Sigma = \partial M = \partial N_+ = \partial N_-$.
\begin{figure}[ht]
\includegraphics[width=6cm]{figure0.pdf}
\caption{A $2$-dimensional sketch of the Novikov-Wall set-up.}
\label{fig:NovikovWall}
\end{figure}
For a manifold $X$ with boundary $\Sigma$, define
\[
V_X:= \ker (H_1(\Sigma;\mathbb{R})\to H_1(X ;\mathbb{R})).
\]
In our setting, we are interested in the spaces $V_M$, $V_{N_+}$ and $V_{N_-}$. The following result is immediately obtained from the main theorem of~\cite{Wall}, as the correction term vanishes as soon as two of the involved subspaces coincide.
\begin{theorem}
\emph{(Novikov-Wall additivity)}\label{thm:Wall}
Let $W$ be decomposed as above as the union of $W_+$ and $-W_-$, and suppose that any two among $V_M$, $V_{N_+}$ and $V_{N_-}$ are equal. Then
\[ \operatorname{sign}(W)=\operatorname{sign} W_+ - \operatorname{sign} W_- .\]
\end{theorem}
Theorem~\ref{thm:Wall} admits a generalization to twisted coefficients.
For simplicity, in the twisted setting we shall only state a weaker result which is sufficient for our purposes. Suppose to have a map $H_1(W;\mathbb{Z})\to \mathbb{Z}^\mu$. With this map, we can construct the
local coefficient systems~$\mathbb{C}^\omega$ for every $\omega \in \mathbb{T}^\mu$, as explained in Example~\ref{ex:Ccoefficients}. The following additivity result holds for the twisted signature.
\begin{proposition} \label{prop:TwistedWall}
Suppose that $W$ is decomposed as above as the union of $-W_-$ and $W_+$.
Then, for each $\omega\in \mathbb{T}^\mu$ such that $H_1(\Sigma;\mathbb{C}^\omega)=0$,
Novikov-Wall additivity holds for the twisted signature:
\[ \operatorname{sign}_\omega W= \operatorname{sign}_\omega W_+ -\operatorname{sign}_\omega W_-.\]
\end{proposition}
\subsection{Concordance roots and vanishing results}\label{sub:ConcordanceRoots}
We generalize the concept of Knotennullstellen~\cite{NagelPowell}.
After applying this concept to a variation of a
well-known chain homotopy argument, we discuss some further properties of these
elements.
\medbreak
Let $U \subset \mathbb{Z}[t_1^{\pm 1},\dots ,t_\mu^{\pm 1}]$ be
the subset of Laurent polynomials~$p(t_1,\dots , t_\mu)$ such that $p(1,\dots ,1)=\pm 1$.
We abbreviate the Laurent ring~$\mathbb{Z}[t_1^{\pm 1},\dots ,t_\mu^{\pm 1}]$ with $\Lambda$.
\begin{definition}
\label{def:ConcordanceRoot}
An element~$\omega \in \mathbb{T}^\mu = (S^1 \setminus \{1\})^\mu$ is a \emph{concordance root}
if there is a polynomial $p \in U$ with $p(\omega)=0$. Define $\mathbb{T}^\mu_!$ to be the subset
of all elements $\omega \in \mathbb{T}^\mu$ which are \emph{not} concordance roots.
\end{definition}
Definition~\ref{def:ConcordanceRoot} is a generalization of~\cite[Definition $1.1$]{NagelPowell} to the multivariable case. The key property of non-concordance roots is that they allow us to use a well-known chain homotopy argument~\cite[Proposition 2.10]{CochranOrrTeichner}. The following results are an adaptation of~\cite[Lemma 3.1]{NagelPowell}.
To define the colored (and Alexander) nullity and the colored signature, we will use the bimodules~$\mathbb{Q}(\mathbb{Z}^\mu)$ and $\mathbb{C}^\omega$; see Definition~\ref{def:SignatureNullity} below.
A key ingredient,
necessary to prove the concordance invariance of these invariants, is the following fact: these modules are not
just~$\Lambda$--right modules, but right $U^{-1} \Lambda$-modules where the localisation~$U^{-1} \Lambda$,
inverts all elements of~$U$.
Suppose now that $\mathbb{Z}^m \to \mathbb{Z}^\mu$ is a homomorphism obtained by adding entries.
Then, the induced map of group rings $\mathbb{Z}[\mathbb{Z}^m] \to \Lambda$ fits into the following commutative diagram with the augmentation maps
\[ \begin{tikzcd}
\mathbb{Z}[\mathbb{Z}^m] \ar[rr] \ar[rd, "\operatorname{aug}"']& &\Lambda \ar[ld, "\operatorname{aug}"]\\
& \mathbb{Z} &
\end{tikzcd}. \]
Recall that, the augmentation map sends a Laurent polynomial $p(t_1,\dotsc, t_\mu)$ to its evaluation $p(1,\dotsc , 1)$.
The next lemma follows from considerations of determinants; see cf.~\cite[Proposition 2.4]{CochranOrrTeichner}.
\begin{lemma}\label{lem:DeterminantTrick}
Let $g \colon \mathbb{Z}[\mathbb{Z}^m]^k \to \mathbb{Z}[\mathbb{Z}^m]^k$ be a $\mathbb{Z}[\mathbb{Z}^m]$--module homomorphism with the property
that $\mathbb{Z} \otimes_{\mathbb{Z}[\mathbb{Z}^m]}g$ is an isomorphism. Then
\[ U^{-1} \Lambda \otimes_{\mathbb{Z}[\mathbb{Z}^m]} g \colon (U^{-1} \Lambda)^k \to (U^{-1} \Lambda)^k \]
is also an isomorphism. Consequently, so is $\mathbb{Q}(\mathbb{Z}^\mu) \otimes_{\mathbb{Z}[\mathbb{Z}^m]}g$ and $\mathbb{C}^\omega \otimes_{\mathbb{Z}[\mathbb{Z}^m]}g$.
\end{lemma}
\begin{proof} See~\cite[Section 3]{NagelPowell}. \end{proof}
\begin{lemma}\label{lem:Cone}
Let $k$ be a non-negative integer, and let $\omega$ lie in $\mathbb{T}_!^\mu$.
If $(X,Y)$ is a pair of CW-complexes over $B\mathbb{Z}^\mu$ with $H_i(X, Y; \mathbb{Z}) = 0$ for $0 \leq i \leq k$,
then both~$H_i(X,Y; \mathbb{Q}(\mathbb{Z}^\mu))$ and~$H_i(X,Y; \mathbb{C}^\omega)$ vanish for~$0 \leq i \leq k$.
\end{lemma}
\begin{proof}
We make the following abbreviations~$C^\mathbb{Z} := C(X,Y; \mathbb{Z})$ and $C^\Lambda:= C(X,Y; \Lambda)$ for
the cellular chain complexes of the pairs~$(X,Y)$. For the remainder of the proof, $i$ will be an arbitrary integer $0 \leq i \leq k$.
The chain complex~$C^\mathbb{Z}$ consists of finitely generated free $\mathbb{Z}$-modules, and as $H_i(C^\mathbb{Z}) = 0$, it admits a partial
contraction, i.e.\ homomorphisms~$s_i \colon C^\mathbb{Z}_i \to C^\mathbb{Z}_{i+1}$ with
\[ \operatorname{id}_{C_i^\mathbb{Z}} = s_{i-1} \circ d_i + d_{i+1} \circ s_i.\]
Consider the chain map~$\varepsilon \colon C^\Lambda \to C^\mathbb{Z}$ of chain complexes over $\Lambda$,
which is induced by tensoring with the augmentation map. Pick a lift $s^\Lambda_i$ of $s_i$
under $\varepsilon$, which is a homomorphism~$s_i^\Lambda \colon C^\Lambda_i \to C^\Lambda_{i+1}$
of $\Lambda$-modules such that the following diagram commutes:
\[ \begin{tikzcd}
C_i^\Lambda \ar[r, "s^\Lambda_i"] \ar[d, "\varepsilon"] & C_{i+1}^\Lambda \ar[d, "\varepsilon"]\\
C_i^\mathbb{Z} \ar[r, "s_i"] & C_{i+1}^\mathbb{Z} .
\end{tikzcd}\]
Such a lift exists because $C_i^\Lambda$ consists of free modules and the map~$\varepsilon$
is surjective. Consider the partial chain map
\[ f_i = s^\Lambda_{i-1} \circ d_i + d_{i+1} \circ s^\Lambda_i.\]
By construction,~$\mathbb{Z} \otimes_\Lambda f_i = s_{i-1} \circ d_i + d_{i+1} \circ s_i = id_{{C_i^\mathbb{Z}}}$ and
so $U^{-1} \Lambda \otimes_\Lambda f_i$ is also an isomorphism; see Lemma~\ref{lem:DeterminantTrick}. We obtain
that $U^{-1} \Lambda \otimes_\Lambda s^\Lambda_i$ is a partial chain contraction for
$U^{-1} \Lambda \otimes_\Lambda C^\Lambda$ and
\[ H_i(X,Y; U^{-1} \Lambda) = H_i(U^{-1} \Lambda \otimes_\Lambda C^\Lambda) = 0. \]
Now we tensor with either $R = \mathbb{Q}(\mathbb{Z}^\mu)$ or $R= \mathbb{C}^\omega$, which are both right~$U^{-1} \Lambda$--modules. Here, we use the fact that~$\omega \in \mathbb{T}^\mu_!$. Note that $R \otimes_\Lambda s^\Lambda_i$ is a partial chain contraction for
$R \otimes_{U^{-1} \Lambda} {U^{-1} \Lambda}\otimes_\Lambda C^\Lambda$
and so $H_i(X,Y; R) = 0$.
\end{proof}
For the remainder of the section, we collect properties of the set~$\mathbb{T}^\mu_!$ of non-concordance
roots.
For a prime~$p$, define
\[ \mathbb{T}_p^\mu := \{ \omega \in \mathbb{T}^\mu \mid \omega_i \text{ is a } p^n\text{-root of unity for some }n \} \]
and $\mathbb{T}_P^\mu := \bigcup_p \mathbb{T}_p^\mu$. This is the set for which concordance invariance properties and genus bounds are proved in~\cite[Section 7]{CimasoniFlorens}. The next result shows that the set~$\mathbb{T}_!^\mu$ of non-concordance roots contains
$\mathbb{T}_P^\mu$.
\begin{proposition}
\label{prop:TPContainedT!}
The set $\mathbb{T}_P^\mu$ is contained in $\mathbb{T}_!^\mu$.
\end{proposition}
\begin{proof}
Let $\omega \in \mathbb{T}_{p}^\mu$ and $q(t_1,\dotsc, t_\mu)$ be a polynomial such that $q(\omega)=0$.
We have to show that $q(1,\dots, 1) \neq \pm 1$.
We pick $n$ large enough such that all
$\omega_i$ are $p^n$-roots of unity.
The subgroup consisting of the $p^n$-roots of unity is cyclic.
Thus we write $\omega=(\zeta^{n_1}, \dots, \zeta^{n_\mu})$ for
a primitive $p^n$-root of unity~$\zeta$.
Define the one variable polynomial $\overline{q}(t):=q(t^{n_1},\dots, t^{n_\mu})$.
Hence, we have $\overline{q}(\zeta)=0$, so $\overline{q}(t)$ is a
multiple of the $p^n$-th cyclotomic polynomial, whose value at $1$ equals $p$.
It follows that $p$ divides $q(1,\dots,1)= \overline{q}(1)$ and so cannot be equal to $\pm 1$.
\end{proof}
The following example shows that $\mathbb{T}^\mu_!$ also contains elements which are not in $\mathbb{T}_P^\mu$, but have algebraic coordinates.
\begin{example}
\label{ex:T!MoreGeneral}
We claim that the algebraic element~$\omega=(\frac{3+4i}{5}, -1)$ is in $\mathbb{T}^2_!$, but not contained in $\mathbb{T}_P^2$.
The algebraic number~$\omega_0 = \frac{3+4i}{5}\in S^1$, has minimal polynomial $p(t)=5t^2-6t+5$
and is not a root of unity~\cite[Lemma 2.1]{NagelPowell}.
It follows that $\omega_0$ is not an element of $\mathbb{T}^1_P$.
To show that $\omega \in \mathbb{T}^2_!$, we prove that any polynomial~$q(t_1, t_2)$ with $q(\omega) = 0$
has $q(1,1) \neq \pm 1$.
Consider $\overline{q}(t):=q(t,-1)$ and note that $\frac{3+4i}{5}$ is a root of
$\overline{q}(t)$. As a consequence $4=p(1)$ divides $\overline{q}(1)$ and
$\overline{q}(1)=q(1,-1)$ is even. It follows that $q(1,1)$ must also be even.
\end{example}
\begin{lemma}\label{lem:DegenerateEntries}
Let $(\omega_1, \dots, \omega_n) \in \mathbb{T}^n_!$,
and~$\beta \colon \{1, \dots, \mu \} \to \{1, \dots, n\}$ be a map.
Then $(\omega_{\beta(1)}, \dots, \omega_{\beta(\mu)})$ is an element
of $\mathbb{T}^\mu_!$.
\end{lemma}
\begin{proof}
Let $q(t_1, \dots, t_\mu)$ be a polynomial such that $q(\omega_\beta) = 0$,
where $\omega_\beta$ denotes $(\omega_{\beta(1)}, \dots, \omega_{\beta(\mu)})$.
Define a polynomial in $n$-variables by the equality
$p(x_1, \dots, x_n) = q(x_{\beta(1)}, \dots, x_{\beta(\mu)})$.
Note that $ p(\omega_1, \dots, \omega_n) = q(\omega_{\beta(1)}, \dots, \omega_{\beta(\mu)}) = 0$ and as $(\omega_1, \dots, \omega_n) \in \mathbb{T}^n_!$, we deduce that
$q(1,\dots, 1) = p(1,\dots, 1) \neq \pm 1$.
\end{proof}
As shown in the following remark,
it is also easy to construct elements which do not belong to $\mathbb{T}_!^\mu$
and for which our main results will not apply.
\begin{remark}
Let $\omega = (\omega_1, \dots, \omega_\mu) \in \mathbb{T}^\mu$.
A consequence of Lemma~\ref{lem:DegenerateEntries} is that, if $\omega$ belongs to $T_!^\mu$, then
all the coefficients $\omega_i$ belong to $\mathbb{T}_!^1$. Phrasing it differently, if any of the coefficients of $\omega$ is a concordance root, then $\omega$ itself is a concordance root.
\end{remark}
\section{Colored signatures and nullities of links}\label{sec:4dDef}
In Section~\ref{sub:Setup}, we give a definition of the colored signature and nullity of a colored link as twisted invariants of manifolds with boundary.
Section~\ref{sub:CFSignature} shows that they coincide with the invariants introduced by Cimasoni-Florens~\cite{CimasoniFlorens}; see e.g Proposition~\ref{prop:NullityNoCcomplex} and Proposition~\ref{prop:ColorSignature}. Section~\ref{sub:Genus} introduces the notion of colored cobordism and presents the statement of Theorem~\ref{thm:Genus} which provides obstructions on the possible colored cobordisms that two given colored links can bound.
Section~\ref{sub:ProofGenus} is devoted to the proof of the theorem.
Finally, Section~\ref{sub:ApplicationsGenus} provides the applications of Theorem~\ref{thm:Genus} and puts it in relation with some previously known results. In particular, we prove the concordance invariance of the signature and nullity and present obstructions on the possible surfaces a colored link can bound in $D^4$.
\subsection{Set-up}\label{sub:Setup}
This section deals with some preliminaries on colored links and their colored bounding surfaces.
Making use of this set-up, we introduce our main invariants: the colored signature and the colored nullity.
\medbreak
Let $L = L_1 \cup \cdots \cup L_\mu \subset S^3$ be a $\mu$-colored link.
We denote the exterior of~$L$ by $X_L$ and recall that the abelian group~$H_1(X_L; \mathbb{Z})$ is freely generated by the meridians of $L$. Summing the meridians of the same color, we obtain a homomorphism~$H_1(X_L; \mathbb{Z}) \to \mathbb{Z}^\mu$.
A \emph{colored bounding surface} for a colored link $L$ is a union $F = F_1\cup \cdots \cup F_\mu$ of
properly embedded, locally flat, compact oriented surfaces~$F_i \subset D^4$
with $\partial F_i = L_i$ and which only intersect each other transversally in double points.
A \emph{bounding surface} of a link~$L$ is a union $F = F_1\cup \cdots \cup F_m$ of
properly embedded, locally flat, compact, connected and oriented surfaces~$F_i \subset D^4$
which only intersect each other transversally in double points, and $\partial F = L$.
Note that we require each $F_i$ to be connected. Forgetting about the colors, a colored bounding surface turns into a bounding surface formed by the union of its connected pieces.
As the surfaces $F_i$ are required to be locally flat, that is they admit tubular neighborhoods. Given a (possibly colored) bounding surface~$F$ of $L$, we denote by $\nu F$ the union of some choice of tubular neighborhoods for its components.
We denote then by~$W_F:= D^4 \setminus \nu F$ the exterior of $F$. For the convenience of the reader,
we give an argument for the following well-known fact.
\begin{lemma}\label{lem:MayerVietorisExterior}
Given a bounding surface $F$, the abelian group~$H_1(W_F;\mathbb{Z})$ is freely generated
by the meridians of the components~$F_i$.
\end{lemma}
\begin{proof}
Pick a small ball~$B_x$ around each intersection point~$x$ of $F$.
Note that $W_F = D^4 \setminus ( \bigcup_x B_x \cup \bigcup_i \nu F^\circ_i)$, where
the surface~$F^\circ_i$ is $F_i$ with little discs removed around the intersection points.
The Mayer-Vietoris sequence of $D^4 \setminus \bigcup_x B_x = W_F \cup \bigcup_i \nu F^\circ_i$
with $\mathbb{Z}$-coefficients gives us
\[ 0 \to H_1\Big(\bigcup_i (F_i^\circ \times S^1)\Big)
\to H_1\Big(\bigcup_i (F_i^\circ \times D^2)\Big)\oplus H_1(W_F)\to 0,\]
where the $0$'s arise as the homology $H_j\Big(D^4 \setminus \bigcup_x B_x\Big)$ for $j=1,2$.
Applying the Künneth theorem to the products $F_i^\circ \times S^1$ the sequence can be reduced to
$0 \to H_1\big(\bigcup_i \{p_i\} \times S^1; \mathbb{Z}\big) \to H_1(W_F;\mathbb{Z})\to 0$, where $p_i \in F_i$. This concludes the proof of the lemma.
\end{proof}
Consequently, there is a canonical homomorphism~$H_1(W_F;\mathbb{Z}) \to \mathbb{Z}^\mu$ which restricts to~$H_1(X_L; \mathbb{Z}) \to \mathbb{Z}^\mu$ on the link exterior: indeed the inclusion~$X_L \subset W_F$ sends the meridians of $L$ to the meridians of $F$. Since~$X_L$
and~$W_F$ are now both spaces over $\mathbb{Z}^\mu$, we can give the following definition.
\begin{definition}\label{def:SignatureNullity}
Let $F$ be a colored bounding surface for a $\mu$-colored link $L$. Given $\omega \in \mathbb{T}^\mu$, define the \emph{colored signature}~$\sigma_L(\omega)$ and the
\emph{colored nullity}~$\eta_L(\omega)$ by
\[ \sigma_L(\omega) = \operatorname{sign}_\omega W_F, \quad \eta_L(\omega) = \dim H_1(X_L; \mathbb{C}^\omega).\]
\end{definition}
Viro~\cite[Theorem 2.A]{Viro09} showed that $\operatorname{sign}_\omega W_F$
is independent of the choice of colored bounding surface. For a proof, see also the upcoming paper by Degtyarev, Florens and Lecuona~\cite{DegtyarevFlorensLecuona2}.
It is sometimes useful in the following to view $\sigma_L(\omega)$ as signature defect, which is made possible by the following result, probably well known to the experts.
\begin{proposition} \label{prop:UntwistedSign}
If $F$ is a colored bounding surface for a $\mu$-colored link $L$, the
untwisted intersection form on $W_F$ is trivial. As a consequence, the
signature~$\operatorname{sign} W_F$ vanishes and we have $\sigma_L(\omega) = \operatorname{dsign}_\omega W_F$.
\end{proposition}
\begin{proof}
Set $M_F:=\overline{\nu F} \cap W_F$ so that $\partial W_F=
X_L\cup_{L\times S^1} M_F$. Consider the portion $ H_2(M_F;\mathbb{Z})\to
H_2(W_F;\mathbb{Z})\oplus 0 \to 0$ of the Mayer-Vietoris sequence associated to the
decomposition $D^4=W_F\cup \nu F$. It follows that the map $H_2(M_F;\mathbb{Z})\to H_2(W_F;\mathbb{Z})$
is surjective. Since $M_F$ is contained in the boundary of $W_F$,
the natural map $j \colon H_2(\partial W_F;\mathbb{Z})\to H_2(W_F;\mathbb{Z})$ is surjective.
The statement follows immediately since elements of $\operatorname{im} j$ annihilate the
intersection form.
\end{proof}
\subsection{C-complexes}
\label{sub:CFSignature}
We recall the multivariable signature and nullity
functions introduced by Cimasoni-Florens~\cite{CimasoniFlorens} using C-complexes. Our main objective is to show that these invariants coincide with the colored signature and nullity defined in Section~\ref{sub:Setup}.
\medbreak
A \emph{C-complex} for a $\mu$-colored link~$L$ consists of a
collection of Seifert surfaces $S_1, \dots , S_\mu$ for the sublinks~$L_1, \dots , L_\mu$
that intersect only along clasps; see~\cite{Cooper, CimasoniPotential, CimasoniFlorens} for details.
Given such a C-complex and a
sequence~$\varepsilon=(\varepsilon_1,\varepsilon_2,\dots, \varepsilon_\mu)$ of $\pm 1$'s,
there are $2^\mu$ \emph{generalized Seifert matrices}~$A^\varepsilon$, which
extend the usual Seifert matrix~\cite{CimasoniPotential, CimasoniFlorens}.
Note that for all~$\varepsilon$,~$A^{-\varepsilon}$ is
equal to~$(A^\varepsilon)^T$. Using this fact, one easily checks that for
any~$\omega = (\omega_1,\dots,\omega_\mu)$ in the~$\mu$-dimensional torus, the matrix
\[
H(\omega)=\sum_\varepsilon\prod_{i=1}^\mu(1-\overline{\omega}_i^{\varepsilon_i})\,A^\varepsilon
\]
is Hermitian. Since this matrix vanishes when one of the coordinates of~$\omega$ is equal to~$1$,
we restrict ourselves to~$\omega \in \mathbb{T}^\mu$. The \emph{multivariable signature} is
the signature of the Hermitian matrix~$H(\omega)$ and the \emph{multivariable nullity} is $\operatorname{null}H(\omega) + \beta_0(S)-1$,
where $\beta_0(S)$ is the number of connected components of $S$.
We start by proving that~$\eta_L(\omega) =\operatorname{null}H(\omega) + \beta_0(S)-1$, i.e that the colored nullity is equal to the multivariable nullity:
\begin{proposition}\label{prop:NullityNoCcomplex}
Let $L$ be a $\mu$-colored link. For every $\omega \in \mathbb{T}^\mu$ and for any C-complex $S$ for $L$, we have the
equality~$\eta_L(\omega) =\operatorname{null}H(\omega) + \beta_0(S)-1$.
\end{proposition}
\begin{proof}
Since the multivariable nullity $\operatorname{null}H(\omega) + \beta_0(S)-1$
is independent of the chosen $C$-complex~\cite[Theorem $2.1$]{CimasoniFlorens},
pick $S$ for which there is at least one clasp between each pairs of surfaces
$S_i$ and $S_j$, so that in particular~$\beta_0(S)=1$. Note that this is
possible thanks to~\cite[Lemma $3$]{CimasoniPotential}. Using~\cite[Corollary
3.6]{CimasoniFlorens} the Alexander module
$H_1(X_L;\Lambda_S)$ admits a square presentation matrix given by $H(t)$.
Tensoring with $\mathbb{C}^\omega$ we deduce that $H(\omega)$ presents
$\mathbb{C}^\omega \otimes_{\Lambda_S} H_1(X_L;\Lambda_S)$. Using
Lemma~\ref{lem:UCSSTorExample}, we obtain that $H_1(X_L;\mathbb{C}^\omega)=\mathbb{C}^\omega
\otimes_{\Lambda_S} H_1(X_L;\Lambda_S)$ and consequently $H(\omega)$ also
presents $H_1(X_L;\mathbb{C}^\omega)$. The result follows immediately.
\end{proof}
We conclude by showing that the colored signature $\sigma_L(\omega)$ coincides with the multivariable signature of Cimasoni-Florens~\cite{CimasoniFlorens}.
\begin{proposition}\label{prop:ColorSignature}
If $L$ is a $\mu$-colored link $L$, then $\sigma_L(\omega) = \operatorname{sign} H(\omega)$, i.e.\ the colored signature is equal to the multivariable signature.
\end{proposition}
\begin{proof}
Since the colored signature is independent of the choice of a colored bounding surface,
we can take $F$ to be a
push-in of a C-complex in the $4$-ball; see~\cite[Section
3.1]{ConwayFriedlToffoli} for a precise description. By~\cite[Theorem
$1.3$]{ConwayFriedlToffoli}, the intersection pairing $\lambda_{\Lambda_S}$
is represented by $H(t)$. Since we wish to show that the intersection pairing
$\lambda_{\mathbb{C}^\omega}$ is represented by $H(\omega)$, the theorem will
follow if we manage to produce the following commutative diagram
\begin{equation}
\label{eq:Desired}
\xymatrix@R0.4cm{
\mathbb{C}^\omega \otimes_{\Lambda_S} H_2(W_F;\Lambda_S) \times \mathbb{C}^\omega \otimes_{\Lambda_S} H_2(W_F;\Lambda_S) \ar[r]\ar[d] \ar[d]& \mathbb{C}^\omega \otimes_{\Lambda_S} \Lambda_S\ar[d] \\
H_2(W_F;\mathbb{C}^\omega ) \times H_2(W_F;\mathbb{C}^\omega ) \ar[r] & \mathbb{C}.
}
\end{equation}
Further assuming $S$ to be totally connected implies that $H_i(W_F;\Lambda_S)$ vanishes for $i \neq 2$, and is a finitely generated free $\Lambda_S$-module for $i=2$~\cite[Section $3$ and Proposition $4.1$]{ConwayFriedlToffoli}.
Consider the following diagram below, where homology groups and tensor products
without coefficients are over $\Lambda_S$.
Applying the universal coefficient spectral sequence, as described in Propositions~\ref{prop:UCSS} and~\ref{prop:UCSSTor}, the first three
vertical maps in the following commutative diagram are isomorphisms
\[
\begin{tikzcd}[column sep=0.3cm]
\mathbb{C}^\omega \otimes H_2(W_F) \ar[r]\ar[d,"\cong"]
& \mathbb{C}^\omega \otimes H_2(W_F,\partial W_F) \ar[r]\ar[d,"\cong"]
& \mathbb{C}^\omega \otimes H^2(W_F) \ar[r] \ar[d,"\cong"]
&\mathbb{C}^\omega \otimes \text{Hom}_{\Lambda_S}(H_2(W_F),\Lambda_S)^{\text{tr}}
\ar[d,"\cong"] \\
H_2(W_F;\mathbb{C}^\omega ) \ar[r]
& H_2(W_F,\partial W_F;\mathbb{C}^\omega)\ar[r]
& H^2(W_F;\mathbb{C}^\omega )\ar[r]
& \text{Hom}_{\mathbb{C}}(H_2(W_F;\mathbb{C}^\omega),\mathbb{C})^{\text{tr}}.
\end{tikzcd}
\]
The last vertical map is an isomorphism since $H_2(W_F;\Lambda_S)$ is finitely generated
and free. Considering the adjoint, we precisely obtain the diagram of Equation~(\ref{eq:Desired}).
\end{proof}
\subsection{The genus bound}\label{sub:Genus}
For elements~$\omega \in \mathbb{T}_P^\mu$, the multivariable
signature and nullity are known to give lower bounds on the genus of colored
bounding surfaces~\cite[Theorem 7.2]{CimasoniFlorens}. In this section we prove
a more general result for surfaces in~$S^3 \times [0,1]$. As corollaries, we extend the concordance invariance results of~\cite[Theorem 7.1]{CimasoniFlorens}
and generalize the lower bounds of~\cite[Theorem 7.2]{CimasoniFlorens}.
\begin{definition}\label{def:cobordism}
A \emph{colored cobordism}
between two $\mu$-colored links $L$ and $L'$ is a
collection of properly embedded locally flat surfaces~$\Sigma = \Sigma_1 \cup \dots \cup \Sigma_\mu$
in $S^3 \times [0,1]$ that have the following properties: the surfaces only intersect each other in double points,
each surface~$\Sigma_i$ has boundary~$-L_i \sqcup L_i'$, and each connected component of $\Sigma_i$ has a boundary both in $S^3 \times \{0\}$ and in $S^3 \times \{1\}$. We say that $\Sigma$ \emph{has $m$ components} if the disjoint union of the surfaces $\Sigma_1,\dotsc, \Sigma_\mu$ has $m$ connected components.
\end{definition}
The main result of this section is the following lower bound.
\begin{theorem}\label{thm:Genus}
If $\Sigma = \Sigma_1 \cup \dots \cup \Sigma_\mu$
is a colored cobordism between two $\mu$-colored links $L$ and $L'$ with $c$ double points, then
\[|\sigma_L(\omega)-\sigma_{L'}(\omega)| + |\eta_L(\omega)-\eta_{L'}(\omega)|
\leq \sum_{i=1}^{\mu} -\chi(\Sigma_i) + c \]
for all $\omega\in \mathbb{T}_!^\mu$.
\end{theorem}
\begin{remark}\label{rem:formulas}
The right-hand side of the inequality can equivalently be expressed in terms of the first Betti number or of the genus of the surfaces. Suppose that $L$ is an $n$-component link, $L'$ is an $n'$-component link, and that the cobordism $\Sigma$ has $m$ components
(in the sense of Definition~\ref{def:cobordism}). Then, we have the following equalities:
\[ \sum_{i=1}^{\mu} -\chi(\Sigma_i) +c = \sum_{i=1}^{\mu}b_1(\Sigma_i) -m+c = \sum_{i=1}^{\mu} 2 g_i(\Sigma_i)+ n + n'-2m +c.\]
For this reason, we will usually refer to the inequality of Theorem \ref{thm:Genus} as a \emph{genus bound}, even if the genus does not appear explicitly in the formula.
\end{remark}
\subsection{Proof of the main theorem}\label{sub:ProofGenus}
We proceed towards the proof of Theorem~\ref{thm:Genus}, starting with a series of preliminary results.
\medbreak
First, we describe the Euler characteristic of the exterior~$W_\Sigma$ of a
colored cobordism~$\Sigma$ in $S^3 \times [0,1]$ in terms of the Euler characteristic of the surfaces $\Sigma_i$.
\begin{lemma}\label{lem:eulersigma}
Suppose $\Sigma$ is a $\mu$-colored cobordism between two colored links~$L$ and~$L'$ with $c$ double points. Then the Euler characteristic of $W_\Sigma$ is given by
\[\chi(W_\Sigma) = \sum_{i=1}^\mu -\chi(\Sigma_i) +c .\]
\end{lemma}
\begin{proof}
First, we prove that~$\chi(W_\Sigma)=-\chi(\nu \Sigma)$.
Consider the decomposition~$S^3 \times I = \nu \Sigma \cup W_\Sigma$ and set $M_\Sigma := \nu \Sigma \cap W_\Sigma$. Using the decomposition formula for the Euler characteristic yields
$\chi(S^3 \times I) = \chi(W_\Sigma) + \chi(\nu \Sigma ) - \chi(M_\Sigma)$. As the Euler characteristic of a $3$-manifold with toroidal boundary vanishes, $\chi(M_\Sigma)=0$. Since $\chi(S^3 \times I)$ also vanishes, the claim follows. Now note that $\nu \Sigma$ is homotopy equivalent to the union~$A = \bigcup_i \Sigma_i \subset S^3$.
Recall that the surfaces~$\Sigma_i$ intersect each other in $c$ points.
We apply again the decomposition formula for $A$ and obtain
\[ \chi(A) = \sum_{i=1}^\mu \chi(\Sigma_i)
-\chi\Big( \bigcup_{i \neq j} \Sigma_i \cap \Sigma_j \Big)= \sum_{i=1}^\mu \chi(\Sigma_i) -c. \]
\end{proof}
By Lemma~\ref{lem:MayerVietorisExterior}, one observes that $H_1(W_\Sigma;\mathbb{Z})$ is freely generated by the meridians of $\Sigma$. Consequently, there is a homomorphism $H_1(W_\Sigma;\mathbb{Z}) \to \mathbb{Z}^\mu$ that extends the maps on $H_1(X_L;\mathbb{Z})$ and $H_1(X_{L'};\mathbb{Z})$.
Next, we observe that with $\mathbb{C}^\omega$ coefficients, the boundary of $W_\Sigma$
behaves as the disjoint union of the link exteriors~$X_L$ and $X_{L'}$.
\begin{lemma}
\label{lem:splitboundary}
The inclusion of $X_L\sqcup X_{L'}$ into $\partial W_\Sigma$ induces an isomorphism
\[H_i(X_L;\mathbb{C}^\omega) \oplus H_i(X_{L'};\mathbb{C}^\omega) \cong H_i(\partial W_\Sigma;\mathbb{C}^\omega)\]
for all $\omega \in \mathbb{T}^\mu$.
\end{lemma}
\begin{proof}
The boundary of $W_\Sigma$ decomposes into the union of $X_L$, $X_{L'}$ and the
plumbed $3$-manifold~$M_\Sigma$.
The homology groups~$H_*(M_\Sigma;\Lambda_S)$ are zero~\cite[Proof of Lemma 5.2]{ConwayFriedlToffoli}.
The universal coefficient spectral sequence of Proposition~\ref{prop:UCSSTor}
implies that $H_*(M_\Sigma;\mathbb{C}^\omega)=0$.
The result now follows from the Mayer-Vietoris exact sequence for $\partial W_F$.
\end{proof}
The next lemma provides some information on the twisted homology of $W_\Sigma$.
\begin{lemma}\label{lem:034AndNullityBound}
If $\Sigma\subset S^3\times I$ is a $\mu$-colored cobordism between $L$ and $L'$ and $\omega \in \mathbb{T}_!^\mu$, then
\begin{enumerate}
\item $\beta_1^\omega(W_\Sigma) \leq \eta_L(\omega)$
and $\beta_1^\omega(W_\Sigma) \leq \eta_{L'}(\omega)$,
\item $H_{i}(W_\Sigma; \mathbb{C}^\omega) = 0$ for $i= 0,3,4$.
\end{enumerate}
\end{lemma}
\begin{proof}
As $W_\Sigma$ and $X_L$ are both connected, there is an isomorphism $H_0(X_L;
\mathbb{Z})\cong H_0(W_\Sigma; \mathbb{Z})$. Since the inclusion $X_L \subset W_\Sigma$ takes meridians to meridians, $H_1(X_L;\mathbb{Z})\to H_1(W_\Sigma;\mathbb{Z})$ is surjective. Combining these facts, $H_i(W_\Sigma,X_L;\mathbb{Z})=0$, so that Lemma~\ref{lem:Cone} gives $ H_i(W_\Sigma,X_L; \mathbb{C}^\omega)=0$ for
$i=0,1$. It follows from the long exact sequence of the pair $(W_\Sigma,X_L)$
that the inclusion induced map $H_1(X_L;\mathbb{C}^\omega)\to H_1(W_\Sigma;\mathbb{C}^\omega)$ is surjective, and thus
$\beta_1^\omega(W_\Sigma) \leq \eta_L(\omega)$. Repeating the argument for $X_{L'}$, the
first statement is proven.
Since the inclusion of $X_L$ into $W_\Sigma$ factors through $\partial W_\Sigma$, an analogous argument shows that $H_i(W_\Sigma,\partial W_\Sigma; \mathbb{C}^\omega)=0$ for $i=0,1$. Lemma~\ref{lem:DualityUCSS} now
implies that $H_i(W_\Sigma; \mathbb{C}^\omega)=0$ for $i=3,4$.
Note that the entries of~$\omega = (\omega_1, \ldots, \omega_\mu)$ are different from~$1$. This implies that the vector space~$H_0(W_\Sigma; \mathbb{C}^\omega)$ vanishes by its description as a quotient~\cite[Section VI.3]{Hilton97}.
\end{proof}
We conclude this section with a dimension count, which will prove itself useful to
bound the twisted signature of $W_\Sigma$.
\begin{lemma}\label{lem:EqGenus}
Denote by $j \colon H_2(\partial W_\Sigma;\mathbb{C}^\omega) \to H_2(W_\Sigma;\mathbb{C}^\omega)$
the map induced by the inclusion. Then, for $\omega \in \mathbb{T}^\mu_!$, we have
\[ \dim(\coker j)
= \beta_2^\omega(W_\Sigma)-\beta_2^\omega(\partial W_\Sigma) +\beta_1^\omega(W_\Sigma), \]
\end{lemma}
\begin{proof}
Recall that by Lemma~\ref{lem:034AndNullityBound}, the vector space $H_3(W_\Sigma;\mathbb{C}^\omega)$
vanishes.
Consider the following portion of the long exact sequence of the pair~$(W_\Sigma,\partial W_\Sigma)$:
\[ 0\to H_3(W_\Sigma, \partial W_\Sigma; \mathbb{C}^\omega)\xrightarrow{\delta}
H_2(\partial W_\Sigma; \mathbb{C}^\omega)\xrightarrow{j} H_2( W_\Sigma;\mathbb{C}^\omega).\]
By exactness, $\dim(\operatorname{im} j)=\beta_2^\omega(\partial W_\Sigma) - \dim(\operatorname{im} \delta)$.
As $\delta$ is injective, one gets
$ \dim (\operatorname{im} j) = \beta_2^\omega(\partial W_\Sigma)- \beta_3^\omega(W_\Sigma, \partial W_\Sigma)$. The result now follows since Lemma~\ref{lem:DualityUCSS} implies that
$\beta_3^\omega(W_\Sigma, \partial W_\Sigma)= \beta_1^\omega(W_\Sigma)$.
\end{proof}
We are now ready to conclude the proof of Theorem~\ref{thm:Genus}.
\begin{proof}[Proof of Theorem~\ref{thm:Genus}]
We start by proving the following inequality:
\[ |\operatorname{sign}_\omega W_\Sigma| \leq \chi(W_\Sigma)-|\eta_L(\omega)-\eta_{L'}(\omega)|. \]
As in Lemma~\ref{lem:EqGenus}, we use $j$ to denote the map~$H_2(\partial W_\Sigma; \mathbb{C}^\omega) \to H_2(W_\Sigma; \mathbb{C}^\omega)$.
Since the twisted intersection form~$\lambda_{\mathbb{C}^\omega}$ descends to a pairing on
$H_2(W_\Sigma;\mathbb{C}^\omega)/ \operatorname{im} j$, an application of Lemma~\ref{lem:EqGenus} yields
\begin{equation}\label{eq:FirstStepGenus}
|\operatorname{sign}_\omega W_\Sigma| \leq \dim \frac{H_2(W_\Sigma;\mathbb{C}^\omega)}{\operatorname{im} j} = \beta_2^\omega(W_{\Sigma})-\beta_2^\omega(\partial W_{\Sigma}) +\beta_1^\omega(W_{\Sigma}).
\end{equation}
Now, thanks to Lemma~\ref{lem:034AndNullityBound}, we have
$\chi(W_\Sigma) = \beta_2^\omega(W_\Sigma) - \beta_1^\omega(W_\Sigma)$, and using Lemma~\ref{lem:splitboundary}, one gets
$\beta_1^\omega(\partial W_\Sigma)=\eta_L(\omega)+ \eta_{L'}(\omega)$.
Using these last two identities, Equation (\ref{eq:FirstStepGenus}) can be rewritten as
\[ |\operatorname{sign}_\omega W_{\Sigma}| \leq \chi(W_\Sigma) + 2\beta_1^\omega(W_\Sigma) - \eta_L(\omega)- \eta_{L'}(\omega). \]
The desired inequality is now obtained by using Lemma~\ref{lem:034AndNullityBound} to bound $\beta_1^\omega(W_\Sigma)$
above both by $\eta_L(\omega)$ and $\eta_{L'}(\omega)$.
With the inequality above, Theorem~\ref{thm:Genus}
will follow from Lemma~\ref{lem:eulersigma} once we have established that
\[ \operatorname{sign}_{\omega} W_\Sigma = \operatorname{sign}_{L'}(\omega)-\operatorname{sign}_L(\omega). \]
Pick a colored bounding surface $F\subset D^4$ for $L$. Thanks to Proposition~\ref{prop:UntwistedSign}, we have $\sigma_L(\omega)=\operatorname{sign}_\omega(W_F)$. One can now form the surface with singularities $F':=F\cup_L \Sigma \subset D^4\cup_{S^3} S^3\times I$. Using an orientation-preserving diffeomorphism between $D^4\cup_{S^3} S^3\times I$ and $D^4$, the surface $F\cup_L \Sigma$ is sent to a colored bounding surface for~$L'$.
Its exterior~$W_{F'}$ is clearly homeomorphic to $W_F\cup_{X_L} W_\Sigma$.
Once again thanks to Proposition~\ref{prop:UntwistedSign}, we have $\sigma_{L'}(\omega)=\operatorname{sign}_\omega(W_{F'})$.
Since $H_1(L\times S^1; \mathbb{C}^\omega)=0$, Proposition~\ref{prop:TwistedWall} implies that
Novikov additivity holds for the twisted signature, yielding
\[ \operatorname{sign}_\omega W_{F'} = \operatorname{sign}_\omega W_F + \operatorname{sign}_\omega W_\Sigma. \]
Summarizing, we have shown that
$\sigma_{L'}(\omega)= \sigma_L(\omega) + \operatorname{sign}_\omega W_\Sigma$. Combining this with the inequality of Equation (\ref{eq:FirstStepGenus}) concludes the proof of Theorem~\ref{thm:Genus}.
\end{proof}
\subsection{Applications of the genus bound}\label{sub:ApplicationsGenus}
We will give two applications of Theorem~\ref{thm:Genus}.
First, we show that the colored signature and nullity are concordance invariants,
see Corollary~\ref{cor:ConcordanceViaGenus}, then we study the genus of
colored bounding surfaces in Corollary~\ref{cor:CimasoniFlorens72}.
\medbreak
Two $\mu$-colored links~$L$ and~$L'$ are \emph{concordant}
if there exists a $\mu$-colored cobordism between $L$ and $L'$ which has
no intersection points and consists exclusively of annuli.
\begin{corollary}
\label{cor:ConcordanceViaGenus}
If $L$ and $L'$ are two colored links that are concordant, then
\[ \sigma_{L}(\omega)=\sigma_{L'}(\omega) \quad \quad\text{and} \quad \quad \eta_{L}(\omega)=\eta_{L'}(\omega)\]
for all $\omega \in \mathbb{T}^\mu_!$.
\end{corollary}
\begin{proof}
We apply Theorem~\ref{thm:Genus} to the case where each $\Sigma_i$ is a union of annuli and there are no double points.
The result follows as all the terms in the right-hand side of the inequality are zero.
\end{proof}
\begin{remark}
In~\cite[Theorem 7.1]{CimasoniFlorens}, Corollary~\ref{cor:ConcordanceViaGenus} is stated for all $\omega \in \mathbb{T}_P^\mu$. However, note that~\cite[Proposition 2.3]{NagelPowell} presents two $1$-colored links $L$ and~$L'$, which have the property that $\sigma_L(z) = \sigma_{L'}(z)$ and $\eta_L(z) = \eta_{L'}(z)$ for all $z \in \mathbb{T}^{1}_P$, but such that there exists a $z_0 \in \mathbb{T}^{1}_!$ with $\sigma_L(z_0) \neq \sigma_{L'}(z_0)$ and $\eta_L(z_0) \neq \eta_{L'}(z_0)$.
\end{remark}
Note that Corollary~\ref{cor:ConcordanceViaGenus} will be significantly improved upon in Section~\ref{sec:Solvable}: the signature and nullity will be shown to be invariant under $0.5$-solvable cobordisms.
Using $\beta_1(F)$ to denote the first Betti number of a surface~$F$, an application of Theorem~\ref{thm:Genus} also gives the inequality below.
\begin{corollary} \label{cor:CimasoniFlorens72}
Let $F=F_1\cup\cdots \cup F_\mu$ be a colored bounding surface for $L$, and suppose that $F$ has $m$ components and $c$ intersection points. Then, for all $\omega \in \mathbb{T}_!^\mu$, we have
\[ |\sigma_L(\omega)|+|\eta_L(\omega)-m+1| \leq \sum_{i=1}^\mu \beta_1(F_i) +c. \]
\end{corollary}
\begin{proof}
Remove small $4$-balls in the interior of $D^4$ on each component of $F$. With
small enough balls,~$F$ will intersect the boundary spheres in unknots.
Tubing the boundary spheres together, we have constructed a $\mu$-colored
cobordism~$\Sigma$ with $m$ components between $L$ and a $\mu$-colored unlink $L'$ of
$m$ components. Thanks to the results of Section~\ref{sub:CFSignature}, we can compute the signature and
nullity of $L'$ using C-complexes~\cite[Section 2]{CimasoniFlorens}.
We pick a disjoint union of $m$ disks as a C-complex. The resulting generalized Seifert matrices are
empty, yielding $\sigma_{L'}(\omega)=0$ and $\eta_{L'}(\omega) = 0+\beta_0(S)-1=m-1$ for all $\omega\in \mathbb{T}^\mu$.
Using Theorem~\ref{thm:Genus} and Remark~\ref{rem:formulas}, we get
\[|\sigma_L(\omega)|+|\eta_L(\omega)-m+1| \leq - \sum_{i=1}^\mu \chi(\Sigma_i)
+c=\sum_{i=1}^{\mu}b_1(\Sigma_i) -m+c.\]
Now, if $C$ is any of the $m$ components of $F$, the corresponding component
$C'$ of $\Sigma$ is obtained from $C$ by removing a small disk, so that
$\beta_1(C')=\beta_1(C)+1$. Summing over all the components, we get
$\sum_{i=1}^{\mu}\beta_1(\Sigma_i)=\sum_{i=1}^{\mu}\beta_1(F_i)+m$, whence the desired
formula.
\end{proof}
The next example discusses the (non)-sharpness of the bound of Corollary~\ref{cor:CimasoniFlorens72}.
\begin{example}
We start with an example where the bound is sharp.
Consider the $1$-colored Hopf link~$H = L_1 = K_0 \cup K_1$ (with any orientation). The oriented link~$H$ bounds an annulus~$A$ in $S^3$, and we compute~$|\sigma_H(-1)| = 1$ and $\eta_H(-1) = 0$. If we push $A$ into the $4$--ball, we obtain a bounding surface~$F = F_1 = A$. The inequality of Corollary~\ref{cor:CimasoniFlorens72} is sharp:
\[ 1 = |\sigma_H(-1)|+|\eta_H(-1)-1+1| \leq \beta_1(A) +0 = 1. \]
Although it is easy to construct examples
where this inequality is not sharp,
we claim that the defect can in fact be arbitrarily large: pick a family of knots~$J_n$ such that~$J$ has the Seifert matrix of a slice knot, and topological $4$--genus~$g^\text{top}_4(J_n) \geq~n$ (such knots exist thanks to~\cite[Theorem 1.3]{Cha08}). Now consider $H(J_n) = K_0 \cup (K_1 \# J_n)$, where we tie the knot~$J_n$ into $K_1$ in a small $3$--ball disjoint from~$K_0$. The signature~$\sigma_{H(J_n)}(-1) = 1$ and the nullity~$\eta_{H(J_n)}(-1) = 0$ do not change, but we have~$g_4(H(J_n)) \geq g_4(J_n) -1 \geq n-1$, concluding the proof of the claim.
Instead, if we pick each knot $J_n$ to be topologically slice, but with smooth $4$--genus~$g_4^\text{smooth}(J_n) \geq n$ (such knots exist~\cite[Remark 1.2]{Tanaka98}), then the $H(J_n)$ provide a family of knots where the inequality is sharp in the topological category, but not in the smooth category.
\end{example}
We now compare Corollary~\ref{cor:CimasoniFlorens72} with previous results.
\begin{remark}
\label{rem:Genus}
Corollary~\ref{cor:CimasoniFlorens72} is a generalization of~\cite[Theorem 7.2]{CimasoniFlorens}.
In that paper, it is proven in the smooth setting and requires~$\omega$ to be in the set $\mathbb{T}_{P}^\mu$,
which is strictly smaller as $\mathbb{T}_!^\mu$; see Example~\ref{sub:ConcordanceRoots}.
Since all surfaces $F_i$ are assumed to be connected, $\mu$ appears instead of $m$ in their formula.
We can also recover a previous result~\cite[Theorem 1.4]{Powell} bounding the $4$-genus
of a $1$-colored link~$L$ with $m$ components.
Consider disjoint surfaces $F_1,\dots, F_m$ in $D^4$ bounding $L$.
Indeed, $F:=F_1\sqcup\cdots \sqcup F_m$ is a $1$-colored bounding surface for $L$, and applying
Corollary~\ref{cor:CimasoniFlorens72}, we get
\[|\sigma_L(\omega)|+|\eta_L(\omega)-m+1 | \leq \beta_1(F)=2g(F), \]
for $\omega\in \mathbb{T}_!^1$.
The result follows by passing to the minimum over all such collections of surfaces and observing that $\mathbb{T}_!^1$ is dense in $S^1$.
Finally, note that Viro proves inequalities similar to Corollary~\ref{cor:CimasoniFlorens72} in any odd dimension. In particular, for links in $S^3$ he obtains $|\sigma_L(\omega)|+\eta_L(\omega) \leq \beta_2(F,L)+\beta_1(F)$ and $|\sigma_L(\omega)|+\eta_L(\omega) \leq \beta_1(F,L)+\beta_0(F)$~\cite[Theorem 4.C]{Viro09}. Reworking his equations leads to the inequality
$$|\sigma_L(\omega)|+\eta_L(\omega)-m \leq \sum_{i=1}^\mu \beta_1(F_i) +c,$$
which is slightly weaker than Corollary~\ref{cor:CimasoniFlorens72}. The interested reader will note that while Viro essentially obtains his results for all $\omega \in \mathbb{T}^\mu_!$, his methods are quite different from the chain homotopy argument we rely on, see~\cite[Appendix C]{Viro09}.
\end{remark}
\section{Plumbed \texorpdfstring{$3$--manifolds}{3-manifolds} and surfaces in the \texorpdfstring{$4$--ball}{4-ball}}\label{sec:Plumbed}
In this section, we review plumbed $3$-manifolds and prove a vanishing result for their signature defect. This result is a key step in the proof of Theorem \ref{thm:SolvableNullitySignature} (which is concerned with the invariance of the signature and nullity under $0.5$-solvable cobordisms).
In Section~\ref{sub:APS}, we show this vanishing result in the case of
products of a closed surface with $S^1$.
To do so, we apply a product formula for the Atiyah-Patodi-Singer rho invariant, and pass from the smooth to the topological setting by using a bordism argument. In Section~\ref{sub:Plumbing}, we introduce the framework of plumbed $3$-manifolds and prove the main result, which is contained in Proposition~\ref{prop:PbFilling}. This proposition shows that the signature defect of a $4$-manifold vanishes if its boundary is a so-called ``balanced" plumbed $3$-manifold.
Finally, in Section~\ref{sub:SurfacesD4} we describe how plumbed $3$-manifolds arise naturally from surfaces intersecting transversally in the $4$-ball, and we perform a homological computation which is needed in Section~\ref{sec:Solvable}.
\subsection{The rho invariant of a product \texorpdfstring{$\Sigma\times S^1$}{Sigma x S1}}\label{sub:APS}
We consider the \emph{rho invariant}~$\rho(M,\alpha)$, a real number,
in the special case of $M$ being a smooth, odd dimensional manifold
with a homomorphism~$\alpha\colon H_1(M;\mathbb{Z})\to U(1)$~\cite{AtiyahPatodiSinger}.
The definition of the rho invariant requires spectral analysis
of elliptic differential operators on a manifold,
and we will not attempt to recall it.
Instead we state the following properties of $\rho(M,\alpha)$, which
will be sufficient for the purposes of this article.
\begin{proposition}\label{prop:APS}
~
\begin{enumerate}
\item\label{Item:Bounding} If $Z$ is a smooth $2n$-manifold together with a
homomorphism~$\alpha\colon H_1(Z;\mathbb{Z})\to~S^1$,
then $\rho(\partial Z,\alpha) = -(\operatorname{sign}_\alpha Z - \operatorname{sign} Z)$.
\item\label{Item:Tensor} If $N$ is a closed smooth $2m$-manifold with a homomorphism $\alpha\colon H_1(N;\mathbb{Z})\to~S^1$,
and $S^1$ comes with a homomorphism $\beta\colon H_1(S^1;\mathbb{Z})\to S^1$,
then
\[ \rho(N\times S^1, \alpha\otimes \beta)
=(-1)^m \operatorname{sign} N \cdot \rho(S^1, \beta).\]
In particular, $\rho(N\times S^1, \alpha\otimes \beta) =0$ if $m$ is odd.
\end{enumerate}
\end{proposition}
\begin{proof}
The first result is the specialization to our setting of the Atiyah-Patodi-Singer index theorem~\cite[Theorem 2.4]{AtiyahPatodiSinger}. The formula in the second statement follows from a direct computation combined with the classical Atiyah-Singer theorem.
Both results can be found in~\cite[Theorem 1.2, (iii) and (v)]{Neumann79}, where it has to be observed that the invariant considered by the author differs from the rho invariant by a sign and that $\operatorname{sign} N = \operatorname{sign}_\alpha N$ (this follows from~(\ref{Item:Bounding}) since $N$ has no boundary, or
alternatively from the Hirzebruch signature formula; see Remark~\ref{rem:SignatureRemark}). The last claim follows immediately from the fact that the ordinary signature of a closed manifold is non-trivial only in dimension $4k$.
\end{proof}
We restrict further to manifolds~$M$ with a homomorphism~$H_1(M;\mathbb{Z}) \to \mathbb{Z}^\mu$.
Since one-dimensional representations of $H_1(M;\mathbb{Z})$ factoring through $\mathbb{Z}^\mu$ are in bijection
with values~$\omega\in (S^1)^\mu$, we will denote by
$\rho_\omega(M)$ the rho invariant corresponding to the representation~$\alpha$
given by the composition
\[\alpha \colon H_1(M;\mathbb{Z}) \to \mathbb{Z}^\mu \xrightarrow{\omega} S^1.\]
Using Proposition~\ref{prop:APS}, we prove the following lemma.
\begin{lemma}\label{lem:EtaVanishing}
If $\Sigma$ is a closed oriented connected surface and $\phi \colon H_1(\Sigma \times S^1; \mathbb{Z}) \to \mathbb{Z}^\mu$ is a homomorphism, then $\rho_\omega(\Sigma \times S^1) = 0$ for all $\omega \in \mathbb{T}^\mu$.
\end{lemma}
\begin{proof}
Since $H_1(\Sigma \times S^1; \mathbb{Z}) = H_1(\Sigma;\mathbb{Z}) \oplus H_1(S^1; \mathbb{Z})$,
we may restrict $ \phi \colon H_1(\Sigma \times S^1; \mathbb{Z}) \to \mathbb{Z}^\mu$ to
each summand. This produces maps $\phi_\Sigma \colon H_1(\Sigma; \mathbb{Z}) \to \mathbb{Z}^\mu$
and $\phi_{S^1} \colon H_1( S^1; \mathbb{Z}) \to \mathbb{Z}^\mu$. Postcomposing each of
these maps with the map $\mathbb{Z}^\mu \xrightarrow{\omega} S^1$ produces maps~$\varphi, \varphi_\Sigma$
and $\varphi_{S^1}$. Since these maps fit in the commutative diagram
\[
\begin{tikzcd}[column sep=1.5cm]
H_1(\Sigma \times S^1; \mathbb{Z}) \ar[d, "pr_\Sigma \oplus pr_{S^1}"] \ar[r, "\varphi"] & S^1\\
H_1(\Sigma; \mathbb{Z}) \oplus H_1(S^1; \mathbb{Z}) \ar[r, "\varphi_\Sigma \times \varphi_{S^1}"]
& S^1 \times S^1, \ar[u,"\cdot"]
\end{tikzcd}
\]
it follows that $\varphi = \varphi_\Sigma \otimes \varphi_{S^1}$. Using point~(\ref{Item:Tensor}) of Proposition~\ref{prop:APS}, one obtains
\[\rho_\omega( \Sigma \times S^1 ) = \rho(\Sigma \times S^1, \varphi_\Sigma\otimes \varphi_{S^1})
= 0.
\]
This concludes the proof of the lemma.
\end{proof}
The following corollary is nearly immediate.
\begin{corollary}\label{cor:twSignatureVanishing}
Let $V$ be a set of closed oriented connected surfaces.
If $W$ is a $4$-manifold over $\mathbb{Z}^\mu$ with boundary
\[ \partial W = \bigsqcup_{\Sigma \in V} \Sigma \times S^1,\]
then $\operatorname{sign}_\omega W - \operatorname{sign} W= 0$.
\end{corollary}
\begin{proof}
Thanks to point (\ref{Item:Bounding}) of Proposition~\ref{prop:APS},
the number $\operatorname{sign}_\omega W - \operatorname{sign} W$ coincides with minus the rho invariants of its boundary.
By Lemma~\ref{lem:EtaVanishing} and additivity of the rho invariant under disjoint union of manifolds~\cite[Theorem $1.2.1$]{Neumann79},
we get $\operatorname{sign}_\omega W - \operatorname{sign} W=0$.
\end{proof}
Since Proposition~\ref{prop:APS} required the cobounding manifold to be smooth, one might worry
about Corollary~\ref{cor:twSignatureVanishing} only holding for smooth $4$-manifolds~$W$.
The following remark deals with this issue.
\begin{remark}\label{rem:TOP}
Let $W$ be a topological $4$-manifold bounding~$M = \bigsqcup_{\Sigma \in V} \Sigma \times S^1$.
The bordism groups are computed in both the topological
case and the smooth case by $\Omega_3(\mathbb{Z}^\mu) = H_3(\mathbb{Z}^\mu; \mathbb{Z})$.
Thus, if $M$ bounds topologically, then there also exists a smooth filling~$W'$, for which the rho invariant computation gives $\operatorname{sign}_\omega W' -\operatorname{sign} W'=0$. By
Corollary~\ref{cor:bordism}, the difference between twisted and ordinary
signature is the same for two $4$-manifolds filling the same $M$ over $\mathbb{Z}^\mu$,
so we conclude that $\operatorname{sign}_\omega W -\operatorname{sign} W$ is also zero as desired.
\end{remark}
\subsection{Plumbings and their signature defect}\label{sub:Plumbing}
After reviewing the definition of a plumbed $3$-manifold, we use the rho
invariant to observe that if a $4$-manifold~$W$ admits a balanced plumbed
$3$-manifold as its boundary, then its signature defect vanishes; see
Proposition~\ref{prop:PbFilling}. Classical references on plumbed $3$-manifolds
include~\cite{NeumannCalculus,Hirzebruch71}. See also \cite{BorodzikFriedlPowell}
for their use in our context.
\medbreak
We begin by setting up notation.
\begin{construction}\label{constr:PlumbedManifold}
Let $G = (V,E)$ be an unoriented graph with no loops. The set~$E$ is the set of oriented edges, and $s\colon E \to V$
and $t\colon E \to V$ are the source and the target maps. The involution~$i \colon E \to E$ sends an oriented edge to the corresponding edge with the opposite orientation; see e.g.~\cite[Section I.2]{Serre80}. The graph is unoriented in the sense that for each edge, the set $E$ also contains the edge with the opposite orientation. We shall sometimes also denote $i(e)$ by $\bar e$.
Assume that the set of vertices~$V$ consists of oriented, connected and compact surfaces~$F$
and that the edges~$e\in E$ are labeled by weights~$\varepsilon(e)=\varepsilon(\bar e)\in\{\pm 1\}$.
For each edge $e$, we choose an embedded disc~$D_e \subset s(e)$ in such a way that no two discs intersect.
We then remove these discs, by defining for each surface $F \in V$ the complement
\[ F^\circ = F \setminus \bigcup_{s(e) = F} D_e. \]
We define the \emph{plumbed $3$-manifold} $\operatorname{Pb}(G)$ as
\[ \operatorname{Pb}(G):=\left( \bigsqcup_{F \in V} F^\circ \times S^1\right) / \sim \]
where, for all $e \in E$ the identifications are given by
\begin{align}\label{eq:plumbingmaps}
(-\partial D_{e}) \times S^1 &\to (-\partial D_{i(e)}) \times S^1\\
(x,y) & \mapsto
\nonumber \begin{cases}
(y^{-1}, x^{-1}), & \text{if } \varepsilon(e)=1,\\
(y,x), & \text{if } \varepsilon(e)=-1.
\end{cases}
\end{align}
Since these identifications make use of orientation reversing homeomorphisms,
the $3$-manifold~$\operatorname{Pb}(G)$ carries an orientation that extends the orientation
of each~$F^\circ \times S^1$.
\end{construction}
\begin{remark}
The orientation $-\partial D_e$ is the one obtained by considering the circle
as a boundary component of $F^\circ$. This
is the opposite of the one induced by the boundary~$\partial D_e$ of the removed disk.
In the general context of plumbing disk bundles, one trivializes over the removed disks, which
causes the two formulas to flip; see e.g.~\cite[Chapter 8 p.\ 67]{Hirzebruch71}.
\end{remark}
The boundary of a plumbed $3$-manifold~$\operatorname{Pb}(G)$ is a union of tori and
the components correspond to the boundary components of the surfaces~$F \in V$.
By construction, the boundary components come with the
product structure~$\partial \operatorname{Pb}(G) = \bigsqcup_{F \in V}\partial F \times S^1$.
We define the homology class~$[\partial F] = [\partial F \times \{ pt\}]$
in $H_1(\partial \operatorname{Pb}(G); \mathbb{R})$.
In order to describe the kernel $H_1(\partial \operatorname{Pb}(G);\mathbb{R})\to H_1(\operatorname{Pb}(G);\mathbb{R})$, we introduce some more notation:
for each surface~$F \in V$ with boundary, label its
boundary components~$K_1, \dots, K_{n_F}$ and accordingly their
meridians~$\mu_1^F, \dots, \mu_{n_F}^F$ and longitudes~$l_1^F, \dots, l_{n_F}^F$.
We have the equality~$[\partial F] = \sum_{k = 1}^{n_F} [l_k^F]$.
The vertices of our graph~$G$ are surfaces. So, for each edge~$e \in E$, the expression~$t(e)$ denotes a surface and~$\mu^{t(e)}_i$ denotes the meridian of $i$--th boundary torus of $t(e)$.
The following lemma describes the kernel of the inclusion~$H_1(\partial \operatorname{Pb}(G);\mathbb{R})\to H_1(\operatorname{Pb}(G);\mathbb{R})$,
which will be useful for our applications of Novikov-Wall additivity.
\begin{lemma}\label{lem:KernelPb}
The kernel of the inclusion induced map~$H_1(\partial \operatorname{Pb}(G);\mathbb{R})\to H_1(\operatorname{Pb}(G);\mathbb{R})$
is freely generated by the elements
\[ [\partial F] - \sum_{s(e) = F} \varepsilon(e) \mu_1^{t(e)} \text{ and } \quad \mu_i^F - \mu_1^F, \]
for $F$ varying over the elements in $V$ with $\partial F \ne \emptyset$ and $2\leq i\leq n_F$.
\end{lemma}
\begin{proof}
From the construction of $\operatorname{Pb}(G)$, we see that for every edge~$e \in E$
there is a torus~$-\partial D_e \times S^1\subset s(e)\times S^1$ which is
identified with $-\partial D_{\bar{e}} \times S^1\subset t(e) \times S^1$. We denote this
torus by $T_e \subset \operatorname{Pb}(G)$. Hence, $T_e=-T_{\bar{e}}$.
Now pick an orientation~$E' \subset E$
on the edges, i.e.\ for every $e \in E$, exactly
one of the edges~$e$ and $\bar e$ is an element of $E'$.
From the construction of $\operatorname{Pb}(G)$, we obtain a Mayer-Vietoris sequence
\[ \label{eq:mayvet} \dots \to \bigoplus_{e \in E'} H_1(T_e;\mathbb{R})
\xrightarrow{i_t-i_s} \bigoplus_{F \in V} H_1(F^\circ \times S^1;\mathbb{R}) \to H_1(\operatorname{Pb}(G);\mathbb{R}) \to \cdots, \]
where $i_t, i_s$ denote the maps induced by the inclusions of $T_e$ into $t(e) \times S^1$ and $s(e)\times S^1$ respectively.
For each $F$, the inclusion~$\partial F\times S^1\to \operatorname{Pb}(G)$ factors through
the space~$\bigsqcup_{F \in V} F^\circ \times S^1$. Consequently, we have the commutative diagram
of inclusion induced maps
\[
\xymatrix@C0.8 cm@R0.4cm{
\displaystyle\bigoplus_{e \in E'} H_1(T_e;\mathbb{R}) \ar[r]^-{i_t-i_s} & \displaystyle\bigoplus_{F \in V} H_1(F^\circ \times S^1;\mathbb{R})
\ar[r]^-h& H_1(\operatorname{Pb}(G);\mathbb{R})\\
&&H_1(\partial\operatorname{Pb}(G);\mathbb{R}) \ar[ul]^f \ar[u]^j,&
}
\]
yielding $\ker j = \ker h\circ f = \{x\in H_1(\partial\operatorname{Pb}(G);\mathbb{R}) \, \vert \, f(x) \in \operatorname{im} i_t-i_s \}.$
We shall now restrict our attention to those surfaces~$F$ with $\partial F\ne \emptyset$,
and prove that both $\mu_k^F - \mu_1^F$ and $[\partial F] - \sum_{s(e) = F} \varepsilon(e) \mu_1^{t(e)}$
belong to $\ker j$.
As $F^\circ$ is connected, all elements $\mu_k^F$ for $1\leq k \leq n_F$
are equal in $H_1(\operatorname{Pb}(G);\mathbb{R})$, so the elements $\mu_k^F - \mu_1^F$ are in $\ker f$ and a fortiori in $\ker j$.
Next, we check that an element of the form~$[\partial F] - \sum_{s(e) = F} \varepsilon(e) \mu_1^{t(e)}$
is sent by $f$ to the image of $i_s-i_t$.
Note that $H_1(F^\circ \times S^1; \mathbb{R}) = H_1(F^\circ; \mathbb{R}) \oplus \mathbb{R}\langle \mu_1^F\rangle$, so that we have the relation
$[\partial F] + \sum_{s(e) = F} [- \partial D_e] = 0$ in $H_1(F^\circ \times S^1; \mathbb{R})$.
We thus obtain
\[f\Big([\partial F] - \sum_{s(e) = F} \varepsilon(e) \mu_1^{t(e)}\Big)
= \sum_{s(e) = F} \left([\partial D_e] - \varepsilon(e) \mu_1^{t(e)}\right), \]
and the claim reduces to checking that this element is
in the image of $i_t - i_s$.
Consider the class~$-[\partial D_e] \in H_1(T_e; \mathbb{Z})$.
We have $- i_s [-\partial D_e] = [\partial D_e]$ and, by the gluing map
given in Construction~\ref{constr:PlumbedManifold},
$i_t[-\partial D_e]= -\varepsilon(e) \mu_{t(e)}$.
As a result, the difference $[\partial D_e] - \varepsilon(e) \mu_1^{t(e)}$ is indeed in the image of $i_t-i_s$, and so $ [\partial F] - \sum_{s(e) = F} \varepsilon(e) \mu_1^{t(e)}$ is in $\ker j$.
Note that the elements in the statement of the lemma span a subspace~$U$, whose
dimension is the number of boundary components of $\operatorname{Pb}(G)$, i.e.\ it is half the
dimension of the space~$H_1(\partial \operatorname{Pb}(G); \mathbb{R})$.
By the half lives, half dies principle~\cite[Lemma 8.15]{Lickorish97}, the kernel~$\ker j$ has
the same dimension as $U$ and so coincides with $U$.
\end{proof}
\begin{definition}
\label{def:Balanced}
Let $G=(V,E)$ a graph with a label function $\varepsilon\colon E\to \{\pm 1\}$.
For $v,w \in V$ denote by $E(v, w) = \{ e \in E \mid s(e) = v, t(e) = w\}$
the set of all edges between $v$ and $w$. We call the integer
$p(v,w):= \sum_{e\in E(v, w)} \varepsilon(e)$ the \emph{total weight} of the pair of distinct vertices $(v,w)$.
The graph $G$ is called \emph{balanced} if $p(v,w)=0$ for all such pairs~$(v,w)$.
\end{definition}
From now on, assume that our plumbed $3$-manifold~$\operatorname{Pb}(G)$ comes with a homomorphism~$\phi\colon H_1(\operatorname{Pb}(G);\mathbb{Z}) \to \mathbb{Z}^\mu$. We call such a homomorphism \emph{meridional} if, for each
constituting piece $F^\circ \times S^1\subseteq \operatorname{Pb}(G)$ with $F \in V$,
the restriction of $\phi$ to $H_1(F^\circ \times S^1;\mathbb{Z})$ sends the class of
$\{pt\}\times S^1$ to one of the canonical generators $e_1,\dotsc, e_\mu$ of $\mathbb{Z}^\mu$.
Moreover, in the next two results we will restrict our attention to plumbings
of \emph{closed} surfaces.
The next lemma shows that if $G$ is balanced, then $\operatorname{Pb}(G)$ is cobordant to a
disjoint union of trivial surface bundles, where the cobordism has vanishing
signature defect.
\begin{lemma}\label{lem:handles}
Let $G = (V,E)$ be a balanced graph with vertices closed connected surfaces.
Suppose that $\phi \colon H_1(\operatorname{Pb}(G); \mathbb{Z}) \to \mathbb{Z}^\mu$ is a meridional homomorphism.
Then there exists a smooth $4$-manifold $Z$ over $\mathbb{Z}^\mu$ such that:
\begin{enumerate}
\item the boundary of $Z$ is a disjoint union
\[\partial Z = -\operatorname{Pb}(G) \sqcup \bigsqcup_{F \in V} \Sigma_F \times S^1,\]
where every $\Sigma_F$ is a closed oriented surface;
\item the restriction $H_1(\bigsqcup_{F \in V} \Sigma_F \times S^1; \mathbb{Z}) \to \mathbb{Z}^\mu$ is meridional;
\item $\operatorname{dsign}_\omega Z = 0$ for all $\omega \in \mathbb{T}^\mu$.
\end{enumerate}
\end{lemma}
\begin{proof}
Instead of proving the statement directly, we prove the following: if $E$ is nonempty, then
there exists a balanced graph~$G' = (V', E')$
with the same number of vertices and fewer edges than $G$, such that there exists a manifold~$Z_{G'}$ over $\mathbb{Z}^\mu$ with
$\partial Z_{G'} = -\operatorname{Pb}(G) \sqcup \operatorname{Pb}(G')$,
which induces a meridional homomorphism on $\operatorname{Pb}(G')$ and such that $\operatorname{dsign}_\omega Z_{G'}= 0$ for all $\omega \in \mathbb{T}^\mu$.
The original statement can be recovered as follows: iterate the above
to obtain a sequence of graphs~$G = G_0, \dots, G_n$ such that the set of edges of $G_n$ is empty.
Consequently, $\operatorname{Pb}(G_n) = \bigsqcup_{F \in V} \Sigma_F \times S^1$.
We then glue the $4$-manifolds together:
$Z := Z_{G_1} \cup \dots \cup Z_{G_n}$. We get $\partial Z = -\operatorname{Pb}(G) \sqcup \operatorname{Pb}(G_n)$ as required
and by Novikov additivity $\operatorname{dsign}_\omega Z = \sum_{i=1}^n \operatorname{dsign}_\omega Z_{G_i} = 0$.
Now we proceed with the proof of the modified statement.
Recall from Construction~\ref{constr:PlumbedManifold} that to each edge~$e$ corresponds the
embedded torus~$T_e = (-\partial D_e) \times S^1$. The complement of all of these tori is diffeomorphic to
$\bigsqcup_{F \in V} F^\circ \times S^1 \subset \operatorname{Pb}(G)$.
In order to produce the desired $4$-manifold~$Z$, our aim is to attach a~$D^2 \times T^2$
to the trivial bordism~$\operatorname{Pb}(G) \times I$.
Given two vertices $F_1, F_2 \in V$, we write~$E(F_1, F_2) = \{ e \in E \mid s(e) = F_1, t(e) = F_2\}$ as
in Definition~\ref{def:Balanced}. Pick two vertices $F_1,F_2 \in V$ such that $E(F_1,F_2)$ is nonempty.
As the graph is balanced, this implies we can also pick two edges~$e,e' \in E(F_1,F_2)$ such that
$\varepsilon(e) = 1$ and $\varepsilon(e') = -1$.
Now set~$X_{e,e'} := I\times I \times S^1 \times S^1$.
Consider the corresponding tori~$T_e = (-\partial D_e) \times S^1$ and
$T_{e'} = (-\partial D_{e'}) \times S^1$, with oriented neighborhoods~$I \times T_e$, $I \times T_{e'}$.
We attach~$X_{e,e'}$ to $\operatorname{Pb}(G)\times \{1\}$ along its vertical boundaries through a homeomorphism $f$ given by the following formulas:
\begin{align*}\label{eq:handlegluing}
\{ 0\} \times I \times S^1 \times S^1 &\to I \times (-\partial D_e) \times S^1 & \{ 1\} \times I \times S^1 \times S^1 &\to I \times (-\partial D_{e'}) \times S^1 \\
\nonumber (0,t, x,y) &\mapsto (t, x,y), &
(1,t, x,y) &\mapsto (t, x^{-1},y) .
\end{align*}
The induced orientations on $\{0,1\}\times I\times S^1\times S^1$ are such that the above map is orientation-reversing. As a consequence, the orientations of $\operatorname{Pb}(G)\times I$ and $X_{e,e'}$ extend to the resulting $4$-manifold
\[ Z:= X_{e,e'} \cup_f \operatorname{Pb}(G) \times I. \]
Let $a_1, a_2\in \mathbb{Z}^\mu$ the images of the meridians of $F_1$ and $F_2$ under the map $H_1(\operatorname{Pb}(G);\mathbb{Z})\to \mathbb{Z}^\mu$. Recalling the construction of $\operatorname{Pb}(G)$ given in \eqref{eq:plumbingmaps}, we see that the induced maps to $\mathbb{Z}^\mu$ on $T_e$ and $T_{e'}$ are given by
\begin{align*}
H_1(-\partial D_e\times S^1;\mathbb{Z})&\to \mathbb{Z}^\mu &H_1(-\partial D_{e'}\times S^1;\mathbb{Z})&\to \mathbb{Z}^\mu\\
[\{p\}\times S^1]&\mapsto a_1 & [\{p\}\times S^1]&\mapsto a_1\\
[-\partial D_e\times \{p\}]&\mapsto a_2 &[-\partial D_{e'}\times \{p\}]&\mapsto -a_2.
\end{align*}
The difference in the sign of the image $[-\partial D_e\times \{p\}]$ is
a consequence of the fact that the edges~$e,e'$ had opposite signs. This allows
us to define a map~$\phi_X \colon H_1(X_{e,e'}; \mathbb{Z}) \to \mathbb{Z}^\mu$
which glues with the map $\phi \colon H_1(\operatorname{Pb}(G); \mathbb{Z}) \to \mathbb{Z}^\mu$, i.e.\
the following diagram commutes:
\[
\begin{tikzcd}[column sep=0.6cm,row sep=0.6cm]
H_1(\{ 0,1\} \times I \times S^1 \times S^1; \mathbb{Z}) \ar[rr, "f_*"]\ar[dr, "\phi_X"] & &H_1(I \times T_e;\mathbb{Z})\oplus H_1(I \times T_e';\mathbb{Z}) \ar[dl, "\phi"]\\
&\mathbb{Z}^\mu.&
\end{tikzcd}
\]
\begin{figure}[ht]
\includegraphics[width=11cm]{figure1.pdf}
\caption{The effect of attaching $X_{e,e'}$ to $\operatorname{Pb}(G) \times I$ depicted in reduced dimensions}
\end{figure}
By making an additional choice of a splitting of the Mayer-Vietoris
sequence
\[ H_1(X_{e,e'};\mathbb{Z}) \oplus H_1(\operatorname{Pb}(G)\times I; \mathbb{Z}) \to H_1(Z; \mathbb{Z}) \to H_0(\{0,1\} \times I \times T^2;\mathbb{Z}), \]
we obtain a map $H_1(Z; \mathbb{Z}) \to \mathbb{Z}^\mu$ which extends
$\phi$ and $\phi_X$ on $H_1(\operatorname{Pb}(G)\times I; \mathbb{Z})$ and $H_1(X_{e,e'};\mathbb{Z})$.
The boundary of $Z$ has two components. The bottom boundary is $-\operatorname{Pb}(G)$.
The effect of adding $X_{e,e'}$ on the top boundary is
that of cutting along $T_e$ and $T_{e'}$ and gluing together the boundary component~$- \partial D_e \times S^1$
to $- \partial D_{e'} \times S^1$, and glueing $-\partial D_{i(e)} \times S^1$ to $-\partial D_{i(e')} \times S^1$.
Let $F_1' = F_1 \# T^2$ be the result of $0$--surgery along $D_e$ and $D_{e'}$
in $F_1$, and define $F_2'$ similarly.
The top boundary inherits a plumbed structure along a graph $G'$ obtained from $G$ by replacing the vertices $F_1$ and $F_2$ with $F_1'$ and $F_2'$,
and by removing the edges~$e$ and $e'$.
We have verified that $Z$ fulfills the first statement.
To conclude the proof of the proposition, it remains to prove that~$\operatorname{dsign}_\omega Z = 0$. This
is a consequence of the following claim.
\begin{claim}
The twisted and untwisted signature of $\operatorname{Pb}(G) \times I$ and $X_{e,e'}$
vanish and Novikov-Wall additivity holds when gluing these two pieces together.
\end{claim}
To prove that the signatures vanish, note that both spaces are $4$-manifolds~$W$
with the property that the inclusions of the boundary $H_2(\partial W; \mathbb{Z}) \to H_2(W; \mathbb{Z})$
and $H_2(\partial W; \mathbb{C}^\omega) \to H_2(W; \mathbb{C}^\omega)$ surject.
This implies that both the twisted and untwisted intersection forms vanish.
In particular, the twisted and untwisted signatures of $\operatorname{Pb}(G) \times I$ and $X_{e,e'}$ are zero.
Next, we consider Novikov-Wall additivity. We are gluing $W_+ = X_{e,e'}$ to
$W_- =\operatorname{Pb}(G)\times I$ along $M = \nu T_e \sqcup \nu T_{e'} \subset \operatorname{Pb}(G) \times \{1\}$. In the notations of Section \ref{sub:NovikovWall}, we have $N_+=I \times \{0,1\} \times S^1 \times S^1$ and $N_- = \operatorname{Pb}(G) \setminus M$.
The boundary of the gluing region is given by the four tori
\[ \Sigma:= -\partial D_e \times S^1 \sqcup -\partial D_{i(e)} \times S^1 \sqcup -\partial D_{e'} \times S^1 \sqcup -\partial D_{i(e')} \times S^1 .\]
We shall prove that $V_{N_+} = \ker H_1(\Sigma;\mathbb{R}) \to H_1(N_+; \mathbb{R})$
and $V_{N_-} = \ker H_1(\Sigma;\mathbb{R}) \to H_1(N_-; \mathbb{R})$ agree, so that the hypotheses of the Novikov-Wall additivity theorem are satisfied (recall Theorem \ref{thm:Wall}) .
Observing the gluing maps above, we see that the vector space $V_{N_+}$ has basis
\begin{equation} \label{eq:KernelBasis}
[-\partial D_e] + [-\partial D_{e'}], \quad [ S^1_{e}] - [S^1_{e'}], \quad
[-\partial D_{i(e)}] + [-\partial D_{i(e')}], \quad [ S^1_{i(e)}] - [S^1_{i(e')}].
\end{equation}
In order to describe $V_{N_-}$, observe that $N_-=\operatorname{Pb}(G) \setminus M$ inherits
a plumbed structure from $\operatorname{Pb}(G)$. It has the same surfaces as vertex set with $F_1$ and $F_2$
replaced by $F_1 \setminus ( D_e \cup D_{e'} )$ and $F_2 \setminus ( D_{i(e)} \cup D_{i(e')})$. Its set of
edges is obtained by removing $e$ and $e'$ from the set of edges of $G$.
Note that $\Sigma = \partial \operatorname{Pb}(G) \setminus M$ and we can use
Lemma~\ref{lem:KernelPb} to obtain a basis for $V_{N_-}$. The difference of meridians gives the basis elements
$[ S^1_{e}] - [S^1_{e'}], [ S^1_{i(e)}] - [S^1_{i(e')}]$.
The surface~$F_1$ has boundary $-\partial D_e \sqcup -\partial D_{e'}$, so that further elements of the basis are given by
\[ [-\partial D_e] + [-\partial D_{e'}] - \sum_{s(k) = v} \varepsilon(k) \mu_k
= [-\partial D_e] + [-\partial D_{e'}],\]
where the equality follows from the fact that $G$ is balanced.
The analogous statements holds for the other surface $F_2$. Consequently,
the vector space $V_{N_-}$ admits the same basis~\eqref{eq:KernelBasis} as $V_{N_+}$, and hence they coincide.
In particular, Theorem~\ref{thm:Wall} applies, and the untwisted signature
is additive.
For the twisted signature, thanks to Proposition~\ref{prop:TwistedWall},
it is enough to prove that the twisted homology vanishes for $\Sigma$.
This happens exactly if the induced $U(1)$-representation is nontrivial.
This is the case, because $\phi$ is meridional and the entries of $\omega$ are taken to be different from $1$. Consequently, the signature
defect is additive and so
\[ \operatorname{dsign}_\omega Z = \operatorname{dsign}_\omega \operatorname{Pb}(G) \times I + \operatorname{dsign}_\omega X_{e,e'} = 0.\]
\end{proof}
Using Lemma~\ref{lem:handles}, we can prove our main result about plumbed manifolds.
\begin{proposition}\label{prop:PbFilling}
Let $G = (V,E)$ be a balanced graph with vertices closed connected surfaces~$F$.
Suppose that $\phi \colon H_1(\operatorname{Pb}(G); \mathbb{Z}) \to \mathbb{Z}^\mu$ is a meridional homomorphism and that
$\operatorname{Pb}(G)$ bounds a $4$-manifold~$W$ over $\mathbb{Z}^\mu$. Then, for all $\omega \in \mathbb{T}^\mu$,
\[ \operatorname{sign}_\omega W -\operatorname{sign} W=0. \]
\end{proposition}
\begin{proof}
Since the graph is balanced, Lemma~\ref{lem:handles} produces closed
surfaces $\Sigma_F$ and a $4$-manifold $Z$ over $\mathbb{Z}^\mu$ whose
signature defect vanishes, with boundary
\[\partial Z = -\operatorname{Pb}(G) \sqcup \bigsqcup_{F \in V} \Sigma_F \times S^1.\]
One can now define $P:= W\cup_{\operatorname{Pb}(G)} Z$. Since the boundary of $P$ consists
of a disjoint union of $\Sigma_F \times S^1$,
Corollary~\ref{cor:twSignatureVanishing} guaranties that $ \operatorname{dsign}_\omega P=0$.
As we are gluing along a full boundary component, Novikov additivity holds for both the twisted and untwisted signature, leading to $\operatorname{dsign}_\omega P = \operatorname{dsign}_\omega W + \operatorname{dsign}_\omega Z$.
Since we know that both $\operatorname{dsign}_\omega P$ and
$\operatorname{dsign}_\omega Z$ vanish, $\operatorname{dsign}_\omega W$ also vanishes.
\end{proof}
\subsection{Surfaces in the \texorpdfstring{$4$--ball}{4-ball}}
\label{sub:SurfacesD4}
In the remainder of the paper, plumbed $3$-manifolds will mostly appear as boundaries of tubular neighborhoods of collections of surfaces in the $4$-ball.
\medbreak
We observe that the exterior of a bounding surface contains a plumbed $3$-manifold in its boundary.
\begin{definition}\label{defn:IntersectionGraph}
The \emph{intersection graph~$(V,E)$ of a bounding surface}~$F = F_1\cup \dots \cup F_m$ has
the vertex set~$V = \{ F_1, \dots, F_m\}$. The set of edges~$E$ consists
of triples~$e = (x, F_i, F_j)$ where $x$ is an intersection point between
the components~$F_i, F_j \in V$. The maps $s,t,i$ are defined on $e$ by
\[ s(e) = F_i \quad t(e) = F_j \quad i(e) = (x, F_j, F_i) . \]
Moreover, we assign a weight $\varepsilon(e)=\pm 1$
to each edge~$e= (x, F_i, F_j)$ corresponding to the sign of the intersection at the point~$x$.
\end{definition}
Our interest in plumbed $3$-manifolds essentially lies in the next example,
which is only balanced if the link has
pairwise vanishing
linking numbers.
\begin{example}\label{ex:PlumbingIntersections}
Let $F \subset D^4$ be a bounding surface for a link~$L$.
The boundary of the exterior~$W_F = D^4 \setminus \nu F$ decomposes
into $\partial W_F = X_L \cup_{L\times S^1} M_F$.
Plumbing the trivialized disk bundles~$F_i \times D^2$ by the intersection graph of $F$ describes
a neighborhood~$\nu F$ of $F$. In this model, the surfaces~$F_i$ are recovered
as the zero sections~$F_i \times \{0\}$~\cite[Chapter 8]{Hirzebruch71}.
As consequence, we see that $M_F$ is diffeomorphic to $\operatorname{Pb}(G)$, where $G$ is the intersection graph of $F$.
\end{example}
Let~$F=F_1\cup\cdots \cup F_m$ be a bounding surface for a link~$L$, and let $L_i$ be the sublink given by $\partial F_i$, for $i=1,\dotsc , m$.
Denote as usual the exterior of $F$ by $W_F$.
Recall from Example~\ref{ex:PlumbingIntersections} that
$\partial W_F = X_L \cup_{L\times S^1} M_F$, where $M_F$ is a
plumbed $3$-manifold. Enumerate the components of $L_i$ and denote their meridians by
$\mu_k^{L_i}$ for $1 \leq k \leq n_{L_i}$, where $n_{L_i}$ is the number of components of $L_i$.
Define the linking number between two disjoint sublinks by
\[
\lk(L_i,L_j)=\sum_{\substack{K\subset L_i\\ J\subset L_j}}\lk(K,J) ,
\]
where the sum runs over the link components of~$L_i$ and $L_j$,
and set $\lk(L_i,L_i)=0$ for all $i$. The following computation will turn out to be useful when applying Novikov-Wall additivity.
\begin{lemma}\label{lem:SameKernel}
The vector space $V_{M_F} = \ker H_1(L\times S^1; \mathbb{R}) \to H_1(M_F; \mathbb{R}) $ is generated by the elements of the form
\[ [L_i] - \sum_{j=1}^m \lk(L_i, L_j) \mu_{1}^{L_j}\quad \text{and} \quad \mu_{k}^{L_i} - \mu_1^{L_i}.\]
\end{lemma}
\begin{proof}
Consider the surface~$F = t(e)$ for an edge~$e$ and the corresponding sublink~$\partial F = \partial t(e) \subset S^3$, whose first component has meridian~$\mu^{\partial t(e)}_1$.
Applying Lemma~\ref{lem:KernelPb}, the component $F_i$ gives rise to the basis vectors
\[ [L_i] - \sum_{s(e) = F_i} \varepsilon(e) \mu_{1}^{\partial t(e)} \quad \text{and} \quad \mu_{k}^{L_i} - \mu_1^ {L_i}.\]
The result follows by observing that
\[ \sum_{s(e) = F_i} \varepsilon(e) \mu_{1}^{\partial t(e)}
= \sum_{j=1}^m (F_i\cdot F_j) \mu_{1}^{L_j}
= \sum_{j=1}^m \lk(L_i, L_j) \mu_{1}^{L_j}.\]
\end{proof}
\section{Invariance by \texorpdfstring{$0.5$--solvable}{0.5-solvable} cobordisms}\label{sec:Solvable}
The aim of this section is to prove that the multivariable signature and nullity are invariant under $0.5$-solvable cobordism. Sections~\ref{sub:H1Bordism} and~\ref{sub:05Solvable} respectively review the notion of $H_1$-cobordisms and $0.5$-solvable cobordisms. Section~\ref{sub:NullityProof} tackles the invariance of the nullity. Section~\ref{sub:SignatureProof} is concerned with invariance of the signature. Finally, Section~\ref{sub:Technical} proves some technical results which are used in Sections~\ref{sub:NullityProof} and~\ref{sub:SignatureProof}
\subsection{\texorpdfstring{$H_1$--cobordisms}{H1-cobordism}}\label{sub:H1Bordism}
In this section, we review the definition of an $H_1$-cobordisms between 3-manifolds and prove some elementary properties following~\cite{Cha}.
\medbreak
A \emph{cobordism}~$(W; M, M', \varphi)$ between two connected $3$-manifolds~$M, M'$ with a preferred orientation-preserving diffeomorphism~$\varphi\colon \partial M \to \partial M'$ is a compact connected $4$-manifold~$W$ with a decomposition~$\partial W \cong -M \cup_{\varphi} M'$. We will often suppress~$\varphi$ from the notation.
A cobordism $(W; M, M')$ is an \emph{$H_1$-cobordism}
if additionally the inclusions of $M$ and $M'$ into $W$ induce
isomorphisms~$ H_1(M; \mathbb{Z}) \xrightarrow{\cong} H_1(W; \mathbb{Z}) \xleftarrow{\cong} H_1(M' ; \mathbb{Z})$.
We start by recalling some immediate facts about $H_1$-cobordisms.
\begin{lemma}\label{lem:H1bordism}
If $(W;M,M')$ is an $H_1$-cobordism, then the following statements hold:
\begin{enumerate}
\item $H_i(W,M; \mathbb{Z})=0=H_i(W,M';\mathbb{Z})$ for all $i \neq 2$.
\item The groups $H_2(W,M;\mathbb{Z})$ and $H_2(W,M';\mathbb{Z})$ are isomorphic and free abelian.
\item Denote by $k \colon H_2(\partial W; \mathbb{Z}) \to H_2(W; \mathbb{Z})$ the map induced by the inclusion.
There exists a unique map~$\psi \colon H_2(W,M; \mathbb{Z}) \to H_2(W; \mathbb{Z})/\operatorname{im} k$ such that
\[
\begin{tikzcd}
H_2(W; \mathbb{Z})/\operatorname{im} k \ar[r] & H_2(W, \partial W; \mathbb{Z})\\
H_2(W; \mathbb{Z}) \ar[u] \ar[r]& H_2(W,M; \mathbb{Z}) \ar[u] \ar[ul, dashed, "\psi"]
\end{tikzcd}
\]
is commutative. The map~$\psi$ is an isomorphism.
\end{enumerate}
\end{lemma}
\begin{proof}
Since the first two assertions can be found in~\cite[Lemma 2.20]{Cha}, we only show here the third one here.
As a first step, we show that the map $i \colon H_2(W,M; \mathbb{Z}) \to H_2(W,\partial W; \mathbb{Z})$
arising from the long exact sequence of the triple~$(W,\partial W,M)$ is an injection. To prove this,
consider the diagram
$$\xymatrix@C0.6cm@R0.8cm{
\text{Hom}(H_1(W; \mathbb{Z}),\mathbb{Z}) \ar[r] & \text{Hom}(H_1(M'; \mathbb{Z}),\mathbb{Z}) \\
H^1(W; \mathbb{Z}) \ar[r]^f \ar[dd]^{\text{PD}}_\cong \ar[u]^\cong_{\text{ev}} & H^1(M'; \mathbb{Z}) \ar[d]^{\text{PD}}_\cong \ar[u]^\cong_{\text{ev}}\\
& H_2(M',\partial M'; \mathbb{Z}) \ar[d]^{\text{exc}}_\cong \\
H_3(W,\partial W; \mathbb{Z}) \ar[r] & H_2(\partial W, M; \mathbb{Z}) \ar[r] & H_2(W,M; \mathbb{Z}) \ar[r]^i & H_2(W,\partial W; \mathbb{Z}),
}$$
where $\text{exc}$ denotes excision. The upper square clearly commutes, while the pentagon commutes
by~\cite[Section $\text{VI}.6$, Problem $3$]{Bredon}.
Since $(W;M,M')$ is an $H_1$-cobordism, the uppermost horizontal map is an isomorphism. Consequently,
the map $f$ is an isomorphism and therefore so is the map~$H_3(W,\partial W; \mathbb{Z}) \to H_2(\partial W,M; \mathbb{Z})$.
Exactness now implies that $i \colon H_2(W,M; \mathbb{Z}) \to H_2(W,\partial W; \mathbb{Z})$ is injective.
As a second step, we show existence and uniqueness of~$\psi \colon H_2(W,M; \mathbb{Z}) \to \frac{H_2(W; \mathbb{Z})}{\operatorname{im} k}$.
The portion \[ H_2(\partial W; \mathbb{Z}) \stackrel{k}{\to} H_2(W; \mathbb{Z}) \stackrel{j}{\to} H_2(W,\partial W; \mathbb{Z}) \stackrel{\partial}{\to} H_1(\partial W; \mathbb{Z}) \stackrel{\ell}{\to} H_1(W; \mathbb{Z}) \]
of the long exact sequence of the pair $(W,\partial W)$ produces the short exact sequence in the top row of the following commutative diagram:
\[\xymatrix{
0 \ar[r] & \frac{H_2(W; \mathbb{Z})}{\operatorname{im} k} \ar[r]^-{j} & H_2(W,\partial W; \mathbb{Z}) \ar[r]^\partial & \ker \ell \ar[r]& 0 \\
& H_2(W; \mathbb{Z}) \ar[r] \ar@{->>}[u]& H_2(W,M; \mathbb{Z}) \ar[u]^i \ar@{-->}[lu]^{\psi} \ar[r] & \ker \big( H_1(M; \mathbb{Z}) \to H_1(W; \mathbb{Z}) \big). \ar[u]
}\]
Since $(W;M,M')$ is an $H_1$-cobordism, the group $\ker (H_1(M; \mathbb{Z}) \to
H_1(W; \mathbb{Z}))$ vanishes. Consequently, given $x \in H_2(W,M; \mathbb{Z})$, the composition
$\partial(i(x))$ is zero and so, by exactness, there exists $[y] \in
\frac{H_2(W; \mathbb{Z})}{\operatorname{im} k}$ such that $j([y])=i(x)$. We therefore define
$\psi(x):=[y]$. As $j$ is injective, $\psi$ is well-defined.
By construction $j \circ \psi=i$.
Next, we show that $\psi$ is an isomorphism. Injectivity is
immediate from the diagram above and the fact that $i$ is injective.
As $\ker (H_1(M; \mathbb{Z}) \to H_1(W; \mathbb{Z})) = 0$, we obtain the following commutative diagram
\[\begin{tikzcd}
H_2(W; \mathbb{Z})/\operatorname{im} k \ar[r] & H_2(W, \partial W; \mathbb{Z})\\
H_2(W; \mathbb{Z}) \ar[u, twoheadrightarrow] \ar[r, twoheadrightarrow]& H_2(W,M; \mathbb{Z}) \ar[u] \ar[ul, "\psi"]
\end{tikzcd},\]
which shows the surjectivity of $\psi$.
\end{proof}
Given an $H_1$-cobordism $(W;M,M')$ with a map $H_1(W;\mathbb{Z}) \to \mathbb{Z}^\mu$, we shall often consider homology and cohomology with twisted coefficients in either $R=\mathbb{Q}(\mathbb{Z}^\mu)$ or $R=\mathbb{C}^\omega$ (for $\omega\in \mathbb{T}_!^\mu$). In both cases, we denote the underlying fields~$\mathbb{Q}(\mathbb{Z}^\mu)$ or
$\mathbb{C}$ by $\mathbb{F}$, so that the twisted (co-)homology groups are vector spaces over $\mathbb{F}$.
As in Section~\ref{sub:Twisted}, for a pair $(X,Y)$ we denote by
$\beta_i(X,Y)$ the rank of $H_i(X,Y;\mathbb{Z})$ and by $\beta_i^\omega(X,Y)$ the dimension of $H_i(X,Y;\mathbb{C}^\omega)$.
We conclude this subsection with a consequence of Lemma~\ref{lem:Cone}.
\begin{lemma}\label{lem:ChainHomotopy}
Let $(W;M,M')$ be an $H_1$-cobordism
equipped with a homomorphism $H_1(W;\mathbb{Z}) \to \mathbb{Z}^\mu$.
Then both $H_i(W,M;\mathbb{Q}(\mathbb{Z}^\mu))$ and $H_i(W,M;\mathbb{C}^\omega)$ vanish for $i \neq 2$
and for all $\omega \in \mathbb{T}_!^\mu$. In particular, $\beta_2^\omega(W,M)$ equals $\beta_2(W,M)$.
\end{lemma}
\begin{proof}
Let $R = \mathbb{Q}(\mathbb{Z}^\mu)$ or $\mathbb{C}^\omega$.
Since $W$ is an $H_1$-cobordism, Lemma~\ref{lem:H1bordism} ensures that~$H_i(W,M;\mathbb{Z})=0$ for $i \neq 2$. Lemma~\ref{lem:Cone} implies that $H_i(W,M;R)=0$ for $i=0,1$ and Lemma~\ref{lem:DualityUCSS} guarantees that for $i=3,4$, we have
\[ H_i(W, M; R) \cong H^{4-i} (W, M'; R) \cong \operatorname{Hom}_\mathbb{F}( H_{4-i}(W, M'; R), \mathbb{F} )^{\operatorname{tr}}
= 0.\]
The last claim now follows since the Euler characteristic of $(W,M)$ may be computed indifferently using $\mathbb{Z}$-coefficients or $R$-coefficients.
\end{proof}
\subsection{\texorpdfstring{$0.5$--solvable}{0.5-solvable} cobordisms}
\label{sub:05Solvable}
We review here the notion of $0.5$-solvable cobordism as defined in~\cite{Cha}. For simplicity, we avoid discussing $n$-solvability and $n.5$-solvability, referring to~\cite{Cha} for a more general treatment.
\medbreak
In the following paragraphs, given an $H_1$-cobordism $(W;M,M')$, we use $H$ as a shorthand for $H_1(W;\mathbb{Z})$ and use $\lambda_1$ to denote the $\mathbb{Z}[H]$-valued intersection form on $H_2(W;\mathbb{Z}[H])$.
Recall the following definition from~\cite[Definition 2.8]{Cha},
which extends the definition of solvability from Cochran-Orr-Teichner's work~\cite{CochranOrrTeichner} to a relative notion.
\begin{definition}\label{def:nSolvable}
An $H_1$-cobordism~$(W;M,M',\varphi)$ is a \emph{$0.5$-solvable cobordism} if there exists a submodule~$\mathcal{L} = \langle l_1, \ldots, l_r \rangle \subset H_2(W;\mathbb{Z}[H])$ together with homology classes $d_1,\ldots, d_r \in H_2(W;\mathbb{Z})$ that satisfy the following properties:
\begin{enumerate}
\item the intersection form~$\lambda_1$ vanishes on $\mathcal{L}$;
\item the image of $\mathcal{L}$ under the composition
$H_2(W;\mathbb{Z}[H]) \to H_2(W;\mathbb{Z}) \to H_2(W,M;\mathbb{Z})$ has rank~$r \geq \frac{1}{2} \rk H_2(W, M; \mathbb{Z})$;
\item the images~$l_i' \in H_2(W; \mathbb{Z})$ of the elements~$l_i$ fulfill the relation~$\lambda_1(l_i',d_j)=\delta_{ij}$ for each~$1 \leq i,j \leq r$;
\end{enumerate}
We refer to $\mathcal{L}$ as a \emph{$1$--lagrangian}, and to $\mathcal{D}:=\langle d_1,\dots, d_r\rangle$ as its \emph{$0$--dual}.
\end{definition}
\begin{remark}
Suppose that $(W; M, M')$ is a $0.5$--solvable cobordism $(W; M, M')$ with $1$--langrangian~$\mathcal{L}= \langle l_1, \ldots, l_r \rangle$. Then the images of the $l_i$'s in $H_2(W,M;\mathbb{Z})$ span a free submodule of rank~$r$, since they are dual to the~$d_i$'s.
\end{remark}
For further reference, we make note of the following result, whose proof is outlined in~\cite[Proof of Theorem $3.2$]{Cha}.
\begin{proposition}\label{prop:ordinarysignature}
The signature~$\operatorname{sign} W$ of a $0.5$--solvable cobordism~$(W; M, M')$ vanishes.
\end{proposition}
\begin{proof}
Let $\mathcal{L}_{M,\mathbb{Z}}$ be the image of the $1$--lagrangian~$\mathcal{L}$ under~$H_2(W; \mathbb{Z}[H]) \to H_2(W, M; \mathbb{Z})$. Let $\varphi \colon H_2(W,M; \mathbb{Z}) \to H_2(W;\mathbb{Z}) / \operatorname{im} H_2(\partial W;\mathbb{Z})$ be the isomorphism of Lemma~\ref{lem:H1bordism}.
The subspace $\varphi(\mathcal{L}_{M,\mathbb{Z}})$
is Lagrangian for the non-singular intersection pairing $\lambda_\mathbb{Q}$ of $W$.
Consequently, the signature of $W$ vanishes.
\end{proof}
The next definition is an adaptation to the colored framework of the definition given by Cha~\cite{Cha}. Recall that the boundary~$L \times S^1 = \partial X_L$ of a link exterior $X_L$
inherits a product structure by longitudes and meridians, which is well-defined up to isotopy.
A bijection~$\sigma$ of the link components of two links~$L, L'$
induces an orientation-preserving
diffeomorphism~$\varphi_\sigma \colon L \times S^1 \to L' \times S^1$
preserving the product structures, which is unique up to isotopy.
\begin{definition} \label{def:nSolvLink}
Two colored links~$L, L'$ are \emph{$0.5$-solvable cobordant}
if there exists a bijection~$\sigma$ between the components of $L$ and of $L'$
which preserves the colors and
there is a $0.5$-solvable cobordism $(W; X_L, X_{L'}, \varphi_\sigma)$.
\end{definition}
\begin{example} Suppose $L$ and $L'$ are concordant, and let $W$ be a concordance exterior. Then $(W; X_L, X_{L'})$ is a homology cobordism, which is a $0.5$--solvable cobordism since $H_2(W,X_{L};\mathbb{Z})=0$.
\end{example}
Recall from Section~\ref{sub:Setup} that the exterior~$X_{L}$ of a $\mu$-colored link~$L$ is equipped
with a homomorphism~$\beta_{L} \colon H_1(X_{L}; \mathbb{Z}) \to \mathbb{Z}^\mu$. A $0.5$-solvable cobordism between two colored links $L$ and $L'$ fits into the commutative diagram
\begin{equation} \label{eq:CompLabel}
\begin{tikzcd}
H_1(X_{L}; \mathbb{Z}) \ar[r,"i"] \ar[rd, swap, "\beta_{L}"] \ar[rr, bend left, "j_\sigma"]& H_1(W;\mathbb{Z}) & \ar[l,swap,"i'"] \ar[ld, "\beta_{L'}"] H_1(X_{L'}; \mathbb{Z}) \\
& \mathbb{Z}^\mu &
\end{tikzcd}
\end{equation}
where $j_\sigma$ is the isomorphism that sends the meridian of a component $K$ of $L$ to the meridian of the corresponding component $\sigma(K)$ of $L'$.
We recall that the linking number between two disjoint sublinks is defined as the sum over the linking numbers of all their respective components (see Section \ref{sub:SurfacesD4}).
\begin{lemma}\label{rem:SameLN}
Let $L$ and $L'$ be two $H_1$--cobordant oriented links. If~$(W; X_L, X_{L'},\varphi_\sigma)$ is a cobordism between them, then
\[ \lk(J, K) = \lk(\sigma(J), \sigma(K)) \]
for each pair of components $J,K$ of $L$.
In particular, if $L$ and $L'$ are concordant as $\mu$-colored links, then~$\lk(L_i, L_j) = \lk(L_i', L_j')$ for each pair of colors $i,j$.
\end{lemma}
\begin{proof}
The abelian group $H_1(X_{L}; \mathbb{Z})$ is freely generated by the meridians of $L$, so that every element $x\in H_1(X_{L};\mathbb{Z})$ has a well defined coordinate $x_K$ corresponding to the meridian of $K$. By definition, the linking number $\lk(J, K)$ is the coordinate~$b_K$ of the longitude $b$ of $J$.
Let $b'\in H_1(X_{L'};\mathbb{Z})$ be the longitude of $\sigma(J)$.
Since the longitudes are glued together,
we have $i(b)=i'(b') \in H_1(W; \mathbb{Z})$ in Diagram~\eqref{eq:CompLabel}, and hence $j_\sigma(b)=b'$ by commutativity of the diagram.
As the map $j_\sigma$ sends meridians to meridians, it preserves the coordinates, and hence $b'_{\sigma(K)}=b_K$. The proof of the first statement is concluded by observing that $b'_{\sigma(K)}$ is by definition the linking number between $\sigma(J)$ and $\sigma(K)$. The equality concerning $\mu$-colored links follows immediately from the fact that the cobordism preserves the colors.
\end{proof}
Given a $H_1$-cobordism $(W;M,M')$ with a map $H_1(W;\mathbb{Z})\to \mathbb{Z}^\mu$, the homomorphism $\mathbb{Z}[H_1(W;\mathbb{Z})] \to \mathbb{C}$
and the canonical map~$\mathbb{Z}[H_1(W;\mathbb{Z})] \to \mathbb{Q}(\mathbb{Z}^\mu)$ induce homomorphisms $i_R \colon H_2(W;\mathbb{Z}[H_1(W;\mathbb{Z})]) \to H_2(W;R)$ and $i_{M,R} \colon H_2(W;\mathbb{Z}[H_1(W;\mathbb{Z})]) \to H_2(W,M;R)$,
where $R$ stands either for~$\mathbb{C}^\omega$ or for $\mathbb{Q}(\mathbb{Z}^\mu)$. Also, we write $\lambda_R$ for the $\mathbb{F}$-valued intersection form on~$H_2(W;R)$.
The invariance of the signature and nullity will hinge on the following two results whose proof we delay until Section~\ref{sub:Technical}.
\begin{proposition}\label{prop:ComplexLagrangian}
Let $R$ be either $\mathbb{Q}(\mathbb{Z}^{\mu})$ or $\mathbb{C}^\omega$, with $\omega \in \mathbb{T}^\mu_!$. Let $(W; M, M')$ be an $0.5$--solvable cobordism over~$\mathbb{Z}^\mu$ with $1$--lagrangian~$\mathcal{L} = \langle l_1, \ldots, l_r \rangle$.
Then both subspaces
\begin{align*}
\mathcal{L}_{R} &= \langle i_{R} (l_1), \ldots, i_{R} (l_r)\rangle \subset H_2(W; R)\\
\mathcal{L}_{M, R} &= \langle i_{M,R} (l_1), \ldots, i_{M,R} (l_r)\rangle \subset H_2(W, M; R)
\end{align*}
have dimension $r$.
Furthermore, they satisfy the following two properties:
\begin{enumerate}
\item the intersection form~$\lambda_{R}$ vanishes on $\mathcal{L}_{R}$.
\item $\dim \mathcal{L}_{R} = r \geq \frac{1}{2}\dim_\mathbb{Q} H_2(W,M;\mathbb{Q})$.
\end{enumerate}
\end{proposition}
\begin{proof} See Proposition~\ref{prop:free}. \end{proof}
When $R=\mathbb{C}^\omega$, we shall often drop the $\omega$ from the notation of the Lagrangian and simply write $\mathcal{L}_\mathbb{C}$. The next proposition provides a lower bound on the dimension of~$\mathcal{L}_\mathbb{C}$.
\begin{proposition}\label{prop:LagrangianInequality}
Let $L, L'$ be two $\mu$-colored links that are $0.5$--solvable cobordant via~$(W;X_L, X_{L'})$ with $1$--lagrangian~$\mathcal{L}$. Then
\[ \frac{1}{2} \dim_{\mathbb{C}} \left( \frac{H_2(W;\mathbb{C}^\omega)}{\operatorname{im}(H_2(\partial W;\mathbb{C}^\omega) \to H_2(W;\mathbb{C}^\omega))}\right) \leq \dim_{\mathbb{C}}(\mathcal{L}_\mathbb{C}).\]
\end{proposition}
\begin{proof} See Proposition~\ref{prop:DimensionCount}.\end{proof}
Using the two propositions above, we can now prove the invariance of the nullity and signature under 0.5-solvable cobordism.
\subsection{Nullities and \texorpdfstring{$0.5$--solvability}{0.5-solvability}}
\label{sub:NullityProof}
The next result states the invariance of the multivariable nullity and Alexander nullity under $0.5$-solvable cobordisms.
\begin{proposition}
\label{prop:Invariance}
Let $R$ be either $\mathbb{Q}(\mathbb{Z}^{\mu})$ or $\mathbb{C}^\omega$, with $\omega \in \mathbb{T}_!^\mu$. If $(W;M,M')$ is a $0.5$--solvable cobordism, then the $\mathbb{F}$-vector spaces~$H_1(M;R)$ and $H_1(M'; R)$ have the same dimension. In particular, if $L$ and $L'$ are $0.5$-solvable cobordant links, then $\beta(L)=\beta(L')$ and $\eta_L(\omega)=\eta_L(\omega)$ for all $\omega \in\mathbb{T}^\mu_!$.
\end{proposition}
\begin{proof}
Consider the exact sequence of the pair~$(W, M)$ in which $R$ coefficients are understood:
\[ 0 \to \operatorname{im} \big( H_2(W) \stackrel{i_M}{\to} H_2(W, M) \big) \to H_2(W,M) \xrightarrow{\partial} H_1(M) \to H_1(W) \to 0. \]
We use $\beta_i^R$ to denote the Betti numbers with $R$-coefficients. Since the Euler characteristic of the sequence is zero and since duality implies that $\beta_2^R(W,M)=\beta_2^R(W,M')$, the proposition boils down to showing that $\operatorname{im}(i_M)$ and $\operatorname{im}(i_{M'})$ have the same dimension. This is proved in Lemma~\ref{lem:RankImages} below.
\end{proof}
We are indebted to Christopher Davis for suggesting that we prove the following key lemma.
\begin{lemma}\label{lem:RankImages}
Let~$R$ be either $\mathbb{Q}(\mathbb{Z}^\mu)$ or $\mathbb{C}^\omega$, with $\omega \in \mathbb{T}_!^\mu$. The images of the two maps
\begin{align*}
i_M \colon H_2(W; R) \to H_2(W, M; R)\\
i_{M'}\colon H_2(W; R) \to H_2(W, M'; R).
\end{align*}
have the same dimension over $\mathbb{F}$.
\end{lemma}
\begin{proof}
Consider the three following intersection pairings:
\begin{align*}
\lambda_{W} \colon H_2(W; R) \times H_2(W; R) &\to \mathbb{F},\\
\lambda_{W,\partial W} \colon H_2(W, \partial W; R) \times H_2(W; R) &\to \mathbb{F},\\
\lambda_{W, M} \colon H_2(W, M; R) \times H_2(W, M'; R) &\to \mathbb{F}.
\end{align*}
These pairings are related as follows. First, observe that the map $i_{\partial W} \colon H_2(W;R) \to H_2(W,\partial W;R)$ induced by the inclusion factors as $i_{M,\partial W} \circ i_M$, where the map $i_{M,\partial W} \colon H_2(W, M; R) \to H_2(W, \partial W; R)$ is also induced by the inclusion. We introduce the same notation for $M'$, resulting in a map $i_{M',\partial W}$. Consider the following diagram:
\[ \xymatrix{
H_2(W; R) \ar[r]^{i_{\partial W}} \ar[rd]^{i_M}& H_2(W,\partial W;R) \ar[r]^{\text{PD}} & H^2(W; R) \ar[r]^-{\operatorname{ev}} & \operatorname{Hom}(H_2(W; R), \mathbb{F})^t \\
& H_2(W,M; R) \ar[r]^{\text{PD}} \ar[u]_{i_{M,\partial W}} & H^2(W,M'; R) \ar[r]^-{\operatorname{ev}} \ar[u]^{i_{M'}^*} & \operatorname{Hom}(H_2(W,M';R),\mathbb{F})^t \ar[u]^{i_{M'}^*}. \\
} \]
The left triangle and right square are clearly commutative, while the middle square commutes thanks to~\cite[Section 6.9 Exercise 3]{Bredon}. It now follows that for $x,z$ in~$H_2(W;R)$ and $y$ in $H_2(W,M;R)$, we obtain
\begin{align}
\label{eq:Useful}
& \lambda_{W,\partial W}(i_{\partial W}(x),z)=\lambda_W(x,z)=\lambda_{W,M}(i_M(x),i_{M'}(z)) \\
& \lambda_{W,\partial W}(i_{M,\partial W}(y),z)=\lambda_{W,M}(y,i_{M'}(z)). \nonumber
\end{align}
We introduce one last piece of notation.
By Proposition~\ref{prop:ComplexLagrangian}, the subspaces~$\mathcal{L}_R$, $\mathcal{L}_M = i_M \big( \mathcal{L}_R \big)$, and $\mathcal{L}_{M'}= i_{M'} \big( \mathcal{L}_R \big)$ all have dimension~$r$.
Now we construct a subspace
\[ \mathcal{D} = \langle d_1, \ldots, d_r \rangle \subset H_2(W, M; R) \]
by constructing the elements~$d_i$.
Pick a basis~$\mathcal{L}_R = \langle l_1, \ldots, l_r \rangle$.
Since the dimension of $\mathcal{L}_{M'}= i_{M'} \big( \mathcal{L}_R \big)$ is $r$, the elements~$i_{M'}(l_j)$ form a basis of $\mathcal{L}_{M'}$. Therefore, the assignment~$i_{M'}(l_j) \mapsto \delta_{ij}$ defines a map~$\delta_i \colon \mathcal{L}_{M'} \to \mathbb{F}$. Since~$\mathbb{F}$ is a field, $\mathcal{L}_{M'} \subset H_2(W, M'; R)$ is a direct summand and consequently $\delta_i$ extends to an element~$\delta_i \in \Hom_\mathbb{F}(H_2(W, M'; R), \mathbb{F})^t$. The element~$d_i \in H_2(W, M; R)$ corresponds to $\delta_i$ under the isomorphism $H_2(W, M;R) \xrightarrow{\cong} \Hom_\mathbb{F}(H_2(W, M';R), \mathbb{F})^t$ given by the adjoint of $\lambda_{W,M}$. This is an isomorphism, since the pairing~$\lambda_{W,M}$ is non-singular.
Consequently, the space $\mathcal{D}$ is freely generated by elements $d_1,\ldots, d_r$ that satisfy
\begin{equation}
\label{eq:Duals}
\lambda_{W, M}\big(d_i, i_{M'} (l_j) \big) = \delta_{ij}.
\end{equation}
Completely analogously, we can define a subspace~$\mathcal{D}'$ of $H_2(W,M';R)$ with a basis given by $d_1',\ldots, d_r'$. Summarizing, we now have subspaces $\mathcal{L}_M$ and $\mathcal{D}$ of $H_2(W,M;R)$ and subspaces $\mathcal{L}_{M'}$ and $\mathcal{D}'$ of $H_2(W,M';R)$.
\begin{claim}
The subspaces $\mathcal{L}_M$ and $\mathcal{D}$ intersect trivially.
\end{claim}
To prove this, start with $a \in \mathcal{L}_M \cap \mathcal{D}$ and an arbitrary $l'$ in $\mathcal{L}_{M'}$. There is an $l$ in $\mathcal{L}_R$ such that $i_{M'}(l)=l'$. Similarly, since $a$ lies in $\mathcal{L}_M$, there is a $b$ in $\mathcal{L}_R$ such that $i_M(b)=a$. Using~(\ref{eq:Useful}), we now have
\begin{equation}
\label{eq:UsefulTwo}
\lambda_{W,M}(a,l') = \lambda_{W,M}(i_M(b),i_{M'}(l))=
\lambda_W(b,l) = 0,
\end{equation}
where the last equality is due to the fact that $\mathcal{L}_R \subset \mathcal{L}_R^\perp$.
Since $a$ also lies in $\mathcal{D}$, we can write $a=\sum_i c_i d_i$. Combine Equation~(\ref{eq:UsefulTwo}) with the property of the $d_i$'s in Equation~(\ref{eq:Duals}) to deduce that~$0=\lambda_{W,M}\big(a,i_{M'}(l_j)\big)=c_j$ for each $j$. This implies that $a=0$, concluding the proof of the claim.
Using the claim it now makes sense to consider the direct sum $\mathcal{L}_M \oplus \mathcal{D} \subset H_2(W, M; R)$. Since $\mathcal{L}_M$ and $\mathcal{D}$ both have dimension at least $r$, we conclude that the dimension of $\mathcal{L}_M \oplus \mathcal{D}$ must at least be $2r$. Using Lemma~\ref{lem:ChainHomotopy}, we see that the dimension of $H_2(W, M; R)$ is equal to~$\rk H_2(W, M; \mathbb{Z}) \leq 2r$. Combining these observations and repeating them for~$\mathcal{D}'$, we deduce that~
\begin{align}
\label{eq:UsefulThree}
\mathcal{L}_M \oplus \mathcal{D} &= H_2(W, M; R), \\
\mathcal{L}_{M'} \oplus \mathcal{D}' &= H_2(W, M'; R), \nonumber\\
\label{eq:RankLM}
\dim \mathcal{L}_M &= r \text{ and } \dim \mathcal{L}_{M'}=r.
\end{align}
Recall that $i_M$ and $i_{M'}$ denote respectively the maps from $H_2(W; R)$ to $H_2(W, M; R)$ and $H_2(W, M'; R)$. Since, by definition, the subspaces~$\mathcal{L}_M$ and $\mathcal{L}_{M'}$ are images of~$\mathcal{L}_R$ under $i_M$ and $i_{M'}$, we deduce that they are subspaces of $\operatorname{im}(i_M)$ and $\operatorname{im}(i_{M'})$.
By~(\ref{eq:UsefulThree}), $\operatorname{im}(i_M)= \operatorname{im}(i_M) \cap (\mathcal{L}_M\oplus \mathcal{D})$, and the same for $M'$. Since we just argued that $\mathcal{L}_M \subset \operatorname{im}(i_M)$, and $\mathcal{L}_{M'} \subset \operatorname{im}(i_{M'})$, it follows that
\begin{align*}
&\dim \operatorname{im}(i_M) = \dim \mathcal{L}_M + \dim \big( \mathcal{D} \cap \operatorname{im}(i_{M}) \big) \\
&\dim \operatorname{im}(i_{M'}) = \dim \mathcal{L}_{M'} + \dim \big( \mathcal{D} \cap \operatorname{im}(i_{M'}) \big).
\end{align*}
Since we wish to show that $\dim \operatorname{im}(i_M)=\dim \operatorname{im}(i_{M'})$ and since $\mathcal{L}_M$ and $\mathcal{L}_{M'}$ have dimension~$r$ by Equation~(\ref{eq:RankLM}), it only remains to prove the following claim:
\begin{claim}
$\dim \big( \mathcal{D} \cap \operatorname{im}(i_M) \big) = \dim \big( \mathcal{D}' \cap \operatorname{im}(i_{M'}) \big)$.
\end{claim}
Since $\mathcal{D}$ and $\mathcal{D}'$ are freely generated by the $d_i$ and $d_i'$, there is an isomorphism $\psi \colon \mathcal{D} \to \mathcal{D}'$ obtained by mapping the $d_i$ to the $d_i'$. The claim will follow if we show that $\psi$ restricts to an isomorphism from $\mathcal{D} \cap \operatorname{im}(i_M)$ to $ {\mathcal{D}'} \cap \operatorname{im}(i_{M'})$.
First, we check that the map~$\Psi$ restricts to a map~$\Psi|_{\mathcal{D} \cap \operatorname{im}(i_M)} \colon \mathcal{D} \cap \operatorname{im}(i_M) \to \mathcal{D'} \cap \operatorname{im}(i_{M'})$. So assume that $x=\sum_i a_i d_i$ lies in $\operatorname{im}(i_M) \cap \mathcal{D}$. By definition $\psi(x)$ is equal to $x':=\sum_i a_i d_i'$, which clearly lies in $\mathcal{D}'$. Consequently we have to show that $x'$ lies in $\operatorname{im}(i_{M'})$.
Since $x$ lies in $\operatorname{im}(i_{M})$, there is a $w$ in $H_2(W; R)$ such that $i_M(w)=x$. Now consider the element $v = x' - i_{M'}(w)$ of $H_2(W, M'; R)$: to show that $x'$ lies in $\operatorname{im}(i_{M'})$, it is enough to show that $v$ lies in $\operatorname{im}(i_{M'})$. Consequently, we consider the submodule
\[ \mathcal{L}^\perp_{M'} = \{ v \in H_2(W, M'; R) \colon \lambda_{W, M'}(v, i_M(l)) = 0 \text{ for all } l\in \mathcal{L}_R\} \]
and start by verifying that $v \in \mathcal{L}^\perp_{M'}$. Recall that the~$l_j$'s form a basis of $\mathcal{L}_R$ and so it is enough to show that $\lambda_{W,M'}(v,i_{M}(l_j))$ vanishes for each~$j$. This follows successively by using the definition of $v$, the definition of the $d_i$'s in~\eqref{eq:Duals}, and the property in~(\ref{eq:Useful}):
\begin{align*}
\lambda_{W, M'}\big(v, i_M (l_j)\big)
&=\lambda_{W, M'}\big(x', i_M (l_j) \big) - \lambda_{W, M'} \big(i_{M'} (w), i_M (l_j) \big)\\
&=a_j - \lambda_W \big(w, l_j)\\
&=a_j - \lambda_{W, M} \big( x, i_{M'} (l_j) \big)\\
&= a_j - a_j =0.
\end{align*}
Note that $\mathcal{L}^\perp_{M'} \subset \mathcal{L}_{M'}$, since~$H_2(W, M'; R) = \mathcal{L}_{M'} \oplus \mathcal{D}'$ by the first claim above. Consequently, the vector~$v$ belongs to $\mathcal{L}_{M'}$ and thus to $\operatorname{im}(i_{M'})$. Since $v$ was defined as $x'-i_{M'}(w)$, we deduce that $x'$ must also lie in $\operatorname{im}(i_{M'})$, as desired. We showed that~$\Psi$ restricts to a map~$\Psi|_{\mathcal{D} \cap \operatorname{im}(i_M)} \colon \mathcal{D} \cap \operatorname{im}(i_M) \to \mathcal{D'} \cap \operatorname{im}(i_{M'})$.
Now, by interchanging the roles of~$\mathcal D$ and $\mathcal D'$ in the argument above, we learn that the inverse~$\Psi^{-1}$ restricts to a map~$\Psi^{-1}|_{\mathcal{D'} \cap \operatorname{im}(i_{M'})} \colon \mathcal{D'} \cap \operatorname{im}(i_{M'}) \to \mathcal{D} \cap \operatorname{im}(i_{M})$. This restriction is the inverse of~$\Psi|_{\mathcal{D} \cap \operatorname{im}(i_M)}$ and thus the latter is an isomorphism. This concludes the proof of the last claim and thus of the proposition.
\end{proof}
\subsection{Signatures and \texorpdfstring{$0.5$--solvability}{0.5-solvability}}\label{sub:SignatureProof}
We prove that $0.5$-solvable cobordant links have the same multivariable signatures, concluding the proof of Theorem~\ref{thm:SolvableNullitySignature} from the introduction.
\begin{theorem}
If two $\mu$-colored links $L$ and $L'$ are $0.5$-solvable cobordant, then, for all $\omega \in \mathbb{T}^\mu_!$, we have
\[ \sigma_{L}(\omega)=\sigma_{L'}(\omega).\]
\end{theorem}
\begin{proof}
Let $F,F' \subset D^4$ be colored bounding surfaces for $L$ and $L'$ respectively,
with the additional requirement that they have only a single component
per color.
We denote by $W_{F}$ and $W_{F'}$ their respective exteriors and by $X,X'$ the link exteriors.
Setting as usual $M_{F} :=\partial\overline{\nu F}$, we see that the boundary~$\partial W_{F}$
decomposes into $X \cup_{L \times S^1} M_{F}$. An analogous decomposition holds for $\partial W_{F'}$. Let $W$ be a 0.5-solvable cobordism, with $\partial W= -X\cup_\varphi X'$, where $\varphi$ identifies $L\times S^1$ with $L'\times S^1$.
We consider the $4$-manifold
\[ V:=W_{F} \cup_{X} W \cup_{X'} (-W_{F'}),\]
which has boundary $M_{F} \cup_\Sigma (-M_{F'})$, where $\Sigma$ is a disjoint union of tori.
\begin{figure}[ht]
\includegraphics[width=7cm]{figure2.pdf}
\caption{The manifold $V$ as a union of $W_{F}$, $W$ and $-W_{F'}$.}
\label{fig:Sign0.5}
\end{figure}
By diagram~\eqref{eq:CompLabel}, the coefficient systems on the link exteriors
$X$ and $X'$ extend over $W$ and thus over $V$. We shall now compute
$\operatorname{dsign}_\omega(V) = \operatorname{sign}_\omega V - \operatorname{sign} V$ in two different ways.
\begin{claim}
$\operatorname{dsign}_\omega(V)=\operatorname{dsign}_\omega (W_{F}) - \operatorname{dsign}_\omega (W_{F'}) + \operatorname{dsign}_\omega (W).$
\end{claim}
The claim is proved by a double application of Novikov-Wall additivity, each time both for the twisted and untwisted signature: first we prove additivity for the gluing along $X$ of the two manifolds $W_F$ and $W\cup_{X'} (-W_{F'})$, and then for the gluing along $X'$ of $W$ with $-W_{F'}$. In both cases the boundary of the gluing region is $\Sigma=L\times S^1$, which is identified with $L'\times S^1$ through $\varphi$. As $H_1(\Sigma;\mathbb{C}^\omega)=0$, the hypotheses of Proposition \ref{prop:TwistedWall} are satisfied in the two cases, and twisted additivity holds. Let
\[\begin{split}
V_X&= \ker (H_1(\Sigma;\mathbb{R}) \to H_1(X;\mathbb{R})), \quad V_{X'}= \ker (H_1(\Sigma;\mathbb{R}) \to H_1(X';\mathbb{R})),\\
V_{M_{F}}&=\ker (H_1(\Sigma;\mathbb{R}) \to H_1(M_{F};\mathbb{R})), \quad
V_{M_{F'}}=\ker (H_1(\Sigma;\mathbb{R}) \to H_1(M_{F'};\mathbb{R})).
\end{split}
\]
In the gluing along $X$, the three spaces to be considered for checking the hypotheses of Theorem \ref{thm:Wall} are $V_{M_{F}}$, $V_{M_{F'}}$, and $V_X$ in the same order as in the statement.
In the second gluing, it is $V_X$, $V_{M_{F'}}$, and $V_{X'}$. We show now that $V_{M_{F}}=V_{M_{F'}}$ and $V_X=V_{X'}$, so that the hypotheses are satisfied in both cases and additivity for the untwisted signature also holds. The space $V_{M_F}$ is described by Lemma~\ref{lem:SameKernel}. The space $V_{M_{F'}}$ is also described by Lemma~\ref{lem:SameKernel} as a subspace of $H_1(L'\times S^1;\mathbb{R})$. By Lemma~\ref{rem:SameLN}, the two links have the same pairwise linking numbers. Since we assumed that $F$ and $F'$ have exactly one component for each color, the two vector spaces are seen to coincide under the identification between $L\times S^1 $ and $L'\times S^1$. The spaces $V_X$ and $V_{X'}$ also only depend on the linking numbers, and once again they coincide thanks to Lemma~\ref{rem:SameLN}. Hence, Novikov-Wall additivity holds both for the twisted and untwisted signature, and the claim is verified.
Thanks to Proposition~\ref{prop:UntwistedSign}, we have $\operatorname{dsign}_\omega (W_{F})=\sigma_{L}(\omega)$ and $\operatorname{dsign}_\omega (W_{F'})=\sigma_{L'}(\omega)$. The claim gets hence rewritten as
\[\operatorname{dsign}_\omega(V)=\sigma_{L}(\omega) - \sigma_{L'}(\omega) + \operatorname{dsign}_\omega (W).\]
We will now show that both signature defects $\operatorname{dsign}_\omega(W)$ and $\operatorname{dsign}_\omega(V)$ are actually~$0$, from which the conclusion follows.
By Proposition~\ref{prop:ordinarysignature}, the ordinary signature of $W$
vanishes. Invoking Proposition~\ref{prop:LagrangianInequality}, there exists a Lagrangian $\mathcal{L}_\mathbb{C} \subset H_2(W;\mathbb{C}^\omega)$
for the nonsingular intersection form on
$H_2(W;\mathbb{C}^\omega)/\operatorname{im} (H_2(\partial W;\mathbb{C}^\omega) \to H_2(W;\mathbb{C}^\omega) )$
and thus the twisted signature of $W$ must also vanish, so that $\operatorname{dsign}_\omega (W)=0$.
To conclude the proof, it only remains to show that $\operatorname{dsign}_\omega(V)=0$.
Recall that~$\partial V=M_{F} \cup_{\Sigma} (-M_{F'})$, where $\Sigma$ is a disjoint union of tori. We have seen in Example 4.12 that $M_{F}$ can be described as a plumbing of the components of $F$ along its intersection graph. In particular, the total weight between two vertices is given by
\[p(F_i,F_j)=F_i\cdot F_j = \lk(L_i,L_j). \]
Similarly, the manifold $-M_{F'}$ is obtained by plumbing the surfaces $-F_1',\dotsc, -F_\mu'$, along the negative of the intersection graph of $F'$ (i.e.\ with its labels reversed), so that
\[p(-F_i',-F_j')=-F_i'\cdot F_j' = -\lk(L_i',L_j'). \]
The cobordism $W$ gives a bijection between the components of $L$ and those of $L'$, that induces homeomorphisms along which we can glue the components of $F$ and $F'$ in order to get closed oriented surfaces $G_i=F_i\cup_\partial -F'_i$ ($i=1,\dotsc, \mu$). Then $\partial V$ can be described as a plumbed $3$-manifold, whose plumbing graph has the surfaces $G_i$'s as vertices, and edges $E(G_i,G_j)=E(F_i,F_j)\sqcup E(-F_i',-F_j')$. In particular, for each pair of vertices, we have
\[p(G_i, G_j)=p(F_i,F_j)+p(-F_i',-F_j')= \lk(L_i, L_j) -\lk(L_i',L_j')=0, \]
as the linking numbers of $L$ and $L'$ match up.
This means that the plumbed $3$-manifold $\partial V$ is balanced, and Proposition~\ref{prop:PbFilling} now implies that $\operatorname{dsign}_\omega (V)=0$ as desired.
\end{proof}
\subsection{The proof of Proposition~\ref{prop:ComplexLagrangian} and Proposition~\ref{prop:LagrangianInequality}}
\label{sub:Technical}
At this stage, we have proved Theorem~\ref{thm:SolvableNullitySignature} skipping the proofs of Proposition~\ref{prop:ComplexLagrangian} and Proposition~\ref{prop:LagrangianInequality}. The aim of this last subsection is to prove these technical results, starting with some preliminary lemmas.
\medbreak
We consider the following set-up: let~$(W;M,M')$ be an $H_1$--bordism over~$\mathbb{Z}^\mu$, that is the cobordism is equipped with a commutative diagram
\[ \begin{tikzcd}
H_1(M; \mathbb{Z}) \ar[r, "\sim"] \ar[dr] & H_1(W; \mathbb{Z}) \ar[d]&\ar{l}[swap]{\sim} \ar[dl] H_1(M'; \mathbb{Z})\\
&\mathbb{Z}^\mu&
\end{tikzcd}.
\]
We abbreviate~$H_1(W; \mathbb{Z})$ by~$H$.
The composition~$\alpha \colon \mathbb{Z}[H] \to \mathbb{Z}[\mathbb{Z}^\mu] \to \mathbb{C}^\omega$ and the canonical inclusion $\mathbb{Z}[\mathbb{Z}^\mu] \to \mathbb{Q}(\mathbb{Z}^\mu)$ induce homomorphisms
\[ i_R \colon H_2(W;\mathbb{Z}[H]) \to H_2(W;R) \text{ and } i_{M,R} \colon H_2(W;\mathbb{Z}[H]) \to H_2(W,M;R),\]
where $R$ stands for $\mathbb{C}^\omega$ or $\mathbb{Q}(\mathbb{Z}^\mu)$.
We start with a proposition whose proof is inspired by an argument of Cochran-Orr-Teichner ~\cite[Proposition 4.3]{CochranOrrTeichner}.
\begin{proposition}\label{prop:relIndep}
Let $R$ be either~$\mathbb{Q}(\mathbb{Z}^\mu)$ or~$\mathbb{C}^\omega$, with $\omega \in \mathbb{T}^\mu_!$.
Let $(W; M, M')$ be an $H_1$--cobordism over~$\mathbb{Z}^\mu$.
Let~$\alpha_1, \ldots, \alpha_k \in H_2(W; \mathbb{Z}[H])$ be elements whose projections~$i_{M,\mathbb{Z}}(\alpha_i),\dotsc, i_{M,\mathbb{Z}}(\alpha_k) \in H_2(W, M; \mathbb{Z})$ are linearly independent. Then, the elements~$i_{M,R} (\alpha_1),\dotsc, i_{M,R} (\alpha_k)$ are linearly independent in $H_2(W, M;R)$.
\end{proposition}
\begin{proof}
First, we establish suitable CW-structures on the manifolds~$W$ and $M$.
\begin{claim}
The pair~$(W,M)$ admits a finite CW-structure (up to homotopy), that is there
exists a finite CW-complexes~$W^c$ and a subcomplex $M^c \subset W^c$ with a diagram
\[ \begin{tikzcd}
W^c \ar[r, "\sim"] & W\\
M^c \ar[r, "\sim"] \ar[u,"\subset"] & M \ar[u, "\subset"].
\end{tikzcd}, \]
where the horizontal maps are homotopy equivalences and the diagram commutes up to homotopy.
Furthermore, we can pick~$M^c$ to be a $2$--dimensional complex and $W^c$ to be $3$--dimensional.
\end{claim}
Note that $M$ is a $3$--manifold with nonempty boundary, so it admits a smooth structure and one can find a $2$--dimensional CW-structure~$M^c \xrightarrow{\sim} M$ from a Morse function without critical points of index~$3$; see~\cite[Theorem 8.1 (Index 0)]{Milnor65}.
Since $W$ is a $4$--manifold with boundary, Poincaré duality shows that the homology group~$H_k(W; \mathbb{Z}[\pi_1(W)])$ vanishes for $k \geq 4$ (this involves an explicit computation of~$H_0(W, \partial W; \mathbb{Z}[\pi_1(W)])$).
The $4$--manifold~$W$ admits a finite CW-structure, since it is an absolute neighbourhood retract~\cite[Theorem 3.3]{Han51}; see \cite{Wes77}.
Using a result of Wall~\cite[Corollary 5.1]{Wall66}, these two facts imply that there exists a $3$--dimensional CW-structure~$W^c \xrightarrow{\sim} W$.
Use the inverse of the homotopy equivalence~$W^c \xrightarrow{\sim} W$ to
obtain a map~$M^c \to W^c$ and arrange $M^c$ to be a subcomplex by replacing~$W^c$ with the mapping cylinder of $M^c \to W^c$. Since~$M^c$ was a $2$--complex, $W^c$ is still $3$--dimensional.
Before we proceed with the next claim, note that the commutativity up to homotopy is exactly the ingredient needed to construct a map between the cylinders of the inclusions~$\operatorname{Cyl}(M^c \subset W^c) \xrightarrow{\sim} \operatorname{Cyl}(M \subset W)$. Consequently, the relative homology groups of $(W, M)$, and $(W^c, M^c)$ agree.
\begin{claim}
Without increasing the dimensions of $(W^c, M^c)$, we may assume that
there exists a subcomplex~$X \subset W^c$ disjoint from $M^c$ with
\[ H_2(X; \mathbb{Z}[H]) = \mathbb{Z}[H]\langle \alpha_1, \ldots, \alpha_k \rangle.\]
\end{claim}
First, we realize the homology classes~$\alpha_i$ geometrically:
For each class~$\alpha_i \in H_2(W; \mathbb{Z}[H])$, there exists a closed oriented surface~$\Sigma_i$ together with a map~$f_i \colon \Sigma_i \to \widehat W$ such that $f_i([\Sigma_i]) = \alpha_i$~\cite[Théorème III.3]{Thom54}, where $\widehat W$ is the abelian cover of~$W$ corresponding to the composition $\pi_1(W)\to H_1(W;\mathbb{Z})$.
Use the inverse of the homotopy~$W^c \to W$ to obtain maps~$f_i^c \colon \Sigma_i \to \widehat W^c$.
Consider the space~$\widehat X = H \times \vee_i \Sigma_i$ on which $H$ acts by multiplication on the first factor. Define the $H$--equivariant map
\begin{align*}
f\colon H \times \vee_i \Sigma_i &\to \widehat W^c\\
\Big( h, x\Big) &\mapsto h \cdot f^c_i(x) \text{ for } x \in \Sigma_i.
\end{align*}
We now think of $\widehat{X}$ as a subspace of the mapping cylinder~$\operatorname{Cyl}(f)$.
Note that the quotient~$\operatorname{Cyl}(f)/ H \simeq W^c$, and $X = \widehat X / H$ is a subcomplex, which is also disjoint from~$M^c$. Replace~$W^c$ by $\operatorname{Cyl}(f)/ H$, which is a $3$--dimensional complex, since~the surfaces~$\Sigma_i$ are $2$--dimensional.
The subcomplex~$X$ has homology~$H_2(X; \mathbb{Z}[H]) = \mathbb{Z}[H]\langle \alpha_1, \ldots, \alpha_k \rangle$, which is freely generated by the~$\alpha_i$.
This concludes the proof of the claim.
Having constructed suitable CW-structures on $W$ and $M$, we now proceed with the proof.
Observe that the following quotient map is a chain isomorphism
\begin{equation}
\label{eq:Identify}
C(M^c \sqcup X, M^c; Q) \cong C(X; Q),
\end{equation}
where the coefficient system~$Q$ is either~$\mathbb{Z}$ or $R$. In particular, the assumption on the projections precisely means that the map~$H_2(M^c \sqcup X, M^c;\mathbb{Z}) \to H_2(W^c,M^c;\mathbb{Z})$ is injective. Similarly, our goal is to show that the induced map~$H_2(M^c \sqcup X, M^c;R) \to H_2(W^c, M^c ;R)=H_2(W, M; R)$ is injective. Indeed, this map sends the $\mathbb{F}$--basis~$\{ \alpha_i \}$ of $H_2(M^c \sqcup X, M^c;R) \cong H_2(X;R)$ to the elements~$\{ i_{M,R}(\alpha_i) \}$.
In order to establish injectivity, consider the following exact sequence of the triple~$(M^c, M^c \sqcup X, W^c)$ with $Q = R$ coefficients:
\begin{equation}
\label{eq:LesTriple}
\begin{tikzcd}[column sep=2.8mm]
H_3(W^c, M^c; Q) \ar[r] & H_3(W^c, M^c \sqcup X; Q) \ar[r, "\partial^Q"] & H_2(M^c \sqcup X, M^c; Q) \ar[r, "i_Q"] & H_2(W^c, M^c; Q).
\end{tikzcd}
\end{equation}
Note that Lemma~\ref{lem:ChainHomotopy} shows that the homology group~$H_3(W,M;R) = H_3(W^c,M^c;R)$ vanishes. As we shall see below,
the proposition reduces to the following claim.
\begin{claim}
The homology group $H_3(W^c,M^c \sqcup X;R)$ vanishes.
\end{claim}
Consider the long exact sequence~\eqref{eq:LesTriple} above for $Q = \mathbb{Z}$. Recall that $H_2(M^c \sqcup X, M;\mathbb{Z}) \to H_2(W^c, M^c;\mathbb{Z}) = H_2(W, M;\mathbb{Z})$ is injective by assumption, and~$H_3(W^c, M^c;\mathbb{Z}) = H_3(W, M;\mathbb{Z})$ vanishes since~$W$ is an $H_1$--bordism. This shows that~$H_3(W^c, M^c \sqcup X; \mathbb{Z}) = 0$.
Since the CW-structure of $W$ has no $4$--cells, the (cellular) chain module
$C_4(W^c, M^c \sqcup X; \mathbb{Z}) = 0$.
From these two facts, deduce that the boundary
operator~$\partial_3^\mathbb{Z} \colon C_3(W^c, M^c \sqcup X; \mathbb{Z}) \to C_2(W^c, M^c \sqcup X; \mathbb{Z})$
is injective. Now we relate this observation to the case~$R = Q$:
$\partial_3 \colon C_3(W^c, M^c \sqcup X; \mathbb{Z}[H]) \to C_2(W^c, M^c \sqcup X; \mathbb{Z}[H])$
is a homomorphism between free modules, and since $\partial_3^\mathbb{Z} \colon C_3(W^c, M^c \sqcup X; \mathbb{Z}) \to C_2(W^c, M^c \sqcup X; \mathbb{Z})$ is injective, we deduce that~$\partial_3^{R} \colon C_3(W^c, M^c \sqcup X; R) \to C_2(W^c, M^c \sqcup X; R)$ is injective by Lemma~\ref{lem:DeterminantTrick}. This implies the claim that $H_3(W^c, M^c \sqcup X; R) = 0$.
We now conclude the proof of the proposition. Using the claim and~\eqref{eq:LesTriple}, we deduce that $i_R$ is injective. As we mentioned above, this shows that the $i_{M,R}(\alpha_i)$ are linearly independent and thus the proof is concluded.
\end{proof}
The next proposition was Proposition~\ref{prop:ComplexLagrangian} above.
\begin{proposition}\label{prop:free}
Let $\omega \in \mathbb{T}_!^\mu$ and let $R$ be either $\mathbb{Q}(\mathbb{Z}^\mu)$ or $\mathbb{C}^\omega$.
Let $(W; M, M')$ be an $0.5$--cobordism over~$\mathbb{Z}^\mu$ with $1$--langrangian~$\mathcal{L} = \langle l_1, \ldots, l_r \rangle$. Then the subspaces
\begin{align*}
\mathcal{L}_{R} &= \langle i_{R} (l_1), \ldots, i_{R} (l_r)\rangle \subset H_2(W; R)\\
\mathcal{L}_{M, R} &= \langle i_{M,R} (l_1), \ldots, i_{M,R} (l_r)\rangle \subset H_2(W, M; R)
\end{align*}
have dimension
$\dim \mathcal{L}_{R}=\dim \mathcal{L}_{M,R} = r \geq \frac{1}{2}\dim_\mathbb{Q} H_2(W,M;\mathbb{Q})$. Furthermore,
the intersection form~$\lambda_{R}$ vanishes on $\mathcal{L}_{R}$.
\end{proposition}
\begin{proof}
Denote the $0$--duals of~$W$ by~$\mathcal{D} = \langle d_1, \ldots, d_r\rangle$.
Denote~$H_1(W; \mathbb{Z})$ by~$H$, and consider the map~$i_\mathbb{Z} \colon H_2(W; \mathbb{Z}[H]) \to H_2(W; \mathbb{Z})$, which is induced by the augmentation map~$\mathbb{Z}[H] \to \mathbb{Z}$.
By definition of a $0.5$--cobordism, the images~$i_\mathbb{Z} (l_i) \in H_2(W; \mathbb{Z})$ of the elements~$l_i$ fulfill the relation $\lambda_\mathbb{Z} \big(i_\mathbb{Z} (l_i),d_j)=\delta_{ij}$ for each~$1 \leq i,j \leq r$.
This relation descends to the pairing
\[ \lambda_{M,\mathbb{Z}} \colon H_2(W, M; \mathbb{Z}) \times H_2(W, M'; \mathbb{Z}) \to \mathbb{Z}, \]
that sends $(i_{M,\mathbb{Z}} (l_i), d'_j)$ to $\lambda_{M,\mathbb{Z}}(i_{M,\mathbb{Z}} (l_i), d'_j) =\delta_{ij}$
where $d'_j \in H_2(W, M';\mathbb{Z})$ is the relative class of $d_j$, that is the image of $d_j$ under the map $H_2(W;\mathbb{Z}) \to H_2(W,M';\mathbb{Z})$ induced by the canonical inclusion.
Consequently, the elements~$i_{M,\mathbb{Z}} (l_i)$ are linearly independent.
Now apply Proposition~\ref{prop:relIndep} to the elements~$l_i$ to see that the~$i_{M,R} (l_i)$'s are linearly independent.
Since $H_2(W; R) \to H_2(W, M; R)$ sends $i_{R} (l_i) \mapsto i_{M,R} (l_i)$, the elements~$i_{R} (l_i)$ are linearly independent as well.
\end{proof}
The final step is to prove Proposition~\ref{prop:LagrangianInequality}, that is the inequality
\[ \frac{1}{2} \dim_{\mathbb{C}} \left( \frac{H_2(W;\mathbb{C}^\omega)}{\operatorname{im}(H_2(\partial W;\mathbb{C}^\omega) \to H_2(W;\mathbb{C}^\omega))}\right) \leq \dim_{\mathbb{C}}(\mathcal{L}_\mathbb{C})\]
for cobordisms between link exteriors. We start with two preliminary lemmas involving twisted Betti numbers.
\begin{lemma}\label{lem:Betti}
If two $\mu$-colored links $L$ and $L'$ are
$H_1$-cobordant via $(W; X_L, X_{L'})$,
then for $i = 1,2$ and for all $\omega \in \mathbb{T}^\mu_!$ we have
$$\beta_2^\omega(W,\partial W) = \beta_2^\omega(W,X_{L})+\beta_3^\omega(W,\partial W)= \beta_2^\omega(W, X_{L'})+\beta_3^\omega(W,\partial W).$$
\end{lemma}
\begin{proof}
We start by establishing two preliminary equalities.
As $X_L$ is a link exterior,
its Euler characteristic vanishes. Since $\beta_0^{\omega}(X_L)$ and
$\beta_3^\omega(X_L)$ vanish and since $\chi^{\omega}(X_L)=\chi(X_L)=0$, we obtain
\begin{equation}
\label{eq:BettiNumberLinks}
\beta_1^\omega(X_{L})=\beta_2^\omega(X_{L}),
\end{equation}
and similarly for $L'$.
Arguing as in Lemma~\ref{lem:splitboundary}, one deduces that $\beta_i^\omega(\partial W)=\beta_i^\omega(X_L)+\beta_i^\omega(X_{L'})$. Using~(\ref{eq:BettiNumberLinks}), we then see that $\beta_1^\omega(X_{L})-\beta_2^\omega(X_{L})$ equals $\beta_1^\omega(X_{L'})-\beta_2^\omega(X_{L'})$ and therefore
\begin{equation}
\label{eq:BettiNumberBoundaryW}
\beta_1^\omega(\partial W)=\beta_1^\omega(X_L)+\beta_1^\omega(X_{L'})=\beta_2^\omega(X_L)+\beta_2^\omega(X_{L'})=\beta_2^\omega(\partial W).
\end{equation}
We now prove the first equality displayed in the lemma (the proof of the second is identical). Lemma~\ref{lem:ChainHomotopy} shows
that both modules
$H_3(W,X_L;\mathbb{C}^\omega)=0$ and $H_1(W,X_L;\mathbb{C}^\omega)=0$ vanish.
Consider the long exact
sequence of the triple $(W,\partial W,X_L)$
\begin{align*}
0 \to H_3(W, \partial W;\mathbb{C}^\omega) &\to H_2(\partial W,X_L;\mathbb{C}^\omega) \to H_2(W,X_L;\mathbb{C}^\omega) \to H_2(W,\partial W;\mathbb{C}^\omega)\to\\
&\to H_1(\partial W,X_L;\mathbb{C}^\omega) \to 0 \to H_1(W,\partial W;\mathbb{C}^\omega) \to 0
\end{align*}
and deduce that $H_1(W,\partial W;\mathbb{C}^\omega) =0$. Since the alternating sum of dimensions of an exact sequence vanishes, we obtain
\[ \beta_2^\omega(W,\partial W) =\beta_2^\omega(W,X_L)+\beta_3^\omega(W,\partial W)+\beta_1^\omega(\partial
W,X_L)-\beta_2^\omega(\partial W,X_L).\]
Thus the statement of the lemma reduces to proving the
equality~$\beta_1^\omega(\partial W,X_L)=\beta_2^\omega(\partial W,X_L)$.
To achieve this, consider the long exact sequence of the pair~$(\partial W,X_L)$:
\begin{align*}
0 \to H_2(X_L;\mathbb{C}^\omega) &\to H_2(\partial W;\mathbb{C}^\omega) \to H_2(\partial W,X_L;\mathbb{C}^\omega) \\
& \to H_1(X_L;\mathbb{C}^\omega) \to H_1(\partial W;\mathbb{C}^\omega)
\to H_1(\partial W,X_L;\mathbb{C}^\omega) \to 0.
\end{align*}
Note that $H_3(\partial W,X_L; \mathbb{C}^\omega)=0$ because of the long exact sequence of $(W,\partial W,X_L)$
together with the fact that $H_3(W,X_L; \mathbb{C}^\omega)=0$; see Lemma~\ref{lem:ChainHomotopy}.
Again, the alternate sum of dimensions
\[ \beta_2^\omega(X_L) - \beta_2^\omega (\partial W) + \beta_2^\omega (W, X_L) -\beta_1^\omega(X_L) + \beta_1^\omega(\partial W) - \beta_1^\omega(\partial W, X_L)= 0\]
vanishes, and the desired equality now follows by combining~(\ref{eq:BettiNumberLinks}) and~(\ref{eq:BettiNumberBoundaryW}).
\end{proof}
Next, we prove an inequality on the twisted Betti numbers of an
$H_1$-cobordism.
\begin{lemma} \label{lem:Inequality}
Let $L$ and $L'$ be links that are $H_1$--cobordant over~$\mathbb{Z}^\mu$ via $(W; X_L, X_{L'})$, then
\[\beta_3^\omega(W,\partial W)-\beta_1^\omega(\partial W)+\beta_1^\omega(W)\leq 0 \]
for all $\omega \in \mathbb{T}_!^\mu$.
\end{lemma}
\begin{proof}
By Lemma~\ref{lem:DualityUCSS}, one gets $\beta_3^\omega(W,\partial W)=\beta_1^\omega(W)$. Arguing as in the proof of Lemma~\ref{lem:Betti}, we see that $\beta_1^\omega(\partial W) =\beta_1^\omega(X_L)+\beta_1^\omega(X_{L'})$. Since Lemma~\ref{lem:ChainHomotopy} implies that $H_1(W,X_{L};\mathbb{C}^\omega)=0$, the map $H_1(X_{L};\mathbb{C}^\omega) \to H_1(W;\mathbb{C}^\omega)$ is surjective, and so $\beta_1^\omega (W)-\beta_1^\omega (X_{L}) \leq 0$ and similarly for $X_{L'}$. Combining these facts, $\beta_3^\omega(W,\partial W)-\beta_1^\omega(\partial W)+\beta_1^\omega(W)$ is equal to $2\beta_1^\omega(W)-\beta_1^\omega(X_{L})-\beta_1^\omega(X_{L'}) =(\beta_1^\omega(W)-\beta_1^\omega(X_{L'}))+(\beta_1^\omega(W)-\beta(X_{L'})) \leq 0$, as desired.
\end{proof}
\begin{proposition}\label{prop:DimensionCount}
Let $L$ and $L'$ be $\mu$-colored links that are $0.5$-solvable cobordant via $(W; X_L, X_{L'})$. Then, for all $\omega \in \mathbb{T}_!^\mu$, the subspace~$\mathcal{L}_\mathbb{C} \subset H_2(W;\mathbb{C}^\omega)$ of Proposition~\ref{prop:free} satisfies
\[ \frac{1}{2} \dim_{\mathbb{C}} \left( \frac{H_2(W;\mathbb{C}^\omega)}{\operatorname{im}(H_2(\partial W;\mathbb{C}^\omega) \to H_2(W;\mathbb{C}^\omega))}\right) \leq \dim_{\mathbb{C}}(\mathcal{L}_\mathbb{C}).\]
\end{proposition}
\begin{proof}
Invoking Proposition~\ref{prop:free}, the dimension of $\mathcal{L}_\mathbb{C}$ is larger than half the rank of $H_2(W,X_L;\mathbb{Z})$. Using Lemma~\ref{lem:ChainHomotopy}, $\beta_2^\omega(W,X_L)=\beta_2(W,X_L)$, and so the proposition reduces to showing the inequality
\[ d:=\dim\left( \frac{H_2(W;\mathbb{C}^\omega)}{\operatorname{im} H_2(\partial W;\mathbb{C}^\omega) \to H_2(W;\mathbb{C}^\omega)}\right)
\leq \beta_2^\omega(W, X_L).\]
Set $V:=\operatorname{im} ( H_2(\partial W;\mathbb{C}^\omega) \to H_2(W;\mathbb{C}^\omega) )$.
Since we proved in Lemma~\ref{lem:ChainHomotopy} that $H_1(W,\partial W;\mathbb{C}^\omega)$ vanishes,
the long exact sequence of the pair $(W,\partial W)$ now takes the form
\[ 0 \to V \to H_2(W;\mathbb{C}^\omega) \to H_2(W,\partial W;\mathbb{C}^\omega) \to H_1(\partial W;\mathbb{C}^\omega) \to H_1(W;\mathbb{C}^\omega) \to 0.\]
Finally, using the fact that the alternating dimensions of an exact sequence sum up to zero, one gets
\begin{align*}
d
&=\beta_2^\omega(W,\partial W)-\beta_1^\omega(\partial W)+\beta_1^\omega(W) \\
&=\beta_2^\omega(W,X_L)+\beta_3^\omega(W,\partial W)-\beta_1^\omega(\partial W)+\beta_1^\omega(W) \\
& \leq \beta_2^\omega(W,X_L),
\end{align*}
where the last two steps use respectively Lemma~\ref{lem:Betti} and Lemma~\ref{lem:Inequality}.
\end{proof}
\bibliographystyle{alpha}
|
{
"redpajama_set_name": "RedPajamaArXiv"
}
| 6,045
|
L'Avare () est une comédie italienne réalisée par Tonino Cervi, sortie en 1990, avec comme interprètes principaux Alberto Sordi, Laura Antonelli, Miguel Bosé, Marie Laforêt et Christopher Lee.
Synopsis
Harpagon vit à Rome dans les années 1600. Pour éviter de se marier avec la sœur du cardinal Spinosi déjà mariée trois fois par intérêt et dont les maris sont tous morts dans des circonstances mystérieuses, il invente un mensonge selon lequel il est prêt à se marier avec une femme déjà enceinte. Il demande à Frosina, tenancière d'une maison close de vite lui trouver une femme convenable.
Harpagon a deux enfants : Cléante et Elise. Son fils veut épouser la jeune Marianne mais son père refuse tout net de lui prêter de l'argent tandis qu'Elise a une liaison secrète avec Valerio, le serviteur de son père. Harpagon a d'autres ambitions pour ses enfants : deux mariages avec des personnes âgées très riches.
Analyse
Dix ans après l'adaptation de Louis de Funès, les studios italiens tentent, à leur tour, leur adaptation de l'universelle pièce de Molière, L'Avare. Mais contrairement à son prédécesseur français, parfaitement fidèle aux dialogues originaux, cette version italienne en propose une vision nettement plus libre, n'hésitant pas à intervenir, non seulement dans le texte, mais également dans l'intrigue, où nouveaux personnages et nouvelles situations viennent étoffer l'action.
Le contexte, en outre, est ici modifié: si nous restons toujours dans le contemporain du dramaturge français, le récit a été transposé à Rome. Empruntant davantage à la tradition de la commedia dell'arte que de la comédie française, le film, tout comme ses personnages, prend donc une teinte plus typiquement italienne.
Fidèle à sa frilosité vis-à-vis d'adaptations par des étrangers de ses grands classiques, le public français n'accorda guère d'intérêt à cette libre variation qui connut du reste une sortie quasi inexistante en France.
Fiche technique
Titre français : L'Avare
Titre original :
Réalisation : Tonino Cervi
Scénario : Alberto Sordi, Rodolfo Sonego, Tonino Cervi et Cesare Frugoni
Producteur : Tonino Cervi
Musique : Piero Piccioni
Montage : Nino Baragli
Directeur de la photographie : Armando Nannuzzi
Costumes : Tireli, Alberto Verso
Genre : Comédie
Année : 1990
Durée : 117 minutes
Pays :
Distribution
Alberto Sordi : Harpagon
Franco Interlenghi : Mastro Giacomo
Christopher Lee : Cardinal Spinosi
Marie Laforêt : Comtesse Isabella Spinosi
Laura Antonelli : Frosina
Lucia Bosè : Donna Elvira
: Cleante
Carlo Croccolo : Mastro Simone
Valerie Allain : Marianne
Franco Angrisano : Don Paolino
Miguel Bosé : Valerio
Jacques Sernas : Don Guglielmo
Paolo Paoloni : Le pape
Anna Kanakis : Anna
Mattia Sbragia : Oronte
Autour du film
Avec Il malato immaginario (1979), Tonino Cervi avait déjà adapté une autre œuvre de Molière, Le Malade imaginaire. La distribution comptait déjà Alberto Sordi et Laura Antonelli, et la musique était également signée Piero Piccioni.
Notes et références
Liens externes
Film italien sorti en 1990
Comédie italienne
Film réalisé par Tonino Cervi
Film scénarisé par Alberto Sordi
Adaptation d'une pièce de théâtre de Molière au cinéma
Adaptation d'une pièce de théâtre française au cinéma
Film se déroulant au XVIIe siècle
Film se déroulant à Rome
Remake italien de film français
Film avec une musique composée par Piero Piccioni
|
{
"redpajama_set_name": "RedPajamaWikipedia"
}
| 6,104
|
Q: Series representation of $\sin(nu)$ when $n$ is an odd integer? So, out of boredom and curiosity, today I came up with a series representation for $\sin(nu)$ when $n$ is an even integer:
$$\sin(nu) = \sum_{k=1}^\frac n2 \left(\left(-1\right)^{k-1}\binom{n}{-\left|2k-n\right|+n-1}\sin\left(u\right)^{2k-1}\cos\left(u\right)^{n-2k+1}\right)\;\mathtt {if}\;n\in 2\Bbb Z$$
I was working on a similar representation for when $n$ is an odd integer, but I'm having some difficulties. There doesn't seem to be much of a pattern. If it exists, could someone please point me in its direction? If it's impossible, could you provide me with the proof?
A: Notice that, using Euler's Formula, for general $n$ a positive integer,
$$\begin{align}\sin(nx) &= \frac{e^{inx}-e^{-inx}}{2i}\\
&=\frac{(\cos{x} + i\sin{x})^n - (\cos{x} - i\sin{x})^n}{2i}\\
&=\sum_{k=0}^{n}{\binom{n}{k}\frac{\cos^k{x}(i\sin{x})^{n-k} - \cos^k{x}(-i\sin{x})^{n-k}}{2i}}\\
&=\sum_{k=0}^{n}{\binom{n}{k}\cos^k{x}\sin^{n-k}{x}\frac{i^{n-k} - (-i)^{n-k}}{2i}}\\
&=\sum_{k=0}^{n}{\binom{n}{k}\cos^k{x}\sin^{n-k}{x}\sin{\frac{1}{2}(n-k)\pi}}\end{align}$$
I am ashamed to confess blatantly that this was taken (word for word) from here, the first link returned using the Google search query "Multiple Angle Formula".
A: Using complex methods and the binomial theorem,
$$\eqalign{\sin(nu)
&={\rm Im}(\cos u+i\sin u)^n\cr
&={\rm Im}\sum_{m=0}^n \binom nm (\cos u)^{n-m}(i\sin u)^m\ .\cr}$$
As only the terms for odd $m$ contribute to the imaginary part we can take $m=2k-1$ to give
$$\sin(nu)=\sum_{k=1}^{(n+1)/2}(-1)^{k-1}\binom n{2k-1}\cos^{n-2k+1}u\sin^{2k-1}u\ .$$
A: There are very nice and simple formulas using Chebyshev polynomials of the first kind $T_n$ (if $n$ is odd) and of the second kind $U_n$ (if $n$ is even)
$$sin(nx)=(-1)^{\frac{n-1}{2}} T_n(\sin (x))$$
$$sin(nx)=(-1)^{\frac{n}{2}-1} \cos (x) U_{n-1}(\sin (x))$$
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 3,060
|
namespace routing
{
/// \returns true when there exists a routing mode where the feature with |types| can be used.
template <class TTypes>
bool IsRoad(TTypes const & types)
{
return CarModel::AllLimitsInstance().HasRoadType(types) ||
PedestrianModel::AllLimitsInstance().HasRoadType(types) ||
BicycleModel::AllLimitsInstance().HasRoadType(types);
}
void ReconstructRoute(IDirectionsEngine & engine, IRoadGraph const & graph,
shared_ptr<traffic::TrafficInfo::Coloring> const & trafficColoring,
my::Cancellable const & cancellable, vector<Junction> & path, Route & route);
} // namespace rouing
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 7,219
|
But recently, John had the opportunity to travel to Liberia to do some very different types of photographs. And he loved the opportunity. We asked him to share about his experience there and how he's learning to use business to benefit others.
Photography has opened up many doors for me over the past few years and has allowed me to meet people and go places I probably would not have had the opportunity to experience otherwise. It's a fun job, and through a recent project I worked on, photography presented me with an even greater opportunity to step outside of my comfort zone and see the potential to truly make a positive impact in the lives of others.
This past January I was hired to travel to Liberia and photograph an ad campaign for MiiR Bottles, based around the first two MiiR-funded well projects. For every bottle MiiR sells, they provide clean water for one person for an entire year. It's a really great idea, and something I was immediately interested in being a part of. Business is a very powerful tool, and I have begun to explore the idea of using business to benefit others as MiiR is doing. The possibilities are endless and I was excited to discover how I might be able to make a difference though my photography.
It was difficult for me to fully comprehend the extent of the water issues until I was in Liberia. Access to water is not necessarily the problem, but access to clean water is really where it gets complicated. We were guided through several villages by locals where we were able to get a first-hand look at the water sources where people washed dishes, bathed, and retrieved water to drink. In some cases, they always bathed and washed clothes down-stream and drank up-stream, but the problem is there are several other villages up-stream doing the same thing. Or there could be dead animals lying in the stream. There are many wells in Liberia, but unless it is a sealed pump and it has been cared for properly, there is a good chance even those are contaminated. Without being sealed, bugs, animals, trash, and sewage fall in or get washed into the well when it rains. Clean water is hard to find.
Although our trip was focused on the clean water well projects in two particular villages, we had several opportunities throughout the week to travel to other villages and hear about the many different struggles facing Liberians. One of the struggles is access to quality education. Because of the economic situation in Liberia, there are many children whose families are not able to pay for school. One of our stops was at the Chariot Daycare and Elementary School in Buchanan. It was founded and run by a wonderful man who goes by Pastor Kondoh. He has an incredible story which I talk about on my blog, and he has an incredible passion to serve those who can not help themselves. Especially the children. One of his many goals is to educate and raise up the children of Liberia who he believes are the key to bringing about change in their country.
The organization MiiR partnered with to build wells in Liberia is Well Done Organization. In addition to clean water projects, WDO recently began a child sponsorship program, to help families who can't afford to send their children to school. I was asked to take a few pictures of the students needing sponsorship. One of the young students I photographed that morning was Zachariah, and his portrait is one of my favorites. I could stare at his portrait for hours and continue to find new things I like about it. I wrote about Zachariah on my blog and encouraged my readers to consider supporting him or one of the other children at Chariot Daycare and Elementary. By the end of the day, seven children were sponsored.
While I have always perceived the value of a photograph to be great, this experience of using an image to motivate others to take action has broadened the way I think about my work. Photography has the power to not only bring about awareness, but also to inspire. We know this because we so often create images for the purpose of advertising and selling commercial products and services. I have been thinking a lot about the power of photography since returning from Liberia. How can great photography be used to inspire people to invest in the lives of others? This same question could be asked of any professional skill, and thankfully it seems to be a question more and more people are thinking and talking about. I don't have all of the answers myself, but after hearing about the seven children who were sponsored in one day, a new excitement and curiosity has awakened in me.
Although I am still in the very early stages of planning, I have begun working on a very large new project revolving around the idea of giving back to others. I will have more about this to share very soon. I am excited to see how it will go and what can be accomplished in the weeks, months and even years ahead. If you are interested in hearing updates and finding out more about this project as well as my photography, you can do so on my blog at www.keatleyphoto.com/blog and on Twitter @johnkeatley.
In addition to these images, you can see more of my photography on my website, www.keatleyphoto.com. I was given a lot of freedom to explore and photograph whatever interested me while in Liberia. Being a portrait photographer, I made a point of going out on my own to photograph people whenever I had the chance. These portraits are the ones I am really most proud of, and in the short time I have been back, I have really been able to use these images to generate quite a bit of interest and awareness about the situation in Liberia and how people can get involved.
Some really great things are happening in Liberia with the support of WDO. If you are interested in becoming involved or supporting the work being done, you can find out more at www.welldoneliberia.org.
|
{
"redpajama_set_name": "RedPajamaC4"
}
| 5,459
|
HealthKit
Ask iLounge
iLounge Deals Shop
Review: SplashData SplashMoney
On August 25, 2008, we reviewed a collection of 15 different personal finance applications for the iPhone and iPod touch in a roundup entitled iPhone Gems: Every Personal Finance Application, Reviewed. This review contains a review of one application from that roundup; additional comparative details can be found in the original full story.
SplashMoney ($10) by SplashData is the iPhone and iPod touch version of the popular financial software for desktop PCs, Palm OS, Pocket PC, and BlackBerry. Unlike many of the more simple finance applications covered in this article, SplashMoney aims to be something more akin to a desktop application, offering syncing, downloading of the user's latest account information directly from his or her bank, and more.
Users are shown a brief quick start guide they first run the application, which is thereafter available in the tools menu. We see this as nearly mandatory for an app as complex as SplashMoney, and would like to see this practice spread to other categories of iPhone apps—once the iPhone OS is stable enough that repeated reinstallation of apps isn't necessary. The main account screen lists all of the user's accounts, five of which are pre-created for convenience, and the balance for each. Icons at the bottom of the screen provide the ability to set and manage budgets, add accounts, view reports, and download new account information if the user's financial institution offers it. A blue button in the bottom right lists the currently available balance, either for all accounts or just an individual account, depending on where in the app the user happens to be.
As with many other iPhone finance apps, tapping on an account brings up that account's main view, with a listing of transactions, each with the appropriate amount (either debit or credit), along with a category icon next to the transaction's name. On the new transaction screen, users may enter the date, payee, type, amount, category, class, state (cleared or uncleared), and memo information.
On the budgets screen, users can set up individual budgets for a wide variety of categories, including auto, dining, entertainment, groceries, insurance, medical, salary, travel, and more, with each showing the budget amount and the amount spent. A total of all budgeted funds and the remaining balance are shown at the bottom of the screen; users can also choose to view budgets from last month, last quarter, last year, or the current month, quarter, or year.
While its interface isn't the best we've seen among competing apps, it's not bad, and SplashMoney stands alone as the only currently available personal finance application for the iPhone and iPod touch to offer syncing to a desktop application—in this case, SplashMoney for Mac or Windows—and the ability to download the latest transaction information directly from the user's financial institution. Users looking for a simple app to keep track of daily spending will most likely find this app to be a bit overpowered, but for those looking for desktop-class financial management on their iPhone or iPod touch, it's the best option currently available.
Company and Price
Company: SplashData
Website: www.splashdata.com
Title: SplashMoney
Compatible: iPhone, iPhone 3G, iPod touch
Protect and remotely control your electronic devices with the TP-Link Kasa Smart Wi-Fi power strip
VolunteerMatch now integrated into Apple Maps
Production Ted Lasso second season begins
iOS Tips gets Privacy section update
New iMacs powered by Apple Silicon to release soon
Get More Work and Play Done with a $50 Off LG 32 inch UltraFine 4K Monitor
Apple reveals behind the scenes video on 'Shot on iPhone Student Films'
M1 Mac sideloading option on iOS has been removed
Apple may extend Apple TV+ free trial until July
iLounge > Reviews > Review: SplashData SplashMoney
Submit App News
iLounge © 2001 – 2020. All Rights Reserved.
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 2,335
|
Ahmad Abdel-Halim Abdel-Salam Al-Zugheir (; born 14 September 1986) is a Jordanian footballer who plays for Sahab and the Jordan national football team.
Career
Abdel-Halim, who normally plays as a left-winger but can also operate as a left back, has always been known as the "Asian Roberto Carlos" due to his thunderous left-foot and long range shots, thumping free-kicks and corner kicks, which are similar to that of the former Brazilian international star Roberto Carlos.
Many fans of Al-Wahdat and Jordan refer to him as the following nicknames "Al-Andalib", "Haleem of Al-Wahdat", and "Al-Madfaaji".
Honors and participation in international tournaments
In Asian Games
2006 Asian Games
In AFC Asian Cups
2011 Asian Cup
In WAFF Championships
2010 WAFF Championship
International goals
With U-23 Team
With Senior Team
References
External links
1986 births
Living people
Jordanian people of Palestinian descent
Jordanian footballers
Jordan international footballers
2011 AFC Asian Cup players
Al-Wehdat SC players
Shabab Al-Ordon Club players
Sahab SC players
Al-Ramtha SC players
Al-Sareeh SC players
That Ras Club players
Al-Baqa'a Club players
Shabab Al-Khalil SC players
Al-Nasr SC (Salalah) players
Footballers at the 2006 Asian Games
Sportspeople from Amman
Expatriate footballers in the State of Palestine
West Bank Premier League players
Jordanian expatriate footballers
Jordanian Pro League players
Oman Professional League players
Jordanian expatriate sportspeople in Oman
Expatriate sportspeople in Oman
Expatriate sportspeople in the State of Palestine
Jordanian expatriate sportspeople in the State of Palestine
Association football fullbacks
Association football wingers
Asian Games competitors for Jordan
|
{
"redpajama_set_name": "RedPajamaWikipedia"
}
| 8,564
|
Q: Get count() of data for the last 12 months, Laravel I am attempting to return a count of my users anniversaries by month on a rolling basis, which I will display in flot charts.
I think I nearly have it figured out, but I am struggling with getting the format of my column to match the comparison month.
//return an array of the last 12 months.
for ($i = 1; $i <= 12; $i++) {
$months[] = date("Y-m", strtotime( date( 'Y-m-01' )." -$i months"));
}
//$months dumps as: array(12) { [0]=> string(7) "2015-03" ...}
// find members where the anniversary column matches the $months[$i]
foreach ($months as $key => $value) {
$renewals[] = User::where('anniversary', '=', $value)->count();
}
The format of the anniversary column is 2014-4-30. How to I just grab the 'Y-m' of that column to compare to $month?
A: You could do:
$renewals[] = User::where('anniversary', 'LIKE', $value.'%')->count();
That would accept any day and only match month and year. Of course there probably are better ways to accomplish this without using LIKE.
Alternatively you could get an array with months that has first and last day and use Laravels whereBetween method.
$renewals[] = User::whereBetween('anniversary', array($firstDay, $lastDay))->count();
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 1,183
|
Home National Politics Grayson Campaign Needs to Pull False Ad Immediately
Grayson Campaign Needs to Pull False Ad Immediately
It's not easy for me to criticize a strong progressive like Rep. Alan Grayson (D-FL), someone who has fought hard for just about everything I believe in (e.g., the Public Option Act). And it's even more difficult when Grayson's opponent, Daniel Webster, is about as right-wing as you can get, a true "American Taliban" if there ever was one. Finally, it continues to be the case that I would strongly prefer Grayson to be reelected than Webster to replace him.
Having said all that, however, this is truly egregious, as explained by the non-partisan, respected, Annenberg Public Policy Center.
We thought Democratic Rep. Alan Grayson of Florida reached a low point when he falsely accused his opponent of being a draft dodger during the Vietnam War, and of not loving his country. But now Grayson has lowered the bar even further. He's using edited video to make his rival appear to be saying the opposite of what he really said.
In a new ad, Grayson accuses his Republican opponent Daniel Webster of being a religious fanatic and dubs him "Taliban Dan." But to make his case, Grayson manipulates a video clip to make it appear Webster was commanding wives to submit to their husbands, quoting a passage in the Bible. Four times, the ad shows Webster saying wives should submit to their husbands. In fact, Webster was cautioning husbands to avoid taking that passage as their own. The unedited quote is: "Don't pick the ones [Bible verses] that say, 'She should submit to me.'"
Watch the ad, then watch the unedited video (see after the "fold") and decide for yourself. Here are the comments by Webster, which show that he meant the exact opposite of what Grayson's ad claims he meant.
So, write a journal. Second, find a verse. I have a verse for my wife, I have verses for my wife. Don't pick the ones that say, 'She should submit to me.' That's in the Bible, but pick the ones that you're supposed to do. So instead, 'love your wife, even as Christ loved the Church and gave himself for it' as opposed to 'wives submit to your own husbands.' She can pray that, if she wants to, but don't you pray it.
The Grayson campaign needs to pull this ad immediately, if it hasn't already done so. As much as Daniel Webster is a heinously extreme, right-wing, member of the "American Taliban," taking his words and editing them to make them mean the exact opposite of what he said is not acceptable. Actually, what it reminds me of is another incident which we progressives rightly decried. In our zeal to win elections, let's never become like the Andrew Breitbarts of the world.
Alan Grayson
Previous articleWhipple Clip Dozen: Tuesday Morning
Next articleMove over Grover: Republican Norquist, the new poster boy for the Governor's High Tax Liquor Plan
Friday News: "The Covid-19 Death Toll Is Even Worse Than It Looks"; "Four Rules That Should Guide Bidenomics"; "Trump won't say the one thing...
Thursday News: "Trump is impeached yet again. But most GOP members shrug at sedition"; "How Facebook Incubated the Insurrection"; "Two Virginia police officers, man...
Wednesday News: "House to impeach Trump as GOP support grows"; "We must end the post-truth society"; "Now Social Media Grows a Conscience?"; "Security heightened...
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 2,012
|
using AspectCore.DynamicProxy;
using System;
using System.Collections.Generic;
using System.Text;
using System.Threading.Tasks;
using Xunit;
namespace AspectCore.Tests.Injector
{
public class Intercept1 : AbstractInterceptorAttribute
{
public override Task Invoke(AspectContext context, AspectDelegate next)
{
context.Parameters[0] = "lemon";
return context.Invoke(next);
}
}
public interface IService1
{
Task<string> GetValue(string val);
}
public class Service1 : IService1
{
[Intercept1]
public async Task<string> GetValue(string val)
{
await Task.Delay(3000);
return val;
}
}
public class AsyncBlockTest : InjectorTestBase
{
[Fact]
public async Task AsyncBlock()
{
var builder = new ProxyGeneratorBuilder();
builder.Configure(_ => { });
var proxyGenerator = builder.Build();
var proxy = proxyGenerator.CreateInterfaceProxy<IService1, Service1>();
// IService proxy = new Service();
var startTime = DateTime.Now;
Console.WriteLine($"{startTime}:start");
var val = proxy.GetValue("le");
var endTime = DateTime.Now;
Assert.True((endTime - startTime).TotalSeconds < 2);
Console.WriteLine($"{endTime}:should return immediately");
var result = await val;
Console.WriteLine($"{DateTime.Now}:{result}");
}
}
}
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 8,556
|
What happened to Jeremy in vampire Diaries?
Why did Tyler leave the Vampire Diaries?
Where did Jeremy go in season 6?
Why did Jeremy leave Bonnie?
Does Jeremy Come Back Season 7?
Who ends up with Jeremy Gilbert?
Who is Jeremy's girlfriend in Vampire Diaries?
Is Jeremy going to be in Legacies?
Who does Jeremy end up with?
Is Matt a legacy?
Who does Jeremy marry in Vampire Diaries?
Who does Matt end up with?
Who does Matt end up with Vampire Diaries?
Jeremy was shot by Sheriff Forbes after Damon dodged the bullet and was revived by Bonnie, who is a witch and has connections to dead witches with the power to bring him back. However the spell that caused him to be revived gave him the power to see ghosts and he was haunted by Vicki and Anna.
Why did Michael Trevino's character exit? In The Vampire Diaries season 6 finale, Tyler Lockwood (Michael Trevino) decided to leave Mystic Falls because of Elena Gilbert (played by Nina Dobrev). Dobrev's character succumbed to a sleeping spell cast by Kai Parker, and Tyler left for parts unknown.
art schoolWith help from Elena and Damon, Jeremy graduates high school months earlier and is leaving town to go to art school.
Jeremy's sister, Elena is Bonnie's best friend. However, Jeremy turned to liquor after Bonnie's "death." After he saved her life in the Prison World, they parted on good terms despite them not being able to speak to each other, Bonnie wanted him to move on with his life, so he did.
Jeremy made his exit as a main character on The Vampire Diaries in the episode Stay. Jeremy comes back to Mystic Falls after Elena Gilbert awakens from her magical slumber. He now works as a teacher at Alaric's Salvatore Boarding School.
He was later given John's ring, which protects him from a death caused by anything supernatural. In season two, Jeremy falls in love with Bonnie Bennett and they start a relationship. Jeremy later loses both his aunt and uncle on the day of the sacrifice, leaving Elena and himself without a guardian.
Bonnie BennettVicki DonovanAnnaJeremy Gilbert/Significant othersJeremy tries to become a vampire by overdosing while having Anna's blood in his system, but he fails. He was later given John's ring, which protects him from a death caused by anything supernatural. In season two, Jeremy falls in love with Bonnie Bennett and they start a relationship.
Jeremy Gilbert appeared as a main character on six seasons of The Vampire Diaries. At some point, Jeremy left Mystic Falls presumably to continue hunting vampires, but he appeared during Legacies season 1, episode 3, "We're Being Punked, Pedro." Jeremy rescued Landon Kirby and Rafael Waithe from a werewolf hunter.
He was later given John's ring, which protects him from a death caused by anything supernatural. In season two, Jeremy falls in love with Bonnie Bennett and they start a relationship.
"Matt" Donovan is a main character on The Vampire Diaries and a guest character in the third season of The Originals and first season of Legacies.
In season two, Jeremy falls in love with Bonnie Bennett and they start a relationship. Jeremy later loses both his aunt and uncle on the day of the sacrifice, leaving Elena and himself without a guardian.
Happily ever after? Matt James chose Rachael Kirkconnell over Michelle Young during the season 25 finale of The Bachelor on Monday, March 15.
After Elena falls into a magical, coma-like slumber, Matt takes on more responsibility and eventually becomes the sheriff of Mystic Falls. During this time, Matt ejects all vampires from Mystic Falls. In season seven, Matt falls in love with his partner, Penny Ares.
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 5,321
|
{"url":"http:\/\/farazdagi.com\/blog\/2015\/bitonic-sort-in-go\/","text":"2 Sep, 2015\n\nIn this post I will look into multi-core programming using Go. The language is good for concurrency, and handles multi-core parallel execution really well too. As a practical exercise Bitonic Sorter will be implemented. We will test the code on 8-core machine to see what benefit can parallelism provide.\n\n## 0. Context\n\nThis semester I\u2019m enrolled in CSE6220: Into to HPC class, where Prof. Vuduc tries to make sure we have a lot of fun (and learn some HPC stuff along the way).\n\nFor the first assignment, we were expected to turn a simple program into equally simple program but one that can execute some of its instructions in parallel. The program is bitonic sorter written in C, and all we had to do was to inject a single OpenMP instruction. After running the resultant code on Tech\u2019s DeepThought cluster, I got puzzled with the question: how hard will it be to implement the very same bitonic sorter in Go?\n\nTurned out, it was trivial to do!\n\nSo, this post is a quick summary of the work done. Please note that implemented sorter is primitive at best:\n\n\u2022 it runs on a single machine (with multiple cores), no distribution to separate nodes\n\u2022 it sorts only integers (easy to fix, but requires some typing..nobody mention generics here!)\n\n## 1. Prerequisites\n\nIf you wonder what Bitonic Sort is, you definitely wonder what Sorting Networks are all about.\n\nHere are some links to get you up to speed in no time:\n\nFor our purposes it is enough to understand that a sorting network is a special kind of sorting algorithm, where the sequence of comparisons is not data-dependent.\n\nSounds like a good case for parallel execution, right?\n\n## 2. Implementation\n\nYou know what is really nice about Bitonic Sort? Not only it can perform really well, it is really easy to implement.\n\nHere is the full code:\n\n## 3. Analysis (kind of)\n\nYou can do your own experiments by forking farazdagi\/bitonic repo.\n\nHere is what I did:\n\n\u2022 tried to run on a single core, on 4 cores, and on 8 cores\n\u2022 every test got three runs, average time was taken\n\u2022 the very same code was run every time i.e. even in case of a single core run, I used exactly the same concurrent program.\n\nHere is a what I got:\n\nNotes:\n\n\u2022 as you see elapsed time to sort $2^{14}$ items on a single processor quickly goes up. Our parallel execution on multiple processors can handle the load easily.\n\u2022 $2^{20} = 1.048.576$ and it takes ~$14$ seconds to sort those 1M records on 8-core machine. Pretty impressive.\n\u2022 on second figure, when we compare 4 and 8 cores, difference is not that striking. I guess our parallel execution becomes dominated by memory accesses (sorting is in shared memory).\n\n## 4. Summary and Take-Aways\n\n1. Go is truly amazing language and I will definitely experiment with its concurrency model more (as we proceed with CSE6220). So, you should probably expect more posts similar to this one.\n2. If you got interested in Bitonic sort, here is a list of awesomeness:\n\nDon\u2019t forget to fork the repo. You can also follow me on Twitter.\n\nHave a nice day and happy hacking!","date":"2017-03-29 15:03:11","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 3, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.3786170482635498, \"perplexity\": 1996.1367304623739}, \"config\": {\"markdown_headings\": true, \"markdown_code\": false, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2017-13\/segments\/1490218190753.92\/warc\/CC-MAIN-20170322212950-00543-ip-10-233-31-227.ec2.internal.warc.gz\"}"}
| null | null |
/* eslint-disable */
import * as Types from '../../graphqlTypes.generated';
import { gql } from '@apollo/client';
import { AdminNavigationItemFieldsFragmentDoc } from './queries.generated';
import * as Apollo from '@apollo/client';
const defaultOptions = {} as const;
export type CreateNavigationItemMutationVariables = Types.Exact<{
navigationItem: Types.CmsNavigationItemInput;
}>;
export type CreateNavigationItemMutationData = { __typename: 'Mutation', createCmsNavigationItem: { __typename: 'CreateCmsNavigationItemPayload', cms_navigation_item: { __typename: 'CmsNavigationItem', id: string, position?: number | null, title?: string | null, page?: { __typename: 'Page', id: string } | null, navigation_section?: { __typename: 'CmsNavigationItem', id: string } | null } } };
export type UpdateNavigationItemMutationVariables = Types.Exact<{
id: Types.Scalars['ID'];
navigationItem: Types.CmsNavigationItemInput;
}>;
export type UpdateNavigationItemMutationData = { __typename: 'Mutation', updateCmsNavigationItem: { __typename: 'UpdateCmsNavigationItemPayload', cms_navigation_item: { __typename: 'CmsNavigationItem', id: string, position?: number | null, title?: string | null, page?: { __typename: 'Page', id: string } | null, navigation_section?: { __typename: 'CmsNavigationItem', id: string } | null } } };
export type DeleteNavigationItemMutationVariables = Types.Exact<{
id: Types.Scalars['ID'];
}>;
export type DeleteNavigationItemMutationData = { __typename: 'Mutation', deleteCmsNavigationItem: { __typename: 'DeleteCmsNavigationItemPayload', cms_navigation_item: { __typename: 'CmsNavigationItem', id: string } } };
export type SortNavigationItemsMutationVariables = Types.Exact<{
sortItems: Array<Types.UpdateCmsNavigationItemInput> | Types.UpdateCmsNavigationItemInput;
}>;
export type SortNavigationItemsMutationData = { __typename: 'Mutation', sortCmsNavigationItems: { __typename: 'SortCmsNavigationItemsPayload', clientMutationId?: string | null } };
export const CreateNavigationItemDocument = gql`
mutation CreateNavigationItem($navigationItem: CmsNavigationItemInput!) {
createCmsNavigationItem(input: {cms_navigation_item: $navigationItem}) {
cms_navigation_item {
id
...AdminNavigationItemFields
}
}
}
${AdminNavigationItemFieldsFragmentDoc}`;
export type CreateNavigationItemMutationFn = Apollo.MutationFunction<CreateNavigationItemMutationData, CreateNavigationItemMutationVariables>;
/**
* __useCreateNavigationItemMutation__
*
* To run a mutation, you first call `useCreateNavigationItemMutation` within a React component and pass it any options that fit your needs.
* When your component renders, `useCreateNavigationItemMutation` returns a tuple that includes:
* - A mutate function that you can call at any time to execute the mutation
* - An object with fields that represent the current status of the mutation's execution
*
* @param baseOptions options that will be passed into the mutation, supported options are listed on: https://www.apollographql.com/docs/react/api/react-hooks/#options-2;
*
* @example
* const [createNavigationItemMutation, { data, loading, error }] = useCreateNavigationItemMutation({
* variables: {
* navigationItem: // value for 'navigationItem'
* },
* });
*/
export function useCreateNavigationItemMutation(baseOptions?: Apollo.MutationHookOptions<CreateNavigationItemMutationData, CreateNavigationItemMutationVariables>) {
const options = {...defaultOptions, ...baseOptions}
return Apollo.useMutation<CreateNavigationItemMutationData, CreateNavigationItemMutationVariables>(CreateNavigationItemDocument, options);
}
export type CreateNavigationItemMutationHookResult = ReturnType<typeof useCreateNavigationItemMutation>;
export type CreateNavigationItemMutationResult = Apollo.MutationResult<CreateNavigationItemMutationData>;
export type CreateNavigationItemMutationOptions = Apollo.BaseMutationOptions<CreateNavigationItemMutationData, CreateNavigationItemMutationVariables>;
export const UpdateNavigationItemDocument = gql`
mutation UpdateNavigationItem($id: ID!, $navigationItem: CmsNavigationItemInput!) {
updateCmsNavigationItem(input: {id: $id, cms_navigation_item: $navigationItem}) {
cms_navigation_item {
id
...AdminNavigationItemFields
}
}
}
${AdminNavigationItemFieldsFragmentDoc}`;
export type UpdateNavigationItemMutationFn = Apollo.MutationFunction<UpdateNavigationItemMutationData, UpdateNavigationItemMutationVariables>;
/**
* __useUpdateNavigationItemMutation__
*
* To run a mutation, you first call `useUpdateNavigationItemMutation` within a React component and pass it any options that fit your needs.
* When your component renders, `useUpdateNavigationItemMutation` returns a tuple that includes:
* - A mutate function that you can call at any time to execute the mutation
* - An object with fields that represent the current status of the mutation's execution
*
* @param baseOptions options that will be passed into the mutation, supported options are listed on: https://www.apollographql.com/docs/react/api/react-hooks/#options-2;
*
* @example
* const [updateNavigationItemMutation, { data, loading, error }] = useUpdateNavigationItemMutation({
* variables: {
* id: // value for 'id'
* navigationItem: // value for 'navigationItem'
* },
* });
*/
export function useUpdateNavigationItemMutation(baseOptions?: Apollo.MutationHookOptions<UpdateNavigationItemMutationData, UpdateNavigationItemMutationVariables>) {
const options = {...defaultOptions, ...baseOptions}
return Apollo.useMutation<UpdateNavigationItemMutationData, UpdateNavigationItemMutationVariables>(UpdateNavigationItemDocument, options);
}
export type UpdateNavigationItemMutationHookResult = ReturnType<typeof useUpdateNavigationItemMutation>;
export type UpdateNavigationItemMutationResult = Apollo.MutationResult<UpdateNavigationItemMutationData>;
export type UpdateNavigationItemMutationOptions = Apollo.BaseMutationOptions<UpdateNavigationItemMutationData, UpdateNavigationItemMutationVariables>;
export const DeleteNavigationItemDocument = gql`
mutation DeleteNavigationItem($id: ID!) {
deleteCmsNavigationItem(input: {id: $id}) {
cms_navigation_item {
id
}
}
}
`;
export type DeleteNavigationItemMutationFn = Apollo.MutationFunction<DeleteNavigationItemMutationData, DeleteNavigationItemMutationVariables>;
/**
* __useDeleteNavigationItemMutation__
*
* To run a mutation, you first call `useDeleteNavigationItemMutation` within a React component and pass it any options that fit your needs.
* When your component renders, `useDeleteNavigationItemMutation` returns a tuple that includes:
* - A mutate function that you can call at any time to execute the mutation
* - An object with fields that represent the current status of the mutation's execution
*
* @param baseOptions options that will be passed into the mutation, supported options are listed on: https://www.apollographql.com/docs/react/api/react-hooks/#options-2;
*
* @example
* const [deleteNavigationItemMutation, { data, loading, error }] = useDeleteNavigationItemMutation({
* variables: {
* id: // value for 'id'
* },
* });
*/
export function useDeleteNavigationItemMutation(baseOptions?: Apollo.MutationHookOptions<DeleteNavigationItemMutationData, DeleteNavigationItemMutationVariables>) {
const options = {...defaultOptions, ...baseOptions}
return Apollo.useMutation<DeleteNavigationItemMutationData, DeleteNavigationItemMutationVariables>(DeleteNavigationItemDocument, options);
}
export type DeleteNavigationItemMutationHookResult = ReturnType<typeof useDeleteNavigationItemMutation>;
export type DeleteNavigationItemMutationResult = Apollo.MutationResult<DeleteNavigationItemMutationData>;
export type DeleteNavigationItemMutationOptions = Apollo.BaseMutationOptions<DeleteNavigationItemMutationData, DeleteNavigationItemMutationVariables>;
export const SortNavigationItemsDocument = gql`
mutation SortNavigationItems($sortItems: [UpdateCmsNavigationItemInput!]!) {
sortCmsNavigationItems(input: {sort_items: $sortItems}) {
clientMutationId
}
}
`;
export type SortNavigationItemsMutationFn = Apollo.MutationFunction<SortNavigationItemsMutationData, SortNavigationItemsMutationVariables>;
/**
* __useSortNavigationItemsMutation__
*
* To run a mutation, you first call `useSortNavigationItemsMutation` within a React component and pass it any options that fit your needs.
* When your component renders, `useSortNavigationItemsMutation` returns a tuple that includes:
* - A mutate function that you can call at any time to execute the mutation
* - An object with fields that represent the current status of the mutation's execution
*
* @param baseOptions options that will be passed into the mutation, supported options are listed on: https://www.apollographql.com/docs/react/api/react-hooks/#options-2;
*
* @example
* const [sortNavigationItemsMutation, { data, loading, error }] = useSortNavigationItemsMutation({
* variables: {
* sortItems: // value for 'sortItems'
* },
* });
*/
export function useSortNavigationItemsMutation(baseOptions?: Apollo.MutationHookOptions<SortNavigationItemsMutationData, SortNavigationItemsMutationVariables>) {
const options = {...defaultOptions, ...baseOptions}
return Apollo.useMutation<SortNavigationItemsMutationData, SortNavigationItemsMutationVariables>(SortNavigationItemsDocument, options);
}
export type SortNavigationItemsMutationHookResult = ReturnType<typeof useSortNavigationItemsMutation>;
export type SortNavigationItemsMutationResult = Apollo.MutationResult<SortNavigationItemsMutationData>;
export type SortNavigationItemsMutationOptions = Apollo.BaseMutationOptions<SortNavigationItemsMutationData, SortNavigationItemsMutationVariables>;
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 8,275
|
The Scientist – ballada rockowa zespołu Coldplay, pochodząca z ich drugiego albumu, A Rush of Blood to the Head. Regionalne wydania singla dostępne były w Wielkiej Brytanii, Europie, Holandii, Niemczech i Kanadzie, z kolei wydania promo CD ukazały się w Stanach Zjednoczonych i Wielkiej Brytanii.
Okładkę wydania singla stanowiło zdjęcie perkusisty grupy, Willa Championa, zrobione przez Sølve'a Sundsbø.
Produkcja
Piosenka jest opartą na dźwiękach pianina nastrojową balladą. Inspiracją do stworzenia piosenki był dla Chrisa Martina utwór George'a Harrisona "All Things Must Pass". "The Scientist" zawiera także odniesienia do krótkiej powieści The Birth-Mark Nathaniela Hawthorne'a, w której pewien naukowiec zapomina o miłości swego życia z powodu zamiłowania do nauki. Ze swojego błędu zdaje sobie sprawę, gdy jest już za późno, ponieważ jego ukochana umiera.
Covery piosenki stworzyło wielu artystów. Aimee Mann nagrała własną wersję "The Scientist", którą następnie wydała na specjalnej edycji jej albumu Lost in Space. Natasha Bedingfield, Eamon i Avril Lavigne zagrali covery piosenki w radiowym programie Jo Whiley, Live Lounge. Belinda Carlisle wykonała utwór na żywo we własnej aranżacji w reality show Hit Me Baby One More Time. Cover piosenki znalazł się także na płycie Scarred Johnette Napolitano. Brytyjski żeński kwartet All Angels również zamieścił własną wersję "The Scientist" na swoim albumie Into Paradise, który został wydany w listopadzie 2007 roku. Dodatkowo, w programie telewizyjnym MAD TV można było obejrzeć parodię teledysku, nazwaną "The Narcissist".
Lista utworów
"The Scientist" – 5:11
"1.36" feat. Tim Wheeler – 2:05
"I Ran Away" – 4:26
Wideoklip
Wideoklip "The Scientist" był bardzo popularny głównie z powodu występowania w nim "odwrotnego ruchu", czyli np. poruszania się i jazdy samochodem nie do przodu, tylko do tyłu. Element ten po raz pierwszy został użyty w teledysku "The Second Summer of Love" szkockiej grupy Danny Wilson. Poza tym podobną technikę wykorzystano w wideoklipach: "Return to Innocence" Enigmy, "Sitting, Waiting, Wishing" Jacka Johnsona, "Don't Wait" Dashboard Confessional, "Me, Myself and I" Beyoncé Knowles, "The Girl is Mine" zespołu 99 Souls oraz "Typical" Mute Math. Chris Martin przyznał, że nauka śpiewania piosenki od tyłu zajęła mu miesiąc.
Teledysk był kręcony w wielu różnych miejscach, m.in. w Londynie i Surrey, tuż przed rozpoczęciem pierwszego etapu trasy A Rush of Blood to the Head Tour. Za reżyserię "The Scientist" odpowiedzialny był Jamie Thraves. Mimo iż wideoklip kręcony był w Anglii, posiadał on tablicę rejestracyjną z Wyoming, używaną od 1983 roku do 1988 roku. Samochodem w teledysku było BMW E28, produkowane od 1982 roku do 1988 roku.
W wideoklipie wystąpiła irlandzka aktorka Elaine Cassidy.
W 2003 roku, "The Scientist" wygrał trzy nagrody MTV Video Music Awards: Best Group Video, Best Direction oraz Breakthrough Video. Teledysk otrzymał także nominację do nagrody Grammy.
Pozycje na listach
Przypisy
Bibliografia
Ballady rockowe
Single Coldplay
Pierwsze miejsce na Liście Przebojów Programu Trzeciego
Single Parlophone
Single wydane w roku 2002
|
{
"redpajama_set_name": "RedPajamaWikipedia"
}
| 1,432
|
Q: Form still returning true value even there's empty() I'm having a problem for doing my form .
When I click the button generate , I'm attempting to redirect to an error page but it stills redirect to the correct page even in the form it's empty.
if(isset($_POST['Generate'])) {
if(!empty($_POST['aid'])) {
$gen_link = "www.correctlink.com";
$_SESSION['active'] = $aid;
header("Location: http://$gen_link") ;
} else {
header("Location : http://error404.com");
}
}
Even when I click the generate button it stills redirect to www.correctlink.com
I want the user to type something in the form
Form Code:
<input type="text" name="aid" id="aid" value="Enter Your Active Here" onfocus=" if (this.value == 'Enter Your Active Here') { this.value = ''; }" onblur="if (this.value == '') { this.value='Enter Your Active Here';} "/><br /><br />
<input type="submit" class="button" value="Generate" name="Generate" id="Generate"/>
A: If you have the value attribute set to something in the form, it will submit that as the value, therefore it won't be empty. Instead of checking if it's empty, check if it equals Enter Your Active Here.
If you need a placeholder text, you could use the attribute placeholder instead of value.
A: The problem here is that you have set a default value of "Enter Your Active Here" in your textbox. If the user simply submits the form without even trying to enter anything in the textbox, the value of $_POST['aid'] becomes "Enter Your Active Here".
So what do you do? Simple, instead of checking for empty, check for
if($_POST['aid'] != "Enter Your Active Here" && ! empty(trim($_POST['aid'])))
Another solution would be to use a placeholder but since that's a HTML5 feature, the compatibility of that across browsers is limited.
EDIT: The second condition is added to make sure the code works in case javascript is disabled on the client machine and the user mischievously tries to submit form by emptying the textbox
A: The $_POST['aid'] is not empty because you set value for this as Enter Your Active Here
Use something like this
<input type="text" name="aid" id="aid" placeholder="Enter Your Active Here" .......
OR
<label>Enter Your Active Here</label><input type="text" name="aid" id="aid" .....
A: replace
if(!empty($_POST['aid']))
by
if($_POST['aid']!="" && $_POST['aid']!="Enter Your Active Here")
Thanks.
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 1,660
|
#include "test_qgemm.h"
#include "test_qgemm_fixture.h"
template <> MlasQgemmTest<uint8_t, int8_t, int32_t, false, false>* MlasTestFixture<MlasQgemmTest<uint8_t, int8_t, int32_t, false, false>>::mlas_tester(nullptr);
template <> MlasQgemmTest<uint8_t, int8_t, int32_t, false, true>* MlasTestFixture<MlasQgemmTest<uint8_t, int8_t, int32_t, false, true>>::mlas_tester(nullptr);
template <> MlasQgemmTest<uint8_t, int8_t, int32_t, true, false>* MlasTestFixture<MlasQgemmTest<uint8_t, int8_t, int32_t, true, false>>::mlas_tester(nullptr);
template <> MlasQgemmTest<uint8_t, int8_t, int32_t, true, true>* MlasTestFixture<MlasQgemmTest<uint8_t, int8_t, int32_t, true, true>>::mlas_tester(nullptr);
template <> MlasQgemmTest<int8_t, int8_t, int32_t, false, false>* MlasTestFixture<MlasQgemmTest<int8_t, int8_t, int32_t, false, false>>::mlas_tester(nullptr);
template <> MlasQgemmTest<int8_t, int8_t, int32_t, false, true>* MlasTestFixture<MlasQgemmTest<int8_t, int8_t, int32_t, false, true>>::mlas_tester(nullptr);
template <> MlasQgemmTest<int8_t, int8_t, int32_t, true, false>* MlasTestFixture<MlasQgemmTest<int8_t, int8_t, int32_t, true, false>>::mlas_tester(nullptr);
template <> MlasQgemmTest<int8_t, int8_t, int32_t, true, true>* MlasTestFixture<MlasQgemmTest<int8_t, int8_t, int32_t, true, true>>::mlas_tester(nullptr);
template <> MlasQgemmTest<uint8_t, uint8_t, int32_t, false, false>* MlasTestFixture<MlasQgemmTest<uint8_t, uint8_t, int32_t, false, false>>::mlas_tester(nullptr);
template <> MlasQgemmTest<uint8_t, uint8_t, int32_t, false, true>* MlasTestFixture<MlasQgemmTest<uint8_t, uint8_t, int32_t, false, true>>::mlas_tester(nullptr);
template <> MlasQgemmTest<uint8_t, uint8_t, int32_t, true, false>* MlasTestFixture<MlasQgemmTest<uint8_t, uint8_t, int32_t, true, false>>::mlas_tester(nullptr);
template <> MlasQgemmTest<uint8_t, uint8_t, int32_t, true, true>* MlasTestFixture<MlasQgemmTest<uint8_t, uint8_t, int32_t, true, true>>::mlas_tester(nullptr);
template <> MlasQgemmTest<uint8_t, int8_t, float, false, false>* MlasTestFixture<MlasQgemmTest<uint8_t, int8_t, float, false, false>>::mlas_tester(nullptr);
template <> MlasQgemmTest<uint8_t, int8_t, float, false, true>* MlasTestFixture<MlasQgemmTest<uint8_t, int8_t, float, false, true>>::mlas_tester(nullptr);
template <> MlasQgemmTest<uint8_t, int8_t, float, true, false>* MlasTestFixture<MlasQgemmTest<uint8_t, int8_t, float, true, false>>::mlas_tester(nullptr);
template <> MlasQgemmTest<uint8_t, int8_t, float, true, true>* MlasTestFixture<MlasQgemmTest<uint8_t, int8_t, float, true, true>>::mlas_tester(nullptr);
template <> MlasQgemmTest<int8_t, int8_t, float, false, false>* MlasTestFixture<MlasQgemmTest<int8_t, int8_t, float, false, false>>::mlas_tester(nullptr);
template <> MlasQgemmTest<int8_t, int8_t, float, false, true>* MlasTestFixture<MlasQgemmTest<int8_t, int8_t, float, false, true>>::mlas_tester(nullptr);
template <> MlasQgemmTest<int8_t, int8_t, float, true, false>* MlasTestFixture<MlasQgemmTest<int8_t, int8_t, float, true, false>>::mlas_tester(nullptr);
template <> MlasQgemmTest<int8_t, int8_t, float, true, true>* MlasTestFixture<MlasQgemmTest<int8_t, int8_t, float, true, true>>::mlas_tester(nullptr);
template <> MlasQgemmTest<uint8_t, uint8_t, float, false, false>* MlasTestFixture<MlasQgemmTest<uint8_t, uint8_t, float, false, false>>::mlas_tester(nullptr);
template <> MlasQgemmTest<uint8_t, uint8_t, float, false, true>* MlasTestFixture<MlasQgemmTest<uint8_t, uint8_t, float, false, true>>::mlas_tester(nullptr);
template <> MlasQgemmTest<uint8_t, uint8_t, float, true, false>* MlasTestFixture<MlasQgemmTest<uint8_t, uint8_t, float, true, false>>::mlas_tester(nullptr);
template <> MlasQgemmTest<uint8_t, uint8_t, float, true, true>* MlasTestFixture<MlasQgemmTest<uint8_t, uint8_t, float, true, true>>::mlas_tester(nullptr);
static size_t QGemmRegistLongExecute() {
size_t count = 0;
count += MlasLongExecuteTests<MlasQgemmTest<uint8_t, int8_t, int32_t, false, false>>::RegisterLongExecute();
count += MlasLongExecuteTests<MlasQgemmTest<uint8_t, int8_t, int32_t, true, false>>::RegisterLongExecute();
count += MlasLongExecuteTests<MlasQgemmTest<uint8_t, uint8_t, int32_t, false, false>>::RegisterLongExecute();
count += MlasLongExecuteTests<MlasQgemmTest<uint8_t, uint8_t, int32_t, true, false>>::RegisterLongExecute();
count += MlasLongExecuteTests<MlasQgemmTest<int8_t, int8_t, int32_t, false, false>>::RegisterLongExecute();
count += MlasLongExecuteTests<MlasQgemmTest<int8_t, int8_t, int32_t, true, false>>::RegisterLongExecute();
if (GetMlasThreadPool() != nullptr) {
count += MlasLongExecuteTests<MlasQgemmTest<uint8_t, int8_t, int32_t, false, true>>::RegisterLongExecute();
count += MlasLongExecuteTests<MlasQgemmTest<uint8_t, int8_t, int32_t, true, true>>::RegisterLongExecute();
count += MlasLongExecuteTests<MlasQgemmTest<uint8_t, uint8_t, int32_t, false, true>>::RegisterLongExecute();
count += MlasLongExecuteTests<MlasQgemmTest<uint8_t, uint8_t, int32_t, true, true>>::RegisterLongExecute();
count += MlasLongExecuteTests<MlasQgemmTest<int8_t, int8_t, int32_t, false, true>>::RegisterLongExecute();
count += MlasLongExecuteTests<MlasQgemmTest<int8_t, int8_t, int32_t, true, true>>::RegisterLongExecute();
}
return count;
}
static size_t QGemmRegistShortExecute() {
size_t count = 0;
count += QgemmShortExecuteTest<uint8_t, int8_t, float, false, false>::RegisterShortExecuteTests();
count += QgemmShortExecuteTest<uint8_t, uint8_t, float, false, false>::RegisterShortExecuteTests();
count += QgemmShortExecuteTest<uint8_t, int8_t, int32_t, false, false>::RegisterShortExecuteTests();
count += QgemmShortExecuteTest<uint8_t, uint8_t, int32_t, false, false>::RegisterShortExecuteTests();
count += QgemmShortExecuteTest<int8_t, int8_t, float, false, false>::RegisterShortExecuteTests();
count += QgemmShortExecuteTest<int8_t, int8_t, int32_t, false, false>::RegisterShortExecuteTests();
if (MlasGemmPackBSize(128, 128, false /*AIsSigned*/, false /*BIsSigned*/) > 0) {
// QGEMM U8U8=float packed tests
count += QgemmShortExecuteTest<uint8_t, uint8_t, float, true, false>::RegisterShortExecuteTests();
// QGEMM U8U8=int32_t packed tests
count += QgemmShortExecuteTest<uint8_t, uint8_t, int32_t, true, false>::RegisterShortExecuteTests();
}
if (MlasGemmPackBSize(128, 128, false /*AIsSigned*/, true /*BIsSigned*/) > 0) {
// QGEMM U8S8=float packed tests
count += QgemmShortExecuteTest<uint8_t, int8_t, float, true, false>::RegisterShortExecuteTests();
// QGEMM U8S8=int32_t packed tests
count += QgemmShortExecuteTest<uint8_t, int8_t, int32_t, true, false>::RegisterShortExecuteTests();
}
if (MlasGemmPackBSize(128, 128, true /*AIsSigned*/, true /*BIsSigned*/) > 0) {
// QGEMM U8S8=float packed tests
count += QgemmShortExecuteTest<int8_t, int8_t, float, true, false>::RegisterShortExecuteTests();
// QGEMM U8S8=int32_t packed tests
count += QgemmShortExecuteTest<int8_t, int8_t, int32_t, true, false>::RegisterShortExecuteTests();
}
if (GetMlasThreadPool() != nullptr) {
count += QgemmShortExecuteTest<uint8_t, int8_t, float, false, true>::RegisterShortExecuteTests();
count += QgemmShortExecuteTest<uint8_t, uint8_t, float, false, true>::RegisterShortExecuteTests();
count += QgemmShortExecuteTest<uint8_t, int8_t, int32_t, false, true>::RegisterShortExecuteTests();
count += QgemmShortExecuteTest<uint8_t, uint8_t, int32_t, false, true>::RegisterShortExecuteTests();
count += QgemmShortExecuteTest<int8_t, int8_t, float, false, true>::RegisterShortExecuteTests();
count += QgemmShortExecuteTest<int8_t, int8_t, int32_t, false, true>::RegisterShortExecuteTests();
if (MlasGemmPackBSize(128, 128, false /*AIsSigned*/, false /*BIsSigned*/) > 0) {
count += QgemmShortExecuteTest<uint8_t, uint8_t, float, true, true>::RegisterShortExecuteTests();
count += QgemmShortExecuteTest<uint8_t, uint8_t, int32_t, true, true>::RegisterShortExecuteTests();
}
if (MlasGemmPackBSize(128, 128, false /*AIsSigned*/, true /*BIsSigned*/) > 0) {
count += QgemmShortExecuteTest<uint8_t, int8_t, float, true, true>::RegisterShortExecuteTests();
count += QgemmShortExecuteTest<uint8_t, int8_t, int32_t, true, true>::RegisterShortExecuteTests();
}
if (MlasGemmPackBSize(128, 128, true /*AIsSigned*/, true /*BIsSigned*/) > 0) {
count += QgemmShortExecuteTest<int8_t, int8_t, float, true, true>::RegisterShortExecuteTests();
count += QgemmShortExecuteTest<int8_t, int8_t, int32_t, true, true>::RegisterShortExecuteTests();
}
}
return count;
}
static UNUSED_VARIABLE bool added_to_main = AddTestRegister([](bool is_short_execute) {
return is_short_execute ? QGemmRegistShortExecute() : QGemmRegistLongExecute();
});
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 3,662
|
var _ = require('underscore');
var extend = require('backbone').Model.extend;
/**
* Expose.
*/
module.exports = BaseApplication;
/**
* Risotto.Application is the base Application
*/
function BaseApplication(){}
_.extend( BaseApplication.prototype, {
/**
* Default title.
*/
title: 'Risotto Application',
/**
* Handle authorization errors.
*/
onAuthorizationError: function*(koaContext, next){
},
/**
* Handle generic errors.
*/
onError: function*(koaContext, next, error){
if (Risotto.devMode){
koaContext.response.body = '<pre>' + error.stack + '</pre>';
}
Risotto.logError(error);
},
/**
* Handle not found errors.
*/
onNotFoundError: function*(koaContext, next){
}
});
/**
* Make it extendable
*/
BaseApplication.extend = extend;
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 3,333
|
using System;
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;
using System.Web;
namespace BusyFriend.Models
{
public class BusyFriendContext : DbContext
{
// You can add custom code to this file. Changes will not be overwritten.
//
// If you want Entity Framework to drop and regenerate your database
// automatically whenever you change your model schema, please use data migrations.
// For more information refer to the documentation:
// http://msdn.microsoft.com/en-us/data/jj591621.aspx
public BusyFriendContext() : base("name=BusyFriendContext")
{
}
public System.Data.Entity.DbSet<BusyFriend.Models.Friend> Friends { get; set; }
}
}
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 4,863
|
HomeHealthLong Queues And Dearth Of Covid-19 Jabs In Nepal
Long Queues And Dearth Of Covid-19 Jabs In Nepal
July 30, 2021 zenger.news Health Comments Off on Long Queues And Dearth Of Covid-19 Jabs In Nepal
KATHMANDU, Nepal — Hundreds of Nepalese were observed queuing in serpentine lines facing the scorching sun and uncertainty over the chance to get the jab outside vaccination centers on the morning of July 30.
As per the latest data, Nepal has 690,642 confirmed Covid-19 cases and has experienced 9,807 deaths. As many as 1.4 million Nepalese were vaccinated with the first dose. However, later Nepal experienced a shortage in doses.
As per Amnesty International, only 2.4 percent of the total population of about 30 million is fully vaccinated.
The visuals of long lines have become commonplace throughout all vaccine centers since the second wave of Covid-19 struck the country. Lucky ones get jabs, while some are turned away due to a shortage of vaccines.
Saroj Shrestha was one of the denizens of Kathmandu (capital of Nepal), who was turned away from the gate of a vaccination center on July 30 after the vaccine quota for the day got exhausted.
"They turned me out from the gate, citing the tokens for vaccines had been distributed for the day," Shrestha said.
"I have to come the next day, that too at around 6 am to ensure my shots as they would distribute the token for the day from 8 am."
Despite the latest purchase of four million jabs from China, the arduous wait for it increases frustration amongst people getting a vaccination.
Nepal started its immunization campaign in January after receiving donations of vaccines from India.
The United Nations Children's Emergency Fund (UNICEF) tweeted about the arrival of vaccines from the US on July 12 on their official Twitter handle.
"Over 1.5 million #COVID19 vaccines landed in #Nepal this morning," the tweet read.
"The vaccines, donated by US @statedept to COVAX arrive amidst the ruins of the recent wave."
"Just 4 percent in Nepal are fully vaccinated. To prevent more heartbreak, other countries must #DonateDosesNow."
As the nation is facing the third wave of the pandemic, queues outside vaccination centers carry the potential to fuel the infection as all health protocol measures are flouted.
In order to reduce the influx of people to major hospitals, the government is now conducting inoculation drives at local levels also. Still, it has become hard for local bodies to manage the lines.
The local bodies have claimed urgency among people. The communication gap between authorities and people has fueled the confusion in the current situation.
"Standards fixed by the state that people above 54 years need not stand in line for jabs are being followed," Keshav Thapa, a local body representative of Nagarjun Municipality in capital Kathmandu, said.
"However, for others, we have separated two lines based on gender outside and inside the vaccination center."
"There has been confusion, and people react to it. Most people think that they should get the jabs as they reach the vaccination centers."
"These vaccines are meant to avert the severe impact of Covid-19, we have been requesting them to maintain distance and stand in queues, but they want the vaccine to be administered immediately and quickly, be the first amongst others," Thapa said.
Thapa said that the influx of people from Kathmandu Metropolitan City (KMC) areas has also played a role in increasing footfalls in vaccination centers like in his area, which lies on the outskirts of the capital.
"People who have flocked to this vaccination center in Nagarjun Municipality-04 lies at a closer distance with the Kathmandu Metropolitan."
"People from Kathmandu Metropolitan stand higher than those residing here at Nagarjun Municipality. That's why the number of people standing in a queue is high," Thapa said.
But not all those who flocked to the outskirts for vaccination get jabs, and some have to return home in anger despite standing in line for hours or even the whole day, as per Thapa.
"When I was close to the entrance, they asked 100 people to queue for the vaccine. Now they are turning out from here saying doses are now out."
"I stood in the queue since 10 am, and it's been almost evening, and they also won't give us the coupon for the next day," Namita Shrestha, a denizen from Kathmandu Metropolitan who had to wait for hours and turned out from the vaccination center, said.
Though Nepal earlier had introduced an online registration system to book slots for vaccines, call for vaccines based on early bird booking is yet to start. But the government has been claiming that easy and equal access to vaccines has always been its priority which has failed in terms of execution.
(With inputs from ANI)
Edited by Amrita Das and Pallavi Mehra
The post Long Queues And Dearth Of Covid-19 Jabs In Nepal appeared first on Zenger News.
US House Passes Bill Prohibiting Map That Shows Taiwan As Chinese Territory
VIDEO: Flattop Flagship: UK's Queen Elizabeth Leads War Games Fleet With Indian Navy
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 80
|
That cold chill up your spine?
Yes it's that time of the year again.
NaNoWriMo is upon us at any moment; in fact by the time you read this it might already be too late, but probably not.
Writing 50k in 30 days is not as hard as one might think but that is not to say that it's a doddle. Either way stick with me and we'll get through this. So assuming you know what you're want to write and you are motivated to win let's do this thing shall we?
Myk Pilgrim is almost exactly just like you but with much less hair. He loves to make things up and write them down. This is his blog.
Do We Like Too Much?
|
{
"redpajama_set_name": "RedPajamaC4"
}
| 1,310
|
Captain Ivory at Bona Sera in Ypsilanti, MI on 22-Apr-2016
Blues, Concert Reviews, Rock, US
Fabrizio Grossi of Supersonic Blues Machine
Grammy Nominees Bring Me The Horizon Release "Mother Tongue" Video
Lose Control by Chantel McGregor
In Memoriam: Wayne Static (Nov 04, 1965 - Nov 01, 2014) w/Final Interview
Saint Agnes Thinks it's "A Beatiful Day for Murder"
Beth Hart at O2 Academy in Leeds, UK on 09-Apr-2015
"Do you feel alright?" Beth asks a packed out crowd at the O2 Academy in Leeds to huge cheers from the audience.
Beth Hart is no stranger to this venue having played here two years previously. This time around, Beth brings with her in tow her superb new album Better Than Home.
With Hart, you get a versatile, genre-defying powerhouse of a vocalist and one of the great female singers of our time. She has real life experience in abundance which is reflected in her incredible songwriting.
It's difficult to pigeon-hole Beth into a single genre of music. Whether she is grappling with the soul greats, singing the blues, delivering heartfelt ballads from behind her grand piano or just plain old rocking out, Hart is the complete package.
Tonight she opens the show with a jazz number, a cover of Melody Gardot's "If I Tell You I Love You." Hart takes to her piano as she delves deep into her extensive back catalog with the likes of "Skin" from her 1999 album Screamin' For My Supper and the brilliant rendition of "Good as it Gets." "Delicious Surprise" has the crowd clapping along, Beth sounding phenomenal.
The band turns it up a notch as Hart channels her inner rock goddess and launches into a raunchy rendition of "For My Friends," as she dances and gyrates about on stage. The band sounds tight, including some great twin guitar harmonies coming from Jon Nichols and P.J Barth.
A healthy selection of tracks from Hart's latest album received an airing tonight including the likes of happy song "Might As Well Smile" and "Tell Her You Belong To Me," which Hart wrote about her father. However, it is "Trouble" which gets the crowd going; Hart has the them participate in some playful call and response.
Deviating from the set list, Hart decides it would be a good time to try "Tell 'Em To Hold On." As her band members are all seasoned musicians, they roll with it and sound amazing doing it while she sang her heart out.
The multi-talented Hart pulls up a chair at the front of the stage alongside guitarist Jon Nichols, as she is handed an acoustic guitar and takes us through "Get Your Shit Together", 'By Her" and the sublime "St. Teresa".
Turn back the clocks to 2012 and Hart received critical acclaim for her performance of "I'd Rather Go Blind" with the legendary Jeff Beck at the Kennedy Center Honors. However, tonight Hart gives us a spellbinding rendition of another Etta James classic in the shape of "Somethings Got a Hold On Me", leaving the Leeds crowd wanting more.
Only too happy to oblige, Hart returned to her piano, and gave a beautiful rendition of "Mama This One's For You" dedicated to her mother who was celebrating her 79th birthday, before closing out the show with "Better Than Home" to rapturous applause.
One thing to note with Hart is that no two shows are alike; her set list changes almost every night. There aren't too many artists who do that these days, and it's certainly refreshing.
You won't see many better shows this year. Beth Hart came, she sang, she conquered Leeds.
Mascot Label Group
Beth HartBetter Than HomeLeedsO2 AcademyUK
Get Into The Pit 2015
Paramore at Beacon Theatre in NYC on 06-May-2015
Def Leppard at the Metro Radio Arena in Newcastle
Biffy Clyro at Scarborough Open Air Theatre
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 7,450
|
Start Your Web3 Journey with Embark's New Bounties
on Community
Status Open Bounty was the first bounties project on Ethereum and, within weeks of launching it, we could see the potential in bounties controlled by smart contracts. Bounties work particularly well with trunk-based development, especially for fixing small bugs.
However, extending this to the other kinds of work required in a full DAO is tough, so we prioritised a native Status desktop app and moved most of the Open Bounty team onto the new endeavour. We felt that this was more important than bounties given our initial findings and - looking at the progress on our desktop app - we remain satisfied with this strategic decision.
It has meant, however, that Open Bounty has been in feature freeze for the last few months. In addition to this pause, we feel strongly - given how nascent the cryptocurrency community still is - that teams should not be duplicating work and re-inventing the wheel. The Gitcoin team, in particular, has been doing an amazing job with bounties. It is, after all, their primary focus.
Embark into The Ether
By working together with Gitcoin, we feel we can encourage more people to experience for themselves the benefits and autonomy of open source work, whatever their profession. To mark the occasion, we are launching the next 60 bounties we need solved through Gitcoin. Most of our bounties will be for Embark, which is written in Javascript and Nodejs. This fits in well with the active JS community Gitcoin has already built.
We're also all about those extra incentives, so if all 60 bounties are completed during the month of September, we will issue an SNT bonus on every completed bounty. If all 60 bounties are not completed, no bonus will be issued. In the spirit of open source collaboration, invite your friends, co-workers, sisters, brothers, aunties, uncles and whomever you think can complete some bounties and get to work with them on a #buidling a better, more participatory, web!
We will release 15 bounties per week for the first three weeks and the final 10 bounties in the last week of September.
Embark has been designed to be fully modular and extensible, so is able to support features that no other framework can - like plugging directly into other parts of the Ethereum stack like Whisper and Swarm, as well as supporting multiple smart contract languages like Vyper and Bamboo. All of this occurs through Embark plugins - simple javascript files that can extend vastly the capabilities of the core engine.
We can't wait to watch the open source community get stuck in and make Embark even more extendable, powerful, and easy-to-use than it already is.
The future of Ethereum depends deeply on our ability to collaborate. Success, for any decentralized network, will be defined not only by technology, but also by people and the communities of practice we can form. Lasting change depends on our ability to use new technologies in beneficial, equitable, and humane ways. We feel that the Gitcoin team understands this as well as we do, and we are excited for this next experiment in collaboration and crypto-community beyond the boundaries of traditional companies.
There's a #gitcoin chat now open in Status, so join us in there (by opening this link with Status) to discuss all the excitement. This is also the first step towards crafting conversational and social spaces that work together with the products, services, or content they are linked to, as a fundamental part of the architecture of web3 itself.
The first set of bounties will be available on Gitcoin's Issue Explorer on Monday, 3rd September.
Learn more about Embark here!
Tokenized Communities: The Beginning
Crypto has already revolutionized finance. Crypto is revolutionizing art. And crypto is revolutionizing community. We already know about many ways in which Web3 is pushing
Status @ EthCC: Hot Sticker Summer
To celebrate the return of in person Ethereum events, we're introducing the EthCC Sticker Design Competition: Hot Sticker Summer 🔥 It all starts today, Thursday June 17th 2021 and it's got a prize pool of over $1,200 in SNT & Keycards for winning submissions.
Re-Imagine Social at 0x Hack
Join us May 14th - June 2 at 0x Hack, the borderless, open & decentralized online coding marathon, as we give talks, sit on panels, and host bounties to re-imagine and rebuild social networks that take back control from the Web2 giants.
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 8,619
|
{"url":"https:\/\/www.acmicpc.net\/problem\/9982","text":"\uc2dc\uac04 \uc81c\ud55c \uba54\ubaa8\ub9ac \uc81c\ud55c \uc81c\ucd9c \uc815\ub2f5 \ub9de\uc740 \uc0ac\ub78c \uc815\ub2f5 \ube44\uc728\n1 \ucd08 128 MB 0 0 0 0.000%\n\n## \ubb38\uc81c\n\n\"Uncle Jacques, \" you ask, \"What's for dinner?\"\n\n\"Ask me again in 10 minutes, \" Uncle Jacques replies, eyeing the weary-looking frog sitting on the shoulder of Interstate 10, in front of your dilapidated shack.\n\nYou notice the potential roadkill as it begins its journey across the vehicle-laden road. You want to know if you should begin boiling a pot of water in anticipation of frog legs for dinner or warm up the leftover possum. You fire up your Swamp 'Puter XL2 and quickly write a program to determine if it is possible for the frog to make it across the road or if it will be hit by a vehicle.\n\nExamining the patch of road in front of your shack, you notice the lanes and shoulders resemble a 10 X 10 grid of squares (shown below). You also notice that the way the frog and the vehicles are moving can be described in \"turns\". To determine if the frog makes it across the road, you quickly devise a set of rules:\n\n1. At the onset of a run, the frog can start in any square on row 0 (the starting shoulder).\n2. At the onset of a run, each vehicle will occupy a square in any column, but only in rows 1-8 (the lanes).\n3. Each turn will consist of two steps:\n\u2022 First, the frog will always remain in the same column and move one row down, towards row 9, his destination (he's not the smartest frog in the world). Next, all the vehicles move (at the same time), n squares left or right, depending on which row (lane) they are in, where n is their speed (given in the input). To simulate more approaching vehicles, if a vehicle moves off the grid, it instead \"wraps around\" and appears from the opposite side. Ex: In the grid below, if a vehicle would move to occupy column -1, it would instead occupy column 9 (column -2 would instead occupy column 8, etc.). Also, if a vehicle would move to occupy column 10, it would instead occupy column 0 (column 11 would instead occupy column 1, etc.).\n\u2022 Column\n0123456789\n----------\nR 0| |<- The frog can start in any square on row 0\no 1| |(shoulder)\nw 2| \/___ |\n3| \\ |cars in rows (lanes) 1-4 move left, or\n4| |towards column 0\n5| |\n6| ___\\ |cars in rows (lanes) 5-8 move right, or\n7| \/ |towards column 9\n8| |\n9| |<- The destination row (shoulder) of the frog\n----------\n4. The frog will succeed in crossing the interstate for a run if it can reach row 9 (without becoming roadkill) after a series of turns starting in ANY column on row 0 (he's not the dumbest frog in the world, either).\n5. The frog will become roadkill if at any point it occupies the same square as a vehicle. This includes:\n\u2022 The frog moving into a square a vehicle occupies, or\n\u2022 A vehicle \"running over\" the frog by moving over or into a square the frog occupies.\n\n## \uc785\ub825\n\nInput to this problem will consist of a (non-empty) series of up to 100 data sets. Each data set will describe the starting conditions of the interstate for a run and will be formatted according to the following description. There will be no blank lines separating data sets.\n\n1. Start line - A single line, \"START\"\n2. The next 8 lines will represent rows 1-8 (the \"lanes\" of the interstate), starting with row 1. Each line will consist of 10 integers, separated by single spaces. Each integer will represent a column for that row and will be either:\n\u2022 0, representing no vehicle occupying that square, or a non-zero integer N in the range 1 <= N <= 9, representing a vehicle is occupying that square and the non-zero integer is its speed. NOTE: The given speeds will NOT result in vehicles moving over other vehicles or into a square occupied by another vehicle (no accidents), since all the vehicles move at the same time and all vehicles on a given row are guaranteed to move at the same speed.\n3. End line - A single line, \"END\"\n\n## \ucd9c\ub825\n\nOutput for each data set will be exactly one line of output. The line will either be \"LEFTOVER POSSUM\" or \"FROGGER\" (both all caps with no whitespace leading or following).\n\n\"LEFTOVER POSSUM\" will appear if the frog can make it safely (without becoming roadkill) across the interstate after a series of turns starting in ANY column on row 0.\n\n\"FROGGER\" will be output for a data set if it fails to meet the criteria for a \"LEFTOVER POSSUM\" line.\n\n## \uc608\uc81c \uc785\ub825\n\nSTART\n3 0 0 0 0 3 0 0 0 3\n1 0 0 0 1 0 0 0 0 0\n4 0 0 0 0 0 0 4 0 0\n0 0 2 0 0 0 0 0 0 2\n5 0 0 0 0 0 0 0 0 0\n0 2 0 0 0 0 0 2 0 2\n0 0 0 4 0 0 0 0 0 0\n0 2 0 0 0 0 0 0 0 0\nEND\nSTART\n9 9 9 9 9 9 9 9 9 9\n0 0 0 0 0 0 0 0 0 0\n0 0 0 0 0 0 0 0 0 0\n0 0 0 0 0 0 0 0 0 0\n0 0 0 0 0 0 0 0 0 0\n0 0 0 0 0 0 0 0 0 0\n0 0 0 0 0 0 0 0 0 0\n0 0 0 0 0 0 0 0 0 0\nEND\nSTART\n1 0 0 0 0 0 0 0 0 0\n0 1 0 0 0 0 0 0 0 0\n0 0 1 0 0 0 0 0 0 0\n0 0 0 1 0 0 0 0 0 0\n0 0 0 1 0 0 0 0 0 0\n0 0 0 0 1 0 0 0 0 0\n0 0 0 0 0 0 0 0 0 0\n0 0 0 0 0 0 0 0 0 0\nEND\n\n\n## \uc608\uc81c \ucd9c\ub825\n\nFROGGER\nFROGGER\nLEFTOVER POSSUM","date":"2017-03-27 05:11:59","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.24081358313560486, \"perplexity\": 496.2377862024069}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2017-13\/segments\/1490218189403.13\/warc\/CC-MAIN-20170322212949-00573-ip-10-233-31-227.ec2.internal.warc.gz\"}"}
| null | null |
Q: Show shared and unique characteristics in a data I have a dataset of OTUs (observations) and plant species. I want to visualize the shared and unique OTUs among the plant species.
Here is the a part of the data
.OTU.ID T..kraussiana R..venulosa T..africanum T..repens I..evansiana Z..capensis V..unguiculata E..cordatum
<chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 Glomus 6150 207 0 111 1144 945 9112 8862
2 Claroi~ 638 71 706 55 1415 510 10798 573
3 Glomus~ 9232 1757 269 1335 0 0 87 1210
4 Glomus~ 133 0 0 0 0 0 6764 3415
5 Glomus 5292 0 1997 0 16 857 66 6562
6 Glomus~ 1596 2675 1167 1800 27 552 0 21
7 Glomus~ 119 179 544 148 0 792 24967 2471
8 Glomus~ 10493 0 0 0 175 0 0 357
9 Glomus~ 4011 0 0 0 0 0 0 477
10 Glomus 2099 1012 15 902 0 726 0 28
A: vegan::rarecurve does the job.
library(vegan)
rarecurve(dat[-1], sample=min(rowSums(m)), col="#F48024", lwd=2,
main="This could be your title", cex=0.8)
Data:
dat <- read.table(header=TRUE, text="
.OTU.ID T..kraussiana R..venulosa T..africanum T..repens I..evansiana Z..capensis V..unguiculata E..cordatum
1 Glomus 6150 207 0 111 1144 945 9112 8862
2 Claroi~ 638 71 706 55 1415 510 10798 573
3 Glomus~ 9232 1757 269 1335 0 0 87 1210
4 Glomus~ 133 0 0 0 0 0 6764 3415
5 Glomus 5292 0 1997 0 16 857 66 6562
6 Glomus~ 1596 2675 1167 1800 27 552 0 21
7 Glomus~ 119 179 544 148 0 792 24967 2471
8 Glomus~ 10493 0 0 0 175 0 0 357
9 Glomus~ 4011 0 0 0 0 0 0 477
10 Glomus 2099 1012 15 902 0 726 0 28
")
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 9,852
|
Q: Match up two polygon layers in QGIS I have two polygon layers, one is from natural earth data (Administrative 1 division) and the other is a polygon layer with some historical regions in Europe. As you can see in the image attached there are some areas which still has the same boundary, but since these two layers come from different sources they don't match up, is there a way I can match this automatically in QGIS 3.10?
I tried the snap geometries to layer tool and it just didn't work. (either it caused QGIS to crash, or the output was just the same as the input).
A: You can also use the Move Feature(s) tool to move your polygons. Select the polygons you want to move, than activate the tool and move the selected polygons with the mouse. If you enabled snapping, you can snap the moved polygons to the other ones so that they overlap perfectly.
If not visible, active Advanced Digitizing Toolbar in Menu View / Toolbar.
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 543
|
from sqlalchemy import Column
from sqlalchemy.engine.reflection import Inspector
from sqlalchemy import ForeignKey
from sqlalchemy import Index
from sqlalchemy import MetaData
from sqlalchemy import String
from sqlalchemy import Table
def upgrade(migrate_engine):
"""Add service_uuid column to volumes."""
meta = MetaData(bind=migrate_engine)
Table('services', meta, autoload=True)
volumes = Table('volumes', meta, autoload=True)
if not hasattr(volumes.c, 'service_uuid'):
volumes.create_column(Column('service_uuid', String(36),
ForeignKey('services.uuid'),
nullable=True))
index_name = 'volumes_service_uuid_idx'
indexes = Inspector(migrate_engine).get_indexes('volumes')
if index_name not in (i['name'] for i in indexes):
volumes = Table('volumes', meta, autoload=True)
Index(index_name, volumes.c.service_uuid, volumes.c.deleted).create()
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 9,700
|
{"url":"https:\/\/mathematikoi.wordpress.com\/2008\/02\/28\/set-algebra-a-quick-reference-guide\/","text":"\u2022 ## Recent Comments\n\n bob on Proof that Humans Are\u00a0Evi\u2026 Mike on Viking Metal Meets Viking\u2026 sir on Set Algebra: A Quick Reference\u2026 Crystal on Proof that Humans Are\u00a0Evi\u2026 Felicis on Viking Metal Meets Viking\u2026\n\n## Set Algebra: A Quick Reference\u00a0Guide\n\n[The following set-algebra laws are \u201cborrowed\u201d, in the great tradition of the internet, from Wikipedia.\u00a0 If you would like more background on why these things work, the article goes into more detail, and of course has the complete lattice of links to accompany it.]\n\nI have a god-awful memory.\u00a0 So, I\u2019m constantly struggling to find some basic property of sets whenever I\u2019m trying to prove something that I KNOW would be easy, if I could just remember the damned law.\u00a0 Well, after scouring the net, Wikipedia\u2019s version turned out to be best \u2026 but I don\u2019t need all the exposition, just the laws spelled out, easy to see.\n\nLike to here it, here it go:\n\nPROPOSITION 1: For any sets A, B, and C, the following identities hold:\n\ncommutative laws:\n\n\u2022 $A cup B = B cup A,!$\n\u2022 $A cap B = B cap A,!$\nassociative laws:\n\n\u2022 $(A cup B) cup C = A cup (B cup C),!$\n\u2022 $(A cap B) cap C = A cap (B cap C),!$\ndistributive laws:\n\n\u2022 $A cup (B cap C) = (A cup B) cap (A cup C),!$\n\u2022 $A cap (B cup C) = (A cap B) cup (A cap C),!$\n\nPROPOSITION 2: For any subset A of universal set U, the following identities hold:\n\nidentity laws:\n\u2022 $A cup varnothing = A,!$\n\u2022 $A cap U = A,!$\ncomplement laws:\n\n\u2022 $A cup A^C = U,!$\n\u2022 $A cap A^C = varnothing,!$\n\nPROPOSITION 3: For any subsets A and B of a universal set U, the following identities hold:\n\nidempotent laws:\n\n\u2022 $A cup A = A,!$\n\u2022 $A cap A = A,!$\ndomination laws:\n\n\u2022 $A cup U = U,!$\n\u2022 $A cap varnothing = varnothing,!$\nabsorption laws:\n\n\u2022 $A cup (A cap B) = A,!$\n\u2022 $A cap (A cup B) = A,!$\n\nPROPOSITION 4: Let A and B be subsets of a universe U, then:\n\nDe Morgan\u2019s laws:\n\n\u2022 $(A cup B)^C = A^C cap B^C,!$\n\u2022 $(A cap B)^C = A^C cup B^C,!$\ndouble complement or Involution law:\n\n\u2022 $A^{CC} = A,!$\ncomplement laws for the universal set and the empty set:\n\n\u2022 $varnothing^C = U$\n\u2022 $U^C = varnothing$\n\nPROPOSITION 5: Let A and B be subsets of a universe U, then:\n\nuniqueness of complements:\n\n\u2022 If $A cup B = U,!$, and $A cap B = varnothing,!$, then $B = A^C,!$\n\nPROPOSITION 6: If A, B and C are sets then the following hold:\n\nreflexivity:\n\n\u2022 $A subseteq A,!$\nantisymmetry:\n\n\u2022 $A subseteq B,!$ and $B subseteq A,!$ if and only if $A = B,!$\ntransitivity:\n\n\u2022 If $A subseteq B,!$ and $B subseteq C,!$, then $A subseteq C,!$\n\nPROPOSITION 7: If A, B and C are subsets of a set S then the following hold:\n\nexistence of a least element and a greatest element:\n\n\u2022 $varnothing subseteq A subseteq S,!$\nexistence of joins:\n\n\u2022 $A subseteq A cup B,!$\n\u2022 If $A subseteq C,!$ and $B subseteq C,!$, then $A cup B subseteq C,!$\nexistence of meets:\n\n\u2022 $A cap B subseteq A,!$\n\u2022 If $C subseteq A,!$ and $C subseteq B,!$, then $C subseteq A cap B,!$\n\nPROPOSITION 8: For any two sets A and B, the following are equivalent:\n\n\u2022 $A subseteq B,!$\n\u2022 $A cap B = A,!$\n\u2022 $A cup B = B,!$\n\u2022 $A setminus B = varnothing$\n\u2022 $B^C subseteq A^C$\n\nPROPOSITION 9: For any universe U and subsets A, B, and C of U, the following identities hold:\n\n\u2022 $C setminus (A cap B) = (C setminus A) cup (C setminus B),!$\n\u2022 $C setminus (A cup B) = (C setminus A) cap (C setminus B),!$\n\u2022 $C setminus (B setminus A) = (A cap C)cup(C setminus B),!$\n\u2022 $(B setminus A) cap C = (B cap C) setminus A = B cap (C setminus A),!$\n\u2022 $(B setminus A) cup C = (B cup C) setminus (A setminus C),!$\n\u2022 $A setminus A = varnothing,!$\n\u2022 $varnothing setminus A = varnothing,!$\n\u2022 $A setminus varnothing = A,!$\n\u2022 $B setminus A = A^C cap B,!$\n\u2022 $(B setminus A)^C = A cup B^C,!$\n\u2022 $U setminus A = A^C,!$\n\u2022 $A setminus U = varnothing,!$","date":"2018-06-25 05:53:36","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 57, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8145982623100281, \"perplexity\": 1302.5608028876884}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.3, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2018-26\/segments\/1529267867493.99\/warc\/CC-MAIN-20180625053151-20180625073151-00340.warc.gz\"}"}
| null | null |
Q: open two window with tree panel using extjs I want to open two windows. Here is the code I am trying to do this with:
Ext.define('DTApp.view.MyViewport', {
extend: 'Ext.container.Viewport',
requires: [
'DTApp.Class_Util'
],
id: 'MainWindow',
autoScroll: true,
layout: {
type: 'border'
},
initComponent: function() {
var me = this;
Ext.applyIf(me, {
items: [
{
xtype: 'treepanel',
region: 'west',
split: false,
autoRender: true,
autoShow: true,
cls: 'detail-view + x-panel-header',
width: 170,
autoScroll: true,
resizable: true,
resizeHandles: 'e',
bodyPadding: '0 0 0 0',
animCollapse: true,
collapseFirst: true,
collapsed: false,
collapsible: true,
frameHeader: false,
title: 'Menu',
titleAlign: 'left',
titleCollapse: false,
columnLines: false,
deferRowRender: true,
forceFit: false,
hideHeaders: true,
store: 'MyJsonPTreeStore',
animate: true,
rootVisible: false,
singleExpand: false,
useArrows: true,
viewConfig: {
autoShow: true,
data: {
},
id: 'MainWindow_Left_Tree',
itemId: 'MainWindow_Left_Tree',
autoScroll: false,
resizable: false,
deferInitialRefresh: true,
loadMask: true,
preserveScrollOnRefresh: true,
enableTextSelection: false,
animate: true,
}
},
{
xtype: 'panel',
region: 'center',
id: 'MainWindow_Right_Panel',
itemId: 'MainWindow_Right_Panel',
autoScroll: true,
animCollapse: true,
collapsed: false,
collapsible: false,
header: false,
title: 'My Panel',
listeners: {
render: {
fn: me.onMainWindow_Right_PanelRender,
scope: me
}
}
}
]
});
me.callParent(arguments);
},
onMainWindow_Right_PanelRender: function(component, eOpts) {
}
});
When I try to open the second window, the tree panel on the first window is removed and in the second window, it shows double nodes and events are not working.
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 4,484
|
One More: The U's Bees
Continuum staff Summer 2014
Nestled in two corners of the University of Utah campus, a dozen hives are abuzz with a total of about 150,000 honeybees. In the rooftop garden just off the third floor of the J. Willard Marriott Library, the bees hover around flowers blooming near where about half the hives are housed on an adjacent balcony. The remaining hives are located on a balcony on the fourth floor of the Olpin Union Building, where visitors to the Crimson Room restaurant can see the bees through a locked sliding-glass door near the eatery's entrance. The honeybees all help pollinate not only the flowers but the organic vegetable gardens located on the campus.
"We wanted the hives to be somewhere visible and accessible," says Thomas Bench BS'13, who started the hives in 2012 when he was a student at the U. Bench, who graduated with a degree in environmental studies, had become interested in beekeeping and first installed a hive at his grandmother-in-law's house. He then decided it would be a good idea to see if beehives could be kept on the U campus so students as well as faculty and staff members could learn about beekeeping and the importance of bees to food supplies and the ecosystem, as well as the risks if bees disappear. Over the past 50 years, domesticated bee populations have decreased by about 50 percent due to factors including pesticide use and disease.
Bench sought the help of Chris Rodesch, a U adjunct associate professor of neurobiology who also happens to be Salt Lake County's bee inspector, and Amy Sibul, the Biology Department's community engaged learning coordinator. Bench then applied for and received an $1,800 grant from the U's Sustainable Campus Initiative Fund to get the project off the ground. The first U hives and honeybees were purchased in the spring of 2012. "We wheeled them through the Union Building after hours to the balcony," says Bench, who has been completing a management development program with Utah's Winder Farms and still leads the U's beekeeping efforts.
Other students began showing up for the weekly hive inspections, and the University of Utah Beekeepers Association was born. Local school groups also came to visit. Last year, the U beekeeping project expanded with the hives at the Marriott Library. And this year, the U beekeepers received an even larger grant from the Sustainable Campus Initiative Fund— $5,700—to buy more hives and bees for the two existing campus locations and to help the beekeepers participate in a NASA study on climate change. U student Stephen Stanko, a sophomore majoring in biology, is coordinating the involvement in the NASA project. Using a special scale, the students will be weighing the hives. By combining that data with notes on local weather patterns, the timing of nectar flows can be determined, indicating when local flora are coming into bloom and if those times are occurring earlier in the season because of possible global warming.
Meanwhile, the U beekeepers plan to start selling honey from the hives this summer at the campus Farmers Market. "Grocery store honey just isn't the same," says Kirstie Kandaris, a U senior majoring in biology who was one of the first volunteers to help with the beehives. The U Beekeepers Association also is offering discounted hives and bee packages to students as well as faculty and staff members who want to start beekeeping in their home backyards. Bench and the other volunteers will be holding informal beekeeping classes at the Union Building hives at 2 p.m. on the first and third Saturdays of each month over the summer for anyone who is interested in learning. For more information, email uofubeekeepers@gmail.com.
One thought on "One More: The U's Bees"
A super informative piece! Definitely got to check some of these places you for date night with my hubby:)
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 3,411
|
Північно-Македонський план нумерації телефонів — це система, яка використовується для призначення телефонних номерів у Північній Македонії. Він регулюється Агентством з електронних комунікацій (АЕК), відповідальним за телекомунікації .
Код країни Північної Македонії — +389. Коди територій завжди мають набиратися, навіть у межах країни, через велику кількість операторів фіксованого та мобільного зв'язку.
Наприклад, для виклику телефонів у Скоп'є необхідно набрати:
02 ххххххх (у Скоп'є)
02 ххххххх (у межах Північної Македонії)
+389 2 xxxxxxx (за межами Північної Македонії)
Формати нумерації для Північної Македонії:
+389 2 xxxxxxx географічні номери — Скоп'є
+389 3x xxxxxx географічні номери — східна область
+389 4x xxxxxx географічні номери — центральна та західна області
+389 5xx xxxxx платні номери
+389 7x xxxxxx мобільні номери
+389 8xx xxxxx безкоштовний виклик
1xx — загальний формат короткого коду (наприклад, 112) для аварійних ситуацій; формат 10xx — це доступ для оператора.
Для дзвінків з Македонії префікс для міжнародних дзвінків — 00.
Коди міських телефонів
Номери спеціальних служб
План нумерації в колишній Югославії
Під час існування Югославії македонські коди територій починалися з 9. 1 жовтня 1993 року до коду Македонію відокремили від коду +38 і додали «9» до коду країни (+389). Між 2000 та 2001 роками «9» у кодах територій замінили на 3 або 4. Код території для Скоп'є були змінені з (091) на (02). У 2003 році всі скопські телефонні номери були змінені з 6 до 7 цифр, з додаванням додаткової цифри перед початком номера.
Примітки
Джерела
Список розподілів МСЕ
Економіка Північної Македонії
Телефонна номерація
|
{
"redpajama_set_name": "RedPajamaWikipedia"
}
| 3,944
|
1575 Children Residing In Shelter Homes Were Victims Of Some Form Of Sexual Abuse: Govt Tells SC
The Logical Indian Crew India
August 17th, 2018 / 12:30 PM
Image Credits: Patrika
The Centre on August 14 told the Supreme Court that a survey of 9,589 child care institutions across India revealed that 1,575 inmates suffered sexual abuse and exploitation before they were rescued and placed in these homes. Reportedly, 189 inmates were victims of child pornography as well.
According to the Times Of India, the government to a bench headed by Justice Madan B Lokur, said, "Information regarding registration of cases under the Protection of Children against Sexual Offence (POCSO) Act, 2012, has been collected. The number of children found to be victims of sexual abuse is 1,575 (girls-1,286 and boys-286) and victims of pornography to be 189 (girls-40 and boys-149)."
What does the report on Child Care Institutions state?
Appearing for the Centre, Additional Solicitor General Pinky Anand told the apex court that the study was conducted between December 2015 to March 2017, the reports of which have been submitted to the Union Ministry of social justice. According to The Hindu, the survey was conducted by Childline India Foundation, the reports of which were sent to the states concerned. The report also found out that Karnataka, Telangana and Chhattisgarh have the highest rate of child pornography victims in the country.
The survey in order to make the report studied various categories of children which included orphans, abandoned, sexually abused, victims of child pornography, minors who were trafficked for domestic violence as well as those affected by natural disasters or even HIV and AIDS.
The report also found that the total number of children found to be in need of care and protection was 3,68,267 out of which 1,98,449 were boys, 1,69,726 were girls and 92 were transgenders. Anand also said that a national social audit of child care homes is being conducted which is set to be completed in October 2018, while 3500 shelter homes for children has already been audited. It also gave a list of both registered and unregistered child care institutions under the Juvenile Justice Act 2015. It further mentioned that there are 33% unregistered child care institutions in India.
Judges question ASG
Questioning the ASG, Justice Madan B. Lokur said, "What have you done about these kids? If there are 1,575 boys and girls who suffered sexual abuse, what have you done?" The bench also said that the audit should not only be about collecting data but an "active evaluation" of the children's conditions in these institutions.
According to The Hindu, Justice Deepak Gupta, who was one of the three judges, said, "Talk to the children. See if they are happy or unhappy."
The Supreme Court earlier on August 7 had observed that women and girls were getting raped "left right and centre" in India and had also stressed upon the fact that action needs to be taken to stop the menace. Reportedly, it had also held the Bihar state government accountable for funding Balika Grih without verifying its credentials.
The Muzaffarpur shelter case
In July 2018, it was revealed that 34 girls in a state-run shelter home in Bihar's Muzaffarpur were subjected to continued rape, torture and assault by the staff of the shelter home. One victim, who was rescued, claimed that most of the girls who were residing there were raped by the journalist/owner Brajesh Thakur, along with Vineet Kumar, a member of a child welfare committee, who ran the NGO.
Reportedly, it has also been revealed that Thakur was getting 40 lakh per annum from the state government to run the shelter. He was given a tender to run an old age home and a Juvenile home as well. An NGO ran the 'Balika Grih' shelter home called Sewa Sankalp Evam Vikas Samiti. As per reports, Brajesh was given the tender to run the shelter home by the state Social Welfare Department.
Also Read: Ten States Including Bihar & Uttar Pradesh Denied Audit Of Their Shelter Homes
Written by : Sromona Bhattacharyya
Tamil Nadu Plastic Ban: Some Welcome It With Innovative Ideas, Some With Protests
TN: Surprise Shelter Home Visit Unravels Truth Of Rampant Sexual Abuse
To Prevent Entry Of Foreign Nationals With History Of Child Sexual Abuse, Govt To Revise Visa Form
Only 54 Of 2874 Shelter Homes In India Follow Norms, Majority Violate Rules: Report
Chennai: 44 Children Rescued After Crackdown On Orphanage Following Sexual Abuse Allegations
Bihar Cancels Licenses Of 50 NGOs Running Shelter Homes, Social Welfare Dept To Take Over
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 5,623
|
This list of biochemistry awards is an index to articles on notable awards for contributions to biochemistry, the study of chemical processes within and relating to living organisms. The list gives the country of the organization that gives the award, but the award may not be limited to people from that country.
Awards
See also
Lists of awards
Lists of science and technology awards
List of biology awards
List of chemistry awards
References
biochemistry
|
{
"redpajama_set_name": "RedPajamaWikipedia"
}
| 3,038
|
package category;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import DBConfig.BaseJDBC;
public class CategoryDAO {
static Connection con;
ResultSet rs = null;
Statement stmt = null;
String testSql = null;
public Map<String, CategoryVO> getCategoryResults()throws Exception{
Map<String, CategoryVO> rows = new HashMap<String, CategoryVO>();
String query = "SELECT * FROM category";
try {
con = (Connection) BaseJDBC.getConnPool();
System.out.println("SQl "+query);
stmt = con.createStatement();
rs = stmt.executeQuery(query);
System.out.println("SQl Executed inside getCategoryResults. . .");
while (rs.next()){
System.out.println("Result Set : "+rs.getString("NAME"));
CategoryVO category = new CategoryVO();
category.setId(Integer.parseInt(rs.getString("ID")));
category.setName(rs.getString("NAME"));
rows.put(rs.getString("ID"), category);
}
} catch (SQLException e) {
e.printStackTrace();
}finally {
try {
if (con != null) con.close();
if (stmt != null) stmt.close();
if (rs != null) rs.close();
}
catch (Exception e) {
}
}
return rows;
}
public CategoryVO getCategoryResults(int param)throws Exception{
String query = "SELECT * FROM category where id = :param";
CategoryVO category = new CategoryVO();
try {
con = (Connection) BaseJDBC.getConnPool();
System.out.println("SQl "+query);
PreparedStatement ps = con.prepareStatement(query);
ps.setInt(1, param);
rs = ps.executeQuery();
System.out.println("SQl Executed inside getCategoryResults with param . . ."+param);
while (rs.next()){
System.out.println("Result Set : "+rs.getString("NAME"));
category.setId(Integer.parseInt(rs.getString("ID")));
category.setName(rs.getString("NAME"));
}
} catch (SQLException e) {
e.printStackTrace();
}finally {
try {
if (con != null) con.close();
if (stmt != null) stmt.close();
if (rs != null) rs.close();
}
catch (Exception e) {
}
}
return category;
}
}
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 8,098
|
\section{Introduction}
In accretion disk theory one is often interested in phenomena that
occur on a ``dynamical'' timescale ${\cal T}_{0}$ much shorter
than the ``viscous'' timescale ${\cal T}[{\cal L}]$ needed for
angular momentum redistribution and the ``thermal'' timescale ${\cal
T}[{\cal S}]$ needed for entropy redistribution\footnote{We use the spherical Boyer-Lindquist coordinates $t, \phi, r, \theta$, the geometrical units $c$ $=$ $1$ $=$ $G$ and the $+---$ signature. The Kerr metric is described by the ``geometrical'' mass $M$ and the ``geometrical'' spin parameter $0 < a < 1$, that relate to the ``physical'' mass and angular momentum by the rescaling, $M = GM_{\rm phys}/c^2$, $a = J_{\rm phys}/(M\,c)$. Partial derivatives are denoted by $\partial_i$ and covariant derivatives by $\nabla_i$.},
\begin{equation}
\label{timescales} {\cal T}_{0} \ll {\rm min}\left({\cal
T}[{\cal L}], {\cal T}[{\cal S}] \right).
\end{equation}
The question whether it is physically legitimate to approximately
describe the black hole accretion flows (at least in some
``averaged'' sense) in terms of stationary (independent on $t$)
and axially symmetric (independent on $\phi$) dynamical
equilibria, is not yet resolved. While observations seem to suggest that many real
astrophysical sources experience periods in which this assumption
is quite reasonable, several authors point out that the results of
recent numerical simulations seem to indicate that the MRI and
other instabilities make the black hole accretion flows genuinely
non-steady and non-symmetric, and that the very concept of the
separate timescales (\ref{timescales}) may be questionable in the
sense that locally ${\cal T}_{0} \approx {\cal T}[{\cal L}]
\approx {\cal T}[{\cal S}]$. However, this assumption has been
made in {\it all} existing comparisons between theory and
observations, be they by detailed spectral fitting
\citep[e.g.][and references there]{sha-2007, sha-2008}, line
profile fitting \citep[e.g.][]{fab-2003}, or studying small
amplitude oscillations \citep[see][for references]{abr-2005}. It
seems that the present understanding of the black hole accretion
phenomenon rests, in a major way, on studies of stationary and
axially symmetric models.
From the point of view of mathematical self-consistency, in
modeling of these stationary and axially symmetric dynamical
equilibria, distributions of the {\it conserved} angular momentum
and entropy,
\begin{equation}
\label{distribution-lagrangian} \ell = \ell(\xi, \eta),~~~ s =
s(\xi, \eta),
\end{equation}
may be considered as being {\it free functions} of the Lagrangian
coordinates \citep{ost-1970, abr-1970, bar-1970}. The Lagrangian
coordinates $\xi, \eta$ are defined by demanding that a narrow
ring of matter $(\xi, \xi + d\xi)$, $(\eta, \eta + d\eta)$ has the
rest mass $dM_0 = \rho_0(\xi, \eta)d\xi d\eta$ with $\rho_0$ being
the rest mass density. In the full physical description, the form
of the functions in (\ref{distribution-lagrangian}) is not
arbitrary but given by the dissipative processes, like viscosity
and radiative transfer. At present, several important aspects of
these processes are still unknown, so there is still no practical
way to calculate physically consistent models of accretion flows
from first principles, without involving some ad hoc assumptions,
or neglecting some important processes. Neither the hydrodynamical
simulations (that e.g. use the ad hoc $\alpha\,=\,$const viscosity
prescription), nor the present day MHD simulations (that e.g.
neglect radiative transfer) could be considered satisfactory.
Furthermore, the simplifications made in these simulations are
mathematically equivalent to guessing free functions (such as the
entropy distribution).
Bohdan Paczy{\'n}ski pointed out that it could often be
more pragmatic to make a physically motivated guess of the final
result, e.g. to guess the form of the angular momentum and entropy
distributions.
In practice, it is far easier to guess and use the coordinate
distributions of the {\it specific} angular momentum and entropy,
\begin{eqnarray}
{\cal L} &=& {\cal L}(r, \theta),
\label{momentum-distribution}\\
{\cal S} &=& {\cal S}(r, \theta),
\label{entropy-distribution}
\end{eqnarray}
than the Lagrangian distributions (\ref{distribution-lagrangian}).
However, one does not known a priori the relation between the
conserved $\ell$ and specific ${\cal L}$ angular momenta (and
entropy), or the functions, $\xi = \xi(r,\theta)$, $\eta =
\eta(r, \theta)$. Thus, assuming (\ref{momentum-distribution}) and
(\ref{entropy-distribution}) is not equivalent to assuming
(\ref{distribution-lagrangian}), and usually it should be a
subject to some consistency conditions. We shall return to this
point in Section \ref{discussion}.
In several ``astrophysical scenarios'' one indeed guesses a
particular form of (\ref{momentum-distribution}) and
(\ref{entropy-distribution}). For example, the celebrated
\citet{sha-sun-1974} {\it thin disk} model assumes the Keplerian
distribution of angular momentum,
\begin{equation}
\label{Keplerian} {\cal L}(r, \theta) = {\cal L}_K(r) \equiv
\frac{M^{1/2}\,\left(r^2 - 2aM^{1/2}r^{1/2} + a^2\right)} {r^{3/2}
- 2Mr^{1/2} + aM^{1/2}},
\end{equation}
and the popular {\it cold-disk-plus-hot-corona} model assumes a
low entropy flat disk surrounded by high entropy, more spherical
corona. These models contributed considerably to the understanding
of black-hole accretion physics.
The mathematically simplest assumption for the angular momentum
and entropy distribution is, obviously,
\begin{eqnarray}
{\cal L}(r, \theta) &=& {\cal L}_0 = {\rm const},
\label{constant-momentum}
\\
{\cal S}(r, \theta) &=& {\cal S}_0 = {\rm const}.
\label{constant-entropy}
\end{eqnarray}
This was used by Paczy{\'n}ski and his Warsaw team to introduce the
{\it thick disk} models \citep{abr-1978, koz-1978, jar-1980,
pac-wii-1980, abr-1980, abr-1981,
pac-1982}. Thick disks have characteristic
toroidal shapes, resembling a doughnut. Probably for this reason,
Martin Rees coined the name of {\it Polish
doughnuts}\footnote{However, real Polish doughnuts
(called {\it p{\c a}czki} in Polish) have spherical shapes. They
are definitely non-toroidal
--- see e.g. http://en.wikipedia.org/wiki/Paczki ~.} for them.
Figure \ref{analytic-numerical} shows a comparison of a
state-of-art MHD simulation of black-hole accretion (time and
azimuth averaged) with a Polish doughnut corresponding to a
particular ${\cal L}_0$. Both models show the same characteristic
features of black hole accretion: (i) a funnel along the rotation
axis, relevant for jet collimation and acceleration; (ii) a
pressure maximum, possibly relevant for epicyclic oscillatory
modes; and (iii) a cusp-like self-crossing of one particular
equipressure surface, relevant for an inner boundary condition,
and for stabilization of the Papaloizou-Pringle \citep{bla-1987},
thermal, and viscous instabilities \citep{abr-1971}. The cusp is
located between the radii of marginally stable and marginally
bound circular orbits,
\begin{equation}
\label{cusp} r_{mb} < r_{cusp} < r_{ms} \equiv {\rm ISCO}.
\end{equation}
Polish doughnuts have been useful in semi-analytic studies of the
astrophysical appearance of super-Eddington accretion \citep[see
e.g.][]{sik-1971, mad-1988, szu-1996} and in analytic calculations
of small-amplitude oscillations of accretion structures in
connection with QPOs \citep[see e.g.][]{bla-2006}.
In the same context, numerical studies of their oscillation properties for different angular momentum distributions were first carried out by \citet{rez-2003a, rez-2003b}.
Moreover, Polish doughnuts are routinely used as convenient starting initial configurations in numerical simulations \citep[e.g.][]{haw-2001,dev-2003}. Recently, \citet{kom-2006} has constructed analytic models of magnetized
Polish doughnuts.
\begin{figure}
\centering
\includegraphics[width=9cm]{1518.F01.eps}
\caption{Equipressure surfaces in a very simple and analytic
Polish doughnut (left, with linear spacing), and a sophisticated,
state-of-art
full 3D MHD numerical simulation (right, with logarithmic
spacing). Although the shapes of equipressure surfaces are
remarkably similar, in the numerical model the pressure
gradient is seriously larger, and visibly enhanced along roughly
conical surfaces, approximately $30^{\circ}$ from the
equatorial plane. \citep[Figure taken from][]{abr-fra-2008}}
\label{analytic-numerical}
\end{figure}
However, a closer inspection of Figure \ref{analytic-numerical}
reveals that the numerically constructed model of accretion has a
(much) larger ``vertical'' pressure gradient than the analytic
Polish doughnut, and that in the numerical model the gradient is
visibly enhanced along roughly conical surfaces, approximately
$30^{\circ}$ from the equatorial plane. This (and several other)
detailed features of the accretion structure cannot be modeled by
either the Keplerian nor the constant angular momentum assumption
alone. We suggest and discuss in this paper a simple but flexible
ansatz, that is a combination of the two standard distributions,
Keplerian (\ref{Keplerian}) and constant
(\ref{constant-momentum}). The new ansatz preserves the virtues of
assuming the standard distributions where this is appropriate, but
leads to a far richer variety of possible accretion structures, as
are indeed seen in numerical simulations.
\section{Assumptions and definitions}
We assume that the accretion flow is stationary and axially
symmetric. This assumption expressed in terms of the
Boyer-Lindquist spherical coordinates states that the flow
properties depend only on the radial and polar coordinates $r,
\theta$, and are independent on time $t$ and azimuth $\phi$. We
also assume that the dynamical timescale is much shorter than the
thermal and viscus ones (\ref{timescales}). Accordingly, we ignore
dissipation and assume the stress-energy tensor in the perfect
fluid form,
\begin{equation}
\label{perfect-fluid}
T^i_{~k} = (p + \epsilon)u^i\,u_k - p\delta^i_{~k},
\end{equation}
with $p$ and $\epsilon$ being the pressure and energy density,
respectively. The four velocity of matter $u^i$ is assumed to be
purely circular,
\begin{equation}
\label{circular-orbits}
u^i = (u^t, u^{\phi}, 0, 0).
\end{equation}
The last assumption is not fulfilled close to the cusp (see Figure
\ref{analytic-numerical}), where there is a transition from
``almost circular'' to almost ``free-fall'' radial trajectories.
Nevertheless, the transition could be incorporated in the form of the inner
boundary condition \citep[the relativistic Roche lobe overflow,
see e.g.][]{abr-1985}.
One introduces the specific angular momentum ${\cal L}$, the
angular velocity $\Omega$, and the redshift factor $A$ by the well
known and standard definitions,
\begin{equation}
\label{definitions} {\cal L} = - \frac{u_{\phi}}{u_t}, ~~ \Omega =
\frac{u^{\phi}}{u^t}, ~~ A^{-2} = g_{tt} + 2\Omega g_{t\phi} +
\Omega^2 g_{\phi\phi}.
\end{equation}
The specific angular momentum and angular velocity are linked by
\begin{equation}
\label{velocity-momentum} {\cal L} = - \frac{\Omega\,g_{\phi\phi}
+ g_{t\phi}}{\Omega\,g_{t\phi} + g_{tt}}, ~~ \Omega = -
\frac{{\cal L}\,g_{tt} + g_{t\phi}}{{\cal L}\,g_{t\phi} +
g_{\phi\phi}}.
\end{equation}
The conserved angular momentum $\ell$ is given by,
\begin{equation}
\label{conserved-momentum} \ell = \frac{(p +
\epsilon)u_t}{\rho_0}\,{\cal L}.
\end{equation}
\section{The shapes of the equipressure surfaces}
In this section we briefly discuss one particularly useful result
obtained by \citet{jar-1980}. It states that for a perfect fluid
matter rotating on circular trajectories around a black hole,
the shapes and location of the equipressure surfaces $p(r, \theta)
=~$const follow directly from the assumed angular momentum
distribution (\ref{momentum-distribution}) alone. In particular,
they are independent of the equation of state, $p = p(\epsilon,
{\cal S})$, and the assumed entropy distribution
(\ref{entropy-distribution}).
For a perfect-fluid matter, the equation of motion
$\nabla_i\,T^i_{~k} = 0$ yields,
\begin{equation}
\label{Euler} \frac{\partial_i p}{p + \epsilon} = -\frac{1}{2}
\frac{\partial_i\,g^{tt} - 2{\cal L}\,\partial_i g^{t\phi} + {\cal
L}^2\,\partial_i g^{\phi\phi}}{g^{tt} - 2{\cal L}\,g^{t\phi} +
{\cal L}^2\,g^{\phi\phi}},
\end{equation}
which may be transformed into,
\begin{equation}
\label{von-Zeipel} \frac{\partial_i p}{p + \epsilon} = \partial_i
\ln A + \frac{{\cal L}\,\partial_i \Omega}{1 - {\cal L}\,\Omega}
\end{equation}
From the second derivative commutator $\partial_r\partial_{\theta}
- \partial_{\theta}\partial_r$ of the above equation,
\begin{equation}
\label{second-commutator-von-Zeipel} \frac{\partial_r
p\,\partial_{\theta}\epsilon -
\partial_{\theta}p\,\partial_r\epsilon}{(p + \epsilon)^2} =
\frac{\partial_r \Omega\,\partial_{\theta}{\cal L} -
\partial_{\theta} \Omega\,\partial_r{\cal L}}{(1 - {\cal
L}\,\Omega)^2},
\end{equation}
one derives \citep[see e.g.][]{abr-1971} the von Zeipel condition:
$p(r,\theta)\,$$=\,$const surfaces coincide with those of
$\epsilon(r,\theta)\,$$= $const, {\it if and only if} the surfaces
${\cal L}(r,\theta)\,$$=\,$const coincide with those
$\Omega(r,\theta)\,$$=\,$const\footnote{The best known Newtonian
version of the von Zeipel condition states that for a barytropic
fluid $p = p(\epsilon)$, both angular velocity and angular
momentum are constant on cylinders, $\Omega = \Omega(R)$, $\cal L
= \cal L(R)$, with $R = r\sin\theta$ being the distance from the
rotation axis.}. Obviously, the constant angular momentum case
satisfies the von Zeipel condition.
\citet{jar-1980} have also discussed a general, non barytropic
case. They wrote equation (\ref{Euler}) twice, for $i = r$ and $i
= \theta$, and divided the two equations side by side to get
\begin{equation}
\label{master} \frac{\partial_r\,p}{\partial_{\theta}\,p} =
\frac{\partial_r\,g^{tt} - 2{\cal L}\,\partial_r g^{t\phi} + {\cal
L}^2\,\partial_r g^{\phi\phi}}{\partial_{\theta}\,g^{tt} - 2{\cal
L}\,\partial_{\theta} g^{t\phi} + {\cal L}^2\,\partial_{\theta}
g^{\phi\phi}} \equiv - F\left(r, \theta \right).
\end{equation}
For the Kerr metric components one knows the functions $g^{ik} =
g^{ik}(r,\theta)$, and therefore the function $F(r,\theta)$ in the
right hand side of (\ref{master}) is known explicitly in terms of
$r$ and $\theta$, {\it if} one knows or assumes the angular
momentum distribution ${\cal L} = {\cal L}(r, \theta)$. This has
an important practical consequence.
Let $\theta = \theta(r)$ be the explicit equation for the
equipressure surface $p(r, \theta) =$const. It is, $d\theta/dr =
-{\partial_r}p/\partial_{\theta}p$. If the function $F(r, \theta)$
in (\ref{master}) is known, then equation (\ref{master}) takes the
form of an ordinary differential equation for the equipressure
surface, $\theta = \theta(r)$,
\begin{equation}
\label{differential} \frac{d\theta}{dr} = F(r, \theta),
\end{equation}
with the explicitly known right hand side. It may be therefore
directly integrated to get all the possible locations for the
equipressure surfaces.
\section{The angular momentum distribution}
\label{section-angular-momentum}
\subsection{Physical arguments: the radial distribution}
\label{section-physical-arguments}
\citet{jar-1980} discussed general arguments showing that the
slope of the specific angular momentum should be between two
extreme: the slope corresponding to ${\cal L} =\,$const and the
slope corresponding to $\Omega =\,$const. These two cases,
together with the Keplerian one ${\cal L} = {\cal L}_K$, may be
considered as useful archetypes in discussing arguments relevant
to the angular momentum distribution.
Indeed, far away from the black hole $r \gg r_G$, these arguments
are well known \cite[see e.g.][]{fra-2002} and together with
numerous numerical simulations show that typically (i.e. in a
stationary case with no shocks) the specific angular momentum
should be slightly sub-Keplerian ${\cal L}(r, \pi/2) \approx {\cal
L}_K(r)$. There is a solid consensus on this point.
The situation close to the black hole is less clear because there is
not sufficient knowledge of the nature of the stress operating in
the innermost part of the flow, i.e. approximately between the
horizon and the ISCO. Formally, one may consider two extreme ideal
cases, depending whether the stress is very small or very large.
In the first case, the almost vanishing stress implies that the
fluid is almost free-falling, and therefore the angular momentum
is almost constant along fluid lines. This leads to ${\cal L}(r,
\pi/2) \approx\,$const. Such situation is typical for the thin
\citet{sha-sun-1974} and slim \citep{abr-1988} accretion disks. In
the second case, one may imagine a powerful instability like MRI,
which occurs when $d\Omega/dr \ne 0$. It may force the fluid
closer to the marginally stable state $\Omega =\,$const. This
situation may be relevant for ADAFs \citep{nar-1995, abr-1995}.
\begin{figure*}
\centering
\includegraphics[width=4.45cm,height=4.45cm]{1518.F02.eps}
\hfill
\includegraphics[width=4.45cm,height=4.45cm]{1518.F03.eps}
\hfill
\includegraphics[width=4.45cm,height=4.45cm]{1518.F04.eps}
\hfill
\includegraphics[width=4.45cm,height=4.45cm]{1518.F05.eps}
\caption{
(a) and (b): the distribution of angular momentum on the equatorial plane. Thick lines correspond to the
angular momentum predicted by our analytic formula, dashed lines show the Keplerian angular momentum distribution and dots to the simulation data. (a): Kerr geometry $a=0.9$ simulations by \citet{sad-2008}. (b): Pseudo-Newtonian MHD simulations by \citet{mac-2008}. (c) and (d): angular momentum off the equatorial plane, normalized to its equatorial plane value, $\Lambda = {\cal L}(r,\theta)/{\cal L}(r, \pi/2)$. Lines correspond to the $\sin^{2\gamma}\theta$ distribution at $r = 10r_G$: long-dashed $\gamma=0.5$, dotted $\gamma = 1.0$, and short dashed $\gamma = 1.5$. Points are taken from time-dependent, fully 3-D, MHD numerical simulations. They correspond to time and azimuthal averages at the same radial location, $r = 10r_G$. (c): Points from the simulations of \citet{mac-2008} in the Paczy{\'n}ski-Wiita potential --- triangle: High temperature case; square: Low temperature case. (d): Points from the simulations of \citet{fra-2007} in Schwarzschild (dots) and $a=0.9$ Kerr (crosses) spacetimes.
}
\label{fig:ang-mom}
\end{figure*}
\subsection{The new ansatz}
\label{section-ansatz}
We suggest adopting the following assumption for the angular
momentum distribution,
\begin{equation}
\label{ansatz-general}
{\cal L}(r, \theta) = \left\{
\begin{array}{ll}
{\cal L}_0\left( \frac{{\cal L}_K
(r)}{{\cal L}_0}\right)^{\beta}
\sin^{2\gamma}\theta & \mbox{~for~ $r \geq r_{ms}$}\\
~\\
{\cal L}_{ms}(r) \sin^{2\gamma}\theta & \mbox{~for~ $r < r_{ms}$}
\end{array} \right\}
\end{equation}
The constant ${\cal L}_0$ is defined by ${\cal L}_0 \equiv
\eta\,{\cal L}_K(r_{ms})$. For the ``hydrodynamical'' case, the
function ${\cal L}_{ms}(r)$ is constant,
\begin{equation}
\label{constant-definitions-hydro}
{\cal L}_{ms}(r) = {\cal L}_0\,[{\cal L}_K(r_{ms})/{\cal L}_0]^{\beta}
= {\rm const},
\end{equation}
while for the ``MHD'' case its is calculated from the $\Omega(r) =
\Omega_K(r_{ms}) =\,$const condition,
\begin{equation}
\label{constant-definitions-MHD} {\cal L}_{ms}(r) = -
\frac{\Omega_{ms}\,g_{\phi\phi}(r, \pi/2) + g_{t\phi}(r,
\pi/2)}{\Omega_{ms}\,g_{t\phi}(r, \pi/2) + g_{tt}(r, \pi/2)}.
\end{equation}
Thus, there are only {\it three} dimensionless parameters in the
model: ($\beta$, $\gamma$, $\eta$). Their ranges are,
\begin{equation}
\label{constants-range} 0 \le \beta \le 1, ~~-1 \le \gamma \le 1,
~~~~1 \le \eta \le \eta_{max}.
\end{equation}
The function ${\cal L}_K(r)$ is the Keplerian angular momentum in
the equatorial plane, $\theta =\pi/2$, which for the Kerr metric
is described by formula (\ref{Keplerian}) and $\eta_{max} = {\cal
L}_K(r_{mb})/{\cal L}_K(r_{ms})$. An equipressure surface that
starts from the cusp is marginally bound for $\beta = 0$, $\gamma
= 0$ and $\eta = \eta_{max}$.
\subsection{Angular momentum on the equatorial plane}
On the equatorial plane, $\sin \theta = 1$, and therefore only
$\beta$ and $\eta$ (through ${\cal L}_0$) enter the distribution
formulae (\ref{ansatz-general}).
\begin{equation}
\label{ansatz-equatorial}
{\cal L}(r, \pi/2) = \left\{
\begin{array}{ll}
{\cal L}_0\left( \frac{{\cal L}_K
(r)}{{\cal L}_0}\right)^{\beta}
& \mbox{~for~ $r \geq r_{ms}$}\\ ~\\
{\cal L}_{ms} & \mbox{~for~ $r < r_{ms}$}
\end{array} \right\}
\end{equation}
When $\beta = 0$, the angular momentum is constant, ${\cal L} =
{\cal L}_0$, and when $\beta = 1$, it equals the Keplerian one,
${\cal L} = {\cal L}_K$.
For small values of $\beta$ the assumed equatorial plane angular
momentum (\ref{ansatz-equatorial}) reproduces the characteristic
shape, shown in Figure~\ref{fig:ang-mom}, which has been
found in many numerical simulations of accretion flows ---
including stationary, axially symmetric, $\alpha$ viscosity,
hydrodynamical ``slim disks'' \citep[e.g.][]{abr-1988}, and more
recent, fully 3-D, non-stationary MHD simulations
\citep[e.g.][]{mac-2008, fra-2008}. It corresponds to a
distribution that is slightly sub-Keplerian for large radii, and
closer to the black hole it crosses the Keplerian distribution
twice, at $r_{center} > r_{ms}$ and at $r_{cusp} < r_{ms}$,
forming a super-Keplerian part around $r_{ms}$. For $r < r_{cusp}$
the angular momentum is almost constant.
\subsection{Angular momentum off the equatorial plane}
Numerical simulations show that away from the equatorial plane, the angular momentum falls off.
Figure~\ref{fig:ang-mom} shows that indeed several MHD simulations \citep[][Figure 2c and 2d respectively]{mac-2008, fra-2008}, feature a drop of angular momentum away from the equatorial plane.
This behavior is reflected by the term $\sin^{2\gamma}\theta$ in (\ref{ansatz-general}). One may see that this form accurately mimics the outcome of the numerical simulations.
\citet{pro-2003a,pro-2003b} also studied axisymmetric accretion
flows with low specific angular momentum using numerical
simulations. In their inviscid hydrodynamical case
\citet{pro-2003a} found that the inner accretion flow
settles into a pressure-rotation supported torus in the equatorial
region and a nearly radial inflow in the polar funnels.
Furthermore, the specific angular momentum in the equatorial torus
was nearly constant. This behavior changes once magnetic fields
are introduced, as shown in \citet{pro-2003b}. In the MHD case,
the magnetic fields transport specific angular momentum so that in
the innermost part of the flow, rotation is sub-Keplerian, whereas
in the outer part, it is nearly Keplerian. Similar rotational
profiles are also found in MHD simulations of the collapsar model
of gamma-ray bursts \citep{pro-2003c, bai-2008}, which use a sophisticated
equation of state and neutrino cooling (instead of a simple
adiabatic equation of state). Therefore, it appears that the
rotational profile assumed in our model is quite robust as it has
been obtained in a number of numerical experiments with various
microphysics.
\section{Results}
Figures \ref{sequence-beta-gamma} and \ref{sequence-beta-gamma-05}
show sequences of models calculated with the new ansatz
(\ref{ansatz-general}) for black-hole spins $a=0$ and 0.5,
respectively. For these models we hold $\eta = \eta_{max}$ fixed,
while $\beta$ and $\gamma$ are varied over the limits of their
accessible ranges.
\begin{figure*}
\centering
\includegraphics[width=4.3cm]{1518.F06.eps}
\includegraphics[width=4.3cm]{1518.F07.eps}
\includegraphics[width=4.3cm]{1518.F08.eps}
\includegraphics[width=4.3cm]{1518.F09.eps}
%
\includegraphics[width=4.3cm]{1518.F10.eps}
\includegraphics[width=4.3cm]{1518.F11.eps}
\includegraphics[width=4.3cm]{1518.F12.eps}
\includegraphics[width=4.3cm]{1518.F13.eps}
%
\includegraphics[width=4.3cm]{1518.F14.eps}
\includegraphics[width=4.3cm]{1518.F15.eps}
\includegraphics[width=4.3cm]{1518.F16.eps}
\includegraphics[width=4.3cm]{1518.F17.eps}
%
\includegraphics[width=4.3cm]{1518.F18.eps}
\includegraphics[width=4.3cm]{1518.F19.eps}
\includegraphics[width=4.3cm]{1518.F20.eps}
\includegraphics[width=4.3cm]{1518.F21.eps}
%
\includegraphics[width=4.3cm]{1518.F22.eps}
\includegraphics[width=4.3cm]{1518.F23.eps}
\includegraphics[width=4.3cm]{1518.F24.eps}
\includegraphics[width=4.3cm]{1518.F25.eps}
%
\caption
{Equipressure surfaces for $a = 0$ and $\eta = \eta_{max} = 1.085$.
Five rows correspond to $\beta = (0.0), (0.1), (0.5), (0.9), (0.99)$
from the top to the bottom.
Four columns correspond to $\gamma = (0.0), (0.1), (0.5), (0.9)$
from the left to the right.
The upper left corner shows a ``standard'' Polish doughnut. The
lower right corner shows an almost Keplerian disk at the
equatorial plane, surrendered by a very low angular momentum
envelope.
}
\label{sequence-beta-gamma}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=4.3cm]{1518.F26.eps}
\includegraphics[width=4.3cm]{1518.F27.eps}
\includegraphics[width=4.3cm]{1518.F28.eps}
\includegraphics[width=4.3cm]{1518.F29.eps}
%
\includegraphics[width=4.3cm]{1518.F30.eps}
\includegraphics[width=4.3cm]{1518.F31.eps}
\includegraphics[width=4.3cm]{1518.F32.eps}
\includegraphics[width=4.3cm]{1518.F33.eps}
%
\includegraphics[width=4.3cm]{1518.F34.eps}
\includegraphics[width=4.3cm]{1518.F35.eps}
\includegraphics[width=4.3cm]{1518.F36.eps}
\includegraphics[width=4.3cm]{1518.F37.eps}
%
\includegraphics[width=4.3cm]{1518.F38.eps}
\includegraphics[width=4.3cm]{1518.F39.eps}
\includegraphics[width=4.3cm]{1518.F40.eps}
\includegraphics[width=4.3cm]{1518.F41.eps}
%
\includegraphics[width=4.3cm]{1518.F42.eps}
\includegraphics[width=4.3cm]{1518.F43.eps}
\includegraphics[width=4.3cm]{1518.F44.eps}
\includegraphics[width=4.3cm]{1518.F45.eps}
%
\caption
{Equipressure surfaces for $a = 0.5$ and $\eta = \eta_{max} = 1.079$.
Five rows correspond to $\beta = (0.0), (0.1), (0.5), (0.9), (0.99)$
from the top to the bottom.
Four columns correspond to $\gamma = (0.0), (0.1), (0.5), (0.9)$
from the left to the right.
The upper left corner shows a ``standard'' Polish doughnut. The
lower right corner shows an almost Keplerian disk at the
equatorial plane, surrendered by a very low angular momentum
envelope.
}
\label{sequence-beta-gamma-05}
\end{figure*}
\subsection{Equipressure surfaces on the axis of rotation}
Figures \ref{sequence-beta-gamma} and \ref{sequence-beta-gamma-05}
show an interesting change of
the behavior of equipressure surfaces close to the axis with
increasing $\gamma$. No equipressure surface can cross the symmetry
axis when the dependence of the angular momentum on $\theta$ is
weak. This is the case for the first three columns of
Figure \ref{sequence-beta-gamma} where $\gamma\leq 0.5$. On the
other hand, for the angular momentum distributions with higher
$\gamma$ the equipressure surfaces cross the axis perpendicularly.
This happens in plots of the last column of Figure
\ref{sequence-beta-gamma}. This behavior can be understood easily
from the limit of $r d\theta/dr$ as $\theta\rightarrow0$. In
Schwarzschild spacetime equations (\ref{master}) and
(\ref{differential}) give
\begin{equation}
\label{limit}
\lim_{\theta\rightarrow0^{+}}\frac{r d\theta}{dr} =
-\frac{2\mathcal{L}^2_K(r)}{\mathcal{L}^2(r,\pi/2)}
\lim_{\theta\rightarrow0^{+}}\left(\sin^{4\gamma-3}\theta\right).
\end{equation}
The limit on the right-hand side is either 0, 1 or $\infty$,
depending on the value of $\gamma$. When $\gamma<3/4$,
$rd\theta/dr =0$ and no equipressure surface goes across the axis.
On the other hand, when $\gamma>3/4$ equipressure surfaces cross
the axis perpendicularly.
Of course, a stationary torus may exist only within an
equipotential surface located inside the Roche lobe,
i.e., the critical self-crossing equipotential within the cusp
\citep[][]{abr-1985}.
\subsection{Comparison with numerical simulations}
Figure \ref{overlay} illustrates that the results of the analytic
models are well matched with results of modern 3-D MHD numerical
simulations \citep[here taken from][]{fra-2007,fra-2008}. For the
correct choice of parameters, the model can reproduce many of the
relevant features of the numerical results, including the
locations of the cusp and pressure maximum, as well as the
vertical thickness of the disk. At this stage, such qualitative
agreement is all that can be hoped for. One notable difference
between the analytic and numerical solutions is the behavior
inside the cusp. While the analytic equipressure surfaces formally
diverge toward the poles, the numerical solution maintains a
fairly constant vertical
height, which is also evident in Figure \ref{analytic-numerical}.
This is because in the region inside the cusp, our assumption (\ref{circular-orbits}) about the form of the velocity is not valid --- velocity cannot be consistent with a pure rotation only, $u^i = (u^t, u^{\phi}, 0, 0)$. In this region the radial velocity $u^r$ must be non-zero and large. Thus, accuracy of our analytic models may only be trusted in the region outside the cusp, $r > r_{cusp}$.
\section{Discussion}
\label{discussion}
In this paper we assumed a form of the angular momentum
distribution (\ref{ansatz-general}) and from this calculated the
shapes and locations of the equipressure surfaces. This may be
used in calculating spectra (in the optically thick case) by the
same ``surface'' method as used in works by \citet{sik-1971} and
\citet{mad-1988}.
We plan to construct the complete physical model of the interior
in the second paper of this series. Here, we only outline the
method by considering a simplified toy model. Let us denote $\rho
= \epsilon + p$. We assume a toy (non-barytropic) equation of
state and an entropy distribution, by writing,
\begin{equation}
p = e^{K({\cal S})}\rho, \quad
K = K(r, \theta).
\label{toy-state}
\end{equation}
Let us, in addition, define two functions connected to the entropy
distribution,
\begin{equation}
\partial_\theta\,K =\kappa(r, \theta).
\quad
\frac{\partial_r K}{\partial_{\theta} K} =
\lambda(r, \theta),
\label{two-functions-entropy}
\end{equation}
From the obvious condition that the second derivative commutator
of pressure vanishes, $(\partial_r\partial_{\theta} -
\partial_{\theta}\partial_r)p = 0$, and equations (\ref{toy-state}),
(\ref{two-functions-entropy}) and (\ref{master}) one derives,
\begin{equation}
\kappa = -\frac{\partial_r\,G_{\theta} -
\partial_{\theta}\,G_r}{G_r - \lambda\,G_{\theta}},
\label{commutator-condition}
\end{equation}
where $G_r$ and $G_{\theta}$ are defined as
\begin{equation}
G_i(r,\theta) = \frac{\partial_i p}{\rho}
\label{definition-G}
\end{equation}
and can be calculated from the angular momentum distribution using
equation (\ref{Euler}). From (\ref{commutator-condition}) it is
obvious that one cannot independently assume the functions
$\kappa(r, \theta)$ and $\lambda(r, \theta)$\footnote{A somewhat
similar situation in the case of rotating stars is known as the
von Zeipel paradox \citep{tas-1978}: {\it Pseudo-barytropic models
in a state of permanent rotation cannot be used to describe
rotating stars in strict radiative equilibrium.}}. Assuming
$\lambda(r, \theta)$ is equivalent with assuming the shapes of
isentropic surfaces. Indeed, from (\ref{two-functions-entropy})
one concludes that the function $\theta = \theta_{\cal S}(r)$ that
describes an isentropic surface is given by the equation,
\begin{equation}
\left[\frac{d\theta}{dr}\right]_{\cal S} =
-\lambda(r, \theta).
\label{isentropic}
\end{equation}
that may be directly integrated. Then the condition
(\ref{commutator-condition}) gives the physical spacing
(``labels'') to the isentropic surfaces, and through the equation
of state (\ref{toy-state}) also to equipressure surfaces and
isopicnic ($\rho ={\rm const}$) surfaces.
Note, that a possible choice $\lambda = G_r/G_{\theta}$
corresponds, obviously, to the ``von Zeipel'' case in which
equipressure and isentropic surfaces coincide. In this case the
denominator in (\ref{commutator-condition}) vanishes, implying a
singularity unless the numerator also vanishes. The condition for
the numerator to vanish is, however, equivalent to the von Zeipel
condition.
\begin{figure}
\centering
\vskip 0.2truecm
\includegraphics[width=8.0cm]{1518.F46.eps}
\vskip 0.8truecm
\includegraphics[width=8.0cm]{1518.F47.eps}
\caption
{Comparison of pressure distributions between the analytic model ({\it dark lines})
and numerical simulations ({\it colors}). The results of MHD
simulations \citep[taken from][]{fra-2007,fra-2008} have been
time-averaged over one orbital period at $r=25r_G$.
{\it Upper panel:} Schwarzschild black hole ($a=0$); the analytic model
parameters are $\eta=1.085$, $\beta=0.9$, and
$\gamma=0.18$. {\it Lower panel:} Kerr black hole ($a=0.5$); the analytic model
parameters are $\eta=1.079$, $\beta=0.7$, and
$\gamma=0.2$.
}
\label{overlay}
\end{figure}
\section{Conclusions}
The new ansatz (\ref{ansatz-general}) captures two essential
features of the angular momentum distribution in black hole
accretion disks:
\begin{enumerate}
\item On the equatorial plane and far from the black hole,
the angular momentum in the disk differs only little from the
Keplerian one being slightly sub-Keplerian, but closer in
it becomes (slightly) super-Keplerian and still closer, in the
plunging region, sub-Keplerian again and nearly constant.
\item Angular momentum may significantly decrease off the
equatorial plane, and become very low (even close to zero,
in a non-rotating ``corona'').
\end{enumerate}
Models of tori described here may be useful not only for accretion disks but also for tori that form in the latest stages of neutron star binary mergers. This is relevant for gamma ray bursts \citep{wit-1994} and gravitational waves \citep{bai-2008}.
\begin{acknowledgements}
We thank Daniel Proga and Luciano Rezzolla for helpful comments and suggestions.
Travel expenses connected to this work were
supported by the China Scholarship Council (Q.L.), the Polish
Ministry of Science grant N203 0093/1466 (M.A.A.), and the Swedish
Research Council grant VR Dnr 621-2006-3288 (P.C.F.).
\end{acknowledgements}
|
{
"redpajama_set_name": "RedPajamaArXiv"
}
| 6,264
|
Prasar Bharati and Mizzima Media of Myanmar signed a Memorandum of Understanding on Friday. This will boost centuries old ties between India and Myanmar. It will also strengthen India's soft power in its neighbourhood.
The Agreement will realize cooperation and collaboration in various fields of broadcasting. It also envisions content-sharing covring a wide range of genres, including culture, entertainment, education, science, news and sports. India and Myanmar have shared history of their independent struggle and Prasar Bhart's rich archive will help them relive tose precious moments.
Mizzima's Managing Director Soe Myint said that there is a huge audience of Indian movies and music and this collaboration among myanmarese population, will help them have more access to indian infotainment content.
This MoU will give further boost to centuries old ties of India and Myanmar. Beside, it will also act to spread india's soft power in its neighbourhood.
|
{
"redpajama_set_name": "RedPajamaC4"
}
| 3,299
|
We offer bespoke services for our corporate clients; the sessions are created with the needs of your employees, company dress code, key goals and the corporate brand in mind. Our sessions have been crafted to equip employees with the skills required to create that final 'polish', we want to help our corporate clients to encourage staff members to become the finished article and therefore a true representation of the corporate brand identity. Many of our sessions are also applicable for the education sector to aid the employability skills of students, job seekers and graduates.
Our workshops can be used as standalone workshops or can be used to create a programme. Many of our clients ask us to create a bespoke solutions for them accounting for particular needs.
|
{
"redpajama_set_name": "RedPajamaC4"
}
| 970
|
Q: Mule: Why does VM queue break using HTTPS, basic-auth and spring security manager I am trying to configure some basic HTTP authentication over HTTPS using the spring security manager. I have got this working previously and I have it pretty much working now with one major problem. I get the error below when my flow tries to write the message to a VM queue. My mule config works fine if I remove the the security filter (<http:basic-security-filter realm="mule-realm"/>) line but as soon as this line is present I get the error below. I'm using a custom class as the mule payload, could this be part of the problem?
Any help will be much appreciated, I have run out of ideas...
From the log:
INFO 2012-02-17 11:18:27,188 [[rhea_interoperability_layer_v2].HTTPSConnector.receiver.02] org.mule.api.processor.LoggerMessageProcessor: Structured message: RestfulHttpRequest {
url: ws/rest/v1/facilities
body: null
requestParms: [
sector: Musha
type: test
]} Full Message:
org.mule.DefaultMuleMessage
{
id=59003ade-5948-11e1-b071-65acfd51e8fc
payload=org.jembi.rhea.RestfulHttpRequest
correlationId=<not set>
correlationGroup=-1
correlationSeq=-1
encoding=UTF-8
exceptionPayload=<not set>
Message properties:
INVOCATION scoped properties:
queryTimeout=-1
INBOUND scoped properties:
Accept-Encoding=gzip,deflate
Authorization=Basic YWRtaW46YWRtaW4=
Connection=false
Host=localhost:5000
Keep-Alive=false
MULE_ORIGINATING_ENDPOINT=endpoint.https.localhost.5000
MULE_REMOTE_CLIENT_ADDRESS=/127.0.0.1:43740
User-Agent=Jakarta Commons-HttpClient/3.1
http.context.path=/
http.method=GET
http.request=/ws/rest/v1/facilities?sector=Musha&type=test
http.request.path=/ws/rest/v1/facilities
http.version=HTTP/1.1
sector=Musha
type=test
OUTBOUND scoped properties:
LOCAL_CERTIFICATES=[Ljava.security.cert.X509Certificate;@47abfd68
MULE_ENCODING=UTF-8
MULE_ENDPOINT=jdbc://insertMsg
MULE_ROOT_MESSAGE_ID=59003ade-5948-11e1-b071-65acfd51e8fc
SESSION scoped properties:
}
ERROR 2012-02-17 11:18:27,215 [[rhea_interoperability_layer_v2].HTTPSConnector.receiver.02] org.mule.exception.DefaultMessagingExceptionStrategy:
********************************************************************************
Message : An exception occurred while invoking message processor "DefaultMessageProcessorChain '(inner iterating chain) of OutboundEndpoint 'vm://normalizationQueue' request chain'
[
org.mule.endpoint.outbound.OutboundEventTimeoutMessageProcessor,
org.mule.endpoint.outbound.OutboundSessionHandlerMessageProcessor,
org.mule.endpoint.outbound.OutboundEndpointPropertyMessageProcessor,
org.mule.endpoint.outbound.OutboundRootMessageIdPropertyMessageProcessor,
org.mule.endpoint.outbound.OutboundResponsePropertiesMessageProcessor
]" with transaction "Transaction{factory=null, action=INDIFFERENT, timeout=0}".. Message payload is of type: RestfulHttpRequest
Type : org.mule.api.MessagingException
Code : MULE_ERROR-29999
Payload : RestfulHttpRequest {
url: ws/rest/v1/facilities
body: null
requestParms: [
sector: Musha
type: test
]}
JavaDoc : http://www.mulesoft.org/docs/site/current3/apidocs/org/mule/api/MessagingException.html
********************************************************************************
Exception stack is:
1. org.mule.config.spring.parsers.assembly.MapEntryCombiner (java.io.NotSerializableException)
java.io.ObjectOutputStream:1180 (null)
2. java.io.NotSerializableException: org.mule.config.spring.parsers.assembly.MapEntryCombiner (org.apache.commons.lang.SerializationException)
org.apache.commons.lang.SerializationUtils:111 (null)
3. An exception occurred while invoking message processor "DefaultMessageProcessorChain '(inner iterating chain) of OutboundEndpoint 'vm://normalizationQueue' request chain'
[
org.mule.endpoint.outbound.OutboundEventTimeoutMessageProcessor,
org.mule.endpoint.outbound.OutboundSessionHandlerMessageProcessor,
org.mule.endpoint.outbound.OutboundEndpointPropertyMessageProcessor,
org.mule.endpoint.outbound.OutboundRootMessageIdPropertyMessageProcessor,
org.mule.endpoint.outbound.OutboundResponsePropertiesMessageProcessor
]" with transaction "Transaction{factory=null, action=INDIFFERENT, timeout=0}".. Message payload is of type: RestfulHttpRequest (org.mule.api.MessagingException)
org.mule.processor.TransactionalInterceptingMessageProcessor:63 (http://www.mulesoft.org/docs/site/current3/apidocs/org/mule/api/MessagingException.html)
********************************************************************************
Root Exception stack trace:
java.io.NotSerializableException: org.mule.config.spring.parsers.assembly.MapEntryCombiner
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1180)
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:346)
at java.util.ArrayList.writeObject(ArrayList.java:673)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:962)
at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1480)
at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1416)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1174)
at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1528)
at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1493)
at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1416)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1174)
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:346)
at org.apache.commons.collections.map.AbstractHashedMap.doWriteObject(AbstractHashedMap.java:1182)
at org.mule.util.CaseInsensitiveHashMap.writeObject(CaseInsensitiveHashMap.java:145)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:962)
at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1480)
at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1416)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1174)
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:346)
at java.util.TreeMap.writeObject(TreeMap.java:2275)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:962)
at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1480)
at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1416)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1174)
at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1528)
at java.io.ObjectOutputStream.defaultWriteObject(ObjectOutputStream.java:438)
at org.mule.MessagePropertiesContext.writeObject(MessagePropertiesContext.java:420)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:962)
at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1480)
at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1416)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1174)
at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1528)
at java.io.ObjectOutputStream.defaultWriteObject(ObjectOutputStream.java:438)
at org.mule.DefaultMuleMessage.writeObject(DefaultMuleMessage.java:1643)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:962)
at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1480)
at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1416)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1174)
at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1528)
at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1493)
at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1416)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1174)
at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1528)
at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1493)
at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:141...
********************************************************************************
My flow config
<?xml version="1.0" encoding="UTF-8"?>
<mule xmlns="http://www.mulesoft.org/schema/mule/core" xmlns:vm="http://www.mulesoft.org/schema/mule/vm" xmlns:https="http://www.mulesoft.org/schema/mule/https" xmlns:jdbc="http://www.mulesoft.org/schema/mule/jdbc" xmlns:doc="http://www.mulesoft.org/schema/mule/documentation" xmlns:spring="http://www.springframework.org/schema/beans" xmlns:core="http://www.mulesoft.org/schema/mule/core" xmlns:mule-ss="http://www.mulesoft.org/schema/mule/spring-security" xmlns:ss="http://www.springframework.org/schema/security" xmlns:http="http://www.mulesoft.org/schema/mule/http" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" version="CE-3.2.1" xsi:schemaLocation="
http://www.mulesoft.org/schema/mule/vm http://www.mulesoft.org/schema/mule/vm/current/mule-vm.xsd
http://www.mulesoft.org/schema/mule/https http://www.mulesoft.org/schema/mule/https/current/mule-https.xsd
http://www.mulesoft.org/schema/mule/jdbc http://www.mulesoft.org/schema/mule/jdbc/current/mule-jdbc.xsd
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
http://www.mulesoft.org/schema/mule/spring-security http://www.mulesoft.org/schema/mule/spring-security/3.1/mule-spring-security.xsd
http://www.springframework.org/schema/security http://www.springframework.org/schema/security/spring-security-3.0.xsd
http://www.mulesoft.org/schema/mule/http http://www.mulesoft.org/schema/mule/http/current/mule-http.xsd ">
<mule-ss:security-manager>
<mule-ss:delegate-security-provider name="memory-provider" delegate-ref="authenticationManager"/>
</mule-ss:security-manager>
<spring:beans>
<ss:authentication-manager alias="authenticationManager">
<ss:authentication-provider>
<ss:user-service id="userService">
<ss:user name="admin" password="admin" authorities="ROLE_ADMIN"/>
</ss:user-service>
</ss:authentication-provider>
</ss:authentication-manager>
<spring:bean id="jdbcDataSource" name="jdbcDataSource" class="org.enhydra.jdbc.standard.StandardDataSource" doc:name="jdbcDataSource">
<spring:property name="password" value="${db.password}"/>
<spring:property name="user" value="${db.user}"/>
<spring:property name="url" value="${db.url}"/>
<spring:property name="driverName" value="${db.driverName}"/>
</spring:bean>
</spring:beans>
<context:property-placeholder xmlns:context="http://www.springframework.org/schema/context" location="classpath:my.properties"></context:property-placeholder>
<jdbc:connector name="JDBCConnector" dataSource-ref="jdbcDataSource" validateConnections="true" queryTimeout="-1" pollingFrequency="0" doc:name="JDBCConnector"/>
<https:connector name="HTTPSConnector" cookieSpec="netscape" validateConnections="true" sendBufferSize="0" receiveBufferSize="0" receiveBacklog="0" clientSoTimeout="10000" serverSoTimeout="10000" socketSoLinger="0" proxyHostname="localhost" proxyPort="80" doc:name="HTTPSConnector">
<https:tls-key-store path="keystore.jks" keyPassword="Jembi#123" storePassword="Jembi#123"/>
</https:connector>
<flow name="RESTEntryPoint" doc:name="RESTEntryPoint">
<https:inbound-endpoint exchange-pattern="request-response" host="localhost" port="5000" connector-ref="HTTPSConnector" doc:name="HTTP">
<http:basic-security-filter realm="mule-realm"/>
</https:inbound-endpoint>
<response>
<custom-transformer class="org.jembi.rhea.transformers.RestfulHttpResponseToHttpResponseTransformer" doc:name="RestfulHttpResponseToHttpResponseTransformer"/>
<logger message="Transaction responce sent from entry point: #[groovy:return message.toString();] with payload #[groovy:return message.payload.toString();]" level="INFO" doc:name="Logger"/>
</response>
<logger message="Message recieved at entry point: #[groovy:return message.toString();] with payload #[groovy:return message.payload.toString();]" level="INFO" doc:name="Log raw message"/>
<custom-transformer class="org.jembi.rhea.transformers.HttpRequestToRestfulHttpRequestTransformer" doc:name="HttpRequestToRestfulHttpRequestTransformer"/>
<logger message="Structured message: #[groovy: message.payload.toString();]" level="INFO" doc:name="Log Structured Message"/>
<jdbc:outbound-endpoint exchange-pattern="request-response" queryKey="insertMsg" responseTimeout="10000" queryTimeout="-1" connector-ref="JDBCConnector" doc:name="Persist raw message">
<jdbc:query key="insertMsg" value="insert into inbound_messages (payload, timestamp) values (#[groovy: return message.payload.toString();], now());"/>
</jdbc:outbound-endpoint>
<choice doc:name="Choice">
<when expression="message.getInboundProperty('X-SENDING-APP') != null" evaluator="groovy">
<processor-chain>
<logger message="Propagating sending app header" level="INFO" doc:name="Log propagate sending app header"/>
<message-properties-transformer doc:name="Propagate sending app header">
<add-message-property key="X-SENDING-APP" value="#[header:inbound:X-SENDING-APP]"/>
</message-properties-transformer>
</processor-chain>
</when>
<otherwise>
<processor-chain>
<logger message="No sending app header detected" level="INFO" doc:name="Log no sending app header"/>
</processor-chain>
</otherwise>
</choice>
<message-properties-transformer scope="invocation" doc:name="Message Properties">
<delete-message-property key="queries"/>
<delete-message-property key="LOCAL_CERTIFICATES"/>
</message-properties-transformer>
<logger message="Structured message: #[groovy: message.payload.toString();] Full Message: #[groovy: message.toString();]" level="INFO" doc:name="Log Structured Message"/>
<vm:outbound-endpoint exchange-pattern="request-response" path="normalizationQueue" responseTimeout="10000" mimeType="text/plain" doc:name="Queue message"/>
</flow>
</mule>
The custom payload class
package org.jembi.rhea;
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;
import java.util.StringTokenizer;
public class RestfulHttpRequest implements Serializable {
private static final long serialVersionUID = 1L;
private String url;
private String body;
private String httpMethod;
// automatically extracted when a url is set
private Map<String, String> requestParams = new HashMap<String, String>();
// HTTPMethods
public static String HTTP_GET = "GET";
public static String HTTP_POST = "POST";
public static String HTTP_PUT = "PUT";
public static String HTTP_DELETE = "DELETE";
... getters and setters for the above ...
}
A: There doesn't seem to be a VM consumer for the endpoint in your config, the only thing Mule can do is 'store' messages for this queue, and this requires payloads to be serializable in turn. Either add a consumer to the 'normalizationQueue' or make sure the payload is serializable.
A: Ok, so I don't have an answer of exactly why this was happening but I managed to solve the problem. All I did was move the all the jdbc stuff out to a separate flow that gets called after this one. This solved my problem and all is now running smoothly.
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 7,847
|
{"url":"https:\/\/imathworks.com\/tex\/tex-latex-how-to-insert-a-double-quote-mark-in-texstudio\/","text":"# [Tex\/LaTex] How to insert a double quote mark in TeXStudio\n\ntexstudio\n\nI want to enclose my path (contains some spaces) with double quotes.\nI have tried pressing shift+' on English keyboard, but unfortunately TeXStudio translates this as two single quotes.\n\nShortly speaking, how to insert a double quote mark in TeXStudio?\n\nUpdate: TeXMakerX has changed its name to TeXStudio.\n\nIn the Shortcuts panel of \"Configure TexMakerX\" expand the Editor tab and the Special Key replacements; then either delete the row for the double quote or add one and pick a rarely used key (I used \u00a7): insert it in the first space, and insert \" in the second and third.","date":"2022-12-05 07:48:56","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.6361179351806641, \"perplexity\": 4919.720756954176}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-49\/segments\/1669446711013.11\/warc\/CC-MAIN-20221205064509-20221205094509-00264.warc.gz\"}"}
| null | null |
Cabaret Vintage
Contributed by Jessica Pollack
About a year ago I was wandering Queen West with my brother Adam and we stumbled upon Cabaret Vintage. Anyone who loves scouring the racks of a good vintage store as much as I do can relate to the thrill of new territory. Walking into Cabaret is like walking into another era: the jazzy sounds of crooners past and the boudoir feel of the store itself send you back to an age of meticulous style and high glamour.
The main floor of Cabaret showcases beautifully restored pieces for men and women (prices cap at about $300). Dresses, suits, shirts, skirts, pajamas and lingerie from 1900 to the 1960's line the racks on both sides of the store, ranging from casual to cocktail. The back of the main floor is all about accessories, from old-style hats and bowties to beaded evening purses and small collectable items. On my most recent visit I picked up an adorable white lace tunic for $60.
However, on that fateful spring day a year ago, the real fun for Adam and I began in the basement. "Cabaret Backstage", as the lower level is called, contains even more vintage clothing, accessories and a small selection of shoes. A rack of dramatic hats greets you as you descend, passing vintage hat boxes and luggage sets before emerging into the main room, which is smaller than the room upstairs but full of vintage goodies: think pleated palazzo pants, striking capes and vintage cowboy shirts. My brother and I must have tried on half the store and had a ball in the process.
Cabaret is owned by poet Thomas Dreyton, a seasoned vintage retailer, who is warm and personable and has a way of making customers feel like old friends. A few days ago when I was there he spoke to me as if we'd known each other for years and insisted I attend the stores 10th anniversary party. When Adam and I were there a year ago, Thomas was out but his son Tao was equally charming and fun to chat with about vintage and then some.
When Adam and I emerged from Cabaret I had a beautiful, embroidered tunic in hand and a new vintage haunt in mind. As it turns out, we had spent so much time in the store that 4 o'clock had come and gone and my car had been towed (the retrieving of which ended up costing far more than my lovely new tunic).
Sign up for our free email newsletter so you're always in the know. You can unsubscribe anytime or contact us for details.
Penny Arcade Vintage
Iridium Spa
Hair Granted
Studio Fontana
611 Purple Factory
State & Liberty
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 4,767
|
Q: How to validate string that can't have certain characters using parsley I need to validate input type text that doesn't contain these characters " ' < > I've tried:
pattern=".*[^"'<>].*"
and
pattern="[^"'<>]+"
but they don't work. It appears valid if there is one valid character.
A: The problem with your current attempts is that they try to match one (or at least one) character, which is not one of the listed as undesirable characters. Instead, you should make sure this applies to all characters from the start (^) til the end ($).
pattern="^[^"'<>]+$"
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 1,575
|
{"url":"https:\/\/www.germainelabs.com\/product\/aimstrip-tandem-total-cholesterol-control-solution\/","text":"# AimStrip\u00ae Tandem Total Cholesterol Control Solution, 2x3mL\n\n\\$50.00\n\nThe AimStrip\u00ae Tandem Total Cholesterol Control Solution is used to test the precision of the AimStrip\u00ae Tandem Lipid Pro\ufb01le Measuring System and to detect systematic analytic deviations that may arise from reagent or analytical variation and is for testing\n\nSKU: 77312 Category:\n\n## Description\n\nRef#77312 The AimStrip\u00ae Tandem Total Cholesterol Control Solution is used to test the precision of the AimStrip\u00ae Tandem Lipid Profile Measuring System and to detect systematic analytic deviations that may arise from reagent or analytical variation and is for testing outside the body (in vitro diagnostic use only).\n\n2 x 3mL\n\nAimStrip\u00ae Tandem Sales Flyer","date":"2022-12-01 23:27:30","metadata":"{\"extraction_info\": {\"found_math\": false, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8236806988716125, \"perplexity\": 12645.991433325631}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-49\/segments\/1669446710870.69\/warc\/CC-MAIN-20221201221914-20221202011914-00095.warc.gz\"}"}
| null | null |
Q: git verify commit signature visualization tool I'm looking for any tool or suggestion how can I display git commit signatures (username and email) for a specific set of local history.
I know I can use the verify-commit for each commit hash using git verify-commit <commit_hash> but doing this for every commit is really time consuming.
I've recently started working with git and am using signed commits. I'm using Bitbucket and they're still gathering interest regarding this information BSERV-9983
Since changing the information about the commit author locally is really easy I'm looking for a local tool that would show me this information easily. I was expecting this functionality with tools such as tortoise git "show log".
Could anyone explain why this kind of information isn't displayed in tools that visualize the "show log" information for or if I missed perhaps missed this information in one of the existing tools.
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 4,613
|
MSC Group Pledges Support from Across All Its Businesses for Hurricane Dorian Relief in the Bahamas
MSC Group Press Release
MSC Group's efforts, in addition to providing and delivering goods of primary necessity, will initially focus on semi-permanent prefabricated modular housing for the population as well as making available MSC geared ships for cargo relief service from the U.S. to the Bahamas. A high-level delegation comprised of members of MSC Group's U.S. senior management team as well as leadership of its philanthropic arm, the MSC Foundation, is on its way to Nassau for meetings with local officials, community leaders and key relief and recovery organizations to identify the most timely and urgent additional relief needs that the Group can support
MSC Group, one of the world's leading container shipping and logistics conglomerates as well as the parent company of MSC Cruises, today announced that a high-level delegation comprised of members of its U.S. senior management team as well as leadership of its philanthropic arm, the MSC Foundation, is on its way to Nassau, Bahamas. The objective of this urgent mission is to promptly identify first-hand and through engagement with local officials, community leaders and key relief and recovery organizations the most timely and urgent relief needs and how the Group can support the immediate and longer-term in-kind and funding needs of the local population and businesses as they look to rebuild in the aftermath of Hurricane Dorian.
Gianluigi Aponte, executive chairman and founder of MSC Group, said: "It is heart-breaking to see the impact and devastation that Hurricane Dorian has brought over the Bahamas and its population. The thoughts and prayers of my entire family are with the people of the Bahamas and their families and loved ones."
"As a family company and one that has lived off the sea for over 300 years, we are fully committed to supporting both immediate and longer-term relief and recovery efforts in the Bahamas. Our businesses have long been closely tied to the Bahamas and its people, with a rich history spanning over many decades. We now look forward to supporting their efforts to rebuild and recover in every way we can and through all our businesses."
MSC Group has already pledged its full support directly to Prime Minister of the Commonwealth of the Bahamas, Dr. The Hon. Hubert Alexander Minnis. Preliminary talks have in the meantime begun in earnest on how the Group and all of its arms – from cargo shipping to cruising as well as its charitable Foundation - can support, coordinate and own key logistic aspects of immediate and longer-term relief efforts and their funding. In this initial phase, in addition to providing and delivering goods of primary necessity, MSC Group's efforts will focus on semi-permanent prefabricated modular housing for the population of the areas most affected by the hurricane as well as making available geared ships for cargo relief service from the US to the Freeport and Marsh Harbour, Abaco container terminals.
MSC Group through its MSC Mediterranean Shipping Company has been present in the region for the past 20 years and it has long been the leading cargo import and export operator in the Bahamas. Its local headquarters are based in Freeport, Grand Bahama. The operation will involve a number of MSC hubs in the region, in the U.S., the Caribbean and beyond
Any statements and/or information provided in this section or in any press release published into it, is solely for general information purposes. Specifically statements and/or information provided are not meant as and cannot be construed to contain any legally binding offer by or on behalf of MSC that is open for acceptance.
Email: PRY-info@msc.com
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 3,529
|
Luis Alberto Lacalle Herrera (* 13. Juli 1941 in Montevideo) ist Jurist und war vom 1. März 1990 bis zum 1. März 1995 der 36. Staatspräsident von Uruguay.
Leben
Lacalle Herrera ist Enkelkind des berühmten Politikers Luis Alberto de Herrera (1873–1959). Er studierte Jura an der Universidad de la República in Montevideo. Im Alter von 17 Jahren begann er sich aktiv für Politik zu interessieren und trat der Partido Nacional bei. Nach seinem Studium bekleidete er viele politische Ämter, so war er Mitglied des Nationalen Komitees der Partido Nacional, dem Finanz-Komitee, dem Transportkomitee und dem öffentlichen Arbeitskomitee sowie Senator.
1987 wurde Lacalle zum Vizepräsidenten des Senats ernannt und am 19. Juli 1999 zum nationalen Vorsitzenden gewählt, der höchsten Position in der Partido Nacional. Als größter Erfolg während seiner Präsidentschaft ist die Schaffung des gemeinsamen Marktes Mercosur, zusammen mit den Präsidenten Carlos Menem, Argentinien, Fernando Collor de Mello, Brasilien und Andrés Rodríguez aus Paraguay anzusehen.
Lacalle Herrera ist Mitglied im Club of Rome. Er ist verheiratet mit Julia Pou, sie haben drei gemeinsame Kinder: Pilar, Luis Alberto (Präsident seit 2020) und Juan José.
Ehrungen
Für seine Tätigkeiten erhielt Lacalle die Ehrendoktorwürde von:
Universidad Complutense in Madrid
Hebräische Universität in Jerusalem
Universidad Autónoma in Guadalajara in Mexiko
Universidad Nacional in Paraguay
Weblinks
Biografie (spanisch)
Präsident (Uruguay)
Rechtsanwalt (Uruguay)
Absolvent der Universidad de la República
Mitglied des Partido Nacional (Uruguay)
Mitglied des Club of Rome
Träger des Verdienstordens der Republik Polen (Großkreuz)
Träger des Ordens des Befreiers San Martin
Träger des Nationalen Ordens vom Kreuz des Südens
Träger des Orden de Isabel la Católica
Honorary Knight Grand Cross des Order of St. Michael and St. George
Ehrendoktor der Universität Complutense Madrid
Ehrendoktor der Hebräischen Universität Jerusalem
Person (Montevideo)
Uruguayer
Geboren 1941
Mann
|
{
"redpajama_set_name": "RedPajamaWikipedia"
}
| 108
|
HM Revenue and Customs will begin to implement a wider, live pilot of Making Tax Digital.
Businesses with turnover above £85,000 will have to keep digital records for VAT purposes.
Further roll out of Making Tax Digital is expected to affect small businesses under the VAT threshold.
Here at Jack Ross Chartered Accountants we understand that when it boils down to it business is all about the numbers. Whether you make world-beating technology or you're a professional in the city, the numbers are the thing that help make sense of it all. Orders, sales, balance sheets, cashflow and, of course, the Holy Grail: profit. Jack Ross Chartered Accountants recognise the simple fact that numbers make the business world go round. And, as you would expect, our experienced chartered accountants cover every aspect of the numbers game, from accounting support through to succession planning and pension advice. You name it we've got it covered. But that's not all. Here at Jack Ross we make it our business to try to make sure your numbers get better and better. We plan, advise and help you improve your operation so that you can set and achieve profit target. So with Jack Ross Chartered Accountants by your side you can bank on the numbers adding up.
From time to time, we send out important information about our work, business and money matters. We don't want you to miss out on any of our essential updates. Complete our simple form below to join our mailing list.
I have run my own small law firm for over 25 years. I appointed Umar and Jack Ross as company accountants around 3 years ago and they consistently delivered straightforward, plain-speaking accountancy advice at very reasonable cost. Jack Ross has saved both me and my company many thousands of pounds. I heartily recommend Umar and his company without reservation.
If you need your figures to work for you and look good, then Umar is the man to mould them correctly. An inner city business with outer city costs without the loss of any professionalism. Constantly upto date with the latest financial regulations to make your life easy. I couldn't recommend Jack Ross high enough to make your money work for you; accounting made simple!
Jack Ross Chartered Accountants take the stress away from accounting. By accepting the advice that they give I have peace of mind that I have made the correct choices regarding accounting and tax and so would recommend Umar / Jack Ross to anyone looking for a good accountant.
|
{
"redpajama_set_name": "RedPajamaC4"
}
| 6,181
|
Vitéz dr. szilvágyi Benárd Ágost (Budapest, 1880. január 3. – Balatonkenese, 1968. június 22.) orvos, eszperantista, keresztényszocialista politikus, a trianoni békeszerződés egyik aláírója az első világháborúban vesztes Magyar Királyság képviselőjeként.
Életpályája
A nemesi származású szilvágy Benárd családban született. Édesapja, szilvágyi Benárd Lajos (1827–1885), honvédelmi minisztérium osztálytanácsosa, anyja szeniczei Gáspár Emília (1838–1907) volt. Benárd Ágostnak az apja, Benárd Lajos 1877. július 22-én I. Ferenc József magyar királytól "szilvágyi" előnévvel magyar nemességet és címeradományt kapott. Az anyai nagyszülei szeniczei Gáspár András (1804–1884), 1848-49-iki honvédtábornok, 1848-49-ik biharmegyei honvédegyesület elnöke, és biharmegyei bizottsági tag, és téglási Ercsey Emília (1812–1896) voltak.
Benárd Ágost a piaristáknál tanult, majd az orvosi diploma megszerzése után belgyógyász és sebészgyakornok volt. Az első világháború kitörésekor bevonult és összesen ötven hónapot töltött harctéren, mint hadiorvos. A háború végén már a győztes őszirózsás forradalom ellen fegyveres szervezkedésbe kezdett, egyúttal pedig bekapcsolódott a különböző jobboldali politikai mozgalmak életébe is. Amikor Károlyit a kommunisták megbuktatták és kikiáltották a Magyarországi Tanácsköztársaságot, Benárdot már másnap elfogták és halálra ítélték. A végrehajtás elől sikerült megszöknie és Bécsben csatlakozott a helyi magyar ellenforradalmi erőkhöz, ugyanakkor Ausztriában is fegyveres szervezkedésbe kezdett az ottani kommunisták ellen, sikerrel.
Szociális érzéke hamar kitűnt; miközben az I. kerületi Munkásbiztosító Pénztár főorvosa lett, Kelenföldön és Lágymányoson is ingyenes betegellátást biztosított a szegényeknek. Tagja lett a budai keresztényszocialista mozgalomnak, s így a Tanácsköztársaság bukása után kinevezték a Munkásbiztosító Pénztár igazgatójának, majd a Huszár-kormány megalakulásakor a népjóléti és munkaügyi minisztérium adminisztratív államtitkára lett; a Simonyi-Semadam-kormányban és Teleki Pál első kormányában is ugyanezen terület miniszterévé nevezték ki. 1920 és 1922 között a KNEP nemzetgyűlési képviselője, az 1922-es nemzetgyűlési választáson pedig listán jutott be a Parlamentbe a Keresztény Egység Táborának programjával.
Legjelentősebb politikai tette a trianoni békeszerződés aláírása volt a kormány képviseletében Magyarország részéről 1920. június 4-én Drasche-Lázár Alfréd rendkívüli követ és államtitkár kíséretében. A tiltakozás és ellenállás gesztusaként állva írta alá a dokumentumot.
A békeszerződés aláírása után tagja maradt a Teleki-kabinetnek egészen annak bukásáig. Benárd Ágost, miután a Bethlen-kormányba már nem kapott meghívást, a Nép című jobboldali újság főszerkesztője lett, melynek hasábjain élénk publicisztikai tevékenységet is kifejtett.
1924-ben kilépett a kormányt támogató Wolff-pártból, mert véleménye szerint a kormány elvesztette keresztényszocialista alapjait. Gömbös Gyula kormányának megalakulásával visszatért a pártpolitikába és a Nemzeti Egység Pártjába is belépett, ahol a budai szervezetek munkájában szerzett érdemeket. 1935–39 között pártja színeiben a veszprémi választókerületet képviselte az Országgyűlésben.
A második világháború kitörése után végleg visszavonult az aktív politikai élettől. 1968. június 22-én halt meg.
Irodalom
Vizi László Tamás: A trianoni békediktátum aláírói az első világháborúban, Sorsok, frontok, eszmék. Tanulmányok az első világháború 100. évfordulójára. Főszerkesztő: Majoros István Szerkesztők: Antal Gábor, Hevő Péter, M. Madarász Anita, ELTE, BTK, Budapest, 2015.
Jegyzetek
Források
Rainer Pál: A przeworski vasúti hídtól a trianoni palotáig, Trianontól Veszprémig: dr. vitéz szilvágyi Benárd Ágost és veszprémi kapcsolatai
További információk
Benárd Ágoston rövid életrajza az Életrajzi Lexikonban
Az 1935–39-es Országgyűlési Almanach 218. oldal, képpel
Tarján M. Tamás: 1880. január 3. - Megszületik Benárd Ágost, a trianoni békediktátum egyik aláírója, rubicon.hu
KNEP-tagok
NEP-tagok
Magyar orvosok
Magyarország államtitkárai
Magyarország miniszterei
Országgyűlési képviselők (KNEP)
Országgyűlési képviselők (Egységes Párt)
Országgyűlési képviselők (Nemzeti Egység Pártja)
Országgyűlési képviselők (1920–1922)
Országgyűlési képviselők (1922–1926)
Országgyűlési képviselők (1935–1939)
1880-ban született személyek
1968-ban elhunyt személyek
Budapesten született személyek
Magyar eszperantisták
|
{
"redpajama_set_name": "RedPajamaWikipedia"
}
| 2,447
|
\section{Acknowledgements}
This material is based upon work supported by the Ministry of Trade, Industry and Energy (MOTIE, Korea) under Industrial Technology Innovation Program (No.10063424, Development of distant speech recognition and multi-task dialog processing technologies for in-door conversational robots).
\section{Conclusion}
We proposed a novel meta-learning scheme for short duration speaker recognition. In order to simulate practical settings in training, we propose an episode composition in which the support and query set have different speech lengths, and combined the meta-learning scheme with global classification for obtaining discriminative embedding space. We validate our model on various speaker recognition tasks on VoxCeleb datasets, and obtain the state-of-the-art performances on short utterance speaker recognition.
\section{Experiments}
\subsection{Dataset} We experiment our method under various settings on VoxCeleb datasets. VoxCeleb1 \cite{nagrani2017voxceleb} and VoxCeleb2 \cite{chung2018voxceleb2} are large scale text-independent speaker recognition datasets, each of which consists of 1251 and 5994 speakers, respectively. The two datasets have disjoint sets of speakers. We measure the speaker verification results with equal error rate (EER) and the minimum detection cost function (minDCF or $C^{min}_{det}$) at $P_{target}$ = 0.01. We score the verification trials using cosine similarity. For unseen speaker identification, we report the average accuracy over 1000 randomly generated episodes with 95\% confidence intervals.
\subsection{Experiment setting}
We use 40-dimensional log mel-filterbank (MFB) features with a frame-length of 25 ms as input features, which overlaps the adjacent frames by 15ms. We mean-normalized the inputs along the time-axis without any voice activity detection (VAD) or data augmentation. In training episodes, we perform 1-shot 100-way classification, while setting the number of query examples for each class to 2. For memory efficiency, we set length of support set to 2 seconds and the length of the query to between half and full of the support length. For vanilla training, we use fixed length speech of 2 seconds. For frame-level feature extraction, we use ResNet34 with 32-64-128-256 channels for each residual stage. Extracted features are aggregated with TAP and are passed through the fully-connected layer to obtain 256-dimensional embeddings. We use SGD optimizer with the Nesterov momentum of 0.9 and set the weight decay to 0.0001. We set initial learning rate to 0.1 and decay it by a factor of 10 until convergence. Every experiment is done with a single NVIDIA 2080Ti GPU.
\subsection{Speaker verification for full utterance}
We first examine the results of full-duration SV to analyze the advantage of our training scheme. The results in Table \ref{tbl:full_comparison} show the model performances evaluated on VoxCeleb1 \cite{nagrani2017voxceleb} original test trial. For fair comparison, we report baselines without VAD and data augmentation except for x-vector \cite{snyder2018x} based models \cite{okabe2018attentive}. On VoxCeleb1, our proposed model outperforms previous state-of-the-arts models. For the same backbone (i.e. ResNet34), our model achieves superior performance without any aggregation and margin-based metrics. In general, additional aggregation and margin-based metric lead to the better performance. Further, our model outperforms time delay neural network (TDNN) with attentive statistic pooling \cite{okabe2018attentive}. Our model also consistently outperforms baseline models, on a larger VoxCeleb2 \cite{chung2018voxceleb2} dataset.
\subsection{Speaker verification for short utterance}
We first describe the experimental settings, then report the results of our model and other previous state-of-the-arts models for short utterance. We test our model on two datasets: 1) The original VoxCeleb1 test trial which is the same datset used to evaluate full utterances, and 2) VoxCeleb1 full dataset (1251 speakers in total). We use full-duration enrollment utterances, but randomly cropped the test utterances by 1, 2 and 5 seconds. If a test utterance is shorter than required we set it to the target length by duplicating its own segment.
To show the efficacy of our method, we perform an ablation study with VoxCeleb1 on upper rows of Table \ref{tbl:short_duration}. We observe that TDV \cite{hajavi2019deep} which aims at short segments outperforms temporal average pooling with slight margin. However, the result in the third row shows that model only trained with meta-learning outperforms TDV and conventionally trained model (See first row). Further, our proposed model which combines meta-learning with global classification obtains the best performance against other baselines trained on VoxCeleb1 with large margin.
For the comparison against other previous state-of-the-arts models \cite{hajavi2019deep, xie2019utterance, gusev2020deep}, we trained the model with VoxCeleb2 dataset and tested on VoxCeleb1 full dataset. We use the same trial as described in \cite{gusev2020deep}. For every speaker, we randomly generate trials for 100 positive pairs and 100 negative pairs. Bottom rows in Table \ref{tbl:short_duration} show that our model outperforms baseline models with significantly large margin for 1-2 seconds. Since our model does not use any aggregation techniques or margin-based optimization, we can say that its impressive improvement mostly comes from our combined learning scheme. Furthermore, note that our model uses only 40-dimensional features but the baselines use features with more than twice the dimensions. Thus our model may obtain larger performance if we use higher dimensional inputs. For the comparison against \cite{hajavi2019deep}, since UtterIdNet is not publicly available, we compared it using TDV instead.
Our performance gain is due to two reasons. First, we compose training episode with imbalance length pairs, where utterance length of query largely varies and is shorter than the length of the utterances in the support set. In Table \ref{tbl:ablation_length}, we observe that the proposed imbalance length pairs setting outperforms both equal length pairs and fixed long-short pairs. Note that \cite{wang2019centroid, anand2019few} use equal-length pairs. In our proposed setting, model comes across various length pairs at each episode, and then is meta-learned such that it can well-match the imbalanced length pairs and become robust to speech duration. Secondly, to learn more discriminative embeddings, we classify both the support and the query samples against the entire set of training classes. Unlike the conventional method which classifies the utterance of same length for each batch, our combined scheme classifies utterances of different lengths at once. It results in reduction of variance caused by speech duration and enhances inter-class clustering. By combining these two components, our proposed model achieves the state-of-the-arts performance on short utterances, while yielding good performance on full utterances.
\input{part_tex/table/unseen_SI}
\subsection{Unseen speaker identification}
We now evaluate the performance of our model on unseen speaker identification tasks. To analyze our model, we trained the model on VoxCeleb2 dataset and tested it on the whole VoxCeleb1 dataset. As done in the verification experiments, we enroll with one utterance for each speaker and set the enrollment utterances equally to 5 seconds. Therefore, we randomly sample $N$-speakers from VoxCeleb1 dataset, and then sample 1 and 5 utterances from each speaker for enrollment and test utterance, respectively. For utterance shorter than required, we handled it as done in 4.4. As shown in Table \ref{tbl:unseen_SI}, our proposed method outperforms vanilla training in every setting. The performance gap increases as the number of speakers grows. Generally, the performance of identification decreases as the number of speakers becomes larger and the length of utterance becomes shorter.
\section{Introduction}
Speaker recognition (SR) with short utterance is an important challenge since in practical settings, where test utterance could be as short as $1$ to $5$ seconds. In the past decades, i-vector \cite{dehak2010front} combined with probabilistic linear discriminant analysis (PLDA) \cite{garcia2011analysis, prince2007probabilistic} has been a dominant approach for SR. However, recently, deep neural network (DNN) based methods have shown to outperform i-vector based systems, achieving better performance on short utterances \cite{bhattacharya2017deep} over non-deep learning counterparts. Although the recent advances in deep learning make it possible to obtain impressive performance on SR such as speaker verification (SV) and identification (SI), obtaining sufficiently high performances under practical settings (e.g. short duration SR, unseen speaker SI) remains as a challenging problem. Some recent works propose to tackle these issues. Regarding SR with short utterances, \cite{gao2018improved, hajavi2019deep, MSA} introduce task-specific feature extractor which allows to extract more information from short utterances. \cite{xie2019utterance, hajavi2019deep, MSA} introduce aggregation methods to attend to more informative frames in frame-level features. In addition to these approaches, various attempts have been made to deal with short utterances. However, they do not provide a substantial performance improvements under realistic scenarios.
In this work, we aim to tackle this problem by meta-learning with imbalance length pairs. Specifically, we organize each episode such that it contains a support set of long utterances and a query set of variable short utterances. By optimizing sequence of episodes, we can train our network to match long-short utterance pairs well over conventional (referred to as 'vanilla') training, which optimizes for the same-length utterances. Also, variable length of query utterances allows the model to consider various practical situations during meta-training and thus allows us to obtain a more length-robust model for SR.
Yet, a crucial problem here is that the query samples may not be discriminative against the entire set of classes (speakers) in the training set. Thus, we further classify every sample in each episode against the whole training classes (referred to as 'global classification'). In doing so, an embedding of a short utterance becomes discriminative against other classes and is matched to its own long utterance at the same time (See Figure \ref{fig:compare_training}). Also, having a consistent framework across training and test phases, by targeting unseen speakers during training, allows the model to verify and identify unseen speakers well (See Figure \ref{fig:network}).
Our proposed learning scheme uses a ResNet34 \cite{he2016deep} as the base network architecture, which is widely used for SR. To verify the efficacy of our proposed model, we use a simplistic framework for implementation of our model, using naive pooling methods such as Temporal Average Pooling (TAP) for aggregation and non-margin metric loss. Also, we use 40-dimensional log mel-filterbank features as inputs to reduce time complexity, since the execution time should be short in realistic settings. We experiment on various settings, such as short utterance SV and unseen SI including conventional experimental settings (full utterance SV). We use VoxCeleb datasets \cite{nagrani2017voxceleb, chung2018voxceleb2} to directly compare with other models. Our model obtains the state-of-the-art results on short utterance SV and unseen SI on various datasets, although we use simpler implementations with smaller-dimensional features.
Our main contributions are as follows:
\vspace{-0.03in}
\begin{itemize}
\item We propose a meta-learning framework for short utterance speaker recognition, in which each episode is composed of support and query pairs with imbalance length utterances.
\item We further propose a training objective that combines the episodic classification loss with the global classification loss, which allows to obtain well-matched and discriminative embeddings.
\item We validate our model on VoxCeleb datasets under various realistic scenarios, including speaker verification with short duration and unseen speaker identification, and achieve the state-of-the-art results.
\end{itemize}
\section{Method}
In this work, we consider a practical setting for unseen speaker recognition, where the length of test utterance is shorter than the enrollment utterance. To solve this problem, the model should not only match a pair of utterances with different lengths from the same speaker, but also be able to recognize unseen speakers not included in training set well. To this end, we introduce a metric-based meta-learning scheme, in which support and query sets consist of long and short utterances, respectively. We also classify both support and query sets over the entire set of training classes rather than training them only for the given set of classes to obtain even more discriminative embeddings.
\begin{figure}[t!]
\centering
\vspace{-0.45in}
\includegraphics[width=0.90\linewidth]{part_tex/pdfs/gg3.pdf}
\vspace{-0.05in}
\caption{Overview of proposed meta-learning scheme on 3-way 1-shot task. We denote the features for each speaker using different colors.}
\vspace{-0.24in}
\label{fig:network}
\end{figure}
\subsection{Problem Definition}
The goal of few-shot unseen short utterance speaker recognition problem is to recognize test utterance $\tilde{\mathbf{x}}_i$ which is as short as 5 seconds as $i$-th speaker, given only few utterances $x_i$ from each speaker for enrollment, whose lengths could be longer than 5 seconds. Since the number of enrollment examples per class is too small, conventional supervised learning may obtain suboptimal performance due to overfitting.
Thus, we tackle the problem with episodic meta-learning where we learn a model over diverse tasks, such that the model learns to recognize the speaker for \emph{any} utterances, while considering a different classification problem at each time. To this end, we first compose task episodes with support set and query set. We first randomly sample $N$ classes from the given dataset, and then sample $K$ and $M$ examples from each class as the support set and query set, respectively. We define the task sampling distribution as $p(\tau)$. As a result, we have a support set $\mathcal{S} = \{(\mathbf{x}_i,y_i)\}_{i=1}^{N \times K}$ and a query set $\mathcal{Q} = \{(\tilde{\mathbf{x}}_i,\tilde{y}_i)\}_{i=1}^{N \times M}$, where $y,\tilde{y} \in \{1,\dots,N\}$ are the class labels.
\subsection{Meta-learning with imbalance length pairs}
Despite large progress on speaker recognition, speaker recognition with short utterances remains to be very challenging in realistic settings due to length mismatch between training and test utterances. As shown in \cite{kanagasundaram2019study}, in conventional training setting, this can result in performance degeneration for short utterances. A prior work~\cite{gusev2020deep} tackles this problem by training the model with short segments only, which outperforms models with long utterances, but the model obtains relatively poor performances on long speech. This trade-off is problematic in realistic settings where the test utterances could be given in any lengths.
How can we then train a length-robust model for speaker recognition? To tackle this problem, we construct the training episodes for our meta-learning framework, to contain imbalanced length pairs. In practical settings, while we can enroll long utterances to the system, the test utterances could come in any variable lengths. To simulate this situation in training phase, we set the length of the utterance for the support set to be longer than the query utterances. On the other hand, we construct the query sets such that they have variable lengths (1 to 2 seconds), that are smaller than the lengths of the support utterances.
As with \cite{snell2017prototypical}, we compute the class prototypes by averaging over the support set and enforce the query examples to become closer to their own prototypes. First, we define $\mathcal{S}_c$ as the set of support examples in class $c$ and then compute the prototype of each class $c = 1,\dots,N$ in episode:
\begin{align}
P_c = \frac{1}{|\mathcal{S}_c|} \sum_{x \in \mathcal{S}_c} f_\theta(\mathbf{x})
\end{align}
\noindent Then, we compute the distance between each query and its corresponding prototype. In this work, we use cosine similarity as the distance metric:
\begin{align}
&d(f_\theta(\tilde{\mathbf{x}}_i), P_c) = \frac{f_\theta(\tilde{\mathbf{x}}) \cdot P_c}{\|P_c\|_2} =\|f_\theta(\tilde{\mathbf{x}})\|_2 \cdot cos(\theta_{i,c})
\label{eq:dist_metric}
\end{align}
\noindent where it can be seen as the cosine similarity with an input-wise length scale. We can then obtain the probability of the sample belonging to each class $c$ as follows:
\begin{align}
p(\tilde{y}=c|\tilde{\mathbf{x}},\mathcal{S};\theta) =
\frac{\exp(d(f_\theta(\tilde{\mathbf{x}}), P_c))}
{\sum_{c'=1}^C \exp(d(f_\theta(\tilde{\mathbf{x}}), P_{c'}))}
\label{eq:conf}
\end{align}
\noindent Then, we compute the loss for each episode:
\begin{equation}
L_e^\tau(\theta) = \frac{1}{|Q|}\sum_{(\tilde{\mathbf{x}},\tilde{y}) \in Q} -\log p(\tilde{y}|\tilde{\mathbf{x}},S;\theta)
\label{eq:episode_loss}
\end{equation}
\subsection{Global classification}
With the proposed meta-learning scheme, we can make variable short utterance to be close to relatively long utterance. However, optimization only within each episode may fail to learn a discriminative embedding space. Inspired by \cite{kye2020transductive}, we additionally classify support and query samples against whole training classes. By optimizing support and query samples of different length at once, we can reduce per class variance according to utterance duration. At the same time, we can make the discriminative embeddings over all other classes in training set. Following ~\cite{kye2020transductive}, we assume a set of global prototypes for each class:
\begin{align}
\omega = \{\mathbf{w}_c \in \mathbb{R}^l|c=1,\dots,C'\}
\label{eq:global_prototype}
\end{align}
\noindent where $C'$ is the number of classes in entire training set and $l$ is dimension of embedding. For $(\mathbf{x}, \mathbf{y}) \in S \cup Q$, we predict the probability of the sample $x$ being an instance of class $y$ as follows:
\begin{equation}
p(\mathbf{y}|\mathbf{x};\theta,\omega) =
\frac{\exp(d(f_\theta(\mathbf{x}), {\mathbf{w}_{\mathbf{y}}}))}
{\sum_{c=1}^{C'} \exp(d(f_\theta(\mathbf{x}), {\mathbf{w}_{c}}))}
\label{eq:pixel_pred}
\end{equation}
and compute the global loss:
\begin{equation}
L_g^\tau(\theta,\omega) = \frac{1}{|S|+|Q|}\sum_{(\mathbf{x},\mathbf{y}) \in S \cup Q} -\log p(\mathbf{y}|\mathbf{x};\theta,\omega)
\label{eq:global_loss}
\end{equation}
\noindent where $d$ is the distance metric described in Eq. 2. Note that the global classification is conducted on both the support and query samples. Finally, our learning objective combines the episode loss in Eq. 4 with the global loss in Eq. 7.
\begin{equation}
L(\theta, \omega) =
\mathbb{E}_{p(\tau)}\left[
L_e^\tau(\theta) + \lambda L^{\tau}_{g}(\theta,\omega)\right]
\label{eq:total_loss}
\end{equation}
\noindent Here, $\lambda$ is the hyperparameter for loss balancing and we simply use $\lambda=1$. In order to compute the final objective, we sample a single task and then average loss according to the task distribution $p(\tau)$ during training. This combined objective allows our model to match imbalance length pairs, while these pairs are classified over whole training classes together.
\section{Related Work}
\textbf{DNN based speaker embedding:} Recently, DNN based methods \cite{variani2014deep, li2017deep, snyder2018x, zhang2018text, disentangle, MIRNet}, have achieved impressive performance on speaker recognition (SR), outperforming traditional i-vector systems. The key components of DNN based systems are feature extractors, aggregation of temporal features and optimization. First, many SR systems use 1D or 2D convolutional neural networks and recurrent neural networks as feature extractors, which make it possible to extract the time and frequency properties of the speaker features (MFCC, mel-filterbank). Then these extracted frame-level features are summarized into fixed-length vectors using aggregation methods which aims to capture intrinsic speaker information, such as attentive statistic pooling (ASP) \cite{okabe2018attentive}, self attentive pooling (SAP) \cite{cai2018exploring}, learnable dictionary encoding (LDE) \cite{cai2018exploring} and spatial pyramid encoding (SPE) \cite{jung2019spatial}. Then, these utterance-level features are used as inputs for with softmax classifiers with fully-connected layers. However, since softmax classifier may not obtain sufficiently discriminative embedding spaces, recent methods such as A-softmax \cite{liu2017sphereface}, AM-softmax \cite{wang2018additive} and AMM-softmax \cite{deng2019arcface} propose angular margin-based metric to reduce per-class variance.
\begin{figure}[t!]
\vspace{-0.45in}
\centering
\hfill
\subfigure[Vanilla training]{\includegraphics[clip,
trim=0.5cm 0.9cm 0.5cm 0.3cm,
width=3.9cm]{part_tex/pdfs/fig1_1.pdf}}
\hfill
\subfigure[Ours]{\includegraphics[clip,
trim=0.5cm 0.9cm 0.5cm 0.3cm, width=3.9cm]{part_tex/pdfs/fig1_2.pdf}}
\vspace{-0.15in}
\hfill
\caption{
Comparison between (a) vanilla training and (b) meta-learning with global classification. We visualize t-SNE embeddings on 10-way 5-shot task. Prototype is average of support set which consists of 5 seconds utterances. Best viewed in color.}
\vspace{-0.25in}
\label{fig:compare_training}
\end{figure}
\textbf{Metric-based meta-learning for few-shot classification:} Speaker verification and speaker identification for unseen speaker are essentially few-shot learning tasks. The goal of few-shot classification is to correctly classify unlabeled query (test) examples with only a few labeled support (enrollment) set per class. Since labeled data is scarce, conventional supervised learning in this case is prone to overfitting, and thus recent works resort to \emph{meta-learning}, which learns a model that generalize over diverse tasks to consider larger number of problems for training. One of the most popular meta-learning approach is metric-based meta-learning \cite{kye2020transductive, vinyals2016matching, snell2017prototypical, liu2019fewTPN}, which learns an embedding space that is learned to be discriminative for any given tasks. Several recent works~\cite{wang2019centroid, anand2019few} use Prototypical Networks~\cite{snell2017prototypical}, which is such a metric-based meta-learning model, for speaker recognition. Compared with above methods, our algorithm uses not only Prototypical Networks with imbalance length pairs but also global classification over the whole samples in episode.
\textbf{Speaker recognition for short utterance:} Speaker recognition for short utterance is especially challenging since the input data contains very little information about the speaker. To tackle this problem, \cite{gao2018improved, hajavi2019deep, xie2019utterance, MSA} propose aggregation techniques to extract as much information as possible from short speech. NetVLAD / GhostVLAD \cite{xie2019utterance} use attention-based pooling with learnable dictionary encoding, and time-distributed voting (TDV) \cite{hajavi2019deep} utilizes short-cut connection information with weighted sum of them, which yields impressive performance gains over GhostVLAD for short utterance SR. However, TDV obtains good performance only on short utterances and relatively low performance on long utterances. Other approaches to tackle the short utterance SR use knowledge distillation \cite{jung2019short}, generative adversarial networks \cite{zhang2018vector} and angular margin-based method \cite{huang2018angular, gusev2020deep}.
|
{
"redpajama_set_name": "RedPajamaArXiv"
}
| 9,517
|
KU ScholarWorks
School of Business Scholarly Works
Order imbalance and stock returns: Evidence from China
orderimbalance_China.pdf (762.3Kb)
Shenoy, Catherine
We investigate the relation between daily order imbalance and return in the Chinese stock markets of Shenzhen and Shanghai. Prior studies have found that daily order imbalance is predictive of subsequent returns. On the Chinese exchanges we find autocorrelation in order imbalances is similar to that of the New York Stock Exchange as reported by Chordia and Subrahmanyam (2004). We also find a strong contemporaneous relation between daily order imbalances and return. However, we do not find evidence that order imbalances predict subsequent returns. We attribute the difference in predicative power to differences in trading mechanisms on the two exchanges and to differences in the share turnover rate.
School of Business Scholarly Works [205]
Center for East Asian Studies Scholarly Works [361]
Catherine Shenoy and Zhang, Ying Jenny, Order imbalance and stock returns: Evidence from China, The Quarterly Review of Economics and Finance 47 (2007) 637-650.
Items in KU ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
We want to hear from you! Please share your stories about how Open Access to this item benefits YOU.
Contact KU ScholarWorks
KU Libraries
1425 Jayhawk Blvd
Search KU ScholarWorks
All of KU ScholarWorksCommunities & CollectionsThis Collection
The University of Kansas prohibits discrimination on the basis of race, color, ethnicity, religion, sex, national origin, age, ancestry, disability, status as a veteran, sexual orientation, marital status, parental status, gender identity, gender expression and genetic information in the University's programs and activities. The following person has been designated to handle inquiries regarding the non-discrimination policies: Director of the Office of Institutional Opportunity and Access, IOA@ku.edu, 1246 W. Campus Road, Room 153A, Lawrence, KS, 66045, (785)864-6414, 711 TTY.
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 2,769
|
Q: Is it possible to remove URL in popup I have an application where I have used javascript alert popup box.
When I open that in smart device like iPhone then it shows the URL on top as shown below:
So is there a way to remove that URL.?
A: If you could find a way to set the delegate of that UIAlertView you can try to use
*
*(void)willPresentAlertView:(UIAlertView *)alertView
You can then change the title from there if this would work.
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 76
|
\section{Introduction}
The formation of interstellar complex organic molecules \citep[iCOMs;][]{ceccarelli2017}, namely organic species composed of more than five atoms \citep{Herbst2009}, is of particular importance in astrochemistry.
Indeed, iCOMs are detected in various regions, such as star-forming regions \citep{Rubin1971,cazaux2003,Kahane2013,Mendoza2014,belloche2017,ligterink2017,Mcguire2018}, circumstellar envelopes of AGB stars \citep{cernicharo2000} or shocked regions \citep{Arce2008,codella2017,lefloch2017}.
Understanding how these quite "complex" compounds could be formed in the harsh conditions of ISM, \textit{i. e.} at very low temperatures and densities, is a challenging question.
So far, two main, and not incompatible, chemical theories have been invoked: solid-state chemistry \citep{Garrod2006,Woods2013,fedosev2015,oberg2016} and gas-phase reactivity \citep{Charnley1992,Balucani2015,Skouteris2018}.
In this article we focus on acetaldehyde (CH$_3$HCO), one of the first molecules to have been detected in the interstellar medium (ISM) \citep{Gottlieb1973} and one of the most abundant iCOMs.
Indeed, acetaldehyde is almost ubiquitously detected, in cold ($\sim 10$ K) and warm ($\geq 50$ K) environments \citep[e.g.]{Blake1987, cazaux2003, Bacmann2012, Vastel2014, lefloch2017, Sakai2018, Bianchi2019, Lee2019, Csengeri2019, Scibelli2020, DeSimone2020}.
Furthermore, acetaldehyde has a great prebiotic potential, being a possible precursor for several carbohydrates \citep{Pizzarello2004,cordova2005} and acrolein (CH$_2$CHCHO).
\begin{figure*}
\includegraphics[scale=0.47]{summary-black.pdf}
\caption{Scheme of the four gas-phase formation routes of acetaldehyde according to the reactions proposed in the literature and listed in \S ~\ref{sec:reviews}.
The boxes in red mark the two reactions studied in the present work.}
\label{fig:scheme-react}
\end{figure*}
The latter is a crucial intermediate in the prebiotic synthesis of various amino acids \citep{cleaves2003}.
It can also be considered as a condensation agent in the prebiotic formation of deoxyribonucleosides \citep{teichert2019}, a major component of DNA, and was used by Adolph Strecker in 1850 \citep{strecker1850} in his famous amino acid synthesis to form alanine.
Despite the ubiquity of acetaldehyde in the molecular ISM and its potential prebiotic importance, there is still not a consensus of how this molecule is formed.
It could be the product of the chemistry occurring on the grain ice surfaces \citep[e.g.][]{Garrod2006,Jones2011,Bennett2005,Martin-Domenech2020} or synthesised in the gas-phase \citep[e.g.][]{charnley2004,Vastel2014,Codella2020,DeSimone2020}.
In this article, we focus on the gas-phase formation routes that have been proposed in the literature.
Our aim is to provide a completely validated network of reactions that form acetaldehyde and that can then be used in astrochemical models.
The article is organised as follows.
In Section \ref{sec:reviews}, we provide a summary of the reactions present in the literature.
For the cases where no experimental or theoretical estimates exist in the range of temperatures and pressures valid in the molecular ISM, we carried out new theoretical computations, both on the electronic energy and kinetics.
Section \ref{sec:methods} describes the adopted computational methods and Section \ref{sec:results} the results of the computations.
In Section \ref{sec:discussion}, we discuss the implications of our new computations, and provide guidelines on the reactions and rate constants (please note that astronomers tend to write "rate coefficients", which is an equivalent terminology) to be used in astrochemical models, after the comparison with astronomical observations.
Section \ref{sec:conclusions} concludes the article.
\section{Gas-phase routes to acetaldehyde formation}
\label{sec:reviews}
Several gas-phase acetaldehyde formation reaction paths have been proposed in the literature.
They involve ion-molecule and neutral-neutral reactions.
Most of them are included in the two major databases used by astrochemical modellers, KIDA \citep[Kinetic Database for Astrochemistry:][]{Wakelam2012} and UDfA \citep[UMIST Database for Astrochemistry:][]{mcelroy2013}.
Among the various reaction paths listed in those two databases two are potentially efficient to synthesise acetaldehyde in the gas-phase:
\medskip
\begin{tabular}{ll}
(1) & CH$_3$OCH$_3$ + H$^+$ $\rightarrow$ CH$_3$CHOH$^+$ + H$_2$\\
& CH$_3$CHOH$^+$ + e$^-$ $\rightarrow$ CH$_3$CHO + H\\
(2) & C$_2$H$_5$ + O($^3$P) $\rightarrow$ CH$_3$CHO + H\\
\end{tabular}
\medskip
A third path was recently proposed by \cite{Vasyunin2017} and it is also reported in the UDfA database:
\medskip
\begin{tabular}{ll}
(3) & CH$_3$OH + CH $\rightarrow$ CH$_3$CHO + H\\
\end{tabular}
\medskip
A fourth path that starts from ethanol (CH$_3$CH$_2$OH) was finally proposed by \cite{Skouteris2018}:
\medskip
\begin{tabular}{ll}
(4) & CH$_3$CH$_2$OH + OH $\rightarrow$ CH$_3$CHOH + H$_2$O\\
& CH$_3$CHOH + O $\rightarrow$ CH$_3$CHO + OH\\
\end{tabular}
\medskip
The UDfA database also reports the reaction CH$_3$CHCH$_2$ + OH $\rightarrow$ CH$_3$CHO + CH$_3$.
However, the formation of acetaldehyde from this reaction is very unlikely, as it would require several steps in the reaction path and acetaldehyde would certainly be a (very) minor product.
We, therefore, do not consider this path further.
\medskip
While paths (1) and (4) were studied by \cite{vazart2019} and \cite{Skouteris2018}, respectively, via theoretical computations of the electronic energy and kinetics of the involved reactions, paths (2) and (3) have not been validated yet neither by experimental or theoretical works in the conditions of ISM, namely low temperatures and pressure.
Specifically, reaction (2) was studied by combined cross-beam and computational studies. However, these studies were not focused on kinetics and they were carried out in the 295-600 K temperature range \citep{Jung2011,Jang2014,Park2010}.
Reaction (3) was studied computationally, using an ab initio and DFT composite method but no kinetic computations were carried out \citep{Zhang2002}.
It was also studied experimentally in the 298-753 K temperature and 100-600 Torr pressure ranges \citep{Johnson2000}, which, unfortunately, are conditions not directly applicable to the molecular ISM.
In this work, we carry out new computations to obtain the products and rate constants in the 7-300 K temperature range for the reactions (2) and (3), in order to have a complete validated network of reactions forming acetaldehyde.
Figure \ref{fig:scheme-react} schematically summarises the four possible routes that leads to the formation of acetaldehyde in the gas-phase and the two that are studied here are pictured in red.
\section{Computational details and methods}
\label{sec:methods}
\subsection{Electronic structure computations and vibrational evaluation}
\label{subsec:comput-methods}
All the computations were carried out using the Gaussian16 suite of programs \citep{gaussian16}. The B2PLYP double hybrid functional \citep{B2PLYP} was used for all the geometry optimizations, in conjunction to the aug-cc-pVTZ triple-$\zeta$ basis set \citep{aug-cc-pvtz1,aug-cc-pvtz2}. Semiempirical dispersion effects were also included thanks to the D3BJ model of Grimme \citep{D3BJ}, leading to the so-called B2PLYP-D3/aug-cc-pVTZ level of theory. The frequencies of all the involved compounds were also evaluated using this method, in order to verify that all intermediates were true minima on the potential energy surface (PES), and that all transition states (TSs) exhibited a single imaginary frequency. The electronic energies were then reevaluated using the coupled-cluster singles and doubles approximation augmented by a perturbative treatment of triple excitations (CCSD(T), \cite{CCSDT}) in conjunction to the same basis set. This composite method will be referred to as CCSD(T)//B2PLYP-D3/aug-cc-pVTZ in the present manuscript.
\subsection{Kinetics study methods}
\label{subsec:kinetics-methods}
As in previous work \citep{Balucani2012,Leonori2013,Skouteris2015,Vazart2015,Skouteris2018} a combination of capture theory and the Rice-Ramsperger-Kassel-Marcus (RRKM) calculations was used to determine the relevant rate constants and branching ratios. For the first steps (the addition of the O($^3$P) atom to C$_2$H$_5$ and the formation of the initial Van der Waals CH...CH$_3$OH complex regarding the first and second reactions, respectively) capture theory was used. To do so, calculations were performed at various long-range distances of the reactants, and the obtained energies obtained were fitted to a $1/R^6$ functional form (both for the London dispersion forces and the rotating dipole ones). The fitting coefficient (C$_6$) was then used to obtain the capture cross section with the formula $\sigma(E) = \pi \times 3 \times 2^{-2/3} \times (C_6/E)^{1/3}$ (where $E$ is the translational energy), which was itself multiplied by the collision velocity $(2E/m)^{1/2}$ (where $m$ is the reduced mass of the reactants) to get the corresponding capture rate constants together with the maximum total angular momentum $J$ for a successful capture.
For the subsequent reactions, energy-dependent rate constants were calculated using the RRKM scheme, $J$ being conserved throughout it (for each energy, RRKM calculations are carried out separately for all values of $J$ up to the maximum one permitted). Subsequently, the master equation was solved at all relevant energies for all systems (to consider the overall reaction scheme), Boltzmann averaging was carried out to obtain temperature-dependent rate constants and, finallly, those rate constants were fitted to the form $k(T)=\alpha({\frac{T}{300K}})^\beta$ The values of $\alpha$ and $\beta$ in each case are given in Table \ref{tab:alpha-beta} of the \ref{subsec:kinetics-results} following section.
\section{Results}
\label{sec:results}
\subsection{Electronic structures and reaction paths}
\label{subsec:paths}
\begin{figure*}[h]
\includegraphics[scale=0.5]{C2H5+O-path.pdf}
\caption{Full reaction path following the addition of O($^3$P) on the C$_2$H$_5$ radical at the CCSD(T)//B2PLYP-D3/aug-cc-pVTZ level of theory. The exhibited energies include the ZPE corrections.}
\label{fig:path-C2H5+O}
\end{figure*}
\begin{figure*}
\includegraphics[scale=0.45]{CH3OH+CH-path.pdf}
\caption{Proposed reaction path of the CH$_3$OH + CH reaction at the CCSD(T)//B2PLYP-D3/aug-cc-pVTZ level of theory. The exhibited energies include the ZPE corrections.}
\label{fig:path-CH3OH+CH}
\end{figure*}
This section summarizes the electronic structures and relative energies of all relevant intermediates and transition states involved in both C$_2$H$_5$ + O($^3$P) and CH$_3$OH + CH channels. The optimized geometries and energies of each species are given in Appendix.
\textit{C$_2$H$_5$ + O($^3$P).} On Fig. \ref{fig:path-C2H5+O}, the full reaction path following the barrierless addition of O($^3$P) on the radical C$_2$H$_5$ is presented. This path was already proposed by \cite{Jung2011}, at the CBS-QB3 level of theory. The energies shown here are the CCSD(T)/aug-cc-pVTZ reevaluated electronic energies corrected with the B2PLYP-D3/aug-cc-pVTZ zero-point energies (ZPE). Starting from the first \textbf{RI1} intermediate, one can observe three types of direct dissociations: into H$_2$CO + CH$_3$, exhibiting a 75 kJ/mol barrier represented by \textbf{TS1}, into acetaldehyde CH$_3$CHO + H, through \textbf{TS2} which is \textit{ca.} 95 kJ/mol more energetic than \textbf{RI1}, or into the CH$_2$OCH$_2$ epoxide + H, if the system overpasses the 230 kJ/mol large barrier defined by \textbf{TS3}. \textbf{RI1} is also able to be converted into all three other intermediates \textbf{RI2}, \textbf{RI3} and \textbf{RI4}, through the \textbf{TS4}, \textbf{TS5} and \textbf{TS12} transition states, which exhibit 120, 117 and 220 kJ/mol barriers, respectively.
\textbf{RI3}, as far as it is concerned, can also undergo 3 types of direct dissociations: into acetaldehyde CH$_3$CHO + H, through \textbf{TS10} which has a \textit{ca.} 150 kJ/mol barrier, into the CH$_2$CHOH enol + H, after over-passing the 260 kJ/mol barrier represented by \textbf{TS11}, into CH$_2$CH + H$_2$O, a dehydration reaction requiring 310 kJ/mol to occur (Cf. \textbf{TS15}). It is linked to \textbf{RI1} through the previously mentioned \textbf{TS5} and to \textbf{RI2} through \textbf{TS6}, that exhibits a 190 kL/mol barrier.
If we take a look at \textbf{RI2}, linked to \textbf{RI1} and \textbf{RI3} thanks to \textbf{TS4} and \textbf{TS6} respectively, it can undergo five types of dissociations: into CH$_2$CH$_2$ + OH, through \textbf{TS18}, which exhibits a \textit{ca.} 100 kJ/mol barrier, followed by a loose Van der Waals complex \textbf{VdW-TS18}, into the CH$_2$CHOH enol + H, after over-passing the 140 kJ/mol barrier represented by \textbf{TS8} or into the CH$_2$OCH$_2$ epoxide, through the 250 kJ/mol barrier represented by \textbf{TS7}. \textbf{RI2} can also experience two different dehydration steps, over \textbf{TS17} or \textbf{TS19}, that will form CH$_2$CH + H$_2$O after over-passing a 300 or a 250 kJ/mol barrier, respectively.
Last, but not least, \textbf{RI4} is linked only to \textbf{RI1} through \textbf{TS12} and can be dissociated into the enol CH$_2$CHOH + H or into the epoxide CH$_2$OCH$_2$ + H. These steps exhibit 110 kJ/mol (\textbf{TS13}) and 250 kJ/mol (\textbf{TS14}) barriers, respectively.
To summarize, the possible products are, in order of stability: formaldehyde H$_2$CO + CH$_3$, acetaldehyde CH$_3$CHO + H, CH$_2$CH + H$_2$O, ethene CH$_2$CH$_2$ + OH, the enol CH$_2$CHOCH + H and the epoxide CH$_2$OCH$_2$ + H.
It is noteworthy that the first addition is barrierless and that the energies of all the involved intermediates and TSs are below that of the reactants, which makes this path viable in ISM, and that acetaldehyde CH$_3$CHO is among the possible products. A kinetics study will therefore be needed to figure out the amount actually formed via this reaction.
\textit{CH$_3$OH + CH.} Fig. \ref{fig:path-CH3OH+CH} shows the path representing the CH + CH$_3$OH reaction, only focusing on the intermediates and TSs that involves energies below that of the reactants, and therefore viable in ISM. Other addition or insertion first steps were considered but were too high energetically and therefore not pictured here. The exhibited energies are again the CCSD(T)/aug-cc-pVTZ reevaluated electronic energies corrected with the B2PLYP-D3/aug-cc-pVTZ zero-point energies (ZPE).
It is notable that this path is very similar to the previous one, due to the identical intermediates involved. The main differences on this new path are the higher energy of the reactants that shifts the path downwards and the existence of a Van der Waals complex \textbf{VdW} before the two TSs \textbf{TS-insCH} and \textbf{TS-insOH}. These TSs represent the insertion of CH inside of the C-H bond and the O-H bond of methanol and exhibit 42 and 33 kJ/mol barriers, respectively. The first intermediates here are therefore \textbf{RI2} and \textbf{RI4} and not \textbf{RI1} anymore. All these factors will play a role on the kinetics of the system, which is again needed to verify the efficiency of this path in forming acetaldehyde. All the transition states involved in both channels are depicted in Fig. \ref{fig:TSs} in Appendix.
\subsection{Kinetics results}
\label{subsec:kinetics-results}
The RRKM method was used to evaluate the rate constants of the formation of the major products of each reaction, and, more particularly, acetaldehyde. The results are shown in Fig. \ref{fig:rates}.
\begin{figure*}
\includegraphics[scale=0.65]{rates-vert.pdf}
\caption{Rate constants as a function of temperature for the formation of the major products of the C$_2$H$_5$ + O($^3$P) and CH$_3$OH + CH reactions, respectively.}
\label{fig:rates}
\end{figure*}
\textit{C$_2$H$_5$ + O($^3$P).} A factor 2/3 was applied to the rate constants obtained for this reaction, based on the work by \cite{harding2005}, which stated that out of three states, only two are reactive for this system. One can see on Fig. \ref{fig:rates} that the major products of this reaction should be H$_2$CO + CH$_3$, directly followed by CH$_3$CHO + H, at any temperature. This can be explained by the facts that only one step is required to reach them from \textbf{RI1} and that the TSs that need to be over-passed are quite low in energy. It is an encouraging result, as acetaldehyde is the compound of interest here. The back-dissociation into the reactant is negligible due to the huge stabilization of the first intermediate (\textbf{RI1}, by 366.2 kJ/mol).
\textit{CH + CH$_3$OH.} As far as the second reaction is concerned, H$_2$CO + CH$_3$ and CH$_2$CH$_2$ are supposed to be the major products at low temperatures, but when the temperature is increasing (after \textit{ca.} 170 K), back-dissociation becomes prevailing due to the existence of the Van der Waals complex that can, quite easily, dissociate into the reactants. It is noticeable that, unfortunately, acetaldehyde is formed in a negligible amount, as it requires several steps including quite high transition states in the reaction to be reached.
In order to be used by astrochemical models, we fitted the computed rate constants between 7 and 300 K with the function $k(T)=\alpha({\frac{T}{300K}})^\beta$, leaving $\alpha$ and $\beta$ as free parameters.
Please note that, in order to obtain a better fitting, we split the temperature in two ranges, above and below 95 K.
The obtained values of $\alpha$ and $\beta$ are reported in Table \ref{tab:alpha-beta} for the major formation products of both C$_2$H$_5$ + O($^3$P) and CH$_3$OH + CH reactions.
\begin{table*}
\begin{center}
\caption{Summary of the computed reaction rate constants as a function of the temperature of the major products of the C$_2$H$_5$ + O($^3$P) and CH$_3$OH + CH reactions studied in this work.
The rate constants were fitted with the function $k(T)=\alpha({\frac{T}{300K}})^\beta$, used in astrochemical models.
Columns 2 and 3 report $\alpha$ and $\beta$ in two temperature ranges (column 4) , 7-95 K and 95-300 K, chosen for a better fit than that obtained with only one range.
Columns 5 and 6 list the reaction rate constants (in cm$^3$ s$^{-1}$) computed at 10 K and 100 K, namely the approximate temperatures of cold molecular clouds and a hot cores/corinos, respectively.
Finally, for comparison, columns 7 and 8 quote the values given in the KIDA \citep{Talbi2011,Wakelam2012} and UDfA \citep{Woodall2007,mcelroy2013} databases.}
\label{tab:alpha-beta}
\begin{tabular}{lccccc|cccc}
\hline
& \multicolumn{5}{c}{This study} & \multicolumn{2}{c}{KIDA} & \multicolumn{2}{c}{UDfA} \\
Reaction & $\alpha$ [cm$^3$ s$^{-1}$] & $\beta$ & T [K] & k$_{10 K}$ [cm$^3$ s$^{-1}$] & k$_{100 K}$ [cm$^3$ s$^{-1}$] & $\alpha$ [cm$^3$ s$^{-1}$]& $\beta$ & $\alpha$ [cm$^3$ s$^{-1}$] & $\beta$ \\
\hline
C$_2$H$_5$ + O($^3$P) $\rightarrow$ CH$_3$CHO + H & $1.21\times10^{-10}$ & 0.16 & 7-300 & $0.71\times10^{-10}$ & $1.02\times10^{-10}$ & $8.80\times10^{-11}$ & 0 & $1.33\times10^{-10}$ & 0 \\
C$_2$H$_5$ + O($^3$P) $\rightarrow$ H$_2$CO + CH$_3$ & $3.65\times10^{-10}$ & 0.18 & 7-95 & $1.99\times10^{-10}$ & $3.03\times10^{-10}$ & $6.60\times10^{-11}$ & 0 & $2.67\times10^{-11}$ & 0 \\
& $3.82\times10^{-10}$ & 0.21 & 95-300 \\
C$_2$H$_5$ + O($^3$P) $\rightarrow$ CH$_2$CH$_2$ + OH & $3.87\times10^{-12}$ & 0.13 & 7-95 & $2.51\times10^{-12}$ & $3.35\times10^{-12}$ & $4.40\times10^{-11}$ & 0 & - & - \\
& $3.75\times10^{-12}$ & 0.10 & 95-300 \\
\hline
\hline
CH$_3$OH + CH $\rightarrow$ CH$_3$CHO + H & $1.84\times10^{-13}$ & -0.07 & 7-95 & $2.21\times10^{-13}$ & $1.75\times10^{-13}$ & - & - & $2.49\times10^{-10}$ & -1.93 \\
& $6.74\times10^{-14}$ & -0.95 & 95-300 \\
CH$_3$OH + CH $\rightarrow$ H$_2$CO + CH$_3$ & $1.16\times10^{-9}$ & 0.06 & 7-95 & $9.02\times10^{-10}$ & $9.62\times10^{-10}$ & - & - & - & - \\
& $4.00\times10^{-10}$ & -0.88 & 95-300 \\
CH$_3$OH + CH $\rightarrow$ CH$_2$CH$_2$ + OH & $2.14\times10^{-10}$ & 0.10 & 7-95 & $1.44\times10^{-10}$ & $1.75\times10^{-10}$ & - & - & - & - \\
& $9.40\times10^{-11}$ & -0.63 & 95-300 \\
\hline
\end{tabular}
\end{center}
\end{table*}
\section{Discussion}
\label{sec:discussion}
\subsection{A new network for the formation of acetaldehyde}\label{subsec:chemical-network}
In the literature, four gas-phase formation routes of acetaldehyde have been invoked (see \S ~\ref{sec:reviews}): path 1, following the recombination of the protonated acetaldehyde (CH$_3$CHOH$^+$); paths 2 and 3, via reactions of ethyl radical (C$_2$H$_5$) and methanol (CH$_3$OH) with O and CH, respectively; path 4, starting from ethanol (see the summary in Fig. \ref{fig:scheme-react}).
In this study and in two previous works \citep{vazart2019,Skouteris2018}, we studied the four reactions via theoretical computations of the electronic energy (\S ~\ref{subsec:paths}) and kinetics (\S ~\ref{subsec:kinetics-results}).
Table ~\ref{tab:alpha-beta} summarizes the results of the new computations of the present work.
We report the $\alpha$ and $\beta$ parameters, from which to compute the rate constants as a function of the temperature, and Fig. ~\ref{fig:rates} plots them.
With this new study, therefore, we are now in a position to assess which path efficiently can form acetaldehyde and, perhaps, is responsible for its presence in the ISM.
\medskip \noindent
{\it Path 1:} CH$_3$OCH$_3$ + H$^+$ \& CH$_3$CHOH$^+$ + e$^-$\\
The study of this ionic route was reported in \cite{vazart2019}, where we showed that the reaction supposed to lead to protonated acetaldehyde actually does not form it.
In fact, the reaction leads to the formation of CH$_2$OH$^+$ + CH$_4$ rather than CH$_3$CHOH$^+$ + H$_2$, as previously quoted in astrochemical databases.
Consequently, since there are not known routes that efficiently form protonated acetaldehyde, the ionic formation route of acetaldehyde is invalid.
Note that \cite{Skouteris2018} had already removed this route in their chemical network, suspecting an improbable rearrangement of the atoms for the reaction to occur.
\medskip \noindent
{\it Path 2:} C$_2$H$_5$ + O\\
This study claims that, although acetaldehyde is not the major product (formaldehyde is), acetaldehyde is synthesised at a few times $10^{-10}$ cm$^3$ s$^{-1}$, a rate constant almost unchanged between 7 and 300 K.
The reaction forms about three times more formaldehyde and thirty times less ethylene (CH$_2$CH$_2$).
Indeed, the transition state leading to acetaldehyde is slightly higher in energy than the one leading to formaldehyde (by 19 kJ/mol, cf. Fig. \ref{fig:path-C2H5+O}), which leads to the rate constant for the formation of the latter slightly faster, as seen in Fig. \ref{fig:rates} and to a branching ratio (BR) of \textit{ca.} 30\%.\\
Commonly, a 40-50\% BR is given in the literature regarding the formation of acetaldehyde by this reaction \citep{slagle1988,hoyermann1999,hack2002,harding2005}, and this can be explained using a temperature argument. Indeed, in \cite{hoyermann1999}, comparable quantum chemistry computations were performed regarding a few steps of the reaction and are in agreement with the present study (the energy difference between the transition states leading to acetaldehyde and formaldehyde being of 21 kJ/mol). They also performed a kinetics study, starting from room temperature, that shows that at their lowest temperatures, formaldehyde is the major product of the reaction. But when the temperature increases, both formaldehyde and acetaldehyde tend to be formed at the same rate, which can explain why the 40-50\% BR is ordinarily reported in the literature.
\medskip \noindent
{\it Path 3:} CH$_3$OH + CH\\
This reaction forms mainly formaldehyde and ethylene and only a negligible fraction ($\sim 2\times10^{-4}$) of acetaldehyde.
Indeed, the first intermediate requires several rearrangements in the reaction path to form acetaldehyde, while formaldehyde and ethylene can be formed after a direct dissociation of this intermediate.
Moreover, the presence of a Van der Waals complex before the formation of the first intermediates leads to the significant role of back-dissociation at temperatures higher than about 170 K and, therefore, to a decrease of the formation rate constants after this temperature.
As a result, only 1/5000 times one acetaldehyde molecule is formed, with a rate constant ($\sim 2\times10^{-13}$ cm$^3$ s$^{-1}$), which is about 650 times lower than the one from reaction (2).
In other words, this route of formation of acetaldehyde is very likely negligible, except in environments where ethyl radical or atomic oxygen are more than 650 times less abundant than methanol and CH, which is an unlikely situation (see the discussion in \S ~\ref{subsec:astro}).
\medskip \noindent
{\it Path 4:} CH$_3$CH$_2$OH + OH \& CH$_3$CHOH + OH\\
This path that leads (also) to the formation of acetaldehyde was studied via theoretical computations by \cite{Skouteris2018}.
The goal of that study was to show a gas-phase route to glycolaldehyde, but acetaldehyde is a by-product of what was called "the genealogical ethanol tree", as from the reaction of ethanol with OH and Cl, other three iCOMs can be formed (formic acid, acetic acid and acetaldehyde).
The overall rate constant of acetaldehyde formation from the ethanol tree is large enough to make it a potential source of interstellar acetaldehyde.
\medskip
In conclusion, as summed up in Fig. \ref{fig:summary}, among the four most potentially important gas-phase formation routes of acetaldehyde invoked in the literature, only two will be efficient in the ISM conditions, based on our computations: the paths 2 and 4 described in Section ~\ref{sec:reviews}.
We encourage the astrochemical modellers to use only these two reaction paths for the acetaldehyde formation in the gas-phase.
\begin{figure*}
\includegraphics[scale=0.47]{summary-react.pdf}
\caption{Scheme of the gas-phase formation of acetaldehyde according to the reactions proposed in the literature before the present study (\S ~\ref{sec:reviews}).
The color code indicates whether the reaction is validated (green) or disproved (red) by our new and old computations (see text).}
\label{fig:summary}
\end{figure*}
\subsection{Comparison of our new computation results with previous studies and astrochemical databases}\label{subsec:comp-previous}
In this section, we review how our new computations compare with experimental and previous theoretical values and with the values reported in the KIDA and UDfA databases, largely used in astrochemical models.
\medskip \noindent
{\it Reaction 2:} C$_2$H$_5$ + O\\
To the best of our knowledge, no experimental data are available in the literature. The KIDA and UDfA databases report constant values equal to $8.8\times10^{-11}$ cm$^3$ s$^{-1}$ and 1$.33\times10^{-10}$ cm$^3$ s$^{-1}$, respectively. These values are taken from \cite{Baulch2005} and \cite{Hebrard2009} in KIDA, and from NIST in UDfA.
These values compare extremely well with those computed in this work ($1.21\times10^{-10}$ cm$^3$ s$^{-1}$), especially the ones reported in UDfA.
Having said that, both databases present acetaldehyde as the major product of the reaction, which is not correct.
\medskip \noindent
{\it Reaction 3:} CH$_3$OH + CH\\
The only published experiment on this reaction is the one by \cite{Johnson2000}, who studied the global rate constant between 298 and 753 K and at a 100-600 Torr pressure, but they could not distinguish the different products of the reaction because their technique does not provide it.
In the temperature range of their study, Johnson et al. found a steep dependence on the temperature, -1.93.
Our new computations compare relatively well, within a factor two, with the Johnson et al. global rate constant at 298 K, $4.65\times10^{-10}$ against the measured $(2.5\pm0.1)\times10^{-10}$ cm$^3$ s$^{-1}$.
We did not carry out computations in the range studied by \cite{Johnson2000} but the shape of our curve suggests a decrease of the rate constants when the temperature increases, due to the prevalence of back-dissociation after 170 K, which coincides with their results.
Reaction 3 is reported in the UDfA database to form acetaldehyde with $\alpha$ equal to $2.49\times10^{-10}$ cm$^3$ s$^{-1}$ and $\beta$ equal to -1.93, (erroneously) based on the work by \cite{Johnson2000}.
However, as said, the Johnson et al. value refers to the global rate constant of the reaction, not to the formation of acetaldehyde, and the temperature dependence is in a totally different range.
On the contrary, we found more than 1000 times lower value of $\alpha$, $1.84\times10^{-13}$ cm$^3$ s$^{-1}$, compared to the value reported in UDfA, and an almost null dependence on the temperature at $\leq 95$ K.
Note that the KIDA database does report reaction 3 but assumes that it leads to the formation of CH$_3$ + H$_2$CO only, with a constant rate constant of $2.5\times10^{-10}$ cm$^3$ s$^{-1}$, which is indeed what we find.
Finally, \cite{Vasyunin2017} proposed this reaction based on the experimental study by \cite{Johnson2000} and assumed that 10\% of the reaction leads to acetaldehyde, which is wrong, and the remaining 90\% to formaldehyde.
They also kept the -1.93 temperature dependence so that they assumed an acetaldehyde formation rate constant of $1.8\times10^{-8}$ cm$^3$ s$^{-1}$ at 10 K, about five orders of magnitude larger than the values computed by us (and unreasonably high for a neutral-neutral reaction at those temperatures).
\begin{table*}
\begin{center}
\caption{CH$_3$CHO abundances measured towards hot cores, prestellar cores, hot corinos, and protostellar shocks}
\label{tab:detections}
\begin{tabular}{lccccc}
\hline
Source & T (K) & N (cm$^{-2}$) & Abundance/H$_2$ & N$_{H_2}$ (cm$^{-2}$) & Reference\\
\hline
\multicolumn{6}{c}{\it Hot cores}\\
\hline
\hline
G34.43+00.24 MM3 & 110 & 3.4 $\times$ 10$^{14}$ & - & - & [1]\\
G328.2551-0.5321 & 90-110 & 6.8 $\times$ 10$^{15}$& 3.6 $\times$ 10$^{-9}$ & 1.9 $\times$ 10$^{24}$ &[2]\\
Sgr B2 (N2) & 150 & 4.3 $\times$ 10$^{17}$ & 2.7 $\times$ 10$^{-7}$ & 1.6 $\times$ 10$^{24}$ & [3]\\
Sgr B2 (N3) & 145 & 8.5 $\times$ 10$^{16}$ & 9.4 $\times$ 10$^{-8}$ & 0.9 $\times$ 10$^{24}$ & [3]\\
Sgr B2 (N4) & 145 & 9.0 $\times$ 10$^{16}$ & 3.5 $\times$ 10$^{-8}$ & 2.6 $\times$ 10$^{24}$ & [3]\\
Sgr B2 (N5) & 145 & 2.5 $\times$ 10$^{16}$ & 2.8 $\times$ 10$^{-8}$ & 0.9 $\times$ 10$^{24}$ & [3]\\
\hline
\multicolumn{6}{c}{\it Prestellar cores}\\
\hline
\hline
L1544 & 17 & 5 $\times$ 10$^{11}$ & 1 $\times$10$^{-10}$& 5 $\times$ 10$^{21}$& [4]\\
L1544 continuum peak & 5 & 1.2$\times$10$^{12}$ & 2.2$\times$10$^{-11}$ & 5.4$\times$10$^{22}$ & [5]\\
L1544 methanol peak & 7.8 & 3.2$\times$10$^{12}$ & 2.1$\times$10$^{-11}$ & 1.5$\times$10$^{22}$ & [5]\\
Barnard 5 & 5 & 5.2 $\times$ 10$^{12}$ & 1.6 $\times$ 10$^{-9}$& 3.3 $\times$ 10$^{21}$& [6]\\
Taurus cores & 3-5 & 0.7-5.8 $\times$ 10$^{12}$ & - & - & [7]\\
\hline
\multicolumn{6}{c}{\it Hot corinos}\\
\hline
\hline
Barnard 1b-S (hot) & 200 & 8 $\times$ 10$^{14}$ & 5.7 $\times$ 10$^{-11}$& 1.4 $\times$ 10$^{25}$& [8]\\
Barnard 1b-S (cold) & 60 & 1.6 $\times$ 10$^{14}$ & 1.4 $\times$ 10$^{-11}$&1.1 $\times$ 10$^{25}$& [8]\\
HH212 & 78 & 8$\times$10$^{15}$ & 8$\times$10$^{-9}$& 10$^{24}$& [9]\\
B335 & 100 &14 $\times$ 10$^{14}$ & 24$\times$10$^{-10}$& 6 $\times$ 10$^{23}$& [10]\\
IRAS16293-2422 & 70 & 1 $\times$ 10$^{15}$ & 0.3 $\times$10$^{-8}$& 3 $\times$ 10$^{23}$ & [11]\\
IRAS16293-2422 B & 140 & 3.5 $\times$ 10$^{15}$ & - & - & [12]\\
NGC1333-IRAS4A2 & 100-200 & (1.0 - 1.9) $\times$ 10$^{16}$ & (1.1 - 7.4) $\times$ 10$^{-9}$& (1.9 - 2.7) $\times$ 10$^{24}$& [13]\\
L483 & 100-300 & 8 $\times$ 10$^{16}$ & - & - & [14]\\
SVS13-A & 35 & 12 $\times$ 10$^{15}$ & 4 $\times$ 10$^{-9}$& 3 $\times$ 10$^{24}$& [15]\\
\hline
\multicolumn{6}{c}{\it Protostellar shocks}\\
\hline
\hline
L1157-B1b & 90 & 5 $\times$ 10$^{15}$ & 2.5 $\times$ 10$^{-6}$& 2 $\times$ 10$^{21}$& [16]\\
NGC1333-IRAS4A outflow & 9-30 & 0.2-1.3 $\times$ 10$^{14}$ & - & - & [17]\\
\hline
\end{tabular}
\end{center}
[1] \citet{Sakai2018};
[2] \citet{Csengeri2019};
[3] \citet{Bonfand2019};
[4] \citet{Vastel2014};
[5] \citet{Jimenez2016};
[6] \citet{Taquet2017};
[7] \citet{Scibelli2020};
[8] \citet{Marcelino2018};
[9] \citet{Lee2019,Codella2019};
[10] \citet{Imai2016};
[11] \citet{Jaber2014};
[12] \citet{Manigand2020, Jorgensen2018, Jorgensen2016};
[13] \citet{Lopez2017};
[14] \citet{Jacobsen2019};
[15] \citet{Bianchi2019};
[16] \citet{Codella2020};
[17] \citet{DeSimone2020}\\
\end{table*}
\subsection{Astronomical observations}\label{subsec:astro}
Acetaldehyde was one of the earliest molecules to be detected in space.
After the first detection towards the galactic center in 1973 \citep{Gottlieb1973}, acetaldehyde has been detected in many star formation environments and in a large range of interstellar conditions, as summarised in Tab. ~\ref{tab:detections}.
This indicates that CH$_3$CHO is efficiently formed at all gas temperatures, from about 10 K in prestellar cores up to temperatures larger than 200 K in hot cores.
However, the measured CH$_3$CHO abundances vary by five orders of magnitude, from $\sim$ 10$^{-11}$ in cold environments (e.g. prestellar cores) to up to $\sim10^{-6}$ in warm ones (e.g. hot cores/corinos and protostellar shocks).
This may or may not indicate that a different chemical route is responsible for the formation of acetaldehyde in different environments.
In the following, in order to understand better this point, we will review the possible formation routes in cold and warm environments, respectively, in comparison with astronomical observations.
\subsubsection{Cold environments}\label{subsubsec:cold-env}
Despite the low temperatures (< 10 K), acetaldehyde is commonly observed in starless and prestellar cores \citep[e.g.]{Bacmann2012, Cernicharo2012, Vastel2014, Jimenez2016}. Recently, \citet{Scibelli2020} performed a survey of starless and prestellar cores in the Taurus molecular cloud and detected CH$_3$CHO in about 70 \% of the sample sources.
Two major paths have been invoked to explain gaseous acetaldehyde in these cold environments.
The first one relies on three steps \citep[e.g.][]{Vasyunin2013,Vastel2014,Jimenez2016}: (1) the formation of the ethyl radical on the grain surfaces, by hydrogenation of small hydrocarbons formed in the gas-phase and frozen on the grain surfaces at later stages; (2) the injection of ethyl radical in the gas phase via (perhaps) chemical desorption (see below); (3) gas-phase formation of acetaldehyde via the reaction of ethyl radical with atomic oxygen (our reaction 2).
In this scheme, the last step is now validated.
The second path, which was proposed later in the literature \citep{Vasyunin2017,Scibelli2020}, invokes: (1) the formation of methanol by CO hydrogenation on the grain surfaces; (2) the injection of methanol in the gas phase via (perhaps) chemical desorption; (3) gas-phase formation of acetaldehyde via the reaction of methanol with CH (our reaction 3).
As discussed in \S ~\ref{subsec:chemical-network}, the last step of this path occurs at a rate constant which is about five orders of magnitude lower than that used by astrochemical models \citep{Vasyunin2017} and, consequently, ineffective in reproducing the observed abundance in cold environments.
This path would be competitive with respect to the first one only if the bottleneck reactant of reaction (3), methanol, is about 650 times more abundant of that of reaction (2), ethyl radical.
Since no radio to millimeter wavelengths are available to identify ethyl radical in space, we cannot compare measured abundances of ethyl radical and methanol.
However, if one then relies on model predictions, the latter do not support 650 times more methanol then ethyl radical.
In addition, would gaseous methanol be 650 times more abundant than ethyl radical, reaction (3) would produce a formaldehyde abundance two orders of magnitude larger than that observed.
Therefore, the chemical path proposed by \cite{Vasyunin2017} is completely excluded by our new computations and we recommend to drop it from astrochemical models.
In conclusion, based on our new computations, the first path, which involves reaction (2), is the only viable one of the two invoked in the literature.
The weak ring of this formation chain remains the second step, namely the chemical desorption.
In this process, it is assumed that part of the energy released by the hydrogenation on the grain surfaces is acquired by the newly formed species (in this case C$_2$H$_5$) which can then break its bonds with the surface and be liberated into the gas phases \cite[see. e.g.,][]{duley1993}.
The fraction of released species, however, strongly depends on the species and substrate \citep{minissale2016,Oba2018} and could be null for relatively low reaction energies and strongly bound species \citep{Pantaleone2020}.
In the specific case of C$_2$H$_5$, there are no experiments or theoretical computations providing estimates or even just constraints.
We only can say that the comparison of astrochemical model predictions with observations suggest that a relatively small C$_2$H$_5$ abundance, of about $5\times10^{-9}$ with respect to molecular hydrogen, will be enough to reproduce the observations \citep{Vastel2014}.
\subsubsection{Warm environments}\label{subsubsec:warm-env}
Abundant acetaldehyde is routinely observed in high-mass hot cores \citep[e.g.][]{Blake1987, Sakai2018, Csengeri2019, Bonfand2019}, low-mass hot corinos \citep[e.g.][]{cazaux2003, Jorgensen2018, Bianchi2019, Lee2019, Codella2018, Codella2019}, and protostellar molecular shocks \citep{lefloch2017,codella2017,Codella2020, DeSimone2020}.
In this case, three paths have been invoked in the literature to explain the observations.
In the first one, everything happens on the grain surfaces in four steps \citep[e.g.][]{Garrod2006,oberg2016}:
(1) the freeze-out of species and their hydrogenation; (2) the formation of frozen radicals, in this specific case CH$_3$ and HCO, by the UV illumination of the frozen hydrogenated species; (3) when the dust warms up because of the presence of the protostar, the CH$_3$ and HCO radicals diffuse inside the grain ices, meets and combine into acetaldehyde; (4) frozen acetaldehyde is injected into the gas-phase when the dust temperature reaches the sublimation temperature of the ices or the shock sputters the ice content.
Although there is no doubt about the last step viability, namely the thermal desorption and shock sputtering, the formation of acetaldehyde from the HCO + CH$_3$ combination on the grain icy mantles (step 3) is source of debate.
Theoretical calculations show that the combination of CH$_3$ and HCO on the ice surfaces does not necessarily lead to acetaldehyde because the reaction possesses a non-negligible activation energy (up to 6.8 kcal/mol depending on the position on the ice surface) and it is in competition with the formation of CH$_4$ + CO \citep{Enrique-Romero2019,Enrique-Romero2020}.
In addition, comparison of high spatial resolution acetaldehyde observations towards protostellar shocks privileges the gas-phase formation of acetaldehyde via the C$_2$H$_5$ + O reaction in these environments \citep{DeSimone2020,Codella2020}.
\begin{figure*}
\begin{center}
\includegraphics[scale=0.8]{GLYCO-ACE-eps-converted-to.pdf}
\caption{Column densities of glycolaldehyde (HCOCH$_2$OH) and acetaldehyde (CH$_3$CHO) measured in several astronomical sources, marked in the inset.
The two lines show the expected ratio if acetaldehyde and glycolaldehyde are both synthesised in the gas-phase the via the ethanol tree by \citet{Skouteris2018}.
The solid and dashed lines refer to the uncertain value of the first step of the path and correspond to an acetaldehyde over glycolaldehyde abundance ratio of 3.5 and 1.6, respectively.
The references for the plot are: \citet{Bianchi2019, Desimone2017, Lopez2017,Jorgensen2016,Jorgensen2018, Marcelino2018, Manigand2020, vanGelder2020,Fuente2014} and references therein; \citet{lefloch2017, Codella2020}.}
\label{fig:obs-aceta-glyco}
\end{center}
\end{figure*}
The second invoked path is the same as the first one in cold environments, described above, namely it involves the reaction (2) (C$_2$H$_5$ + O) \citep[e.g.][]{charnley2004}.
The difference in the two schemes is that the chemical desorption is replaced by the thermal desorption and the shock sputtering, both proved processes.
We have seen that our computations validate reaction (2), so that the acetaldehyde gas-phase formation path is fully viable.
The most uncertain point is the abundance of ethyl radical.
As said earlier, since its frequencies and spectroscopic data are not known yet, there is no way to verify that the route of acetaldehyde formation from ethyl radical is the true one.
\subsubsection{A new actor on the scene: the ethanol tree}\label{subsubsec:eth-tree}
The path 4, the ethanol tree, one branch of which is acetaldehyde, was introduced and studied by \cite{Skouteris2018}.
They showed that about 6\% of this reaction path ends up into acetaldehyde.
Mostly interesting, this is between 1.6 and 3.5 times the branching ratio of the formation of glycolaldehyde (HCOCH$_2$OH) (please note that the uncertainty comes from the uncertainty in the first step of the path: see \citet{Skouteris2018}), which was the focus of the study.
It is worth reminding that the ethanol tree is the only known path to the formation of glycolaldehyde in the gas-phase and that the comparison of model predictions including this path and the observed values are in agreement, so far \citep{Skouteris2018}.
Successive works have also showed a tight correlation between glycolaldehyde and ethanol, extending that found by \citet{Skouteris2018} toward the low end by more than one order of magnitude \citep{Li2019,Xue2019} and, consequently, strengthening the validity of this scheme in the glycolaldehyde production.
Although no author, not even the ethanol tree proposers, ever thought of it as an important source of acetaldehyde and, therefore, no specific modeling has been carried out so far, it is worth considering it here.
To this end, the easiest and most straightforward way to understand whether the ethanol tree is a competitive path for the formation of acetaldehyde is to compare the observed abundances of acetaldehyde and glycolaldehyde.
If the ethanol tree is the major source of acetaldehyde then the abundance ratio between the two species should be equal to the ratio of their respective path branching ratios, namely acetaldehyde should be within 1.6 and 3.5 times more abundant than glycolaldehyde.
Figure \ref{fig:obs-aceta-glyco} shows the measured column densities of glycolaldehyde and acetaldehyde towards several astronomical sources.
Please note that, since it is very difficult to derive the column density of H$_2$ and, therefore, reliable abundances, we plot the column densities, whose error bars are reported in the plot.
In the figure, we also show the theoretical ratio if both glycolaldehyde and acetaldehyde are formed via the ethanol tree.
The agreement between the predicted and the measured ratios is spectacular and strongly suggests that acetaldehyde is a daughter of ethanol.
The observations of Fig. \ref{fig:obs-aceta-glyco} refer to warm objects only as no glycolaldehyde has been detected in cold ones so far.
However, when we take into account that (i) the brightest glycolaldehyde line in the 70--150 GHz band, where cold objects are observed, is twice weaker than the one from acetaldehyde assuming the same column density and temperature for the two species and (ii) the column density of glycolaldehyde is 1.6-3.5 times smaller, the brightest glycolaldehyde lines would be between 3 and 7 times weaker than the acetaldehyde ones.
Therefore, the present non-detection upper limits to glycolaldehyde abundance in cold environments is compatible with the ethanol tree predictions so far.
For example, in L1544, one prestellar core where the full 3mm band was surveyed with IRAM-30m high sensitivity observations \citep{Lefloch2018}, acetaldehyde was detected with a signal-to-noise ratio of 6.5 \citep{Vastel2014}, which explains the non-detection of glycolaldehyde.
\cite{Jimenez2016} observed another, brighter position towards L1544 and also their non-detection of glycolaldehyde is compatible with the ethanol tree predictions.
\subsubsection{Conclusive remarks}\label{subsubsec:concl}
In summary, our new theoretical computations show that, among the four previously proposed gas-phase reactions described in Section \ref{sec:reviews}, only two are viable, the 2 and 4, as summarised in Fig. \ref{fig:summary}.
The observed difference in the acetaldehyde abundance between the cold and warm environments (Tab. \ref{tab:detections}) can be easily be attributed to the difference in the parent's abundance, ethyl radical and/or ethanol, respectively.
In cold environments, ethyl radical and/or ethanol would be present in small quantities, because only a small fraction of the frozen species is injected into the gas phase by a non-thermal process, probably chemical desorption.
In warm environments, on the contrary, all ethyl radical and/or ethanol, probably previously formed and frozen on the grain surfaces, would be injected into the gas phase.
In order to make progresses and assess whether and when the two paths are important in the acetaldehyde formation, the abundances of the parents should also be measured.
As said in \S ~\ref{subsubsec:cold-env}, this is presently impossible for the ethyl radical, the parent in path 2, since its rotational transition frequencies are unavailable.
About path 4, the ethanol tree, we found in \S ~\ref{subsubsec:eth-tree} that the observed ratio of acetaldehyde and glycolaldehyde in several warm sources compares spectacularly well with that predicted based on the branching ratios of these two species.
It remains to show that it holds also in cold environments.
Finally, it also holds the possibility that acetaldehyde is a grain-surface radical-radical product, with the caveats described in \S ~\ref{subsubsec:warm-env}.
Providing the final answer of what process dominates the formation of acetaldehyde and in what environment will require a careful modeling and comparison with an expanded set of observations.
\section{Conclusions}\label{sec:conclusions}
n this paper, we presented a critical review of the gas-phase formation routes of acetaldehyde invoked in the literature and reported in the two major astrochemical databases, KIDA and UDfA.
We found that four paths are potentially important summarised:
\begin{tabular}{ll}
(1) & CH$_3$OCH$_3$ + H$^+$ $\rightarrow$ CH$_3$CHOH$^+$ + H$_2$\\
& CH$_3$CHOH$^+$ + e$^-$ $\rightarrow$ CH$_3$CHO + H\\
(2) & C$_2$H$_5$ + O($^3$P) $\rightarrow$ CH$_3$CHO + H\\
(3) & CH$_3$OH + CH $\rightarrow$ CH$_3$CHO + H\\
(4) & CH$_3$CH$_2$OH + OH $\rightarrow$ CH$_3$CHOH + H$_2$O\\
& CH$_3$CHOH + O $\rightarrow$ CH$_3$CHO + OH\\
\end{tabular}
The first path, involving the electron recombination of protonated acetaldehyde, was previously studied and excluded by \cite{vazart2019} because the formation of protonated acetaldehyde actually does not occur.
The fourth scheme starts from ethanol and was theoretically studied and validated by \cite{Skouteris2018}.
It is called "ethanol tree" because glycolaldehyde and other iCOMs are also formed from ethanol.
The second and third paths were not validated by neither experimental or theoretical works in the ISM conditions, namely low temperature and pressure.
In this work, we investigated these two reaction paths via theoretical chemistry calculations, using a composite CCSD(T) and DFT method for the electronic structure and the RRKM scheme for the kinetics.
For both reactions, we provide the rate constants as a function of the temperature and the branching ratios, in the format used by astrochemical models, in Tab. \ref{tab:alpha-beta}.
Our new calculations validate the reaction (2) and the values quoted in KIDA and UDfA, with the one from UDfA closer to our computed values.
On the contrary, our computed rate constants of the reaction (3) are about five orders of magnitude lower than those reported in the UDfA database and used by some models.
We therefore rule out that this reaction has a role in the acetaldehyde formation, at any temperature.
In summary, we conclude that only two gas-phase reaction paths, the (2) and (4), are potentially important in the gas-phase acetaldehyde formation.
Finally, we reviewed the observations of acetaldehyde towards warm and cold objects and their formation routes, in the light of the above conclusions.
In warm sources, the measured abundance ratio between glycolaldehyde and acetaldehyde is exactly that predicted by the ethanol tree, namely the path (4).
On the other hand, \citet{Skouteris2018} showed that the glycolaldehyde abundance measured in warm objects is reproduced by astrochemical model predictions based on the ethanol tree.
We therefore conclude that, very likely, also acetaldehyde is mainly formed by it.
In order to definitively confirm this hypothesis and to verify its validity also in cold environments, the comparison between dedicated model predictions and an expanded observational data-set is necessary.
\section*{Acknowledgements}
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme, for the Project "The Dawn of Organic Chemistry" (DOC), grant agreement No 741002.
\section*{Data availability}
The data underlying this article are available in the article and in its online supplementary material.
\bibliographystyle{mnras}
|
{
"redpajama_set_name": "RedPajamaArXiv"
}
| 673
|
Q: OnClientClick Javascript:confirm get value in textbox? So I can't seem to find anywhere how to do this. I want to be able to use the value of BuyOutPrice in the text(confirm) box but can't seem to make it work.
More specificly, how am I supposed to write this part:
('Are you sure you want to buy-out for $' + BuyOutPrice + '?')
This was just my closest guess from experience in other programming languages but it appears invalid in JavaScript.
var BuyOutPrice = '<%= Content.ComparisonPrice %>';
<asp:ImageButton ID="BuyNowButton" OnClick="BuyNowButton_Click" Style="vertical-align:top;" ImageUrl="Images/btn_buyNow.png" runat="server" OnClientClick="javascript:return confirm('Are you sure you want to buy-out for $' + BuyOutPrice + '?'); BuyNow(); return ValidateBuyNow();" />
A: If ComparisonPrice is a TextBox so try this:
<asp:TextBox ID="ComparisonPrice" runat="server"></asp:TextBox>
<asp:ImageButton ID="BuyNowButton" Style="vertical-align:top;" ImageUrl="Images/btn_buyNow.png" runat="server" OnClientClick="javascript:return confirm('Are you sure you want to buy-out for' + momo() + '?'); BuyNow(); return ValidateBuyNow();" />
And JavaScript:
<script>
function momo() {
return document.getElementById('<%= ComparisonPrice.ClientID %>').value
}
</script>
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 3,520
|
Your home estate! Consider the words home, family, and affinity. What do they mean to you? Your home! Does this term have special meaning for you? Can you describe it? Home, family and affinity, all three, have connotations of warmth, attraction, attachment, and closeness – a retreat from the rest of the world, a haven, a place of safety, a refuge. Webster's New World Thesaurus provides one description of home as "The whole complex associated with domestic life." Home ownership and home estate are significant components of the net worth and emotional health of stable family units. Whatever your interest or need – community issues, family issues and well being, religion and church, education, home buying, home selling, home appraisal, home inspection, mortgages, insurance, home furnishings, maintenance, home construction, environmental issues such as radon gas and lead, financial services and issues, investments, real estate, medical needs, clothing, automobiles, shopping, consumer issues – we want to explore and provide direction to sources of useful information and services that will assist you in obtaining solutions to your personal and family problems, wants, and needs. This exploration of mutual needs, attitudes, wants, and solutions, can be enhanced through participation in our blogs. Offer and receive information and assistance to and from blog participants. Discover information and attitudes you are not aware of, and contribute to the knowledge of others.
|
{
"redpajama_set_name": "RedPajamaC4"
}
| 1,090
|
IDLES – Ultra Mono | Album Review
Simon Moyse
If there's one thing we can safely say about Bristol's IDLES, it's that they have the ability to capture the public mood.
With their 2018 sophomore album, Joy As An Act Of Resistance, the band delved into Brexit-torn Britain, discussing misplaced patriotism ('G.R.E.A.T.') and the contribution of immigrants ('Danny Nedelko'), alongside wider issues such as toxic masculinity ('Samaritans') and the dangerous popular press ('Rottweiler'). Despite its typically abrasive punk sound, the positive energy of the record captured people's collective imagination, and the album was a surprise hit, going to number five in the UK album chart.
With the early releases from third album Ultra Mono, though, IDLES have shown their recognition of the zeitgeist to be positively clairvoyant. 'Mr Motivator' came just as most of us desperately needed a pick-me-up during lockdown, and it was just the tonic, combining energising guitars, a fun and bouncy video, and lyrics – "Like Kathleen Hanna with bear claws grabbing Trump by the pussy" – that made us smile in ways many of us hadn't done in months. It also fitted nicely with the sudden return of men in lycra to our screens, inspiring the good people to take on mass indoor exercise for the first time since the eighties.
'Grounds', meanwhile, was a powerful call-to-arms against racism and injustice, that just happened to be released right in the middle of June's global racial injustice protests. With lyrics like "Not a single thing has ever been mended, by you standing there and saying you're offended," and "I raise my pink fist and say black is beautiful," it was a perfect anthem for that moment.
Even as 'Grounds' was receiving its maiden play, on BBC Radio, and people were hearing the "do you hear that thunder?" refrain for the first time, a massive lightning storm made its way across most of the country. You just couldn't make this stuff up.
Someone up there clearly approved.
This is just the way it is with IDLES, though. They just have that certain magic, that ability to be politically conscious but fun, to be abrasive but catchy, to be aware but off-the-wall. With the band having gathered such a huge following during the process of releasing and touring Joy As An Act Of Resistance, one would have thought that this decidedly unglamorous crew of Bristolians might struggle to live up to expectations. Instead, it just seems to have driven them to up the ante even further.
Any suggestion that the band might go for a more commercial sound with this record, however, is dispelled within about five-and-a-half seconds of hitting play. Where Joy… started off with the slow-burning 'Colossus', Ultra Mono opener, 'War', goes right for the throat. A typically fierce and direct anti-war statement, it is the sonic equivalent of a morning bombardment in the trenches. Awake and alert, are we now, soldier? Well, go get your pants on then, this is just the beginning.
By the time the listener has experienced the noise of 'War', the unfettered anger of 'Grounds', and the lively energy of 'Mr Motivator', they may already feel somewhat exhausted. There is no let-up, however, and thank goodness, because 'Anxiety' is an absolute cracker. Describing the titular condition, it starts as a mildly sad tale of being dumped, building into a crescendo as other factors pile on, and all the fear and self-doubt starts to spiral. It is a marvellous oral representation of how those feelings can escalate, delivered with typically wry IDLES humour.
There is a small breather at the beginning of 'Kill Them With Kindness', with Jamie Cullum's calm piano intro – yes, really – but this is quickly replaced by full-on Rocket From The Crypt-style guitar, paced by Jon Beavis' pounding drumbeat. 'Model Village', meanwhile, is a bit of a quirky track, one that might be a bit limp were it not for the booming chorus, and also the fact that everyone knows of a closed-minded and prejudiced small town like the one that is being described. A bit of satire never goes amiss, and IDLES are rather good at it.
The band channel the garage-rock guitar style of The Hives for 'Ne Touche Pas Moi', a fierce condemnation of improper sexual behaviour: "This is a sawn off, for the cat-callers". This is a topic that the band has long been passionate about, so it is no surprise to see singer Joe Talbot at his most fervent here, ably assisted by some feral backing vocals by Jehnny Beth. With the focus here clearly being on continued problems of this nature at gigs, the chorus imploring "Ne Touche pas moi! This is my dance space," this song should be a clarion call for the live music industry to work to erase this problem once and for all.
Just when you thought this album couldn't deliver further, along comes 'Reigns', a brutally dark indictment of the impact of Tory rule. Talbot sneers over Adam Devonshire's sinister bassline in the verses, – "How does it feel to have shanked the working classes into dust? How does it feel to have won the war that nobody wants?" – before it explodes into pure anger in the chorus. IDLES have never shied away from their left-wing tendencies, but this is their most overt political statement yet, and it's utterly brilliant. Bookmarked by two more firebrands in 'Carcinogenic' and 'The Lover', the album keeps up its pace impressively.
The album's one slow(er) moment comes with the swirling melancholy of penultimate track 'A Hymn', and what a song it is. If anyone ever tells you that IDLES are nothing but a blunt instrument, this track is everything you need for an easy rebuttal. Beautifully measured and emotional, and with a simple but darkly effective guitar sound, this song of reflection and regret is right up there with the best that the band have ever written.
This truly is an exceptional record. Many will have thought that Joy As An Act Of Resistance would be the pinnacle of IDLES' career, that they couldn't possibly go any higher. Ultra Mono, though, is a significant step up, both musically and lyrically. The production, too, is exceptional; it was designed to give the feel of a hip-hop record, with Kenny Beats [JPEGMafia, Denzel Curry] having assisted with the programming on several tracks, all while retaining the feeling that the album is being performed live right next to you.
As great as Joy… was, its strength was in the message, with the music being a capsule by which to deliver it. With Ultra Mono, though, the music is front and centre, the driver, making the message travel that much further. The guitar sound is fuller and more powerful. Every song has its own identity, its own feel, and they don't miss once – every track brings something essential. As impressive as the band's fanbase already is, with December's show at the 10,000-capacity Alexandra Palace having sold out in less than 24 hours, this record will surely win many new converts. This is album of the year material, make no mistake.
Pre-order the record here, out 25th September via Partisan.
Tracks Of The Week // October 6th
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 3,337
|
Male mice are not natural-born fathers. Males that have never mated respond with aggression to chemical signals from newborn mice pups, whereas those that have fathered pups are more nurturing, a new study finds.
In addition to their normal sense of smell, mice and some other animals have a sensory system in their brain, known as a vomeronasal organ, that responds to chemical signals, or pheromones. The study, detailed in the March 20 issue of the Journal of Neuroscience, showed that after male mice spent some time around baby mice, neurons in this sensory organ were more active in virgin males than in mouse fathers. Suppression of the vomeronasal system in mice might be important in the transition from attack behavior to parenting, the researchers say.
Whereas female mice instinctively care for baby mice, sexually naive males (i.e., virgin males) often attack or even kill babies they encounter.
Researchers at the RIKEN Brain Science Institute in Japan wanted to understand how that shift occurs at the cellular level in the brain. They observed the behavior of virgin male mice and mouse fathers that had lived with a female and her babies when placed in a cage with babies. The babies were kept in a mesh ball to prevent any harm from hostile males.
The majority of virgin males were aggressive toward the pups, the researchers found. But after the males mated, their aggressive behavior steadily decreased as they spent more time living with their mate and babies. In other words, after the males experienced fatherhood, they became much more nurturing.
Next, the researchers examined differences between virgin males and mouse dads at the cellular level. Spending time with babies activated certain types of cells in the mice's vomeronasal systems.
The scientists confirmed that the vomeronasal organ was involved by surgically removing it from virgin males and then watching how the mice responded to babies. Now, mice that were formerly hostile toward babies suddenly lost their aggressiveness and became more nurturing. The findings provide a basis for understanding the shift to parental behavior in mice.
This study confirms earlier studies linking the aggressive behavior of male mice to the vomeronasal system, said neuroscientist Peter Brennan of the University of Bristol, U.K., who was not involved in the work. But the findings are not really applicable to humans, who don't have this kind of vomeronasal system, Brennan said.
|
{
"redpajama_set_name": "RedPajamaC4"
}
| 8,013
|
Dennis Bligen (born March 3, 1962) is a former American football running back. He played for the New York Jets from 1984 to 1986 and in 1987 and for the Tampa Bay Buccaneers in 1986.
References
1962 births
Living people
American football running backs
St. John's Red Storm football players
New York Jets players
Tampa Bay Buccaneers players
|
{
"redpajama_set_name": "RedPajamaWikipedia"
}
| 9,168
|
A tűzifa a tüzelőanyagként felhasznált fa egyik formája. A fa többféleképpen használható fel tüzelésre, közismert a faszén ill. égetett szén, forgács, brikett, pellet vagy fűrészpor formában történő felhasználás, azonban a tűzifa egy nem túlságosan feldolgozott állapotban történő felhasználást jelent. A tűzifa megjelenésében felismerhető a fa eredeti tönkjének vagy ágainak formája. A fát az emberiség több ezer éve használja tüzelőanyagként. A tűzifa nagy jelentősége, hogy megújuló energiaforrás.
A Pallas nagy lexikona szerint: a tűzifa "a fának a műfa feldolgozása után fennmaradó része. A tűzifa-darabok az állami erdőkben, de nagyrészt a magán erdőkben is 1 méter hosszúak és mennyiségük megállapítása végett többnyire az alábbiak szerint válogatva űrméterekbe rakják; a gyengébb darabokat néha kévékbe is kötik. Vastagság és alak szempontjából megkülönböztetünk: hasábfát, dorongfát, gallyfát és rőzsefát, tuskófát, mely az utóbbi alatt a tuskók és gyökereknek megfelelően apróra hasogatott fáját értjük."
A fák lombjukkal napenergiát kötnek le és halmoznak fel a faanyagban. Ezt meggyújtva ismét energiává változik vissza. A kályha forró tüzében a fa 85%-a gázzá, 15%-a szénné alakul. Valójában ezek égnek el, ha egyáltalán elégnek. A hagyományos kályhákból ugyanis rengeteg el nem égett fagáz távozik kihasználatlanul a kéményen át, amely szennyezi a levegőt.
Forgalmazása Magyarországon
A tűzifa forgalmazásában megkülönböztetnek lakossági tűzifa és kereskedelmi tűzifa kategóriákat, ami csak a mennyiségben különbözik egymástól – a kereskedelmi tűzifának van egy minimum mennyisége –, tehát ez nem igazi típuskategória. A tűzifát forgalmazzák kaloda nélkül és kalodában. A forgalmazók által használt mértékegységek a mázsa (q), kilogramm, térfogat szerint a köbméter (m³) – ennek különböző szorzókkal ellátott származtatott mértékegységeit a Fapiaci térfogat mértékegységek és váltószámok fejezet taglalja.
A tűzifa leggyakoribb fafajtái: akác, bükk, gyertyán, tölgy, éger, kőris. Egyéb fajok, pl. gyümölcsfák is alkalmasak tűzifának, ezek is megjelennek a forgalomban.
Faanyag kereskedelmi lánc hatósági felügyelete
Magyarországon 2016. július 1-je óta a fatermékek forgalmazása – az Európai Unió előírásainak megfelelően – szigorú feltételek mellett történhet. A faanyag kereskedelmi lánc felügyeletére kijelölt hatáskörrel rendelkező hatóság a Nemzeti Élelmiszerlánc-biztonsági Hivatal (NÉBIH), ezáltal a hazai tűzifa kereskedelem ellenőrzését is a NÉBIH végzi.
A hatályos jogszabályi előírások szerint tűzifa értékesítéskor az eladónak fogyasztó részére történő értékesítés esetén – pénzügyi bizonylat és a szállítást kötelezően kísérő szállítójegy mellett – tűzifavásárlói tájékoztatót is át kell adnia a vásárlónak.
A NÉBIH EUTR szakemberei összeállítottak egy kérdésgyűjteményt, amelynek segítségével a vásárlók elejét vehetik az őket sújtó visszaéléseknek.
Ha úgy érzi, vagy azt tapasztalja, hogy megtévesztés áldozata lett és az eladó nem hajlandó egyeztetésre, őrizze meg az eladótól kapott bizonylatokat és írja meg panaszát a NÉBIH-nek (eutr@nebih.gov.hu), de javasolt értesíteni a fogyasztóvédelmi hatóságot is!
Tudatos tűzifavásárlói magatartás elemei
Lehetőleg a nyár elején vegye meg tűzifáját!
Ha van rá lehetősége, már előző nyáron vegye meg tüzelőjét!
A kínált, de az Ön számára ismeretlen fafajú tüzelőnek nézzen utána, kérje szakember segítségét.
Ha teheti, inkább köbméterben mérve, térfogatra vegye a tüzelőt, így könnyebben ellenőrizhető a mennyisége, a fa víztartalma pedig nem befolyásolja a megvett mennyiséget!
Ha tömegre vásárol tűzifát, figyeljen oda, hogy csak hitelesített mérlegen mért, mérlegjegy, vagy más, mérlegelést hitelesítő dokumentum alapján kiállított számla birtokában fizessen.
Csak olyan legálisan működő értékesítőtől vásároljon, aki/amely hirdetésében feltünteti EUTR technikai azonosító számát vagy erdőgazdálkodói kódját, szállításkor pedig a termékre vonatkozó adatokkal ellátott szállítójegyet és tűzifavásárlási tájékoztatót is biztosít.
Győződjön meg az adott tűzifa egységcsomag tényleges faanyagtartalmáról és annak egységárát hasonlítsa össze más kereskedők által kínált hasonló termékek árával.
Takarással védje a tüzelőjét a csapadéktól, de oldalról hagyja szellőzni!
Ha száraz fával fűt, akár 30%-ot is spórolhat!
Fapiaci térfogat-mértékegységek és váltószámok
Fontos leszögezni, hogy a fa térfogatának egyetlen hiteles mértékegysége van, és az a köbméter (m3)! Ez egy 1 méter élhosszúságú (1×1×1 m) tömör fakocka térfogata.
Méteres tűzifa esetében alkalmazott mértékegységek:
A sarang a kivágott fák törzs- és ágrészeinek rendezett formában történő rakásolását, felhalmozását jelenti. A sarang közel azonos, jellemzően 1 méter szélességben, egymással párhuzamosan és szorosan elhelyezett fadarabokból álló rakat, amelyet úgy raknak össze, hogy magassága az egész tetősíkban közel azonos, határoló síkjai pedig közel vízszintesek, ill. függőlegesek.
Űrméter: (rövidítve: űrm, helytelenül űrköbméternek vagy üzemi köbméternek is nevezve): Az 1 méter hosszú, 1 méter széles és 1 méter magas (1×1×1 m) befoglaló méretű vastag tűzifából álló sarang az 1 űrméter. Ez a mennyiség azonban nem ad 1 köbméternyi faanyagot, a benne lévő famennyiség: 1 űrm = 0,57 m3.
Köbméter (helytelenül erdei köbméternek, erdészeti köbméternek, erdész köbméternek is nevezve): Mivel a sarangban a fadarabok nem illeszkednek tömören egymáshoz, ahhoz, hogy tömör köbmétert (m3) kapjunk, váltószámot kell használnunk. Általánosan elfogadott, hogy vastag tűzifából az 1 méter hosszú, 1 méter széles és 1,7 méter magas (1×1×1,7 m) befoglaló méretű sarang egyezik meg 1 köbméterrel. Vékony tűzifából az 1 méter hosszú, 1 méter széles és 2,5 méter magas sarang ad ki 1 köbmétert, ha 8 cm csúcsátmérőig kerül bele faanyag. 5 cm csúcsátmérő alatti fadarabok esetén a 2 méter hosszú, 1 méter széles és 1,7 méter magas sarang jelent 1 köbmétert. Bár az, hogy hány űrméter ad ki egy tömör köbmétert, függ az adott sarangban levő fák átlagos átmérőjétől, görbeségétől, göcsösségétől is.
Konyhakész tűzifa esetében alkalmazott mértékegységek:
szórt űrméter: azt jelenti, hogy egy 1 méter × 1 méter × 1 méteres ládába, vagy más fajta tárolóba az adott méretű és fafajú hasábok ömlesztve vannak beleszórva. A mennyiséget a keletkezett hézagokkal együtt 1 szórt űrméterként értelmezzük. A szórt űrméter átváltása: az 1 méter hosszú, 1 méter széles és 2 méter magas (1×1×2 m) halom ad ki 1 m3-t. A hasáb alaktól természetesen lehet eltérés, de az a lényeg, hogy az alakzat által befoglalt térfogat a fa és a közte lévő levegővel együtt gyakorlatilag 2 m3-t tesz ki.
Rakott űrméter (kalodánál): azt jelenti, hogy egy 1 m × 1 m × 1 m-es ládába (kalodába) a fahasábok rendezett formában, a lehető legkisebb hézagtartalommal vannak belerakva. A mennyiséget a keletkezett hézagokkal együtt 1 rakott űrméterként értelmezzük. Mivel a darabolt, hasított fát szorosabbra össze lehet rakni, mint a méteres rönköket az erdőn, ezért rakott űrméterből elegendő az 1 méter hosszú, 1 méter széles és 1,4 méter magas (1×1×1,4 m) rakat az 1 m3-hez.
Ömlesztett kalodás vagy hálós csomagolású fa: Ez minden esetben hengeralakú, és igyekeznek az eladók 1 tömör köbméternek megfelelő egységcsomagokat árusítani. Ennek például egy 1,2 m átmérőjű 1,8 m magas henger felel meg. Erről a henger térfogat számítási képletét felhasználva győződhetünk meg: (d2·π·m)/4, ahol d az alapterület átmérője, m pedig a henger magassága. Ez kiadja a hálós fa szórt űrméterét, melyből könnyen megállapíthatjuk a tiszta köbtartalmat.
Ez az előbbi példa alapján = 2,03 szórt űrméter, ami 1,01 tömör m3.
Fordítás
Jegyzetek
Források
További információk
: A "Tüzifa" cikk a Pallas nagy lexikonában
Módosult a tűzifavásárlói tájékoztató – NÉBIH
Amit a tűzifáról és a tűzifa vásárlásról tudni érdemes – Magyar Kert és Energia Klaszter, 2010. december 16.
Kapcsolódó szócikkek
Fűtőérték
Kályha, kandalló, kemence
Tüzelőanyagok
Faanyagok
|
{
"redpajama_set_name": "RedPajamaWikipedia"
}
| 1,527
|
Q: MCP73871 <=> MCU interface I am designing a small device powered by a lithium cell and I'd like to use the Microchip's MCP73871 battery charger to enable charging it from an USB port. The charger IC has three status indicator outputs: /PG, STAT1 and STAT2. In the datasheet, all are claimed to be open collector outputs capable of driving a LED directly (see typical application schematic). I'd like to use these outputs to indicate the status of the battery on the device's LCD display via the controlling PIC microcontroller.
Provided the outputs are just plain open collectors, there is no problem to use a pull-up resistor (say 50 kOhm) tied to the MCU's VDD and all should work. A problem might come, if there is a CMOS-style diode protection network on the outputs (especially the upper diode tied to VIn) - the pull-ups would power the charger IC from battery via this diode when no USB power cord is present. I know the current would be negligible if the pull-up resistor were sufficiently small. But a plain BJT in Common-Base configuration could effectively separate the "power domains".
I have used such circuit before in a similar situation (FT232R that would short the communication bus to ground when not connected to the host) and it worked fine. However, if this is not needed there, I'd prefer not to make mess and "keep it simple and stupid". I've gone through some ANs, but I have found no actual schematic of MCP73871 connected to a MCU (there are always just indicator LEDs).
So, my question is: May I rely on the assumption that these outputs have no protection circuits or anything else that would sink current when no voltage is present on the wall outlet side?
A:
the pull-ups would power the charger IC from battery via this diode
when no USB power cord is present.
The charger IC is already "powered" by either the input voltage or the battery voltage, internally, as the data sheet explains:
The MCP73871 device automatically obtains power for
the system load from a single-cell Li-Ion battery or an
input power source
So, connecting pull-up resistors to either of these voltage inputs or the output is absolutely fine.
A: If your MCU has internal pull-ups, you should be able to connect the lines directly to the MCU.
You would connect the internal pull-ups via software, so they would draw current only when the MCU is running.
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 4,934
|
package serviceaccount
import (
"errors"
"fmt"
"time"
"github.com/golang/glog"
apiserverserviceaccount "k8s.io/apiserver/pkg/authentication/serviceaccount"
"k8s.io/kubernetes/pkg/apis/core"
"gopkg.in/square/go-jose.v2/jwt"
)
// time.Now stubbed out to allow testing
var now = time.Now
type privateClaims struct {
Kubernetes kubernetes `json:"kubernetes.io,omitempty"`
}
type kubernetes struct {
Namespace string `json:"namespace,omitempty"`
Svcacct ref `json:"serviceaccount,omitempty"`
Pod *ref `json:"pod,omitempty"`
Secret *ref `json:"secret,omitempty"`
}
type ref struct {
Name string `json:"name,omitempty"`
UID string `json:"uid,omitempty"`
}
func Claims(sa core.ServiceAccount, pod *core.Pod, secret *core.Secret, expirationSeconds int64, audience []string) (*jwt.Claims, interface{}) {
now := now()
sc := &jwt.Claims{
Subject: apiserverserviceaccount.MakeUsername(sa.Namespace, sa.Name),
Audience: jwt.Audience(audience),
IssuedAt: jwt.NewNumericDate(now),
NotBefore: jwt.NewNumericDate(now),
Expiry: jwt.NewNumericDate(now.Add(time.Duration(expirationSeconds) * time.Second)),
}
pc := &privateClaims{
Kubernetes: kubernetes{
Namespace: sa.Namespace,
Svcacct: ref{
Name: sa.Name,
UID: string(sa.UID),
},
},
}
switch {
case pod != nil:
pc.Kubernetes.Pod = &ref{
Name: pod.Name,
UID: string(pod.UID),
}
case secret != nil:
pc.Kubernetes.Secret = &ref{
Name: secret.Name,
UID: string(secret.UID),
}
}
return sc, pc
}
func NewValidator(audiences []string, getter ServiceAccountTokenGetter) Validator {
return &validator{
auds: audiences,
getter: getter,
}
}
type validator struct {
auds []string
getter ServiceAccountTokenGetter
}
var _ = Validator(&validator{})
func (v *validator) Validate(_ string, public *jwt.Claims, privateObj interface{}) (string, string, string, error) {
private, ok := privateObj.(*privateClaims)
if !ok {
glog.Errorf("jwt validator expected private claim of type *privateClaims but got: %T", privateObj)
return "", "", "", errors.New("Token could not be validated.")
}
err := public.Validate(jwt.Expected{
Time: now(),
})
switch {
case err == nil:
case err == jwt.ErrExpired:
return "", "", "", errors.New("Token has expired.")
default:
glog.Errorf("unexpected validation error: %T", err)
return "", "", "", errors.New("Token could not be validated.")
}
var audValid bool
for _, aud := range v.auds {
audValid = public.Audience.Contains(aud)
if audValid {
break
}
}
if !audValid {
return "", "", "", errors.New("Token is invalid for this audience.")
}
namespace := private.Kubernetes.Namespace
saref := private.Kubernetes.Svcacct
podref := private.Kubernetes.Pod
secref := private.Kubernetes.Secret
// Make sure service account still exists (name and UID)
serviceAccount, err := v.getter.GetServiceAccount(namespace, saref.Name)
if err != nil {
glog.V(4).Infof("Could not retrieve service account %s/%s: %v", namespace, saref.Name, err)
return "", "", "", err
}
if serviceAccount.DeletionTimestamp != nil {
glog.V(4).Infof("Service account has been deleted %s/%s", namespace, saref.Name)
return "", "", "", fmt.Errorf("ServiceAccount %s/%s has been deleted", namespace, saref.Name)
}
if string(serviceAccount.UID) != saref.UID {
glog.V(4).Infof("Service account UID no longer matches %s/%s: %q != %q", namespace, saref.Name, string(serviceAccount.UID), saref.UID)
return "", "", "", fmt.Errorf("ServiceAccount UID (%s) does not match claim (%s)", serviceAccount.UID, saref.UID)
}
if secref != nil {
// Make sure token hasn't been invalidated by deletion of the secret
secret, err := v.getter.GetSecret(namespace, secref.Name)
if err != nil {
glog.V(4).Infof("Could not retrieve bound secret %s/%s for service account %s/%s: %v", namespace, secref.Name, namespace, saref.Name, err)
return "", "", "", errors.New("Token has been invalidated")
}
if secret.DeletionTimestamp != nil {
glog.V(4).Infof("Bound secret is deleted and awaiting removal: %s/%s for service account %s/%s", namespace, secref.Name, namespace, saref.Name)
return "", "", "", errors.New("Token has been invalidated")
}
if secref.UID != string(secret.UID) {
glog.V(4).Infof("Secret UID no longer matches %s/%s: %q != %q", namespace, secref.Name, string(secret.UID), secref.UID)
return "", "", "", fmt.Errorf("Secret UID (%s) does not match claim (%s)", secret.UID, secref.UID)
}
}
if podref != nil {
// Make sure token hasn't been invalidated by deletion of the pod
pod, err := v.getter.GetPod(namespace, podref.Name)
if err != nil {
glog.V(4).Infof("Could not retrieve bound secret %s/%s for service account %s/%s: %v", namespace, podref.Name, namespace, saref.Name, err)
return "", "", "", errors.New("Token has been invalidated")
}
if pod.DeletionTimestamp != nil {
glog.V(4).Infof("Bound pod is deleted and awaiting removal: %s/%s for service account %s/%s", namespace, podref.Name, namespace, saref.Name)
return "", "", "", errors.New("Token has been invalidated")
}
if podref.UID != string(pod.UID) {
glog.V(4).Infof("Pod UID no longer matches %s/%s: %q != %q", namespace, podref.Name, string(pod.UID), podref.UID)
return "", "", "", fmt.Errorf("Pod UID (%s) does not match claim (%s)", pod.UID, podref.UID)
}
}
return private.Kubernetes.Namespace, private.Kubernetes.Svcacct.Name, private.Kubernetes.Svcacct.UID, nil
}
func (v *validator) NewPrivateClaims() interface{} {
return &privateClaims{}
}
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 9,142
|
Metabolic profiling of fatty liver in young and mid...
Hypoxia-inducible factor prolyl 4-hydroxylases and metabolism by: Koivunen, Peppi Published: (2018)
NAFLD risk alleles in PNPLA3, TM6SF2, GCKR and LYPLAL1 show divergent metabolic effects by: Sliz, Eeva Published: (2018)
Metabolic profiling of pregnancy : cross-sectional and longitudinal evidence by: Wang, Qin Published: (2016)
Metabolic syndrome but not genetic polymorphisms known to induce NAFLD predicts increased total mortality in subjects with NAFLD (OPERA study) by: Käräjämäki, Aki Juhani Published: (2020)
Non-alcoholic fatty liver disease (NAFLD) : perspectives to etiology, complications and lipid metabolism by: Käräjämäki, Aki Published: (2017)
Kaikkonen, J. E., Würtz, P., Suomela, E., Lehtovirta, M., Kangas, A. J., Jula, A., Mikkilä, V., Viikari, J. S.A., Juonala, M., Rönnemaa, T., Hutri-Kähönen, N., Kähönen, M., Lehtimäki, T., Soininen, P., Ala-Korpela, M. and Raitakari, O. T. (2017), Metabolic profiling of fatty liver in young and middle-aged adults: Cross-sectional and prospective analyses of the Young Finns Study. Hepatology, 65: 491–500. doi:10.1002/hep.28899
Metabolic profiling of fatty liver in young and middle-aged adults : cross-sectional and prospective analyses of the young Finns study
Kaikkonen, Jari E.1,2; Würtz, Peter3; Suomela, Emmi1;
Lehtovirta, Miia1; Kangas, Antti J.3; Jula, Antti4; Mikkilä, Vera5,6; Viikari, Jorma S.A.7; Juonala, Markus7; Rönnemaa, Tapani7; Hutri-Kähönen, Nina8; Kähönen, Mika9; Lehtimäki, Terho10; Soininen, Pasi3,11; Ala-Korpela, Mika3,11,12; Raitakari, Olli T.5,13
1Research Centre of Applied and Preventive Cardiovascular Medicine University of Turku Turku Finland
2Department of Clinical Physiology and Nuclear Medicine Kuopio University Hospital and University of Eastern Finland Kuopio Finland
3Computational Medicine, Faculty of Medicine, University of Oulu and Biocenter Oulu, Oulu, Finland
4National Institute for Health and Welfare, Turku, Finland
6Division of Nutrition, Department of Food and Environmental Sciences, University of Helsinki, Helsinki, Finland
7Department of Medicine, University of Turku and Division of Medicine, Turku University Hospital, Turku, Finland
8Department of Pediatrics and Department of Clinical Physiology, Tampere University Hospital and University of Tampere, Tampere, Finland
9Department of Clinical Physiology, Tampere University Hospital and University of Tampere, Tampere, Finland
10Department of Clinical Chemistry, Fimlab Laboratories and School of Medicine, University of Tampere, Tampere, Finland
11NMR Metabolomics Laboratory, School of Pharmacy, University of Eastern Finland, Kuopio, Finland
12School of Social and Community Medicine and Medical Research Council Integrative Epidemiology Unit, University of Bristol, Bristol, United Kingdom
13Department of Clinical Physiology and Nuclear Medicine, Turku University Hospital, Turku, Finland
http://urn.fi/urn:nbn:fi-fe201703245835
John Wiley & Sons, 2017
Nonalcoholic fatty liver is associated with obesity-related metabolic disturbances, but little is known about the metabolic perturbations preceding fatty liver disease. We performed comprehensive metabolic profiling to assess how circulating metabolites, such as lipoprotein lipids, fatty acids, amino acids, and glycolysis-related metabolites, reflect the presence of and future risk for fatty liver in young adults. Sixty-eight lipids and metabolites were quantified by nuclear magnetic resonance metabolomics in the population-based Young Finns Study from serum collected in 2001 (n = 1,575), 2007 (n = 1,509), and 2011 (n = 2,002). Fatty liver was diagnosed by ultrasound in 2011 when participants were aged 34–49 years (19% prevalence). Cross-sectional associations as well as 4-year and 10-year risks for fatty liver were assessed by logistic regression. Metabolites across multiple pathways were strongly associated with the presence of fatty liver (P < 0.0007 for 60 measures in age-adjusted and sex-adjusted cross-sectional analyses). The strongest direct associations were observed for extremely large very-low-density lipoprotein triglycerides (odds ratio [OR] 5 4.86 per 1 standard deviation, 95% confidence interval 3.48–6.78), other very-low-density lipoprotein measures, and branched-chain amino acids (e.g., leucine OR = 2.94, 2.51–3.44). Strong inverse associations were observed for high-density lipoprotein measures, e.g., high-density lipoprotein size (OR = 0.36, 0.30–0.42) and several fatty acids including omega-6 (OR = 0.37, 0.32–0.42). The metabolic associations were attenuated but remained significant after adjusting for waist, physical activity, alcohol consumption, and smoking (P < 0.0007). Similar aberrations in the metabolic profile were observed already 10 years before fatty liver diagnosis.
Conclusion: Circulating lipids, fatty acids, and amino acids reflect fatty liver independently of routine metabolic risk factors; these metabolic aberrations appear to precede the development of fatty liver in young adults.
10.1002/hep.28899
https://oadoi.org/10.1002/hep.28899
Supported by the Academy of Finland (134309, 126925, 121584, 124282, 129378, 117797, 141071, 286284); the Social Insurance Institution of Finland; Kuopio, Tampere, and Turku University Hospital Medical Funds; Juho Vainio Foundation; Paavo Nurmi Foundation; Finnish Foundation of Cardiovascular Research; Finnish Cultural Foundation; Emil Aaltonen Foundation; and Yrjö Jahnsson Foundation. The serum NMR metabolomics platform and its development have been supported by Strategic Research Funding from the University of Oulu, the Academy of Finland (294834), the Novo Nordisk Foundation, the Sigrid Juselius Foundation, the Yrjö Jahnsson Foundation, and the Finnish Diabetes Research Foundation.
Copyright © 2016 The Authors. HEPATOLOGY published by Wiley Periodicals, Inc., on behalf of the American Association for the Study of Liver Diseases. This is an open access article under the terms of the Creative Commons Attribution-NonCommercial License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited and is not used for commercial purposes.
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 1,496
|
Wealth and Expenditures Interactive Tool
-Visualization: PC Fiduccia-
-See Bottom of Page for Definitions and Video Walkthrough-
Video Walkthrough Tutorial
Information & Descriptions
Information: Here we present a tool that allows you to take a look at some aspects of funding across the state and in your individual district. We offer two maps: one showing the Combined Wealth Ratio of districts and the other showing Expenditure Per Pupil. Combined Wealth Ratio is calculated by the state to determine how much a local community will be able to fund a school. This is calculated into the state's funding formula as a way to equalize the funding of districts. Districts with a lower CWR are limited in their means to fully fund a district, so state aid is greater in order to equalize spending with wealthier districts who have the means to fund schools without as much state aid. This is all part of an attempt to equalize Expenditure Per Pupil.
However, as is seen in the map, Expenditure Per Pupil is not equal across the state. There are a few variables that are important for explaining the variance across Expenditure Per Pupil, and we included those as adjustments you can make to the map.
The economically disadvantaged variable is important to note because it costs more to educate students with higher needs. With the CWR map next to the EPP map, it's easy to look at a district with a low CWR and a high EPP and think those schools are doing fine funding-wise, when in fact, it costs more to educate those students if they are largely economically disadvantaged.
The enrollment variable is important to note because it costs more to educate fewer students. A district might have an average CWR and an extremely high EPP. One look at the low enrollment of a tiny district would show that the district has to spend a lot per student in order to maintain a functioning school.
Lastly, the minority variable is an interesting slider to take a look at because it shows a different story between upstate and downstate. Districts with 50% or more minority students in upstate are more likely to have lower CWRs whereas districts with 50% or more minority students downstate are more likely to have a high CWR. However, if you look at EPP, these upstate districts are more likely to have a lower EPP and these downstate districts are more likely to have a higher EPP. So in this way, this variable shows important differences in CWR and EPP between upstate and downstate.
As you look at districts across the state and zoom into your individual district, keep these variables in mind, as they can help explain some, though not all, of the nuances in funding.
Definition: The Combined Wealth Ratio (CWR) is defined by New York State as a "…measure of a district's wealth taking into account both the district's real property [values] and the income of residents of the district" (NYSED, 2017).
It is meant to give an approximate representation of the relative wealth of the community / communities in which a school district is located. Unaltered CWR is centered at 1.0, with districts above or below a value of 1 being above or below the average state CWR, respectively. For this tool, however, we utilize a decile classification in which all districts are ranked in ranges of 10%. If a district has a CWR percentile value of 70, it means that district is wealthier than 70% of the other districts in the state. For more information on the calculation of CWR and other fiscal variables by the New York State Education Department, please utilize the link following the definition above.
© 2016 by New York Education Data Hub, Cornell University
Cornell University, Warren Hall, Ithaca, NY 14850
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 4,528
|
bool SbThreadSamplerThaw(SbThreadSampler sampler) {
if (!SbThreadSamplerIsValid(sampler)) {
return false;
}
return sampler->Thaw();
}
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 1,768
|
constexpr const char* DIRECTORY_MODEL = "\\Models\\";
Shared_Model::Shared_Model(Engine& engine, const std::string& filename, const bool& threaded)
{
auto newAsset = std::dynamic_pointer_cast<Model>(engine.getManager_Assets().shareAsset(
typeid(Model).name(),
filename,
[&engine, filename]() { return std::make_shared<Model>(engine, filename); },
threaded
));
swap(newAsset);
}
Model::Model(Engine& engine, const std::string& filename) : Asset(engine, filename) {}
void Model::initialize()
{
// Forward asset creation
m_mesh = Shared_Mesh(m_engine, DIRECTORY_MODEL + getFileName(), false);
// Generate all the required skins
loadMaterial(DIRECTORY_MODEL + getFileName(), m_materialArray, m_mesh->m_geometry.materials);
const size_t vertexCount = m_mesh->m_geometry.vertices.size();
m_data.m_vertices.resize(vertexCount);
for (size_t x = 0; x < vertexCount; ++x) {
m_data.m_vertices[x].vertex = m_mesh->m_geometry.vertices[x];
m_data.m_vertices[x].normal = m_mesh->m_geometry.normals[x];
m_data.m_vertices[x].tangent = m_mesh->m_geometry.tangents[x];
m_data.m_vertices[x].bitangent = m_mesh->m_geometry.bitangents[x];
m_data.m_vertices[x].uv = m_mesh->m_geometry.texCoords[x];
m_data.m_vertices[x].boneIDs.x = m_mesh->m_geometry.bones[x].IDs[0];
m_data.m_vertices[x].boneIDs.y = m_mesh->m_geometry.bones[x].IDs[1];
m_data.m_vertices[x].boneIDs.z = m_mesh->m_geometry.bones[x].IDs[2];
m_data.m_vertices[x].boneIDs.w = m_mesh->m_geometry.bones[x].IDs[3];
m_data.m_vertices[x].weights.x = m_mesh->m_geometry.bones[x].Weights[0];
m_data.m_vertices[x].weights.y = m_mesh->m_geometry.bones[x].Weights[1];
m_data.m_vertices[x].weights.z = m_mesh->m_geometry.bones[x].Weights[2];
m_data.m_vertices[x].weights.w = m_mesh->m_geometry.bones[x].Weights[3];
m_data.m_vertices[x].matID = (m_mesh->m_geometry.materialIndices[x] * 3);
}
// Calculate the mesh's min, max, center, and radius
calculateAABB(m_data.m_vertices, m_bboxMin, m_bboxMax, m_bboxScale, m_bboxCenter, m_radius);
// Finalize
Asset::finalize();
}
void Model::calculateAABB(const std::vector<SingleVertex>& mesh, glm::vec3& minOut, glm::vec3& maxOut, glm::vec3& scaleOut, glm::vec3& centerOut, float& radiusOut)
{
if (!mesh.empty()) {
const auto& vector = mesh[0].vertex;
auto minX = vector.x, maxX = vector.x, minY = vector.y, maxY = vector.y, minZ = vector.z, maxZ = vector.z;
const auto total = mesh.size();
for (size_t x = 1; x < total; ++x) {
const glm::vec3& vertex = mesh[x].vertex;
if (vertex.x < minX)
minX = vertex.x;
else if (vertex.x > maxX)
maxX = vertex.x;
if (vertex.y < minY)
minY = vertex.y;
else if (vertex.y > maxY)
maxY = vertex.y;
if (vertex.z < minZ)
minZ = vertex.z;
else if (vertex.z > maxZ)
maxZ = vertex.z;
}
minOut = glm::vec3(minX, minY, minZ);
maxOut = glm::vec3(maxX, maxY, maxZ);
scaleOut = (maxOut - minOut) / 2.0F;
centerOut = ((maxOut - minOut) / 2.0F) + minOut;
radiusOut = glm::distance(minOut, maxOut) / 2.0F;
}
}
void Model::loadMaterial(const std::string& relativePath, Shared_Material& modelMaterial, const std::vector<Material_Strings>& materials)
{
// Retrieve texture directories from the mesh file
const size_t slash1Index = relativePath.find_last_of('/');
const size_t slash2Index = relativePath.find_last_of('\\');
const size_t furthestFolderIndex = std::max(slash1Index != std::string::npos ? slash1Index : 0, slash2Index != std::string::npos ? slash2Index : 0);
const std::string meshDirectory = relativePath.substr(0, furthestFolderIndex + 1);
std::vector<std::string> textures(materials.size() * static_cast<size_t>(MAX_PHYSICAL_IMAGES));
const auto texturesSize = textures.size(), materialsSize = materials.size();
for (size_t tx = 0, mx = 0; tx < texturesSize && mx < materialsSize; tx += MAX_PHYSICAL_IMAGES, ++mx) {
if (!materials[mx].albedo.empty())
textures[tx + 0] = meshDirectory + materials[mx].albedo;
if (!materials[mx].normal.empty())
textures[tx + 1] = meshDirectory + materials[mx].normal;
if (!materials[mx].metalness.empty())
textures[tx + 2] = meshDirectory + materials[mx].metalness;
if (!materials[mx].roughness.empty())
textures[tx + 3] = meshDirectory + materials[mx].roughness;
if (!materials[mx].height.empty())
textures[tx + 4] = meshDirectory + materials[mx].height;
if (!materials[mx].albedo.empty())
textures[tx + 5] = meshDirectory + materials[mx].ao;
}
// Attempt to find a .mat file if it exists
std::string materialFilename = relativePath.substr(0, relativePath.find_first_of('.'));
modelMaterial = Shared_Material(m_engine, materialFilename, textures);
}
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 3,963
|
module Horobi
module Sub
@context = ZMQ::Context.new
def self.init
options = {
"logfile" => STDERR,
"inputs" => [],
}
OptionParser.new do |op|
op.on('-p VAL','--pidfile=VAL','pidfile path'){|v| options["pidfile"] = v}
op.on('-l VAL','--logfile=VAL','logfile path'){|v| options["logfile"] = (v == "-" ? STDOUT : v)}
op.on('-i VAL','--input-points=VAL',
"input(hub's output) point(s) such as 'tcp://127.0.0.1:5551,tcp://127.0.11.1:5551'"){|v| options["inputs"] = v.split(",")}
#op.on('-d','--daemonize','daemonize this script'){ options["daemon"] = true }
op.parse!(ARGV)
end
@options = options
if @options["inputs"].compact.length < 1
raise "subscribe input points are undefined"
end
@logger = Logger.new(options["logfile"])
@sock ||= begin
sock = @context.socket(ZMQ::SUB)
@options["inputs"].each do |point|
@logger.info("connecting to #{point}")
sock.connect(point)
end
sock.setsockopt(ZMQ::LINGER, 100)
sock
end
end
def self.listen(filter=nil, &block)
init unless @sock
@sock.setsockopt(ZMQ::SUBSCRIBE, filter.to_s)
Horobi.close_hooks do
@input.close
@context.close
end
loop do
buf = @sock.recv(ZMQ::NOBLOCK)
block.call(buf) if buf
sleep 0.1
end
end
end
end
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 532
|
Supporting Fair Wisconsin Education Fund is an investment in the future of equality in our state.
Full equality is about more than just checking off a list of legal protections – it's about ensuring that lesbian, gay, bisexual and transgender Wisconsinites and our families are able to thrive, living our lives fully, with dignity and respect, and without fear of harassment or harm. It is only through robust education, advocacy and political involvement that we will realize our shared vision of full equality for LGBT Wisconsinites.
By joining the Fairness Circle with a monthly gift starting at only $50 (or an annual gift of $600 or more) you are providing the leadership, vision and resources that Fair Wisconsin Education Fund needs to achieve our shared vision for Wisconsin.
Join this visionary group of people with your investment in the future of our movement, ensuring that we have the ability and resources to continue advancing, achieving and protecting the civil rights of LGBT people in Wisconsin.
Fairness Circle members make an annual commitment to give $600 or more, outside of event sponsorships and other gifts, providing Fair Wisconsin with the critical ongoing support that sustains the organization and builds the capacity of the movement for equality by ensuring the ability to respond quickly and effectively on issues of importance to LGBT Wisconsinites.
|
{
"redpajama_set_name": "RedPajamaC4"
}
| 1,877
|
Q: How to use GroupBy() and Min() with Entity Framework Core ASP .NET Let's say my database table Agents is the following,
+------------+----------------------+--------------------+------------+-----------------+---------+
| AgentCode | AgentName | WorkingArea | Commission | PhoneNo | Country |
+------------+----------------------+--------------------+------------+-----------------+---------+
| A007 | Ramasundar | Bangalore | 0.15 | 077-25814763 | |
| A003 | Alex | London | 0.13 | 075-12458969 | |
| A008 | Alford | New York | 0.12 | 044-25874365 | |
| A011 | Ravi Kumar | Bangalore | 0.15 | 077-45625874 | |
| A010 | Santakumar | Chennai | 0.14 | 007-22388644 | |
| A012 | Lucida | San Jose | 0.12 | 044-52981425 | |
| A005 | Anderson | Brisban | 0.13 | 045-21447739 | |
| A001 | Subbarao | Bangalore | 0.14 | 077-12346674 | |
| A002 | Mukesh | Mumbai | 0.11 | 029-12358964 | |
| A006 | McDen | London | 0.15 | 078-22255588 | |
| A004 | Ivan | Torento | 0.15 | 008-22544166 | |
| A009 | Benjamin | Hampshair | 0.11 | 008-22536178 | |
+------------+----------------------+--------------------+------------+-----------------+---------+
What I exactly need to query is as the following(in SQL for better understanding).
SELECT WorkingArea, MIN(Commission)
FROM agents
GROUP BY WorkingArea;
And its result is:-
WorkingArea MIN(Commission)
----------------------------------- ---------------
San Jose .12
Torento .15
London .13
Hampshair .11
New York .12
Brisban .13
Bangalore .14
Chennai .14
Mumbai .11
How can I do the same with .NET Entity Framework?
I tried the following. But it gave me only the MIN(COMMISSION) row.
IEnumerable<Agent> AgentList = _db.Agents
.GroupBy(fields => fields.WorkingArea)
.Select(fields => new Agent
{
Commission = fields.Min(fields => fields.Commission)
});
// If my Model name is Agent its corresponding database table name will be Agent**s**.
Update 1--------------------------------------------------------------------------------------------------
1. How can I do the equivalent to the sql query above, with .NET Entity Framework?
2. What if I want add also the PhoneNo column to the result? like
SELECT WorkingArea, MIN(Commission), PhoneNo
FROM agents
GROUP BY WorkingArea;
A: You need another type (not Agent) for the result output. Either define a two-field class
class WorkingAreaCommission
{
public string WorkingArea { get; set; }
public double Commission { get; set; }
}
or use anonymous type:
var AgentList = _db.Agents
.GroupBy(fields => fields.WorkingArea)
.Select(fields => new
{
WorkingArea = fields.Key,
Commission = fields.Min(fields => fields.Commission)
});
Update (to answer a question from comments)
If you need result containing more than just key and minimal values, sort by the field you need minimal value from and extract all required fields from the first record:
var AgentList = _db.Agents
.GroupBy(fields => fields.WorkingArea)
.Select(fields => fields
.OrderBy(x => x.Commission)
.Select(x => new {
WorkingArea = x.WorkingArea,
PhoneNo = x.PhoneNo,
Commission = x.Commission
})
.First()
);
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 3,574
|
A Jewish academic will walk from Sydney to Canberra in September to promote the boycott, divestment and sanctions campaign (BDS) against Israel for its continuing subjugation of the Palestinians.
Marcelo Svirsky is a lecturer in politics at the University of Wollongong and an Australian-Israeli Palestine activist. He is the author of several academic works on Israel-Palestine, activism and colonialism, and is an active member of the National Tertiary Education Union.
On September 23, he will begin his 10-day walk from the Sydney Opera House to the federal parliament in Canberra to raise awareness about BDS in Australia. The walk will culminate in the submission to the House of Representatives of a petition calling on federal parliament to endorse BDS.
Svirsky will arrive in Canberra on October 2, having passed through the Illawarra, the Southern Highlands, Goulburn and Bungendore. He will participate in public meetings at Wollongong, Mittagong and Goulburn.
BDS has been growing in prominence in Australia in recent months. University of Sydney academic Jake Lynch recently won a case brought against him by an Israeli law centre for spearheading the call for the academic boycott of Israeli institutions. In a major victory for the BDS movement in Australia, Lynch was awarded costs.
This victory materially boosted the prominence of BDS in Australia. In Sydney last month, calls to boycott the Israeli Film Festival led to the Supreme Court of NSW banning a protest outside the festival's opening night.
Svirsky is one of 164 Jews who signed an open letter in August calling on Jewish people "to break their silence, to take a public stand … for an end to the underlying conditions of siege and occupation which defy elementary morality, decency and humanity".
|
{
"redpajama_set_name": "RedPajamaC4"
}
| 4,264
|
\section{Introduction}
In this note we study the $\claw$ problem in which given two discrete
functions $f:D\rightarrow R$ and $g:D\rightarrow R$ ($|D|=n$, $|R|=k$)
we have to determine if there is a collision, i.e., inputs $x,y\in D$
such that $f(x)=g(y)$. In contrast to the $\ed$ problem, where the
input is a single function $f:D\rightarrow R$ and we have to determine
if $f$ is injective, $\claw$ is non-trivial even when $k<n$. This
is the setting we focus on.
Both $\claw$ and $\ed$ have wide applications as useful subroutines
in more complex algorithms \cite{BJLM13,GS20} and as a means of lower
bounding complexity \cite{CKK12,ACL+20}.
$\claw$ and $\ed$ were first tackled by Buhrman et al. in 2000 \cite{BDH+05}
where they gave an $O\left(n^{3/4}\right)$ algorithm and $\Omega\left(n^{1/2}\right)$
lower bound. In 2003 Ambainis, introducing a novel technique of quantum
walks, improved the upper bound to $O\left(n^{2/3}\right)$ in the
query model \cite{ambainis2007quantum}. It was soon realized that
a similar approach works for $\claw$ \cite{CE05,MSS07,Tan09}. Meanwhile
Aaronson and Shi showed a lower bound $\Omega\left(n^{2/3}\right)$
that holds if the range $k=\Omega\left(n^{2}\right)$ \cite{aaronson2004quantum}.
Eventually Ambainis showed that the $\Omega\left(n^{2/3}\right)$
bound holds even if $k=n$ \cite{ambainis2005polynomial}. The same
lower bound has since been reproved using the adversary method \cite{Ros14}.
Until now, only the $\Omega\left(n^{1/2}\right)$ bound based on reduction
of searching was known for $\claw$ with $k=o\left(n\right)$ \cite{BDH+05}.
We consider quantum query complexity of $\claw$ where the input functions
are given as a list of their values in black box. Let $Q\left(f\right)$
denote the bounded error quantum query complexity of $f$. For a short
overview of black box model refer to Buhrman and de Wolf's survey
\cite{BdW02}. Let $[n]$ denote $\left\{ 1,2,\dots,n\right\} $.
Let $\cl{n}{k}:\left[k\right]^{2n}\rightarrow\left\{ 0,1\right\} $
be defined as
\[
\cl{n}{k}\left(x_{1},\dots,x_{n},y_{1},\dots,y_{n}\right)=\begin{cases}
1, & \text{if \ensuremath{\exists i,j\,x_{i}=y_{j}}}\\
0, & \text{otherwise}
\end{cases}.
\]
Our contribution is a quantum algorithm for $\cl{n}{k}$ showing $Q\left(\cl{n}{k}\right)=O\left(n^{1/2+\varepsilon}k^{1/4}\right)$
and a lower bound $Q\left(\cl{n}{k}\right)=\Omega\left(n^{1/2}k^{1/6}\right)$.
In section \ref{sec:Results} we describe the algorithm, and in section
\ref{sec:Lower-Bound} we give the lower bound.
\section{Results\label{sec:Results}}
\begin{thm}
\label{thm:alg}For all \textup{$\varepsilon>0$}, we have $Q\left(\cl{n}{k}\right)=O\left(n^{1/2+\varepsilon}k^{1/4}\right).$
\end{thm}
\begin{proof}
Let $X=\left(x_{1},\dots,x_{n}\right)$, $Y=\left(y_{1},\dots,y_{n}\right)$
be the inputs of the function. We denote $k=n^{\varkappa}$.
Consider the following algorithm parametrized by $\alpha\in\left[0,1\right]$.
\begin{enumerate}
\item Select a random sample $A=\left\{ a_{1},\dots,a_{\ell}\right\} \subseteq\left[n\right]$
of size $\ell=4\cdot n^{\alpha}\cdot\ln n$ and query the variables
$x_{a_{1}},\dots,x_{a_{\ell}}$.\\
Denote by $X_{A}=\left\{ x_{a}\mid a\in A\right\} $ the set containing
their values. Do a Grover search for an element $y\in Y$ such that
$y\in X_{A}$. If found, output 1.
\item[1'] Select a random sample $A'=\left\{ a'_{1},\dots,a'_{\ell}\right\} \subseteq Y$
of size $\ell$ and query the variables $y_{a'_{1}},\dots,y_{a'_{\ell}}$.\\
Denote by $Y_{A'}=\left\{ y_{a'}\mid a'\in A'\right\} $ the set containing
their values. Do a Grover search for an element $x\in X$ such that
$x\in Y_{A'}$. If found, output 1.
\item \label{enu:recstep}Run $\cl{4b\ln n}{k}$ algorithm (with the value
of $b$ specified below) with the following oracle:
\begin{enumerate}
\item To get $x_{i}$: do a pseudorandom permutation on $x_{1},\dots,x_{n}$
using seed $i$ and using Grover's minimum search return the first
value $x_{j}$ such that $x_{j}\notin X_{A}$.
\item To get $y_{i}$: do a pseudorandom permutation on $y_{1},\dots,y_{n}$
using seed $i$ and using Grover's minimum search return the first
value $y_{j}$ such that $y_{j}\notin X_{A'}$.
\end{enumerate}
\end{enumerate}
Let $B=\left\{ i\in\left[n\right]\mid x_{i}\notin X_{A}\right\} $,
$B'=\left\{ i\in\left[n\right]\mid y_{i}\notin Y_{A'}\right\} $ be
the sets containing the indices of the variables which have values
not seen in the steps 1 and 1'. We denote $\left|B\right|=b=n^{\beta}$.
Let us calculate the probability that after step 1 there exists an
unseen value $v$ which is represented in at least $n^{1-\alpha}$
variables, i.e., $v\notin X_{A}\wedge\left|\left\{ i\in\left[n\right]\mid x_{i}=v\right\} \right|\geq n^{1-\alpha}$.
Consider an arbitrary value $v^{*}\in\left[k\right]$ such that $\left|\left\{ i\mid x_{i}=v^{*}\right\} \right|\geq n^{1-\alpha}$.
For $i\in\left[\ell\right]$, let $Z_{i}$ be the event that $x_{a_{i}}=v^{*}$.
$\forall i\in\left[\ell\right]\ \Pr\left[Z_{i}\right]\geq\frac{n^{1-\alpha}}{n}$.
Let $Z=\sum_{i\in\left[\ell\right]}Z_{i}$. Then $\E\left[Z\right]=\ell\cdot\E\left[Z_{1}\right]\geq4\cdot n^{\alpha}\cdot\ln n\cdot\frac{n^{1-\alpha}}{n}=4\ln n$.
Using Chernoff inequality (see e.g. \cite{chung2006concentration}),
\[
\Pr\left[Z=0\right]\leq\exp\left(-\frac{1}{2}\E\left[Z\right]\right)\leq\exp\left(-2\ln n\right)=\frac{1}{n^{2}}.
\]
The probability that there exists such $v^{*}\in\left[k\right]$ is
at most $\frac{n^{\varkappa}}{n^{2}}=o\left(1\right)$. Therefore,
with probability $1-o\left(1\right)$ after step $1$, every value
$v\in B$ is represented in the input less than $n^{1-\alpha}$ times.
The same reasoning can be applied to step $1'$ and the set $B'$.
Therefore, with probability $1-o\left(1\right)$ both $b$ and $b'$
are at most $k\cdot n^{1-\alpha}=n^{\varkappa+1-\alpha}$.
Similarly, we show that with probability $1-o\left(1\right)$ each
$x\in B$ appears as the first element from $B$ in at least one of
the permutations of the oracle in step 2. Let $W_{i}^{x}$ be the
event that $x\in B$ appears in the $i$-th permutation as the first
element from $B$. $\E\left[W_{i}^{x}\right]=\frac{1}{b}$. Let $W^{x}=\sum_{i\in\left[4b\ln n\right]}W_{i}^{x}$.
$\E\left[W^{x}\right]=4b\ln n\cdot\frac{1}{b}=4\ln n$. $\Pr\left[W^{x}=0\right]\leq\exp\left(-2\ln n\right)=\frac{1}{n^{2}}$.
$\Pr\left[\exists x\in B:Z^{x}=0\right]\leq\frac{n}{n^{2}}=\frac{1}{n}=o\left(1\right)$.
The same argument works for $B'$. Therefore, if there is a collision,
it will be found by the algorithm with probability $1-o\left(1\right)$.
We also show that with probability $1-o\left(1\right)$, in all permutations
the first element from $B$ appears no further than in position $4\frac{n}{b}\ln n$
(and similarly for $B'$). We denote by $P_{i,j}$ the event that
in the $i$-th permutation in the $j$-th position is an element from
$B$. $\E\left[P_{i,j}\right]=\frac{b}{n}$. We denote $P_{i}=\sum_{j\in\left[4\cdot\frac{n}{b}\cdot\ln n\right]}P_{i,j}$.
$\E\left[P_{i}\right]=4\cdot\ln n$. $\Pr\left[P_{i}=0\right]\leq\exp\left(-2\ln n\right)=\frac{1}{n^{2}}$.
$\Pr\left[\exists i\in\left[4b\ln n\right]:P_{i}=0\right]\leq\frac{4b\ln n}{n^{2}}\leq\frac{4n\ln n}{n^{2}}=o\left(1\right)$.
Therefore, the Grover's minimum search will use at most $\tilde{O}\left(\sqrt{\frac{n}{n^{\beta}}}\right)$
queries.
The steps 1 and 1' use $\tilde{O}\left(n^{\alpha}\right)$ queries
to obtain the random sample, and $O\left(\sqrt{n}\right)$ queries
to check if there is a colliding element on the other side of the
input. The oracle in step 2 uses $\tilde{O}\left(\sqrt{\frac{n}{n^{\beta}}}\right)$
queries to obtain one value of $x_{i}$ or $y_{i}$.
Therefore the total complexity of the algorithm is
\[
\tilde{O}\left(n^{\alpha}+n^{\frac{1}{2}}+Q\left(\cl{4b\ln n}{k}\right)\cdot n^{\frac{1}{2}-\frac{1}{2}\beta}\right).
\]
By using the $O\left(n^{2/3}\right)$ algorithm in step 2,
\begin{align*}
Q\left(\cl{4b\ln n}{k}\right)\cdot n^{\frac{1}{2}-\frac{1}{2}\beta} & =n^{\frac{2}{3}\beta+\frac{1}{2}-\frac{1}{2}\beta}\\
& =n^{\frac{1}{2}+\frac{1}{6}\beta}\\
& \leq n^{\frac{1}{2}+\frac{1}{6}\left(\varkappa+1-\alpha\right)}\\
& =n^{\frac{4+\varkappa-\alpha}{6}},
\end{align*}
and the total complexity is minimized by setting $\alpha=\frac{4+\varkappa}{7}$.
However, we can do better than that. Notice that the $O\left(n^{2/3}\right)$
algorithm might not be the best choice for solving $\cl{4b\ln n}{k}$
in step 2.
Let $\mathcal{A}_{0}$ denote the regular $O\left(n^{\nicefrac{2}{3}}\right)$
$\cl{n}{k}$ algorithm. For $i>0$, let $\mathcal{A}_{i}$ denote
a version of algorithm from Theorem \ref{thm:alg} that in step \ref{enu:recstep}
calls $\mathcal{A}_{i-1}$. Then we show that for all $n$ and all
$0\leq\varkappa\leq\frac{2}{3}$,
\[
Q\left(\mathcal{A}_{i}\right)=\tilde{O}\left(n^{T_{i}(\varkappa)}\right),
\]
where $T_{i}(\varkappa)=\frac{\left(2^{i}-1\right)\varkappa+2^{i+1}}{2^{i+2}-1}$.
The proof is by induction on $i$. For $i=0$, we trivially have that
$Q\left(\mathcal{A}_{0}\right)=\tilde{O}\left(n^{\nicefrac{2}{3}}\right)$.
For the inductive step, consider the analysis of our algorithm. Let
us set $\alpha=T_{i}\left(\varkappa\right)$. First, notice that $T_{i}\text{\ensuremath{\left(\varkappa\right)}}$
is non-decreasing in $\varkappa$ and $T_{i}\left(\frac{2}{3}\right)=\frac{2}{3}$
for all $i$. Thus for all $\varkappa\leq\frac{2}{3}$, we have $T_{i}\left(\varkappa\right)\leq\frac{2}{3}$,
hence $\alpha\leq\frac{2}{3}$ and $\frac{\varkappa}{1-\alpha+\varkappa}\leq\frac{2}{3}$.
Second, since the coefficient of $\varkappa$ is $\frac{2^{i}-1}{2^{i+2}-1}\leq1$
the function $T_{i}\left(\varkappa\right)$ is above $\varkappa$
for $\varkappa\leq\frac{2}{3}$, establishing $\alpha-\varkappa\geq0$.
This confirms that $\alpha=T_{i}\left(\varkappa\right)$ is a valid
choice of $\alpha$.
It remains to show that the complexity of step \ref{enu:recstep}
does not exceed $T_{i}\left(\varkappa\right)$. By the inductive assumption
and analysis of the algorithm, the complexity (up to logarithmic factors)
of the second step is $n$ to the power of $\left(1-\alpha+\varkappa\right)\cdot T_{i-1}\left(\frac{\varkappa}{1-\alpha+\varkappa}\right)+\frac{\alpha-\varkappa}{2}$.
Finally, we have to show that
\[
\left(1-T_{i}\left(\varkappa\right)+\varkappa\right)\cdot T_{i-1}\left(\frac{\varkappa}{1-T_{i}\left(\varkappa\right)+\varkappa}\right)+\frac{T_{i}\left(\varkappa\right)-\varkappa}{2}\leq T_{i}\left(\varkappa\right).
\]
By expanding $T_{i-1}\left(\varkappa\right)$ and with a slight rearrangement,
we obtain
\[
\frac{(2^{i-1}-1)\varkappa+2^{i}\left(1-T_{i}\left(\varkappa\right)+\varkappa\right)}{2^{i+1}-1}\leq\frac{T_{i}\left(\varkappa\right)+\varkappa}{2}.
\]
We can further rearrange the required inequality by bringing $T_{i}\left(\varkappa\right)$
to right hand side and everything else to the other. Then we get
\[
\frac{(2^{i-1}-1+2^{i}-\frac{2^{i+1}-1}{2})\varkappa+2^{i}}{2^{i+1}-1}\leq T_{i}\left(\varkappa\right)\left(\frac{1}{2}+\frac{2^{i}}{2^{i+1}-1}\right).
\]
After simplification we obtain $\frac{\left(2^{i}-1\right)\varkappa+2^{i+1}}{2^{i+2}-1}\leq T_{i}(\varkappa)$,
which is true.
Since $\lim_{i\rightarrow\infty}\frac{2^{i}-1}{2^{i+2}-1}=\frac{1}{4}$
and $\lim_{i\rightarrow\infty}\frac{2^{i+1}}{2^{i+2}-1}=\frac{1}{2}$,
the result follows.
\end{proof}
\section{Lower Bound\label{sec:Lower-Bound}}
We show a $\Omega\left(n^{1/2}k^{1/6}\right)$ quantum query complexity
lower bound for $\cl{n}{k}$.
\begin{thm}
For all $k\geq2$, we have $Q\left(\cl{n}{k}\right)=\Omega\left(n^{1/2}k^{1/6}\right)$.
\end{thm}
\begin{proof}
Let $\psearch_{m}:\left({*}\cup[k]\right)^{m}\rightarrow[k]$ be the
partial function defined as
\[
\psearch_{m}\left(x_{1},x_{2},\ldots,x_{m}\right)=\begin{cases}
x_{i}, & \text{if }x_{i}\neq*,\forall j\neq i:x_{j}=*\\
\text{undefined}, & \text{otherwise}
\end{cases}.
\]
Consider the function $f_{n,k}=\cl{k}{k}\circ\psearch_{\left\lfloor n/k\right\rfloor }$.
One can straightforwardly reduce $f_{n,k}(x,y)$ to $\cl{n}{k+2}(x',y')$
by setting
\[
x'_{i}=\begin{cases}
x_{i}, & \text{if }x_{i}\neq*\\
k+1, & \text{if }x_{i}=*
\end{cases}
\]
and
\[
y'_{i}=\begin{cases}
y_{i}, & \text{if }y_{i}\neq*\\
k+2, & \text{if }y_{i}=*
\end{cases}.
\]
Next, we show that $Q\left(f_{n,k}\right)=\Omega\left(k^{2/3}\sqrt{n/k}\right)=\Omega\left(n^{1/2}k^{1/6}\right)$.
The fact that $Q\left(\cl{k}{k}\right)=\Omega\left(k^{2/3}\right)$
has been established by Zhang \cite{Zha05}. Furthermore, thanks to
the work done by Brassard et al. in \cite[Theorem 13]{BHK+19} we
know that for $\psearch_{m}$ a composition theorem holds: $Q\left(h\circ\psearch_{m}\right)=\Omega\left(Q\left(h\right)\cdot Q\left(\psearch_{m}\right)\right)=\Omega(Q(h)\cdot\sqrt{m})$.
Therefore,
\[
Q\left(\cl{n}{k}\right)\geq Q\left(\cl{k-2}{k-2}\circ\psearch_{\left\lfloor \frac{n}{k-2}\right\rfloor }\right)=\Omega\left(k^{2/3}\sqrt{\frac{n}{k}}\right)=\Omega\left(n^{1/2}k^{1/6}\right).
\]
\end{proof}
\section{Open Problems}
Can we show that $Q\left(\cl{n}{n^{2/3}}\right)=\Omega\left(n^{\nicefrac{2}{3}}\right)$?
In particular, our algorithm struggles with instances where there
are $\frac{n^{\nicefrac{2}{3}}}{2}$ singletons only two (or none)
of which are matching and the remaining variables are evenly distributed
with $\Theta\left(n^{\nicefrac{1}{3}}\right)$ copies each, such that
none are matching. Thus our algorithm then either has to waste time
sampling all the high-frequency decoy values or have most variables
not sampled by step \ref{enu:recstep}. If this lower bound held,
it would imply a better lower bound for evaluating constant depth
formulas and Boolean matrix product verification \cite[Theorem 5]{CKK12}.
\bibliographystyle{plain}
|
{
"redpajama_set_name": "RedPajamaArXiv"
}
| 6,537
|
Provincia Kırșehir este o provincie a Turciei cu o suprafață de 6,570 km², localizată în partea centrală a țării.
Districte
Adana este divizată în 7 districte (capitala districtului este subliniată):
Akçakent
Akpınar
Boztepe
Çiçekdağı
Kaman
Kırșehir
Mucur
Kırșehir
|
{
"redpajama_set_name": "RedPajamaWikipedia"
}
| 5,382
|
En chimie organique, un atome de carbone asymétrique est un centre stéréogène hybridé sp3, c'est-à-dire substitué avec quatre atomes ou groupes d'atomes de natures différentes.
Exemples
Dans les deux molécules suivantes, les atomes de sont asymétriques :
CH3ClBrH ;
CH3(C2H5)(CH=CH2)H.
En revanche, le du 2-bromo-2-méthylbutane (CH3Br(CH3)-CH2-CH3) n'est pas asymétrique : il est lié à quatre substituants mais ils ne sont pas tous différents. De même, le du bromochlorométhane (H2BrCl) n'est pas asymétrique car il ne possède que trois substituants différents (deux atomes d'hydrogène sont liés au carbone).
Propriétés optiques
Un carbone asymétrique confère à la molécule des propriétés optiques importantes : un pouvoir rotatoire. On dit alors qu'elle est optiquement active : elle fait tourner le plan de polarisation d'une lumière polarisée rectilignement.
Chiralité
Une structure possédant un unique centre stéréogène est chirale, c'est-à-dire qu'elle ne peut pas se superposer à son image dans un miroir plan. Deux molécules possédant une telle relation sont appelées des « énantiomères ». Une structure possédant plusieurs atomes asymétriques peut éventuellement être achirale (ladite structure possède alors un ou plusieurs plans ou centres de symétrie).
Les distances interatomiques entre les différents substituants étant les mêmes dans les deux cas, les propriétés physiques et chimiques des deux énantiomères sont les mêmes, à l'exception des propriétés douées d'un caractère chiral, telles leur pouvoir rotatoire, qui sont de signes opposés.
Articles connexes
Centre asymétrique
Chimie organique
Stéréochimie
de:Asymmetrisches Kohlenstoffatom
|
{
"redpajama_set_name": "RedPajamaWikipedia"
}
| 9,258
|
In der folgenden Liste werden die Ergebnisse der Kommunalwahlen in Winterthur aufgelistet. Es werden die Ergebnisse der Wahlen zum Grossen Gemeinderat ab 1974 angegeben. Das Feld der Partei, die bei der jeweiligen Wahl die meisten Sitze erhalten hat, ist farblich gekennzeichnet.
Der Grosse Gemeinderat tagt jeweils montagnachmittags im Rathaus. Er wurde 1895 geschaffen und umfasste damals 45 Mitglieder. Mit der Stadtvereinigung 1922 wurde die Mitgliederzahl auf 60 aufgestockt.
Parteien
AL: Alternative Liste
BDP: Bürgerlich-Demokratische Partei
DaP: Die andere Partei (früher Demokratische Partei)
EDU: Eidgenössisch-Demokratische Union
EVP: Evangelische Volkspartei der Schweiz
FDP: Freisinnig-Demokratische Partei
FPS: Freiheits-Partei der Schweiz (früher Auto-Partei)
GLP: Grünliberale Partei
GPS: Grüne Partei der Schweiz
LdU: Landesring der Unabhängigen
Mitte: Die Mitte (früher Christlichdemokratische Volkspartei)
POCH: Progressive Organisationen der Schweiz
PPS: Piratenpartei Schweiz
RB: Republikanische Bewegung
SD: Schweizer Demokraten (früher Nationale Aktion)
SP: Sozialdemokratische Partei der Schweiz
SVP: Schweizerische Volkspartei
Wahlen zum Grossen Gemeinderat
Sitzverteilung
Anmerkungen
Grafische Darstellung
Einzelnachweise
Weblink
Leitseite für die Wahlergebnisse
Winterthur
Politik (Winterthur)
|
{
"redpajama_set_name": "RedPajamaWikipedia"
}
| 5,272
|
/**
* (c) 2016 Tieto Finland Oy
* Licensed under the MIT license.
*/
'use strict';
/**
* @ngdoc directive
* @name dashboard.motionsDirective
* @description
* # motionsDirective
*/
angular.module('dashboard')
.directive('dbMotions', [function () {
var controller = ['$log', 'StorageSrv', 'CONST', '$rootScope', '$scope', 'AhjoMeetingSrv', 'Utils', '$timeout', function ($log, StorageSrv, CONST, $rootScope, $scope, AhjoMeetingSrv, Utils, $timeout) {
$log.log("dbMotions: CONTROLLER");
var self = this;
self.isIe = $rootScope.isIe;
self.motionsCont = null;
self.motions = null;
self.meetingActive = null;
self.meetingAborted = null;
var selectedMotion = null;
var mtgItemSelected = StorageSrv.getKey(CONST.KEY.MEETING_ITEM);
var scrollTimer = null;
var setDataTimer = null;
self.submitMotionCnfm = { title: 'STR_CONFIRM', text: 'STR_CNFM_SUBMIT_MOTION', yes: 'STR_YES' };
self.submitFifthGroupMotion = { title: 'STR_CONFIRM', text: 'STR_CNFM_SUBMIT_FIFTH_GROUPMOTION', yes: 'STR_YES' };
// FUNCTIONS
function setMotions(data) {
self.motions = [];
setDataTimer = $timeout(function () {
if (angular.isObject(data)) {
self.motions = (angular.isArray(data.objects)) ? data.objects : [];
self.motionsCont = data;
}
$log.debug("dbMotions.setMotions, after timeout data=", data);
}, 0);
}
function signMotion(aMotion, aMeetingGuid, aSupport) {
if (!angular.isObject(aMotion) || !angular.isString(aMeetingGuid)) {
$log.error("dbMotions.signMotion: bad args", arguments);
return;
}
$log.log("dbMotions.signMotion: ", arguments);
var copyMotion = angular.copy(aMotion);
copyMotion.actionPersonGuid = mtgItemSelected.dbUserPersonGuid;
copyMotion.meetingGuid = aMeetingGuid;
copyMotion.isUserSupported = aSupport;
AhjoMeetingSrv.updateMotion(copyMotion).then(function (resp) {
if (angular.isObject(resp) && angular.isObject(resp.motion)) {
$log.log("dbMotions.signMotion done", resp);
angular.merge(aMotion, resp.motion);
aMotion.supporters = resp.motion.supporters; // Merge won't handle removals
}
else {
$log.error("dbMotions.signMotion done ", arguments);
}
}, function (error) {
$log.error("dbMotions.signMotion: error: ", arguments);
Utils.showErrorForError(error);
}, function (/*notification*/) {
aMotion.ongoing = true;
}).finally(function () {
aMotion.ongoing = false;
});
}
function submitMotion(aMotion) {
if (!angular.isObject(aMotion)) {
$log.error("dbMotions.submitMotion: bad args", arguments);
return;
}
$log.log("dbMotions: submitMotion", arguments);
var copyMotion = angular.copy(aMotion);
copyMotion.isSubmitted = true;
copyMotion.actionPersonGuid = mtgItemSelected.dbUserPersonGuid;
copyMotion.meetingGuid = mtgItemSelected.meetingGuid;
AhjoMeetingSrv.updateMotion(copyMotion).then(function (resp) {
if (angular.isObject(resp) && angular.isObject(resp.motion)) {
$log.log("dbMotions.submitMotion done", resp);
angular.merge(aMotion, resp.motion);
}
else {
$log.error("dbMotions.submitMotion done ", resp);
}
}, function (error) {
$log.error("dbMotions.submitMotion", error);
Utils.showErrorForError(error);
}, function (/*notification*/) {
aMotion.ongoing = true;
}).finally(function () {
aMotion.ongoing = false;
});
}
function setMotionAsRead(aMotion) {
if (!angular.isObject(aMotion)) {
$log.error("dbMotions.setMotionAsRead: bad args", arguments);
return;
}
$log.log("dbMotions: setMotionAsRead", arguments);
var copyMotion = angular.copy(aMotion);
copyMotion.isUserRead = true;
copyMotion.actionPersonGuid = mtgItemSelected.dbUserPersonGuid;
copyMotion.meetingGuid = mtgItemSelected.meetingGuid;
AhjoMeetingSrv.updateMotion(copyMotion).then(function (resp) {
if (angular.isObject(resp) && angular.isObject(resp.motion)) {
$log.log("dbMotions.setMotionAsRead done", resp);
angular.merge(aMotion, resp.motion);
}
else {
$log.error("dbMotions.setMotionAsRead done ", resp);
}
}, function (error) {
$log.error("dbMotions.setMotionAsRead: ", error);
Utils.showErrorForError(error);
}, function (/*notification*/) {
aMotion.ongoing = true;
}).finally(function () {
aMotion.ongoing = false;
});
}
self.selectMotion = function (aMotion) {
$log.log("dbMotions: selectMotion", arguments);
if (self.isSelected(aMotion)) {
selectedMotion = null;
}
else {
selectedMotion = angular.isObject(aMotion) ? aMotion : null;
}
if (angular.isObject(aMotion) && aMotion.isUserRead === false) {
setMotionAsRead(aMotion);
}
};
self.isSelected = function (motion) {
return angular.equals(motion, selectedMotion);
};
self.typeString = function (id) {
var result = '-';
angular.forEach(CONST.MOTIONTYPES, function (value) {
if (angular.isObject(value) && value.id === id) {
result = value.stringId;
}
}, this);
return result;
};
self.submit = function (aMotion) {
submitMotion(aMotion);
};
self.support = function (aMotion) {
if (!angular.isObject(mtgItemSelected)) {
$log.error("dbMotions.support: invalid mtg item", mtgItemSelected);
return;
}
signMotion(aMotion, mtgItemSelected.meetingGuid, true);
};
self.removeSupport = function (aMotion) {
if (!angular.isObject(mtgItemSelected)) {
$log.error("dbMotions.removeSupport: invalid mtg item", mtgItemSelected);
return;
}
signMotion(aMotion, mtgItemSelected.meetingGuid, false);
};
$scope.$watch(function () {
return StorageSrv.getKey(CONST.KEY.MOTION_DATA);
}, function (data) {
$log.debug("dbMotions watch triggered: " + CONST.KEY.MOTION_DATA, arguments);
setMotions(data);
});
$scope.$watch(function () {
return $rootScope.meetingStatus;
}, function (status) {
self.meetingActive = (status === CONST.MTGSTATUS.ACTIVE.stateId);
self.meetingAborted = (status === CONST.MTGSTATUS.ABORTED.stateId);
});
var modeWatcher = $rootScope.$on(CONST.MTGUICHANGED, function (event, data) {
if (angular.isObject(data) && (data.blockMode === CONST.BLOCKMODE.SECONDARY || data.blockMode === CONST.BLOCKMODE.DEFAULT)) {
var motionData = StorageSrv.getKey(CONST.KEY.MOTION_DATA);
setMotions(motionData);
}
});
$scope.$on('$destroy', function () {
$log.debug("dbMotions: DESTROY");
if (angular.isFunction(modeWatcher)) {
modeWatcher();
}
if (angular.isFunction(setDataTimer)) {
setDataTimer();
}
if (angular.isFunction(scrollTimer)) {
scrollTimer();
}
});
}];
return {
scope: {},
templateUrl: 'views/motions.directive.html',
restrict: 'AE',
controller: controller,
controllerAs: 'c',
replace: 'true'
};
}]);
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 4,081
|
In Store Thursday, December 01, 2022 7:00 PM PT
John Truby discusses & signs THE ANATOMY OF GENRES: HOW STORY FORMS EXPLAIN THE WAY THE WORLD WORKS
Join Barnes & Noble - The Grove at Farmers Market on Thursday, December 1, 2022, as we welcome the founder and director of Truby's Writers Studio, John Truby, to the store to discuss & sign THE ANATOMY OF GENRES: HOW STORY FORMS EXPLAIN THE WAY THE WORLD WORKS, a guide to understanding the major genres of the story world.
Books can be purchased at Barnes & Noble – The Grove at Farmers Market starting Thursday, December 1st at 11:00 am. You are not required to purchase a book to attend this event, but if you'd like to get it signed, you must purchase the book before the start of the event.
Guests will be admitted into the event beginning at 6:30 PM the day of the event. Please have your wristband visible for check-in.
ABOUT JOHN TRUBY:
John Truby is the founder and director of Truby's Writers Studio. Over the past thirty years, he has taught more than fifty thousand students worldwide, including novelists, screenwriters, and TV writers. Together, these writers have generated more than fifteen billion dollars at the box office.
Truby has an ongoing program where he works with students who are actively creating shows, movies, and novel series. He regularly applies his genre techniques in story consulting work with major studios including Disney, Sony Pictures, Fox, HBO, the BBC, Canal Plus, Globo, and AMC. He lives in Los Angeles with his wife, Leslie, and their two cats, Tink and Peanut.
ABOUT THE ANATOMY OF GENRES: HOW STORY FORMS EXPLAIN THE WAY THE WORLD WORKS:
Most people think genres are simply categories on Netflix or Amazon that provide a helpful guide to making entertainment choices. Most people are wrong. Genre stories aren't just a small subset of the films, video games, TV shows, and books that people consume. They are the all-stars of the entertainment world, comprising the vast majority of popular stories worldwide. That's why businesses—movie studios, production companies, video game studios, and publishing houses—buy and sell them. Writers who want to succeed professionally must write the stories these businesses want to buy. Simply put, the storytelling game is won by mastering the structure of genres.
The Anatomy of Genres: How Story Forms Explain the Way the World Works is the legendary writing teacher John Truby's step-by-step guide to understanding and using the basic building blocks of the story world. He details the three ironclad rules of successful genre writing, and analyzes more than a dozen major genres and the essential plot events, or "beats," that define each of them. As he shows, the ability to combine these beats in the right way is what separates stories that sell from those that don't. Truby also reveals how a single story can combine elements of different genres, and how the best writers use this technique to craft unforgettable stories that stand out from the crowd.
Just as Truby's first book, The Anatomy of Story, changed the way writers develop stories, The Anatomy of Genres will enhance their quality and expand the impact they have on the world.
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 4,012
|
Perfettamente sbagliato è un singolo del cantautore italiano Nesli, pubblicato il 7 ottobre 2016 come secondo estratto dal nono album in studio Kill Karma.
Descrizione
Perfettamente sbagliato è una riflessione molto intensa di Nesli sulla sua vita, sul passato e sui cambiamenti che la mancanza di una figura amata provoca in noi al punto che, nonostante tutto vada per il verso giusto, abbiamo in noi una sensazione che non se ne va e ci fa sentire che c'è qualcosa di "perfettamente sbagliato" nella nostra vita.
Note
Collegamenti esterni
|
{
"redpajama_set_name": "RedPajamaWikipedia"
}
| 1,056
|
"Knife against her neck:" Body cameras capture arrest of man accused of taking daughter across state lines
Kenneth Brown arrest
Kenneth Brown
INDIANA -- It was a dramatic arrest and rescue as police cornered a Milwaukee man who abducted his two-year-old daughter and took her across state lines.
Screams, cries and pleas for help can be heard in the body camera video from a police officer in Indiana. The mother of the little girl, who is also Kenneth Brown's ex, said she was terrified for her baby girl while he was on the run.
"Get out of the car. We don't want to hurt you man. Just put the knife down. It's someone's kid," police told Brown in the video.
He was in a stolen minivan, pinned against a guardrail on Interstate 65 in Indiana. He clutched his daughter, Kendra.
"Don't hurt her. Don't do it," police said. "He's got the knife literally against her (expletive) neck."
The video shows police with guns drawn -- surrounding the vehicle in an attempt to get the child to safety.
"He's trying to smother the kid right now I think. We can get this solved. You just need to stop what you're doing right now," police said.
After he refused to obey their commands, police broke the rearview window. Brown handed over the girl before cutting himself.
"He's cutting his throat. He's cutting his throat," police said.
Police used a Taser on him -- finally calming him down.
"Get out of the truck now. We are here to help you," police said.
Days after the chaotic ordeal, Kendra was back home on Monday, October 30th -- thriving -- recovering from a few scratches, but seemingly OK.
"Happy that they got her home and she's OK and she is still just as happy," Kendra's mother said.
The toddler's mother said she's working to get past the terrifying incident.
"It was just scary," she said.
Sadly, this isn't the first incident in which the toddler has been subjected to involving the police. In August, Brown refused to hand her over during an arrest for a domestic situation. He was ordered to have no contact with Kendra's mother or her kids.
"He's in custody. He's going to be there for a while. Hopefully that sticks," she said.
Kendra's mother said she's unsure why Brown took the girl, and where he was headed with her. He's facing several charges in Indiana, including intimidation, using a deadly weapon and criminal confinement. He's due in court on November 1st, which could delay extradition to Wisconsin.
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 5,288
|
K&L Gates associate, Jamie Mitchell, talks about the firm's commitment to diversity and inclusion and how it can strengthen interpersonal relationships with clients.
K&L Gates is proud to sponsor the 2012 Corporate Counsel Women of Color Career Strategies Conference. This video introduction marks our 7th year as a sponsor.
Partner Tara Clancy speaks about her past and present as a female lawyer.
Partner Cindy Ohlenforst discusses how her passion for the law has guided her career.
Partner Mary Turk-Meena explores the importance of younger women lawyers seeing experienced women lawyers succeed.
K&L Gates has been the sponsor of the Corporate Counsel Women of Color Career Strategies Conference for the past six years. This video introduces the 2011 event.
|
{
"redpajama_set_name": "RedPajamaC4"
}
| 4,983
|
{"url":"https:\/\/hrj.episciences.org\/volume\/view\/id\/37","text":"# Volume 26 - 2003\n\n### 1. Mean-square upper bound of Hecke $L$-functions on the critical line.\n\nWe prove the upper bound for the mean-square of the absolute value of the Hecke $L$-functions (attached to a holomorphic cusp form) defined for the congruence subgroup $\\Gamma_0 (N)$ on the critical line uniformly with respect to its conductor $N$.","date":"2023-03-30 20:22:08","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8800199627876282, \"perplexity\": 305.2241254626551}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2023-14\/segments\/1679296949387.98\/warc\/CC-MAIN-20230330194843-20230330224843-00729.warc.gz\"}"}
| null | null |
Q: Provider media type matching? Any ideas how are @Provider Produces/Consumes types are matched?
I have a Provider that both Produces and Consumes Lua, I'd also like to be able to Consume XML or JSON with the built-in Providers and reply with Lua from my Provider.
I can't seem to come-up with the right combination of Produces/Consumes values in my Lua Provider to make this happen but it seems like it should be possible.
Help?
A: Application input error, we've found and fixed it.
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 3,522
|
\section*{Introduction}
We apply Ringel duality to the representation theory of the classical algebraic super groups. Some of the motivation and inspiration for this study originates in the recent construction of the abelian envelope of Deligne's universal monoidal category of \cite{Deligne} in \cite{EHS} via general linear super groups. In the background of the construction in \cite{EHS} one has Ringel duality between finite truncations of the abelian envelope and the original category of Deligne. Recently an infinite version of Ringel duality has been developed rigorously in \cite{BSRingel} which allows to speak of Ringel duality between the actual abelian envelope and the original category of Deligne.
In the current paper, we provide an alternative proof for the truncated duality, for which we do not apply any serious representation theory of super groups. Instead we rely on the theory of Brauer algebras (the endomorphism algebras of objects in Deligne's category) and the classical construction of Schur algebras out of affine group schemes. This also allows us to extend this Ringel duality to positive characteristics and to the orthosymplectic super group, in which cases the results are new. The Ringel dualities we establish allow to transfer information between the representation theory of Brauer algebras and super groups.
The results also have applications to invariant theory of super groups. In particular we re-establish many cases of the first and second fundamental theorem of invariant theory as obtained in e.g.~\cite{Sel, DLZ, ES, Kujawa, BrCat, Yang} in characteristic zero, now with purely algebraic proofs, and extend them to positive characteristic.
The paper is organised as follows. In Sections~\ref{SecGrp} and~\ref{DiagCat} we recall the relevant notions of super algebra and diagram categories. In Section~\ref{SecCOAL} we develop some theory of centraliser coalgebras for representations of small categories. The goal is to prove that the centraliser algebra of the Brauer algebra acting on tensor powers is given by a Schur type algebra coming from the relevant super group. The study of centraliser algebras via coalgebras and Schur algebras is a classical method, see \cite{Green}. However, in order to obtain a rigorous proof which is computation free and applicable to all classical super groups we propose a method which involves the monoidal structure of the relevant Deligne category, rather than relying solely on the Brauer algebra itself.
In Section~\ref{SecSchur} we apply the theory of Section~\ref{SecCOAL} to the Brauer, the walled Brauer, the periplectic Brauer and the Brauer-Clifford algebra. Furthermore, we identify the abelian subcategory of the category of algebraic representations of the corresponding super group which is described by the centraliser algebra (the Schur algebra).
In Section~\ref{SecRingel} we show that, under certain restrictions on the parameters, the centraliser algebras of the (walled) Brauer algebra as described in Section~\ref{SecSchur} are precisely the Ringel duals of the Brauer algebras. In particular, this establishes a Ringel duality between the (walled) Brauer algebra and a category of representations of a super group, for all parameters for which the Brauer algebra is quasi-hereditary.
In Section~\ref{SecApp} we describe some applications of our results to the representation theory and invariant theory of super groups. In Appendix~\ref{AppRingel} we summarise some elementary properties of quasi-hereditary algebras and Ringel duality which are used in the main part of the paper.
\subsection*{Notation} Throughout the paper, $\Bbbk$ denotes an algebraically closed field of characteristic $p\ge 0$. We will often leave out $\Bbbk$ in subscripts. We let $\mathbb{I}$ denote the image of $\mathbb{Z}$ in $\Bbbk$, so $\mathbb{I}\simeq\mathbb{Z}$ if $p=0$ and $\mathbb{I}\simeq\mathbb{F}_p$ if $p>0$.
We set $\mathbb{N}=\{0,1,2,\cdots\}$ and denote the cyclic group of order two by $\mathbb{Z}_2=\mathbb{Z}/2\mathbb{Z}=\{\bar{0},\bar{1}\}$. For $a,b\in\mathbb{Z}$, we denote by $[\![a,b]\!]$ the set of integers $x$ with $a\le x\le b$.
For $n\in\mathbb{N}$, we denote the symmetric group on $n$ symbols by $\mathrm{S}_n$. For $\lambda\vdash n$, we have the corresponding Specht module $S(\lambda)$ of $\Bbbk\mathrm{S}_n$. We will only consider characteristics of $\Bbbk$ where the Specht modules are simple. For $r\in\mathbb{N}$, we set
$$\mathscr{J}(r)\;=\;\{r-2i\,|\, 0\le i\le r/2\}\;\subset\,\mathbb{N}\quad\mbox{and}\quad\Lambda_r:=\{\lambda\vdash i\,|\, i\in\mathscr{J}(r)\}.$$
For $(r,s)\in\mathbb{N}\times\mathbb{N}$, we will also use the following set of bipartitions
$$\Lambda_{r,s}:=\{(\lambda,\mu)\;|\;\lambda\vdash r-j, \,\mu\vdash s-j \;\mbox{for some}\; 0\le j\le \min(r,s)\}.$$
For a finite dimensional associative algebra $A$, we denote its category of finite dimensional left modules by $A$-mod.
As usual, a subcategory where the morphism sets are identical to those in the original category is called a full subcategory. Dually, a subcategory with same set of objects as the original category will be called a dense subcategory.
\section{Affine super group schemes}
\label{SecGrp}
\subsection{Elementary super algebra}
Super algebra corresponds to algebra in the symmetric monoidal category $\mathbf{svec}_{\Bbbk}$.
\subsubsection{Vector spaces} We denote by $\mathbf{svec}_{\Bbbk}$ the category of finite dimensional $\mathbb{Z}_2$-graded vector spaces. Morphisms in this category are the grading preserving $\Bbbk$-linear homomorphisms. For super spaces $V$ and $W$ we denoted the space of such morphisms by ${\rm Hom}_{\Bbbk}(V,W)$. The category $\mathbf{svec}$ is $\Bbbk$-linear and monoidal. For $v\in V_{i}$, with $i\in\{\bar{0},\bar{1}\}$ we write $|v|=i$. The parity change functor $\Pi$ satisfies $(\Pi V)_{i}=V_{i+\bar{1}}$.
The monoidal category of all $\mathbb{Z}_2$-graded vector spaces is denoted by $\mathbf{s\hspace{-0.4mm}Vec}_{\Bbbk}$.
We consider $\mathbf{svec}_{\Bbbk}$ and $\mathbf{s\hspace{-0.4mm}Vec}_{\Bbbk}$ as symmetric monoidal categories with braiding
$$\gamma_{V,W}: \;V\otimes W\,\stackrel{\sim}{\to}\, W\otimes V,\quad v\otimes w\mapsto (-1)^{|w||v|}w\otimes v.$$
Definitions as the above extend uniquely from homogeneous elements by linearity. By a ``form'' on a super vector space $V$ we will always mean a non-degenerate bilinear symmetric (with respect to $\gamma_{V,V}$) form. Such a form is even, resp. odd, if $\langle v,w\rangle=\bar{0}$ when $|u|+|v|=\bar{1}$, resp. $|u|+|v|=\bar{0}$.
\subsubsection{}For $V\in\mathbf{svec}$, we denote by $\dim V\in\mathbb{N}\times\mathbb{N}$ the pair $(m,n)$, with $m$, resp. $n$, the ordinary dimension of $V_{\bar{0}}$, resp. $V_{\bar{1}}$. By $\mathrm{sdim} V\in\mathbb{I}$ we denote the categorical dimension in $\mathbf{svec}$. Concretely, $\mathrm{sdim} V$ is the image of $m-n$ in $\Bbbk$. When working with such a fixed vector space, we use the function
$$
[\cdot]: \;\lbr1,m+n]\!] \,\to\,\mathbb{Z}_2,\quad\mbox{with}\quad
[i]\;=\;\begin{cases}
\bar{0}&\mbox{ for $ i\in\lbr1, m]\!]$},\\
\bar{1}&\mbox{ for $ i\in[\![ m+1, m+n]\!]$.}
\end{cases}$$
When we denote a basis of $V$ by $\{e_i\}$, we assume that $e_i\in V_{[i]}$.
The monoidal category $\mathbf{s\hspace{-0.4mm}Vec}$ has internal homomorphisms, denoted by
$$\underline{\mathrm{Hom}}(V,W),\quad\mbox{with} \quad \underline{\mathrm{Hom}}(V,W)_{\bar{0}}={\rm Hom}(V,W)\quad\mbox{and}\quad \underline{\mathrm{Hom}}(V,W)_{\bar{1}}={\rm Hom}(V,\Pi W).$$
For $V\in\mathbf{s\hspace{-0.4mm}Vec}$, we set $V^\ast=\underline{\mathrm{Hom}}(V,\Bbbk)$.
\subsubsection{Algebras}
A super algebra $(A,m,\eta)$ is a monoid in $\mathbf{s\hspace{-0.4mm}Vec}$. As this does not involve the braiding, this is just a $\mathbb{Z}_2$-graded algebra. However the definition of the tensor product $A\otimes B$ of two algebras involves the braiding.
Similarly, a commutative super algebra is a commutative monoid in $\mathbf{s\hspace{-0.4mm}Vec}$.
We denote the category of commutative super algebras by $\mathbf{cs\hspace{-0.3mm}Al}_{\Bbbk}$.
By definition, an $A$-module corresponds to an $M\in\mathbf{s\hspace{-0.4mm}Vec}$ with a super algebra morphism
$A\to\underline{\mathrm{End}}(M).$ The superspace $\underline{\mathrm{Hom}}_A(M,N)$ consists of $f\in \underline{\mathrm{Hom}}(M,N)$ which satisfy $f(av)=(-1)^{|a||f|}af(v)$, for homogeneous $a\in A$ and $v\in M$. It is easy to see that we get an isomorphic super space when we do not impose a minus sign in the commutation relation.
\subsubsection{Categories}\label{SupCat} Super categories and super functors are generalisations of super algebras and super modules in the same way as $\Bbbk$-linear categories and functors are generalisations of algebras and modules. Super categories and super functors are thus enriched over the monoidal category $\mathbf{s\hspace{-0.4mm}Vec}$. We will often interpret a super category with finitely many objects simply as a super algebra (with some distinguished idempotents).
A monoidal super category is defined similarly as a $\Bbbk$-linear monoidal category, but based on the super interchange law
$$(f\otimes g)\circ (h\otimes k)\;=\; (-1)^{|g||h|} (f\circ h\otimes g\circ k).$$
We refer to \cite{BE} for a complete treatment of monoidal super categories.
An example of a monoidal supercategory is the category $\underline{\mathbf{svec}}$ which has the same objects as $\mathbf{svec}$, but morphism superspaces given by the internal morphism spaces.
As a manifestation of the super interchange law, $f\otimes g$ for two homogeneous morphisms $f,g$ in $\underline{\mathbf{svec}}$ has to be interpreted as
$$(f\otimes g)(v\otimes w)\;=\; (-1)^{|g||v|} f(v)\otimes g(w),$$
with $v,w$ homogeneous elements in the relevant super spaces.
\subsubsection{Coalgebras} A super coalgebra $(C,\Delta,\varepsilon)$ is a comonoid in $\mathbf{s\hspace{-0.4mm}Vec}$, see e.g. \cite[\S 2.1.1]{Abe}. We denote by scom-$C$, the category of comodules in $\mathbf{svec}$. By definition, a comodule is a finite dimensional super vector space $M$ with a morphism
$$c_M:\, M\to M\otimes C \qquad\mbox{in $\mathbf{s\hspace{-0.4mm}Vec}$},$$
such that
$$({\rm id}_M\otimes \varepsilon)\circ c_M={\rm id}_M\qquad\mbox{and}\qquad ({\rm id}_M\otimes \Delta)\circ c_M=(c_M\otimes {\rm id}_C)\circ c_M.$$
When we consider the category of comodules in $\mathbf{svec}$ of $C$, but with all (not necessarily grading preserving) morphisms which commute with the coaction, we write $\underline{{\rm scom}}$-C. The category $\underline{{\rm scom}}$-$C$ is a super category, but not necessarily abelian. Denote by com-$C$ the category of finite dimensional comodules of $C$, regarded as an ordinary coalgebra. By definition, we have forgetful functors
\begin{equation}\label{eqFrg}\mathrm{Frg}:\;\mbox{scom-}C\;\to\;\mbox{com-}C\quad\mbox{and}\quad \underline{\mathrm{Frg}}:\;\underline{{\rm scom}}\mbox{-}C\;\to\;\mbox{com-}C,
\end{equation} where the latter is fully faithful.
\subsubsection{Matrix coefficients}\label{defCM} Fix a coalgebra $C$ in $\mathbf{s\hspace{-0.4mm}Vec}$. For $M$ in scom-$C$, we have a coalgebra morphism
\begin{equation}\label{eqMC}[-|-]_C:\;M^\ast\otimes M\to C,\quad \alpha\otimes v\mapsto [\alpha|v]_C\;:=\; (\alpha\otimes {\rm id}_C)\circ c_M(v). \end{equation}
We denote by $C^M$ the image of the above morphism.
By definition, $C^M$ is a subcoalgebra of $C$ and $M$ restricts to a comodule of $C^M$.
For a short exact sequence in scom-$C$ $$0\to M_1\to M\to M_2\to 0,$$
we have $C^M\supset C^{M_1}+C^{M_2}$, $C^{M^{\oplus n}}=C^M$ and $C^{\Pi M}=C^M$.
For any finite dimensional subcoalgebra $B$ of $C$, we can therefore consider scom-$B$ canonically as the abelian subcategory of scom-$C$ of all comodules which are subcomodules of direct sums of the $C$-comodules $B$ and $\Pi B$.
\subsubsection{}\label{dualA} For a coalgebra $(C,\Delta,\varepsilon)$ in $\mathbf{s\hspace{-0.4mm}Vec}$, we have the dual algebra $(C^\ast,m,\eta)$, with
$$m(\alpha,\beta)(x):=(\alpha\otimes\beta)\circ\Delta(x), \quad\mbox{for $\alpha,\beta\in C^\ast$ and $x\in C$, and } \eta(\lambda):=\lambda\varepsilon.$$
We could also define the dual with $m$ twisted by $\gamma$ in $\mathbf{s\hspace{-0.4mm}Vec}$. Then taking the dual would intertwine the functors forgetting the $\mathbb{Z}_2$-grading, but would interchange comodule structures on a space $V$ with module structures on $V^\ast$, rather than on $V$.
\subsubsection{Hopf algebras}
A super bialgebra is a tuple $(A,m,\eta,\Delta,\varepsilon)$ in $\mathbf{s\hspace{-0.4mm}Vec}$ such that $(A,m,\eta)$ is a monoid and $(C,\Delta,\varepsilon)$ is a comonoid for which $\Delta$ and $\varepsilon$ are algebra morphisms (in $\mathbf{s\hspace{-0.4mm}Vec}$), see \cite[Theorem~2.1.1]{Abe}.
If we additionally have an antipode $S$, see \cite[\S 2.1.2]{Abe}, then
$(A,m,\eta,\Delta,\varepsilon,S)$ is a Hopf super algebra. The antipode $S$ is unique if it exists.
We say that a Hopf super algebra $A$ is commutative if $(A,m,\eta)$ is commutative in $\mathbf{s\hspace{-0.4mm}Vec}$.
For a Hopf super algebra $A$ and two comodules $M,N$ the comodule structure on $M\otimes_{\Bbbk}N$ is given by
\begin{equation}\label{eqTP}M\otimes N\stackrel{c_M\otimes c_N}{\to}M\otimes C\otimes N\otimes C\stackrel{M\otimes \gamma_{C,N}}{\to}M\otimes N\otimes C\otimes C\stackrel{M\otimes N\otimes m}{\to}M\otimes N\otimes C.\end{equation}
The comodule structure on the dual space of a comodule $M$ is defined by
$$c_{M^\ast}(e_i^\ast)=\sum_j (-1)^{[j]([i]+[j])}e^\ast_j\otimes S(c_{ij}),$$
with $\{e_i\}$ a basis of $M$ with $c_M(e_j)=\sum_ie_i\otimes c_{ij}$.
\subsection{Algebraic super groups}
By the Yoneda embedding, we have an equivalence between the category of commutative Hopf super algebras and the opposite of the category of representable functors
$$\mathbf{cs\hspace{-0.3mm}Al}_{\Bbbk}\to \mathbf{G\hspace{-0.3mm}r\hspace{-0.3mm}p}.$$
The latter are known as affine super group schemes, or simply algebraic super groups in case the corresponding Hopf super algebra is finitely generated as an algebra.
Now fix a finitely generated commutative Hopf super algebra $\mathcal{O}$ and set
${\mathsf{G}}={\mathbf{cs\hspace{-0.3mm}Al}}(\mathcal{O},-)$ and
$${\rm Rep}{\mathsf{G}}\;:=\;\mbox{scom-}\mathcal{O}\quad\mbox{and}\quad \underline{\mathrm{Rep}}{\mathsf{G}}\;:=\;\underline{{\rm scom}}\mbox{-}\mathcal{O}.$$
\subsubsection{} We set $\mathcal{O}_+=\ker(\varepsilon)$, with $\varepsilon$ the counit of $\mathcal{O}$, and
$$\mathrm{Dist}(\mathcal{O}):= \bigcup_{i>0}(\mathcal{O}/(\mathcal{O}_+)^i)^\ast\;\subset\;\mathcal{O}^\ast.$$
Then $\mathrm{Dist}(\mathcal{O})$ inherits a Hopf algebra structure from $\mathcal{O}$, see \cite[\S 2.3.5]{Abe}, as an extension of the procedure in \ref{dualA}.
We define $\mathrm{Lie}({\mathsf{G}})$ as the space of primitive elements in $\mathrm{Dist}(\mathcal{O})$:
$$\mathrm{Lie}({\mathsf{G}})\;:=\{d\in \mathcal{O}^\ast\,|\, d(fg)= d(f)\varepsilon(g)+\varepsilon(f)d(g),\;\mbox{for all $f,g\in\mathcal{O}$}\}.$$
We interpret $\mathrm{Lie}(G)$ as a Lie super algebra for multiplication given by the supercommutator in the associative algebra $\mathrm{Dist}(\mathcal{O})$.
\begin{lemma}\label{LemDist}\cite[Remark~2(2) and Lemma~19]{Masuoka}
\begin{enumerate}[(i)]
\item If $\mathrm{char}(\Bbbk)=0$, we have $\mathrm{Dist}(\mathcal{O})=U(\mathrm{Lie} {\mathsf{G}})$.
\item If ${\mathsf{G}}_{ev}$ is connected, the natural pairing $\mathrm{Dist}(\mathcal{O})\times\mathcal{O}\to\Bbbk$ satisfies
$$u(f)\not=0\;\mbox{for some $u\in \mathrm{Dist}(\mathcal{O})$ for all $0\not=f\in\mathcal{O}$.}$$
\end{enumerate}
\end{lemma}
\subsubsection{}\label{Omod} Fix a $(M,c_M)\in \mbox{scom-}\mathcal{O}$.
This is naturally a $\mathrm{Dist}(\mathcal{O})$-module, for
$$\mathrm{Dist}(\mathcal{O})\to \underline{\mathrm{End}}(M),\quad u\mapsto (id_M\otimes u)\circ c_M,$$
and thus by restriction also a $\mathrm{Lie}{\mathsf{G}}$-module.
Now consider the finite dimensional coalgebra $\mathcal{O}^M$ as in \ref{defCM}.
We have the finite dimensional algebra $S^M:=(\mathcal{O}^M)^\ast$ and an injective algebra morphism
$$S^M\hookrightarrow\underline{\mathrm{End}}(M),\quad \alpha\mapsto ({\rm id}_M\otimes \alpha)\circ c_M. $$
By construction, the above maps yield a commutative diagram of algebra morphisms
\begin{equation}\label{CommD}\xymatrix{
\mathcal{O}^\ast\ar@{->>}[r]&S^M\ar@{^{(}->}[r]& \underline{\mathrm{End}}(M).\\
&\mathrm{Dist}(\mathcal{O})\ar[ur]\ar@{^{(}->}[ul]
}\end{equation}
\subsubsection{} For each $g\in {\mathsf{G}}(\Bbbk)$, we have the algebra morphism
$$\mathrm{ad}_g:=(g\otimes {\rm id}\otimes g)\circ (\Delta\otimes S)\circ \Delta\;:\;\; \mathcal{O}\to\mathcal{O}.$$
The following lemma, in which we ignore monoidal structures, is standard for the special case $g^2=\varepsilon$ ({\it i.e.} a homomorphism $\mathbb{Z}_2\to{\mathsf{G}}$).
\begin{lemma}\label{GenZ2}
Assume $p\not=2$ and there exists $g\in {\mathsf{G}}(\Bbbk)$ such that $\mathrm{ad}_g(f)=(-1)^{|f|}f$, for each homogeneous $f\in\mathcal{O}$. Then ${\rm Rep}{\mathsf{G}}$ has full subcategories ${\mathbf{C}}_1$ and ${\mathbf{C}}_2$ yielding equivalences
$${\mathbf{C}}_1\oplus{\mathbf{C}}_2\stackrel{\sim}{\to} {\rm Rep}{\mathsf{G}}\quad\mbox{and}\quad \Pi: {\mathbf{C}}_1\stackrel{\sim}{\to}{\mathbf{C}}_2.$$
Furthermore, with $i\in{1,2}$ and $M\in{\rm Rep}{\mathsf{G}}$, the functor $\mathrm{Frg}$ in \eqref{eqFrg} restricts to equivalences
$${\mathbf{C}}_i\stackrel{\sim}{\to} \mbox{{\rm com-}}\mathcal{O}\quad\mbox{and}\quad ( {\mathbf{C}}_i\cap\mbox{{\rm scom-}}\mathcal{O}^M) \stackrel{\sim}{\to} \mbox{{\rm com-}}\mathcal{O}^{\mathrm{Frg} M}.$$
\end{lemma}
\begin{proof}
We start by considering $\mathcal{O}$ as an ungraded coalgebra. For every $M\in \mbox{com-}\mathcal{O}$ we have
$$a_M:= ({\rm id}\otimes g)\circ c_M\;\in\;{\rm End}_{\Bbbk}(M)$$
with commutation relations
\begin{equation}
\label{commaM}(a_M\otimes{\rm id})\circ c_M\;=\; ({\rm id}\otimes \mathrm{ad}_g)\circ c_M\circ a_M\qquad\mbox{and}\qquad f\circ a_M=a_N\circ f,
\end{equation}
for all $f\in {\rm Hom}_{\mathcal{O}}(M,N)$.
Take an indecomposable $M\in\mbox{com-}\mathcal{O}$. It follows by assumption and the first equation in \eqref{commaM} that the vector space $M$ decomposes into two generalised eigenspaces of $a_M$ with eigenvalues $\lambda$ and $-\lambda$ for some $\lambda\in\Bbbk^{\times}$. Furthermore, we can impose precisely two $\mathbb{Z}_2$-gradings on $M$ which are compatible with $c_M$ for $\mathcal{O}$ now regarded as a $\mathbb{Z}_2$-graded coalgebra. Concretely we can set $M_{\bar{0}},M_{\bar{1}}$ equal to the generalised eigenspaces of $a_M$.
For any indecomposable $M\in$ com-$\mathcal{O}$, we choose one of the two options as $\widetilde{M}\in$ scom-$\mathcal{O}$ with $\mathrm{Frg}\widetilde{M}=M$. We define ${\mathbf{C}}_1$ as the full subcategory of direct sums of comodules $\widetilde{M}$ and ${\mathbf{C}}_2$ as the full subcategory of direct sums of comodules $\Pi\widetilde{M}$. It follows from the second equation in \eqref{commaM} that we have
$${\rm Hom}_{\mathcal{O}}(\widetilde{M},\Pi\widetilde{N})=0\quad\mbox{or}\quad\underline{\mathrm{Hom}}_{\mathcal{O}}(\widetilde{M},\widetilde{N})={\rm Hom}_{\mathcal{O}}(\widetilde{M},\widetilde{N}).$$
From this observation, all claims in the lemma now follow.
\end{proof}
\subsection{The classical algebraic super groups}
\subsubsection{The general linear super group}\label{DefGL} Fix $V\in \mathbf{svec}$, set $(m,n):=\mathrm{sdim} V$ and for each $R\in\mathbf{cs\hspace{-0.3mm}Al}$ we consider the right $R$-module $V_R:=V\otimes_{\Bbbk}R$.
The functor
$${\mathsf{GL}}(V):\mathbf{cs\hspace{-0.3mm}Al}\to \mathbf{G\hspace{-0.3mm}r\hspace{-0.3mm}p},\quad R\mapsto \mathrm{Aut}_R(V_R)$$
is represented by $\mathcal{O}[{\mathsf{GL}}(V)]$. Consider variables $X_{ij}$ and $Z_{ij}$ for $1\le i,j\le m+n$ of parity $|X_{ij}|=[i]+[j]=|Z_{ij}|$. After choosing a basis of $V$ we can describe $\mathcal{O}[{\mathsf{GL}}(V)]$ as the quotient of a polynomial super algebra as
$$\mathcal{O}[{\mathsf{GL}}(V)]\;:=\; \Bbbk[X_{ij}, Z_{kl}]/I,\quad\mbox{with}\quad I:=\langle \sum_jZ_{ij}X_{jk}=\delta_{ik} \rangle.$$
Furthermore, we define
$$\Delta(X_{ij})\;=\;\sum_{k}X_{ik}\otimes X_{kj},\quad\varepsilon(X_{ij})=\delta_{ij},\quad\Delta(Z_{jk})=\sum_{l}(-1)^{([j]+[l])([l]+[k])}Z_{lk}\otimes Z_{jl}$$
and $S(X_{ij})=Z_{ij}$.
We have $g\in\mathbf{cs\hspace{-0.3mm}Al}(\mathcal{O},\Bbbk)$ defined by $g(X_{ij})=(-1)^{[i]}\delta_{ij}$ which satisfies the condition in Lemma~\ref{GenZ2}.
Now $V$ is the natural ${\mathsf{GL}}(V)$-representation, with dual $V^\ast$, determined by the coactions
$$e_i\mapsto \sum_{j}e_j\otimes X_{ji}\quad\mbox{and}\quad e_i^\ast\mapsto \sum_{j}e_j^\ast\otimes Y_{ji},$$
with $Y_{ji}:=(-1)^{[j]([i]+[j])}Z_{ij}$. In particular, we have $[e_j^\ast|e_j]_{\mathcal{O}}=X_{ji}$.
We use the choice of simple positive roots of \cite[Section~4.4]{EHS}. In particular the highest weight of $V$ is $\epsilon_1$ and of $V^\ast$ it is $-\delta_1$.
\subsubsection{The orthosymplectic super group} \label{DefOSp}
Consider $V\in \mathbf{svec}$ with an even form $\langle\cdot,\cdot\rangle: V\times V\to \Bbbk$, which we also interpret in ${\rm End}(V^{\otimes 2}, \Bbbk)$. Then ${\mathsf{OSp}}(V)$ is the subgroup of ${\mathsf{GL}}(V)$ which preserves this form.
Concretely, if we choose a basis $\{e_i\}$ of $V$ and set $g_{ij}=\langle e_i,e_j\rangle$ then $\mathcal{O}[{\mathsf{OSp}}(V)]$ is the quotient of $\mathcal{O}[{\mathsf{GL}}(V)]$ with respect to the ideal generated by the elements
\begin{equation}\label{eqform}\sum_{k,l}(-1)^{[l]([k]+[i])}X_{ki}g_{kl}X_{lj}\;-\; g_{ij}.\end{equation}
We have $g\in\mathbf{cs\hspace{-0.3mm}Al}(\mathcal{O}[{\mathsf{OSp}}(V)],\Bbbk)$ defined by $g(X_{ij})=(-1)^{[i]}\delta_{ij}$, which satisfies the condition in Lemma~\ref{GenZ2}.
We refer to \cite{SW} for an introduction to the algebraic representation theory of ${\mathsf{G}}={\mathsf{OSp}}(V)$. In particular we follow the conventions {\it loc. cit.} and denote the simple highest weight module, see \cite[Lemma~4.1]{SW}, with highest weight $\xi\in X^+$ by $L_{{\mathsf{G}}}(\xi)$. For the special modules we will encounter we do not need the full description of $X^+$. We will only need weights of the form $\xi=\sum_{i=1}^n\lambda_i\delta_i$, for partitions $\lambda$, which are to be interpreted as dominant weights for ${\mathsf{Sp}}(V_{\bar{1}})$.
\subsubsection{The periplectic super group}
Consider $V\in \mathbf{svec}$ with an odd form $\langle\cdot,\cdot\rangle: V\times V\to \Bbbk$, which we also interpret in $\underline{\mathrm{End}}(V^{\otimes 2}, \Bbbk)_{\bar{1}}$. This implies that $\dim V_{\bar{0}}=\dim V_{\bar{1}}$, so in particular $\mathrm{sdim} V=0$. Then ${\mathsf{Pe}}(V)$ is the subgroup of ${\mathsf{GL}}(V)$ which preserves this form.
Concretely, if we choose a basis $\{e_i\}$ of $V$ and set $g_{ij}=\langle e_i,e_j\rangle$ then $\mathcal{O}[{\mathsf{Pe}}(V)]$ is the quotient of $\mathcal{O}[{\mathsf{GL}}(V)]$ with respect to the ideal generated by the elements \eqref{eqform}.
Take $\imath\in\Bbbk$ with $\imath^2=-1$, then $g\in\mathbf{cs\hspace{-0.3mm}Al}(\mathcal{O}[{\mathsf{Pe}}(V)],\Bbbk)$ defined by $g(X_{ij})=(-1)^{[i]}\iota\delta_{ij}$ satisfies the condition in Lemma~\ref{GenZ2}.
\subsubsection{The queer super group}
Consider $V\in \mathbf{svec}$ with $q\in \underline{\mathrm{End}}(V)_{\bar{1}}$ for which $q^2={\rm id}$. This implies that $\dim V_{\bar{0}}=\dim V_{\bar{1}}$. Then $\mathsf{Q}(V)$ is the subgroup of ${\mathsf{GL}}(V)$ which commutes with $q$.
Concretely, if we set $q(e_i)=\sum_j q_{ij}e_j$, the Hopf algebra $\mathcal{O}[\mathsf{Q}(V)]$ is the quotient of $\mathcal{O}[{\mathsf{GL}}(V)]$ with respect to the ideal generated by
$$X_{ij}\;-\;\sum_{k} X_{ik} q_{jk}.$$
\section{Diagram categories}\label{DiagCat}We briefly review some diagram categories. Since we will only use them rather superficially, we do not present full details here.
\subsection{The Brauer category}
\subsubsection{}\label{BrCat1} For $\delta\in\Bbbk$, the $\Bbbk$-linear Brauer category $\mathcal{B}(\delta)$ is introduced in \cite[\S 2.1]{BrCat}, see also~\cite[\S 9]{Deligne}. Concretely, the objects in $\mathcal{B}(\delta)$ are given by
$${\mathrm{Ob}}\mathcal{B}(\delta)\;=\;\{[i]\,|\, i\in \mathbb{N}\}$$
and the space of morphisms from $[i]$ to $[k]$ is given by the $\Bbbk$-span of all pairings of $i+k$ dots. Such a pairing is graphically represented by an $(i,k)$-Brauer diagram, which is a diagram where $i+k$ points are placed on two parallel horizontal lines, $i$ on the lower line and $k$ on the upper, with
arcs drawn to join points which are paired. Arcs connecting two points on the lower, resp. upper, line are caps, resp. cups. Composition of morphisms corresponds to concatenation of diagrams with loops evaluated at $\delta$. The Brauer category is monoidal with $[i]\otimes [j]=[i+j]$. In \cite[\S 2.2]{BrCat}, a contravariant auto-equivalence ${}^\ast$ of $\mathcal{B}(\delta)$ is introduced, which is the identity on objects and maps a diagram to its reflection in a horizontal line.
\subsubsection{}\label{DefBHR} We define dense subcategories
$$\mathcal{B}(\delta) \;\supset\;\mathcal{R}\;\supset\; \mathcal{H}\,\simeq\,\bigoplus_{i\in\mathbb{N}}\Bbbk\mathrm{S}_i,$$
where the morphism spaces in $\mathcal{H}$ are spanned by all diagrams without cups or caps and in $\mathcal{R}$ they are spanned by all diagrams without cups.
As the notation suggests, $\mathcal{R}$ and $\mathcal{H}$ do not depend on the parameter $\delta$.
We define some unital associative algebras for $r\in\mathbb{N}$. We have the set of objects $[\mathscr{J}(r)]=\{[i]\,|\, i\in\mathscr{J}(r)\}$. The algebra $\mathcal{B}_r(\delta)$, resp. $\mathcal{B}_r^{\mathbf{c}}(\delta)$, is the full subcategory of $\mathcal{B}(\delta)$ with objects $[r]$, resp. $[\mathscr{J}(r)]$. The algebra $\mathcal{R}_r$, resp. $\mathcal{H}_r$, is the full subcategory of $\mathcal{R}$, resp. $\mathcal{H}$, with objects $[\mathscr{J}(r)]$.
We can interpret modules over $\mathcal{H}_r$ as $\mathcal{R}_r$-modules where every diagram with a cap acts trivially.
\begin{prop}\label{PropCZ}
Fix $r\in\mathbb{N}$ and $\delta\in\Bbbk$.
\begin{enumerate}[(i)]
\item If $\delta\not\in\mathbb{I}$, then $\mathcal{B}_r(\delta)$ is semisimple. If $\delta=0$ and $r$ is even, or if $p\in[\![ 2,r]\!]$, then $\mathcal{B}_r(\delta)$ is not quasi-hereditary.
\item If $\delta\not=0$ or $r$ is odd, the algebras $\mathcal{B}^{\mathbf{c}}_r(\delta)$ and $\mathcal{B}_r(\delta)$ are Morita equivalent.
\item The right $\mathcal{R}_r$-module $\mathcal{B}_r^{\mathbf{c}}(\delta)$ is projective.
\item If $p\not\in[\![ 2,r]\!]$, the simple $\mathcal{B}_r^{\mathbf{c}}(\delta)$-modules can be labelled by $\Lambda_r$. Furthermore, $(\mathcal{B}_r^{\mathbf{c}}(\delta),\le)$ is quasi-hereditary for $\lambda<\mu$ if and only if $|\mu|<|\lambda|$ and with standard modules $\Delta(\lambda):=\mathcal{B}^{\mathbf{c}}_r(\delta)\otimes_{\mathcal{R}_r}S(\lambda)$.
\item The restriction of $\ast$ in \ref{BrCat1} to an anti-automorphism of $\mathcal{B}_r^{\mathbf{c}}(\delta)$ is a good duality (as in \ref{DefGood}) of the quasi-hereditary algebra $(\mathcal{B}_r^{\mathbf{c}},\le)$, if $p\not\in\lbr2,r]\!]$.
\end{enumerate}
\end{prop}
\begin{proof}The first statement in part (i) follows from the main result of \cite{Rui}. The second statement in part (ii) is \cite[Theorem~1.3]{CellQua}.
Part (ii) is \cite[Theorem~8.5.1]{Borelic}. Part (iii) is proved in \cite[Proposition~8.4.4]{Borelic}, by \cite[Definition~3.2.3]{Borelic}. Part (iv) is \cite[Theorem~8.4.1]{Borelic}.
Since the equivalence $\ast$ is the identity on ${\mathrm{Ob}}\mathcal{B}(\delta)$, the duality of $\mathcal{B}_r^{\mathbf{c}}(\delta)$ preserves the partial order of part (iv), proving part (v).
\end{proof}
\subsubsection{}
For $V\in\mathbf{svec}$ with even form and $\delta:=\mathrm{sdim}(V)$, we have a $\Bbbk$-linear symmetric monoidal functor
\begin{equation}\label{UnivO}
\mathcal{B}(\delta)\;\to\;{\rm Rep}{\mathsf{OSp}}(V),\qquad\mbox{with}\quad [1]\mapsto V\quad\mbox{and}\quad \cap\mapsto \left(\langle\cdot,\cdot\rangle : V^{\otimes 2}\to \Bbbk \right).
\end{equation}
This is well-known and follows from a straightforward extension of \cite[Theorem~3.4]{BrCat}.
\subsection{Oriented Brauer category}
\subsubsection{} For $\delta\in\Bbbk$, we have the oriented Brauer category $\mathcal{O}\hspace{-0.5mm}\mathcal{B}(\delta)$, with objects given by finite words in the alphabet $\{\vee,\wedge\}$. For a complete definition we refer to e.g.~\cite[\S 4]{ES}. The walled Brauer algebra $\mathcal{B}_{r,s}(\delta)$ is the full subcategory of $\mathcal{O}\hspace{-0.3mm}\mathcal{B}(\delta)$ corresponding to the object $\vee^{\otimes r}\otimes \wedge^{\otimes s}$ and $\mathcal{B}_{r,s}^{\mathbf{c}}(\delta)$ is the full subcategory of $\mathcal{O}\hspace{-0.3mm}\mathcal{B}(\delta)$ corresponding to the objects $\{\vee^{\otimes r-i}\otimes \wedge^{\otimes s-i}\}$ for $i\in\lbr0,\min(r,s)]\!]$.
The analogues of (ii)-(v) in Proposition~\ref{PropCZ} are proved in \cite[\S 8]{Borelic}.
\subsubsection{} Fix $V\in\mathbf{svec}$ and set $W:=V^\ast$ with pairing
$\mathrm{ev}_V:W\otimes V\to\Bbbk$ given by $\alpha\otimes v\mapsto \alpha(v).$
If $\delta=\mathrm{sdim}(V)$, we have a $\Bbbk$-linear symmetric monoidal functor
\begin{equation}\label{UnivG}
\mathcal{O}\hspace{-0.5mm}\mathcal{B}(\delta)\;\to\;{\rm Rep}{\mathsf{GL}}(V),\qquad\mbox{with}\quad \vee\mapsto V\;\;\mbox{and}\;\; \wedge\mapsto W,
\end{equation}
where the unique oriented Brauer diagram which represents a morphism from $\wedge\vee$ to the empty word is mapped to $\mathrm{ev}_V$.
\subsection{Periplectic Brauer category}
\subsubsection{}In \cite{Kujawa}, the periplectic Brauer supercategory $\mathcal{A}$ is introduced. This category has the same set of objects and spaces of morphisms as $\mathcal{B}(\delta)$. The composition of morphisms in $\mathcal{A}$ is again given by concatenation of diagrams, up to possible minus signs, with evaluation of loops at $0$. This is a monoidal super category, see also~\cite{BE, PB1}. The periplectic Brauer algebra $\mathcal{A}_r$ is the full subcategory of $\mathcal{A}$ corresponding to the object $[r]$. This is actually a reduced super algebra, {i.e.} $(\mathcal{A}_r)_{\bar{1}}=0$. We also consider the full subcategory $\mathcal{A}^{\mathbf{c}}_r$ of $\mathcal{A}$ corresponding to the objects $[\mathscr{J}(r)]$.
\subsubsection{} Fix $V\in\mathbf{svec}$ with odd form $\langle\cdot,\cdot\rangle$. We have a $\Bbbk$-linear symmetric monoidal super functor
\begin{equation}\label{UnivP}
\mathcal{A}\;\to\;\underline{\mathrm{Rep}}{\mathsf{Pe}}(V),\qquad\mbox{with}\quad [1]\mapsto V\quad\mbox{and}\quad \cap\mapsto \left(\langle\cdot,\cdot\rangle : V^{\otimes 2}\to \Bbbk \right),
\end{equation}
see \cite[Theorem~5.2.1]{Kujawa}.
\subsection{Oriented Brauer-Clifford category} \label{SecOBC}
\subsubsection{}In \cite{ComesKuj}, the oriented Brauer-Clifford supercategory $\mathcal{O}\hspace{-0.5mm}\mathcal{B}\mathcal{C}$ is introduced. It contains $\mathcal{O}\hspace{-0.5mm}\mathcal{B}(0)$ as a dense subcategory. But also has an odd isomorphism $\widetilde{q}$ of $\vee$. The Brauer-Clifford algebra $\mathcal{BC}_{r,s}$ is the full subcategory of $\mathcal{O}\hspace{-0.5mm}\mathcal{B}\mathcal{C}$ corresponding to the object $\vee^{\otimes r}\otimes \wedge^{\otimes s}$.
\subsubsection{} Fix $V\in\mathbf{svec}$ with an odd endomorphism $q$ with $q^2={\rm id}$. By \cite[\S 4.2]{ComesKuj}, we have a monoidal super functor from $\mathcal{O}\hspace{-0.5mm}\mathcal{B}\mathcal{C}$ to $\underline{\mathrm{Rep}}\mathsf{Q}(V)$ which yields a commuting diagram
$$\xymatrix{
\mathcal{O}\hspace{-0.5mm}\mathcal{B}\mathcal{C}\ar[rr]&& \underline{\mathrm{Rep}} \mathsf{Q}(V)\\
\mathcal{O}\hspace{-0.5mm}\mathcal{B}(0)\ar[rr]\ar[u]&& {\rm Rep}{\mathsf{GL}}(V)\ar[u],
}$$
where the lower horizontal arrow is the functor in \eqref{UnivG}, the left vertical arrow is the inclusion and the right vertical arrow is a forgetful functor. Furthermore, $\widetilde{q}$ is mapped to $q$.
\section{Centraliser coalgebras and monoidal functors}\label{SecCOAL}
\subsection{Definitions and basic properties}
\subsubsection{}\label{coend}Fix a small super category ${\mathbf{A}}$ with super functor $F:{\mathbf{A}}\to \underline{\mathbf{svec}}$.
We define a super vector space
$$\underline{\mathrm{C\hspace{-0.2mm}End}}_{{\mathbf{A}}}(F)\;:=\;\left(\bigoplus_{X\in{\mathrm{Ob}}{\mathbf{A}}}F(X)^\ast\otimes F(X)\right)/I$$
with $I$ the space
$$I:=\{\alpha\circ F(f)\otimes v-\alpha\otimes F(f)(v)\,|\, \alpha\in F(Y)^\ast, v\in F(X), f\in {\mathbf{A}}(X,Y)\;\mbox{ and } X,Y\in{\mathrm{Ob}}{\mathbf{A}}\}.$$
\begin{ddef}
The centraliser coalgebra of the functor $F$ is the superspace $\underline{\mathrm{C\hspace{-0.2mm}End}}_{{\mathbf{A}}}(F)$
with structure morphisms
$$\varepsilon:\underline{\mathrm{C\hspace{-0.2mm}End}}_{{\mathbf{A}}}(F)\to\Bbbk,\;\; \alpha\otimes v\mapsto \alpha(v)\qquad\mbox{and}$$
$$ \Delta: \underline{\mathrm{C\hspace{-0.2mm}End}}_{{\mathbf{A}}}(F)\to \underline{\mathrm{C\hspace{-0.2mm}End}}_{{\mathbf{A}}}(F)\otimes_{\Bbbk} \underline{\mathrm{C\hspace{-0.2mm}End}}_{{\mathbf{A}}}(F),\;\; \alpha\otimes v\mapsto\sum_i (\alpha\otimes e_{i})\otimes (e_{i}^\ast\otimes v),$$
for $\alpha\in F(X)^\ast$ and $v\in F(X)$ where $\{e_{i}\}$ denotes a basis of $F(X)$, with $X\in {\mathrm{Ob}}{\mathbf{A}}$.
\end{ddef}
\begin{rem}
\begin{enumerate}[(i)]
\item The coalgebra $\underline{\mathrm{C\hspace{-0.2mm}End}}_{{\mathbf{A}}}(F)$ would be the same (after forgetting the grading) if we interpret $F$ only as a $\Bbbk$-linear functor.
\item If we interpret $F$ as the module $\oplus_{X\in{\mathrm{Ob}}{\mathbf{A}}}F(X)$ for the super algebra ${\mathbf{A}}$, then $\underline{\mathrm{C\hspace{-0.2mm}End}}_{{\mathbf{A}}}(F)$ is just the coalgebra $F^\ast\otimes_{{\mathbf{A}}}F$.
\item If the ${\mathbf{A}}$-module $F$ is finite dimensional, then $\underline{\mathrm{C\hspace{-0.2mm}End}}_{{\mathbf{A}}}(F)$ is $\underline{\mathrm{End}}_{{\mathbf{A}}}(F)^\ast$.
\end{enumerate}
\end{rem}
The following lemma, see also~\cite[Lemma~2.7]{BSRingel}, states that the dual algebra, as in \ref{dualA}, of the centraliser coalgebra is the ordinary centraliser algebra, justifying the name of the former.
\begin{lemma}\label{LemAC}
We have a super algebra isomorphism $(\underline{\mathrm{C\hspace{-0.2mm}End}}_{{\mathbf{A}}}(F))^\ast\simeq \underline{\mathrm{End}}_{{\mathbf{A}}}(F)$.
\end{lemma}
\begin{proof}
By tensor-hom adjunction, we have a canonical isomorphism
$${\rm Hom}_{{\mathbf{A}}}(F,F^{\ast\ast})\;\stackrel{\sim}{\to}\; \underline{\mathrm{Hom}}_{\Bbbk}(F^{\ast}\otimes_{{\mathbf{A}}} F,\Bbbk).$$
It is clear that every ${\mathbf{A}}$-linear morphism from $F$ to $F^{\ast\ast}$ factors through $F\subset F^{\ast\ast}$.
It then follows by direct computation that this exchanges the algebra structures.
\end{proof}
\subsection{Monoidal structures}
\subsubsection{} Fix a strict monoidal super category $({\mathbf{A}},\otimes,{\mathbbm{1}}_{{\mathbf{A}}})$ and a strict monoidal super functor $F:{\mathbf{A}}\to\underline{\mathbf{svec}}$.
The identity $\Bbbk=F({\mathbbm{1}}_{{\mathbf{A}}})$ allows to define the morphism $\eta$ in $\mathbf{s\hspace{-0.4mm}Vec}$ as composition
$$\eta: \Bbbk\stackrel{\sim}{\to}(F({\mathbbm{1}}_{{\mathbf{A}}}))^\ast\otimes F({\mathbbm{1}}_{{\mathbf{A}}})\hookrightarrow C^0_F.$$
For all $X,Y\in{\mathrm{Ob}}{\mathbf{A}}$, the identity $F(X)\otimes_{\Bbbk}F(Y)=F(X\otimes Y)$ allows to define
$$m_{X,Y}:\; \underline{\mathrm{C\hspace{-0.2mm}End}}_{\Bbbk}(F(X))\otimes_{\Bbbk} \underline{\mathrm{C\hspace{-0.2mm}End}}_{\Bbbk}(F(Y))\;\stackrel{\sim}{\to }\; \underline{\mathrm{C\hspace{-0.2mm}End}}_{\Bbbk}(F(X\otimes Y))$$
$$ (\alpha\otimes v)\otimes (\beta\otimes w)\mapsto (-1)^{|v||\beta|} (\alpha\otimes \beta)\otimes (v\otimes w).$$
The latter morphisms together yield a morphism $m:C^0_F\otimes_{\Bbbk}C^0_F\to C^0_F$.
It follows from the definitions of monoidal super functors that $(C^0_F,m,\eta,\Delta,\varepsilon)$ is a bialgebra in $\mathbf{s\hspace{-0.4mm}Vec}$.
\begin{lemma}\label{LemBIA}
Consider an affine super group scheme ${\mathsf{G}}$ and a monoidal super category ${\mathbf{A}}$ with monoidal super functor ${\mathbf{A}}\to\underline{\mathrm{Rep}} {\mathsf{G}}$. Denote by $F$ the composition of this functor with the forgetful functor $\underline{\mathrm{Rep}} {\mathsf{G}}\to\underline{\mathbf{svec}}$. Then we have a super bialgebra morphism
$$\phi:\;\underline{\mathrm{C\hspace{-0.2mm}End}}_{{\mathbf{A}}}(F)\;\to\; \mathcal{O}[{\mathsf{G}}],\qquad \alpha\otimes v\mapsto [\alpha|v]_{\mathcal{O}[{\mathsf{G}}]}.$$
\end{lemma}
\begin{proof}
By equation~\eqref{eqMC} we have a super coalgebra morphism $\oplus_{X}(F(X)^\ast\otimes F(X))\to \mathcal{O}$.
That the morphism is zero on the space $I$ of Subsection~\ref{coend} follows from the fact that $F(f)$ is an $\mathcal{O}$-comodule morphism. That $\phi$ is an algebra morphism is a direct consequence of equation~\eqref{eqTP}.
\end{proof}
\begin{lemma}\label{LemNew}
Keep the assumptions of Lemma~\ref{LemBIA} and assume that $\phi$ is an isomorphism. For a finite set $E\subset {\mathrm{Ob}}{\mathbf{A}}$ the super space $M:=\oplus_{X\in E}F(X)$ is naturally an object in $\underline{\mathrm{Rep}}{\mathsf{G}}$ and an $A$-module for the super algebra $A:=\oplus_{X,Y\in E}{\mathbf{A}}(X,Y)$. The coalgebra morphism $M^\ast\otimes M\to\mathcal{O}$ of \eqref{eqMC} factors through an isomorphism
$$M^\ast\otimes_AM\;\stackrel{\sim}{\to}\; \mathcal{O}^M.$$
\end{lemma}
\begin{proof}
By construction, $M^\ast\otimes_AM$ is the image of the canonical morphism $M^\ast\otimes M\to \underline{\mathrm{C\hspace{-0.2mm}End}}_{{\mathbf{A}}}(F)$. Also from construction follows that the composition
$$M^\ast\otimes M\to \underline{\mathrm{C\hspace{-0.2mm}End}}_{{\mathbf{A}}}(F)\stackrel{\phi}{\to}\mathcal{O}[{\mathsf{G}}] $$
is equal to morphism~\eqref{eqMC} which by definition has image $\mathcal{O}^M$.
\end{proof}
\subsection{Applications}
\begin{thm}\label{ThmNew}
For the four super monoidal functors ${\mathbf{A}}\to \underline{\mathrm{Rep}}{\mathsf{G}}$ of Section~\ref{DiagCat}, the bialgebra morphism $\phi$ of Lemma~\ref{LemBIA} is an isomorphism.
\end{thm}
\begin{proof}
Since the algebras $\mathcal{O}[{\mathsf{G}}]$ are finitely presented the claim can be easily verified. As an example we treat the case $\mathcal{B}(\delta)\to\underline{\mathrm{Rep}}{\mathsf{OSp}}(V)$ of \eqref{UnivO}.
Since every object in $\mathcal{B}(\delta)$ is a tensor power of $[1]$ it follows that the algebra $\underline{\mathrm{C\hspace{-0.2mm}End}}_{\mathcal{B}(\delta)}(F)$ is generated by $V^\ast\otimes V=F([1])^\ast\otimes F([1])$. By definition, $\phi$ maps the latter space to the $\Bbbk$-span of the generators $\{X_{ij}\}$ of $\mathcal{O}[{\mathsf{OSp}}(V)]$. This already implies that $\phi$ is surjective, and to complete the proof it suffices to show that the relations in $\mathcal{O}[{\mathsf{OSp}}(V)]$ between the generators $X_{ij}$ are elements in the space $I$ of \ref{coend} defining $\underline{\mathrm{C\hspace{-0.2mm}End}}_{\mathcal{B}(\delta)}(F)$. The commutation relations $X_{ij}X_{kl}=(-1)^{([i]+[j])([k]+[l])}X_{kl}X_{ij}$ correspond to the elements in $I$ induced by braiding endomorphism of $[1]\otimes [1]$ in $\mathcal{B}(\delta)$. The relations \eqref{eqform} correspond to the elements in $I$ induced by $\cap$ in $\mathcal{B}(\delta)$
\end{proof}
\begin{rem}
It is easy to see that $\phi$ is always surjective when the image of ${\mathbf{A}}\to\underline{\mathrm{Rep}}{\mathsf{G}}$ contains a tensor generator of ${\rm Rep} {\mathsf{G}}$ (and its dual). It also follows in general that $\phi$ is injective when ${\mathbf{A}}\to\underline{\mathrm{Rep}}{\mathsf{G}}$ is full. The latter is not a necessary condition however. For instance, the super functor $\mathcal{R}\to\underline{\mathrm{Rep}}{\mathsf{OSp}}(V)$, with $\mathcal{R}$ the dense subcategory of $\mathcal{B}(\delta)$ of \ref{DefBHR}, is not full but leads to an isomorphism $\phi$.
\end{rem}
\section{Super Schur algebras}\label{SecSchur}
\subsection{The orthosymplectic case}\label{SchurO}
We a fix $V\in\mathbf{svec}$ with an even form. We set $(m|2n):=\dim V$ and $\delta:=m-2n=\mathrm{sdim}(V)\in\mathbb{I}$.
\subsubsection{} Now we fix $r\in\mathbb{N}$ and we set
$$T^r\;:=\;\bigoplus_{j\in\mathscr{J}(r)}V^{\otimes j}.$$ Composition of the functor in \eqref{UnivO} with the forgetful functor to $\mathbf{svec}$ yields algebra morphisms
$$\mathcal{B}^{\mathbf{c}}_r(\delta)\to {\rm End}_{\Bbbk}(T^r)\quad\mbox{and}\quad \mathcal{B}_r(\delta)\to{\rm End}_{\Bbbk}(V^{\otimes r}).$$ We define the super algebra
$$\mathcal{S}^o_r(V)\;:=\;\underline{\mathrm{End}}_{\mathcal{B}_r}(V^{\otimes r}).$$
\begin{thm}\label{ThmOSp}
Set ${\mathsf{G}}={\mathsf{OSp}}(V)$.
\begin{enumerate}[(i)]
\item We have $\mathcal{S}^o_r(V)\simeq \underline{\mathrm{End}}_{\mathcal{B}^{\mathbf{c}}_r}(T^r)$.
\item If $p\not=2$, the category $\mathcal{S}^o_r(V)\mbox{{\rm -mod}}$ is equivalent to the abelian subcategory ${\rm Rep}^{(r)}{\mathsf{G}}$ of modules in ${\rm Rep}{\mathsf{G}}$ which are subquotients of direct sums of $V^{\otimes r}$.
\item If $n\ge r$ and $p\not\in\lbr2,r]\!]$, the simple module $L_{{\mathsf{G}}}(\xi)$ for $\xi\in X^+$ is contained in ${\rm Rep}^{(r)}{\mathsf{G}}$ if and only if $\xi=\sum_{i=1}^r \lambda_i\delta_i$ for some $\lambda\in \Lambda_r$.
\end{enumerate}
\end{thm}
\begin{lemma}\label{LemOOSp}
For $\mathcal{O}:=\mathcal{O}[{\mathsf{OSp}}(V)]$, we have algebra isomorphisms $\mathcal{S}^o_r(V)\simeq (\mathcal{O}^{V^{\otimes r}})^\ast$ and $\underline{\mathrm{End}}_{\mathcal{B}^{\mathbf{c}}_r}(T^r)\simeq(\mathcal{O}^{T^r})^\ast$.
\end{lemma}
\begin{proof}
These are applications of Lemma~\ref{LemNew}, by Theorem~\ref{ThmNew}.
\end{proof}
\begin{lemma}\label{numbsimp}
If $n\ge r$, for each $\xi=\sum_{i=1}^r \lambda_i\delta_i$ with $\lambda\in \Lambda_r$, the simple module $L_{{\mathsf{G}}}(\xi)$ is contained in ${\rm Rep}^{(r)}{\mathsf{G}}$.
\end{lemma}
\begin{proof}
For $\mu\in\Lambda_r$, the ${\mathsf{G}}$-module $\mathrm{Sym}^\mu V$ has highest weight $\sum_{i=1}^r \lambda_i\delta_i$, for $\lambda:=\mu^t$. Since
$\mathrm{Sym}^\mu V$ is a direct summand of $V^{\otimes |\mu|}$, and hence a submodule of $V^{\otimes r}$, we find that $L_{{\mathsf{G}}}(\xi)$ belongs to ${\rm Rep}^{(r)}{\mathsf{G}}$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{ThmOSp}] We freely use the observations in~\ref{defCM}.
Since $V^{\otimes r}$ is a submodule of $T^r$, we have $\mathcal{O}^{V^{\otimes r}}\subset\mathcal{O}^{T^r}$. Since $T^r$ is in turn a submodule of a direct sum of copies of $V^{\otimes r}$, the latter inclusion is actually an equality.
Part (i) thus follows from Lemma~\ref{LemOOSp}.
Now we claim that the abelian subcategory scom-$\mathcal{O}^{V^{\otimes r}}$ of ${\rm Rep}{\mathsf{G}}$ is the category of all subquotients of direct sums of copies of $V^{\otimes r}$ and $\Pi V^{\otimes r}$. By definition, all such subquotients belong to the subcategory. For arbitrary $\{i_1,\cdots, i_r\}\subset \lbr1,m+2n]\!]^{\times r}$ it follows by direct computation that the morphism
$$\Pi^{\sum _a[i_a]}V^{\otimes r}\to \mathcal{O}^{V^{\otimes r}},\quad e_{j_1}\otimes\cdots \otimes e_{j_r}\mapsto (-1)^{\sum_{a>b} [i_a][j_b]} X_{i_1 j_1}\cdots X_{i_r j_r}$$
is a comodule morphism in $\mathbf{svec}$. Hence $\mathcal{O}^{V^{\otimes r}}$, as an object in ${\rm Rep}{\mathsf{G}}$, is a quotient of a direct sum of copies of $V^{\otimes r}$ and $\Pi V^{\otimes r}$. Since every object in com-$\mathcal{O}^{V^{\otimes r}}$ is a subobject of a direct sum of copies of $\mathcal{O}^{V^{\otimes r}}$ and $\Pi \mathcal{O}^{V^{\otimes r}}$, our claim follows.
Now we take $g\in {\mathsf{G}}(\Bbbk)$ as in \ref{DefOSp}. We can correspondingly choose ${\mathbf{C}}_1$ as in Lemma~\ref{GenZ2} as the full subcategory of modules for which $\mathbb{Z}_2=\{\varepsilon,g\}\subset {\mathsf{G}}$ acts in the canonical way on the underlying $\mathbb{Z}_2$-graded space, {\it i.e.} $a_M(v)=(-1)^{|v|}v$ for $v\in M$. Then the full subcategory of ${\rm Rep}{\mathsf{G}}$ with objects belonging both to scom-$\mathcal{O}^{V^{\otimes r}}$ and ${\mathbf{C}}_1$ is the one of all subquotients of direct sums of copies of $V^{\otimes r}$, by the claim in the above paragraph. By Lemma~\ref{GenZ2}, this category is equivalent to $\mbox{com-}\mathcal{O}^{V^{\otimes r}}$.
By Lemma~\ref{LemOOSp} and \cite[\S 3.1]{Abe}, we have an equivalence
$$\mathcal{S}^{o}_r(V)\mbox{-mod}\;\stackrel{\sim}{\to}\; \mbox{com-}\mathcal{O}^{V^{\otimes r}},$$
which concludes the proof of part (ii).
By Lemma~\ref{LemTilt1} and Section~\ref{SecTilt} below, the algebra $\mathcal{S}^o_r(V)$ has at most $|\Lambda_r|$ simple modules up to isomorphism, so Lemma~\ref{numbsimp} describes all simple modules. This proves part (iii).
\end{proof}
\subsection{The general linear case}
\label{SecGLSchur}
We fix $m,n\in\mathbb{N}$, take $V\in\mathbf{svec}$ of $\dim V=(m|n)$ and set $W=V^\ast$ and $\delta=\mathrm{sdim}(V)$.
\subsubsection{} For $r,s\in\mathbb{N}$, we set
$$T^{r,s}\;:=\;\bigoplus_{j=0}^{\min(r,s)}V^{\otimes (r-j)}\otimes W^{\otimes (s-j)}.$$
By \eqref{UnivG}, we have algebra morphisms
$$\mathcal{B}^{\mathbf{c}}_{r,s}(\delta)\to {\rm End}_{\Bbbk}(T^{r,s})\quad\mbox{and}\quad \mathcal{B}_{r,s}(\delta)\to{\rm End}_{\Bbbk}(V^{\otimes r}\otimes W^{\otimes s}).$$ We define the super algebra
$$\mathcal{S}^g_{r,s}(V)\;:=\;\underline{\mathrm{End}}_{\mathcal{B}_{r,s}}(V^{\otimes r}\otimes W^{\otimes s}).$$
\begin{thm}\label{ThmGL}
Set ${\mathsf{G}}={\mathsf{GL}}(V)$.
\begin{enumerate}[(i)]
\item We have $\mathcal{S}^g_{r,s}(V)\simeq \underline{\mathrm{End}}_{\mathcal{B}^{\mathbf{c}}_{r,s}}(T^{r,s})$.
\item If $p\not=2$, the category $\mathcal{S}^g_{r,s}(V)\mbox{{\rm -mod}}$ is equivalent to the abelian subcategory ${\rm Rep}^{(r,s)}{\mathsf{G}}$ of modules in ${\rm Rep}{\mathsf{G}}$ which are subquotients of direct sums of $V^{\otimes r}\otimes W^{\otimes s}$.
\item If $m\ge r$, $n\ge s$ and $p\not\in\lbr2,r]\!]$, the simple module $L_{{\mathsf{G}}}(\xi)$ for $\xi\in X^+$ is contained in ${\rm Rep}^{(r,s)}{\mathsf{G}}$ if and only if $\xi=\sum_{i=1}^r \lambda_i\epsilon_i-\sum_{j=1}^s\mu_{j}\delta_j$ for some $(\lambda,\mu)\in \Lambda_{r,s}$.
\end{enumerate}
\end{thm}
\begin{lemma}
With $\mathcal{O}:=\mathcal{O}[{\mathsf{GL}}(V)]$,
we have $\mathcal{S}^g_{r,s}(V)\simeq (\mathcal{O}^{V^{\otimes r} \otimes W^{\otimes s}})^\ast$ and $\underline{\mathrm{End}}_{\mathcal{B}^{\mathbf{c}}_{r,s}}(T^{r,s})\simeq (\mathcal{O}^{T^{r,s}})^\ast$.\end{lemma}
\begin{proof}
These are applications of Lemma~\ref{LemNew}, by Theorem~\ref{ThmNew}.\end{proof}
\begin{proof}[Proof of Theorem~\ref{ThmGL}]
Mutatis mutandis the proof of Theorem~\ref{ThmOSp}.
\end{proof}
\begin{rem}
Some of these and other connections between walled Brauer algebras and general linear super groups appear for $\Bbbk=\mathbb{C}$ in \cite{BS} and \cite{EHS}.
\end{rem}
\subsection{The periplectic case}\label{SecPSchur}
Fix $n,r\in\mathbb{N}$ and take $V\in \mathbf{svec}$ with odd form and $\dim V=(n|n)$.
\subsubsection{} By \eqref{UnivP}, we have an algebra morphism
$ \mathcal{A}_r\to{\rm End}_{\Bbbk}(V^{\otimes r}).$ We define the super algebra
$$\mathcal{S}_r^p(V)\;:=\;\underline{\mathrm{End}}_{\mathcal{A}_r}(V^{\otimes r}).$$
\begin{thm}\label{ThmP} Set ${\mathsf{G}}={\mathsf{Pe}}(V)$ and $\mathcal{O}=\mathcal{O}[{\mathsf{G}}]$.
\begin{enumerate}[(i)]
\item If $p\not=2$, the category $\mathcal{S}^p_r(V)\mbox{{\rm -mod}}$ is equivalent to the abelian subcategory ${\rm Rep}^{(r)}{\mathsf{G}}$ of modules in ${\rm Rep}{\mathsf{G}}$ which are subquotients of direct sums of $V^{\otimes r}$.
\item We have $\mathcal{S}_r^p(V)\simeq (\mathcal{O}^{V^{\otimes r}})^\ast\simeq\underline{\mathrm{End}}_{\mathcal{A}_r^{\mathbf{c}}}(\oplus_{j\in \mathscr{J}(r)}V^{\otimes j}).$
\end{enumerate}
\end{thm}
\begin{proof}
Mutatis mutandis the proof of Theorem~\ref{ThmOSp}.
\end{proof}
\begin{rem}
We do not study the category ${\rm Rep}^{(r)}{\mathsf{G}}$ of Theorem~\ref{ThmP}(i) in the current paper. If $\Bbbk=\mathbb{C}$, a thorough study of ${\rm Rep}^{(r)}{\mathsf{G}}$ has been made recently in \cite{EnS}, where it is shown in particular that the category is of highest weight type. This justifies Conjecture~\ref{Conj} below.
\end{rem}
\subsection{The queer case}
We fix $n,r\in\mathbb{N}$ and consider $V\in \mathbf{svec}$ with an odd isomorphism and $\dim V=(n|n)$ and set $W=V^\ast$.
\subsubsection{} By Section~\ref{SecOBC}, we have a super algebra morphism
$$ \mathcal{BC}_{r,s}\to\underline{\mathrm{End}}_{\Bbbk}(V^{\otimes r}\otimes W^{\otimes s}).$$ We define the super algebra
$$\mathcal{S}_{r,s}^q(V)\;:=\;\underline{\mathrm{End}}_{\mathcal{BC}_{r,s}}(V^{\otimes r}\otimes W^{\otimes s}).$$
\begin{thm}\label{ThmQ} Set ${\mathsf{G}}=\mathsf{Q}(V)$ and $\mathcal{O}=\mathcal{O}[{\mathsf{G}}]$.
\begin{enumerate}[(i)]
\item We have $\mathcal{S}^q_{r,s}(V)\simeq (\mathcal{O}^{V^{\otimes r}\otimes W^{\otimes s}})^\ast$.
\item The category of modules in $\mathbf{svec}$ of the super algebra $\mathcal{S}^q_{r,s}(V)$ is equivalent to the abelian subcategory ${\rm Rep}^{(r,s)}{\mathsf{G}}$ of representations in ${\rm Rep}{\mathsf{G}}$ which are subquotients of direct sums of $V^{\otimes r}\otimes W^{\otimes s}$ and $\Pi(V^{\otimes r}\otimes W^{\otimes s})$.
\end{enumerate}
\end{thm}
\begin{proof}
Part (i) is an application of Lemma~\ref{LemNew}, by Theorem~\ref{ThmNew}.. Part (ii) follows as in the proof of Theorem~\ref{ThmOSp}, except that we do not apply Lemma~\ref{GenZ2} as we keep working with scom-$\mathcal{O}^{M}$.
\end{proof}
\section{Ringel duality for Brauer algebras}\label{SecRingel}
\subsection{The Brauer algebra}
Fix $V\in\mathbf{svec}$ with an even form and set $(m|2n)=\dim V$ and $\delta=\mathrm{sdim} V$. As before, we set $T^r=\oplus_{j\in\mathscr{J}(r)}V^{\otimes j}$. Whenever $T^r$ is considered as a $\mathcal{B}^{\mathbf{c}}_r(\delta)$-module its super structure is important in the definition of the action, but we consider it as an ordinary module of the non-super algebra $\mathcal{B}^{\mathbf{c}}_r(\delta)$.
By Proposition~\ref{PropCZ}(i) and (ii), the following theorem gives a description of the Ringel dual of the Brauer algebra, for all cases in which it is quasi-hereditary (ignoring trivial cases in which it is semisimple).
\begin{thm}\label{ThmTilt}
If $p\not\in\lbr2,r]\!]$ and $\min(m,n)\ge r$, then $T^r$ is a complete tilting module of $\mathcal{B}^{\mathbf{c}}_r(\delta)$.
A Ringel dual of $\mathcal{B}^{\mathbf{c}}_r(\delta)$ is thus given by $\mathcal{S}^o_r(V)$.
\end{thm}
We will precede the proof with some lemmata and constructions.
\begin{lemma}\label{LemInjTilt}
If a $\mathcal{B}^{\mathbf{c}}_r(\delta)$-module is injective as an $\mathcal{R}_r$-module and self-dual for the duality in Proposition~\ref{PropCZ}(v), it is a tilting module.
\end{lemma}
\begin{proof}
By Lemma~\ref{lemtilt}, a self-dual module $M$ is tilting if
$${\rm Ext}^1_{\mathcal{B}_r^{\mathbf{c}}}(\Delta(\lambda),M)\;\simeq\; {\rm Ext}^1_{\mathcal{R}_r}(S(\lambda), M)$$
vanish for all $\lambda\in\Lambda_r$. The isomorphism follows from Proposition~\ref{PropCZ}(iv) and (iii) and Shapiro's lemma. This concludes the proof.
\end{proof}
\subsubsection{} The tensor algebra of $V$ has an $\mathbb{N}$-grading defined by placing $V_{\bar{0}}$ in degree $0$ and $V_{\bar{1}}$ in degree $1$. We have a corresponding grading of vector spaces,
$T^r=\oplus_{i=0}^{r} T^r[i].$
This allows to define a filtration of $T^r$ as an $\mathcal{R}_r$-module
$$0=F_{-1}T^r\subset F_0 T^r\subset\cdots \subset F_rT^r=T^r,\quad\mbox{with}\quad F_j T^r\;=\;\bigoplus_{i=0}^{j} T^r[i]. $$
Note that $T^r$ has a canonical structure of an $\mathsf{O}(m)\times {\mathsf{GL}}(2n)$-representation and each $T^r[i]$ is a direct summand of this representation. In particular $F_\bullet T^r$ is also a filtration of the $\mathsf{O}(m)\times {\mathsf{GL}}(2n)$ representation $T^r$. We set $Q_j:= F_j T^r/F_{j-1}T^r$ and consider it as an $\mathcal{R}_r$-module and $\mathsf{O}(m)\times {\mathsf{GL}}(2n)$-representation.
\begin{lemma}\label{LemOGL}
For $0\le j\le r$, the image of
$\mathcal{R}_r\to {\rm End}_{\Bbbk}(Q_j )$
is contained in $A:={\rm End}_{\mathsf{O}(m)\times {\mathsf{GL}}(2n)}(Q_j)$. If $\min(m,2n)\ge r$ this morphism $\mathcal{R}_r\to A$ makes $A$ projective as a right $\mathcal{R}_r$-module.
\end{lemma}
\begin{proof}
For simplicity, we use the canonical isomorphism of vector spaces $T^r[j]\stackrel{\sim}{\to}Q_j$.
Set ${\mathsf{G}}=\mathsf{O}(V_{\bar{0}})\times {\mathsf{GL}}(V_{\bar{1}})$. It suffices to show that a set of generators of $\mathcal{R}_r$ is mapped to elements in ${\rm End}_{{\mathsf{G}}}(T^r[j])$. We take a basis of $T^r[j]$ induced from a homogeneous basis of $V$.
A diagram in $\mathcal{R}_r$ consisting of one cap and otherwise only non-crossing propagating lines yields a morphism which is zero on basis elements unless the cap is evaluated on two elements of copies of $V_{\bar{0}}$. In the latter case, we get the evaluation of the $\mathsf{O}(V_{\bar{0}})$-invariant bilinear form on $V_{\bar{0}}$. Diagrams which belong to $\mathcal{H}$ are mapped to braiding morphisms in ${\rm Rep}{\mathsf{G}}$. The above two types of diagrams generate the algebra $\mathcal{R}_r$.
If $\min(m,2n)\ge r$, the fundamental theorems of invariant theory, see e.g. \cite[Theorem~4.2 and Theorem~5.7]{Concini} imply that the algebra $A$ can be described diagrammatically as follows.
Consider $i$ dots on a horizontal line and $k$ dots on a parallel line above the first one. On both lines there are $j$ white dots and the others are black. We define an ``$A$-diagram'' to be a graphical representation of a pairing of the dots such that each white dot is connected with a white dot on the other line. If $j=2$, an example of an $A$-diagram is
$$\begin{tikzpicture}[scale=0.9,thick,>=angle 90]
\begin{scope}[xshift=4cm]
\node at (0,0) {$\circ$};
\node at (1,0) {$\bullet$};
\node at (2,0) {$\circ$};
\node at (-1,2) {$\circ$};
\node at (0,2) {$\bullet$};
\node at (1,2) {$\bullet$};
\node at (2,2) {$\circ$};
\node at (3,2) {$\bullet$};
\draw (2,0) to [out=120, in=-60] +(- 3,2);
\draw (0,0) to [out=70, in=-110] +(2,2);
\draw (1,0) -- +(0,2);
\draw (0,2) to [out=-70,in=180] +(1.5,-0.8) to [out=0,in=-110] +(1.5,0.8);
\end{scope}
\end{tikzpicture}
$$
The space $A$ has a basis consisting of all $A$-diagrams, with at most $r$ (and at least $j$) dots on each line and on each line a total number of dots in $\mathscr{J}(r)$. The product of two diagrams is zero unless the dots on the lower line of the left diagram match the dots on the upper line of the right diagram. When the dots match, the product is given by concatenation, with evaluation of loops at $m$. We have an obvious interpretation of $A$-diagrams as morphisms. The above diagram is a morphism
$$V_{\bar{1}}\otimes V_{\bar{0}}\otimes V_{\bar{1}}\;\to\; V_{\bar{1}}\otimes V_{\bar{0}}^{\otimes 2}\otimes V_{\bar{1}}\otimes V_{\bar{0}}.$$
In particular, the morphism $\mathcal{R}_r\to A$ is determined by the local relations
$$\begin{tikzpicture}[scale=0.9,thick,>=angle 90]
\begin{scope}[xshift=4cm]
\draw (1,0) to [out=90,in=-180] +(0.5,0.8) to [out=0,in=90] +(0.5,-0.8);
\node at (3,1) {$\mapsto$};
\draw (4,0) to [out=90,in=-180] +(0.5,0.8) to [out=0,in=90] +(0.5,-0.8);
\node at (4,0) {$\bullet$};
\node at (5,0) {$\bullet$};
\node at (6.5,1) {and};
\draw (8,0) to [out=70, in=-110] +(1,2);
\draw (9,0) to [out=110, in=-70] +(-1,2);
\node at (10,1) {$\mapsto$};
\draw (11,0) to [out=70, in=-110] +(1,2);
\draw (12,0) to [out=110, in=-70] +(-1,2);
\node at (11,2) {$\bullet$};
\node at (12,2) {$\bullet$};
\node at (11,0) {$\bullet$};
\node at (12,0) {$\bullet$};
\node at (12.5,1) {$+$};
\draw (13,0) to [out=70, in=-110] +(1,2);
\draw (14,0) to [out=110, in=-70] +(-1,2);
\node at (13,2) {$\circ$};
\node at (14,2) {$\circ$};
\node at (13,0) {$\circ$};
\node at (14,0) {$\circ$};
\node at (14.5,1) {$+$};
\draw (15,0) to [out=70, in=-110] +(1,2);
\draw (16,0) to [out=110, in=-70] +(-1,2);
\node at (15,2) {$\bullet$};
\node at (16,2) {$\circ$};
\node at (15,0) {$\circ$};
\node at (16,0) {$\bullet$};
\node at (16.5,1) {$+$};
\draw (17,0) to [out=70, in=-110] +(1,2);
\draw (18,0) to [out=110, in=-70] +(-1,2);
\node at (17,2) {$\circ$};
\node at (18,2) {$\bullet$};
\node at (17,0) {$\bullet$};
\node at (18,0) {$\circ$};
\end{scope}
\end{tikzpicture}
$$
Now we prove that $A_{\mathcal{R}_r}$ is projective. First observe that, by definition, $A$ is Morita equivalent to its centraliser algebra $fAf$ with $f\in A$ the sum over all diagrams with only non-intersecting propagating lines with the $j$ white dots on the left. It thus suffices to prove that $fA_{\mathcal{R}_r}$ is projective. It follows easily that $fA$ is isomorphic to a direct summand of $\mathcal{B}_r^{\mathbf{c}}(\delta)$ as a right $\mathcal{R}_r$-module, with the isomorphism realised by forgetting the colour of the dots. This means the claim follows from Proposition~\ref{PropCZ}(iii).
\end{proof}
\begin{lemma}\label{LemTilt1}
If $\min(m,2n)\ge r$ and $p\not\in\lbr2,r]\!]$, then $T^r$ is a tilting module for $\mathcal{B}^{\mathbf{c}}_r(\delta)$.
\end{lemma}
\begin{proof}
It follows from a straightforward computation that the canonical isomorphism from $T^r$ to $(T^r)^\ast$, induced by $V\to V^\ast$ with $v\mapsto \langle v,\cdot\rangle$, makes $T^r$ self-dual with respect to the duality in Proposition~\ref{PropCZ}(v).
By Lemma~\ref{LemInjTilt} it thus suffices to prove that $T^r$ is injective as a $\mathcal{R}_r$-module.
We prove the stronger statement that each $\mathcal{R}_r$-module $Q_j$ is injective.
Under the assumptions on $p$, the algebras $A$ in Lemma~\ref{LemOGL} are semisimple, see e.g. \cite{Rui}.
Consequently, by Lemma~\ref{LemOGL}, the functor
$${\rm Hom}_{\mathcal{R}_r}(-, Q_j)\;\simeq\; {\rm Hom}_{A}(A\otimes_{\mathcal{R}_r}-,Q_j)$$
is exact and hence $Q_j$ is injective as a $\mathcal{R}_r$-module.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{ThmTilt}]
By Lemma~\ref{LemTilt1}, the $\mathcal{B}^{\mathbf{c}}_r(\delta)$-module $T^r$ is tilting. By Lemma~\ref{numbsimp},
${\rm End}_{\mathcal{B}^{\mathbf{c}}_r}(T^r)$ has as many simple modules as $\mathcal{B}^{\mathbf{c}}_r(\delta)$, which implies that $T^r$ is complete.
\end{proof}
\subsection{The walled Brauer algebra}
Resume the notation of Subsection~\ref{SecGLSchur}.
By equipping the tensor algebra of $V\oplus W$ with an $\mathbb{N}$-grading with degree of $V_{\bar{0}}\oplus W_{\bar{0}}$ equal to $0$ and degree of $V_{\bar{1}}\oplus W_{\bar{1}}$ equal to $1$, we get a filtration of $T^{r,s}$ as a ${\mathsf{GL}}(V_{\bar{0}})\times {\mathsf{GL}}(V_{\bar{1}})\times {\mathsf{GL}}(W_{\bar{1}})$-representation.
By adapting the proof of Theorem~\ref{ThmTilt} appropriately, we find the following result.
\begin{thm}\label{ThmTiltW}
Assume $p\not\in\lbr2,\max(r,s)]\!]$, $m\ge r+s$, $n\ge \max(r,s)$ and $m-n=\delta$. Then $T^{r,s}$ is a complete tilting module of $\mathcal{B}^{\mathbf{c}}_{r,s}(\delta)$.
A Ringel dual of $\mathcal{B}^{\mathbf{c}}_{r,s}(\delta)$ is thus given by $\mathcal{S}^g_{r,s}(V)$.
\end{thm}
\subsection{The periplectic Brauer algebra}
It was proved in \cite{PB1} that $\mathcal{A}_r^{\mathbf{c}}$ is quasi-hereditary when $p\not\in\lbr2,r]\!]$ and Morita equivalent to $\mathcal{A}_r$ when $r$ is odd. Resume the notation of Subsection~\ref{SecPSchur}.
\begin{conj}\label{Conj}
Take $n\ge 2 r $ and assume $p\not\in\lbr2,r]\!]$. A Ringel dual of $\mathcal{A}_r^{\mathbf{c}}$ is given by $\mathcal{S}^p_r(V)$ for $V\in\mathbf{svec}$ with $\dim V=(n|n)$ equipped with an odd form.
\end{conj}
In characteristic zero there there is a proof of the conjecture, which goes in the opposite direction from our above proof for the (walled) Brauer algebra, by using Proposition~\ref{PropRingel}(ii), based on recent results in~\cite{EnS}.
\begin{prop}[Entova-Eizenbud, Serganova]
Conjecture \ref{Conj} is true for $\Bbbk=\mathbb{C}$ and $n\ge 8r$.
\end{prop}
\begin{proof}
In \cite[Propositions~5.2.1 and~9.2.3]{EnS} it is proved that ${\rm Rep}^{(r)}{\mathsf{Pe}}(V)$ is a highest weight category where the tilting modules are precisely the direct summands of
$$T^r\,=\,\bigoplus_{j\in \mathscr{J}(r)}\Pi^{\frac{r-j}{2}} (V^{\otimes j}).$$ The Ringel dual of ${\rm Rep}^{(r)}{\mathsf{Pe}}(V)$ is thus ${\rm End}_{\mathfrak{pe}(V)}(T^r)$. The latter algebra is precisely $\mathcal{A}^{\mathbf{c}}_r$, see e.g. \cite[Proposition~8.3.3(i) and Theorem~7.3.1(ii)]{PB1} or \cite[Theorem~5.4.2(ii)]{Kujawa}.
\end{proof}
\section{Applications to super groups and Lie algebras}\label{SecApp}
We list some consequences of our results on Brauer algebras for the representation theory of super groups. Some results are extensions of known results from characteristic zero to positive characteristic, but with entirely different proofs.
\subsection{The orthosymplectic super group}
Resume the notation of Subsection~\ref{SchurO}. In particular $V$ has dimension $(m|2n)$ and is equipped with an even form $\langle\cdot,\cdot\rangle$.
\begin{thm}
Assume $p\not\in\lbr2,r]\!]$ and $\min(m,n)\ge r$. Then ${\rm Rep}^{(r)}{\mathsf{OSp}}(V)$ is a highest weight category for partial order $\xi<\eta$ if and only if $|\xi|< |\eta|$ with $|\xi|=|\lambda|$ for $\xi=\sum_{i=1}^n\lambda_i\delta_i$, with $\lambda\in\Lambda_r$. Furthermore, ${\rm Rep}^{(r)}{\mathsf{OSp}}(V)$ does not depend on $V$, up to equivalence.
\end{thm}
\begin{proof}
By Theorems~\ref{ThmOSp}(ii), the statements are about $\mathcal{S}^o_r(V)$-mod. By Theorem~\ref{ThmTilt}, the algebra $\mathcal{S}^o_r(V)$ is Ringel dual to $\mathcal{B}_r^{\mathbf{c}}(\delta)^{\mathrm{op}}\simeq \mathcal{B}_r^{\mathbf{c}}(\delta)$.
This results thus follow from Propositions~\ref{PropRingel}(i) and~\ref{PropCZ}(iv).
\end{proof}
\begin{thm}\label{ConseqFT}
Assume $p\not\in\lbr2,r]\!]$ and $\min(m,n)\ge r$ and set $\delta=m-2n$.
\begin{enumerate}[(i)]
\item We have an algebra isomorphism $$\mathcal{B}_r(\delta)\;\stackrel{\sim}{\to}\; \underline{\mathrm{End}}_{{\mathsf{OSp}}(V)}(V^{\otimes r}).$$
\item The space of ${\mathsf{OSp}}(V)$-invariant multilinear forms $V^{\times 2r}\to \Bbbk$ is spanned by all transforms under the symmetric group of
$$v_1,v_2,\cdots,v_{2r}\;\mapsto\; \langle v_1, v_{2}\rangle\cdots\langle v_{2r-1},v_{2r}\rangle.$$
\end{enumerate}
\end{thm}
\begin{proof}
By Proposition~\ref{PropRingel}(iii) and Theorem~\ref{ThmTilt} we have algebra isomorphisms
$$\mathcal{B}_r^{\mathbf{c}}(\delta)\;\stackrel{\sim}{\to}\;\underline{\mathrm{End}}_{\mathcal{S}^o_r(V)}(T^{ r}) \;\stackrel{\sim}{\to}\;\underline{\mathrm{End}}_{{\mathsf{OSp}}(V)}(T^{ r}).$$
Part (i) is a restriction of this isomorphism. Part (ii) then follows from applying the isomorphism between $\underline{\mathrm{End}}(V^{\otimes r})$ and $\underline{\mathrm{Hom}}(V^{\otimes 2r},\Bbbk)$, see e.g.~\cite{BrCat, Kujawa}.
\end{proof}
\begin{rem}
\begin{enumerate}[(i)]
\item For $\Bbbk=\mathbb{C}$, it was recently proved in \cite{Yang, Sel} that the isomorphism in Theorem~\ref{ConseqFT}(i) holds more generally if $r<(m+1)(n+1)$. In \cite[Theorem~A]{ES}, the case $\Bbbk=\mathbb{C}$ and $r\le m+n$ was proved. Our result in positive characteristic seems to be new.
\item For $\Bbbk=\mathbb{C}$, but without the condition $\min(m,n)\ge r$, Theorem~\ref{ConseqFT}(ii) is the main result of \cite{DLZ}, proved through geometric methods.
\end{enumerate}
\end{rem}
\subsection{The general linear super group}
\begin{thm}\label{CorTiltW}
Assume $p\not\in\lbr2,\max(r,s)]\!]$, $m\ge r+s$ and $n\ge \max(r,s)$. Then ${\rm Rep}^{(r,s)}{\mathsf{GL}}(V)$ is a highest weight category which does not depend on $V$.
\end{thm}
\begin{proof}
This is an application of Proposition~\ref{PropRingel}(i) by Theorem~\ref{ThmTiltW}.
\end{proof}
\begin{rem}
If $\Bbbk=\mathbb{C}$ and $4(r+s)\le \min(m,n)$, the results in Theorem~\ref{CorTiltW} were first proved in \cite[Theorem~7.1.1 and~Corollary~8.5.2]{EHS} through completely different methods.
\end{rem}
\begin{prop}
Assume $p\not\in\lbr2,r]\!]$ and $\min(m,n)\ge r$, then we have an isomorphism $\Bbbk\mathrm{S}_r\stackrel{\sim}{\to}\underline{\mathrm{End}}_{{\mathsf{GL}}(V)}(V^{\otimes r})$.
\end{prop}
\begin{proof}
Mutatis mutandis Theorem~\ref{ConseqFT}(i).
\end{proof}
\begin{prop}\label{CorGL}
If $\Bbbk=\mathbb{C}$, the morphism $U(\mathfrak{gl}(V))\to \underline{\mathrm{End}}_{\mathcal{B}_{r,s}}(V^{\otimes r}\otimes W^{\otimes s})$ is surjective.
\end{prop}
\begin{proof}
It follows from Lemma~\ref{LemDist}(ii) that the morphism $\mathrm{Dist}(\mathcal{O})\to X^\ast$ is surjective for any finite dimensional space $X\subset \mathcal{O}$. Combined with Lemma~\ref{LemDist}(i) this shows that $U(\mathfrak{gl}(V))\to (\mathcal{O}^{V^{\otimes r}\otimes W^{\otimes s}})^\ast$ is surjective. The conclusion follows from Theorem~\ref{ThmGL}(i) and diagram~\eqref{CommD}.
\end{proof}
\subsection{The periplectic super group}
The following answers in particular \cite[Question~8.1.6]{PB1}.
\begin{prop}\label{CorP}
If $\Bbbk=\mathbb{C}$, the morphism $U(\mathfrak{pe}(V))\to \underline{\mathrm{End}}_{\mathcal{A}_r}(V^{\otimes r})$ is surjective.
\end{prop}
\begin{proof}
Mutatis mutandis Proposition~\ref{CorGL}.\end{proof}
\subsection{The queer super group}
\begin{prop}
If $\Bbbk=\mathbb{C}$, the morphism $U(\mathfrak{q}(V))\to \underline{\mathrm{End}}_{\mathcal{BC}_{r,s}}(V^{\otimes r}\otimes W^{\otimes s})$ is surjective.
\end{prop}
\begin{proof}
Mutatis mutandis Proposition~\ref{CorGL}.\end{proof}
|
{
"redpajama_set_name": "RedPajamaArXiv"
}
| 4,011
|
Poll: Majority of Voters Agree with Trump, Call Porous Border a 'Crisis'
GUILLERMO ARIAS/AFP/Getty Images
The establishment media and elected Democrats have failed to convince the American public that mass illegal immigration and drug trafficking at the United States-Mexico border is a "manufactured crisis," a new poll reveals.
The latest Quinnipiac University Poll finds that the majority of Americans, Republican voters, swing-voters, American men, as well as both college-educated whites and non-college educated whites agree that there is a "security crisis" at the southern border.
In total, 54 percent of Americans agree with President Trump's assertion that the country's porous southern border has created a "crisis" of skyrocketing illegal immigration and deadly drug trafficking. About 86 percent of Republican voters agree, as well.
Swing-voters by a majority of 54 percent say there is a crisis at the southern border, while 58 percent of American men and 50 percent of American women say the same. Roughly 51 percent of college-educated whites say there is a crisis at the border and 64 percent of non-college educated whites agree.
(Quinnipiac University Poll)
Quinnipiac Poll: Do you believe there's a security crisis along the border with Mexico (Yes/No)
Overall: 54%/43%
GOP: 86%/12%
Dems: 25%/71%
Indies: 54%/42%
College Whites: 51%/46%
Non-College W: 64%/31%
18-34 Yr. olds: 56%/39%
Whites 57%/39%
Blacks: 36%/59%
Hispanics: 51%/48%
— Ryan James Girdusky (@RyanGirdusky) January 14, 2019
The poll is a blow to the establishment media's narrative that illegal immigration and drug trafficking are not out of control at the southern border, though federal data does not support these claims.
In her response address to Trump's Oval Office address on immigration, Rep. Nancy Pelosi (D-CA) claimed the president was "manufacturing a crisis" at the southern border.
Angel Mom Mary Ann Mendoza — whose 32-year-old son, Arizona officer Brandon Mendoza was killed by an illegal alien — blasted Pelosi and the media narrative in an exclusive interview with Breitbart News.
"You will never know how hurtful that is to every one of us Angel Families who have been fighting this," Mendoza said of the "manufactured crisis" claims. "It's basically a slap in our face and a kick to our loved ones' graves. A manufactured crisis — it's unbelievable they can even come up with these words."
Currently, the federal government has remained partially shut down as House Democrats block any funding for physical barriers at the U.S.-Mexico border. A handful of Senate Republicans, meanwhile, crafted a plan to give amnesty to illegal aliens that ultimately failed to gain traction.
President Donald Trump has said he is reviewing a plan to deem the border and illegal immigration a national emergency in order to fund a wall along the southern border.
Border-crossings in November 2018 — the last month where data is available — hit close to 52,000, marking the highest level of illegal immigration in the month of November since 2006. Projections indicate that illegal immigration for next year will reach 600,000 border crossings, the highest level of illegal immigration in more than a decade. Meanwhile, drug overdoses in 2017 killed an unprecedented 72,287 U.S. residents, nearly three times the number of individuals killed by global terrorism. Nearly 50,000 of those deadly overdoses were caused by either heroin or fentanyl.
John Binder is a reporter for Breitbart News. Follow him on Twitter at @JxhnBinder.
ImmigrationPoliticsBorder WallChuck SchumerCongressDemocratsDonald TrumpDrug Traffickinggovernment shutdownillegal immigrationNancy PelosiSouthern Borderthe wallU.S.-Mexico border
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 2,301
|
\section*{Plain Language Summary}
The earth system is exceedingly complex and often chaotic in nature, making prediction incredibly challenging: we cannot expect to make perfect predictions all of the time. Instead, we can look for specific states of the system that lead to more predictable behavior than others, often termed ``forecasts of opportunity''. When these opportunities are not present, scientists need prediction systems that are capable of saying ``I don't know.'' We present a method for teaching neural networks, a type of machine learning tool, to say ``I don't know'' for classification problems. By doing so, the neural network focuses less on the predictions it identifies as problematic and focuses more on the predictions where its confidence is high. In the end, this leads to better predictions.
\clearpage
\section{Introduction}
The earth system is exceedingly complex and often chaotic in nature, making prediction incredibly challenging: we cannot expect to make perfect predictions all of the time. Instead, we look for specific states of the system that lead to more predictable behavior than others, often termed ``forecasts of opportunity'' \cite{Mariotti2020,Albers2019,Mayer2020,Barnes2020}. When skillful forecast opportunities are not present, scientists need prediction systems that are capable of saying ``I don't know.'' While this concept of forecasts of opportunity stems from weather and climate predictions, the idea is more general than this. For example, a forecast of opportunity framework may be beneficial when certain predictors are only helpful under certain circumstances. Additionally, if certain predictions (labels) are more predictable than others, or if there is unstructured noise in the training data, a system that can say ``I don't know'' may identify the more skillful predictions, when they occur.
Many approaches to identify skillful forecasts of opportunity already exist. For example, retrospective analysis of the forecast can provide a sense of the physical circumstances that can lead to forecast successes or busts \cite<e.g.>{Rodwell2013-qz}, The ensemble spread can also give a sense of uncertainty in numerical weather prediction systems \cite<e.g.>{Van_Schaeybroeck2016-lo}. \citeA{Albers2019} used a linear inverse modeling approach to identify confident subseasonal predictions and showed that these more confident predictions were indeed more skillful. Recently, \citeA{Mayer2020} and \citeA{Barnes2020} suggested that machine learning, specifically neural networks, may be a useful tool to identify forecasts of opportunity for subseasonal-to-seasonal climate predictions. Specifically, a classification network is first trained, then the predicted probabilities are ordered from largest to smallest. A selection of predictions with the highest probabilities are identified as possible forecasts of opportunity. While \citeA{Mayer2020} and \citeA{Barnes2020} show that this approach works well for classification tasks (i.e., predicting a specific category) where the network is already tasked with predicting a probability, it is less clear how one might apply this methodology to regression tasks (i.e., predicting a continuous quantity).
Most of the current machine learning approaches used to identify forecasts of opportunity, including those described above, are applied post-training. The network is first trained, and then the model confidence is assessed. Instead, here we lean heavily on work by \citeA{Thulasidasan2019} and \citeA{Thulasidasan2020} to further explore a deep learning abstention loss function for classification tasks that teaches the network to say ``I don't know'' (abstain) on certain samples \textit{during training}. The resulting controlled abstention network (CAN) preferentially learns from the samples in which it has more confidence and abstains on samples in which it has less confidence. The CAN is designed to abstain on a user-defined fraction via a PID controller, which ultimately leads to more accurate predictions than our baseline classification approach. While alternative methods have recently been suggested for abstention (rejection) during training \cite{Geifman2019-paper,Geifman2019-thesis}, the CAN approach can be easily implemented in most any network architecture designed for classification, as it only requires the addition of an abstention class to the output layer and modification of the training loss.
We demonstrate the behavior of the CAN for three use cases based on synthetic climate data where the correct answer is known. The first use case explores the utility of the CAN in situations where certain classes (labels) are more predictable than others. The second use case explores the ability of the CAN to learn in the presence of unstructured noise, that is, when there is no way to tell \textit{a priori} whether the sample is predictable or not. The third use case is modeled loosely after global teleconnections associated with the El Ni\~no Southern Oscillation \cite<e.g.>{McPhaden2006-pi,Yeh2018-tf} and explores the utility of the CAN in identifying forecasts of opportunity for climate prediction applications.
Section 2 introduces the synthetic climate data and general neural network architecture. Section 3 discusses the baseline loss function and the CAN in detail, and Section 4 presents the results. Additional discussion on the approach compared to previous approaches is provided in Section 5 and conclusions in Section 6.
\section{Data and use cases}
\subsection{Synthetic climate data}
To demonstrate the utility of the controlled abstention network (CAN), we use the synthetic benchmark data set introduced by \citeA{Mamalakis2021}. While \citeA{Mamalakis2021} provides an extensive description of this data, we give a brief overview here. The dataset consists of input fields $x_i$ and output series $y_i$ (where $i$ denotes the $i^{th}$ sample), which is a function of the input. The input fields represent monthly anomalous global sea surface temperatures (SSTs) generated from a multivariate normal distribution with a correlation matrix estimated from observed SST fields\footnote{https://psl.noaa.gov/data/gridded/data.cobe2.html}. The $i^{th}$ input sample consists of one map of SST anomalies, denoted as $x_i$. \citeA{Mamalakis2021} then define the global response $y_i$ to sample $x_i$ as the sum of local, nonlinear responses. Specifically,
\begin{linenomath*}
\begin{equation}
y_i = \sum_g F_g(x_i)
\end{equation}
\end{linenomath*}
where $g$ represents the grid point and $F_g$ is defined locally at each grid point $g$ by a piecewise linear function. The slopes $\beta_n$ (where $n$ is an integer that runs from 1 to the number of piecewise linear segments, set here to 5) of each local function are chosen randomly from a multivariate normal distribution with correlation matrix, once again, estimated from observed SST fields.
In the end, this data set consists of input maps of SSTs with spatial correlations indicative of observed SSTs, but where each input map is independent of the others. $y_i$ then represents the sum of contributions from each grid point across the globe, where that contribution is a nonlinear function (specifically piecewise linear) of the SST value at that grid point. To speed up training time, we reduce the number of grid points (pixels) from that used by \citeA{Mamalakis2021} to 60 longitudes and 15 latitudes for a total of 900 grid points per input map. An example input map is shown in Fig. \ref{fig_arch}.
\subsection{Experimental design}
We modify the synthetic climate data to make it suitable for classification by assigning each input map (i.e. sample) to one of $k=10$ classes (see Fig. \ref{fig_arch}). The ten classes are determined by binning the $y$ values into deciles. For all use cases explored, we task a neural network with ingesting a sample input map and predicting the correct class.
\subsubsection{badClasses}
For the first use case, badClasses, we modify the data set such that all samples in classes 4 and 5 are ``corrupted"; they are randomly assigned an incorrect class. Over the entire data set, 20\% of the samples are assigned incorrect labels and 80\% of the samples retain their correct labels. In this situation, we would like to to see the CAN identify the samples associated with classes 4 and 5 and abstain on them, since any perceived relationship is unreliable.
\subsubsection{mixedLabels}
For the mixedLabels use case, we modify the data set such that 5\% of all samples (no matter the class) are corrupted by assigning a randomly chosen incorrect label while 95\% of the samples retain their correct label. Unlike badClasses, in this case there is no systematic relationship between the input maps and whether the sample is corrupted or not. For mixedLabels, we would like the CAN to learn to abstain on the corrupted samples by identifying them as those that do not behave like the majority of the samples. If the CAN does this, it should be able to learn the reliable samples better. For the testing set, the CAN will not know which samples have been corrupted and which have not, but it should achieve better accuracy as it reduced the chances on learning spurious relationships during training.
\subsubsection{fooENSO}
For our last use case, fooENSO, we modify the data to loosely reflect forecasts of opportunity related to teleconnections associated with the El Ni\~no Southern Oscillation (ENSO). Warm ENSO events (El Ni\~no events) have long been known to impact global temperatures and precipitation \cite<e.g.>{McPhaden2006-pi,Yeh2018-tf}, at times leading to skillful forecasts on subseasonal-to-seasonal time scales \cite<e.g.>{Johnson2014-fh}. To mimic this behavior with our synthetic data set, we average the anomalous synthetic SSTs in the ENSO region within the equatorial eastern Pacific (dashed white box in the map in Fig. \ref{fig_arch}). When the average value in this box is larger than 0.5 (29\% of the samples), we leave the sample as is to reflect an opportunity where a strong El Ni\~no may lead to more predictable behavior of the global climate system. Samples where the average value is less than 0.5 represent ``noisy'' samples; therefore, we corrupt 50\% of these noisy samples by assigning them a randomly chosen incorrect label. As a result, 35\% of all samples are corrupt, and 29\% of all samples (i.e. those associated with a strong El Ni\~no) retain their correct label. We note that one could corrupt all of the noisy samples, instead of just 50\% of them, and the conclusions are the same (see Supp. Fig. S1). With such a setup, we would like the CAN to identify strong synthetic El Ni\~no samples (i.e. large values within the ENSO box, Fig. \ref{fig_arch}) as reliable samples to learn, while abstaining on the other samples that may have unreliable labels.
\begin{figure}
\begin{center}
\noindent\includegraphics[width=400px]{architecture.png}
\end{center}
\caption{General CAN network architecture used for all use cases. A map of synthetic sea-surface temperatures is fed into a fully connected network and tasked with classifying each sample into one of ten classes. An additional class, the abstention class, is included in the output layer for training with abstention. The number of hidden layers varies between two and three depending on the use case.}
\label{fig_arch}
\end{figure}
\subsection{Network architecture and training}
We train a fully connected feed-forward network with two to three hidden layers using Python 3.7.9 and TensorFlow 2.4. For badClasses and mixedLabels, the network has two hidden layers with 50 and 25 units, respectively. For fooENSO, we train a network with three hidden layers with 500, 250, and 20 units, respectively, to demonstrate the utility of the CAN for deep networks. Additional use cases are provided in Supp. Fig. S1. For the baseline ANN, the output layer consists of 10 units, representing each of the 10 classes (Fig. \ref{fig_arch}). For the CAN, we add an additional output unit to represent the abstention class. This will be discussed further in the following subsection.
We train with a ReLU (rectified linear unit) activation function on the hidden layers and a softmax layer at the output. The softmax layer ensures that the sum of all predicted likelihoods is 1.0 for each sample. The network is trained with a learning rate of 0.001 and batch size of 32. For badClasses and mixedLabels, we train on 8,000 samples, validate on 5,000 samples, and test on 5,000 samples. While we could train on a much larger data set, we have intentionally kept the sample size relatively small to demonstrate the utility of the CAN when the sample size is relatively low --- as is the case for many geoscience applications. For fooENSO, we train on 32,000 samples, validate on 5,000 samples, and test on 5,000 samples. The number of samples is increased for this use case to accommodate the increase in complexity of the network. That said, additional use cases with smaller training sizes are shown in Supp. Fig. S1. All quantities and figures are computed from the testing data unless otherwise specified.
We employ early stopping to automatically determine the optimal number of epochs to train. Specifically, the network stops training when the validation accuracy stops increasing, with a {\tt patience} of 30 epochs. The network with the best performance on the validation accuracy is saved. Specifically for the CAN, we select the best-performing network from epochs where the validation abstention fraction is within 0.1 of the user-chosen abstention setpoint. For all examples shown here, 50 different networks are trained for each configuration (i.e. baseline ANN and CAN) by varying the randomly initialized weights.
\section{Methods}
\subsection{Baseline networks}
The baseline artificial neural network (baseline ANN) is identical to the CAN architecture (Fig. \ref{fig_arch}) with the following exception: it does not include an abstention class, and it uses a standard cross-entropy loss function \cite<>[p. 149]{Geron2019}, which we define as
\begin{equation}
\mathcal{L}_C(x_{i,j}) = -\log{p_{i,j}}
\end{equation}
where $x$ denotes sample $i$ with true label $j$ (where $1\le j \le k$), and $p_{i,j}$ is the likelihood assigned to the correct class for sample $i$. From this point on we drop the subscript $i$ for readability.
Once the baseline ANN is trained, we invoke abstention on the least-certain predictions by thresholding the ANN-predicted likelihoods (i.e., the output of the network). Specifically, the class prediction by the network for a single sample is defined as the class with the highest likelihood. We then sort these winning likelihoods to threshold the ANN predictions. For example, we abstain on 20\% of the samples by throwing out the the 20\% smallest winning likelihoods (20\% least confident predictions). This thresholding approach for abstention has been shown to be very powerful on its own \cite<e.g.>{Mayer2020,Barnes2020}, and will serve as a comparison for the CAN.
Throughout this paper, we use ``coverage'' to denote the fraction of samples for which the network makes a prediction, and ``abstention'' to refer to the fraction of samples for which the network does not make a prediction. Thus, the percent coverage is always 100\% minus the percent abstention. For the baseline approach, abstention and coverage is computed post-training based on the predicted winning likelihoods, while for the CAN these quantities are determined during the training itself (see next section).
\subsection{Controlled Abstention Network (CAN)}
\subsubsection{NotWrong loss}
We next introduce the abstention loss function for the controlled abstention network (CAN). Unlike the baseline ANN, the CAN architecture allows the network to assign a label of \textit{abstain}. The abstention loss is designed to penalize the network for abstention, but penalizes the network even more for getting it wrong. For this reason, we have named it the NotWrong loss and define it as
\begin{equation}
\mathcal{L}_{NW}(x_j) = -\log{\left(p_j+p_{k+1}\right)} - \alpha \log{q} \label{notwrongloss}
\end{equation}
where $p_{k+1}$ is the likelihood assigned to the abstention class, $\alpha$ is a non-negative weight, and $q$ is the likelihood assigned to not abstaining,
\begin{equation}
q = 1 - p_{k+1}=\sum_{m=1}^kp_m. \label{q_def}
\end{equation}
The first term in Eq. \ref{notwrongloss} represents the likelihood of not getting the prediction wrong, that is, the sum of the likelihood of getting it correct plus the likelihood of abstaining, while the second term is a penalty term for abstaining that is weighted by $\alpha$. Without this penalty the network would abstain on every sample to minimize the loss.
Like the abstention loss introduced in \citeA{Thulasidasan2019}, the NotWrong loss has the property that during gradient descent, the network continues to learn on the abstained samples, although to a lesser extent. A proof of this is provided in the supplemental material. This feature of the loss allows the network to move samples in and out of abstention during training while it continues to learn the non-abstained samples better (as will be shown).
\subsubsection{PID controller}
\begin{figure}
\begin{center}
\noindent\includegraphics[width=195px]{history_badClasses0_NotWrongLoss_Colorado_abstSetpoint0.7_prNoise1.0_networkSeed19_npSeed99.png}
\end{center}
\caption{Example of CAN metrics during training of the badClasses experiment with abstention setpoint of 0.7 and a network random seed of 19.}
\label{fig_epochs}
\end{figure}
The parameter $\alpha$ in Eq. \ref{notwrongloss} determines how much the network is penalized for abstaining. $\alpha$ can be adaptively modified throughout training so that the network abstains on a specified percent of the training samples. Inspired by the success reported in \citeA<>[Chapter 4]{Thulasidasan2020}, we implement a discrete-time PID controller (velocity algorithm) to modulate $\alpha$ throughout training \cite<e.g,>[Eq. (1.38)]{Visioli2006}.
\citeA{Thulasidasan2020} solely explores low-abstention setpoints (e.g. 10\%), and evaluates the PID terms batch by batch. For our applications, however, we need the algorithm to work well for a broad range of abstention setpoints (e.g. from 10\% to 90\%). With a high abstension setpoint, say 90\%, and a batch size of 32, only 3 samples on average would be covered per batch --- this leads to to unstable behavior. Because of this, we evaluate the PID terms on 6 consecutive batches ($32 \times 6 = 192$ samples); this strategy leads to more stable behavior of the abstention fraction, but it does not impede training. An example where the PID controller modulates $\alpha$ to control the abstention setpoint during training is shown in Fig. \ref{fig_epochs}a,b.
\section{Results}
\subsection{General performance}
\begin{figure}
\begin{center}
\noindent\includegraphics[width=275px]{accuracy_allExperiments.png}
\end{center}
\caption{CAN testing accuracy as a function of percent coverage for 50 different simulations of each of the three use cases. Pink shading denotes the full range of accuracies of the baseline ANN, while the solid pink line denotes their median. Dots denote results for each CAN simulation; colors denote different abstention setpoints. Panel (b) includes an additional gray line that shows the full range of results from the ORACLE simulation.}
\label{fig_acc}
\end{figure}
For each use case, we train 50 different baseline ANNs and 50 different CAN networks for setpoints ranging from 0.05 to 0.95. Accuracies as a function of percent coverage for the three use cases are shown in Fig. \ref{fig_acc}. As coverage decreases (abstention increases) accuracy increases for the baseline ANNs and CANs. This demonstrates that more confident predictions are more accurate. That said, for all three use cases, the CAN exhibits higher accuracies compared to the baseline ANN for most coverages. This is further visualized in Fig. \ref{fig_max}, where we plot the maximum difference in accuracy between the CAN and the best baseline ANN for the same coverage. For most abstention setpoints the CAN shows an improvement on the baseline ANN. These improvements can be as high as an increase in accuracy of 0.045 (i.e. 4.5\%).
For the $mixedLabels$ use case, Fig. \ref{fig_acc}b shows an additional gray shaded line that represents an additional set of 50 simulations we term ORACLE \cite<see also>{Thulasidasan2020}. In this setup, we play an all-knowing oracle and remove all of the corrupted samples from the training and validation sets prior to training using the baseline ANN approach. Accuracies are then evaluated on the same testing set as the CAN. Thus, ORACLE represents an upper bound on what we could hope to expect from the CAN. For coverage fractions between 40\%-80\%, the best CAN models achieve similar accuracies to ORACLE. This suggests that the abstention process has done a nearly ideal job abstaining on the corrupted samples.
\begin{figure}
\begin{center}
\noindent\includegraphics[width=350px]{violinDifferences_finalExperiments_max.png}
\end{center}
\caption{Maximum difference in testing accuracy between the CAN and the best baseline ANN for various abstention setpoints. Positive values imply higher accuracies of the CAN compared to the best baseline ANN.}
\label{fig_max}
\end{figure}
\subsection{Strategies of the CAN}
\begin{figure}
\begin{center}
\noindent\includegraphics[width=400px]{tranquil_accuracy_allExperiments.png}
\end{center}
\caption{(a,b) Testing accuracy computed over the tranquil samples for the fooENSO and badClasses experiments. Shading denotes the full range of accuracies from the baseline ANN, while the solid lines denote the median. (c,d) Fraction of covered samples that are tranquil samples. Gray dashed lines denote the maximum fraction possible given the experimental setup.}
\label{fig_approach}
\end{figure}
There are multiple strategies that the CAN may employ to improve upon the baseline ANN: (1) the CAN may do a better job identifying the non-corrupted samples and abstaining on the corrupted samples, (2) the CAN may do a better job learning the relationship between the non-corrupted inputs and their labels/true classes, and (3) a combination of \#1 and \#2. Fig. \ref{fig_approach} explores these strategies for the badClasses and fooENSO use cases.
Beginning with the fooENSO use case (Fig. \ref{fig_approach}a,c), we once again plot the accuracy as a function of coverage, but focus only on the tranquil samples this time. We use the identifier \textit{tranquil} to denote the group of samples that should be identifiable by the network as non-corrupt (i.e. all samples that exhibit a strong El Ni\~no) and \textit{not tranquil} to denote the rest. Fig. \ref{fig_approach}c shows that while the baseline ANN and the CAN cover similar fractions of tranquil samples, the CAN is better at learning the relationship between the inputs and outputs of the tranquil samples (Fig. \ref{fig_approach}a). Thus, for this use case, the CAN outperforms the baseline ANN by taking approach \#2. For the badClasses use case (Fig. \ref{fig_approach}b,d), the CAN appears to outperform the baseline ANN by employing a combination of approaches \#1 and \#2. That is, the CAN does a significantly better job identifying the tranquil samples (i.e. samples that do not belong to classes 4 and 5; Fig. \ref{fig_approach}d). It also does a better job learning the relationship between the inputs and outputs of the tranquil samples.
\begin{figure}
\begin{center}
\noindent\includegraphics[width=300px]{lrp_tranquilFOO20.png}
\end{center}
\caption{Layer-wise propagation (LRP) relevance heatmap averaged over testing samples for the fooENSO experiment using an abstention setpoint of 0.7. The heatmap shows the mean over all 50 simulations for samples that are correctly assigned a label of 1 (i.e. $j=1$). We use the LRP-z rule but set all negative relevances to zero and scale each heatmap by the sum of the positive relevances over the map prior to averaging.}
\label{fig_lrp}
\end{figure}
Neural network explainability methods have recently attracted the attention of the climate science community to assist scientists in identifying new forecasts of opportunity \cite<e.g>{Toms2020,Barnes2020,Mayer2020}. One particular method, layer-wise relevance propagation (LRP), produces a heatmap which approximates the most relevant regions for a neural network's output according to a set of propagation rules \cite<e.g.>{Bach2015,Montavon2017,Mamalakis2021}.
Fig. \ref{fig_lrp} shows the average fooENSO LRP heatmap over all 50 simulations for samples that were correctly assigned a label of 1 (i.e. $j=1$) by the CAN. Large relevance within the ENSO region (dashed white box) demonstrates that the CAN is indeed using this region to identify successful predictions. Other regions are also relevant (non-zero), since the correct label $j=1$ is determined by the sum of contributions across the globe (see Section 2.2.3). In this way, the CAN architecture may be paired with explainability methods to identify the mechanisms behind forecasts of opportunity within data sets \cite<e.g.>{Barnes2020}. Likewise, one could also use explainability methods to explore the reasons behind the CAN's abstention \cite<e.g.>{Thulasidasan2019}.
\section{Discussion}
As discussed in the introduction, \citeA{Thulasidasan2019} and \citeA{Thulasidasan2020} introduced their own deep abstention classifier (DAC) loss function, defined as
\begin{equation}
\mathcal{L}_{DAC}(x_j) = -q\log{\left( \frac{p_j}{q }\right)} - \alpha \log{q} \label{dacloss}
\end{equation}
where all notation is the same as in Eqs. \ref{notwrongloss} and \ref{q_def}. Our NotWrong loss and the DAC loss are very similar. The important difference is that the quantity inside the first log in the DAC loss represents the likelihood of getting the prediction \textit{correct}, while in the NotWrong loss it represents the likelihood of getting the prediction \textit{not wrong}.
While both the DAC loss and NotWrong loss improve accuracies over the baseline ANN, for the use cases explored here we find that the NotWrong loss outperforms the DAC loss, as shown in Supp Fig. S2. We believe that this is because the NotWrong loss puts more energy into learning the correct answer via larger negative derivatives of the loss with respect to the correct class ($a_j$). This is demonstrated in Supp. Fig. S3, where we display the derivative of the loss with respect to $a_j$ for different value ranges of $p_j$ and $p_{k+1}$. Pink shading in Supp. Fig. S3c represents regions in phase space where the derivative of the NotWrong loss is more negative than the DAC loss, and we find this region of phase space most representative of the use cases explored here (not shown). Future work will explore this behavior further.
\section{Conclusions}
The ability to say ``I don't know'' is an important skill for any scientist.
In the context of prediction with deep learning, the identification of uncertain (unpredictable) samples is often approached post-training. In this paper, we explore an alternative: a deep learning loss function that can abstain \textit{during training} for classification problems. We introduce a new abstention loss for classification that focuses on getting the answer not wrong, rather than getting it right. The controlled abstention network (CAN) with this loss allows the network to preferentially learn more from confident samples, and ultimately outperform both the baseline ANN approach and the abstention loss of \citeA{Thulasidasan2019} and \citeA{Thulasidasan2020} for the climate use cases explored here.
An additional benefit of the abstention loss CAN is its simplicity --- it is straightforward to implement in most any network architecture, as it only requires adding an additional class to the output layer and modification of the training loss. The abstention loss framework has the potential to aid deep learning algorithms to identify skillful forecasts, which ultimately improves performance on the samples with predictability.
\acknowledgments
This work was funded, in part, by the NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography (AI2ES) under NSF grant ICER-2019758. Once published, the code and data will be made available to the community via the Mountain Scholar permanent data repository with a permanent DOI and via Zenodo.
|
{
"redpajama_set_name": "RedPajamaArXiv"
}
| 1,877
|
\section{Introduction}
\label{sec:intro}
The abundance of halos at the massive end is sensitive to variations
in cosmological parameters such as the matter density parameter
($\Omega_{\rm m}$) and the amplitude of the power spectrum of density
fluctuations in the Universe (characterized by $\sigma_8$). Therefore
observations of the abundance of galaxy clusters which reside in such
massive halos can be used to constrain these cosmological parameters
\citep[see e.g][]{Vikhlinin2009,Mantz2010,Rozo2010}. Additionally,
such observations can also be used to constrain the phenomenological
behavior of dark energy and test the nature of gravity through
measurements of the growth of structure, questions which are of
fundamental importance for cosmologists today \citep[see
e.g.,][]{Rapetti2010, Rapetti2012}. Photometric surveys such as the
Dark Energy Survey \citep[][DES]{Frieman2005}, the Hyper-Suprime Cam
survey \citep[][HSC]{Miyazaki2006} in the immediate future and the
Large Scale Synoptic Telescope Survey (LSST) in the near future, will
result in a large catalog of optically-selected galaxy clusters, which
can be used to answer these important questions.
Identifying the galaxy cluster observables in optical surveys which
tightly correlate with halo mass, establishing the scaling relations
between these observables and halo mass and the scatter in these
relations are all crucial steps in order to achieve the scientific
goals. It is well known that the number of cluster members (also
called richness) correlates with halo mass \citep[see
e.g,][]{Becker2007, Johnston2007, Sheldon2009}, and so does the
luminosity (or stellar mass) of the brightest (or central) galaxy
\citep{Mandelbaum2006, More2009a, Moster2010, Behroozi2010, More2011a}
or the total stellar or luminosity content in the group \citep[see
e.g.,][for results based on abundance matching]{Yang2007}. However,
these scaling relations have considerable scatter, and therefore
combining multiple observables, especially those which come without
additional observational costs, is an important task.
Recently, \citet{Hearin2012} suggested that at fixed richness, the
magnitude difference (also called magnitude gap or equivalently the
luminosity ratio) between the brightest and the second brightest
galaxy, contains information about halo mass. By using a simple
subhalo abundance matching prescription they showed that at fixed
richness, the small (large) magnitude gap systems are expected to
preferentially reside in less (more) massive halos. To test the
proposition with real data, the authors used the galaxy group catalog
of \citet{Berlind2006} and showed that at fixed velocity dispersion
(proxy for halo mass), the average richness of small magnitude gap
systems is significantly larger than the average richness of the large
magnitude gap systems, and the difference is larger than that expected
if the luminosities of galaxies in every group were drawn randomly
from the galaxy luminosity function.
Theoretically, it is expected that dynamical friction causes brighter
satellites in massive halos to merge with the central galaxy,
increasing the magnitude gap between the brightest satellite and the
central galaxy. This suggests that central galaxies are expected to be
special, they occupy the deepest portion of the potential well, where
they grow by feeding on the satellites that are dragged to the center
of halos by dynamical friction. Whether this is indeed the case, or
whether the luminosity of the central galaxies is just a matter of
chance, is a matter which can be settled with observations.
In \citet{Paranjape2012}, the authors examine this question by looking
at the luminosity distribution of the brightest and the second
brightest galaxy and the magnitude gap between the two using the group
catalog of \citet{Berlind2006}. They find that the luminosity
distributions are consistent with the distribution of the brightest
and the second brightest of $N$ random draws from the galaxy
luminosity function, where $N$ is the richness of a given group. On
face value, this would imply that there is nothing special about the
brightest galaxy in a given group, and that it is just a matter of
chance that any galaxy becomes the brightest in a given group. These
results imply that the magnitude gap should not contain any more
information about the halo mass, than that contained in the richness,
in apparent contrast with the results from \citet{Hearin2012}, which
are based on the same group catalog.
In this paper, we attempt to clarify this issue, by predicting the
magnitude gap based upon the conditional luminosity function (CLF),
which describes the halo occupation distribution of galaxies in a halo
of given mass. The CLF and its variation with halo mass has been
calibrated using a wide variety of observations such as the abundance
of galaxies, their clustering and the galaxy-galaxy lensing signal
measured from the Sloan digital sky survey \citep[][SDSS]{York2000}.
We show that if galaxies occupy halos according to the CLF, it is
natural to expect that the magnitude gap depends upon the halo mass at
fixed richness. We also show that the luminosity distributions of the
brightest and the second brightest galaxies are predicted to be
different from those obtained by random draws from the luminosity
function. However, detecting this small difference just using the
luminosity distribution will require sample sizes which are larger
than the one used by \citet{Paranjape2012}.
We also note that in their paper, \citet{Paranjape2012} investigate
the luminosity-weighted marked correlation function and show that its
radial dependence implies that the luminosities of the brightest
galaxies are not a matter a chance. They conclude that their results
falsify the hypothesis that the luminosities of the brightest and the
second brightest galaxy are drawn from the global luminosity function
(i.e., without any dependence on halo mass or environment) and that
the luminosity distribution alone is not an appropriate discriminant
to investigate this issue. In this paper, our results based on the CLF
will strengthen this argument.
This paper is organized as follows. In Section~\ref{sec:clf}, we
describe the CLF framework and give analytical expressions for the
magnitude gap distribution based upon the CLF. In
Section~\ref{sec:sims}, we show the magnitude gap distribution from
Monte Carlo simulations based on the CLF and compare the results to
the analytical expression presented in Section~\ref{sec:clf}. We also
investigate the dependence of the magnitude gap upon the richness in a
group and the assumed CLF parameterisation. In
Section~\ref{sec:lumdist}, we construct mock galaxy catalogs based
upon galaxy luminosities sampled from (a) the CLF and (b) the overall
galaxy luminosity function and compare the luminosity distributions of
the brightest and the second brightest galaxy in these two catalogs.
Finally, we summarize our results in Section~\ref{sec:summary}.
For the purposes of this paper, we adopt the following convention. We
refer to galaxies as centrals (satellites), if they are drawn from the
CLF which is specific to the central (satellite) galaxies (see
Eqs.~\ref{phi_c} and \ref{phi_s}). As our fiducial model, we assume
that central galaxies are also the brightest galaxies in the halo.
Therefore, in the fiducial case, the magnitude gap is the difference
in magnitudes between the central galaxy and the brightest satellite.
However, we will also investigate cases, when the satellites are
allowed to be brighter than the central galaxy \citep[see][for
observational evidence of such a possibility]{Skibba2011}.
\begin{figure*}
\centering
\includegraphics[scale=0.7]{figmock.eps}
\caption{The distribution of the difference in magnitudes between the
brightest and the second brightest galaxy predicted by simulations in
which we populate galaxies in halos of different mass according to the
conditional luminosity function. The solid histograms show the
distribution of magnitude gaps in halos of different mass (shown using
different colors) for our fiducial model in which we assume that the
central galaxy (defined to be drawn from the central CLF) is the
brightest in the halo. The solid curves show the analytical
prediction based on Eq.~\ref{eq:pred}. The dashed histograms show the
corresponding result for the case when we allow satellites to be
brighter than the central galaxy.}
\label{fig:maggap}
\end{figure*}
\section{Conditional luminosity function}
\label{sec:clf}
The conditional luminosity function, denoted by $\Phi(L|M)$, is
defined to be the average number of galaxies of luminosities $L\pm
{\rm d} L/2$ that reside in a halo of mass $M$ \citep{Yang2003}. The
average number of galaxies in a given halo of mass $M$ can be found by
simply integrating the CLF over the luminosities of interest, e.g.,
the average number of galaxies with luminosities between $L_{\rm min}$
and $L_{\rm max}$ that reside in a halo of mass $M$ is given by
\begin{equation}
\avg{N}_M(L_{\rm min},L_{\rm max}) = \int_{L_{\rm min}}^{L_{\rm
max}} \Phi(L|M)
{\rm d} L \,.
\label{eq:ngal}
\end{equation}
For convenience, the CLF is divided in to a central galaxy component
($\Phi_{\rm c}[L|M]$) and a satellite galaxy component
($\Phi_{\rm s}[L|M]$).
We assume that the distribution $\Phi_{\rm c}(L|M)$ is described by a
lognormal distribution with a scatter, $\sigma_{\rm c}$, that is
independent of halo mass, consistent with the findings from studies of
satellite kinematics \citep{More2009b, More2009a, More2011a} and
galaxy group catalogs \citep{Yang2009},
\begin{equation}\label{phi_c}
\Phi_{\rm c}(L|M) \,{{\rm d}}L = {\log\, e \over {\sqrt{2\pi} \, \sigma_{\rm c}}}
{\rm exp}\left[- { {(\log L -\log L_{\rm c} )^2 } \over 2\,\sigma_{\rm c}^2} \right]\,
{{\rm d} L \over L}\,.
\end{equation}
The dependence of the logarithmic mean luminosity, $\log
\tilde{L}_{\rm c}$, on halo mass is given by
\begin{equation}
\log \tilde{L}_{\rm c}(M)=\log \left[ L_0
\frac{(M/M_1)^{\gamma_1}}{\left[1+(M/M_1)\right]^{\gamma_1-\gamma_2}}
\right]\,.
\end{equation}
Four parameters are required to describe this dependence; two
normalization parameters, $L_0$ and $M_1$ and two parameters
$\gamma_1$ and $\gamma_2$ that describe the slope of the
$\tilde{L}_{\rm c}(M)$ relation at the low mass end and the high mass
end, respectively.
The satellite CLF, $\Phi_{\rm s}(L|M)$ is assumed to be a
Schechter-like function,
\begin{equation}
\Phi_{\rm s}(L|M) {\rm d} L=\Phi_{\rm s}^*\left(\frac{L}{L_*}\right)^{\alpha_{\rm s}}\,
\exp\left[-\left( \frac{L}{L_*} \right)^p \right] \,\frac{{\rm d} L}{L_*}.
\label{phi_s}
\end{equation}
Here $L_*(M)$ determines the knee of the satellite CLF and is assumed
to be a factor $f_{\rm s}$ times fainter than $\tilde{L}_{\rm c}(M)$.
Motivated by results from the SDSS group catalog of \citet{Yang2008a},
we set $f_{\rm s} = 0.562$ \citep[see also][]{Reddick2012}, $p=2$, and
assume that the faint-end slope of the satellite CLF is independent of
halo mass. The logarithm of the normalization, $\Phi_{\rm s}^*$ is
assumed to have a quadratic dependence on $\log M$ described by three
free parameters, $b_0$, $b_1$ and $b_2$;
\begin{equation}
\log \Phi_{\rm s}^*=b_0+b_1\,(\log M-12)+b_2\,(\log M-12)^2\,.
\end{equation}
Note that this functional form does not have a physical motivation; it
merely provides an adequate description of the results obtained by
\citet{Yang2008a} from the SDSS galaxy group catalog. The parameters
of the conditional luminosity function and their variation with halo
mass can be constrained by using observations of the abundance, the
clustering and the galaxy-galaxy lensing signal measured from the
Sloan Digital Sky Survey \citep{More2012fisher,More2012,Cacciato2012}.
In what follows, we will use the following values for the CLF
parameters: $L_0 = 10^{9.95} h^{-2} \>{\rm L_{\odot}}$, $M_1 = 10^{11.27}
h^{-1}\>{\rm M_{\odot}}$, $\sigma_{\rm c} = 0.156$, $\gamma_1=2.94$,
$\gamma_2=0.244$, $\alpha_s=-1.17$, $b_0=-1.42$, $b_1=1.82$, and
$b_2=-0.30$, consistent with the results presented in
\citet{Cacciato2012}.
If the luminosities of galaxies in a halo are drawn in an uncorrelated
fashion, the probability that a halo of mass $M$ and richness $N$ has
a magnitude gap, $\Delta m$, or equivalently the luminosity ratio,
$f_L$, between the brightest satellite galaxy and the central galaxy
in a halo of mass $M$ is then given by\footnote{Note that our
expression differs from \citet{Paranjape2012} because in our case
the central galaxy luminosity is assumed to be sampled from a
probability distribution which differs from the distribution from
which the satellites are sampled from.}
\begin{eqnarray}
P(f_L|N,M)=
(N-1)\int_{L_{\rm min}}^{\infty}&& {\rm d} L'\,
\,P_{\rm s}(L'|M)\,P_{\rm c}\left(L'/f_L|M\right)\, \nonumber \\
&&\times\left[ P_{\rm s}(<L'|M) \right]^{(N-2)}\,.
\label{eq:pred}
\end{eqnarray}
Here, the probabilities $P_{\rm x}(L'|M)$ and $P_{\rm x}(<L'|M)$ are defined
such that
\begin{eqnarray}
P_{\rm x}(L'|M)=\frac{\Phi_{\rm x}(L'|M)}{\avg{N_{\rm x}}_M(L_{\rm
min},L_{\rm max})} \\
P_{\rm x}(<L'|M)=\frac{\avg{N_{\rm x}}_M(L_{\rm
min},L')}{\avg{N_{\rm x}}_M(L_{\rm
min},L_{\rm max})}
\end{eqnarray}
where the symbol ${\rm x}$ can either stand for central (${\rm c}$) or
satellite (${\rm s}$). The quantities $\avg{N_{\rm x}}_M$ in the relevant
luminosity intervals can be obtained by replacing $\Phi(L|M)$ by
$\Phi_{\rm x}(L|M)$ inside the integral in Eq.~(\ref{eq:ngal}). For
central galaxies we choose $L_{\rm max}=\infty$. In our model, we
assume that the central galaxies are always the brightest in the halo.
Therefore, in the case of satellites, we use the luminosity of the
central under consideration as the upper limit, i.e., $L_{\rm
max}=L'/f_L$.
The integrals for $\avg{N_{\rm c}}_M$ and $\avg{N_{\rm s}}_M$ can be written
in terms of the complementary error function and the incomplete gamma
function, respectively, such that
\begin{eqnarray}
\avg{N_{\rm c}}_M(L_1,L_2)= \frac{1}{2}&&\left[
{\rm erfc}\left(\frac{\log L_1-\log
\tilde{L}_{\rm c}}{\sqrt{2}\sigma_{\rm c}} \right) \right. \nonumber \\
&& \left. -{\rm erfc}\left(\frac{\log L_2-\log
\tilde{L}_{\rm c}}{\sqrt{2}\sigma_{\rm c}}\right)
\right] \\
\avg{N_{\rm s}}_M(L_1,L_2)= \frac{\Phi_*}{p}&&\left(
\Gamma\left[\frac{\alpha_{\rm s}+1}{p},
\left(\frac{L_1}{L_*}\right)^p\right] \right. \nonumber \\
&&-\left. \Gamma\left[\frac{\alpha_{\rm s}+1}{p},
\left(\frac{L_2}{L_*}\right)^p\right] \right) \,.
\end{eqnarray}
The probability of a halo to have a certain mass, given its richness
and the magnitude gap can be obtained from Eq.~\ref{eq:pred} and the
Bayes' theorem,
\begin{equation}
P(M|f_L,N) = \frac{P(f_L|M,N)P(M|N)}{P(f_L|N)} \,,
\end{equation}
and as expected it depends upon the mass-richness relation via the
probability distribution $P(M|N)$. The probability distribution within
a given bin of richness $[N_1,N_2]$ is given by
\begin{equation}
P(M|f_L,N_1<N<N_2) = \sum_{N=N_1}^{N_2}
\frac{P(f_L|M,N)P(N|M)P(M)}{P(f_L|N)} \,.
\end{equation}
Finally, the distribution of the magnitude gap at fixed halo mass
(without regard to the richness) is given by
\begin{equation}
P(f_L|M) = \sum_{N=2}^{\infty} P(f_L|N,M) P(N|M) \,.
\label{eq:vdb07}
\end{equation}
In what follows, we will also investigate the effect of allowing
satellite galaxies to be brighter than the central galaxies in their
halo. The analytical expressions for predicting the magnitude gap
distribution in this case are presented in the appendix.
\section{Results from Simulated Sample}
\label{sec:sims}
We now demonstrate explicitly that for fixed richness, the CLF
predicts that the magnitude gap in a given group of galaxies depends
upon the mass of the halo in which these galaxies reside. The CLF
varies with halo mass and therefore it is not surprising that this
indeed is the case. For this purpose, we generate Monte-Carlo samples
of galaxies that populate halos according to the CLF in the following
manner.
For a halo of given mass, we first draw the luminosity of its central
galaxy from $\Phi_{\rm cen}(L|M)$, given by Eq.~(\ref{phi_c}). In
order to avoid the existing correlation between halo mass and richness
affecting our conclusions, we fix the number of satellites $N_{\rm
sat}=20$. For each of the $N_{\rm sat}$ satellites, we then draw a
luminosity from the satellite CLF $\Phi_{\rm sat}(L|M)$, given by
Eq.~(\ref{phi_s}). While drawing the satellite luminosities, we adopt
a luminosity threshold, $L_{\rm min}$, corresponding to $^{0.1}M_r -
5\log h = -19$ (here $^{0.1}M_r$ indicates the SDSS $r$-band
magnitude, $K$-corrected to $z=0.1$; see Blanton {et al.~} 2003). As
mentioned before, we also assume that the satellites are always
fainter than the central galaxy drawn for a given halo.
The resultant distribution of the magnitude gaps is shown in
Fig.~\ref{fig:maggap} using a solid histogram for a wide range of halo
masses. It can be clearly seen that the distribution of the magnitude
gaps depends upon halo mass. We use Eq.~\ref{eq:pred} to predict this
distribution analytically and compare it to the results from our
simulations. The result of this analytical calculation are shown as
solid curves in Fig.~\ref{fig:maggap} which agrees well with the
magnitude gap distribution from our simulations.
For low mass halos, the distribution of magnitude gaps is peaked at
zero. However, this peak shifts away from zero as we move to larger
halo masses. This figure establishes that if galaxies populate halos
according to the conditional luminosity function (which is supported
by several observations such as galaxy group catalog, and the
observations of abundance clustering and galaxy galaxy lensing from
SDSS), the magnitude gap should have more information about the halo
mass, in addition to that conveyed by richness alone. At fixed
richness, higher mass halos tend to have larger magnitude gaps, in
agreement with \citet{Hearin2012}. Our result that the magnitude gap
distribution for low mass halos is peaked at zero, and shifts to
larger magnitude gaps for larger mass halos, may appear to be exactly
opposite of the result presented in \citet{vdb2007}. However, note
that the magnitude gap distributions we present are at {\it fixed}
richness and halo mass ($P[f_L|N,M]$), while the magnitude gap
distributions shown in the different panels in fig. 5 of
\citet{vdb2007} correspond to groups with {\it varying} richness
(thus corresponding to $P(f_L|M)$, see Eq.~\ref{eq:vdb07}), due to the
underlying mass-richness relation. We will shortly consider the effect
of changing richness on the magnitude gap distribution.
\begin{figure*}
\centering
\includegraphics[scale=0.8]{figmock5_64.eps}
\caption{Distribution of magnitude gaps (similar to
Fig.~\ref{fig:maggap}) for the case when the number of satellites is
equal to 5 and 64 is shown in the left and right hand panel,
respectively, and assuming that the central galaxy is the brightest in
the halo.
}
\label{fig:maggap_varyn}
\vspace{0.2cm}
\end{figure*}
\begin{figure}
\centering
\includegraphics[scale=0.8]{figvaryp.eps}
\caption{
The dependence of the distribution of magnitude gaps on the parameter
$p$ which governs the exponential cutoff at the bright end in the
conditional luminosity function of satellite galaxies, based on
Eq.~\ref{eq:pred}. The solid line corresponds to the fiducial case
$p=2$ while the dotted and the dot-dashed lines correspond to $p=1$
and $p=3$, respectively. The results for the two different halo masses
are shown by different colors.
}
\label{fig:varyp}
\end{figure}
First, we consider the effect of relaxing the assumption that the
centrals are the brightest in their halos. Note that in this case, the
magnitude gap could be either between the central and the brightest
satellite, or between the two brightest satellites, in case the halo
has two or more satellites brighter than the putative {\it central}
galaxy (see Appendix). The resultant magnitude gap is shown with a dashed histogram.
For low mass halos it can be hardly distinguished from the case when
we demand the central to be the brightest. It can be also seen that
for all halo masses the distribution of magnitude gaps for $\Delta
m_{12}>0.5$ is consistent with the case when the central galaxy is
assumed to be the brightest. This is also expected since the satellite
conditional luminosity function in our model dies exponentially at the
bright end. Therefore if there is a satellite galaxy brighter than
the central galaxy, the magnitude gap is not expected to be extremely
large. Therefore, the few cases when the satellite galaxy is brighter,
cause a small but noticeable increase in the probability distribution
at the small magnitude gap end.
We show the results of varying the number of satellites in
Fig.~\ref{fig:maggap_varyn}. The left hand panel shows the magnitude
gap distribution in halos of different mass, when the number of
satellites equals 5, while the right hand panel shows the same for
number of satellites equal to 64. As the number of satellites
increases (decreases) the magnitude gap tends to be smaller (larger),
as expected (and qualitatively consistent with the results presented
in fig.~5 of \citealt{vdb2007}). The analytical expectation (from Eq.~\ref{eq:pred}) is
shown as a solid curve; it describes the simulation results
accurately, and is shown as a sanity check.
The parameter $p$ governs the exponential cut-off at the bright end of
the satellite conditional luminosity function (see Eq.~\ref{phi_s}).
Based upon the analysis of offsets of the line-of-sight velocities and
projected position of the brightest galaxy relative to the mean of the
other group members, \citet{Skibba2011} concluded that the value of
$p$ ought to be closer to unity instead of the fiducial value of $2$
that we assume \citep[see also][]{Reddick2012}. Therefore, we also
show the effect of varying the parameter $p$ on the magnitude gap
distribution in Fig.~\ref{fig:varyp}. We have verified that the
predictions based upon Eq.~\ref{eq:pred} that we show in the figure
also agree with detailed simulations. As expected, decreasing the
value of $p$ causes the satellite conditional luminosity function to
fall less rapidly at the bright end which results in smaller magnitude
gaps.
Regardless of these details, it is clear that the results from this
section establish that if galaxies populate halos according to the
CLF, then at fixed richness the magnitude gap distribution should
depend upon the halo mass, in a manner which is qualitatively
consistent with \citet{Hearin2012}.
\section{Luminosity distribution of the brightest and second brightest galaxy}
\label{sec:lumdist}
\begin{figure*}
\centering
\includegraphics[scale=0.6]{figclf.eps}
\caption{ Comparison between the luminosity distribution of the
brightest and the second brightest galaxy in the halos present in the
two mock galaxy catalogs are shown in the left and right hand side
panels, respectively. {\it Upper panels:} The solid line shows the luminosity
distribution when galaxies are populated in halos according to random
draws from the CLF (Catalog A), while the dashed histogram shows the
distribution when galaxies are populated according to random draws
from the global luminosity function (Catalog B), maintaining the richness
of halos. {\it Bottom panels:} Same as the upper panels but for a
catalog with sample size comparable to the one used by
\citet{Paranjape2012}. The differences in the distribution from the
two catalogs, as quantified by the p-values from the KS-test are
indicated in each panel.
}
\label{fig:clf_lf}
\vspace{0.7cm}
\end{figure*}
\begin{figure}
\centering
\includegraphics[scale=0.8]{pdist.eps}
\caption{ Distribution of p-values from KS-test carried out on the
luminosity distribution of centrals (solid histogram) and satellites
(dotted histogram) from the two catalogs carried out on 1000 samples
with a sample size comparable to the one used by
\citet{Paranjape2012}.
}
\label{fig:ks}
\end{figure}
We would like to now
investigate the result presented in \citet{Paranjape2012}. They
demonstrate that the luminosity distribution of the brightest and the
second brightest galaxies in the group catalog of \citet{Berlind2006}
is consistent with their expected distribution if the luminosity of
galaxies in each of the groups were randomly sampled from the global
luminosity function of galaxies. To verify their result, we construct
Monte-Carlo galaxy catalogs in which galaxy luminosities are drawn
either from the conditional luminosity function or the overall
luminosity function.
We assume a standard flat $\Lambda$CDM cosmological model with matter
density $\Omega_{\rm m} = 0.27$, baryon density $\Omega_{\rm b} = 0.0469$,
Hubble parameter $h = 0.7$, spectral index $n_{\rm s}=0.95$, and a matter
power spectrum normalization of $\sigma_8 = 0.82$. We sample a large
number of haloes with masses $M>2.7\times10^{13}\>h^{-1}\rm M_\odot$ from the halo mass
function expected for such cosmology using the halo mass function
calibration presented by \citet{Tinker2008}.
We construct a mock galaxy catalog (Catalog A) by populating the dark
matter halos with model galaxies using the CLF with parameters
described in \S\ref{sec:clf}. For each halo, we first draw the
luminosity of its central galaxy from $\Phi_{\rm cen}(L|M)$, given by
Eq.~(\ref{phi_c}). Next, we draw the number of satellite galaxies,
under the assumption that $P(N_{\rm sat}|M)$ follows a Poisson
distribution with mean given by Eq.~(\ref{eq:ngal}) with $\Phi$
replaced by $\Phi_{\rm s}$, and we adopt a luminosity threshold, $L_{\rm
min}$, corresponding to $^{0.1}M_r - 5\log h = -20$, similar to the
threshold adopted by \citet{Paranjape2012}. For each of the $N_{\rm
sat}$ satellites in the halo of question, we then draw a luminosity
from the satellite CLF $\Phi_{\rm sat}(L|M)$, given by
Eq.~(\ref{phi_s}) and maintain the fiducial assumption that all
satellites are fainter than the central galaxy. We restrict ourselves
to halos with richness $N\geq12$, which gives us a sample of 319482
halos.
We construct an alternate catalog of galaxies (Catalog B) where the
luminosities of member galaxies in each halo are drawn from the global
luminosity function, $\Phi(L)$,
\begin{equation}
\Phi(L)=\int \Phi(L|M)n(M){\rm d} M\,,
\end{equation}
where $n(M)$ is the halo mass function. In practice, we randomly
sample (with replacement) from the luminosities of galaxies in
the entire previous catalog, while maintaining the richness of the
group they belong to, thus effectively sampling the galaxy
luminosities in every group from the global luminosity function of
galaxies.
The luminosity distribution of the brightest and the second brightest
galaxies in each halo for both the catalogs are shown in the upper
left and right hand panels of Fig.~\ref{fig:clf_lf}, respectively. The
peak of the magnitude distribution of the brightest galaxies in
Catalog A have a distribution which peaks at a slightly higher value
of luminosity compared to Catalog B. On the other hand, the magnitude
distribution of the second brightest galaxies shows a tail towards
larger luminosities in Catalog B compared to that in Catalog A.
However, the plot also shows that the differences are not that huge,
and detecting such differences in the magnitude distributions will
require a large sample of groups.
From our large sample of Monte-Carlo groups, we now restrict ourselves
to selecting sample sizes ($\sim350$) which are similar to those used
by \citet{Paranjape2012}. We show the results of one of the random
realizations in the bottom panel of Fig.~\ref{fig:clf_lf}. We also
obtain the corresponding cumulative distributions and use the
Kolmogorov-Smirnov (KS) statistic to compare the distributions from
the two catalogs. The p-values from the KS-test are indicated in the
corresponding panels and these values imply that the luminosity
distributions from the two catalogs, when downsampled to the size of
the catalog that \citet{Paranjape2012} use are consistent with each
other. To show that this particular random realization is not a
statistical fluke, in Fig.~\ref{fig:ks}, we show the distribution of
p-values from KS-tests carried out on 1000 random samples similar in
size to the catalog used by \citet{Paranjape2012}. The distribution of
p-values from the KS-test peak at values larger than 0.1, which
highlights the difficulty in distinguishing between the magnitude
distributions from the two catalogs with a small sample size.
This suggests that the group catalog used by \citet{Paranjape2012}
does not have enough number statistics, to detect the difference
between the luminosity distributions of the brightest (or the second
brightest) galaxies in the cases corresponding to the two catalogs.
It is well known that the luminosity of central galaxies depends upon
the halo mass in which they reside. However, it is also known that at
the massive end, the luminosity of central galaxies is a weak function
of halo mass, e.g., based on two point statistics such as the
projected galaxy-galaxy correlation function, its dependence upon
luminosity of galaxies \citep{Zehavi2005,Zheng2007,Zehavi2011}, and
the projected galaxy-matter correlation function probed by the
galaxy-galaxy lensing measurements \citep{Mandelbaum2006,
Cacciato2009, Cacciato2012}, or other probes such as satellite
kinematics \citep{More2009a,More2011a} and subhalo abundance matching
\citep{Moster2010, Behroozi2010, Yang2012}. This coupled with the
fact that the satellite fraction is very low at the bright end, could
be a reason why the differences in the magnitude distribution of the
brightest galaxy between the two different catalogs are not that
large.
We note that this insensitivity of the magnitude distributions to the
underlying halo occupation distribution was also pointed out by
\citet{Paranjape2012}, who suggested the use of two point statistics
such as the luminosity-marked correlation function, in order to
distinguish the two scenarios. Based on the radial dependence of the
marked correlation function, they concluded that the galaxy
luminosities in groups cannot be drawn from a global luminosity
function. However, their analysis does not directly address whether
this is due to a conditional luminosity function which varies with
mass, or a result of environmental dependences of the luminosity
function.
\section{Summary}
\label{sec:summary}
Recently, \citet{Hearin2012}, suggested that the magnitude gap between
the two brightest galaxies in a given halo at fixed richness contains
additional information about the halo mass. Their claim was based upon
an analysis of the galaxy group catalog constructed from the SDSS by
\citet{Berlind2006}. If correct, the magnitude gap information can be
used to reduce the scatter in the mass-richness relation in galaxy
clusters, which is important for the use of optically identified
galaxy clusters as cosmological probes. However, they claimed that
their result is at odds with the results presented in
\citet{Paranjape2012} who investigated the distribution of magnitudes
of the brightest and second brightest galaxies, from the same group
catalog. \citet{Paranjape2012} showed that these magnitude
distributions are consistent with the order statistics of the
luminosities sampled from the overall galaxy luminosity function
independent of halo mass. This would imply that the magnitude gap just
depends upon richness and does not contain extra information about the
halo mass.
We have investigated both these studies within the framework of the
conditional luminosity function (CLF), which describes the halo
occupation statistics of galaxies as a function of halo mass. The CLF
and its variation with halo mass has been calibrated using
observations of the abundance and clustering of galaxies, and the
galaxy-galaxy lensing signal in the SDSS, and is consistent with
results based upon the kinematics of satellite galaxies and abundance
matching. We have shown that if galaxies populate halos according to
the CLF and if the luminosities of central and satellite galaxies are
drawn from their corresponding CLF in an uncorrelated manner, then the
magnitude gap is expected to contain information about halo mass at
fixed richness. We have presented analytical expressions for
predicting the magnitude gap distribution at fixed richness as a
function of halo mass and verified these expressions using Monte-Carlo
simulation of galaxy catalogs populated according to the CLF.
We have shown that the magnitude distribution of the brightest
and the second brightest galaxies show significant differences,
between mock galaxy catalogs constructed by drawing galaxy
luminosities according to the CLF and those constructed according to
the luminosity function of galaxies. However, we have also shown that
these differences cannot be meaningfully detected given the small
sample size that \citet{Paranjape2012} use in their study. This shows
that the magnitude distribution of the brightest and the second
brightest galaxies is not the appropriate statistic to address the
issue of how galaxies populate dark matter halos, at least given the
current sample sizes.
These results suggest that the apparent tension between the two
studies is due to small sample size used by \citet{Paranjape2012}. The
magnitude gap at fixed richness can and does contain extra information
about the mass of a halo. As the sample size of groups grows, even the
luminosity distribution of the brightest and the second brightest
galaxies will also be able to distinguish between the two scenarios.
In this paper, we have provided an analytical model based on the CLF
to predict the magnitude gap distribution. We have also presented how
the magnitude gap distribution can vary as some of our fiducial
assumptions are changed.
It is also important to note that the CLF of galaxies in clusters can
also be directly observed, albeit as a function of optical properties
such as richness \citep[see e.g.,][]{Hansen2009}. Such observations,
when combined with halo mass indicators such as weak lensing, can in
turn be used to better constrain conditional luminosity function at
the high mass end, which will help to constrain our model for the
magnitude gap, at fixed richness.
Finally, we would like to remark that we have assumed that the
luminosities of the galaxies in every group are drawn from the
conditional luminosity function in an uncorrelated fashion. This
assumption, however, needs to be thoroughly tested. For example, if
bright satellite galaxies merge with the central galaxy due to
dynamical friction, the central galaxy will become brighter at fixed
halo mass, and the magnitude gap will correspondingly be larger.
However, simultaneously the richness of the group will decrease (which
will also cause the magnitude gap to be larger just due to the
statistics of random draws from the CLF), thus making it difficult to
disentangle the physical correlation from the effect due to changing
richness.
\section*{Acknowledgments}
I thank Andrew Hearin for discussion of his preliminary results during
my recent visit to the McWilliams Center for Cosmology at the Carnegie
Mellon University. I also thank Nick Gnedin, Andrew Hearin, Frank van
den Bosch and Andrew Zentner for their suggestions and useful comments
on an early draft of this manuscript. This research was supported by
the National Science Foundation under Grant No. NSF PHY-0551142 and a
generous endowment from the Kavli foundation.
\bibliographystyle{apj}
|
{
"redpajama_set_name": "RedPajamaArXiv"
}
| 2,184
|
\section*{OLD VERSION}
\section*{Old Introduction}
Lastly many of widely used datasets such as CIFAR10~\cite{cifar10}, ImageNet~\cite{imagenet} received semi-supervised versions that aim advancing methods for few-shot learning. These methods are intended to bring deep learning closer to applications because usually labeled data is a scarce resource~\cite{review_nlp}.
Arguably for NLP task semi-supervised setting is paid less attention, although it is just as important as for image classification. For example, recognized named entities are important features in other NLP tasks such as dialogue management~\cite{dialogues_ner}. Therefore data-efficient learning which allow recognizing domain-specific entities is important. Present paper discusses semi-supervised task definition for NER and proposes few techniques for its solving adapted from image recognition.
One prominent approach to task of learning from few examples is metric learning~\cite{metric-learning}: recent examples are Matching Networks~\cite{matching} and Prototypical Networks~\cite{prototypical} for image classification. Other approaches do exist: meta-learning~\cite{few_shot_meta}, memory-augmented neural networks~\cite{few_shot_memory}, using side information of improving classification~\cite{fusing}.
Focus of the present paper is metric-learning methods which we apply to NER framed as few-shot learning task.
Our contribution is that we
\begin{enumerate}
\item[$\bullet$] We formulated few-shot NER as Semi-supervised learning task
\item[$\bullet$] We adapted existing method that was used for CV to the few-shot NER-task and compared it with the existed models.
\end{enumerate}
\section{Models}
We implemented three variants of prototypical network models for NER task and compared their performance with that of two baseline models --- bidirectional RNN with CRF, and transfer model.
In our experiments we train separate models for each class, so that all models we report can distinguish between a target class $C$ and ``O''. This was done in order to evaluate the performance of individual classes in isolation. This setting mimics the scenario when we acquire a small number of examples of a new class and need to incorporate it to a pre-trained NER model.
\fixme{Moreover, as we show later, our models have quite small number of conflicts (cases when one word is assigned more than one class). This means that in real-world scenarios our models can be used simultaneously to label a text with all classes.}
\subsection{Data preparation}
\label{section:data_preparation}
Since each of our models predicts only one class, we need to alter the data to fit into this scenario. We separate all the available data into two parts and alter their labellings in different ways. We label the first half of the data only with instances of the target class $C$, other words receive the label ``O''. Note that since the sentences of this subset are not guaranteed to contain entities of class $C$, some sentences can be ``empty'' (i.e. containing only ``O'' labels). This data is our in-domain set. We use it in two ways:
\begin{enumerate}
\item We sample training data from it. To train a model for a particular class $C$ we need $N$ instances of this class. In order to get them we sample sentences from the in-domain set until we acquire $N$ sentences with at least one entity of class $C$. Therefore, the size of our sample depends on frequency of $C$ in the data. For example, if $C$ occurs on average in one of three sentences, the expected size of the sample is $3*N$. We refer to this data as \textbf{in-domain training}.
\item We reserve a part of the in-domain set for testing. Note that we test all models on the same set of sentences, although their labelling differs depending on the target class $C$. We refer to this data as \textbf{test}.
\end{enumerate}
Conversely, the second half of the data is labelled with all classes \textit{except} $C$. It is used as training data in some of our models. We further refer to it as \textbf{out-of-domain training}.
\subsection{RNN Baseline (Base)}
Our baseline NER model was taken from AllenNLP open-source library~\cite{allen}. The model processes sentences in the following way:
\begin{enumerate}
\item words are mapped to pre-trained embeddings (any embeddings, such as GloVe \cite{}, ELMo \cite{}, etc. can be used)
\item additional word embedding are produced using a character-level trainable Recurrent Neural Network (RNN) with LSTM cells,
\item embeddings produced at stages (1) and (2) are concatenated and used as the input to a bi-directional RNN with LSTM cells. This network processes the whole sentence and creates context-dependent representations of every word
\item a feed-forward layer converts hidden states of the RNN from stage (3) to logits that correspond to every label,
\item the logits are used as input to a Conditional Random Field (CRF) \cite{} model that outputs the probability distribution of tags for every word in a sentence.
\end{enumerate}
The model is trained by minimising negative log-likelihood of true tag sequences. We train the model using only \textbf{in-domain training} set. It has to be noted that this baseline is quite strong even in our limited resource setting. However, we found that in our few-shot case CRF does not improve the performance of the model.
\subsection{Baseline prototypical network (BaseProto)}
The architecture of the prototypical network that we use for NER task is very similar to the one of our baseline model. The main change concerns the feed-forward layer. While in the baseline model it transforms RNN hidden states to logits corresponding to labels, in our prototypical network it maps these hidden states to the $M$-dimensional space. The output of the feed-forward layer is then used to construct prototypes from the support set. These prototypes are used to classify examples from the query set as described in section \ref{section:model_theory}.
We train this model on \textbf{in-domain training} data. We divide it into two parts: $N/2$ sentences containing examples of class $C$ are used as support set, and another $N/2$ sentences with instances of $C$ and a half of ``empty'' sentences serve as query set. We use only a half of ``empty'' sentences in order to keep the original proportion of instances of class $C$ in the query set.
\subsection{Regularised prototypical network (Protonet)}
\label{section:protonet}
The architecture and training procedure of this model are the same as those of BaseProto model. The only difference is the data we use for training. At each training step we select the training data using one of two scenarios:
\begin{enumerate}
\item training data is taken from the \textbf{in-domain training} set analogously to BaseProto model,
\item training data is sampled from the \textbf{out-of-domain training} set. The sampling procedure is the same as the one used for in-domain data: we (i) randomly choose the target class $C'$ and (ii) sample sentences until we encounter $N$ sentences with at least one instance of $C'$. Note that these sentences should be labelled only with labels of classes $C'$ or ``O''.
\end{enumerate}
At each step we choose the scenario (1) with probability $p$, or scenario (2) with probability $(1-p)$.
Therefore, throughout training the network is trained to predict our target class (scenario (1)), but occasionally it sees instances of some other classes and constructs prototypes for them (scenario (2)). We suggest that this model can be more efficient than BaseProto, because at training time it is exposed to objects of different classes, and the procedure that maps objects to prototype space becomes more robust. This is also a way to leverage out-of-domain training data.
\subsection{Transfer learning baseline (WarmBase)}
We implemented a common transfer learning model --- use of knowledge about out-of-domain data to label in-domain samples. The training of this model is two-part:
\begin{enumerate}
\item We train our baseline model (``Base'') using \textbf{out-of-domain training} set.
\item We save all weights of the model except CRF and label prediction layer, and train this model again using \textbf{in-domain training} set.
\end{enumerate}
\subsection{Transfer learning + prototypical network (WarmProto)}
In addition to that, we combined prototypical network with pre-training on out-of-domain data. We first train a Base model on the \textbf{out-of-domain training} set. Then, we train a Protonet model as described in section \ref{section:protonet}, but initialise its weights with weights of this pre-trained Base model.
\section{Experimental setup}
\subsection{Dataset}
We conduct all our experiments on the Ontonotes dataset~\cite{ontonotes}. It contains 18 classes (+ $O$ class). The classes are not evenly distributed --- the training set contains over $30.000$ instances of some common classes and less than 100 for some rare classes. The distribution of classes is shown in Figure \ref{fig:ontonotes_stat}. The size of the training set is $150.374$ sentences, the size of the validation set is $19.206$ sentences.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{ontonotes_stat.png}
\caption{Ontonotes dataset --- statistics of classes frequency in the training data.}
\label{fig:ontonotes_stat}
\end{figure}
As the majority of NER datasets, Ontonotes adopts \textbf{BIO} (\textbf{B}eginning, \textbf{I}nside, and \textbf{O}utside) labelling. It provides an extension to class labels, namely, all class labels except ``O'' are prepended with the symbol ``B'' if the corresponding word is the first (or the only) word in an entity, or with the symbol ``I'' otherwise.
\subsection{Data preparation: simulated few-shot experiments}
We use training data of the Ontonotes corpus as \textbf{out-of-domain training} set (dataset labelled with all classes except the target class $C$). In practice, we simply remove labels of the class $C$ from the training data (i.e. replace them with ``O''). The validation set of Ontonotes is used as in-domain data --- we replace all labels except $C$ with the label ``O''. We sample \textbf{in-domain training} set from this transformed validation data as described in \ref{section:data_preparation}. However, the size of sample produced by this sampling procedure can vary significantly, in particular, for rare classes. This can make the results unstable.
In order to reduce the variation we alter the sampling procedure in the following way. We define a function $pr(C)$ that computes the proportion of sentences containing the target class $C$ in the validation set ($pr(C) \in [0, 1]$). Then we sample $N$ sentences containing instances of class $C$ and $\frac{N \times (1-pr(C))}{pr(C)}$ sentences without class $C$. Thus, we keep the proportion instances of class $C$ in our \textbf{in-domain training} dataset.
Therefore, we use up to 200 sentences from the Ontonotes validation set for training, and the rest $19.000$ sentences are reserved for testing.
\subsection{Design of experiments}
\fixme{this is not finished}
We conducted separate experiments for each of 18 Ontonotes classes. For each class we conducted 4 experiments with different random seeds. We report averaged results for each class.
We designed separate experiments for selection of hyper-parameters and the optimal number of training epochs. For that we selected three well-represented classes --- ``GPE'' (geopolitical entity), ``Date'', and ``Org'' (organisation) --- to conduct \textit{validation} experiments on them. We selected training sets as described above, and used the test set (consisting of $19.000$ sentences) to tune hyper-parameters and to stop the training. For other classes we did not perform hyper-parameter tuning. Instead, we used the values acquired in the validation experiments with the three validation classes. In these experiments we used the test set only for computing the performance of trained models.
The motivation of such setup is the following. In many few-shot scenarios researchers report experiments where they train on a small training set, and tune the model on a very large validation set. We argue that this scenario is unrealistic, because if we had a large number of labelled examples in a real-world problem, it would be more efficient to use them for training, and not for validation. On the other hand, a more realistic scenario is to have very limited number of labelled sentences. In that case we could still reserve a part of them for validation. However, we argue that this is also inefficient. If we have 20 examples and decide to train on 10 of them and validate on another 10, this validation will be inaccurate, because 10 examples are not enough to evaluate the performance of a model reliably. Therefore, our evaluation will be very noisy and is likely to result in suboptimal values of hyper-parameters. On the other hand, additional 10 examples can boost the quality of a model. Figure
\begin{figure*}
\includegraphics[scale=0.5]{figure_exp.png}
\caption{F1-performance of algorithms after every training epoch.}
\label{fig:}
\end{figure*}
Figure
In all our experiments we set $N$ to 20. This number of examples
In ``Protonet'' model we set $p$ to 0.5. Therefore, the model is trained on the instances of the target class $C$ half of the steps, and another half of the times it is shown instances of some other randomly chosen class.
\section{Trash}
--------------This is just some meaningless text which is needed to keep latex working -----------------------------------
Our method is targeted at resource-limited setting where some of classes are under-represented in labelled data. We have labelled data for some (but not all) classes and a large number of unlabelled sentences. In order to acquire examples of rare classes we need to label sentences manually. Manual labelling is laborious and time-consuming, so we would like to reduce the number of manually labelled examples that we need for training. In our experiments we mimic this setting by sampling a small number of examples of a particular class from the dataset.
\subsection{Dataset}
We conduct all experiments on the Ontonotes dataset~\cite{ontonotes}. It contains 18 classes (apart from ``O'' class): ...
The size of the corpus is... \fixme{add info about the corpus, train/test partition, maybe table with number of instances for each class}
We prepared the dataset in the following way:
\begin{enumerate}
\item In the training set we replaced all labels of class $C$ with ``O''. This is our query set.
\item In the validation set we replaced all labels \textit{except} those of class $C$ with ``O''
\item We sampled sentences from the validation set until finding $N$ sentences with at least one instance of class $C$. This is our support set.
\item We used the rest of validation set for testing.
\end{enumerate}
If stage (3) is produced as described above, the result of the experiment is very unstable, because we sample a different number of sentences each time. In order to alleviate this problem we alter the sampling procedure. We compute the proportion of sentences with instances of class $C$ in advance, and then sample
\section{Experiments}
\subsection{Baseline}
In this experiment we just apply model from section 5.1 to few-shot NER task. Since it is usual supervised learning, we use only bag \#1, because we can't extract knowledge from bag \#2. We call this experiment "Base".
\subsection{Baseline prototypical network}
Here we use model from section 5.2.
Using the same dataset as in BM (only bag \#1) we change the training procedure as written below.
At each training step we divide our whole dataset into 2 parts: the first one is used to compute prototypes (support set), so our network is trained using objects from the second part (query set).
Let us say we have a bag \#1 for class C so and we labeled 20 sentences each containing example of the class.
We use randomly sampled 10 of them as support set. We sample query set from a union of other 10 sentences and a subset of empty sentences. \textit{Important note}: we choose the size of empty subset to save the true proportion in a query set. According to task construction, empty subset should be 2 times smaller than original set of empty sentences from bag \#1. This remains both in real-world case and in our research case when we computed the proportion in advance.
This training procedure was designed to ensure that our neural architecture that we used for calculation of embeddings is on-par with BM. Thus we can use all the over data to further improve embeddings and enforce such data-driven regularization.
We call this model "BaseProto".
\subsection{Regularized prototypical network}
With the probabilty p we sample batch and make training step using the same procedure as in 5.2. Otherwise we sample a batch from big training dataset using following procedure:
\begin{enumerate}
\item Choose randomly one class from 17 training classes.
\item Draw a batch of sentences that contain this class and annotate only this class. Other tokens should be annotated as 'O'.
\item Draw a batch of any sentences from training dataset and annotate them the same way (we also can draw empty sentences). As you can see, we save the true proportion in this batch.
\item Use the first batch as support set and the second batch as query set.
\end{enumerate}
The probabilty p was choosen as 0.5, it seems that it doesn't affect the result sufficiently.
We call this "Protonet".
\subsection{Transfer learning baseline}
We also provide the results for common transfer learning procedure, that is divided into 2 parts.
Firstly, we train our baseline model using usual training set, but we change labels of test class by ('O').
Then we save all the weights except CRF and tag prediction layer and train our BM from Section 4.1 using that initialization of weights.
The results of this method a provided in the table.
We call this "WarmBase".
\subsection{Transfer learning + prototypical network}
We found that if we initialize our algorithm from Section 4.3 using weights received after warming procedure we made in the section 4.4, we obtain much better results.
We call this model "WarmProto".
\section{Results}
\subsection{Validation}
To measure the quality of our models correctly, we carefully constructed a validation procedure to make it similar to real-world case. As it was mentioned above, we prepared 72 few-shot tasks - 4 tasks correspond to 1 class from Ontonotes dataset. These tasks are separated into 2 groups: 12 validation and 60 test tasks. 12 validations tasks are the tasks that were generated from classes 'GPE', 'DATE' and 'ORG'. This means the following: we tune model, hyperparameters and so on based on F1-performance in 12 validation tasks. We make an assumption that we don't see test datasets of 60 test tasks until final F1-measuring that will appear in our table (excluding the fact that we compute the proportion of classes in train dataset based on test data).
\subsection{Training procedure}
Usually people use 3 datasets within one task: train, validation and test. The model is trained based on train set, it's validated based on validation part to stop training and tune hyperparameters and tested using test part.
As it was mentioned by ~\cite{realeval}, some authors use unrealisticly big validation dataset to test model in few-shot task. For example, it doesn't make sense to train a model using 20 sentences and validate it using 200 sentences, because if 220 sentences are available for us before the final prediction, we can separate them into train and validation in another proportion (for example 120/100), and it will dramatically rise the performance of the model.
Suppose we have only small number of sentences available before the final prediction. Since we have 3 validation tasks and we can use its big test datasets, we can tune hyperparameters and model based on this 3 validation tasks. The only thing that is left is a number of epochs we train out model. Here are 2 options: somehow choose it based on 3 validation tasks or separate our small dataset into 2 parts and make early stopping based on one of it's part.
We constructed the following experiment: we train our algorithms WarmProto, WarmBase, Protonet using whole train dataset and validate it every epoch. We also train all this algorithms using a half of train dataset and validate it every epoch. There are the results of the experiment on Figure 3.
As we can see, it doesn't make sense to use validation dataset, because our models don't overfit over during training. So we decided to choose the number of epoch based on 3 validation tasks. Here is a simple procedure: for every epoch from 1 to n for every validation task we compute mean F1 on its test dataset. Then we choose the epoch for which this F1 is the biggest. Of course we do that procedure for every algorithm separately.
\subsection{Final results}
For given 5 models final results are provided in the table 1. As you can see, even the simplest model "Base" produces adequate results. If we train using the same data using the prototypical network, we get comparable results, that are seen in the column "BaseProto".
If we regularize prototypical network using procedure described in 6.3, we get the results called "Protonet", that are definitely better than the previous 2 models. When we add 2 models that use transfer learning, it significanly rise the performance of the model.
*add more after the last 6 classes*
\begin{table*}[t!]
\begin{center}
\begin{tabular}{|l|ccccccc|}
\hline \bf Class name & \bf Base & \bf BaseProto & \bf WarmProtoZero & \bf Protonet & \bf WarmProto & \bf WarmBase & \bf WarmProto-CRF\\
\hline
\multicolumn{8}{|c|}{\textbf{Validation Classes}} \\
\hline
GPE & 69.75 $\pm$ 9.04 & 69.8 $\pm$ 4.16 & 60.1 $\pm$ 5.56 & 78.4 $\pm$ 1.19 & \textbf{82.02} $\pm$ \textbf{0.42} & 75.8 $\pm$ 6.2 & \underline{80.05} $\pm$ \underline{5.4} \\
DATE & 54.42 $\pm$ 3.64 & 50.75 $\pm$ 5.38 & 11.23 $\pm$ 4.57 & 56.55 $\pm$ 4.2 & \underline{64.68} $\pm$ \underline{3.65} & 56.32 $\pm$ 2.32 & \textbf{65.42} $\pm$ \textbf{2.82} \\
ORG & 42.7 $\pm$ 5.54 & 39.1 $\pm$ 7.5 & 17.18 $\pm$ 3.77 & 56.35 $\pm$ 2.86 & \underline{65.22} $\pm$ \underline{2.83} & 63.45 $\pm$ 1.79 & \textbf{69.2} $\pm$ \textbf{1.2} \\
\hline
\multicolumn{8}{|c|}{\textbf{Test Classes}} \\
\hline
EVENT & 32.33 $\pm$ 4.38 & 24.15 $\pm$ 4.38 & 4.85 $\pm$ 1.88 & 33.95 $\pm$ 5.68 & 34.75 $\pm$ 2.56 & \underline{35.15} $\pm$ \underline{4.04} & \textbf{45.2} $\pm$ \textbf{4.4} \\
LOC & 31.75 $\pm$ 9.68 & 24.0 $\pm$ 5.56 & 16.62 $\pm$ 7.18 & 42.88 $\pm$ 2.03 & \underline{49.05} $\pm$ \underline{1.04} & 40.67 $\pm$ 4.85 & \textbf{52.0} $\pm$ \textbf{4.34} \\
FAC & 36.7 $\pm$ 8.15 & 29.83 $\pm$ 5.58 & 6.93 $\pm$ 0.62 & 41.05 $\pm$ 2.74 & 43.52 $\pm$ 3.09 & \underline{45.4} $\pm$ \underline{3.01} & \textbf{56.85} $\pm$ \textbf{1.52} \\
CARDINAL & 54.82 $\pm$ 1.87 & 53.7 $\pm$ 4.81 & 8.12 $\pm$ 7.92 & 64.05 $\pm$ 1.61 & \underline{69.2} $\pm$ \underline{1.51} & 62.98 $\pm$ 3.5 & \textbf{70.43} $\pm$ \textbf{3.43} \\
QUANTITY & 64.3 $\pm$ 5.06 & 61.72 $\pm$ 4.9 & 12.88 $\pm$ 4.13 & 65.05 $\pm$ 8.64 & 67.97 $\pm$ 2.98 & \underline{69.65} $\pm$ \underline{5.8} & \textbf{76.35} $\pm$ \textbf{3.09} \\
NORP & 73.5 $\pm$ 2.3 & 72.1 $\pm$ 6.0 & 39.92 $\pm$ 10.5 & \underline{83.02} $\pm$ \underline{1.42} & \textbf{84.5} $\pm$ \textbf{1.61} & 79.53 $\pm$ 1.32 & 82.4 $\pm$ 1.15 \\
ORDINAL & 68.97 $\pm$ 6.16 & 71.65 $\pm$ 3.31 & 1.93 $\pm$ 3.25 & \textbf{76.08} $\pm$ \textbf{3.55} & 74.7 $\pm$ 4.94 & 69.77 $\pm$ 4.97 & \underline{75.52} $\pm$ \underline{5.11} \\
WORK\_OF\_ART & \underline{30.48} $\pm$ \underline{1.42} & 27.5 $\pm$ 2.93 & 3.4 $\pm$ 2.37 & 28.0 $\pm$ 3.33 & 25.6 $\pm$ 2.86 & 30.2 $\pm$ 1.27 & \textbf{32.25} $\pm$ \textbf{3.11} \\
PERSON & 70.05 $\pm$ 6.7 & 74.1 $\pm$ 5.32 & 38.88 $\pm$ 7.64 & \underline{80.53} $\pm$ \underline{2.15} & 78.8 $\pm$ 0.26 & 78.03 $\pm$ 3.98 & \textbf{82.32} $\pm$ \textbf{2.51} \\
LANGUAGE & \underline{72.4} $\pm$ \underline{5.53} & 70.78 $\pm$ 2.62 & 4.25 $\pm$ 0.42 & 68.75 $\pm$ 6.36 & 52.72 $\pm$ 11.67 & 65.92 $\pm$ 3.52 & \textbf{75.62} $\pm$ \textbf{7.22} \\
LAW & \underline{58.08} $\pm$ \underline{4.9} & 53.12 $\pm$ 4.54 & 2.4 $\pm$ 1.15 & 48.38 $\pm$ 8.0 & 44.35 $\pm$ 4.31 & \textbf{60.13} $\pm$ \textbf{6.08} & 57.72 $\pm$ 7.06 \\
MONEY & 70.12 $\pm$ 5.19 & 66.05 $\pm$ 1.66 & 12.48 $\pm$ 11.92 & 68.4 $\pm$ 6.3 & \underline{72.12} $\pm$ \underline{5.87} & 68.4 $\pm$ 5.08 & \textbf{79.35} $\pm$ \textbf{3.6} \\
PERCENT & 76.88 $\pm$ 2.93 & 75.55 $\pm$ 4.17 & 1.78 $\pm$ 1.87 & 80.18 $\pm$ 4.81 & \underline{85.65} $\pm$ \underline{3.6} & 79.2 $\pm$ 3.76 & \textbf{88.32} $\pm$ \textbf{2.76} \\
PRODUCT & 43.6 $\pm$ 7.21 & \underline{44.35} $\pm$ \underline{3.48} & 3.95 $\pm$ 0.51 & 39.92 $\pm$ 7.22 & 30.07 $\pm$ 12.73 & 43.4 $\pm$ 8.43 & \textbf{49.32} $\pm$ \textbf{2.92} \\
TIME & 35.93 $\pm$ 6.35 & 35.8 $\pm$ 2.61 & 8.6 $\pm$ 3.21 & 50.15 $\pm$ 5.12 & \underline{53.6} $\pm$ \underline{2.5} & 45.62 $\pm$ 5.64 & \textbf{59.8} $\pm$ \textbf{0.76} \\
\hline
\end{tabular}
\end{center}
\caption{\label{baseline_sl} Final results of experiments. $F_1$ score for different models and different classes. Bold means the best score, underlined means the second place.}
\end{table*}
\begin{table*}[t!]
\begin{center}
\begin{tabular}{|l|cccc|}
\hline \bf Class name & \bf WarmBase + BIO & \bf WarmBase + TO & \bf WarmProto + BIO & \bf WarmProto + TO\\
\hline
\multicolumn{5}{|c|}{\textbf{Validation Classes}} \\
\hline
GPE & 75.8 $\pm$ 6.2 & 74.8 $\pm$ 4.16 & \textbf{83.62} $\pm$ \textbf{3.89} & \underline{82.02} $\pm$ \underline{0.42} \\
DATE & 56.32 $\pm$ 2.32 & 58.02 $\pm$ 2.83 & \underline{61.68} $\pm$ \underline{3.38} & \textbf{64.68} $\pm$ \textbf{3.65} \\
ORG & 63.45 $\pm$ 1.79 & 62.17 $\pm$ 2.9 & \underline{63.75} $\pm$ \underline{2.43} & \textbf{65.22} $\pm$ \textbf{2.83} \\
\hline
\multicolumn{5}{|c|}{\textbf{Test Classes}} \\
\hline
EVENT & \underline{35.15} $\pm$ \underline{4.04} & \textbf{35.4} $\pm$ \textbf{6.04} & 33.85 $\pm$ 5.91 & 34.75 $\pm$ 2.56 \\
LOC & 40.67 $\pm$ 4.85 & 40.08 $\pm$ 2.77 & \textbf{49.1} $\pm$ \textbf{2.4} & \underline{49.05} $\pm$ \underline{1.04} \\
FAC & \underline{45.4} $\pm$ \underline{3.01} & 44.88 $\pm$ 5.82 & \textbf{49.88} $\pm$ \textbf{3.39} & 43.52 $\pm$ 3.09 \\
CARDINAL & 62.98 $\pm$ 3.5 & 63.27 $\pm$ 3.66 & \underline{66.12} $\pm$ \underline{0.43} & \textbf{69.2} $\pm$ \textbf{1.51} \\
QUANTITY & \textbf{69.65} $\pm$ \textbf{5.8} & \underline{69.3} $\pm$ \underline{3.41} & 67.07 $\pm$ 5.11 & 67.97 $\pm$ 2.98 \\
NORP & 79.53 $\pm$ 1.32 & 80.75 $\pm$ 2.38 & \textbf{84.52} $\pm$ \textbf{2.79} & \underline{84.5} $\pm$ \underline{1.61} \\
ORDINAL & 69.77 $\pm$ 4.97 & 70.9 $\pm$ 6.34 & \underline{73.05} $\pm$ \underline{7.14} & \textbf{74.7} $\pm$ \textbf{4.94} \\
WORK\_OF\_ART & \textbf{30.2} $\pm$ \textbf{1.27} & \underline{25.78} $\pm$ \underline{4.07} & 23.48 $\pm$ 5.02 & 25.6 $\pm$ 2.86 \\
PERSON & 78.03 $\pm$ 3.98 & 76.0 $\pm$ 3.12 & \textbf{80.42} $\pm$ \textbf{2.13} & \underline{78.8} $\pm$ \underline{0.26} \\
\hline
\end{tabular}
\end{center}
\caption{\label{baseline_sl} $F_1$ score for models WarmBase and WarmProto with different task constructions: with and without BIO-tagging on the training stage.}
\end{table*}
\subsection{BIO-tagging}
We also checked the hypothesis that BIO-output of model can harm the F1-performance of the algorithm.
It might be possible that it does not make sense to separate B-tags and I-tags and separate classes because of 2 reasons:
\begin{enumerate}
\item There might be too small number of I-tags since not all entities are bigger than 1 word.
\item The words contains B-tags and I-tags may be quite similar and it may be not possible to distinguish them using prototypes, which have high-variance itself because of small size of support set.
\end{enumerate}
As was mentioned above (?), we use chunk-wise F1-performance. If our model produces BIO-tags given sentences, we can extract chunks and then compute F1. But even if our model only produce TO-tags (tag/other), we can also compute F1 using the following procedure: if 'T' stands after 'O', it virtually becomes 'B', otherwise it virtually becomes 'I'. An important point is that we don't change the true chunks in the test dataset, they remain the same. We only change the way how our model produces our chunks, so we don't make out task easier. Moreover, now our model lose the possibility to predict any sequence of chunks, now it can't predict a couple chunks that go one by one.
When we train our model, we change all BIO-tags to TO-tags in both small train dataset and big train dataset (in WarmProto).
WarmProto + TO showed better results on validations tasks so firstly it seemed that TO-tagging is definitely better. However, results on test tasks showed WarmProto + TO is not as good as we thought comparing with WarmProto + BIO.
Here in the table 2 we provide the results for WarmProto and WarmBase with and without BIO-tagging.
As we can see, this BIO-tagging doesn't affect both algorithms significantly.
\section{Conclusions}
...
\section{Introduction}
Named Entity Recognition (NER) is the task of finding entities, such as names of persons, organizations, locations, etc. in unstructured text. These names can be individual words or phrases in a sentence. Therefore, NER is usually interpreted as sequence labelling task. This task is actively used in various information extraction frameworks and is one of the core components of goal-oriented dialogue systems~\cite{dialogues_ner}.
When large labelled datasets are available, the task of NER can be solved with very high quality~\cite{ner_bilstm_sota}. Common benchmarks for testing new NER methods are CoNLL-2003~\cite{conll2003} and Ontonotes~\cite{ontonotes} datasets. They both include enough data to train neural architectures in a supervised learning setting.
However, in real-world applications such abundant datasets are usually not available, especially for low-resourced languages. And even if we have a large labelled corpus, it will inevitably have rare entities that occur not enough times to train a neural network to accurately identify them in text.
This urges the need for developing methods of \textbf{few-shot} NER --- successful identification of entities for which we have extremely small number of labelled examples. One solution would be semi-supervised learning methods, i.e. methods that can yield well-performing models by combining the information from a small set of labelled data and large amounts of unlabelled data which are available for virtually any language. Word embeddings which are trained in the unsupervised manner and are used in the majority of NLP tasks as the input to a neural network, can be considered as incorporation of unlabelled data. However, they only provide general (and not always suitable) information about word meaning, whereas we argue that unsupervised data can be used to extract more task-specific information on the structure of the data.
A prominent approach to the task of learning from few examples is metric learning~\cite{metric-learning}. This term denotes techniques that learn a metric to measure fitness of an object to some class. Metric learning methods, such as matching networks~\cite{matching} and prototypical networks~\cite{prototypical}, showed good results in few-shot learning for image classification. These methods can also be considered as semi-supervised learning methods, because they use the information about structure of common objects in order to label the uncommon ones even without seeing many examples. This approach can even be used as zero-shot learning, i.e. instances of a target class do not need to be presented at training time. Therefore, such model does not need to be re-trained in order to handle new classes. This property is extremely appealing for real-world tasks.
Despite its success in image processing, metric learning has not been widely used in NLP tasks. There, in low-resourced settings researchers more often resort to transfer learning --- use of knowledge from a different domain or language. We apply prototypical networks to the NER task and compare it to commonly used baselines. We test a metric learning technique in a task which often emerges in real-world setting --- identification of instances with extremely small number of labelled examples. We show that although prototypical networks do not succeed in zero-shot NER task, they outperform other models in few-shot case.
The main contributions of the work are the following:
\begin{enumerate
\item we formulate few-shot NER task as a semi-supervised learning task,
\item we modify prototypical network model to enable it to solve NER task, we show that it outperforms a state-of-the-art model in low-resource setting.
\end{enumerate}
The paper is organized as follows. In Section \ref{sec:related_work} we review the existing approaches to few-shot NER task. In Section \ref{sec:prototypical} we describe the prototypical network model and its adaptation to the NER task. Section \ref{sec:few_shot_ner} defines the task and describes the models that we tested to solve it. Section \ref{sec:experimental_setup} contains the description of our experimental setup. We report and analyze our results in Section \ref{sec:results}, and in Section \ref{sec:conclusions} we conclude and provide the directions for future work.
\section{Related work}
\label{sec:related_work}
NER is a well-established task that has been solved in a variety of ways. Nowadays, as in the majority of other NLP tasks, the state of the art is sequence labelling with Recurrent Neural Networks \cite{ner_bilstm_sota,ner_sota}. However, neural architectures are very sensitive to the size of training data and tend to overfit on small datasets. Hence, the latest research on named entities concentrates on handling low-resourced cases, which often occur in narrow domains or low-resourced languages.
The work by Wang et al.~\cite{transfermed} describes feature transform between domains which allows exploiting a large out-of-domain dataset for NER task. Numerous works describe a similar transition between languages: Dandapat and Way~\cite{ner_embedding_transfer} draw correspondences between entities in different languages using a machine translation system, Xie et al.~\cite{crosslang} map words of two languages into a shared vector space. Both these methods allow ``translating'' a big dataset to a new language. Cotterell and Duh~\cite{ner_crosslang_joint} describe a setting where the performance of a NER model for a low-resourced language is improved by training it jointly with a NER model for its well-resourced cognate.
Besides labelled data of a different domain or language, other sources such as ontologies, knowledge bases or heuristics can be used in limited data settings~\cite{ner_ontologies}. Similarly, Tsai and Salakhutdinov~\cite{fusing} improve the image classification accuracy using side information.
Active learning is also a popular choice to reduce the amount of training data. In~\cite{activelearning} the authors apply active learning to few-shot NER task and succeed in improving the performance despite the fact that neural architectures usually require large number of training examples. A somewhat similar approach is self-learning --- training on examples labelled by a model itself. While it is ineffective in many settings,~\cite{self_learn} shows that it can improve results of few-shot NER task when combined with reinforcement learning.
The most closely related work to ours is research by Ma et al.~\cite{fine-grained} where authors learn embeddings for fine-grained NER task with hierarchical labels. They train a model to map hand-crafted and other features of words to embeddings and use mutual information metric to choose a prototype from sets of words. Analogously to this work, we aim at improving performance of NER models on rare classes. However, we do not limit the model to hierarchical classes. It makes our model more flexible and applicable to ``cold start'' problem (problem of extending data with new classes).
Beyond NLP, there also exist multiple approaches to few-shot learning. The already mentioned metric learning technique~\cite{metric-learning} benefits from structure shared by all objects in a task, and creates a representation that shows their differences relevant to the task. Meta-learning~\cite{few_shot_meta} approach operates at two levels: it learns to solve a task from a small number of examples, and at the top level it learns more general regularities about the data across tasks.
In~\cite{few_shot_memory} the authors demonstrate that memory-augmented neural networks, such as Neural Turing Machines, have a capacity to perform meta-learning with few labelled examples.
To the best of our knowledge, prototypical networks~\cite{prototypical} have not been applied to any NLP tasks before. They have a very attractive capacity of introducing new labels to a model without its retraining. None of models described above can perform such zero-shot learning. Although natural language is indeed different from images for which prototypical networks were originally suggested, we decided to test this model on an NLP task to see if it is possible to transfer this property to the text domain.
\section{Prototypical Networks}
\label{sec:prototypical}
\subsection{Model}
\label{section:model_theory}
Work by Snell et al. ~\cite{prototypical} introduces \textit{prototypical network} --- a model that was developed for classification in settings where labelled examples are scarce.
This network is trained so that representations of objects returned by its last but one layer are similar for objects that belong to the same class and diverse for objects of different classes. In other words, this network maps objects to a vector space which allows easy separation of objects into meaningful task-specific clusters.
This feature allows assigning a class to an unseen object even if the number of labelled examples of this class is very limited.
The model is trained on two sets of examples: \textit{support set} and \textit{query set}. Support set consists of $N$ labelled examples: $S$ = \{$(\textbf{x}_1, y_1)$, ...,$(\textbf{x}_N, y_N)$\}, where each $ x_i \in \mathbb{R}^{D} $ is a $D$-dimensional representation of an object and $ y_i \in \{1,2, ..., K\} $ is the label of this object. Query set contains $N'$ labelled objects: $Q$ = \{$(\textbf{x}_1, y_1)$, ...,$(\textbf{x}_{N'}, y_{N'})$\}. Note that this partition is not stable across training steps --- the support and query sets are sampled randomly from the training data at each step.
The training is conducted in two stages:
\begin{enumerate}
\item For each class $k$ we define $ S_k $ --- the set of objects from $S$ that belong this class. We use these sets to compute \textit{prototypes}:
$$ \textbf{c}_k = \frac{1}{\|S_k\|} \sum_i f_{\theta}(\textbf{x}_i), $$
where function $f_{\theta}: \mathbb{R}^D \to \mathbb{R}^M$ maps the input objects to the $M$-dimensional space which is supposed to keep distances between classes. $f_{\theta}$ is usually implemented as a neural network. Its architecture depends on the properties of objects.
Prototype is the averaged representation of objects in a particular class, or the centre of a cluster corresponding to this class in the $M$-dimensional space.
\item We classify objects from $Q$. In order to classify an unseen example \textbf{x}, we map it to the $M$-dimensional space using $f_{\theta}$ and then assign it to a class whose prototype is closer to the representation of \textbf{x}. We compute distance $d(f_{\theta}(\textbf{x}), \textbf{c}_k)$ for every $k$. We denote the measure of similarity of \textbf{x} to $k$ as $l_i = -d(f_{\theta}(\textbf{x}), \textbf{c}_k)$. Finally, we convert these similarities to distribution over classes using $softmax$ function: $softmax(l_1, ..., l_K)$. The model is agnostic about the distance function. Following ~\cite{prototypical}, we use squared Euclidian distance.
\end{enumerate}
The model is trained by optimising cross-entropy loss:
$$ L(\textbf{y}, \hat{\textbf{y}}) = - \sum_{i=1}^{N'} y_i \hspace{1mm} log \hspace{1mm} \hat{y}_i, $$
where $\hat{y}_i = softmax(l_1, ..., l_K)$.
\subsection{Adaptation to NER}
In order to apply prototypical networks to NER task, we made the following changes to the baseline model described above:
\paragraph{\textbf{Sequential vs independent objects}} Image dataset contains separate images that are not related to each other.
In contrast, in NLP tasks we often need to classify words which are grouped in sequences. Words in a sentence influence each other, and when labelling a word we should take into account labels of neighbouring words. Considering a word in isolation does not make sense in such setting. Nevertheless, in NER task we need to classify separate words, so following the description of the model from the previous section, we should assemble the support set $S$ from pairs ($w_i$, $y_i$), where $w_i$ is a word and $y_i$ is its label. However, this division can break the sentence structure, if some words in a sentence are assigned to the support set and others to query set. In order to prevent such situations we form our support and query sets from whole sentences.
\paragraph{\textbf{Class ``no entity''}} In NER task we have class \textit{O} that is used to denote words which are not named entities. It cannot be interpreted in the same way as other classes, because objects of class \textit{O} do not need to (and should not) be close to each other in a vector space. In order to mitigate this problem we modified our prediction function $softmax(l_1, ..., l_K)$. We replaced the similarity score $l_O$ for the \textit{O} class with a scalar $b_{O}$, and used the following form of softmax: $softmax(l_1, ..., l_{K-1}, b_{O})$. $b_O$ is trained along with parameters $\theta$ of the model. The initial value of $b_O$ is a hyper-parameter.
\paragraph{\textbf{In-domain and out-of-domain training}}
In original paper describing prototypical networks~\cite{prototypical} they were applied to the setting of zero-shot learning. Weights of the model are updated during training phase, but once training is over instances from test classes are only used for calculation of prototypes. Given it is usually easy to obtain few labelled examples, we modified original \textit{zero-shot} setting to \textit{few-shot} setting: we use a small number of available labelled examples of the target class during training phase. We denote this data as \textbf{in-domain} training set, and data for other classes is referred to as \textbf{out-of-domain} training. Here \textit{domains} in the traditional NLP sense are the same --- texts come from the same sources and word distributions are similar. Here we refer to discrepancy between sets of named entity classes that they use.
\section{Few-shot NER}
\label{sec:few_shot_ner}
\subsection{Task formulation}
NER is a sequence labelling task, where each word in a sentence is assigned either one of entity classes (``Person'', ``Location'', ``Organisation'', etc.) or \textit{O} class if it is not one of the desired entities.
While common classes are usually identified correctly by the existing methods, we target particularly at rare classes for which we have only a very limited number of labelled examples. To increase the quality of their identification, we use the information from other classes. Therefore, we train a separate model for every class in order to see the performance on each of them in isolation. Such formulation can also be considered as a way to tackle the ``cold start'' problem --- adapting a NER model to label entities of a new class with very little number of entities.
As it was described above, we have two training sets: \textit{out-of-domain} and \textit{in-domain}. Since we simulate the ``cold start'' problem in our experiments, these datasets have the following characteristics. The \textit{out-of-domain} data is quite large and labelled with a number of named entity classes except the target class $C$ --- this is the initially available data. The \textit{in-domain} dataset is very small and contains labels only for the class $C$ --- this is the new data which we acquire afterwards and which we would like to infuse into the model.
In order to train a realistic model we need to keep the frequency of $C$ in our \textit{in-domain} training data similar to the frequency of this class in general distribution. Therefore, if instances of this class occur on average in one of three sentences, then our \textit{in-domain} training data has to contain sentences with no instances of class $C$ (``empty'' sentences), and their number should be twice as larger as the number of sentences with $C$. In practice this can be achieved by sampling sentences from unlabelled data until we obtain the needed number of instances of class $C$.
\subsection{Basic models}
\label{sec:basic_models}
We use two main architectures --- the commonly used RNN baseline and a prototypical network adapted for the NER task. Other models we test use these two models as building blocks.
\paragraph{\textbf{RNN + CRF model}}
As our baseline we use a NER model implemented in AllenNLP open-source library~\cite{allen}. The model processes sentences in the following way:
\begin{enumerate}
\item words are mapped to pre-trained embeddings (any embeddings, such as GloVe~\cite{glove}, ELMo~\cite{elmo}, etc. can be used)
\item additional word embedding are produced using a character-level trainable Recurrent Neural Network (RNN) with LSTM cells,
\item embeddings produced at stages (1) and (2) are concatenated and used as the input to a bi-directional RNN with LSTM cells. This network processes the whole sentence and creates context-dependent representations of every word
\item a feed-forward layer converts hidden states of the RNN from stage (3) to logits that correspond to every label,
\item the logits are used as input to a Conditional Random Field (CRF)~\cite{crf} model that outputs the probability distribution of tags for every word in a sentence.
\end{enumerate}
The model is trained by minimizing negative log-likelihood of true tag sequences. It has to be noted that this baseline is quite reasonable even in our limited resource setting.
\paragraph{\textbf{Prototypical Network}}
The architecture of the prototypical network that we use for NER task is very similar to the one of our baseline model. The main change concerns the feed-forward layer. While in the baseline model it transforms RNN hidden states to logits corresponding to labels, in our prototypical network it maps these hidden states to the $M$-dimensional space. The output of the feed-forward layer is then used to construct prototypes from the support set. These prototypes are used to classify examples from the query set as described in section \ref{section:model_theory}.
We try variants of this model both with and without the CRF layer.
The architecture of the prototypical network model is provided in Figure \ref{fig:subim1}.
\begin{figure}[h]
\includegraphics[width=1.0\linewidth]{Proto.png}
\caption{Model architecture}
\label{fig:subim1}
\end{figure}
\subsection{Experiments}
We perform experiments with a number of different models. We test the different variants of prototypical network model and compare them with RNN baseline. In addition to that, we try transfer learning scenario and combine it with these models. Here we provide the description of all models we test.
\paragraph{\textbf{RNN Baseline (Base)}}
This is the baseline RNN model described above. We train it using only \textit{in-domain} training set.
\paragraph{\textbf{Baseline prototypical network (BaseProto)}}
This is the baseline prototypical network model. We train it on \textit{in-domain} training data. We divide it into two parts. If the \textit{in-domain} set contains $N$ sentences with instances of the target class $C$ and $V$ sentences ``empty'' sentences, we use $N/2$ sentences with instances of $C$ as support set, and other $N/2$ such sentences along with $V/2$ ``empty'' sentences serve as query set. We use only half of ``empty'' sentences to keep the original frequency of class $C$ in the query set. Note that the partition is new for every training iteration.
\paragraph{\textbf{Regularised prototypical network (Protonet)}}
The architecture and training procedure of this model are the same as those of \textit{BaseProto} model. The only difference is the data we use for training. At each training step we select the training data using one of two scenarios:
\begin{enumerate}
\item we use \textit{in-domain} training data, i.e. data labelled with the target class $C$ (this setup is the same as the one we use in \textit{BaseProto}),
\item we change the target class: we (i) randomly select a new target class $C'$ ($C' \neq C$), (ii) sample sentences from \textit{out-of-domain} dataset until we find $N$ instances of $C'$, and (iii) re-label the sampled sentences so that they contain only labels of class $C'$.
\end{enumerate}
At each step we choose the scenario (1) with probability $p$, or scenario (2) with probability $(1-p)$.
Therefore, throughout training the network is trained to predict our target class (scenario (1)), but occasionally it sees instances of some other classes and constructs prototypes for them (scenario (2)). We suggest that this model can be more efficient than BaseProto, because at training time it is exposed to objects of different classes, and the procedure that maps objects to prototype space becomes more robust. This is also a way to leverage out-of-domain training data.
\paragraph{\textbf{Transfer learning baseline (WarmBase)}}
We test a common transfer learning model --- use of knowledge about out-of-domain data to label in-domain samples. The training of this model is two-part:
\begin{enumerate}
\item We train our baseline RNN+CRF model using \textit{out-of-domain} training set.
\item We save all weights of the model except CRF and label prediction layer, and train this model again using \textit{in-domain} training set.
\end{enumerate}
\paragraph{\textbf{Transfer learning + prototypical network (WarmProto)}}
In addition to that, we combine prototypical network with pre-training on out-of-domain data. We first train a Base model on the \textit{out-of-domain} training set. Then, we train a Protonet model as described above, but initialise its weights with weights of this pre-trained Base model.
\paragraph{\textbf{WarmProto-CRF}}
This is the same prototypical network pre-trained on \textit{out-of-domain} data, but it is extended with a CRF layer on top of logits as described in section \ref{sec:basic_models}.
\paragraph{\textbf{WarmProto for zero-shot training (WarmProtoZero)}}
We train the same WarmProto model, but with the probability $p$ set to 0. In other words, our model does not see instances of the target class at training time. It learns to produce representations on objects of other classes. Then, at test time, it is given $N$ entities of the target class as support set, and words in test sentences are assigned to either this class or \textit{O} class based on their similarity to this prototype. This is the only zero-shot learning scenario that we test.
\section{Experimental setup}
\label{sec:experimental_setup}
\subsection{Dataset}
We conduct all our experiments on the Ontonotes dataset~\cite{ontonotes}. It contains 18 classes (+ $O$ class). The classes are not evenly distributed --- the training set contains over $30.000$ instances of some common classes and less than 100 for rare classes. The distribution of classes is shown in Figure \ref{fig:ontonotes_stat}. The size of the training set is $150.374$ sentences, the size of the validation set is $19.206$ sentences.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{ontonotes_stat.png}
\caption{Ontonotes dataset --- statistics of classes frequency in the training data.}
\label{fig:ontonotes_stat}
\end{figure}
As the majority of NER datasets, Ontonotes adopts \textbf{BIO} (\textbf{B}eginning, \textbf{I}nside, and \textbf{O}utside) labelling. It provides an extension to class labels, namely, all class labels except \textit{O} are prepended with the symbol ``B'' if the corresponding word is the first (or the only) word in an entity, or with the symbol ``I'' otherwise.
\subsection{Data preparation: simulated few-shot experiments}
We use the Ontonotes training data as \textit{out-of-domain} training set (where applicable) and sample \textit{in-domain} examples from the validation set. In our formulation, the \textit{in-domain} data is the data where only instances of a target class $C$ (class we want to predict) are labelled. Conversely, the \textit{out-of-domain} data contains instances of some set of classes, but not of the target class. Therefore, we prepare our data by replacing all labels \textit{B-C} and \textit{I-C} with \textit{O} in the training data, and in the validation data we replace all labels \textit{except} \textit{B-C} and \textit{I-C} with \textit{O}. Note that since we run the experiments for each of 18 Ontonotes classes, we perform this re-labelling for every experiment.
The validation data is still too large for our low-resourced scenario, so we use only a part of it for training. We sample our \textit{in-domain} training data as follows. We randomly select sentences from the re-labelled validation set until we obtain $N$ sentences with at least one instance of the class $C$. Note that sentences of the validation set are not guaranteed to have instances of $C$, so our training data can have some ``empty'' sentences, i.e. sentences where all words are labelled with \textit{O}. This sampling procedure allows keeping the proportion of instances of class \textit{C} close to the one of the general distribution.
In our preliminary experiments we noticed that such sampling procedure leads to large variation in the final scores, because the size of \textit{in-domain} training data can vary significantly. In order to reduce this variation we alter the sampling procedure. We define a function $pr(C)$ which computes the proportion of labels of a class $C$ in the validation set ($pr(C) \in [0, 1]$). Then we sample $N$ sentences containing instances of class $C$ and $\frac{N \times (1-pr(C))}{pr(C)}$ sentences without class $C$. Thus, we keep the proportion instances of class $C$ in our \textit{in-domain} training dataset equal to that of the validation set. We use the same procedure when sampling training examples from \textit{out-of-domain} data for \textit{Protonet} model.
\subsection{Design of experiments}
We conduct separate experiments for each of 18 Ontonotes classes. For each class we conducted 4 experiments with different random seeds. We report averaged results for each class.
We design separate experiments for selection of hyper-parameters and the optimal number of training epochs. For that we selected three well-represented classes --- ``GPE'' (geopolitical entity), ``Date'', and ``Org'' (organization) --- to conduct \textit{validation} experiments on them. We selected training sets as described above, and used the test set (consisting of $\approx 19.000$ sentences) to tune hyper-parameters and to stop the training. For other classes we did not perform hyper-parameter tuning. Instead, we used the values acquired in the validation experiments with the three validation classes. In these experiments we used the test set only for computing the performance of trained models.
The motivation of such setup is the following. In many few-shot scenarios researchers report experiments where they train on a small training set, and tune the model on a very large validation set. We argue that this scenario is unrealistic, because if we had a large number of labelled examples in a real-world problem, it would be more efficient to use them for training, and not for validation. On the other hand, a more realistic scenario is to have a very limited number of labelled sentences overall. In that case we could still reserve a part of them for validation. However, we argue that this is also inefficient. If we have 20 examples and decide to train on 10 of them and validate on another 10, this validation will be inaccurate, because 10 examples are not enough to evaluate the performance of a model reliably. Therefore, our evaluation will be very noisy and is likely to result in sub-optimal values of hyper-parameters. On the other hand, additional 10 examples can boost the quality of a model, as it can be seen in Figure \ref{fig:10_vs_20}. Therefore, we assume that optimal hyperparameters are the same for all labels, and use the values we found in validation experiments.
\begin{table*}[ht!]
\begin{center}
\begin{tabular}{|l|ccccccc|}
\hline \bf Class name & \bf Base & \bf BaseProto & \bf WarmProtoZero & \bf Protonet & \bf WarmProto & \bf WarmBase & \bf WarmProto-CRF\\
\hline
\multicolumn{8}{|c|}{\textbf{Validation Classes}} \\
\hline
GPE & 69.75 $\pm$ 9.04 & 69.8 $\pm$ 4.16 & 60.1 $\pm$ 5.56 & 78.4 $\pm$ 1.19 & \textbf{83.62} $\pm$ \textbf{3.89} & 75.8 $\pm$ 6.2 & \underline{80.05} $\pm$ \underline{5.4} \\
DATE & 54.42 $\pm$ 3.64 & 50.75 $\pm$ 5.38 & 11.23 $\pm$ 4.57 & 56.55 $\pm$ 4.2 & \underline{61.68} $\pm$ \underline{3.38} & 56.32 $\pm$ 2.32 & \textbf{65.42} $\pm$ \textbf{2.82} \\
ORG & 42.7 $\pm$ 5.54 & 39.1 $\pm$ 7.5 & 17.18 $\pm$ 3.77 & 56.35 $\pm$ 2.86 & \underline{63.75} $\pm$ \underline{2.43} & 63.45 $\pm$ 1.79 & \textbf{69.2} $\pm$ \textbf{1.2} \\
\hline
\multicolumn{8}{|c|}{\textbf{Test Classes}} \\
\hline
EVENT & 32.33 $\pm$ 4.38 & 24.15 $\pm$ 4.38 & 4.85 $\pm$ 1.88 & 33.95 $\pm$ 5.68 & 33.85 $\pm$ 5.91 & \underline{35.15} $\pm$ \underline{4.04} & \textbf{45.2} $\pm$ \textbf{4.4} \\
LOC & 31.75 $\pm$ 9.68 & 24.0 $\pm$ 5.56 & 16.62 $\pm$ 7.18 & 42.88 $\pm$ 2.03 & \underline{49.1} $\pm$ \underline{2.4} & 40.67 $\pm$ 4.85 & \textbf{52.0} $\pm$ \textbf{4.34} \\
FAC & 36.7 $\pm$ 8.15 & 29.83 $\pm$ 5.58 & 6.93 $\pm$ 0.62 & 41.05 $\pm$ 2.74 & \underline{49.88} $\pm$ \underline{3.39} & 45.4 $\pm$ 3.01 & \textbf{56.85} $\pm$ \textbf{1.52} \\
CARDINAL & 54.82 $\pm$ 1.87 & 53.7 $\pm$ 4.81 & 8.12 $\pm$ 7.92 & 64.05 $\pm$ 1.61 & \underline{66.12} $\pm$ \underline{0.43} & 62.98 $\pm$ 3.5 & \textbf{70.43} $\pm$ \textbf{3.43} \\
QUANTITY & 64.3 $\pm$ 5.06 & 61.72 $\pm$ 4.9 & 12.88 $\pm$ 4.13 & 65.05 $\pm$ 8.64 & 67.07 $\pm$ 5.11 & \underline{69.65} $\pm$ \underline{5.8} & \textbf{76.35} $\pm$ \textbf{3.09} \\
NORP & 73.5 $\pm$ 2.3 & 72.1 $\pm$ 6.0 & 39.92 $\pm$ 10.5 & \underline{83.02} $\pm$ \underline{1.42} & \textbf{84.52} $\pm$ \textbf{2.79} & 79.53 $\pm$ 1.32 & 82.4 $\pm$ 1.15 \\
ORDINAL & 68.97 $\pm$ 6.16 & 71.65 $\pm$ 3.31 & 1.93 $\pm$ 3.25 & \textbf{76.08} $\pm$ \textbf{3.55} & 73.05 $\pm$ 7.14 & 69.77 $\pm$ 4.97 & \underline{75.52} $\pm$ \underline{5.11} \\
WORK\_OF\_ART & \underline{30.48} $\pm$ \underline{1.42} & 27.5 $\pm$ 2.93 & 3.4 $\pm$ 2.37 & 28.0 $\pm$ 3.33 & 23.48 $\pm$ 5.02 & 30.2 $\pm$ 1.27 & \textbf{32.25} $\pm$ \textbf{3.11} \\
PERSON & 70.05 $\pm$ 6.7 & 74.1 $\pm$ 5.32 & 38.88 $\pm$ 7.64 & \underline{80.53} $\pm$ \underline{2.15} & 80.42 $\pm$ 2.13 & 78.03 $\pm$ 3.98 & \textbf{82.32} $\pm$ \textbf{2.51} \\
LANGUAGE & \underline{72.4} $\pm$ \underline{5.53} & 70.78 $\pm$ 2.62 & 4.25 $\pm$ 0.42 & 68.75 $\pm$ 6.36 & 48.77 $\pm$ 17.42 & 65.92 $\pm$ 3.52 & \textbf{75.62} $\pm$ \textbf{7.22} \\
LAW & \underline{58.08} $\pm$ \underline{4.9} & 53.12 $\pm$ 4.54 & 2.4 $\pm$ 1.15 & 48.38 $\pm$ 8.0 & 50.15 $\pm$ 7.56 & \textbf{60.13} $\pm$ \textbf{6.08} & 57.72 $\pm$ 7.06 \\
MONEY & 70.12 $\pm$ 5.19 & 66.05 $\pm$ 1.66 & 12.48 $\pm$ 11.92 & 68.4 $\pm$ 6.3 & \underline{73.68} $\pm$ \underline{4.72} & 68.4 $\pm$ 5.08 & \textbf{79.35} $\pm$ \textbf{3.6} \\
PERCENT & 76.88 $\pm$ 2.93 & 75.55 $\pm$ 4.17 & 1.82 $\pm$ 1.81 & 80.18 $\pm$ 4.81 & \underline{85.3} $\pm$ \underline{3.68} & 79.2 $\pm$ 3.76 & \textbf{88.32} $\pm$ \textbf{2.76} \\
PRODUCT & 43.6 $\pm$ 7.21 & \underline{44.35} $\pm$ \underline{3.48} & 3.75 $\pm$ 0.58 & 39.92 $\pm$ 7.22 & 35.1 $\pm$ 9.35 & 43.4 $\pm$ 8.43 & \textbf{49.32} $\pm$ \textbf{2.92} \\
TIME & 35.93 $\pm$ 6.35 & 35.8 $\pm$ 2.61 & 8.02 $\pm$ 3.05 & 50.15 $\pm$ 5.12 & \underline{56.6} $\pm$ \underline{2.28} & 45.62 $\pm$ 5.64 & \textbf{59.8} $\pm$ \textbf{0.76} \\
\hline
\end{tabular}
\end{center}
\caption{Results of experiments in terms of chunk-based $F_1$-score. Numbers in bold mean the best score for a particular class, underlined numbers are the second best results. Numbers are averaged across 4 runs with standard deviations calculated.}
\label{table:main_results}
\end{table*}
\begin{figure*}[ht!]
\includegraphics[scale=0.5]{figure_exp.png}
\caption{Performance of models trained on 10 and 20 sentences.}
\label{fig:10_vs_20}
\end{figure*}
\subsection{Model parameters}
In all our experiments we set $N$ (number of instances of the target class in \textit{in-domain} training data) to 20. This number of examples is small enough and can be easily labelled by hand. At the same time, it produces models of reasonable quality. Figure \ref{fig:10_vs_20} compares the performance of models trained on 10 and 20 examples. We see the significant boost in performance for the latter case. Moreover, in the rightmost plot the learning curve for the smaller dataset goes down after the 40-th epoch, which does not happen when the larger dataset is used. This shows that $N=20$ is a reasonable trade-off between model performance and cost of labelling.
In the \textit{Protonet} model we set $p$ to 0.5. Therefore, the model is trained on the instances of the target class $C$ half of the steps, and another half of the times it is shown instances of some other randomly chosen class.
We optimize all models with Adam optimizer in {\tt pytorch} implementation. Base and WarmBase methods use batches of 10 sentences during in-domain training. We train out-of-domain RNN baseline (warm-up for WarmBase and WarmProto* models) using batch of size 32. All models based on prototypical network use batches of size 100 --- 40 in support set and 60 in query set.
We also use L2-regularization with a multiplier 0.1. All models are evaluated in terms of chunk-based $F_1$-score for the target class \cite{conll2003}.
The open-source implementation of the models is available online.\footnote{\url{https://github.com/Fritz449/ProtoNER}}
\section{Results}
\label{sec:results}
\subsection{Performance of models}
We selected hyperparameters in the validation experiment and then used them when training models for other classes. We use the following values. The initial value of $b_O$ (logit for the \textit{O} class) is set to -4. We use dropout with rate 0.5 in LSTM cells for all our experiments. The dimensionality of embeddings space $M$ for all models based on prototypical network is set to 64. For all models we use learning rate of $3e-3$.
Table \ref{table:main_results} shows the results of our experiments for all classes and methods. It is clearly seen that 20 sentences is not enough to train a baseline RNN+CRF model. Moreover, we see that the baseline prototypical network (\textit{BaseProto}) performs closely to the RNN baseline. This shows that 20 instances of the target class is also not enough to construct a reliable prototype.
On the other hand, if a prototypical network is occasionally exposed to instances of other classes, as it is done in \textit{Protonet} model, then the prototypes it constructs are better at identifying the target class. \textit{Protonet} shows better results than \textit{Base} and \textit{BaseProto} on many classes.
The transfer learning baseline (\textit{WarmBase}) achieves results which are comparable with those of \textit{Protonet}. This allows to conclude that the information on structure of objects of other classes is helpful even for conventional RNN baseline, and pre-training on out-of-domain data is useful.
Prototypical network pre-trained on out-of-domain data (\textit{WarmProto}) beats \textit{WarmBase} and \textit{Protonet} in more than half of experiments. Analogously to transfer learning baseline, it benefits from the use of out-of-domain data. Unfortunately, such model is not suitable for zero-shot learning --- the \textit{WarmProtoZero} model performs below any other models including the RNN baseline.
Finally, if we enable CRF layer of \textit{WarmProto} model, the performance grows sharply. As we can see, \textit{WarmProto-CRF} beats all other models in almost all experiments.
Thus, prototypical network is more effective than RNN baseline in the setting where in-domain data is extremely limited.
\subsection{Influence of BIO labelling}
When such a small number of entities is available, the BIO labelling used in NER datasets can harm the performance of models. First of all, the majority of entities can contain only one word, and the number of \textit{I} tags can be too small if there are only 20 entities overall. This can decrease the quality of predicting these tags dramatically. Another potential problem is that words labelled with \textit{B} and \textit{I} tags can be similar, and a model can have difficulties distinguishing between them using prototypes. Again, this effect can be amplified by the fact that very small number of instances is used for training, and prototypes themselves have high variance.
\begin{table*}[ht!]
\begin{center}
\begin{tabular}{|l|cccc|}
\hline \bf Class name & \bf WarmBase + BIO & \bf WarmBase + TO & \bf WarmProto + BIO & \bf WarmProto + TO\\
\hline
\multicolumn{5}{|c|}{\textbf{Validation Classes}} \\
\hline
GPE & 75.8 $\pm$ 6.2 & 74.8 $\pm$ 4.16 & \textbf{83.62} $\pm$ \textbf{3.89} & \underline{82.02} $\pm$ \underline{0.42} \\
DATE & 56.32 $\pm$ 2.32 & 58.02 $\pm$ 2.83 & \underline{61.68} $\pm$ \underline{3.38} & \textbf{64.68} $\pm$ \textbf{3.65} \\
ORG & 63.45 $\pm$ 1.79 & 62.17 $\pm$ 2.9 & \underline{63.75} $\pm$ \underline{2.43} & \textbf{65.22} $\pm$ \textbf{2.83} \\
\hline
\multicolumn{5}{|c|}{\textbf{Test Classes}} \\
\hline
EVENT & \underline{35.15} $\pm$ \underline{4.04} & \textbf{35.4} $\pm$ \textbf{6.04} & 33.85 $\pm$ 5.91 & 34.75 $\pm$ 2.56 \\
LOC & 40.67 $\pm$ 4.85 & 40.08 $\pm$ 2.77 & \textbf{49.1} $\pm$ \textbf{2.4} & \underline{49.05} $\pm$ \underline{1.04} \\
FAC & \underline{45.4} $\pm$ \underline{3.01} & 44.88 $\pm$ 5.82 & \textbf{49.88} $\pm$ \textbf{3.39} & 43.52 $\pm$ 3.09 \\
CARDINAL & 62.98 $\pm$ 3.5 & 63.27 $\pm$ 3.66 & \underline{66.12} $\pm$ \underline{0.43} & \textbf{69.2} $\pm$ \textbf{1.51} \\
QUANTITY & \textbf{69.65} $\pm$ \textbf{5.8} & \underline{69.3} $\pm$ \underline{3.41} & 67.07 $\pm$ 5.11 & 67.97 $\pm$ 2.98 \\
NORP & 79.53 $\pm$ 1.32 & 80.75 $\pm$ 2.38 & \textbf{84.52} $\pm$ \textbf{2.79} & \underline{84.5} $\pm$ \underline{1.61} \\
ORDINAL & 69.77 $\pm$ 4.97 & 70.9 $\pm$ 6.34 & \underline{73.05} $\pm$ \underline{7.14} & \textbf{74.7} $\pm$ \textbf{4.94} \\
WORK\_OF\_ART & \textbf{30.2} $\pm$ \textbf{1.27} & \underline{25.78} $\pm$ \underline{4.07} & 23.48 $\pm$ 5.02 & 25.6 $\pm$ 2.86 \\
PERSON & 78.03 $\pm$ 3.98 & 76.0 $\pm$ 3.12 & \textbf{80.42} $\pm$ \textbf{2.13} & \underline{78.8} $\pm$ \underline{0.26} \\
\hline
\end{tabular}
\end{center}
\caption{$F_1$-scores for models WarmBase and WarmProto trained on data with and without BIO labelling. Numbers in bold mean the best score for a particular class, underlined numbers are the second best results. Numbers are averaged across 4 runs with standard deviations calculated.}
\label{table:bio_tagging}
\end{table*}
In order to check if these problems hamper the performance of our models, we performed another set of experiments. We removed BIO tagging --- for the target class $C$ we replaced both \textit{B-C} and \textit{I-C} with \textit{C}. This \textbf{TO} (tag/other) tagging reduced sparsity in the training data. We did so for both in-domain and out-of-domain training sets. The test set remained the same, because chunk-based $F_1$-score we use for evaluation is not affected by differences between BIO and TO labelling, it always considers a named entity as a whole.
Table \ref{table:bio_tagging} shows the result of WarmBase and WarmProto models trained on BIO-labelled and TO-labelled data.
It turns out that in the majority of cases the differences between $F_1$-scores of these models are not significant. Therefore, BIO labelling does not affect our models.
\section{Conclusions}
\label{sec:conclusions}
In this work we suggested solving the task of NER with \textit{metric learning} technique actively used in other Machine Learning tasks but rarely applied to NLP. We adapted a metric learning method, namely, prototypical network originally used for image classification to analysis of text. It projects all objects into a vector space which keeps distances between classes, so objects of one class are mapped to similar vectors. These mappings form a \textit{prototype} of a class, and at test time we assign new objects to classes by similarity of an object representation to class prototype.
In addition to that, we considered the task of NER in a semi-supervised setting --- we identified our target classes in text using the information about words of other classes. We showed that prototypical network is more effective in such setting than the state-of-the-art RNN model. Unlike RNN, prototypical network is suitable in cases where extremely small amount of data is available.
According to the original formulation of prototypical network, it can be used as zero-shot learning method, i.e. method which can assign an object to a particular class without seeing instances of this class at training time. We experimented with zero-shot setting for NER and showed that prototypical networks can in principle be used for zero-shot text classification, although there is still much room for improvement. We suggest that this is a prominent direction of future research.
We saw that prototypical networks shows considerably different performance on different classes of named entities. It would be interesting to perform more thorough qualitative analysis to identify characteristics of textual data which is more suitable for this method.
Finally, in our current experiments we trained models to predict entities of only a single class. In our future work we would like to check if the good performance of prototypical network scales to multiple classes. We will focus on training a prototypical network that can predict all classes of Ontonotes or another NER dataset at once.
|
{
"redpajama_set_name": "RedPajamaArXiv"
}
| 9,969
|
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<!-- NewPage -->
<html lang="en">
<head>
<!-- Generated by javadoc (version 1.7.0_85) on Mon Nov 09 08:38:03 EST 2015 -->
<title>Price</title>
<meta name="date" content="2015-11-09">
<link rel="stylesheet" type="text/css" href="../../../../../stylesheet.css" title="Style">
</head>
<body>
<script type="text/javascript"><!--
if (location.href.indexOf('is-external=true') == -1) {
parent.document.title="Price";
}
//-->
</script>
<noscript>
<div>JavaScript is disabled on your browser.</div>
</noscript>
<!-- ========= START OF TOP NAVBAR ======= -->
<div class="topNav"><a name="navbar_top">
<!-- -->
</a><a href="#skip-navbar_top" title="Skip navigation links"></a><a name="navbar_top_firstrow">
<!-- -->
</a>
<ul class="navList" title="Navigation">
<li><a href="../../../../../overview-summary.html">Overview</a></li>
<li><a href="package-summary.html">Package</a></li>
<li class="navBarCell1Rev">Class</li>
<li><a href="class-use/Price.html">Use</a></li>
<li><a href="package-tree.html">Tree</a></li>
<li><a href="../../../../../deprecated-list.html">Deprecated</a></li>
<li><a href="../../../../../index-files/index-1.html">Index</a></li>
<li><a href="../../../../../help-doc.html">Help</a></li>
</ul>
</div>
<div class="subNav">
<ul class="navList">
<li><a href="../../../../../com/douglaswhitehead/model/digitaldata/cart/CartImpl.Builder.html" title="class in com.douglaswhitehead.model.digitaldata.cart"><span class="strong">Prev Class</span></a></li>
<li><a href="../../../../../com/douglaswhitehead/model/digitaldata/cart/Price.Builder.html" title="interface in com.douglaswhitehead.model.digitaldata.cart"><span class="strong">Next Class</span></a></li>
</ul>
<ul class="navList">
<li><a href="../../../../../index.html?com/douglaswhitehead/model/digitaldata/cart/Price.html" target="_top">Frames</a></li>
<li><a href="Price.html" target="_top">No Frames</a></li>
</ul>
<ul class="navList" id="allclasses_navbar_top">
<li><a href="../../../../../allclasses-noframe.html">All Classes</a></li>
</ul>
<div>
<script type="text/javascript"><!--
allClassesLink = document.getElementById("allclasses_navbar_top");
if(window==top) {
allClassesLink.style.display = "block";
}
else {
allClassesLink.style.display = "none";
}
//-->
</script>
</div>
<div>
<ul class="subNavList">
<li>Summary: </li>
<li><a href="#nested_class_summary">Nested</a> | </li>
<li>Field | </li>
<li>Constr | </li>
<li><a href="#method_summary">Method</a></li>
</ul>
<ul class="subNavList">
<li>Detail: </li>
<li>Field | </li>
<li>Constr | </li>
<li><a href="#method_detail">Method</a></li>
</ul>
</div>
<a name="skip-navbar_top">
<!-- -->
</a></div>
<!-- ========= END OF TOP NAVBAR ========= -->
<!-- ======== START OF CLASS DATA ======== -->
<div class="header">
<div class="subTitle">com.douglaswhitehead.model.digitaldata.cart</div>
<h2 title="Interface Price" class="title">Interface Price</h2>
</div>
<div class="contentContainer">
<div class="description">
<ul class="blockList">
<li class="blockList">
<dl>
<dt>All Known Implementing Classes:</dt>
<dd><a href="../../../../../com/douglaswhitehead/model/digitaldata/cart/PriceImpl.html" title="class in com.douglaswhitehead.model.digitaldata.cart">PriceImpl</a></dd>
</dl>
<hr>
<br>
<pre>public interface <span class="strong">Price</span></pre>
<div class="block"><p>Price interface.</p>
<p>From the W3C CEDDL specification:</p>
<p>
This object provides details of the cart price. The <tt>basePrice</tt> SHOULD be the price of
the <tt>item</tt>s before applicable discounts, shipping charges, and tax. The <tt>cartTotal</tt> SHOULD
be the total price inclusive of all discounts, charges, and tax.
</p>
<pre>
<code>digitalData.cart.<b>price</b> = {
<b>basePrice:</b> 200.00,
<b>voucherCode:</b> "Alpha",
<b>voucherDiscount:</b> 0.50,
<b>currency:</b> "EUR",
<b>taxRate:</b> 0.20,
<b>shipping:</b> 5.00,
<b>shippingMethod:</b> "UPS",
<b>priceWithTax:</b> 120,
<b>cartTotal:</b> 125
};
</code>
</pre>
<p>
<b>Reserved:</b> <tt>basePrice</tt> (Number), <tt>voucherCode</tt> (String), <tt>voucherDiscount</tt> (Number),
<tt>currency</tt> (String), <tt>taxRate</tt> (Number), <tt>shipping</tt> (Number), <tt>shippingMethod</tt> (String),
<tt>priceWithTax</tt> (Number), <tt>cartTotal</tt> (Number)
</p>
<p>For <tt>currency</tt> values, ISO 4217 is RECOMMENDED.</p>
<p>
All other names are optional and should fit the individual implementation needs in both naming
and values passed.
</p></div>
<dl><dt><span class="strong">Author:</span></dt>
<dd>douglas whitehead</dd></dl>
</li>
</ul>
</div>
<div class="summary">
<ul class="blockList">
<li class="blockList">
<!-- ======== NESTED CLASS SUMMARY ======== -->
<ul class="blockList">
<li class="blockList"><a name="nested_class_summary">
<!-- -->
</a>
<h3>Nested Class Summary</h3>
<table class="overviewSummary" border="0" cellpadding="3" cellspacing="0" summary="Nested Class Summary table, listing nested classes, and an explanation">
<caption><span>Nested Classes</span><span class="tabEnd"> </span></caption>
<tr>
<th class="colFirst" scope="col">Modifier and Type</th>
<th class="colLast" scope="col">Interface and Description</th>
</tr>
<tr class="altColor">
<td class="colFirst"><code>static interface </code></td>
<td class="colLast"><code><strong><a href="../../../../../com/douglaswhitehead/model/digitaldata/cart/Price.Builder.html" title="interface in com.douglaswhitehead.model.digitaldata.cart">Price.Builder</a></strong></code>
<div class="block">Price.Builder inner interface.</div>
</td>
</tr>
</table>
</li>
</ul>
<!-- ========== METHOD SUMMARY =========== -->
<ul class="blockList">
<li class="blockList"><a name="method_summary">
<!-- -->
</a>
<h3>Method Summary</h3>
<table class="overviewSummary" border="0" cellpadding="3" cellspacing="0" summary="Method Summary table, listing methods, and an explanation">
<caption><span>Methods</span><span class="tabEnd"> </span></caption>
<tr>
<th class="colFirst" scope="col">Modifier and Type</th>
<th class="colLast" scope="col">Method and Description</th>
</tr>
<tr class="altColor">
<td class="colFirst"><code>java.math.BigDecimal</code></td>
<td class="colLast"><code><strong><a href="../../../../../com/douglaswhitehead/model/digitaldata/cart/Price.html#getBasePrice()">getBasePrice</a></strong>()</code>
<div class="block">Returns the BasePrice object.</div>
</td>
</tr>
<tr class="rowColor">
<td class="colFirst"><code>java.math.BigDecimal</code></td>
<td class="colLast"><code><strong><a href="../../../../../com/douglaswhitehead/model/digitaldata/cart/Price.html#getCartTotal()">getCartTotal</a></strong>()</code>
<div class="block">Returns the CartTotal object.</div>
</td>
</tr>
<tr class="altColor">
<td class="colFirst"><code>java.lang.String</code></td>
<td class="colLast"><code><strong><a href="../../../../../com/douglaswhitehead/model/digitaldata/cart/Price.html#getCurrency()">getCurrency</a></strong>()</code>
<div class="block">Returns the Currency object.</div>
</td>
</tr>
<tr class="rowColor">
<td class="colFirst"><code>java.math.BigDecimal</code></td>
<td class="colLast"><code><strong><a href="../../../../../com/douglaswhitehead/model/digitaldata/cart/Price.html#getPriceWithTax()">getPriceWithTax</a></strong>()</code>
<div class="block">Returns the PriceWithTax object.</div>
</td>
</tr>
<tr class="altColor">
<td class="colFirst"><code>java.math.BigDecimal</code></td>
<td class="colLast"><code><strong><a href="../../../../../com/douglaswhitehead/model/digitaldata/cart/Price.html#getShipping()">getShipping</a></strong>()</code>
<div class="block">Returns the Shipping object.</div>
</td>
</tr>
<tr class="rowColor">
<td class="colFirst"><code>java.lang.String</code></td>
<td class="colLast"><code><strong><a href="../../../../../com/douglaswhitehead/model/digitaldata/cart/Price.html#getShippingMethod()">getShippingMethod</a></strong>()</code>
<div class="block">Returns the ShippingMethod object.</div>
</td>
</tr>
<tr class="altColor">
<td class="colFirst"><code>java.math.BigDecimal</code></td>
<td class="colLast"><code><strong><a href="../../../../../com/douglaswhitehead/model/digitaldata/cart/Price.html#getTaxRate()">getTaxRate</a></strong>()</code>
<div class="block">Returns the TaxRate object.</div>
</td>
</tr>
<tr class="rowColor">
<td class="colFirst"><code>java.lang.String</code></td>
<td class="colLast"><code><strong><a href="../../../../../com/douglaswhitehead/model/digitaldata/cart/Price.html#getVoucherCode()">getVoucherCode</a></strong>()</code>
<div class="block">Returns the VoucherCode object.</div>
</td>
</tr>
<tr class="altColor">
<td class="colFirst"><code>java.math.BigDecimal</code></td>
<td class="colLast"><code><strong><a href="../../../../../com/douglaswhitehead/model/digitaldata/cart/Price.html#getVoucherDiscount()">getVoucherDiscount</a></strong>()</code>
<div class="block">Returns the VoucherDiscount</div>
</td>
</tr>
</table>
</li>
</ul>
</li>
</ul>
</div>
<div class="details">
<ul class="blockList">
<li class="blockList">
<!-- ============ METHOD DETAIL ========== -->
<ul class="blockList">
<li class="blockList"><a name="method_detail">
<!-- -->
</a>
<h3>Method Detail</h3>
<a name="getBasePrice()">
<!-- -->
</a>
<ul class="blockList">
<li class="blockList">
<h4>getBasePrice</h4>
<pre>java.math.BigDecimal getBasePrice()</pre>
<div class="block">Returns the BasePrice object.</div>
<dl><dt><span class="strong">Returns:</span></dt><dd>BigDecimal</dd></dl>
</li>
</ul>
<a name="getVoucherCode()">
<!-- -->
</a>
<ul class="blockList">
<li class="blockList">
<h4>getVoucherCode</h4>
<pre>java.lang.String getVoucherCode()</pre>
<div class="block">Returns the VoucherCode object.</div>
<dl><dt><span class="strong">Returns:</span></dt><dd>String</dd></dl>
</li>
</ul>
<a name="getVoucherDiscount()">
<!-- -->
</a>
<ul class="blockList">
<li class="blockList">
<h4>getVoucherDiscount</h4>
<pre>java.math.BigDecimal getVoucherDiscount()</pre>
<div class="block">Returns the VoucherDiscount</div>
<dl><dt><span class="strong">Returns:</span></dt><dd>BigDecimal</dd></dl>
</li>
</ul>
<a name="getCurrency()">
<!-- -->
</a>
<ul class="blockList">
<li class="blockList">
<h4>getCurrency</h4>
<pre>java.lang.String getCurrency()</pre>
<div class="block">Returns the Currency object.</div>
<dl><dt><span class="strong">Returns:</span></dt><dd>String</dd></dl>
</li>
</ul>
<a name="getTaxRate()">
<!-- -->
</a>
<ul class="blockList">
<li class="blockList">
<h4>getTaxRate</h4>
<pre>java.math.BigDecimal getTaxRate()</pre>
<div class="block">Returns the TaxRate object.</div>
<dl><dt><span class="strong">Returns:</span></dt><dd>BigDecimal</dd></dl>
</li>
</ul>
<a name="getShipping()">
<!-- -->
</a>
<ul class="blockList">
<li class="blockList">
<h4>getShipping</h4>
<pre>java.math.BigDecimal getShipping()</pre>
<div class="block">Returns the Shipping object.</div>
<dl><dt><span class="strong">Returns:</span></dt><dd>BigDecimal</dd></dl>
</li>
</ul>
<a name="getShippingMethod()">
<!-- -->
</a>
<ul class="blockList">
<li class="blockList">
<h4>getShippingMethod</h4>
<pre>java.lang.String getShippingMethod()</pre>
<div class="block">Returns the ShippingMethod object.</div>
<dl><dt><span class="strong">Returns:</span></dt><dd>String</dd></dl>
</li>
</ul>
<a name="getPriceWithTax()">
<!-- -->
</a>
<ul class="blockList">
<li class="blockList">
<h4>getPriceWithTax</h4>
<pre>java.math.BigDecimal getPriceWithTax()</pre>
<div class="block">Returns the PriceWithTax object.</div>
<dl><dt><span class="strong">Returns:</span></dt><dd>BigDecimal</dd></dl>
</li>
</ul>
<a name="getCartTotal()">
<!-- -->
</a>
<ul class="blockListLast">
<li class="blockList">
<h4>getCartTotal</h4>
<pre>java.math.BigDecimal getCartTotal()</pre>
<div class="block">Returns the CartTotal object.</div>
<dl><dt><span class="strong">Returns:</span></dt><dd>BigDecimal</dd></dl>
</li>
</ul>
</li>
</ul>
</li>
</ul>
</div>
</div>
<!-- ========= END OF CLASS DATA ========= -->
<!-- ======= START OF BOTTOM NAVBAR ====== -->
<div class="bottomNav"><a name="navbar_bottom">
<!-- -->
</a><a href="#skip-navbar_bottom" title="Skip navigation links"></a><a name="navbar_bottom_firstrow">
<!-- -->
</a>
<ul class="navList" title="Navigation">
<li><a href="../../../../../overview-summary.html">Overview</a></li>
<li><a href="package-summary.html">Package</a></li>
<li class="navBarCell1Rev">Class</li>
<li><a href="class-use/Price.html">Use</a></li>
<li><a href="package-tree.html">Tree</a></li>
<li><a href="../../../../../deprecated-list.html">Deprecated</a></li>
<li><a href="../../../../../index-files/index-1.html">Index</a></li>
<li><a href="../../../../../help-doc.html">Help</a></li>
</ul>
</div>
<div class="subNav">
<ul class="navList">
<li><a href="../../../../../com/douglaswhitehead/model/digitaldata/cart/CartImpl.Builder.html" title="class in com.douglaswhitehead.model.digitaldata.cart"><span class="strong">Prev Class</span></a></li>
<li><a href="../../../../../com/douglaswhitehead/model/digitaldata/cart/Price.Builder.html" title="interface in com.douglaswhitehead.model.digitaldata.cart"><span class="strong">Next Class</span></a></li>
</ul>
<ul class="navList">
<li><a href="../../../../../index.html?com/douglaswhitehead/model/digitaldata/cart/Price.html" target="_top">Frames</a></li>
<li><a href="Price.html" target="_top">No Frames</a></li>
</ul>
<ul class="navList" id="allclasses_navbar_bottom">
<li><a href="../../../../../allclasses-noframe.html">All Classes</a></li>
</ul>
<div>
<script type="text/javascript"><!--
allClassesLink = document.getElementById("allclasses_navbar_bottom");
if(window==top) {
allClassesLink.style.display = "block";
}
else {
allClassesLink.style.display = "none";
}
//-->
</script>
</div>
<div>
<ul class="subNavList">
<li>Summary: </li>
<li><a href="#nested_class_summary">Nested</a> | </li>
<li>Field | </li>
<li>Constr | </li>
<li><a href="#method_summary">Method</a></li>
</ul>
<ul class="subNavList">
<li>Detail: </li>
<li>Field | </li>
<li>Constr | </li>
<li><a href="#method_detail">Method</a></li>
</ul>
</div>
<a name="skip-navbar_bottom">
<!-- -->
</a></div>
<!-- ======== END OF BOTTOM NAVBAR ======= -->
</body>
</html>
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 1,090
|
{"url":"https:\/\/www.shaalaa.com\/question-bank-solutions\/the-lateral-surface-area-hollow-cylinder-4224-cm2-it-cut-along-its-height-formed-rectangular-sheet-width-33-cm-find-perimeter-rectangular-sheet-surface-area-of-cylinder_15468","text":"# The Lateral Surface Area of a Hollow Cylinder is 4224 Cm2. It is Cut Along Its Height and Formed a Rectangular Sheet of Width 33 Cm. Find the Perimeter of Rectangular Sheet? - Mathematics\n\nThe lateral surface area of a hollow cylinder is 4224 cm2. It is cut along its height and formed a rectangular sheet of width 33 cm. Find the perimeter of rectangular sheet?\n\n#### Solution\n\nA hollow cylinder is cut along its height to form a rectangular sheet.\n\nArea of cylinder = Area of rectangular sheet\n\n4224 cm2\u00a0= 33 cm \u00d7 Length\n\nLength = (4224 cm^2)\/ (33 cm) = 128 cm\n\nThus, the length of the rectangular sheet is 128 cm.\n\nPerimeter of the rectangular sheet = 2 (Length + Width)\n\n= [2 (128 + 33)] cm\n\n= (2 \u00d7 161) cm\n\n= 322 cm\n\nIs there an error in this question or solution?\n\n#### APPEARS IN\n\nNCERT Class 8 Maths Textbook\nChapter 11 Mensuration\nExercise 11.3 | Q 8 | Page 186","date":"2021-03-05 01:58:49","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.41961535811424255, \"perplexity\": 1894.651180329106}, \"config\": {\"markdown_headings\": true, \"markdown_code\": false, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-10\/segments\/1614178369553.75\/warc\/CC-MAIN-20210304235759-20210305025759-00214.warc.gz\"}"}
| null | null |
La norme (PATA) décrit une interface de connexion pour mémoires de masse (disque dur, lecteur de CD-ROM...). Elle a été conçue à l'origine par Western Digital sous le nom ou IDE. Elle est gérée par le comité T13 d'INCITS. Cette norme utilise les normes ATA () et ATAPI (). En pratique, l'ATAPI qui étend ce standard de communication à des périphériques différents des disques durs, sert à faire passer des commandes SCSI sur la couche physique de l'ATA.
La norme SATA (Serial ATA), qui l'a remplacée, utilise un bus série, permettant un câble plus fin et plus flexible tout en permettant des débits supérieurs.
Présentation
Les périphériques (disques, lecteurs de CD) sont reliés à la carte mère par une nappe souple comportant des connecteurs , parfois munis d'un détrompeur. Ces nappes étaient par le passé munies de , mais depuis l'apparition de l', les nappes à deviennent monnaie courante. La largeur standard des nappes est de .
Ces connecteurs sont identiques pour le contrôleur et les périphériques, (voir illustration).
Les cartes mères étaient le plus souvent équipées de deux ports IDE, parfois de quatre. Chaque port permet de brancher deux périphériques : un maître, un esclave. Une carte mère disposant de deux ports IDE permet donc de brancher quatre périphériques de stockage ; on parlera alors sur le premier port de maître primaire et d'esclave primaire et sur le second port de maître secondaire et d'esclave secondaire. Le passage progressif à la norme SATA a conduit dans une phase de transition à l'apparition de cartes mères équipées d'un seul port IDE ; elles formaient la très grande majorité du marché en 2009. En 2012 la plupart des cartes mères n'utilisent plus du tout ce système.
La distinction maître/esclave permet simplement de séparer logiquement les unités de stockage qui sont elles connectées physiquement en parallèle sur le contrôleur; elle n'indique pas par contre une supériorité d'un périphérique sur l'autre, telle qu'un meilleur temps d'accès ou un meilleur débit, qui eux sont similaires.
Pour effectuer cette distinction / (ou maître / esclave en français), on positionne un cavalier sur le sélecteur incorporé au périphérique, en général sur la tranche entre le connecteur destiné à la nappe et celui qui est destiné à l'alimentation électrique. Il existe aussi une position CS (, en français « sélection par le câble ») qui permet (si on positionne les deux périphériques en CS) de déterminer automatiquement lequel est maître et lequel est esclave, en fonction de la position sur le câble. Dans ce cas, le fonctionnement standard suppose que le dernier connecteur de la nappe accueille le périphérique maître (utilisé par exemple pour le disque dur contenant le système d'exploitation) tandis que le connecteur intermédiaire permet le branchement du périphérique esclave.
ATA et ATAPI
La connexion IDE tire parti des protocoles ATA/ATAPI. ATAPI (ATA with Packet Interface extension) est une extension de ATA (AT Attachement). Ce dernier est le protocole utilisé par les disques durs IDE tandis qu'ATAPI est plutôt utilisé par les lecteurs et graveurs de CD-ROM et DVD-ROM ainsi que par quelques lecteurs de disquettes spéciales de type ZIP par exemple.
La principale différence entre les deux protocoles réside dans l'existence, dans ATAPI, de l'extension Packet Interface qui implémente le jeu d'instructions Packet. De plus, de nombreuses commandes ATA sont interdites si ce jeu d'instructions est présent.
Dans les sections suivantes, les commandes réservées à ATA ou à ATAPI seront indiquées. Les commandes communes aux deux protocoles ne porteront pas de mention spéciale.
Les différents standards
Jeu d'instructions Packet
Ce jeu d'instructions constitue la principale différence entre ATA et ATAPI. Il implémente les deux commandes suivantes :
Obtention d'informations : une commande du même type existe dans le protocole ATA mais fournit des informations différentes. Ces deux commandes sont décrites plus bas.
Envoi d'une commande Packet : cette commande permet l'envoi de commandes Packet dans un format spécial par le biais du port de données. Ces commandes permettent d'envoyer plus d'informations que les commandes ATA normales. Cette commande est également décrite plus bas.
Ces commandes servent d'interface à un jeu d'instructions spéciales spécifiques au type de périphérique (CD-ROM, CD-R/RW, DVD…). Ces commandes ne sont pas définies par le protocole ATAPI.
Dans le cas des CD-ROM et des DVD, ces commandes sont définies par le T10 (Technical Committee T10, dépendant de NCITS (National Committee for Information and Technology Standards) chargé de SCSI) dans les MMC (Multimedia Commands 1, 2 et 3 actuellement).
Note : Ces commandes étaient, pour les CD-ROM, définies dans le document SFF-8020i, maintenant obsolète.
Tout système digne de ce nom doit impérativement supporter un protocole soit par le biais d'un pilote soit par celui du BIOS qui fournit déjà des fonctions d'accès aux disques durs (interruption 13h) mais ces fonctions sont limitées, lentes, et parfois buguées. Se baser sur le BIOS ne permet donc pas d'avoir un système fiable sans compter qu'en mode protégé, cela est impossible. C'est pourquoi il faut réécrire les routines d'accès aux disques pour avoir un pilote satisfaisant.
Quelques-unes des commandes de base sont décrites dans ce document
Fonctions plus avancées
(LBA)
Présentation
Le mode CHS permet d'adresser un secteur du disque en indiquant son numéro de secteur, le numéro du cylindre où il se trouve ainsi que le numéro de la tête. Malheureusement, ce mode ne permet d'adresser que , et soit octets, un peu moins de , ce qui est peu de nos jours (quoique certains disques supportent des adresses CHS supérieures à cette limite).
Au contraire, le mode LBA utilise une adresse logique sur : le 1e secteur a l'adresse 0, le l'adresse 62, le 1e secteur du cylindre l'adresse 63 (s'il y a 63 secteurs par cylindres) et ainsi de suite. Le mode LBA permet donc d'adresser 2^28*512=137438953472 octets soit 128 Go.
Utilisation, différences par rapport au mode CHS
L'utilisation du mode LBA n'est pas beaucoup plus compliquée que le mode CHS, les différences peuvent être résumées de la manière suivante :
Pour le reste, tout est identique.
Conversion d'une adresse CHS en adresse LBA et inversement
adresse logique = (numéro de secteur - 1) + (numéro de tête * nombre de secteurs par cylindre) + (numéro de cylindre * nombre de secteurs par cylindre * nombre de têtes)
secteur CHS = entier(1 + reste de (adresse logique / nombre de secteurs par pistes))
tête CHS = entier(reste de ((adresse logique / nombre de secteurs par pistes) / nombre de têtes))
piste CHS = entier(adresse logique / (nombre de secteurs par cylindre * nombre de faces))
Considérons lba l'adresse logique, c le cylindre, h la tête, s le secteur, H le nombre de têtes et S le nombre de secteurs par cylindre, voici les mêmes formules dans une syntaxe de style C (types entiers) :
lba = (s - 1) + (h * S) + (c * S * H);
s = 1 + (lba% S);
h = (lba / S)% H;
c = lba / (S * H);
Évolution du standard
Depuis 2003, le standard d'interface de connexion des mémoires de masse évolue peu à peu de l'IDE vers le Serial ATA aussi appelé S-ATA ou SATA.
Notes et références
Bibliographie
Programmation d'ATA/ATAPI
.
standard ATA-ATAPI
Codes sources
.
ATADRVR v14C, Hale Landis, le webmaster de ATA-ATAPI.com
LBA
T13 Standard ATA/ATAPI.
T10 standard SCSI et MMC
Voir aussi
Articles connexes
S-ATA
ATAPI
Commande SCSI
Liens externes
Détail des brochages des connecteurs IDE
explication détaillée du standard ATA
Périphérique (informatique)
Connectique
Bus informatique
|
{
"redpajama_set_name": "RedPajamaWikipedia"
}
| 9,764
|
RustyLogic blog: Speeding up the UI - Open Text Web Solutions Usergroup e.V.
What if we could speed up the UI for the users and make it simpler at the same time. RQL caching to the rescue!
I have been reviewing old blog posts and forum entries and came across this little gem by Kim Dezen RedDot CMS Plugin - Add Page*.
I thought I'd have a go myself but found the performance a little slow for users while the RQL churns in the background to find the template pre-assignments. Seeing as the template pre-assignments will hardly ever change once the project is built I decided to cache the results in a database resulting in an instantaneous population of the dropdown.
If a new option needs to be added to the dropdown simply delete the cache and let it be generated again!
* Note: We are currently looking for the missing download. Please inform us, if you can help us to fix it.
|
{
"redpajama_set_name": "RedPajamaC4"
}
| 483
|
\section{Introduction}\label{sec:introduction}}
\IEEEPARstart{T}{he}
area of \emph{speech emotion recognition} (SER) is an important research problem due to its key potential in fields such as \emph{human-computer interactions} (HCIs), healthcare \cite{Aldeneh_2019,Gideon_2019} and behavioral studies \cite{Narayanan_2013, Georgiou_2011}. Despite remarkable advances in emotion recognition, detecting emotions from speech is still a challenging task. The usual formulation to describe emotions is with categorical descriptors such as happiness, sadness, anger and neutral. However, this approach may not capture the intra and inter class variability across distinct emotional classes.
{\color{black}
(i.e., variability across sentences with the same emotional class labels and variability across sentences with different emotional class labels).}
An alternative representation is the use of emotional attributes, as suggested by the core affect theory \cite{Russell_2003}. The most common attributes are arousal (calm versus active), valence (unpleasant versus pleasant) and dominance (weak versus strong). Because of their direct application in many areas, it is very important to build accurate models, which can reliably predict these emotional attributes. The estimation of emotional attributes is often posed as a regression problem, where the goal is to predict the scores associated with these attributes. In particular, the emotional attribute valence is key to understand many behavioral disorders {\color{black}\cite{groenewold2013emotional, hagenhoff2013reduced}} such as \emph{post-traumatic stress disorder} (PTSD), depression, schizophrenia and anxiety. Although different approaches have been proposed to improve SER systems, the prediction of valence using acoustic features is often less accurate than other emotional attributes such as arousal or dominance. The gap in performance is significant even with different methods specially implemented to correct this problem, such as using features from other modalities \cite{Nicolaou_2011,Aldeneh_2017_2,Zhang_2019_2,Tournier_2019}, modeling contextual information \cite{Mariooryad_2013_2} or regularizing \emph{deep neural network} (DNNs) under a \emph{multitask learning} (MTL) framework \cite{Parthasarathy_2017_3, Parthasarathy_2018_3}. It is important to explore why predicting valence from speech is so difficult, and use these findings and insights to improve SER systems.
In our previous work, we studied the prediction of valence from speech \cite{Sridhar_2018}, focusing the analysis on the role of regularization in DNNs. In particular, we explored the role of dropout as a form of regularization and analyzed its effect on the prediction of valence. Our analysis showed that a higher dropout rate (i.e., higher regularization) led to improvements in valence predictions. The optimal dropout rate for valence was higher than the optimal dropout rates for arousal and dominance across different configurations of the DNNs. A hypothesis from this study was that a heavily regularized network learns features that are more consistent across speakers, placing less emphasis on speaker-dependent emotional cues. We also conducted controlled speaker-dependent experiments to evaluate this hypothesis, where data from the same speakers were included in the train and test partitions.
For valence, we observed relative gains in \emph{concordance correlation coefficient} (CCC) up to 30\% between speaker-dependent and speaker-independent experiments. The corresponding relative improvements observed for arousal and dominance were less than 4\%. These results showed that valence emotional cues include more speaker-dependent traits, explaining why heavily regularizing a DNN helps to learn more general emotional cues across speakers \cite{Sridhar_2018}. Building on these results, we propose an unsupervised personalization approach that is extremely useful in the prediction of valence.
This paper explores the speaker-dependent nature of emotional cues in the externalization of valence. We hypothesize that a regression model trained to detect valence from speech can be adapted to a target speaker. The goal is to leverage the information from the emotional cues of speakers in the train set to fine-tune a regression model already trained to perform well on the prediction of valence. Our approach identifies speakers in the train set that are closer in the acoustic space to the speakers in the test set. Data from these selected speakers are used to create an adaptation set to personalize the SER models toward the test speakers. We achieve the adaptation by using three alternative methods: unique speaker, oversampling and weighting approaches. The unique speaker approach randomly selects samples from the data obtained from the selected speakers in the train set without replacement, regardless of how many times these speakers are selected (i.e., a speaker in the train set may be found to be closer to more than one speaker in the test set).
{\color{black}The oversampling approach draws data from the selected speakers as a function of the number of times that a given speaker is selected. For instance, if a speaker in the train set is found to be closer to two speakers in the test set, the selected sentences from that training speaker is counted twice. This approach repeats the data from this speaker during the adaptation phase, so the model sees the same speech samples in multiple batches in a single epoch.} The weighting approach uses weights, where samples from the selected speakers in the train set are weighted more. This approach adds weights on the cost function during the training process, building the models from scratch. We demonstrate the idea of personalization under two scenarios: 1) separate SER models, where each of them is personalized to a single test speaker (i.e., individual adaptation models), and 2) a single SER model personalized to a pool of 50 target speakers (i.e., global adaptation model). We evaluate the approaches by monitoring the loss function on either a separate development set or the adaptation set.
Using the proposed model adaptation strategies leads to relative improvements in CCC as high as 13.52\% in the prediction of valence.
While the adaptation experiments prove to be very effective for valence, the improvements achieved for arousal and dominance are less than 1.9\% (on the MSP-Podcast corpus). This result indicates the need for a personalization method to improve the prediction of valence, highlighting the benefits of our proposed approach. The contributions of our study are:
\begin{itemize}[leftmargin=0em]
\vspace{-0.2em}
\setlength{\itemindent}{1em}
\setlength{\itemsep}{0cm}%
\setlength{\parskip}{0cm}%
\item {\color{black}We leverage the finding that the externalization of valence in acoustic features is more speaker-dependent than arousal and dominance, raising awareness on the need for special considerations in its detection.}
\item We successfully personalize a SER system using unsupervised adaptation strategies by exploiting the speaker-dependent traits.
\item We propose three alternative adaptation strategies to personalize a SER system, obtaining important relative performance improvements in the prediction of valence.
\end{itemize}
{\color{black}
One of the key strengths of this study is that we find similar speaker in the emotional feature space alone. By exploiting similarities in the emotional feature space, we have suppressed the speaker trait or text dependencies. Our approach provides a much more powerful way of comparing emotional similarities than traditional methods used for speaker identification. Likewise, our personalization approach avoids or minimizes ``concept drift.'' SER is a challenging problem, where the prediction models can become more volatile with the addition of more data over time. If the distribution of the newly acquired data starts to diverge or tend to fill up the sparse regions of the old data's distribution, the prediction results may see a drop in performance. Therefore, models built for analyzing such data quickly become obsolete. This phenomenon is referred to as concept drift. With our personalization study, we can minimize the impact of concept drift by developing personalized SER models that are tailored to target speakers. We can periodically re-fit or update the models to target speakers or even weight the data based on their historical significance to develop better personalized models.}
The paper is organized as follows. Section \ref{sec:related} discusses relevant studies on the prediction of valence from speech. It also describes the adaptation and personalization approach proposed for improving SER systems. Section \ref{sec:resources} presents the database used in this study. Section \ref{sec:analysis_1} describes the analysis on the role of regularization in the prediction of valence from speech, summarizing the study presented in our preliminary work \cite{Sridhar_2018}. Section \ref{sec:personalization} presents the proposed formulation to personalize a SER system, building on the insights learned from the analysis in Section \ref{sec:analysis_1}. Section \ref{sec:results} presents the results obtained by using our proposed approaches to personalize a SER system. {\color{black}We primarily present the results on the MSP-Podcast corpus, but we evaluate the generalization of our proposed approach with two other databases.} The paper concludes with Section \ref{sec:conclusion}, which summarizes our key findings, providing future directions for this study.
\vspace{-0.3cm}
\section{Related Work}
\label{sec:related}
\subsection{Improving the Prediction of Valence}
\label{ssec:valence_importance}
While valence is a key dimension to understand complex human behaviors, its predictions using speech features are often lower than the predictions of other emotional attributes such as arousal or dominance \cite{Trigeorgis_2016}. Therefore, several speech studies have focused on understanding and improving valence prediction.
Busso and Rahman \cite{Busso_2012} studied acoustic properties of emotional cues that describe valence. They built separate \emph{support vector regression} (SVR) models trained with different groups of acoustic features: energy, fundamental frequency, voice quality, spectral, \emph{Mel-frequency cepstral coefficients} (MFCCs) and RASTA features. They also built binary classifiers to distinguish between two groups of sentences characterized by similar arousal but different valence. The study showed that spectral and fundamental frequency features are the most discriminative for valence. Koolagudi and Rao \cite{Koolagudi_2009} claimed that MFCCs were effective to classify emotion along the valence dimension (i.e., spectral features). Cook et al. \cite{Cook_2005, Cook_2006} explored the structure of the fundamental frequency (F0), extracting dominant pitches in the detection of valence from speech. Despande et al. \cite{Deshpande_2019} proposed a reduced feature set consisting of the autocorrelation of pitch contour, \emph{root mean square} (RMS) energy and a 10- dimensional \emph{time domain difference} (TDD) vector. The TDD vector corresponds to successive differences in the speech signal. The feature set collectively led to better results than MFCCs or OpenSmile features \cite{Deshpande_2019_2}. Tursunov et al. \cite{Tursunov_2019} used acoustic descriptors associated with timbre perception to classify discrete emotions, and emotions along the valence dimension. Tahon et al. \cite{Tahon_2012} showed that voice quality features were also useful in the detection of valence.
Other studies have explored modeling strategies to improve the prediction of valence. Lee et al. \cite{Lee_2009_2} used dynamic Bayesian networks to capture time dependencies and mutual influence of interlocutors during dyadic interactions. Contextual information was found to be particularly useful in the prediction of valence, leading to relative improvements higher than the one observed for arousal. Another alternative approach to improve valence was by regularizing a DNN.
For example, Parthasarathy and Busso \cite{Parthasarathy_2017_3} showed that jointly predicting valence, arousal and dominance under a \emph{multitask learning} (MTL) framework helps to improve its prediction. The MTL framework acts as regularization in DNNs. Other approaches using MTL have shown similar findings. Since the performance of lexical models often outperforms acoustic models in predicting valence \cite{Aldeneh_2017_2}, Lakomkin et al. \cite{Lakomkin_2019} suggested the use of the output of an \emph{automatic speech recognition} (ASR) as the input of a character-based DNN.
\vspace{-0.3cm}
\subsection{Model Adaptation in SER Tasks}
\label{ssec:adapting_SER}
Unlike other speech tasks such as \emph{automatic speech recognition} (ASR) that rely on abundant data, databases used in SER are often small. Therefore, many researchers have explored the use of model adaptation techniques to generalize the models beyond the training conditions. Most of the adaptation techniques aim to attenuate sources of variability including channel, language and speaker mismatches. Early studies demonstrated the effectiveness of these techniques with algorithms based on \emph{support vector machine} (SVM) \cite{Abdelwahab_2015}.
Abdelwahab and Busso \cite{Abdelwahab_2017_2} demonstrated the importance of data selection strategy for domain adaptation. They illustrated that, incrementally adapting emotion classification models using active learning to select samples from the target domain can improve their performance. They used a conservative approach where only the correctly classified samples were used to adapt the model, leaving out the incorrect ones in order to avoid large changes in the hyperplane between the classes.
Recent efforts in model adaptation have mainly focused on DNNs, where important advances have been made in the area of transferring knowledge between domains \cite{Bengio_2011}. DNNs with their deep architectures can learn useful representations by compactly representing functions. Deng et al. \cite{Deng_2013} used sparse autoencoders to learn feature representations in the source domain that are more consistent with the target domain. This goal was achieved by simultaneously minimizing the reconstruction error in both domains. Deng et al. \cite{Deng_2014} proposed the use of unlabeled data under a deep autoencoder framework to reduce the mismatch between train and test conditions. They also simultaneously learned common traits from both labeled and unlabeled data.
Instead of the traditional method of pre-training and fine-tuning for model adaptation, Gideon et al. \cite{Gideon_2017} used progressive networks to enhance a SER system. They trained the model on new tasks by freezing the layers related to previously learned tasks and used their intermediate representations as inputs to new parallel layers. This study also used paralinguistic information from gender and speaker identity to achieve improvements. Similarly, other variants of adaptation techniques use \emph{kernel mean matching} (KMM) \cite{Hassan_2013}, \emph{Nonnegative matrix factorization} \cite{Song_2016}, \emph{domain adaptive least-squares regression} \cite{Zong_2016}, and PCANet \cite{Huang_2017}. These methods lead to improvements on emotion recognition tasks, by using hybrid frameworks involving unsupervised followed by supervised learning. Our proposed approach is different from these studies, since we aim to explicitly exploit similarities between speakers in the train and test sets, as measured in the feature space. This approach leads to powerful adaptation methods that are particularly useful to predict valence.
\vspace{-0.3cm}
\subsection{Speech Emotion Personalization}
\label{ssec:personalize_SER}
This study focuses on adapting or personalizing a SER system to a target set of speakers. Busso and Rahman \cite{Rahman_2012} demonstrated the idea of personalization using an unsupervised feature normalization scheme. They used the \emph{iterative feature normalization} (IFN) method \cite{Busso_2013_2} to reduce speaker variability, while preserving the discriminative information of the features across emotional classes. The IFN algorithm has two steps. First, it detects neutral sentences which are used to estimate the normalization parameters. Then, the data is normalized with these parameters. Since the detection of neutral speech is not perfect, this process is iteratively repeated leading to important improvements. Busso and Rahman \cite{Rahman_2012} implemented the IFN scheme as a front end of a SER system designed to recognize emotion from a target speaker, observing huge improvements in accuracy. Our study exploits the speaker-dependencies in the externalization of valence to personalize a SER system towards target speakers.
\vspace{-0.3cm}
\section{Resources}
\label{sec:resources}
\subsection{Emotional Corpora}
\label{ssec:corpora}
\subsubsection{The MSP-Podcast Corpus}
\label{sssec:corpus}
The study relies on the MSP-Podcast corpus \cite{Lotfian_2019_3}, which provides a diverse collection of spontaneous speech segments that are rich in emotional content. The speech segments are obtained from podcasts taken from various audio-sharing websites, using the retrieval-based approach proposed by Mariooryad et al. \cite{Mariooryad_2014_3}. The content of the podcasts is diverse, including discussions on sport, politics, entertainment, games, social problems and healthcare. The podcasts are segmented into speaking turns between 2.75s and 11s duration. These segments are automatically processed to discard segments with music, overlapped speech, and noisy recordings. Since most of the segments are expected to be neutral, we retrieve candidate segments to be included in the database by leveraging a diversified set of SER algorithms to detect emotions. The selected speech segments are annotated on \emph{Amazon Mechanical Turk} (AMT) using a crowdsourcing protocol similar to the one introduced by Burmania et al. \cite{Burmania_2016_2}. This crowdsourcing protocol stops the annotators in real-time if their performance is evaluated as poor. The raters annotate each speaking turn for its arousal, valence and dominance content using \emph{self-assessment manikins} (SAMs) on a seven Likert-type scale. The ground truth labels for each speaking turn is the average across the scores provided by the annotators. Although we do not use categorical annotations in this study, the corpus also includes annotations of primary and secondary emotions. The primary emotion corresponds to the dominant emotional class. The secondary emotion corresponds to all the emotional classes that can be perceived in the speech segments.
{\color{black}
The collection of the MSP-Podcast corpus is an ongoing effort. This study uses version 1.6 of the MSP-Podcast corpus, which consists of 50,362 speech segments (83h29m) annotated with emotional classes. From this set, 42,567 segments have been manually assigned to 1,078 speakers. The speaker identity for the rest of the corpus has not been assigned yet. Figure \ref{fig:corpus} illustrates the partition of the dataset used in this study. The test set has 10,124 speech segments from 50 speakers, and the development set has 5,958 speech segments from 40 speakers. Each speaker in the test and development sets has a minimum of five minutes of data. The rest of the corpus is included in the train set, which consists of a total of 34,280 speech segments. The data partition aims to create speaker-independent partitions between sets. Lotfian and Busso \cite{Lotfian_2019_3} provide more details on this corpus.
}
As shown in Figure \ref{fig:corpus}, we further split the test set into two partitions for this study: \emph{test-A} and \emph{test-B} sets. The \emph{test-A} set includes 200s of recording for each of the 50 speakers in the test set. The \emph{test-B} set includes the rest of the recordings in the test set. {\color{black}Each test speaker has at least 300s (5 mins) of data. After removing 200s from each speaker to form the \emph{test-A} set, the \emph{test-B} set is left with at least 100s of data for each speaker. The average duration per speaker in the \emph{test-B} set is 1005.96s}.
{\color{black}
\subsubsection{The IEMOCAP and MSP-IMPROV Corpora}
\label{ssec:corpus_2}
Besides the MSP-Podcast corpus, we use two other databases for our experimental evaluations. The first database is the USC-IEMOCAP corpus \cite{Busso_2008_5}, which is an audiovisual corpus and contains dyadic interactions from 10 actors in improvised scenarios. This study only uses the audio. The database contains 10,039 speaking turns, which are annotated with emotional labels for arousal, valence and dominance by at least two raters using a five-Likert scale. We also use the MSP-IMPROV corpus \cite{Busso_2017}, which is a multimodal emotional database that contains interactions between pairs of actors engaged in improvised scenarios. In addition to the conversations during improvised scenarios, the dataset also contains the interactions between the actors during the breaks, resulting in more naturalistic data. The corpus uses a novel elicitation scheme, where two
actors in an improvised scenario leads one of them to utter target sentences. For each of the target sentences, four emotional scenarios were created to contextualize the sentence to elicit happy, angry, sad and neutral reactions, respectively. This corpus consists of 8,438 turns of emotional sentences recorded from 12 actors (over 9 hours). The sessions were manually segmented into speaking turns, which were annotated with emotional labels using perceptual evaluations. Each turn was annotated for arousal, valence and dominance by five or more raters using a five-Likert scale. In both databases, the consensus emotional attribute label assigned to each utterance is the average across the scores provided by the annotators, which is linearly mapped between -3 and 3.}
\vspace{-0.3cm}
\subsection{Acoustic Features}
\label{ssec:features}
This study uses the feature set proposed for the \emph{computational paralinguistics challenge} (ComParE) in Interspeech 2013 \cite{Schuller_2013}. The features are extracted by estimating several \emph{low-level descriptors} (LLDs) such as energy, fundamental frequency and MFCCs. For each speech segment, statistics such as mean, standard deviation, range and regression coefficients are estimated for each LLD, creating \emph{high-level descriptors} (HLDs). With this approach, the feature vector is fixed regardless of the duration of the sentence. The ComParE set creates a 6,373 dimensional feature vector for each sentence.
\begin{figure}[tb]
\centering
\includegraphics[width=0.98\columnwidth]{images/Partition.pdf}
\caption{Partitions of the MSP-Podcast corpus used in this study for the train, development and test sets. The test set is further split into the \emph{test-A} and \emph{test-B} sets.}
\vspace{-0.3cm}
\label{fig:corpus}
\end{figure}
\begin{comment}
\begin{table}[tb]
\caption{Speaker-Independent baseline and speaker-dependent results}
\centering
\vspace{0.1cm}
\fontsize{7}{8}\selectfont
\begin{tabular*}{1.0\columnwidth}{@{\extracolsep{\fill}}c|c|c}
\hline
{Experiments} & & CCC\\
\hline
\hline
\multirow{1}{*} {Speaker-Independent} & {Baseline} & 0.2978\\
\hline
\multirow{1}{*} {Speaker-Dependent} & {Upper bound} & 0.3466\\
\hline
\multirow{4}{*} {Speaker-Dependent}
& 50s of every test speaker data & 0.3134\\
& 100s of every test speaker data & 0.3221\\
& 150s of every test speaker data & 0.3296\\
& 200s of every test speaker data & 0.3466\\
\hline
\end{tabular*}
\vspace{-0.4cm}
\label{tab:res1}
\end{table}
\end{comment}
\vspace{-0.3cm}
\section{Role of Regularization}
\label{sec:analysis_1}
Our proposed personalization method for valence builds upon the findings reported in our preliminary study \cite{Sridhar_2018}. This section summarizes the main findings on the role of dropout rate as a form of regularization in DNNs and its impact on SER. The study in Sridhar et al. \cite{Sridhar_2018} was conducted on an early version of this corpus. We update the analysis with the release of the corpus used for this study (release 1.6 of the MSP-Podcast corpus).
\vspace{-0.3cm}
\subsection{Optimal Dropout Rate for Best Performance}
\label{ssec:dropout_rate}
Our previous study focused on the role of dropout as a form of regularization in improving the prediction of valence \cite{Sridhar_2018}. When dropout is used in DNNs, random portions of the network are shutdown at every iteration, training a smaller network on each epoch. This approach helps in learning feature weights in random conjunction of neurons, preventing developing co-dependencies with neighboring nodes. This regularization approach leads to better generalization. We explore the role of regularization in the prediction of valence by changing the dropout rate $p$. {\color{black}The goal of this analysis is to understand the optimal value $p$ that leads to the best performance for different network configurations (i.e., different number of layers, different number of nodes per layer). We train the models for 1,000 epochs, with an early stopping criterion based on the development loss. The loss function is based on CCC, which has led to better performance than \emph{mean squared error} (MSE) \cite{Trigeorgis_2016}. We train separate regression models by changing the dropout rate $p\in\{ 0.0, 0.1, \cdots, 0.9\}$, recording the optimal dropout rate leading to the best performance on the development set. We evaluate two networks with three and seven layers, implemented with 256, 512 and 1,024 nodes per layer. Figure \ref{fig:nodes_vs_dropout} illustrates the results, showing the optimal dropout rate observed in the development set. The optimal dropout rates that give the best performance are higher for valence than arousal and dominance. While the optimal dropout rate decreases as we increase the number of layers or number of nodes per layer in a DNN, Figure \ref{fig:nodes_vs_dropout} shows that the gap between the optimal dropout rates for valence and arousal/dominance stays consistent. Interestingly, the optimal dropout rates for arousal and dominance are exactly the same across different DNN configurations, whereas it is different for valence. The results show that the need for higher regularization for valence is consistent across variations in the architectures of the DNNs.}
\begin{figure}[t]
\includegraphics[width=0.90\columnwidth]{images/n_vs_d3.pdf}
\vspace{-0.3cm}
\caption{Optimal dropout rate observed in the development set as a function of the number of nodes per layer in a DNN. The DNN is implemented with either three or seven layers.}
\label{fig:nodes_vs_dropout}
\vspace{-0.3cm}
\end{figure}
\vspace{-0.3cm}
\subsection{Speaker-Dependent versus Speaker-Independent Models}
\label{ssec:sp_dep}
Section \ref{ssec:dropout_rate} demonstrated that a DNN needs to be heavily regularized to give good predictions for valence. We hypothesize that this finding can be explained by the speaker-dependent nature of speech cues in the externalization of valence (i.e., we use different acoustic cues to express valence). A DNN with higher regularization, learns more generic trends present across speakers, leading to better generalization. To validate our hypothesis, we conduct a controlled emotion detection evaluation, where we train DNNs with either speaker-dependent or speaker-independent partitions. SER should be performed with speaker-independent partitions, where the data in the train and test partitions are from disjoint set of speakers. A model trained with data from speakers in the test set has an unfair advantage over a system evaluated with data from new speakers, resulting in overestimated performance. Our goal is to quantify the benefits of using speaker-dependent partitions.
\begin{table}[t]
\caption{Comparison of CCC values between speaker-independent and speaker-dependent conditions. The DNN is trained with four layers. The column `Gain' shows the relative improvement by training with partial data from the target speakers (\emph{test-A} set).}
\centering
\fontsize{8}{9}\selectfont
\begin{tabular*}{1\columnwidth}{@{\extracolsep{\fill}}c|c|c|c|c}
\hline
Attributes & Nodes & Speaker & Speaker & Gain\\
&&Independent&Dependent\\
\cline{3-5}
&& \emph{Test-B} set& \emph{Test-B} & (\%)\\
\hline
\hline
\multirow{3}{*}{Valence}
& 256 & 0.3076 & 0.3373 & 9.65\\
& 512 & 0.3083 & 0.3670 & 19.03\\
& 1,024 & 0.2997 & 0.3538 & 18.05\\
\hline
\multirow{3}{*}{Arousal}
& 256 & 0.7153 & 0.7216 & 0.88\\
& 512 & 0.7164 & 0.7331 & 2.33\\
& 1,024 & 0.7104 & 0.7258 & 2.16\\
\hline
\multirow{3}{*}{Dominance}
& 256 & 0.6300 & 0.6379 & 1.25\\
& 512 & 0.6374 & 0.6565 & 2.99\\
& 1,024 & 0.6253 & 0.6352 & 1.58\\
\hline
\end{tabular*}
\label{tab:results_within2}
\vspace{-0.3cm}
\end{table}
We build DNNs with four layers, implemented with 256, 512 or 1,024 nodes. The speaker-dependent model is built by adding the \emph{test-A} set in the train set (Fig. \ref{fig:corpus}). This approach creates a train set with partial knowledge about the speakers in the test set. In contrast, the speaker-independent model uses only the train set. To have a fair comparison, both models are evaluated on the \emph{test-B} set with speech samples that are not used to either train or optimize the parameters of the systems. Table \ref{tab:results_within2} shows the CCC values of the models for speaker-independent and speaker-dependent conditions. The last column calculates the relative improvements achieved under the speaker-dependent condition. We observe a performance gain up to 19.03\% for valence. The performances for the arousal and dominance models also increase, but the relative improvements are less than {\color{black}3\%. The fact that the performances increase using speaker dependent sets is expected. What is unexpected is that the relative gain is significantly higher for valence than for arousal and dominance.} These results clearly show that learning emotional traits from the target speakers in the test set has clear benefits for valence, validating our hypothesis that the externalization of valence in speech has speaker-dependent traits.
\vspace{-0.3cm}
\section{Proposed Personalization Method}
\label{sec:personalization}
\subsection{Motivation}
\label{ssec:method}
The findings in Section \ref{sec:analysis_1} suggest that leveraging data from speakers in the train set that are \emph{closer} to our target speakers in the test set should benefit our SER models. This is the premise of our proposed approach. We aim to improve the prediction of valence, bridging the gap in performance between speaker-dependent and speaker-independent conditions reported in Table \ref{tab:results_within2}. {\color{black} Data sampled from the selected speakers' recordings are used to create an adaptation set, as illustrated in Figure \ref{fig:approach_a}}. Once the closest speakers are identified, we can either adapt the models or assign more weights to samples from this adapatation set. This section describes our unsupervised personalization approach to improve the prediction of valence. Unlike the speaker-dependent settings used in Section \ref{ssec:dropout_rate} (and Sec. \ref{ssec:iemocap_improv}), the analysis and experiments in this study operate with speaker-independent partitions for train, development and test sets. The assumption in our formulation is that we have the speaker identity associated with each sentence in the test set.
\vspace{-0.3cm}
\subsection{Estimation of Similarity Between Speakers}
\label{ssec:estimation_close_speakers}
A key step in our approach is to identify speakers in the train set that are \emph{closer} to the speakers in the test set. Ideally, we would like to identify speakers who externalize valence cues in speech in a similar way. This aim is difficult with no clear solutions. We simplify our formulation by searching for similarities between speakers in the space of emotional speech features. {\color{black}By exploiting similarities on the emotional feature space, we expect to focus more on emotional patterns than on speaker traits, which would be the focus of speaker embeddings created with methods such as i-vector \cite{Dehak_2011} or the x-vector \cite{Snyder_2018}}. Our approach relies on \emph{principal component analysis} (PCA) to reduce the dimension of the space, followed by fitting a \emph{Gaussian mixture model} (GMM) to the resulting reduced feature space.
We aim to quantify the similarity in the feature space between the speaker $i$ in the train set, and the speaker $j$ in the test set, $d(i,j)$. The first step is to reduce the feature space, since we consider a high dimensional feature vector (6,373D -- Sec. \ref{ssec:features}). Reducing the feature space creates a more compact feature representation, where the similarity between speakers can be more efficiently computed. We implement this step with PCA, which is a popular unsupervised dimensionality reduction technique. {\color{black}First, we estimate the zero-mean vector $\mathbf{y}_s=\mathbf{f}_s-\mathbf{\bar{f}}$, where $\mathbf{f}_s$ is the feature vector of sentence $s$, and $\mathbf{\bar{f}}$ is the mean feature vector. Then, we concatenate these $M$ vectors, creating matrix $F$ (Eq. \ref{eq:conc}). From this matrix, we estimate the sample covariance matrix $Q$ using Equation \ref{eq:sample}. Then, we compute the eigenvectors of $Q$, selecting the ones with the highest eigenvalues, which are considered as the \emph{principal components} (PCs).
\vspace{-0.3cm}
\begin{eqnarray}
F&=& [\mathbf{y}_1,\mathbf{y}_2,\ldots,\mathbf{y}_M] \label{eq:conc}\\
Q&=& \frac{1}{M-1} FF^T \label{eq:sample}
\end{eqnarray}
}
\vspace{-0.3cm}
The PCA-based feature reduction is implemented for each speaker in the test set, creating speaker-dependent transformations. {\color{black}We use the 10 most important dimensions, which explain in average 57.9\% of the variance in the feature space.} The speech sentences from speaker $i$ (train set) are projected into the PCA space associated with speaker $j$ (test set). The speech sentences from speaker $j$ are also projected in that space. After the PCA projections, we fit two separate GMMs on the reduced feature space, one for the sentences of speakers $i$ ($p_{i}^{\mathit{train}}$), and another for the sentences of speaker $j$ ($q_{j}^\mathit{test}$). The GMMs have 10 mixtures, matching the reduced dimension of the PCA projections.
Finally, we estimate the similarity between the GMMs using the \emph{Kulback Liebler Divergence} (KLD).
\vspace{-0.3cm}
\begin{eqnarray}
p_{i}^{\mathit{train}} (x_i)&=&\sum_{n=1}^{10} w_{n(i)} N(x_i,\mu_{n(i)},\Sigma_{n(i)})\\
q_{j}^\mathit{test} (x_j)&=&\sum_{n=1}^{10} w_{n(j)} N(x_j,\mu_{n(j)},\Sigma_{n(j)})\\
d(i,j) &=& KLD (p_{i}^{\mathit{train}}, q_{j}^{\mathit{test}})
\end{eqnarray}
For a given speaker $j$ in the test set, we estimate $d(i,j)$ for all the speakers in the train set, sorting their scores in increasing order. The closest speakers in the train set are the top speakers in this ranked list. This approach is repeated for each of the 50 speakers in the test set {\color{black}(i.e., we have 50 different PCA projections)}. While this step can be implemented using all the data from the test set, we use the \emph{test-A} set to have the same amount of data for each speaker (i.e., 200s -- Fig. \ref{fig:corpus}). {\color{black}Figure \ref{fig:approach_a} illustrates the process to form the adaptation set by finding the closest set of training speakers to a target speaker. Notice that the adaptation set is a subset of the train set, for which we have labels.}
\begin{figure}[tb]
\centering
\subfigure[Selection of closest speakers to create adaptation set]{
\includegraphics[width=0.98\columnwidth]{images/approach_1.pdf}
\label{fig:approach_a}
}
\subfigure[Criteria for selecting samples for the adaptation set]{
\includegraphics[width=0.98\columnwidth]{images/approach_2.pdf}
\label{fig:approach_b}
}
\caption{Illustration of proposed personalization approach. We identify the closest set of speakers in the train set to each of the target speaker in the test set. Sentences from these speakers are randomly sampled with different selection criteria.}
\label{fig:approach}
\vspace{-0.3cm}
\end{figure}
\vspace{-0.3cm}
\subsection{Personalization Approach}
\label{ssec:Personalization}
After selecting the speakers in the train set that are closest to each of the test speakers, our next step is to leverage data from these train set speakers to either personalize or adapt the emotion prediction models for valence. We propose and evaluate three alternative methods, referred to as \emph{unique speaker}, \emph{oversampling}, and \emph{weighting} approaches. The first two approaches rely on adapting a model. We built a regression model using the train set to predict valence. The weights of the pre-trained models are frozen with the exception of the last layer, which are fine-tuned using the adaptation data. The last approach requires training the network from scratch.
\noindent
\underline{Unique speaker approach}: {\color{black}
Each speaker in the test set generates a list with its $N$ closest speakers in the train set. Since some speakers in the train set may be close to more than one speaker, the total number of selected speakers after combining the lists from the 50 speakers in the test set is less or equal to $N\times50$. The unique speaker approach considers all the data from these speakers in the train set. We create the adaptation set by sampling from the data from these speakers without replacement. Therefore, each speech segment can be considered only once in the adaptation set. Figure \ref{fig:approach_b} illustrates the process for the case when we have only 2 speakers in the test set. In the example, speakers 2, 7 and 120 in the train set are found to be close to both test speakers, hence, we consider them only once when forming the adaptation set. We implement a balance sampling criterion that aims to select approximately the same amount of data for each speakers. For example, for an adaptation set of 200s, if we have 7 unique speakers selected as the closest speakers from the test speakers, as in the example, we would randomly select approximately 28.6s for each of these speakers. We adopt this approach in an attempt to diversify and balance the speech samples selected from all the speakers in the unique speakers set. This approach uses the pre-trained models trained with all the train set, personalizing the models with the adaptation set.}
\noindent
\underline {Oversampling approach}: {\color{black}A speaker in the train set may be in the list of the closest speakers for more than one speaker in the test set. The oversampling approach assumes that these samples are more relevant during the adaptation process. If a speaker is selected $C$ times (i.e., the speaker is one of the closest speakers for $C$ speakers in the test set), the oversampling method will create $C$ copies of his/her sentences before randomly drawing the samples. This process is illustrated in Figure \ref{fig:approach_b} were speakers 2, 7 and 120 in the train set are copied twice. Therefore, more samples from these speakers will appear in the adaptation set. We form the adaptation set with a balance approach, choosing approximately the same amount of data from the selected speakers. In the example from Figure \ref{fig:approach_b}, we would select 20 second for each of the 10 sets for an adaptation set of 200s. Sentences can even be repeated on the adaptation set. This approach also fine-tunes the pre-trained model using speech samples from the oversampled adaptation set.}
\noindent
\underline{Weighting approach}: The third approach to personalize a model is by increasing the weights in the loss function during the training process for speech samples in the adaptation set (i.e., same set used in the unique speaker approach). Unlike the previous two approaches, which adapt a pre-trained system, this approach trains the regression model from scratch. {\color{black}As described in Equation \ref{eq:loss}, we use $\mathcal{L}=(1 - CCC)$ as the loss function to train our models. For the weighting approach, this cost is assigned to a sample in the train set, but not on the adaptation set. For samples in the adaptation set, we multiply the cost $\mathcal{L}$ by a factor $\lambda > 1$. Therefore, an error of a sample from the adaptation set is $\lambda$ times more costly than an error made on other samples from the train set not included in the adaptation set.} We experiment with weighting ratios of 1:2, 1:3, 1:4 and 1:5, where higher weights are assigned to samples in the adaptation set. This approach uses all the train set, increasing the importance of correctly predicting samples in the adaptation set.
The proposed unsupervised adaptation schemes can be jointly applied to all the speakers in the test set, creating a single model. We refer to this approach as \emph{global adaptation} (GA) model. Alternatively, the approaches can be individually implemented for each speaker, creating as many models as speakers in the test set. This implementation only works for the unique speaker and weighting approaches. The oversampling approach does not apply in this case, since each speech segment in the adaptation set is drawn only once (i.e., we consider one test speaker at a time). We refer to this approach as the \emph{individual adaptation} (IA) model. We evaluate both implementations in Section \ref{sec:results} using adaptation sets of different sizes.
\vspace{-0.3cm}
\section{Experimental Results}
\label{sec:results}
The prediction of valence is formulated as a regression problem implemented with DNNs with four dense layers and 512 nodes per layer. {\color{black} This setting achieved the best performance for valence on Table \ref{tab:results_within2}.} We use \emph{rectified linear unit} (ReLU) at the hidden layers and linear activations for the output layer. The DNNs are trained with batch normalization for the hidden layers. We use a dropout rate of $p=0.7$ at the hidden layers. The selection of this rate follows the findings in Section \ref{ssec:dropout_rate}, which demonstrates that a higher value of dropout is important for improving the detection of valence. We pre-train the models for 200 epochs with an early stopping criterion based on the performance on the development set. The best model is used to evaluate the results. The DNNs are trained with \emph{stochastic gradient descent} (SGD) with momentum of 0.9, and a learning rate of $r=0.001$. For the unique speaker and oversampling approaches, the learning rate is reduced to $r_{\mathit{adap}}=0.0001$ while adapting the regression model. We adapt the models with these approaches for 100 extra epochs with an early stopping criterion based on the performance on the development set. For the weighting approach, we train the models from scratch for 200 epochs with early stopping criterion, maximizing the performance on the development set. The loss function (Eq. \ref{eq:loss}) relies on the \emph{concordance correlation coefficient} (CCC), which is defined in Equation \ref{eq:ccc}.
\vspace{-0.3cm}
\begin{eqnarray}
\mathcal{L} &=&(1 - CCC) \label{eq:loss}\\
\mathit{CCC} &=& \frac{2\rho\sigma_x\sigma_y}{\sigma_x^2+\sigma_y^2+(\mu_x - \mu_y)^2} \label{eq:ccc}
\end{eqnarray}
\vspace{-0.3cm}
The parameters $\mu_x$ and $\mu_y$, and $\sigma_x$ and $\sigma_y$ are the means and standard deviations of the true labels ($x$) and the predicted labels ($y$), and $\rho$ is the Pearson's correlation coefficient between them. CCC takes into account not only the correlation between the true emotional labels and their estimates, but also the difference in their means. This metric takes care of the bias-variance tradeoff when comparing the true and predicted labels. CCC is also the evaluation metric in all our experimental evaluation.
The input to the DNNs is the 6,373D acoustic feature vector (Sec. \ref{ssec:features}). The features are normalized to have zero mean and unit standard deviation. This normalization is done using the mean and standard deviation values estimated over the training samples. {\color{black}After this normalization, we expect the features to be within a reasonable range. We remove outliers by clipping the features with values that deviate more than three standard deviations from their means (i.e., $\mu_{f_i} - 3\sigma_{f_i} \leq f_i \leq \mu_{f_i} + 3\sigma_{f_i}$).} The output of the DNNs is the prediction score for the emotional attribute.
We use the speaker-independent and speaker-dependent models described in Section \ref{ssec:sp_dep} as baselines, where the results are listed in Table \ref{tab:results_within2}. The speaker-independent model does not rely on any adaptation scheme to personalize the models to perform better on the test set. As described in Section \ref{ssec:sp_dep}, the speaker-dependent model is built by adding the \emph{test-A} set to the train set, using partial information from the speakers. While this setting is not representative of the performance expected for the regression model when evaluated on speech from unknown speakers, it provides an upper bound performance to contextualize the improvements observed with our proposed personalization methods. For analysis purposes, we report the performance of the speaker-dependent models obtained with the addition of 50s, 100s, 150s, and 200s per speaker in the test set. These extra samples are obtained from the \emph{test-A} set. We consistently evaluate all the models using the \emph{test-B} set. This experimental setting creates fair comparisons, since these samples are not used to train any of the models.
\vspace{-0.3cm}
\subsection{Global Adaptation Model}
\label{ssec:one_model}
We evaluate the performance of the system with the global adaptation model, where a single regression model is built. The three adaptation schemes are implemented by considering the 50 speakers in the test set. The adaptation set is obtained by identifying the closest speakers in the train set to each speaker in the test set (Sec. \ref{ssec:estimation_close_speakers}). We implement this approach by identifying the five closest speakers in the train set ($N=5$). Section \ref{ssec:NumberClosest} shows the results with different number of speakers. We incrementally add more samples by randomly selecting 50s, 100s, 150s, 200s, and 300s from the selected speakers associated with a given speaker in the test set (Sec. \ref{ssec:Personalization}). This process is repeated for each of the speakers in the test set to observe the performance trend as a function of the size of the adaptation data.
\begin{comment}
\begin{figure}[tb]
\centering
\includegraphics[width=8cm]{images/Global_model_legend_1.pdf}\\
\includegraphics[width=7.5cm]{images/Global_model_legend_2.pdf}\\
\includegraphics[width=1\columnwidth]{images/Global_models_plot_new.pdf}
\caption{Results on the \emph{test-B} set for the global adaptation model for the unique speaker and oversampling methods. It shows the results for the speaker-dependent and speaker-independent baselines. It also shows the speaker-dependent model implemented with different size of data from the \emph{test-A} set.}
\label{fig:adapt1}
\end{figure}
\end{comment}
\begin{figure}[tb]
\centering
\includegraphics[width=8cm]{images/Global_model_legend_1.pdf}\\
\includegraphics[width=7.5cm]{images/Global_model_legend_2.pdf}\\
\includegraphics[width=0.9\columnwidth]{images/ga_model.pdf}
\caption{Results on the \emph{test-B} set for the global adaptation model for the unique speaker and oversampling methods. It shows the results for the speaker-dependent and speaker-independent baselines. It also shows the speaker-dependent model implemented with different size of data from the \emph{test-A} set.}
\label{fig:adapt1}
\vspace{-0.3cm}
\end{figure}
First, we evaluate the unique speaker and oversampling approaches, which rely on model adaptation. Figure \ref{fig:adapt1} shows the results. The two solid horizontal lines are the speaker-dependent (green) and speaker-independent (red) baselines. The green line (triangle) corresponds to the speaker-dependent model as we increase the amount of data from the \emph{test-A} set. The performance for the unique speaker approach is shown in blue (square). We clearly observe an improvement over the speaker-independent baseline, which demonstrate that the adaptation scheme is effective even with a very small adaptation set (e.g., 50s). The pink (asterisk) line in Figure \ref{fig:adapt1} shows the performance of the oversampling approach, which leads to better performance than the unique speaker approach. {\color{black}Both approaches use the same amount of adaptation data, but rely on different criteria to select the adaptation samples. Adding samples in multiple mini-batches according to the oversampling strategy is beneficial to improve the prediction of valence. For both models, we observe consistent improvement in CCC as more data is added into the adaptation set from 50s to 200s. After this point, the performance seems to saturate, observing fluctuations. Interestingly, adaptation with 200s of data using the oversampling approach leads to better performance than the speaker-dependent baseline implemented with 50s and 100s of data from the \emph{test-A} set.}
\begin{comment}
\begin{figure}[tb]
\centering
\includegraphics[width=8cm]{images/weighting_models_legend_1.pdf}
\includegraphics[width=8cm]{images/weighting_models_legend_2.pdf}
\includegraphics[width=0.9\columnwidth]{images/weighting_models_new.pdf}
\caption{Results on the \emph{test-B} set for the global adaptation model for the weighting approach. The figure shows the results for the speaker-dependent and speaker-independent baselines.}
\label{fig:adapt3}
\end{figure}
\end{comment}
\begin{figure}[tb]
\centering
\includegraphics[width=8cm]{images/weighting_models_legend_1.pdf}
\includegraphics[width=8cm]{images/weighting_models_legend_2.pdf}
\includegraphics[width=0.9\columnwidth]{images/ga2.pdf}
\caption{Results on the \emph{test-B} set for the global adaptation model for the weighting approach. The figure shows the results for the speaker-dependent and speaker-independent baselines.}
\label{fig:adapt3}
\vspace{-0.3cm}
\end{figure}
Second, we evaluate the performance of the weighting method, which trains the models from scratch, weighting more the samples in the adaptation set (Sec. \ref{ssec:Personalization}). We evaluate the amount of data included in this selected set, including 50s, 100s, 150s, 200s per speaker in the test set. We also consider using all the data from the selected speakers. Only samples in this set are weighed more, implementing this approach with different ratios (1:2, 1:3, 1:4 and 1:5). Figure \ref{fig:adapt3} shows the results. For weighting ratios 1:2 and 1:3, the performance gradually increases by adding more data in the selected set, peaking at 200s per speaker. However, the opposite trend is observed when the weighting ratios are either 1:4 or 1:5. Increasing the weights of speech samples in the adaptation set diminishes the information provided in the rest of the train data, leading to worse performance. The best performance is obtained with a weighting ratio of 1:3, when the selected set includes 200s per speaker. Figures \ref{fig:adapt1} and \ref{fig:adapt3} show that this setting achieves similar performance than the best setting for the oversampling approach.
\vspace{-0.3cm}
\subsection{Individual Adaptation Model}
\label{ssec:50_models}
This section presents the results of our approach implemented using the individual adaptation model. This approach builds one model for each of the speakers in the test set, creating the adaptation set with the samples from the speakers in the train set that are closer to this speaker (i.e., 50 separate models). {\color{black}For each model, we attempt to select equal duration of speech samples from each of the closest train speakers in the adaptation set to balance the amount of data used from each speaker.} After adapting the models, the results are reported by concatenating the predicted vectors for each speaker in the test set. We estimate the CCC values for the entire \emph{test-B} set. The approach is implemented with the five closest speakers to each speaker in the test set. The performance of the approaches are reported by increasing the adaptation set. As explained in Section \ref{ssec:Personalization}, we only evaluate the unique speaker and weighting approaches, since the oversampling approach cannot be implemented with a single speaker in the test set.
\begin{comment}
\begin{figure}[tb]
\centering
\includegraphics[width=8cm]{images/Individual_model_legend_1.pdf}
\includegraphics[width=8cm]{images/Individual_model_legend_2.pdf}
\includegraphics[width=1\columnwidth]{images/Individual_models_plot_new.pdf}
\caption{Results on the \emph{test-B} set for the individual adaptation model using the unique speaker and weighting approaches. The figure shows the results for the speaker-dependent and speaker-independent baselines.}
\label{fig:adapt2}
\end{figure}
\end{comment}
\begin{figure}[tb]
\centering
\includegraphics[width=8cm]{images/Individual_model_legend_1.pdf}
\includegraphics[width=8cm]{images/Individual_model_legend_2.pdf}
\includegraphics[width=1\columnwidth]{images/ia1.pdf}
\caption{Results on the \emph{test-B} set for the individual adaptation model using the unique speaker and weighting approaches. The figure shows the results for the speaker-dependent and speaker-independent baselines.}
\label{fig:adapt2}
\vspace{-0.3cm}
\end{figure}
Figure \ref{fig:adapt2} shows the CCC scores obtained for different sizes of the adaptation set. The results show improvements over the speaker-independent baseline performance. The performance gains are consistently higher when all the data from the closest speakers are used. The weighting approach also leads to better performance than the unique speaker approach. However, the results are worse than the CCC values of approaches implemented with the global adaptation model (Figs. \ref{fig:adapt1} and \ref{fig:adapt3}). {\color{black}The decrease in performance in the IA model can be associated with the adaptation procedure. In the IA models, we adapt separate models for each target speaker. This procedure involves adapting 50 different models where their parameters and hyperplanes change based on a small adaptation set. This approach may be too aggressive, resulting in lower performance than adapting a single model, considering all the target speakers. We have seen similar observations in the area of active learning in SER tasks, where a more conservative adaptation strategy led to better results \cite{Abdelwahab_2017_2}. In the GA models, we are adapting a single model, where the shape and direction of the change of the hyperplane are smoother than in the case of the IA models achieving a better and stable performance}.
\vspace{-0.3cm}
\subsection{Number of Closest Speakers}
\label{ssec:NumberClosest}
This section evaluates the number of closest speakers ($N$) from the train set selected for each speaker in the test set. If this number is too small, the adaptation set will not have enough variability. If this number is too high, we will select speakers that are not very close to the target speakers. We implement the weighting approach with the ratio 1:3. Figure \ref{fig:Closest} shows the results on the \emph{test-B} set for the models implemented with the global and individual adaptation models using the proposed adaptation schemes ($N\in \{3,5,10\}$). The results clearly show that $N=5$ is a good balance, obtaining higher performance across conditions. We set $N=5$ for the rest of the experimental evaluation.
\begin{figure}[tb]
\centering
\includegraphics[width=0.9\columnwidth]{images/close_spkrs.pdf}
\caption{Evaluation of the optimal number of closer speakers ($N$) from the train set for each of the speakers in the test set. The training data from the selected speakers is included in the adaptation set. The results corresponds to the CCC values obtained with different methods on the \emph{test-B} set.}
\label{fig:Closest}
\end{figure}
\vspace{-0.3cm}
\subsection{Minimizing Loss on Adaptation Set}
\label{ssec:training_loss}
\begin{figure}[t]
\centering
\includegraphics[width=5.5cm]{images/legend_4.pdf}
\subfigure[Global adaptation model]{
\includegraphics[trim=0 0.5cm 0 0, clip,width=0.9\columnwidth]{images/fig8_ga}
\label{fig:tr_loss_a}
}
\subfigure[Individual adaptation model]{
\includegraphics[trim=0 0 0 0, clip,width=0.9\columnwidth]{images/fig8_ia.pdf}
\label{fig:tr_loss_b}
}
\caption{Improvement in performance achieved by monitoring the loss function on the adaptation set while fine-tuning the models. The percentages over the bars indicate the relative improvements over the speaker-independent baseline. The figure shows the performance observed in the \emph{test-B} set.}
\vspace{-0.4cm}
\label{fig:tr_loss}
\end{figure}
The results of the model adaptation presented in Sections \ref{ssec:one_model}, \ref{ssec:50_models} and \ref{ssec:NumberClosest} are obtained by minimizing the loss function on the development set, a practice that aims to increase the generalization of the models. In our formulation, however, we aim to personalize a model towards a known set of speakers in the test set. Given the assumption that the selected speakers in the train set are similar to the test set, we can optimize the system by minimizing the loss function on the adaptation set when fine-tuning the model (i.e., samples from the selected speakers used for adaptation). Since we want to \emph{personalize} the system, it is theoretically correct to maximize the performance of the model on data that is found to be closer to the target speakers. This section evaluates this idea. We record the best performance of the model by using an early stopping criterion on the adaptation loss. The only special case is the weighting approach which trains the models from scratch. Since monitoring the loss exclusively on the adaptation set will ignore other samples in the train set, we decide to monitor the loss on the full train set. The differences in the weights increase the emphasis of samples from the adaptation set, achieving essentially the same goal. We use the adaptation set using the 200s condition. The weighting approach is implemented with the 1:3 ratio, which gave the best CCC in previous experiments (Figs. \ref{fig:adapt3} and \ref{fig:adapt2}).
Figure \ref{fig:tr_loss} shows the performance improvements achieved by minimizing the loss function on the adaptation set. The darker bars indicate the results obtained while monitoring the training loss, and the lighter bars indicate the results obtained while monitoring the development loss. We include the relative improvements over the speaker-independent baseline with numbers on top of the bars. The relative improvements are constantly higher when maximizing the performance on the train set, tailoring even more the models to the test speakers. We conducted a one-tailed student t-test over ten trials to assert if the differences in performance between minimizing the loss function on the development and train sets are statistically significant. We assert significance when $p$-values$<$0.05. The statistical test indicates that the differences are statistically significant for all the adaptation approaches implemented with either the global or individual adaptation models. This approach leads to performances that are closer to the speaker-dependent baseline.
\begin{comment}
\begin{table}[t]
\caption{Significance of improvement in CCC scores while monitoring the training loss as compared with monitoring the development loss during adaptation. The significance is recorded in terms of $p$-values.}
\centering
\vspace{-0.2cm}
\fontsize{7}{8}\selectfont
\begin{tabular*}{1\columnwidth}{@{\extracolsep{\fill}}c|c|c}
\hline
Adaptation Methods & Approach & $p$-value$\le$\\
\hline
\hline
\multirow{3}{*}{\shortstack{Global Model\\acrosss Speakers}}
& Oversampling & 0.005\\
& Unique Speakers & 0.0005\\
& Weighting & 0.025\\
\hline
\multirow{2}{*}{\shortstack{One Model per Speaker}}
& Unique Speakers & 0.025\\
& Weighting & 0.025\\
\hline
\end{tabular*}
\vspace{-0.4cm}
\label{tab:trloss_pvalues}
\end{table}
\end{comment}
\vspace{-0.3cm}
\subsection{Performance on Other Emotional Attributes}
\label{ssec:comparison}
The premise of this study is that valence is externalized in speech with more speaker-dependent traits than arousal and dominance. Therefore, personalization approaches to bring the models closer to the test speakers should have a higher impact on valence. This section implements the proposed adaptation schemes on arousal and dominance, comparing the relative improvements over the speaker-independent baseline with the results for valence.
\begin{comment}
\begin{table}[t]
\caption{Performance achieved using different adaptation approaches on the \emph{test-B} set. The table reports the performance gain over the speaker-independent baseline reported in Table \ref{tab:results_within2}}
\centering
\fontsize{8}{10}\selectfont
\begin{tabular*}{1\columnwidth}{@{\extracolsep{\fill}}c|l|cc|cc}
\hline
& \multicolumn{1}{c|}{Adaptation} &\multicolumn{2}{c|}{Minimizing loss} & \multicolumn{2}{c}{Minimizing loss}\\
& \multicolumn{1}{c|}{Scheme} &\multicolumn{2}{c|}{in development set} & \multicolumn{2}{c}{in train set}\\
\cline{3-6}
&&CCC & Gain (\%)& CCC & Gain (\%)\\
\hline
\hline
\multirow{5}{*}{\rotatebox{90}{Valence}}
&Unique speaker (GA)& 0.3180 & 6.78 & 0.3429 & 15.17\\
&Oversampling (GA)& 0.3279 & 10.14 & 0.3390 & 13.86\\
&Weighting (GA)& 0.3287 & 10.4 & 0.3338 & 12.12\\
&Unique speaker (IA)& 0.3106 & 4.3 & 0.3157 & 5.04\\
&Weighting (IA)& 0.3172 & 6.51 & 0.3229 & 8.46\\
\hline
\multirow{5}{*}{\rotatebox{90}{Arousal}}
&Unique speaker (GA)& 0.7676 & 0.23 & 0.7744 & 1.10\\
&Oversampling (GA)& 0.7648 & 0.33 & 0.7783 & 1.62\\
&Weighting (GA)& 0.7660 & 0.013 & 0.7681 & 0.28\\
&Unique speaker (IA)& 0.7661 & 0.026 & 0.7688 & 0.38\\
&Weighting (IA)& 0.7662 & 0.047 & 0.7695 & 0.47\\
\hline
\multirow{5}{*}{\rotatebox{90}{Dominance}}
&Unique speaker (GA)& 0.6958 & 0.23 & 0.6992 & 0.72\\
&Oversampling (GA)& 0.6979 & 0.54 & 0.7005 & 0.91\\
&Weighting (GA)& 0.6949 & 0.10 & 0.6977 & 0.50\\
&Unique speaker (IA)& 0.6954 & 0.18 & 0.6981 & 0.56\\
&Weighting (IA)& 0.6962 & 0.29 & 0.6996 & 0.77\\
\hline
\end{tabular*}
\label{tab:results_all}
\end{table}
\end{comment}
\begin{comment}
\begin{table}[t]
\caption{Performance achieved using different adaptation approaches on the \emph{test-B} set. The table reports the performance gain over the speaker-independent baseline reported in Table \ref{tab:results_within2}}
\centering
\fontsize{8}{10}\selectfont
\begin{tabular*}{1\columnwidth}{@{\extracolsep{\fill}}c|l|cc|cc}
\hline
& \multicolumn{1}{c|}{Adaptation} &\multicolumn{2}{c|}{Minimizing loss} & \multicolumn{2}{c}{Minimizing loss}\\
& \multicolumn{1}{c|}{Scheme} &\multicolumn{2}{c|}{in development set} & \multicolumn{2}{c}{in adaptation set}\\
\cline{3-6}
&&CCC & Gain (\%)& CCC & Gain (\%)\\
\hline
\hline
\multirow{5}{*}{\rotatebox{90}{Valence}}
&Unique speaker (GA)& 0.3310 & 6.05 & 0.3340 & 7.01\\
&Oversampling (GA)& 0.3408 & 9.19 & 0.3454 & 10.66\\
&Weighting (GA)& \textbf{0.3424} & \textbf{9.70} & \textbf{0.3496} & \textbf{12.01}\\
&Unique speaker (IA)& 0.3285 & 5.25 & 0.3299 & 5.70\\
&Weighting (IA)& 0.3298 & 5.67 & 0.3312 & 6.12\\
\hline
\multirow{5}{*}{\rotatebox{90}{Arousal}}
&Unique speaker (GA)& 0.7281 & 0.48 & 0.7297 & 0.70\\
&Oversampling (GA)& 0.7298 & 0.71 & 0.7319 & 1.00\\
&Weighting (GA)& \textbf{0.7301} & \textbf{0.75} & \textbf{0.7331} & \textbf{1.17}\\
&Unique speaker (IA)& 0.7265 & 0.26 & 0.7283 & 0.51\\
&Weighting (IA)& 0.7277 & 0.42 & 0.7299 & 0.73\\
\hline
\multirow{5}{*}{\rotatebox{90}{Dominance}}
&Unique speaker (GA)& 0.6328 & 0.22 & 0.6354 & 0.63\\
&Oversampling (GA)& 0.6342 & 0.44 & 0.6368 & 0.85\\
&Weighting (GA)& \textbf{0.6343} & \textbf{0.45} & \textbf{0.6379} & \textbf{1.02}\\
&Unique speaker (IA)& 0.6318 & 0.06 & 0.6350 & 0.57\\
&Weighting (IA)& 0.6331 & 0.26 & 0.6359 & 0.71\\
\hline
\end{tabular*}
\label{tab:results_all_version6}
\end{table}
\end{comment}
\begin{table}[t]
\caption{Performance achieved using different adaptation approaches on the \emph{test-B} set. The table reports the performance gain over the speaker-independent baseline reported in Table \ref{tab:results_within2}}
\centering
\fontsize{8}{9}\selectfont
\begin{tabular*}{1\columnwidth}{@{\extracolsep{\fill}}c|l|cc|cc}
\hline
& \multicolumn{1}{c|}{Adaptation} &\multicolumn{2}{c|}{Minimizing loss} & \multicolumn{2}{c}{Minimizing loss}\\
& \multicolumn{1}{c|}{Scheme} &\multicolumn{2}{c|}{in development set} & \multicolumn{2}{c}{in adaptation set}\\
\cline{3-6}
&&CCC & Gain (\%)& CCC & Gain (\%)\\
\hline
\hline
\multirow{5}{*}{\rotatebox{90}{Valence}}
&Unique speaker (GA)& 0.3295 & 6.87 & 0.3320 & 7.68\\
&Oversampling (GA)& 0.3378 & 9.56 & 0.3447 & 11.80\\
&Weighting (GA)& \textbf{0.3412} & \textbf{10.67} & \textbf{0.3500} & \textbf{13.52}\\
&Unique speaker (IA)& 0.3332 & 8.07 & 0.3366 & 9.17\\
&Weighting (IA)& 0.3319 & 7.65 & 0.3385 & 9.79\\
\hline
\multirow{5}{*}{\rotatebox{90}{Arousal}}
&Unique speaker (GA)& 0.7196 & 0.44 & 0.7221 & 0.79\\
&Oversampling (GA)& 0.7209 & 0.62 & 0.7258 & 1.31\\
&Weighting (GA)& \textbf{0.7222} & \textbf{0.80} & \textbf{0.7296} & \textbf{1.84}\\
&Unique speaker (IA)& 0.7185 & 0.29 & 0.7267 & 1.43\\
&Weighting (IA)& 0.7202 & 0.53 & 0.7271& 1.49\\
\hline
\multirow{5}{*}{\rotatebox{90}{Dominance}}
&Unique speaker (GA)& 0.6410 & 0.56 & 0.6415 & 0.64\\
&Oversampling (GA)& 0.6428 & 0.84 & 0.6430 & 0.87\\
&Weighting (GA)& \textbf{0.6433} & \textbf{0.92} & \textbf{0.6451} & \textbf{1.20}\\
&Unique speaker (IA)& 0.6399 & 0.39 & 0.6419 & 0.70\\
&Weighting (IA)& 0.6417 & 0.67 & 0.6422 & 0.75\\
\hline
\end{tabular*}
\label{tab:results_all_version6}
\vspace{-0.3cm}
\end{table}
\begin{comment}
\begin{table}[t]
\caption{MSP-IMPROV: Performance achieved using different adaptation approaches on the \emph{test-B} set. The table reports the performance gain over the speaker-independent baseline reported in Table \ref{tab:results_within2}}
\centering
\fontsize{8}{10}\selectfont
\begin{tabular*}{1\columnwidth}{@{\extracolsep{\fill}}c|l|cc|cc}
\hline
& \multicolumn{1}{c|}{Adaptation} &\multicolumn{2}{c|}{Minimizing loss} & \multicolumn{2}{c}{Minimizing loss}\\
& \multicolumn{1}{c|}{Scheme} &\multicolumn{2}{c|}{in development set} & \multicolumn{2}{c}{in adaptation set}\\
\cline{3-6}
&&CCC & Gain (\%)& CCC & Gain (\%)\\
\hline
\hline
\multirow{5}{*}{\rotatebox{90}{Valence}}
&Unique speaker (GA)& 0.3866 & 13.04 & 0.3931 & 14.94\\
&Oversampling (GA)& 0.4014 & 17.36 & 0.3955 & 15.64\\
&Weighting (GA)& 0.3951 & 15.52 & 0.4025 & 17.69\\
&Unique speaker (IA)& 0.3855 & 12.71 & 0.3892 & 13.80\\
&Weighting (IA)& 0.3873 & 13.24 & 0.3901 & 14.06\\
\hline
\multirow{5}{*}{\rotatebox{90}{Arousal}}
&Unique speaker (GA)& 0.6057 & 1.66 & 0.6080 & 2.04\\
&Oversampling (GA)& 0.6086 & 2.14 & 0.6175 & 3.64\\
&Weighting (GA)& 0.6191 & 3.91 & 0.6205 & 4.14\\
&Unique speaker (IA)& 0.6087 & 2.16 & 0.6187 & 3.84\\
&Weighting (IA)& 0.6095 & 2.29 & 0.6144 & 3.12\\
\hline
\multirow{5}{*}{\rotatebox{90}{Dominance}}
&Unique speaker (GA)& 0 & 0 & 0 & 0\\
&Oversampling (GA)& 0 & 0 & 0 & 0\\
&Weighting (GA)& 0 & 0 & 0 & 0\\
&Unique speaker (IA)& 0 & 0 & 0 & 0\\
&Weighting (IA)& 0 & 0 & 0 & 0\\
\hline
\end{tabular*}
\label{tab:results_all}
\end{table}
\end{comment}
Table \ref{tab:results_all_version6} reports the performance and relative improvements over the speaker-independent baseline for valence, arousal, and dominance when using the proposed methods. {\color{black}The relative improvements for arousal and dominance are less than 1.9\%, mirroring the results observed in Table \ref{tab:results_within2} that shows relative improvements less than 3\% for arousal and dominance when labeled data from the target speakers is available (i.e., speaker-dependent condition). Therefore, it is not surprising that the method does not lead to big improvements for arousal and dominance where there is little room for improvements. We argue that the approach is successful even for arousal or dominance, since the relative improvements for these emotional attributes are similar to the values reported in Table 1 under speaker dependent conditions.} In contrast, the relative improvements for valence are as high as 13.52\%. These results validate our hypothesis that exploiting speaker-dependent characteristics between train and test speakers helps to personalize a SER system in the prediction of valence.
\begin{comment}
\begin{figure*}[tb]
\centering
\includegraphics[width=15cm]{images/legend_1.png}\\
\subfigure[Arousal]{
\includegraphics[width=5.5cm]{images/aro_improvement_1.pdf}
\label{fig:aro_imp}
}
\subfigure[Valence]{
\includegraphics[width=5.5cm]{images/val_improvement_2.pdf}
\label{fig:val_imp}
}
\subfigure[Dominance]{
\includegraphics[width=5.5cm]{images/dom_improvement_2.pdf}
\label{fig:dom_imp}
}
\caption{Improvement in performance achieved using different adaptation approaches. The percentages over the bars indicate the amount of improvement from the speaker-independent baseline. {\color{red} transform this figure into a table similar to Table \ref{tab:results_within2}, can we include optimization on train and development set?} {\color{red}- Almost done!}}
\label{fig:comparison}
\end{figure*}
\begin{table}[tb]
\caption{Adaptation results with oversampling and unique speakers approach. Speaker-independent baseline CCC = 0.2978 }
\centering
\fontsize{7}{8}\selectfont
\begin{tabular*}{1.0\columnwidth}{@{\extracolsep{\fill}}c|c|c|c|c|c}
\hline
& \multicolumn{5}{c}{Amount of adaptation data in seconds}\\
\hline
\hline
\shortstack{Approaches and \\ methods} & $50s$ & $100s$ & $150s$ & $200s$ & $300s$\\
\hline
\hline
\shortstack{Speaker-dependent \\ upper bound}& 0.3134 & 0.3221 & 0.3296 & 0.3466\\
\hline
\shortstack{Oversampling \\ Single model}& 0.3185 & 0.3210 & 0.3221 & 0.3280 & 0.3214\\
\hline
\shortstack{Unique samples \\ Single model}& 0.3102 & 0.3152 & 0.3161 & 0.3178 & 0.3084\\
\hline
\end{tabular*}
\label{tab:res2}
\end{table}
\end{comment}
\vspace{-0.3cm}
{\color{black}
\subsection{Comparison with Other Baselines}
\label{ssec:baselines}
\begin{table}[t]
\caption{Comparison of results in terms CCC obtained with the proposed personalization approach and other methods. All the experiments are done with MSP-Podcast corpus and evaluated on Test-B set. STL: Single Task Learning (Speaker-Independent baseline), MTL: Multi-Task Learning}
\centering
\fontsize{8}{9}\selectfont
\begin{tabular*}{0.98\columnwidth}{@{\extracolsep{\fill}}c|c|c|c|c}
\hline
Attributes & Proposed & STL & MTL & Ladder\\
&Approach&&&Networks\\
\hline
\multirow{1}{*}{Valence}
& \textbf{0.3500} & 0.3083 & 0.3302 & 0.3158\\
\hline
\multirow{1}{*}{Arousal}
& 0.7296 & 0.7164 & 0.7214 & \textbf{0.7421}\\
\hline
\multirow{1}{*}{Dominance}
& 0.6451 & 0.6374 & 0.6287 & \textbf{0.6498}\\
\hline
\end{tabular*}
\label{tab:comparison_results}
\vspace{-0.3cm}
\end{table}
We compare the results from our proposed personalization approach with other state-of-the art approaches. We consider \emph{multi-task learning} (MTL) \cite{Parthasarathy_2017_3}, and ladder network \cite{Parthasarathy_2018_3} for SER. These are some of the most successful approaches used in SER. The MTL approach jointly predicts arousal, valence and dominance, where the loss function is a weighted sum of the individual attribute losses. We used $\mathcal{L} = (1 - CCC)$ as the loss for each attribute ($\mathcal{L}_{aro}$, $\mathcal{L}_{val}$, $\mathcal{L}_{dom}$). Equation \ref{eq:mtl} shows the overall loss function, where $(\alpha,\beta) \in [0, 1]$ and $\alpha + \beta \leq 1$. The hyperparameters $\alpha$ and $\beta$ are tuned on the development set.
\vspace{-0.3cm}
\begin{eqnarray}
\mathcal{L}_{MTL} = \alpha \mathcal{L}_{aro} + \beta \mathcal{L}_{val} + (1 - \alpha - \beta) \mathcal{L}_{dom}
\label{eq:mtl}
\end{eqnarray}
The ladder network approach follows the implementation presented by Parthasarathy and Busso \cite{Parthasarathy_2018_3}. This method uses the reconstruction of feature representations at various layers in a DNN as auxiliary tasks. In addition, we consider the speaker-independent baseline discussed in previous sections, referred here to as \emph{single task learning} (STL).
Table \ref{tab:comparison_results} describes the results, which clearly show significant improvements over alternative methods in CCC for valence using our proposed approach, reinforcing our claim about the benefits of personalization in the estimation of valence. Other approaches are effective for arousal and dominance.
\vspace{-0.3cm}
\subsection{Performance on Other Corpora}
\label{ssec:iemocap_improv}
\begin{table}[t]
\caption{Comparison of CCC values between speaker-independent and speaker-dependent conditions for IEMOCAP (IEM) and MSP-IMPROV (IMP) databases. The DNN is trained with three layers and $256$ nodes per layer. The column `Gain' shows the relative improvement by training with partial data from the target speakers (\emph{test-A} set).}
\centering
\fontsize{8}{9}\selectfont
\begin{tabular*}{0.98\columnwidth}{@{\extracolsep{\fill}}c|c|c|c|c}
\hline
Attributes & Database & Speaker & Speaker & Gain\\
&&Independent&Dependent\\
\cline{3-5}
&& \emph{Test-B} set& \emph{Test-B} set& (\%)\\
\hline
\hline
\multirow{2}{*}{Valence}
& IEM & 0.4428 & 0.5072 & 14.54\\
& IMP & 0.3420 & 0.4164 & 21.75\\
\hline
\multirow{2}{*}{Arousal}
& IEM & 0.6953 & 0.7255 & 4.34\\
& IMP & 0.5958 & 0.6218 & 4.36\\
\hline
\multirow{2}{*}{Dominance}
& IEM & 0.5444 & 0.5678 & 4.29\\
& IMP & 0.5625 & 0.4911 & 6.18\\
\hline
\end{tabular*}
\label{tab:results_within_iemcap_improv}
\end{table}
\begin{table}[t]
\caption{IEMOCAP and MSP-IMPROV: Performance achieved using different adaptation approaches on the \emph{test-B} set. The table reports the performance gain over the speaker-independent baselines reported in Table \ref{tab:results_within_iemcap_improv}. All the experimental evaluations are done by minimizing the loss in the development set.}
\centering
\fontsize{8}{9}\selectfont
\begin{tabular*}{1\columnwidth}{@{\extracolsep{\fill}}c|l|cc|cc}
\hline
& \multicolumn{1}{c|}{Adaptation} &\multicolumn{2}{c|}{USC-IEMOCAP} & \multicolumn{2}{c}{MSP-IMPROV}\\
& \multicolumn{1}{c|}{Scheme} & & \\
\cline{3-6}
&&CCC & Gain (\%)& CCC & Gain (\%)\\
\hline
\hline
\multirow{5}{*}{\rotatebox{90}{Valence}}
&Unique speaker (GA)& 0.4761 & 7.52 & 0.3866 & 13.04\\
&Oversampling (GA)& 0.4790 & 8.17 & 0.4014 & 17.36\\
&Weighting (GA)& 0.4889 & 10.41 & 0.3915 & 15.52\\
&Unique speaker (IA)& 0.4725 & 6.70 & 0.3855 & 12.71\\
&Weighting (IA)& 0.4759 & 7.47 & 0.3873 & 13.24\\
\hline
\multirow{5}{*}{\rotatebox{90}{Arousal}}
&Unique speaker (GA)& 0.6988 & 0.50 & 0.6057 & 1.66\\
&Oversampling (GA)& 0.7001 & 0.69 & 0.6086 & 2.14\\
&Weighting (GA)& 0.7002 & 0.70 & 0.6191 & 3.91\\
&Unique speaker (IA)& 0.6966 & 0.18 & 0.6087 & 2.16\\
&Weighting (IA)& 0.7000 & 0.68 & 0.6095 & 2.29\\
\hline
\multirow{5}{*}{\rotatebox{90}{Dominance}}
&Unique speaker (GA)& 0.5491 & 0.86 & 0.4718 & 2.01\\
&Oversampling (GA)& 0.5500 & 1.02 & 0.4731 & 2.29\\
&Weighting (GA)& 0.5495 & 0.93 & 0.4820 & 4.21\\
&Unique speaker (IA)& 0.5451 & 0.12 & 0.4710 & 1.83\\
&Weighting (IA)& 0.5496 & 0.95 & 0.4713 & 1.90\\
\hline
\end{tabular*}
\label{tab:results_all_iemocap_improv}
\vspace{-0.3cm}
\end{table}
We validate the effectiveness of the proposed personalization approaches with other emotional databases. We use a $K$-fold cross-validation strategy to train DNN models using the USC-IEMOCAP and MSP-IMPROV databases. In each fold, we consider $2$ speakers as the test speakers and the rest of the speakers as the train speakers. With this cross-validation approach, all the speakers are at some point considered as the test speakers. The final results are averaged across the $K$ folds. Similar to Figure \ref{fig:corpus}, we split the test set of both IEMOCAP and MSP-IMPROV databases into two partitions for this study: \emph{test-A} and \emph{test-B} sets. The \emph{test-A} set includes 200s of recording for each of the speakers in the test set. This set is reserved for finding the closest training speakers to the target speakers. The \emph{test-B} set includes the rest of the recordings in the test set. We implement the global and individual adaptation models with all the different adaptation approaches by minimizing the loss in the development set.
Table \ref{tab:results_within_iemcap_improv} shows the CCC results for speaker-independent and speaker-dependent conditions using the USC-IEMOCAP and the MSP-IMPROV corpora. The results validate the findings in Table \ref{tab:results_within2}, showing that the speaker-dependent condition leads to higher relative gains for valence than for arousal or dominance.
Table \ref{tab:results_all_iemocap_improv} shows the results obtained with different adaptation schemes on the USC-IEMOCAP and MSP-IMPROV databases. The results are consistent with the findings observed with the MSP-Podcast corpus. We observe that the relative improvements over the speaker-independent baseline are much higher for valence than the relative improvements for arousal and dominance. With the USC-IEMOCAP corpus, we achieve relative gains in performance up to 10.41\% for valence whereas less than 1.03\% for arousal and dominance. Similarly, with the MSP-IMPROV corpus, we achieve relative gains in performance up to 17.36\% for valence whereas less than 4.22\% for arousal and dominance. These results show the effectiveness of our proposed approach applied to other emotional corpora, reinforcing our finding about the speaker-dependent nature of valence emotional cues.
}
\vspace{-0.3cm}
\section{Conclusions}
\label{sec:conclusion}
This paper demonstrated that a valence prediction system can be personalized to target speakers by exploiting speaker-dependent traits. The study proposed to create an adaptation set by identifying speakers in the train set that are closer to the speakers in the test set. Since we evaluate the similarity between speakers by comparing the acoustic feature spaces associated with each speaker without using emotional labels, the adaptation approaches are fully unsupervised. We proposed three methods to create this adaptation set: unique speaker, oversampling and weighting approaches. The adaptation sets from the selected \emph{closer} speakers are used to personalize the DNNs to the speakers in the test set. The experimental results showed that the global adaptation models achieved better performance than the individual adaptation models. Further improvements are observed when the loss function is minimized by monitoring the loss in the adaptation set. The proposed adaptation schemes lead to relative improvements up to 13.52\% over a speaker-independent baseline. We observed significant improvements in performance, even when only a few seconds of adaptation data (belonging to the train set) for each of the speakers in the test set was used for adaptation. However, increasing the amount of adaptation data did not contribute to further improvements of the model. The maximum performance gains were observed for 200s of adaptation data for each of the speakers in the test set. {\color{black}We also demonstrated the effectiveness of the proposed personalization approaches with the USC-IEMOCAP and MSP-IMPROV databases, showing consistent findings.}
{\color{black}
There are many interesting and important applications where our personalization models can be very helpful. In healthcare applications where medical data cannot be frequently obtained, a personalized system can keep track of a patient's expressive behaviors and his/her medical record for better and efficient treatment. Another example is on personal assistant systems on mobile devices, which are often used by a limited number of users. Over time, the system can collect enough data from the target users to improve their emotion recognition systems. For cloud-based applications, the training data needs to be stored in the cloud, instead of the edge device. With a simple modification on the approach to obtain the PCA projections (i.e., finding a common PCA space across testing speakers), the training data can be pre-estimated and stored. Therefore, this approach does not require storage or computational resources on the edge devices and can be efficiently implemented during inferences.}
This study demonstrated the importance of exploiting the speaker-dependent traits observed in the externalization of valence from speech, which led to clear improvements. It also showed that we can personalize SER models by just finding speakers in the train set that are similar to the target speakers. The proposed formulation is flexible, requiring only to know the block of data associated with each speaker in the test set. This assumption is reasonable since it is straightforward to group data per speaker in many practical applications (e.g., assigning all the speech collected during a call center session to a single user). As a future work, we will evaluate more sophisticated methods to assess the similarity of speakers by considering more than acoustic similarities. Also, we will explore the use of the proposed adaptation schemes in other deep learning frameworks such as autoencoders, \emph{generative adversarial networks} (GANs) or \emph{long short-term memory} (LSTM). Another open question is to investigate adaptation schemes that are effective for arousal and dominance.
\appendices
\section*{Acknowledgment}
This study was funded by National Science Foundation (NSF) career award IIS-1453781
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
|
{
"redpajama_set_name": "RedPajamaArXiv"
}
| 2,286
|
Alice Wairimu Nderitu (geb. vor 1980) ist eine kenianische Friedens- und Konfliktforscherin und Mediatorin. Sie ist seit 2020 UN-Sonderberaterin zur Verhütung von Völkermord.
Herkunft und Ausbildung
Alice Nderitu wuchs in Njabini im zentralkenianischen Nyandarua County auf. Ihr Vater Vincent Nderitu arbeitete am Sasumua-Damm in Kinangop. Ihre Kindheit und Jugend war geprägt durch den gesellschaftlichen Umbruch nach dem Mau-Mau-Krieg, der Unabhängigkeit Kenias 1957 und die nachfolgende Afrikanisierung. Die Bekanntschaft der Familie mit Mukami Kimathi, der Witwe des von den Briten hingerichteten Mau-Mau-Führers Dedan Kimathi, wurde von Nderitu auch literarisch aufgearbeitet.
Sie absolvierte bis 1990 ein Bachelor-Studium in Kunst, Literatur und Philosophie an der University of Nairobi, wo sie 2013 auch einen Master in Friedens- und Konfliktstudien erwarb.
Karriere
Nderitu war Mitglied des Netzwerks African Women in Conflict Prevention and Mediation der Afrikanischen Union (Fem-Wise) und des Women Waging Peace Network, sie war Gründerin der Community Voices for Peace and Pluralism und Kolumnistin beim East African Newspaper.
Sie wurde in die kenianische Kommission für Zusammenhalt und Integration berufen. Sie war Mitgründerin und erste Mitvorsitzende der Uwiano Platform for Peace, einer Agentur für Konfliktprävention, die mobile Technologie Bürgern in die Hand gibt, die frühzeitig Hinweise auf Gewalt geben sollen, damit frühzeitig darauf reagiert werden kann.
Nderitu war von 2010 bis 2021 eine von drei Mediatoren für eine Friedensvereinbarung, die von zehn ethnischen Gruppen im kenianischen Nakuru unterzeichnet wurde. 16 Monate lang war sie in diesem Prozess unter 100 Stammesältesten und drei Mediatoren die einzige Frau. Sie war auch leitende Mediatorin in einem Befriedungsprozess, in den 29 ethnische Gemeinschaften im nigerianischen Bundesstaat Kaduna einbezogen waren und der zur Unterzeichnung der Kafanchan Peace Declaration führte. Entsprechend erfolgreich leitete sie auch ein Verfahren für 56 ethnische Gemeinschaften in Zentralnigeria (Southern Plateau).
Sie war Mitglied des kenianischen Komitees für die Prävention und die Verfolgung von Genozid, Kriegsverbrechen, Verbrechen gegen die Menschlichkeit und alle Formen von Diskriminierung. Als führende Expertin in einem männerdominierten Bereich war Nderitu Anwältin für die Inklusion von Frauen in den verschiedensten internationalen Foren und publizierte auch zu diesem Thema.
2015 wurde Nderitu von Kenias Staatspräsident Uhuru Kenyatta in eine Kommission berufen, die den Konflikt um die Auflösung der Regierung des südkenianischen Makueni County untersuchen sollte. Der 2022 abgelöste Uhuru war allerdings mit dem Abschlussbericht der von ihm eingesetzten Kommission nicht einverstanden.
Alice Nderitu wirkt auch als Dozentin für Genozid-Prävention an der Boston University.
2020 wurde sie von UN-Generalsekretär António Guterres als Nachfolgerin von Adama Dieng zur UN-Sonderberaterin zur Verhütung von Genozid ernannt.
Positionen
Würdigungen
2011 Transitional Justice Fellow beim Institute for Justice and Reconciliation (IJR) im südafrikanischen Kapstadt
2012 Auszeichnung als Woman Peace Maker Of the Year durch das Joan B. Kroc Institute for Peace and Justice der University of San Diego
2014 Raphael-Lemkin-Dozentur am Auschwitz Institute for Peace and Reconciliation (AIPG)
2015 Aspen Leadership scholarship
2017 Global Pluralism Award des Global Centre for Pluralism, gestiftet von Karim Aga Khan IV. und der kanadische Regierung
2018 Jack P. Blaney Award vom Morris J. Wosk Centre for Dialogue der Simon Fraser University im kanadischen Burnaby
2019 Diversity and Inclusion Peace and Cohesion Champion Award des Daima Trust, Kenia
2022 Ehrendoktorwürde für Literatur Keene State College.
Veröffentlichungen
»Mukami Kimathi – Mau Mau Woman Freedom Fighter«, Mdahalo Bridging Divides, Kenia, 2019 ISBN 978-9966-19-032-1
mit Anass Bendrif, Sahira al Karaguly, Mohammadi Laghzaoui, Esmah Lahlah, Maeve Moynihan, Joelle Rizk und Maytham Al Zubaidi: »An introduction to human rights in the Middle East and North Africa- a guide for NGOs«, Networklearning.org, Amsterdam, 2009, @book{3a2133e97e0b410a8eaefff291baad31
mit Jacqueline O'Neill: »7 myths standing in the way of women's inclusion« Inclusive Security, 2013
»From the Nakuru County peace accord (2010–2012)«, Centre for Humanitarian Dialogue, 2014
»African Peace Building: Civil Society Roles in Conflict« in Minding the Gap: African Conflict Management in a Time of Change, Pamela Aall und Chester A. Crocker (Hrsg.), CIGI, 2016, ISBN 978-1-928096-21-4
»Catherine Ndereba: The Authorised Biography of a Marathon World Record holder«, Mdahalo Bridging Divides Limited, 2016
»Beyond Ethnicism: Exploring Ethnic and Racial Diversity for Educators«, Mdahalo Bridging Divides Limited, 2018 ISBN 978-9966-19-030-7
»Kenya: Bridging Ethnic Divides, A Commissioner's Experience on Cohesion and Integration«, Mdahalo Bridging Divides Limited, 2018, ISBN 978-9-966190-31-4
mit Swanee Hunt: »WPS as a political movement« in The Oxford Handbook of Women, Peace, and Security, Sara E. Davies & Jacqui True (Hrsg.), Oxford University Press, New York. ISBN 978-0-19-762770-9
Weblinks
Webpräsenz von Peace and Pluralism
Video: The message by Alice Wairimu Nderitu, UN Special Adviser for the prevention of genocides, Mailand, 3. März 2022
Einzelnachweise
UN-Sonderberater
Friedensforscher
Konfliktforscher
Kenianer
Geboren im 20. Jahrhundert
Frau
|
{
"redpajama_set_name": "RedPajamaWikipedia"
}
| 3,921
|
Q: unable to cover test cases in Junit? I tried to understand Junit and eclEmma by writing a unit test for Stack methods, push(), pop() and peak().
But all of them failed. It seems that none of them got covered. I thought initially it was a syntactical issue with my codes in how to push an integer object onto the stack but it seems that it is not the issue.
import static org.junit.jupiter.api.Assertions.*;
import org.junit.Before;
import org.junit.jupiter.api.Test;
import java.util.Stack;
public class StackMethodTesting {
private Stack<Integer> aStackOfInt;
@Before
public void initialize()
{
aStackOfInt = new Stack<Integer>();
System.out.println(" a new Stack");
}
@Test
public void testpush() {
aStackOfInt.push(new Integer(1));
assertEquals(true,aStackOfInt.peek().equals(new Integer(1)));
}
@ Test
public void testPop() {
aStackOfInt.push(22);
assertEquals (new Integer(22),aStackOfInt.pop());
}
@Test
public void testpeek()
{
aStackOfInt.push(222);
assertEquals(new Integer(222),aStackOfInt.peek());
}
}
I'm assuming that the highlighted red codes mean that they are not being executed. If so, I don't know what went wrong. Here are the run result:
A: You are mixin to JUnit API in your tests, JUnit4 and JUnit5.
So, if you want to use the latest one (JUnit 5 which I recommend you), you should import everything from the JUnit5 package: org.junit.jupiter.
So, your test cases would look like this (notice I also did some other changes):
import static org.junit.jupiter.api.Assertions.*;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import java.util.Stack;
class StackMethodTesting {
private Stack<Integer> aStackOfInt;
@BeforeEach
void initialize()
{
aStackOfInt = new Stack<Integer>();
System.out.println(" a new Stack");
}
@Test
void testpush() {
Integer value = new Integer(1);
aStackOfInt.push(value);
assertTrue(aStackOfInt.peek().equals(value));
}
@Test
void testPop() {
Integer value = new Integer(22);
aStackOfInt.push(value);
assertEquals(value, aStackOfInt.pop());
}
@Test
void testpeek()
{
Integer value = new Integer(222);
aStackOfInt.push(value);
assertEquals(value, aStackOfInt.peek());
}
}
You can read more about JUnit5 here https://junit.org/junit5/docs/current/user-guide/#writing-tests-annotations.
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 9,538
|
\section{Appendix}
\subsection{Sklar's Theorem}
\begin{theorem}[Sklar, 1959]
Let $F$ be a distribution function with margins $F_1, \dots F_d$. Then there exists a $d$-dimensional copula $C$ such that for all $(x_1, \dots, x_d) \in \mathbb{R}^d)$ it holds that $F(x_1, \dots, x_d) = C(F(x_1), \dots, F(x_d))$. Furthermore, if $F_1, \dots, F_d$ are continuous, then $C$ is unique. Conversely, if $C$ is a $d$-dimensional copula and $F_1, \dots, F_d$ are univariate distribution functions, then $F(x_1, \dots, x_d)=C(F(x_1),\dots,F(x_d))$ is a $d$-dimensional distribution.
\end{theorem}
\subsection{Derivations for deratives of inverses}
If $g$ is the inverse of $f$, that is, $ g_w(y) = f_w^{-1}(y) $ or $ g_w(f_w(t)) = t $ for some weights $w$.
If we treat $w$ as parameters as well, then we have scalar functions $g(a, b)$ and $f(c, d)$ such that the identity
$$g(f(t, w), w) = t$$
holds for all possible $w$.
\paragraph{Part 1.}
We want to find $\frac{\partial g (y, r)}{\partial y}\Bigg|_{\substack{y=a\\ r=w}}$. Since $f$ and $g$ are scalar functions of $y$, it is easy to see geometrically that
$$\frac{\partial g (y, r)}{\partial y}\Bigg|_{\substack{y=a\\ r=w}} =
1 \Bigg/ \left(\frac{\partial f (x, r)}{\partial x} \Bigg|_{\substack{x=g(a, w)\\ r=w}}\right)$$
\paragraph{Part 2.}
We want to find $ \frac{\partial g (y, r)}{\partial r}\Bigg|_{\substack{y=a\\ r=w}} $
for a given $w$ and $a$, given access to an oracle $f(x, r)$, $g(y, r)$, $\frac{\partial f(x, r)}{\partial r}$, $\frac{\partial f(x, r)}{\partial x}$ and for any values of $x, y, r$. Here, evaluating $g(y, w)$ requires a call to Newton's method and the $2$ partial derivatives may be obtained from autograd. Taking \textit{full} derivatives of the identity $g(f(t, w), w) = t$ with respect to $w$ yields
\begin{align*}
\frac{d g(f(t, w), w) }{dw} &=
\frac{\partial g}{\partial f} \frac{\partial f}{\partial w} + \frac{\partial g}{\partial w} \\
&= \left(\frac{\partial g (y, r)}{\partial y} \Bigg|_{\substack{y=f(t,w)\\ r=w}}\right) \cdot
\left( \frac{\partial f(x, r)}{\partial r} \Bigg|_{\substack{x=t\\ r=w}} \right) +
\frac{\partial g (y, r)}{\partial r} \Bigg|_{\substack{y=f(t,w)\\ r=w}}
\\
&= 0 \\
\frac{\partial g (y, r)}{\partial r} \Bigg|_{\substack{y=f(t,w)\\ r=w}} &=
- \left(\frac{\partial g (y, r)}{\partial y} \Bigg|_{\substack{y=f(t,w)\\ r=w}}\right) \cdot
\left( \frac{\partial f(x, r)}{\partial r} \Bigg|_{\substack{x=t\\ r=w}} \right)
\end{align*}
Note that this holds for all $t$. Performing a substitution gives
\begin{align*}
\frac{\partial g (y, r)}{\partial r} \Bigg|_{\substack{y=a\\ r=w}} &=
- \left(\frac{\partial g (y, r)}{\partial y} \Bigg|_{\substack{y=a\\ r=w}}\right) \cdot
\left( \frac{\partial f(x, r)}{\partial r} \Bigg|_{\substack{x=g(a, w)\\ r=w}} \right) \\
&=
-
\left( \frac{\partial f(x, r)}{\partial r} \Bigg|_{\substack{x=g(a, w)\\ r=w}} \right) \Bigg/
\left(\frac{\partial f (x, r)}{\partial x} \Bigg|_{\substack{x=g(a, w)\\ r=w}}\right),
\end{align*}
where the last line holds using $\left[h^{-1}\right]'(x) = 1/\left[h'(h^{-1}(x))\right]$ for scalar $h$ (Part 1).
\subsection{Proof of Theorem 2}
We first show that the output at each layer $\{\varphi^{\text{nn}}\}(t)$ is a convex combination of negative exponentials, i.e.,
\begin{align*}
\{\varphi^{\text{nn}}\}_{\ell, i}(t) &= \sum_{k=1}^{K_{\ell, i}} \alpha_k \exp (- \beta_{\ell, i, k} t ) \qquad \text{where } \sum_{k=1}^{K_{\ell, i}} \alpha_{\ell, i, k} = 1,
\end{align*}
where $K_{\ell} = \prod_{q=1}^{\ell-1} H_{q}$ and denotes the number of components in the mixture of exponentials (with potential repetitions).
The theorem is shown by induction on the layer index $\ell$.
The base case when $\ell=0$ is obvious by setting $K_{0, 1} = 1, \alpha_{0, 1}=1, \beta_{0, 1}=0$.
Now suppose that the induction hypothesis is true for all $\{\varphi^{\text{nn}}\}_{\ell-1, i}$, we have,
\begin{align}
\{\varphi^{\text{nn}}\}_{\ell, i}(t) &=
\exp(-B_{\ell, i} \cdot t) \sum_{j=1}^{H_{\ell-1}} A_{\ell, i, j} \{\varphi^{\text{nn}}\}_{\ell-1,j}(t) \nonumber \\
&= \exp(-B_{\ell, i} \cdot t) \sum_{j=1}^{H_{\ell-1}} A_{\ell, i, j} \sum_{k=1}^{K_{\ell-1}} \alpha_{\ell-1, j, k} \exp (- \beta_{\ell-1, j, k} t ) \tag{induction hypothesis} \nonumber \\
&= \sum_{j=1}^{H_{\ell-1}} \sum_{k=1}^{K_{\ell-1}} \underbrace{A_{\ell, i, j} \alpha_{\ell-1, j, k}}_{\alpha_{\ell, i, \cdot }} \exp (- \underbrace{(\beta_{\ell-1, j, k} + B_{\ell, i})}_{\beta_{\ell, i, \cdot }} t ) \nonumber \\
&= \sum_{k=1}^{K_{\ell}} \alpha_{\ell, i, k} \exp (- \beta_{\ell, i, k} t ). \label{eq:sum_of_exps}
\end{align}
In the third and fourth line, we can also see that $\sum_{k=1}^{K_{\ell}} \alpha_{\ell, i, k}$ since from the induction hypothesis $\sum_{k=1}^{K_{\ell-1}} \alpha_{\ell-1, j, k} = 1$ and the design of ACNet, which guarantees $\sum_{j=1}^{H_{\ell - 1}} A_{\ell, i, j} = 1$.
Theorem 2 follows from the fact that sum of completely monotone functons are also completely monotone. The range of $\{\varphi^{\text{nn}}\}$ follows directly from it being a convex combination of negative exponentials.
\subsection{Representation of $M$ in ACNet as a Markov reward process}
It is known that Archimedean copula with completely monotone generators are \emph{extendible}, and have generators $\varphi$ which are Laplace transforms of (almost surely) positive random variables $M$.
The random variable $M$ is known as the \emph{mixing variable} in a manner analogous to the De Finetti's theorem (observe that Archimedean copula are exchangable), such that a sample from the copula $C$ is given by $\left( \varphi(E_1/M), \ldots, \varphi(E_d/M) \right)$, where the $E_i$ are i.i.d. samples from an exponential distribution with scale parameter $1$.
Hence, $M$ is known as the mixing(latent) variable, since each $U_i$ is independent of $U_j, i \neq j$ conditioned on $M$.
For more information about extendible copula, refer to Chapters 1-3 of Matthias, Scherer, and Mai Jan-frederik.
From the derivations in \eqref{eq:sum_of_exps}, it can be seen that for all $\ell \in [L], i \in [H_\ell], k \in [K_{\ell, i}]$, we have
\begin{align*}
\beta_{\ell, i, k} = \sum_{q=1}^{\ell} B_{\ell, z^k_q}, \qquad \qquad \alpha_{\ell, i} = \prod_{\ell'=1}^{\ell} A_{\ell', z^{k}_{\ell'}, z^{k}_{\ell'-1}}
\end{align*}
where $z_q \in [H_q]$ such that the sequence of nodes $\left((0, z^k_0=1), (1, z^k_1), \dots, (\ell-1, z^k_{\ell-1}), (\ell, z^k_i)\right)$, each given of the form (layer, index), represents a forward path along the directed acyclic graph prescribed by the layers of the network, starting from the input node to the node $(\ell, i)$.
For the $i$-th output in the $\ell$-th layer, each constituent decay weight $\beta_{\ell, i, k}$ is the sum of `$B$-terms' taken along some path starting from the input node and ending at the $(\ell, i)$-th node.
Similarly, the $\alpha_{\ell, i, k}$ terms are the \emph{product} of weights of convex combinations, given by the `$A$-terms' taken along that same path.
Each term in the summand of \eqref{eq:sum_of_exps} has a one-to-one mapping with such a path.
Consequently, each constituent exponential function in the output node is represented by a path $\left( (0, z_0), (1, z_1), \dots, (L, z_L), (L+1, 1) \right)$.
Let $\mathcal{P}$ be the set of all such paths, where the $k$-th path is given by $p_k = \left( (0, z^{k}_0)=1, (1, z^{k}_1), \dots, (L, z^{k}_L), (L+1, z^{k}_{L+1}=1) \right)$.
\begin{align}
\{\varphi^{\text{nn}}\}_{L+1, 1}(t)
&= \sum_{k=1}^{K_{L+1}} \alpha_{L+1, 1, k} \exp (- \beta_{\ell, i, k} t ) \nonumber \\
&= \sum_{p_k \in \mathcal{P}} \left( \prod_{\ell=1}^{L+1} A_{\ell, z^{k}_\ell, z^{k}_{\ell-1}} \right) \left( \exp( - ( \sum_{\ell=1}^{L} B_{\ell, z^{k}_\ell})t) \right) \nonumber \\
&= \mathcal{L} \Bigg \{ \sum_{p_k \in \mathcal{P}} \left( \prod_{\ell=1}^{L+1} A_{\ell, z^{k}_\ell, z^{k}_{\ell-1}} \right) \delta \left( t - \sum_{\ell=1}^{L} B_{\ell, z^{k}_\ell} \right) \Bigg \} \label{eq:laplace}
\end{align}
Using the fact that $\sum_{j=1}^{H_{\ell - 1}} A_{\ell, i, j} = 1$ (by the design of ACNet), we can see that each $A_\ell$ is a transition matrix from one layer to the one which \emph{precedes} it.
Since $\ell \in [L]$, $\sum_{k=1}^{K_{\ell, i}} \alpha_{\ell, i, k} = 1$, the expression in \eqref{eq:laplace} is the Laplace transform of a discrete random variable $M$ taking values at $\sum_{\ell=1}^{L} B_{\ell, z^{k}_\ell}$ with probability $\left( \prod_{\ell=1}^{L+1} A_{\ell, z^{k}_\ell, z^{k}_{\ell-1}} \right)$, for each possible $p_k \in \mathcal{P}$.
This is precisely the random variable coressponding to the Markov reward process in the `reversed network' with rewards $\{B_\ell \}$ and transition matrixes $\{ A_\ell \}$---most notably, the transitions given by $A_\ell$ are independent of the previous transitions taken and only depend on current state.
A graphical representation of this when $L=2$ and $H_\ell=2$ is given in Figure~\ref{fig:backward_sample}.
This Markovian property is precisely why ACNet is able to represent a generator comprising an exponential (in terms of parameters) of negative exponential components.
Since we can sample from $M$, we are also able to sample from the copula efficiently using the algorithm of \cite{marshall1988families}.
The psuedocode for doing so is given in Algorithm~\ref{alg:sampling}.
\begin{figure}[t]
\centering
\includegraphics[width=0.7 \textwidth]{backward.pdf}
\caption{Sampling $M$ starting from the output node. Labels on edges denote probabilities of transition. Numbers in boxes correspond to rewards accumulated at each hidden node. Straight lines show a potential sample path in sampling, with total ward $B_{1,1} + B_{2,1}$.}
\label{fig:backward_sample}
\end{figure}
\begin{algorithm}[t]
\SetAlgoLined
\KwResult{$d$ dimensional sample from ACNet}
$M$ $\leftarrow 0$, state $\leftarrow$ output node\;
\While{state is not in first layer}{
Sample next state propotionate to $A$\;
state $\leftarrow$ next state\;
Accumulate $M$ according to state based on $B$\;
}
Draw $d$ i.i.d. samples $E_i \sim \text{Exp}(1)$ \;
\Return $\left( \{\varphi^{\text{nn}}\}\left( E_1/M \right), \dots, \{\varphi^{\text{nn}}\} \left( E_d/M \right) \right)$
\caption{Sampling from ACNet}
\label{alg:sampling}
\end{algorithm}
\subsection{Representational limits of ACNet}
Copulas are sometimes used to model upper and lower tail-dependencies. When $d=2$, they are quantified respectively by,
\begin{align*}
UTD_C &= \lim_{u \rightarrow 1^-} \frac{C(u, u) - 2u + 1}{1-u} = \lim_{u \rightarrow 1^-}\mathbb{P}(U_1 > u|U_2 > u) \tag{Upper tail dependency} \\
LTD_C &= \lim_{u \rightarrow 0^+} \frac{ C(u, u)}{u} = \lim_{u \rightarrow 0^+} \mathbb{P}(U_1 \leq u | U_2 \leq u) \tag{Lower tail dependency}
\end{align*}
assuming those limits exist. These quantities describe the limiting dependencies in the tails of the joint distribution. Many common Archimedean copula are have asymmetric tail dependencies, i.e., $UTD_C \neq LTD_C$. Both $UTD_C$ and $LTD_C$ of an Archimedean copula are closely linked to the mixing variable $M$. In particular, if $\mathbb{E}(M) < \infty$ then $UTD_C = 0$. Similarly, if $M$ is bounded away from zero, i.e., there exists $\epsilon$ such that $\mathbb{P}(M \in [0, \epsilon]) =0$, then $LTD_C=0$. Since $M$ is discrete with a finite support, both these conditions are satisfied and $UTD_C$ and $LTD_C$ are equal to $0$.
\subsection{Probabilistic quantities derivable from $C$ (or $F$)}
Table~\ref{tbl:prob_quantities} gives a list of some of the common probabilistic quantities which can be derived from $C$ (or $F$).
\begin{table}[h]
\begin{center}
\begin{tabular}{lll}
Name & Expression & Formula in terms of $C$ or $F$ \\
\hline \\
Distribution & $C(u_1, \dots, u_d)$ & $C(u_1, \dots, u_d)$ \\
Likelihood & $p(u_1, \dots u_d)$ & $\frac{\partial^{d} C(u_1, \dots, u_d}{\partial u_1, \dots, \partial u_d}$ \\
Cond. Distribution & $\mathbb{P}(X_{\bar{K}} \leq x_{\bar{K}} | X_K = x_K ) $ & $\frac{\partial F (x_K, x_{\bar{K}})}{\partial x_1 \cdots \partial x_k} \bigg /
\frac{\partial F (x_K, 1)}{\partial x_1, \cdots, \partial x_k}$ \\
Cond. Likelihood & $p (X_{\bar{K}} = x_{\bar{K}} | X_K = x_K ) $ & $\frac{\partial F (x_K, x_{\bar{K}})}{\partial x_1 \cdots \partial x_d} \bigg /
\frac{\partial F (x_K, 1)}{\partial x_1, \cdots, \partial x_k}$ \\
Probability
& $\mathbb{P}\left( U_1 \in \left[ \underline{u_1}, \overline{u_1} \right] \wedge
\dots \wedge U_d \in \left[ \underline{u_d}, \overline{u_d} \right] \right)$ &
See $d$-increasing property, \eqref{eq:d-increasing}
\end{tabular}
\end{center}
\caption{Probabilistic quantities written in terms of derivatives of $C$ or $F$.}
\label{tbl:prob_quantities}
\end{table}
\subsection{Datasets}
The POWER and GAS datasets are obtained from the UCI machine learning repository (\url{https://archive.ics.uci.edu/ml/index.php}).
The Boston housing dataset is commonly found and may be downloaded through scikit-learn (\url{https://scikit-learn.org/stable/datasets/index.html}) or Kaggle (\url{https://www.kaggle.com/c/boston-housing}).
The INTC-MSFT dataset is standard in copula libraries for R (\url{https://rdrr.io/cran/copula/man/rdj.html}).
The GOOG-FB dataset was obtained by the authors from Yahoo Finance.
We will provide instructions on how to obtain the final 2 datasets alongside our source code.
\section{Introduction}
Modeling dependencies between random variables is a central problem in machine learning and statistics.
Copulas are a special class of cumulative density functions which specify the dependencies between random variables without any restriction on their marginals.
This has led to long lines of research in modeling and learning copulas \citep{joe1994multivariate, elidan2010copula}, as well as their applications in fields such as finance and healthcare \citep{demongeot2013archimedean, cherubini2004copula}.
Amongst the most common class of copulas are \textit{Archimedean Copulas}, which are defined by a one-dimensional function $\varphi$, known as the generator, and often favored for their simplicity and ability to model extreme distributions.
A key problem in the application of Archimedean Copulas is the selection of the parametric form of $\varphi$ as well as the limitations on the expressiveness of commonly used copula.
Present workarounds include the selection of the best model between a fixed set of commonly used copulas, the use of methods based on information criterion such as Akaike and Bayesian Information Criterion (AIC, BIC), as well as more modern nonparametric methods.
In this paper, we propose ACNet, a novel network architecture which models the \textit{generator} of an Archimedean copula using a deep neural network, allowing for network parameters to be learnt using backpropagation and gradient descent. The core idea behind ACNet is to model the generator as a sum of convex combination of a finite set of exponential functions with varying rates of decay, while exploiting their invariance to convex combinations and multiplications with other exponentials. ACNet is built from simple, differentiable building blocks, ensuring that log-likelihood is a differentiable function of $\varphi$, ensuring ease of training via backpropagation. By possessing a larger set of parameters, ACNet is able to approximate all copulas with completely monotone generators, a large class which encompasses most of the commonly used copulas, but also other Archimedean copulas which have no straightforward closed forms. To our knowledge, ACNet is the first method to utilize deep representations to model generators for Archimedean copulas directly.
ACNet enjoys several theoretical properties, such as a simple interpretation of network weights in terms of a Markov reward process, resulting in a numerically stable, dimension independent method of sampling from the copula. Using this interpretation, we show that deep variants of ACNet are theoretically able to represent generators which shallow nets may not. By modeling the cumulative density directly, ACNet is able to provide a wide range of probabilistic quantities such as conditional densities and distributions using a \textit{single} trained model.
This flexibility in expression extends to both inference and training and is not possible with other deep methods such as Generative Adversarial Networks (GANs) or Normalizing Flows, which at best allow for the evaluation of densities.
Empirical results show that ACNet is able to learn standard copula with little to no hyperparameter tuning.
When tested on real-world data, we observed that ACNet was able to learn new generators which are a better qualitative description of observed data compared to commonly used Archimedean copulas.
Lastly, we demonstrate the effectiveness of ACNet in situations where measurements are uncertain within known boundaries.
This task is challenging for methods which learn densities as evaluating probabilities would then involve the costly numerical integration of densities.
We (i) propose ACNet, the first network to learn completely monotone generating functions for the purpose of learning copulas, (ii) study the theoretical properties of ACNet, including a simple interpretation of network weights and an efficient sampling process, (iii) show how ACNet may be used to compute probabilistic quantities beyond log-likelihood and cumulative densities, and (iv) evaluate ACNet on both synthetic and real-world data, demonstrating that ACNet combines the ease of use enjoyed by commonly used copulas and the representational capacity of Archimedean copulas.
The source code for this paper may be found at \url{https://github.com/lingchunkai/ACNet}.
\section{CDFs and Copulas}
Consider a $d$-dimensional continuous random vector $X = \{ X_1, X_2, \cdots X_d \}$ with marginals $F_i(x_i) = \mathbb{P}(X_i \leq x_i)$.
Given a $x \in \mathbb{R}^d$, the \textit{distribution} function $F(x) = \mathbb{P}\left( X_1 \leq x_1, \cdots X_d \leq x_d \right)$ specifies all marginal distributions $F_i(x_i)$ as well as any dependencies between $X$.
This paper focuses on continuous distribution functions which have well-defined densities.
\subsection{Copulas}
\label{sec:cop}
Of particular interest is a special type of distribution function known as a \textit{copula}.
Informally, copulas are distribution functions with uniform marginals in $[0, 1]$. Formally, $C(u_1, \cdots, u_d) :[0, 1]^d \rightarrow [0, 1]$ is a copula if the following 3 conditions are satisfied.
\begin{itemize}[noitemsep,topsep=0pt,leftmargin=*]
\item (Grounded) It is equal to $0$ if any of its arguments are $0$, i.e., $C(\dots, 0, \dots)=0$.
\item It is equal to $u_i$ if all other arguments $1$, i.e., for all $i \in [d]$, $C(1,\cdots, 1, u_i, 1, \cdots, 1) = u_i$.
\item ($d$-increasing) For all $u=(u_1, \dots, u_d)$ and $v=(v_1, \dots, v_d)$ where $u_i < v_i$ for all $i \in [d]$, \begin{align}
\sum_{(w_1, \dots w_d) \in \times^{d}_{i=1}\{ u_i, v_i \}}
(-1)^{|i:w_i=u_i|} C(w_1, \dots, w_d) \geq 0.
\label{eq:d-increasing}
\end{align}
Heuristically, the $d$-increasing property states that the probability assigned to any non-negative $d$-dimensional rectangle is non-negative.
\end{itemize}
Observe that the first 2 conditions are stronger than the limiting conditions required for distribution functions---in fact, groundedness coupled with the $d$-increasing property sufficiently define any distribution function. In particular, the second condition implies that Copulas have uniform marginals and hence, are special cases of distribution functions. Copulas have found numerous real world applications in engineering, medicine, and quantitative finance. The proliferation of applications may be attributed to \textit{Sklar's theorem} (see appendix for details). Loosely speaking, Sklar's theorem states that any $d$-dimensional continuous joint distribution may be uniquely decomposed into $d$ marginal distribution functions and a single copula $C$. The copula precisely captures dependencies between random variables in isolation from marginals. This allows for the creation of \textit{non-independent} distributions by combining marginals---potentially from different families and tying them together using a suitable copula.
\subsection{Archimedean copulas}
\label{sec:arch_cop}
In this paper, we will restrict ourselves to \textit{Archimedean copulas}.
Archimedean copulas enjoy simplicity by modeling dependencies in high dimensions using a single $1$-dimensional function:
\begin{align}
C(u_1, \cdots, u_d) &= \varphi \left( \varphi^{-1}(u_1) + \varphi^{-1}(u_1) + \cdots \varphi^{-1}(u_d) \right),
\label{eq:cop}
\end{align}
where $\varphi: [0, \infty) \rightarrow [0, 1]$ is $d$-monotone, i.e., $(-1)^k \varphi^{(k)}(t) \geq 0$ for all $k \leq d, t \geq 0$.
Here, $\varphi$ is known as the \textit{generator} of $C$. A single $d$-monotone function $\varphi$ defines a $d$-dimensional copula which satisfies the conditions laid out in Section~\ref{sec:cop}. We say that $\varphi$ is \textit{completely monotone} if $(-1)^k \varphi^{(k)}(t) \geq 0$ for all values of $k$. Completely monotone generators define copula regardless of the dimension $d$. Most (but not all) Archimedean copula are defined by completely monotone generators. For this reason, we focus on Archimedean copula with completely monotone generators, also known in the literature as \textit{extendible} Archimedean copula.
The following theorem by Bernstein (see \cite{murray1942} for details) characterizes all completely monotone $\varphi$ as a mixture of exponential functions.
\begin{theorem}[Bernstein-Widder]
A generator $\varphi$ is completely monotone if and only if $\varphi$ is the Laplace transform of a positive random variable $M$, i.e., $\varphi(t) = \mathbb{E}_M(\exp (-tM))$ and $\mathbb{P}(M > 0)=1$.
\label{thm:bernstein}
\end{theorem}
In fact, \cite{marshall1988families} show that $C$ has an easy interpretation in terms of the random variable $M$.
Specifically, if $U = (U_1, \cdots, U_d) \sim C$, where $C$ is generated by $\varphi$, which is in turn the Laplace transform of some non-negative random variable $M$ which almost never takes the value $0$, then, we have $U_i = \varphi( E_i / M)$, where $E_i \sim \text{Exp}(1)$.
It follows that sampling from $C$ is easy and efficient given access to a sampler for $M$ and an oracle for $\varphi$, which is the case for most commonly used copulas.
\subsection{Related work}
Copulas offer a wide range of applications, from finance and actuarial sciences \citep{cherubini2004copula,bouye2000copulas,genest2009advent,rodriguez2007measuring} to epidemiology \citep{demongeot2013archimedean,kuss2014meta}, engineering \citep{salvadori2004frequency,corbella2013simulating} and disaster modeling \citep{chen2013drought,madadgar2013drought}.
Copulas are popular for modeling extreme tail distributions. Recently, \cite{wiese2019copula} show that GANs and Normalizing Flows suffer from inherent limitations in modeling tail dependencies and propose using copulas to explicitly do so.
In lockstep with this proliferation of applications is the introduction of more sophisticated copulas and training/learning methods.
Vine copula and Copula bayesian networks \citep{joe1994multivariate,joe2010tail,elidan2010copula} extend bivariate parametric copula to higher dimensions; the former models high dimensional distributions using a collection of bivariate copula organised in a tree-like structure, while the latter extends bayesian networks while using copulas to reparameterize conditional densities.
Various mixture methods are also frequently used \citep{qu2019copula,silva2009mcmc,rodriguez2007measuring,khoudraji1997contributions} to construct richer representations from existing copula.
Other methods include non-parametric or semiparametric methods \citep{wilson2010copula,hernandez2011semiparametric,hoyos2020bayesian}.
In terms of model selection, \cite{gronneberg2014copula} introduce Copula Information Criterion (CIC), an analog to classical AIC and BIC methods for copula.
In the domain of deep neural networks, popular generative models include Generative Adversarial Networks \citep{goodfellow2014generative}, Variational Autoencoders \citep{kingma2013auto}, and Normalizing Flow methods \citep{rezende2015variational,dinh2016density}. These methods either describe a generative process or learn densities directly, as opposed to the joint distribution function. Unless explicitly designed to do so, these models are ill suited to inference on quantities such as conditional densities or distributions, while ACNet may do so via simple operations.
\section{Archimedean Copula networks}
Bernstein's theorem states that completely monotone functions are essentially mixtures of (potentially infinitely many) negative exponentials. This suggests that generators $\varphi$ could be approximated by a \textit{finite} sum of negative exponentials, which in turn defines an approximation for (extendible) Archimedean copula. Motivated by this, our proposed model parameterizes $\varphi$ using a large but finite mixture of negative exponentials. We achieve this large mixture (often exponential in model size) of exponentials using deep neural networks.\footnote{Approximating completely monotone functions using sums of exponentials has been studied \citep{kammler1979least,kammler1976chebyshev
, but not in the context required for learning copula.} We term the resultant network \textit{Archimedean-Copula Networks}, or ACNet for short.
\subsection{Representing $C$ from neural network representations of $\varphi$}
The key component of our model is the a neural network module $\{\varphi^{\text{nn}}\}:[0, \infty) \rightarrow [0, 1]$ specifying the generator and implicitly, the copula.
For simplicity we will assume that the network contains $L$ hidden layers with the $\ell$-th layer being of width $H_{\ell}$.
For convenience, the widths of the input and output layers are written as $H_0=1$ and $H_{L+1}=1$.
Layer $\ell$ has outputs of size $H_\ell$, denoted by $\{\varphi^{\text{nn}}\}_{\ell, i}$ where $i \in \{1, \dots, H_\ell\}$.
Structurally, $\{\varphi^{\text{nn}}\}$ looks similar to a standard feedforward network, with the additional characteristic that outputs for each layer is a convex combination of a finite number of negative exponentials (in inputs $t$).
Specifically, our network has the following representation.
\begin{align*}
\{\varphi^{\text{nn}}\}_{0, 1}(t) &= 1 \tag{Input layer}\\
\{\varphi^{\text{nn}}\}_{\ell,i}(t) &= \exp(-B_{\ell, i} \cdot t)\sum_{j=1}^{H_{\ell-1}} A_{\ell,i,j} \{\varphi^{\text{nn}}\}_{\ell-1, j}(t) \qquad
\forall \ell \in [L], i \in [H_\ell]
\tag{Hidden layers} \\
\{\varphi^{\text{nn}}\}(t) &= \{\varphi^{\text{nn}}\}_{L+1, 1}(t) = \sum_{j=1}^{H_{L}} A_{L+1,1,j} \{\varphi^{\text{nn}}\}_{L, j}(t)
\tag{Output layer}
\end{align*}
Each $A_\ell$ is a non-negative matrix of dimension $H_\ell \times H_{\ell-1}$ with each row lying on the $H_{\ell-1}$-dimension probability simplex, i.e., $\sum_{j=1}^{H_{\ell-1}} A_{\ell, i, j} = 1$. Each $B_\ell$ is a non-negative vector of size $H_\ell$.
Each unit in layer $\ell$ is formed by taking a convex combination of units in the previous layer, followed by multiplying this by some negative exponential of the form $\exp(-\beta t)$, where the latter is analogous to the `bias' term commonly found in feedforward networks.
When $L=1$, we get that $\{\varphi^{\text{nn}}\}(t)$ is equal to a convex combination of negative exponentials with rates of decay and weighting given by $B$ and $A$ respectively.
A graphical representation of $\{\varphi^{\text{nn}}\}$ is shown in Figure~\ref{fig:forward_network}.
\begin{theorem}
\label{thm:tot_mot}
$\{\varphi^{\text{nn}}\}(t)$ is a completely monotone function with domain $[0, \infty)$ and range $[0, 1]$.
\end{theorem}
\begin{proof}
(Sketch) Since sums of exponentials are `closed' under addition and multiplications of sums of exponentials, $\{\varphi^{\text{nn}}\}$ remains a convex combination of negative exponentials when $L>1$.
\end{proof}
It follows from Theorem~\ref{thm:tot_mot} that $\{\varphi^{\text{nn}}\}$ is a valid generator for all $d\geq2$. To ensure that $B$ is strictly positive and $A$ lies on the probability simplex, we perform the following reparameterization. Let $\Phi = \{ \Phi_A, \Phi_B \}$ be the network weights underlying parameters $A$ and $B$.
By setting $B$ = $\exp(\Phi_B)$, $A_{\ell, i, j} = \text{softmax}(\Phi_{A, \ell, i})_j$ and optimizing over $\Phi$, we ensure that the required constraints are satisfied.
\begin{figure}[t]
\centering
\includegraphics[width=0.9 \textwidth]{forward.pdf}
\caption{Forward pass through ACNet with $L=3, H_1=H_2=H_3=2$}
\label{fig:forward_network}
\end{figure}
\subsection{Extracting probabilistic quantities from $\{\varphi^{\text{nn}}\}$}
With $\{\varphi^{\text{nn}}\}$, we are now in a position to evaluate the copula $C$ using Equation~\eqref{eq:cop}.
This requires the computation of $\{\varphi^{\text{nn}}\}^{-1}(u_i)$, which has no simple closed form.
However, we may compute this inverse efficiently using Newton's root-finding method, i.e., by solving for $t$ in the equation $\{\varphi^{\text{nn}}\}(t) - u_i = 0$.
The $k$-th iteration of Newton's method involves computing the gradient $\{\varphi^{\text{nn}}\}'(t_k)$ and taking a suitable step.
The gradient of $\{\varphi^{\text{nn}}\}$ is readily obtained using auto-differentiation libraries such as PyTorch \citep{paszke2017automatic} and typically involves a `backward' pass through the network.
Empirically, root finding typically takes fewer than 50 iterations, i.e., computing $\{\varphi^{\text{nn}}\}^{-1}(u)$ requires an effectively constant number of forward and backward passes over $\{\varphi^{\text{nn}}\}$.
\subsection{Training ACNet by minimizing negative log-likelihood}
\label{sec:train_ACNet_ll}
Suppose we are given a dataset $\mathcal{D}$ of size $m$, $\{ x_1, \cdots x_m \}$, where each $x_j$ is a $d$-dimensional feature suitably normalized to $[0,1]^d$. We want to fit ACNet to $\mathcal{D}$ by minimizing the negative log-likelihood $-\sum_{j=1}^{m} \log \left( p({x_j}_1, \cdots, {x_j}_d) \right)$ via gradient descent. The density function for a single point may be obtained by differentiating $C$ over each of its parameters once,
\begin{align}
p(u_1, \cdots, u_d) &=
\frac{\partial^{(d)}C(u_1,\dots,u_d)}{\partial u_1,\dots, \partial u_d} =
\frac{\varphi^{(d)}(\varphi^{-1}(u_1)+ \cdots + \varphi^{-1}(u_d)))}{\prod_{i=1}^{d}\varphi'(\varphi^{-1}(u_i))}.
\label{eq:likelihood}
\end{align}
Gradient descent and backpropagation requires us to provide derivatives of $p$ with respect to the network parameters $\Phi$.
This requires taking derivatives of the expression in Equation~\eqref{eq:likelihood} with respect to $\Phi$.
In general, automatic differentiation libraries such as PyTorch \cite{paszke2017automatic} allow for higher derivatives to be readily computed by repeated application of the chain rule.
This process typically requires the user to furnish (often implicitly) the gradients of each constituent function in the expression.
However, automatic differentiation libraries do not have the built-in capability to compute gradients (given $\{\varphi^{\text{nn}}\}$) both with respect to inputs $u$ and network weights $\Phi$ of $\{\varphi^{\text{nn}}\}^{-1}$, the latter of which is required for optimization of $\Phi$ via gradient descent
To overcome this, we write a wrapper allowing for inverses of $1$-dimensional functions to be computed via Newton's method.
When given a function $\varphi(u; \Phi)$ parameterized by $\Phi$, our wrapper computes $\varphi^{-1}(u; \Phi)$ and provides the derivatives $\frac{\partial \varphi^{-1}(u; \Phi)}{\partial u}$ and $\frac{\partial \varphi^{-1}(u; \Phi)}{\partial \Phi}$.
The analytical expressions for both derivatives are shown below, with derivations deferred to the appendix.
\begin{align*}
\frac{\partial \varphi^{-1}(u; \Phi)}{\partial u}
= 1 \bigg/ \frac{\partial \varphi(t; \Phi)}{\partial t} \qquad \qquad
\frac{\partial \varphi^{-1}(u; \Phi)}{\partial \Phi} =
-
\frac{\partial \varphi(t; \Phi)}{\partial \Phi} \Bigg/
\frac{\partial \varphi (t; \Phi)}{\partial t}
\end{align*}
Here, the derivatives are evaluated at $t=\varphi^{-1}(u; \Phi)$.
By supplying these derivatives to an automatic differentiation library, $\varphi^{-1}(u; \Phi)$ can be computed in a fully differentiable fashion, allowing for computation of higher order derivatives and nested application of the chain rule to be done seamlessly.
Consequently, Equation~\eqref{eq:likelihood} and its derivatives may be easily computed without any further manual specification of gradients. Our implementation employs PyTorch \cite{paszke2017automatic} for automatic differentiation.
\subsection{Interpretation of network weights}
According to Bernstein's theorem (Theorem~\ref{thm:bernstein}), $\{\varphi^{\text{nn}}\}$ is the Laplace transform of some non-negative random variable $M$. Interestingly, the network structure of ACNet allows us to obtain an analytical representation of the distribution $M$. Since $\{\varphi^{\text{nn}}\}$ is the sum of negative exponentials, $M$ is a discrete distribution with support given by the decay rates of $\{\varphi^{\text{nn}}\}$. However, the structure of ACNet allows us to go further by implicitly describing a Markov reward model governing the mixing variable $M$.
Take the structure of ACNet as directed acyclic graph with reversed edges and consider a random walk starting from the \textit{output}.
The sampler begins with a reward of $0$.
The probability of transition from the $j$-th node in layer $\ell-1$ to the $i$-th node of layer $\ell$ is $A_{\ell, i, j}$.
When this occurs, it accumulates a reward of $B_{\ell, i}$.
The process terminates when we reach the \textit{input} node, where the realization of $M$ is the total reward accumulated throughout. Details can be found in the appendix.
The above interpretation has two consequences.
First, the size of the support of $M$ is upper bounded by the number of possible \textit{paths} that the Markov model possesses, which is typically exponential in $L$.
This shows that deeper nets allow for distributions with an exponentially larger support of $M$ compared to shallow nets.
Second, this hierarchical representation gives an efficient sampler for $M$, which can be exploited alongside the algorithm of \cite{marshall1988families} (see Section~\ref{sec:arch_cop}) to give an efficient sampling algorithm for $U$.
More details may be found in the appendix.
\subsection{Obtaining probabilistic quantities from ACNet}
\label{sec:prob_quant}
In Section~\ref{sec:train_ACNet_ll}, we trained ACNet by minimizing the log-loss of $\mathcal{D}$, where the likelihood $p(u_1, \dots, u_d)$ was obtained by repeated differentiation of the copula $C$ (Equation~\eqref{eq:likelihood}). Many other probabilistic quantities are often of interest, with applications in both inference and training.
\textbf{Scenario 1 (Inference).} Consider the setting where one utilizes surveys to study the correlation between one's age and income. Some natural inference problem follow, such as: given the age of a respondent, how likely is it that his income lies below a certain threshold, i.e., $\mathbb{P}\left( U_1 \leq u_1 | U_2 = u_2 \right)$. Similarly, one could be interested in conditional densities $p(u_1 | u_2)$ in order to facilitate conditional sampling using MCMC or for visualization purposes. We want our learned model to be able to answer \textit{all} such queries efficiently without modifying its structure for each type of query.
\textbf{Scenario 2 (Training with uncertain data).} Now, consider a related scenario where for respondents sometimes only report the \textit{range} of their age and incomes (e.g., age is in the range 21-25), even though underlying quantities are inherently continuous. To complicate matters, the dataset $\mathcal{D}$ is the amalgamation of multiple studies, each prescribing a different partition of ranges, i.e., $\mathcal{D}$ has rows containing a \textit{range} of possible values for each respondent, i.e., $\left( \left(\underline{u_1}, \overline{u_1}\right), \left( \underline{u_2}, \overline{u_2} \right) \right)$, where $\underline{u_i} \leq U_i \leq \overline{u_i}$. Our goal is to learn a joint distribution which respects this `uncertainty' in $\mathcal{D}$.\footnote{Unlike usual settings, we are not adding or assuming a known noise distribution but rather, assume that our data is known to a lower precision.}
To the best of our knowledge, no existing deep generative model is able to meet the demands of both scenarios. It turns out that many of these quantities may be obtained from $C$ using relatively simple operations. Suppose without loss of generality that one has observed that the first $k \in [d]$ random variables $X_K = \{ X_1, \cdots, X_k \} \subseteq X$ and obtain values $x_K = (x_1, \cdots, x_k)$.
We want to compute the posterior distribution of the next $d-k$ unobserved variables $X_{\bar{K}} = X\backslash X_K = \{ X_{k+1}, \cdots, X_d \}$ with $x_{\bar{K}}$ analogously denoting their values.
Then, the conditional distribution $\mathbb{P}(X_{\bar{K}} \leq x_{\bar{K}} | X_K = x_K )$ is the distribution function given that $X_K$ takes values $x_K$.
We have the following expression
\begin{align*}
\mathbb{P}(X_{\bar{K}} \leq x_{\bar{K}} | X_K = x_K )
= \int_{-\infty}^{x_{\bar{K}}} p(x_K, z) / p(x_K) dz
= \frac{\partial F (x_K, x_{\bar{K}})}{\partial x_1 \cdots \partial x_k} \bigg /
\frac{\partial F (x_K, 1)}{\partial x_1 \cdots \partial x_k},
\end{align*}
where the last equality follows from $\int_{-\infty}^{x_{\bar{K}}} p(x_K, z) dz = \frac{\partial}{\partial w} \int_{-\infty}^{x_K} \int_{-\infty}^{x_{\bar{K}}} p(w, z)dw dz = \frac{\partial F (x_K, x_{\bar{K}})}{\partial x_1 \cdots \partial x_k}$.
Many interesting quantities such as conditional densities $p(x_{\bar{K}} | x_{K})$ may be expressed in terms of $F$ in a similar fashion, using simple arithmetic operations and differentiation.
Crucially, these expressions remain differentiable and may be evaluated efficiently.
Since these derivations apply for any cumulative distribution $F$, they hold for any copula $C$ as well.
We list some of these commonly used probabilistic quantities and their relationship to $C$ in the appendix.
\input{sec_experiment}
\section{Conclusion}
In this paper, we propose ACNet, a novel neural network architecture which learns completely monotone generators of Archimedean copula.
ACNet's network weights can be interpreted as parameters of a Markov reward process, leading to an efficient sampling algorithm.
Using ACNet, one is able to compute numerous probabilistic quantities, unlike existing deep models.
Empirically, ACNet is able to match or outperform common Archimedean copulas in fitting synthetic and real-world data, and is also able to learn in the presence of uncertainty in data.
Future work include moving beyond completely monotone generators, learning hierarchical Archimedean copulas, as well as developing methods to jointly learn marginals.
\newpage
\section{Broader impact statement}
Copulas have held the dubious honor of being partially responsible for the financial crisis of 2008 \cite{mackenzie2014formula}.
Back then, it was commonplace for analysts and traders to model prices of collateralized debt obligations (CDOs) by means of the Gaussian copula \cite{li2000default}.
Gaussian copulas were extremely simple and gained popularity rapidly.
Yet today, this method is widely criticised as being overly simplistic as it effectively summarizes associations between securities into a single number.
Of course, copulas now have found a much wider range of applications, many of which are more grounded than credit and risk modeling.
Nonetheless, the criticism that Gaussian---or for that matter, any simple parametric measure of dependency is too simple, still stands.
ACNet is one attempt to tackle this problem, possibly beyond financial applications.
While still retaining the theoretical properties of Archimedean copula, ACNet can model dependencies which have no simple parametric form, and can alleviate some difficulties researchers have when facing the problem of model selection.
We hope that with a more complex model, the use of ACNet will be able to overcome some of the deficiencies exhibited by Gaussian copula.
Nonetheless, we continue to stress caution in the careless or flagrant application of copulas---or the overreliance on probabilistic modeling---in domains where such assumptions are not grounded.
At a level closer to machine learning, ACNet essentially models (a restricted set of) cumulative distributions.
As described in the paper, this has various applications (see for example, Scenario 2 in Section 3 of our paper), since it is computationally easy to obtain (conditional) densities from the distribution function, but not the other way round.
We hope that ACNet will motivate researchers to explore alternatives to learning density functions and apply them where appropriate.
\section{Funding transparency statement}
Co-authors Ling and Fang are supported in part by a research grant from Lockheed Martin. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of Lockheed Martin.
\small
\section{Experiments}
Here, we first empirically demonstrate the efficacy of ACNet in fitting both synthetic and real-world data.
We then end off by applying ACNet to Scenario $2$ of Section~\ref{sec:prob_quant}, and show that ACNet can be used to fit data even when the data exhibits uncertainty in measurements.
The goal of these experiments is \textit{not} to serve as comparison against neural density estimators (which typically model joint \textit{densities} and not joint \textit{distribution} functions), but rather as an alternative to frequently used parametric copula.
Experiments are conducted on a 3.1 GHz Intel Core i5 with 16 GB of RAM.
We utilize the PyTorch \citep{paszke2017automatic} framework for automatic differentiation.
We use double precision arithmetic as the inversion of $\varphi$ requires numerical precision.
When using Newton's method to compute $\varphi^{-1}$, we terminate when the error is $\leq 1e-10$.
For all our experiments we use ACNet with $L=2$ and $ H_1=H_2=10$, i.e., $2$ hidden layers each of width $10$.
The network is small but sufficient for our purpose since $\{\varphi^{\text{nn}}\}$ is only $1$-dimensional.
$\Phi_A$ and $\Phi_B$ were initialized in the range $[0, 1]$ and $(0, 2)$ uniformly at random.
We use stochastic gradient descent with a learning rate of $1e-5$, momentum of $0.9$, and a batch size of $200$.
No hyperparameter tuning was performed.
\subsection{Learning known Archimedean copulas}
\label{sec:expt_synth}
To verify that ACNet is able to learn commonly used Archimedean copulas, we generate synthetic datasets from the Clayton, Frank and Joe copulas.
These copulas exhibit different tail dependencies (see Figure~\ref{fig:clayton_gt}).
For example, the Clayton copula has high lower tail-dependence but no upper-tail dependency, which makes it useful for modelling quantities such stock prices, for example, two companies involved in the same supply chain are likely to perform poorly simultaneously, but one company performing well does not imply the other will.
These copula are governed by a single parameter, which are chosen to be $5$, $15$, and $3$ respectively.
For each copula, we generate $2000$ train and $1000$ test points and train ACNet for 40k epochs.
We compare the resultant learned distribution (Figure~\ref{fig:clayton_learned}) with the ground truth (Figure~\ref{fig:clayton_gt}).
Testing losses are compared in Table~\ref{tbl:synth}.
\input{clayton_learned}
From Figure~\ref{fig:clayton} and Table~\ref{tbl:synth}, we can see that ACNet is able to learn all 3 copula accurately by the end of training, and the contours of the log-likelihood match the ground truth almost exactly.
Figure~\ref{fig:clayton_epochs} shows how the learned density changes as the number of training epochs increases for the case of the Clayton copula.
We can see that as the number of training samples increases, the `tip' at the lower tail of the copula becomes sharper, i.e., ACNet learns the lower tails of the distribution more accurately.
\input{clayton_epochs}
\subsection{Experiments on real-world data}
\label{sec:expts_real}
To demonstrate the efficacy of ACNet, we applied ACNet to 3 real-world datasets.
As a preprocessing step, we normalize the data by scaling each dimension to the range $[0, 1]$ based on their ordinal ranks.
This ensures that the empirical marginals are approximately uniform.
Train and test sets are split based on a 3:1 ratio.
We normalize both train and test sets independently. This was done to avoid leakage of information from the train to the test set, which could occur if train and test sets were normalized together.
In practice, we observe no significant difference in these two methods of normalization.
Because real-world data tends to contain a small number of outliers, we inject into the training set points uniformly chosen from $[0, 1]^2$.
This is akin to a form of regularization and helps to prevent ACNet from overfitting. We inject $1$ point for every $100$ points in the training set.
We repeat each experiment 5 times with different train/test splits and report the average test loss.
\textbf{Boston Housing.} We model the \textit{negative} dependencies between per capita crime rate and the median value of owner occupied homes in Boston \cite{harrison1978hedonic}. Since Archimedean copulas with completely monotone generators can only model positive dependencies, we insert an additional preprocessing step where we flip the data along the vertical line at $0.5$. This dataset has 506 samples.
\textbf{(INTC-MSFT)} This data comprises five years of daily log-returns (1996-2000) of Intel (INTC) and Microsoft (MSFT) stocks, and was analysed in \cite{mcneil2015quantitative}. The dataset comprises 1262 samples.
\textbf{(GOOG-FB).} We collected daily closing prices of Google (GOOG) and Facebook (FB) from May 2015 to May 2020. The data was collected using Yahoo Finance and comprises 1259 samples.
\input{real_world}
For each of the datasets, we trained ACNet based on the processed data.
The learned distributions are illustrated in Figure~\ref{fig:real_world_results}.
Furthermore, we compare the performance of ACNet with the Clayton, Frank and Gumbel copula and report the test log-loss of ACNet with the best fit amongst the $3$ parametric copula (Table~\ref{tbl:real-world}) \footnote{We report the best performing model, with and without regularization.}.
The parametric copula were similarly trained by gradient descent.\footnote{There are multiple ways of training parametric copula---for example, by matching concordance measures such as Kendall's Tau and Spearman's Rho. We do not consider these alternative fitting methods here.}
Qualitatively, we observe that reasonable models were learnt for the first two datasets.
For example, in the Boston housing dataset, we are able to model the higher dependence in the left tail of the distribution, and the higher testing loss is likely due to overfitting of the small dataset.
In the last dataset, while ACNet is unable to exactly learn the copula, it is both \textit{qualitatively and quantatively better} than the parametric Archimedean copulas, which are unable to model the `two-phased' nature exhibited by this dataset.
\input{log_likelihood_all}
\subsection{Training and inference on other probabilistic quantities}
Here, we demonstrate the effectiveness in applying ACNet to learning joint distributions in the presence of uncertainty in data (see Section~\ref{sec:prob_quant}).
We use the same synthetic dataset of Section~\ref{sec:expt_synth}.
For each datapoint, instead of observing the tuple $(u_1, u_2)$, we observe $\left( \left(\underline{u_1}, \overline{u_1}\right), \left( \underline{u_2}, \overline{u_2} \right) \right)$, where $\underline{u_i} \leq U_i \leq \overline{u_i}$.
The upper and lower bounds of $u_i$ are chosen randomly such that $u_i - \underline{u_i}$ and $\overline{u_i}-u_i$ are uniformly chosen from $[0, \lambda]$, where $\lambda$ is a `noise` parameter associated with the experiment.
Note each entry has its own associated uncertainty.
Fitting ACNet simply involves running gradient descent to minimize the negative log probabilities $-\log \left( \mathbb{P}\left( U_1 \in \left[ \underline{u_1}, \overline{u_1} \right] \wedge U_d \in \left[ \underline{u_2}, \overline{u_2} \right] \right) \right)$.
We experiment with $\lambda=0.1, \lambda=0.25, \lambda=0.5$. Results are reported in Figure~\ref{fig:clayton_noisy}.
In all cases, ACNet is able to learn a reasonable rendition of the Clayton copula.
As expected, when $\lambda$ increases, we begin to see the inability to model the strong correlations in the lower tails.
This is expected, since the uncertainty limits the degree to which we can observe strong lower tail dependencies.
\input{clayton_learned_noisy}
\subsection{Practical considerations and limitations of ACNet}
\textbf{Experiments when $d>2$.} Here, we show that ACNet is capable of fitting distributions with more than $2$ dimensions.
We use the GAS dataset \cite{vergara2012chemical}, which comprises readings from chemical sensors used in simulations for drift compensation.
To simplify the situation, we use features $0$, $4$ and $7$ from a single sensor during the second month (see \cite{vergara2012chemical} for details) and perform normalization for each feature in a similar fashion Section~\ref{sec:expts_real}, yielding a dataset comprising $445$ readings.
The network architecture and train/test split are identical to Section~\ref{sec:expts_real}.
\input{gas_all.tex}
As before, we train ACNet by minimizing log-loss and compare our results against the Clayton, Frank, and Gumbel copulas.
The results are in Figure~\ref{fig:gas_expt}.
We observe that ACNet is able to fit the data reasonably despite the data not being entirely symmetric over the $3$ dimensions.
ACNet achieves a test/train loss of -1.389 and -1.456, outperforming the Frank copula (the best performing parametric copula), which obtained a test/train loss of -1.356 and -1.357.
Similar to the Boston housing dataset, ACNet overfits. This is unsurprising since the dataset is fairly small.
Generally, we do not recommend using ACNet with high dimensions.
First, this often results in numerical issues since training ACNet by minimizing the log-loss requires differentiating the copula $d$ times.
Generally, we observe that ACNet faces numerical problems for $d \geq 5$ even when employing double precision.
Second, high dimensional data is rarely symmetric unless there is some underlying structure supporting this belief.
\textbf{Failure cases.}
Not all datasets are well modelled by ACNet.
Consider the POWER dataset \cite{power} (Figure~\ref{fig:power_expt}), which contains measurements for electric power consumption in a single household.
For simplicity, we focus on the joint distribution of the power consumption between the kitchen and laundry room.
Clearly, the POWER dataset is unlike the previous distributions, as it posesses a high level of `discreteness'.
Since there are few appliances in each room and each active appliance consumes a fixed amount of power, we would expect that each combination of active appliances would lead to a distinct profile in power consumption.
As seen from Figure~\ref{fig:power_expt}, ACNet is unable to accurately fit this distribution.
It is worth noting however, that despite learning a distribution that appears qualitatively different, ACNet still achieves a test loss of -0.221, which is significantly better than the uniform distribution and slightly superior to the Clayton copula, the best fit among the copula we compared with.
\input{power_all}
\textbf{Running times.} ACNet's generator is represented by a neural network and is thus slower to train compared to single-parameter copulas.
However, performing training is still feasible in practice.
With our experimental setup, we are able to train $15$ minibatches each of size $200$ in $1$ second without utilizing a GPU.
Furthermore, in all our experiments, the network converges within $10 \cdot 4$ iterations.
For a training set with $2000$ points, ACNet converges in 3-5 hours.
Computational costs are split roughly evenly between the forward and backward passes---the former involves solving for the inverse while the latter involves taking $2$ (or more) rounds of differentiation.
|
{
"redpajama_set_name": "RedPajamaArXiv"
}
| 5,173
|
import JSONSerializer from '@ember-data/serializer/json';
import { underscore } from '@ember/string';
import { assign } from '@ember/polyfills';
export default JSONSerializer.extend({
normalizeArrayResponse(store, primaryModelClass, payload, id, requestType) {
let total = payload.number_of_results;
let totalPages = Math.min(Math.ceil(total / 20), 500);
let meta = { meta: { total, totalPages } };
payload = payload.items.map(item => {
return item;
});
let data = this._super(store, primaryModelClass, payload, id, requestType);
return assign(data, meta);
},
normalizeSingleResponse(store, primaryModelClass, payload, id, requestType) {
// strip "https://" from id
payload.id = payload.id.substr(8);
return this._super(store, primaryModelClass, payload, id, requestType);
},
// normalizeFindRecordResponse(store, primaryModelClass, payload) {
// payload.data.attributes.meta = payload.meta || {};
// return this._super(store, primaryModelClass, payload);
// },
keyForAttribute(attr) {
return underscore(attr);
},
});
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 2,754
|
Q: Stealing passwords via DLL injection? We have a couple places in our app (Xamarin C# cross-platform) where we deal with sensitive information - so we've password-protected the database (SQLCipher) and we encrypt the data we store outside of the database. However, I'm thinking that it wouldn't be difficult for a determined hacker to inject a DLL between our app and the database DLL and see the password going by in the Connect() function, and likewise to inject a DLL between our app and whichever system DLL provides System.Security.Cryptography and see our AES key going by.
Is there a way to protect against this? Or am I incorrect and this isn't actually a big security risk?
A: I believe, that the all what you need is to sign your assemblies "C#: why sign an assembly?".
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 8,045
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.